The anatomy of my typical coaching engagement

Context

COVID-19 has forced the issue. All my teams are now distributed because everyone’s working from home. As a coach, this has given me a few things to think about – mostly how I need to rethink many of my coaching strategies, as they largely take advantage of the rich information flow that’s tacitly part of being face to face.

As part of my attempts at designing new coaching strategies that could be used with entirely remote teams, I’ve been going back to basics. This post is about the basic structure that my coaching engagements end up being designed around.

Structure

Broadly speaking, I’ve found that all my coaching boils down to one of two categories of topics – known unknowns and unknown unknowns. My coaching engagements usually have two strands running, one for each of these categories. They have different cadences, and topics from the “unknown unknown” strand can make their way over into the “known unknown” strand. I’ve not yet seen a topic move in the other direction. Please read The Structure of a Coaching Intervention for a more accurate view of the content I cover when I coach.

Known unknowns are relatively straightforward. This is closer to training, or facilitated learning. Both the coachee and I are clear about what knowledge is needed, and we can usually rely on accurate insights from the coachee about their level of expertise and whether or not it is increasing. I usually end up producing a catalogue of concepts and lesson topics, and my coachee orders them in a list. I suggest changes to the ordering only if I feel there’s a more consumable flow of content (or some pre-requisites). This also has the handy side effect of demonstrating how a delivery team and a product owner can work together to collectively order a backlog.

Unknown unknowns are much harder (especially if the gaps are deep ones such as culture or values & beliefs). Some unknown unknowns can be converted into known unknowns simply by identifying them (as there’s a degree of unconscious knowledge in the coachee). Maintaining a general principle of doing no harm, I usually end up doing something along these lines:

  1. Observe the natural state
  2. Form a hypothesis
  3. Run an experiment to test
  4. If proven, identify “treatment” options
  5. At some point, bring the coachee up to speed with what’s happened
  6. Together with the coachee, agree the treatment option
  7. Design the treatment plan
  8. Implement, measure effect, choose to keep or roll back
  9. Restart the loop.

Step 1 cannot be rushed, otherwise my biases play too big a part. In step 8, I’ve only ever had to roll back once in my career; had that one occasion not happened, I’d never have even considered it an option.
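Purely as an illustration of this structure – the stage names and fields below are my own invention, not part of any coaching framework – a minimal sketch for tracking a single pass through the loop might look like this:

    from dataclasses import dataclass
    from enum import Enum, auto
    from typing import Optional


    class Stage(Enum):
        """The steps of the unknown-unknowns loop, in order."""
        OBSERVE = auto()
        FORM_HYPOTHESIS = auto()
        RUN_EXPERIMENT = auto()
        IDENTIFY_TREATMENTS = auto()
        BRIEF_COACHEE = auto()
        AGREE_TREATMENT = auto()
        DESIGN_PLAN = auto()
        IMPLEMENT_AND_MEASURE = auto()


    @dataclass
    class CoachingLoop:
        """One pass through the loop for a single observation and hypothesis."""
        observation: str
        hypothesis: str = ""
        stage: Stage = Stage.OBSERVE
        keep_change: Optional[bool] = None  # unknown until step 8 has been measured

        def advance(self) -> None:
            """Move to the next step, wrapping back to OBSERVE to restart the loop."""
            steps = list(Stage)
            self.stage = steps[(steps.index(self.stage) + 1) % len(steps)]


    loop = CoachingLoop(observation="Stand-ups routinely overrun by 20 minutes")
    loop.advance()  # now at FORM_HYPOTHESIS

Nothing about the loop requires this level of formality, of course – the point is simply that each observation gets its own pass through the steps, and several passes can be in flight at once.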

Remote Execution

For the known unknown category of topics, being remote poses no fundamental problems – mostly logistical challenges and a higher noise-to-signal ratio in the communication between my coachee and me. It also delays the building of rapport, but that is less crucial when both parties know exactly what needs to be learned, as the coachee can also attempt to infer credibility from their perception of the quality of my coaching materials (whether legitimately or not is beside the point).

For the unknown unknown category, being remote adds a lot of complexity to my structure – specifically to the first step (though none of the steps escapes unaffected). I’ll write up how I approach this later and link to it.

The Structure of a Coaching Intervention

Coaching is an act of leadership. The main purpose of coaching a team is to improve the overall effectiveness of that team. Three key dimensions in assessing this are:

  • Productive output that meets or exceeds the standards of quality/quantity/timeliness of the consumer
  • Social processes the team use that enhance the future capability of the members to work interdependently as a team
  • The group experience that contributes positively to the learning and personal wellbeing of the members

There is an additional factor often classed as crucial to team effectiveness – the quality of personal relationships within the team. However, I’d class any work done with the team that directly addresses problems with personal relationships as counselling in nature, and it isn’t in scope for this post. In my experience, when the delivery effectiveness of a team improves, the morale boost in the team members has the side-effect of improving personal relationships.

The relative importance of these three dimensions will vary over time, but successful teams always make sure that they are all considered and balanced over time, never completely sacrificing one to optimise the others.

The effectiveness of a team at performing a task is influenced by these three key performance factors:

  • The effort a team expends
  • The appropriateness of the strategies and techniques that the team uses
  • The skills and knowledge that the team can bring to bear

Therefore, in order to have any sort of effect on a team’s performance (beyond the Hawthorne Effect), coaching will need to help improve one or more of these factors.

There are three important factors to consider when delivering a coaching intervention, and all three must be balanced in order for the intervention to be effective:

  • Content
  • Delivery approach
  • Timing

Coaching Content

The messages that are conveyed during a coaching intervention typically conform to one or more of these three main patterns:

  • Motivational: coaching that addresses effort, e.g. inspiring team members to increase effort, or minimising freewheeling
  • Consultative: coaching that addresses strategy choice, method selection, helps teams determine locally optimised methods
  • Educational: coaching that addresses skills and knowledge gaps

It is important to use the right content to address the specific challenges that the team has. For example, if the challenge is a shortfall in specialised skills needed for a task, providing a highly rousing motivational speech isn’t going to help.

Three Main Coaching Approaches (plus a fourth – “eclectic”)

The reality is that most coaching interventions will have aspects that originate from more than one of these three approaches.

  • Process Consultation: structured/clinical examination of interactions from a workflow perspective:
    • between the team and external teams/stakeholders/etc
    • internal to the team
  • Behavioural Models: feedback on individual and team behaviours, mainly focussing on relationships and how feedback is given and received; often involves operant conditioning
  • Development Coaching: identification of areas that need improvement, along with focussed time set aside for learning / training sessions
  • Eclectic interventions: ad hoc interventions, with no specific underlying theoretical model; most commonly limited to personal relationships when coaches aren’t familiar with the specifics of the work the team is doing

It is important to use the most appropriate coaching style / approach to suit the context, the audience and the message. For example, when working with an inexperienced team to improve the effectiveness of their in-flight process flow, it is better to use their real work and their real process as opposed to a classroom styled session with an abstract case study.

Intervention Points

Teams go through different phases as part of them starting, working on, and finally finishing, a piece of work. There are several broad model categories that attempt to describe the team and their approach to work temporally (see https://en.wikipedia.org/wiki/Group_development ). Two patterns that I’m broadly aware of are:

  • Incremental (or Linear) Models (e.g. Tuckman)
  • Punctuated Equilibrium Models (e.g. Gersick)

The interesting thing (for me) is that I’ve seen groups of people becoming teams display characteristics found in both. For example, I see teams regularly using retrospectives, but for the most part, the improvements are relatively narrow and focussed in effect. However, there are usually a small number of seismic shifts in ways of working – usually as a result of a significant “precipice” being felt. The most common precipice is the realisation by the team that they’re halfway through their estimated (or constrained) timeframe. These events are where coaching can deliver the most impactful benefits and are typically found at:

  • the beginning,
  • the midpoint, and
  • the end

A side-effect of teams splitting large blocks of work down into smaller pieces (e.g. Epics, Stories etc.) is that these events occur far more frequently – each user story has a beginning/midpoint/end set that could be used as opportunities for coaching interventions. I’ve rarely seen coaching interventions at user story boundaries, but I have seen them at Epic or Feature level.

Other team processes that also create opportunities for beginning/midpoint/ending events to occur include the use of iterative (or sprint) based development, using a delivery lifecycle (such as DAD’s risk/value lifecycle) etc.

Designing a Coaching Intervention

The design and structure of a team has a significant effect on the effectiveness of coaching interventions. “Well designed” teams gain more value from a coaching intervention, even a poorly executed one. “Poorly designed” teams can be actively harmed by a poorly executed coaching intervention. I’ll focus on team design in a separate post. For this post, I’ll assume no ability to fundamentally change the structure of a team to better match the workload.

Looking at the three performance factors, what are the underlying constraints? If a team has a constrained “strategy & technique” factor, then any attempt to change the execution strategy is likely to be met with frustration.

Once you have some clarity on the performance factor you can help improve, pay attention to when you introduce the intervention. Teams at the start of a piece of work are generally unwilling (or sometimes even unable) to have an informed discussion about what their optimum delivery strategy should be – it’s usually better to get started and then make an adjustment at the midpoint event (when they have some actual experience to base the decision on).

Target Performance Factor – Effective vs. Avoid

  • Effort
    • Effective: Motivational Coaching; the beginning
    • Avoid: Consultative Coaching; Educational Coaching
  • Strategy & Technique
    • Effective: Consultative Coaching; the midpoint
    • Avoid: the end
  • Skill & Knowledge
    • Effective: Educational Coaching; the end
    • Avoid: Motivational Coaching; the beginning

Once you’ve got a sense of the target performance factor, the timing of the coaching intervention and the key messages that need to be conveyed, the next step is the execution approach. It’s worth investing effort in explicitly designing coaching interventions (not to mention ensuring sufficient variety to keep things interesting for you and your team). However, there are some natural alignments between the conceptual approaches and the content to be conveyed which could help get you started.

Coaching Content – Approach

  • Motivational
    • Behavioural Coaching – personal motivation, team camaraderie.
    • Developmental Coaching – focussed time set aside for “kick off events” (such as an Inception workshop or a Visioning workshop).
  • Consultative
    • Process Coaching – clear understanding of the pros and cons of alternative techniques so a mid-flow course correction can be made.
  • Educational
    • Development Coaching – periods of reflection and improvement via a team retrospective, knowledge transfer via lunch & learn sessions, skills acquisition sessions using learning katas.

Courageous Executives and the Permafrost

Last week I started to think about how different parts of an organisation have different views on what is important.

This is nothing new. Why should I keep reading?

I think the existence of that permafrost layer is problematic if you want to be inventive or innovative. If you’re looking to evolve into a “courageous executive”, then the credibility associated with reducing that middle layer could be useful. However, to stand a chance of success when you “fight the machine”, you should pay attention to the basics, including:

  • the degree of effective support you’ll have,
  • the culture underpinning your sub-organisation, and
  • how different the “volumes” are when comparing your sub-organisation’s culture with the wider organisation’s culture.

In this context, “sub-organisation” refers to the subset of the organisation under your sphere of influence, either formally via the org chart or informally via your influence, credibility, relationships etc.

If you intend to create space for your organisation to innovate and experiment, then it’d be advantageous if it were more naturally innovative and experimental. That requires a different attitude to failure than one where saving face is the preferred reaction to it. To paraphrase, you’ve got to shrink that middle layer (conceptually).

A useful strategy that can reduce the size/significance of this middle layer is dealing with fear from a cultural standpoint. Something that can help the formulation of specific strategies is an understanding of how the Loss Aversion and Loss Attention cognitive biases manifest in the individuals that you identify as being significant anchor points in this middle layer.

  • Loss Aversion: It is better to not lose £10 than it is to find £10.
  • Loss Attention: Tasks that involve losses get more attention than tasks that do not.

A clue to something that seemed to help me lies at that bottom level. When this scenario was presented to delivery teams, they didn’t seem worried about saving face. Digging further, it wasn’t that face/reputation was unimportant, it’s that they didn’t care all that much about what the “middle management types” thought about them (at least, that’s how individuals seemed to interpret the scenario). That middle management group was not considered to be their judging community. They were far more concerned about their reputation amongst other delivery folks. For example, a developer might try to force a software library to work, applying workaround after workaround, instead of just accepting that the library was the wrong fit for what’s needed (because they wanted a reputation of being able to make anything work). However, that same developer might not be concerned if their boss’s peers don’t think much of them, if they don’t care about the office politics.

That got me thinking that perhaps one way of reducing that awkward middle layer was to change their perceptions about what was important to the community that they considered was judging them. I think this is different to tackling their priorities head on, in that it’s less confrontational, so it stands a better chance of working (at least partially). That middle layer would need to view organisationally significant things (money, time, customers etc.) as things that could be truly lost (so that their normal loss aversion and attention biases would influence them in beneficial ways, if you were indeed trying to grow into a courageous executive). They would also need to feel that personal reputation could not be lost in the same way.

Changes to that community can come as a result of external or internal pressures. Assuming you’re not “senior enough” to be able to enforce a new operating model the community must comply with, your more effective strategies will be the ones that originate from inside that community.

Potentially Useful Infiltration Techniques

  1. Repeated Messaging: Humans are influenced by exposure. The first time something controversial is heard, it’s shocking. After the hundredth time, it barely registers consciously. Interestingly though, the subconscious still registers. In that way, people can be programmed. By repeating your message regularly into the target community (and with variations to keep it interesting), over time you’ll lower resistance to your ideas.
  2. Let others get credit, even if the ideas are yours: Having others in the community get an endorphin rush when they share an idea influences them to repeat the behaviour. So it’s your idea really – so what? You’re aiming for something else. Besides, a few people will know anyway. That “inside knowledge” can be a powerful aid to your attempts at growing an organisational culture that has you as an executive – it creates a sense of belonging between you and them, which if nurtured can transform into loyalty.
  3. Be visible to the next few power levels up: For the hierarchically minded, seeing you playing nicely with their boss and their boss’s boss can signal that it’s more acceptable for them to align with what you’re saying. It has a secondary effect of helping you gauge whether or not what you want to do is palatable to the next few levels up the power structure. If it is, then it can be an indicator that there is space for you to grow your leadership potential.

Building a Knowledge Sharing Community

Why am I trying to establish this?

Having a knowledge sharing strategy that my consultants and coaches use effectively can significantly boost the quality of their deliverables on engagements, as their access to knowledge and experiences will be richer. Richer knowledge leads to better decisions which lead to better outcomes blah blah blah.

No really, why?

The truth is far less grandiose. And much more personal. I want better relationships with my colleagues. And the main reason for that is selfish. When I’ve got something interesting/gnarly to solve, I’d MUCH rather solve it collaboratively with someone. I find the ideas that come out of a buzzing pair/trio are generally FAR superior (not just in terms of merit, but also the emotional responses – things like surprise, delight and even just pure joy) than anything I’d come up with on my own. A major contributor to that heightened emotional response is the fact that it’s a shared experience – this has a reinforcing effect on the individuals.

This topic is also related to my post on Courageous Executives – I think being able to create an environment where knowledge and help is shared freely and easily is helpful in establishing a progressive organisational culture.

Some things to consider

The first thing I need is critical mass. I need enough colleagues with sufficient latent willingness to participate to increase the odds of interesting interactions occurring.

The second thing is to recognise the reality of the group dynamic. I work for a consulting organisation, and like most others, it staffs people on engagements. Those engagements have teams. Back at base though, I’m grouped with a set of similar people under the same line manager. That line manager’s “team” is not a team from a behavioural dynamics perspective, regardless of how the individuals describe themselves. Esther Derby is my go-to source for concise articulation of what it means to be a team.

The third fundamental aspect to consider is how people participate. The “1/9/90” rule of thumb has been around for over a decade now, potentially longer. A quick recap: for online groups, there are three broad categories of interaction (a rough sizing sketch follows the list):

  • 1% of the population will initiate a discussion thread
  • 9% of the population will actively contribute to discussion threads
  • 90% of the population will lurk
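As promised, a rough sizing sketch. The split itself is just the rule of thumb, not a measurement from my community, and the population of 200 is a hypothetical number picked purely for illustration:

    def participation_estimate(population: int) -> dict:
        """Apply the 1/9/90 rule of thumb to an online group of a given size."""
        return {
            "initiators": round(population * 0.01),    # start discussion threads
            "contributors": round(population * 0.09),  # reply to existing threads
            "lurkers": round(population * 0.90),       # read, but rarely post
        }

    print(participation_estimate(200))
    # {'initiators': 2, 'contributors': 18, 'lurkers': 180}

Even with a couple of hundred colleagues, that’s only a handful of natural initiators – which is why critical mass (and converting lurkers) matters so much.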

I also reasoned that in order to sell what I wanted people to do, it needed to be more engaging/arresting than just these numbers (which no doubt many of my target audience would have heard already, so there’d be little impact). While daydreaming about how to go about launching this, an idea flitted across my mind, which amused me. I ran with it, just to see how far I could go. Fishing.

  • Lures: These would be the “1% and 9%” of the population. Their job is to make the environment interesting/appealing enough for the others to participate.
  • Fish: These are the 90% of the population who lurk. My objective is to convert them into lures by engaging them.

Above all else, the most important thing to remember was that knowledge management is all about people. We have to avoid the temptation to create yet-another-document-repository, as those generally end up being pointless (keeping a stack of documents current is a huge time investment, so very few people do – the documents become outdated quickly and users lose confidence in the repository as a source of relevant information).

How did we start?

The first step was to get a sense of that latent willingness that I needed. To avoid unnecessary confusion, I stuck to a typical technique – I ran a workshop. The stated objective was to understand the key topic areas / themes that, as a collective, we had some self-professed expertise in. The exam question was:

write down topics, regardless of scale, that you would be happy for a colleague to come to you about if they needed some help

Approaching the audience in this way would nudge them into feeling valued from the outset (the alternative would be giving them a candidate set of themes and asking them to sign up. The list at the end of both approaches would be the same, but the first scenario would have far more engaged individuals as they’d own the list). This workshop also let me find co-conspirators.

Then what?

In a word, admin. We had to create a navigable map of the topics that the audience had supplied (it would help people find the right place to ask questions). In the end, we settled on a very basic two-level tree consisting of high-level themes and more detailed topics. That allowed the grouping of individuals to be based on themes with related topics. The main rationale was that as experience and knowledge changed over time, the specifics of the topics would evolve, but the main theme would remain constant. That allowed for stability of membership – and that membership stability is a significant factor in determining whether or not a theme would survive. The candidate themes also had candidate lures – they were the list of people who volunteered for the topics that were under that theme.
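For illustration only, the shape of the map was roughly a two-level dictionary like the one below. The theme names, topics and guardian names are hypothetical stand-ins, not the ones my colleagues actually supplied:

    # Stable themes at the top level; the topics underneath are expected to
    # evolve over time. Candidate guardians are the volunteers whose workshop
    # topics fell under that theme.
    theme_map = {
        "Delivery Practices": {
            "topics": ["Estimation", "Retrospectives", "Kanban basics"],
            "candidate_guardians": ["Asha", "Ben"],
        },
        "Engineering Craft": {
            "topics": ["Test automation", "Pair programming"],
            "candidate_guardians": ["Chen"],
        },
    }

Finding where to ask a question then becomes a simple lookup by theme rather than a hunt through a flat list of topics – and the theme keys staying put is what gives the membership its stability.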

We had to sell the themes to the potential lures.

We also had to set some expectations about what being a lure entailed. By this point the term “Theme Guardian” had started to emerge as the role that was to be played. This is what we ended up with.

  • Guardians own their Theme
  • Guardians are responsible for the quality and integrity of the Theme’s content
  • Guardians should invest in PDCA cycles to improve the environment in the Theme.
  • Guardians need to evolve their vision and strategy for their Theme.
  • Have “something” to help new joiners understand your Theme.
    • This doesn’t necessarily have to be a document. It could be quarterly intro sessions on WebEx (for example).
  • Set expectations of how you’d like the community to operate – and remind your community regularly

I find analogies very useful as abstraction models to help me understand a domain/problem. In case at least some of the candidate guardians operated in a similar manner, I picked a couple of potential models that they could use to refine their thinking about how they wished to operate:

  • Town Planners and Communal Spaces. What makes some public spaces incredibly successful, and others turn into ghost towns?
  • Aboriginal Storytelling. In particular, explore the claims that the Dreamtime stories have remained intact for over 10,000 years without degrading, despite having only a verbal/pictorial form, not a written one.

Selling the concept

Now that we had a starting position (themes, candidate guardians, some guardian responsibilities), we needed to launch. To help that, we produced some general use guidance on themes:

  • People are at the core of any successful knowledge management strategy.
  • Information held in a person’s head is updated as a by-product of things that person does. Information stored in documents requires additional explicit effort.
  • For a Theme to be useful, knowledge needs to flow from person to person.
  • If a Theme’s only got one person who’s interested, it’s not a Theme.
  • When a question is asked, answer it directly. Even if it’s been asked before. Never rely on a document (or link etc.) to answer for you. If necessary, end your answer with “this document/link/other goes further” (or words to that effect).
  • Think about what your “background radiation” looks like. Themes need to feel active otherwise you won’t get people stopping by and asking questions.
  • Have variety in the complexity / subtlety / nuance of the conversations and discussions. For example, if you only have very highbrow discussions, you’re likely to put off the inexperienced. If you only have introductory content, then the experts may not participate.

Launch Day

These were our objectives:

  • Start small: We picked one Theme for which the co-conspirators were willing to act as Guardians or as participating members. We would attempt to orchestrate and create an active community for that theme.
  • Momentum: We wanted to create some observer habits in the wider community. With enough people checking in daily (for example), we’d greatly increase the chances that conversations would spark up. But we had to kick start the making-it-worth-everyone’s-time process.
  • Win over an influential sceptic: Having a known sceptic promote what we were trying to do would help persuade other sceptics that there may be some mileage in investing in this strategy.

What happens/happened next?

1 Week After Launch

There’s a smattering of interest from a handful of people. A few posts have been made and there has been some commenting on posts. Some of the early efforts from the co-conspirators have been around motivating and inspiring the community to participate. There is some optimism that this approach feels different (probably because it isn’t tools oriented).

Predictions

It’s still early days, but these are my (current) predictions.

1 Month After Launch

The conversation topics broadly split into a handful of themes. Most of the themes appear to be consistent with what emerged from the initial workshop, but the actual topics discussed are quite different. There is some dissonance from the early adopters as there are multiple unrelated themes being discussed in the same “place”, causing confusion.

3 Months After Launch

Enough interest in different themes has triggered new spaces on the platform for the conversations about those themes to be segregated, to simplify the cognitive load on the users. There is some frustration from some members who are interested in multiple topics, most likely due to how the individuals have modelled the interactions mentally – e.g. why should two people who are talking about a range of subjects have to keep switching which “chatroom” they converse in. Relationships are still point-to-point.

6 Months After Launch

Most of the theme chatrooms are now dormant; most of the activity has gravitated towards one or two Themes. There’s some blurring between the competing mental models – relationships are person-to-person and person-to-community.

Analysing my predictions

One of the most significant challenges I’ve seen in Knowledge Management initiatives is the belief (usually tacit) that it’s all about the content. I believe it’s all about the relationships, and the knowledge of who to talk to when a person needs to know something. My predictions have a base assumption that Knowledge can be structured and organised at a fine grain. I think that’s an assumption that’s also being made by the majority. I’m expecting this assumption to be proven false and that we will pivot back to trying to be more of a community than a knowledge repository. Looking at the population numbers, I don’t believe there’s any need for more than two or possibly three communities (eventually).

Are you limited by your Agile Coach Sourcing Strategy?

This is about a revolving door programme of hiring coaches. One of the things I’ve observed regularly is that these kinds of hiring programmes create an implicit timeframe by which improvements must be “delivered”. In itself, that’s not necessarily a problem. What is a problem is that the timeframes are nowhere near long enough (IMHO) if the aim is to change the culture.

How you measure your coaches also has an obvious effect on your outcomes. Procurement plays a big part here. I’ve seen a lot of coaches who have their success defined by some variation of how much effort they expend. Training courses run. Numbers of teams “coached”. Amount of collateral documented in a wiki/repository/etc. It’s even worse if that’s also what the coach genuinely believes is a measure of success.

It’s easy to buy the visible stuff. However, the visible stuff has no sustaining ability – that comes from the hidden stuff. Agile’s a culture, but how do you buy a culture? It’s much easier to buy a new set of processes, dress code, office layout, stationery, reporting templates etc.

By buying the visible stuff, you easily get to a set “maturity” of what-to-do (because your coach has implemented this a million times), but you might lose the learning-about-why, and your ability to improve comes to a crashing halt at some point when your coach runs out of instructions to give you (or they leave). You’ve swapped thinking for doing what you’re told, because an expert tells you.

One of the things that could lessen the likelihood of this happening to you is longevity. Although that does bring its own set of challenges. Longevity can come from humans, or guidance that’s respected enough to be followed (or at least attempted) – e.g. the guided continuous improvement advice from Disciplined Agile Delivery’s collateral.

Longevity can increase the chances that the underlying factors get some attention – beliefs, culture, mindset. But longevity is hard. When things go wrong (and let’s face it, things always go wrong), it’s easy to blame the changes that are being attempted. I’ve often seen blame being thrown about as a precursor to the environment becoming more toxic. In those environments, unless there’s sufficient emotional investment, coaches leave.

In my view, the most effective counteracting force against that toxicity, is leadership. Your senior leadership must have the trust of the members of the organisation. However, trust doesn’t come from great speeches but from credibility. That means the senior leadership must also evolve and adopt a more collaborative/open/trusting/etc culture. They must behave differently. And they also need to provide regular, ongoing and consistent reassurance that the environment is safe so that teams can evolve to a more agile culture with no repercussions for steps that don’t quite make it on the first try. I think that works because humans emulate leaders. If the leaders are open and collaborative, then the teams are more likely to also become more open and collaborative.

Courageous Executives – Coping without one

The notion of a “Courageous Executive” is not a new one. However, what is changing, is the awareness of how significant this organisational pattern can be when it comes to disruption and innovation. Here’s a link from mid-2017 from one of my former employers on the topic.

Relying on a courageous executive to solve all of your “agile transformation” related problems only really stands a chance of working if they have enough authority/persuasiveness to be able to get their way in the CxO arena. That might be fine for forward thinking / dynamic / exciting / other-superlative organisations, but what if your organisation doesn’t have that open culture? Or you don’t have access to someone willing to stick their head over the parapet? What if (like a large percentage of the working population), you work for a late-majority or laggard organisation?

I’ve been thinking about this sort of thing recently. Mostly because many of the bigger problems I face in my work life these days are all around how my large bureaucratic clients can work effectively with large, bureaucratic suppliers and partners, when everyone states they want “to do agile” [sic] and yet are afraid to change anything about their operating or engagement models.

I’ve been trying to organise my thoughts around “difficult conversations that are simplified/made trivial if a courageous executive existed”. I’ve also been trying to organise any messaging I deliver to my clients to make it easier to convert an existing executive (perhaps with a courageous bias) into someone who can be labelled as a Courageous Executive [massive disclaimer: I’m in no way going anywhere near a formalised definition, just pushing the boundaries of what my gut feels like]. I’ll write up as I go and link to this post.

Agile Maturity – Getting past “Shu”

Why am I writing this? I read this post a little while ago and I wanted to revisit what I thought about Shu-Ha-Ri.

Context

My current employer (very large IT Consultancy) has gone through a significant sheep-dipping exercise. It’s an extremely large organisation (employee headcount isn’t that far away from the population of Edinburgh) and has placed an organisational big bet on “Agile”. So pretty much everyone is being trained by a combination of classroom and online learning modules, backed up by an internal certification scheme, with employee targets and financially significant objectives (e.g. a small part of the performance related pay element is only accessible with certification).

It’s had a degree of success. It’s certainly helped provide an air of confidence for the sales teams and a degree of comfort for potential clients (especially those categorised as Late Majority or Laggards, who are by nature sceptical of new things. And yes, I do have to stifle the odd smile or two whenever I use the word “new” to describe “Agile”).

The Problem

My problem with this is that it sort of misses the point of “agile” (this in itself is a poorly worded sentence, as agile was never really the point, but it’ll do for now).

What this sheep-dipping exercise seems to have done (from what I’ve been able to observe) is install a new set of practices to be employed religiously. Some of these new practices appear to be little more than a branded re-skin (my “favourite” example of this is the use of the User Story to contain all the requirements documentation, which must be written and signed off before it’s given to a development team to deliver).

If I’m being optimistic, the installation of new techniques (mostly the visible ones such as a stand-up) have increased the overall delivery quality by a small amount (I vaguely recall one presentation by Mark Lines that quoted a 6% improvement for teams that only adopted the mechanical aspects of Scrum, but I’m happy to be corrected).

If I’m being cynical, it’s the fight back of a hierarchical command and control culture against the invading collaborative and flat and open culture.

Not all bad news though

There are glimmers of hope. There is more recognition of the need for greater levels of autonomy and empowerment, however limited the concessions may be. That is what the rest of this post will focus on.

Why “Shu Ha Ri”?

Simply because I found the “Learning to cook” analogy along with a catchy (ok, catchy is pushing it) slogan to be one of the more effective memory aids I’ve experienced. It also gave me an extremely lightweight structure on which I could help a “continuous learning” mentality to take hold.

Shu

This was clearly our starting position. As “Trainee Chefs”, the most favourable outcome from an intensive training programme was an awareness of a lot of pieces of jargon, with perhaps the basic knowledge required to use a “standard configuration” of rules and practices.

One of the big “shorter term” goals for new recruits is to develop the equivalent of muscle memory – the imprinting of the basic rules and practices such that they can be performed without a great deal of cognitive effort (the cognitive effort would be reserved for the actual work being done). In food terms, these folk can now make a respectable standard lasagne.

In theory, as teams become more comfortable with the set of practices, they’ll begin to experiment, and vary some of the specifics, in an attempt to test the boundaries of their practices. In theory.

So what prevents nature from taking its course?

I think a big contributing factor is a belief, reinforced by authority (i.e. the hierarchy), that the agile stuff is all about the work. Better quality software, more aligned with user needs etc. In other words, that it’s about what you do.

That perspective misses (IMHO) a more valuable element of this agile culture. I think the stuff about the quality and alignment is valuable, but for me it’s a side effect of having an entity (i.e. the team) that is very adaptable and can adjust itself to be able to cope with any scenario it finds itself in. I think a key trait needed for adaptability is to always be curious about why you do something. And then do something with that insight.

At least, that’s what seems to be happening in pockets at my employer. Newly formed teams are given runbooks to operate (the what), but little in the way of support to help nurture the curiosity about why those techniques work, or even what the tradeoffs associated with those techniques are (all practices have tradeoffs, even the “blindingly obvious, everyone should be doing this” sort). The vast majority of the coaching support accessible to the team is optimised to roll out the runbook and maintain compliance to said runbook.

Why? I think it’s a return-on-investment calculation made by authority figures. The biggest percentage gain per unit coaching effort is the gain that takes you from zero to non-zero. Installing some basic agile practices into a low maturity team will have a significant effect on their Velocity (story points per iteration) in a fairly small timeframe. A few coaching interventions later and their Velocity is likely to have improved significantly. But then the gains become harder to find. Improvements become more marginal. Sometimes a seismic shift is needed, but that would have short term detrimental effects on Velocity. And for environments that believe that output is the important thing, then getting less stuff per iteration is a BAD THING and must be avoided.

It’s also a pattern that can be reinforced by having a revolving door policy on your agile coaches (IMHO. I think I’ll have a go at blogging about this, if nothing else to help me crystallise my thoughts on the topic).

Is there anything we can do?

Assuming there’s enough truth in the hypothesis to be worth doing something about it, what can/should we do about it?

The aim of whatever-we-do-to-improve-matters has got to be to increase the ability of a team to question why-it-does-something, along with an intrinsic confidence to be able to invent a change to their process. The ability to invent will dramatically increase the self-sufficiency of a team.

Both of these elements – asking “why” and being able to invent, require a team to have a significant degree of psychological safety. Creating the conditions for the degree of psychological safety to improve is a core function of leadership.

With sufficient psychological safety (a subjective term) comes the capability for change. What it doesn’t guarantee is whether or not change will occur, or whether the direction of change is a “viable” one. This is where additional support is helpful, potentially using some form of expert. Agile coaches can be helpful here, especially if they’re working in the open (e.g. via some variation of a guided continuous improvement strategy).

Ha

Something a good Agile Coach is well placed to do, is help their team understand the underlying models that underpin the team’s ways of working. That would help the teams get to that deeper understanding of why their techniques work, the pros and cons etc. That can create the conditions for much more interesting process changes to be created by the team. These highly context specific and tailored techniques are likely to be more effective than more “generic” equivalents.

To me, that sounds like the beginnings of Ha.

Evolving beyond SAFe using the DAD toolkit

Intended Audience

So you’ve bought into the SAFe brand, adopted it pretty much wholesale. You might even have had some successes after the initial adoption. But at some point, you’ve hit a plateau (or a major problem that you simply can’t get past with just SAFe’s guidance). Perhaps you need a shorter time to market than a quarterly planning cycle gives you. What do you do?

The rest of this post is structured very loosely as a set of steps. They generally make sense as a progression, but the reality is that for even remotely complex contexts, if you attempted to follow this line through just once, you’re likely to end up with more pain. There are feedback loops all over the place, and artificially trimming them into a single line is a waterfall-esque strategy for complexity management, which is suboptimal.

Step 0 – Don’t Panic!

You’re already on a transformation journey (Before SAFe -> Adopting SAFe -> Using SAFe -> ?). This is just the next step.

Step 1 – Start where you are

There’s no sense ramping up the psychological harm by saying you were wrong. Elements such as the Prime Directive of Retrospectives also echo the underlying fact that at the time, you made the best decision possible given the data you had access to. Now that you have more (or perhaps just different) data, it’s time to make the best decision possible today.

Step 2 – Take Stock

Look at what you have (in this case, SAFe) and focus on the mindset, and principles, as those generally remain valid. What will change is the practices, techniques and strategies that you use in order to sustain that mindset and deliver value continuously. For example, “Alignment” is great. SAFe’s implementation strategy for achieving alignment is generally by starting from the Portfolio level and “working down”. Other strategies include starting from the team and “working outwards”. Changing the scale of programmes of work (e.g. by picking an architectural style that facilitates this) can change the relative importance of alignment from being a pre-requisite into more of a side-effect. Disciplined Agile Delivery (DAD) has an Enterprise Aware principle that can be used to facilitate a discussion about Alignment.

Step 3 – Visualise The Implicit

Visualise your Ways of Working. (WoW). If you’ve adopted SAFe, then the SAFe “Big Picture” is probably it, or very close to it. DAD has a Program Lifecycle, which bar some label changes is very similar. Once your WoW is visual, overlay it with DAD terminology. This is a form of gap analysis and it should help you turn implicit process knowledge into explicit knowledge. The biggest insights would come from the associated “Process Goals” for the practices you are using. That’s a big step towards understanding why a practice is used. SAFe’s guidance is very good at telling you what to do and has some coverage of why it’s worthwhile. What is lacking though, are alternative approaches for achieving the same objective. That’s where DAD can offer significant advantages.

Step 4 – Target Process Improvements

With a map of process goals, now it becomes possible to target improvements to struggling areas. The areas to target will be highly context specific, and it’s worth using some root-cause-analysis techniques (e.g. 5 Whys or Ishikawa diagrams) to find them. Using a process goal as an anchor, you are more easily able to explore alternative strategies to solve the problem you’ve got. For example, if your current processes struggle with effective delivery of complex non-functional-requirements, then something like http://disciplinedagiledelivery.com/strategies-for-verifying-non-functional-requirements/ could help.

Step 4A – Have Meaningful Conversations

An interesting side effect of making your process goals visible, is you can engage your teams and stakeholders in a more meaningful discussion about the approach to the work. This can create space for highly innovative and inventive strategies for problem solving and value delivery to emerge.

Or in other words, instead of only relying on your team to think-outside-the-box, you get to redefine-what-you-mean-by-box.

Step 5 – Rinse and Repeat – Ad Nauseam

Don’t forget one of the pillars of the (SAFe) House of Lean – Relentless Improvement. Your drive to improve shouldn’t end. One of the challenges with adopting SAFe (or in fact any other branded framework that has a relatively prescriptive nature) is the illusion that there’s an End State to reach, and once you get there, you win the game.

Looking at the Twitterverse, I’ve seen a few people talk about SAFe being potentially a good way to start getting some agility into your large scale programme delivery – mainly because it can be perceived by senior-management-buyers as a Tangible Thing, which is easier to accept than a grass-roots-hearts-and-minds-campaign-led-by-developers. However, at some point, a continuously evolving organisation will evolve beyond it. That said, the subset of SAFe that draws on Don Reinertsen’s work on Economics and Flow will remain relevant long after the SAFe practices are abandoned for more lightweight alternatives.

Making sense of System/Design Thinking

(or how I learned to stop worrying and love the Thinking)

Why I’m writing this

I have a new client. They’re a large Financial Organisation, therefore operating in a highly regulated market, with heavy compliance etc. requirements. The sorts of things that end up creating a paranoid organisational mindset, with a significant audit theme running through everything that they do.

The nature of this client is such that new ideas take a while (if at all) to establish. That pace isn’t helped by the fact that organisations like these generally are flush with cash – they can afford to overspend as well as maintain “death-march projects”. This is a functional characteristic and is neither good nor bad. It does, however, mean that delivery techniques that are radically different culturally to the prevailing winds are unlikely to be welcomed with open arms (i.e. there is no urgency to change). One such idea, is “User Centric Design”. This is a premise that the right thing to do is design services specifically for your Users. This cultural anchor (the Customer is central to everything) is more prevalent in front office teams (e.g. for a Bank, those might be branch staff), but isn’t necessarily the case in back office teams (e.g. the “IT Department”). User Centred Thinking may lead to better bank accounts, savings accounts, or interest rates, but is less likely to be adopted to improve the systems that an actuary might use.

Enter the labels “Design Thinking” and “System Thinking”.

What is “Design Thinking”?

The term has been gaining popularity over the last few years. I currently believe that “Design Thinking” as a label for a design thinking process (with terms like empathise, define, ideate, prototype and test) was popularised by the likes of Tim Brown and others at IDEO. The term Design Thinking might have been coined in the seventies, but ancestor terms such as “wicked problems” are far older. Even older are a lot of the concepts – divergent thinking to gather options, followed by convergent thinking to make a choice, test, learn, rinse and repeat – which must have been around for about as long as humans have been experimenting. The Double Diamond process was created by the British Design Council, for example. This is a sketch:

How about “Systems Thinking”?

Disclosure: My introduction to “systems thinking” (lowercase, not branded, definitely not a Proper Noun) came from my undergraduate degree. Except I learnt about it under the banner “Systems Engineering” – more specifically, when in a Mechanical Engineering lecture my professor got us all to play a heavily modified version of Mousetrap (I’m sure other games are available). That was my first introduction to stock and flow diagrams. It also made me realise that I’d been using a systems thinking mindset for a while leading up to that lecture. It turns out that playing with Meccano and dominoes with my dad – building mechanised arms to tip dominoes over is surprisingly good fun most of the time, although the clean-up at the end is a royal PITF (pain in the foot) – was a good way for me to begin working out how to build models in my head to predict the future, something that gets especially complicated when there are multiple events occurring simultaneously. That Engineering thing carried on into my Electronics and Control Theory (academic) life.

In business, the term seems to carry more than the abstract analytical mindset that I learned about at University. The work that Deming did with Japanese manufacturers from the ‘50s onwards started with a very basic model:

Image from https://blog.deming.org/2012/10/appreciation-for-a-system/

In this diagram, the customer (consumer) is another component, interacting with everything else. “Frameworks” such as Vanguard and books such as The Fifth Discipline etc. have moved the customer far closer to the centre of the system model – more precisely, in order to better define the purpose of the system. In itself, that’s not a problem, but it did take me a surprisingly long time to reconcile my perspective (systems-thinking-is-an-analytic-discipline) with the customer-centricity that I was seeing. Mostly because I was seeing that human-centricity aspect as “Design Thinking” (rightly or wrongly, probably wrongly).

My View of the Differences between Design Thinking and Systems Thinking…

One of the things I find helpful for my focus when learning (or even just thinking) about a topic, is the ability to detect when I go off track and get distracted. To help me reason about either Design Thinking or System Thinking, it helps if I can create some space between the two concepts, just so that if necessary, I can also think about what a concept is not. It’s a form of abstraction and an aid to my thinking.

Comparing the different “zones of interest” between Design Thinking (pink) and Systems Thinking (blue)

…and Why it doesn’t really Matter

Assuming that the previous section resonates with you, dear reader, the main reason why I don’t believe it really matters where the precise boundary is between the two concepts, is because you need to incorporate both thinking models into your overall problem solving to genuinely make a difference to your customer. For example, whether you think about User Needs because you’re using Design Thinking or System Thinking isn’t relevant. What is important, is the fact that you’re actually considering User Needs.

Why is my client trying these?

This is an interesting question, and I’ve no real way of getting a completely accurate answer from anyone I’m in contact with, so this is where I get my theorising kicks from.

From what I’ve been able to observe, this client has had multiple attempts at one form of “agile transformation” or another over the past several years. While all of these attempts moved the organisation forward (for some definition of forward), none of them did much more than improve the practices in effect at that organisation. Manager types still managed (although the role names did change a few times over the years). Requirements specialists still produced documents (although the name of the documents and the templates followed changed). Business engagement was still limited – in this case, limited to Product Managers. Product Owners (there is a difference at this client) were more likely to be a subject matter expert or a requirements specialist. In other words, a proxy for someone who may be more appropriate to “own” what’s being delivered, but has insufficient time away from the day job. A centralised architecture function would determine standards to adhere to, and would be where approvals would be sought when teams wished to implement a design pattern (sometimes quite a low level decision).

The underlying culture appears to be quite resilient to change, and is what I would expect to see in a typical hierarchical, command & control, low trust, low empowerment organisation.  Again, no value judgements from me, but it is a prevailing culture that’s a polar opposite to the empowered, distributed authority, high trust, high collaboration culture that an agile way of working would require to truly shine. I think this underlying cultural resistance is one of the drivers for using these “differently termed techniques”. In particular, there’s a keenness by (very) senior leadership to adopt Systems Thinking, as the term doesn’t match any of the existing organisational silos or fiefdoms. In that respect, “Design Thinking” is a little harder, as one could argue (for example) that “As the Enterprise Architects are responsible for Service Design, that’s clearly where Design Thinking fits in”. Granted, your sanity could be questioned if you argued that point of view, but still, it’s possible. That is one of the strategies that a resilient bureaucracy can use to negate the risks associated with an invading culture – convert the new concepts into existing ones, and thereby eliminate the need to change. In order to stand a chance of converting the status quo, one needs to be careful about the battles that are picked.

Dear Board, what sort of “Agile Transformation” are you really after?

My problem with an “Agile Transformation” as a term, is that it’s not really helpful when trying to talk to people.

OK, maybe if I used the definition of an Agile Entity as a thing that continuously adapts and evolves to best take advantage of its environment, then an “Agile Transformation”, being pedantic, could represent that first step the Entity takes from its Creation State to the first Changed State, with the constraint that the Changed State includes a working mechanism for internally triggered evolution (irrespective of how effective).

However, that’s also an unhelpful definition, as I haven’t seen that interpretation in anyone at C-Level who “buys an Agile Transformation Engagement from a Consultancy Organisation”. And that’s what I wanted to write about here. What I see there is usually something along the lines of “We should be better at our IT Development. Other organisations have used Agile and they seem to be better at IT. Maybe we should get some of that Agile IT to increase our effectiveness”.

Therefore I’m parking that thought, and once I’ve worked out how to clearly articulate what my opinion is on that subject, I’ll link to it here.

In the meantime, this is a list of questions I think are worth knowing the answers to as soon as possible in the life of an “Agile Transformation Engagement”. Or even before one starts.

Agile Organisations have very different cultural styles to hierarchical / bureaucratic / “traditional” (for the defensive clients) organisations. A Transformation Initiative (if such a thing can exist) will be about helping an organisation transform itself culturally into something else, one with more pronounced Agile Characteristics and Traits. I think anything that maintains the cultural status quo (especially if it’s a “negative-in-the-context-of-an-agile-organisation” one) is heading towards the Lip Service end of the adoption scale.

Warning: Some of these questions can be tricky to ask without risking being fired :-). All of these are aimed at either the Sponsor of the Transformation Engagement, or the C-Level Board if it’s broad enough.

Scope

Question: Where do you WANT to draw the boundary for the Transformation Initiative?

Question: Where do you HAVE to draw the boundary for the Transformation Initiative?

Outcomes

Question: WHAT do you define as “a Successful Transformation”?

Question: WHY do you think you need “an Agile Transformation”?

Question: Are you “all-in” on this Transformation? What happens if it’s not successful? Do you need a Backup Plan (or Escape Plan)?

Question: If the Transformation results in jobs changing, CAN you update your policy / HR strategy etc to suit? WILL you?

Question: If the transformation results in making some people redundant if they are unable/unwilling to re-train to the target operating model (candidates include middle managers), WILL you make them redundant?

Question: What did you have to do in order to get Board buy-in for the need for “an Agile Transformation”?

Question: When do you “need Demonstrable Results”? Why then? What constitutes “Demonstrable Results”?

Question: Do you have a “Trusted Adviser” to fact-check what you’re being told? If not, what is your level of trust? How can that be increased?

Sustainability

Question: What’s your strategy for sustaining the Transformation Initiative after I’ve left?

Question: What would happen if “after a while” (e.g. a couple of years), your organisation reverted back to the operating model as it stands now? Do you have to prevent that?

Finances

Question: How much are you willing to invest? Over what time frame? Is any of that investment conditional on evidence of progress?

Question: Can you tolerate the “Agile Transformation Initiative” as an annual overhead cost to be paid before projects and change budgets are calculated?

Question: Can you tolerate the “Agile Transformation Initiative” as just another project and therefore will need to justify its budget?

Agile Brands

Question: Is there a perception in the “entity-under-consideration” that Brand A/B/C is the right one? Have conferences, articles, experiences etc influenced that perception?

Question: Is there a strategic goal of “Installing” / “Implementing” Brand X? Or is the strategic goal “Adopting” Brand X?

Question: What happens to your Credibility / Authority / Respectability if despite the prevailing “Brand X Bias”, an alternative Brand is Implemented / Adopted?