OKR challenges: Outcome thinking is hard!

Outcomes vs Outputs

This is the typical challenge, and pretty much everyone finds writing outcome-oriented statements difficult. In my experience, it’s not because people don’t know what outcomes are.

I think the biggest challenge is that people spend the vast majority of their thinking time on outputs – what their tasks are, what they need to produce, whether they’re late, whether the quality is OK, whether they need help, whether there are any risks, and so on. They spend comparatively little time thinking about the bigger picture of why they’re doing all that work. Output thinking is so prevalent in society that if, by chance, you met someone who spends most of their time thinking about why they do things, you might casually wonder whether that person is going through some form of existential crisis.

Given the relative familiarity of output thinking and the relative novelty of outcome thinking, it’s little wonder that early attempts at defining outcome-oriented Key Results are generally poor. Here’s an example of what a collection of collaborating teams, who collectively own API Management within my client’s IT organisation, held up as a “relatively mature OKR”.

Make services easier to use
- Increase the number of APIs available to the customer from X to Y
- Increase the number of automated releases per week from X to Y
- Document the CI/CD pipeline by EOY

While the Objective (“make services easier to use”) is great, even a cursory examination of the Key Results confirms that they’re all directly in the control of the teams, and are therefore outputs. After some coaching, the teams were able to rewrite it like this:

Make services easier to use
- Increase the number of API calls per day from X to Y
- Decrease the lead time to develop an API from X to Y days

It’s hard to live with OKRs if your management paradigm doesn’t let you

Context

I have an interesting engagement with a client that is trying to adopt OKRs as an alignment and coordination tool. This particular organisation is a large corporate in a heavily regulated environment. I’m working with functional areas within their traditional, risk-averse IT organisation. Their historical response to that environment has been to favour command-and-control strategies.

The scenario that triggered this post

An organisational unit had two in-flight projects (both delivering production releases reasonably regularly). One had all the hallmarks of a well-run project – including a prediction that they’d deliver all their requirements on time. The other project had several challenges and was predicted to finish fairly late and over budget. Both projects were funded as preferred options on their respective OKRs. The well-run project was having no appreciable effect on its OKR. In contrast, the “late” project was already having a positive effect on its OKR. Which project do you imagine was stopped?

The Ideological Conflict

The challenge stems from the nature of OKRs – more precisely, what they’re inherently there to manage. OKRs emerged at Intel as a way of better managing discovery-type work: work where the precise nature of what was going to be produced would be hard to predict, but the desired effects on the market could be articulated. This articulation came in the form of outcomes – the changes in behaviour of their market. In order to track whether or not progress was being made (as opposed to effort being spent), the Key Results construct was used to articulate the visible signs of the market responding “favourably”.

An Objective would be “Establish the 8086 as the most popular 16-bit microcomputer solution.” A corresponding Key Result would be “Win 2,000 designs for Intel in the next 18 months.”

And now to the challenge. It’s essentially a misalignment between corporate culture and the nature of “what is important in OKRs”.

Organisations with a strong “PMO core” typically value the predictability of projects. They like “green projects” running “on time and on budget”. Nothing wrong with that: predictability is a side effect of having your uncertainty and risks under control, and what organisation would turn its nose up at that?

OKRs change the focus of what is important. The construct has three basic levels, and the two important ones are in the name – Objectives and Key Results.

Key Results define a measurable way of detecting a desired outcome – the observable effects. What Key Results don’t do, however, is define how you achieve that outcome. That’s the third level – the options that you identify, and the decision on which option you take. These options are definitions of output.

The outputs themselves don’t matter. Only the effects they have (or fail to have) do.

The idea is that if an option isn’t having the desired effect of moving the needle on your Key Result, then you pivot. The sooner you pivot, the less you waste. Successful execution of a project (green / on-time / on-budget) has no real bearing on whether or not it’s the right thing to do.

Unless your organisation changes its internal model of what is valuable – from output delivery to outcome realisation – the delivery management function (e.g. the PMO) will continue to optimise output delivery instead of output effectiveness.

Dealing with unexpected work

Problem Statement

One of my large corporate clients has teams who’re all struggling to cope with what they see as large amounts of unplanned work. There’s a great degree of commonality in the conversations that I have with the teams. Most of them are using Scrum at the team level.

Opening Gambit

The first thing to do (IMHO) is to build a model of the different types of unexpected work that is hitting the team. I usually use “predictability” and “plannability” as my two classifying dimensions. That leads to three types of unexpected work (the fourth category is planned work):

  • Unpredictable and Unplannable. This is stuff that you simply don’t realise is out there. It hits you out of the blue.
  • Predictable but Unplannable. You know there are predictable patterns of work, but you don’t know the specifics. For example, you know you get about 100 tickets a week from your support queues.
  • Plannable but Unpredictable. You can plan the work, but you don’t realise you have to. For example, your priorities might shift towards things further down your backlog.
  • Plannable and Predictable. The ideal case.
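The two dimensions above form a simple decision table; here’s a minimal sketch in Python (the function and its labels are purely illustrative, not a tool any team used):

```python
# Classify a piece of unexpected work using the two dimensions above:
# predictability ("do we know this kind of work exists?") and
# plannability ("can we put specifics on a plan?").

def classify_work(predictable: bool, plannable: bool) -> str:
    """Map the two classifying dimensions onto the four categories."""
    if predictable and plannable:
        return "Plannable and Predictable"      # the ideal case
    if predictable:
        return "Predictable but Unplannable"    # e.g. ~100 tickets/week from support
    if plannable:
        return "Plannable but Unpredictable"    # e.g. priorities shifting down the backlog
    return "Unpredictable and Unplannable"      # hits you out of the blue

print(classify_work(predictable=True, plannable=False))
# → Predictable but Unplannable
```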

Understanding the categories of unexpected work can trigger more strategic thinking. For example, you might have a blind spot in your priorities (note: in this client’s case, they’re using OKRs to articulate intent). You might also determine that you are able to turn down some of these requests (or reroute them to a more appropriate team).

But What About Today’s Symptoms?

The next thing to do is to find out whether the new work can wait until the next sprint. This allows the team to maintain the momentum of their current sprint while still prioritising the new work.

Depending on the stakeholder making the request, your relationship, the amount of power held by you/them, your shoe size, their attitude to hats (*), there may be a right way and a wrong way to approach this. I’ve usually found “young” organisations (from an agile maturity perspective) require more visible signs of authority such as messages like “the Scrum process we’re following has a mechanism for accepting late changes, and they go to the top of the product backlog ready for the next sprint”. In order for the team to maintain a good flow of work, they may also need to schedule in a backlog refinement session to get that new work request ready for a sprint.

What if the work can’t wait?

There will be situations when the work can’t wait even a single sprint and needs to be tackled now. To have a controlled acceptance of the work into the team’s current sprint, it’s worth exploring a few aspects of the work.

  • Does the new work align to the current sprint goal?
  • How big is the new work (for example T-shirt sizing)?
  • Is the work a direct replacement for something already planned?

If the new work aligns with the sprint goal, then the substitution of already planned work can be relatively straightforward. If the new work doesn’t align, or worse, is in direct conflict with the sprint goal, then it’s worth exploring the team’s strategic priorities to detect potential blind spots. If they’ve been able to size the new work, they should be in a position to swap a similar amount of work in the sprint.

What if the team’s not in a position to deliver less than forecasted?

The team might be in some situations where sprint forecasts are used by other teams as part of their planning, and delivering less than forecasted amounts would be problematic. While it’s not a great situation to be in (and is potentially a sign that the overall level of agile maturity is low-ish), it is something that teams sometimes have to contend with.

In this scenario, the easiest strategy is to plan with a buffer. A bit like the old MoSCoW prioritisation technique for timeboxes. The DSDM guidance (the origin of the terms Timebox and MoSCoW prioritisation) typically recommends no more than 60% of a timebox is Must Have requirements (for the scope of the timebox). This means that while the team will deliver as much as it can, consumers of the timebox should only really expect 60% of the total timebox plan with certainty, the remainder being variable.
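As a quick illustration of that 60% guidance (the 50-point sprint capacity is an invented figure, and the unit could equally be hours):

```python
# DSDM guidance: no more than 60% of a timebox should be Must Have
# requirements; the remainder is variable scope.

MUST_HAVE_CAP = 0.60

def must_have_budget(sprint_capacity: float) -> float:
    """The most capacity (points, hours, etc.) to commit to Must Haves."""
    return sprint_capacity * MUST_HAVE_CAP

print(must_have_budget(50))  # → 30.0 (the portion consumers can expect with certainty)
```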

Buffers work well when they’re in the form of a percentage of remaining time. This means the available buffer drops as the sprint executes. The danger of a fixed time buffer is it’s easy to misunderstand how much change/disruption the team can tolerate towards the latter part of the sprint.
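To illustrate the difference, a percentage-of-remaining-time buffer can be sketched like this (the 20% figure and the 10-day sprint are assumptions for the example):

```python
# A buffer sized as a percentage of *remaining* sprint time shrinks as the
# sprint executes, unlike a fixed-size buffer.

def available_buffer(remaining_days: float, buffer_pct: float = 0.20) -> float:
    """Days of disruption the team can still absorb."""
    return remaining_days * buffer_pct

for days_left in (10, 5, 1):  # points in a hypothetical 10-day sprint
    print(f"{days_left} days left: buffer = {available_buffer(days_left):.1f} days")
# 10 days left: buffer = 2.0 days
# 5 days left: buffer = 1.0 days
# 1 days left: buffer = 0.2 days
```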

What if the buffer amount needed changes drastically from sprint to sprint?

If the Scrum team is unable to apply any of the previous bits of advice, then there’s a good chance that there is too much uncertainty in the team’s context for Scrum to be an effective team process, as Scrum relies on being able to predict a sprint length into the future. The team may get better results from adopting Kanban as a team process, and tuning their WIP limits to keep enough slack in the system to respond to their changing demands.

The anatomy of my typical coaching engagement

Context

COVID-19 has forced the issue. All my teams are now distributed because everyone’s working from home. As a coach, this has given me a few things to think about – mostly how I need to rethink many of my coaching strategies, as they largely take advantage of the rich information flow that’s tacitly part of face-to-face work.

As part of my attempts at designing new coaching strategies that could be used with entirely remote teams, I’ve been going back to basics. This post is about the basic structure that my coaching engagements end up being designed around.

Structure

Broadly speaking, I’ve found that all my coaching boils down to one of two categories of topics – known unknowns and unknown unknowns. My coaching engagements usually have two strands running, one for each of these categories. They have different cadences and topics from the “unknown unknown strand” can make their way over into the “known unknown strand”. I’ve not yet seen a topic move in the other direction. Please read The Structure of a Coaching Intervention for a more accurate view of the content I cover when I coach.

Known unknowns are relatively straightforward. This is closer to training, or facilitated learning. Both coachee and I are clear about what knowledge is needed, and we can usually rely on accurate insights from the coachee about their levels of expertise and whether or not it is increasing. I usually end up producing a catalogue of concepts and lesson topics, and my coachee orders them in a list. I suggest changes to the ordering only if I feel there’s a more consumable flow of content (or some pre-requisites). This also has the handy side effect of demonstrating how a delivery team and a product owner can work together to collectively order a backlog.

Unknown unknowns are much harder (especially if the gaps are deep ones such as culture or values and beliefs). Some unknown unknowns can be converted into known unknowns simply by identification (as there’s a degree of unconscious knowledge in the coachee). Maintaining a general principle of doing no harm, I usually end up doing something along these lines:

  1. Observe the natural state
  2. Form a hypothesis
  3. Run an experiment to test
  4. If proven, identify “treatment” options
  5. At some point, bring the coachee up-to speed with what’s happened
  6. Together with the coachee, agree the treatment option
  7. Design the treatment plan
  8. Implement, measure effect, choose to keep or roll back
  9. Restart the loop.

Step 1 cannot be rushed, otherwise my biases play too big a part. In step 8, I’ve only ever had to roll back once in my career – otherwise I’d never have even considered it an option.

Remote Execution

For the known unknown category of topics, being remote poses no fundamental problems – mostly logistical challenges and a greater noise-to-signal ratio in the communication between my coachee and me. It also delays the building of rapport, but that is less crucial when both parties know exactly what needs to be learned, as the coachee can also attempt to infer credibility from their perception of the quality of my coaching materials (legitimately or not is beside the point).

Being remote adds a lot of complexity to my structure for unknown unknowns – specifically to the first step (though none of the steps escapes unaffected). I’ll write up how I approach this later and link to it here.

The Structure of a Coaching Intervention

Coaching is an act of leadership. The main purpose of coaching a team is to improve the overall effectiveness of that team. Three key dimensions in assessing this are:

  • Productive output that meets or exceeds the standards of quality/quantity/timeliness of the consumer
  • Social processes the team use that enhance the future capability of the members to work interdependently as a team
  • The group experience that contributes positively to the learning and personal wellbeing of the members

There is an additional factor often classed as crucial to team effectiveness – the quality of personal relationships within the team. However, I’d class any work done with a team that directly addresses problems with personal relationships as counselling in nature, and that’s out of scope for this post. In my experience, when the delivery effectiveness of a team improves, the morale boost in the team members has a side effect of improving personal relationships.

The relative importance of these three dimensions will vary over time, but successful teams always make sure that all of them are considered and balanced over time, never completely sacrificing one to optimise the others.

The effectiveness of a team at performing a task is influenced by these three key performance factors:

  • The effort a team expends
  • The appropriateness of the strategies and techniques that the team uses
  • The skills and knowledge that the team can bring to bear

Therefore, in order to have any sort of effect on a team’s performance (beyond the Hawthorne Effect), coaching will need to help one or more of these factors.

There are three important factors to consider when delivering a coaching intervention, and all three must be balanced in order for the intervention to be effective:

  • Content
  • Delivery approach
  • Timing

Coaching Content

The messages that are conveyed during a coaching intervention typically conform to one or more of these three main patterns:

  • Motivational: coaching that addresses effort, e.g. inspiring team members to increase effort, or minimising free riding
  • Consultative: coaching that addresses strategy choice and method selection, helping teams determine locally optimised methods
  • Educational: coaching that addresses skills and knowledge gaps

It is important to use the right content to address the specific challenges that the team has. For example, if the challenge is a shortfall in specialised skills needed for a task, providing a highly rousing motivational speech isn’t going to help.

Three Main Coaching Approaches (plus a fourth – “eclectic”)

The reality is that most coaching interventions will have aspects that originate from more than one of these three approaches.

  • Process Consultation: structured/clinical examination of interactions from a workflow perspective:
    • between the team and external teams/stakeholders/etc
    • internal to the team
  • Behavioural Models: feedback on individual and team behaviours, mainly focussing on relationships and how feedback is given and received; often involves operant conditioning
  • Development Coaching: identification of areas that need improvement, along with focussed time set aside for learning / training sessions
  • Eclectic interventions: ad hoc interventions, with no specific underlying theoretical model; most commonly limited to personal relationships when coaches aren’t familiar with the specifics of the work the team is doing

It is important to use the most appropriate coaching style/approach to suit the context, the audience and the message. For example, when working with an inexperienced team to improve the effectiveness of their in-flight process flow, it is better to use their real work and their real process than a classroom-style session with an abstract case study.

Intervention Points

Teams go through different phases as part of them starting, working on, and finally finishing, a piece of work. There are several broad model categories that attempt to describe the team and their approach to work temporally (see https://en.wikipedia.org/wiki/Group_development ). Two patterns that I’m broadly aware of are:

  • Incremental (or Linear) Models (e.g. Tuckman)
  • Punctuated Equilibrium Models (e.g. Gersick)

The interesting thing (for me) is that I’ve seen groups of people becoming teams display characteristics found in both. For example, I see teams regularly using retrospectives, but for the most part, the improvements are relatively narrow and focussed in effect. However, there are usually a small number of seismic shifts in ways of working – usually as a result of a significant “precipice” being felt. The most common precipice is the realisation by the team that they’re halfway through their estimated (or constrained) timeframe. These events are where coaching can deliver the most impactful benefits and are typically found at:

  • the beginning,
  • the midpoint, and
  • the end

A side-effect of teams splitting large blocks of work into smaller pieces (e.g. Epics, Stories etc.) is that these events occur far more frequently – each user story has a beginning/midpoint/end set that could be used as opportunities for coaching interventions. I’ve rarely seen coaching interventions at user story boundaries, but I have seen them at Epic or Feature level.

Other team processes that also create opportunities for beginning/midpoint/ending events to occur include the use of iterative (or sprint) based development, using a delivery lifecycle (such as DAD’s risk/value lifecycle) etc.

Designing a Coaching Intervention

The design and structure of a team has a significant effect on the effectiveness of coaching interventions. “Well designed” teams gain more value from a coaching intervention, even a poorly executed one. “Poorly designed” teams can suffer during a poorly executed coaching intervention. I’ll focus on team design in a separate post. For this post, I’ll assume no ability to fundamentally change the structure of a team to better match the workload.

Looking at the three performance factors, what are the underlying constraints? If a team has a constrained “strategy & technique” factor, then any attempt to change the execution strategy is likely to be met with frustration.

Once you have some clarity on the performance factor you can help improve, pay attention to when you introduce the intervention. Teams at the start of a piece of work are generally unwilling (or sometimes even unable) to have an informed discussion about what their optimum delivery strategy should be – it’s usually better to get started and then make an adjustment at the midpoint event (when they have some actual experience to base the decision on).

Target Performance Factor | Effective | Avoid
Effort | Motivational Coaching, at the beginning | Consultative Coaching; Educational Coaching
Strategy & Technique | Consultative Coaching, at the midpoint or the end | –
Skill & Knowledge | Educational Coaching, at the end | Motivational Coaching, at the beginning
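The same alignments could be captured as a simple lookup structure; here’s an illustrative sketch (the dictionary layout is my own reading of the table, not a prescribed model):

```python
# Effective vs. to-avoid coaching content per target performance factor.

COACHING_GUIDE = {
    "Effort": {
        "effective": ["Motivational Coaching"],
        "timing": ["the beginning"],
        "avoid": ["Consultative Coaching", "Educational Coaching"],
    },
    "Strategy & Technique": {
        "effective": ["Consultative Coaching"],
        "timing": ["the midpoint", "the end"],
        "avoid": [],  # none listed in the table
    },
    "Skill & Knowledge": {
        "effective": ["Educational Coaching"],
        "timing": ["the end"],
        "avoid": ["Motivational Coaching"],
    },
}

print(COACHING_GUIDE["Effort"]["avoid"])
# → ['Consultative Coaching', 'Educational Coaching']
```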

Once you’ve got a sense of the target performance factor, the timing of the coaching intervention and the key messages that need to be conveyed, the next step is the execution approach. It’s worth investing effort in explicitly designing coaching interventions (not to mention ensuring sufficient variety to keep things interesting for you and your team). However, there are some natural alignments between the conceptual approaches and the content to be conveyed which could help get you started.

Coaching Content | Approach
Motivational | Behavioural Coaching – personal motivation, team camaraderie. Developmental Coaching – focussed time set aside for “kick off events” (such as an Inception workshop or a Visioning workshop).
Consultative | Process Coaching – a clear understanding of the pros and cons of alternative techniques so a mid-flow course correction can be made.
Educational | Development Coaching – periods of reflection and improvement via a team retrospective, knowledge transfer via lunch & learn sessions, skills acquisition sessions using learning katas.

Courageous Executives and the Permafrost

Last week I started to think about how different parts of an organisation have different views on what is important.

This is nothing new. Why should I keep reading?

I think the existence of that permafrost layer is problematic if you want to be inventive or innovative. If you’re looking to evolve into a “courageous executive”, then the credibility associated with reducing that middle layer could be useful. However, to stand a chance of success when you “fight the machine”, you should pay attention to the basics, including:

  • the degree of effective support you’ll have,
  • the culture underpinning your sub-organisation, and
  • how different the “volumes” are when comparing your sub-organisation’s culture with the wider organisation’s culture.

In this context, “sub-organisation” refers to the subset of the organisation under your sphere of influence, either formally via the org chart or informally via your influence, credibility, relationships etc.

If you intend to create space for your organisation to innovate and experiment, then it’d be advantageous if it were more naturally innovative and experimental. That requires a different attitude to failure than one where saving face is the preferred reaction. To paraphrase: you’ve got to (conceptually) shrink that middle layer.

A useful strategy that can reduce the size/significance of this middle layer is dealing with fear from a cultural standpoint. Something that can help the formulation of specific strategies is an understanding of how the Loss Aversion and Loss Attention cognitive biases manifest in the individuals that you identify as being significant anchor points in this middle layer.

  • Loss Aversion: It is better to not lose £10 than it is to find £10.
  • Loss Attention: Tasks that involve losses get more attention than tasks that do not.

A clue to something that seemed to help me lies in that bottom level. When this scenario was presented to delivery teams, they didn’t seem worried about saving face when faced with the hypothetical. Digging further, it wasn’t that face/reputation was unimportant; it’s that they didn’t care all that much about what the “middle management types” thought of them (that’s how individuals seemed to interpret the scenario). The middle management group was not considered to be their judging community – they were far more concerned about their reputation amongst other delivery folks. For example, a developer might try to force a software library to work, applying workaround after workaround, instead of just accepting that the library was the wrong fit for what’s needed (because they wanted a reputation for being able to make anything work). However, that same developer might not be concerned if their boss’s peers don’t think much of them, if they don’t care about office politics.

That got me thinking that perhaps one way of reducing that awkward middle layer was to change their perceptions about what was important to the community that they considered to be judging them. I think this is different from tackling their priorities head on, in that it’s less confrontational, so it stands a better chance of working (at least partially). That middle layer would need to view organisationally significant things like money, time or customers as things that could be truly lost (so that their normal loss aversion and loss attention biases would influence them in beneficial ways, if you were indeed trying to grow into a courageous executive). They would also need to feel that personal reputation could not be lost in the same way.

Changes to that community can come about as a result of external or internal pressures. Assuming you’re not “senior enough” to be able to enforce a new operating model that the community must comply with, your more effective strategies will be the ones that originate from inside that community.

Potentially Useful Infiltration Techniques

  1. Repeated Messaging: Humans are influenced by exposure. The first time something controversial is heard, it’s shocking. After the hundredth time, it barely registers consciously. Interestingly though, the subconscious still registers. In that way, people can be programmed. By repeating your message regularly into the target community (and with variations to keep it interesting), over time you’ll lower resistance to your ideas.
  2. Let others get credit, even if the ideas are yours: Having others in the community get an endorphin rush when they share an idea influences them to repeat the behaviour. So the idea was really yours – so what? You’re aiming for something else. Besides, a few people will know anyway. That “inside knowledge” can be a powerful aid to your attempts at growing an organisational culture that has you as an executive – it creates a sense of belonging between you and them, which, if nurtured, can transform into loyalty.
  3. Be visible to the next few power levels up: For the hierarchically minded, seeing you playing nicely with their boss and their boss’s boss can signal that it’s more acceptable for them to align with what you’re saying. It also has a secondary effect of helping you gauge whether or not what you want to do is palatable to the next few levels up the power structure. If it is, then that can be an indicator that there is space for you to grow your leadership potential.

Building a Knowledge Sharing Community

Why am I trying to establish this?

Having an effective knowledge sharing strategy that my consultants and coaches actually use can significantly boost the quality of their deliverables on engagements, as their access to knowledge and experience will be richer. Richer knowledge leads to better decisions, which lead to better outcomes, blah blah blah.

No really, why?

The truth is far less grandiose. And much more personal. I want better relationships with my colleagues. And the main reason for that is selfish. When I’ve got something interesting/gnarly to solve, I’d MUCH rather solve it collaboratively with someone. I find the ideas that come out of a buzzing pair/trio are generally FAR superior (not just in terms of merit, but also the emotional responses – things like surprise, delight and even just pure joy) than anything I’d come up with on my own. A major contributor to that heightened emotional response is the fact that it’s a shared experience – this has a reinforcing effect on the individuals.

This topic is also related to my post on Courageous Executives – I think being able to create an environment where knowledge and help is shared freely and easily is helpful in establishing a progressive organisational culture.

Some things to consider

The first thing I need is critical mass: enough colleagues with sufficient latent willingness to participate to increase the odds of interesting interactions occurring.

The second thing is to recognise the reality of the group dynamic. I work for a consulting organisation and, like most others, it staffs people on engagements. Those engagements have teams. Back at base, though, I’m grouped with a set of similar people under the same line manager. That line manager’s “team” is not a team from a behavioural dynamics perspective, regardless of how the individuals describe themselves. Esther Derby is my go-to source for a concise articulation of what it means to be a team.

The third fundamental aspect to consider is how people participate. The “1/9/90” rule of thumb has been around for over a decade now, potentially longer. A quick recap: for online groups, there are three broad categories of interaction:

  • 1% of the population will initiate a discussion thread
  • 9% of the population will actively contribute to discussion threads
  • 90% of the population will lurk
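For a community of a given size, the rule of thumb translates into rough headcounts; an illustrative sketch (the population of 200 is invented):

```python
# Expected participation under the 1/9/90 rule of thumb.

def participation(population: int) -> dict:
    """Rough headcounts for each interaction category."""
    return {
        "initiators": round(population * 0.01),
        "contributors": round(population * 0.09),
        "lurkers": round(population * 0.90),
    }

print(participation(200))
# → {'initiators': 2, 'contributors': 18, 'lurkers': 180}
```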

I also reasoned that, in order to sell what I wanted people to do, it needed to be more engaging/arresting than just these numbers (no doubt many of my target audience would have heard them already, so there’d be little impact). While daydreaming about how to go about launching this, an idea flitted across my mind which amused me. I ran with it, just to see how far I could go. Fishing.

  • Lures: These would be the “1% and 9%” of the population. Their job is to make the environment interesting/appealing enough for the others to participate.
  • Fish: These are the 90% of the population who lurk. My objective is to convert them into lures by engaging them.

Above all else, the most important thing to remember is that knowledge management is all about people. We have to avoid the temptation to create yet another document repository, as those generally end up being pointless (keeping a stack of documents current is a huge time investment, so very few people do it – the documents become outdated quickly and users lose confidence in the repository as a source of relevant information).

How did we start?

The first step was to get a sense of the latent willingness I needed. To avoid unnecessary confusion, I stuck to a typical technique – I ran a workshop. The stated objective was to understand the key topic areas / themes that, as a collective, we had some self-professed expertise in. The exam question was:

“write down topics, regardless of scale, that you would be happy for a colleague to come to you about if they needed some help”

Approaching the audience in this way would nudge them into feeling valued from the outset. (The alternative would be giving them a candidate set of themes and asking them to sign up; the list at the end of both approaches would be the same, but the first scenario would produce far more engaged individuals, as they’d own the list.) This workshop also let me find co-conspirators.

Then what?

In a word, admin. We had to create a navigable map of the topics that the audience had supplied (it would help people find the right place to ask questions). In the end, we settled on a very basic two-level tree consisting of high-level themes and more detailed topics. That allowed the grouping of individuals to be based on themes with related topics. The main rationale was that, as experience and knowledge changed over time, the specifics of the topics would evolve but the main theme would remain constant. That allowed for stability of membership – and membership stability is a significant factor in determining whether or not a theme survives. The candidate themes also had candidate lures – the people who had volunteered for the topics under that theme.
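The two-level tree can be sketched as a nested structure; the theme names, topics and people below are invented placeholders, not the client’s actual map:

```python
# A basic two-level theme/topic map: high-level themes remain stable while
# the topics (and volunteers) under them evolve.

themes = {
    "Agile Delivery": {
        "topics": ["Scrum", "Kanban", "OKRs"],
        "candidate_lures": ["Asha", "Ben"],  # volunteers for topics under the theme
    },
    "Engineering Practices": {
        "topics": ["CI/CD", "Test Automation"],
        "candidate_lures": ["Chen"],
    },
}

def find_theme(topic):
    """Help people find the right place to ask a question."""
    for theme, detail in themes.items():
        if topic in detail["topics"]:
            return theme
    return None

print(find_theme("Kanban"))  # → Agile Delivery
```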

We had to sell the themes to the potential lures.

We also had to set some expectations about what being a lure entailed. By this point the term “Theme Guardian” had started to emerge as the role that was to be played. This is what we ended up with.

  • Guardians own their Theme
  • Guardians are responsible for the quality and integrity of the Theme’s content
  • Guardians should invest in PDCA cycles to improve the environment in the Theme.
  • Guardians need to evolve their vision and strategy for their Theme.
  • Guardians should have “something” to help new joiners understand their Theme.
    • This doesn’t necessarily have to be a document. It could be quarterly intro sessions on WebEx (for example).
  • Guardians should set expectations of how they’d like the community to operate – and remind their community regularly.

I find analogies very useful as abstraction models to help me understand a domain or problem. On the chance that at least some of the candidate guardians thought in a similar way, I picked a couple of potential models they could use to refine their thinking about how they wished to operate:

  • Town Planners and Communal Spaces. What makes some public spaces incredibly successful, and others turn into ghost towns?
  • Aboriginal Storytelling. In particular, explore the claims that the Dreamtime stories have remained intact for over 10,000 years without degrading, despite only having a verbal/pictorial, not written form.

Selling the concept

Now that we had a starting position (themes, candidate guardians, some guardian responsibilities), we needed to launch. To help that, we produced some general use guidance on themes:

  • People are at the core of any successful knowledge management strategy.
  • Information held in a person’s head is updated as a by-product of things that person does. Information stored in documents requires additional explicit effort.
  • For a Theme to be useful, knowledge needs to flow from person to person.
  • If a Theme’s only got one person who’s interested, it’s not a Theme.
  • When a question is asked, answer it directly – even if it’s been asked before. Never rely on a document (or link etc.) to answer for you. If necessary, end your answer with “this document/link/other goes further” (or words to that effect).
  • Think about what your “background radiation” looks like. Themes need to feel active otherwise you won’t get people stopping by and asking questions.
  • Vary the complexity / subtlety / nuance of the conversations and discussions. For example, if you only have very highbrow discussions, you’re likely to put off the inexperienced; if you only have introductory content, the experts may not participate.

Launch Day

These were our objectives

  • Start small: We picked one Theme for which the co-conspirators were willing to act as Guardians or as participating members. We would attempt to orchestrate and create an active community for that theme.
  • Momentum: We wanted to create some observer habits in the wider community. With enough people checking in daily (for example), we’d greatly increase the chances that conversations would spark up. But we had to kick-start the making-it-worth-everyone’s-time process.
  • Win over an influential sceptic: Having a known sceptic promote what we were trying to do would help persuade other sceptics that there may be some mileage in investing in this strategy.

What happens/happened next?

1 Week After Launch

There’s a smattering of interest from a handful of people. A few posts have been made and there has been some commenting on posts. Some of the early efforts from the co-conspirators have been around motivating and inspiring the community to participate. There is some optimism that this approach feels different (probably because it isn’t tools oriented).

Predictions

It’s still early days, but these are my (current) predictions.

1 Month After Launch

The conversation topics broadly split into a handful of themes. Most of the themes appear to be consistent with what emerged from the initial workshop, but the actual topics discussed are quite different. There is some dissonance from the early adopters, as multiple unrelated themes are being discussed in the same “place”, causing confusion.

3 Months After Launch

Enough interest in different themes has triggered new spaces on the platform, so that conversations about those themes can be segregated and the cognitive load on users reduced. There is some frustration from members who are interested in multiple topics, most likely due to how those individuals have mentally modelled the interactions – e.g. why should two people who are talking about a range of subjects have to keep switching which “chatroom” they converse in? Relationships are still point-to-point.

6 Months After Launch

Most of the theme chatrooms are now dormant; most of the activity has gravitated towards one or two Themes. There’s some blurring between the competing mental models – relationships are both person-to-person and person-to-community.

Analysing my predictions

One of the most significant challenges I’ve seen in Knowledge Management initiatives is the belief (usually tacit) that it’s all about the content. I believe it’s all about relationships, and knowing who to talk to when you need to know something. My predictions rest on a base assumption that knowledge can be structured and organised at a fine grain – an assumption I think the majority is also making. I expect this assumption to be proven false, and that we will pivot back to trying to be more of a community than a knowledge repository. Looking at the population numbers, I don’t believe there’s any need for more than two, or possibly three, communities (eventually).

Are you limited by your Agile Coach Sourcing Strategy?

This is about a revolving-door programme of hiring coaches. One of the things I’ve observed regularly is that these kinds of hiring programmes create an implicit timeframe by which improvements must be “delivered”. In itself, that’s not necessarily a problem. What is a problem is that the timeframes are nowhere near long enough (IMHO) if the aim is to change the culture.

How you measure your coaches also has an obvious effect on your outcomes. Procurement plays a big part here. I’ve seen a lot of coaches who have their success defined by some variation of how much effort they expend. Training courses run. Numbers of teams “coached”. Amount of collateral documented in a wiki/repository/etc. It’s even worse if that’s also what the coach genuinely believes is a measure of success.

It’s easy to buy the visible stuff. However, the visible stuff has no sustaining ability – that comes from the hidden stuff. Agile’s a culture, but how do you buy a culture? It’s much easier to buy a new set of processes, dress code, office layout, stationery, reporting templates etc.

By buying the visible stuff, you easily get to a set “maturity” of what-to-do (because your coach has implemented this a million times), but you might lose the learning-about-why, and at some point your ability to improve comes to a crashing halt when your coach runs out of instructions to give you (or they leave). You’ve substituted doing what an expert tells you for thinking.

One of the things that could lessen the likelihood of this happening to you is longevity, although that brings its own set of challenges. Longevity can come from humans, or from guidance that’s respected enough to be followed (or at least attempted) – e.g. the guided continuous improvement advice in Disciplined Agile Delivery’s collateral.

Longevity can increase the chances that the underlying factors get some attention – beliefs, culture, mindset. But longevity is hard. When things go wrong (and let’s face it, things always go wrong), it’s easy to blame the changes being attempted. I’ve often seen blame being thrown about as a precursor to the environment becoming more toxic. In those environments, unless there’s sufficient emotional investment, coaches leave.

In my view, the most effective counteracting force against that toxicity is leadership. Your senior leadership must have the trust of the members of the organisation. However, trust doesn’t come from great speeches but from credibility. That means the senior leadership must also evolve and adopt a more collaborative/open/trusting/etc culture. They must behave differently. And they also need to provide regular, ongoing and consistent reassurance that the environment is safe, so that teams can evolve towards a more agile culture with no repercussions for steps that don’t quite make it on the first try. I think that works because humans emulate leaders. If the leaders are open and collaborative, then the teams are more likely to also become more open and collaborative.

Courageous Executives – Coping without one

The notion of a “Courageous Executive” is not a new one. However, what is changing, is the awareness of how significant this organisational pattern can be when it comes to disruption and innovation. Here’s a link from mid-2017 from one of my former employers on the topic.

Relying on a courageous executive to solve all of your “agile transformation” related problems only really stands a chance of working if they have enough authority/persuasiveness to be able to get their way in the CxO arena. That might be fine for forward thinking / dynamic / exciting / other-superlative organisations, but what if your organisation doesn’t have that open culture? Or you don’t have access to someone willing to stick their head over the parapet? What if (like a large percentage of the working population), you work for a late-majority or laggard organisation?

I’ve been thinking about this sort of thing recently. Mostly because many of the bigger problems I face in my work life these days are all around how my large bureaucratic clients can work effectively with large, bureaucratic suppliers and partners, when everyone states they want “to do agile” [sic] and yet are afraid to change anything about their operating or engagement models.

I’ve been trying to organise my thoughts around “difficult conversations that are simplified/made trivial if a courageous executive existed”. I’ve also been trying to organise any messaging I deliver to my clients to make it easier to convert an existing executive (perhaps with a courageous bias) into someone who can be labelled as a Courageous Executive [massive disclaimer: I’m in no way going anywhere near a formalised definition, just pushing the boundaries of what my gut feels like]. I’ll write up as I go and link to this post.

Agile Maturity – Getting past “Shu”

Why am I writing this? I read this post a little while ago and I wanted to revisit what I thought about Shu-Ha-Ri.

Context

My current employer (very large IT Consultancy) has gone through a significant sheep-dipping exercise. It’s an extremely large organisation (employee headcount isn’t that far away from the population of Edinburgh) and has placed an organisational big bet on “Agile”. So pretty much everyone is being trained by a combination of classroom and online learning modules, backed up by an internal certification scheme, with employee targets and financially significant objectives (e.g. a small part of the performance related pay element is only accessible with certification).

It’s had a degree of success. It’s certainly helped provide an air of confidence for the sales teams and a degree of comfort for potential clients (especially those categorised as Late Majority or Laggards, who are by nature sceptical of new things – and yes, I do have to stifle the odd smile or two whenever I use the word “new” to describe “Agile”).

The Problem

My problem with this is that it rather misses the point of “agile” (this in itself is a poorly worded sentence, as agile was never really the point, but it’ll do for now).

What this sheep-dipping exercise seems to have done (from what I’ve been able to observe) is install a new set of practices to be employed religiously. Some of these new practices appear to be little more than a branded re-skin (my “favourite” example of this is the use of the User Story to contain all the requirements documentation, which must be written and signed off before it’s given to a development team to deliver).

If I’m being optimistic, the installation of new techniques (mostly the visible ones such as a stand-up) have increased the overall delivery quality by a small amount (I vaguely recall one presentation by Mark Lines that quoted a 6% improvement for teams that only adopted the mechanical aspects of Scrum, but I’m happy to be corrected).

If I’m being cynical, it’s the fight back of a hierarchical command and control culture against the invading collaborative and flat and open culture.

Not all bad news though

There are glimmers of hope. There is more recognition of the need for greater levels of autonomy and empowerment, however limited the concessions may be. That is what the rest of this post will focus on.

Why “Shu Ha Ri”?

Simply because I found the “learning to cook” analogy, along with a catchy (OK, catchy is pushing it) slogan, to be one of the more effective memory aids I’ve experienced. It also gave me an extremely lightweight structure with which I could help a “continuous learning” mentality take hold.

Shu

This was clearly our starting position. As “Trainee Chefs”, the most favourable outcome from an intensive training programme was awareness of a lot of jargon, along with perhaps the basic knowledge required to use a “standard configuration” of rules and practices.

One of the big “shorter term” goals for new recruits is to develop the equivalent of muscle memory – the imprinting of the basic rules and practices such that they can be performed without a great deal of cognitive effort (the cognitive effort would be reserved for the actual work being done). In food terms, these folk can now make a respectable standard lasagne.

In theory, as teams become more comfortable with the set of practices, they’ll begin to experiment, and vary some of the specifics, in an attempt to test the boundaries of their practices. In theory.

So what prevents nature from taking its course?

I think a big contributing factor is a belief, reinforced by authority (i.e. the hierarchy), that the agile stuff is all about the work: better quality software, more aligned with user needs etc. In other words, that it’s about what you do.

That perspective misses (IMHO) a more valuable element of this agile culture. I think the stuff about the quality and alignment is valuable, but for me it’s a side effect of having an entity (i.e. the team) that is very adaptable and can adjust itself to be able to cope with any scenario it finds itself in. I think a key trait needed for adaptability is to always be curious about why you do something. And then do something with that insight.

At least, that’s what seems to be happening in pockets at my employer. Newly formed teams are given runbooks to operate (the what), but little in the way of support to help nurture curiosity about why those techniques work, or even what the tradeoffs associated with those techniques are (all practices have tradeoffs, even the “blindingly obvious, everyone should be doing this” sort). The vast majority of the coaching support accessible to the team is optimised to roll out the runbook and maintain compliance to said runbook.

Why? I think it’s a return-on-investment calculation made by authority figures. The biggest percentage gain per unit of coaching effort is the gain that takes you from zero to non-zero. Installing some basic agile practices into a low-maturity team will have a significant effect on their Velocity (story points per iteration) in a fairly small timeframe. A few coaching interventions later, their Velocity is likely to have improved significantly. But then the gains become harder to find. Improvements become more marginal. Sometimes a seismic shift is needed, but that would have short-term detrimental effects on Velocity. And in environments that believe output is the important thing, getting less stuff per iteration is a BAD THING and must be avoided.

It’s also a pattern that can be reinforced by having a revolving-door policy on your agile coaches (IMHO – I think I’ll have a go at blogging about this, if nothing else to help me crystallise my thoughts on the topic).

Is there anything we can do?

Assuming there’s enough truth in the hypothesis to be worth doing something about it, what can/should we do about it?

The aim of whatever-we-do-to-improve-matters has got to be to increase the ability of a team to question why-it-does-something, along with an intrinsic confidence to invent changes to their process. The ability to invent will dramatically increase the self-sufficiency of a team.

Both of these elements – asking “why” and being able to invent – require a team to have a significant degree of psychological safety. Creating the conditions for that degree of psychological safety to develop is a core function of leadership.

With sufficient psychological safety (a subjective term), comes the capability for changing. What it doesn’t guarantee, is whether or not change will occur, or whether the direction of change is a “viable” one or not. This is where additional support is helpful, potentially using some form of expert. Agile coaches can be helpful here, especially if they’re working in the open (e.g. via using some variation of a guided continuous improvement strategy).

Ha

Something a good Agile Coach is well placed to do is help their team understand the models that underpin the team’s ways of working. That helps teams reach a deeper understanding of why their techniques work, their pros and cons etc., and can create the conditions for much more interesting process changes to be invented by the team. These highly context-specific and tailored techniques are likely to be more effective than their more “generic” equivalents.

To me, that sounds like the beginnings of Ha.