Who should you have as a Product Owner? What are you optimising for?

Subtitle: Are you optimising for development efficiency, bleeding edge innovation, production stability or credibility?

Context

I have a client that’s going through an agile transformation. They’re a core Infrastructure “horizontal” global (sub)organisation. In other words, they keep the lights on, look after all the Crown Jewels, and their customers tend to be application teams or “everyone” (e.g. Exchange, DNS etc). They’ve adopted one of the many “Spotify variants” and are trying hard to make those structures work in their context.

This post was triggered by a set of conversations I’ve been having with “platform leads” (the higher end of middle management in the classic sense; they report directly to a C-suite senior leader). The topic was team design – more specifically, who should be appointed as the Product Owner for any given team. The first thing they needed to understand was why the decision was architecturally significant (beyond the significance of the team that owns a service): the character traits and personal priorities of the product owner would shape the nature of the engagement a team has with its customers and stakeholders.

Before

There tended to be a couple of significant patterns.

Pattern 1: The most senior team member, sometimes a manager. These tended to be people who’ve been at this organisation the longest.

Pattern 2: The most technically skilled developer/technical architect/technical specialist. These tended to be people who know the most about what the product(s) or technologies are capable of.

The Problem

The main problem is that the trade-offs aren’t explicitly visible. For example, sure, it’d be great for technical quality and feature-set richness if the forward-thinking R&D person is put in charge, but there’s a significant risk to credibility when dealing with operational teams who have to service customer requests (if, for example, the forward-thinking person is uninterested in the “mundane day-to-day” and recommends waiting for a solution that isn’t live yet). For teams that are part of an ecosystem of teams that collectively deliver a live service, coordination and collaboration tend to be much more valuable than raw technical expertise, especially for services that must not fail, such as DNS and DHCP (but YMMV).

After

The only significant change in how roles were allocated was to consciously discuss the trade-offs associated with each of the candidates across more than just the “technical expertise” dimension. How that person will be perceived by downstream teams is also important, especially if multiple teams need to coordinate in order to deliver a service. While they were shaky at the start, they soon developed a degree of skill and nuance when thinking about these additional, non-technical dimensions.

I’ve also recommended that, as part of their pod mission statements, they make some brief comments about the trade-offs they’ve made to arrive at their operating model and structure. This is doubly important for teams that haven’t explicitly articulated their purpose before. Deliberately articulating (for example) that credibility with peers and service stability are prioritised over raw speed of delivery makes it much clearer why a team engages and prioritises the way that it does. It also acts as a trigger for team leaders to develop a range of leadership styles, as they’re able to recognise traits and weaknesses in themselves that little bit more easily.

OKR challenges: Outcome thinking is hard!

Outcomes vs Outputs

This is the typical challenge, and pretty much everyone finds writing outcome-oriented statements difficult. In my experience, it’s not because people don’t know what outcomes are.

I think the biggest challenge is that people spend the vast majority of their thinking time on outputs – what their tasks are, what they need to produce, are they late, is the quality OK, do they need help, are there any risks, etc. They spend comparatively little time thinking about the bigger picture of why they’re doing all that work. Output thinking is so prevalent in society that if, by chance, you met someone who spends most of their time thinking about why they do things, you might casually wonder if that person is going through some form of existential crisis.

Given the relative familiarity of output thinking and the relative novelty of outcome thinking, it’s little wonder that early attempts at defining outcome-oriented Key Results are generally poor. Here’s an example of what a collection of collaborating teams, who collectively own API Management within my client’s IT organisation, had as a “relatively mature OKR”.

Make services easier to use
- Increase the number of APIs available to the customer from X to Y
- Increase the number of automated releases per week from X to Y
- Document the CI/CD pipeline by EOY

While the Objective (“make services easier to use”) is great, even a cursory examination of the Key Results confirms that they’re all in the direct control of the teams, and are therefore outputs. After some coaching, they were able to rewrite it like this:

Make services easier to use
- Increase the number of API calls per day from X to Y
- Decrease the lead time to develop an API from X to Y days

It’s hard to live with OKRs if your management paradigm doesn’t let you

Context

I have an interesting engagement with a client that is trying to adopt OKRs as an alignment and coordination tool. This particular organisation is a large corporate in a heavily regulated environment. I’m working with functional areas within their traditional, risk-averse IT organisation. Their historical response to their environment has been to favour command-and-control strategies.

The scenario that triggered this post

An organisational unit had two in-flight projects (both delivering production releases reasonably regularly). One had all the hallmarks of a well-run project – including a prediction that they’d deliver all their requirements on time. The other project had several challenges and was predicted to finish fairly late and over budget. Both projects were funded as preferred options on their respective OKRs. The well-run project was having no appreciable effect on its OKR. In contrast, the “late” project was already having a positive effect on its OKR. Which project do you imagine was stopped?

The Ideological Conflict

The challenge stems from the nature of OKRs – more precisely, what they’re inherently there to manage. OKRs emerged at Intel as a way of better managing discovery-type work: work where the precise nature of what was going to be produced would be hard to predict, but the desired effects on the market could be articulated. This articulation came in the form of outcomes – the changes in behaviour of their market. In order to track whether or not progress was being made (as opposed to effort being spent), the Key Results construct was used to articulate the visible signs of the market responding “favourably”.

An Objective would be “Establish the 8086 as the most popular 16-bit microcomputer solution.” A Key Result for the same objective would be “Win 2,000 designs for Intel in the next 18 months.”

And now to the challenge. It’s essentially a misalignment between corporate culture and the nature of “what is important in OKRs”.

Organisations with a strong “PMO core” typically value the predictability of projects. They like “green projects” running “on time and on budget”. Nothing wrong with that: predictability is a side effect of having your uncertainty and risks under control. And what organisation would turn their nose up at that?

OKRs change the focus on what is important. The construct has three basic levels. The two important ones are in the name – Objectives and Key Results.

Key Results define a measurable way of detecting a desired outcome – they’re the observable effects. What Key Results don’t do, however, is define how you achieve that outcome. That’s the third level – the options that you identify and the decision on which option you take. These options are definitions of output.

These outputs themselves don’t matter; only the effects (or lack thereof) that they have.

The idea is that if an option isn’t having the desired effect – moving the needle on your Key Result – then you pivot. The sooner you pivot, the less you waste. Successful execution of a project (green / on-time / on-budget) has no real bearing on whether or not it’s the right thing to do.

Unless your organisation changes the internal model of what is valuable from output delivery to outcome realisation, then the delivery management function (e.g. PMO) will continue to focus on output delivery optimisation instead of output effectiveness.

Converting a broadcast into a conversation

This post came about accidentally because of a discussion I’ve been having with a colleague about the Elevator Pitch and how we’ve been trying to use it as part of a vision statement. I say “trying to use it” as my team’s primary service is that of organisational change via agile transformations, and the product isn’t as tangible as real-life products like bicycles, cars etc.

The context we were discussing was early on in an engagement, soon after a client “buys an agile transformation from my employer” (and I’ll park my many issues with that statement – see things like this to give you a sense of how long I’ve been carrying that baggage). One of the first tasks is to broaden awareness of what’s going to happen to the client’s employees, and some lightweight communication and broadcasting is a typical medium for starting this awareness drive. As the elevator pitch is a natural fit for lightweight, accessible communication, it’s a fairly obvious choice as one of the tools to be used.

What’s the problem?

One of the common mistakes I’ve seen with teams producing elevator pitches is that they spend most of their effort trying to craft the perfect words to get across the sheer breadth and scope of what they’re trying to achieve. The work itself is rewarding, as successive iterations of the elevator pitch can heighten the emotional connection between the team and their product. Teams often hope that the energy and enthusiasm they display while giving the elevator pitch is infectious enough for their audience to engage. To my mind, that approach is inefficient and can be ineffective. I also feel that strategy doesn’t respect the perspective of the listener – there’s a good chance that your listener is not a “willing participant” (i.e. they’re already overworked and there you are trying to pitch something to them – even the apocryphal story of the lift journey has the CTO “trapped” in the lift with you).

Get to the conversation bit already!

A real conversation between two people only happens when both people are listening. When working with elevator pitches, it’s not certain whether or not your target will actually be interested in listening. While that’s out of your hands, the amount of energy and effort they have to expend to begin listening to you is partially under your control. By lowering the barrier to entry, you have a chance of making it much easier for your target to engage with you. It’s what makes snake oil salesmen so effective – they understand what their victim wants to hear, and know how to deliver that message in a way that’s captivating specifically to them.

By making sure your content and your expressions use the language and semantics of the person you’re trying to reach, you drastically reduce the cognitive load that person needs in order to understand what you’re saying. Otherwise they’d have to translate what they’re hearing into something they can understand. That can be hard work, and if they’re not already invested in you, it’s quite easy for them to avoid the effort and delegate the task to one of their direct reports – which essentially defeats the point of having an elevator pitch in the first place. By putting in the additional effort to make it easy for your target to consume your message, you also demonstrate a degree of respect for your target’s time.

A useful effect of this degree of tailoring is that aspects that may be unimportant to you gain prominence in your message if they’re important to your target. Most organisations have very fragmented views on what is important, so a single unifying elevator pitch can be very hard to create and may not have the impact you desire. By tailoring, you significantly increase the chances of making a meaningful connection with your audience, one person at a time.

Use Personas

A good way to start this tailoring is for you and your teams to create personas representing the people you wish to connect with, and then create tailored elevator pitches for each of these personas. Empathy maps are especially helpful in this regard.

It’s possible (if the personas are different enough) that different personas may prefer different communication channels or media. Some might prefer an informal conceptual discussion over a coffee, while others might like to see more tangible aspects as part of a guided demo. By creating models of your potential audience, you greatly increase the relevance of your content to them.

A meeting with a nervous CIO

Background

I very recently had an interview for a broad coaching role, and I wasn’t happy with the answer I gave to one of the role-playing scenarios. This post is me getting the jumbled-up thoughts under control and structured in a more coherent form, so that if the scenario comes up again, I’ll be much clearer.

The Question

A discussion with the CIO of an organisation. They’re trying to transform their organisation (7,000 people). Essentially, their strategy distils down into three conceptual stages: “train everyone” – “change the way of working” – “see how it goes and evolve”. The exam question was “I’m nervous, am I missing something?”.

My refined Answer

Firstly, I’ll define your organisation as a boundary containing two fundamentally different kinds of processes:

  1. Organisational processes. These processes collectively define how your organisation behaves. Transformational change only occurs when these processes evolve. The key ones are:
    1. interaction processes (communication)
    2. visioning processes (purpose)
    3. motivating processes (alignment)
    4. learning processes.
  2. Operational processes. These processes collectively define what your organisation does; simplistically, all other processes belong here. Continuous and incremental improvements happen when these processes evolve.

Agile and Training

Next, let’s look at “agile”. There are two main parts I want to cover: the mechanical elements such as practices and techniques – the “how to do agile”; and the behavioural elements such as mindset, principles and values – the “how to be agile”.

Training can provide awareness and skill: skill in the mechanical elements, and awareness of the behavioural aspects. Learning and improving skills can be done in a training context. However, changing established behavioural patterns is an internal struggle that takes time and requires support.

This means that your training-based transformation strategy can potentially deliver incremental improvements (Scott Ambler, of Disciplined Agile Delivery, has been running a long-standing agile survey which has led him to conclude that adopting the mechanical aspects of agile methods can realistically yield a 6–10% improvement in overall effectiveness these days). However, this strategy is unlikely to deliver transformational improvements.

From training to coaching

In order to have a lasting effect on the organisational processes, additional leadership coaching should be employed. As a leader, you have to set the example that you wish your organisation to follow, as your behaviour patterns will be emulated through your direct reports into your organisation. Like it or not, you are one of the coaches that influences your organisation’s leadership community.

The Servant-Leadership model has a natural fit with an agile culture and elements can be incorporated by your leaders into their leadership styles. The greater the adoption, the easier it is for their part of the organisation to evolve into a more agile culture.

Evolving your management

Any changes to your organisation’s leadership models will almost certainly require changes to your management methods and measurements.

Note: This might turn into another post.

Things I read to help me articulate this

  1. https://en.wikipedia.org/wiki/Chester_Barnard
  2. https://en.wikipedia.org/wiki/The_Functions_of_the_Executive
  3. https://en.wikipedia.org/wiki/Mary_Parker_Follett
  4. https://cvdl.ben.edu/blog/leadership_theories_part1/
  5. https://cvdl.ben.edu/blog/leadership_theories_part2/
  6. https://clearthinking.co/a-simple-model-of-culture-change/

Cost cutting and agile teams – finding a way to cope

Disclaimer

  • This is not a post about avoiding cost cutting.
  • Not about paying lip service to cost cutting.
  • Not about hostage situations (we’ve come this far, you have to give us more money)
  • This is also not aimed at extremely agile organisations that have active executive engagement with delivery teams – they’re the lucky ones.
  • It’s also not aimed at organisations that have a “courageous executive” supporting an agile approach to delivery. Again, they’re relatively lucky.

Basic Admin

Determine your team’s annual run rate – most likely it’s the base cost to employ your team for a year. This can be awkward if you’re a contractor and have permanent members of staff in your team, but there’s usually a way. Many organisations have either a median cost figure for an “average employee” or a mid-band figure for each employee grade. Make sure you factor in holidays.

It is vital that “team” in this context means the full cross-functional team. It makes far less sense to reduce the headcount of just one or two roles. By reducing the team’s skill capacity “relatively evenly”, you reduce the chances of making a fatal cut. The corollary to this is that if your team is significantly unbalanced when compared to the work they deliver, then consider a rebalancing exercise first.

It’s pessimistic, but reliable, to assume that all the other “overhead” costs associated with project execution will remain unaffected. Given the likely conditions, reducing the non-value-add overheads is probably outside your sphere of influence.

Convert your cost cutting target into an equivalent team burn rate. This gives you an indication of how much smaller your team has to be in order for you to fit within the requested cost envelope.
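
To make the arithmetic concrete, here’s a minimal sketch in Python with entirely hypothetical figures – team size, median cost, working weeks and the cost-cutting target are all placeholders for your own numbers:

```python
# Minimal sketch of the run-rate arithmetic above.
# All figures are hypothetical; substitute your organisation's own numbers.

team_size = 9                      # full cross-functional team
median_annual_cost = 85_000        # "average employee" figure, including overheads
working_weeks = 46                 # 52 weeks minus holidays, training etc.

annual_run_rate = team_size * median_annual_cost
weekly_burn_rate = annual_run_rate / working_weeks

cost_cutting_target = 0.20         # asked to cut 20% of cost
target_run_rate = annual_run_rate * (1 - cost_cutting_target)

# How much smaller the team needs to be to fit the new cost envelope
target_team_size = target_run_rate / median_annual_cost

print(f"Current annual run rate: £{annual_run_rate:,.0f}")
print(f"Weekly burn rate:        £{weekly_burn_rate:,.0f}")
print(f"Target team size:        {target_team_size:.1f} people (down from {team_size})")
```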

Your most expensive people are usually your most experienced. It’s therefore reasonable to assume that your pared-down team will not be as capable, and will therefore increase the rate at which errors are produced. You have a trade-off decision to make – accept the lower-quality output, or sacrifice some of your (reduced) “volume output” to maintain your quality levels.

The most significant factor (for me) in making this decision is how long that system or component needs to keep being changed and developed. For example, a single-use component that will only be used for three months (i.e. disposable) can generally tolerate significantly lower quality than a system that needs to evolve regularly over a period of a decade. Lehman’s Laws of Software Evolution are worth revisiting here.

Assuming that your reduced team will have to maintain their system for a long time (the average life of an IT system is about a decade, if this article is to be believed – “Software Lifetime and its Evolution Process over Generations”) then you have no real choice when it comes to quality or output – you have to prioritise quality.

That brings you to your next challenge. How do you compensate for the fact that your team will simply not be as skilled once you lose some members? Simply continuing your team’s way of working and hoping for the best is unlikely to succeed – with the reduction in expertise, your team will produce more bugs.

Your basic strategy should consist of two strands – to cope with the increased bug density, and to reduce that skills deficit. To cope with the bugs, use a suitably balanced combination of tactics:

  • detect bugs earlier in the lifecycle,
  • reduce the complexity of the bugs that are found,
  • reduce the cognitive load required to fix the bugs, and
  • reduce the impact of the bugs that do make it through your delivery process unnoticed

A word of caution on the reduction of skills deficit strand – it’s a much slower solution than introducing coping mechanisms for the increased number of bugs and cannot be relied on as the primary solution strategy for short-medium term improvements.

When developing your mitigation tactics, it is sensible to avoid as many options as possible that rely on individuals in teams “working harder” or working with “elevated skills” as those are unrealistic. The lever you can exert the most significant change with is team behaviour, specifically team ways of working. It would also be sensible to incorporate working patterns that boost individual learning, as that is an approach to (eventually) reducing that expertise gap.

Detect bugs earlier

All bugs are detected because of an incongruity between what is observed and what is expected from a model (regardless of whether that model is tangibly identified or implicitly part of the knowledge that your product owner / subject matter expert / developer / architect etc. has). Each person or role has a different model representation and is therefore able to detect different bugs.

One strategy for detecting bugs earlier is to compare observations against as many of these mental models as soon as possible, for example by having product owners or SMEs actively embedded within your delivery teams, working alongside the developers daily. The most established strategy for increasing visibility of the work being done as early as humanly possible is pairing, especially pairing across different roles (e.g. developer and analyst, or analyst and tester). Pairing is also one of the most effective mechanisms available for reducing that skills deficit.

Another strategy for detecting bugs earlier, is to perform downstream activities sooner (for example production deployments). There are associated costs – for example the delivery strategy will need to be based on incremental development where thin slices of “complete” functionality are built, integrated and deployed. Organisations that are more sequential in nature (for example, organisations that have a difficult / scary / time consuming / manual “route-to-live” process) tend to get the biggest benefit from attempting this, but they’re also the organisations that are the most afraid to try.

Reducing the overall duration between creating a bug, detecting it, and fixing it reduces the cognitive load required to fix it, as the team’s working memory will already be loaded with the appropriate context. Discovering a bug “a long time” after it was created requires significantly more mental preparation to regain the mental models that were in play at the time the bug was created.

Reduce the complexity of bugs

Given that all bugs are coded, the only fundamentally viable strategy to reduce the complexity of bugs that are found is to write simpler bugs. I find Einstein’s original quote can be helpful to communicate this message:

“It can scarcely be denied that the supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience”

Albert Einstein

A scientific approach to analysis can be helpful, as it can reduce the number of “implementation patterns” that exist in your IT solution. That may introduce learning-curve challenges, e.g. if your delivery teams are unaccustomed to hypothesis-driven development.

Structured techniques such as test-driven development can also help, as they manage the “thinking complexity” of software development, and have the side effect of producing tests that let you continuously monitor your code quality over time (even more helpful when your team’s expertise has been lowered). But watch out for common problems in tests – e.g. https://www.yegor256.com/2018/12/11/unit-testing-anti-patterns.html
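
To give a flavour of the rhythm, here’s a minimal test-first sketch in Python (pytest-style asserts; the parse_duration function and its behaviour are invented purely for illustration): write the failing tests first, then the simplest code that makes them pass, then refactor.

```python
# Minimal test-first sketch (pytest-style). The function and its behaviour
# are invented purely for illustration of the rhythm: write the failing
# test first, then the simplest code that makes it pass, then refactor.

def test_parses_minutes_and_seconds():
    assert parse_duration("2m30s") == 150

def test_rejects_empty_input():
    import pytest
    with pytest.raises(ValueError):
        parse_duration("")

# Written only after the tests above exist (and fail):
import re

def parse_duration(text: str) -> int:
    """Convert an 'XmYs' duration string into total seconds."""
    match = re.fullmatch(r"(?:(\d+)m)?(?:(\d+)s)?", text)
    if not match or not text:
        raise ValueError(f"not a duration: {text!r}")
    minutes, seconds = (int(g) if g else 0 for g in match.groups())
    return minutes * 60 + seconds
```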

But the biggest strategy for reducing the complexity (and number) of bugs in your system is to write less code.

Reduce the cognitive load

Employing strategies that reduce the complexity of bugs also has the side effect of reducing the cognitive load needed to understand and fix each bug. Additional strategies can include:

Working in smaller blocks also acts as a limiter on the amount of complexity that can possibly be present in each block. Work-partitioning techniques such as user story decomposition can help.

Techniques that use both visual and auditory inputs are easier to process by people, as the two modes are processed by different channels and use different working memories. Techniques such as Rubber Duck Debugging are a form of a think aloud protocol where the auditory channel can help an individual formulate a hypothesis along a fundamentally different line to what they can see, thereby increasing their effective cognitive ability.

Working in larger groups (for example, see mob programming) can be an extremely effective technique for consistently maintaining a high degree of cognitive capacity, as the overall effect is to smooth the natural peaks and troughs in the cognitive abilities of the individuals (e.g. some people are morning people, others are night owls, but there’ll always be someone “firing on all cylinders”). Linus’ Law is a concise articulation – given enough eyeballs, all bugs are shallow.

Reduce the impact of bugs that escape

It’s inevitable that bugs will escape into production. The final dimension to consider is recovery. The easier and faster it is to recover from a production incident, the less severe the effects of the problem. Recovery time is spent:

  • locating the root cause of the fault
  • fixing the root cause
  • delivering the fix

Locating the fault: there are two basic (and complementary, if required) steps to take. Your triage and fault-finding processes could determine whether the bug was introduced as part of the last release, as well as where specifically in your solution the fault lies. The first piece of insight can take considerably less time to obtain than the second. If the fault was introduced as part of the last release, one possible early response is to roll back the release, which stops further failures from occurring. A robust fix can then be developed and a new release planned. This naturally comes with an “opportunity cost” associated with not having access to the rest of the functionality in the pulled release. That opportunity cost can be virtually eliminated, but it requires the ability to release every single change independently. Development and deployment strategies (e.g. using trunk-based development and feature toggles) can greatly reduce the complexity associated with this.
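
As a rough illustration of the toggle idea (the toggle store and feature names below are hypothetical, and a real system would usually read toggles from a config service or database rather than a hard-coded dict), a sketch in Python might look like this:

```python
# Minimal feature-toggle sketch. The toggle source (a config dict here) and
# the feature names are hypothetical; real systems typically read toggles
# from a config service or database so they can be flipped without redeploying.

TOGGLES = {
    "new_billing_calculation": False,   # deployed to production, but dark
    "redesigned_invoice_pdf": True,     # released to users
}

def is_enabled(feature: str) -> bool:
    return TOGGLES.get(feature, False)

def calculate_bill(usage: float) -> float:
    if is_enabled("new_billing_calculation"):
        return new_billing_calculation(usage)   # new code path, shipped but off
    return legacy_billing_calculation(usage)    # old path still serves traffic

def legacy_billing_calculation(usage: float) -> float:
    return usage * 0.10

def new_billing_calculation(usage: float) -> float:
    return usage * 0.10 + 1.50  # hypothetical new pricing rule
```

The point of the sketch is that the new code path ships with every deployment but only takes traffic when the toggle says so, which is what lets you pull a misbehaving change without pulling the whole release.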

Fixing the root cause: strategies outlined earlier to reduce the cognitive load and identify bugs earlier also help here.

Delivering the fix: this is inherently limited by the speed, flexibility, resilience and degree of automation of your route-to-live processes. The single biggest change you can make to drastically improve your delivery processes is to reduce the size of each release (getting as close as possible to releasing each minor change independently) and to repeat the release process constantly (I’ve experienced multiple releases a day into production, even in a public sector context). Attempting to do this will expose all the sticking points and problems in your route-to-live processes, and will give you areas to target for improvement. It is almost always helpful to decouple software deployments (technical, automated, controlled by delivery teams) from business releases (business-triggered, business features toggled, aligned with wider organisational change programmes), as you are then able to decouple automation improvements from business change readiness.

Final Thoughts

These are useful strategies for coping when your delivery teams have to compensate for a reduction in their base capabilities. However, there is nothing fundamentally preventing delivery organisations from just implementing these strategies to increase the effectiveness of the teams that they currently have. Is anything stopping you?

Courageous Executives and the Permafrost

Last week I started to think about how different parts of an organisation have different views on what is important.

This is nothing new. Why should I keep reading?

I think the existence of that permafrost layer is problematic if you want to be inventive or innovative. If you’re looking to evolve into a “courageous executive”, then the credibility associated with reducing that middle layer could be useful. However, to stand a chance of success when you “fight the machine”, you should pay attention to the basics, including:

  • the degree of effective support you’ll have,
  • the culture underpinning your sub-organisation, and
  • how different the “volumes” are when comparing your sub-organisation’s culture with the wider organisation’s culture.

In this context, “sub-organisation” refers to the subset of the organisation under your sphere of influence, either formally via the org chart or informally via your influence, credibility, relationships, etc.

If you intend to create space for your organisation to innovate and experiment, then it’d be advantageous if it were more naturally innovative and experimental. That requires a different attitude to failure than one where saving face is the preferred reaction. To paraphrase: you’ve got to shrink that middle layer (conceptually).

A useful strategy that can reduce the size/significance of this middle layer is dealing with fear from a cultural standpoint. Something that can help the formulation of specific strategies is an understanding of how the Loss Aversion and Loss Attention cognitive biases manifest in the individuals that you identify as being significant anchor points in this middle layer.

  • Loss Aversion: It is better to not lose £10 than it is to find £10.
  • Loss Attention: Tasks that involve losses get more attention than tasks that do not.

A clue to something that seemed to help came from that bottom level. When this scenario was presented to delivery teams, they didn’t seem worried about saving face when faced with that hypothetical scenario. Digging further, it wasn’t that face or reputation was unimportant, it’s that they didn’t care all that much about what the “middle management types” thought of them (that’s how individuals seemed to interpret the scenario). That middle management group was not considered to be their judging community. They were far more concerned about their reputation amongst other delivery folk. For example, a developer might try to force a software library to work, applying workaround after workaround, instead of just accepting that the library was the wrong fit for what’s needed (because they wanted a reputation for being able to make anything work). However, that same developer might not be concerned if their boss’s peers don’t think much of them, if they don’t care about the office politics.

That got me thinking that perhaps one way of reducing that awkward middle layer was to change their perceptions of what matters to the community they consider to be judging them. I think this is different to tackling their priorities head-on, in that it’s less confrontational, so it stands a better chance of working (at least partially). That middle layer would need to view organisationally significant things like money, time or customers as things that could be truly lost (so that their normal loss aversion and loss attention biases would influence them in beneficial ways, if you were indeed trying to grow into a courageous executive). They would also need to feel that personal reputation could not be lost in the same way.

Changes to that community can come about as a result of external or internal pressures. Assuming you’re not “senior enough” to be able to enforce a new operating model that the community must comply with, your more effective strategies will be the ones that originate from inside that community.

Potentially Useful Infiltration Techniques

  1. Repeated Messaging: Humans are influenced by exposure. The first time something controversial is heard, it’s shocking. After the hundredth time, it barely registers consciously. Interestingly though, the subconscious still registers. In that way, people can be programmed. By repeating your message regularly into the target community (and with variations to keep it interesting), over time you’ll lower resistance to your ideas.
  2. Let others get credit, even if the ideas are yours: Having others in the community get an endorphin rush when they share an idea influences them to repeat the behaviour. So it’s your idea really – so what? You’re aiming for something else. Besides, a few people will know anyway. That “inside knowledge” can be a powerful aid to your attempts at growing an organisational culture that has you as an executive – it creates a sense of belonging between you and them, which, if nurtured, can transform into loyalty.
  3. Be seen to be visible to the next few power levels up: For the hierarchically minded, seeing you playing nicely with their boss and their boss’s boss, can signal that it’s more acceptable for them to align with what you’re saying. It has a secondary effect to help you gauge whether or not what you want to do is palatable to the next few levels up the power structure. If it is, then it can be an indicator that there is space for you to grow your leadership potential.

Save Money or Save Face – What would you choose?

Thought Experiment

You’re in charge of “A Thing” and are accountable for the successful execution of the objectives that the Thing has (e.g. you’re the CTO in charge of the IT Department, or you’re the Project Manager of a strategic project). Which of these scenarios is worse?

  1. A decision you make results in a significant financial loss for the Thing. However, you do not suffer any loss of face (e.g. you can successfully employ the “circumstances outside my control” defence).
  2. A decision you make results in the Thing avoiding a significant financial loss. However, it goes against the political winds and you suffer a significant loss of face.

Just what is important?

I’ve been trying to understand how the notion of “what is important” varies across organisational structures. A pattern that seems to be emerging is that what is of primary importance falls into one of three zones.

  • The first zone values money and externally mandated timescales (e.g. legislation)
  • The second zone values their personal reputation, their personal “empire”
  • The third zone values their use of time – being able to spend it on valuable or meaningful work

These three zones look very similar to the zones in organisations when thinking about appetite for change (e.g. very senior leadership is the first zone, the delivery teams are the third zone and the “permafrost layer” are the second zone).

The thickness of each of the three zones/layers is an organisationally specific function, and the pattern seems to exist regardless of scale (I’ve even seen this sort of structure in project teams, for example when a product owner is too busy and a business analyst operates as a message-passing proxy). The existence of this middle layer correlates with size, but is not caused by size. One factor that reinforces the middle layer is fear of failure, which is heavily influenced by the personal history of everyone concerned. Given any degree of employee churn, I think it’s impossible to have no middle layer at all.

The important question for me is: so what? I’ll look at that next.

Building a Knowledge Sharing Community

Why am I trying to establish this?

Having an effective knowledge-sharing strategy that my consultants and coaches actually use can significantly boost the quality of their deliverables on engagements, as their access to knowledge and experience will be richer. Richer knowledge leads to better decisions, which lead to better outcomes, blah blah blah.

No really, why?

The truth is far less grandiose. And much more personal. I want better relationships with my colleagues. And the main reason for that is selfish. When I’ve got something interesting/gnarly to solve, I’d MUCH rather solve it collaboratively with someone. I find the ideas that come out of a buzzing pair/trio are generally FAR superior (not just in terms of merit, but also the emotional responses – things like surprise, delight and even just pure joy) than anything I’d come up with on my own. A major contributor to that heightened emotional response is the fact that it’s a shared experience – this has a reinforcing effect on the individuals.

This topic is also related to my post on Courageous Executives – I think being able to create an environment where knowledge and help is shared freely and easily is helpful in establishing a progressive organisational culture.

Some things to consider

The first thing I need is critical mass. I need enough colleagues with sufficient latent willingness to participate to increase the odds of interesting interactions occurring.

The second thing is to recognise the reality of the group dynamic. I work for a consulting organisation and, like most others, it staffs people on engagements. Those engagements have teams. Back at base though, I’m grouped with a set of similar people under the same line manager. That line manager’s “team” is not a team from a behavioural-dynamics perspective, regardless of how the individuals describe themselves. Esther Derby is my go-to source for a concise articulation of what it means to be a team.

The third fundamental aspect to consider is how people participate. The “1/9/90” rule of thumb has been around for over a decade now, potentially longer. A quick recap: for online groups, there are three broad categories of interaction:

  • 1% of the population will initiate a discussion thread
  • 9% of the population will actively contribute to discussion threads
  • 90% of the population will lurk
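
As a back-of-the-envelope illustration of what that split means in practice (the community size here is hypothetical):

```python
# Back-of-the-envelope application of the 1/9/90 rule of thumb.
# The community size is hypothetical.

community_size = 250

initiators   = round(community_size * 0.01)   # start discussion threads
contributors = round(community_size * 0.09)   # reply and contribute
lurkers      = community_size - initiators - contributors

print(f"{initiators} initiators, {contributors} contributors, {lurkers} lurkers")
# e.g. 2 initiators, 22 contributors, 226 lurkers
```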

I also reasoned that in order to sell what I wanted people to do, it needed to be more engaging/arresting than just these numbers (which no doubt many of my target audience would have heard already, so there’d be little impact). While daydreaming about how to go about launching this, an idea flitted across my mind, which amused me. I ran with it, just to see how far I could go. Fishing.

  • Lures: These would be the “1% and 9%” of the population. Their job is to make the environment interesting/appealing enough for the others to participate.
  • Fish: These are the 90% of the population who lurk. My objective is to convert them into lures by engaging them.

Above all else, the most important thing to remember is that knowledge management is all about people. We have to avoid the temptation to create yet another document repository, as those generally end up being pointless (keeping a stack of documents current is a huge time investment, so very few people do it – the documents become outdated quickly and users lose confidence in the repository as a source of relevant information).

How did we start?

The first step was to get a sense of that latent willingness that I needed. To avoid unnecessary confusion, I stuck to a typical technique – I ran a workshop. The stated objective was to understand the key topic areas / themes that as a collective, we had some self-professed expertise in. The exam question was

write down topics, regardless of scale, that you would be happy for a colleague to come to you about if they needed some help

Approaching the audience in this way would nudge them into feeling valued from the outset (the alternative would be giving them a candidate set of themes and asking them to sign up. The list at the end of both approaches would be the same, but the first scenario would have far more engaged individuals as they’d own the list). This workshop also let me find co-conspirators.

Then what?

In a word, admin. We had to create a navigable map of the topics that the audience had supplied (to help people find the right place to ask questions). In the end, we settled on a very basic two-level tree consisting of high-level themes and more detailed topics. That allowed the grouping of individuals to be based on themes with related topics. The main rationale was that as experience and knowledge changed over time, the specifics of the topics would evolve, but the main theme would remain constant. That allowed for stability of membership – and membership stability is a significant factor in determining whether or not a theme survives. The candidate themes also had candidate lures – the people who had volunteered for the topics under that theme.
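
As a minimal sketch of that two-level structure (all theme, topic and guardian names below are invented), the point being that topics can churn underneath a stable theme:

```python
# Minimal sketch of the two-level theme/topic map. All names are invented.
# Topics are expected to churn over time; the themes (and their guardians)
# stay stable, which is what keeps membership stable.

themes = {
    "Agile Delivery": {
        "guardians": ["Asha", "Ben"],
        "topics": ["story splitting", "OKRs", "estimation"],
    },
    "Route to Live": {
        "guardians": ["Chen"],
        "topics": ["trunk-based development", "feature toggles", "release automation"],
    },
}

def who_can_help(topic: str) -> list[str]:
    """Find guardians for the theme that currently lists a given topic."""
    return [
        guardian
        for theme in themes.values()
        if topic in theme["topics"]
        for guardian in theme["guardians"]
    ]

print(who_can_help("feature toggles"))   # ['Chen']
```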

We had to sell the themes to the potential lures.

We also had to set some expectations about what being a lure entailed. By this point the term “Theme Guardian” had started to emerge as the role that was to be played. This is what we ended up with.

  • Guardians own their Theme
  • Guardians are responsible for the quality and integrity of the Theme’s content
  • Guardians should invest in PDCA cycles to improve the environment in the Theme.
  • Guardians need to evolve their vision and strategy for their Theme.
  • Have “something” to help new joiners understand your Theme.
    • This doesn’t necessarily have to be a document. It could be quarterly intro sessions on WebEx (for example).
  • Set expectations of how you’d like the community to operate – and remind your community regularly

I find analogies very useful as abstraction models to help me understand a domain or problem. On the chance that at least some of the candidate guardians think in a similar manner, I picked a couple of potential models they could use to refine their thinking about how they wished to operate:

  • Town Planners and Communal Spaces. What makes some public spaces incredibly successful, while others turn into ghost towns?
  • Aboriginal Storytelling. In particular, explore the claims that the Dreamtime stories have remained intact for over 10,000 years without degrading, despite having only a verbal/pictorial form, not a written one.

Selling the concept

Now that we had a starting position (themes, candidate guardians, some guardian responsibilities), we needed to launch. To help that, we produced some general use guidance on themes:

  • People are at the core of any successful knowledge management strategy.
  • Information held in a person’s head is updated as a by-product of things that person does. Information stored in documents requires additional explicit effort.
  • For a Theme to be useful, knowledge needs to flow from person to person.
  • If a Theme’s only got one person who’s interested, it’s not a Theme.
  • When a question is asked, answer it directly, even if it’s been asked before. Never rely on a document (or link etc.) to answer for you. If necessary, end your answer with “this document/link/other goes further” (or words to that effect).
  • Think about what your “background radiation” looks like. Themes need to feel active otherwise you won’t get people stopping by and asking questions.
  • Have variety in the complexity / subtlety / nuance of the conversations and discussions. For example, if you only have very highbrow discussions, you’re likely to put off the inexperienced. If you only have introductory content, then the experts may not participate.

Launch Day

These were our objectives:

  • Start small: We picked one Theme for which the co-conspirators were willing to act as Guardians or as participating members. We would attempt to orchestrate and create an active community for that theme.
  • Momentum: We wanted to create some observer habits in the wider community. With enough people checking in daily (for example), we’d greatly increase the chances that conversations would spark up. But we had to kick start the making-it-worth-everyone’s-time process.
  • Win over an influential sceptic: Having a known sceptic promote what we were trying to do would help persuade other sceptics that there may be some mileage in investing in this strategy.

What happens/happened next?

1 Week After Launch

There’s a smattering of interest from a handful of people. A few posts have been made and there has been some commenting on posts. Some of the early efforts from the co-conspirators have been around motivating and inspiring the community to participate. There is some optimism around that this approach feels different (probably because it isn’t tools oriented).

Predictions

It’s still early days, but these are my (current) predictions.

1 Month After Launch

The conversation topics broadly split into a handful of themes. Most of the themes appear to be consistent with what emerged from the initial workshop, but the actual topics discussed are quite different. There is some dissonance among the early adopters, as there are multiple unrelated themes being discussed in the same “place”, causing confusion.

3 Months After Launch

Enough interest in different themes has triggered new spaces on the platform for the conversations about those themes to be segregated, to simplify the cognitive load on the users. There is some frustration from some members who are interested in multiple topics, most likely due to how the individuals have modelled the interactions mentally – e.g. why should two people who are talking about a range of subjects have to keep switching which “chatroom” they converse in. Relationships are still point-to-point.

6 Months After Launch

Most of the theme chatrooms are now dormant, most of the activity has gravitated towards one or two Themes. There’s some blurring between the competing mental models – relationships are person-to-person and person-to-community.

Analysing my predictions

One of the most significant challenges I’ve seen with Knowledge Management initiatives is the belief (usually tacit) that it’s all about the content. I believe it’s all about the relationships, and the knowledge of who to talk to when a person needs to know something. My predictions rest on a base assumption that knowledge can be structured and organised at a fine grain. I think that’s an assumption that’s also being made by the majority. I’m expecting this assumption to be proven false, and that we will pivot back to trying to be more of a community than a knowledge repository. Looking at the population numbers, I don’t believe there’s any need for more than two or possibly three communities (eventually).

Are you limited by your Agile Coach Sourcing Strategy?

This is about a revolving-door programme of hiring coaches. One of the things I’ve observed regularly is that these kinds of hiring programmes create an implicit timeframe by which improvements must be “delivered”. In itself, that’s not necessarily a problem. What is a problem is that the timeframes are nowhere near long enough (IMHO) if the aim is to change the culture.

How you measure your coaches also has an obvious effect on your outcomes. Procurement plays a big part here. I’ve seen a lot of coaches who have their success defined by some variation of how much effort they expend. Training courses run. Numbers of teams “coached”. Amount of collateral documented in a wiki/repository/etc. It’s even worse if that’s also what the coach genuinely believes is a measure of success.

It’s easy to buy the visible stuff. However, the visible stuff has no sustaining ability – that comes from the hidden stuff. Agile’s a culture, but how do you buy a culture? It’s much easier to buy a new set of processes, dress code, office layout, stationery, reporting templates etc.

By buying the visible stuff, you easily get to a set “maturity” of what-to-do (because your coach has implemented this a million times), but you might lose the learning-about-why, and your ability to improve comes to a crashing halt at some point when your coach runs out of instructions to give you (or they leave). You’ve substituted doing what you’re told for thinking, because an expert tells you to.

One of the things that could lessen the likelihood of this happening to you is longevity. Although that does bring its own set of challenges. Longevity can come from humans, or guidance that’s respected enough to be followed (or at least attempted) – e.g. the guided continuous improvement advice from Disciplined Agile Delivery’s collateral.

Longevity can increase the chances that the underlying factors get some attention – beliefs, culture, mindset. But longevity is hard. When things go wrong (and let’s face it, things always go wrong), it’s easy to blame the changes that are being attempted. I’ve often seen blame being thrown about as a precursor to the environment becoming more toxic. In those environments, unless there’s sufficient emotional investment, coaches leave.

In my view, the most effective counteracting force against that toxicity, is leadership. Your senior leadership must have the trust of the members of the organisation. However, trust doesn’t come from great speeches but from credibility. That means the senior leadership must also evolve and adopt a more collaborative/open/trusting/etc culture. They must behave differently. And they also need to provide regular, ongoing and consistent reassurance that the environment is safe so that teams can evolve to a more agile culture with no repercussions for steps that don’t quite make it on the first try. I think that works because humans emulate leaders. If the leaders are open and collaborative, then the teams are more likely to also become more open and collaborative.