Who should you have as a Product Owner? What are you optimising for?

Subtitle: Are you optimising for development efficiency, bleeding edge innovation, production stability or credibility?

Context

I have a client that's going through an agile transformation. They're a core Infrastructure "horizontal" global (sub)organisation. In other words, they keep the lights on, look after all the Crown Jewels, and their customers tend to be application teams or "everyone" (e.g. Exchange, DNS, etc.). They've adopted one of the many Spotify-model variants and are trying hard to make those structures work in their context.

This post was triggered by a set of conversations I've been having with "platform leads" (the higher end of middle management in the classic sense; they report directly to a C-suite senior leader). The topic was team design – more specifically, who should be appointed as the Product Owner for any given team. The first thing they needed to understand was why the decision is architecturally significant (beyond the significance of the team that owns a service): the character traits and personal priorities of the product owner shape the nature of the engagement a team has with its customers and stakeholders.

Before

There tended to be a couple of significant patterns.

Pattern 1: The most senior team member, sometimes a manager. These tended to be people who’ve been at this organisation the longest.

Pattern 2: The most technically skilled developer/technical architect/technical specialist. These tended to be people who know the most about what the product(s) or technologies are capable of.

The Problem

The main problem is that the trade-offs aren't explicitly visible. For example, it'd be great for technical quality and feature-set richness if the forward-thinking R&D person were put in charge, but there's a significant risk to credibility when dealing with operational teams who have to service customer requests (if, for example, the forward-thinking person is uninterested in the "mundane day-to-day" and recommends waiting for a solution that isn't live yet). For teams that are part of an ecosystem of teams that collectively deliver a live service, coordination and collaboration tend to be much more valuable than raw technical expertise, especially for services that must not fail, such as DNS and DHCP (but YMMV).

After

The only significant change in how roles were allocated was to consciously discuss the trade-offs associated with each of the candidates across more than just the “technical expertise” dimension. How that person will be perceived by downstream teams is also important, especially if multiple teams need to coordinate in order to deliver a service. While shaky at the start, they soon developed a degree of skill and nuance when thinking about these additional, non-technical dimensions.

I've also recommended that, as part of their pod mission statements, they make some brief comments about the trade-offs they've made to arrive at their operating model and structure. This is doubly important for teams that haven't explicitly articulated their purpose before. Deliberately articulating (for example) that credibility with peers and service stability are prioritised over raw speed of delivery makes it much clearer why a team engages and prioritises the way it does. It also acts as a trigger for team leaders to develop a range of leadership styles, as they're able to recognise traits and weaknesses in themselves that little bit more easily.

OKR challenges: Outcome thinking is hard!

Outcomes vs Outputs

This is the typical challenge, and pretty much everyone finds writing outcome-oriented statements difficult. In my experience, it's not because people don't know what outcomes are.

I think the biggest challenge is that people spend the vast majority of their thinking time on outputs – what their tasks are, what they need to produce, whether they're late, whether the quality is OK, whether they need help, whether there are any risks, and so on. They spend comparatively little time thinking about the bigger picture of why they're doing all that work. I think output thinking is so prevalent in society that if, by chance, you met someone who spends most of their time thinking about why they do things, you might casually wonder if that person is going through some form of existential crisis.

Given the relative familiarity of output thinking and the relative novelty of outcome thinking, it's little wonder that early attempts at defining outcome-oriented Key Results are generally poor. Here's what a collection of collaborating teams, who collectively own API Management within my client's IT organisation, had as a "relatively mature OKR".

Make services easier to use
- Increase the number of APIs available to the customer from X to Y
- Increase the number of automated releases per week from X to Y
- Document the CI/CD pipeline by EOY

While the Objective ("make services easier to use") is great, even a cursory examination of the Key Results confirms that they're all within the teams' direct control, and are therefore outputs. After some coaching, they were able to rewrite it like this:

Make services easier to use
- Increase the number of API calls per day from X to Y
- Decrease the lead time to develop an API from X to Y days
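To make the difference concrete, here's a minimal sketch (the field names and numbers are entirely hypothetical, not the client's actual figures) of how an outcome-style Key Result can be tracked: progress is reported against an observed measure moving from a starting value towards a target, rather than against tasks completed.

```python
from dataclasses import dataclass

@dataclass
class KeyResult:
    description: str
    start: float    # X: the measure when the OKR was set
    target: float   # Y: where we want the observed behaviour to get to
    current: float  # the latest observation, not a count of tasks done

    def progress(self) -> float:
        """Fraction of the way from start to target, clamped to 0..1."""
        span = self.target - self.start
        if span == 0:
            return 1.0
        return max(0.0, min(1.0, (self.current - self.start) / span))

objective = "Make services easier to use"
key_results = [
    KeyResult("API calls per day", start=20_000, target=50_000, current=26_000),
    KeyResult("Lead time to develop an API (days)", start=30, target=10, current=24),
]

for kr in key_results:
    print(f"{kr.description}: {kr.progress():.0%} of the way to target")
```

Nothing in this sketch cares about which projects or tasks produced the movement – which is exactly the point of the rewritten Key Results.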

It’s hard to live with OKRs if your management paradigm doesn’t let you

Context

I have an interesting engagement with a client that is trying to adopt OKRs as an alignment and coordination tool. This particular organisation is a large corporate in a heavily regulated environment. I’m working with functional areas within their traditional risk averse IT organisation. Their historical response to their environment has been to favour command and control strategies.

The scenario that triggered this post

An organisational unit had two in-flight projects (both delivering production releases reasonably regularly). One had all the hallmarks of a well-run project – including a prediction that it would deliver all its requirements on time. The other project had several challenges and was predicted to finish fairly late and over budget. Both projects were funded as preferred options on their respective OKRs. The well-run project was having no appreciable effect on its OKR. In contrast, the "late" project was already having a positive effect on its OKR. Which project do you imagine was stopped?

The Ideological Conflict

The challenge stems from the nature of OKRs – more precisely, what they're inherently there to manage. OKRs emerged at Intel as a way of better managing discovery-type work: work where the precise nature of what was going to be produced would be hard to predict, but the desired effects on the market could be articulated. This articulation came in the form of outcomes – the changes in behaviour of their market. In order to track whether or not progress was being made (as opposed to effort being spent), the Key Results construct was used to articulate the visible signs of the market responding "favourably".

An Objective would be "Establish the 8086 as the most popular 16-bit microcomputer solution." A Key Result for that objective would be "Win 2,000 designs for Intel in the next 18 months."

And now to the challenge. It’s essentially a misalignment between corporate culture and the nature of “what is important in OKRs”.

Organisations with a strong "PMO core" typically value the predictability of projects. They like "green projects" running "on time and on budget". Nothing wrong with that: predictability is a side effect of having your uncertainty and risks under control, and what organisation would turn its nose up at that?

OKRs change the focus of what is important. The construct has three basic levels. The two important ones are in the name – Objectives and Key Results.

Key Results define a measurable way of detecting a desired outcome. They're the observable effects. What Key Results don't do, however, is define how you achieve that outcome. That's the third level – the options that you identify and the decision on which option to take. These options are definitions of output.

These outputs themselves don't matter – only the effects (or lack thereof) that they have.

The idea is that if an option isn't having the desired effect of moving the needle on your Key Result, then you pivot. The sooner you pivot, the less you waste. Successful execution of a project (green / on-time / on-budget) has no real bearing on whether or not it's the right thing to do.

Unless your organisation changes its internal model of what is valuable from output delivery to outcome realisation, the delivery management function (e.g. the PMO) will continue to focus on optimising output delivery instead of output effectiveness.

Dealing with unexpected work

Problem Statement

One of my large corporate clients has teams who’re all struggling to cope with what they see as large amounts of unplanned work. There’s a great degree of commonality in the conversations that I have with the teams. Most of them are using Scrum at the team level.

Opening Gambit

The first thing to do (IMHO) is to build a model of the different types of unexpected work that is hitting the team. I usually use “predictability” and “plannability” as my two classifying dimensions. That leads to three types of unexpected work (the fourth category is planned work):

  • Unpredictable and Unplannable. This is stuff that you simply don't realise is out there. It hits you out of the blue.
  • Predictable but Unplannable. You know there are predictable patterns of work, but you don’t know the specifics. For example you know you get about 100 tickets a week from your support queues.
  • Plannable but Unpredictable. You can plan the work, but you don’t realise you have to. For example your priorities might shift towards things further down your backlog.
  • Plannable and Predictable. The ideal case.

Understanding the categories of unexpected work can trigger more strategic thinking. For example, you might have a blind spot in your priorities (note: in this client's case, they're using OKRs to articulate intent). You might also determine that you are able to turn down some of these requests (or reroute them to a more appropriate team).
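As a rough illustration (the function, wording and example calls below are mine, not anything the client uses), the two classifying dimensions can be treated as a simple two-by-two lookup:

```python
def classify(predictable: bool, plannable: bool) -> str:
    """Map the two classifying dimensions onto the four categories of work."""
    if predictable and plannable:
        return "Plannable and Predictable - the ideal case"
    if predictable:
        return "Predictable but Unplannable - e.g. roughly 100 support tickets a week, specifics unknown"
    if plannable:
        return "Plannable but Unpredictable - e.g. a backlog item suddenly jumps in priority"
    return "Unpredictable and Unplannable - it hits you out of the blue"


# Hypothetical examples of incoming work
print(classify(predictable=True, plannable=False))   # routine support demand
print(classify(predictable=False, plannable=True))   # priority shift in the backlog
```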

But What About Today’s Symptoms?

The next thing to do is to find out whether the new work can wait until the next sprint. This allows the team to maintain the momentum of their current sprint while prioritising the new work.

Depending on the stakeholder making the request, your relationship, the amount of power held by you/them, your shoe size, their attitude to hats (*), there may be a right way and a wrong way to approach this. I've usually found that "young" organisations (from an agile maturity perspective) require more visible signs of authority, such as a message like "the Scrum process we're following has a mechanism for accepting late changes, and they go to the top of the product backlog ready for the next sprint". In order for the team to maintain a good flow of work, they may also need to schedule a backlog refinement session to get that new work request ready for a sprint.

What if the work can’t wait?

There will be situations when the work can’t wait even a single sprint and needs to be tackled now. To have a controlled acceptance of the work into the team’s current sprint, it’s worth exploring a few aspects of the work.

  • Does the new work align to the current sprint goal?
  • How big is the new work (for example T-shirt sizing)?
  • Is the work a direct replacement for something already planned?

If the new work aligns with the sprint goal, then the substitution of already planned work can be relatively straightforward. If the new work doesn’t align, or worse, is in direct conflict with the sprint goal, then it’s worth exploring the team’s strategic priorities to detect potential blind spots. If they’ve been able to size the new work, they should be in a position to swap a similar amount of work in the sprint.

What if the team’s not in a position to deliver less than forecasted?

There may be situations where sprint forecasts are used by other teams as part of their planning, and delivering less than the forecasted amount would be problematic. While it's not a great situation to be in (and is potentially a sign that the overall level of agile maturity is low-ish), it is something that teams sometimes have to contend with.

In this scenario, the easiest strategy is to plan with a buffer, a bit like the old MoSCoW prioritisation technique for timeboxes. The DSDM guidance (the origin of the terms Timebox and MoSCoW prioritisation) typically recommends that no more than 60% of a timebox is made up of Must Have requirements (for the scope of the timebox). This means that while the team will deliver as much as it can, consumers of the timebox should only really expect 60% of the total timebox plan with certainty, the remainder being variable.

Buffers work well when they're expressed as a percentage of the remaining time. This means the available buffer drops as the sprint executes. The danger of a fixed-time buffer is that it's easy to misunderstand how much change/disruption the team can tolerate towards the latter part of the sprint.
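To illustrate the difference (with made-up numbers – a ten-day sprint, a 20% buffer, and a two-day fixed buffer for comparison), here's a minimal sketch of how the two approaches behave as the sprint burns down:

```python
SPRINT_DAYS = 10
BUFFER_PCT = 0.2          # hypothetical: keep 20% of the *remaining* time as buffer
FIXED_BUFFER_DAYS = 2.0   # hypothetical fixed-size buffer, for comparison

for day in range(SPRINT_DAYS + 1):
    remaining = SPRINT_DAYS - day
    pct_buffer = remaining * BUFFER_PCT               # shrinks as the sprint executes
    fixed_buffer = min(FIXED_BUFFER_DAYS, remaining)  # stays the same until it can't
    print(f"Day {day:2d}: {remaining:2d} days left | "
          f"percentage buffer = {pct_buffer:.1f} days | "
          f"fixed buffer = {fixed_buffer:.1f} days")
```

The specific numbers don't matter; the point is that the percentage version automatically signals how much disruption the team can still absorb late in the sprint, whereas the fixed buffer keeps promising two days of slack right up until there aren't two days left.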

What if the buffer amount needed changes drastically from sprint to sprint?

If the Scrum team is unable to apply any of the previous bits of advice, then there’s a good chance that there is too much uncertainty in the team’s context for Scrum to be an effective team process, as Scrum relies on being able to predict a sprint length into the future. The team may get better results from adopting Kanban as a team process, and tuning their WIP limits to keep enough slack in the system to respond to their changing demands.

Agile Coaching – Building my mental model of a team

Introduction

This post follows on from Agile Coaching – Building my mental model of a person and also comes into play when I work with teams.

This is a basic model, but it's been good enough so far.

Structure

The one element worth expanding on is "Hierarchy". The reason I find this valuable is simply because of how I engage with teams as an Agile Coach. I've never once been directly approached by a team requesting my services as a coach as their initial contact. It's always a request from their boss (or their boss's boss), or there's some form of Agile Transformation in progress and the teams get access to coaches. This engagement often has an element of "something being done to the team", albeit sometimes only a trivial amount. If I understand how they perceive authority, I can, if necessary, temporarily align myself with an element of authority they recognise, just so that they'll listen to me, at least initially. It's not a long-term relationship builder, but it is a foot in the door.

Constraints

These are all of the constraints that the team has to operate under. This section of the model helps me build empathy with the team, and also helps me narrow down all the possible ways I could help them to the subset that's most relevant to them (otherwise I'll waste their time). The "History" element is particularly valuable, as it helps me get underneath statements like "It'll never work – we tried that before".

Purpose

There are two sets of purposes at play when working with corporate teams, typically because the reason the team exists in the first place is outside the control of any of the team members. This duality isn’t usually present in teams that have ultimate control over their destinies.

  • Why does the organisation want the team in place?
  • Why do team members think the team exists?

The "Inner Purpose" element is my representation of the internal monologue in the minds of the team members, combined with how well they form sub-groups within the team. By understanding the differences between these two sets, I'm better able to connect with the team as a set of individuals, and also to help them evolve towards a team that is as aligned as possible with their externally stated purpose.

I've noticed that the more divergent these two models are, the more I find "us and them" language and behaviour patterns. I've also noticed that in some teams, the "organisation" part can include the team leader.

Behaviour

In addition to the expected benefits of understanding the current state behaviours in the team, I get an additional benefit – it can help me blend into the team, camouflage if you like.

When I first engage with a team, there's that initial period where no one's quite sure where this relationship is going. As I'm typically brought in by someone outside the team, I need to stick around long enough to be able to help them. The initial period is handled by the "Hierarchy" element described earlier, but there's a limited window before that perception becomes damaging. If I'm able to integrate sufficiently into the team dynamic, any new ideas or suggestions I offer will feel to them like they're (partially) coming from inside the team. They belong to the team a little more than if I were thought of as an outsider. That's the dynamic that I need in order to sustainably help them. Coaching is a relationship, after all.

Exploring “Design Thinking” using the notion of a Frame of Reference

Introduction

This is a short piece aimed at the curious. Someone who’s been exposed to Design Thinking workshops and would like to know a little bit more about just why it works. While it’s helpful to have experienced a design thinking workshop, the lack of exposure can be compensated for by a little imagination.

The Lens

The tool used in this post is the notion of a Frame of Reference. A frame of reference is a complex schema of unquestioned beliefs, values and so on that we use when inferring meaning. If any part of that frame is changed then the meaning that is inferred may change.

A person’s frame of reference is always subject to change. Any new insight, belief or value adopted by a person has the potential to change their frame of reference. While it is far more likely that changes to an individual’s frame of reference are minor, there is always the potential for a fundamental shift in perspective (for example, anyone who’s ever said “I found out about X and it turned my world upside down”).

Some useful links about the Frame Of Reference
https://en.wikipedia.org/wiki/Cognitive_reframing
http://changingminds.org/explanations/models/frame_of_reference.htm
Tversky, A. and Kahneman, D. (1981). The framing of decisions and the psychology of choice. Science, 211, 453-458.

Problem Solving – Solo vs Teamwork

Using the extremely simplistic model that a person is T-shaped from a skills and expertise perspective (awareness of a broad range of topics and in-depth expertise in a specialist area), when a person tries to solve a problem on their own, they're typically thinking at the limits of their expertise. A bit like the red ellipse in this sketch…

Any new insights they come up with are likely to also be in the red ellipse, meaning the associated changes to their frame of reference will also be minor at best.

Contrast this to how a collaborating team operates. With the same models, each person in the team is thinking at the limits of their knowledge, but the main difference is that they’re also sharing their insights across the team. This usually means that the changes to an individual’s frame of reference can be significantly greater, as it can be influenced by new insights that they simply would not be able to derive by themselves.

You think differently when you’re in a team

Your Focus Areas Change

A significant part of a good design thinking workshop is a focus on the customer. It’s imperative to build a strong empathic relationship with your customer. Capturing those insights as personas or empathy maps is common. Using the same visual language of T-shaped skills and expertise, the insights that the team derives can be represented as follows

You think about different things when you focus on your customer

If done well, the red T will contain the values, priorities, pain points, motivations etc of the customer. By integrating these insights into the frame of reference of the team members, the ideas that they produce would, almost automatically, have a much better alignment with real customer value and customer priorities.

You Have More Time

Design Thinking workshops have a natural double diamond pattern, with alternating phases of divergent and convergent thought processes.

However, as a participant going through this structure (especially someone who "already knows what needs to be done and just wants to get on with it"), this can feel extremely slow and frustrating.

This "slowness" does have benefits. The main one is that it gives the insights floating around the team much more time to work their way into an individual's frame of reference and change how they think. In other words, it prevents participants from leaping to conclusions. While there's nothing inherently wrong with "heading straight to the answer", the solutions you develop using that strategy are not innovative, as by definition you've come up with them before. By maximising the opportunities to change your frame of reference, you increase the likelihood that your solutions are much more innovative.

Converting a broadcast into a conversation

This post came about accidentally because of a discussion I’ve been having with a colleague about the Elevator Pitch and how we’ve been trying to use it as part of a vision statement. I say “trying to use it” as my team’s primary service is that of organisational change via agile transformations, and the product isn’t as tangible as real-life products like bicycles, cars etc.

The context we were discussing was early on in an engagement, soon after a client "buys an agile transformation from my employer" (and I'll park my many issues with that statement – see things like this to give you a sense of how long I've been carrying that baggage). One of the first tasks is to broaden awareness among the client's employees of what's going to happen, and some lightweight communication and broadcasting is a typical medium for starting this awareness drive. As the elevator pitch is a natural fit for lightweight, accessible communication, it's a fairly obvious choice as one of the tools to be used.

What’s the problem?

One of the common mistakes I've seen with teams producing elevator pitches is that they spend most of their effort trying to craft the perfect words to get across the sheer breadth and scope of what it is they're trying to achieve. The work itself is rewarding, as successive iterations of the elevator pitch can heighten the emotional connection between the team and their product. Teams often hope that the energy and enthusiasm they display while giving the elevator pitch is infectious enough for their audience to engage. To my mind, that approach is inefficient and can be ineffective. I also feel that the strategy does not respect the perspective of the listener – there's a good chance that your listener is not a "willing participant" (i.e. they're already overworked and there you are trying to pitch something to them – even the apocryphal story of the lift journey has the CTO "trapped" in the lift with you).

Get to the conversation bit already!

A real conversation between two people only really happens when both people are listening. When working with elevator pitches, it's not certain whether or not your target will actually be interested in listening. While that's out of your hands, the amount of energy and effort they have to expend to begin listening to you is partially under your control. By lowering the barrier to entry, you have a chance of making it much easier for your target to engage with you. It's what makes snake oil salesmen so effective – they understand what their victims want to hear, and know how to deliver that message in a way that's captivating specifically to them.

By making sure your content and your expressions use the language and semantics of the person you're trying to reach, you drastically reduce the cognitive load that person bears in order to understand what you're saying. Otherwise they'd have to translate what they're hearing into something they can understand, which can be hard work – and if they're not already invested in you, it's quite easy for them to avoid the effort and delegate the task to one of their direct reports, which essentially defeats the point of having an elevator pitch in the first place. By putting in the additional effort to make it easy for your target to consume your message, you also demonstrate a degree of respect for your target's time.

A useful effect of this degree of tailoring is that aspects that may be unimportant to you gain prominence in your message if they're important to your target. Most organisations have very fragmented views on what is important, so a single unifying elevator pitch can be very hard to create and may not have the impact you desire. By tailoring, you significantly increase the chances of making a meaningful connection with your audience, one person at a time.

Use Personas

A good way to start this tailoring is for you and your teams to create personas representing the people you wish to connect with, and then create tailored elevator pitches for each of these personas. Empathy maps are especially helpful in this regard.

It’s possible (if the personas are different enough) that different personas may prefer different communication channels or media. Some might prefer an informal conceptual discussion over a coffee, while others might like to see more tangible aspects as part of a guided demo. By creating models of your potential audience, you greatly increase the relevance of your content to them.

A meeting with a nervous CIO

Background

I very recently had an interview for a broad coaching role, and I wasn't happy with the answer I gave to one of the role-playing scenarios. This post is me getting my jumbled-up thoughts under control and structured in a more coherent form, so that if the scenario happens again, I'll hopefully be much clearer.

The Question

A discussion with the CIO of an organisation. They're trying to transform their organisation (7,000 people). Essentially, their strategy distils down into three conceptual stages: "train everyone" – "change the way of working" – "see how it goes and evolve". The exam question was "I'm nervous, am I missing something?".

My refined Answer

Firstly, I'll define your organisation as a boundary containing two fundamentally different kinds of processes:

  1. Organisational processes. These processes collectively define how your organisation behaves. Transformational change only occurs when these processes evolve. The key ones are:
    1. interaction processes (communication)
    2. visioning processes (purpose)
    3. motivating processes (alignment)
    4. learning processes.
  2. Operational processes. These processes collectively define what your organisation does and, simplistically, all other processes belong here. Continuous and incremental improvements happen when these processes evolve.

Agile and Training

Next, let's look at "agile". There are two main parts that I want to cover: the mechanical elements such as practices and techniques – the "how to do agile" – and the behavioural elements such as mindset, principles and values – the "how to be agile".

Training can provide awareness and skill: skill in the mechanical elements, and awareness of the behavioural aspects. Learning and improving skills can be done in a training context. However, changing established behavioural patterns is an internal struggle that takes time and requires support.

This means that your training-based transformation strategy can potentially deliver incremental improvements (Scott Ambler, of Disciplined Agile Delivery, has been running a long-standing agile survey that has helped him determine that adopting the mechanical aspects of agile methods can realistically yield a 6-10% improvement in overall effectiveness these days). However, this strategy is unlikely to deliver transformational improvements.

From training to coaching

In order to have a lasting effect on the organisational processes, additional leadership coaching should be employed. As a leader, you have to set the example that you wish your organisation to follow, as your behaviour patterns will be emulated through your direct reports into your organisation. Like it or not, you are one of the coaches that influences your organisation's leadership community.

The Servant-Leadership model has a natural fit with an agile culture and elements can be incorporated by your leaders into their leadership styles. The greater the adoption, the easier it is for their part of the organisation to evolve into a more agile culture.

Evolving your management

Any changes to your organisation’s leadership models will almost certainly require changes to your management methods and measurements.

Note: This might turn into another post.

Things I read to help me articulate this

  1. https://en.wikipedia.org/wiki/Chester_Barnard
  2. https://en.wikipedia.org/wiki/The_Functions_of_the_Executive
  3. https://en.wikipedia.org/wiki/Mary_Parker_Follett
  4. https://cvdl.ben.edu/blog/leadership_theories_part1/
  5. https://cvdl.ben.edu/blog/leadership_theories_part2/
  6. https://clearthinking.co/a-simple-model-of-culture-change/

Agile Coaching – Building my mental model of a person

Introduction

I’ve been thinking about how to remain effective as an Agile Coach while not being co-located with the entity (person / team / project / programme) that wants to be coached.

While a long-term viable solution might be for me to rebuild my approach from the ground up as a "remote native" thing, I reckon my MVP is to patch up my existing process (see "The anatomy of my typical coaching engagement") so that it is still fed with as rich a dataset as possible.

Mental model of a Person

Mindmap of the categories of information I use to build a mental model of a person
This is the mental model I generally try to build of each person I work with as an Agile Coach.
  1. What they do
    • things like their role, career history, future plans, what’s their speciality in the team etc.
  2. How they think
    • things like how they build their mental models, how they make decisions, how they prioritise etc. I also look to understand how they see themselves.
  3. Main character traits
    • things like whether they’re an extrovert, are they motivated by fame/recognition/peer acceptance, how they respond to stress and challenges etc.

I'm also looking to distil all this into some insights into what they feel is important, what they see as urgent and what they think they must avoid.
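If it helps to see the shape of the model, here's how it could be sketched as a data structure (the field names are my shorthand for the mindmap branches, and the example comments are hypothetical – it's a thinking aid, not a formal instrument):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PersonModel:
    # What they do
    role: str = ""
    career_history: List[str] = field(default_factory=list)
    future_ambitions: List[str] = field(default_factory=list)
    team_speciality: str = ""

    # How they think
    decision_making: str = ""   # e.g. data-driven, consensus-seeking, intuitive
    prioritisation: str = ""
    self_image: str = ""

    # Main character traits
    traits: List[str] = field(default_factory=list)  # e.g. introvert, recognition-driven

    # The distilled insights I'm really after
    feels_important: List[str] = field(default_factory=list)
    sees_as_urgent: List[str] = field(default_factory=list)
    must_avoid: List[str] = field(default_factory=list)
```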

Benefit I get from using this model

The core benefit I get from populating this model of my coaching client is that it gives me a far richer relationship with my client, which typically translates into better / more accurate communication. Here are some examples.

  1. By learning about the "What They Do" tree, I'm able to work out whether I can mentor them, or who would be a good person to mentor them and help them progress from Present to Future Ambitions.
  2. If I can combine aspects of "What They Think Is Important" with some of their Character Traits (e.g. Individualism or Collectivism), and that combination is either something I align with or can mimic sufficiently to communicate, I can begin to build rapport with them. Rapport is the first step towards trust.
  3. If I understand “How They Make Decisions”, it makes it easier for me to structure a compelling argument, thereby increasing my levels of persuasion.
  4. If I understand how they’re motivated, that can also support my ability to structure a compelling argument.
  5. If I understand how they manage their risks, and what they’re trying to avoid, then not only does it help me structure an argument, I can help them think about relevant risk mitigation strategies.
  6. If I understand “How They Make Decisions”, and can describe it to them in a way that makes implicit processes more explicit, I have a chance to help them learn about new thinking models and they can devise new problem solving strategies.

Building this model

Co-located

This is the easiest scenario. Much of this model can be built by forming hypotheses informed by the actions and behaviours that I observe. The person is also accessible, so I can discuss my models and predictions and see how close I can get to what they think. The main challenge is the amount of time it can take before some of the deeper elements can be inferred / estimated, if for no other reason than that it requires a great deal of trust and honesty between the person and me, and that trust takes time to build.

Remote

Conceptually, my starting point was to find techniques for collecting the same raw data using the alternate channels available – voice and video calling, and email / written form. As everyone else was in the same boat, they would also need to work out digital equivalents for their normal team ceremonies, so I could piggyback on and observe those easily enough.

I quickly hit a problem. While this approach was fine and would yield accurate data for the known unknowns (see this post for some context) such as the “What they do” sub-tree, it is easily corrupted into opinion (e.g. “How they think”) or even a duplicitous facade, portraying a view that they believe is socially acceptable even though it’s not the reality (e.g. “Character traits”).

I’m currently exploring what I can infer reliably from more open “essay writing” styled requests. For example:

  1. Please describe your recollection of the last completed iteration, highlighting the events that you personally saw as significant for your role in the team.
  2. Please describe the events during the iteration that led you to believe for the first time that your iteration goal may be at risk unless you and your team were able to get on the front foot.
  3. Please describe the events that led your team and product owner to collectively agree what stories were going to be brought into the current iteration.

I’m exploring the different perspectives each team member has on the same shared experience. My hypothesis is that it’ll give me a few insights into how various team members think (relative to each other, as it were) and maybe a few nuggets about some of the more prominent personality traits that I need to pay attention to. I’m also hoping that the different language styles would be useful, but it seems a bit too early to tell.

The anatomy of my typical coaching engagement

Context

COVID-19 has forced the issue. All my teams are now distributed because everyone's working from home. As a coach, this has given me a few things to think about – mostly how I need to rethink many of my coaching strategies, as they largely take advantage of the rich information flow that's tacitly a part of face-to-face work.

As part of my attempts at designing new coaching strategies that could be used with entirely remote teams, I’ve been going back to basics. This post is about the basic structure that my coaching engagements end up being designed around.

Structure

Broadly speaking, I’ve found that all my coaching boils down to one of two categories of topics – known unknowns and unknown unknowns. My coaching engagements usually have two strands running, one for each of these categories. They have different cadences and topics from the “unknown unknown strand” can make their way over into the “known unknown strand”. I’ve not yet seen a topic move in the other direction. Please read The Structure of a Coaching Intervention for a more accurate view of the content I cover when I coach.

Known unknowns are relatively straightforward. This is closer to training, or facilitated learning. Both coachee and I are clear about what knowledge is needed, and we can usually rely on accurate insights from the coachee about their levels of expertise and whether or not it is increasing. I usually end up producing a catalogue of concepts and lesson topics, and my coachee orders them in a list. I suggest changes to the ordering only if I feel there’s a more consumable flow of content (or some pre-requisites). This also has the handy side effect of demonstrating how a delivery team and a product owner can work together to collectively order a backlog.

Unknown unknowns are much harder (especially if the gaps are deep ones such as culture or values and beliefs). Some unknown unknowns can be converted into known unknowns through identification (as there's a degree of unconscious knowledge in the coachee). Maintaining a general principle of doing no harm, I usually end up doing something along these lines:

  1. Observe the natural state
  2. Form a hypothesis
  3. Run an experiment to test
  4. If proven, identify “treatment” options
  5. At some point, bring the coachee up to speed with what's happened
  6. Together with the coachee, agree the treatment option
  7. Design the treatment plan
  8. Implement, measure effect, choose to keep or roll back
  9. Restart the loop.

Step 1 cannot be rushed, otherwise my biases play too big a part. In step 8, I've only ever had to roll back once in my career, otherwise I'd never have even considered it an option.

Remote Execution

For the known unknown category of topics, being remote poses no fundamental problems, mostly logistical challenges and a greater noise-to-signal ratio in the communication between my coachee and I. It also adds delay to the building of rapport, but that is less crucial when both parties know exactly what needs to be learned, as the coachee can also attempt to infer credibility by their perception of the quality of my coaching materials (legitimately or not is beside the point).

Being remote adds a lot of complexity to my structure – specifically to the first step (though none of the steps escapes unaffected). I'll write up how I approach this later and link to it.