Some characteristics of a great team

Note: Credit for most of this goes to @susannehusebo

  • They start their daily stand ups on time
  • They tend to laugh a lot and have fun
  • Everybody in the team gets to express themselves
  • They correct and edit each other when they go off track
  • They try out new things with appetite, but are quite willing to admit when those things don’t succeed
  • They often don’t care very much if it looks like they’re working hard
  • They encourage each other to leave on time
  • They talk about “we built” or “we failed”, rather than “I built” or “I failed”
  • They have lunch together, or some other time within work hours where they talk about other things than work
  • They welcome more junior members of the team and enjoy mentoring them
  • They have inside jokes
  • They share responsibility for communicating with outside stakeholders
  • They don’t agree on everything
  • They debate short term benefits against long term strategy, and reach compromise
  • They question each other on topics like accessibility and inclusivity in design and development
  • When they finish a task they check if they can help someone else finish theirs
  • There’s room in good teams for extroverts and introverts. And those in between.
  • Team members are aware of their needs and communicate those to others
  • I’ve seen some great teams with seriously stubborn people in them. They can be great when the team needs reminding of why they came up with a certain rule: specifically, so the team wouldn’t compromise when they’re in a hurry
  • Good teams often fight for independence to make their own decisions
  • They ask “why” a lot
  • They discuss the users of their products every day, and user experience is viewed as everyone’s responsibility
  • If they are remote, they try new ways to make everyone equal, even if it means compromising the experience of those people that are in a shared space
  • They are not dogmatic
  • Testers and designers are included in discussions of estimations and backlog refining
  • They respect agreed decision making structures, but argue their points
  • The people are not too similar to one another. They think about problems from different angles
  • If someone on the team is ill, the others figure out how to get by without that person
  • They have quiet time. In whatever amount is valuable to them
  • They don’t interrupt each other. They take equal turns in speaking
  • They get each other tea/coffee/water
  • They have one-to-one chats with each other to discuss points of agreement or disagreement
  • They know each other’s personal and professional goals and aspirations, and try to support them where possible
  • They are not all equally skilled at everything. But they try to work on things where they can learn
  • When people pair on a task, the less experienced person is usually the “driver”
  • Senior team members ask for advice and feedback from more junior team members
  • They don’t have to give positive feedback every time they give negative feedback
  • They thank each other for favours, for tea, for good ideas, for bad ideas, for observations…
  • They use the products they’re building
  • When someone has an idea, the other team members build on it and add to it
  • They accept that people have “off” days
  • They are generally resilient when it comes to change, including change of direction, goals and vision
  • They can work quickly, but it’s not always crunch time. There’s a sustainable cadence to work
  • They regularly talk about how they work together, and try to improve on that
  • Team members generally know what every other team member is working on, and what kind of issues they’re having
  • They encourage each other to share work early, rather than wait for it to be “perfect”
  • They have some shared values or principles that guide their interactions
  • They try new tools and methods quite easily
  • They might defend each other to people outside the team
  • Before they ask for feedback or reviews from other team members, they read through their own code/work
  • They make time to automate tasks that aren’t valuable to do by hand, and prioritise that work
  • They talk about technical debt, and teach stakeholders about it
  • The work they do feels important
  • They argue
  • They talk about how well they’re doing compared to expectations. If the estimates turn out to be off, that’s not a disaster
  • They take turns dealing with boring or time consuming tasks
  • They are interested in each other personally
  • They talk about problems without talking about people’s characters; rather, they focus on the work
  • In meetings they put their phones down or close their laptop lids when they’re listening
  • They have a say in who joins the team
  • They have shortcuts for common communication (like signals for when the conversation is derailing, or when they’re losing focus)
  • They don’t mistake fun for a lack of discipline
  • They consider different communication styles. Meetings are not just held so the loudest speakers are heard the most
  • They care that other team members and stakeholders understand the work they’re doing
  • Documentation is updated whenever errors are spotted, by the person that spots the error
  • Team members feel a shared sense of pride and ownership of their work
  • It’s not ok to notice a problem, and not do something about it, even if it’s not in an individual’s immediate area of responsibility
  • Developers participate in sketching sessions, designers understand how git works
  • Team members reach out to their personal networks to ask for help for other team members
  • Everybody tests
  • Team members let each other try things out even if they think it will fail (sometimes, if constructive)
  • They notice when other team members seem worried or down. They ask about it
  • Everyone knows that it’s ok to be wrong, as the rest of the team have your back
  • Everybody participates in user research
  • Everybody’s involved in pairing or non-solo work
  • They have a shared history and sense of purpose
  • They argue like siblings. Intensely, but when it’s over they’re still teammates
  • There’s an awful lot of trust sloshing around the team. All of that trust has been earned by people doing what they say and looking out for each other
  • Work is criticised. People aren’t
  • No-one is “keeping score”
  • “I don’t know” is not a dirty phrase. “Let’s find out” is an even better one

Team Stability vs Personal Freedom

Background

I’ve been thinking about a project team characteristic that I’d never worked with before. This project had a dedicated staffing stream of work, which supported an ongoing employee rotation plan as part of normal project execution. Any given team member would have a 9 month stint, after which they’d rotate off the account and go do something else. Given the size of the project team and some contractual constraints, that amounted to about 3-4 people a month, or nearly 1 person a week.

 

The Bad News

A basic awareness of group dynamics as considered in Sociology, Psychology and whatever other -ology you, the reader, are fond of, leads to the conclusion that highly effective teams take a while to emerge from a group of people. These patterns have been articulated, perhaps most famously by Bruce Tuckman (i.e. Forming-Storming-Norming-Performing), but other variations and extensions have been suggested.

[Image: Tuckman’s Forming-Storming-Norming-Performing model]

Of these stages, Storming is the most difficult, and many groups never make it past it. Teams that do make it through have invested the time it takes for individuals in a group to establish some form of social structure, along with compatible working patterns, relationships and so on. These individuals have learnt how to build on their collective strengths, as well as offset the weaknesses that each of them, as individuals, brings.

Team Effectiveness

Each swap made within a team reduces the stability of that team’s position in the team maturity model, and a significant enough disturbance can move a team from a relatively comfortable “performing” state down a notch to “norming”, or potentially even worse, to a rather torrid “storming” state. The team would then need to invest further to rebuild into a high-performing team. The practical upshot of this regular rotation is that the project team will always be struggling to maintain its highest potential performance.

Domain Knowledge

A second, more insidious side effect, is that the team as a whole may only really have a relatively shallow level of knowledge and expertise in the problem domain they’re solving. Picking up the basics of any new context is easy enough. However, to really build up a nuanced understanding of a domain, a person requires a decent amount of time to carefully mull over a stack of facts, figures and data points. I’d argue that it takes at least 3-6 months of focussed learning before enough is known about a new domain for significant decisions to be made safely. With a 9 month term, that suggests that really, a person only has, at most, 3-6 months of meaningful work before it’s time to move on.

A Sense of Purpose and Belonging

One of the joys of working in a highly motivated team with a strong sense of purpose, is the feeling of accomplishment when a milestone is reached (as long as that milestone is meaningful to the team). With team membership always having a tangible end date, it’s much harder to really get a sense of belonging to that team, and (deeply) feeling that sense of purpose. There’s the danger that an individual feels like they’re a replaceable cog in a machine – largely because that’s what will happen as everyone gets replaced fairly regularly.

 

The Good News

It’s not all bad news. On the plus side, it does put the individual’s “perspective” at the centre of the project’s thinking. As an individual is only exposed to a problem domain for a finite period, there is a regularly changing supply of new things for them to think about – new problem domains, new implementation technologies. This can help keep the individual fresh, enthusiastic, and in an explicit learning mode for much more time than otherwise.

Additionally, the downsides are potentially easier to deal with. If an individual isn’t compatible with the project and its demands, it’s easier for them to just “grin and bear it” for 9 months, as there’s light at the end of the tunnel and they don’t have to do anything drastic (e.g. resign). In the worst case, where an earlier exit is required, that’s also likely to be a lot easier to deal with.

Another advantage is that it can make something that isn’t sustainable long term (e.g. remoteness of location) more sustainably managed, as it distributes the workload across a greater number of people than a fixed team would.

An interesting potential positive, is that this forces a few XP principles into the team (if they’re not applying them already):

KISS: If nothing else, the team have to reduce the complexity of what is being built, as there will be regular (and ongoing) instances of a new person having to learn about what is being built. There’s relatively little that can be done to reduce the inherent complexity of the problem domain, but navigation aids (e.g. context maps) will help and are recognisably important investments.

Automated testing and test coverage: New starters generally go through an initial period of higher-than-normal anxiety, as that’s when they’re operating almost entirely in the dark. As this is a regular occurrence, it’s unfeasible for a project team to “cheat” and ride it out (a possible approach if you’ve got a permanent team). Automated tests and high coverage are a safety net, making it much less scary to work on the code. Again, that’s explicitly recognised as valuable.

Collective ownership: As no one person will be around permanently, it’s much harder for territories to form beyond team-level boundaries (in a multi-team project), as the only thing “stable enough” to own anything (along with the typical behaviours of defending it, curating it, growing it etc.) is a team. However, this is a double-edged sword: in the face of a significant enough problem, some deflection might occur (e.g. “oh, that’s not my/our fault. This was <Person X, now left>, they’re to blame”). This “no-responsibility-anywhere” problem is a normal risk of collective ownership; it’s just more likely in this model than in stable team environments, as it’s far easier to blame someone who isn’t around.

 

Conclusion

 

I’m treating this project execution model as just another lever to be adjusted as necessary. My current focus is on optimising for a sustainable and resilient project team – one with a high enough morale to cope with the difficult circumstances that the project needs to execute within, with the employee characteristics that are available to me. In this instance, I’ve got an over-supply of graduate and apprentice developers, but a scarcity of senior and lead developers. I also have a project location that’s a little too far away from home base for comfort, and a mandated work pattern that throws the work/life balance off kilter.

My key measures are employee engagement & feedback, general mood measures and team retrospectives (shareable content only). Additional measures include sickness days (as a potential visualisation of a deeper problem), as well as the levels of enthusiasm and participation in non-core work activities – e.g. community events etc.

My key feedback mechanisms are regular 1-2-1s (I’m trying to catch up with at least one project team member every day), open Q&A once a week and more formalised “account update” sessions once a quarter.

Power Models and Method Affinity

a.k.a. why are some people SAFe oriented, others DAD biased, yet others LeSS enthusiasts, and not forgetting the DSDM gang, the Nexus collective…and I’m running out of terms.

I’m sure there are a whole host of reasons, but my time observing people, and knowing a bit about their past and what they’re like, has led me to form an interesting (to me, anyway) hypothesis. It’s to do with power.

A Brief History of Power

Without digging too far into Taylorism or others, there are a handful of basic power models that can theoretically exist in an organisation (heavily simplified for illustrative purposes, as real life is never this “simple”):

  • Power in the hierarchy, the higher up you go, the more power you have
  • Power in the middle management – sometimes disparagingly called the permafrost
  • Power on the edges – people on the ground in front of customers

These can manifest in a project context in a few ways (note: for want of better options, I’m labelling these categories with overloaded terms):

  • Power lies with the “planners” – e.g. project managers, PMO
  • Power lies with the “architects” – EAs, Solution Architects
  • Power lies with the “developers” – developers, testers, BAs

Human nature is such that we flock towards things that are like us. Planners are more likely to favour other planners, and work using systems where the balance of power is in their direction. The same sort of thing is true for the Architects and the Developers.

Methods

And now we get to the Methods part of this blog. I’m going to focus on agile methods, and agile scaling frameworks. And in particular, how these methods and frameworks are perceived, at least initially. That bit is key. Most of this “natural affinity” stuff is emotional in nature, and not fundamentally driven by rational thinking (hint: there’s a lot of religion in this area). As there are lots of them out there, I’ll just pick the three major ones (based entirely on how often clients talk to me about agile at scale, and nothing remotely scientific).

SAFe

The overall guidance is dominated by the navigable map. It has several terms that will be comforting and reassuring to hierarchical type organisations with traditional reporting lines and financial controls – Programme / Portfolio Management, Enterprise Architect, as well as some guidance on mixing waterfall and agile deliveries. This looks to be solidly planted in the middle of the “Planners” camp.

Based on the hypothesis, likely proponents and allies are to be found within PMO, Project Governance, Configuration Managers, hierarchical organisations with a centralised power model, and organisations that perceive themselves to be traditional with a rich history / heritage.

DAD

The first thing that strikes you when you first look at DAD is that it’s rammed to the rafters with choices. It has a risk-value lifecycle (but you can choose others) and many options on how to achieve pretty much any delivery-related goal you may have – from big-ticket items, such as how much architectural insight you need when considering the future, to very focussed options, like the right level of modelling to use. And that’s just part of the “Identify Initial Technical Strategy” goal. This resonates well with those with an architectural bias – architecture is mostly about decision making and communication.

Likely proponents and allies are to be found in technical leadership – Architects, DBAs, and organisations with a strong technical bias.

LeSS

The navigation map for LeSS, in contrast to the previous two, looks relatively uncluttered. There are large concepts identified (such as Systems Thinking and Adoption), but these are all located around the periphery of the diagram. Slap bang in the middle is the engine, and that’s feature teams. This puts the Developer at the centre of the universe (as it were).

Likely proponents and allies are to be found within teams and individuals using Scrum and XP on a regular / daily basis, and organisations that “have a small company vibe”, which may be startups on a growth spurt, or organisations in a highly fluid environment with significant localised decision making.

The Goldilocks Solution

As the heading suggests, I think the right mix for any given organisation is somewhere in the middle. Power isn’t solely contained within a single area (though granted, in many cases, the vast majority of the power is indeed concentrated that way), and any scaled agile adoption strategy will need to understand and accommodate that to increase the chances of tangible benefits being felt by the organisation.

 

Feedback

As this is just a hypothesis I’ve got, I’d love to hear what you think, whether you’ve observed things that support this theory, disprove it entirely, or somewhere in between.

Learning about estimation when you don’t care about estimates

This post follows on from the previous one, linked here – Help! How do we start estimating?

The problem with running a training course on estimation, is that there’s a danger that poor assumptions are made about the things being estimated – i.e. you could end up “assuming the problem away”. If that happens, the exercise becomes too sanitised and relatively meaningless other than in a theoretical and abstract sense. And that’s often hard to relate to.

Using analogies – e.g. the “throwing the cat game” [http://tastycupcakes.org/2016/05/throw-the-cat-and-other-objects/] – can help. They show you the dangers of assumptions etc., but in all the cases I’ve seen, the variability is low-ish. You’re doing the same thing to all items.

But what if you’re doing potentially fundamentally different things on different tickets? For example, if writing a validation method with some clever logic is a “5”, what is “patching a Docker template”? With the industry using the term “DevOps” like there’s no tomorrow, the variability of work a single team will perform will inevitably rise.

We, as humans, are generally far better at comparison-based estimation than at comparing to an absolute measure (especially an abstract one like distance or time). We’re also very good at spotting patterns. Or, if real patterns don’t exist, inventing them (one of the reasons the saying “correlation does not imply causation” exists – we need to be reminded!).

That innate ability to see patterns, regardless of whether or not they exist, can lead people to try and find a relationship or common attribute across entirely disparate work items. Some teams make a virtue of this, using terms like “complexity” or “relative effort”, which make reasonable sense – irrespective of work, it can be categorised as complex/simple or quick/time consuming. That allows a single team to use a single “scale” to estimate everything that they could do.

That single scale is one of the reasons that things can get a bit confusing. With radically different types of work, natural clusters may form. Infrastructure management tasks may hover around the 1-5 mark, development work might be 2-8, something architecturally significant might be 5-13, and spikes might be as high as a 20(*).

If that’s happening, one of a few things could be the cause:

  • It could be true
  • There could be the hidden view that “development stuff” is harder than “infrastructure management” (or something along those lines) and that bias is gaming the numbers
  • Anything else I’ve not thought of right now – depends on the room, dynamics etc.

Relative estimation works well within a logical domain – there are sufficient overlaps and related attributes for a meaningful comparison. A comparison across fundamentally different domains makes far less sense. There’s even an old simile on the subject – “as different as chalk and cheese”.

To combat this, I usually recommend having a lot more than just a single reference story. I suggest building a catalogue of items from as many of the affected domains as possible, making sure we’ve got examples of a few of the magic numbers in each domain. When attempting to size an item, the first question to answer is which catalogue item is the closest match; then go up or down however many sizes is appropriate from there. It can take an awful lot of stress and confusion out of estimation, as you’re no longer trying to shoehorn a square peg into a round hole.
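As a toy sketch of that catalogue idea (the item names, domains and sizes here are all invented for illustration, and the scale is the illustrative one from the footnote):

```python
# Toy sketch of catalogue-based relative estimation.
# The catalogue items, domains and sizes below are invented examples.

SCALE = [0, 0.5, 1, 2, 3, 5, 8, 13, 20, 40, 100]

CATALOGUE = {
    "infrastructure": {
        "patch a Docker template": 2,
        "rebuild the CI agents": 5,
    },
    "development": {
        "add a validation method with some clever logic": 5,
        "new API endpoint with authentication": 8,
    },
}

def estimate(domain: str, closest_match: str, steps: int = 0) -> float:
    """Size a new item by finding the closest catalogue match in its own
    domain, then moving up or down the scale by `steps` notches."""
    base = CATALOGUE[domain][closest_match]
    idx = SCALE.index(base) + steps
    idx = max(0, min(idx, len(SCALE) - 1))  # clamp to the ends of the scale
    return SCALE[idx]

# "A bit bigger than patching a Docker template" -> one notch up from 2
print(estimate("infrastructure", "patch a Docker template", steps=1))  # 3
```

The point isn’t the arithmetic, it’s that each comparison stays within a domain – nobody has to weigh Docker templates against validation logic directly.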

There is a price to pay for this additional freedom: your velocity figures become less relevant, as you can’t compare sprint against sprint as simply as before, because the “mix of work” may be changing. Your burnup charts may still look like they work, but scope changes are harder to visualise – some of your “scope changes” are likely to be technical debt you’ve discovered, as you could be making platform changes with no change in business vision or scope. It also takes a lot longer to create a useful catalogue, compared to “just picking an average-looking story and calling it a 5”.
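To make that velocity point concrete, here’s a small illustration with made-up sprint data: two sprints can complete the same headline number of points while the mix of work shifts completely, which is exactly why sprint-against-sprint comparisons get shaky.

```python
# Made-up sprint data illustrating why a single velocity figure can hide
# a changing mix of work across domains.

from collections import defaultdict

def velocity_by_domain(completed):
    """Sum completed points per domain for one sprint."""
    totals = defaultdict(int)
    for domain, points in completed:
        totals[domain] += points
    return dict(totals)

sprint_1 = [("development", 8), ("development", 5), ("infrastructure", 2)]
sprint_2 = [("development", 5), ("infrastructure", 5), ("infrastructure", 5)]

# Headline velocity is identical...
print(sum(p for _, p in sprint_1), sum(p for _, p in sprint_2))  # 15 15
# ...but the mix is very different
print(velocity_by_domain(sprint_1))  # {'development': 13, 'infrastructure': 2}
print(velocity_by_domain(sprint_2))  # {'development': 5, 'infrastructure': 10}
```

Splitting the figures by domain is one (hypothetical) way to keep the comparison honest, at the cost of a less tidy headline number.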

Teams that go through a learning process like this usually end up realising that there isn’t a simple textbook answer, and that their only viable option is to stay alert and keep an open mind.

 

(*) All using the scale 0,0.5,1,2,3,5,8,13,20,40,100 for illustrative purposes only

Help! How do we start estimating?

…came the plea from the team.

Context: Here’s a team that’s primarily a support oriented team. As in, their stated purpose is “to keep the lights on while delivering improvements”. Recent history (as in several months) has had them in a firefighting mode where they just “did stuff”. Planning was iffy at best and they had lots of difficulty in gauging what was reasonable in a two-week sprint.

Hence the ask.

The first question to ask (like everywhere else IMHO) is “Why”. Why do you want to estimate?

Most of the answers I hear generally fall into a few areas:

  • We can use the points to work out our velocity
  • We can build information radiators with burndown charts etc.
  • We can tell if a story will fit into the sprint
  • We can have more credible estimates using relative sizing with points than if we had absolute estimates using hours or days
  • We can plan our releases – especially dates

Very occasionally, someone pipes up with something like:

  • We can work out if we’ve all understood the story well enough to deliver it in a sprint

 

For me, that last one’s easily the best reason. When the team comes up with a “magic number” estimate during the planning game, each person’s number represents all of the assumptions they’re making about the work. During the process of playing the planning game, the most extreme sets of assumptions are surfaced, giving the team as a whole the chance to learn more about the work from each other and gain some consensus. That shared understanding is the real prize. The number is just a side effect.

There’s a growing interest in “No Estimates” (#noestimates on Twitter). A highly simplified explanation is that estimation is a wasteful event, and you’re better off breaking your work down into small pieces and just working on them one at a time, while maintaining a smooth flow of work. For me, the most interesting thing about this movement is not the “inflammatory” stance on estimation, but what a team gets by breaking the work down into small pieces and working on them one at a time. They get work that naturally has very few assumptions inherent in it (smaller work = fewer places for assumptions to hide). They also learn very quickly about the different assumptions everyone in the team has about the work, because they’re working on one thing at a time. For me, that’s a similar sort of outcome to using estimation only as a means of flushing out assumptions and gaining a shared understanding, and frankly ignoring the magic number side effect. Breaking the work up into smaller pieces is just generally a good idea.

 

So, with all of that “intro stuff” out of the way, just how do we help a team learn how to get the benefits of estimation? Like most other attempts at teaching/learning, we need to find something that the team actually cares about, and help them learn how to solve the problem / make things better. In this case, we had to get to the root of what the team wanted to get out of an estimation-based process. That’s assuming that an estimation based process was in fact the right thing to adopt. We’d have to examine that too.

It looked like there were two main things that the team needed. The management elements of the team wanted to keep track of progress, detect when things were slowing down, and report upwards.

The rest of the team didn’t seem to care about that. They just wanted to make sure that they weren’t being overloaded with unmanageable levels of work. The reactive nature of a lot of their work was such that there was very little that could be predicted about a sizeable portion of their workload. They needed a mechanism that would let them work on enough planned work, while keeping sufficient capacity to cope with the unexpected.

 

I’ll write up how we ran the estimation training and practice/guided sessions later on.

The “Good Cop, Bad Cop” relationship with Tools and Automation

Context: This all started with a request to make the tool a team was using (JIRA) prevent a developer from moving a task that was “in progress” back to “not started”. The manager’s rationale was to forcibly highlight when the team started to work on too much stuff. The developer’s rationale was to fix an incorrectly started task.

It got me thinking about how we open up this discussion and make it inclusive. Some questions that seemed to work were:

  • Why do you need to be told how to “be good”?
  • When do you need the tool to “pick up after you”?
  • When do you need to be reminded about what your way of working is?
  • What mistakes do you need the tool to forgive or punish (or both)?
  • When do you need the tool to prevent all mistakes?
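Those questions about forgiving or punishing mistakes can be sketched as a toy workflow model. This is purely illustrative – it has nothing to do with JIRA’s actual workflow configuration, and all the names are invented:

```python
# Toy model of the "forgive or punish" spectrum for workflow transitions.
# Entirely hypothetical -- not JIRA's API or configuration model.

ORDER = ["not started", "in progress", "done"]

def move(ticket, new_status, policy="allow"):
    """Apply a status change under one of three policies:
    'allow' - the tool forgives everything (humans do the policing),
    'warn'  - backward moves succeed but are flagged for the team,
    'block' - backward moves are rejected by the tool."""
    backward = ORDER.index(new_status) < ORDER.index(ticket["status"])
    if backward and policy == "block":
        raise ValueError("backward transitions are disabled")
    if backward and policy == "warn":
        ticket["flags"].append(f"moved back to {new_status}")
    ticket["status"] = new_status
    return ticket

ticket = {"status": "in progress", "flags": []}
move(ticket, "not started", policy="warn")
print(ticket["flags"])  # ['moved back to not started']
```

The interesting question for a team is which of these policies to encode in the tool, and which to leave to the humans.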

All of this stuff is enabled by automation. So just how much automation do you need, and what is it to be used for?

  • Automate or mechanise the “boring parts” of your job?
  • Validation for error prevention or error detection?
  • Automate all of the work variants? Or just the common ones?

 

As I like analogies, I thought I’d explore this using one, in this case, one of my favourite topics – “food”. For the purposes of this blog, assume time is magic and doesn’t cause any problems.

Making Soup

I want to eat something. Perhaps a bowl of soup.

[Image: a bowl of soup – just in case you don’t know what it looks like]

 

The full process starts in the ground. I could start there with a fully manual solution:

[Image: growing crops – The Good Life]

Grow the raw ingredients from scratch. Prepare them as I want, cook them and voila, a delicious meal.

  • + complete control of ingredients
  • – slowest method
  • – I could grow the wrong thing

 

I could “automate” that growing process, by buying the raw ingredients from a shop:

[Image: raw ingredients – Carrot, Celery, Very-Strange-Onion]
  • + pretty good control
  • – may not have what I want in stock
  • – might not be able to manage the quality “precisely”

 

I could increase my levels of automation and also automate the preparation work by buying the pre-prepared stuff:

[Image: pre-prepared vegetables – definitely carrot sticks]
  • – only partial control
  • – may not have what I want in stock
  • – may not be prepared as I need – e.g. carrot sticks when I need grated carrot

 

Or I could be extreme and also automate the cooking process:

[Image: a tin of soup]

  • + fast
  • – limited control
  • – may not have what I want – e.g. I want “carrot and celery”, but all I can buy is “carrot and coriander”
  • – mass production, so probably very generic

 

Each of these levels of automation is accompanied by varying degrees of “policing”. If I’m a danger to myself when trying to chop vegetables with a knife, automating the preparation work is probably a good idea. But with that come constraints – I can only eat meals that can be made from pre-prepared ingredients.

 

Project Team Outcome

In the end, we left it such that you could move tickets back, and the policing aspect would be done by the humans in the team. Hopefully they chose that because it was the right thing to do, and not because they wanted my food analogy to stop…