Research-Based Development

The following diagrams show how one of my projects applied User Research to drive the software development process. There were two distinct modes of operation – before we went live with Release 1, and afterwards.

Before Release 1

Research was lab-based and essentially theoretical: we would do the best we could to recruit actual and potential customers and find out what they would do when faced with our system, and those test subjects would do the best they could to pretend that they needed our services.

Here’s a simplified workflow:

[diagram: hypothesis_pre_r1]

We would start off with a hypothesis, which we’d refine by thinking it through and working with the business. We would design how the lab sessions would run and what experiments we’d perform, all to maximise the learning we could get from them. We’d build multiple prototypes to test variations on the theme, and we’d conduct several lab sessions. At some point our analysis would show whether or not the concept we’d tested had legs, possibly after a few refinements and retests. If it was proven, we would then tease out what we needed to do to deliver a product increment that capitalised on what we’d learnt, build it and demonstrate it to our stakeholders.

At this point, we’d hope we’d done the right thing. We’d only really find out when we deployed into production.

Production-Based Research

Even after we’d gone live for the first time with Release 1, our reliance on user research remained. We’d still come up with hypotheses and design suitable experiments to prove or disprove them. All that fundamentally changed was how and where we ran our experiments.

[diagram: hypothesis_live]

Instead of conducting experiments with theoretical users and pretend demands as before, we’d employ A/B or multivariate testing: we’d build alternative flows through our system and simply measure what our actual customers did. To gauge demand for a new feature, we’d just put a button on the screen and count the number of times it was clicked (users who clicked it would be shown a “we’re assessing the demand for this feature, your interest has been registered, in the meantime do this” message). Proven hypotheses were now demonstrably proven, because the measured outcomes were better – before, they were still technically theoretical. The cost was that disproved hypotheses needed additional work to remove their code from the live system. We thought that was a cost worth paying.
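To make that concrete, here’s a minimal sketch in Python of how such an experiment might be wired up. Everything in it – the variant names and the assign_variant and register_interest functions – is a hypothetical illustration, not the project’s actual code.

```python
import hashlib

# Hypothetical variant names; the real alternative flows were project-specific.
VARIANTS = ["control", "alternative_flow"]

def assign_variant(user_id: str, experiment: str) -> str:
    """Deterministically bucket a user so they see the same variant on every visit."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return VARIANTS[int(digest, 16) % len(VARIANTS)]

# Demand probe: a counter behind a placeholder button for a feature
# that hasn't been built yet.
demand_clicks = {}

def register_interest(feature: str) -> str:
    """Count a click on the placeholder button and return the holding message."""
    demand_clicks[feature] = demand_clicks.get(feature, 0) + 1
    return ("We're assessing the demand for this feature. "
            "Your interest has been registered; in the meantime, "
            "please use the existing journey.")
```

Hashing the user ID deterministically means a returning customer always lands in the same flow, which keeps the groups stable and the measurements comparable.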

User research is a team sport. Why?

A quick trawl through the internet with this post’s title as the search term throws up lots and lots of results, all asserting that user research is indeed a team sport. However, far fewer go into why that matters. Sure, the “usual stuff” about collaboration leading to better solutions because the problem is being observed through different lenses is certainly true, but that’s really about collaboration, not about user research.

So why then, is user research specifically a team sport?

One thing I’ve observed is that participation in user research sessions tends to make the somewhat abstract notion of a “user” rather more tangible. Painfully so, in cases where your users are vulnerable segments of society. It builds empathy.

Empathy’s a powerful motivator. One common effect is that it instils the desire to “do the right thing”, especially if that means helping someone. An effective team is one that has a shared purpose, and worthwhile purposes (i.e. helping someone) are fantastic motivators. If only half the team is motivated in this way and wants to “do the right thing” while the other half isn’t, then you’ve probably got a problem.

It’s more than the pithy slogan “user research is a team sport”. The whole team needs to feel motivated to “do the right thing”, and user research makes it possible to tell just what that is. That’s just one reason why user research should be a team sport. Surely this one’s enough?

The Secret Life of a Requirement

There is a lot of guidance out on the Internet about how to manage your requirements. The brand-leader vehicle for a requirement is the User Story, which is essentially a placeholder for a conversation between two parties: someone who understands the problem or pain, and someone who can do something about it. This project is no different; we generally use User Stories as our requirements vessel. However, we also use other variations as needs dictate – for example, Technical Stories and Spikes.

This project is being run in a way that’s compliant with the GDS lifecycle – it’s driven by user needs. So, with this in mind, how do our User Stories spring into existence? Especially as user needs can represent very large things (consider Maslow’s work). We look at user needs in order to form hypotheses about what can be done to meet the need or alleviate the pain. These hypotheses need to be tested, and we do that with a mix of lab testing, pop-up testing and specialist user testing (for example, we have a good working relationship with Action For Blind People and regularly test with them).

Hypotheses that have been proven are ready for the next stage: to be analysed and expressed as a set of stories. These stories are usually fairly large (Epic or Theme scale), though there may be a few small, focussed quick-win items that are easily identified. These items go onto a list ordered by timeliness – starting with things that need to be looked at sooner and ending with things that’ll get looked at later. We call this list our Product Roadmap.

[diagram: step_1]

We have a regular process of reviewing and refining our Product Roadmap. Given the size of roadmap items, this refinement runs on the sprint cadence (two weeks right now). We take a roadmap item and, as a team, spend some time trying to understand it, breaking it up into smaller chunks and teasing out the ones we need to prioritise over the remainder, which we then articulate as User Stories. These go onto the Product Backlog; whatever is left of the large Epic/Theme/Feature goes back onto the Roadmap. Keeping the Roadmap separate from the Product Backlog lets us focus either on the more strategic, longer-term vision or on the more immediate “next release” planning.
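As a deliberately naive sketch of that split – with invented item names, fields and stories, and Python standing in for what is really a team activity – it might look something like this:

```python
# Invented roadmap items, ordered by timeliness (lower = look at sooner).
roadmap = [
    {"name": "Apply online", "timeliness": 1},
    {"name": "Track progress", "timeliness": 2},
]
product_backlog = []

def refine(item, high_priority_stories, remainder_name=None):
    """Move the urgent chunks of a roadmap item onto the Product Backlog,
    returning whatever is left of it to the Roadmap."""
    roadmap.remove(item)
    product_backlog.extend(high_priority_stories)
    if remainder_name:
        roadmap.append({"name": remainder_name,
                        "timeliness": item["timeliness"] + 1})
    roadmap.sort(key=lambda i: i["timeliness"])

refine(roadmap[0],
       ["As an applicant, I can start an application",
        "As an applicant, I can save and return later"],
       remainder_name="Apply online (remainder)")
```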

[diagram: step_2]

Once a User Story has been added to the Product Backlog, it becomes ready for the thinking and analysis needed to understand it well enough to be sized and planned into an iteration. We’ve had to articulate a Definition of Ready to capture this, as early on we detected inconsistencies in our understanding of the User Stories, which was leading to poorer estimates and a less predictable flow of work. We’ve called this process the “Three Amigos” (it matches what other projects in the area call this aspect of backlog refinement).
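For flavour, here’s what a Definition of Ready check might look like as a minimal Python sketch. The criteria listed are invented examples; the project’s real criteria were agreed by the team.

```python
# Invented Definition of Ready criteria - the real list was team-agreed.
DEFINITION_OF_READY = [
    "Problem and pain understood via the Three Amigos conversation",
    "Acceptance criteria agreed",
    "Dependencies identified",
    "Understood well enough to estimate",
]

def is_ready(story_checks: dict) -> bool:
    """A story only proceeds to estimation when every criterion holds."""
    return all(story_checks.get(criterion, False)
               for criterion in DEFINITION_OF_READY)
```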

[diagram: step_3]

When a story is ready for estimation, we use the Planning Game to size it. As we have a variety of backlog items – requirement-based stories, technical stories and so on – we use a set of reference points: for example, we have a requirement-based reference story that’s a “5” as well as a DevOps environment-configuration reference story that’s also a “5”. This makes relative sizing easier, as we can always compare like with like.
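A minimal sketch of those reference points (the stories and categories here are invented examples, not the project’s real ones):

```python
# Invented reference stories; the real ones were specific to the project.
REFERENCE_STORIES = {
    "requirement": {"points": 5, "story": "Citizen can change their address online"},
    "devops": {"points": 5, "story": "Stand up and configure a new test environment"},
}

def reference_for(category: str) -> str:
    """Name the like-for-like comparator to hold up during the Planning Game."""
    ref = REFERENCE_STORIES[category]
    return f'Compare against "{ref["story"]}" (a {ref["points"]})'

print(reference_for("devops"))
```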

Once a story is sized, it can then be prioritised to the top of the backlog, ready for Sprint Planning.

Building credibility

There are a lot of approaches to building a good relationship with your client. All the ones I’ve seen boil down to a couple of basic themes:

  • Option A: keep telling them to trust you and believe you. Maybe one day they will.
  • Option B: keep your promises and do stuff for them. Let the evidence do the talking.

Most of the projects I’ve seen that end up with a confrontational relationship between supplier and customer have used a variation of Option A. For a long time. Customers get frustrated when they think they’re being lied to. And not doing what you say counts as lying.

One of my recent clients had been burnt badly by the curse of large, failing IT projects. Their operational units had disengaged from their IT department, having been let down for the best part of a decade by cancelled projects, cost overruns and major defects. Like many others in that situation, they’d done what they could to work around their “substandard” technology. You’ve seen this too – businesses running on strange Excel spreadsheets with even stranger macros.

What we did could be considered by “old-fashioned IT departments” to be tantamount to treason – we integrated with the business. So much so that we asked for at least one representative of the business to join our delivery team and take part in doing the work. We also wanted regular access to the operational unit so we could run workshops, demos and so on. Sadly, they had to draw the line at locating the delivery team on the operational unit’s floor (no desk space).

And we did stuff. Iterative development models have an end-of-sprint demo event, so we demoed what we’d built to the business. They’d give us feedback, and two weeks later, they saw the effects. They saw us listening to them and doing what they asked immediately. Their voices mattered. We never told them to believe us or to trust us. We never had to. We just demonstrated that we were trustworthy and they took the initiative. Part of being trustworthy was saying “no” (with a good reason) when we needed to.

And when we launched the first version of the system, trust became even more important. Because their customers (the public) could now see the system, what we’d built affected how those customers perceived the business. So when, demo after demo, we showed the changes we’d been making because of what their customers found difficult or confusing, the goodwill that created was simply huge. We’d shown the business that not only did we listen to them, we listened to the people they thought were important – their customers.

That’s a lot of credibility.

Obligatory First Post

Hello World

I’ve been making lots of notes in all manner of notebooks (hardback spiral notebooks were a special source of joy for that extra-close-to-the-margins writing) about how well or badly my coaching attempts have gone. I used to use them as fuel for conference submissions, but this year I thought I’d try something else.

I can’t promise you a life changing post. Or even a relevant one. When I write one of these entries, I’ve only got two “people” in mind – me and the person / team / project / whatever I’m trying to help. I get to do some thinking to learn what I can from the experience, and hopefully they get a memory aid to help them get better at whatever it is that they’re trying to do. If you’re able to get something useful from any of these posts, that’s a great bonus.

I’m also not entirely sure what this site will end up looking like. So rather than overthink it, I figure I might as well start somewhere and see how it goes.