Making sense of System/Design Thinking

(or how I learned to stop worrying and love the Thinking)

Why I’m writing this

I have a new client. They’re a large Financial Organisation, and therefore operate in a highly regulated market with heavy compliance (and similar) requirements. The sorts of things that end up creating a paranoid organisational mindset, with a significant audit theme running through everything they do.

The nature of this client is such that new ideas take a while (if ever) to establish. That pace isn’t helped by the fact that organisations like these are generally flush with cash – they can afford to overspend as well as maintain “death-march projects”. This is a functional characteristic and is neither good nor bad. It does, however, mean that delivery techniques that are culturally radical compared to the prevailing winds are unlikely to be welcomed with open arms (i.e. there is no urgency to change). One such idea is “User Centred Design”: the premise that the right thing to do is to design services specifically for your Users. This cultural anchor (the Customer is central to everything) is more prevalent in front office teams (e.g. for a Bank, those might be branch staff), but isn’t necessarily the case in back office teams (e.g. the “IT Department”). User Centred Thinking may lead to better bank accounts, savings accounts, or interest rates, but is less likely to be adopted to improve the systems that an actuary might use.

Enter the labels “Design Thinking” and “System Thinking”.

What is “Design Thinking”?

The term has been gaining popularity over the last few years. I currently believe that “Design Thinking” as a label for a design process (with stages like empathise, define, ideate, prototype and test) was popularised by the likes of Tim Brown and others at IDEO. The term itself might have been coined in the seventies, but ancestor terms such as “wicked problems” are far older. Older still are many of the underlying concepts – divergent thinking to gather options, followed by convergent thinking to make a choice, then test, learn, rinse and repeat – which must have been around for about as long as humans have been experimenting. The Double Diamond process, for example, was created by The British Design Council. This is a sketch:

How about “Systems Thinking”?

Disclosure: My introduction to “systems thinking” (lowercase, not branded, definitely not a Proper Noun) came from my undergraduate degree. Except I learnt about it under the banner “Systems Engineering”. More specifically, in a Mechanical Engineering lecture, my professor got us all to play a heavily modified version of Mousetrap (I’m sure other games are available). That was my first introduction to stock and flow diagrams. It also made me realise that I’d been using a systems thinking mindset for a while before that lecture. It turns out that my dad and I playing with Meccano and dominoes at the same time – building mechanised arms to tip dominoes over is surprisingly good fun most of the time, although the clean-up at the end is a royal PITF (pain in the foot) – was a good way for me to begin working out how to build models in my head to predict the future, which is especially complicated when multiple events occur simultaneously. That Engineering thing carried on into my Electronics and Control Theory (academic) life.
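As an aside, a stock and flow model is easy enough to play with in code. The sketch below is my own toy illustration (a single stock with a constant inflow and a proportional outflow – think of a bathtub), not anything from that lecture:

```python
# A minimal stock-and-flow sketch: one "stock" (e.g. water in a bathtub)
# changed each time step by a constant inflow and a proportional outflow.
def simulate_stock(initial, inflow, outflow_rate, steps):
    """Euler-style simulation: outflow is proportional to the current stock."""
    stock = initial
    history = [stock]
    for _ in range(steps):
        stock += inflow - outflow_rate * stock
        history.append(stock)
    return history

levels = simulate_stock(initial=0.0, inflow=10.0, outflow_rate=0.1, steps=50)
print(round(levels[-1], 1))  # the stock settles near inflow / outflow_rate
```

The stock climbs towards the equilibrium level of inflow / outflow_rate (100 in this toy run) – exactly the sort of behaviour a stock and flow diagram is meant to help you predict before you run anything.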

In business, the term seems to mean more than the abstract analytical mindset that I learned about at University. The work that Deming did with Japanese manufacturers from the ’50s onwards started with a very basic model:

Image from https://blog.deming.org/2012/10/appreciation-for-a-system/

In this diagram, the customer (consumer) is just another component, interacting with everything else. “Frameworks” such as Vanguard and books such as The Fifth Discipline have since moved the customer far closer to the centre of the system model – more precisely, in order to better define the purpose of the system. In itself, that’s not a problem, but it did take me a surprisingly long time to reconcile my perspective (systems-thinking-is-an-analytic-discipline) with the customer-centricity that I was seeing. Mostly because I was seeing that human-centricity aspect as “Design Thinking” (rightly or wrongly, probably wrongly).

My View of the Differences between Design Thinking and Systems Thinking…

One of the things I find helpful for my focus when learning (or even just thinking) about a topic, is the ability to detect when I go off track and get distracted. To help me reason about either Design Thinking or System Thinking, it helps if I can create some space between the two concepts, just so that if necessary, I can also think about what a concept is not. It’s a form of abstraction and an aid to my thinking.

Comparing the different “zones of interest” between Design Thinking (pink) and Systems Thinking (blue)

…and Why it doesn’t really Matter

Assuming that the previous section resonates with you, dear reader, the main reason I don’t believe it really matters where the precise boundary between the two concepts lies is that you need to incorporate both thinking models into your overall problem solving to genuinely make a difference to your customer. For example, whether you think about User Needs because you’re using Design Thinking or Systems Thinking isn’t relevant. What is important is the fact that you’re actually considering User Needs.

Why is my client trying these?

This is an interesting question, and I’ve no real way of getting a completely accurate answer from anyone I’m in contact with, so this is where I get my theorising kicks from.

From what I’ve been able to observe, this client has had multiple attempts at one form of “agile transformation” or another over the past several years. While all of these attempts moved the organisation forward (for some definition of forward), none of them did much more than improve the practices in effect at that organisation. Manager types still managed (although the role names changed a few times over the years). Requirements specialists still produced documents (although the names of the documents and the templates followed changed). Business engagement was still limited – in this case, limited to Product Managers. Product Owners (there is a difference at this client) were more likely to be a subject matter expert or a requirements specialist. In other words, a proxy for someone who may be more appropriate to “own” what’s being delivered, but who has insufficient time away from the day job. A centralised architecture function would determine standards to adhere to, and would be where approvals would be sought when teams wished to implement a design pattern (sometimes quite a low level decision).

The underlying culture appears to be quite resilient to change, and is what I would expect to see in a typical hierarchical, command & control, low trust, low empowerment organisation. Again, no value judgements from me, but it is a prevailing culture that’s the polar opposite of the empowered, distributed authority, high trust, high collaboration culture that an agile way of working would require to truly shine. I think this underlying cultural resistance is one of the drivers for using these “differently termed techniques”. In particular, there’s a keenness among (very) senior leadership to adopt Systems Thinking, as the term doesn’t match any of the existing organisational silos or fiefdoms. In that respect, “Design Thinking” is a little harder, as one could argue (for example) that “as the Enterprise Architects are responsible for Service Design, that’s clearly where Design Thinking fits in”. Granted, your sanity could be questioned if you argued that point of view, but still, it’s possible. That is one of the strategies a resilient bureaucracy can use to negate the risks associated with an invading culture – convert the new concepts into existing ones, and thereby eliminate the need to change. To stand a chance of changing the status quo, one needs to be careful about the battles that are picked.

Dear Board, what sort of “Agile Transformation” are you really after?

My problem with “Agile Transformation” as a term is that it’s not really helpful when trying to talk to people.

OK, maybe if I used the definition of an Agile Entity as a thing that continuously adapts and evolves to best take advantage of its environment, then an “Agile Transformation”, being pedantic, could represent the first step the Entity takes from its Creation State to the first Changed State, with the constraint that the Changed State includes a working mechanism for internally triggered evolution (irrespective of how effective).

However, that’s also an unhelpful definition, as I haven’t seen that interpretation from anyone at C-Level who “buys an Agile Transformation Engagement from a Consultancy Organisation”. And that’s what I wanted to write about here. What I see there is usually something along the lines of “We should be better at our IT Development. Other organisations have used Agile and they seem to be better at IT. Maybe we should get some of that Agile IT to increase our effectiveness”.

Therefore I’m parking that thought, and once I’ve worked out how to clearly articulate what my opinion is on that subject, I’ll link to it here.

In the meantime, this is a list of questions I think are worth knowing the answers to as soon as possible in the life of an “Agile Transformation Engagement”. Or even before one starts.

Agile Organisations have very different cultural styles to hierarchical / bureaucratic / “traditional” (for the defensive clients) organisations. A Transformation Initiative (if such a thing can exist) will be about helping an organisation transform itself culturally into something else, one with more pronounced Agile Characteristics and Traits. I think anything that maintains the cultural status quo (especially if it’s a “negative-in-the-context-of-an-agile-organisation” one) is heading towards the Lip Service end of the adoption scale.

Warning: Some of these questions can be tricky to ask without risking being fired :-). All of these are aimed at either the Sponsor of the Transformation Engagement, or the C-Level Board if it’s broad enough.

Scope

Question: Where do you WANT to draw the boundary for the Transformation Initiative?

Question: Where do you HAVE to draw the boundary for the Transformation Initiative?

Outcomes

Question: WHAT do you define as “a Successful Transformation”?

Question: WHY do you think you need “an Agile Transformation”?

Question: Are you “all-in” on this Transformation? What happens if it’s not successful? Do you need a Backup Plan (or Escape Plan)?

Question: If the Transformation results in jobs changing, CAN you update your policy / HR strategy etc to suit? WILL you?

Question: If the transformation results in making some people redundant if they are unable/unwilling to re-train to the target operating model (candidates include middle managers), WILL you make them redundant?

Question: What did you have to do in order to get Board buy-in for the need for “an Agile Transformation”?

Question: When do you “need Demonstrable Results”? Why then? What constitutes “Demonstrable Results”?

Question: Do you have a “Trusted Adviser” to fact-check what you’re being told? If not, what is your level of trust? How can that be increased?

Sustainability

Question: What’s your strategy for sustaining the Transformation Initiative after I’ve left?

Question: What would happen if “after a while” (e.g. a couple of years), your organisation reverted back to the operating model as it stands now? Do you have to prevent that?

Finances

Question: How much are you willing to invest? Over what time frame? Is any of that investment conditional on evidence of progress?

Question: Can you tolerate the “Agile Transformation Initiative” as an annual overhead cost to be paid before projects and change budgets are calculated?

Question: Can you tolerate the “Agile Transformation Initiative” as just another project, one that will therefore need to justify its budget?

Agile Brands

Question: Is there a perception in the “entity-under-consideration” that Brand A/B/C is the right one? Have conferences, articles, experiences etc influenced that perception?

Question: Is there a strategic goal of “Installing” / “Implementing” Brand X? Or is the strategic goal “Adopting” Brand X?

Question: What happens to your Credibility / Authority / Respectability if despite the prevailing “Brand X Bias”, an alternative Brand is Implemented / Adopted?

Janus and the art of partitioning

Why Janus? Well, not only is he the Roman God of Beginnings and Gateways (or doors, transitions, journeys, that sort of thing), he’s normally depicted with two heads, one looking to the past or beginning and the other looking to the future. Seems to resonate with what I try to do with my clients – help them along a journey.

Why this post? It was triggered by me reading the book Unlearn by Barry O’Reilly. The book helped me crystallise or make explicit something I think I’ve been doing often enough to be considered “typical behaviour” – the act of unlearning.

I’ve been thinking about how to use this new-found explicit conscious awareness to help my current employer (as well as my current and future clients). My typical role on an engagement is some form of…no wait, this link might give you a sense. Alternatively, choose any number of words from the following set and arrange them in any order to get my title (naturally, duplicates are encouraged): “agile/ninja/coach/therapist/comic/beer-buyer/disruptor/transformation/hygienist”. The key thing about my clients (and my current employer) is that their overall delivery maturity is nothing worth writing home about. They’re certainly no bleeding edge delivery outfit, constantly testing hypotheses using practical experiments, with unicorn tendencies.

So who hires me? Mostly Laggards and Late Majority organisations (see Everett Rogers’ book Diffusion of Innovations).

What does that mean for me? Generally, the mental models I have in my head for how “software delivery should work” are two or three decades(*) ahead of the executives and senior management types that I spend time with. That means that unless I take great care, I’ll end up using language that’s incongruous with the recipient’s world view. And there are enough snake oil salesmen out there to replace me, happy to put lipstick on a pig – see the Internet for cases where “agile transformations don’t work” or “agile transformations don’t deliver orders-of-magnitude improvements”, or variations on the theme. For me, that’s Janus looking forwards.

How am I trying to solve this? I learned a foreign language. This one’s called “Management from Yesteryear”. I learned it because if I were to stand any hope of working out whether something I say is being misinterpreted, I need to understand how it’s interpreted. Some people would call this empathy, but I disagree. It’s more a model of a person from the nineties (say), to help me understand how a real person would behave. For me, that’s Janus looking backwards.

So how does Barry’s book fit into all this? Well I realised that if my client needs to evolve by 20 years, they’ll need to go through an awful lot of unlearning and relearning. That’s assuming that if someone is going through that much evolution, they’re able to skip a few of the intermediate states. If that’s not possible, well that just increases the amount that needs to be learned and unlearned in a comparatively short space of time.

So what I need to do is help them get started along that journey, and supply the occasional nudge if they start going too far off track – say, unlearning something that’s still relevant, or learning something that isn’t helpful. That’s harder than it sounds, as it’s tricky to understand what off-track actually means, not to mention how on earth I’d be able to observe it. What would be ideal is if they themselves could work out how to tell whether they were going off track. A more realistic scenario is that I’d have a sense of how they’re thinking, try to use that to gauge the degree of discomfort they’re feeling, and from that attempt to infer the degree of off-track-ness. That said, sometimes it’s helpful in the longer term to learn something, realise it’s incorrect, and fix that. All adding yet more uncertainty to the overly simplistic “are they on track or not” question.

That’s where my internal-model-of-a-person-from-the-nineties comes in. I fancy myself in (very) amateur dramatics circles, as someone of the method acting school of thought. When working with someone, and I’m asking them to go through cycles of unlearning and learning, it seems only fair that a part of me goes along that journey with them. It’ll help me build some empathy. It’ll also help me spot when things aren’t going well, because in addition to seeing their expressions, I’ll be feeling similar things too. It’ll help them trust me a little more than otherwise. It’s a great way for us to discover potentially exciting new things that neither of us would have foreseen. And finally, that shared experience is also a good foundation for building some psychological safety.

A brief segue into psychological safety.

I’m trying this approach with my current client, and so far it’s proving to be an interesting experiment. I’m not too clear about how cleanly I’m partitioning the me-from-now and the virtualised-me-from-the-nineties, but if nothing else, I do appear to have more empathy and connection with my client. So that seems positive. I’ll keep trying this approach and see if it gets any better or gives me something different. Might even blog about this later.

 

 

(*) I know that sounds harsh, but for example organising teams by components and architectural layers for efficiency reasons is just so nineties. And not in a cool retro way.

 

 

The Importance of Psychological Safety

It’s vital for making mistakes and learning from them.

It’s not a new idea.

Hard to give. Easy to take away.

Children have a lot. It’s usually drummed out of them at school.

Adults don’t have much. People who do are seen as “brave”, “courageous”, “bold”.

It’s infectious. People who work with those with a lot of psychological safety, begin to feel safer.

  1. Managing the Risk of Learning: Psychological Safety in Work Teams – Amy C. Edmondson, Associate Professor, Harvard Business School
  2. Psychological Safety: The History, Renaissance, and Future of an Interpersonal Construct
  3. High-Performing Teams Need Psychological Safety. Here’s How to Create It – HBR.org

Learning SAFe by comparing it with DAD

I started writing this post as a gut reaction to the style of the language I found in SAFe (https://www.scaledagileframework.com/). I have been trying to learn what I can, and I find connecting/comparing new things to stuff I already know to be an effective memory aid.

Background

I’ve spent the last few weeks dipping in and out of SAFe – my current employer has adopted it “in a big way” and as it’s one of the few agile scaling frameworks that I know relatively little about, I figured I might as well see what the fuss is about.

Disclaimer: I’ve known about SAFe for a while now, and the early versions were, shall we say, “limited” in my opinion. But SAFe’s had a lot of content updates over the years, and having listened to a few SAFe people talk about/defend SAFe, some of their language reminded me of the things the RUP lot were trying to say as they explained RUP to a hostile audience.

I’ve mostly structured the comparison along different abstraction levels:

The Strategic Lens

SAFe: Business Canvas, Portfolios and Budgets

SAFe borrows heavily from Lean thinking and uses a Business Canvas (extended to become a Portfolio Canvas, but the underlying concept is the same) as well as thoughts on Lean budgets for Value Streams. These are wrapped up in some structured guidance, using McKinsey’s Horizon Model and support mechanisms & tools for SAFe Consultants. I’d expect senior management to be comfortable with this degree of “formality” and structure.

Image from https://www.scaledagileframework.com/portfolio-level/

DAD: Disciplined Agile Enterprise

DAD is far lighter in its approach to guidance. There’s an underlying assumption that organisations are more sophisticated than simplistic models could describe, and so it’s more appropriate to provide thoughts and principles for the major Departments (e.g. HR, Sales, Finance etc.) to be used as the client organisation sees fit. The lack of significant cross-organisation structure will require greater experience from an Organisation’s Agile Consultants to be able to evolve these Departments towards being more compatible with an Agile Mindset.

Image from http://www.disciplinedagiledelivery.com/dae/

The IT Delivery Organisation

SAFe: Managing Change

SAFe has a lot of content focused on “managing the environment”: role titles (“Product Management”, “Solution Management”, “Lean Portfolio Management”), standardisation instructions across teams within a bounded context (e.g. all user stories are sized relative to the same baseline story for all teams), and a structured model for requirements (Epics -> Capabilities -> Features -> Stories). The sense I’m getting from the SAFe guidance is that of Management / Senior Leadership setting boundaries to operate within. This is much more closely aligned (at least when looking at the narrative) with most organisations. The overall SAFe diagram also has a very structured feel to it:

Image from https://www.scaledagile.com/whats-new-in-safe-4-6/

That suggests to me that there’s a general feeling that changes are optional – that you can say “no”, or at least “not yet”. At least for some types / categories of change. What’s interesting to me is this is a managerial style of thinking – “I will make a decision”.

DAD: Goal Driven

DAD has a different feel when navigated. A lot of the content is structured around “Process Goals” and the various options that could be chosen to achieve a degree of progress towards those goals. While there is some “structure” from an organisational perspective, it’s very lightweight. The visual style suggests a more team-centric bias (“Solution Delivery” is centre-ish). Though be warned, that is just my gut response when looking at the visuals:

DAD Poster Image
Image from http://www.disciplinedagiledelivery.com/dait-workflow/

The visual style also suggests to me that there’s a general feeling that changes are mandatory – that as a team you have to operate in a way that is flexible enough to deal with whatever comes your way. What’s interesting to me is that this is more of a “DevOps cultural” style of thinking – “How do I cope with this?”

DevOps

No comparison would be complete without comparing the two stances on DevOps (at least at time of writing…).

SAFe: Continuous Deployments, Release on Demand

SAFe describes Continuous Delivery as a four-stage pipeline – “Continuous Exploration, Continuous Integration, Continuous Deployment and Release on Demand”. Basic descriptions are provided, and the CALMR acronym features as an aide-memoire for the approach [Culture, Automation, Lean Flow, Measurement, Recovery].

Image from https://www.scaledagileframework.com/devops-and-release-on-demand/

DAD: Disciplined DevOps

DAD is rather more thorough in its description of DevOps, showing how more operational and non-functional concerns (e.g. Security and Data Management) fit into the larger picture, admittedly by pushing the boundaries of portmanteau-decency (DevSecOps or DevDataOps anyone?). The sensible combination of these facets is simply called “Disciplined DevOps”, fitting in with the high level naming strategy.

Image from http://www.disciplinedagiledelivery.com/disciplineddevops/

Teams

One of the few things that every Agile Scaling Framework has in common is the reliance on empowered, self-sufficient teams producing high quality output (e.g. Scrum+XP) as the basic building block.

SAFe: Alignment using Program Increments

SAFe uses both cadence and synchronisation to keep all teams in the ART (Agile Release Train) working towards a common objective. Teams are free to pick either Scrum or Kanban as their team execution pattern; however, the Kanban variant used isn’t “vanilla Kanban” (or whatever the textbook version would be called), as teams using Kanban are still expected to align and coordinate with the overall iteration heartbeat used by the ART. Kanban teams are also “forced” to plan in batch, just to allow them to operate within the PI Planning event. The PI Planning event is essentially a big room planning exercise covering about three months’ worth of work. I’ve seen commentary suggesting that the PI Planning event is instrumental to the successful use of SAFe.

This is an example of a Program Board that also serves as a “Big Visible Information Radiator”

Image from https://www.scaledagileframework.com/pi-planning/

DAD: Lifecycle Models and Program Maturity

One of the DAD Lifecycle Models is the Program Lifecycle

Image from http://www.disciplinedagiledelivery.com/lifecycle/program/

The DAD Program Lifecycle also integrates key elements from the Risk/Value Lifecycle, making explicit the need to get a stable base in place before scaling the program up to several development teams.

Other Perspectives – Agile Transformations

Warning: Many large top down transformations fail. Many bottom up transformations never make it out of the teams.

SAFe: Playbook for the Agile Transformation Consultant

SAFe includes John Kotter’s work on Leading Change [link is to slightly older material than the current view on kotterinc.com], albeit modified slightly, but maintaining the original premise – leaders should lead the change, enabling their teams to act on the Vision.

Image from https://www.scaledagileframework.com/implementation-roadmap/

One thing to bear in mind is how integrated the SAFe training courses are in this Transformation Roadmap. Using the shu-ha-ri metaphor, I’d say this is good for Ha-level Transformation Consultants (if you’re at Shu level, then being an Agile Transformation Consultant is dangerous IMHO). The availability of coherent reference material will also make the Executive Engagement activities a little easier, so it should be easier to start larger scaled transformation endeavours. This feels more like a top-down transformation initiative.

DAD: Structured as a Reference

DAD, on the other hand, doesn’t really have anything like this – it’s structured as Reference Material. You’d need to be a Ri-level Transformation Consultant to get maximum advantage from the DAD framework. If you’re at Ha level, you’ll still get value, but it may be a more nerve-wracking experience as you haven’t got a specific playbook to use. The DAD books (e.g. Choose Your WoW!) are aimed at filling that gap, giving teams support to help them evolve their ways of working. This feels more like a grass-roots / bottom-up transformation initiative.

Name: Victor Frankenstein. Subject: Data Warehouse

Data Warehouses are slightly different from my typical project past (I’m far more au fait with custom software or package configuration / implementation gigs).

However, there’s an interesting aspect of Data Warehouses that can make them very suitable for more experimental/research oriented delivery techniques. To set that sentence in some context, I see two main categories of advantage to having a Data Warehouse:

  1. Regular Reporting that’s more insulated from changes to operational systems that feed the reporting (this is an Efficiency type of advantage)
  2. By aggregating multiple data items, it is possible to increase the probability of discovering previously unexplored patterns and relationships (this is an Insight type of advantage)

It’s this second advantage (see something like https://www.youtube.com/watch?v=X30tpFcKlp8 ), if a little hard to quantify, that I’m interested in.

Running an Experiment

Assuming the data already exists somewhere in the Warehouse, the high level end-to-end flow is fairly straightforward:

warehouse_basic_flow
(With the usual complexity that running A/B or multivariate or blind etc. tests give)
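As a hedged sketch of the “measure” part of that flow, the snippet below compares two variants with a simple two-proportion z-test, using only the standard library. The conversion counts are made-up numbers for illustration, not anything from a real engagement:

```python
import math

# Toy A/B test evaluation: did variant B convert better than variant A?
# A two-proportion z-test on pooled conversion rates (invented numbers).
def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Return the z statistic for the difference in conversion rates."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

z = two_proportion_z(successes_a=110, n_a=1000, successes_b=150, n_b=1000)
print(f"z = {z:.2f}")  # |z| > 1.96 suggests a real difference at the 5% level
```

Multivariate and blind tests need heavier machinery than this, but even a sketch like the above shows why the experiment loop wants an explicit “measure then decide” step rather than eyeballing dashboards.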

Delivering “The Tech”

The analysis and transformation work seeks to create information by transforming the underlying data into useful structures. It is sometimes impossible to predetermine exactly what structures would be useful, leading to the need for experimentation / trial-and-error or other uncertainty-reduction strategies. The transformation efforts can be split into two parts (if necessary). Converting a source system data model into a logical (or canonical) model is a useful step, as it allows the Data Warehouse users to decouple themselves from the inner workings of a source system. This canonical form is typically in 3NF. Converting the canonical form into a Facts and Dimensions model is a useful step to improve the reportability characteristics of the data. However, this step can often only be performed once there is enough insight into the sorts of questions that the datasets need to answer.
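To make that second step concrete, here is a toy sketch (invented tables, not the client’s data) of turning canonical, normalised rows into a minimal fact table aggregated over a couple of dimensions:

```python
# Canonical (3NF-ish) form: orders reference customers by key.
orders = [
    {"order_id": 1, "customer_id": "C1", "amount": 100.0, "date": "2019-01-03"},
    {"order_id": 2, "customer_id": "C2", "amount": 250.0, "date": "2019-01-03"},
    {"order_id": 3, "customer_id": "C1", "amount": 75.0,  "date": "2019-01-04"},
]
customers = {"C1": {"region": "North"}, "C2": {"region": "South"}}

def build_star(orders, customers):
    """Aggregate order amounts by (date, region): a minimal fact table."""
    facts = {}
    for order in orders:
        # Join out to the customer dimension, then roll up the measure.
        region = customers[order["customer_id"]]["region"]
        key = (order["date"], region)
        facts[key] = facts.get(key, 0.0) + order["amount"]
    return facts

facts = build_star(orders, customers)
print(facts[("2019-01-03", "North")])  # 100.0
```

A real warehouse would use surrogate keys and proper dimension tables, but the shape of the work – join out to dimensions, then aggregate the facts – is the same, and it only becomes obvious which dimensions matter once you know the questions being asked.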

The aggregation work seeks to combine potentially related sets of information to be able to reveal additional patterns and trends and create insight. This insight is where the value is when using a Data Warehouse (or more precisely, these insights can be acted upon to generate additional value for the business).

A continuous delivery model can work well in this context, particularly if there is a strong desire to create differentiation in a mature market. Work can usually be delivered quickly as in these scenarios there are usually no new architecturally significant requirements that need to be met (mainly because the Data Warehouse already exists, is operational, and has been sized to also cope with a reasonable degree of ad hoc reporting).

Mind the Gap

There are scenarios that don’t easily fall into that model. One significant one is when the raw data to be used does not yet exist in the Data Warehouse.

The solution to this problem is to introduce additional steps in the value stream, moving it left to also include ingesting the new data source as well as integrating (e.g. cleansing) the data and storing it in a coherent manner.

warehouse_wider_flow

Filling the Gap

The Ingest & Store activities can be performed with a couple of different strategies.

  • The first strategy is to only Ingest & Store what is required by the downstream work (i.e. the Analyse and Transform work identifies the specific data gaps. Those are filled)
  • The second strategy is to Ingest and Store everything that is available from the data source, regardless of whether or not it is needed

The main benefit of the first strategy is that only valuable work will be carried out (more just-in-time, less just-in-case). The main disadvantage is the time/effort it takes to build or change an Ingest process (typically an architecturally significant piece of work). An interesting side effect of only ingesting what is needed is that it’s much slower to “just explore data and discover new insights”. In other words, serendipity is far less likely to work for you.

The main disadvantage of the second strategy is that it can take more time (and money) than the first strategy to get to the version-one-dataset (i.e. the dataset known to be needed). The main advantage is that it is more likely that unforeseen relationships will be discovered, and those insights could be a source of competitive advantage. However, there are no guarantees.

The Client Specific Problem

There is a lot of advice and guidance applicable to evolving “established” Data Warehouses. There is comparatively little advice and guidance on how to bootstrap your Data Warehouse and start generating value from it even while it contains very little data. A lot of the latter guidance uses tangible outputs (e.g. a report or a dashboard) as a way to bound the early work.

The initial ask of the project team (more accurately a platform team, but that’s a topic for another day) was to produce two reports for two business units. These two reports represented a significant piece of value. These were both fairly mature reports, having been developed and enhanced over a number of years. This was the strategy for being able to unlock value from the Data Warehouse at an early stage.

However, as the work progressed, it became apparent that there were significantly more data sources involved in the production of the complete reports (only discovered when new outputs were compared to the existing reports). In this environment, the work required to Ingest a new data source has significant architectural requirements to be satisfied before the data contained is accessible for use. The initial planning assumptions that led to the selection of these reports as the first business use of the Data Warehouse were proven to be false. Seeing as all of the reports / dashboards that were deemed to be valuable were of similar ages (i.e. had been evolved and refined over a number of years), it was reasonable to assume that similar deal breakers would emerge unless far more thorough data analysis was performed.

A different strategy was needed. Instead of “trying to deliver a pre-defined thing”, what would happen if the focus shifted towards “get datasets into the Warehouse ASAP”? The ability to deliver a specific thing would suffer. However, the ability for the business to explore their data and attempt to derive meaning and insight from it would be improved. It involves reframing the ask: moving from “Recreate X” towards “Try to make use of Y”. It gives the business users a chance to stretch their creativity muscles.

This proved to be a bridge too far. The rest of the “Change Organisation” had been optimised for projects. Or perhaps that should be “had evolved to make Projects the easiest vessel for money to effect changes”. Asking that organisation to switch to what is essentially a far more open ended “Research” mode of working wasn’t feasible in the short-medium (and I’m not holding out much hope for “long” right now) term.

So we found a halfway house. A Frankenstein-esque mesh of “push oriented work” and “value/pull driven work”. Development efficiency was the main driving force behind the sequencing of the Ingest/Store/Analyse elements. Datasets would be pushed onto the Data Warehouse in as efficient a manner as possible. Experimentation and “Product Development” was the main driving force behind the Analyse/Aggregate/Trial/Adopt elements. The most valuable reports and KPIs were targeted first, and experiments would be run to work out viable ways of delivering the insights using whatever datasets were available at that time. Interim solutions would be accepted, knowing that when additional datasets were delivered onto the Warehouse, it offered the chance for refactoring and redevelopment to occur.
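The halfway house can be sketched as a simple simulation. Datasets are pushed in a sequence chosen for development efficiency, while reports are pulled (in at least interim form) as soon as enough data has landed. All names here are invented for illustration:

```python
# Push side: datasets land in an order chosen for development efficiency.
push_order = ["ledger", "customers", "payments", "risk_scores"]

# Pull side: each report declares the datasets it needs.
reports = {
    "monthly_kpis": {"ledger", "customers"},
    "risk_dashboard": {"ledger", "payments", "risk_scores"},
}

available = set()
delivered = []
for dataset in push_order:
    available.add(dataset)          # a dataset is pushed onto the warehouse
    for name, needs in reports.items():
        # A report becomes deliverable once its needs are a subset of
        # what's available; earlier, only interim versions are possible.
        if name not in delivered and needs <= available:
            delivered.append(name)

print(delivered)  # ['monthly_kpis', 'risk_dashboard']
```

The point of the sketch is the decoupling: the push loop never consults the reports when choosing what to ingest next, and the pull loop simply reacts to whatever happens to be available, refactoring interim solutions as richer datasets arrive.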

It’s not pretty, but it’s a huge step forward from where the team was when I joined. Hopefully this one remains benevolent…

[Image: warehouse_multi_flow]

Some characteristics of a great team

Note: Credit for most of this goes to @susannehusebo

  • They start their daily stand ups on time
  • They tend to laugh a lot and have fun
  • Everybody in the team gets to express themselves
  • They correct and edit each other when they go off track
  • They try out new things with appetite. But they are quite willing to admit when those things don’t succeed
  • They often don’t care very much if it looks like they’re working hard
  • They encourage each other to leave on time
  • They talk about “we built” or “we failed”, rather than “I built” or “I failed”
  • They have lunch together, or some other time within work hours where they talk about other things than work
  • They welcome more junior members of the team and enjoy mentoring them
  • They have inside jokes
  • They share responsibility for communicating with outside stakeholders
  • They don’t agree on everything
  • They debate between short term benefits and long term strategy. They reach compromise
  • They question each other on topics like accessibility and inclusivity in design and development
  • When they finish a task they check if they can help someone else finish theirs
  • There’s room in good teams for extroverts and introverts. And those in between.
  • Team members are aware of their needs and communicate those to others
  • I’ve seen some great teams have seriously stubborn people in them. They can be great when the team needs reminding of why they came up with a certain rule: specifically so the team wouldn’t compromise when they’re in a hurry
  • Good teams often fight for independence to make their own decisions
  • They ask “why” a lot
  • They discuss the users of their products every day, and user experience is viewed as everyone’s responsibility
  • If they are remote, they try new ways to make everyone equal, even if it means compromising the experience of those people that are in a shared space
  • They are not dogmatic
  • Testers and designers are included in discussions of estimations and backlog refining
  • They respect agreed decision making structures, but argue their points
  • The people are not too similar to one another. They think about problems from different angles
  • If someone on the team is ill, the others figure out how to get by without that person
  • They have quiet time. In whatever amount is valuable to them
  • They don’t interrupt each other. They take equal turns in speaking
  • They get each other tea/coffee/water
  • They have one-to-one chats with each other to discuss points of agreement or disagreement
  • They know each other’s personal and professional goals and aspirations, and try to support them where possible
  • They are not all equally skilled at everything. But they try to work on things where they can learn
  • When people pair on a task, the less experienced person is usually the “driver”
  • Senior team members ask for advice and feedback from more junior team members
  • They don’t have to give positive feedback every time they give negative feedback
  • They thank each other for favours, for tea, for good ideas, for bad ideas, for observations…
  • They use the products they’re building
  • When someone has an idea, the other team members build on it and add to it
  • They accept that people have “off” days
  • They are generally resilient when it comes to change, including change of direction, goals and vision
  • They can work quickly, but it’s not always crunch time. There’s a sustainable cadence to work
  • They regularly talk about how they work together, and try to improve on that
  • Team members generally know what every other team member is working on, and what kind of issues they’re having
  • They encourage each other to share work early, rather than wait for it to be “perfect”
  • They have some shared values or principles that guide their interactions
  • They try new tools and methods quite easily
  • They might defend each other to people outside the team
  • Before they ask for feedback or reviews from other team members, they read through their own code/work
  • They have time to/they prioritise automating tasks that are not valuable
  • They talk about technical debt, and teach stakeholders about it
  • The work they do feels important
  • They argue
  • They talk about how well they’re doing compared to expectations. If the estimates turn out to be off, that’s not a disaster
  • They take turns dealing with boring or time consuming tasks
  • They are interested in each other personally
  • They talk about problems without talking about people’s characters, rather they focus on the work
  • In meetings they put their phones down or close their laptop lids when they’re listening
  • They have a say in who joins the team
  • They have shortcuts for common communication (like signals for when the conversation is derailing, or when they’re losing focus)
  • They don’t mistake fun for a lack of discipline
  • They consider different communication styles. Meetings are not just held so the loudest speakers are heard the most
  • They care that other team members and stakeholders understand the work they’re doing
  • Documentation is updated whenever errors are spotted, by the person that spots the error
  • Team members feel a shared sense of pride and ownership of their work
  • It’s not ok to notice a problem, and not do something about it, even if it’s not in an individual’s immediate area of responsibility
  • Developers participate in sketching sessions, designers understand how git works
  • Team members reach out to their personal networks to ask for help for other team members
  • Everybody tests
  • Team members let each other try things out even if they think it will fail (sometimes, if constructive)
  • They notice when other team members seem worried or down. They ask about it
  • Everyone knows that it’s ok to be wrong, as the rest of the team have your back
  • Everybody participates in user research
  • Everybody’s involved in pairing or non-solo work
  • They have a shared history and sense of purpose
  • They argue like siblings. Intensely, but when it’s over they’re still teammates
  • There’s an awful lot of trust sloshing around the team. All of that trust has been earned by people doing what they say and looking out for each other
  • Work is criticised. People aren’t
  • No-one is “keeping score”
  • “I don’t know” is not a dirty phrase. “Let’s find out” is an even better one

Team Stability vs Personal Freedom

Background

I’ve been thinking about a project team characteristic that I’d never previously worked with before. This project had a dedicated staffing stream of work, which supported an ongoing employee rotation plan as part of normal project execution. Any given team member would have a 9 month stint, after which they’d rotate off the account and go do something else. With the size of the project team as well as some contractual constraints, that amounted to about 3-4 people a month, or nearly 1 person a week.
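A quick back-of-the-envelope check of those rotation numbers. The team size (~30) is an assumption inferred from “3-4 people a month”; the post doesn’t state it explicitly:

```python
team_size = 30          # assumed, not stated in the post
tenure_months = 9       # stated: 9 month stints
weeks_per_month = 52 / 12

# With everyone on a fixed tenure, steady-state churn is size / tenure.
per_month = team_size / tenure_months
per_week = per_month / weeks_per_month

print(round(per_month, 1))  # ~3.3 people rotating per month
print(round(per_week, 1))   # ~0.8, i.e. nearly one person a week
```

Any team size in the high twenties to mid thirties gives the quoted 3-4 people a month, which is what makes the churn feel like a near-weekly event.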

 

The Bad News

A basic awareness of group dynamics as considered in Sociology, Psychology and whatever other -ology you, the reader, are fond of, leads to the conclusion that highly effective teams take a while to emerge from a group of people. These patterns have been articulated, perhaps most famously by Bruce Tuckman (i.e. Forming-Storming-Norming-Performing), but other variations and extensions have been suggested.

[Image: Tuckman_0]

Of these stages, Storming is the most difficult, and many groups never make it past it. Teams that do make it through have invested the time it takes for individuals in a group to establish some form of social structure along with compatible working patterns, relationships etc. These individuals have learnt how to build on their collective strengths, as well as offset the weaknesses that each of them, as individuals, bring.

Team Effectiveness

Each swap made within a team reduces the stability of that team’s position in the team maturity model, and significant enough disturbances can move a team from a relatively comfortable “performing” state, down a notch, to “norming” and potentially even worse, a rather torrid “storming” state. The team would need to invest further in order to rebuild and develop into a high performing team. The practical upshot of this regular rotation is that the project team will always be struggling to maintain the highest potential performance.

Domain Knowledge

A second, more insidious side effect, is that the team as a whole may only really have a relatively shallow level of knowledge and expertise in the problem domain they’re solving. Picking up the basics of any new context is easy enough. However, to really build up a nuanced understanding of a domain, a person requires a decent amount of time to carefully mull over a stack of facts, figures and data points. I’d argue that it takes at least 3-6 months of focussed learning before enough is known about a new domain for significant decisions to be made safely. With a 9 month term, that suggests that really, a person only has, at most, 3-6 months of meaningful work before it’s time to move on.

A Sense of Purpose and Belonging

One of the joys of working in a highly motivated team with a strong sense of purpose, is the feeling of accomplishment when a milestone is reached (as long as that milestone is meaningful to the team). With team membership always having a tangible end date, it’s much harder to really get a sense of belonging to that team, and (deeply) feeling that sense of purpose. There’s the danger that an individual feels like they’re a replaceable cog in a machine – largely because that’s what will happen as everyone gets replaced fairly regularly.

 

The Good News

It’s not all bad news. On the plus side, it does put the individual’s “perspective” at the centre of the project’s thinking. As an individual would only be exposed to a problem domain for a finite period, it means that there is a regularly changing supply of new things for them to think about. That can include new problem domains, new implementation technologies. This can help keep the individual fresh, enthusiastic, and in an explicit learning mode for much more time than otherwise.

Additionally, the downsides are potentially easier to deal with. In the event that the individual isn’t compatible with the project and the demands, it’s easier for the individual to just “grin and bear it” for 9 months, as there’s light at the end of the tunnel, and the individual doesn’t have to do anything drastic (e.g. resign). Worst case, if an earlier exit is required, that’s also likely to be a lot easier to deal with.

Another advantage is that it can make something that isn’t long term sustainable (e.g. remoteness of location) something that is more sustainably managed, as it distributes the workload across a greater number of people than if the team were fixed.

An interesting potential positive, is that this forces a few XP principles into the team (if they’re not applying them already):

KISS: If nothing else, the team have to reduce the complexity of what is being built, as there will be regular (and ongoing) instances of a new person having to learn about what is being built. There’s relatively little that can be done to reduce the inherent complexity of the problem domain, but navigation aids (e.g. context maps) will help and are recognisably important investments.

Automated testing and test coverage: New starters generally go through an initial period of higher-than-normal anxiety, as that’s when they’re operating almost entirely in the dark. As this is a regular occurrence, it’s unfeasible for a project team to “cheat” and ride it out (which is a possible approach if you’ve got a permanent team). Automated tests and high coverage are a safety net, making it much less scary to work on the code. Again, that’s explicitly recognised as valuable.

Collective ownership: As no one person will be around permanently, it’s much harder for territories to be formed beyond team level boundaries (in a multi-team project), as the only thing that is “stable enough” to own anything (along with the typical behaviours of defending it, curating and growing it etc.) is the team. However, this is a double edged sword, as it’s also possible that in the face of a significant enough problem, some deflection might occur (e.g. “oh that’s not my/our fault. This was <Person X, now left>, they’re to blame”). This “no-responsibility-anywhere problem” is a normal risk of collective ownership; it’s just more likely in this model than in stable team environments, as it’s far easier to blame someone who isn’t around.

 

Conclusion

 

I’m treating this project execution model as just another lever to be adjusted as necessary. My current focus is on optimising for a sustainable and resilient project team – one with a high enough morale to cope with the difficult circumstances that the project needs to execute within, with the employee characteristics that are available to me. In this instance, I’ve got an over-supply of graduate and apprentice developers, but a scarcity of senior and lead developers. I also have a project location that’s a little too far away from home base for comfort, and a mandated work pattern that throws the work/life balance off kilter.

My key measures are employee engagement & feedback, general mood measures and team retrospectives (shareable content only). Additional measures include sickness days (as a potential visualisation of a deeper problem), as well as the levels of enthusiasm and participation in non-core work activities – e.g. community events etc.

My key feedback mechanisms are regular 1-2-1s (I’m trying to catch up with at least one project team member every day), open Q&A once a week and more formalised “account update” sessions once a quarter.

Power Models and Method Affinity

a.k.a. why are some people SAFe oriented, others DAD biased, yet others LeSS enthusiasts, and not forgetting the DSDM gang, the Nexus collective…and I’m running out of terms.

I’m sure there are a whole host of reasons, but during my time observing people, and knowing a bit about their past and what they’re like, it has led me to form an interesting (to me anyway) hypothesis. It’s to do with power.

A Brief History of Power

Without digging too deeply into Taylorism or its successors, there are a handful of basic power models that can theoretically exist in an organisation (heavily simplified for illustrative purposes, as real life is never this “simple”).

  • Power in the hierarchy, the higher up you go, the more power you have
  • Power in the middle management – sometimes disparagingly called the permafrost
  • Power on the edges – people on the ground in front of customers

These can manifest themselves in a project context in a few ways (note: for want of better options, I’m labelling these categories with overloaded terms):

  • Power lies with the “planners” – e.g. project managers, PMO
  • Power lies with the “architects” – EAs, Solution Architects
  • Power lies with the “developers” – developers, testers, BAs

Human nature is such that we flock towards things that are like us. Planners are more likely to favour other planners, and work using systems where the balance of power is in their direction. The same sort of thing is true for the Architects and the Developers.

Methods

And now we get to the Methods part of this blog. I’m going to focus on agile methods, and agile scaling frameworks. And in particular, how these methods and frameworks are perceived, at least initially. That bit is key. Most of this “natural affinity” stuff is emotional in nature, and not fundamentally driven by rational thinking (hint: there’s a lot of religion in this area). As there are lots of them out there, I’ll just pick the three major ones (based entirely on how often clients talk to me about agile at scale, and nothing remotely scientific).

SAFe

The overall guidance is dominated by the navigable map. It has several terms that will be comforting and reassuring to hierarchical type organisations with traditional reporting lines and financial controls – Programme / Portfolio Management, Enterprise Architect, as well as some guidance on mixing waterfall and agile deliveries. This looks to be solidly planted in the middle of the “Planners” camp.

Based on the hypothesis, likely proponents and allies are to be found within PMO, Project Governance, Configuration Managers, hierarchical organisations with a centralised power model, and organisations that perceive themselves to be traditional with a rich history / heritage.

DAD

The first thing that strikes you when you first look at DAD is that it’s rammed to the rafters with choices. It has a risk-value lifecycle (but you can choose others), and many options for achieving pretty much any delivery-related goal you may have – from big ticket items, such as how much architectural insight you need about the future, to very focussed options, like the right level of modelling to use. And that’s just part of the “Identify Initial Technical Strategy” goal. This resonates well with those with an architectural bias – architecture is mostly about decision making and communication.

Likely proponents and allies are to be found in technical leadership – Architects, DBAs, and organisations with a strong technical bias.

LeSS

The navigation map for LeSS, in contrast to the previous two, looks relatively uncluttered. There are large concepts identified (such as Systems Thinking, Adoption), but these are all located around the periphery of the diagram. Slap bang in the middle is the engine: feature teams. This puts the Developer at the centre of the universe (as it were).

Likely proponents and allies are to be found within teams and individuals using Scrum and XP on a regular / daily basis, and organisations that “have a small company vibe”, which may be startups on a growth spurt, or organisations in a highly fluid environment with significant localised decision making.

The Goldilocks Solution

As the heading suggests, I think the right mix for any given organisation is somewhere in the middle. Power isn’t solely contained within a single area (though granted, in many cases, the vast majority of the power is indeed concentrated that way), and any scaled agile adoption strategy will need to understand and accommodate that to increase the chances of tangible benefits being felt by the organisation.

 

Feedback

As this is just a hypothesis I’ve got, I’d love to hear what you think, whether you’ve observed things that support this theory, disprove it entirely, or somewhere in between.