Confirmation bias in design thinking


Every few years another fad comes around.


Look back long enough and you’ll see lean, systems thinking, TQM, CRM, structured systems and many, many more methodologies and/or approaches.

The problem comes in the delivery of projects once the terms become more widespread.

Background

What we’re seeing now is the rise of two methodologies: Service Design and Design Thinking. You can probably add Human-Centered Design and Inclusive Design into that mix. Many organisations are adopting these methodologies to solve their existing problems, switching either from no methodology at all or from more formal, structured methods to ones centred around design.

I’m increasingly seeing Design Thinking heralded as the way forwards, but there are some issues with that approach.

To be clear, I’m a supporter of Design Thinking and many related methodologies. I’m not necessarily a supporter of how those methodologies are implemented in many organisations. I don’t believe design thinking is a fad; at the very least, a significant number of its concepts will remain even if a newer or revised method takes pole position.

Question

What we’re not seeing, though, are reports of the design thinking projects that have failed. I find it hard to believe that design thinking and similar, related methodologies are the magic bullets we’ve all been waiting for.

So do failed projects exist?

For every other methodology, there are numerous reports about how it has failed the client. The application of Six Sigma to Nortel comes to mind as a famous example. But there are plenty of other reports detailing failed projects.

So where are those reports for Design Thinking? It could be that it’s too early to tell, since we’re still in the early-adopter phase of the curve.

Analysis

Methodology Maturity

Gartner produce the Hype Cycle for technology adoption. It describes how people react to a new technology by relating their emotions to the stage the technology has reached and their use of it. We can apply a similar description to most methodologies.

Most methodologies start with practitioners noticing something wrong with their current way of implementing projects, so they add, modify or remove elements. That may become a 2.0 version of the original method, or, if sufficient changes have been made, a new method in its own right. Occasionally changes are blended in from another method, or from a practice outside the change domain altogether. This new 2.5 version may take on a life of its own and become yet another rebranded method, this time back at 1.0.

That fresh 1.0 version has only been used on a handful of projects and has had great success from the perspective of the facilitators. It is therefore hailed as a brilliant solution to any problem and becomes the latest and greatest. Unfortunately those implementations are only just becoming lived with, i.e. the changes have occurred but they haven’t had time to become embedded in the organisation. That’s when murmurings of disgruntlement appear, sometimes with a “told you so” attitude, depending on whether or not the change team listened to the advice of their subject matter experts. By then the change team has often moved on to the next project, this time with a couple of tweaks to their methodology: a version 1.1 that addresses some of the issues they were confronted with at the time of making their changes but, importantly, not those that appeared after the change.

Design thinking is relatively immature compared to, say, SSADM. I’d measure that maturity through the age of the methodology, the number of project-years it has been exposed to and the rigour involved in its definition. Design Thinking could potentially have more project-years (given the vast number of projects running now), but many of those projects are themselves immature, with people trying the method out for the first time. Moreover, there’s no single agreed definition of the method to follow, but instead many different flavours, as different authors, consultancies and training organisations each create the latest version of design thinking.

So whereas other methodologies are further along the hype cycle, with many of them on the “slope of enlightenment”, there are times with design thinking when I think we’re at the “peak of Mount Stupid”. I could probably say that about a number of projects regardless of the methodology employed; it’s just that more projects are now aligned with design thinking than before.

Confirmation bias and success bias

We only hear what we want to hear and what we’ve planned to hear. We either miss, or discount, information that contradicts our beliefs or expectations. This selective perspective allows us to promote one methodology above another without weighing all the evidence. It takes time and experience to keep assessing beyond those initial boundaries and to look for evidence that contradicts our expectations.

I suggested that the reason we’re not hearing about failed design thinking projects is that they haven’t been in place long enough for us to realise they have failed. But perhaps there’s something more insidious about the design thinking movement: only the positives are celebrated, even to the point that failed projects receive political spin so that their positive elements can be celebrated.

I’ve seen examples where the design team ran their sessions and sprints, came to implement new organisation designs, process designs and so on, and were heralded as successful, even held up as leading lights as other, similar organisations came in to learn how to achieve the same magic. However, that same team hadn’t engaged with many of the other departments required to make the changes work, most notably IT. That meant they had a great design that would take two years to implement, if and only if the IT department shared the same vision and decided to implement it. And IT was just one of the forgotten departments. Would the design have looked different had the other departments been involved? I’d bet money on it.

I’ve also wondered if it’s the age and associated behaviour of the people performing these sprints. Remember that most of them will be millennials, coming from a segment of society that did not get overtly negative feedback at school. There’s no right or wrong there, just a different perspective and a different approach to giving and receiving feedback.

Conclusion

Many change professionals have been combining elements from numerous methodologies, or creating methods almost from scratch, for years. You can only do that successfully, and repeat it successfully, with the experience of having first worked with a number of different methodologies. Once you have the background of what works and what doesn’t, you can better judge what is likely to work in any given situation.

That leads to a few issues that we can learn from:

Issue 1: Vanilla Method

My view is that no methodology should be applied to an organisation without some tailoring. Applying the out-of-the-box tasks and activities makes a mess of an organisation, where the objectives become secondary to the method. Part of the skill of any methodologist or senior business analyst is in knowing which parts of a methodology can be removed while keeping the risk to the organisation and its objectives acceptable. Otherwise, we would all be producing documents for every part of the methodology rather than getting any work done.

Issue 2: Inexperience in change

Reading the book doesn’t make you an expert. Methodologies take tact, understanding and planning to implement. They also require extrapolating from what has worked and what hasn’t worked so well. Without that analysis, whether experienced first-hand or learned from others, the method is untested.

Issue 3: Inexperience in business

A significant number of the people I see involved in design methodologies are new to their careers; to any career, in fact. Would I expect them, at the start of their career, to know how HR, OD, IT and legal interact? I’ve been doing this for years and I still have to ask each of my clients what their HR function actually does. I learnt to ask because in some organisations it’s purely advisory, while in others it’s full-stack recruitment, sickness management and training. It’s different in every client (with a few common patterns appearing if you work across enough of them). That’s part of what we need to unpick before we can figure out who needs to be in the room, and I wouldn’t expect that from a relatively new starter.

Issue 4: Underestimating

Not only underestimating the number of people who need to be involved in the change, including taking them off the front line so that they can contribute, but also underestimating the skill set required to facilitate changes through implementation and beyond, or, better still, the skill set required to have the changes graciously accepted.

Issue 5: Learn from mistakes

We all make mistakes; even the best project has had a few errors. Learn from them and take action not to repeat them. If design teams are moved on to new challenges too quickly, they won’t be in a position to receive the valuable feedback they need.

Issue 6: Implement

Design with a view to implementation. Design for design’s sake will only get you so far; most likely your outputs will sit on a shelf. If you’re designing to implement, you’ll think differently about whom to include and how to include them. Keeping implementation in mind forces you to think through timescales (what’s feasible?), the politics and governance, and the people you’re going to affect internally.

Afterword

Think of this article as a first draft. It’s likely that my thoughts will sharpen as I hear of other projects and have further discussions about this concept. If you’d like to discuss it more, get in touch at @alanward
