If you create an appropriate framework, people can understand what to do when you haven’t told them the details.
All too often, organisations define rules that do not need defining. They may choose to set criteria for approvals, or host panels in order to evaluate submissions. A better approach in many cases is to create a suitable framework and devolve the authority to those who need it.
How would we know whether a framework is suitable?
What to do under what conditions
What to do when those conditions aren’t met
What to do when the conditions do not make sense or do not apply
What to do when the process cannot be followed
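The criteria above can be sketched as a tiny rule framework. This is a hypothetical illustration (the rule shape, the £500 threshold and the action names are all invented), not a prescription:

```python
# A minimal sketch of a devolved-decision framework: each rule names the
# condition it covers and the action to take. The framework also defines
# what to do when no condition is met (a default action) and what to do
# when the framework itself doesn't apply to the situation (escalate).

def decide(situation, rules, default_action, escalate):
    """Return the action for a situation under a rule framework."""
    applicable = [r for r in rules if r["applies"](situation)]
    if not applicable:
        # The framework doesn't make sense here: hand the decision back.
        return escalate(situation)
    for rule in applicable:
        if rule["condition"](situation):
            return rule["action"]
    # The framework applies, but its conditions weren't met.
    return default_action

# Invented example: devolved spending approval up to a threshold.
rules = [
    {
        "applies": lambda s: "amount" in s,
        "condition": lambda s: s["amount"] <= 500,
        "action": "approve locally",
    },
]

print(decide({"amount": 200}, rules, "refer to manager", lambda s: "escalate"))
# → approve locally
```

The point is that every branch is defined up front, so the person on the spot never has to guess whether they are allowed to act.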
Let’s have a look at Bobby McFerrin and how he presents the scale.
He creates a framework, defining how we are meant to respond based on the presenting condition in front of us (i.e. where he stands)
He indicates that we’re only interested in the semitones – i.e. we’re not meant to respond to every slight step or assess the accuracy of where he has landed in order to produce microtones.
The framework he has created allows us to very quickly infer what the proper response should be when presented with a new condition (i.e. he steps outside of the boundary).
He builds on our already existing knowledge (e.g. what we’ve learnt in our early years, what we’ve observed from watching other musicians such as pianists, etc) and combines that with what he’s defined so far, so that the audience can infer the next note. Even though he hasn’t actually told us what that note is.
What could you take away from that?
Are there set (often regular) meetings within your organisation? Look at them and see whether their decisions could be devolved to smaller groups (even down to one person) where the rules can be defined.
I’m writing a new book; this will be my second*. I’d written a couple of chapters last week, one of which focussed on how organisations leave problems for their customers to resolve, without thinking of it in those terms. In one chapter, I used the example of “Warning. Hot Water.” signs, arguing that the organisation has decided that, rather than fix the problem, it will leave the customer to work around it. Every day, every time they use the tap. When viewed like that, putting a sign up doesn’t really resolve the problem.
I was in a hotel at the weekend, and I tend to think of hotels as having solved this particular problem a long time ago; it’s usually workplace offices that still have these signs. But enter the bathroom and it’s plastered with “Warning. Hot Water” signs. And the water was seriously hot, close to scalding. IHG will find this out when I review the hotel; it was just one of a list of issues. I travel a lot, and I’m pretty flexible and lenient as far as hotels go. If there’s a problem, as long as it’s resolved, I’m happy. By that, I mean that I recognise there are faults in any system and any organisation, and that’s ok by me. But if it’s a systemic failure, then I’m concerned. This hotel had a number of repeated failings, and a quick look at TripAdvisor shows the issues are not isolated.
At what point does someone responsible for fixing a problem decide that a sign is enough? That the customer can have the problem? Did they work through a customer journey? Did they wonder what it would be like to be tired after travelling, hungry, thirsty, maybe a headache? Maybe not speaking English as a first language. Maybe not being used to English norms regarding taps (plumbing doesn’t seem to be standardised across the world)?
Why would we expect a foreign guest and customer to be familiar with the quirks of our hotel’s plumbing?
Signs such as this protect the organisation. They inform the customer, but they do not remove the problem.
*If you’re curious why you haven’t seen the first book, it’s because I haven’t released it yet. The first draft is ready and I’m taking a short break from it to gain some distance before returning for the final push.
There are a number of company-hosted competitions, events and hackathons, all with the aim of introducing innovation to the host company. I questioned the rationale behind these initiatives in the first part of Outsourcing Your Future; in this post, I want to look at other aspects.
Maturity of the host
When I talk about maturity, I’m usually thinking of the difference between experience and wisdom. Someone can have a great number of experiences, but they may not be wise from what they’ve experienced. Similarly, an organisation that is mature in age is not necessarily mature in its capabilities.
By hosting innovation events, older companies are trying to introduce the capability of innovation into their organisation. It’s a parallel move to that which we saw in call-centres, then contact centres and also in shared services solutions. The company focusses on its core and outsources some standardised capabilities of its business.
In principle, that seems fair, since innovating is just one of many capabilities (we could give it a better name, but it’s still innovation). The bigger issue is that the target of these innovation events is often the core business; something which very few chief executives would ever dream of outsourcing. However in hosting innovation events, that’s what they’re doing; they’re outsourcing the company’s future.
Having read through a number of calls-for-applications and similar invites, plus being familiar with a larger number of events, I see two directions forming.
Rather than the innovation happening on the inside and pushing its way out, the innovation is nurtured on the outside and adopted internally. Or, more often, it meets the resistance of the host organisation and fizzles.
Innovation happens on the outside and is then partnered with, e.g. you keep the external startup external and purchase services from it (which may be viewed as allowing it access to your procurement team, but it’s still money transferring for services). That partnership arrangement keeps the innovation skills on the outside, but gives you the benefit of the innovation for a cost.
Considering the age of many companies hosting these events, they will have rigid governance procedures. Startups, on the other hand, do not. They are more flexible, more able to change direction and quicker to deliver. By allowing other companies into your problem space, you take advantage of their ability to take short-cuts that wouldn’t be allowed in your organisation. Those short-cuts may not be short-cuts in reality; it could well be that your organisation has created obstacles that do not need to be there. However, the result is that the external startup can deliver more quickly than your internal teams. That speed of delivery has value in terms of being able to conduct business experiments and learn from them more quickly.
But as well as being able to take short-cuts, startups can take riskier approaches, which is easy to see when one of the guiding mantras of the startup ecosystem is Paul Graham’s “Do things that don’t scale”.
By hosting innovation events, you’re outsourcing some of your risk management. You allow yourself to focus on the product, not how the product was developed. That doesn’t free you from all responsibility, but it does allow a shift in responsibility at significant points in the development process.
There’s been a growing trend of recognising the concept of technical debt. In the same way that shortcuts or short-term decisions for technology have to be paid back later, there are other forms of debt. I’ve discussed process debt before.
Innovation events, especially sprints, have an element of creating debt. It’s not necessarily bad debt, since the act of bringing people together to progress a common goal has significant value, but the team involved may decide to do something quickly because of the time available. Even if the decision is “I’ll do it in this tool to get it ready by Thursday evening and, if the concept is accepted, we’ll do it properly next week”, that’s still debt. And we’ll see those decisions across process, technology, management structure, job descriptions, skills, stakeholder management, customer engagement, etc.
At the point that you want to bring the innovation in-house, you will have to pay that debt, so where have you found yourself? Did hosting the innovation event outweigh the debt incurred? Sometimes yes, sometimes no.
And that brings me to my last point. I’m struggling to think of – actually, I can’t think of – a single company that has run an innovation event and then openly discussed those innovations a year later. There are companies that regularly host innovation events and there are those starting out in 2018 for the first time. Of those that have hosted previously, none publish what’s happened since. Some do not refer to previous events at all. A few publish what happened soon after the event, but do not follow up with current news reflecting on the value realised through hosting it.
I can think of one company that has benefited from an innovation event from a cultural perspective, being able to expose its wider workforce to innovation by immersing them in a week-long festival. Even in that case, where they openly refer to previous innovations, I do not know which of the innovations are still active one year later.
For instance, I’d be interested to see previous entrants to the event, how they were engaged following the event and what progress has been made up to now.
I can think of one brand-led accelerator, Collider, that does publish details about previous cohorts.
Overall, it looks like the pickings are slim when trying to evaluate the performance and value of outsourcing innovation through hosting an innovation event.
After a late evening fixing a plumbing emergency at home, I’m reminded of the concept of contingency and how it can’t practically be used as a buffer for all non-planned events.
With my current main client, I start out early in the morning, long before most people (or birds) have risen. About an hour before I was due to go to bed early (so that I could wake early), my wife told me we had an issue.
Applying typical contingency management as found in most projects wouldn’t help. That’s the type where a task is estimated to take 2 days, so you add in some contingency for that task. What would have happened in that case? All the tasks had finished; we were effectively waiting for a deploy (ok, it was a deploy of one person to a train, so I’m stretching the analogy a bit).
Applying a buffer contingency may have helped somewhat, but again, the tasks had all completed, there was no buffer to call upon before the deploy.
House alarms and burglar alarms encourage similar behaviours. You only discover that the alarm isn’t working when you come to set it on leaving the house. Again, no typical contingency would resolve the issue.
In the above examples, we’d usually have to add more time for the journey in the morning (possibly even travel the day before). But the more we do that, the more ridiculous the timescales become, and the demands on those involved become more exorbitant in order to accommodate any issues. Even with buffer management, at some point we’ve passed the point where the buffer can be applied.
The only alternative that I’m aware of for this type of issue is preparation; it becomes more about damage limitation. So in my example, have I prepared for what I would do if I have to remain at home, if the train is delayed or cancelled, if my taxi doesn’t arrive on time, if my car doesn’t start? Those are the failure modes, and they can be explored using an FMEA matrix, for example.
From that perspective, contingency isn’t just a buffer (whether applied to a single task or to the project); it’s a behaviour, and it’s planning for the what-ifs. It’s about ensuring that you know what to do, have the resources to do it and can execute in the time required, whenever it happens.
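The preparation approach above can be sketched in the style of an FMEA matrix, where each failure mode is scored for severity, occurrence and detection, and the risk priority number (RPN = S × O × D) guides which mitigations to prepare first. The rows, scores and mitigations below are invented for illustration:

```python
# Sketch of an FMEA-style table for the early-morning journey. Each failure
# mode gets severity, occurrence and detection scores (1-10); the risk
# priority number RPN = S * O * D ranks which mitigations to prepare first.
failure_modes = [
    # (failure mode,             S, O, D, prepared mitigation)
    ("train delayed/cancelled",  7, 4, 2, "know the next two departures"),
    ("taxi doesn't arrive",      6, 3, 3, "have a second firm's number"),
    ("car doesn't start",        8, 2, 4, "keep the taxi option open"),
]

# Highest RPN first: these are the failures most worth preparing for.
for mode, s, o, d, mitigation in sorted(
    failure_modes, key=lambda row: row[1] * row[2] * row[3], reverse=True
):
    print(f"RPN {s * o * d:3d}  {mode}: {mitigation}")
```

The value isn’t in the arithmetic; it’s that each row forces you to decide, in advance, what you would actually do.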
Testing is not just for software, but for the business processes, organisation or service that you’re implementing.
I’ve seen many test routines that are too artificial, too removed from the reality of what the users will go through. Fortunately this factor has improved over time, especially with more focus on user stories.
Let’s consider one of the best examples of testing I’ve ever seen. Guitar amps are generally fragile. They’re usually robust enough for scrapes and minor bashes as you’re carrying them through doorways, but they don’t survive being dropped down stairs very well.
One amp manufacturer had a test routine of removing the glass valves (they’re replaceable consumables) and then throwing the test amp from the roof of the building to emulate the journey that some amps go through. On the ground, they inserted valves and powered it up to see if it would work.
How does that compare to your test routine? Is yours as accurate to the reality that it will be used in?
I heard this quote the other day, but I didn’t catch who originally said it.
Art requires rigour, science requires creativity
The first point is that it runs contrary to the standard view. The second is that both perspectives are valid and that there shouldn’t be that much of a difference between them.
It then made me think of typical transformation programme roles and the relation between creativity and rigour. Most roles have a balance between the two, with that balance changing according to the standard role and, at times, according to the demands on that role.
For instance, process analysts should generally follow a set of standards. Business analysts have to be more creative, but still have methodologies to follow. Service designers have less rigorous methods, usually a composition of tools and techniques rather than the standardised methodologies of previous decades. At the more rigorous end, project managers have their methodologies and frameworks to follow, while programme managers see a wider scope and have more creativity in organising the interdependencies. That fits nicely with my usual comment that a business architect has more in common with a programme manager than a project manager; there are more skills in common, even though the professional methods involved are different. The business architect has to know when to be standardised and when to be creative, with the flexibility to modify the approach to suit the needs of the client, depending on the stage of transformation.
Once again, I’m using Archi (the Archimate Tool) with the ArchiMate modelling language.
OKRs do not cascade
Just because the diagram depicts a hierarchy doesn’t mean that the objectives cascade down the organisational hierarchy. Following the logic in OKRs don’t cascade, I’ve taken the approach of depicting the hierarchy, rather than how that hierarchy is achieved. In the article, Felipe mentions that objectives should not be cascaded down the organisation; instead, objectives and key results should be discussed and agreed at each level. The resulting picture is the same either way, but the content of the objectives and key results may differ depending on the route.
Depending on the level of the organisation, many of the components that achieve an Objective will not be Key Results, but instead will be lower level Objectives (e.g. of the next team down in the corporate hierarchy or downstream in case of a flatter hierarchy). The diagram allows both Key Results and Objectives to form part of an Objective.
Modelling Goals and Objectives
Key Results have been modelled as Outcomes. Objectives and Contributing Goals (lower-level Objectives) have been modelled as Goals. In doing so, I’ve allowed for a hierarchy of Objectives to fulfil the concept of Contributing Goals. Had I gone with a model of Objective = Outcome, we would have seen a model of hierarchical outcomes which would not have made as much sense, especially to those having to achieve those outcomes.
From the perspective of Business Architecture, I’m interested in the alignment of actions to the overall vision. I like to see a clean line connecting actions of the workforce to corporate objectives to vision. Many organisations suffer because the objectives are cascaded down rather than agreed at each level. Combining OKRs with a culture of joint-goal setting has a good chance of resolving that core issue.
Notes about the diagram
The content is fabricated; completely artificial. I haven’t populated every single branch, but enough to indicate what could be captured. For those areas that I did populate, I kept to the concept of 3 key results per Objective, of which any of the Key Results can be replaced with Contributing Goals. You can flex that as you wish.
I’ve created a tiny environment in which the OKRs operate, featuring an internal driver for change, an external driver, the assessments for both and the corporate vision and missions.
The interesting concept for me regarding business motivation is that the diagram is agnostic of the organisation structure in that it doesn’t indicate which team or who is responsible for achieving which objectives or key results. I’ve done that on purpose.
Imagine a typical organisation of 400 people. Each of those 400 named individuals could have Key Results to deliver. Some of those Key Results would contribute to team Objectives. Some of those team Objectives would coalesce to fulfil higher-level Objectives, and so on. That’s the bottom-up picture.
The top-down picture is that the strategy needs to pervade the organisation and steer the choice of actions and the delivery of those actions. At the top level, the objectives may be independent of who is going to deliver them, but shortly thereafter the key results or contributing goals would have to be assigned. And it’s likely that they’ll be assigned to relevant directors (in the case of stretch targets and keeping the operation running) or delivery teams (in the case of changes). However each of the delivery teams should have a sponsor. It’s that sponsor that’s actually accountable in this case for the delivery of the key result, whereas in many organisations it would be the delivery team.
Overall, OKRs force a concept of personal responsibility or rather, a concept of personal accountability if we follow a RACI model. For the majority of a workforce, the individual is likely to be both accountable and responsible for their key results.
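The structure described above can be sketched as a small data model, assuming (as in the diagram) that an Objective is fulfilled by components that are either Key Results or lower-level contributing Objectives. The objective names are invented for illustration:

```python
# Minimal sketch of the OKR structure: an Objective is fulfilled by
# components that are either Key Results (modelled as Outcomes in the
# diagram) or lower-level contributing Objectives (modelled as Goals).
from dataclasses import dataclass, field

@dataclass
class KeyResult:
    description: str

@dataclass
class Objective:
    description: str
    components: list = field(default_factory=list)  # KeyResult or Objective

    def key_result_count(self):
        """Count key results across the whole hierarchy, recursively."""
        total = 0
        for c in self.components:
            total += 1 if isinstance(c, KeyResult) else c.key_result_count()
        return total

# Invented example: one corporate objective with a key result and a
# contributing objective agreed (not cascaded) at the level below.
corporate = Objective("Grow the customer base", [
    KeyResult("Increase retention to 95%"),
    Objective("Launch in two new regions", [
        KeyResult("Open region A by Q2"),
        KeyResult("Open region B by Q3"),
    ]),
])
print(corporate.key_result_count())  # → 3
```

Note that the model, like the diagram, says nothing about which team owns which node; ownership is assigned separately.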
What I haven’t addressed is the non-aligned use of OKRs, e.g. allowing or encouraging the setting of key results that do not fit with corporate objectives.
Ask anyone who’s been involved in any significant implementation and they’ll have come across the waterfall approach. It typically leads to a contractual relationship between one team who are working on artefacts that are then handed over to a subsequent team. While the flaws of waterfall have been well-documented, this concept of contract versus collaboration extends to many areas of work.
Let’s use a brief story as an analogy for the concepts of contract and collaboration. It’s an incredibly simple story, but even with the simplicity, we can see the complications that can arise from a contractual relationship.
My wife and I walked into a coffee shop. I was left to order the drinks at the counter, so I asked what drink she wanted.
“Mint tea please”
She orders a lot more hot drinks than I do, so she’s more familiar with the script that we all follow in coffee shops. And she definitely knows I do not order as many drinks, so she’s aware that I’m not that familiar with the standard scripts. In fact, I order hot drinks (maybe once a year) so rarely that it’s almost a new experience every time.
So I’m at the till and I’ve asked for mint tea. I hear in the barista’s response that it’s blended mint tea. So I’m then thinking:
Is blended mint tea acceptable?
Are there other teas in this coffee shop which are more acceptable?
I decide it’s acceptable.
I’m then asked what size. How would I know? I was just told mint tea. And I know for a fact that my wife chooses different size drinks, but mostly takes the larger options in general.
So I ask for big.
Not good enough; it’s one of those shops. They have small, large and grande (or some phrase like that). So which is it: big = large or big = grande?
And what happened to medium? (but that’s another story)
I choose large.
Then I’m asked take away or stay in. I thought I’d already asked for the tea to go, but no worries; it’s a busy shop with background noise, so I say takeaway because I know the context.
I’m then asked if I want one teabag or two. Woah, where did that one come from? It’s a cup of tea, a bag goes in in order to flavour the hot water. The longer you leave the bag in, the stronger the tea. Mint tea works the same way, doesn’t it? So what would be the advantage of two? Rather than ask what that benefit would be, I chose one since I hadn’t seen many cups with two bags before.
Seeing the way the interaction was heading, I waited for the “do you want extra water with that?” question. Fortunately that didn’t come.
On being given the takeaway cup, I notice it’s hot to the touch, but it doesn’t take a genius to realise that. So I look for the holders and place one around the cup.
That coffee-shop interaction was a contractual one. It depicted a scenario where one person (my wife) stated her requirements, which were then interpreted and delivered by another person (me).
Although it’s a very simple scenario, it highlights how much needs to be known to fulfil the customer’s expectations.
There were numerous attributes that had to be chosen in order to complete the transaction and deliver the request:
Feature/Product = Mint Tea
Variety = Blended Mint Tea
Size = Large
Number of teabags = One
In or out = Takeaway
Temperature to hold = Need a holder
My wife only stated one of those. I, as the contracted partner, had to answer the rest: variety based on knowledge, size based on estimate, number of teabags based on memory, takeaway based on context and joint understanding, holder based on real-world experience.
That ratio isn’t uncommon with any set of requirements. No matter how well or how detailed you define your requirements, there will always be questions that need to be answered. If you’re not there to answer the questions, then it introduces a delay or it introduces a risk of diverging from your (unstated) requirements.
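The stated-versus-inferred split can be made concrete with a small sketch; the values mirror the coffee-shop story above:

```python
# One attribute was stated by the customer; the rest had to be inferred
# by the intermediary from knowledge, memory, context and experience.
stated = {"product": "mint tea"}

inferred = {
    "variety": "blended",   # based on knowledge
    "size": "large",        # based on estimate
    "teabags": 1,           # based on memory
    "where": "takeaway",    # based on context and joint understanding
    "holder": True,         # based on real-world experience
}

# The complete transaction needs every attribute answered.
order = {**stated, **inferred}
ratio = len(stated) / len(order)
print(f"{ratio:.0%} of the order was actually specified")  # → 17%
```

Five of the six attributes were guesses; each guess is a point where the delivery can silently diverge from the (unstated) requirement.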
Now, imagine the same scene, but with my wife also involved. The barista would be able to ask her directly, or at worst, ask me to then ask her (e.g. if she’s had to sit at the table rather than stand at the counter). While the latter scenario introduces some back and forth, it’s still more timely and risk-reducing than me guessing and getting the order wrong.
That’s the situation that occurs in many organisations. An artefact is commissioned, defined and passed on to another group, who digest the documentation and then ask questions. And it takes time to progress through this rigmarole.
While I don’t think the people involved must be present in the same room all the time, they do have to be able to communicate in a way that doesn’t lead towards a waterfall approach.
So the default position could then become one of collaboration rather than contract, with those involved working together to define and review. Now while that concept is well-adopted in more agile organisations, those organisations that have remote development teams can struggle with some of the implementations.
However, it’s not just IT development. The concept of collaboration should be present for any change, any time that an organisation progresses, especially within the organisation itself. And that’s where a number of organisations fail.
A number of years ago, I was transforming a city’s social care directorate and, as part of that transformation, we aimed to reduce the time it took to do anything when interacting with the service. The transformation was based on a more fundamental need to free up workers to be able to do the work they were meant to do rather than having to fight the fires caused by delays and resulting failure demand. I instigated a methodical approach for identifying which cycles to focus on first. As the team progressed through the cycles, I noticed a pattern; it’s the spike of activity followed by a lengthy delay as discussed in a previous article.
As we looked in particular at a few cycles of spike followed by a delay, I routinely advised the team to question the need for that common feature of bureaucracy: the signature.
Why require a signature?
In the case of social care, signatures are often required from service users or their representatives. This can be as proof that the content of a form is accurate or as a record of the service user providing consent (either for data to be shared from the form or for the authority to request data from other agencies).
These signatures create the spike-delay pattern, in which a short spike of activity is followed by a lengthy delay while the authority sends the form to the customer and waits for the return of a signed copy. Part of that delay is the postal service in both directions. Part is the time it takes the service user to open the letter, read the form, make amendments, find an envelope and stamp, and then go to the post box. Considering the high percentage of infirm service users compared to the general population, that sequence of activities can take a long time. Then there is the additional wait caused by processing the response as it arrives back into the authority.
So, my first instinct is to remove the need for a signature and thereby remove the spike-delay round. The form could be changed from requesting a signature to stating how the data will be used: if you don’t agree, don’t submit the form. The response from staff was that we needed the signature as a record of consent and/or accuracy, depending on the form in question.
On the face of it, that seems a reasonable and fair response.
What does the evidence say?
However, the data showed a different reality. Even if the form wasn’t returned, the process could still go ahead. True, it didn’t go ahead for every service user, but the fact that it could proceed implied that the signature wasn’t required all of the time. We were able to look at the data to understand how many service users progressed without a signature, to look at common characteristics, and so on.
By presenting this understanding back, we ended up moving forward in our joint understanding of the process; joint in that the consultant and the team had the same understanding. Before that point, they had had different interpretations.
So where does that leave us?
An undocumented process or exception is a risk. In the above case, we had uncovered that some of the cases were allowed to progress without signature, but there was no documentation defining which cases could proceed and which cases had to stop. Instead it was left to individual judgment, but again without defined criteria. So what happens if the usual staff members weren’t present? Were the decisions they made equal and equitable to all involved? How did we measure the outcomes?
Depending on the type of organisation and service involved, there will be a different focus regarding the risk involved.
In this case, we had a process with an unclear gateway, e.g. do we continue or do we halt and wait?
Complete the analysis in terms of understanding when the process can continue.
Engage with service users to understand what they need out of the process, what their engagement should be
As a team, choose a default option, either they progress by default or they pause by default
Help the team define the rules that govern the exceptions
Implement a training and induction programme for ensuring that everyone knows how to apply the rules
I always prefer the default option to be the one that improves efficiency, e.g. the one that’s the most common option or the one that removes a spike-delay pattern.
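The gateway and default rules described above can be sketched as a simple decision function. The criteria here are invented for illustration; they are not the actual social-care rules:

```python
# Sketch of making the gateway explicit: cases proceed by default, and
# defined rules name the exceptions that must pause for a signature.
# The criteria are hypothetical, chosen only to show the shape of the rules.
def gateway(case):
    """Return 'proceed' or 'pause' for a case under explicit rules."""
    needs_signature = (
        case.get("consent_to_share") is None      # consent never recorded
        or case.get("disputed_content", False)    # accuracy is contested
    )
    return "pause" if needs_signature else "proceed"

print(gateway({"consent_to_share": True}))   # → proceed
print(gateway({"consent_to_share": None}))   # → pause
print(gateway({"consent_to_share": True, "disputed_content": True}))  # → pause
```

Once the rules are written down like this, the decision no longer depends on which staff member happens to be present, and the exceptions can be counted and reviewed.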
The wider understanding that, in most cases, the signature wasn’t required led us to a better solution. Had we not challenged it, either with data or further questioning, we would have been left with the difficult situation of missing signatures stopping the process, and the resulting action of requiring signatures in order to proceed. Instead, by challenging the assumption and developing solutions to the spike-delay caused by several signatures across a sequence of documents, we were able to reduce the expected time from 6 months to just over one day (actually 2.5 completions per week). That’s a massive difference in expectations for the customer and the organisation serving them.