There’s a growing trend in organisations to outsource their future through innovation labs and innovation competitions.
I like to question the rationale behind these decisions and look at the host company more closely. After all, what is behind its decision to handle innovation from outside-in, rather than inside-out?
So let’s explore. Why would an organisation decide to innovate from outside?
Because directors/senior management do not believe the company holds the skills for innovation within itself
Because it doesn’t actually hold the skills for innovation within itself
Because it doesn’t have a culture that nurtures innovation
Because it doesn’t have a financial model that permits innovation
Because it wants to perform innovation theatre
Let’s look at each of those in turn and see whether an innovation competition is the best approach.
1. Because directors/senior management do not believe the company holds the skills for innovation within itself
This is an issue of perception and/or distance. Perception, in that the skills exist but are not exposed in a way that lets directors see they are there; maybe those skills are hidden away performing other tasks. Distance, in that the directors are too remote from the sources of innovation in the company. In this case, outsourcing innovation is likely to meet internal resistance from those who have the ideas, the appropriate approach and the behaviour, but are not recognised.
2. Because it doesn’t hold the skills for innovation within itself
In this scenario, the host is attempting to outsource the provision of innovation skills. However, it’s only sourcing them for the life of the innovation programme, typically an accelerator. It may want to borrow innovation skills from the startups it works with, but the issues are often more fundamental than that, requiring a change in culture rather than just skills. And a change in culture may require fresh blood.
3. Because it doesn’t have a culture that nurtures innovation
By mixing its own employees with those of startups, the host organisation hopes to have some of the startup culture rub off on its own employees. This culture transference is fine in principle, but only works for those who are directly engaged. The effect dissipates quickly as those who were engaged re-encounter the culture of the host organisation, especially if that organisation has severe governance procedures.
4. Because it doesn’t have a financial model that permits innovation
Spending money on an innovation programme is a known cost within standard parameters. The host organisation can commission the accelerator or competition under its in-house business rule policies. Whereas if the same business case authors had presented individual and separate innovations to the same approval board, they may have been rejected due to the differences between innovation accounting and more traditional financial accounting.
5. Because it wants to perform innovation theatre
I’d like to think that innovation theatre is a product of accident, in that I’d hope no organisation sets out with the express wish, whether as a vision or other goal, of performing innovation theatre.
Assuming it occurs by accident, we find examples of idea generation, possibly in the form of an internal staff panel competition (think Dragon’s Den/Shark Tank), a 12-week incubator or a hackathon. How many of those innovation events result in real, lasting change of the magnitude predicted during the innovation session? Or do they fizzle when they encounter the host organisation?
I read Vim‘s article on What Does Failure Mean for Public Services and wanted to respond, building on Vim’s thoughts from my own perspective. I’ve developed that perspective over a couple of decades working across front-line and supporting teams, transforming workforces in the public and private sectors. That means balancing many different levels of change (including success and failure): one conversation might discuss funding allocations over £100m, the next attitudes to change, and the one shortly after that a very local decision such as choosing the ideal location for a printer.
It’s mostly the same
We should see failure in the public sector in the same way that we see failure in the private sector, with one not-so-subtle difference: the public sector is there to make a difference to the population. Pick a public sector service: if it’s not making a positive difference to the population, then it’s failing. I can’t think of a simpler definition. Every other private sector metric (perhaps with some tailoring in the case of profit metrics) should apply to the public sector.
Unfortunately, the more we pick it apart, the more difficult it becomes to define failure. And most of that difficulty comes from the difference between providing for the population and providing for an individual.
Most of the failure is seen at the individual level, but not extrapolated quickly enough to realise that the service is failing. Most of the success is also seen at the individual level but we don’t really celebrate these as much unless they’re specifically health-related. For many services, we only notice when they go wrong. For instance, how many local authorities celebrated 0 people in the queue for social housing back in the 70s when it was feasible? Perversely, we may be able to achieve 0 in the queue now, but that could be because eligibility thresholds have risen. It’s not the same service or the same level of service anymore.
At one extreme, we see the death of an individual; we see one person homeless. Then we see multiple people homeless, congregating together, but it takes longer for the social consciousness to register the deaths of an increasing number of individuals. All of this could be failure.
Criminal or Incompetent?
“He’s either criminally incompetent or incompetently criminal”
It’s a phrase I heard years ago from a charitable organisation that raised no money after holding an event where lots of people attended and money changed hands but no profit was made to turn into funds for the charity. We’re talking small money here, margins were very tight, but even so, no-one quite knew what happened.
That’s partly how I think about the systemic failure within public services although I’ll broaden the definition of criminal to include unethical, immoral or against the mass of service users you’re meant to be serving. When a service is failing, I wonder where the decision was that caused it to fail. Was someone competently unethical or incompetently ethical? Competently immoral or incompetently moral? Bear in mind that leaves out the options of competently ethical (where they’ve chosen to improve services and made it happen) or incompetently unethical (where they tried to restrict services but enacted it poorly).
Was it an implementation or management decision by the team manager to assign a lesser-skilled worker to the case where a more experienced one was required? Implying incompetence on the manager’s part.
Or was it because there wasn’t enough money to pay for more experienced workers, resulting in only newly-qualified workers being available? Implying a deliberate decision to underfund on the funder’s part.
Did the funder underfund because they’d allocated more funds to other services? Implying incompetence on the funder’s part.
Or did that funder not have enough money to distribute to the services because of a reduction in centralised funds, e.g. from central government? Again, implying a conscious decision to underfund numerous services.
Public services are funded to meet the demand that’s expected to present to that service, e.g. through referrals from other services and bodies, through walk-in or through outreach (where the team goes out educating the population on the service available). It’s always a balance between who is at most need of the service, the funds available, the skills and experience of the team available and the time available to respond.
Considering public services have a duty to provide for the population, if a service cannot meet the demands placed upon it, who is criminal and who is competent/incompetent?
The manager should understand the costs of the service and how they vary with the demand being presented. At the point that the service becomes underfunded, it’s time to shout; for many services, that point was passed many years ago. Underfunding results in some people not being served, or in a reduced quality of service (in terms of what can be provided, e.g. the duration of engagement, such as the number of CBT sessions). That then has a further human impact: people become homeless or fall into debt, which leads to decreasing health, which leads to an inability to work (though possibly not recognised as inability). Even one day of no service provision can escalate quickly, exacerbated by the climate of mistrust and unbalanced power between the services holding funds and the people applying for them. That one day can mean missed benefits and deteriorating health (having to choose between rent, heating and lighting bills, feeding children and self, getting to a job interview, clothing, etc.). So underfunding a service so that it can’t provide for all it’s designed to serve has a cascading effect on the system, through shifting referrals elsewhere or to a position of no services available, and a cascading effect on the individual.
When viewed that way, is the funder criminal (or at least unethical or immoral) if they don’t fund the service?
The issue, and the opportunity, over the last couple of decades has been the inefficiency inherent in the public sector system. Public sector services do not get the same level of investment as the private sector. A telco can choose to spend multiple millions of pounds on a transformation programme and it will happen, with no questions (or at least no scrutiny beyond board approvals and monitoring). A public service has to jump through many hoops (each costing time, effort and money) to prove it’s spending the money wisely. So public sector transformation programmes usually start smaller than their private sector counterparts, to make the programme easier to approve, and end up smaller still after being watered down through many approval boards. Each of these transformations leaves an effect: usually positive in terms of efficiency, but often negative in terms of morale and the capacity to flex for the next transformation.
There is still room to go in terms of efficiency. There are still pockets with severe inefficiencies, but they’re rarely on the front-line teams to the scale that’s expected. And it’s these teams that are usually the focus of funding pressures, especially in response to changing demographics, e.g. people living longer and living with more serious needs.
In addition, services need active maintenance, to some extent in the same way that you take your car in for regular maintenance. However it’s more than that. Active maintenance is not simply day-to-day management and keeping it running. It’s observing the service from multiple angles to understand what’s happening that shouldn’t be, to uncover why it’s happening and to resolve it so it doesn’t happen again. That takes an investment of time and energy.
In most public sector hierarchies that responsibility falls to the manager. The better managers (there are a few of them) have empowered their team to do this daily. They’re succeeding in keeping the service to acceptable levels (although still probably underfunded to do the job they were originally tasked to do) and keeping ahead of changes in demand. Then there are others who are just managing the day-to-day or take on adapting to change themselves. Even if competent as day-to-day managers, they’re incompetent overall since the service remains static.
Vim mentions that “Failure in the public sector is also rooted in a culture that means you can’t fail”. The issue is wider than that. It’s already failing. It’s already underfunded. Austerity or not, there isn’t sufficient money to meet front-line services at their current level of demand in the way they currently work. Asking a team to be prepared to fail is an awkward request since, in their hearts, they’re already aware of the people they’re not able to help. Most of the professional health colleges put a focus on treating the person in front of you, not those in the queue later on: give proper treatment to the person you’re currently treating. In a throughput setting, such as a hospital ward with a flow of patients in and out, that makes sense. In a setting where you have a caseload, such as most social care settings, that makes less sense overall. The opener to this conundrum of supply and demand is that we may be able to help more people, and help them better than now, through experimentation. And that has to be allowed to fail.
Even with that opener, bear in mind that there are ethical considerations in most public sector departments, especially those in education, health or care settings. The Authority has a duty to treat everyone from an equitable position, not necessarily equally. So it can’t create an experiment that disadvantages a customer segment. This can happen inadvertently: by promoting one customer segment’s needs, the experiment alters that principle of equitability. So by improving the service for one segment, it can’t make the rest of the service worse; that would also widen the gap between the treatment of segments. That’s not a blanket “no”, just be prepared to think it through and complete an Equalities Impact Assessment before you start.
I’m divided on this, but lean towards only brief training: just enough to inform them, rather than enough to practise.
On one hand, it pays to understand why change in general is necessary and specifically, why the change that you’re about to implement is necessary. Often I see professionals who spend time with the person sat in front of them (and so they are patient-centred) but ignore the mass of people also requiring the same service. It’s not that they can’t see the queue (whether a real standing queue or a waiting list), it’s that if they recognise the queue then they realise that they can’t serve everyone to the same level. For some, it’s a question of professional ethics, where their professional body demands that they treat the person in front of them to the best of their ability, regardless of the needs of others. There are good reasons for that approach.
Usually, someone, e.g. a manager or budget holder, recognises the capacity issue and so increases the eligibility threshold or reduces the professional time available for that treatment. This is an attempt to average it out. However, it misses the point that some treatments take time to work: if you halve the time available, you may get zero results, not 50% of the results you’d have got by allowing the time for full treatment. It also leads to a worsening service as capacity is further reduced through a series of cuts, so that wouldn’t be the answer we’d choose given a choice.
More fundamentally, the communities that the local and regional health providers serve are different to those that existed 30 years ago and the changed communities have different needs. So, it seems obvious that we have to adapt the service to meet the changed needs.
On the other hand, the health professionals are just that; professionals in health. There will be some with additional skills; some complementary, some tangential. I wouldn’t expect health professionals to be experts at change. However they do need to be aware of the change and why they have to contribute. As do we all, no matter what job we perform, no matter which sector we work in.
By recognising the above issues, we can more easily understand why we have to change continually. It’s a matter of adapting to needs. However, that doesn’t feel like it requires formal training in the guise of a university module; more an hour or so during induction, combined with some questions during the interview to assess the candidate’s attitude to change. I expect the professionals to know the service best, so they should be best placed to change it, rather than having budgetary changes applied without thought to the impact on patients.
To get this message across and gain acceptance and commitment from the group, I usually go through the need for change at the start of any change programme and definitely before each intervention.
One area where I think some training could be useful is in negotiating and debating how services will change. The changes will happen, but being able to influence the changes could be invaluable. Oddly enough though, it’s probably not the health professionals who need the training, instead it’s for anyone who’s trying to change the service, e.g. performance improvement staff, HR/OD, commissioners, etc.
I think the reason for my varied opinions above is that I see a difference between management and change management. I acknowledge that management techniques should be taught in advance, as should the broad concepts of change management, whereas the required, more detailed parts of change management can be taught as needed.
I don’t believe we should conduct changes without speaking to the end customer. Taking on the role of patient, I’d much prefer the consultant to have spent their time learning how to treat patients, rather than learning how to manage change. Let’s permit some degree of functional specialisation, with front-line professionals continuing to be good at what they do and change professionals helping them create/design the service that the patients need.
However, I recognise that many front-line professionals either don’t have access to change professionals or do have access but that they’re not listened to. Hence the need for a book that’s applied for front-line staff.
I heard this quote the other day, but I didn’t catch who originally said it.
“Art requires rigour, science requires creativity”
The first point is that it’s contrary to the standard view. The second point is that both perspectives are valid and that there shouldn’t be that much of a difference.
It then made me think of typical transformation programme roles and the relation between creativity and rigour. Most roles have a balance between the two, with that balance changing according to the standard role and, at times, according to the demands on that role.
For instance, process analysts should generally follow a set of standards. Business Analysts have to be more creative, but still have methodologies to follow. Service Designers have less rigorous methods, usually a composition of tools and techniques rather than the standardised methodologies of previous decades. At the more rigorous end, project managers have their methodologies and frameworks to follow. Programme managers see a wider scope and have more creativity in organising the interdependencies. Which fits nicely with my normal comment that a Business Architect has more in common with a Programme Manager than a Project Manager; there are more skills in common, even though the professional methods involved are different. Which leads me to the Business Architect, who has to know when to be standardised and when to be creative. There has to be the flexibility to modify the approach to suit the needs of the client, depending on the stage of transformation.
Some organisations have a different approach to how they handle the status of a document. The approach belies a more fundamental culture of how work is commissioned and reviewed and how staff are viewed.
One of my clients exhibited odd behaviour regarding commissioning work and approving it.
Due to the nature of the engagement, decisions were made by me and then relayed to the client. That almost unilateral form of decision-making has not been the norm for my engagements. Instead, I’d have preferred the actions to be reviewed while I was working on them (rather than after the fact). It was all a bit backwards compared to any other client engagement, where we would address scope questions early on and progress from that more detailed, joint understanding.
Even though I was assessing business capability maturity, it felt contractual. I would have preferred a more collaborative approach, but the organisation’s approach to generating change was a contractual one. It’s an issue I’ve seen before but not as stark as with this client.
What I’d noticed with this client, was that if a document were released (no matter what version or draft status), it would be treated as final and published. The review comments would imply that the author had made mistakes and that it should never have been released in that format. Fortunately that didn’t happen to me, but that’s probably more to do with how I released documents. My documents had the same version control I’m used to including with many clients and consultancies. Draft documents (assuming little or no sensitive content) are published early to the intended audience for review, in order to influence the outcome and content of the report. The more sensitive the content, the more restricted the initial distribution and the earlier that guidance is requested.
With this particular client in mind, that approach raised conflicting issues. The reviewers wanted to be able to influence the outcome of the commissioned work, due to their political status within the organisation. But they wanted to meet as a group to review each version: not as a steering group guiding the work to completion, but as a review panel.
I had to tread carefully as to what documents I would release to anyone, regardless of draft status. While I’m used to not initially sharing electronic versions of documents with some clients, it was more important with this client. It created an odd culture, where people would complete work before releasing it, which then created rework and longer delays due to having to fit in reviews and changes.
Perversely, it also created a set of behaviours where many documents never reached a true state of finalisation or approval. Instead, they continued in some draft existence until ignored or replaced. That was a common occurrence, where I’d be looking for a previous strategy document, to find that it never reached completion, but became generally accepted as defining a destination or discarded. However, there had been no formal acceptance or rejection of the content, just a tacit decision across many people.
Reverse-Engineering the Culture
I think that much of the commission and review behaviour occurred due to the hierarchical nature of the organisation. That culture enforced a situation where superiors reviewed the output of their underlings. Couple that with an admonishing culture, rather than a praising culture, and you end up with a situation in which documents have to be final, or the critique becomes more about the person than about the document itself.
This was more than just a client seeing a document and then acting on it, treating it as a final document, e.g. to assist with negotiation or alter their position within the organisation. This was a systematic approach to not adhering to how artefacts are created and developed through to release and acceptance.
There was a severe hierarchy in the organisation, where one grade couldn’t comfortably jump a grade when communicating; instead, everything had to be passed up or down the chain. While organisations can work like that, many adapt and maintain lines of communication with the flexibility that operating in any modern market demands. This organisation did not flex, and those that did were generally put back in their place.
All of this led to gross inefficiencies and confusion due to navigating the corporate hierarchy. I’ll reiterate, the concept isn’t rare, but the ultra-strict adherence to hierarchy is rare.
This particular client compounded the inefficiencies of the hierarchy with the inefficiencies of poor document version management (or rather the document acceptance process), resulting in an intricate, exhausting dance of what to share and what not to share, who to share it with and who not to. All of this encouraged and promoted a contractual culture rather than a collaborative one.
As news increases of looming bankruptcy for Gibson, the guitar manufacturer, I’m left to wonder what happened. How did such an iconic brand end up in such a situation?
Gibson is iconic. Ask anyone to name a brand of guitar and it’ll be the answer roughly 50% of the time. I’m reminded of Harley Davidson (quick quiz: can you name another brand whose customers tattoo the brand name on their body?). Maybe Gibson fits more closely with Jack Daniels. Both represent a way of life, on the edge of 50s/60s rebellion, now more refined.
So what did happen?
I think of Rover, the once British car manufacturer, primarily based at its Longbridge plant, Birmingham. At one point, it was close to leading the world alongside other main brands, such as GM, Ford and Fiat. But over time, a lack of focus resulted in cars that were of lower quality than those coming from elsewhere. Living in Birmingham at the time, and being subject to a number of Rover cars as fleet hire cars, I saw first-hand the issues with the brand.
Being at the head of the pack, it was easy to pick on Gibson. They were some of the most expensive factory-built, mass-produced guitars on the market. With that price tag, you would usually expect the best quality. Perhaps the best quality you can afford before having to stretch for hand-made guitars. But Gibson was releasing guitars from its plants with issues. Many of the issues may have been minor, but from a customer perspective, the severity of a minor issue inflates quickly in line with the cost of the guitar. So even a small issue on an otherwise-perfect guitar would lead to complaints.
Gibson suffered from being an old brand at the advent of the Internet, specifically newsgroups and forums where users could complain about their new purchases. Anecdotal of course, but I can’t remember seeing a good post about 90s/00s Gibsons; all the posts were negative. For every post asking about Gibsons, there would be a number of responders pointing out how their guitar had issues, or how they’d avoided them based on having tried a few Gibsons in the past.
On the other side, Gibson benefitted massively from the increased reach that the Internet allowed them. Not solely direct advertising, but MTV + Internet showcased Gibson guitars in the hands of idols. This was back in the time before musical instrument sales had dropped significantly.
Inflation and Asian Manufacturing
The cost of guitars, especially US-built guitars, was rocketing. The stronger the US Dollar, the more expensive the guitar for purchasers in other countries. These price increases also impacted the cost of labour, with build costs increasing along with living costs (though not necessarily equally). There’s nothing special about that; it’s common to many industries.
Back in the 1970s, Japanese guitar builders started building US designs (sometimes under licence and at the request of the US manufacturer). The factories then had the ability to build their own brand guitars or for more local brands. This resulted in “lawsuit guitars” and a lot of mystique associated with them.
Following Japan, we saw Indonesia increasing the quality of basic guitars. It reached the point in the late 90s where many starter guitars were sufficient for a lifetime. Similar to the concept of quick-and-dirty solutions being good enough, we started to see guitarists settle for cheaper guitars because the gap between the one they had and the next guitar up in quality was too great.
Following Indonesia, we began to see new successful plants in India and China. Each reducing the cost of production, but also increasing the overall quality floor, i.e. the basic guitars were improving.
The used instrument market has changed. eBay, Reverb, Craigslist and other sites and apps allow easier access to buy and sell guitars. Every year, new guitars are produced, but not as many guitars are destroyed. So overall, the guitar population increases year on year. That means that for a purchaser, there are now more used guitars to purchase from than new guitars.
That becomes interesting for a few reasons:
Used guitars are usually cheaper – considering inflation. Similar to most industries, the original purchaser has taken the bulk of the depreciation.
Gigging guitarists use guitars – so are less worried about wear and tear than in other sectors
Used guitars could come from an era with a better perceived quality
Used guitars could be more iconic – representing the guitar sound from a particular era
From my own experience, having bought new and used, I only buy used now. There are enough guitars in the world, I’m sure I can find a used one to meet my needs.
So if your business model is centred on selling new guitars, it’s going to get difficult.
Good now is better later
Unlike many products, there’s no obsolescence in a guitar. The general trend is the opposite to the trend in technology. Guitars are usually deemed to get better with age. That’s not something I necessarily agree with, but I won’t argue with the market. There are two elements here:
Perception that guitars get better with age
Perception that older guitars are better because they were originally built better
So there’s little point in buying new when used guitars are good enough, if not better. Whether planned or not, technology has a tendency towards obsolescence; guitars do not. So there’s little incentive for a guitar owner to buy a new guitar based on the state of their current one. A guitarist will buy a new guitar when they have outgrown their current one, e.g. if they bought a basic guitar and then wanted to upgrade to a better model. Guitarists can also be fickle and buy based on colour or other specifications, but that behaviour is equal across all ranges and hasn’t changed with time, so it’s irrelevant here.
The protection of certain timbers (and hence the living trees) under the CITES regulations is a red herring for this brand. On one hand, it affects all guitar manufacturers, or at least the majority of them. On the other hand, it does affect margins by increasing costs, and it slows down transactions, with some sellers and purchasers not wishing to go through the hoops necessary to sell a guitar, especially cross-border. Bear in mind that some of the woods covered by CITES were primary woods for guitar-building; it didn’t affect only rare woods. Gibson have had issues with the formality of wood laws, but other than a potential knock-on effect from impacted cashflow a few years ago, I don’t see a major issue here. Like many guitar builders, they are considering alternative woods or sources of wood.
Lack of innovation
My issue here isn’t the lack of innovation at Gibson, but rather the adoption of innovation. Gibson have innovated on the product, featuring the much-maligned robot tuners, chambered bodies, locking sockets and so on. Most of those innovations have been ridiculed as moving the brand away from what it’s meant to be (in the eyes of the customer). Guitarists can be funny about this: they want the guitar as it was built in 1959, or another famous year, complete with substandard components.
With innovation, I usually see types within a company. The first is the product innovation, i.e. what innovations can be added to the existing product line or new products created. That’s where Gibson have gone. The second is innovation in how the product is produced. I’m not sure if Gibson have done this or not.
Investments and Non-Core Business
Similar to other major guitar-building brands, Gibson have entered other related markets and customer segments. They acquired other guitar brands to appeal to other users, including the ability to keep cheaper related guitars under a separate brand, as in the case of Epiphone. Gibson also acquired Cakewalk for its software and have very recently stood down that product line in order to focus on the core business.
The other main divergence from the core is into audio electronics, becoming the majority stakeholder in TEAC and acquiring Woox, Royal Philips’ consumer audio division.
What’s the Answer?
You can see evidence of a few well-tried strategies already in place in Gibson.
In light of reducing guitar sales, it reduced the workforce
In light of producing fewer new guitars, it requires less manufacturing space and so can sell existing plants and search for cheaper sites
In light of reducing revenue and profit in its main revenue stream, it has been diversifying, exploring other related markets. I’m reminded of BATS‘ diversification into more acceptable brands.
In light of reducing profit, it has stopped production on a non-core product (Cakewalk)
All of those are typical strategies, but in a way they feel more tactical, aimed at reducing the current pain without a longer-term direction.
The longer-term direction may be more painful still, if the company is to survive. The guitar-building business and the consumer electronics business are very different. They could sell to the same segment, allowing for some positive combination of cross-product marketing, but they don’t need to be part of the same group.
At some point, we’d have to question how profitable the underlying core of the business is. Simply put, does Gibson generate more money from its guitars than it spends to build and sell them? Then factor in the cost of loans and debts: what scale of business actually works?
Ideally, we’d see Gibson retreat to a smaller size, concentrating on its iconic models and simplifying the product range to those that sell well. That company does not need to innovate in the product line; instead it needs to innovate in how it builds the guitars and how it sells them. It could learn lessons from Schecter or PRS as they were back in the 1980s and 1990s. What scale worked for them? What did they focus on? How did they ensure quality? Unfortunately we may not see that comfortable retreat.
I’ve written about Gibson as a single entity. In fact there are two main companies to be aware of:
Gibson Brands – this is the primary company
Gibson Innovations – home audio leisure company
There are other additional companies held by Gibson Brands, e.g. TEAC.
Primarily, my interest is with Gibson Brands. That’s the original company (although under different names previously). That’s the core business.
But regardless of how good, well-liked, or profitable a core business is (at least in terms of unit cost), if it is saddled with too much debt, then there are risks. If you want to read about the effects of debt on otherwise-healthy companies, the Rolling Stone article on Mitt Romney is a good starter.
From reading the post at Far Out Magazine, it could be that the recent news is a ploy regarding a struggle between investors and the board. Everything I’ve written above is still true.
In “When is it innovation?“, I introduced the idea of a sector’s familiarity with a concept. I’ve just read Bloomberg’s Innovation Index and I find a few of the variables used to be old-school, to say the least. It made me wonder what the index should actually include to be relevant to innovation.
Issue 1: Patents
The first item I noticed was the grading of countries based on the number of patents. I think of patents as being the enemy of innovation, especially when we consider the role of patent trolls in the marketplace. If someone can create an idea and patent it with no intention of delivering it, holding onto it only to block the person who does actually figure out how to build it, isn’t that stifling innovation?
To some extent, I’m just shouting at a wall. The establishment and market of patents exists and I do not possess the influence to change it. I’m OK with that. However, considering the position of patents, why are they in the Innovation Index? Wouldn’t a better figure be the ratio of patents created in that country against the number of patented, implemented products? That at least would account for ideas that have been translated into reality.
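The proposed ratio can be sketched in a few lines. All figures below are invented purely for illustration, not real patent statistics.

```python
# Sketch of the proposed metric: the fraction of a country's patents
# that were actually implemented as products. Data is hypothetical.

def implementation_ratio(patents_filed: int, patented_products: int) -> float:
    """Ratio of patented, implemented products to patents filed."""
    if patents_filed == 0:
        return 0.0
    return patented_products / patents_filed

# Hypothetical data: country -> (patents filed, patented products shipped)
countries = {
    "A": (10_000, 1_500),
    "B": (2_000, 900),
}

for name, (filed, shipped) in countries.items():
    print(f"Country {name}: {implementation_ratio(filed, shipped):.2f}")
```

Note how country B, despite filing far fewer patents, scores higher (0.45 vs 0.15) because more of its ideas were translated into reality; a raw patent count would rank them the other way around.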
Issue 2: Non-innovation Metrics
There are several metrics in the table that do not fit with innovation, at least in any definition I’m aware of, and definitely not in the definition I’d proposed in “When is it innovation?“.
For instance, Manufacturing Value-Add is a good metric for assessing the transformation of materials into higher value, e.g. taking a raw material and refining it, taking sheet metal and producing a finished product, etc. But that’s not innovation; that’s just doing your job. It’s business as usual. You could innovate in that area, e.g. with a novel way of refining, but the metric doesn’t measure that.
The Productivity metric, based on GDP and GNI, exhibits similar issues. It’s a measure of how much money is generated relative to the population. That isn’t innovation. True, a high productivity score could be attributed to high levels of innovation (in terms of increasing the output of each person), but not necessarily.
There are times when the cheap and nasty solution is so economically efficient that it can preclude doing it properly later on.
Background – The Fax
Just under a decade ago, I was working with a local authority client and their NHS hospital partner. The interpretation of the law at that time was that email was a non-secure channel, so fax was the chosen channel, since it was considered secure.
So documents were sent from the hospital, via the fax machine to the fax machine in the social care offices. Continuing Health Care panels met to decide on whether the NHS or the local authority paid for the care, based on whether the primary need was a health need or a social care need. That’s simplifying the logic behind the process and the decision but it’s enough detail for this article.
To be able to make that decision on tens or hundreds of thousands of pounds per year per person, that panel needed to review the data about that individual carefully. So this meant that 40-150 pages per person would be faxed from the hospital to the social care office.
The process for this was relatively convoluted:
the hospital professional (therapist, nurse, discharge planner, etc) collates the documents
they ring the social care office and tell them they’re about to send the documents
they feed the documents into the fax machine
they’re sending more than the fax machine can fit into its auto-document feeder, so they have to stand by to top it up
at the other end, the fax machine starts printing
the social worker picks up the paper before it falls onto the floor
the fax machine runs out of paper (several hundred pages per panel and it’s likely that you’ll have to refill the paper)
the social worker obtains blank paper, loads the fax machine with the new paper
the social worker collates all the faxes
the hospital professional rings the social worker to confirm that they have the documents.
The First Proposal – Email
Naturally, the partners wanted to make this more efficient, so the design conversation usually reverted to a proposal of email. But, as mentioned earlier, that wasn’t considered secure. Or at least the email solutions available at the time were not secure.
But there is a strange alternative.
The Implemented Solution – Fax Gateway
We used fax gateways at either end. A gateway turns an email into a fax to be sent over the phone line, which is then converted back into an email at the other end. The revised process was a lot more efficient:
the hospital professional (therapist, nurse, discharge planner, etc) collates the images ready to be sent (e.g. prints to file or scans in the remaining few that they don’t have electronically)
they send an email containing the fax header and the documents to their fax gateway
at the other end, the fax gateway converts the received fax into an email for the social worker.
the social worker reads the email and downloads the attachments ready for the panel
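The sender’s side of step 2 above can be sketched as composing an ordinary email that the gateway renders as a fax. The addresses, the gateway domain and the “number@domain” addressing scheme are all hypothetical; real gateways vary.

```python
# A minimal sketch of composing an email for a fax gateway to render
# as a fax. Addresses and the gateway domain are made up.
from email.message import EmailMessage

def build_fax_email(fax_number: str, subject: str, pdf_bytes: bytes) -> EmailMessage:
    msg = EmailMessage()
    # Many gateways accept mail addressed as <fax number>@<gateway domain>
    msg["To"] = f"{fax_number}@faxgateway.example.org"
    msg["From"] = "discharge.team@hospital.example.org"
    msg["Subject"] = subject  # typically used for the fax header page
    msg.set_content("Continuing Health Care panel documents attached.")
    msg.add_attachment(pdf_bytes, maintype="application",
                       subtype="pdf", filename="panel-documents.pdf")
    return msg

msg = build_fax_email("01234567890", "CHC panel documents",
                      b"%PDF-1.4 placeholder")
print(msg["To"])
```

In a real deployment the message would be handed to an SMTP server (e.g. via `smtplib.SMTP.send_message`); the gateway then does the conversion to fax tones, so the content travels the phone line rather than the open internet.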
It’s a solution that shouldn’t have existed. It relied on old technology, but until the law caught up with the technology (or the technology caught up with what it had to do to be secure, e.g. nhs.net accounts), it was the cheap, workable solution. It was messy, and I shudder every time I think of it as a solution. However, it made things better for the clients, making the process simpler for them as end-users and freeing up time to do more important work.
That’s what the current state of RPA feels like to me. Not the whole of RPA, but the element that’s involved in the user front-end of systems. It’s like the fax gateway. So instead of the better solution of orchestrations between the various IT systems involved, we’ll automate the front-ends.
Now I’m wondering if we’ll see the same situation with RPA as we did from implementing the fax gateways. We found ourselves with a cheap and nasty solution which then made the business case for full integration impossible to justify.
Why would you spend hundreds of thousands of pounds on a better solution when the cheap one works adequately?
So if that angle of RPA solves the automation from a front-end, replacing the mundane tasks performed by employees, why would we look to orchestrate the back-end?
Will initial RPA implementations deter us from better integration of products? And, more importantly, is that necessarily a bad thing? After all, my NHS and LA clients were still able to perform better with the cheap solution than they were without it, and they also avoided a costly integrated solution. In the end, it was a temporary measure until secure email became a practical solution for them and their partners. I’d expect to see parallel initiatives nowadays with RPA, with clients improving their efficiency through the introduction of RPA but avoiding more costly integration; especially as a temporary measure that will likely have a longer-than-intended lifespan.
Once again, I’m using Archi (or ArchimateTool) with the Archimate modelling language.
OKRs do not cascade
Just because the diagram depicts a hierarchy doesn’t mean that the objectives cascade down the organisational hierarchy. Following the logic in OKRs don’t cascade, I’ve taken the approach of depicting the hierarchy, rather than how that hierarchy is achieved. In the article, Felipe mentions that objectives should not be cascaded down the organisation; instead, objectives and key results should be discussed and agreed at each level. The resulting picture is the same either way, but the content of the objectives and key results may differ depending on the route.
Depending on the level of the organisation, many of the components that achieve an Objective will not be Key Results but lower-level Objectives (e.g. of the next team down in the corporate hierarchy, or downstream in the case of a flatter hierarchy). The diagram allows both Key Results and Objectives to form part of an Objective.
Modelling Goals and Objectives
Key Results have been modelled as Outcomes. Objectives and Contributing Goals (lower-level Objectives) have been modelled as Goals. In doing so, I’ve allowed for a hierarchy of Objectives to fulfil the concept of Contributing Goals. Had I gone with a model of Objective = Outcome, we would have seen a model of hierarchical outcomes which would not have made as much sense, especially to those having to achieve those outcomes.
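The hierarchy the model encodes can be sketched as a small recursive data structure: an Objective is realised by Key Results (Outcomes) and/or Contributing Goals, which are themselves lower-level Objectives. The objectives and key results below are invented purely for illustration.

```python
# A rough sketch of the modelled OKR hierarchy. Content is fabricated.
from dataclasses import dataclass, field
from typing import List, Union

@dataclass
class KeyResult:          # modelled as an Archimate Outcome
    description: str

@dataclass
class Objective:          # modelled as an Archimate Goal
    description: str
    # Parts may be Key Results or Contributing Goals (nested Objectives)
    parts: List[Union["Objective", KeyResult]] = field(default_factory=list)

    def key_results(self) -> List[KeyResult]:
        """Flatten the tree into the leaf Key Results that realise it."""
        results = []
        for part in self.parts:
            if isinstance(part, KeyResult):
                results.append(part)
            else:
                results.extend(part.key_results())
        return results

corporate = Objective("Grow recurring revenue 20%", [
    KeyResult("Sign 15 new annual contracts"),
    Objective("Reduce churn to 5%", [           # a Contributing Goal
        KeyResult("Launch customer success programme"),
        KeyResult("Cut median support response to 4 hours"),
    ]),
])
print(len(corporate.key_results()))  # 3 leaf Key Results in total
```

The recursion mirrors the diagram: replacing any Key Result with a Contributing Goal simply pushes the Outcomes one level further down the tree.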
From the perspective of Business Architecture, I’m interested in the alignment of actions to the overall vision. I like to see a clean line connecting actions of the workforce to corporate objectives to vision. Many organisations suffer because the objectives are cascaded down rather than agreed at each level. Combining OKRs with a culture of joint-goal setting has a good chance of resolving that core issue.
Notes about the diagram
The content is fabricated; completely artificial. I haven’t populated every single branch, but enough to indicate what could be captured. For those areas that I did populate, I kept to the concept of 3 key results per Objective, of which any of the Key Results can be replaced with Contributing Goals. You can flex that as you wish.
I’ve created a tiny environment in which the OKRs operate, featuring an internal driver for change, an external driver, the assessments for both and the corporate vision and missions.
The interesting concept for me regarding business motivation is that the diagram is agnostic of the organisation structure in that it doesn’t indicate which team or who is responsible for achieving which objectives or key results. I’ve done that on purpose.
Imagine a typical organisation of 400 people. Each of those 400 named individuals could have Key Results to deliver. Some of those Key Results would contribute to team Objectives. Some of those team Objectives would coalesce to fulfil higher-level Objectives, and so on. That’s the bottom-up picture.
The top-down picture is that the strategy needs to pervade the organisation and steer the choice of actions and the delivery of those actions. At the top level, the objectives may be independent of who is going to deliver them, but shortly thereafter the key results or contributing goals would have to be assigned. And it’s likely that they’ll be assigned to relevant directors (in the case of stretch targets and keeping the operation running) or delivery teams (in the case of changes). However each of the delivery teams should have a sponsor. It’s that sponsor that’s actually accountable in this case for the delivery of the key result, whereas in many organisations it would be the delivery team.
Overall, OKRs force a concept of personal responsibility or rather, a concept of personal accountability if we follow a RACI model. For the majority of a workforce, the individual is likely to be both accountable and responsible for their key results.
What I haven’t addressed is the non-aligned use of OKRs, e.g. allowing or encouraging the setting of key results that do not fit with corporate objectives.
There was a radio commercial the other day with the following line:
“We’ll beat your quote from a well known car buyer”
So that’s one company referring to a second company, but without mentioning that second company by name. At what point did they realise that the other brand was so strong they didn’t actually have to name it, since it would be obvious from context to anyone listening which company they were referring to?
It implies that, for the intended audience, it’s the unnamed brand that is the strongest in that market sector for that product/service, not the company that’s chosen to advertise.
So did the company who commissioned that spot realise that they’ve surrendered market space to another brand? Was this an attempt to gain market share by encroachment through undercutting (“we do the same job for you as Company X, but we’re cheaper/provide better value”)?
It’s an odd strategy to take and I can’t think of a company that has survived longer-term with that approach.
Looking at Mintzberg’s 5 Ps, we can see this as strategy as position, although the position taken or aimed for is usually a stronger one. Instead, this creates competition on price and the resulting race to the bottom.