Why KPIs are not the answer for complex systems (part 2)

Let's get something out of the way. A common fallacy about complex systems goes like this: If we could only observe all aspects of a complex system, we could predict its behaviour perfectly.

It's an easy trap to fall into, because in complex systems the chain of events leading to an outcome can be traced after the event. We see this all the time in coronial inquiries and other kinds of post-mortems. But hindsight is not foresight: even though complex systems aren't random, their non-determinism is fundamental.

Better observation can therefore improve our ability to detect and respond, but not to control. (Observation isn't effect-free, either; the very act of observing can alter a system's dynamics.)

The performance of any system, complex or simple, can be measured in six fundamental ways (hat tip to Bob Lewis):

  • fixed cost - costs that are incurred regardless of levels of system activity
  • incremental cost - cost to execute a task
  • cycle time - time to execute a task from start to finish
  • throughput - how many tasks can be completed in a given unit of time (capacity)
  • quality - the absence of flaws in task execution
  • excellence - delivering more than what your customers expect

These measurements can be applied to the whole system or to any part of it. Some of the measures are inherently in tension with each other: higher quality (good) is likely to mean higher incremental cost (bad), and so on. A genuine productivity improvement occurs when some of these measures improve while the others remain constant. Improving some measures while worsening others is, by contrast, a performance trade-off.
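
If it helps to make that distinction concrete, here is a minimal Python sketch; the measure values and the "lower is better" / "higher is better" directions are my own illustrative assumptions, not a standard:

    # Classify a change across the six measures as a genuine productivity
    # improvement (some measures better, none worse) or a trade-off.
    BETTER_WHEN_LOWER = {"fixed_cost", "incremental_cost", "cycle_time"}
    BETTER_WHEN_HIGHER = {"throughput", "quality", "excellence"}

    def classify_change(before, after):
        improved, worsened = [], []
        for measure in BETTER_WHEN_LOWER | BETTER_WHEN_HIGHER:
            delta = after[measure] - before[measure]
            if delta == 0:
                continue
            got_better = (delta < 0) if measure in BETTER_WHEN_LOWER else (delta > 0)
            (improved if got_better else worsened).append(measure)
        if improved and not worsened:
            return "productivity improvement: " + ", ".join(improved)
        if improved and worsened:
            return "trade-off: gained " + ", ".join(improved) + "; gave up " + ", ".join(worsened)
        return "no improvement"

    # Illustrative numbers only: cycle time improves, incremental cost worsens.
    before = {"fixed_cost": 100, "incremental_cost": 5.0, "cycle_time": 10,
              "throughput": 40, "quality": 0.95, "excellence": 0.1}
    after = dict(before, cycle_time=8, incremental_cost=7.0)
    print(classify_change(before, after))

Run on the example data, the faster cycle time comes at the price of a higher incremental cost, so the sketch reports a trade-off rather than a productivity improvement.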

But, I hear you say, why measure these things if they can't be controlled and therefore can't be assigned a KPI?

Simple: most complex systems operate in a dynamic equilibrium, also known as homeostasis. This is a "natural" state, maintained through negative feedback loops that act as inbuilt stabilisers. Some people go further and assert that certain complex systems actively work towards sustaining a particular operating state, which is known as autopoiesis. This is most commonly seen where actors within the system have a vested interest in resisting change.

Systems with strong stabilisers will have performance measurements that fluctuate but average out over time. However, if a large disruptive force is introduced, the system may settle into a different point of equilibrium. In these environments, change management is achieved through three steps (sketched in the toy simulation after the list):

  1. modify system constraints or boundaries to make the desired direction of change coherent
  2. introduce disruptive forces to move the system away from homeostasis
  3. monitor the system as it settles into a new dynamic equilibrium and evaluate whether the new performance parameters are an improvement
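
As a rough intuition pump for these three steps (this is not the K3 model, and the pull strength, noise, and equilibrium values are invented), a simple negative-feedback rule is enough to show a system hovering around its old equilibrium, being pushed by a constraint change plus a disruptive shock, and settling into a new equilibrium that can then be evaluated:

    import random

    # Toy negative-feedback system: each period, performance is pulled back
    # towards an equilibrium (the stabiliser), plus some noise. All numbers
    # are invented for illustration.
    def simulate(periods=60, pull=0.3, noise=0.5, seed=1):
        random.seed(seed)
        equilibrium = 50.0          # the "natural" operating state
        level = 50.0
        history = []
        for t in range(periods):
            if t == 30:
                equilibrium = 60.0  # step 1: change a constraint/boundary
                level += 5.0        # step 2: a disruptive shock away from homeostasis
            level += pull * (equilibrium - level) + random.gauss(0, noise)
            history.append(level)
        return history

    history = simulate()
    # step 3: monitor the settling process and evaluate the new equilibrium
    print("average before the disruption:", round(sum(history[:30]) / 30, 1))
    print("average after settling:", round(sum(history[40:]) / 20, 1))

In this toy model the same stabilising pull that resists small perturbations is what carries the system to its new equilibrium once the constraint has moved, which is part of why the final step is to monitor and evaluate rather than to force the outcome.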

The idea of coherent change is key. Cutting wages cannot realistically be expected to improve productivity; nor can asking people to focus on quality when their performance is evaluated purely on the number of tickets they resolve. These are incoherent expectations.

Sometimes, the mere act of changing system constraints is sufficient disruption to make change happen. Often, a disruptive act is necessary to kickstart the change - traditionally, this is done through the announcement of a new initiative by executive leadership (but make sure the announcement is coherent with expectations, see above).

Lastly, once you have set the scene and introduced your disruption, the hardest step is to sit back and watch. If you have read the system's dynamics correctly, your chances of success will be better than even - but never guaranteed.

[Important: in many cases you won't know how to design a coherent disruption up front. When that happens, running multiple safe-fail initiatives is the quickest way to identify successful experiments that can be replicated on a larger scale.]
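
One way to picture the bookkeeping behind safe-fail probing (the probe names, lift figures, and the 5% threshold below are purely hypothetical) is to run several cheap experiments in parallel, wind down the ones that don't move the needle, and replicate the ones that do:

    # Hypothetical probes and their observed lift in a chosen measure.
    probes = {
        "pair support staff with developers": 0.12,
        "weekly customer call reviews": -0.03,
        "self-service knowledge base": 0.07,
    }
    THRESHOLD = 0.05  # arbitrary minimum lift worth replicating at scale

    amplify = [name for name, lift in probes.items() if lift >= THRESHOLD]
    dampen = [name for name, lift in probes.items() if lift < THRESHOLD]

    print("replicate on a larger scale:", amplify)
    print("wind down cheaply:", dampen)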

An alternative way of visualising this change process is the K3 Consulting four-stage adaptive cycle, illustrated below:

So to sum up: we have outlined a process to evolve complex systems, and hopefully improve the system's performance. But how do you evaluate the performance of your team in this situation? I would argue that the key is to look at:

  • how they are influencing coherent outcomes
  • how they are introducing systems disruptions
  • how they are monitoring results
  • what evidence they are choosing to use to justify their actions in relation to the first three components

By doing this, you move the onus of performance away from results achieved, and towards how people create, communicate, and execute on their plans for improvement. It's like Shane Battier -- his importance to the team's win record wasn't in his performance from game to game, but in the quality of his execution against an agreed plan which maximised the chances of desirable results.

In Part 3, we'll examine the relationship between performance, robustness, and resilience, and how the trade-offs between them shape one of the most important strategic decisions you'll ever make about your organisation.

This is Part 2 of a series on performance management for complex systems.

Did you know...

Our expertise in complex systems analysis, combined with a deep understanding of technology and of modern, agile management and leadership techniques, makes knowquestion uniquely positioned to find strategic solutions to your tough problems. Contact us today.
