You’re the person in charge of safety on the Titanic. The designers have told you that this state-of-the-art ship is virtually unsinkable. You need to equip the ship with a contingent of lifeboats but, as always, you need to stay in budget. So how many lifeboats should you fit? Factors such as likelihood and severity start swimming around in your mind.

So here’s a suggestion. The ship will almost certainly never, ever sink, so let’s equip it with enough lifeboats for, say, half the passengers. That should be enough. What do you say?

Well, the designers of the Titanic went ahead with this decision and we all know the disastrous results. There was no logical argument for fitting half the lifeboats. Either the Titanic could sink or it couldn’t. If it couldn’t sink then it didn’t need any lifeboats. But if it could sink then it needed enough lifeboats for all the passengers, not half of them. Titanic’s designers had confused the likelihood of an event with the severity of an event.

Effective cyber security, indeed any kind of security, is underpinned by an assessment of risk. You probably write or speak the word “risk” several times a day. Yet we often make risk decisions on shaky ground. Many people struggle with basic probability, wildly overestimating the likelihood of an event, let alone distinguishing that likelihood from its severity. Worse still, ask twenty people to define “risk” and you’ll get twenty different definitions. The conventional view boils it down to the probability of something happening multiplied by the resulting cost if it does. Risk, threats, assets, vulnerability and severity all become blurred, and we end up not really understanding what risk means.
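For illustration, the conventional “likelihood times cost” calculation described above can be sketched in a few lines. The threat names, probabilities and costs below are invented for the example, not drawn from any real data.

```python
# A minimal sketch of the conventional "risk = likelihood x impact" view.
# All threat names and numbers here are illustrative assumptions.

def expected_loss(probability: float, impact: float) -> float:
    """Conventional risk score: chance of the event times its cost."""
    return probability * impact

# (annual likelihood, cost if the event occurs) -- invented figures
threats = {
    "phishing":     (0.30, 50_000),
    "ransomware":   (0.05, 500_000),
    "insider_leak": (0.01, 1_000_000),
}

# Rank threats by expected loss, highest first.
for name in sorted(threats, key=lambda k: -expected_loss(*threats[k])):
    p, cost = threats[name]
    print(f"{name:13s} expected loss = {expected_loss(p, cost):>10,.0f}")
```

Note how the low-likelihood, high-severity ransomware scenario tops the ranking: precisely the interplay of likelihood and severity that the Titanic’s designers got wrong. The article goes on to argue that even this calculation is not the whole story.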

Put simply, risk is more than the probability of success or failure; it is about uncertainty: what we cannot predict, control or understand. For example, eighteenth-century gunpowder factories were forever exploding. They did so often enough that insurers came to know the odds quite accurately, set their premiums accordingly and always made a profit. As long as the probability of disaster was well known, they could reduce their own risk almost to zero.

Risk is also about goals and the things upon which those goals depend. With no goals there is no risk and “risk” only makes sense in terms of your goals. If your goal is to stay alive then the edge of a dangerous cliff might be a risk. But if you are hell-bent on suicide then the cliff could be an opportunity and a safety net a threat.

So, putting this all together, a better risk definition is “the degree to which the chances of achieving our goals are affected by things we cannot control, predict or understand” [1].

Those responsible for assessing cyber risk must therefore make sure they ask the right question to kick-start their analysis. Conventional risk doctrine tells us to ask what could go wrong and then attempt to enumerate every possible threat. That may have worked in simpler times, but today’s organizations operate in a highly complex, massively interconnected world. A bottom-up, “potential failure to potential consequence” approach cannot handle unforeseen threats, is too subjective and misses the crucial interdependencies that can propagate and amplify risk events.

If we define risk as the degree to which the chances of achieving our goals are affected by things we cannot control, predict or understand, then we arrive at a better up-front question: what does a successful business look like? Decomposing this goal, and the things it depends upon, into sub-goals, and the things they in turn depend upon, leads us to the critical parts of the business where dependency is high, certainty is low, failure is potentially catastrophic and risk mitigation is a priority.
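The decomposition described above can be sketched as a simple goal tree. The goal names, dependency weights and certainty scores below are invented assumptions; only the top-down approach itself comes from the article (and its reference [1]).

```python
# A hedged sketch of top-down goal decomposition: walk a tree of goals
# and flag those where dependency is high but certainty is low.
# All goals, weights and thresholds here are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class Goal:
    name: str
    dependency: float = 1.0   # how much the parent goal depends on this (0-1)
    certainty: float = 1.0    # how well we can predict/control it (0-1)
    subgoals: list = field(default_factory=list)

    def hotspots(self, threshold: float = 0.5):
        """Yield goals where dependency is high and certainty is low:
        the priority points for risk mitigation."""
        if self.dependency >= threshold and self.certainty <= threshold:
            yield self
        for g in self.subgoals:
            yield from g.hotspots(threshold)

business = Goal("successful business", subgoals=[
    Goal("keep customer trust", dependency=0.9, certainty=0.8, subgoals=[
        Goal("protect customer data", dependency=0.9, certainty=0.4),
    ]),
    Goal("keep services online", dependency=0.8, certainty=0.3),
    Goal("launch new product", dependency=0.4, certainty=0.2),
])

for g in business.hotspots():
    print(f"mitigation priority: {g.name}")
```

In this toy tree, “protect customer data” and “keep services online” are flagged: both are goals the business depends on heavily yet cannot fully control or predict, which is exactly where the article says mitigation effort belongs.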

It’s at the very edges of this analysis that we find uncertainty (what we cannot control, predict or understand) creeping in. And this is where situational awareness plays a key role. It becomes the primary means of reducing uncertainty, and hence risk, by illuminating the darkness and shining a spotlight on the real threats.

The esteemed statistician George Box once said that all models are wrong, but some are useful. When grappling with today’s complex cyber threat environment we must be smarter and develop risk models that, by being focused in the right places, are just enough. As we pilot our business across dangerous seas we may not be able to predict every single iceberg that might come along, but we can better understand our greatest points of dependency and uncertainty to help us withstand whatever the enemy decides to throw at us.

[1] Gordon, J., “Dependency modelling and understanding risks to the infrastructure”, 2011; Burnap, P., Hilton, J., Gordon, J. and Slater, D., “A goal-oriented approach to determining risk in distributed interdependent systems”, 2011.