The stochastic modelling of terrorism rests on two mathematical principles that have a certain beauty in their simplicity.
Before the early 2000s insurers were uncertain about how the risks posed by acts of terrorism should be measured.
While risk analysis firms including AIR Worldwide and Eqecat (now CoreLogic) developed their own models following the same theory, it was Risk Management Solutions (RMS) that got there first.
In 2007, one of the firm’s leading risk analysts, mathematician Gordon Woo, published a paper showing that acts of violent terrorism – unlike natural catastrophes – do not satisfy the assumptions of mathematical randomness that underpin conventional catastrophe models.
The Poisson process and the Markov property are the two mathematical constructs that formalise the idea of events occurring randomly and independently of one another. The Poisson process assumes that arrivals occur at a constant average rate and independently of each other and of the past; a process with the Markov property is one in which the distribution of future states depends only on the present state. Woo realised that acts of terrorism do not meet the criteria behind either.
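To make the assumption concrete, a Poisson process can be simulated in a few lines: inter-arrival times are independent exponential draws, so knowing when the last event happened tells you nothing about when the next will occur. This is an illustrative sketch of the general construct, not anything drawn from an RMS model; the function name and rate are invented for the example.

```python
import random

def poisson_arrival_times(rate, horizon, seed=0):
    """Simulate event times of a Poisson process with the given rate
    over [0, horizon]. Each inter-arrival time is an independent
    exponential draw, which is exactly the memoryless, independent
    behaviour that Woo argued terrorism does not exhibit."""
    rng = random.Random(seed)
    t, times = 0.0, []
    while True:
        t += rng.expovariate(rate)  # memoryless waiting time
        if t > horizon:
            return times
        times.append(t)
```

Because the draws are independent, no amount of history changes the distribution of the next waiting time – the property an adversarial, plan-driven attacker violates.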
Because the adversary is human, catastrophic damage from terrorism is more predictable than that from natural disasters: characteristics of such attacks make the physical damage foreseeable and understandable in a way that the impact of natural catastrophes such as hurricanes and earthquakes simply is not.
“I was really the first person to draw attention to the fact terrorism is fundamentally different from natural hazard, because there is an adversarial risk. The terrorists can’t just do what they want,” Woo says.
“You can bring terrorists to justice in a way you can’t bring natural hazards to justice,” he adds.
While the two assertions seem obvious, the principles they encapsulate were fundamental in helping risk modellers realise that risk analysis could be applied to acts of terrorism. With the stakes so high, any structure enabling the quantification and management of the physical damage risks posed by terrorism is widely sought after.
RMS and Woo, its principal terrorism risk modeller, established the first probabilistic terrorism model – a structure that estimates the risk of macroterrorism, a term the firm uses for attacks capable of causing: 1) more than $1bn in economic losses; 2) more than 100 fatalities; or 3) highly symbolic damage.
Starting with specific attack scenarios, the model assesses the threat of various types of attacks on different targets, the vulnerability of the targets under consideration to attack, and the expected annual consequences of successful attacks in terms of casualty and property loss.
Woo uses the flow of water through pipes as an analogy for the emergence of acts of terrorism.
He is insistent on the point that the human element of terror attacks introduces a level of predictability that mathematicians and risk modellers can work with:
“Take windstorm risk in London, for example,” says Woo. “If the Mayor of London suddenly decrees that every building in the city has to have stormproof windows and protective roofs, the wind god isn’t suddenly going to say ‘well, there’s no point hitting London’ and hit Birmingham instead. The direction of the wind does not change just because the probability of hitting the buildings has been reduced.”
Drawing on game theory – specifically the zero-sum game, in which one player’s loss is another’s gain – Woo designed a parametric algorithm that quantifies the probability of targets coming under attack, using city, attack type, and the defensive resources available as its parameters.
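The substitution effect such an algorithm has to capture – the very thing the wind cannot do – can be sketched in a toy model. The function, target names, and weighting scheme below are hypothetical illustrations, not Woo’s actual parametrisation: attack likelihood is simply taken as proportional to a target’s attractiveness times the chance its defences fail.

```python
def attack_likelihoods(targets):
    """targets: dict mapping name -> (attractiveness, defence in [0, 1]).
    A rational attacker in a zero-sum setting shifts effort toward the
    highest expected-payoff target, so likelihood is modelled here as
    proportional to attractiveness times the chance defences fail."""
    weights = {name: a * (1.0 - d) for name, (a, d) in targets.items()}
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}
```

Hardening one target in this sketch (raising its defence parameter) mechanically pushes probability mass onto the others – the adversarial displacement that distinguishes terrorism from windstorm.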
The second key advance in probabilistic modelling of terrorism risk over the last two decades came in the form of social network analysis.
Having worked closely with intelligence and enforcement agencies, Woo realised the same network analysis techniques used by the likes of UK electronic intelligence body GCHQ could be used by insurers to work out the chance of terror plots being foiled.
However uncomfortable to imagine, acts of violence designed to cause significant loss of life require an inordinate amount of preparation and organisation, and the larger the intended attack, the more individuals are likely to be involved.
As the number of individuals involved in an attack is directly associated with the size and location of a loss, this offered a new method of estimating the likelihood of significant damage. As Woo explains:
“Before doing the work I did to create my model in 2009, if you wanted to ask the question ‘what is the likelihood of different types of attack occurring?’ nobody could really tell you.
“Before I did this analysis, if you wanted to know the likelihood of someone setting off a five-tonne truck bomb, nobody could calculate the chance of the plot being interdicted as a function of the size of the cell. I’m pleased to say that no one else has replicated it so far.
“The function I came up with tells you what the relative likelihood is of explosions taking place with different size bombs. If there are six people involved then there is an 86% chance you will be arrested, for example.”
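The shape of such a function can be illustrated with a toy model – an assumed form, not Woo’s published function. If each cell member is treated as an independent opportunity for the authorities to detect the plot, a per-member detection rate of roughly 0.28 reproduces the 86% figure quoted for a six-person cell.

```python
def interdiction_probability(cell_size, per_member_detection=0.2794):
    """Chance a plot is stopped before execution, under the illustrative
    assumption that each cell member is an independent detection
    opportunity. The default rate is calibrated so that a six-person
    cell is caught with roughly 86% probability, the figure Woo cites."""
    return 1.0 - (1.0 - per_member_detection) ** cell_size

interdiction_probability(6)  # ≈ 0.86
```

Under this assumption a lone actor is interdicted less than 30% of the time, while a large cell capable of a five-tonne truck bomb is almost certain to be caught – which is why attack size and likelihood are so tightly linked.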
Looking to the future
So how is the model changing, and where will it go in the future? While RMS continues to bring out technical updates to its design, offering new bells and whistles, Woo explains that much of the prescient theory behind it remains the same:
“One thing I’m proud of is that much of the theory that formed the basis of our model back in 2002 is still valid today. Over the last 15 years there has been complete mayhem in the Middle East and South Asia – you’ve had political conflicts everywhere. But the key point about terrorism risk is that it constitutes what is left after two countervailing forces have played out,” says Woo.
“According to the so-called heartland theory of terrorism, terrorists can attack anywhere simply to make people scared. You could blow up a vehicle bomb in Shropshire, intimidating farmers. However, this has simply never happened.
“Where successful attacks have actually occurred, when information about the perpetrators is actually released it transpires in many cases they were actually known to the authorities,” he adds.
It seems the principles behind the probability theory on which Woo’s model is based largely remain the same; as difficult and senseless as recent attacks seem, there is a repetitious mundanity to the pattern of losses that reduces uncertainty for insurers and could allay fears for the public at large.
“That’s why I say that terrorism insurance is actually insurance against the failure of counter-terrorism, because in many cases these people are known to the security services, and if they had done an even better job, then these attacks would have been stopped,” Woo says.
For the mathematician, reducing uncertainty has been the central motivation for working on the theory over the last two decades, something he is determined to continue: “One of the points I’ve been trying to make over the last 15 years is to give underwriters confidence in writing terrorism risk,” he says.
“Going back 15 years – and to some extent this is still true today – people didn’t understand terrorism risk so they weren’t comfortable writing it. In fact it’s still the case that senior management of insurance companies, by and large, are very nervous about writing terrorism risk – even now.”