Quantifying the risk of cyber losses is very different from modelling natural catastrophe claims. With fire or flood claims, insurers have a wealth of knowledge and data to draw on. With cyber-attacks, they do not. The breach of a firm’s technological defences is harder to detect, making the risks more complex to understand. Moreover, the inductive approach to risk modelling used for most perils begins to break down with cyber, leading some in the industry to argue for greater government involvement.
If the industry gets this wrong, the consequences of a serious multi-billion-dollar cyber event could be catastrophic for insurers and insureds alike. Establishing a government-backed cyber reinsurance scheme, a government-organised cyber risk pool or enhanced attack data collection schemes are some of the ways greater involvement could take shape.
In its March 2017 Sigma report, Swiss Re explored the benefits of such a programme, suggesting governments could be “instrumental in setting up insurance pools that enable private re/insurers to share exposures to a particular peril and underwrite each other’s risks”.
“Public sector involvement can facilitate collaboration and information exchange among market participants and potentially take on some of the administration costs,” the report says. “Government-sponsored insurance pools can also safeguard any pooling arrangements from contravening applicable competition laws.”
The economics are pretty simple: asymmetric information – when one party in an economic transaction possesses greater material knowledge than the other party – is one sure-fire route to market failure and the aggregation of systemic risk.
This represents the latest iteration of the argument posed by those in the industry concerned by the lack of detailed information about cyber threats available to insurers. Speaking at an Insurance Insider conference in 2015 following the compromise of data at US health insurer Anthem, Stephen Catlin, executive deputy chairman of XL Catlin, warned that the systemic risks brought by cyber were too large for insurers to tackle and should be left to the government. “Our balance sheets are not large enough to pay for that,” he said.
The industry, however, is divided. Speaking at the CFC Cyber Symposium in November last year, Inga Beale, chief executive of Lloyd’s of London, warned it was “not good enough” for reinsurers to be passing large exposures to governments. She added that, according to recent research carried out by Lloyd’s, cyber exposures at syndicates are currently no bigger than other risks the market takes on.
Meanwhile, many of those working at the coalface in the market greet the prospect of government involvement with suspicion. Speaking to Reactions, Russell Heaton, cyber class underwriter at ArgoGlobal, said: “I personally am sceptical about using taxpayers’ money to fund a government-organised cyber pool.” He argues that the biggest and best innovations come from the private sector, which has the ability to move quickly and look ahead; anything that might jeopardise this is a problem. Chris Cotterell, CEO of broker Safeonline, shares this view, making the point that the market is simply not big enough yet – nowhere near enough business is being written to justify intervention on such a scale.
Whether or not governments instigate fully-fledged cyber pool schemes, as the recent Swiss Re report indicates, data and research on emerging cyber threats remain woefully inadequate. Data collected by agencies such as the US Geological Survey and the Japan Earthquake Agency enables firms to assess the frequency and impact of natural catastrophes, giving them a more accurate picture of how to underwrite certain risks. For cyber risks this is not the case.
The lack of available data poses a considerable problem for conventional actuarial models that focus on the aggregate loss probability distribution. As the report from Swiss Re highlights, “there is a lack of historical data on cyber incidents from which to extrapolate future losses”. Cyber-attacks are difficult to detect – often a firm will not realise it has been targeted until a significant loss occurs, which greatly inhibits data collection.
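To see why sparse data is so damaging to this kind of modelling, consider a minimal sketch of the standard frequency-severity approach actuaries use to build an aggregate loss distribution. All parameter values and function names below are illustrative assumptions for the sake of the example, not market figures or any insurer's actual model:

```python
import math
import random

def simulate_annual_loss(freq_mean, sev_mu, sev_sigma, rng):
    """One simulated year of aggregate losses: Poisson-distributed
    claim count, lognormal-distributed claim severities.
    All parameters are illustrative assumptions."""
    # Poisson sample via Knuth's multiplication method
    # (adequate for small means)
    n, p, threshold = 0, 1.0, math.exp(-freq_mean)
    while True:
        p *= rng.random()
        if p <= threshold:
            break
        n += 1
    # Sum n lognormal severity draws to get the year's aggregate loss
    return sum(rng.lognormvariate(sev_mu, sev_sigma) for _ in range(n))

def tail_estimate(n_years, seed=0):
    """Estimate the 99th-percentile annual loss from n_years of
    simulated experience (freq_mean=2 claims/year, lognormal
    severities with mu=13, sigma=1.5 -- assumed values)."""
    rng = random.Random(seed)
    losses = sorted(simulate_annual_loss(2.0, 13.0, 1.5, rng)
                    for _ in range(n_years))
    return losses[int(0.99 * n_years)]
```

The point of the sketch is the dependence on history: the frequency and severity parameters must be fitted from observed incidents, and with only a handful of reported cyber losses the fitted tail swings wildly from one sample to the next. That instability, rather than the simulation itself, is what the Swiss Re report means by the inability to extrapolate.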
A company’s desire to avoid the brand damage caused by publicly admitting a loss dampens the number of breaches reported, and has led to a behind-closed-doors attitude towards threat assessments and a cyber risk assessment arms race.
In a bid to acquire the highest quality information, firms have struck up partnerships with cybersecurity consultancies they believe will quantify risks as accurately as possible. Chubb, Allianz and Beazley have partnered with cyber security firm FireEye, while in 2015 AIG acquired a minority stake in investigative consulting firm K2 Intelligence. Insurer CNA Hardy recently launched a partnership with cyber security firm Waterfall Security Solutions.
Government intervention in the reporting process already takes place in a number of jurisdictions – ranging from the UK’s Cyber Security Information Sharing Partnership, to Germany’s Kooperation zwischen Betreibern Kritischer Infrastrukturen, and the Cybersecurity Information Sharing Act (CISA) in the US. These schemes and laws help firms to share data while retaining the confidentiality of their information and, where necessary, facilitate a state response.
Generally speaking, this is supported by the industry at large. Geoff White, CEO at Neon-owned cyber venture Tarian, says: “Collaboration with governments to enable knowledge sharing is extremely useful and necessary.” However, as Swiss Re’s report highlights, almost two-thirds of industry respondents to a Swiss Re/IBM survey indicated that government actions can increase cyber risks, which may reflect concerns that data can be leaked or misused.
Meanwhile, arguments over the creation of government-backed reinsurance programmes or insurance pools raise a plethora of questions. A pooling system may tackle the possibility of market failure caused by adverse selection, but is the government really the right body to be managing risk in this way? It is also unclear whether such a scheme would be permanent, or whether it would be implemented as a temporary measure to correct market imbalances. In the current global political climate it seems unthinkable that administrations would look to replace the market; rather, they would seek to correct it.
Flood Re in the UK was designed with its own demise in mind and will remain in place until 2039 in a bid to correct the market. It is highly likely that any cyber scheme would be created along these lines, instead of a government making a permanent commitment to take risk from the market.
Technology shifts at a faster pace than flood defences, making it likely that any such scheme would have to incorporate a way of dealing with the rapid evolution of technology. Nevertheless, as Julian Enoizi, the CEO of Pool Re, points out, the UK government’s terrorism reinsurance scheme was almost wound up after the Good Friday Agreement in 1998 but managed to evolve. There is no reason why a backstop reinsurance scheme for uninsurable cyber risk could not evolve in a similar way.
Although the necessity of large-scale cyber backstop schemes is far from certain, the majority of industry insiders appear to support increased information sharing and government collaboration when it comes to the reporting of cyber attacks. Technology consultancies may baulk at the idea of government intervention eating into the profits of lucrative research contracts, but the outcome of better information sharing would be extremely positive both for insurers and insureds.
The industry regards the introduction of cyber pools and reinsurance schemes as a remote possibility, but if a significant systemic loss hits the insurance market in the next few years this could well change.