Catastrophe Risk Model Roundtable

Catastrophe risk modelling is in a period of transition: regulators expect underwriters to develop their own view of risk; cat models grow increasingly sophisticated; and modellers face pressure to prove their relevance to actual events, as well as to offer more transparency. Reactions hosted a roundtable, sponsored by Eqecat, with these topics in mind.



Participants (left to right, back row):
David Lightfoot, international head, Guy Carpenter Analytics
Paul Little, president, Eqecat
Justin Davies, UK and Asia-Pacific practice leader, Eqecat

Participants (left to right, front row):
Rob Stevenson, head of insurance operations, Kiln Group
Paul Miller, head of international catastrophe management, Aon Benfield Analytics
Moderator: David Benyon, deputy editor, Reactions

with additional comments from Conor McMenamin, European CRO, RenaissanceRe (inset)

Welcome, everybody, to this discussion about catastrophe modelling issues. A crucial question to begin with: why is it so important for insurers and reinsurers to assess their own view of risk? 

Paul Miller, Aon Benfield: Ultimately, your view of risk is what’s driving your capital. I think the market has changed from a position where, in the past, it was perhaps comfortable to predominantly outsource its view of risk to third-party vendors – modelling firms among others. Events, regulation and other factors have led to a recognition that the old standard modelled view of risk is not necessarily fit for a company’s own specific business – particularly when what is at stake is control of its own capital. It’s not right to outsource how your capital is managed to somebody outside of your business.

The market has changed. I think there was an alignment of the stars in 2011: if people needed reminding that cat models weren’t perfect, that was the change point for me. That was when we saw much more of the market saying, ‘I must understand and own my own view of risk.’

Conor McMenamin, RenaissanceRe: We would not be comfortable outsourcing a part of that risk understanding to a third-party, even one that is bringing the most up-to-date science to bear on the problem. There are inherent uncertainties in the models we use – the risk is that outsourcing the opinion piece leads to distortions in the decision-making process. There have been cases in the past of insurers optimising into holes in a model – maybe concentrating insured values in areas where their model under-represented risk, or using secondary modifiers in an inappropriate way – the best outcome you can then hope for is that a model change acts as a wake-up call. If a common sense view is applied to the process, alongside the best science, this type of problem becomes much less likely.

Rob Stevenson, Kiln Group: We have a responsibility to our capital providers to quantify the risks we are taking and to set our own risk appetite. When writing catastrophe business the cat models provide a benchmark, but modelling is an incredibly difficult thing to do. This is demonstrated when credible model vendors generate different honest answers to the same question; there is no one right answer. It is therefore our responsibility to choose a view, which can end up differing from any vendor’s.

David Lightfoot, Guy Carpenter: Cat model results are typically an integral part of internal capital models, and as those models are used to inform an increasing array of business decisions, the importance of management confidence in the cat modelling results is increasing accordingly. In other words, the more confident a company feels about the inputs into the cat model and the results the model produces, the more confident it is going to be in using those results to support strategic decisions.

Rob Stevenson, Kiln Group: We also need to mitigate large changes in cat models. We have seen that when a company is highly calibrated to a third-party model, the financial impact from model changes can be significant.

Paul Miller, Aon Benfield: I remember 10 years ago being at a conference when one of the modelling firms announced some significant changes and we saw underwriters get up from their chairs and run to the phones, because this was going to have such an influence on their business. Thankfully, that is not the case anymore. I think 2011 was a nail in the coffin for the attitudes of some people, who were throwing their hands up in the air in frustration, questioning how they could run a business from a model that changes by 50% year on year.

Paul Little, Eqecat: We’ve been big proponents of companies owning their view of risk. Having clients across the spectrum – insurers, reinsurers and brokers – I’d say reinsurers have more rapidly developed pricing and aggregation tools that allow them to own their unique view; they have made the investment in research to understand the key risk drivers and how the vendor models treat those elements. Reinsurers have invested heavily in creating data warehouses so that they have their own version of a market-wide or industry portfolio.

One really needs to understand the nature of the policy conditions and understand what the exposures are, including the non-modelled exposures. So, when certain reinsurers did not react as much as others to significant model changes, it was because they had already made their own adjustments. Apparently those firms understood a little bit more about the strengths and weaknesses of certain models.


Rob Stevenson, Kiln Group: As Paul alludes to, this is going back to basics. Forget the probabilistic view for a moment. Ten to 15 years ago collecting detailed policy and location data was a real challenge. Nowadays the challenge is being able to use that data to understand in more detail the profile of your portfolio. The data has improved partly due to the requirements and capabilities of cat models. I think over the last few years, people in the North American and European markets have been collecting data much more effectively.

The modellers do huge amounts of work in building industry exposure data sets and using them around the market. The probabilistic view keeps changing all the time, but the thing I know won’t change, because it’s simple maths in effect, is adding up pure exposures. I want to know what my exposures are, where they are and how much they are worth. In many cases that is more valuable than a probabilistic view.
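To make that "simple maths" concrete, the sketch below sums total insured value (TIV) by country and peril from a location-level exposure schedule. It is illustrative only; the file name and column names are hypothetical rather than any real schema.

```python
# Minimal sketch: aggregate total insured value by (country, peril) from a
# location-level exposure file. Column names are assumed, not a real standard.
from collections import defaultdict
import csv

def aggregate_exposure(path):
    totals = defaultdict(float)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            key = (row["country"], row["peril"])
            totals[key] += float(row["tiv"])
    return dict(totals)

if __name__ == "__main__":
    # Print the largest accumulations first.
    for (country, peril), tiv in sorted(
        aggregate_exposure("locations.csv").items(), key=lambda kv: -kv[1]
    ):
        print(f"{country:12s} {peril:12s} {tiv:18,.0f}")
```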

Paul Miller, Aon Benfield: We’ve also seen an upsurge in requirements for scenario modelling. So, for example, what would a flood scenario do in this particular area? People are saying they’ve had enough of these black box solutions they don’t fully understand; let’s go back; let’s get our exposure right; let’s test it with some scenarios. They are going backwards in order to leap forward.
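As a rough illustration of the kind of scenario test Paul describes, the snippet below applies a hypothetical flood footprint – assumed mean damage ratios by postcode district – to exposed values. All names and figures are invented for illustration, not drawn from any model.

```python
# Illustrative deterministic scenario: scenario loss = damage ratio x exposed TIV.
footprint = {"AB1": 0.15, "AB2": 0.08, "AB3": 0.02}                  # assumed damage ratios
exposure  = {"AB1": 120e6, "AB2": 310e6, "AB3": 95e6, "CD4": 500e6}  # TIV by postcode district

scenario_loss = sum(exposure.get(zone, 0.0) * dr for zone, dr in footprint.items())
print(f"Scenario gross loss: {scenario_loss:,.0f}")  # 44,700,000 on these made-up numbers
```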

Justin Davies, Eqecat: One thing that I’d be interested in is whether or not those companies with a smaller R&D department will need to grow that, because they need to do all this research. So, are they going to have to grow or be left behind?

Rob Stevenson, Kiln Group: I would like to think that there will be software providers who see the opportunity to provide firms with flexibility, so that they’re able to answer those questions, but yes, to fully validate a US hurricane model or an earthquake model, you’re probably going to need some qualified people working on that.

Have we got to the stage now where it’s no longer driven by regulation, for example, and there is a more strategic desire to understand your exposures? 

Conor McMenamin, RenaissanceRe: There are now external pressures on reinsurers to do this, and not just from regulations such as Solvency II, but also from rating agencies. That’s something we generally welcome, but it does have its pitfalls. We have a strong belief that risk is a cultural issue; we look to develop and own our view of risk because that in turn allows us to manage – and price – risk in the best way possible. But historically the level of risk ownership and risk understanding has varied from company to company; one possible concern is that the market ends up being pushed towards a lowest-common-denominator approach to risk ownership – one that does just enough to tick the regulator’s boxes. Without the right motivations, namely the genuine need or desire to truly understand and capture the risk, companies will not realise the potential benefits. In the right culture the desire to own the view of risk comes from within.

Paul Miller, Aon Benfield: People complain about the onerous nature of Solvency II and regulation, but I think one of the things that the regulator should be recognised for is that they have accelerated the desire and the need to better understand your exposure, to better understand your decisions, to better understand your view of risk, and so that’s why I say it’s a mix of strategy and regulation. The regulators have had a positive role to play, but it’s not just about regulation.

David Lightfoot, Guy Carpenter: I think the regulators and the rating agencies have been looking at how companies are enhancing their understanding of the results as part of their assessment of a company’s ability to manage its risks, so there is alignment in this respect.

Let’s talk about the benefits and the challenges associated with blending catastrophe models. 

Conor McMenamin, RenaissanceRe: We use multiple models, but we don’t ‘blend’ models per se. We use multiple models because we like to access a variety of views of any given risk; each view, each model, will have its own strengths and weaknesses which we seek to understand on an on-going basis. Looking at the output of the models separately can help us to understand where fundamental uncertainties exist, and protects us against optimising into the holes in any one model.

Paul Miller, Aon Benfield: Blending has been sexy for the last couple of years. However, if you ask different people what they mean by blending, they’ll probably give you different answers. I worry in the first instance that blending for some people is nothing more than a way to get back to an answer that they want – i.e. they’re not blending models because they think that 70% of model A and 30% of model B gets them to a more scientifically robust answer.

If they’re doing it because that’s what they need to do to write a risk, then there’s nothing wrong with that, as long as they understand what they’re doing. However, blending, in its simplest format, could be somebody throwing their hands in the air and saying: ‘I don’t know the right answer, so why don’t I take a midpoint?’ Or: ‘Why don’t I take a third, a third and a third across all three models? Then I’m aware of all views of risk.’

That approach doesn’t mean you understand the models or your risk. It is not the same as saying that you have selected the most scientifically robust model but think it has some issues requiring adjustments – that, to me, is blending. It’s about blending aspects of experience you believe to be accurate with a core model chosen as the most scientifically robust, and then adjusting for components you are uncomfortable with or feel are missing.

I think part of our role as brokers is to provide all views to our clients, but I don’t agree that just vaguely knowing those models and taking a point between them can be a better solution than understanding the models, picking the most robust as your start point, and then forming your view of risk around it. I think that’s the stronger strategy.

David Lightfoot, Guy Carpenter: As we progress in this space, we will likely be able to say things like vendor A has a really good correlation approach to European wind events, vendor B has a preferred vulnerability approach, and then use this knowledge to recalibrate vendor C’s model.

So, it’s an evolving process, but I think what’s really important is that companies develop a framework that they can then use to support their blending decisions – so it’s not ‘a third, a third, a third’. Model blending is a journey where companies should be able to say, ‘at this point in time, this is our best view’, and be able to confidently discuss that view with their stakeholders.
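For readers unfamiliar with the mechanics, the sketch below makes the "70/30" arithmetic concrete by blending modelled losses at fixed return periods. Model names, losses and weights are invented; as the panel stresses, weights should come from a documented validation framework, not from a desire to land on a preferred answer.

```python
# Hypothetical loss-based blending at fixed return periods (losses in millions).
return_periods = [50, 100, 250]
model_losses = {
    "model_A": {50: 180.0, 100: 260.0, 250: 410.0},
    "model_B": {50: 150.0, 100: 300.0, 250: 520.0},
}
weights = {"model_A": 0.7, "model_B": 0.3}   # illustrative weights only
assert abs(sum(weights.values()) - 1.0) < 1e-9

blended = {
    rp: sum(weights[m] * model_losses[m][rp] for m in weights)
    for rp in return_periods
}
print(blended)  # {50: 171.0, 100: 272.0, 250: 443.0}
```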

Is the black box approach to modelling, characterised by lack of transparency, now a thing of the past? 

David Lightfoot, Guy Carpenter: No, as long as companies are building a business around licensing cat modelling results, there’s going to be some intellectual property to protect. I think the industry understands that. What the industry wants is more transparency around the key assumptions and key bits of information to better evaluate and test the model, so that they can get the comfort with regards to the results that they’re looking for.

Paul Little, Eqecat: It depends on the components of the model. Certainly on the hazard side I think there’s a lot more openness to share that information. There may be very specific research that a cat model vendor has done that informs its view of a particular hazard, which it may consider intellectual property that has to be protected. From a hazard perspective, I think the black box approach has gone, but less so for vulnerability.

Paul Miller, Aon Benfield: I think we all agree that there’s been a positive cultural change in the market towards better understanding the models. The users of these solutions need to invest in better understanding, and the vendors need to be comfortable with greater transparency around their tools. I think the vendor firms are being more open than they ever have been, but they are each in a different place in terms of transparency.

What matters is that we understand where our comfort levels are, through a level of understanding, and we can adjust for it. Nobody’s looking for a perfect cat model because it doesn’t exist. What we’re looking for is a model that we fully understand and we can adjust for the things that we disagree with.

Justin Davies, Eqecat: There’s certainly been a huge change in the culture of the company and our openness about transparency in the last two years. We’ve got a way to go, we realise that, to really satisfy the market, but we are definitely moving in the right direction.

What other lessons have modellers learnt from cat losses in the past few years, 2011 in particular? 

Paul Miller, Aon Benfield: Part of the role of a cat model user is to take ownership of what the model does and what it doesn’t do. What we saw in 2011 was that an awful lot of things were not included – tsunami, for example – while liquefaction wasn’t fully covered. So those events definitely opened the eyes of users and pushed them to start from simplicity. They were prompted to think: ‘Let me list all of the areas that I can get a loss from. Let me do some research into that. What does the cat model cover, and what doesn’t it? And for the things that it doesn’t do, is there somewhere else I can look?’ Drawing on research is an important part of it – asking where else I can look for evidence of this element of loss that may have occurred elsewhere.

Rob Stevenson, Kiln Group: If you went back 10 years and took the risks that you’ve written today, and ran them in the older models, your modelled losses would be significantly lower than they are today. So the cat models are now more reflective of actual losses than they were 10 years ago, but we will always have to respect the limitations of the model.

Responding with industry loss numbers soon after a major event is of value. However, it seemed to become a competition among vendors to get the industry loss number out first, when we know it takes two or three months to work out what a loss is actually going to cost. The actual Sandy industry loss was still being finalised three months later, so for someone to come out with an accurate loss figure the day after an event is ambitious. You’ve got to get loss adjusters in to assess the loss, and that takes time.

A claims department will tell you it takes three months before you actually get a figure within 5% or 10% of the ultimate cost. So, I wish that the vendors would take their time when they produce such estimates, because people latch on to that number, rather than reading the caveats.

Paul Miller, Aon Benfield: Look at what happens to your share price when you get your number wrong, because you have a problem if investors lose faith in your understanding of the business.

David Lightfoot, Guy Carpenter: I think it’s helpful to have a sense of the total insured loss relatively soon after an event. Some may use that number in decisions without an appropriate understanding of the inherent uncertainty in it. From an individual company’s perspective, being prudent, feeling confident about the loss estimate and providing a sense of the associated uncertainty is the right way to go.

Rob Stevenson, Kiln Group: The worst thing to do is to post a number and then keep moving it up and up and up. The numbers posted for Sandy on day one or two were significantly different from where the loss ended up. It seems to happen on all the major events.

Paul Little, Eqecat: I do appreciate the comments about how that number’s latched onto by investors and generally the media. It can whip everyone into a frenzy and have implications for companies. As we look at what the exposures are in the area, we talk about the uncertainty associated with the amount of exposure and then the damage rates that we are assuming, based on the footprint of the event itself.

When we go through that exercise, we do come up with a range, and then we talk about all of the other factors that we’re not able to account for – certainly for commercial risk more so than residential: business interruption, loss of use and a number of other areas that policies cover that we’re not able to capture in a broad sense.

As we try to balance whether or not it makes sense to come out with a number early rather than wait, we see it as a service that we provide, that we are comfortable with the methodology that we use, and we disclose that. We do wait until after the event has occurred and we have some greater level of understanding of what has happened on the ground. I think over time we’ve proven to be reasonably good at our estimates. We have adjusted them based on new information that we discovered for Sandy. However we avoid the wide ranges alluded to earlier. We don’t see that as being very helpful.

Justin Davies, Eqecat: I think the market is quite polarised on this. You get a lot of people who say that they do want that number quickly. They want a good understanding of the number but they still want that figure.

Returning to the transparency issue, how much transparency do clients want and how much can they handle? 

Rob Stevenson, Kiln Group: There’s a broad spectrum of abilities across the industry. Some entities employ people just as qualified as those at the cat model vendors. These people are likely to want all the data. I think the vendors have to recognise that fact and provide it as required. But then, at the same time, some entities don’t have that level of resource and understanding within the company. This is where assistance from brokers has added value over the years.

Conor McMenamin, RenaissanceRe: The ability to handle it relates back to investment in expertise and research capability, but I think clients would all want as much transparency as they can get, even if they’re not able to leverage it. However, providing that costs vendors money; balancing the relationship between cost and benefit to users is probably the only way to resolve this to mutual satisfaction.

Paul, from previous conversations with Eqecat, I know you do make available large volumes of relevant information. Do you find in general that’s satisfying the clients’ demands or are you getting further requests for detail that you’re not already providing? 

Paul Little, Eqecat: With the release of RQE we have provided more detailed documentation than ever before, and that is to meet the needs of our clients. We do find that an even greater level of transparency is being asked of us. For example, there are a number of assumptions that we would not necessarily detail in all of our documentation, so there are opportunities for us to go down another level in terms of transparency. As to whether companies can handle it, there is this move towards everyone having their own R&D. We’ve made the investment and we’re willing and able to reveal all of the assumptions and underpinnings of the model – we consider this a strength and a key area of differentiation.

I understand why the brokers invest in analytics, but for other companies to make the same investment there is clearly a duplication occurring. They do need to own their own risk. They need to understand it, and they need to have a level of sophistication to consume the documentation that is given to them. But if the level of transparency gets to the point where companies are effectively creating their own models, a rating agency may look at a company and say: essentially, you are a modelling company in your own right, creating your own models. The implication is that they will then have the responsibility for continuing to update that view of risk. I don’t know if insurance companies are prepared for that ongoing investment over time.

Paul Miller, Aon Benfield: We acknowledge that there may be a different take across the risk carriers in the market, but the modelling firm needs to play to the area of greatest capability, not the lowest. If somebody can’t handle it today, then at least give them the ability to grow into that transparency over time, if they invest in more resources and more people. Part of this market does have the capability to understand and deal with the transparency that could and should be provided by the models.

Justin Davies, Eqecat: We have a certain balancing act in terms of the amount of information we make available. If we were to triple the size of those documents, would the clients thank us? We are heading to the point where we’re giving enough information for clients to get a really good understanding of the models, but as they want to go deeper, if they come back to us and ask those questions then we’ll answer them.

What are the benefits and challenges of Open Source cat modelling? 

Rob Stevenson, Kiln Group: If you’re moving away from the traditional vendor-built model and using Open Source as your primary model, it presents some operational challenges. If I have this Open Source framework, I have to find someone to give me my hazard and vulnerability data. In addition, I’ve got to build a correlation matrix across every peril in every country. That sounds quite onerous. If I want to model 40 countries in the world, it could in theory mean 80 separate contracts with providers. That is very complicated and would be a sea change from where we are today.

Conor McMenamin, RenaissanceRe: One clear benefit of something like Oasis is that it can provide more niche models with a route to market. An academic with a great occurrence model for Indian flood is not going to be able to create their own modelling platform, but an open model framework can provide them with a way to get people using their work. Open Source approaches should also deliver increased transparency, although that’s likely to have a cost associated with it for the client, at least in terms of understanding the open aspects of the platform. And any model provider adds to the diversity of risk views that are available; we see that as a major benefit.

Paul Miller, Aon Benfield: I think transparency comes before Open Source. I think we mean open environment. We’re not asking for source code, we’re asking for the ability to understand. So, transparency to me plays a bigger role in understanding the strengths and weaknesses of cat models than an open environment or Open Source solution possibly does. I think where it does come in is potentially it leads to a bigger engagement with the scientific community, which plays an important role here. Being able to reach out more broadly and connect with other experts in the field is to be seen as a positive thing.

Rob Stevenson, Kiln Group: I think the barrier it throws up is you’re dealing with people who aren’t necessarily familiar with our industry, whereas our current vendors are pretty familiar with it. How is that communication going to take place?

Paul Miller, Aon Benfield: I think it’s another one of those buzzwords, like blending. I’m not sure everyone entirely understands what they expect from an Open Source solution. There are various vendor models, and now suddenly there are going to be various Open Source platforms. With market solutions like Oasis on the horizon, there are vendor firms creating what they’re calling Open Source solutions. If I’m using a cat model in Europe and an alternative model in the US, and then I’ve got my financial model, the idea of being able to feed data from one into all of those solutions, or take outputs from your preferred solution and feed them into another – all of that excites me. That’s efficiency, and if that’s what Open Source will allow us to do, then great.

Rob Stevenson, Kiln Group: From an underwriting perspective you want an exposure management platform of which cat modelling forms a part. So, you’ve just got one place for your exposure data to reside and be analysed from. Having things chopped up in different systems is inefficient. So, while I know we’re talking cat modelling here, it’s also exposure management. Don’t separate the two.

Justin Davies, Eqecat: Open Source certainly has its place. There are certainly country perils where we cannot build a business case to go ahead and develop a model – the Open Source solution for that is there. There will be someone out there who will build that model and that will therefore be of benefit to the market.

Does storing and sharing data via a cloud offer opportunities for new generations of flexible, open and transparent models? 

Paul Little, Eqecat: We install our models in our clients’ environments, and whether that is in their office or in a private cloud is not material to us. What is material, to the extent that there is a cloud environment our client doesn’t own, is the question of support associated with updates, and things like downtime, the security of the IP within our models and, of course, client information. On balance I can see huge benefit in terms of speed and cost savings for deployment if it’s in a cloud environment.

There is a lot of work associated with the clients’ use of the model. In this case I don’t mean the model’s interpretation, but interactions over time between the model and client data. That needs to be addressed, and if a third party is involved in any way, we need to figure out whose responsibility it is to sort out whatever the issue may be.

Conor McMenamin, RenaissanceRe: There’s no doubt that centralising maintenance is likely to make the model vendor’s life easier – but centralised infrastructure is a double-edged sword, in that the reliability of that infrastructure reflects directly on the vendor. It will make it easier to roll out upgrades and enhancements although, as we have seen recently, not all enhancements will be received as such by every client. In terms of openness and transparency, a model can only be open and transparent if the information to understand the components of the model is available to the client; the same outcomes could be achieved without the cloud if there is sufficient trust between the vendor and the client.

David Lightfoot, Guy Carpenter: I see one of the biggest benefits of moving to a cloud computing environment as speed – the time it takes from inputting exposure to getting results – because pricing algorithms are increasingly dependent on the ability to frequently model the marginal impact on capital of the risk being underwritten.
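To illustrate the marginal-impact idea David describes, the sketch below compares a tail metric (TVaR at the 99th percentile) on a portfolio with and without a candidate contract, using simulated year-loss vectors. The distributions and figures are invented for illustration and are not any vendor's method.

```python
# Hedged sketch: marginal tail-capital impact of adding one contract to a book.
import numpy as np

def tvar(losses, q=0.99):
    # Mean loss in the worst (1 - q) share of simulated years.
    threshold = np.quantile(losses, q)
    return losses[losses >= threshold].mean()

rng = np.random.default_rng(7)
portfolio_years = rng.lognormal(mean=16.0, sigma=1.0, size=100_000)  # existing book (assumed)
contract_years  = rng.lognormal(mean=13.0, sigma=1.5, size=100_000)  # candidate risk (assumed)

marginal_capital = tvar(portfolio_years + contract_years) - tvar(portfolio_years)
print(f"Marginal TVaR(99) impact: {marginal_capital:,.0f}")
```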

Paul Miller, Aon Benfield: If I’m modelling a portfolio for my client, my understanding of cloud is that I will in theory be able to grant rights to that model output to the relevant parties who could then drag that down, look at the assumptions that have been made, which are transparent, and that ability has to lead to greater efficiency. There’s just that big caveat at the start: that clients need to consent to their data being in a cloud.

Rob Stevenson, Kiln Group: At the moment, limits around technology stop us from doing as much as we would like. If this is the change that enables us to free our imaginations, then great, but the cost and efficiency of the infrastructure needs to be clarified.
