The march of emerging technologies is transforming labour markets, increasingly taking over lower-skilled jobs in developed countries and reshaping tomorrow’s career paths. While some of the parallel forces of market globalisation may these days be resisted by protectionism, technological progress cannot be held up so easily. Insurance, reflecting the markets it serves, is not immune to the trend, and some parts of the sector are even moving into the driving seat.
People tend to cost more to employ than machines, are not capable of 24/7 operation, and tend to make more mistakes. Automated underwriting remains a distant prospect for the less commoditised forms of insurance, particularly the more sophisticated big-ticket commercial and specialty lines, where relationships still count. However, insurers’ claims functions are using cognitive technology with wide-reaching potential applications, experimenting with artificial intelligence (AI) bots to achieve robotic process automation (RPA).
Take some news from Japan, a country associated with leading technology and robotics industries. There, Fukoku Mutual Life Insurance is already using AI software for RPA, replacing human workers for calculating claims payouts to its policyholders. Also in Japan, Dai-Ichi Life Insurance is using a similar system – both bots are derived from IBM’s Watson Explorer – to automate payments. Japan Post Insurance is reportedly looking at doing the same.
“There’s increased interest in using robotic process automation in outsourcing technology to do relatively mundane activities that don’t require humans – software bots, basically,” says Peter Dickinson, a corporate partner in legal firm Mayer Brown’s London office and co-head of its technology transactions practice.
Fukoku’s system cost $1.8m. The insurer expects a 30% increase in productivity from automating its claims functions, a return on investment within two years, and a headcount reduction of some 34 employees, already achieved through redundancies.
Able to read reams of claims-submission data – unstructured text, medical certificates, images, audio and video – the technology aims to think like a human, making the same assessments through analysis, but far faster and more reliably than people ever could. A human manager still approves the machine’s work before any money changes hands.
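That workflow – an automated assessment step followed by a mandatory human sign-off before payment – can be sketched in a few lines of Python. This is a hypothetical illustration only: the scoring rule stands in for a real AI model such as Watson Explorer, and all names and fields are invented.

```python
# Hypothetical sketch of automated claims assessment with a human
# approval gate. The "AI" step here is a placeholder rule, not a model.
from dataclasses import dataclass

@dataclass
class Claim:
    claim_id: str
    documents: list              # unstructured text, certificates, images...
    assessed_payout: float = 0.0
    approved: bool = False

def assess_claim(claim: Claim) -> Claim:
    """Stand-in for the AI step: derive a proposed payout from the documents."""
    # Placeholder rule: a flat amount per supporting document.
    claim.assessed_payout = 100.0 * len(claim.documents)
    return claim

def approve_claim(claim: Claim, manager_signs_off: bool) -> Claim:
    """No money changes hands until a human manager approves the bot's work."""
    claim.approved = manager_signs_off
    return claim

claim = assess_claim(Claim("C-001", ["medical certificate", "invoice"]))
claim = approve_claim(claim, manager_signs_off=True)
```

The key design point, mirroring Fukoku’s reported setup, is that the automated step only proposes a figure; the approval flag is set exclusively by the human stage.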
“First and foremost companies need to consider the impact on the workforce for a number of careers,” Dickinson says. “It should be possible to do large parts of the auditing process using AI, for example. Do you need hundreds of audit professionals if you can implement technology solutions? Instead, you could reduce that number and redeploy skilled people to more rewarding tasks, as well as over time reducing the overall headcount.”
The IBM Watson platform is also being used for assessing supply chain risks in a different context. Logistics firm DHL meanwhile runs its own Resilience 360 automated system. This suggests that such systems could be used to make short work of big data problems, collecting and analysing data to find risk accumulations (explored in this month’s editor’s comment).
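At its simplest, finding risk accumulations is an aggregation problem: sum exposures across policies that share a common factor, such as a location, and flag where the total breaches a tolerance. The sketch below illustrates the idea only – it is not any vendor’s actual system, and the field names and figures are invented.

```python
# Illustrative risk-accumulation check: aggregate sums insured by
# location and flag locations exceeding a tolerance threshold.
from collections import defaultdict

def find_accumulations(policies, threshold):
    """Sum insured values per location; return locations over the threshold."""
    totals = defaultdict(float)
    for p in policies:
        totals[p["location"]] += p["sum_insured"]
    return {loc: total for loc, total in totals.items() if total > threshold}

portfolio = [
    {"policy": "P1", "location": "Rotterdam", "sum_insured": 40e6},
    {"policy": "P2", "location": "Rotterdam", "sum_insured": 35e6},
    {"policy": "P3", "location": "Hamburg",   "sum_insured": 20e6},
]
hotspots = find_accumulations(portfolio, threshold=50e6)
```

Real accumulation engines work across many dimensions at once (geography, industry, supply chain tier), but the underlying operation is this kind of grouped aggregation over large datasets.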
Who’s in control?
A technological shift towards automation changes the dynamics between insurers and their service providers. Dickinson charts a move towards output-focused contracts between firms and their outsourced providers. “It changes the way that firms contract with each other, for example regarding service levels in a contract, accuracy and availability,” he says.
Designing and developing cutting-edge AI bots in house is likely beyond the means of all but the biggest insurance companies. Consider this: 30 years ago fixing a mechanical appliance might be possible with a toolkit at home; whereas today, repairing a defective smartphone – or even understanding its inner workings – is beyond most people. Likewise, when outsourced functions were manual by nature, the delegating company had greater understanding of their workings and could exercise greater control over them.
“In the case of supplier-developed AI programs, for their insurance company clients there’s a risk of becoming reliant on a supplier which is different to the situation insurers found themselves in previously,” says Dickinson. “If an IT service provider develops a bespoke AI solution for a client, does that client need to stay with them for the long term?”
He thinks this might mean more focus on the need to future-proof contracts. Swapping between competing providers might become desirable for the same reasons firms are getting into AI and RPA in the first place: competitive cost savings, reliability problems with an incumbent, or a rival technologist offering cleverer bots.
“Clients need to maintain ownership of their data and processes, with an eye to when a contract terminates,” says Dickinson. “This is particularly true for claims management, where there’s a focus on the benefits that can be brought about by using RPA and AI in due course.”
Maintaining the ability to analyse data, while avoiding becoming hostage in a relationship to any particular provider, means being able to keep the ability to switch from one service provider’s system to another. Otherwise one might face compatibility problems comparable to those sometimes experienced when trying, for example, to migrate data from one spreadsheet format to a different file type used on a rival operating system.
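One common way to keep that switching option open is to export records into a neutral, agreed schema rather than a provider-specific format. The sketch below illustrates the idea under invented field names; the “common schema” here is a hypothetical agreement between insurer and supplier, not a real standard.

```python
# Hedged sketch of data portability: serialise only the agreed common
# fields to a neutral format (JSON), dropping provider-specific extras.
import json

COMMON_SCHEMA = ("claim_id", "insured", "amount", "status")

def to_portable(record: dict) -> str:
    """Export a claims record using only the common schema fields."""
    portable = {k: record[k] for k in COMMON_SCHEMA}
    return json.dumps(portable, sort_keys=True)

legacy_record = {
    "claim_id": "C-42", "insured": "Acme Ltd", "amount": 12500.0,
    "status": "open",
    "vendor_internal_ref": "xyz-99",  # provider-specific; dropped on export
}
exported = to_portable(legacy_record)
```

Because the exported form contains no provider-specific identifiers, a rival system can ingest it without translation work – which is the essence of avoiding lock-in.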
“We have clients doing this work, but it’s not the biggest issue,” says Adrian Rands, CEO of QuanTemplate, a technology firm that specialises in reducing this risk for insurers wanting to migrate from one provider to another, combining data from different sources. His firm works with managing agents, he notes, including on claims data.
He stresses that the ability to “harmoniously” analyse data, taken from one system to another, is the most important aspect of compatibility between the various parties and platforms potentially involved.
“The motivation here is a better service to clients, and for claims to be settled in the correct quantum,” Rands says. “As a result of that there’s a financial cost saving that can come from harvesting data from various TPAs [third-party administrators].”
Such TPAs are being used far more these days, and Rands notes that conflicting business objectives can be as much of a problem between TPAs and insurers as managing the technology involved. He says this with a nod to present soft market conditions, in which claims decisions are influenced by the desire to maintain lucrative long-term client relationships. Other soft market symptoms, such as greater use of managing general agents and delegated underwriting authorities, also encourage reliance on the services of the various TPAs.
“The motivations of insurers are usually holistic – the lifetime value of the relationship with the insured,” says Rands. “The claim itself needs to be measured on the relationship as much as the merits of each individual claim. A lower claims payout tends to be how a TPA’s performance is measured, alongside the usual service provider factors of efficiency and quality of service.”
Keeping some claims management in-house can also make sense – whether utilising an AI or human intelligence-led approach. “The smaller, simpler claims don’t need to use a TPA to manage them, as essentially the insurer is just paying out an extra commission fee,” suggests Rands.
Regulation is another factor coming to the fore for insurers considering putting their claims houses in order. One piece in particular is the European Union’s General Data Protection Regulation (GDPR), due to take effect in 2018, and therefore to be brought into force in the UK before Brexit can take place.
This piece of regulation, once brought into force, will have a lot to do with how firms invest in their claims management technology, as well as how they consider the potential sins of their TPAs and other outsourced tech service providers they may consider getting into bed with.
Like other EU regulatory updates, such as Solvency II, the rules are likely to stay after Brexit: any benefits of rolling them back would be offset by costs already incurred, the desire to maintain good-neighbourly equivalence between regulators, and the rules’ consumer protection purposes.
In this respect, the GDPR replaces a lighter regime under the UK’s Data Protection Act 1998. The penalties for breaking GDPR rules are much stricter: fines of up to €20m or 4% of annual worldwide turnover, whichever is higher. Rands notes that the UK is committed to a comprehensive data protection regime.
One important GDPR aspect is the “right to erasure”. Despite an intriguing title, this is neither a reference to 1980s electropop, nor a Robocop dictum to terminate a corrupt crime boss. It is, rather, the GDPR’s tightened replacement for the old “right to be forgotten” principle within UK data protection law.
“In other words, insurers need a system in place to remove a customer’s data,” says Rands. “It has an impact on the data architecture that insurers need to implement. A lot of insurance companies are not tooled up to deal with it, as they do not have a full handle on their underlying data.”
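In data-architecture terms, the requirement is that every store holding a customer’s records can be located and purged on request. A minimal sketch, assuming records are keyed by customer identifier in simple in-memory stores (real systems must also cover backups, logs and any copies held by TPAs):

```python
# Minimal right-to-erasure handler: remove every record held for a
# customer across all known datastores and report how many were purged.
def erase_customer(datastores: list, customer_id: str) -> int:
    """Delete customer_id's records from each store; return the count removed."""
    removed = 0
    for store in datastores:
        if customer_id in store:
            del store[customer_id]
            removed += 1
    return removed

crm = {"CU-1": {"name": "Acme Ltd"}, "CU-2": {"name": "Beta plc"}}
claims = {"CU-1": [{"claim": "C-9"}]}
erased = erase_customer([crm, claims], "CU-1")
```

The hard part in practice is the first step this sketch assumes away: knowing every datastore in which a customer’s records live, which is exactly the “full handle on underlying data” Rands describes.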
A related provision is that insurance firms will need to appoint in-house enforcers, in the form of data protection officers, if they have not already done so. Many firms have attached this responsibility to the chief compliance officer’s role, an arrangement the GDPR will soon change.
“The frameworks of many insurers need tightening, in terms of their data controllers and data processes”, Rands says, “particularly if extracting information from databases can become more problematic”.