
Artificial intelligence and risk management in the insurance sector

August 21, 2021
Financial services | Artificial intelligence | Insurance

A longer version of this article first appeared on Financier Worldwide.

Artificial intelligence (AI) adoption among insurers is accelerating rapidly, especially since the pandemic forced companies to move away from in-person interactions, appraisals and inspections. While insurance executives understand that AI is reshaping the competitive marketplace, many insurers still face significant roadblocks in harnessing the huge amounts of data locked in their legacy systems to realize the full value of AI technology.

Over the last decade, insurance companies have focused more on data transformation initiatives in the interest of building a robust data foundation. The richness of data collected by insurance companies goes beyond policy and claims data and has been further enhanced by the rise of telematics, usage-based insurance products and Internet of Things (IoT) devices. To operationalize this data, companies are beginning to deploy modern, self-service analytics platforms to empower business users.

This evolving data ecosystem creates a perfect breeding ground for innovation in AI and machine learning (ML) across many operational insurance processes, including underwriting, pricing, claims and more. Insurance companies that tap into this potential can set themselves up for future success.

Augmenting risk management

AI is transforming risk management, particularly in the areas of claims and underwriting. Insurers are using AI to perform risk management tasks, such as recognizing underwriting risks and detecting fraud, more effectively. Some insurers, for instance, are leveraging AI's natural language processing and advanced analytics capabilities to extract pertinent risk information from emails, helping them identify underwriting risks and optimize risk selection.
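
As a rough illustration of the kind of text-mining step involved, the sketch below flags risk-related terms and named entities in an email body. The keyword list is hypothetical and the open-source spaCy library is used purely for illustration; it is not tied to any particular insurer's solution.

    # Illustrative sketch: flag underwriting-relevant signals in an email.
    # Assumes spaCy and its small English model are installed:
    #   pip install spacy && python -m spacy download en_core_web_sm
    import spacy

    # Hypothetical keywords a carrier might associate with elevated risk.
    RISK_TERMS = {"flood", "prior claim", "vacant", "asbestos", "wildfire"}

    nlp = spacy.load("en_core_web_sm")

    def extract_risk_signals(email_body: str) -> dict:
        """Return named entities and risk keywords found in an email body."""
        doc = nlp(email_body)
        entities = [(ent.text, ent.label_) for ent in doc.ents]
        lowered = email_body.lower()
        keywords = sorted(term for term in RISK_TERMS if term in lowered)
        return {"entities": entities, "risk_keywords": keywords}

    print(extract_risk_signals(
        "The property at 12 Main St was vacant last year and sits in a flood zone."
    ))

In practice, output like this would feed an underwriter's workbench rather than drive decisions on its own.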

AI and ML techniques can also have profound applications in corporate functions that are continuously looking to technology to enhance the efficiency and accuracy of their processes. Here are three examples:

  • Finance departments can leverage AI-enabled tools like intelligent automation to perform tasks like order processing, journal entries and reconciliations.
  • Risk management departments can use ML techniques to move from monitoring lagging performance metrics to uncovering forward-looking key risk indicators (see the sketch after this list).
  • Actuarial departments continue to make significant advancements in customer segmentation, pricing and reserving models with ML techniques.
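
To make the shift from lagging metrics to forward-looking indicators concrete, here is a minimal sketch that scores the probability of a future adverse event from leading indicators. The column names, data and use of scikit-learn are illustrative assumptions only, not a prescribed approach.

    # Illustrative forward-looking key risk indicator (KRI):
    # instead of reporting last quarter's outcome (a lagging metric), fit a
    # simple model that scores the likelihood of a future adverse event.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    # Hypothetical history of leading indicators and the outcome observed later.
    history = pd.DataFrame({
        "open_complaints":      [2, 9, 1, 7, 3, 8, 0, 6],
        "staff_turnover_pct":   [4, 15, 3, 12, 5, 14, 2, 11],
        "adverse_event_next_q": [0, 1, 0, 1, 0, 1, 0, 1],
    })

    X = history[["open_complaints", "staff_turnover_pct"]]
    y = history["adverse_event_next_q"]
    model = LogisticRegression().fit(X, y)

    # Score the current position to get a forward-looking KRI.
    current = pd.DataFrame({"open_complaints": [5], "staff_turnover_pct": [10]})
    print("Probability of an adverse event next quarter:",
          round(model.predict_proba(current)[0, 1], 2))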

Risk managers can use the predictive power of AI to improve decision-making, boost productivity and reduce the frequency and severity of allocated loss adjustment expenses in claims. Insurance companies that invest in AI technologies focused on risk control and mitigation can achieve a reduction in insurance claims and create a sustainable, long-term competitive advantage.

Potential challenges

Challenges around adopting AI in the insurance sector often begin with planning and strategy. It is important to think of AI not as a specific solution in itself, but as a tool. If business leaders are not intentional, employee enablement and customer value can get lost in the shuffle during implementation.

Access to quality data can also be a challenge when implementing AI into existing systems. Most insurance companies still operate with one or more legacy information technology systems, often the result of prior mergers and acquisitions or the large investment required for system modernization. For many leadership teams, it can feel daunting to know where to start. Overcoming the data quality challenge is not a trivial exercise: historically, companies have made significant investments in data foundation and transformation initiatives to extract data from legacy IT systems and transform it into usable data assets.

However, converting legacy systems to harness historical data should not impede implementing an AI strategy. When implementing AI into business processes, insurance companies should focus on developing solutions for processes found in modern insurance technology stacks, where data recency is critical and legacy data is less relevant.

Shifting liabilities

Many organizations realize that using AI solutions could create ethical, technological or regulatory issues that may jeopardize their brand. Ultimately, human oversight is needed at critical business decision points involving AI.

There are two main liability scenarios as the shift to machine-run processes continues—discriminatory underwriting actions and biases in claim settlement. Here is a look at each:

  • AI technologies and discrimination in underwriting processes: From both a risk selection and pricing standpoint, insurance companies must monitor process results for unintended biases. Say, for example, an ML algorithm is used to set the price for a given risk. The model's input data may fully comply with regulatory guidelines and restrictions on the use of characteristics like race, gender and creditworthiness. However, the model's output may still be biased with respect to one of these characteristics if combinations of other variables serve as a good proxy for the restricted variable, leading to compliance issues.
  • AI technologies and biases in claims processes: Fraud detection models in claims are often touted as one of the leading use cases for AI in insurance. However, these models can exhibit biases of their own and, without appropriate human intervention, customers could have a meritorious claim denied by a machine.

In both liability scenarios, a robust AI monitoring program can help by continuously testing for unintended model biases.
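
One simple form such monitoring can take is a recurring check that compares model outcomes across groups defined by an attribute the model was never given. The sketch below is a minimal illustration using hypothetical data and column names; a production fairness test would use established fairness metrics and statistical significance checks.

    # Illustrative bias-monitoring check (a sketch, not a production fairness test).
    # Assumes a hypothetical scored claims file with a model fraud flag and a
    # protected attribute that was NOT used as a model feature.
    import pandas as pd

    def flag_rate_gap(scored: pd.DataFrame, group_col: str,
                      flag_col: str = "fraud_flag") -> pd.Series:
        """Compare the share of claims flagged as fraud across groups."""
        rates = scored.groupby(group_col)[flag_col].mean()
        print(f"Flag rate by {group_col}:\n{rates}\n"
              f"Max gap: {rates.max() - rates.min():.3f}")
        return rates

    # Hypothetical example data: a real program would pull recently scored claims.
    scored_claims = pd.DataFrame({
        "fraud_flag": [1, 0, 0, 1, 1, 0, 0, 0],
        "group":      ["A", "A", "A", "A", "B", "B", "B", "B"],
    })
    flag_rate_gap(scored_claims, group_col="group")

A large or widening gap would trigger human review of the model and its inputs rather than an automatic decision.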

In the years ahead, we will see a fundamental change in the role of risk management as humans and machines work collaboratively. Oversight and risk controls will be built into systems and continuously monitored by virtual risk agents, while humans oversee the systems and algorithms themselves.
