Embracing artificial intelligence responsibly in the insurance industry

Developing a responsible AI framework will be crucial

August 08, 2024

Key takeaways

New rules require insurance companies to disclose the use of AI in decision-making processes.

Addressing biased and discriminatory practices is a top priority for industry leaders.

AI governance frameworks should focus on policies/procedures, collaboration and risk management.


Prioritizing the responsible use of artificial intelligence allows the insurance industry to navigate complexities, mitigate risks and unlock potential for innovation and growth. AI's rapid transformation of the insurance landscape offers immense opportunities and significant risks, particularly in underwriting, claims processing and customer service.

Addressing biased and discriminatory practices is now a top priority for regulators and industry leaders. For many insurance companies using AI, developing a responsible AI framework will be crucial for maintaining fairness and transparency. 

Key regulations and their impact

Recently, numerous states have introduced regulations to ensure AI technologies in the insurance industry are transparent, fair and accountable. These new rules require insurance companies to disclose the use of AI in decision-making processes and provide evidence that their algorithms are free from bias. This shift is both a compliance issue and a fundamental business imperative.

States are implementing robust measures to promote responsible AI. The California Consumer Privacy Act, for example, mandates that insurance companies disclose their use of AI in decisions about pricing and coverage, with noncompliance resulting in significant fines. This underscores the importance of integrating responsible AI practices into workflows.

TAX TREND: Artificial intelligence

Insurers may develop AI software themselves or license it from a third party. The tax and accounting implications of each approach will factor into overall cost-benefit analyses.

A business that licenses software usually deducts each year's license expense from its taxable income in that taxable year. Conversely, the tax treatment of software development expenses became less favorable in 2022, when a law change took effect requiring those costs to be capitalized and amortized over several years rather than deducted immediately.
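The difference in first-year deductions can be sketched with simple arithmetic. The figures, the five-year amortization period and the half-year convention below are illustrative assumptions for comparison only, not tax advice:

```python
# Illustrative comparison (hypothetical figures): first-year deduction for
# $100,000 of software costs, licensed vs. developed in-house.
license_cost = 100_000          # license fees: deducted in the year paid
development_cost = 100_000      # development costs: capitalized and amortized
amortization_years = 5          # assumed straight-line amortization period

license_deduction_year1 = license_cost
# Assumes amortization begins at the midpoint of year one, so only half
# a year's amortization is deducted in the first year.
dev_deduction_year1 = development_cost / amortization_years / 2

print(license_deduction_year1)  # 100000
print(dev_deduction_year1)      # 10000.0
```

Under these assumptions, the developer deducts only a tenth of what the licensee deducts in year one, which is why the cost-benefit analysis now weighs the two approaches differently.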

Similarly, in New York, the Department of Financial Services requires insurers to perform regular bias audits on AI models, submit detailed transparency reports and inform consumers when AI influences underwriting, pricing or claims decisions. These measures enhance compliance and build consumer trust, a critical component in maintaining the industry's reputation.

Washington and Texas have introduced similar measures focusing on regular audits, detailed reporting and stringent accountability mechanisms. Insurers must maintain comprehensive records of their AI processes and outcomes, ready for regulatory review. This approach ensures that biases are quickly identified and addressed, safeguarding consumer interests and fostering a more equitable market.

Beyond state regulations, the National Association of Insurance Commissioners has issued a model bulletin that emphasizes insurer accountability for third-party AI systems. The bulletin outlines expectations for transparency, fairness and compliance, encouraging insurers to establish robust governance frameworks to manage third-party relationships. Additionally, the Federal Trade Commission and the Consumer Financial Protection Bureau hold companies accountable for the actions of their third-party AI systems to prevent unfair or deceptive practices.

CONSULTING INSIGHT: Artificial intelligence governance services

AI is a powerful tool that can transform your business. However, it can also introduce new risks without proper controls. Effective governance is a critical element of a successful AI strategy, ensuring outputs are unbiased and aligned with business strategies and regulatory guidelines. Learn how to develop a comprehensive approach for responsible AI adoption.    

Insurers developing an AI governance framework would do well to focus on three key areas: policies and procedures, collaboration, and risk management.

  1. Policies and procedures
    It is mission critical to establish clear, comprehensive guidelines for data management, algorithm transparency and ethical considerations, including standards for developing, deploying and monitoring AI systems. These policies should not only ensure compliance with privacy laws such as the California Consumer Privacy Act and General Data Protection Regulation, but also enforce strict adherence by third-party vendors. Incorporating regular reviews and updates to these policies can help insurers adapt to evolving technologies and regulatory landscapes. Empowering a dedicated team or third party to oversee these policies and procedures can ensure consistent application across all AI initiatives, fostering a culture of responsibility and ethical AI use.

  2. Collaboration
    Collaboration across the functions of the business ensures the AI governance framework benefits from diverse perspectives and expertise. For example, IT provides technical insights, compliance ensures regulatory adherence and the legal department addresses risks and ethical considerations. Cross-functional collaboration fosters continuous improvement and innovation, enabling departments to share best practices and develop efficient AI solutions.

    Training employees in responsible AI practices and ensuring all stakeholders understand their roles is crucial. Effective communication and training programs reinforce these guidelines, ensuring everyone from executives to front-line employees maintains high standards for AI governance.

  3. Risk management
    A comprehensive strategy should integrate AI risk management into the overall enterprise framework. This includes identifying potential risks associated with AI deployment, continuously monitoring those risks and establishing mitigation plans to address issues as they arise. Regular bias audits are essential to verify compliance and uncover vulnerabilities, and a tracking system helps ensure that risks are promptly identified and mitigated.
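A recurring bias audit of the kind described above can be sketched with a simple fairness metric. The applicant groups, approval decisions and 0.1 threshold below are illustrative assumptions, not regulatory standards; real audits use richer metrics and legal guidance:

```python
# Illustrative bias audit check: demographic parity difference between
# approval rates for two hypothetical applicant groups (1 = approved).
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],
}

def approval_rate(outcomes):
    """Share of positive decisions in a group."""
    return sum(outcomes) / len(outcomes)

rates = {group: approval_rate(o) for group, o in decisions.items()}

# Demographic parity difference: gap between the highest and lowest
# group approval rates; larger gaps suggest potential disparate impact.
parity_gap = max(rates.values()) - min(rates.values())

# Hypothetical review threshold: flag gaps above 0.1 for human follow-up.
flagged = parity_gap > 0.1
print(rates)                    # {'group_a': 0.75, 'group_b': 0.375}
print(round(parity_gap, 3))     # 0.375
print(flagged)                  # True
```

In practice a check like this would run on each model on a fixed schedule, with flagged results feeding the mitigation plans and regulatory records described above.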

A critical shift

The transition toward responsible AI in the insurance industry is both necessary and advantageous. Regulations are setting new standards for ethical AI use, which protect consumers and promote trust. As more states adopt these practices, the industry will see a broader shift toward ethical and responsible AI usage. By prioritizing responsible AI, the insurance industry can navigate complexities, mitigate risks and unlock significant potential for innovation and growth, ultimately building stronger, more equitable products and services.


Where are you on your AI journey? 

With services for AI strategy and governance, generative AI, predictive data analytics and more, RSM can help no matter where you are in your AI adoption journey.