New rules require insurance companies to disclose the use of AI in decision-making processes.
Addressing biased and discriminatory practices is a top priority for industry leaders.
AI governance frameworks should focus on policies and procedures, collaboration, and risk management.
AI's rapid transformation of the insurance landscape offers immense opportunities and significant risks, particularly in underwriting, claims processing and customer service. Prioritizing the responsible use of artificial intelligence allows the insurance industry to navigate these complexities, mitigate risks and unlock potential for innovation and growth.
Addressing biased and discriminatory practices is now a top priority for regulators and industry leaders. For many insurance companies using AI, developing a responsible AI framework will be crucial for maintaining fairness and transparency.
Recently, numerous states have introduced regulations to ensure AI technologies in the insurance industry are transparent, fair and accountable. These new rules require insurance companies to disclose the use of AI in decision-making processes and provide evidence that their algorithms are free from bias. This shift is both a compliance issue and a fundamental business imperative.
States are implementing robust measures to promote responsible AI. The California Consumer Privacy Act, for example, mandates that insurance companies disclose their use of AI in decisions about pricing and coverage, with noncompliance resulting in significant fines. This underscores the importance of integrating responsible AI practices into workflows.
Insurers may develop AI software themselves or license it from a third party. The tax and accounting implications of each approach will factor into overall cost-benefit analyses.
A business that licenses software usually deducts each year's license expense from its taxable income in that taxable year. Conversely, the tax treatment of software development expenses became less favorable in 2022, when a law change took effect requiring those costs to be capitalized and amortized over several years rather than deducted immediately.
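The cost-benefit difference between the two approaches can be sketched numerically. This is a simplified illustration, not tax advice: the $1 million spend and 21% rate are assumed figures, and the half-year convention in the first amortization year reflects the general post-2022 treatment of domestic development costs.

```python
# Illustrative comparison of year-one deductions: licensing (fully
# deductible as incurred) vs. capitalized development costs amortized
# over five years with a half-year convention in year one.
# All figures are hypothetical assumptions for this sketch.

def first_year_deduction_amortized(cost, years=5):
    """Year-one deduction under ratable amortization with a
    half-year (midpoint) convention: half of one year's share."""
    return cost / years / 2

cost = 1_000_000   # hypothetical development or license spend
tax_rate = 0.21    # illustrative corporate tax rate

immediate = cost                                   # expensed as incurred
amortized = first_year_deduction_amortized(cost)   # 1/10 of cost in year one

print(f"Year-one deduction, expensed:   ${immediate:,.0f}")
print(f"Year-one deduction, amortized:  ${amortized:,.0f}")
print(f"Year-one tax deferral at 21%:   ${(immediate - amortized) * tax_rate:,.0f}")
```

The gap is a timing difference, not a permanent one; the remaining deductions arrive in later years, which is why the choice feeds into a cost-benefit analysis rather than deciding it outright.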
Similarly, in New York, the Department of Financial Services requires insurers to perform regular bias audits on AI models, submit detailed transparency reports and inform consumers when AI influences underwriting, pricing or claims decisions. These measures enhance compliance and build consumer trust, a critical component in maintaining the industry's reputation.
Washington and Texas have introduced similar measures focusing on regular audits, detailed reporting and stringent accountability mechanisms. Insurers must maintain comprehensive records of their AI processes and outcomes, ready for regulatory review. This approach ensures that biases are quickly identified and addressed, safeguarding consumer interests and fostering a more equitable market.
Beyond state regulations, the National Association of Insurance Commissioners has issued a model bulletin that emphasizes insurer accountability for third-party AI systems. The bulletin outlines expectations for transparency, fairness and compliance, encouraging insurers to establish robust governance frameworks to manage third-party relationships. Additionally, the Federal Trade Commission and the Consumer Financial Protection Bureau hold companies accountable for the actions of their third-party AI systems to prevent unfair or deceptive practices.
AI is a powerful tool that can transform a business, but without proper controls it can also introduce new risks. Effective governance is therefore a critical element of a successful AI strategy, ensuring outputs are unbiased and aligned with business strategies and regulatory guidelines.
Insurers developing an AI governance framework would do well to focus on three key areas: policies and procedures, collaboration, and risk management.
The transition toward responsible AI in the insurance industry is both necessary and advantageous. Regulations are setting new standards for ethical AI use, which protect consumers and promote trust. As more states adopt these practices, the industry will see a broader shift toward ethical and responsible AI usage. By prioritizing responsible AI, the insurance industry can navigate complexities, mitigate risks and unlock significant potential for innovation and growth, ultimately building stronger, more equitable products and services.