Regulatory Trends: AI and Insurance

By Mark A. Sayre

NewsDirect, October 2021

The past few years have borne witness to a broad proliferation of artificial intelligence across the insurance industry. Insurance companies, especially InsurTechs, have already adopted AI to streamline underwriting, improve customer experiences, and predict fraud.[1] And while insurance companies are no strangers to regulation, the rapid expansion of AI in insurance practices has taken place in an almost entirely unregulated space. As of this writing, not a single regulation at the state or federal level specifically governs the use of artificial intelligence in the insurance sector.[6] But this regulatory wild west is set to change in the very near future, with multiple state legislatures and insurance departments actively researching and drafting new rules. In this article, I will outline a few of the more mature regulatory actions currently underway and briefly discuss possible impacts on insurance companies, with a particular focus on the life insurance sector.

Algorithmic Justice and Online Platform Transparency Act of 2021[2]

This bill, introduced in the Senate and the House on May 27, 2021, by Senator Markey and Congresswoman Matsui, respectively, proposes brand new requirements for entities that operate public-facing websites, online applications or services, or mobile applications, and that use algorithmic processes on such sites or applications to “withhold, amplify, recommend or promote content” to a user. An algorithmic process is defined as a “computational process, including one derived from machine learning or other artificial intelligence techniques, that processes personal information ... for the purpose of determining the order or the manner that a set of information is provided, recommended to, or withheld from a user. ...” Although the language used in the bill clearly corresponds to content recommendation engines such as those found on YouTube, Netflix, or Amazon, it could also plausibly apply to product recommendation algorithms or quoting algorithms found on insurer websites or in insurer mobile applications.
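To make the definition concrete, the sketch below shows a toy product-recommendation ranker of the kind this language could plausibly reach. It is purely illustrative: the product names, applicant fields, and scoring weights are invented, and Python is assumed only for the sake of the example.

```python
# A toy ranker of the kind that could fall under the bill's definition of an
# "algorithmic process": it processes personal information (here, age) to
# determine the order in which products are shown to a user.
# All product names, fields, and weights are hypothetical.

PRODUCTS = ["term-life-20", "whole-life", "accidental-death"]
BASE_SCORES = {"term-life-20": 1.0, "whole-life": 0.8, "accidental-death": 0.5}

def score(product: str, applicant: dict) -> float:
    """Illustrative scoring: boost the term product for younger applicants."""
    age_bonus = 0.3 if product == "term-life-20" and applicant["age"] < 40 else 0.0
    return BASE_SCORES[product] + age_bonus

def recommend(applicant: dict) -> list:
    """Order products for display; this ordering step is what the bill
    would require a platform to disclose to its users."""
    return sorted(PRODUCTS, key=lambda p: score(p, applicant), reverse=True)

print(recommend({"age": 32, "state": "ME"}))
# ['term-life-20', 'whole-life', 'accidental-death']
```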

The bill would require entities that use algorithmic processes to disclose those processes to the users of their sites, including the categories of personal information used by the algorithms, the manner in which the personal information is collected, how that information is processed by the algorithms, and the method the algorithm uses to weight or rank different categories of personal information. It also requires recordkeeping, in a de-identified manner, for up to five years of any information processed by algorithms. Companies that use algorithmic processes for content moderation or advertising placement face additional requirements specific to those practices.
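The bill does not spell out what de-identified recordkeeping should look like. Below is a minimal sketch of one plausible approach, assuming that a salted one-way hash is an acceptable de-identification technique; the field names, salt handling, and list of direct identifiers are all hypothetical.

```python
import hashlib
import json
from datetime import datetime, timezone

# Hypothetical salt; in practice this would be a secret managed by the company.
SALT = b"example-salt"

# Direct identifiers to drop before retention (illustrative list only).
DIRECT_IDENTIFIERS = {"name", "email", "street_address", "phone"}

def deidentify_record(user_id: str, algorithm_inputs: dict) -> dict:
    """Build a retention-ready record: drop direct identifiers and replace
    the user ID with a salted one-way hash."""
    pseudonym = hashlib.sha256(SALT + user_id.encode()).hexdigest()
    retained = {k: v for k, v in algorithm_inputs.items()
                if k not in DIRECT_IDENTIFIERS}
    return {
        "user_pseudonym": pseudonym,
        "inputs": retained,
        "retained_at": datetime.now(timezone.utc).isoformat(),
    }

# Example: record the inputs a quoting algorithm processed for one user.
record = deidentify_record("user-123", {
    "name": "Jane Doe",   # dropped before retention
    "age_band": "35-44",  # retained
    "state": "CT",        # retained
})
print(json.dumps(record, indent=2))
```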

The most critical, and potentially most challenging, new requirement for insurers and entities in other sensitive sectors (including housing, education, and credit) is the need to perform an assessment of any disparate outcomes caused by algorithmic processes with respect to “actual or perceived race, color, ethnicity, sex, religion, national origin, gender, gender identity, sexual orientation, familial status, biometric information, or disability status.” Given that insurers do not collect, or in some cases may be prohibited from collecting, some of these markers, operationalizing these assessments may require significant work.
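The bill does not prescribe how such an assessment should be performed. One heuristic sometimes borrowed from U.S. employment law is the “four-fifths rule,” sketched below under the assumption that group labels are available for a sample of decisions; the groups, decisions, and threshold here are illustrative only.

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group_label, approved_bool) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in decisions:
        counts[group][1] += 1
        if approved:
            counts[group][0] += 1
    return {g: approved / total for g, (approved, total) in counts.items()}

def disparate_impact_flags(decisions, threshold=0.8):
    """Flag any group whose selection rate falls below `threshold` times the
    highest group's rate (the "four-fifths rule" heuristic)."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

# Hypothetical underwriting decisions: (self-reported group, offer made?)
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
print(selection_rates(decisions))         # A: ~0.67, B: ~0.33
print(disparate_impact_flags(decisions))  # {'A': False, 'B': True}
```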

Lastly, the bill includes an Equal Opportunity section that makes it illegal for an online platform to “employ any proprietary online platform design features, including an algorithmic process ... for the purpose of advertising, marketing, soliciting, offering, selling ... or otherwise commercially contracting for ... insurance ... in a manner that discriminates against or otherwise makes the opportunity unavailable on the basis of an individual’s actual or perceived race, color, ethnicity, sex, religion, national origin, gender, gender identity, sexual orientation, familial status, biometric information, or disability status.” This could upend long-established frameworks for what constitutes unfair discrimination in insurance, an industry whose products by their very nature are intended to discriminate between individuals based on their individual risk and where the drivers of risk may strongly correlate with characteristics such as gender or disability status (e.g., mortality risk, morbidity risk).

For a more in-depth discussion in favor of the bill, see https://techpolicy.press/evaluating-the-algorithmic-justice-online-transparency-act/.

The bill has not made significant progress since its introduction and appears highly unlikely to pass during this congressional session. However, given the potentially large impact on the insurance industry, insurance companies would be wise to monitor its progress closely.

European Union Artificial Intelligence Act[3]

Although this recent proposal by the European Commission has no bearing on U.S. companies (except those that market or use AI systems in the E.U.), it is important to note here for two reasons: (1) as seen in the sizable influence of the E.U.’s General Data Protection Regulation (GDPR) on both the California and Virginia data privacy laws,[4] E.U. regulatory trends can be predictive of regulatory trends in the U.S.; and (2) large companies are increasingly adopting unified global privacy practices based on the most mature regulatory regimes, as happened with GDPR, so it is reasonable to assume that large U.S. companies might do the same with respect to any AI regulation that arises from the E.U. proposal.

The proposal has five major impacts:

  1. Prohibited AI—the act bans specific AI practices, including the use of subliminal techniques, the exploitation of a person’s vulnerabilities such as age or disability, social scoring used by public authorities in damaging ways, and the use of real-time biometric identification systems by law enforcement except in specific situations.
  2. High-risk AI System Classification and Governance—the act creates a category of “high-risk AI system” (which specifically includes AI used for employment purposes, facial recognition, or creditworthiness evaluation, and which may well include AI used by insurers for underwriting, fraud prevention, and benefits management) and establishes broad risk management, data quality and governance, record-keeping and documentation, and transparency and oversight criteria with which these systems must comply.
  3. AI Provider Requirements—the act also establishes requirements and standards for companies that provide or use AI, including a duty to notify and cooperate with authorities regarding high-risk AI systems, a responsibility to correct high-risk AI systems that do not conform with the regulations, and a requirement to keep logs related to the usage of high-risk AI systems.
  4. Certified AI Systems—the act creates a new certification, based on adherence to the regulations, that must be displayed visibly by providers and users of high-risk AI systems. 
  5. AI Regulatory Sandboxes—the act allows Member States or the E.U. itself to establish regulatory sandboxes that facilitate the development of new AI systems, and allows for an exemption from the GDPR personal information purpose limitation principle in specific instances.

As shown above, the Artificial Intelligence Act would create an entirely new regulatory regime for AI systems, likely including many of those used by insurance companies, that is both broad in scope and deep in detail. Companies should actively monitor the progress of this proposal and begin to evaluate the potential impacts on their Data Science or AI programs.

Connecticut Insurance Department Notice of April 14, 2021[5]

Our last example brings us much closer to home with a notice from the Connecticut Insurance Department. The notice was issued on April 14, 2021, and concerns “The usage of big data and avoidance of discriminatory practices.” If the E.U. Artificial Intelligence Act represents the future of AI regulation, the Connecticut notice illustrates the messy path regulators may carve on the way to that future. The notice reminds insurance companies that their use of Big Data (Connecticut offers examples such as social media, credit, geographic location, psychographic, and wearable device data) must comply with federal and state anti-discrimination laws, and claims that the State of Connecticut Insurance Department has the authority “to require that insurance carriers and third-party data vendors, model developers and bureaus provide the Department with access to data used to build models or algorithms included in all rate, form and underwriting filings.”[5] Requiring companies to provide the Department with all information related to their use of data and models, without clear criteria for how the Department will evaluate compliance, could introduce greater legal and compliance risk for insurance companies that already make extensive use of AI.

The notice indicates that the Department is concerned with three areas in particular:

  1. Internal Data Deployment,
  2. Internal Data Governance, and
  3. Risk Management and Compliance.

The notice does not clarify the relationship of these specific areas of concern to the potential for violation of anti-discrimination laws.

What Companies Can Do Now to Prepare for the Future

In the absence of comprehensive federal regulation on the development and use of artificial intelligence, short-term activity on this topic will likely come from state Departments of Insurance. Such guidance will probably focus on bias and discrimination issues related to the use of data and models (such as New York’s Circular Letter No. 1 (2019)), on data protection, governance, and management considerations, or on a combination of both (such as the Connecticut notice discussed above). However, insurance companies need not wait for comprehensive federal legislation to prepare for the future of AI regulation. Companies can and should take the following steps now to be best positioned to succeed as the regulation of AI advances:

Step 1

Work with your data science team (or external partners) to draft a Data Governance policy that establishes categories of data, outlines controls on data access, and clarifies acceptable uses of data for artificial intelligence development.
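One way to make such a policy more than a document is to encode it so it can be checked automatically. The sketch below is a hypothetical encoding of data categories and role-based acceptable use; the categories, roles, and rules are invented for illustration.

```python
from enum import Enum

class DataCategory(Enum):
    PUBLIC = 1
    INTERNAL = 2
    SENSITIVE = 3        # e.g., health or financial data
    PROTECTED_CLASS = 4  # e.g., race, gender, disability status

# Hypothetical policy: which categories each role may use for AI development.
ACCEPTABLE_USE = {
    "data_scientist": {DataCategory.PUBLIC, DataCategory.INTERNAL},
    "underwriting_model": {DataCategory.PUBLIC, DataCategory.INTERNAL,
                           DataCategory.SENSITIVE},
    "fairness_auditor": set(DataCategory),  # needs all categories to audit
}

def may_use(role: str, category: DataCategory) -> bool:
    """Check a proposed data use against the governance policy."""
    return category in ACCEPTABLE_USE.get(role, set())

assert may_use("data_scientist", DataCategory.INTERNAL)
assert not may_use("data_scientist", DataCategory.PROTECTED_CLASS)
```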

Step 2

Establish a company definition of fairness as it relates to AI and require model builders to incorporate fairness checks into model acceptance and sign-off processes (an AI Fairness Policy). Ensure that your data scientists and actuaries are educating themselves on emerging practices related to issues of bias and fairness in the use of artificial intelligence.
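What a fairness check wired into sign-off might look like depends on the definition chosen. The sketch below uses demographic parity, one of several competing fairness definitions, with a hypothetical tolerance and hypothetical data.

```python
def demographic_parity_gap(predictions, groups):
    """Gap between the highest and lowest positive-prediction rates.
    predictions: list of 0/1 model outputs; groups: parallel group labels."""
    rates = {}
    for g in set(groups):
        outcomes = [p for p, gg in zip(predictions, groups) if gg == g]
        rates[g] = sum(outcomes) / len(outcomes)
    return max(rates.values()) - min(rates.values())

# Hypothetical sign-off gate from an AI Fairness Policy.
FAIRNESS_TOLERANCE = 0.05  # maximum allowed parity gap; set by policy

def approve_model(predictions, groups) -> bool:
    return demographic_parity_gap(predictions, groups) <= FAIRNESS_TOLERANCE

preds  = [1, 1, 0, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(preds, groups))  # 0.5, so the gate fails
print(approve_model(preds, groups))           # False
```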

Step 3

Take an inventory of the artificial intelligence models that are currently in use or are being evaluated for use by your company. Categorize these models by risk and commit to a risk-prioritized review of every model.
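A minimal sketch of what such an inventory and review queue might look like, assuming a simple three-level risk scale; the model names, use cases, and ratings are hypothetical.

```python
from dataclasses import dataclass
from enum import IntEnum
from typing import Optional

class Risk(IntEnum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3  # e.g., models that affect underwriting or claims decisions

@dataclass
class ModelRecord:
    name: str
    use_case: str
    risk: Risk
    last_reviewed: Optional[str] = None  # ISO date of last review, if any

# Hypothetical inventory; names and risk ratings are illustrative.
inventory = [
    ModelRecord("quote-recommender", "product recommendation", Risk.MEDIUM),
    ModelRecord("uw-triage", "accelerated underwriting triage", Risk.HIGH),
    ModelRecord("churn-score", "marketing retention", Risk.LOW),
]

# Risk-prioritized review queue: highest risk first, never-reviewed first
# within the same risk level.
review_queue = sorted(inventory,
                      key=lambda m: (-m.risk, m.last_reviewed is not None))
for m in review_queue:
    print(m.risk.name, m.name)
```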

Step 4

Form an AI Governance committee that combines legal, actuarial, data science, and operational resources to regularly monitor the company’s adherence to its Data Governance and AI Fairness policies and the alignment of those policies with regulatory developments, and to recommend changes or mitigations as needed.

Regardless of the future path that regulatory actors may take, these steps will help position insurance companies to better navigate legal and regulatory risks associated with their use of artificial intelligence. In the words of Howard Marks: “You can’t predict. You can prepare.”

Statements of fact and opinions expressed herein are those of the individual authors and are not necessarily those of the Society of Actuaries, the newsletter editors, or the respective authors’ employers.


Mark A. Sayre, FSA, CERA, is an actuary and data privacy consultant, and the Center for Law + Innovation Privacy Law fellow for the Maine Law Class of 2024. He can be reached at mark.sayre@maine.edu.


[1] https://content.naic.org/cipr_topics/topic_artificial_intelligence.htm (Last accessed Aug. 26, 2021)

[2] https://www.congress.gov/bill/117th-congress/senate-bill/1896/text (Last accessed Oct. 5, 2021)

[3] https://eur-lex.europa.eu/legal-content/EN/TXT/?qid=1623335154975&uri=CELEX%3A52021PC0206 (Last accessed Aug. 22, 2021)

[4] https://www.natlawreview.com/article/gdpr-usa-new-state-legislation-making-closer-to-reality (Last accessed Aug. 22, 2021)

[5] https://portal.ct.gov/-/media/CID/1_Notices/Big-Data-Usage-Notice.pdf (Last accessed Aug. 22, 2021)

[6] However, the NAIC has released guiding principles on the use of AI (available at https://content.naic.org/article/news_release_naic_unanimously_adopts_artificial_intelligence_guiding_principles.htm) and Colorado recently passed a statute enabling the Commissioner to develop and promulgate regulations on this topic (see http://leg.colorado.gov/sites/default/files/2021a_169_signed.pdf).