By Anthony Cappelletti
The December 2017 edition of GI Insights contained my article “AI in GI: Overview of an Annual Meeting Session,” which summarized the session “Use of Artificial Intelligence in General Insurance: Industry Opportunities and Regulatory Challenges” from the 2017 Society of Actuaries (SOA) Annual Meeting & Exhibit. That article outlined some of the uses of Artificial Intelligence (AI) in General Insurance (GI) and also provided a regulatory perspective. Not surprisingly, the use of AI in the insurance sector has developed considerably over the past few years. In this article, I’ll look at several current issues with AI in GI that I think GI actuaries should be aware of. The first issue is a regulatory development (the NAIC’s adoption of AI principles), the second is a general insurer’s potentially problematic use of AI (Lemonade’s data collection) and the third is a vehicle manufacturer’s use of AI that may affect liability (Tesla’s Autopilot).
Regulatory Development for AI and Insurance: NAIC Principles on AI
In 2019, the Organisation for Economic Co-operation and Development (OECD) adopted a set of principles for AI. In 2020, the National Association of Insurance Commissioners (NAIC) adopted a set of principles on AI based upon the OECD AI Principles.
The NAIC AI Principles are a set of guiding principles for those implementing AI tools in the insurance industry. They are organized into five categories under the acronym FACTS (see Figure 1).[1]
Figure 1
NAIC AI Principles
| Principle | Description |
| --- | --- |
| Fair and Ethical | Use of AI should follow all laws and regulations. AI systems should avoid discrimination against protected classes, including through the use of proxy variables. |
| Accountable | AI actors should be accountable for the AI system (including its creation, implementation and consequences). |
| Compliant | AI actors should be knowledgeable of applicable insurance laws and regulations and should ensure that the AI system complies with them. |
| Transparent | AI actors should be transparent by providing appropriate disclosures to the stakeholders of the AI system (including regulators and consumers). Notwithstanding this, AI actors should have the ability to protect the confidentiality of proprietary aspects of the AI system. |
| Secure, Safe and Robust | AI actors should ensure there are sufficient safeguards in place to address risks related to the AI system. These risks include privacy, security of information and unfair discrimination. |
These principles provide a framework that insurers should follow when creating, implementing, and maintaining AI systems. Much of what is included in these principles simply restates good governance over systems, so they should not be overly burdensome for insurers. But there are some issues that need closer examination. One of these is the requirement that AI systems avoid discrimination against protected classes, including indirect discrimination through proxy variables.
Testing whether or not an AI system discriminates against protected classes means that information on protected class membership must be captured in the data. These tests should be performed not only when developing the system but also periodically during its use, to ensure that it continues to comply with the law. They should be documented in sufficient detail that a regulatory reviewer can be satisfied that all applicable anti-discrimination laws and regulations are being followed. AI systems also need to be reviewed to ensure that the underlying data and inputs do not include inherent biases, especially those against marginalized groups.
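The principles do not prescribe a particular test, but as a rough sketch of what a periodic fairness check might look like, the hypothetical Python example below compares favorable-outcome rates across groups in a decision log. The column names, the reference group and the commonly cited 0.8 (“four-fifths”) rule of thumb are illustrative assumptions, not NAIC requirements.

```python
import pandas as pd

def adverse_impact_ratios(df: pd.DataFrame,
                          group_col: str = "protected_class",
                          outcome_col: str = "approved",
                          reference_group: str = "reference") -> pd.Series:
    """Compare each group's favorable-outcome rate to the reference group's.

    A ratio well below 1.0 (a common rule of thumb is 0.8) flags a group for
    closer review; it does not by itself establish unlawful discrimination.
    """
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates / rates[reference_group]

# Hypothetical decision log from an AI underwriting or claims model.
decisions = pd.DataFrame({
    "protected_class": ["reference", "reference", "group_a", "group_a", "group_a"],
    "approved":        [1,           1,            1,         0,         1],
})

print(adverse_impact_ratios(decisions))
```

A check like this would be rerun and documented at regular intervals so the results are available for regulatory review.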
Another issue to consider is that of accountability. The principles note that AI actors should be accountable for the consequences of the AI system. The principles actually go further to say that this includes both the intended and unintended consequences of the AI system. What this means is that AI systems need to have sufficient checks and safeguards against unintended consequences.
Lastly, the issue of transparency is very important. Insurers need to provide communications to insureds on insurance decisions made by AI systems. These communications need to be conveyed in simple terms that most consumers can understand, but they cannot be so oversimplified that no real information is given. The principles include the requirement that AI actors describe the factors the AI system uses to make decisions affecting the consumer’s insurance.
Now that the NAIC has adopted these principles, it is important that insurers using AI systems document how they are following them. If any questions should arise from a regulator on the use of an AI system, a likely starting point for the regulatory review would be whether or not these principles were followed.
“These principles are aspirational guideposts for the industry to ensure the technology is used effectively and responsibly—to help insurance organizations and professionals continue to innovate while protecting the consumer.” Jon Godfread, North Dakota insurance commissioner and chair of NAIC’s AI Working Group
Insurer Use of AI: Lemonade
Many insurers have already implemented AI systems for some functions. However, the greatest use of AI systems by insurers is found among new InsurTech companies. These companies did not start out with the legacy systems of established insurers; they were created to make the most of technology in building a profitable insurance business. One such example is Lemonade Inc.
Lemonade was founded in 2015 by individuals in the tech industry, not the insurance industry.
“Lemonade offers renters, homeowners, pet, and life insurance. Powered by artificial intelligence and behavioral economics, Lemonade’s full stack insurance carriers in the US and the EU replace brokers and bureaucracy with bots and machine learning, aiming for zero paperwork and instant everything.” Lemonade website, Company Overview
Lemonade promotes its use of technology and AI. One of the more prominent uses of AI in its operations is claims handling. The Lemonade claims experience involves an AI-bot that handles claims, including assessing the likelihood of fraud in real time. The AI-bot does not make the final decision on fraud: it pays the claim if it determines the claim is not suspicious, or it assigns the claim to a human claims handler. Most claims are handled quickly.
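The routing step itself is conceptually simple. The sketch below is a minimal, hypothetical illustration of that triage logic only; the fraud score, threshold and class names are invented for illustration and are not a description of Lemonade’s actual system.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    claim_id: str
    amount: float
    fraud_score: float  # assumed output of some fraud model, 0.0 to 1.0

# Hypothetical threshold; a real insurer would calibrate and monitor this.
SUSPICION_THRESHOLD = 0.2

def route_claim(claim: Claim) -> str:
    """Pay low-suspicion claims automatically; refer the rest to a human."""
    if claim.fraud_score < SUSPICION_THRESHOLD:
        return f"Claim {claim.claim_id}: approved for automatic payment"
    return f"Claim {claim.claim_id}: referred to a human claims handler"

print(route_claim(Claim("C-001", amount=800.0, fraud_score=0.05)))
print(route_claim(Claim("C-002", amount=800.0, fraud_score=0.65)))
```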
Lemonade began as a startup in New York State and has quickly expanded throughout the United States and into Europe. As of year-end 2018, in-force premium was $45 million. By year-end 2020, in-force premium had grown to $213 million with over one million customers. While Lemonade has had success with growth, it has not yet produced a profit. This is not unusual for a startup insurer with rapid growth in new lines of business and new territories. The company’s use of AI has fueled its growth. But the use of AI is not without risk. Lemonade now faces a class action lawsuit regarding its AI-driven claims handling system.
In August of this year, plaintiff Mark Pruden filed a class action lawsuit against Lemonade in New York federal court (Pruden v. Lemonade, Inc. et al.). The allegation in the lawsuit is that Lemonade collects biometric data from policyholders without their knowledge and consent. Specifically, Pruden alleges that Lemonade’s claims handling AI-bot uses customers’ faceprints as part of its fraud detection process: a claimant is required to record and upload a video describing what occurred, and the claim is passed to a human claims handler if the bot detects a likelihood of fraud. In a series of subsequently deleted tweets, Lemonade touted the use of non-verbal cues from submitted claims videos to detect fraud. This use of non-verbal cues from claims videos would appear to rely on facial recognition technology, which is considered to include biometric data. Lemonade has since stated that its use of the term “non-verbal cues” was simply a “bad choice of words” and that it only uses the information to detect whether or not the same person submits a claim under different identities.[2]
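In generic terms, checking whether the same person appears under different identities typically involves comparing numeric face embeddings extracted from images or video. The sketch below illustrates that general idea only; the embeddings, threshold and function are hypothetical and do not represent Lemonade’s implementation.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity of two embedding vectors, ranging from -1 to 1."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical face embeddings from two claim videos filed under different
# customer identities (real systems use embeddings with hundreds of dimensions).
embedding_claim_1 = np.array([0.12, 0.80, 0.33, 0.45])
embedding_claim_2 = np.array([0.11, 0.79, 0.35, 0.44])

MATCH_THRESHOLD = 0.95  # assumed; real thresholds are tuned to the model

if cosine_similarity(embedding_claim_1, embedding_claim_2) > MATCH_THRESHOLD:
    print("Possible duplicate identity: flag both claims for human review")
else:
    print("No match detected")
```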
The true issue in this lawsuit is not that the AI system uses biometric data for fraud detection; it is the use of biometric data without informed consent. Lemonade asserts in its Data Privacy Pledge that it does not collect, require, sell, or share biometric information. The allegation in the lawsuit exposes Lemonade to a potential violation of New York’s Uniform Deceptive Trade Practices Act. This lawsuit is still in its very early stages. It will be interesting to see what happens as it moves forward.
Use of AI in Automobiles: Tesla Autopilot
The automobile and tech industries are working toward a future with self-driving autonomous vehicles. There are some self-driving autonomous vehicles on the roads today, but most of these are in the testing phase. However, automobile manufacturers are phasing in AI-based driving aids in mass-produced vehicles available to the public. One of the more well-known and advanced of these systems is Tesla’s Autopilot. But before we get into Tesla Autopilot, it’s important to review the levels of driving automation as defined by SAE International:
SAE Levels of Driving Automation:[3]
- Level 0: No Driving Automation
- Level 1: Driver Assistance
- Level 2: Partial Driving Automation
- Level 3: Conditional Driving Automation
- Level 4: High Driving Automation
- Level 5: Full Driving Automation
SAE Levels 0 to 2 involve driving support features and require a driver to supervise the operation of the vehicle even when the features are in use. SAE Level 3 has automated driving features that operate only under limited conditions and require the driver to be prepared to assume control when the feature requests it. SAE Levels 4 and 5 are autonomous driverless cars. The distinction is that Level 4 only works under limited conditions (e.g., location specific), whereas Level 5 can drive anywhere under any conditions that a human could drive a vehicle.[4]
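To make the supervision distinction concrete, the short sketch below simply restates these definitions in code form; the enum and helper function are illustrative only.

```python
from enum import IntEnum

class SAELevel(IntEnum):
    NO_AUTOMATION = 0
    DRIVER_ASSISTANCE = 1
    PARTIAL_AUTOMATION = 2
    CONDITIONAL_AUTOMATION = 3
    HIGH_AUTOMATION = 4
    FULL_AUTOMATION = 5

def driver_role(level: SAELevel) -> str:
    """Summarize the human driver's role at each SAE level, per the definitions above."""
    if level <= SAELevel.PARTIAL_AUTOMATION:
        return "Driver must supervise at all times, even with features engaged"
    if level == SAELevel.CONDITIONAL_AUTOMATION:
        return "Driver must be ready to take over when the system requests it"
    return "No human driver required while engaged (Level 4: limited conditions only)"

print(driver_role(SAELevel.PARTIAL_AUTOMATION))
```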
Under the SAE definitions, Tesla’s Autopilot is a Level 2 system. It can control steering, acceleration and braking under limited conditions, in this case on the highway, but it requires the driver to remain alert and supervise the vehicle, as it may require the driver to take control at any time. In August of this year, the National Highway Traffic Safety Administration (NHTSA) opened an investigation into Tesla’s Autopilot regarding 11 accidents at first responder scenes since 2018.[5] These accidents involved Tesla vehicles, with Autopilot engaged, colliding with stationary objects (vehicles, people, equipment) at crash scenes with scene control in effect (e.g., flashing lights, flares, road cones). Most of these accidents occurred in the dark.
Because Autopilot requires a driver to be ready to assume control, at-fault accidents occurring while Autopilot is engaged would usually be considered the fault of the driver and not the AI system. However, at least one of these reported accidents at a first responder scene has resulted in a lawsuit directly against Tesla.
In September of this year, Tesla was sued by five law enforcement officers from Montgomery County, Texas, who were seriously injured in an accident that occurred in February of this year. The lawsuit alleges that the crash was due to design and manufacturing defects (i.e., the inability of Autopilot to detect objects at first responder scenes at night with flashing lights) and that Tesla was aware of this defect and did nothing to correct it. The lawsuit also alleges that Tesla’s own advertisements tout that Autopilot is better than a human driver, giving drivers a false sense of security that may lead them to believe they do not need to pay much attention to driving when Autopilot is engaged, despite the Tesla user manual stating that the driver must be ready to assume control. In this case, the driver of the Tesla was reportedly intoxicated and not awake at the time of the accident.[6]
“Autopilot introduces new features and improves existing functionality to make your Tesla safer and more capable over time. Autopilot enables your car to steer, accelerate and brake automatically within its lane. Current Autopilot features require active driver supervision and do not make the vehicle autonomous.” Tesla website, Autopilot
The allegation regarding Tesla giving drivers a false sense of security is an interesting one. There are plenty of videos on the internet showing Teslas in Autopilot mode with drivers doing a variety of activities that would not have them ready to assume control (e.g., watching a movie, reading a book, playing a video game, sleeping, moving out of the driver’s seat). It is safe to say that Tesla is aware, or should be aware, of the misuse of Autopilot. It is also technologically possible for Tesla to include safeguards to prevent certain types of misuse (e.g., sensors to confirm that the driver is in the driver’s seat with eyes looking forward, or a requirement for periodic driver input on the steering wheel). Perhaps Tesla’s naming of this AI system as Autopilot is an issue. The term autopilot makes most people think of an aircraft autopilot, which provides a much greater level of automation than Tesla’s Autopilot and may allow the operator of the aircraft to temporarily move away from the controls. Despite these issues, Tesla has moved forward with a more advanced version of Autopilot named “Full Self-Driving,” or FSD. FSD is Autopilot modified so that it can be enabled on local roads as well as highways.
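As an illustration of the kind of safeguard described above, the hypothetical sketch below escalates warnings and disengages assistance when basic driver-attention checks fail. It is not a description of Tesla’s (or any manufacturer’s) actual implementation; the sensor inputs and the threshold are assumptions.

```python
from dataclasses import dataclass

@dataclass
class DriverState:
    in_seat: bool
    eyes_on_road: bool
    seconds_since_steering_input: float

# Assumed limit; a real system would tune this and use multiple sensors.
MAX_HANDS_OFF_SECONDS = 30.0

def monitor_driver(state: DriverState) -> str:
    """Decide whether driving assistance may remain engaged."""
    if not state.in_seat:
        return "Disengage assistance and bring the vehicle to a safe stop"
    if not state.eyes_on_road or state.seconds_since_steering_input > MAX_HANDS_OFF_SECONDS:
        return "Warn the driver; disengage if the warning is ignored"
    return "Assistance may remain engaged"

print(monitor_driver(DriverState(in_seat=True, eyes_on_road=False,
                                 seconds_since_steering_input=5.0)))
```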
While I do not know how this lawsuit will end up, this does show that use of AI systems opens up liability issues that will have implications for liability insurers.
An interesting development in this case is that the NHTSA investigation has been expanded to all automobile manufacturers that offer Level 2 driving automation (i.e., vehicles that can have adaptive cruise control and lane control engaged at the same time). The NHTSA investigation will look into whether or not these systems also have an issue with flashing lights, and the NHTSA has requested that these manufacturers provide all relevant data.[7]
Additional Thought: Deepfakes
These three AI issues are just the latest ones affecting the GI industry, but they are not the only ones. Insurers need to stay informed of developments in the world of AI. One interesting development is “deepfakes.” Deepfakes are AI-enhanced videos that can distort reality.
“Deepfake is a term for videos and presentations enhanced by artificial intelligence and other modern technology to present falsified results.” Techopedia, Definition Deepfake
The software for creating deepfakes is now readily accessible to the public. This can be an issue for claims fraud where claims are reported, and evidence is provided, through video submissions by the claimant. It can also be an issue for video submissions used in underwriting, with respect to loss control measures and the existence (and condition) of insured assets. Deepfakes are improving to the point that it may become impossible for fraud detection, whether performed by human investigators or AI systems, to identify that a video has been altered.
I look forward to reviewing all of these AI issues again in a future article.
Statements of fact and opinions expressed herein are those of the individual authors and are not necessarily those of the Society of Actuaries, the editors, or the respective authors’ employers.
Anthony Cappelletti, FSA, FCIA, FCAS, is a staff fellow for the SOA. He can be contacted at acappelletti@soa.org.