December 2017

AI in GI: Overview of an Annual Meeting Session

By Anthony Cappelletti

As a fan of sci-fi (or is it speculative fiction now?), I find that talk of artificial intelligence, or AI, brings forth thoughts of some of the novels I’ve read[1] and movies I’ve seen.[2] Of course, when we speak of AI applications being used in business now, we are not talking about sentient intelligence looking to control the world. We are talking about applications that use machine learning on data sets. These programs search for patterns in big data in ways that would not be practical using traditional statistical analysis controlled by a person. We don’t yet have to worry about implementing Asimov’s laws of robotics for artificial intelligence in general insurance.[3]

For those unfamiliar with Asimov’s laws of robotics, they are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Isaac Asimov's “Zeroth Law of Robotics,” introduced later, takes precedence over the three laws:

  0. A robot may not injure humanity, or, by inaction, allow humanity to come to harm.

The 2017 Society of Actuaries (SOA) Annual Meeting & Exhibit included the session “Use of Artificial Intelligence in General Insurance: Industry Opportunities and Regulatory Challenges.” This session was a natural follow-up to the 2016 SOA Annual Meeting & Exhibit session “Big Data and Price Optimization in General Insurance.”  

Last year’s session on big data was highly informative and generated lots of questions from those in attendance. The only negative from my perspective was that it was rushed. We had four excellent presenters, each with a unique point of view (industry, consumer, legal and regulatory). Unfortunately, each presenter had so much to say that it was hard to squeeze it into the time allotted.

When I thought of coordinating a follow-up session on AI in GI, I needed to ensure that the session was not rushed. To that end, I limited the session to two presenters, one speaking on industry opportunities and one speaking on regulatory challenges. These were the two perspectives that generated the most questions during last year’s big data session.

Dan Adamson provided the industry perspective. Dan is one of the world’s foremost experts on the investigative use of big data, cognitive computing and vertical search, with more than 15 years of experience in the industry. He holds a Master of Science degree from U.C. Berkeley and is the CEO of OutsideIQ, a leader in investigative cognitive computing. OutsideIQ developed DDIQ and InsuranceIQ. DDIQ is an investigative due diligence search platform used today by leading financial institutions and insurance organizations. InsuranceIQ is an automated solution that allows underwriters to quickly assess the insurability of risks using comprehensive profiles built from big data. Prior to founding OutsideIQ, Dan was a Microsoft search expert and technical lead for Bing.

Angela Nelson provided the regulatory perspective. Angela is the Director of the Market Regulation Division and Chief Industry Liaison for the Missouri Department of Insurance. The Market Regulation Division is responsible for reviewing insurance policies/rates and ensuring the fair treatment of policyholders. Before this, she was the director of the Consumer Affairs Division for the Missouri Department of Insurance. She holds a Bachelor of Science in Management and an MBA from William Woods University. Last year, she gave an excellent presentation from the regulatory perspective at the session “Big Data and Price Optimization in General Insurance.”

Dan’s presentation started with a description of terms like cognitive computing, deep learning, machine learning and artificial intelligence. He went on to describe the many ways that AI is currently being used in general insurance, from front-line customer service to back-office underwriting and claims functions. His overview of how AI can quickly make underwriting decisions for commercial risks using publicly available data was quite interesting. As noted in his presentation, there are three phases for the development of AI in underwriting; a rough sketch of the second phase follows the list.

  • Phase 1: Underwriters use AI to assist them.
  • Phase 2: AI underwrites routine risks and refers exceptional risks to underwriters.
  • Phase 3: AI is fully incorporated in underwriting and pricing risks.
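
To make the second phase more concrete, here is a minimal sketch (my own illustration, not code from Dan’s presentation) of a triage step in which a scoring model handles routine risks and refers exceptional ones to a human underwriter. The risk features, the toy scoring rule and the referral threshold are all invented for illustration; a real system would use a trained machine learning model in place of the scoring rule.

from dataclasses import dataclass

@dataclass
class Risk:
    annual_revenue: float     # exposure proxy
    prior_claim_count: int    # claims in the past five years
    years_in_business: int

REFERRAL_THRESHOLD = 0.5      # assumed cut-off between routine and exceptional

def model_score(risk: Risk) -> float:
    """Toy scoring rule standing in for a trained model (0 = benign, 1 = exceptional)."""
    score = 0.6 * min(risk.prior_claim_count / 5, 1.0)
    score += 0.2 * (1.0 if risk.years_in_business < 2 else 0.0)
    score += 0.2 * (1.0 if risk.annual_revenue > 50_000_000 else 0.0)
    return score

def triage(risk: Risk) -> str:
    """Phase 2 workflow: auto-underwrite routine risks, refer the rest."""
    if model_score(risk) < REFERRAL_THRESHOLD:
        return "auto-underwrite"              # AI handles the routine risk
    return "refer to human underwriter"       # exceptional risk goes to a person

print(triage(Risk(2_000_000, 0, 12)))    # routine risk -> auto-underwrite
print(triage(Risk(80_000_000, 4, 1)))    # unusual risk -> refer to human underwriter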

As a GI actuary, I found the use of AI in the underwriting/pricing function very interesting. It won’t replace actuaries, but it will give us more tools to work as efficiently as possible. Clearly, insurers are very interested in using AI wherever it makes sense, not just in underwriting. Dan went on to describe a case study of its use in customer service.

Using AI for claims adjusting and fraud detection is now a high priority for many insurers. AI can dig for information quickly to make determinations that could be extremely difficult for a human claims adjuster. Insurers can improve profitability just through more efficient claims handling and enhanced fraud detection.

He then explained some of the challenges insurers can face with regulators when using AI: a black-box model may obscure the variables being used, and traditional, proven regulatory review processes may not work with AI methodologies. This provided a good transition to Angela’s presentation. Please be sure to check out Dan’s presentation on the SOA website.

Regulatory and consumer fears tend to be raised when new technologies are introduced, and AI is no exception. Some of the fears about AI may arise because we think of it as it is portrayed in movies. I often recall the following exchange from a movie I’ve seen numerous times:[4]

Dave: Hello, HAL. Do you read me, HAL?

HAL: Affirmative, Dave. I read you.

Dave: Open the pod bay doors, HAL.

HAL: I'm sorry, Dave. I'm afraid I can't do that.

Of course, AI applications in GI are not HAL 9000 from “2001: A Space Odyssey” or the Terminator. They are just tools to make the business of insurance more efficient.

Angela’s presentation had three messages: regulatory challenges, regulatory opportunities and, in conclusion, how insurers should communicate with regulators when introducing AI.

Regarding regulatory challenges, Angela emphasized that regulators must perform a balancing act: innovation vs. compliance, confidentiality vs. transparency, and thoroughness of reviews vs. speed to market. Regulators do not wish to stifle innovation. Innovation is good for competition in markets, which is ultimately good for consumers. However, regulators have to ensure that any new innovations follow the law and do not harm or unfairly discriminate against consumers.

One of the biggest changes for regulators regarding AI is its use in rating and underwriting. Historically, rating and underwriting in GI involved a simple rating formula documented in a rating manual and clearly specified underwriting rules documented in an underwriting manual. Generally speaking, these could be easily reviewed by regulators. With AI, rating and underwriting may no longer rest on a simple rating formula and clearly specified underwriting rules. Furthermore, AI will likely make use of non-traditional data. Regulators have confidence in traditional audited insurance data (whether for the company or the industry). With non-traditional data, regulators must review the sources to determine reliability.
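
To illustrate what “easily reviewed” means, consider a generic multiplicative manual rate (my own illustration, not an example from either presentation):

  Premium = Base Rate × Class Relativity × Territory Relativity × Experience Modifier
          = $500 × 1.20 × 0.95 × 1.10
          = $627

Each factor in such a formula can be traced to a page in the filed rating manual; a machine-learned model offers no equally simple audit trail, which is part of what makes regulatory review harder.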

With respect to claims handling, regulators support fraud detection. AI systems have been introduced that improve fraud detection. However, regulators must ensure that AI systems for claims handling do not sacrifice accuracy for speed.  

Angela followed this by speaking of regulatory opportunities. I did not anticipate this; maybe I should have. Regulators are currently examining opportunities for using AI in compliance reviews and market analysis. AI can make compliance reviews more efficient, freeing time for other areas of regulatory oversight. With respect to market analysis, AI can look for patterns that would be difficult to detect using traditional methods limited to traditional insurance data.

She then concluded her presentation with some helpful suggestions for insurers when discussing implementation of AI with regulators. Insurers must expect regulatory scrutiny with the introduction of new technology. But this does not mean there is a desire by regulators to block its use.

The key is communication. Insurers should be prepared for discussions with regulators. AI systems must be tested and validated with thorough documentation. Insurers should be receptive to regulatory feedback and be prepared to make revisions to the system. Regulators will respect the confidentiality of proprietary insurer information, but the insurer should expect to be transparent with the regulator. Finally, the insurer should conduct a complete legal analysis to ensure compliance. As Angela stated, “the key to successful implementation of any new technology is communication and collaboration with insurance regulators.”

After the presentation, Angela told me that she hopes that her presentation made it clear that “regulators are not opposed to technology or innovation.” I think she did. AI in GI is here and insurers that do not utilize these systems will be left behind by those who do.

All this talk of AI in GI was focused on systems that insurers can use. However, that is not the complete story. AI is increasingly being embedded into automobiles by their manufacturers. The ultimate goal of automobile manufacturers is the autonomous vehicle (aka driverless car). This will fundamentally affect the business of insurance and its regulation. This topic was examined in the annual meeting session “The General Insurance Industry and Autonomous Vehicles.”

The only way of discovering the limits of the possible is to venture a little way past them into the impossible.

– Arthur C. Clarke, in the essay “Hazards of Prophecy: The Failure of Imagination,” 1962

Anthony Cappelletti, FSA, FCIA, FCAS, is a staff fellow for the SOA. He can be contacted at acappelletti@soa.org.

[1] My list is lengthy, but some examples are: Isaac Asimov’s Robot and Foundation series, Arthur C. Clarke’s Odyssey series, Robert Silverberg and Isaac Asimov’s The Positronic Man, Philip K. Dick’s Do Androids Dream of Electric Sheep?, Douglas Preston’s The Kraken Project and Dan Brown’s Digital Fortress. My favorites are Asimov’s Robot and Foundation series.

[2] My list for this is also lengthy. Examples are the “Star Wars” series of movies, the “Terminator” series of movies, the “Alien” series of movies, “2001: A Space Odyssey,” “Blade Runner,” “I, Robot,” and “A.I.: Artificial Intelligence.” It’s hard for me to pick a favorite. I should also note that a number of TV shows I watched highlighted AI. These include “Star Trek” (can’t forget Commander Data), “Battlestar Galactica” (both new and old), “Terminator: The Sarah Connor Chronicles,” “Person of Interest,” “Humans,” and “Westworld.”

[3] The three laws are cited in various works of Isaac Asimov. They were introduced in his 1942 short story “Runaround.” The zeroth law was introduced in his 1986 novel Foundation and Earth.

[4] Dr. Dave Bowman, mission commander of Discovery 1, speaking to the artificial intelligence HAL 9000 in Stanley Kubrick’s 1968 movie “2001: A Space Odyssey.” The movie was based upon Arthur C. Clarke’s 1951 short story “The Sentinel.” It should be noted that Arthur C. Clarke wrote the novel 2001: A Space Odyssey in 1968, concurrently with the making of the movie, with both based upon “The Sentinel.”