Responsible Machine Learning: Living Up to Our Professional Code of Conduct, ASOPs, and Corporate Requirements

By James Dolstad and Abby Steele

Health Watch, September 2022


Blockchain, immunotherapies and robotics are technologies that continue to have a large impact on the health care industry. However, the technology expected to have the biggest impact is artificial intelligence (AI): over 65 percent of respondents in a January 2020 Statista survey selected it.[1] New use cases for AI in health care emerge daily, and a 2021 article in Towards Data Science highlighted a wide range of applications, including predicting heart failure in mobile health; mental illness prediction, diagnosis and treatment; personalized therapy in ovarian cancer treatment; and supply chain management.[2]

Machine learning (ML) is a subset of AI that many actuaries now use in their daily work, as it can improve both the accuracy of projections and the insight they provide into business problems. The Society of Actuaries (SOA) commissioned a research paper in December 2019 titled “Literature Review: Artificial Intelligence and Its Use in Actuarial Work” that lays out numerous use cases.[3] Other use cases are specific to health care, including but not limited to product strategy, reduction in total cost of care in specific service areas, block-of-business review, provider performance assessment, and reserving. One of the main benefits of ML is that it lets us draw on a much broader set of data, which not only increases accuracy but also reveals the drivers of a prediction and which of those drivers are addressable. Actuarial Standard of Practice (ASOP) No. 56, Modeling, provides guidance on the issues actuaries should consider when developing and applying predictive models.[4] The SOA has also offered several classes and webinars on this topic.
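To make the idea concrete, the sketch below fits a model to synthetic data and ranks the drivers of its predictions. It is a minimal illustration only, assuming scikit-learn is available; the feature names, data and outcome are hypothetical, not drawn from any real book of business.

```python
# Minimal sketch: fit a model to synthetic "claims-like" data, then rank
# which inputs drive the prediction. All names and data are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000
X = np.column_stack([
    rng.integers(18, 90, n),   # age
    rng.integers(0, 2, n),     # prior_inpatient_stay (0/1)
    rng.poisson(3, n),         # chronic_condition_count
    rng.normal(0, 1, n),       # pure noise, for contrast
])
feature_names = ["age", "prior_inpatient_stay",
                 "chronic_condition_count", "noise"]
# Hypothetical annual-cost outcome driven by the first three features.
y = 200 * X[:, 0] + 8_000 * X[:, 1] + 1_500 * X[:, 2] + rng.normal(0, 2_000, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

# Permutation importance estimates how much each feature drives accuracy,
# which helps separate addressable drivers from background noise.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for name, imp in sorted(zip(feature_names, result.importances_mean),
                        key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```

Permutation importance is only one of several ways to rank drivers; the same workflow carries over to richer, governed claims data.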

While this information remains relevant and useful, the world has changed significantly over the last few years. Statistical bias continues to be a primary concern, but other concerns, including potential biases against underserved populations, are now equally important. Some states are enacting legislation governing the types of data that can be used and requiring demonstration that any models used have been tested for bias. Governments, corporations and our own professional organizations are all focused on how ML and its underlying data can be used responsibly.

In addition to the extra scrutiny being placed on ML applications, the speed at which everything is changing makes it difficult for actuaries to stay current on professional, state and federal requirements. The issue is compounded because the data needed to test for bias may itself be incomplete or biased. Publicly available frameworks offer approaches for testing for bias; they are good initial attempts but may not apply directly to health care and leave significant room for improvement. The Duke-Margolis Center for Health Policy released a paper, “Preventing Bias and Inequities in AI-Enabled Health Tools,” in 2022 that discusses not just potential bias in AI models but also the challenges of underrepresentative data, biased training data, issues around model selection and several other key considerations for responsible model development.[5] The paper concludes by noting that there are no simple checklists to test for bias, and there is a growing need to “build toward consensus on standards and framework.”
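As one concrete example of the kind of test such frameworks propose, the sketch below compares model selection rates across groups. The group labels, data and ratio computation are illustrative assumptions; as the Duke-Margolis paper stresses, no single metric or checklist settles whether a model is biased.

```python
# Minimal sketch of one common bias check: comparing model outcomes
# across groups. Labels and data are hypothetical, for illustration only.
import numpy as np

def selection_rates(y_pred: np.ndarray, group: np.ndarray) -> dict:
    """Share of positive predictions (e.g., flagged for outreach) per group."""
    return {g: float(y_pred[group == g].mean()) for g in np.unique(group)}

def disparate_impact_ratio(rates: dict) -> float:
    """Ratio of the lowest to highest group selection rate (1.0 = parity)."""
    values = list(rates.values())
    return min(values) / max(values)

# Hypothetical binary predictions and a protected attribute.
rng = np.random.default_rng(1)
y_pred = rng.integers(0, 2, 1_000)
group = rng.choice(["A", "B"], 1_000)

rates = selection_rates(y_pred, group)
print(rates, f"ratio={disparate_impact_ratio(rates):.2f}")
# A ratio well below 1.0 warrants investigation, not an automatic verdict:
# the underlying data may itself be biased, as noted above.
```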

The use of ML raises a caution flag for some, who see it as a black box: if we cannot see what is happening inside, we cannot trust the output. Because of this, requirements for model explainability, or transparency about how a prediction is reached, are becoming more important in certain uses. The health care industry sees tremendous benefit in leveraging AI and ML to improve outcomes and reduce the total cost of care, but that use comes with challenges. These challenges can be overcome.
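One widely used approach to explainability is to decompose an individual prediction into per-feature contributions. The sketch below uses the open-source shap library on a toy model; this is an assumed tool choice for illustration, not a prescribed method, and the data are synthetic.

```python
# Minimal sketch of per-prediction transparency using the shap library
# (an assumed tool choice; other explainability methods exist).
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 3))
y = 2 * X[:, 0] - X[:, 1] + rng.normal(scale=0.1, size=500)
model = RandomForestRegressor(random_state=0).fit(X, y)

# TreeExplainer splits a single prediction into per-feature contributions,
# answering "why did the model produce this output for this record?"
explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(X[:1])
print("baseline prediction:", explainer.expected_value)
print("per-feature contributions:", contributions[0])
```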

Actuaries have a great opportunity to work with their data science, legal and compliance colleagues to be part of the solution. As actuaries familiarize themselves with the issues at hand, they often reach the same conclusion as their data science colleagues: there are currently no best practices, only better practices, and far more questions than answers. The Federal Trade Commission (FTC) published a blog post in April 2021 that provides guidance in this area and likewise appears to recognize that there are currently no best practices.[6] It consists of the following seven directives:

  1. Start with the right foundation.
  2. Watch out for discriminatory outcomes.
  3. Embrace transparency and independence.
  4. Don’t exaggerate what your algorithm can do or whether it can deliver fair or unbiased results.
  5. Tell the truth about how you use data.
  6. Do more good than harm.
  7. Hold yourself accountable—or be ready for the FTC to do it for you.

Health care actuaries are used to working with imperfect data in challenging situations. This work, however, draws more sensitivity and scrutiny than most of what we do. Stakeholders, including the FTC, are particularly focused on uses of AI or ML that lead to decisions denying value to consumers. The FTC blog specifically acknowledges that intentions can be good and still produce biased outcomes. As a result, this work carries more risk for us personally, for the actuarial profession and for our employers. To address this issue, the SOA has developed a five-month Ethical and Responsible Use of Data and Predictive Models Certificate Program, with the first cohort starting in March 2022.

Many companies are focused on augmenting internal standard practices and governance with formal responsible-AI programs that support responsible ideation, development and use of ML models. These programs typically include internal guidance for all teams, education initiatives, and mechanisms to inventory and provide oversight and governance of ML models developed internally or accessed through third-party vendors. Actuaries can play an important role in developing such standards, working with data scientists, legal, compliance and other subject matter experts, such as clinicians, to build oversight and governance frameworks that address the evolving expectations of our stakeholders.

Such efforts will place increasing importance on standardized documentation of machine learning models, including clear definitions of a model's intended use, the data used in its development, and standardized performance measurement. They will help focus the organization on consistent approaches to assessing both the data and the model itself for potential unintended consequences, including bias. A cross-discipline governance model will provide consistent oversight and open discussion, helping to identify sources of risk and ways to mitigate it.
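One lightweight way to standardize such documentation is to capture it as structured data that a governance process can inventory and query. The sketch below is a minimal, hypothetical "model card"; the field names and example values are assumptions for illustration, not an industry standard.

```python
# Minimal sketch of standardized model documentation (a "model card"),
# captured as structured data for a governance inventory. Field names
# and example values are hypothetical.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelCard:
    name: str
    intended_use: str              # the decision the model supports
    data_sources: list[str]        # data used in development
    performance: dict[str, float]  # standardized metrics
    bias_checks: dict[str, float]  # e.g., group selection-rate ratios
    owner: str                     # accountable team or role
    review_date: str               # next scheduled governance review
    limitations: list[str] = field(default_factory=list)

card = ModelCard(
    name="readmission-risk-v2",
    intended_use="Prioritize post-discharge outreach; not for coverage decisions.",
    data_sources=["claims_2019_2021", "adt_feeds"],
    performance={"auroc_holdout": 0.81},
    bias_checks={"selection_rate_ratio": 0.94},
    owner="Clinical Analytics",
    review_date="2023-03-01",
    limitations=["Underrepresents members with under 6 months of eligibility."],
)
print(json.dumps(asdict(card), indent=2))
```

Keeping the card machine-readable lets a cross-discipline governance body flag, for example, every model whose bias checks fall below an agreed threshold or whose review date has lapsed.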

Such guidance draws on emerging federal and local policies and on a growing body of research that brings into focus the ways ML can have unintended consequences. Corporate guidelines for data custodianship and privacy, as well as for the responsible development and use of ML, offer guideposts to follow. Employees need specific recommendations for thinking through the possible implications of ML systems at every phase of their creation and use. Given the relative immaturity of this field, guidance should be revisited and refined periodically to reflect new external perspectives and key lessons from internal teams. Accordingly, actuarial teams must be prepared to review and revise standard practices and to provide feedback that helps shape the overall capability.

Any employee involved in developing, deploying or maintaining an ML solution will need learning resources to understand the evolving expectations for the responsible development and use of ML. That learning may need to include foundational topics such as bias, equity, and new policies or regulations.

Actuaries will also need to identify how these emerging expectations should be implemented in the specific context of their work. They will play an invaluable role in the design and execution of responsible ML programs in their organizations.

Statements of fact and opinions expressed herein are those of the individual authors and are not necessarily those of the Society of Actuaries, the editors, or the respective authors’ employers.


James Dolstad, ASA, MAAA, FCA, is vice president of Actuarial Consulting at Optum. Jim can be reached at james.dolstad@optum.com.

Abby Steele is senior director of the AI/ML Responsible Use Program at UnitedHealth Group. Abby can be reached at abby.steele@optum.com.


Endnotes

[1] Stewart, Conor. Technologies with the Most Impact on Health Care 2020. Statista, Aug. 3, 2020, https://www.statista.com/statistics/1091107/technologies-impact-on-health-care-prediction/ (accessed Aug. 13, 2022).

[2] Flückiger, Isabelle. 8 Exciting Case Studies of Machine Learning Applications in Life Sciences and Biotechnology. Towards Data Science, Apr. 26, 2021, https://towardsdatascience.com/8-exciting-case-studies-of-machine-learning-applications-in-life-sciences-and-biotechnology-97c1b0b43688 (accessed Aug. 13, 2022).

[3] Yeo, Nicholas, Raymond Lai, Min Jyeh Ooi, and Jie Yin Liew. Literature Review: Artificial Intelligence and Its Use in Actuarial Work. Society of Actuaries, Dec. 2019, https://www.soa.org/globalassets/assets/files/resources/research-report/2019/ai-actuarial-work.pdf (accessed Aug. 13, 2022).

[4] Actuarial Standards Board. Actuarial Standard of Practice No. 56: Modeling. ASB, Dec. 2019, http://www.actuarialstandardsboard.org/asops/modeling-3/ (accessed Aug. 13, 2022).

[5] Locke, Trevin, Valerie Parker, Christina Silcox, and Andrea Thoumi. Preventing Bias and Inequities in AI-Enabled Health Tools. Duke-Margolis Center for Health Policy, Jul. 6, 2022, https://healthpolicy.duke.edu/publications/preventing-bias-and-inequities-ai-enabled-health-tools (accessed Aug. 13, 2022).

[6] Jillson, Elisa. Aiming for Truth, Fairness, and Equity in Your Company’s Use of AI. Federal Trade Commission, Apr. 19, 2021, https://www.ftc.gov/business-guidance/blog/2021/04/aiming-truth-fairness-equity-your-companys-use-ai (accessed Aug. 13, 2022).