Society of Actuaries Research Institute Announces Participation in the Department of Commerce Consortium Dedicated to AI Safety

Actuarial Organization and Other Leading AI Stakeholders to Help Advance the Development and Deployment of Safe, Trustworthy AI Under New U.S. Government Safety Institute

February 8, 2024, Chicago, IL – The Society of Actuaries (SOA) Research Institute announced that it will lend its actuarial expertise, joining more than 200 of the nation’s leading artificial intelligence (AI) stakeholders in a Department of Commerce initiative to support the development and deployment of trustworthy and safe AI. Established by the Department of Commerce’s National Institute of Standards and Technology (NIST), the U.S. AI Safety Institute Consortium (AISIC) will bring together AI creators and users, academics, government and industry researchers, and civil society organizations to meet this mission. As a member of the consortium, the SOA Research Institute will apply its expertise in AI safety and ethical use to the actuarial profession and insurance industry.

“We are honored to be part of the Artificial Intelligence Safety Institute Consortium initiated by the National Institute of Standards and Technology,” said R. Dale Hall, FSA, CERA, CFA, MAAA, Managing Director of Research, Society of Actuaries Research Institute. “As the world’s largest actuarial association, the Society of Actuaries is actively involved in AI research across all areas of actuarial practice. We look forward to utilizing our expertise and collaborating with consortium partners to accelerate the progress and implementation of the responsible use of safe and trustworthy AI.”

“The U.S. government has a significant role to play in setting the standards and developing the tools we need to mitigate the risks and harness the immense potential of artificial intelligence. President Biden directed us to pull every lever to accomplish two key goals: set safety standards and protect our innovation ecosystem. That’s precisely what the U.S. AI Safety Institute Consortium is set up to help us do,” said Gina Raimondo, U.S. Secretary of Commerce. “Through President Biden’s landmark Executive Order, we will ensure America is at the front of the pack – and by working with this group of leaders from industry, civil society, and academia, together we can confront these challenges to develop the measurements and standards we need to maintain America’s competitive edge and develop AI responsibly.”

The consortium includes more than 200 member companies and organizations that are on the frontlines of creating and using the most advanced AI systems and hardware, the nation’s largest companies and most innovative startups, civil society and academic teams that are building the foundational understanding of how AI can and will transform our society, and representatives of professions with deep engagement in AI’s use today. The consortium represents the largest collection of test and evaluation teams established to date and will focus on establishing the foundations for a new measurement science in AI safety. The consortium also includes state and local governments, as well as non-profits, and will work with organizations from like-minded nations that have a key role to play in developing interoperable and effective tools for safety around the world.

The full list of consortium participants is available here.

About the SOA Research Institute

With roots dating back to 1889, the Society of Actuaries (SOA) is the world’s largest actuarial professional organization, with more than 32,000 actuaries as members. Serving as the SOA’s research arm, the SOA Research Institute provides objective, data-driven research, bringing together tried-and-true practices and future-focused approaches to address societal challenges and business needs. It provides trusted knowledge, extensive experience and new technologies to help effectively identify, predict and manage risks. For more information, visit soa.org/research-institute.