Models are meant to represent reality, and they often do so quite well. However, they cannot make decisions that contradict the modeling algorithm or the underlying assumptions and data, nor do they possess emotional intelligence or the ability to adjust for social equality. As model complexity has evolved toward predictive analytics and machine learning, reliance on model results has increased as well. Over the last decade, as actuaries, modelers, and data scientists have looked more closely at their models, they have often identified social biases embedded in model assumptions and algorithms. From the use of indicators such as credit scores to geo-social segmentation, insurance models have often skewed results in favor of certain socioeconomic groups. Underlying this segmentation, one can identify biases tied to access to health care and education, socioeconomic grouping, and financial literacy.
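To make "skewed results" concrete, below is a minimal sketch of one common bias diagnostic: the disparate-impact ratio, which compares the rate of favorable model outcomes across groups. The group labels, decisions, and threshold here are hypothetical illustrations for exposition, not data from any real insurance model or a method endorsed by this session.

```python
# A minimal sketch of a disparate-impact check on model decisions.
# All data below is synthetic and purely illustrative.

def favorable_rate(outcomes):
    """Share of applicants who received the favorable outcome (True)."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a, group_b):
    """Favorable-outcome rate of group A divided by that of group B.

    Values well below 1.0 suggest the model may be skewing results
    against group A and warrant a closer look at its inputs.
    """
    return favorable_rate(group_a) / favorable_rate(group_b)

# Hypothetical model decisions (True = approved at standard rates)
# for two socioeconomic groups.
group_a = [True, False, False, True, False, False, False, True]
group_b = [True, True, False, True, True, True, False, True]

ratio = disparate_impact_ratio(group_a, group_b)
print(f"Disparate-impact ratio: {ratio:.2f}")  # 0.50 for this sample
# A ratio below ~0.80 is a widely cited red flag (the "four-fifths
# rule"), signaling that the model's assumptions merit review.
```

A diagnostic like this only flags a disparity; interpreting and addressing it still requires examining the assumptions and indicators, such as credit scores or geo-social segments, that produced it.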
By attending the session, you will be able to:
- Identify signs of social bias in models and how to address them.
TRACK: Spearheading innovation through change, Safeguarding population(s), Cultivating future opportunities