Artificial intelligence (AI) models have become increasingly popular across industries, including finance. Such models can help lenders assess the risk of loan applicants, identify potential defaults, and make better decisions about granting credit. However, the complexity of many machine-learning and, in particular, deep-learning models makes it difficult to understand how they arrive at their decisions. This lack of interpretability can be a significant problem in the legal and ethical realms, as it hinders transparency and accountability.

Join us to learn interpretability techniques that are essential for addressing the challenges of using complex AI models in credit risk modeling. These techniques aim to provide a better understanding of how an AI model arrived at a particular decision or prediction. They include surrogate models, which approximate the behavior of the original model with a simpler one, and decision trees, which expose the decision-making process in a more transparent way. Applying interpretability techniques to AI models can, however, be challenging. Some techniques reduce the performance of the model, which can be unacceptable in applications where high accuracy is critical. Others are effective only for certain types of models or datasets, which limits their applicability.

Despite these challenges, interpretability techniques are becoming increasingly important in credit risk modeling. Financial regulators have recognized their importance and are starting to require financial institutions to use models that can be explained and validated. Interpretability techniques can also be used to improve the performance of credit risk models: by understanding how a model makes decisions, it is possible to identify areas where the model can be improved or where additional data may be needed.

TRACK: Technology/Model Development/Artificial Intelligence/Machine Learning; Predictive Modeling
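
As a concrete illustration of the surrogate-model idea mentioned above, the following minimal sketch fits a shallow decision tree to the predictions of a "black-box" classifier. It is not taken from the session materials; it assumes scikit-learn, uses synthetic stand-in data, and all names and parameters are hypothetical.

```python
# Minimal sketch: a global surrogate decision tree approximating a black-box
# credit risk model. Data and feature names are synthetic placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score

# Synthetic stand-in for applicant features (income, utilization, history, ...).
X, y = make_classification(n_samples=5000, n_features=8, n_informative=5,
                           random_state=0)

# "Black-box" model whose decisions we want to explain.
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# Surrogate: fit an interpretable tree to the black-box model's *predictions*,
# not the original labels, so the tree mimics the model's behavior.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black-box model.
fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"Surrogate fidelity to black-box predictions: {fidelity:.2%}")

# Human-readable decision rules approximating the black-box model.
print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(8)]))
```

The key design choice in this kind of sketch is training the surrogate on the black-box model's outputs rather than the ground-truth labels, and reporting fidelity (agreement with the black-box model) separately from predictive accuracy.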