6 November 2024
Challenge
“White boxes” and “black boxes” are terms used to describe Machine Learning (ML) algorithms. White-box models are inherently interpretable, while black-box models are not easily understandable to those outside the field, making it difficult for stakeholders to understand and accept their decisions. To close this gap, the field of Explainable AI emerged with the intention of enhancing transparency and understanding of model decisions, ultimately leading to a greater sense of confidence and security when applying ML models in critical domains.
Solution
To aid in model interpretation, BI4ALL employs Explainability Techniques (XAI). These tools identify how much each variable contributes to the model’s predictions. However, interpreting what each contribution means in the context of a specific problem remains challenging. BI4ALL therefore merges a Large Language Model (LLM) with an explainability technique: the context of the problem is combined with the results of the explainability technique and fed into an LLM prompt, which translates complex numerical contributions into simple explanations tailored to each problem or business. Finally, the model’s interpretation outputs are presented in a report.
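A minimal sketch of the XAI-to-LLM handoff described above. The feature contributions below are illustrative stand-ins for values an explainability technique such as SHAP would produce, and the function name and prompt wording are assumptions for illustration; the actual LLM call is omitted, since the resulting prompt can be sent to any model of choice.

```python
def build_explanation_prompt(prediction, contributions, business_context):
    """Turn numeric feature contributions into an LLM prompt that asks
    for a plain-language, context-aware explanation."""
    lines = [
        f"Business context: {business_context}",
        f"Model prediction: {prediction}",
        "Feature contributions (positive values push the prediction up):",
    ]
    # List features from most to least influential by absolute contribution
    for feature, value in sorted(contributions.items(),
                                 key=lambda kv: abs(kv[1]), reverse=True):
        lines.append(f"- {feature}: {value:+.2f}")
    lines.append("Explain this prediction in simple terms "
                 "for a non-technical stakeholder.")
    return "\n".join(lines)

# Hypothetical contributions for a customer-churn model
contributions = {"tenure_months": -0.31,
                 "support_tickets": 0.42,
                 "monthly_fee": 0.10}
prompt = build_explanation_prompt("high churn risk", contributions,
                                  "telecom customer retention")
print(prompt)
```

The prompt pairs each numeric contribution with the business context, so the LLM can ground its explanation in the problem rather than in raw numbers alone.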
Benefits
Through the LLM’s response, stakeholders can determine whether this prediction is reasonable and comprehend what led the model to anticipate a specific prediction. This simplification in expressing the results eliminates the concept of “black boxes,” resulting in transparency and accountability in model usage and allowing all stakeholders to comprehend and use the results without concern.
In a world where Artificial Intelligence and Machine Learning are increasingly prominent solutions, the ability to interpret “black boxes” facilitates the development of ethical and responsible behaviour as well as the useful application of these technologies for the good of humanity.
20%
improvement
improvement was achieved on the F1 score by using XAI techniques to identify detrimental information.
60%
of users
prefer to see the results explained by the LLM rather than using only XAI libraries.
Practical applications
- Medical diagnosis
- Student dropout
- Tourism prediction
- Lead scoring
- Fraud detection
Example
Consider a healthcare company using a Machine Learning model to predict patient outcomes based on various medical factors. By integrating an LLM with an Explainability Technique, the company can generate a report that explains each prediction in simple, context-specific terms. For instance, if the model predicts a high risk of diabetes for a patient, the LLM can provide a detailed yet accessible explanation, highlighting factors such as high BMI, family history, and age. This report helps doctors and healthcare administrators understand the rationale behind the model’s prediction, ensuring they can make informed decisions and communicate effectively with patients about their health risks.
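To make the healthcare example concrete, the snippet below assembles the kind of prompt the report pipeline might send to an LLM for the diabetes case. The factor values and contribution numbers are hypothetical, invented purely for illustration.

```python
# Hypothetical factors for one patient: (value, contribution to risk)
patient_factors = {
    "BMI": ("34.1", "+0.38"),             # high BMI pushes risk up
    "family_history": ("yes", "+0.25"),   # family history adds risk
    "age": ("58", "+0.17"),               # older age adds risk
}

prompt = (
    "You are preparing a patient report. "
    "The model predicts a HIGH risk of diabetes.\n"
    "Key factors (value, contribution to risk):\n"
    + "\n".join(f"- {name}: value {value}, contribution {contrib}"
                for name, (value, contrib) in patient_factors.items())
    + "\nWrite a short, plain-language explanation that a doctor "
      "can share with the patient."
)
print(prompt)
```

The LLM's answer to a prompt like this becomes the accessible explanation in the report, highlighting BMI, family history, and age in terms a patient can follow.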