Empowering Responsible AI through the SHAP library

8 March 2024

Key takeaways

  • Equity in AI and ML models
  • Responsibility and oversight
  • Explainability of AI and ML models

As Artificial Intelligence (AI) systems and Machine Learning (ML) models become part of everyday life, it is crucial to ensure they are used responsibly. While these technologies have enormous potential to revolutionise many industries and improve people’s lives, their benefits also come with significant challenges.

Because AI systems and ML models are used to make decisions that affect people and society, it is essential to understand how they arrive at specific outcomes. However, the lack of transparency in AI and ML models, often called “black box” models, can make their decision-making process difficult to comprehend, leading to scepticism about their behaviour and about the technology itself. Understanding how these models reach their decisions is especially important in domains such as healthcare, banking, and security.

To address these emerging challenges, ethical considerations must be incorporated so that AI and ML models are used responsibly and sustainably. This article explores the importance of responsible AI and the role of ethical considerations in addressing some of the most significant issues affecting society today. We also present SHAP, a library that helps explain AI and Machine Learning projects.

To ensure that AI and ML models are ethical and responsible, it is essential to consider several fundamental principles:

Fairness – Understanding the bias that the data may have introduced is the first step towards ensuring that the model makes predictions that are fair to all demographic groups. It is crucial to apply fairness analysis across the whole ML process, regularly evaluating models from the standpoint of fairness and inclusiveness rather than treating fairness as a separate, one-off exercise. This is particularly important when AI is used in essential business activities that affect many end users, such as reviewing credit applications or supporting medical diagnoses. Tools such as the Performance & Fairness tab in Google Cloud assist with this monitoring. (Building ML models for everyone: Understanding fairness in machine learning)

Social Ethics – A broad spectrum of human populations should benefit from AI applications, and the underlying data should reflect diversity and inclusion. To ensure that these cutting-edge technologies serve humanity as a whole, national and international authorities provide regulatory frameworks. AI must serve humankind’s best interests, not the other way around. (Ethics of Artificial Intelligence)

Accountability and Responsibility – An operating model should be integrated into AI initiatives to specify the roles and responsibilities of the different stakeholders in charge of oversight, accountability, due diligence, and verification at each stage of an AI project. To maintain accountability, it is also crucial to evaluate AI systems both when they work as expected and when they do not. This evaluation should occur at every point in the life cycle, which helps identify system-wide issues that can be missed during narrowly defined “point-in-time” assessments. Accountability and responsibility are essential to creating trustworthy AI products. (How to Build Accountability into Your AI)

Systemic Transparency – AI systems should provide a full view of the data and AI lifecycle, including assumptions, operations, updates, and user consent. Different stakeholders will want different degrees of transparency depending on their roles. Fairness, bias, and trust have all drawn more attention recently, and transparency can help lessen these problems. However, it is becoming evident that sharing information about AI carries risks: providing more information may make AI more open to cyberattacks, explanations can be manipulated, and exposure to lawsuits or regulatory action increases. The “transparency paradox” of AI is the idea that, while learning more about the technology may bring advantages, it may also create new dangers. Even so, transparency is crucial for fostering confidence in AI systems. (The AI Transparency Paradox)

Data and AI Governance – AI governance is the capacity to direct, supervise, and monitor an organisation’s AI activities. It includes processes that track and record the provenance of the data, the models and their associated metadata, and audit pipelines. Documentation should cover the methods used to train each model, the hyperparameters employed, and the metrics from the testing phases. The result is greater transparency into the model’s behaviour throughout its lifecycle, the data that shaped its construction, and any potential hazards. Governance also includes managing risks and complying with laws, regulations, and corporate standards. (What is AI governance?)

Explainability – A lack of explainability can harm people and society by reinforcing biases and discrimination. Explainability is the capacity to offer concise and understandable justifications for a system’s decisions or outputs. It enables people to verify the system’s behaviour, improve its functionality, and increase stakeholder trust. Achieving explainability in AI and ML models is essential for fostering confidence in the technology and ensuring its ethical application. It helps stakeholders such as regulators, auditors, and end users better understand how AI systems work and assess their effectiveness. Moreover, it can be used to spot and fix biases, mistakes, and unintended consequences. Today, Python libraries such as SHAP and LIME can be used to reduce the impact of the “black box”. (What is the explainability of ML, and what can we do?)

SHAP is a library that can be used to improve explainability and thus develop more responsible AI and ML projects.

When dealing with a complex model and wanting to understand its choices, SHAP values can help: predictive models answer “How much?”, while SHAP values help answer “Why?”.

A practical case:

Consider determining whether a patient is likely to experience a heart attack. The ML model provides the forecast but not the reasoning behind it. SHAP helps identify the features that are most influential in the prediction, either for the model as a whole or for each patient individually.
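
The snippet below is a minimal sketch of this workflow, not the article’s original code: the synthetic dataset, the feature names (“age”, “cholesterol”, and so on), and the choice of model are illustrative assumptions. Any tree-based model supported by SHAP’s TreeExplainer would work similarly.

```python
# Minimal sketch: train a model on a synthetic stand-in for a heart-attack
# dataset and compute SHAP values for it. Feature names are hypothetical.
import pandas as pd
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X_raw, y = make_classification(n_samples=500, n_features=5, random_state=0)
feature_names = ["age", "cholesterol", "blood_pressure", "max_heart_rate", "bmi"]
X = pd.DataFrame(X_raw, columns=feature_names)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer attributes the model's raw (log-odds) output to the features.
explainer = shap.TreeExplainer(model)
shap_values = explainer(X)  # Explanation object: one row of SHAP values per patient
```

The `shap_values`, `explainer`, and `X` variables from this sketch are reused in the plotting snippets below.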

In the SHAP library, you will find a variety of graphs to help you interpret the models you build. Below are some of these graphs and an explanation of how they can be interpreted:

Mean plot

This plot shows how important each attribute is. The input variables are ordered by their mean absolute SHAP value across the entire dataset, which indicates, on average, how strongly each variable pushes the prediction in either direction.
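
Under the same assumptions as the sketch above, SHAP’s bar plot renders these mean absolute values:

```python
# Global importance: mean absolute SHAP value per feature, largest first.
shap.plots.bar(shap_values)
```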

Beeswarm

In a beeswarm plot, every instance appears as its own point for each variable. The dots are distributed along the x-axis according to their SHAP value and are stacked in areas where many SHAP values are similar. Examining the distribution of SHAP values reveals how a variable may influence the prediction.

High variable values appear as red dots, while low variable values appear as blue dots.
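
Continuing the sketch above:

```python
# Beeswarm: one dot per patient per feature, positioned by SHAP value and
# coloured by the feature's value (red = high, blue = low).
shap.plots.beeswarm(shap_values)
```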

Decision plot

A decision plot aggregates multiple SHAP values to show how the model behaves across several predictions. Each line moves along the x-axis as the SHAP value of each feature is added. It conveys similar information to a waterfall plot, except that we can now see it for multiple observations. With, for example, ten observations, some trends emerge: a positive SHAP value increases the prediction, and a negative SHAP value decreases it.
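
A hedged sketch reusing the variables from the earlier example (the choice of ten observations simply follows the text):

```python
# Decision plot for the first ten patients: each line starts at the base value
# and moves left/right as each feature's SHAP value is added.
shap.decision_plot(
    shap_values.base_values[0],  # base value (average model output)
    shap_values.values[:10],     # SHAP values for ten observations
    X.iloc[:10],                 # the corresponding feature values
)
```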

Waterfall plot

The waterfall plot illustrates how each input variable contributes to the predicted target for a single instance. The input variables are ranked by their influence on the prediction, and the SHAP values show the size and direction of each variable’s impact on the target. Red arrows indicate a contribution that raises the prediction, while blue arrows indicate one that lowers it. The predicted value is the sum of all the SHAP values plus the base value.
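
A one-line sketch, again reusing `shap_values` from the example above:

```python
# Waterfall for a single patient (index 0): each bar shows how one feature
# pushes the prediction up (red) or down (blue) from the base value.
shap.plots.waterfall(shap_values[0])
```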

Force plot

The force plot is used to analyse a single instance. Features that push the prediction higher are shown in red, while those that push it lower are shown in blue. Larger arrows indicate variables with higher absolute SHAP values, and each variable’s actual value is shown next to its name. If f(x) is smaller than the base value, the instance most likely belongs to the negative class. The base value is the dataset’s average predicted target.
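
A sketch under the same assumptions; `matplotlib=True` renders a static image outside a notebook (in notebooks, `shap.initjs()` enables the interactive version):

```python
# Force plot for a single patient: red arrows push the prediction above the
# base value, blue arrows push it below.
shap.force_plot(
    shap_values.base_values[0],  # base value (dataset-average model output)
    shap_values.values[0],       # SHAP values for this patient
    X.iloc[0],                   # the patient's actual feature values
    matplotlib=True,
)
```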

SHAP can be applied to numerous types of problems and a wide variety of models.

ML and AI models have the potential to fundamentally alter the way we live, work, and interact with one another, and they are quickly permeating every aspect of our lives. By incorporating ethical considerations into the design, development, and deployment of these systems, we can ensure that they improve our lives in a responsible and sustainable manner. As AI and ML continue to develop and shape our world, we must remain vigilant in promoting ethical and responsible practices.

With this information, we hope to have piqued your interest in responsible AI and SHAP, a particularly useful tool for improving the explainability of ML models!

If you’re interested in developing your project, send us an e-mail and take a look at our Success Stories.

References:

  • Burt, A. (13 December 2019). The AI Transparency Paradox. Harvard Business Review: https://hbr.org/2019/12/the-ai-transparency-paradox
  • Cooper, A. (1 November 2021). Explaining Machine Learning Models: A Non-Technical Guide to Interpreting SHAP Analyses. Aidan Cooper’s Blog: https://www.aidancooper.co.uk/a-non-technical-guide-to-interpreting-shap-analyses/
  • Ethics of Artificial Intelligence. (n.d.). UNESCO: https://www.unesco.org/en/artificial-intelligence/recommendation-ethics
  • Hollander, W. (22 December 2020). What is explainability of ML and what we can do? Medium: https://medium.com/ubiops-tech/what-is-explainability-of-ml-and-what-we-can-do-d326d42f8c38
  • Kuo, C. (14 September 2019). Explain Your Model with the SHAP Values. Medium: https://medium.com/dataman-in-ai/explain-your-model-with-the-shap-values-bc36aac4de3d
  • Lundberg, S. (2018). SHAP. SHAP documentation: https://shap.readthedocs.io/en/latest/
  • Mazzanti, S. (4 January 2020). SHAP Values Explained Exactly How You Wished Someone Explained to You. Medium: https://towardsdatascience.com/shap-explained-the-way-i-wish-someone-explained-it-to-me-ab81cc69ef30
  • Radecic, D. (9 November 2020). SHAP: How to Interpret Machine Learning Models With Python. Better Data Science: https://betterdatascience.com/shap/
  • Randall, L. (27 August 2021). 6 Key Principles for Responsible AI. Informatica: https://www.informatica.com/blogs/6-key-principles-for-responsible-ai.html
  • Robinson, S. (26 September 2019). Building ML models for everyone: Understanding fairness in machine learning. Google Cloud: https://cloud.google.com/blog/products/ai-machine-learning/building-ml-models-for-everyone-understanding-fairness-in-machine-learning
  • Sanford, S. (11 August 2021). How to Build Accountability into Your AI. Harvard Business Review: https://hbr.org/2021/08/how-to-build-accountability-into-your-ai
  • Trevisan, V. (17 January 2022). Using SHAP Values to Explain How Your Machine Learning Model Works. Medium: https://towardsdatascience.com/using-shap-values-to-explain-how-your-machine-learning-model-works-732b3f40e137
  • What is AI governance? (n.d.). IBM: https://www.ibm.com/analytics/common/smartpapers/ai-governance-smartpaper/#ai-governance-delivers

Author

Marta Carreira

Associate Consultant
