How to Build an AI System That’s Trustworthy

AI is becoming pervasive, transforming business strategies, solutions and operations. It brings unique risks that organizations must learn to identify, manage and respond to effectively.

Learn about data and AI ethics, guidelines and regulations, and how to deploy trustworthy AI systems in an organization.

Since AI deployment affects entire organizations and all stakeholders, it becomes especially critical to manage the risks associated with AI and work towards building trustworthy AI systems.

Businesses need to design and integrate governance, risk management and control strategies and frameworks to maximize the benefits of their AI investments.

Building Trust in Data

Before business leaders embark on a full-blown AI implementation project, they need to make sure their organization has the data it needs to support AI modeling and training, so that the outcomes of data analytics and AI algorithms are more accurate.

Ensuring trust in data means:

  • Improving the data collection process: collect data from diverse sources relevant to the organization, both structured and unstructured.

  • Refining data organization: organize data meticulously so that it is easily accessible and transparent across the company.

  • Refreshing and cleaning data regularly: poor-quality and biased data can lead to inaccurate results and cause harm when used in AI algorithms. Data cleansing is critical to ensure data is high quality, current, complete and relevant.

  • Normalizing data: data collected from different sources tends to have inconsistencies that can be perpetuated in algorithms that use it, so establishing a standard that makes data uniform is important (a minimal sketch follows this list).

  • Harnessing cross-functional data: data that's inaccessible and kept in silos across the organization will hamper the outcomes of AI algorithms. Organizations need a singular data management platform that integrates data in one place and eliminates silos.
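
To make the normalization and cleaning steps above concrete, here is a minimal sketch in Python using pandas. The departmental tables, column names and values are hypothetical, invented for illustration; a real pipeline would be driven by the organization's own schema and data platform.

```python
import pandas as pd

# Hypothetical example: the same customer data kept in two departmental
# silos, each with its own naming and formatting conventions.
sales = pd.DataFrame({
    "Customer_ID": [1, 2, 2],
    "Region": ["north east", "West ", "West "],
    "Signup_Date": ["2021-01-05", "2021-03-12", "2021-03-12"],
})
support = pd.DataFrame({
    "customer_id": [3, 4],
    "region": ["NORTH EAST", None],
    "signup_date": ["2021-02-05", "not recorded"],
})

def normalize(df: pd.DataFrame) -> pd.DataFrame:
    """Apply one standard format so data from different sources is uniform."""
    df = df.copy()
    df.columns = df.columns.str.lower()                  # consistent column names
    df["region"] = df["region"].str.strip().str.title()  # "north east" -> "North East"
    df["signup_date"] = pd.to_datetime(df["signup_date"], errors="coerce")
    return df

# Clean: drop exact duplicates and rows missing key fields, since
# poor-quality or incomplete records degrade model accuracy.
combined = (
    pd.concat([normalize(sales), normalize(support)], ignore_index=True)
      .drop_duplicates(subset="customer_id")
      .dropna(subset=["customer_id", "signup_date"])
)
print(combined)
```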

A Framework for Developing Trustworthy AI

Although there is no global standard for ethical AI development and deployment, the European Commission has published Ethics Guidelines for Trustworthy AI, which state that trustworthy AI should be:

  • Lawful: mindful of, and compliant with, all applicable laws and regulations
  • Ethical: adherent to ethical principles and values
  • Robust: both technically and socially, since even well-intentioned AI systems can cause unintentional harm

The guidelines list seven requirements that any AI system should meet to be considered trustworthy:

  • Human agency and oversight: AI systems should foster fundamental human rights, empower human beings and allow them to make informed decisions. AI should employ human-in-the-loop and human-in-command approaches.
  • Technical robustness and safety: AI systems should be resilient, secure, accurate, reliable, reproducible and include a fallback plan in case of problems to minimize unintentional harm.
  • Privacy and data governance: AI developers should take into account the quality and integrity of the data and also make sure they have permission to use that data. AI should demonstrate respect for privacy and data protection and have in-built data governance mechanisms.
  • Transparency: AI should include traceability mechanisms that can make the data and AI business model transparent. The outcomes from AI algorithms should also be explainable, and users must be aware that they are using AI and understand its capabilities and limitations.
  • Diversity, non-discrimination and fairness: Data used to build AI algorithms shouldn't be skewed, to avoid negative consequences such as marginalizing vulnerable groups, propagating prejudice or promoting discrimination. AI should also be accessible to all users.
  • Societal and environmental well-being: AI should benefit all humans, including future generations, which means it should be sustainable and environmentally friendly and take into account its social and societal impact.
  • Accountability: AI developers need to put in place mechanisms that ensure responsibility and accountability for AI systems and the outcomes they generate. Algorithms, data and design processes should be auditable (a minimal logging sketch follows this list).
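
As one illustration of how the transparency and accountability requirements can be supported in code, a system can record every prediction alongside the model version and a fingerprint of the input, so that outcomes remain traceable and auditable. This is a minimal sketch under assumed conventions; the record schema, file name and JSON-lines format are illustrative choices, not part of the EC guidelines.

```python
import hashlib
import json
from datetime import datetime, timezone

AUDIT_LOG = "predictions.jsonl"  # hypothetical append-only audit trail

def log_prediction(model_version: str, features: dict, prediction) -> None:
    """Append one traceable record of a model decision (assumed schema)."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # Hash the raw input so the record stays linkable to a specific
        # request without storing personal data in the log itself.
        "input_hash": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
        "prediction": prediction,
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")

# Hypothetical usage: a credit model logging one of its decisions.
log_prediction("credit-model-1.3", {"income": 52000, "age": 34}, "approve")
```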

Eradicating Bias in AI

AI and data analytics systems have shown bias against certain groups of people, reflecting widespread societal biases around race, gender, age and culture.

There are two primary types of bias in AI.

First is algorithmic or data bias, where an AI/ML algorithm is trained on a biased data set. These biases are perpetuated in the outputs of the algorithms, as with the Portrait AI Generator app that was “trained mostly on portraits of people of European ethnicity.”

Second is societal bias in AI, which reflects social intolerance or institutional discrimination, such as AI facial recognition technologies having higher error rates for minorities, or Google Maps avoiding Black neighborhoods.
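
Either kind of bias can at least be surfaced by measurement before deployment, for example by comparing outcome rates across groups. The sketch below applies the “four-fifths rule” used in US employment law as a rough screen; the data and the protected-attribute column are hypothetical.

```python
import pandas as pd

# Hypothetical model outputs: 1 = favorable outcome (e.g. loan approved).
results = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "B"],
    "approved": [ 1,   1,   0,   1,   0,   0,   0,   1 ],
})

# Selection rate per group, then the disparate impact ratio:
# the worst-off group's rate divided by the best-off group's rate.
rates = results.groupby("group")["approved"].mean()
ratio = rates.min() / rates.max()

print(rates)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # the four-fifths rule of thumb
    print("Warning: outcomes may be skewed against one group; review the data.")
```

A failing ratio doesn't prove discrimination, but it flags where human review of the data and the model is needed.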

Dr. Rumman Chowdhury, Global Lead for Responsible AI at Accenture Applied Intelligence, says,

“What's wrong is that ingrained biases in society have led to unequal outcomes in the workplace, and that isn’t something you can fix with an algorithm.”

That’s where keen human leadership in AI comes in. Business leaders should be attuned to these biases, challenge the assumptions inherent in data sets, stay in command of the AI decision-making loop and hold their organizations accountable for mitigating AI bias.

Sign up for the MIT SA+P Data Strategy: Leverage AI for Business course to learn about AI ethics and governance, how to manage the risks associated with AI, and how to build trustworthy AI systems that empower business growth.

Data Strategy: Leverage AI for Business is delivered as part of a collaboration with MIT School of Architecture + Planning and Esme Learning.