How to Build an AI System That’s Trustworthy
AI is becoming pervasive, transforming business strategies, solutions and operations. It brings with it unique risks that organizations must learn to identify, manage and respond to effectively.
Learn about data and AI ethics, guidelines and regulations, and how to deploy trustworthy AI systems in an organization.
Since AI deployment affects entire organizations and all stakeholders, it becomes especially critical to manage the risks associated with AI and work towards building trustworthy AI systems.
Businesses need to design and integrate governance, risk management and control strategies and frameworks to maximize the benefits of their AI investments.
Building Trust in Data
Before business leaders embark on a full-blown AI implementation, they need to make sure their organization has the data to support AI modeling and training, so that the outcomes of data analytics and AI algorithms are more accurate.
Ensuring trust in data means:
A Framework for Developing Trustworthy AI
Although there is no global standard for ethical AI development and deployment, the European Commission created a guideline for trustworthy AI, which states that ethical AI should be:
This guideline for ethical AI includes seven requirements that any AI system should meet to be called trustworthy:
Eradicating Bias in AI
AI and data analytics systems have shown bias towards certain groups of people in ways that have reflected widespread societal biases in race, gender, age and culture.
There are two primary biases in AI.
First is the algorithmic bias or the data bias, where an AI/ML algorithm is trained using a biased data set. These biases are perpetuated in the outputs of the algorithms, such as the Portrait AI Generator app that was “trained mostly on portraits of people of European ethnicity.”
Second is societal bias in AI, which reflects social intolerance or institutional discrimination, such as AI facial recognition technologies having higher error rates for minorities, or Google Maps avoiding Black neighborhoods.
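One simple way to surface the first kind of bias is to compare a model's error rate across demographic groups. The sketch below is a hypothetical illustration, not a method from the article: the data is synthetic, and the group labels and the decision to compare raw error rates are assumptions made for clarity.

```python
# Hypothetical illustration: detecting algorithmic (data) bias by
# comparing a classifier's error rate across groups. All data is synthetic.

def error_rate_by_group(y_true, y_pred, groups):
    """Return a {group: error rate} mapping for a set of predictions."""
    totals, errors = {}, {}
    for t, p, g in zip(y_true, y_pred, groups):
        totals[g] = totals.get(g, 0) + 1
        if t != p:
            errors[g] = errors.get(g, 0) + 1
    return {g: errors.get(g, 0) / totals[g] for g in totals}

# Synthetic labels and predictions: the model errs more often on group "B".
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

rates = error_rate_by_group(y_true, y_pred, groups)
# A large gap between groups is a signal to audit the training data.
bias_gap = max(rates.values()) - min(rates.values())
```

Here group "A" sees 1 error in 4 predictions (0.25) while group "B" sees 2 in 4 (0.50), a gap that would prompt a closer look at how the training set was assembled.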
Dr. Rumman Chowdhury, Global Lead for Responsible AI at Accenture Applied Intelligence, says,
“What's wrong is that ingrained biases in society have led to unequal outcomes in the workplace, and that isn’t something you can fix with an algorithm.”
That’s where keen human leadership in AI comes in. Business leaders should be attuned to these biases, challenge the assumptions inherent in data sets, stay in command of the AI decision-making loop, and hold their organizations accountable for mitigating AI bias.
Sign up for the MIT SAP Data Strategy: Leverage AI for Business course to learn about AI ethics and governance, manage the risks associated with AI and build trustworthy AI systems that will empower business growth.
Data Strategy: Leverage AI for Business is delivered as part of a collaboration with MIT School of Architecture + Planning and Esme Learning. All personal data collected on this page is primarily subject to the Esme Learning Privacy Policy.
© 2021 Esme Learning Solutions. All Rights Reserved.