Four Factors for Building a Trusted AI System

Applications of artificial intelligence are becoming more deeply embedded in the financial industry. AI systems will soon act as key decision-makers in critical situations, such as whether someone is granted or denied a loan. Without trust and accountability, AI systems will never reach their true potential.

Cathy Cobey, EY Global Trusted AI Consulting Leader, says: “Without trust, artificial intelligence cannot deliver on its potential value. New governance and controls geared to AI’s dynamic learning processes can help address risks and build trust in AI.”

What Does Trust Mean In An AI System?

AI systems are reaching a level of maturity where trust is becoming an essential factor. But trust cannot be built into an AI itself; it is a product of the human-machine relationship.

For an AI system to be deemed trustworthy, humans will have to trust in:

  • The performance of the AI/machine learning model
  • The operations of the AI system
  • The ethics of the workflow, both in how the AI system is designed and in how it’s used to inform business processes

Corporate and Government Policies Regarding Trusted AI

As yet, there is no single set of rules that captures the essence of building trustworthy and accountable AI. But various industry experts and governments have put forward standards for responsible AI.

Microsoft and IBM call for fairness, reliability, safety, privacy, security, inclusiveness, transparency and accountability in building responsible AI.

The United States House of Representatives Resolution 2231 calls for AI and ML developers to establish standards such as:

  1. A requirement of algorithmic accountability, addressing bias and discrimination
  2. A risk-benefit analysis and impact assessment
  3. Measures addressing issues of security and privacy

The Government of Canada also offers its own Algorithmic Impact Assessment tool, available online.

Although no single set of rules has been established, these principles share a great deal of common ground on what responsible, trustworthy AI looks like.

4 Factors Required to Build Trusted AI

Artificial intelligence is being used to make decisions that can have significant consequences in a person’s life. But as AI applications become more ubiquitous in the financial sector, how do you know whether you can trust them?

According to Lauren Frazier of IBM’s Cloud Pak for Data marketing team, a trustworthy AI platform should be:

1. Fair

Since AI tools make so many decisions on our behalf, they have to be fundamentally fair. At a minimum, that means they must not discriminate based on age, gender, race, religion or disability.
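
One common minimum test for this kind of fairness is a disparate impact check, which compares approval rates across groups. The sketch below is a minimal illustration with hypothetical data, using the widely cited four-fifths rule as a threshold; it is not taken from any specific vendor’s toolkit:

```python
# A minimal sketch of a disparate impact check, assuming binary
# approval decisions and a single protected attribute (hypothetical data).
import numpy as np

def disparate_impact_ratio(decisions: np.ndarray, group: np.ndarray) -> float:
    """Ratio of approval rates: unprivileged group vs. privileged group.
    A common rule of thumb flags ratios below 0.8 (the "four-fifths rule")."""
    rate_unprivileged = decisions[group == 0].mean()
    rate_privileged = decisions[group == 1].mean()
    return rate_unprivileged / rate_privileged

# Hypothetical loan decisions: 1 = approved, 0 = declined
decisions = np.array([1, 0, 0, 1, 0, 1, 1, 0, 1, 1])
group     = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])  # 0 = unprivileged, 1 = privileged

ratio = disparate_impact_ratio(decisions, group)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Potential adverse impact - review the model before deployment.")
```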

2. Accountable

AI systems will have to explain how they arrived at a particular conclusion. For example, if an AI platform declines a loan, it should state which factors led to the decline, such as a household income or credit score below the required threshold.
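
One common way to surface those factors is to report “reason codes”: the features that pushed an applicant’s score toward a decline. The sketch below assumes a simple linear scoring model with hypothetical weights and applicant values, purely for illustration:

```python
# A minimal sketch of reason codes for a declined loan, assuming a simple
# linear scoring model (coefficients and applicant values are hypothetical).
import numpy as np

features = ["household_income", "credit_score", "debt_to_income"]
coefs    = np.array([0.8, 1.2, -1.5])     # model weights (illustrative)
means    = np.array([65_000, 690, 0.30])  # population averages (illustrative)
scales   = np.array([20_000, 80, 0.10])   # rough scale of each feature

applicant = np.array([38_000, 610, 0.45])

# Contribution of each feature to the score, relative to an average applicant
contributions = coefs * (applicant - means) / scales

# The most negative contributions pushed the score toward a decline
order = np.argsort(contributions)
print("Top factors behind the decline:")
for i in order[:2]:
    print(f"  {features[i]}: contribution {contributions[i]:+.2f}")
```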

3. Values-Driven

Values vary from one organization to another, and AI algorithms need to be built in a way that aligns with each organization’s specific values. Otherwise, AI will perpetuate injustice and discrimination.

4. Explainable

AI tools have to be interpretable and explainable. From what data sets did a model learn? How did it incorporate that data into its algorithms? How does it make decisions? According to Frazier, “unless we know what’s in the engine, we won’t know how to fix it.”
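
One practical way to look “in the engine” is permutation importance: shuffle one feature at a time and measure how much the model’s performance drops. The sketch below uses scikit-learn on synthetic data; the data set and model are stand-ins, not the tooling discussed in the article:

```python
# A minimal sketch of model inspection via permutation importance, using
# scikit-learn on synthetic data (all data and names here are illustrative).
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for a lending data set
X, y = make_classification(n_samples=500, n_features=4, n_informative=3,
                           n_redundant=0, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Shuffle each feature and measure the drop in accuracy: a large drop
# means the model leans heavily on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature_{i}: mean importance {imp:.3f}")
```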

What are other leading experts saying about the ethical issues surrounding data, discrimination, privacy and competition? And what level of risk do you expose yourself to when you don’t fully understand their implications? Answering these questions will become increasingly important for AI entrepreneurs and business leaders looking to leverage AI in their organizations. Learn the nuances of AI ethics and its implications in the fintech sector in our Oxford AI in Fintech and Open Banking Programme. Download a free prospectus today.
