Artificial intelligence is becoming increasingly embedded in the financial industry. AI systems will soon be key decision-makers in critical situations, such as whether someone is granted or denied a loan. Without trust and accountability, those systems will never reach their full potential.
Cathy Cobey, EY Global Trusted AI Consulting Leader, says: “Without trust, artificial intelligence cannot deliver on its potential value. New governance and controls geared to AI’s dynamic learning processes can help address risks and build trust in AI.”
What Does Trust Mean In An AI System?
AI systems are reaching a level of maturity where trust is becoming an essential factor. But trust cannot be built into an AI system directly – it’s a product of the human-machine relationship.
For an AI system to be deemed trustworthy, humans will have to trust in:
The performance of the AI/machine learning model
The operations of the AI system
The ethics of the workflow, both to design the AI system and how it’s used to inform business processes
Corporate and Government Policies Regarding Trusted AI
As yet, there is no single set of rules that captures how to build trustworthy, accountable AI. But various industry experts and governments have put forward standards for responsible AI.
Microsoft and IBM call for fairness, reliability, safety, privacy, security, inclusiveness, transparency and accountability in building responsible AI.
Although no one set of rules has been formally established, these principles share a great deal when it comes to designing AI that’s responsible and trustworthy.
4 Factors Required to Build Trusted AI
Artificial intelligence is being used to make decisions with significant consequences for a person’s life. But as AI applications become more ubiquitous in the financial sector, how do you know whether you can trust them?
According to Lauren Frazier of IBM’s Cloud Pak for Data marketing team, a trustworthy AI platform should be fair, transparent, value-aligned and explainable.
Since AI tools are making so many decisions on our behalf, they have to be fundamentally fair. At a minimum, that means they won’t discriminate based on age, gender, race, religion or disability.
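One common way to test for this kind of fairness is to compare approval rates across groups. The sketch below is a minimal, hypothetical illustration of the widely used “80% rule” for disparate impact; the group data and threshold are invented for the example, not drawn from any real system.

```python
# Minimal sketch of a disparate-impact check (hypothetical data).
# Decisions are encoded as 1 = approved, 0 = declined.

def approval_rate(decisions):
    """Fraction of decisions that were approvals."""
    return sum(decisions) / len(decisions)

def disparate_impact(group_a, group_b):
    """Ratio of the lower approval rate to the higher one (1.0 = parity)."""
    ra, rb = approval_rate(group_a), approval_rate(group_b)
    return min(ra, rb) / max(ra, rb)

# Illustrative outcomes for two demographic groups
group_a = [1, 1, 1, 0, 1, 1, 0, 1]  # 75% approved
group_b = [1, 0, 0, 1, 0, 0, 1, 0]  # 37.5% approved

ratio = disparate_impact(group_a, group_b)
print(f"disparate impact ratio: {ratio:.2f}")
print("potential bias" if ratio < 0.8 else "within threshold")
```

A ratio below 0.8 is a conventional red flag that one group is being approved far less often than another, prompting a closer audit of the model.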
AI systems will have to explain how they arrived at a particular conclusion. For example, if an AI platform declines a loan, it should explain which factors led to the decline, such as a household income or credit score below the required threshold.
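In code, that kind of traceable decline can be as simple as recording every failed check alongside the decision. The sketch below is hypothetical: the rule structure, income and credit-score thresholds are invented to illustrate the idea, not a real underwriting model.

```python
# Hypothetical sketch of an explainable loan decision: every rule that
# fails is recorded, so a decline can be traced to concrete factors.

MIN_INCOME = 40_000        # illustrative threshold
MIN_CREDIT_SCORE = 620     # illustrative threshold

def decide_loan(income, credit_score):
    """Return (approved, reasons); reasons lists every failed check."""
    reasons = []
    if income < MIN_INCOME:
        reasons.append(f"household income {income} below threshold {MIN_INCOME}")
    if credit_score < MIN_CREDIT_SCORE:
        reasons.append(f"credit score {credit_score} below threshold {MIN_CREDIT_SCORE}")
    return (len(reasons) == 0, reasons)

approved, reasons = decide_loan(income=35_000, credit_score=700)
print("approved" if approved else "declined:", reasons)
```

Real models are rarely this simple, but the principle is the same: the system should surface the specific factors behind each outcome, not just the verdict.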
Values vary from one organization to another, and AI algorithms need to be built to align with each organization’s specific values. Otherwise, AI will perpetuate injustice and discrimination.
AI tools have to be interpretable and explainable. What data sets did the model learn from? How did it incorporate that data into its algorithms? How does it make decisions? According to Frazier, “unless we know what’s in the engine, we won’t know how to fix it.”
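For some model families, “knowing what’s in the engine” can be done exactly. The sketch below assumes a simple linear scoring model with invented weights and an invented applicant: because the score is a weighted sum, each feature’s contribution can be decomposed precisely and ranked by influence.

```python
# Sketch of one interpretability technique for a linear scoring model
# (hypothetical weights and applicant data): each feature's contribution
# is weight * value, so the final score decomposes exactly.

weights = {"credit_score": 0.004, "income": 0.00002, "debt_ratio": -2.0}

def explain_score(applicant):
    """Return each feature's contribution to the overall score."""
    return {name: weights[name] * applicant[name] for name in weights}

applicant = {"credit_score": 680, "income": 55_000, "debt_ratio": 0.6}
contributions = explain_score(applicant)
score = sum(contributions.values())

# Report features from most to least influential
for name, value in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name}: {value:+.2f}")
print(f"total score: {score:.2f}")
```

Deep neural networks don’t decompose this cleanly, which is why dedicated explainability methods exist for them, but the goal is identical: attributing a decision back to the inputs that drove it.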
What are other leading experts saying about the ethical issues surrounding data, discrimination, privacy and competition? And what level of risk do you expose yourself to when you don’t fully understand their implications? Answering these questions will become increasingly important for AI entrepreneurs and business leaders looking to leverage AI in their organizations. Learn the nuances of AI ethics and its implications in the fintech sector in our Oxford AI in Fintech and Open Banking Programme. Download a free prospectus today.