
Jessica Lennard

June 2021

Jessica is a Senior Director in Visa’s Global Strategic Initiatives team, leading on Data and Artificial Intelligence. Her work focuses on AI (policy, regulation, and ethics), privacy, data sharing, and data for good.

 

6 - 7 Minutes

How organisations can responsibly unlock the power of Artificial Intelligence

On a now-famous front page in 2017, The Economist proclaimed that data had become the world’s most valuable resource. While that premise undoubtedly remains true today, it is not quite as simple as it sounds.

Data needs to be analysed and interpreted to be useful – on its own it is like having a mine full of gold, but no picks or shovels.

Artificial Intelligence (AI) is a powerful tool for unlocking the value of data: it turns raw data into insights and predictions, learning from experience as it goes. AI enables automated decision-making at scale and can transform an organisation’s ability to create successful products and drive beneficial outcomes for consumers, businesses and society. It has enormous potential in financial services – from helping to detect and prevent financial crime and supporting responsible lending decisions, to better understanding and responding to customers’ needs with more personalised advice and efficient services.

At its best, AI is already transforming the world around us – and there is much more to come.

Need for transparency

However, the power of AI comes with a unique set of responsibilities and challenges.

The rapidly expanding use of these technologies will drive the single biggest delegation of human decision-making that has ever taken place in society.

And automation at scale, though clearly valuable in many ways, can have huge implications for people’s lives and for trust in organisations, especially if AI models are inaccurate, difficult to explain, unfairly biased or otherwise flawed.

The use of AI also poses significant new questions that have to be considered. What is a ‘good’ outcome from an automated decision, and who decides? A loan applicant seeking credit approval and a bank using AI for greater accuracy in risk scoring might have different perspectives on whether the decision to approve or decline was fair or good. There is also research¹ to suggest that, as humans, we judge certain decisions more harshly when they are made by machines than when they are made by other human beings.

As we rely more heavily on decisions made by algorithms, there is a greater need for transparency: we must be able to explain the outcomes these models generate in ways that are understandable to those they affect. We also need to be able to explain the management, governance and accountability processes that framed the design and building of the AI decision model. All of this is critical to building trust in these new technologies.
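To make the idea of explainable outcomes concrete, here is a minimal, purely illustrative sketch in Python (not a description of any Visa system; the model, feature names, weights and applicant values are all hypothetical) showing how an automated credit decision from a simple logistic-regression score can be paired with human-readable ‘reason codes’ indicating which factors drove the outcome:

# Illustrative sketch: generating "reason codes" for a single automated
# credit decision from a simple logistic-regression model.
# All feature names, weights and the applicant record below are hypothetical.
import math

WEIGHTS = {                      # hypothetical model coefficients
    "debt_to_income_ratio": -2.1,
    "missed_payments_12m": -1.4,
    "years_of_credit_history": 0.6,
    "income_thousands": 0.02,
}
BIAS = 0.5

applicant = {                    # hypothetical applicant, already scaled
    "debt_to_income_ratio": 0.45,
    "missed_payments_12m": 2,
    "years_of_credit_history": 3,
    "income_thousands": 38,
}

# Score the applicant and record each feature's contribution to the decision.
contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
score = 1 / (1 + math.exp(-(BIAS + sum(contributions.values()))))
decision = "approve" if score >= 0.5 else "decline"

# Rank the features that pushed the score down the most: a crude but
# human-readable account of why the model reached this outcome.
reasons = sorted(contributions.items(), key=lambda kv: kv[1])[:2]

print(f"decision: {decision} (score={score:.2f})")
for feature, impact in reasons:
    print(f"  contributing factor: {feature} (impact {impact:+.2f})")

Explanation techniques for more complex models are far more sophisticated than this, but the principle is the same: every automated outcome should be accompanied by an account of what drove it that the affected person can understand.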

Building on hard-won trust

The financial services industry is highly trusted by consumers today, meaning it is well placed to show leadership and best practice in the development and use of AI. But that trust has been hard won – and we don’t want to jeopardise it. Given the new challenges AI raises, and the importance of maintaining trust, it is not enough to consider the risks and governance of AI solely within the existing frameworks that most organisations have traditionally used.

Companies need to think deeply and holistically about AI – ethically, operationally, culturally – and what it means for the future of their business.

It is clear that many businesses and policymakers are already awake to the scale and complexity of the challenge. This awareness is evidenced by the 160-plus ethics frameworks already in use around the world, and ongoing debate around the evolution of critical functions such as risk management, privacy and audit.

Visa has been on an exciting journey toward best practice in recent years, although we constantly remind ourselves there are no ‘right answers’ and the work is never finished. Part of our responsibility as a global network is to share learnings and bring together broad groups of stakeholders to advance progress. We firmly believe the payments and financial services industry can work together, along with the research, regulation and policy community, to raise the bar for responsible AI – particularly through sharing research, learnings and best practices.

Three steps to stronger governance

We have identified a number of areas that have proved valuable in informing our constantly evolving approach:

1. Build cross-organisational awareness and accountability

Technical functions within businesses often operate in silos, creating gaps in AI expertise and understanding and making governance, accountability and risk management more difficult. To mitigate these risks, we believe it’s crucial that accountability for responsible AI and data use is embedded throughout the business, across all functions.

Achieving appropriate levels of awareness and accountability requires working to establish technical expertise across the organisation and creating cross-functional leadership and governance groups. For Visa, this includes our Data Council that acts as a “strategic brain trust” on data issues, and our Data Use Council that evaluates new and specific use cases of data. We have found that these structures help both to promote shared understanding, and to ensure healthy checks and balances between innovation and governance – all of which serves to build and maintain consumers’ trust.

A striking example is the issue of bias mitigation and fairness in AI, which is a particular area of focus for Visa. Tackling this aspect of responsible AI requires accountability, expertise and collaboration across functions as broad as Legal and Privacy, Risk, Policy, Social Impact, Diversity and Inclusion, and HR – in addition to the technical data science teams.
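As a purely illustrative sketch of what one such bias check might look like in practice (this is our own simplified example, not Visa’s tooling; the groups, decisions and threshold are hypothetical), a team could routinely compare approval rates across groups and escalate large gaps for cross-functional review:

# Illustrative only: a simple demographic-parity check comparing approval
# rates across groups in a batch of automated decisions. The data is made up.
from collections import defaultdict

decisions = [  # hypothetical (group, approved) pairs from a model's output
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

counts = defaultdict(lambda: [0, 0])          # group -> [approved, total]
for group, approved in decisions:
    counts[group][0] += int(approved)
    counts[group][1] += 1

rates = {g: approvals / total for g, (approvals, total) in counts.items()}
gap = max(rates.values()) - min(rates.values())

print("approval rates:", {g: round(r, 2) for g, r in rates.items()})
print(f"demographic parity gap: {gap:.2f}")
if gap > 0.2:                                 # threshold is an arbitrary example
    print("warning: gap exceeds review threshold, escalate for cross-functional review")

Real fairness work involves far richer metrics and context than a single gap figure, which is precisely why it needs input from legal, policy and inclusion experts as well as data scientists.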

2. Ensure ethics are fit for a digital world

Ethical dilemmas raised by data use and AI – including issues such as privacy, fairness, equity and human autonomy – often do not fit easily into the frameworks companies use to guide and govern their values and behaviours today.

Many of the ethical considerations raised regarding AI have been around for a long time. But the delegation of decision-making to computers (especially where decisions are not easily explained), coupled with the scalability of AI, requires a fundamental rethink of the responsibilities of businesses toward consumers, society, investors and employees.

One way to address this is to create new, or adapted, principles within organisations that translate corporate mission and beliefs into the context of data and AI. This requires a review of global ethics principles and regulation, as well as extensive external and internal stakeholder engagement. Visa has partnered with Stanford University on its ethical dilemmas in technology project.

3. Engage in global regulatory, industry and civil society debate

Much of the regulation and policy which will govern AI in the future does not yet exist. Nor is it clear how existing areas of law (from consumer protection to human rights) apply to AI today.

A fast-moving global conversation is taking place around both of these areas, and it is important to be an active participant in that process, whether it is with national governments, industry councils (like the Microsoft National AI Council, of which Visa is a member), or supranational organisations, such as the World Economic Forum, with which Visa partners on multiple levels.

A powerful tool to meet global challenges

For companies committed to strong governance, ethics and culture around AI, the possibilities are enormous. AI delivers powerful insights to drive better decision-making. It can help predict risk and manage crises, drive economic growth, and support business recovery and resilience planning. For consumers, it is already delivering dramatically greater choice and convenience – from the biometric security that protects our devices, to the recommendation engines we rely on to find the product or service we want in seconds.

In terms of unlocking innovation, AI is arguably the most powerful tool on the planet, particularly in helping tackle some of the greatest humanitarian and socio-economic challenges we face today.

AI can play a role in the rapid development of vaccines and the management of pandemics (as we have seen throughout COVID-19²), in expanding financial access and participation, and in fighting climate change and delivering a more sustainable and inclusive world.

All of these benefits, and many, many more, depend on the trust of society and regulators, requiring AI to be developed and used with transparency and accountability. This needs to be done in a way that customers are comfortable with, and that delivers fair and beneficial outcomes. Without that trust, a huge opportunity will be missed – and we risk being left sitting on a goldmine, unable to access the treasure all around us.

Stay current with the latest payments insights from Visa Navigate CEMEA – subscribe today.

All brand names, logos and/or trademarks are the property of their respective owners, are used for identification purposes only, and do not necessarily imply product endorsement or affiliation with Visa.

1 ‘How Humans Judge Machines’, Massachusetts Institute of Technology, 2021: https://www.judgingmachines.com/

2 ‘How AI is being used for COVID-19 vaccine creation and distribution’, TechRepublic: https://www.techrepublic.com/article/how-ai-is-being-used-for-covid-19-vaccine-creation-and-distribution/
