An Ethical Framework for the AI Age
This article by Joy Macknight appeared earlier in The Banker
Widespread use of artificial intelligence in financial services has encouraged debate around the ethical use of this cutting-edge technology built on vast data sets. Joy Macknight explores the issues, as well as initiatives to establish guidelines.
While blockchain and quantum computing continue to inch up the emerging technologies hype cycle, artificial intelligence (AI) is revolutionising the financial services industry by stealth. Fuelled by the three ‘V’s of big data – velocity, volume and variety – AI tools are being deployed across the bank, from the customer front end to deep in the back office.
AI is an umbrella term that covers robotic process automation, or the simple automation of processes such as data entry; natural language processing (NLP), most commonly used in chatbots; and machine learning, as is used in robo-advisory services and credit scoring.
Risk assessment opportunities
To date, much of the focus has been on customer benefits, such as improved experience and more personalised products; however, Cathy Bessant, Bank of America’s chief operations and technology officer, believes that the more exciting application areas are in risk management and financial forecasting. “AI gives us the ability to take vast amounts of data and produce forecasts and risk assessments based on changing variables to understand their impact. The opportunity for fast, world-class risk management is huge,” she says.
Better risk assessment can prove invaluable in times of systemic stress, such as the global financial crisis. “While not suggesting that a crisis could have been averted, there are many things that the industry could have possibly foreseen with the ability to analyse decades of data at scale, or with AI models that showed us early warning signals,” says Ms Bessant.
Royal Bank of Canada (RBC) is deploying AI in areas including cyber security and fraud detection, as well as in a new product called Apollo, which analyses news and determines the potential impact on client portfolios and markets, says Dr Foteini Agrafioti, chief science officer at the bank and head of research institute Borealis AI.
Apollo helps the bank’s analysts keep up with information on companies of interest. “As humans, we are limited by our capacity, language ability and the speed at which we can consume information,” says Ms Agrafioti. “Apollo can watch the information world 24/7, read news about companies, qualify it and bring forth important insights. It develops intuition and alerts the analyst if specific news could be impactful or likely to change the trajectory of a company.”
Ms Agrafioti highlights the synergistic relationship between humans and machines. “With these types of technologies, we can mine big data sets, process them in real time and stay on top of any information out there. But what we can’t do is combine that information to make financial decisions, which is what humans excel at,” she says.
Areas of concern
But as AI becomes more powerful, it is beginning to extend into decision-making areas, such as credit scoring and lending, which can have profound impacts on people’s lives. Industry concerns fall into four broad categories: fairness, accountability, transparency and explainability.
Rory Macmillan, co-chair of the security, infrastructure and trust working group of the ITU Digital Financial Services group, starts from the broad socioeconomic challenges that come with large amounts of data being held on consumers. He says: “There exists consumer vulnerability in the face of major information asymmetry, where the seller knows far more about a consumer than they know about the seller, the product or, in some cases, even their own habits and interests.”
He is quick to point out that AI can also help to close the information gap, as well as assist providers to better assess risk in credit or insurance, which aids financial inclusion. “AI can be used to tailor products to meet consumer needs and improve access to products. In developing countries, this has allowed many people access to financial products that they wouldn’t have had previously,” he says.
However, consent remains a major issue. “Even if consumers could read and understand all the terms and conditions, they don’t really understand the implications of their data being gathered, shared, aggregated and analysed to profile them,” says Mr Macmillan.
And despite the stringent requirements for banks to notify consumers as to how their data is being used, this is difficult because the data is run through multiple layers of machine learning. “There is a vast number of variables and the deeper the machine learning has gone to detect patterns, the harder it is to explain that,” adds Mr Macmillan.
Ms Agrafioti adds: “AI-powered systems have significant economic advantages and opportunities, but there are constraints around how much you can do with them if you can’t understand how they work. For example, in consumer lending there are concerns across the industry around the biases a system could perpetuate; any data that has been collected previously could be intentionally or unintentionally biased.”
Dr Dimitris Vlitas, principal, AI, at Data Practitioners, also picks up on the issue of potential bias inherent in the historical data used to build AI/machine-learning models. For example, one NLP model was trained to predict the next words in sentences or passages. Because it had been trained on Wikipedia data, it made correlations between ‘man’ and ‘scientist’, and between ‘woman’ and ‘housewife’. “The model picked up on the bias that was already inside the text of Wikipedia. Therefore, algorithms can be used as an assisting hand but, ultimately, a human should be there to make the final decision,” says Mr Vlitas.
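The mechanism Mr Vlitas describes can be sketched with word embeddings, the vector representations such language models learn from text. The toy vectors below are invented purely for illustration (real embeddings are learned from corpus co-occurrence statistics, not hand-written), but they show how a simple similarity audit can surface a bias absorbed from training data:

```python
# Illustrative sketch: auditing word embeddings for gender bias.
# The 3-dimensional vectors below are hypothetical stand-ins for
# embeddings a model might learn from a biased corpus such as Wikipedia.
import math

def cosine(u, v):
    """Cosine similarity between two vectors (1.0 = same direction)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# In a biased embedding space, occupation words drift towards one gender.
embeddings = {
    "man":       [0.9, 0.1, 0.2],
    "woman":     [0.1, 0.9, 0.2],
    "scientist": [0.8, 0.2, 0.5],
    "housewife": [0.2, 0.8, 0.5],
}

# A bias audit compares similarities across the gender pair:
print(cosine(embeddings["man"], embeddings["scientist"]))    # noticeably higher
print(cosine(embeddings["woman"], embeddings["scientist"]))  # noticeably lower
print(cosine(embeddings["woman"], embeddings["housewife"]))  # noticeably higher
```

The asymmetry in the printed similarities is the embedded bias: the model associates ‘scientist’ more strongly with ‘man’ purely because of patterns in the text it was trained on, which is why a human check on downstream decisions matters.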
Part of the solution to eliminate bias lies in omitting specific variables and creating an inclusive data set, which ensures that everyone is represented, says Pere Nebot, chief information officer at CaixaBank. “In addition, the models should always have final human input to ensure that the machine’s decision is the right one,” he adds, echoing Mr Vlitas’s point.
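One part of Mr Nebot's prescription, omitting specific variables, can be sketched as a preprocessing step that strips protected attributes before a model ever sees an application. The field names and record below are hypothetical; note that dropping a column does not by itself remove proxies (a postcode can still correlate with ethnicity, for instance), which is one reason the final human check he mentions remains necessary:

```python
# Minimal sketch of dropping protected attributes from model inputs.
# Field names and values are invented for illustration.

PROTECTED = {"gender", "ethnicity", "age"}

def strip_protected(record: dict) -> dict:
    """Return a copy of an application record without protected attributes."""
    return {k: v for k, v in record.items() if k not in PROTECTED}

application = {
    "income": 42_000,
    "credit_history_years": 7,
    "gender": "F",
    "age": 34,
}

features = strip_protected(application)
print(sorted(features))  # protected fields no longer reach the model
```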
Many banks are exploring whether an ethical framework for the use of AI can be useful in addressing these concerns. Dr Iain Brown, head of data science, UK and Ireland, at analytics software firm SAS, reports that the topic comes up weekly among the software company’s financial service clients. “Several banks have already instigated ethics committees internally to look precisely at AI and its implications,” says Mr Brown.
Likewise, Victoria Birch, a partner at law firm Norton Rose Fulbright, reports a dramatic increase in interest in the past year. “As banks move into deployment stage, they must get these AI projects signed off by various stakeholders, including compliance and legal,” she says. “We find these questions are coming up more frequently, as they look at the risks, including potential liability, and how they can put strategies in place to mitigate the risks. An ethical framework is one such mitigant.”
Dr Michael Sinclair, a consultant at Norton Rose Fulbright, adds that one client raised concern over the many separate AI projects running within the bank without an overarching ethical governance framework. “Interestingly, it was the lawyers that drew attention to that, rather than the IT people in the bank,” adds Mr Sinclair.
The law firm strongly advocates ‘ethics by design’. “It is difficult to retro-fit ethical or regulatory considerations. Therefore, the ethical aspects need to be incorporated before the system is designed,” says Peter McBurney, a computer scientist and professor who heads Norton Rose Fulbright’s technology consulting practice.
Both Mr Nebot and Ms Agrafioti see ethics by design as a growing trend. Ms Agrafioti reports that RBC is already employing this methodology, conducting an ethics review of every product, even before research begins. “We go through a rigorous review process internally, which aims to identify the risks,” she says. “It is important that data scientists and researchers are included to ask the right questions, because the technology is constantly evolving.”
Several industry initiatives have also emerged. For example, the Monetary Authority of Singapore (MAS), together with domestic financial institutions and technology companies such as Microsoft and Amazon Web Services, launched its fairness, ethics, accountability and transparency (Feat) principles for the use of AI and data analytics in decision making in November 2018.
Dr David Hardoon, chief data officer at MAS, who co-chaired the Feat committee, says the supervisor was spurred into action towards the end of the second Fintech Festival in 2017, where it had been promoting the use of data and AI across the financial ecosystem in Singapore. “This led to informal conversations with financial institutions around whether this is something that needed regulatory guidance,” he says.
But from the outset, MAS did not want it to be the traditional regulatory ambit of what to do or not do. “We wanted it to be a co-creation – to help the industry and us,” says Mr Hardoon. And MAS wanted to lend the weight of the regulator, to signal the importance of “the very delicate and complex issues around the usage of AI and data”, he says.
The Feat committee decided to take a high-level, principles-based approach because it was targeting the whole financial ecosystem, regulated and non-regulated entities, from the smallest fintech start-ups to the large banks and tech giants. Mr Hardoon adds: “It is important to understand that this is a living document. The principles will change as we become more proficient with the outcomes of AI and data analytics.” He is hoping for much more heated debate in the coming year.
The Council on the Responsible Use of AI, an initiative launched by Harvard Kennedy School’s Belfer Centre for Science and International Affairs in April 2018, with Bank of America as a founding donor, aims to bring diverse perspectives together to work on principles, according to Dr Dan Schrag, co-director of the Belfer Centre’s science, technology and public policy programme.
“This includes academics and corporates from many industries – not just technology but pharmaceuticals, healthcare, retail and communication – because it affects every industry. We also wanted policy-makers in the room because they will be the ones regulating these technologies,” says Mr Schrag.
Ms Bessant believes the council will help to bring the surrounding infrastructure – legal and social frameworks, governance of models, testing, transparency and understanding – up to speed with the technical capabilities of AI. “The council can catalyse dialogue between the many kinds of organisations that need to participate to create the advancement we need in infrastructure,” she says.
The first meeting was held in November 2018, with a lively debate on the problem of defining accountability in a computerised society. Like Mr Hardoon, Mr Schrag emphasises the need for continuing debate. “The technology is evolving so quickly that static principles are unlikely to be sufficient. This needs to be an ongoing dialogue, not a list of principles that we agree and then we are done,” he says.
In addition to keeping up to date with industry initiatives, Barclays has focused on breaking down internal barriers between data scientists and senior business people, according to Steven Roberts, managing director at Barclays UK Ventures. “There was a mismatch between the businesses’ ability to understand what the data scientists were doing and the inability of the data scientists to explain what they were doing,” he says. “This relates to ethics, for it is critical to establish ethical principles that both sides can understand.”
The UK bank brought together the two groups through its 'AI Frenzy' events, also bringing in expert external speakers. “In December 2017, we were running an AI Frenzy at one of our Eagle Labs and customers were phoning up to buy tickets, which reflects the general public’s interest in the AI space,” says Mr Roberts. “We realised there is a huge opportunity to democratise AI, making it accessible to anyone who wanted to get involved.” In 2018, Barclays hosted more than 40 AI Frenzy events, with more than 4000 people in attendance.
While regulations governing data and privacy are only just emerging in many countries, Ms Bessant thinks existing rules will have a dramatic impact on AI, because the regulatory framework built for non-AI-produced outcomes can be equally effective for AI-produced outcomes. “While there may be specific areas where regulation is needed, the regulatory framework in banking today requires us to be able to document our models, show governance around them and prove that they are effective,” she adds.
However, it is when AI outcomes are produced by non-regulated entities that problems arise. “That is where the playing field must be levelled because there are non-regulated companies that want to sell us AI models, but they don’t want to show us those models because they believe it is their intellectual property. As regulated entities, we must see those models to use them,” says Ms Bessant.
Mr Schrag has not reached an opinion on whether additional regulations are needed, mainly because he thinks it is premature at this stage. However, he could imagine a hybrid system of both self-regulation and legislation. “Generally, it is often better for an industry’s big players to develop an agreed set of principles, because then it allows the government agencies to focus on some of the bigger and more serious issues,” he says.
SAS’s Mr Brown says: “There are discussions ongoing around regulations, but that won’t happen any time soon because of concerns that they might constrain innovation.” But he advises organisations to start looking at the issues now to position themselves “in the right place before anything comes down from above”.