AI: Moving from fear to trust

Speech by Jessica Rusu, FCA Chief Data, Information and Intelligence Officer, at the City and Financial Global summit: Regulation and risk management of Artificial Intelligence in financial services.

Speaker: Jessica Rusu, Chief Data, Information and Intelligence Officer
Event: City and Financial Global summit: Regulation and risk management of Artificial Intelligence in financial services
Delivered: 9 November 2022
Note: this is a drafted speech and may differ from the delivered version

Highlights

  • AI needs governance to move from fear to trust, but many of the rules in financial services are already in place.
  • Agency must not be attributed to AI systems, as this risks removing accountability from firms.
  • Safe and responsible AI adoption must be underpinned by high-quality data.

Polarised debates around AI 

The debate surrounding AI is often polarising.  

It either solves the world’s problems or it leads to our ultimate destruction.  

AI, like most technology, can be used for good, for greed, or for both. 

Let’s look at a recent real-life example. 

During the pandemic, Matthew, the father of nine-year-old twin girls, was struggling with home schooling, something that many of us can relate to.

At the time, Matthew, a former advertising guru, was part of a small start-up that had invented an AI tool.

This tool could track human emotion as users watched video content, adapting it in real time according to their reactions. 

He had initially thought this had huge commercial potential for the world of marketing.   

It’s not a far stretch to imagine how this could work in practice. 

AI technology can be used to change the look and voice of a model to target a consumer in a personal way – very similar to the way deep fakes work. This had huge revenue potential. 

This technology could even have had a political use: selling the adaptive video technology to political parties, allowing politicians to hyper-target individual voters, changing their pitch and promises in line with voter reaction.

So what happened?

Eureka moment

Ultimately – in that moment of necessity – the entrepreneur realised that the AI tool could be used as a force for good rather than greed.  

Working with education experts, he rolled out an adaptive maths programme that could become easier or harder in real time, according to the users’ reactions. 

This was all GDPR-compliant, with the data held on the user’s device rather than in some anonymous server centre.

He went on to deploy edge AI in a project with the charity Nacro, to help under-privileged young adults develop soft skills for job and course interviews. His start-up won backing from Innovate UK, the UK’s national innovation agency.

Matthew could have easily been lured down the path of using his technology to help marketing departments sell more. But after two decades in advertising and several months of home schooling, he had found a more compelling purpose. 

In this example, the innovator chose good over greed. 

But we know from experience that not all innovations lead to good outcomes.  

Ethical questions remain 

There are real and legitimate concerns around economic stability as well as ethical questions surrounding the ability of AI to mimic human intelligence.  

And there are perhaps more immediate concerns over firms exploiting consumer data and the privacy concerns associated with hyper-targeting.  

But there is also confusion over what AI really is and what it realistically can achieve.   

And there can be a reluctance to embrace the opportunities of beneficial innovation when we are too caught up in fear of the unknown.

So where are we really on this debate?

Machine Learning survey 

Looking at UK financial services can give us a more balanced perspective.

We recently published a survey with the Bank of England on the use of machine learning and AI in financial services. 

The survey was conducted to give regulators a better understanding of how machine learning is used in financial services, so that we can best support its safe and responsible adoption.

The survey found, unsurprisingly, that the use of AI in financial services is accelerating. 

  • Overall, 72% of firms responding to the survey reported actively using or developing machine learning applications.
  • This use is expected to more than triple over the next three years.
  • The largest expected uptake is in the insurance sector, followed by banking. 
  • Firms reported that machine learning applications are now more advanced and increasingly embedded in day-to-day operations, with nearly eight out of ten applications in the later stages of development.

The survey also provided some insights into the sentiments surrounding the relative benefits, risks and opportunities for AI in financial services. 

Firstly, firms were upbeat about the benefits of machine learning, including enhanced data and analytics capabilities, operational efficiency and better detection of fraud and money-laundering. 

Secondly, in terms of risk, the firms that responded consider the current uses of AI to be of low to medium risk.

They identified the biggest risk for consumers as data bias and data representativeness.  

The biggest risk for firms was identified as a lack of AI explainability. 

Finally, in terms of opportunities, firms are leveraging existing governance frameworks to address the use of AI. 

These survey results provide a more balanced view of the risks we collectively face, alongside the relative benefits AI provides.

At the FCA, we see many benefits in AI.

We are using machine learning to analyse over 100,000 new web domains daily to identify potential scam sites. We are leveraging natural language processing (NLP) to support our supervision work. And we are investing in AI tools in our digital intelligence environment.

Fine-tuning or fresh regulations?

The key question for financial services is whether AI can be managed through fine-tuning the existing regulatory framework, or whether a new approach is needed.  

This brings me to the FCA and Bank of England’s AI Discussion Paper.

We have recently published this joint paper on AI to prompt a broad-based public debate on:

  • whether the existing regulatory approach in UK financial services works
  • whether there are gaps that need to be addressed or
  • whether an entirely new regulatory approach is needed

What underpins this discussion is the idea that we already have a framework – the Senior Managers’ and Certification Regime (SMCR) – that can be applied to many of the new regulatory challenges posed by the use of AI in UK financial markets.

Poor data will lead to poor outcomes 

Now, I’d like to spend a few minutes discussing AI from the practitioner point of view. Often, when we discuss AI and the complexity of the models, we forget that AI is all about data.  

If the data utilised in models is of poor quality, it can lead to poor algorithmic outcomes. 

The importance of data quality assessments to determine relevance, representation and suitability is heightened when it comes to AI.

From sourcing large amounts of data and creating datasets for training, testing, and validation through to the continuous monitoring post-deployment, safe and responsible AI adoption must be underpinned by high-quality data. 
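
To make this tangible, here is a minimal illustrative sketch, assuming Python and the pandas library, of the kind of basic data-quality checks a practitioner might run before training; the dataset and column names are hypothetical.

```python
# A minimal illustrative sketch, not FCA guidance: simple pre-training
# data-quality checks with pandas. The dataset and column names below
# (e.g. "applicant_group") are hypothetical.
import pandas as pd

def basic_quality_report(df: pd.DataFrame, group_col: str) -> dict:
    """Return simple completeness, duplication and representation metrics."""
    return {
        # Share of missing values per column: incomplete features weaken models.
        "missing_share": df.isna().mean().to_dict(),
        # Exact duplicate rows can silently over-weight parts of the sample.
        "duplicate_rows": int(df.duplicated().sum()),
        # Representation of each group: a skewed mix is an early warning of bias.
        "group_mix": df[group_col].value_counts(normalize=True).to_dict(),
    }

if __name__ == "__main__":
    data = pd.DataFrame({
        "years_experience": [3, 5, None, 7, 2, 4],
        "applicant_group": ["A", "A", "A", "A", "B", "A"],
    })
    print(basic_quality_report(data, group_col="applicant_group"))
```

The checks themselves are deliberately simple; the point is that completeness, duplication and representation are assessed before any model is trained, and again whenever the data is refreshed.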

Furthermore, the use of AI must be fully compliant with existing legislation, in particular data protection. 

Staying with the practitioner point of view, one of the key discussion points surrounding AI is about the unintended consequences, often referred to as AI bias, or AI model risk.

How does this AI bias happen in practice?

If I’m training an algorithm to predict which applicants I should hire into my data engineering team by looking only at past applicants, I may not get the diversity I want. This is an example of sample bias.   
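
As a purely hypothetical sketch, assuming Python with NumPy and scikit-learn, the synthetic example below shows how a model trained only on skewed historical decisions reproduces that skew; none of the data reflects any real hiring process.

```python
# A hypothetical, synthetic illustration of sample bias: past hiring decisions
# favoured group A, so a model trained only on those decisions learns the skew.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1_000
group = rng.integers(0, 2, n)            # 0 = group A, 1 = group B
skill = rng.normal(0.0, 1.0, n)          # skill is distributed identically across groups
noise = rng.normal(0.0, 0.5, n)
# Historical decisions favoured group A regardless of skill.
hired = ((skill + noise > 0) & (group == 0)).astype(int)

X = np.column_stack([skill, group])
model = LogisticRegression(max_iter=1_000).fit(X, hired)

# Two equally strong candidates, one from each group: the model scores the
# group B candidate far lower, because the bias in the sample became bias
# in the algorithm.
strong_candidates = np.array([[2.0, 0.0], [2.0, 1.0]])
print(model.predict_proba(strong_candidates)[:, 1])
```

The remedy sits in the sample rather than the model: broadening the data, rebalancing it, or reviewing the historical labels before training.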

Model bias can occur during the training process and is influenced by feature selection, the choice of model, and the parameters set.

Automated model-selection tools can exacerbate these risks. Any shortcomings such as incomplete or biased data will undermine the validity of the outcome. 

Risk also arises from the lack of explainability of modelled outcomes and the potential for unintended consequences.

And finally, monitoring risk is introduced post-implementation, where hyper-tuned models are highly susceptible to data drift and a lack of substantive oversight.
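
As a final minimal sketch, assuming Python with NumPy and SciPy, one common way to watch for data drift after deployment is to compare the live distribution of a feature against the distribution it had in training; the feature, figures and threshold below are illustrative.

```python
# A minimal, hypothetical sketch of post-deployment drift monitoring: flag a
# feature when its live distribution diverges from the training distribution.
# The feature ("income") and the threshold are illustrative assumptions.
import numpy as np
from scipy.stats import ks_2samp

def drifted(train_values: np.ndarray, live_values: np.ndarray,
            p_threshold: float = 0.01) -> bool:
    """Two-sample Kolmogorov-Smirnov test between training and live data."""
    statistic, p_value = ks_2samp(train_values, live_values)
    return p_value < p_threshold

rng = np.random.default_rng(1)
train_income = rng.normal(30_000, 5_000, 5_000)   # feature as seen in training
live_income = rng.normal(36_000, 5_000, 1_000)    # the live population has shifted

if drifted(train_income, live_income):
    print("Data drift detected: review or retrain the model.")
```

An alert like this is only a trigger; substantive oversight still requires someone accountable to decide whether to retrain, recalibrate or retire the model.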

Governance for AI

And this brings me to my final point: the central role of the governance framework for AI. 

Model Risk Management has been around for a long time. A 2011 publication by the Federal Reserve (SR 11-7) set a global benchmark for Model Risk Management. It warned that:

  • There can be adverse consequences from decisions based on misused model outputs and reports.  
  • This can lead to financial loss, poor business and strategic decision-making, or damage to a banking organisation’s reputation. 

Despite huge technological advances since 2011, these risks remain true in 2022.  

In the UK, we are leveraging existing frameworks, such as the SMCR, and applying them to AI.

Now, there are practical challenges that any AI governance mechanism must address, surrounding:  

  1. Responsibility: who monitors, controls and supervises the design, development, deployment and evaluation of AI models

  2. Creating a framework for dealing with novel challenges, such as AI explainability

  3. How effective governance contributes to the creation of a community of stakeholders with a shared skill set and technical understanding

Above all, governance matters because it ensures that the responsibility is where it needs to be: with the firm!

Machines don’t have agency, humans do 

Let me conclude by saying that, both as a data practitioner and a regulator, I am optimistic about the future uses of AI – in financial services and beyond.

We must take responsibility. We cannot attribute agency to AI systems, as this risks removing accountability for decision-making from firms.

We must leverage what we already have in terms of existing regulatory frameworks and adapt them as technologies change. 

I would urge anyone interested in this topic to engage with our AI discussion paper to help make that regulation more agile and dynamic – so that proportionate safeguards are in place to embrace the opportunities AI offers, whether those are in the interests of good or greed, or both.
