AI Sprint summary

Corporate documents Published: 23/04/2025 Last updated: 23/04/2025

In January 2025, we hosted a 2-day artificial intelligence (AI) Sprint. Find out how we are using feedback from this event and how we want to continue working together.

About the AI Sprint

The AI Sprint discussed the opportunities and challenges of AI in financial services. 115 participants took part from across industry, academia, regulators, technology providers and consumer representatives.

In teams, they discussed how AI may develop in financial services over the next 5 years, and the FCA’s role in enabling firms to embrace the benefits of AI while also managing the risks.

We summarise the feedback received alongside the responses to the AI Input Zone. We are reviewing this feedback to consider how we can further support firms looking to adopt AI in a safe and responsible way and encourage beneficial innovation and growth.

The AI Sprint is one component of our AI Lab, which continues to support firms’ implementation of AI use cases and provide a pathway to engage with firms and wider stakeholders about AI-related insights, discussions and case studies.

Key insights

During the AI Sprint, 4 common themes emerged from participants’ discussions and suggestions:

Regulatory clarity

Participants highlighted the importance of firms understanding how existing regulatory frameworks apply to AI. Teams suggested areas where the FCA could clarify, or build on, existing requirements to help firms understand regulatory expectations and to support beneficial innovation.

Trust and risk awareness

Participants saw trust in AI as vital for its successful adoption. Teams discussed that if firms and consumers felt able to trust AI, then they might utilise it more, including senior leaders backing new AI use cases and consumers engaging with new offerings. Participants agreed that without trust, the full benefits of AI in financial services would not be realised.

Collaboration and coordination

Participants emphasised that all parties involved in AI needed to work together to develop solutions. This includes domestic and international regulators, government, financial services firms, academics, model developers and end users.

Safe AI innovation through sandboxing

Participants appreciated the need for a safe testing environment to encourage responsible innovation. Suggestions included using the FCA’s sandboxes and innovation services to create this safe space, as well as providing access to datasets for innovators to develop and improve AI solutions.

Phases explored

This Sprint focused on 2 main phases for participants to explore:

  • The next 5 years of AI in financial services.
  • The current financial services regulatory regime.

Phase 1: The next 5 years

Teams first considered how AI might develop in financial services, how current use cases may evolve, and what new ones may emerge. Findings included:

Figure A: AI use cases leading up to 2030

Use cases: AI financial advisers, internal process automation, compliance tools, emotion AI and personalisation, agents.

Teams then discussed what conditions might be needed over the next 5 years to enable safe and responsible AI adoption. These included:

Figure B: Enablers of safe AI adoption

Factors that contribute to safe AI adoption: measurable success criteria, model/data/cloud/tech foundations, staff upskilling and internal governance, common standards and interoperability.


Phase 2: The financial services regulatory regime

Teams then looked at topics related to the current regulatory framework for financial services, which were split into 3 overarching themes:

  • Robust processes within firms
  • Good outcomes for consumers
  • Effective competitive markets

Next steps

We are committed to building on the momentum from the AI Sprint to support the safe and responsible adoption of AI across UK financial services.

The feedback and insights received during the AI Sprint – as well as on the AI Spotlight and AI Input Zone – have provided valuable information. They are being used to shape our future work on AI in line with our statutory objectives.

Key issues raised by participants – our focus for the months ahead

  1. The need for a safe space to innovate

    Launching the Supercharged Sandbox, offering innovators greater computing power, infrastructure, datasets, and mentorship to support the testing and validation of AI solutions. The programme is being shaped by insights from the Sprint and identified focus areas to address key adoption challenges. The goal is to equip innovators with the necessary tools, expertise, and regulatory engagement to drive AI adoption in financial services.

  2. Consider any areas of uncertainty

    Considering whether there are areas of uncertainty where regulation could be restricting safe and responsible AI adoption.

    For example, one identified area of uncertainty was around data protection and privacy. With the ICO, we have since written to trade associations to obtain more information about these persistent challenges and how we can better support industry in tackling them.

  3. International engagement

    Ensuring we effectively influence the work of international standard-setting bodies (eg IOSCO, FSB, GFIN) on AI, so that we can support safe and responsible adoption in the UK.

  4. Collaboration opportunities

    Engaging bilaterally with other regulators to explore cross-cutting considerations as well as collaboration opportunities on specific themes.

  5. Communication and continued engagement

    Providing clear communication about our approach to AI and continuing to engage with stakeholders through our AI Lab. The AI Spotlight will run on an ongoing basis, with new themes added based on the use cases discussed in the Sprint.