We have developed this Rule Review Framework (the Framework) in line with an obligation introduced by the Financial Services and Markets Act 2023. This Framework applies to all our rules, which are found in our Handbook. In practice very few of our rules operate in isolation, so references to ‘a rule’ in this Framework are likely to relate to a set of rules or a policy intervention.
This Framework explains how we set, measure and monitor the outcomes of our rules. It also explains how we gather metrics and qualitative intelligence to understand where there are significant concerns about a rule and how we conduct a review if required.
Following a public consultation, we finalised the Framework in January 2024. We have separately published our response[1] to the feedback received during the consultation. As well as monitoring and reviewing our rules, we will also consider whether the Framework is meeting its intended outcomes and update it as needed.
This Framework applies to all rules in the FCA Handbook, with some of the requirements on monitoring applying especially to new rules. However, as part of the Framework’s initial implementation, there may be some rules, such as those that were added to the Handbook shortly after this Framework’s publication, that do not fully meet all the requirements on monitoring. We will explain adherence to the Framework in relevant publications on rules, including Consultation Papers and Policy Statements.
We recognise the importance of updating and amending our rules, as well as looking for opportunities to repeal redundant rules. We have always encouraged firms, consumer groups and wider stakeholders to share intelligence and evidence on our rules and their effectiveness. This Rule Review Framework builds on our existing approaches to evaluation and adds new ways to better assess the effectiveness of our Handbook rules.
Our approach
Our approach to reviewing rules can be split into 2 parts. First, we proactively monitor data and collect information about how well our rules are working. Second, if the data and other evidence suggest that our rules may not be working as intended, we will consider a range of actions to address this, including reviewing them in greater depth.
We will generally monitor key metrics unless it is not feasible to do so, collecting the data and intelligence would be disproportionate for the FCA or stakeholders, or monitoring would not otherwise be an effective use of our resources (for example, we may decide not to monitor where the new rule relates to a minor policy or rule change with minimal impact).
Stakeholder feedback plays an important role throughout this Framework and in helping us to understand how well our rules are working. Feedback may come from the firms and individuals we regulate, from users of their services and consumers more widely or from trade associations or other representative bodies. It may inform our monitoring, contribute to a decision to conduct a review or provide evidence as part of a review. We have developed a dedicated feedback tool[2] to allow anyone with evidence on the effectiveness of our rules to share this with us.
The Framework sets out the 3 main types of rule review that we may undertake, their purpose and when we may use them.
These are:
- an evidence assessment
- a post implementation review
- an impact evaluation
We explain more about each of these reviews in the Framework.
We decide when and how to do a review based on our plans, existing commitments and resources. We will prioritise reviews based on the scale, urgency and extent of harm to consumers and markets, in line with our objectives.
The Framework also sets out what actions we may take as a result of a review, where we have concluded a rule is not working as intended.
These include considering whether:
- we can improve understanding of the existing rule, for example, through additional guidance
- a change to the rule is needed (including varying or revoking the rule). We would follow our existing policy development process to achieve this.
- a further, more detailed, review would be helpful. This could be undertaken immediately, scheduled for a future date or depend on the outcome of further monitoring. This may be the case where our first review was an evidence assessment.
As well as providing valuable insight into how well a rule is working, the findings of a review can inform our wider work and approach. We may gain insight into how we can improve other interventions, such as supervisory work, and inform our approach to future rulemaking.
Why and what we review
Our policy-making cycle and our objectives
Our Rule Review Framework is one part of our policy-making cycle. This cycle starts with us horizon-scanning and identifying instances of harm in financial services markets, for example where markets are working poorly and not providing sufficient benefit to users. We gather information from a range of sources to identify this actual or potential harm, including day-to-day supervisory contact with firms, calls to us from consumers, analysis of market intelligence, our own research and our ongoing engagement with stakeholders. We do not act to remedy every potential harm: we consider the potential costs and benefits involved and our objectives[3].
Once we have identified actual or potential harm, we aim to diagnose its cause, extent and potential development. To do this, we must decide whether we have enough information to assess the issue or if we need to carry out further work to better understand it. We have a range of diagnostic tools to support this including market studies, calls for inputs and discussion papers. We can also analyse individual firms or several firms simultaneously, for example through our multi-firm work and thematic reviews. We aim to do this in a way that is cost-effective for us and for firms.
When we understand the potential harm, we then consider how best to mitigate or avoid that risk of harm. To do this we consider our regulatory tools and make a judgement about which could be used to remedy the harm. Rulemaking is just one of our possible tools. Other remedies include publishing guidance or other communications to firms or their customers or encouraging industry to act voluntarily to address a problem. We can vary or remove firms’ permissions to carry out certain activities. We can use our authorisations approach to control firms’ and individuals’ entry into the market. We also have our enforcement remedies where we have found breaches of our rules.
Reviewing how effective our intervention has been
Reviewing the effectiveness of our chosen remedies helps us to make better decisions, and to add more public value, in the future. There is always a risk that regulatory interventions, including rulemaking, have unintended effects or do not work as well as we expected. External factors and changes in the market may affect how well a remedy, such as a rule, is working. Understanding whether our remedies are working as we intended, and measuring their impact where possible, helps us understand if our intervention has been proportionate to the outcomes achieved. If we find that the problems originally identified in a market are still occurring and our remedies have not had the intended effect, or have had an unintended effect, we consider our next steps and whether to take further action.
Reviewing our rules is also an important vehicle for delivering our objectives. This Framework advances our strategic objective to make markets function well by ensuring our rules are having their intended effect on the market. It also sets out how, during a review, we will consider whether specific Handbook rules are also advancing one or more of our operational objectives. By ensuring the rules in our Handbook are working as intended and undertaking reviews in response to evidence suggesting this may not be the case, our Framework also supports our secondary objective: to facilitate the international competitiveness of the UK economy and its growth in the medium to long term.
What's in scope of this Framework
This Framework applies to all our rules, which are found in our Handbook[4]. Where we monitor or review rules it will normally be helpful for us to consider the policy introduced by the ‘package’ of rules. So we may not monitor or review an individual rule in isolation, but instead consider groups of rules introduced through a policy intervention.
Our Handbook is detailed and many of our rules inter-relate. We need to keep this in mind and avoid reviewing rules in isolation: if we decided, after a review, that a rule is not working as intended and needs to be changed, that change could cause problems for the rules that inter-relate with it.
The duty in the Financial Services and Markets Act 2023 to keep our rules under review does not apply to any materials that are not rules. However, when we review a rule, we can choose to review our guidance and related materials to see if these are working well.
We can also use monitoring and the 3 types of review in this Framework to assess the effectiveness of other types of regulatory interventions that we make.
Our approach to reviewing Handbook rules
The rule review process can be broken down into the following broad stages.
Developing new rules
As a general principle, when we add new rules to our Handbook, we will collect data and monitor their effectiveness in a systematic way. There are some situations where we may choose not to actively monitor new rules, and we will explain this during the rule’s consultation process. We will generally monitor key metrics unless it is not feasible to do so, where collecting data and intelligence would be disproportionate for us or stakeholders, or where it is not otherwise an effective use of our resources. For example, we may decide not to monitor where the new rule relates to a minor policy or rule change with minimal impact.
Where we do plan to monitor, we will usually set our intended outcomes when we develop the policy and rules. We may develop a causal chain setting out how we expect these outcomes will be achieved. We will then decide the key metrics we will monitor and the data required as a result. We know the potential burden that regulatory data requests can have, and we will endeavour to use existing data for our monitoring purposes as much as possible and take a proportionate approach where we do request data from firms. This Framework sits alongside our existing data governance policies and approach and is not intended to replace them.
For ultimate outcomes that take a long time to materialise, we may choose to monitor intermediate outcomes as leading indicators of progress towards our ultimate outcomes.
Where we identify potential unintended consequences of the rule, we may also plan to monitor these.
Once the rule is in force, we will start collecting and monitoring data on how well it is meeting, or is on track to meet, its intended outcomes. We will also consider relevant evidence from stakeholders. The appropriate timeframe for monitoring will depend on the specific circumstances for the rule.
For a minority of our rules, such as where we expect the impact to be particularly significant, we may plan at the consultation stage to conduct a specific review in the future. This is particularly likely when we believe that an impact evaluation may be appropriate. To ensure that we have the right data for this kind of review, it is best we plan for it when we are developing and consulting on the policy and rules. This Framework does not change our existing approach to maintaining a planned pipeline of annual impact evaluations.
After we have implemented a new rule
If, while monitoring the data, including any stakeholder feedback, there is a suggestion of a potential problem with how the rule is working, we will consider whether to do an evidence assessment. An evidence assessment aims to collate any data which indicates whether the key intended outcomes of a rule or policy intervention are being, or are on course to be, met. Our policy teams use their professional judgement to decide whether to propose an evidence assessment, based on a number of criteria. These include the nature of the issue identified, our strategic priorities, our existing commitments, our strategic and operational objectives and our available resources.
The possible outcomes of the evidence assessment include starting the process to consider varying or revoking the rule, where the evidence suggests this is necessary and appropriate. Alternatively, we could choose to return to monitoring the rule, issue clarification or guidance on how it operates to improve stakeholders’ understanding or plan a more in-depth review.
Where our rules flow from international standards, we will not generally depart from these without a compelling justification in line with our objectives. We actively contribute to developing and implementing international standards in global standard setting bodies. These support the management of cross-border risks to financial stability, market integrity and general confidence in the global financial system.
They also support the development of common approaches with counterparts and form the basis of our UK approach where this is appropriate for UK markets and UK consumers. So we would need to carefully consider the effect that changing these rules may have if this were the outcome of a review.
Figure 1: Overview of rule review process for new rules
Figure 1: Overview of rule review process for new rules (PDF)[10]
Conducting reviews for rules we are not proactively monitoring
The data we get from proactively monitoring our rules is one possible prompt for conducting a rule review. There may be existing rules in our Handbook for which we have not been systematically monitoring data. Additionally, we may have implemented new rules and chosen not to monitor them because we did not consider it feasible to do so. In these instances, where we have a rule without this type of monitoring data, we may still consider conducting a review where:
- We have other evidence that the rule is not working as intended. Separate to the proactive data monitoring outlined above, we may have been collecting information involving a rule, for example through our supervisory work, which we can use to assess how the rule is working. In other cases, stakeholders might give us evidence about how the rule is working, which we will consider carefully alongside all other available evidence.
- There have been substantive changes in circumstances, including market developments or the introduction of other rules, which affect how the rule is functioning and we want to understand the overall effect - positive or negative - that the rule is having on the market.
As with conducting a review based on monitoring data, we decide whether or not to do an evidence assessment based on a number of criteria, including our strategic priorities, existing commitments and resources. We will also consider how adequate the available data is and the likelihood of improving any evidence gaps in a proportionate way.
Once we have decided to do a review, it will broadly follow the same stages for all rules, regardless of when they entered the Handbook.
The Treasury can also require us to review a rule (see Government-directed reviews[11]).
Figure 2: Overview of rule review process for all rules
Figure 2: Overview of rule review process for all rules (PDF)[12]
How we set, measure and monitor the outcomes of our rules
Setting outcomes
We set the outcomes we are seeking to achieve for new rules and policies so that we can assess whether they are working as intended once they have been implemented. We generally include these outcomes in our consultations and policy statements.
Our statutory objectives shape and guide our policy development and rule proposals. We generally explain the outcomes we want to achieve by referring to how they advance one or more of our statutory objectives.
The difference between outputs and outcomes
In monitoring our rules, we need to differentiate between outputs and outcomes. Outputs stem from our activities. Outcomes are what is achieved because of the outputs we have delivered. The examples in Table 1 illustrate the difference between outputs and outcomes.
Table 1: Examples of outputs and outcomes
Harm | Intervention | Output | Intended outcomes |
---|---|---|---|
Rent to own customers are often vulnerable and pay prices that are too high | Rent to own price cap | Bringing the price cap into force, with firms adjusting prices as a result | Rent to own customers get fair prices |
Market stability is threatened by misconduct and manipulation cases relating to financial benchmarks | Bringing 7 benchmarks into the regulatory and supervisory regime | Setting out our framework for regulating and supervising the additional 7 benchmarks, making the benchmarks more robust and representative | Improved liquidity and participation in the underlying markets, due to robust and representative benchmarks |
Our organisational outcomes and metrics
More broadly, we monitor progress against our key areas of focus, as set out in our published Strategy[13] and Business Plan[14]. We report on our performance against our Business Plan in our Annual Report.
In many cases, the outcomes and metrics for our new rules will align with our organisational strategic outcomes and metrics set out as part of our Strategy. We may therefore also use those metrics to assess whether there may be an issue with our rules.
Using causal chains to assess how our interventions will work
We may use a causal chain to explain how we believe an intervention, such as new rules, will work by setting out the steps between the outputs and the outcomes we want to achieve. We may set out a causal chain in the Cost Benefit Analysis (CBA), the document that assesses the costs and benefits of our proposals.
Causal chains will inevitably include some assumptions, for example about changes in behaviour that may result from our intervention. However, they are a useful tool to help us understand how we will bring about the change we want to see. For example, a price cap (output) may lead firms to adjust their prices (intermediate outcome) so that customers pay fair prices (ultimate outcome).
Identifying key metrics to monitor
We can monitor outcomes, including lead indicators or intermediate outcomes, to help us assess whether we made the correct assumptions in the causal chain. To do this, we identify a set of metrics for each outcome to help us establish whether the rule is working as intended.
We need to examine and frame these metrics carefully to avoid misinterpreting movements and to account for predictable variations that happen each calendar year (‘seasonality’). For some rules, we may find it useful to measure outputs as well as or instead of outcomes (see the difference between outputs and outcomes).
Data sources for key metrics
Evidence for our metrics can come from different sources of data. We collect a lot of data and have a variety of sources to help us understand how our rules are working, including our authorisation, supervision and enforcement work.
For proactive monitoring, where possible we will use data that we already collect or have access to. We know that data from industry can be resource-intensive for firms to provide and for us to analyse. When we do request new data, we will follow our existing data governance procedures and be as proportionate as possible.
The following are examples of possible sources of data for our metrics:
- Stakeholder feedback gathered through, for example, roundtables, focus groups and other forums. We also use evidence about how the rules are working that stakeholders proactively provide to us via our feedback tool.
- Our ongoing engagement with stakeholders’ representatives, for example through trade bodies, industry-led bodies and industry associations, consumer organisations and civil society.
- Research, such as surveys like the FCA’s Financial Lives Survey and Practitioner Panel Survey and insights from behavioural analysis, including into consumer experience.
- Feedback from our statutory Panels and advisory committees.
- Our supervisory work, including ongoing contact with firms and our multi-firm reviews.
- Information provided by firms through regulatory returns and reporting.
- Complaints data and data from the Financial Ombudsman Service and other regulators.
- Information from our enforcement work and primary and secondary market oversight work.
- Market intelligence, including from wider monitoring of the market, our market studies and our thematic reviews.
- Parliamentary feedback including MPs’ letters, Parliamentary debates, and Select Committee inquiries.
- Third party data, such as the media, market research firms and financial data providers.
- How other bodies, such as authorities in other jurisdictions, have implemented similar measures, and reviews of these.
These data sources are also relevant when we decide to do a review and want to explore how the rule is working in more detail. At that stage, we may also need to request further information from firms or to do further consumer research.
How we monitor rules
As part of the CBA, or earlier analysis before implementing rules, we typically establish a baseline showing the state of the market before our intervention. We also consider how the market would have changed over time without our intervention (the counterfactual). Ideally, we will collect data for the baseline for several time periods (months or years) before implementation to understand underlying trends or seasonality in the data. However, this may often be unfeasible or disproportionate.
Once the rule has been implemented, if we plan to monitor the policy, we will collect or monitor data to assess progress. Ideally, we will collect and monitor at regular intervals and compare this ongoing data against the baseline or counterfactual. The appropriate frequency will depend on the circumstances of the rule and availability of meaningful data. For example, we may collect some quarterly, some annually, and some at ad hoc or longer intervals. We will consider proportionality, weighing the value of the data against any cost for firms to provide the data and the costs of collection.
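By way of illustration only, the sketch below shows one simple way such a comparison could look. It is not FCA methodology: the metric, the values and the 2-standard-deviation flag are entirely hypothetical.

```python
# Purely illustrative sketch: compare post-implementation values of a
# monitored metric against a pre-implementation baseline.
# The metric, values and flagging threshold are hypothetical, not FCA data.

from statistics import mean, stdev

baseline = [102.0, 101.5, 103.2, 102.8, 101.9, 102.4]  # quarterly, pre-rule
observed = [101.1, 99.4, 97.0, 95.6]                    # quarterly, post-rule

baseline_mean = mean(baseline)
baseline_sd = stdev(baseline)

for quarter, value in enumerate(observed, start=1):
    # Express each post-implementation value as a deviation from the
    # baseline average, in baseline standard deviations
    deviation = (value - baseline_mean) / baseline_sd
    flag = " <- potential signal for review" if abs(deviation) > 2 else ""
    print(f"Q{quarter}: {value:.1f} ({deviation:+.1f} sd vs baseline){flag}")
```

In practice, a movement in a single metric would prompt further investigation alongside stakeholder feedback and other evidence, not an automatic conclusion that the rule is failing.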
If during this monitoring there are signs that the rule is not working as we intended, we may decide to conduct a review of the rule (see Types of review and how and when we undertake them).
As long as the data and information we monitor indicate that we are on track to achieve the intended outcomes of the intervention, we will continue the monitoring for the pre-agreed length of time.
The duration and frequency of monitoring will depend on the relevant rule; we may have explained these in the rule’s Consultation Paper or Policy Statement. Once we are content that a rule is working as intended, we can treat it as an existing rule and stop the proactive monitoring. Even where monitoring has stopped, we will still respond, where necessary, to any other evidence, including from our stakeholder feedback tool, that the rule may need to be reviewed.
How we use stakeholder feedback
We welcome stakeholder feedback on how well our rules are working. Feedback may be qualitative or quantitative and it could come from the firms and individuals we regulate, or from users of their services and consumers more widely. In the context of rule reviews, this feedback plays an important role in 3 ways:
- It will inform our monitoring of how well rules are working.
- It may contribute to a decision to conduct a review of a rule by providing evidence that a rule is not working as intended.
- It can provide useful evidence as part of the review itself.
Whether we seek stakeholder feedback as evidence for monitoring a rule will depend on the circumstances of the particular policy and our monitoring plan. Similarly, whether we seek stakeholder feedback as part of a review will depend on our plan for that review.
Stakeholders can provide feedback to us on whether a rule is working as intended through several channels including:
- our ongoing engagement with stakeholders’ representatives such as trade bodies, consumer groups (including our Consumer Network), and civil society
- opportunities such as roundtables, focus groups, sprints and surveys on specific topics
- our ongoing supervisory work with firms
- our Rule Review Framework stakeholder feedback tool (see Channels for stakeholders to give feedback on specific rules below[15])
Our statutory panels[16] also play an important role in giving us feedback about how our rules are working in practice. As part of our ongoing engagement with our Panels, we will share our review plans and priorities with them and seek their views. We will also seek the Panels’ input into individual reviews where appropriate. Our relationship with the Panels is such that they regularly raise issues with us, as well as providing feedback and input.
Channels for stakeholders to give feedback on specific rules
We are committed to ensuring there are clear and appropriate channels for our stakeholders to share feedback on specific rules. So we have developed a feedback tool[2] for stakeholders to give feedback on any rule in our Handbook. This form allows anyone with evidence suggesting that our rules are not working as intended to share it with us.
It is important that stakeholders providing feedback set out, with supporting evidence:
- what is not working
- why they believe it is not working
- the effect of the rule not working as intended
This will help us to understand what is happening and, where appropriate, to prioritise a rule review.
We will consider any feedback we get and, where we agree there may be a need for a review, we will build this into our organisational planning and prioritisation processes (see How we prioritise rules for review[17]). We cannot commit to undertaking a rule review in response to every piece of feedback. We also cannot commit to responding to all submissions via our stakeholder feedback tool.
Separate to the feedback tool, the firms and individuals we regulate can provide evidence on a rule’s effectiveness via their supervisory contact.
We will continue to consider other options for feedback, including potentially embedding a feedback option in our online Handbook so stakeholders can easily give feedback on specific rules.
Types of review and how and when we undertake them
Overall, we have 3 types of review:
- evidence assessment
- post implementation review
- impact evaluation
An evidence assessment is designed to be a less resource-intensive process for all involved. It will be suitable where a less in-depth review is still sufficient to allow us to assess whether the intervention has achieved its intended outcomes or is on course to do so. However, there will be interventions that require a more comprehensive review to appropriately assess the rules’ effectiveness. In these situations, we may choose to conduct a post implementation review or impact evaluation instead.
We also have other options available to investigate specific types of problems suggested by the data we have collected. We may undertake:
- a market study where we have concerns that competition in the market may not be working well
- a thematic review or multi-firm work where we have concerns about firms’ compliance with rules
At times, we may be able to understand what is happening without a review, simply by further interrogating the available data or supplementing it with additional data. However, in cases where we still have concerns about the effectiveness of the rule, we may need to carry out a review. Table 2 sets out the types of review we may undertake.
Table 2: Types of review
Type of review | Why we do it | How we do it | When we do it |
---|---|---|---|
Evidence assessment | To assess whether a rule is on track to achieve its intended outcomes, whether there have been implementation issues or unintended consequences and whether it could be improved (see Evidence assessments below). | We aim to collate and analyse evidence which indicates whether the intended outcomes of a rule or policy intervention are being, or are on course to be, met. This may involve focusing on the early or intermediate part of the causal chain of an intervention. This helps us understand effects without providing exact quantitative estimates. Our focus is to assess if the key changes expected have happened, and the reasons for these changes, without necessarily isolating the exact causal effect of the intervention. We can also use an evidence assessment for existing or other rules where we did not establish a causal chain and there are no clear outcomes to measure against. Here, our aim is to understand the overall effect - positive or negative - that the rule has on the market and whether there have been significant changes in circumstances that affect how the rule works, such as market developments and the introduction of other rules. As far as possible, an evidence assessment will make use of existing data and information held within the FCA. | If monitoring, stakeholder input or other evidence indicates that a rule is not working as intended, or there has been a change in circumstance which has a significant impact on the rule’s effectiveness or the context in which it is applied. |
Post implementation review | To establish whether, and to what extent, a rule has achieved its original objectives, has been implemented and complied with as intended, has had unintended effects and remains the best option for achieving those objectives (see Post implementation reviews below). | We aim to establish whether a rule or policy intervention has met its intended outcomes while also identifying implementation issues and potential unintended consequences, assessing compliance with the rule and examining the wider state of the market after an intervention. It does not typically set out to establish causality or examine what would have happened if we had not intervened. Our focus will primarily be on assessing the outcomes from the early or intermediate part of the causal chain of an intervention, through significant engagement with internal and external stakeholders, though it may be possible to consider some ultimate outcomes as well. We use the results of a post implementation review to understand if an intervention has worked as expected, to make improvements or change approach. | Where we have evidence that a rule is not working as intended and anticipate that significant data analysis and stakeholder engagement will be required to understand which areas of a rule implementation worked well, and which areas did not. We may also state in the Policy Statement that we plan to carry out a post implementation review to assess whether the rule implementation has been successful. We would not seek to establish a causal link between the rule and the outcomes, but look at whether the expected outcomes materialised. |
Impact evaluation | To isolate and quantify the impact of our intervention and attribute it to our actions more reliably (see Impact evaluations below). | Our primary purpose is to attempt to isolate and quantify the impact of our interventions and more reliably attribute it to our actions. This evaluation attempts to establish the counterfactual (what would occur without an intervention) and measures the impact of interventions on outcomes in a way that controls for the effects of material changes in the business environment. It can include qualitative discussions with stakeholders to help understand why results are as they are. When we cannot use other more empirically robust methods, it can also include studies which show that both intermediate and ultimate outcomes have changed in the direction we expected along an agreed causal pathway (from the implementation of the policy to the realisation of benefits). In those cases, the logic of the intervention and data analyses, coupled with views from stakeholders, help establish a counterfactual. | Best planned in advance, at the policy development and implementation stage, to ensure we collect the correct data. We use a set of criteria to determine which rules are suitable candidates for an impact evaluation. This includes the ability to identify a counterfactual against which we can measure causal impacts. |
Evidence assessments
Indicative criteria for when we will do an evidence assessment
- The metrics we monitor show that intermediate outcomes are consistently not being met or we are at risk of not achieving our ultimate outcomes.
- Significant new evidence suggests we should look at the rule, including evidence from our supervisory or enforcement work, or from stakeholders.
- Significant changes in the market or the wider context may have affected the way the rule is working.
An evidence assessment is a process of collecting and analysing available intelligence and information, allowing us to assess whether the rule:
- is on track to achieve its original outcomes or objectives, for example as set out in the causal chain at the policy implementation stage
- has resulted in any implementation issues or unintended consequences
- can be improved, or may be improved with more evidence, to better meet our intended outcomes
We may consider doing an evidence assessment where monitoring suggests that the rule is not working as intended and we need to better understand the underlying reasons for this. Our focus is to assess if the key changes we expected in the policy’s causal chain have occurred, and the reasons for these changes, without necessarily isolating the exact causal effect of the intervention. An evidence assessment focuses on the early or intermediate part of the causal chain, to understand effects without providing exact quantitative estimates.
For existing rules where we did not set out a causal chain at implementation, it may be difficult for us to establish outcomes against which we can measure whether the rule is working well. In these cases, we will set out to understand the overall effect - positive or negative - that the rule has on the market. We will also consider whether there have been substantive changes in circumstances, such as market developments or the introduction of other rules, which alter how the rule is working.
An evidence assessment is designed to be informative and credible, while being less resource-intensive. It is also flexible, allowing us to design a review that is appropriate for the relevant rule and what the data shows may need to be addressed.
In most cases, we will use existing intelligence we already have, particularly data we have collected through any monitoring or our supervisory work. We may supplement this with qualitative information, such as evidence from stakeholders or their representatives, including trade associations, consumer groups and our statutory Panels. This enables us to take a quicker, more agile approach to assessing a rule or policy.
We will undertake evidence assessments internally, building on our expertise.
Post implementation reviews
Indicative criteria for when we will do a post implementation review
- An evidence assessment suggests that intermediate outcomes are consistently not being met or we are at risk of not achieving our ultimate outcomes, but we need to undertake further engagement with stakeholders or data analysis to determine what course of action we should take.
- There may be times when a rule meets the criteria for an impact evaluation (see section on Indicative criteria for when we will do an impact evaluation) but we are unlikely to be able to identify a counterfactual or establish a causal link. In these cases, we may carry out a post implementation review instead. We may plan these in advance at the policy development stage.
- There is potential to act on lessons learned from the review (particularly when the rule was highly controversial or high profile) and these may be relevant to our future work.
A post implementation review should assess whether the implementation of a rule has been successful, as measured by key changes, outcomes and discussions with stakeholders.
We expect this process to require data and stakeholder engagement beyond those gathered during the monitoring process. We may use external parties to help us with parts of the review. This means a post implementation review is likely to require more time and resource than an evidence assessment.
Typically, and in line with Government practice[18], we use a post implementation review to establish whether, and to what extent, the rule:
- has achieved its original objectives as set out in the consultation paper or policy statement
- has been implemented and complied with as intended
- has resulted in any unintended effects
- has objectives which are still valid
- is still required and remains the best option for achieving those objectives
- can be improved to reduce the burden on business and its overall costs
A post implementation review typically seeks to identify areas of a rule implementation and delivery process which worked well, areas which could be improved upon and how external factors have influenced the context of the delivery. As such, we expect to use the outcomes of a post implementation review to guide our decisions on whether we need to change rules or provide further guidance, as well as influencing the implementation and delivery of future interventions.
Our staff typically undertake post implementation reviews, often with input from independent experts, especially for larger reviews.
Impact evaluations
Indicative criteria for when we will do an impact evaluation
- A rule addresses significant harms, generates market upheaval or has large ongoing costs to firms.
- A rule is a new intervention or we were uncertain over the outcomes when we implemented it.
- There is potential to act on lessons learned from the evaluation and these may be relevant to future work (such as changes to policies), other markets or to make our CBAs more robust.
- We can identify relevant counterfactuals against which we can measure impact. This can be affected by factors such as external shocks to the market or the presence of multiple market interventions.
We also need to consider the availability of data, including data that we need from firms. We can factor data collection into our planning when we are developing the rules.
An impact evaluation is our most rigorous tool for assessing the impact of our interventions. This type of review focuses on using causal methods to quantify impacts. We typically plan for these in advance, at policy development stage, to ensure we collect the correct data.
We will make sure that we have a feasible plan for evaluation before implementing rules. A good design will allow us to analyse the causal impact of our rules, separating this from other changes in the market. This fundamentally relies on establishing a plausible counterfactual to measure what may have happened had we not intervened. In some cases, the design may also allow us to make statements about why the rule has had certain effects or the mechanisms through which there has been an impact.
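As an illustration of one widely used design for this kind of analysis (the Framework does not prescribe any particular method, and the notation here is ours, not the FCA’s), a difference-in-differences comparison uses a comparison group unaffected by the rule to stand in for the counterfactual:

$$
\widehat{\text{impact}} = \left(\bar{Y}^{\text{affected}}_{\text{post}} - \bar{Y}^{\text{affected}}_{\text{pre}}\right) - \left(\bar{Y}^{\text{comparison}}_{\text{post}} - \bar{Y}^{\text{comparison}}_{\text{pre}}\right)
$$

where $\bar{Y}$ is the average outcome for each group before and after the rule takes effect. The change in the comparison group estimates what would have happened to the affected group without our intervention, so the remaining difference can more plausibly be attributed to the rule.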
However, if a market has subsequently experienced a range of external shocks, there have been multiple interventions or we cannot gather data in the form intended, it can be very difficult or impossible to establish the impact of a particular intervention (see the Annex[19] for more discussion of challenges for impact evaluations and how we may address them).
Impact evaluations are generally the most quantitative form of review but may draw on a variety of evidence, including qualitative tools. In conducting this form of review, we know that it may not be easy to quantify all impacts.
These evaluations are a demanding form of review in terms of the data requirements and analytical resource needed. We can only undertake them when sufficient time has passed to observe the full effects of a rule. Undertaking impact evaluations requires us to use a significant amount of our resources and often to make ad hoc data requests from firms. We consider value for money in our work and recognise that data requests create a cost for firms. We will therefore undertake impact evaluations only when it is proportionate, for a subset of our interventions. We also consider the best ways to collect the information needed for the evaluation and balance the rigour gained against the costs this would involve.
As a guide, where we are planning to do an impact evaluation of a rule, we normally wait until about 3 to 5 years after implementing the rule. The exact timing will depend on various factors, such as the detail of the rules, the relevant market and the scale of change. However, if the rule has a high cost or is addressing a severe harm, we might review it sooner to ensure it has been implemented as planned and is addressing the harm.
It is also important that we can ensure the robustness and credibility of our impact evaluations where we make claims about the specific impact of our interventions. We will do this by either commissioning them externally or, where carried out by our staff, ensuring they are peer reviewed by external experts. We expect to publish most, if not all, of our evaluations, so they will also be subject to external scrutiny.
Figure 3: Comparison of types of review
How we prioritise rules for review
Our decision on whether to review a rule will take into account a range of factors. Evaluating our rules as part of our policy-making cycle must be balanced with making sure we prioritise all our work effectively.
Our Strategy[13] and our Business Plan[14] are the key vehicles for prioritising our work to ensure we direct our resources most effectively and address the most pressing and significant harm. This Framework sits alongside, and does not replace, the broader planning work we undertake to ensure we deliver across all our commitments.
When deciding if we need to review any specific rules, we will first consider the evidence that suggests the rule may not be working as intended and the actual or potential harm being caused. We will consider the severity, complexity and impact of that harm. For example, it could be a large harm affecting a small portion of a market, or a smaller harm affecting a larger number of firms and/or consumers. This is why it is important that stakeholders providing feedback on our rules, including through the feedback tool, provide any supporting evidence which suggests our rules are not working as intended.
After considering the actual or potential harm, we will consider whether undertaking a review is the proportionate approach in response. We have a large and growing remit, so we use a proportionate approach to regulation and prioritise the areas that pose a higher risk to our objectives.
The following are examples of possible factors that will be considered to decide if conducting a rule review is a proportionate response to the identified actual or potential harm:
- Our strategic and operational objectives.
- Our secondary objective on international competitiveness and growth.
- Our priorities as set out in our Strategy and Business Plan.
- The potential financial cost, time, and feasibility of conducting the review for both us and the firms we regulate, and how this compares to the corresponding benefits of us conducting this work.
- Our policy pipeline and any upcoming projects that may impact on available resourcing, as any decision to undertake a rule review will potentially divert resources from other important regulatory priorities.
- Current or potential Governmental priorities that may require further action from us.
We are conscious that, as this Framework is being published, we are in the process of conducting a large-scale review of rules through the Smarter Regulatory Framework[21]. We are working with the Treasury to transfer a large number of files of retained EU law into our Handbook and taking the opportunity to consider whether we need to make changes to tailor rules to the UK market more appropriately. This is having a significant impact on our own resources and the capacity of firms who need to engage with the process and subsequently implement relevant changes. So we will consider this during our prioritisation decisions.
Actions we can take after a review
As well as providing valuable insight into how well a rule is working, a review’s findings will often inform our wider work and policy approach. We may gain insight into how we can improve other interventions, including our supervisory work. Reviews also inform our approach to future rulemaking, for example, our impact evaluations can help inform and improve the assumptions we make in our CBAs.
Once we have the findings of a review, we may decide to take any of the following actions:
- improve understanding of the existing rule, for example through additional guidance
- change the rule (including varying or revoking it), following our existing policy development process
- undertake a further, more detailed review
- return to monitoring the rule
When considering which is the best action to take as a result of a review, we will consider the potential costs and benefits of an action and particularly our strategic commitments. We will ensure that any changes as a result of a review are proportionate to the harm they are intended to address.
Cases where a review shows there is a significant problem
Where our initial review shows that there is a significant problem with a rule, we may want to move swiftly to address this. It is important that we meet our obligations as a public body to act fairly and reasonably and in line with our objectives and statutory processes for making, amending and, if relevant, revoking our rules. We also want to ensure that we do not cause stakeholders uncertainty by suddenly or repeatedly changing our rules.
We will consider our available options (such as an expedited consultation process or waiving or modifying our rules) on a case-by-case basis where the outcome of the review justifies swift action and the solution, and the method of adopting it, are appropriate.
Where the problem is significant, and the solution itself is a significant intervention, we will follow our standard processes for formulating and implementing the policy solution.
Our approach to reporting
We know that our stakeholders want to understand how our rules are working and what we learn from our reviews.
We expect to publish most of our larger post implementation reviews and impact evaluations, taking into account potential commercial sensitivities. This ensures the work is transparent and credible and contributes to the body of public policy evidence on effective regulatory interventions. If the Treasury directs a review, there are certain reporting requirements that we must follow (see Government-directed reviews).
We will keep stakeholders updated on our other reviews, including evidence assessments, through our wider reporting and other relevant communications. We will also publish an annual overview on how the Framework is being implemented and adhered to as part of our Annual Report.
We want reviewing rules to be part of our ongoing policy development and improvement, so we want to avoid making it an overly prescriptive and resource-intensive stage in the policy cycle by requiring a public report in every instance. We also want rule monitoring to become an integrated part of our policy process, so we want to avoid a standing requirement to publish all data and stakeholder feedback received.
However, there may be cases where it would be in both our and our stakeholders’ best interests to publish a more formal update on our review, including a completed evidence assessment. There are a number of ways to achieve this, and we will decide on a case-by-case basis. For example, where an evidence assessment has shown that a rule is not working as intended and we need to undertake a further public consultation, it may be worthwhile to publish that evidence assessment as part of the new consultation.
We may also choose to publish our response to a large amount of stakeholder feedback received on a particular rule, in the same way we would publish our response to consultation feedback.
Government-directed reviews
The Treasury has the power to direct us to carry out a review of specified rules. In doing so, it may specify the timing, scope and conduct of the review. It may require us to provide interim reports during the review. It can also direct that someone independent of the FCA should do the review.
Where the Treasury has directed us to review, we will work with it to determine the most effective way to do so.
There are specific reporting requirements for these types of review. In a written report to the Treasury, we must explain our opinion on:
- whether the reviewed rules are compatible with our strategic objective, advance one or more of our operational objectives and advance our secondary objective
- whether and to what extent the rules are functioning effectively and achieving their intended purpose
- whether any amendments should be made to the rules and, if so, what those amendments should be
- whether any rules should be revoked, with or without replacement
- whether any other action should be taken and, if so, what that action should be
Outcomes we are seeking from the Framework
While this Framework builds on our existing evaluation processes, it also represents a new approach to proactively monitoring our Handbook’s effectiveness. We want the Framework to enable the efficient collection and use of relevant data, to support timely action when a rule is not working as intended, and to foster collaboration with stakeholders in keeping our Handbook effective and aligned with our broader strategic and operational objectives.
As such, it is important that we also keep the Framework itself under review. As we report on our activities under the Framework, we will consider whether it is meeting its outcomes and update it as needed.