Newsletter on market conduct and transaction reporting issues
May 2024
About this edition
In this Market Watch, we discuss:
- failures of market abuse surveillance caused by issues such as gaps in data and errors in automated alert logic
- our recent peer review of firms’ testing of front-running surveillance models
Firms may find our observations useful in reducing the risk of such failures by improving the implementation, testing and oversight of the technical systems they use for market abuse surveillance.
Market abuse surveillance and data governance, surveillance failures and model testing peer review
Background
Under the Market Abuse Regulation (UK MAR), firms must identify and report instances of potential market abuse.
A firm must have effective arrangements, systems and procedures in place to detect and report suspicious activity. These should be appropriate and proportionate to the scale, size and nature of its business activities.
While it is for each firm to assess what is appropriate and proportionate in the context of their specific business, some of the observations in this Market Watch may be useful.
Where UK MAR refers to firms’ systems, arrangements and procedures for market monitoring and surveillance, this includes how firms govern, implement, test, review and rectify issues with the functioning of the technical systems which they use to deliver market monitoring and surveillance arrangements.
There are various factors that affect the effective operation of a firm’s market abuse surveillance function. We have covered many of these in previous editions of Market Watch[1].
Over the past few years, we have become aware of problems with surveillance alerts not working as firms intended and assumed. Sometimes this has come about because of faulty implementation. At other times, bugs have been introduced inadvertently when changes were made. In other cases, for various reasons, not all of the data required for successful monitoring has been ingested.
In our experience, the impact of these failures varies.
- An entire section of a firm’s activity, such as a segment of business sent to a particular exchange, might not be monitored.
- An alert scenario could be partially effective, generating alerts, but not in all instances where it is intended to do so.
- An alert scenario for a specific type of market abuse could be completely ineffective, with alert generation impossible, due to inadequate testing before and after implementation.
Sometimes, firms identify and remediate these issues in a few weeks, or less. On other occasions, the discovery takes several months. In some extreme cases, we have seen firms unaware of faults for 2 years or more.
Firms use a variety of methods to undertake market abuse surveillance. In our experience, failures can occur with both third-party systems and those designed in-house.
To help firms understand the types of issue they may face, we offer some examples of malfunctions we have observed. We then discuss our 2023 peer review of surveillance model testing, where our findings may be valuable to firms aiming to mitigate the risks of surveillance failures.
Market abuse surveillance failures
Example 1: Firm A
Potential insider dealing can be detected using different surveillance models. To generate alerts for review, these models often use (along with a range of other factors):
- the release of price-sensitive information
- a significant price move
- a combination of the two
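As a simple illustration of the last of these patterns, the sketch below shows how a rule combining a significant price move with a news release might be expressed. It is a minimal, hypothetical example: the threshold, look-forward window and data structures are assumptions for illustration, not a description of any particular firm's or vendor's model.

```python
from datetime import date, timedelta

# Illustrative values only; real models calibrate these per instrument or asset class.
PRICE_MOVE_THRESHOLD = 0.05   # price move treated as significant (5%)
LOOKFORWARD_DAYS = 2          # window after the trade in which the move is observed

def insider_dealing_alert(trade_date: date,
                          daily_price_moves: dict[date, float],
                          news_release_dates: set[date]) -> bool:
    """Flag a trade when a significant price move and a news release both
    occur on the same day within the look-forward window after the trade."""
    for offset in range(1, LOOKFORWARD_DAYS + 1):
        day = trade_date + timedelta(days=offset)
        significant_move = abs(daily_price_moves.get(day, 0.0)) >= PRICE_MOVE_THRESHOLD
        if significant_move and day in news_release_dates:
            return True
    return False
```

A rule of this kind only works if both inputs are populated and the logic is implemented correctly, which is the point of the examples that follow.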
Firm A was active across a range of asset classes, including cash equities. The firm decided to adopt a new third-party automated surveillance system. To flag any potentially suspicious trading, the system’s insider dealing model needed:
- a significant price move
- the release of news
However, when the system was put into production, Firm A did not undertake the necessary testing and did not notice that the news feed had not been activated.
As there were no news stories being considered by the system, and these were required for the insider dealing scenario to operate, it was unsurprising that alerts were not generated. This was the case for over 3 years.
Firm A only became aware of this failure when it received an enquiry from us about some potentially suspicious trading where it had not submitted a suspicious transaction and order report (STOR).
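A failure of this kind can be surfaced by a basic feed liveness check, run when the system goes live and periodically thereafter. The sketch below is one hypothetical way to do this; count_news_items stands in for whatever query a firm's own surveillance system exposes and is an assumption rather than a real API.

```python
from datetime import date, timedelta

def count_news_items(day: date) -> int:
    """Hypothetical query of the news items ingested by the surveillance system."""
    raise NotImplementedError  # replace with the firm's own data access layer

def check_news_feed_live(as_of: date, lookback_days: int = 5) -> None:
    """Raise if no news items have been ingested over the lookback window.
    An empty news store silently disables any scenario that requires a
    news release, as in the Firm A example."""
    total = sum(count_news_items(as_of - timedelta(days=d))
                for d in range(lookback_days))
    if total == 0:
        raise RuntimeError(
            f"No news items ingested in the {lookback_days} days to {as_of}; "
            "scenarios that require news cannot generate alerts.")
```

Run as part of go-live testing and as a recurring control, a check like this would have flagged the inactive news feed within days rather than years.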
Example 2: Firm B
Firm B designed and implemented an in-house surveillance model to identify potential insider dealing in corporate bonds, covering trading by clients and its own traders.
The alert logic did not require news to be released for an alert to trigger (this would be considered during alert review). It did require a price movement at or above a defined threshold (X%) within a defined period after a trade. However, a mistake was made at the coding stage. This meant that, for an alert to trigger on trading by either a client or the firm's own traders, the firm itself had to trade in the instrument on the day the price moved.
With this alert logic, in a liquid, frequently traded instrument, this requirement for a trade on the day of the significant price move would not impede the monitoring. In less liquid instruments, however, the alert logic created a risk of potential insider dealing going undetected.
For example, if a client of the firm purchased a bond on a date (T) and 2 days later (T+2) the bond increased in price by X+3%, but on T+2, the firm did not undertake any trading in the bond, no automated alert would be generated.
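To make the difference concrete, the sketch below contrasts the intended condition with the condition as coded. It is a simplified, hypothetical reconstruction; the threshold, window and function names are assumptions used for illustration only.

```python
from datetime import date, timedelta

PRICE_MOVE_THRESHOLD = 0.05  # illustrative stand-in for the X% threshold
WINDOW_DAYS = 3              # illustrative look-forward period after the trade

def intended_alert(trade_date: date, daily_price_moves: dict[date, float]) -> bool:
    """Intended logic: alert if the price moves by X% or more on any day
    within the window after the monitored trade."""
    for d in range(1, WINDOW_DAYS + 1):
        day = trade_date + timedelta(days=d)
        if abs(daily_price_moves.get(day, 0.0)) >= PRICE_MOVE_THRESHOLD:
            return True
    return False

def coded_alert(trade_date: date,
                daily_price_moves: dict[date, float],
                firm_trading_dates: set[date]) -> bool:
    """Logic as coded: the same price-move test, but the alert only fires
    if the firm also traded the instrument on the day of the move."""
    for d in range(1, WINDOW_DAYS + 1):
        day = trade_date + timedelta(days=d)
        if (abs(daily_price_moves.get(day, 0.0)) >= PRICE_MOVE_THRESHOLD
                and day in firm_trading_dates):
            return True
    return False
```

In the worked example above, intended_alert would return True for the client's purchase, because the move on T+2 exceeds the X% threshold, while coded_alert would return False, because the firm did not trade the bond on T+2. No alert would therefore be generated.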
The fault originated several years before, at the design and implementation stage. After this point, its identification was impeded by the fact that the model was generating alerts in reasonable numbers and of good quality.
The alert output contained true positives, some of which resulted in the submission of STORs. This led the firm to mistakenly believe that the model was working as intended. The firm’s compliance team discovered the reality when it received a front office escalation. On checking if a surveillance alert had been generated, the team found it had not.
Example 3: Firm C
Firm C offered its clients direct market access (DMA) to certain trading venues. Firm C gave some clients direct connectivity to one of these venues (sponsored DMA, or SDMA), rather than having them connect through Firm C. This activity was captured using a private order feed (POF) for inclusion in the firm's surveillance.
When the firm implemented its third-party automated surveillance system, it believed it had arranged for all POF trade and order data to be sent daily for ingestion and processing. However, this was not the case. For several years, the firm mistakenly believed that its POF trading activity associated with this venue was being captured and monitored for market abuse.
The non-POF trades and orders arranged and executed by the firm were subject to surveillance and were generating surveillance alerts. As in example 2, this may have given the firm (false) comfort that all relevant data was being ingested into the system, and the surveillance model was working as intended.
Similar issues to this, relating to POF and SDMA, and to broader data ingestion gaps, have occurred at other firms.
Peer review of firms’ testing of automated surveillance models
In 2023 we undertook an assessment of how investment firms review their automated surveillance models. Specifically, we looked at the frequency and methods used by 9 investment banks to test the efficacy of their client order front running models.
The participating firms had differing approaches to testing, touching on several areas. These included:
- the breadth of the testing
- its frequency
- the degree to which it constituted a formalised process
- the governance arrangements around it
While we acknowledge this review has limitations – in looking only at 1 alert scenario and covering 9 firms – we encourage all firms that undertake market abuse surveillance to study our observations and consider whether modifying their testing arrangements would be useful.
Key findings
Most firms we reviewed had formal procedures describing:
- the frequency of testing
- which elements of the model were subject to review
- the form of the review
The remainder had either no formal process or a semi-formalised one.
Most firms undertook an annual test of some type. The different types of testing were:
- parameter calibration
- model logic
- model code
- data (comprehensiveness and accuracy)
Approximately half the firms focused their reviews mainly on parameter calibration.
Some firms used a risk-based approach, where the frequency of testing varied depending on the inherent risk of the relevant market abuse type. Calibration testing was also sometimes split from reviews of logic, coding and data, with the former generally undertaken more frequently.
The number of surveillance models deployed by reviewed firms for client order front running varied, depending upon factors such as:
- the range of asset classes for which they were deployed
- the degree of tailoring of parameters applied across and within asset classes
Observations
Surveillance arrangements can often be complex, particularly in larger firms, where there is a wide range of:
- assets traded
- actors involved
- trading methods
- venues accessed
- other factors
Firms should take all of these into account as part of their governance processes around market abuse.
Our findings from the peer review and our ongoing supervisory work suggest that firms can take further steps in these areas.
Steps to avoid surveillance failures
To avoid surveillance failures, and to make sure that issues do not go unidentified for prolonged periods, firms may wish to consider how to mitigate these risks. Questions to consider include the following.
Data governance
- What steps are taken to ensure that all relevant trade and order data is being captured?
- Is the data accurate and comprehensive?
- Is the ownership and management of data clearly defined and understood?
- Are measures in place to conduct regular checks and identify issues when they occur?
- Where issues are identified, can remediation be prioritised, based on risk?
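One practical way to address the first two questions is a daily reconciliation between the data sources a firm expects to receive and the records actually ingested by the surveillance system. The sketch below is illustrative only: the source names and counting functions are assumptions standing in for a firm's own data inventory and access layer.

```python
from datetime import date

# Hypothetical inventory of expected trade and order data sources.
EXPECTED_SOURCES = {"exchange_A", "exchange_B", "sponsored_DMA_POF"}

def upstream_record_count(source: str, day: date) -> int:
    """Hypothetical count of records available from the upstream booking or venue feed."""
    raise NotImplementedError  # replace with the firm's own data access layer

def ingested_record_count(source: str, day: date) -> int:
    """Hypothetical count of records loaded into the surveillance system."""
    raise NotImplementedError

def reconcile_sources(day: date) -> list[str]:
    """Return exceptions for sources that are missing or incomplete. A persistent
    zero count for a source, such as the POF feed in the Firm C example, shows
    up here as an exception rather than going unnoticed for years."""
    exceptions = []
    for source in sorted(EXPECTED_SOURCES):
        expected = upstream_record_count(source, day)
        ingested = ingested_record_count(source, day)
        if expected > 0 and ingested == 0:
            exceptions.append(f"{source}: no records ingested ({expected} expected)")
        elif ingested < expected:
            exceptions.append(f"{source}: {ingested} of {expected} records ingested")
    return exceptions
```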
Model testing
- Are governance arrangements around model testing sufficiently robust and formalised?
- Should testing of model scenarios involve parameter calibration, logic, coding or data, or a combination of these?
- How frequently should testing take place?
- Is it better to do light-touch testing more frequently, or undertake less frequent deep dive reviews?
- Should firms consider a risk-based approach when designing testing policies and procedures, or when selecting models for testing and deciding the frequency and depth of that testing?
- Is the testing programme sufficiently robust and effective, without impeding adequate and productive tailoring of models?
- Are the relevant governance procedures optimised to take this into account?
- When using third-party surveillance systems, how can firms independently gain comfort that models are operating as intended?
Model implementation and amendment
- What form of testing is undertaken before introducing new surveillance models or amendments to models?
- Is this testing formalised and robust enough, while not being so onerous as to hinder swift action to implement, modify, recalibrate and fix surveillance models?
- Is regression testing undertaken when changes are made to other systems that might adversely affect market abuse surveillance systems?
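One lightweight way to support regression testing is a small suite of synthetic 'must-alert' cases that is replayed whenever a model, its parameters or an upstream system changes. The sketch below is a minimal illustration under assumed names; run_model is a hypothetical stand-in for however a firm replays test activity through its surveillance scenario in a test environment.

```python
# Each synthetic case encodes activity the scenario is designed to catch (or ignore),
# together with the expected outcome. The case names and run_model are illustrative
# assumptions, not part of any real surveillance system.
SYNTHETIC_CASES = [
    {"name": "significant_move_after_client_trade", "expected_alert": True},
    {"name": "significant_move_no_firm_trade_that_day", "expected_alert": True},
    {"name": "small_move_below_threshold", "expected_alert": False},
]

def run_model(case_name: str) -> bool:
    """Hypothetical: replay the synthetic case through the surveillance model
    in a test environment and report whether an alert was raised."""
    raise NotImplementedError

def run_regression_suite() -> list[str]:
    """Return a description of each case where the model's output no longer
    matches expectations; run after every model or upstream system change."""
    failures = []
    for case in SYNTHETIC_CASES:
        alerted = run_model(case["name"])
        if alerted != case["expected_alert"]:
            failures.append(
                f"{case['name']}: expected alert={case['expected_alert']}, got {alerted}")
    return failures
```

Because a suite like this exercises the model end to end, it can catch both coding errors of the kind seen at Firm B and data gaps of the kind seen at Firms A and C, provided the test environment mirrors production data flows.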
What firms can do
Market abuse surveillance across industry can take many forms. It is often challenging and complex.
Appropriate tailoring of alert models, which we encourage for an effective overall surveillance programme, may increase the associated operational risk at alert level.
Testing by the second line and internal audit may help firms gain comfort about the effectiveness of their monitoring.
Our observations indicate that not all firms have been allocating adequate focus and resource to governance arrangements. Some firms have complex governance arrangements where approvals and validations go through multiple steps, taking significant time. Firms should consider whether intricacy and volume in governance necessarily deliver timely, efficient and effective outcomes.
Firms need a vigilant approach to proactively guard against surveillance failures and mitigate relevant risks. This is particularly relevant in light of likely future innovation within surveillance functions. Developments such as the use of artificial intelligence will need to be accompanied by governance that keeps pace and remains effective.
We encourage firms to review the issues discussed in this Market Watch and consider whether their arrangements are adequate or need improvement.