Operational Effectiveness in 2020 – Monitoring and Supervision

Published on April 20, 2020
15 MINUTE READ

This roundtable involved a frank and open discussion among three surveillance heads on how best to deploy their resources in the current environment to achieve the optimum level of quality review without regulatory compromise, while maintaining high team morale.

The panel comprised the Head of Surveillance, Europe, at a North American investment bank; the Global Head of Surveillance at a North American investment bank; and an Executive Director and Surveillance Lead at an Asian investment bank. Below is a compilation of their views and unattributed quotes on this broad subject.

The first topic revolved around the results of the 2019 PwC Market Abuse Surveillance survey, which suggested that on average most firms were taking two minutes to clear every alert. The panel were asked how sustainable this was, and to what extent this might impact quality review.

The panel agreed that widening the survey to all banks would probably see that review time shrink even further. These metrics receive a great deal of attention at firms, as do false positive rates, which all felt attract too much negative focus. The panel stressed that their teams achieve far more than pure STOR submission: many alerts are closed with no action, but a great deal of questioning and digging lies behind each clearance.

One panellist said his firm is putting in place a way to apply risk rankings to closeouts and to define a false positive clearly, so that only genuinely bad alerts are designated as such. His firm uses a “risk 1 through 5” scale, where 1 is a false positive, 2 is a good alert that the analyst had to engage with but not escalate, 3 is an escalation, 4 is a breach, and so on. This makes false positives visible in the MI and shows where to focus to reduce them. Effectiveness can also be reviewed to ensure the analysts are receiving solid alerts that get them involved. Why run five different spoofing models when only two really work? Working with the analysts to visualise the data will mean better results – “they are not going to be effective if they have a blunt tool.”
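As a purely illustrative sketch of how such a closeout taxonomy could be captured for MI, the snippet below encodes a 1-to-5 ranking and summarises false positive rates per model; the label for ranking 5, the model names and the helper function are assumptions rather than the panellist's actual scheme.

```python
from collections import Counter
from enum import IntEnum

# Hypothetical labels for the 1-5 closeout ranking; the panellist described
# 1 (false positive), 2 (good alert, no escalation), 3 (escalation) and 4 (breach),
# so the label for 5 is an assumption.
class CloseoutRisk(IntEnum):
    FALSE_POSITIVE = 1      # pure model noise - nothing for the analyst to engage with
    GOOD_NO_ESCALATION = 2  # correctly generated, engaged with, not escalated
    ESCALATION = 3
    BREACH = 4
    STOR = 5                # assumed top of the scale

def false_positive_rate_by_model(closeouts):
    """closeouts: iterable of (model_name, CloseoutRisk) pairs taken from the MI.

    Returns {model_name: share of closeouts ranked 1}, showing where
    recalibration effort would reduce noise the most."""
    totals, fps = Counter(), Counter()
    for model, risk in closeouts:
        totals[model] += 1
        if risk == CloseoutRisk.FALSE_POSITIVE:
            fps[model] += 1
    return {model: fps[model] / totals[model] for model in totals}

# Hypothetical example: one of two spoofing models generates most of the noise.
sample = [("spoofing_v1", CloseoutRisk.FALSE_POSITIVE),
          ("spoofing_v1", CloseoutRisk.FALSE_POSITIVE),
          ("spoofing_v2", CloseoutRisk.GOOD_NO_ESCALATION),
          ("spoofing_v2", CloseoutRisk.ESCALATION)]
print(false_positive_rate_by_model(sample))  # {'spoofing_v1': 1.0, 'spoofing_v2': 0.0}
```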

Another panellist added, “anyone having a bad day who knows that there is only a 0.2 percent chance this alert is going to be interesting will find it easier to just close it.” Analysis shows that the biggest time drain is risk 2 – most of these get closed, but only after contact with an external source. These alerts were correctly generated, but there is insufficient data around them. The focus is now on ingesting the right data so that alerts can be closed without ever being presented to the analyst. One added, “in trade surveillance the false positives are going to be a lot lower than expected, probably 15 percent, while ecomms might be 50 percent. Even then the lexicon has usually done its job and triggered correctly. If you use the risk rankings, you can identify the 30 percent that are real false positives, where it is just noise and the models need to be refined.”
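The idea of closing correctly generated but under-documented alerts before they reach the analyst could look something like the minimal sketch below; the enrichment fields ("client_rfq", "news_event") and the closure condition are invented for illustration, not the firm's actual logic.

```python
# Minimal sketch, assuming the enrichment pipeline attaches the data an analyst
# would otherwise have to chase externally (e.g. a matching client RFQ or a news
# event). Field names and the closure rule are hypothetical.
def try_auto_close(alert: dict, enrichment: dict) -> bool:
    """Return True if the enriched alert can be closed without analyst review."""
    explained = enrichment.get("client_rfq") is not None or enrichment.get("news_event") is not None
    if explained:
        alert["status"] = "closed_pre_presentation"
        alert["rationale"] = "activity explained by ingested reference data"
    return explained

# Usage: alerts that return False continue into the analyst queue as risk 2s.
alert = {"id": "A-123", "model": "spoofing_v1"}
print(try_auto_close(alert, {"client_rfq": {"time": "09:31", "size": 5_000_000}}))  # True
```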

One panellist said that some of the statistics in the PwC report might not always be indicative, as some alerts that lead to a STOR after extensive investigation can take days or weeks to complete. Others that appear cruder might be easy to close quickly on an individual basis but could be more interesting viewed as a batch. This makes it hard to extrapolate decisively, and the stats might be too simplistic. Another felt that two minutes as a weighted average is not that unreasonable once ecomms is included, along with some cases that can involve 10 to 50 hours of work per alert. The risk ranking analysis per closeout is illuminating: the 1s (false positives) take 30 seconds or less, the 2s (good alert, no escalation) should take two minutes or less, escalations can take an hour, while a STOR can take 10 hours or more.
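Purely illustrative arithmetic, using an invented monthly alert mix, shows how a small number of long investigations and a large tail of quick closures can still average out near the headline figure:

```python
# Invented mix of 10,000 monthly closeouts as (count, minutes each) - not survey data.
mix = [
    (9_000, 0.5),   # risk 1 false positives: ~30 seconds each
    (900, 2.0),     # risk 2 good alerts, no escalation: ~2 minutes each
    (95, 60.0),     # escalations: ~1 hour each
    (5, 600.0),     # STOR-level investigations: ~10 hours each
]
total_alerts = sum(count for count, _ in mix)
total_minutes = sum(count * minutes for count, minutes in mix)
print(f"{total_minutes / total_alerts:.1f} minutes per alert on average")  # 1.5
```

On these assumed numbers the weighted average sits around one and a half minutes per alert, which is why a headline two-minute figure can coexist with individual investigations running to tens of hours.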

The panel moved on to a discussion of model calibration. The challenge lies in establishing a clear surveillance decision-making process so that a distinct forum can make the call and retire certain models. The panel felt these decisions should sit in the second line, which understands the rationale and has done all the due diligence. One of the firms has an operating committee comprising the heads of surveillance, advisory and capital markets compliance, as well as a regional business head. A surveillance team member who has done the calibration presents the case to this committee, which conducts a minuted vote and then acts. “We have resisted pressure to add audit or the first line to that calibration committee. The more you add, the more challenge you get around making it effective. You need challenge, but you need people who better understand the risk/reward balance in turning off a model that might present some extra risk but is occupying, say, 0.5 of precious headcount.”

The panel discussed model inventory analysis, and models that are not firing and are hard to calibrate, especially where there are multiple models for the same risk. The temptation is simply to switch off models that do not seem effective, but it is best to evidence such action rather than rely on gut feel. One firm gets a monthly report which shows any model that has not triggered in the last five months, and any model whose alert numbers over the last month deviate significantly from the previous six. Follow-up confirms why a model has not triggered or whether there is a data issue. In some cases there is a low expectation of even a good model triggering often, such as for front running. “The third line don’t want anything switched off – in our last audit I had to justify why we would switch off one of our spoofing models. Their reaction was that this implied we are not looking for spoofing! We are, but with other models.”
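A minimal sketch of how such a monthly inventory report might be produced, assuming a simple table of alert counts per model per month; the two-sigma deviation threshold and the example models are assumptions, while the five-month dormancy window and six-month baseline follow the description above.

```python
from statistics import mean, pstdev

def inventory_flags(monthly_counts):
    """monthly_counts: model name -> alert counts for the last seven months, oldest first.

    Returns the follow-up flags a monthly model inventory report might raise."""
    flags = {}
    for model, counts in monthly_counts.items():
        issues = []
        if sum(counts[-5:]) == 0:
            issues.append("no alerts in last five months - confirm rationale or check data feed")
        baseline, latest = counts[:-1][-6:], counts[-1]
        sigma = pstdev(baseline)
        if sigma and abs(latest - mean(baseline)) > 2 * sigma:  # assumed 2-sigma cut-off
            issues.append("latest month deviates sharply from the prior six - investigate")
        if issues:
            flags[model] = issues
    return flags

# Hypothetical counts: a quiet-but-expected model and one with a sudden spike.
print(inventory_flags({
    "front_running": [1, 0, 0, 0, 0, 0, 0],
    "spoofing_v1":   [40, 38, 42, 41, 39, 40, 90],
}))
```

Flagging is only the trigger for follow-up; as the panel noted, a quiet front running model may be entirely expected once the rationale is confirmed.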

Peer reports from SMARTS reveal which models are effective at other firms and how their calibrations compare. There is no right answer to the number of active models per firm. Surveillance is most effective at controlling market abuse risk, but that universe of risk has grown, and ecomms and trade surveillance can only identify so much; some of the risk might sit in the first line, in a pricing report or elsewhere. It is worth grouping those key risks into clusters. One panellist said, “I have 16 or 17 key market abuse risks for trade surveillance and eight or nine for ecomms surveillance. That is the core. We have to think about FX remediation, FMSB guidance, Market Watch and this list keeps growing. But there is nothing outside what is at the heart of the MAR legislation.”

One of the firms is on a quest to become more effective through prioritisation. This involves analysing what surveillance is needed and how extensive this should be. The list of risks covered, and the proportional depth of coverage in each, should be defensible. The starting point is the risk assessment and a mapping exercise that matches market abuse scenarios to sensible categories that drive the risk assessment. Looking at all the different behaviours, product types and geographies combined creates an enormous universe. Applying inherent risk analysis or control analysis results in a very detailed body of work. “The bit that makes me nervous is the FCA’s drumbeat through speeches on the importance of the risk assessment and it seems inevitable they would start with that when they come in and ask ‘what are the risks you need to cover with your surveillance?’” Another panellist commented, “we spent two hours of our last FCA visit talking about the risk assessment. FCA was very challenging so you need to be very clear on what your risks actually are.” And the third panellist added, “the big focus is on behaviours, but it is actually how you represent your business, so give the regulator the detail on dealer to client, dealer to dealer in rates and credit, know how you execute and trade, place orders, stream prices. This is complex to represent but it really helps with your ultimate risk assessment.”
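To make the scale of that universe concrete, the sketch below crosses a handful of example behaviours, products and regions; the lists are generic MAR-style illustrations, not the panellists' actual risk taxonomy.

```python
from itertools import product

# Illustrative taxonomy only - real risk assessments use far longer lists.
behaviours = ["insider dealing", "spoofing", "layering", "wash trading", "front running"]
products = ["cash equities", "rates", "credit", "FX", "listed derivatives"]
regions = ["EMEA", "Americas", "APAC"]

# Every behaviour x product x region cell would in principle need an inherent-risk
# and control rating before any clustering is applied.
universe = list(product(behaviours, products, regions))
print(len(universe), "risk-assessment cells from just", len(behaviours), "behaviours")  # 75 cells
```

Even this toy taxonomy yields 75 cells, which is why clustering scenarios into sensible categories is needed to keep the assessment defensible yet manageable.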

In a drive for effectiveness and efficiency, one firm is spending a lot of time on optimisation, using a small but central quality management team which is looking at time devoted to tasks or alerts from certain reports, and productivity generally. It analyses the work quality and the type of reports and alerts being generated. This team asks if that output can be more meaningful and result in higher quality review time.

All agreed that a key to effective performance is a motivated and curious team. “The vast majority of our best finds have been from analysts following their nose rather than what has been presented to them. An analyst needs the time to probe things that look odd or dig further into more comms etc. It is hard to articulate the right approach to increasing time and resource on non-BAU; my maxim when I first arrived was to halve the number of alerts and double the time spent looking at them, which we have achieved, as well as increasing the breadth of our surveillance.”

A focus on analytics technology and giving the team access to the right data will have a big impact. Removing blockers to data access has been empowering, and tools to interrogate the data are key too: the aim is to enable the inherent curiosity that most surveillance analysts have. Once the space is created for good analysts to do their job properly, the findings and the work behind them are of the highest quality. With the alerts in good shape and the analysts trained and involved, QA between the lines or levels can help facilitate interaction and promote the team’s great finds and hits internally. “We found that when we took the more senior people off the alerts, the STOR numbers went down. You have to get that balance right; also some people perform well for a few months then switch off, or things might be going on at the firm on a macro level that affect motivation, and this can have a big impact on effectiveness. Training is also essential to keep interest high.”

One of the biggest challenges is evolving the programme while keeping BAU running day to day – one firm has taken its best people out of their management-type roles in fixed income and equities and given them dedicated product lead roles, so they are responsible for the product surveillance being done across all regions. They must strike a balance between what exists now and what is coming down the track that needs to become mainstream. These experienced people carry out evaluations removed from the day to day, allowing them to think more strategically.

This firm is also majoring on data analytics in terms of alerting technology, asking whether there is a better way of looking for behaviours than a rules-based system and looking at all the data and reports to see if they can be analysed more effectively, tuned and optimised. Vendor analysis, pilots and PoCs are underway to assess innovative ways to develop the programme, but the predominant weight is still on BAU. New reports are coming in to increase coverage, and some more senior resources are thinking strategically and working with the technology and data analytics/science teams to evolve the programme.

The conversation turned to the makeup of the team. A challenge in creating the right team is cost, especially if everyone is onshore. The pros and cons of offshoring were debated – all agreed it can be hard to get good people to stay in offshore centres. But a smaller onshore team can get crushed by the sheer volume of alerts in BAU, and this means product specialisation has to be limited. Even the ex-trader types can have rather niche knowledge around a certain product at a certain time. Being current is key. It is difficult to find the right ex-trader, especially at the right level; if you are a star trader you are unlikely to be eyeing up your next move into reviewing surveillance alerts. And they often struggle with the newer tech, usually coming from the “point and click” generation.

“People who have good gut feel and inherently know when something does not seem right are the desired profile. That is the starting point for the risk assessment; the team cannot be experts in every product in surveillance. But we can work with those who are, and if we use our expertise in market abuse, we get there together. By testing them and assessing their responses we can guide the risk assessment.”

Of vast importance now is comfort and proficiency in using new technologies and tools; the panel agreed that some of their most successful hires have been people with an operational background who really understand systems and products and can then learn the market abuse piece. They find the change exciting and appreciate the discretion involved; there is definitely a bias towards technology in the CVs of those in the teams now. Data science is in demand for specific teams rather than the mainstream. But the surveillance team is very much part of the wider compliance team – it cannot expect to solve all these things alone without engaging with others, such as the first line and colleagues who own coverage of a particular part of the business and are subject matter experts. The ideal candidate is curious, technology-enabled, operationally savvy and willing to liaise with others.

If the firm specialises in a particularly complex area, such as algo trading, it can require someone on the team who used to be on the business side to give advice and ensure the right level of knowledge. Allowing people to cover several types of product, market, report or abuse helps keep the work interesting.

For offshoring, it is critical to have some sort of competent management in the offshore location who can be trusted and understand the goals of the offshore programme and who can then build that function. Local hires are unlikely to know the firm, the people at HQ, the operation, systems and culture. If those obstacles are overcome, then there is a great opportunity as there are some very bright people available remotely who can be trained and nurtured and they are often better qualified. Good training, systems and connectivity back to the hub locations must be available; but the challenge is finding that person to build under and around.

There is also a need for the right Level 2 team back at HQ to work in partnership with the offshore team, who need to be given a chance. If that time is invested in them, it can work well. It is not always as cheap as first hoped; finding the quality people, training them and getting everyone involved requires travel, time, money and infrastructure. Regulators have been known to visit offshore operations to scope out if they are being run on the cheap. Is there a Bloomberg terminal or access to market information? Who works there and are they well managed? But if you can achieve critical mass and get the operation working, it does have the potential to provide longer term cost savings. Sometimes this can be the only way to clear vast amounts of alerts effectively. It can free up a lot of “dead” time spent assembling MI and governance decks, which does not require specialist knowledge and can be done elsewhere.

The conversation turned to budgets and imperatives from senior management. One panellist said he is being asked to do more with less. Funds are available now to spend on tech, but not people. He concluded that he can do a little more with less but there comes a point where it is not possible to be as effective. Another panellist qualified this view and said there is more transparency on cost, what is spent and the headcount. There is understandable scrutiny of the effectiveness of the programme. The key focus is optimisation, making the most of the existing resource. While not at the start of a huge growth phase, the focus is on how best to use technology to be efficient and effective. There is certainly support to build and evolve as and when required.

There was discussion of global inconsistencies in approach, which can be a drag on efficiency and effectiveness. Cross-regional collaboration and communication can help establish global policies and a common approach; regional inconsistencies for similar use cases make no sense. Even with global policies in place, implementation across the regions can itself be inconsistent. The panel pointed out the increasing influence of lead regulators, the audit approach and the requirements of a global client base – these can demand a global view to get the most accurate picture. A practical solution is the adoption of a minimum standard, rather than a maximum, which aims for equivalence rather than replication.

The final talking point was the concept of outcomes-based regulation as the future, raised by the FCA’s Chris Woolard in a speech in October 2019. The panel agreed that the regulator will not let go of the basics, which demand surveillance for the risks that apply to a business; the requirement is that market abuse is appropriately policed. The FCA is wondering how it can itself harness technology to be more efficient and effective. But everyone needs confidence as they go down this parallel path and the mainstream legacy platforms and approaches start to be retired. They are willing to engage and think about this journey, but the bottom line is the solid need for a programme that is working and based on the risk assessment.