FINRA on Algo Compliance, CAT, and the Future of Supervision and Surveillance

Published on January 28, 2019

FINRA talks algo compliance and the future of supervision and surveillance in an exclusive interview with Radar.

Radar spent time with Thomas Gira and Susan Tibbs, exploring the way that regulators look at algorithmic trading, the changes implicit in the move towards automation, and the extent to which algo trading is different, if at all, from traditional human trading. Thomas is the executive vice president of Market Regulation and Transparency Services and he oversees the Financial Industry Regulatory Authority (FINRA) Market Regulation Department. Susan is vice president in the Market Manipulation Group at FINRA.

The interview looks at governance, testing, documentation and the whole ecosystem behind the compliance process for algo trading, examining to what extent this is a new framework and discipline for firms and regulators alike. The discussion revealed what regulators have to do differently when reviewing the use of algos from a supervisory oversight point of view.

Radar started by asking Tom to evaluate how the automated environment presents new challenges compared to a more manual and human approach that prevailed in the past.

Tom Gira (TG): As to whether algos present new challenges compared to when trading was more manual and humans were involved, at a certain level the answer is ‘no’ because ultimately you have humans developing, programming and implementing the trading strategies that are reflected in the algos. Today’s algos are arguably similar to how humans took in market information, processed it mentally, and decided how to trade based on their knowledge, experience, instinct, charting analysis or other factors. Even though I often say that we now chase bad algos, not bad traders, at the end of the day when regulators bring cases for abusive trading through the use of an algo, we are charging a human, not the algo.

“The human is responsible for the bad conduct effectuated by the algo, much like the scientists in ‘Jurassic Park’ were responsible for the havoc wreaked by the dinosaurs.”

But what does make today’s trading environment more challenging is the speed and complexity of today’s market structure. Markets are now almost fully auto-execution. There are still block trades that are negotiated over the phone or in some cases there might be an order that is delivered for execution, but by and large the markets have all gone fully electronic. When you layer in the availability of more order types tailored to today’s tightly interconnected electronic markets, the proliferation of trading venues, broader and more readily available market information, particularly depth of book information, fee structures that entail charges and rebates, and execution benefits from being in close proximity to trading venues, among other things, it is clear that today’s markets are much more complex than they were in the past. Orders generated by algos can be routed and executed or cancelled in a fraction of the time it takes a human to blink. System turnaround times are now evaluated in terms of microseconds and for some, picoseconds.

So, while humans are still responsible for the trading they generate, even with the use of artificial intelligence, a lot of the trading we see today, given the speed and complexity of the market, simply could not have been done by humans in the past. To be clear, this is not necessarily a bad thing. When properly deployed and managed, algos have been extremely effective in helping investors and liquidity providers successfully achieve the results they desire in a way that is compliant with applicable rules. Algos have enabled many to successfully navigate today’s ultra-quick markets.

Nevertheless, the size of today’s market – during 2018 we have processed, on average, 60 billion events a day, with a peak of 101 billion events in one day – creates challenges for us to properly decipher market activity. It is a little like looking for short-lived needles in a huge haystack. The measurement of time in smaller and smaller increments also creates challenges for us. Lastly, the ability of developers of abusive algos to disperse their activity across trading venues, related asset classes, and broker dealers creates challenges. All of these challenges have been mitigated because FINRA moved its market surveillance systems to the cloud in mid-2016. Steve Randich, FINRA’s CIO, was a thought leader in moving us to the cloud and we have benefited greatly from its scalability and computing power.

In terms of the disruptive impacts that algos can have on the markets, at a general level, they can take two forms. One is when we think there’s purposeful bad conduct and manipulation, the other is where you have sloppy algo management. Our goal as regulators is obviously to try to attack both of those.

“Instances of repeated quoting activity, followed by short-term movements in the market, and trades that take advantage of that market movement, are very troublesome to us from a manipulation perspective because they are intentional and designed to disrupt or deceive the market. We have found that market participants operating abusive algos increasingly use more crafty means to ignite false momentum in the marketplace that they can then trade against.”

Fortunately, through Regulatory Service Agreement contracts with our exchange clients, FINRA receives 99.9 percent of the data in US-listed equity securities, and conducts comprehensive cross-market surveillance to look for the disruptive impacts of abusive algos and other rule violations. Recently, approximately 90 percent of these alerts involve activity on more than one exchange and approximately 70 percent of the alerts involve activity from more than one broker-dealer. For listed options we provide cross-market surveillance services to exchange clients that account for approximately 50 percent of the market. Some of our surveillance patterns also monitor for abusive cross-product activity involving equity and options activity. In addition to the data FINRA receives from our client exchanges, FINRA works closely with our client exchanges to share regulatory intelligence and develop cross-market cases that often focus on abusive algos and faulty supervision of algos.

An example of sloppy governance is when a quoting algo starts self-looping by generating quotes over and over because it is reacting to its own activity placed into the market (self-referencing). This can result in thousands of ephemeral quotes being entered into the market in a short period of time that are confusing to other market participants. Another example is when you have two algos at the same firm that are unintentionally interacting with each other and producing a lot of wash sales that inflate the volume in a stock. There may not be a purposeful, nefarious intent behind the activity but it nonetheless has a negative impact on the market as a whole.
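Detecting unintentional self-interaction of this kind is conceptually simple: check whether the same firm appears on both sides of an execution. A minimal sketch, assuming a hypothetical record layout (fields like `buy_firm` and `sell_firm` are illustrative, not any actual audit-trail schema):

```python
from collections import defaultdict

def flag_self_matches(executions):
    """Flag executions where the same firm is on both sides of a trade --
    a potential unintentional wash sale between two of its own algos --
    and tally the inflated volume by symbol.

    Each execution is a dict with illustrative fields:
    symbol, buy_firm, sell_firm, qty.
    """
    flagged = []
    inflated_volume = defaultdict(int)
    for ex in executions:
        if ex["buy_firm"] == ex["sell_firm"]:
            flagged.append(ex)
            inflated_volume[ex["symbol"]] += ex["qty"]
    return flagged, dict(inflated_volume)
```

A real surveillance check would also look at account and algo identifiers and at timing, but the core question is the same: is the firm trading with itself?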

We’ve made progress on both fronts.

“On the manipulation side, we’ve seen a decline in some of the activity we’re looking for, and we think machine learning could give us the potential to see things we’re not seeing today or see things better than we see them today. There’s been solid progress on the manipulation side and cases have come out over the years focusing on this type of activity.”

Likewise, on the governance side, we’re generally seeing fewer meltdowns occur because of faulty coding and we’re not seeing as many requests by firms to break trades because they are believed to be erroneous.

While there is still room for improvement at some firms, these positive trends have occurred in large part because of the supervisory structures and processes put in place at a lot of firms that have really gotten with the program and have started to look at algo supervision in a way where it’s not just the compliance staff involved. Compliance staff are usually not quants or programmers and, as a result, they may not by themselves have the ability to properly supervise algos. One of the positive developments that we’ve seen is more of a cross-disciplinary approach to algo supervision, where you bring in IT, obviously you have the business involved, and you bring in Legal too. You’re bringing in all the disciplines earlier in the process of building and generating algos, with more technology support and more testing. In large part, that is why I believe we’ve seen fewer hiccups with algos over time.

Susan Tibbs (ST): I think the evolution of technology has worked in our favor, both from the firm compliance side, where they can better test and monitor the algos, to what we’re able to find on the detection side as a regulator. This has definitely improved tremendously over the years. Our message is consistent that the firm’s compliance technology needs to keep up with its trading technology.

Radar: How involved is management in the algo strategy and its execution now?

TG: If you were to go back six or seven years, there were some firms that empowered their traders and quants with the technology and would allow the quants to go off and develop and implement algo trading software. Traders and quants would be doing all these things and the firm may not have really paid attention to what they were doing, as long as they were making money. Today, I think there’s been much more of a lockdown by firms generally, and while there may be some firms that need to improve, there’s much more discipline surrounding the development, testing, implementation and use of algos. In addition, firms now typically have controls in place to terminate an algo if the firm suspects there is something askew in the market, for example due to an aberrant price or the firm making or losing too much money relative to expected ranges. Firms also have better procedures in place to monitor whether an algo at the firm is causing or contributing to a market disruption.
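Controls of the kind described here are often called kill switches: pre-set limits that halt an algo automatically when its P&L or an observed price moves outside expected ranges. A minimal sketch, with illustrative thresholds and field names rather than any firm’s actual limits:

```python
class AlgoKillSwitch:
    """Sketch of a control that halts an algo when running P&L or a
    price move falls outside pre-set expected ranges. Thresholds are
    illustrative; a real control would also cancel open orders and
    alert supervisory staff.
    """

    def __init__(self, max_loss, max_gain, price_band_pct):
        self.max_loss = max_loss              # halt if P&L drops below this (negative)
        self.max_gain = max_gain              # halt if P&L exceeds this (too good = suspicious)
        self.price_band_pct = price_band_pct  # halt on aberrant price moves
        self.halted = False

    def check(self, pnl, last_price, reference_price):
        """Return True if the algo should be (or already is) halted."""
        if self.halted:
            return True
        move = abs(last_price - reference_price) / reference_price
        if pnl < self.max_loss or pnl > self.max_gain or move > self.price_band_pct:
            self.halted = True
        return self.halted
```

Note the latching behavior: once tripped, the switch stays halted until a human intervenes, which matches the point made later in the interview that "there's got to be something to stop it."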

“From a regulatory standpoint, when we’re looking at activity now, it’s much more common that we can identify which specific algo generated the activity in the marketplace, whereas before it was much harder to do that. Firms just weren’t keeping track of that information in a way that was traceable.”

This type of change in governance has been a huge benefit for us and a huge benefit for the firms. Over the years, several regulators have suggested that all algos should be registered and the reason behind that was this feeling that there is all this activity but you can’t really piece it together because you don’t know exactly where it’s being generated from. While the US hasn’t adopted these specific types of requirements, and there’s certainly still room for improvement, I think there has been significant advancement in terms of the ability to track the human to the code to the activity.

Radar: Do you have a more regular dialogue with firms who want to work with you in an area which is perhaps not as set in stone as those around traditional regulations and a traditional process?

TG: Working with us might perhaps be a strong way of describing it. I think firms want to know what we’re thinking, but they don’t always want to engage with us. Having said that, there was a notice that we issued in 2015 where we actually got input from firms on some of the best practices that were out there in the marketplace. We also have been more transparent about our expectations and areas of focus with respect to algos in the annual regulatory priority letters that FINRA publishes. We want to be as transparent as possible about the problems that we’re seeing so that firms can be aware of what those areas are and can react to them.

A program that we believe has raised firms’ awareness of our concerns with the supervision of algos is our Cross-Market Supervision Potential Manipulation report cards, which we started in early 2016. Through the report cards we are leveraging the surveillance we’re running to benefit the firms. One scenario in particular that we wanted to assist firms with is when we see potentially manipulative activity spanning multiple broker dealers. When potentially problematic trading activity is dispersed between two firms, neither firm sees the totality of the activity and the portion of the activity that they do see may look benign to them. Neither broker can see the full picture so we wanted to enlighten the firms, to let them know that there was potentially toxic activity going through their firm that was generating red flags for us. We don’t share the identity of the other firms that are also involved in this type of multi-firm trading, but we do provide information to the firms in a way that allows them to see whether the pattern of activity consistently involves the same other firm or not. If there is a pattern of the same firm being involved, it is a red flag that the conduct flagged on the report card may be purposeful. After the report cards were launched, we did see some pretty dramatic declines in that type of activity, presumably because firms could now see the totality of the conduct and took action with respect to the accounts that were part of the surveillance alerts. Presently, we are generating report cards for four surveillance patterns.
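From the regulator’s side, the recurring-counterpart check described above can be sketched as a simple tally: for each alert involving a given firm, count which other firms appear alongside it, and flag counterparts that keep recurring. The alert structure (`firms`) and the threshold are purely illustrative:

```python
from collections import Counter

def recurring_counterparties(alerts, firm, threshold=3):
    """Count how often each other firm appears in surveillance alerts
    alongside `firm`; return those at or above `threshold`. A firm
    that consistently recurs is the kind of red flag the report cards
    are meant to surface. Alert schema is illustrative.
    """
    counts = Counter()
    for alert in alerts:
        involved = alert["firms"]
        if firm in involved:
            for other in involved:
                if other != firm:
                    counts[other] += 1
    return {f: n for f, n in counts.items() if n >= threshold}
```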

ST: Via our potential manipulation report cards, we give firms information related to the potential collusion and then they’re able to ask questions of their own trader or customer. The firm can review its own records and surveillance and dig into it further. If an algo is responsible for the activity, this type of report card can also help a firm to identify instances where an algo is impacting the market in a way that is a concern to regulators.

Radar: In terms of the increase in data that is available to you with initiatives like the Consolidated Audit Trail (CAT) and the data you can ask for, plus your ability to run analytics on data in the cloud, is that something that is empowering for you from an oversight point of view? Especially around automated trading where there’s huge volume? Is that the best way for you to oversee firms going forward?

TG: Yes, we’re data geeks and the ability to get richer, more granular data through the CAT once it is in operation is very exciting to us.

“To be able to eventually generate surveillance alerts at the account level instead of at the broker level will be a huge step forward for surveillance of the US markets.”

As I mentioned earlier, FINRA moved its surveillance program to the cloud in mid-2016 because of our significant capacity and computing needs. We have been extremely pleased with the results. We have been able to handle large-volume days without hiccups; the staff can retrieve large data sets in a matter of seconds, whereas before it would take hours; sophisticated analytical tools are available to the staff so that they can take deeper and wider dives into the data as part of our surveillance and investigative processes; and surveillance reruns, due to inevitable data issues, take a fraction of the time they used to because of the computing power of the cloud. It is not clear to me that FINRA could have satisfied its surveillance, data, and analytic needs if we were not in the cloud. The emergence of CAT in the future will only make it more likely that we will remain in the cloud.

ST: The cloud has enabled us not just to process and ingest what we need to do our jobs, but also to innovate and take things further, both from the technology side and from the regulatory side. That’s been truly amazing. We just finished a Createathon event where more than 500 staff participated in various projects around artificial intelligence, and the ideas, the complexity, the passion and the enthusiasm from our staff were just astounding.

TG: FINRA’s surveillance operation is very dependent on technology. At the end of the day, you’ve got to have a strong partnership and a strategy with your technology department to evolve and be successful, as you can’t stay static in this marketplace. Market Regulation’s partnership with Technology is much deeper now than it was before.

Radar: What will your relationship with firms look like as a regulator in future and what can they expect from an oversight point of view? It feels like you will be able to do a lot more remote oversight on a regular basis where you might be sending RFIs into firms, within minutes of you receiving certain data and reports, and they will need to be able to respond in kind. Is that what the future looks like?

TG: With the application of advanced big data analytical techniques, such as machine learning, I am confident that FINRA will be able to unlock the power of the more granular data that CAT will provide and enable FINRA to enhance its surveillance of the markets.

“I think we will produce more exacting surveillance alerts that will better home in on problematic trading, and our surveillance program will more readily identify new or previously unknown abusive trading strategies. I don’t think FINRA will be alone in this, as regulators throughout the world will similarly look to leverage more granular trading data, artificial intelligence and cloud computing.”

However, I don’t think we will be producing these types of enhanced surveillance alerts in real time, as the surveillance will be dependent on non-real time information gathering. With CAT data, much like the order audit trail system that FINRA is running today (known as OATS), the data will be required to come in from firms by 8:00am the next morning. Because the information is so detailed and expansive, firms will have a five-day period to correct any errors or omissions that were made in the original submission. Firms submit massive amounts of detailed data to OATS today and will submit even more data in the future to CAT, so there needs to be an error correction process to make sure the data is sound before you apply sophisticated surveillance techniques. As a result, for the foreseeable future, I think you will see cross-market and cross-product surveillance run on a non-real time basis.

With the ability to ensure the data is sound before running surveillance, and to enrich the data, I think you reduce both false positive alerts and false negative alerts. Sometimes when you assess a situation on the fly very quickly, it can be informative because you see the fire, but you may not know why the fire started. I think there’s a trade-off between the amount of time to conduct data management and the value of eventual surveillance alerts. This is not to say that there is no real time surveillance in the US. Our client exchanges conduct robust real time surveillance of their markets and are constantly reaching out to firms when they see something that troubles them. The exchanges make referrals to FINRA when they are concerned with what they have found. These referrals, when combined with the surveillance alerts FINRA generates, are very powerful.

But the message to firms is that with CAT we will have more granular data, and for the first time, more comprehensive options data. With the ability to conduct better cross-market and cross-product surveillance, there’s likely activity we will see better tomorrow that we are not clearly seeing today.

With better data to conduct surveillance, if you’re a good firm doing what you’re doing today you may not be bothered as much by us, as we will hopefully generate fewer false positive alerts; conversely, if you’re a firm that engages in bad activity or has customers engaging in bad activity, we will be bothering you more, as there will be fewer false negative alerts.

ST: A lot of what we do is driven by the surveillance that we’re building and generating ourselves; a lot of that effort we’re putting into improving the precision so that we are targeting the right things in the right way. That’s certainly an important part of the innovation and the work we’re doing here. I would also add that in Market Regulation we generally don’t do real time surveillance, as the exchanges are doing that themselves; what we’re doing is post-trade surveillance.

Radar: What is the best way to mitigate risk in this high-volume automated area and what gives you comfort as a regulator when you first start to look at the approach a firm is taking to ensure compliance?

TG: At a high level, when we go to the firm and they say that they have an algo management committee we see that as a very good sign because it means all perspectives on algo trading hopefully are being heard and addressed in what they’re doing. The level of documentation also is something that would jump out at us. If a firm is able to connect its trading back to its algos, documents what the algos are designed to do and how they have changed over time, and can show the results and actions taken as a result of their internal surveillance programs, those are all good signs to us. The level of testing is also something we look for. Between testing pre- and post-implementation of an algo, there’s probably more room for improvement in the post-implementation testing.

“Firms are getting increasingly better at testing on the front end, but a lot of the time, tests are being done in an environment where you could get a false sense of comfort, so that when firms go live, things can happen that they may not have been aware of or there may be unknown, unintended consequences.”

Also, after things are implemented, it is important to not just test once and leave it alone; firms should conduct periodic assessments because there can be tweaks to the algos over time that could make them different. Markets are also constantly changing, so testing can’t be a static exercise. It’s got to be a constant exercise for firms to evaluate what their intentions are and what’s happening, and if there is a delta there.

ST: We set out in FINRA Regulatory Notice 15-09 numerous considerations regarding effective practices for utilizing algorithmic trading strategies. These have really held up over time. The effective practices include regular assessment of controls and testing. Firms should have a strong rationale for the controls implemented, and adjust them periodically.

Radar: If you were to characterize those instances of deliberate misconduct you have seen, has there been a pattern running through them?

TG: There’s probably a couple of threads. One is what we call micro-manipulation, where you’re trying to get the market to move for an instant in time so that you can be on the opposite side of the market to take advantage of the price movement. What some of these strategies are designed to do is to flip the spread, and to get the bids to go up or the offers to go down, so that the manipulator is buying on the bid and selling on the offer. If you’re successful in doing that, then you just reverse it to make your profit.

A lot of what these manipulative strategies are designed to do is to send out false signals into the marketplace causing market responses. For example, this can be accomplished through layering, where you’re improving the best bid or offer for very small periods of time with the intended reaction that a third party thinks the market is moving and wants to be top bid/low offer. Once a third party takes the bait, the firm that layered immediately removes its orders. The third party is the one who is harmed, as they bought for more or sold for less than otherwise would have been the case without the placement of the ephemeral layering orders designed to move the market. With spoofing, you lead the market up, and then you run away, and the others that have followed you then become the top of the market and they get hit from the opposite side. Sometimes we’ll also see people do layering and then they’ll sprinkle in some wash sales to give some legitimacy to pricing at a certain level, again designed to get people to go to that level – creating a false perception of value in the market.

So, a lot of the patterns are looking for baiting activity that lures other market participants momentarily to a different price level that is advantageous to the firm conducting the baiting activity, but there are different types of baiting activity used to catch the fish. I would say that sometimes ‘we don’t know what we don’t know,’ and this is what we worry about. Are there strategies out there that we’re not aware of where people are being taken advantage of? Machine learning will go a long way to help us mitigate that risk.
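A toy version of a layering check along these lines might look for a burst of short-lived orders by one participant on one side of the book, followed shortly by that same participant trading on the opposite side. Real surveillance patterns are far more sophisticated; the event schema and thresholds here are purely illustrative:

```python
def flag_layering(events, max_lifetime_ms=500, min_layered=3, window_ms=2_000):
    """Simplified layering heuristic. For each trade, count recent
    short-lived orders by the same participant on the *opposite* side
    of the book (the baiting orders); flag the trade if there were
    enough of them. Event fields are illustrative:
      order: {"type": "order", "side", "participant", "ts_ms", "lifetime_ms"}
      trade: {"type": "trade", "side", "participant", "ts_ms"}
    """
    flags = []
    for i, ev in enumerate(events):
        if ev["type"] != "trade":
            continue
        opposite = "sell" if ev["side"] == "buy" else "buy"
        layered = [
            e for e in events[:i]
            if e["type"] == "order"
            and e["participant"] == ev["participant"]
            and e["side"] == opposite
            and e["lifetime_ms"] <= max_lifetime_ms
            and ev["ts_ms"] - e["ts_ms"] <= window_ms
        ]
        if len(layered) >= min_layered:
            flags.append(ev)
    return flags
```

This captures only the crudest signature; as the interview notes, abusive algos increasingly make their baiting look random, dispersed, or legitimate, which is why simple rules give way to richer cross-market patterns and machine learning.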

As to the firms that take the bait, interestingly I think a human trader may be better prepared than an algo. I think a human would know if they were picked off. The computer is sort of dumb in this respect. It will keep doing what it’s programmed to do and keep getting picked off until a human shuts it down or until its risk limits have been reached. There’s got to be something to stop it.

“What is surprising to some is that algos operated by high-frequency-trading firms often are the victims of a lot of these momentum ignition strategies. HFT firms are very cost-conscious because their profit margins are so thin, so when they do get picked off they are not bashful about telling us to look at something.”

Some of the best information we get is from HFT firms. It is also a bit of a cat and mouse game. If a firm has been picked off, it will develop defensive measures to not take the bait, which in turn prompts the developer of the abusive algo to be more creative to make its baiting activity look more random, more dispersed, or more legitimate, in a way that entices firms to take the bait again.

Another theme is that volume continues to be more concentrated at the market open and close, and so we do see a lot of activity where people are trying to move the market right up to the point when the opening and closing auctions go off in an attempt to advantage orders that the manipulator has placed in the auction. This is probably our biggest growth area in the last year or so in terms of how we’re spending our time.

Radar: To what extent are you looking at the correlation between underlying markets and the derivatives that are connected to those?

TG: This is something that we look at very closely. In fact, a few years ago we brought a cross-product manipulation case against a firm that was layering in the equity market to get favorable options executions. Because of the leverage that options provide and the enhanced data that we will receive from CAT, I think cross-product surveillance will be a big growth area for us in the future. It is not inconceivable that people may use synthetic stock positions created with options to effectuate some of the abusive algo strategies that I described earlier. We are also looking closely at products that are based on the same index, for example stock-index options and ETFs based on the same index, and we recently launched a suite of surveillance patterns that we call correlated product surveillance. Similarly, we are running a prototype for a new surveillance pattern that examines activity in related ETPs, for example an unleveraged ETP and a leveraged ETP on the same index. The concern is that, similar to equity activity to move the options market, you may see activity in the unleveraged ETP that is designed, through arbitrage, to move the price of the leveraged ETP. In sum, the ability to do cross-product surveillance is going to be dramatically better with CAT data and a growth area for FINRA in the future.

Radar: How do you feel about the requirement to train those working on the algos on regulatory requirements and the environment, especially the quants and developers? Do you prioritize that?

TG: In the past, it was not uncommon to encounter extremely smart and savvy algo developers that would be coding away and creating things on the fly. They were not always familiar with applicable regulations and just wanted to get a trade off as fast as they could at the cheapest price possible. As a result, we do now require those involved in the development or management of algos to be registered with us as a trader. This also means that these individuals must pass a qualification exam and be subject to continuing-education requirements. Some firms register only those required to be registered under the rule, but others require almost all their staff involved with algos to be registered. Hopefully, with the registration requirement and the education, we’re in a better place.

“We are seeing fewer market disruptions related to sloppy algo management, and we like to think that part of that is attributable to FINRA’s requirement that algo developers and managers be registered.”

Radar: Are there other regulators that you work with in these areas, both nationally and internationally? How does your interaction take shape and help?

TG: Probably one of the best vehicles for that is a group called the Intermarket Surveillance Group (ISG) which meets twice a year. ISG started in the US many years ago. The SEC requires that all US stock and options exchanges and FINRA be members of ISG so that we can share trading and customer information necessary for our trading investigations. For example, if the New York Stock Exchange was conducting an investigation and saw activity on the Philadelphia Stock Exchange, they could get that data. To be a member of ISG, you have to have the ability to compel your members to give you information when asked by another ISG member. In the US, ISG has expanded over time to include most of the futures exchanges. It also expanded in the late 90s when non-US exchanges started to join ISG. There are now almost as many non-US markets in ISG as US markets. We all have signed the ISG agreement that compels us to share information with each other for trading investigations and related enforcement activities. As part of the ISG, there also needs to be a commonality of principles across the regulatory regimes that are consistent in terms of manipulation, fraud, customer protection, and other areas. All members need to have their rules aligned to those areas.

ST: There are presently 56 ISG members (30 US and 26 non-US), and we regularly share surveillance techniques, case studies, investigations, information on firms of concern, and regulatory intelligence, both in the meetings and in one-off exchanges as needed. So the level of cooperation is changing in a way that involves a lot more than just exchanging data. It’s becoming more collaborative and more detailed as to problematic conduct and problematic firms.

TG: We also have good working relationships with statutory regulators outside the US. For instance, when we have seen problematic activity with a firm in the US and we know they’re active in the UK, we will compare notes on the firm with the UK’s Financial Conduct Authority (FCA) and vice versa. We also have had mutually beneficial secondment arrangements with statutory regulators outside the US, like the FCA, where they have sent staff to us and we have sent staff to them. Because of the level of cross border trading between the US and Canada, we also have a very close relationship with the Investment Industry Regulatory Organization of Canada (IIROC), which is FINRA’s Canadian equivalent. Lastly, FINRA has entered into several Memoranda of Understanding with other regulators outside the US to provide mutual assistance.

Radar: What worries you right now as a regulator with a responsibility to react to the way that market structure, technology and the changes in trading practice evolve?

TG: Liquidity gaps worry me. The Flash Crash in 2010 was unfortunately a real-life example of such a gap. We’ve moved away from registered market makers and specialists as the primary providers of liquidity to HFT firms, some of whom are registered market makers on exchanges and some of whom are not. I worry that there could be some event in the market that causes HFT firms to retreat by widening their spreads or simply exiting the market if they are not a registered market maker. From a market-structure standpoint, I also think the velocity with which things occur can exacerbate risks in the market because so much can happen before things are turned off to stop the bleeding. I’m not sure that these risks are unique to our current market structure, as you could argue that more traditional models of liquidity provision would create the same, if not greater, risks.

I also think that the SEC, exchanges and FINRA have taken significant steps to mitigate these risks through the development of the limit up/limit down process and more refined circuit breakers.

“With these measures, I believe the market has better guardrails than it did in the past. So while there could be a really bad event that will drive a stock or the market overall down, things will shut down at some point. There’ll be some bleeding, but at some point it’s going to shut things down and let people take a step back and assess it.”

So I think the narrower bands on the circuit breakers and the limit up/limit down process are really good reforms for the market. The SEC’s implementation of its Regulation Systems Compliance and Integrity rules (Reg SCI) also has shored up the resiliency and capacity of the markets to withstand volume surges and technology issues.

Finally, as I said earlier, today’s markets are very complex, interconnected, super quick and constantly changing. As a regulator, this creates an environment where you have to constantly worry about what you don’t know and puts a big premium on having an accurate, consistent, reliable and timely audit trail to support systems that can best surveil the market. I constantly worry if there are abusive trading strategies out there that we’re not aware of. With the help of our client exchanges, I think FINRA is operating a very robust and successful cross-market and cross-product surveillance program for today’s market, but we must always be striving to enhance our program. The data granularity and detail that the CAT will provide will certainly provide opportunities for FINRA to take its surveillance program to the next level through machine learning and other techniques.