A Market Data Makeover for the U.S.A.

Dec. 24, 2024

In Episode 68, we are joined by Allison Bishop, President of Proof Trading, for a discussion on market data in the US, a topic that is a front-burner item for market structure followers. Allison shares her firm's experience as a small start-up broker required to purchase market data, and the choice between subscribing to the faster but more expensive proprietary feeds from exchanges or using the publicly available data feeds known as SIPs. Her firm chose the SIPs, which put her front and center as a stakeholder in the ongoing debate over the content and governance of the public data feeds, a contest that began in 2018, included multiple lawsuits, and remains in limbo awaiting direction from the SEC. This podcast is a great resource for any market participant wanting to understand the history of the market data debate and what might happen next under the incoming Atkins administration at the SEC.

Chapters:
04:50 The Choice Between the SIP and Prop Feeds
07:15 Guiding Principles of Market Data Reform
13:36 The Final Rules on SIP Content and Governance
15:29 Making Sense of the Initial Cost Proposal for the Enhanced SIP
25:40 Why Should Data Be a Utility Like Water or Electricity?
29:09 Will Smaller Exchanges Fight Incumbents to Lower Data Fees?

This podcast was recorded on December 17, 2024.

PETER HAYNES: Welcome to TD Cowen's podcast series, Bid Out, a market structure perspective from North of 49. My name is Peter Haynes, and today for Episode 68, we're going to study market data, a topic that is a front-burner item with market structure followers in just about every corner of the globe.
Why? Because in the minds of the industry, for-profit exchanges are extracting enormous rents for providing access to market data, a service the industry is required to utilize. Today I am pleased to be joined by Allison Bishop, President of Proof Trading, a relatively new brokerage firm offering execution algorithms to institutional investors.
Allison, thanks for coming on the podcast.
ALLISON BISHOP: Thanks for having me.
PETER HAYNES: All right. Well, before we dig in on market data, can you tell our listeners a little bit about your background, including your expertise in cryptography?
ALLISON BISHOP: Sure. So before I worked in finance, I was an academic computer scientist, and I was doing research and teaching as a professor. And my main area of study was cryptography, which is the study of tools like encryption and related technologies for securing data.
And my particular focus was in reasoning about complicated systems that use data, and what do we know about their security, and how do we prove things about it systematically? So I get asked a lot if I use any cryptography in my jobs now in finance, and the answer is, no, not directly.
But I think the kind of reasoning you learn from thinking about cryptography, it trains you to be a very systematic and rigorous thinker about complicated things, especially under adversarial conditions, and I think that does translate well. And probably the most famous example that people would be aware of is that Jim Simons was also a cryptographer before he came to finance.
PETER HAYNES: Well, you're in good company, that's for sure, if Jim Simons is someone from the same space. I think about the CAT. Some of the talk around the CAT is that it's one of the single largest databases in the world. I can't remember what the numbers were, but it was massive.
But when you think about the security, I know that's one of the big issues: some people are frustrated with how much data is in one spot and the risk that that data could be misused or, I guess, in your world, hacked into. Have you spent much time on the CAT? Is that something you've paid any attention to?
ALLISON BISHOP: I've certainly followed the development, but I haven't spent any time working on it in any form. I think the cybersecurity challenges there are pretty interesting, though. One of the things the CAT has going in its favor is that keeping data protected at rest is something cryptography, over the last few decades, has gotten pretty good at.
So we have pretty good encryption algorithms. They're very stable. They've been studied very well. And if you just want to lock your data away and keep it safe, we're pretty good at that. Most of the problems happen when you bring your data out into the world and you transmit it across the internet, and you have different people logging in from different places.
And so the communication surface is a large part of where risk comes from. Most of the things that keep me up at night as a cybersecurity person in the modern world are things like the Internet of Things, where we all have so many devices talking to each other, which creates a surface on which viruses and things can spread.
So I do think the CAT, while being one big, juicy target, doesn't really have a lot of people needing to access it all the time, and they can probably control those communication surfaces a bit better than some more heavily-used systems. So, at least from an outsider perspective, that's my thinking on it.
PETER HAYNES: Well, that's good to know. My brother's in the cybersecurity industry, and every time I do those tests that the organization makes us do on cybersecurity, they always ask a question about your passwords. And my brother used to say the safest place to keep your passwords is underneath your keyboard, not in a database or saved in a Word file titled Passwords, where someone can go in and get them all.
ALLISON BISHOP: Actually, I would like to disagree with that advice, though, because it actually depends very specifically on the threat model. And you often do see cybersecurity people give that advice, and I think it makes sense in a very specific context where you're worried about the outside threat of a hacker, and that makes a lot of sense.
But you also have to worry about inside threats, whether in a corporate context or in a personal context of people who have access to your physical spaces and devices. And so it's definitely not a one-size-fits-all kind of recommendation.
PETER HAYNES: Yes. The dummy I am was just thinking of the world I live in, where the threat actor wouldn't necessarily be sitting beside me on the trading floor, but you're absolutely correct. So maybe that'll be another podcast, where we can talk about cryptography, or we can get my brother on to talk cybersecurity with you someday.
But let's set the stage for this discussion, which is on market data. We'll be focused on US market data today, and in particular the debate over the content and cost of equity data that is publicly available through the securities information processors, better known by their acronym, SIPs.
And we're going to look at that in comparison to the proprietary data that is only available by subscribing to direct feeds from exchanges. While this discussion will be primarily focused on the US, it is relevant for listeners in other jurisdictions, namely Canada and Europe, where similar discussions on market data costs and content are playing out, albeit shaped by local market conventions.
So Allison, you joined the discussion, or we'll call it a debate, on market data in 2020 when the SEC first published proposals for market data reform. Proof Trading was a startup at the time, and you were a consumer of SIP data, and it was a very significant expense item for your firm. Can you put these data costs into context for our listeners?
ALLISON BISHOP: So we are still a startup, and we are still a consumer of SIP data, and it remains a significant cost for our firm. Back in 2020, when we were preparing to launch our trading business, we were budgeting out our costs and subscribing to the SIP.
Then, as now, our monthly costs due to the SIP, meaning fees paid back to the exchanges and the processors, were about $14,000 a month. And that is just for agency trading. We do not have a prop trading business, so we do not have additional fees because of that. We do not pay display fees, because everything is automated and happening inside our technology systems.
So it's the thinnest, cheapest setup you could get from a market data perspective on the SIP, and for our real-time data costs, that comes out to $14,000 a month. To put that in perspective of our overall costs, which we've been very transparent about, our yearly operating budget is somewhere between $1.5 and $2 million.
That's been going up over time as we build out the business, but it was something in that ballpark in our projections in the early days and something in that ballpark now. And so SIP costs are one of our most significant line items and represent a very significant portion of our overall cost of operating the business.
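To put those figures in context, here is the back-of-the-envelope arithmetic as a minimal Python sketch, using only the numbers quoted above:

```python
# Illustrative arithmetic only, using the figures Allison cites:
# ~$14,000/month in SIP fees against a ~$1.5M-$2M annual operating budget.
sip_monthly = 14_000
budget_low, budget_high = 1_500_000, 2_000_000

sip_annual = sip_monthly * 12            # $168,000/year
share_low = sip_annual / budget_high     # ~8.4% of a $2M budget
share_high = sip_annual / budget_low     # ~11.2% of a $1.5M budget

print(f"SIP fees: ${sip_annual:,}/year, or "
      f"{share_low:.1%}-{share_high:.1%} of the operating budget")
```

Roughly 8% to 11% of the firm's entire operating budget going to one data subscription is what makes it one of the most significant line items she describes.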
PETER HAYNES: And, I guess, a barrier to entry, as we will discuss, for other organizations that want to be startup innovators like yourselves. So in response to the SEC's proposed order, which came in early 2020 and required exchanges to come up with a new market data plan, you wrote an eight-page comment letter outlining what you believed should be the guiding principles for market data reform.
In this particular letter, you broke the discussion into four parts, and I'm going to go through them one by one and ask for your thoughts. We'll start on the topic of SIP latency, which, post the Flash Boys book, became a major issue of the slow public feed versus the fast proprietary feeds. In your letter, you did not believe it would be beneficial to have the SIPs try to get any faster. Why not?
ALLISON BISHOP: Just for context on this question, I should mention I used to work for IEX, so I'm well-versed in the Flash Boys narrative and all that low-latency stuff. And as a data scientist looking at this problem, the low-latency dynamic is very, very clear in what's happening in the market.
You have these moments when price changes are happening across the market, and the, quote unquote, "market" in the US is actually a distributed system across many exchanges and dark pools. So things do not move atomically. A price change doesn't happen in one instant simultaneously everywhere. It happens as different participants react across the market in different ways.
And so there's this little window of time, often, where a price change is in progress, and you can reasonably predict that that process is going to complete and the price is going to change. What we could see from the research we did at IEX on things like the Crumbling Quote Signal, which I worked on, was that there are probabilistic patterns in how these things play out.
So latency matters very much in that sense. There are these dynamics of short-term price changes, and people can take advantage of them, use them in market making, adjust their strategies, all those sorts of things.
But very fundamentally, they are winner-take-all races. Once one of these processes starts, market makers are potentially trying to adjust their quotes, people are trying to pick people off, all those sorts of things. But at the end of the day, you're either going to be picked off by somebody faster, or you're going to move out of the way because you're faster than them. So it's a very zero-sum sort of situation.
And, as you might expect, the top firms that are very good at taking advantage of those situations win the vast majority of those races. If you're not among the top three fastest firms competing in low-latency races, you're not going to win a reasonable percentage of those opportunities. You're going to get picked off along with everybody else.
And there's no real advantage to being the sixth fastest versus the 10th fastest. The result is the same: you get picked off by the fastest. As a result, our contention as an agency broker is that it doesn't really make sense for us to invest in playing the speed race to that extent, because getting from the 10th fastest to the eighth fastest would be a huge investment in technology and cost, and we still wouldn't win any significant fraction of these speed-based races.
So especially from an agency perspective, you have risk checks and other sorts of things in your system that are designed to facilitate agency trading, and those are going to take you out of the running for being the absolute fastest. And also, unlike market makers, you don't really get to pick your spots of what symbols you're trading and when exactly.
So agency brokers are playing these speed races with a handicap, which is why it makes sense, in our opinion, for us to just use the tools that have been built at the exchange layer and use smart routing practices to avoid being the thing that sets off these speed races.
So we are very mindful of these kinds of dynamics and of trying to achieve good outcomes for our customers, but we navigate them by trying to stay out of speed races, to not instigate speed races against our orders, and to route in ways that avoid these dynamics, rather than trying to come out on top when these situations arise.
Because we don't see much evidence that an agency broker has the capability to end up on top in these situations, and then it just becomes a cost that's being passed on to their clients for no discernible benefit.
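To illustrate the winner-take-all dynamic Allison describes, here is a toy Monte Carlo sketch in Python. Every latency figure in it is invented for illustration; the point is only that in a first-past-the-post race, a firm even a few microseconds off the pace wins essentially nothing.

```python
import random

# Toy winner-take-all latency race. Mean reaction times (microseconds)
# are invented for illustration; per-race noise is Gaussian.
random.seed(1)
firms = {"fastest": 5.0, "second": 5.5, "third": 6.0,
         "mid_pack": 8.0, "agency_broker": 15.0}
JITTER = 1.0     # stddev of per-race noise, microseconds
RACES = 100_000

wins = dict.fromkeys(firms, 0)
for _ in range(RACES):
    times = {name: random.gauss(mu, JITTER) for name, mu in firms.items()}
    wins[min(times, key=times.get)] += 1  # lowest reaction time wins

for name, w in sorted(wins.items(), key=lambda kv: -kv[1]):
    print(f"{name:14s} {w / RACES:7.2%}")
# The three fastest firms split essentially every race; "mid_pack" and
# "agency_broker" round to ~0%, which is the point: improving from 10th
# fastest to 8th fastest still buys you nothing.
```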
PETER HAYNES: So this notion that the SIP is fast enough is really working for you as an agency broker. And it took Wall Street a long time, I think, to figure out that they weren't going to win that race, for all the reasons you just mentioned.
But I'm curious, when you think of the agency algorithm space right now, I keep hearing that there are some sell-side algo providers that are starting to play in that dynamic of the speed game. Do you get that same sense, or do you think, generally speaking, that the algo providers are like you, focusing on their smart routing technology rather than trying to win the speed game?
ALLISON BISHOP: I would hope that they're focusing on their routing technology rather than trying to win the speed game. But I think there are two interesting dynamics at play at a business level. One is that, often, these are companies that already have an investment in all of this speed-based technology for other businesses operating under the same umbrella.
And so if you're already bought into the idea that you need all this stuff, or you've already marketed to your customers that you need all this stuff, then you may not particularly want to examine the premise that speed is helping your performance. But this is the nice thing about doing a startup: when you approach this from first principles, the question you ask is perhaps naturally much narrower.
Which is, well, what are the exact situations and mechanisms in which I anticipate my speed being able to help my customers? And so if I were a client of a broker telling me they're winning this kind of speed race, I would ask, how and when exactly, in the context of agency trading, are you beating market makers to the punch, and how are you analyzing and quantifying that?
PETER HAYNES: I think that's going to be an interesting 2025 and beyond debate, and we'll watch it with keen interest. Let's move to content, which was the second topic that you broke your letter into.
The SEC argues that the original intent of excluding detailed information on the SIPs, like depth of book, was the belief that competitive forces between the different marketplaces would control prices. You refute this notion. Can you explain your argument?
ALLISON BISHOP: Certainly. So depth of book data coming out of a particular exchange is fundamentally a monopoly, in that it can only come from that particular exchange. So the floor on competitive prices for that data is set by what that exchange charges anyone who takes that data and redistributes it.
And I don't see how that part of it, that redistribution fee, is subject to competitive forces, because ultimately, you can't get depth of book data from the New York Stock Exchange without it coming, at some origin point, from the New York Stock Exchange.
And you can say, well, you can buy depth of book data from one market and not another, but ultimately, some of these markets have very large market share. And if you're going to be building models and taking advantage of depth of book, you're definitely going to want that data from the exchanges with the highest market share.
And so unless we have some mechanism to control prices on those redistribution fees, any technology provider or vendor coming in there trying to reduce that cost can only reduce the additional overhead that they charge, not that fee due back to the exchanges. And so that part of it, I feel, is not subject to competitive forces.
PETER HAYNES: I would have to agree. And I'm not sure I fully understand the aspect of competing consolidators for exactly the reason you mentioned there, but I'm sure we'll hear more about that in the future.
So let's move to market data pricing. And you use the example, again, of your firm, Proof Trading. When the SIP added Proof as a client to its services, it generated an additional, as you say, $14,000 a month in revenue. But it added virtually no cost to the production of the SIP.
So your belief is that the SIP first needs to redefine what a user is. And given changes to technology and data production, there needs to be some sort of linkage between revenue and cost of production. Can you elaborate on this argument?
ALLISON BISHOP: Yeah. Just to describe the issue of what counts as a user: we get our SIP data through another technology provider. That technology provider is directly connecting to the SIP, so they are a direct customer of the SIP, and we are a customer of theirs.
So from the SIP's perspective, we are just a down-the-line recipient. They don't have to interact with us at all. We don't have to touch their systems directly. So the only additional cost to them is the overhead of making sure our provider is paying them the correct redistribution fees for our usage.
Now, what we pay that intermediary provider for actually taking those data packets from the SIP and connecting them to our systems is much, much less than what we're paying the SIP for those mandated fees due back to the exchanges.
So just for context: the people directly interfacing with us, who have to maintain another connection and send us the data packets, are incurring the additional cost of dealing with us, and they are charging us much, much less than what we're being charged by the SIP. And that part is more subject to competitive forces, because we have a choice of which technology provider we use for that intermediary layer.
In terms of the relationship to cost, my feeling is that when you have prices, like the originating SIP fees, that are not subject to competitive forces, then it's reasonable to compare them to costs and ask whether they are roughly in line with the costs of generating that data and sending it out to the next line of users and technology providers. And I don't think we have any reasonable evidence that those prices are in line with costs.
PETER HAYNES: And we keep hearing the argument, particularly out of NASDAQ, that you can't separate those costs out properly, that they're part of the so-called platform. I've still had a difficult time coming to grips with that argument.
But I know that at the 2018 roundtable the SEC held, which Brett Redfearn had organized, Doug Cifu of Virtu famously stood up and said he could run the SIP for 10% of what is being charged and still be profitable. So that's a nice baseline to start with, and those are pretty good margins.
Now, governance. This is interesting, because maybe this is the opening we all need to see some pressure on costs. You believed that the oversight of the SIP should be broadened to include users, including large brokers. But you did worry about the possibility that those same large brokers would push for a SIP that was focused on speed. Can you elaborate a bit on that, maybe relating back to the point you made earlier?
ALLISON BISHOP: Yeah. And so this is because large brokers who've already made investments in the speed race, so in maintaining their own co-located servers and their own fast connections and all those things, have no incentive to turn around and reduce costs or do something differently.
And, in fact, there's a general effect here: large brokers have incentives to keep costs very high as a barrier to entry for new entrants like small brokers. I should, of course, remind everyone, for full transparency, that we are a small broker, so we are not a neutral voice in this debate.
But it is definitely my concern that what large brokers want may be to keep the barriers high, and not necessarily to bring down their own costs, because bringing down their own costs also brings down their competitors' costs and perhaps exposes them to more competition. So it's not clear to me how that collection of incentives would shake out in terms of larger brokers wanting the SIP to be more efficient versus, let's say, remaining as expensive.
PETER HAYNES: And it comes back to that interesting issue you mentioned earlier where those large brokers are already consuming prop feeds from exchanges, and so they have an area that they want to protect there.
OK. So you published your initial letter, and then the SEC proposed rules on market data content and market data governance. And both of these rules, not surprisingly we can say from the vantage point of 2024, were subject to litigation by exchanges concerned about their commercial interests.
In terms of market data content, as is pretty well known by now, the SEC's final rule, which survived litigation, required that the SIP operating committee come up with fees for the enhanced SIP, the one that would have depth of book, auction information, and odd lots. What did you think of the final rule, and can you explain why you felt there needed to be more engineers involved in the design of data feeds?
ALLISON BISHOP: I'm a big fan of a lot of aspects of the final rule. I think the inclusion of odd lot data and auction data and depth of book data is all great. From a data science perspective, I want all the data in there that I can possibly get. And I think those additional data sources are important and should be available more cheaply than they are today from the proprietary feeds, so I was very happy to see all of that in there.
The reason I complained about engineers not being more involved in the decisions is that, ultimately, I think there can be some misconceptions about what makes data easy or hard to provide, or cheap or expensive to provide, because there's a natural tendency to think about these things on a single scale: more data should be more expensive.
But from an engineering perspective, these are all delicate trade-off decisions, because there are different kinds of resources that may be the bottleneck in making something hard or easy: computation time, human software development and maintenance time, and data storage and transmission. Those are all different costs that ultimately control what the cost of providing a data feed is.
In particular, it always feels strange to me when people make this proposal to limit depth of book data to a certain number of levels. And the argument always seems to be, well, if the top levels contain more of the information, then that's where most of the value is, and it's fewer messages, it's less data, so that'll make it cheaper.
That's not necessarily the case, because now, when a data packet comes in, in order to decide whether it should be distributed, you have to check whether it's part of the current top five levels or not. So you're not really making something fundamentally easier; you're trading off computational checking, and perhaps latency, for data storage or data transmission.
And so these trade-offs are very delicate at the scale we're talking about. And we have a very latency-sensitive system for distributing this data. We have a very high amount of data being distributed every day. So knowing which of these specifications makes the data feed easier to maintain or cheaper to provide, these are really nuanced engineering decisions.
And it's so frustrating to watch people just throw around things like, oh, five levels seems like a good compromise. This is an engineering decision. We could at least ask some engineers.
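A hypothetical sketch of the engineering trade-off Allison is pointing at, assuming nothing about how real SIP processors are built: capping a feed at the top N levels doesn't remove work, it adds a per-update membership check, and the processor still has to track full depth, because a deep level can enter the top N when a higher level clears.

```python
# Hypothetical sketch: capping a feed at the top N levels adds work per
# update rather than removing it. (Book layout and names are invented.)
N_LEVELS = 5

def should_publish(bids: dict, price: float) -> bool:
    """bids maps price -> aggregate size for the full book. An update at
    `price` is distributed only if it falls inside the current top N bid
    levels, so every incoming packet now pays for this check."""
    top = sorted(bids, reverse=True)[:N_LEVELS]  # extra work per update
    return price in top or len(bids) < N_LEVELS

# Note the full book must still be maintained anyway: if the 100.00
# level clears, the old 6th level (99.95) becomes publishable.
bids = {100.00: 500, 99.99: 300, 99.98: 200, 99.97: 100,
        99.96: 400, 99.95: 250}
print(should_publish(bids, 99.95))    # False: 6th level, filtered out
print(should_publish(bids, 100.00))   # True: top of book
```

A full-depth feed, by contrast, is a plain pass-through; the capped feed trades that simplicity for computation and latency on every packet, which is exactly the kind of non-obvious trade-off she argues engineers should weigh in on.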
PETER HAYNES: EMSAC, the committee that the SEC put together, should have thought about including engineers, just to get that perspective. I know they had academics and practitioners and market makers and the like, but you make a very good point, because I'm the dummy who doesn't really understand and appreciate those trade-offs.
We know the SEC won the market data content litigation. But, again, the whole market data reform was split into two different rules, one on data content and one on governance. So we got the ruling in favor of the SEC, as I say, on the market data content. But the DC Circuit Court ruled in favor of the exchanges who were suing the SEC on the governance side, and this left the industry with only a partial win.
We have now got, or will get, more data on the SIP, but we don't have a mechanism to control the costs of this so-called enhanced SIP, because the incumbent exchange groups owned enough exchange votes to control the operating committee.
Sure enough, the fee proposal for the enhanced feed, or SIP2, which the operating committee initially published, was outlandish. Some participants suggested that the proposed costs of the enhanced SIP were actually higher than if you paid for this content directly from the exchanges. In fact, the smaller exchange members of the SIP dissented on the fee proposal.
I found your letter in response to this fee proposal very enlightening. You compared the costs of the SIP versus the enhanced SIP relative to your firm's overall costs. So why don't you tell us, first of all: how much would subscribing to the enhanced SIP have changed the cost structure for your firm?
ALLISON BISHOP: Right. So that $14,000 a month that we're currently paying for the SIP would go to above $70,000. That is an instant, very high multiple on what we would be charged, and it would, of course, consume a much higher percentage of our current costs. I think more than a third of our current monthly costs would just be going to the SIP.
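Purely for illustration, the arithmetic implied by those figures (the dollar amounts are Allison's; the percentages follow from the $1.5 to $2 million annual budget she mentioned earlier):

```python
# Back-of-the-envelope check of the proposed SIP2 jump (figures as quoted).
current, proposed = 14_000, 70_000
monthly_budget_low = 1_500_000 / 12    # ~$125,000
monthly_budget_high = 2_000_000 / 12   # ~$167,000

print(f"multiple: {proposed / current:.0f}x")       # 5x
print(f"share of monthly costs: {proposed / monthly_budget_high:.0%} "
      f"to {proposed / monthly_budget_low:.0%}")    # 42% to 56%
# Consistent with, indeed above, "more than a third of our current
# monthly costs."
```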
PETER HAYNES: And, again, just in terms of competitiveness and barriers to entry: if others in your space were a little bit bigger and able to absorb those costs, they would then arguably be working with more robust data than you would have on the legacy SIP. That was a challenge. So those numbers are ridiculous.
OK. So in your letter as well, and I found this quite interesting, you compared the cost of market data to core services that society needs, such as water and electricity. And you suggested that regulators need to figure out whether data is a utility or a profit-maximizing service. What do you mean?
ALLISON BISHOP: Yeah. The way I think about this is that there are certain kinds of things that we in society have agreed we don't want treated as if profit maximizing is the only goal. We don't treat the distribution of clean water as something where businesses are allowed to behave in whatever way would be profit maximizing.
These are things that are so valuable and so necessary to the functioning of our society that if we let companies just arbitrarily seek profit in distributing them, especially when we have very unequal starting points in terms of the resources different people have, maximal value extraction might end up with a lot of people not having access to water. And we collectively decided that that would be a bad outcome, even if it's profit maximizing or value-extraction maximizing.
And I think it's an interesting question: should market data be considered a thing where the profit-maximizing outcome, or the revenue-extraction outcome, is therefore the best outcome, even if that ends in a world where we have a lot of consolidation, very few new entrants, very few competitors, and we're tilting the playing field even further toward bigger and bigger firms and conglomerates providing services?
And personally, I think market data, the information flowing through our financial system, is much more of a utility. It's much more of a lifeblood that keeps everything nourished and keeps everything going. And that, to me, is more like the case of water or electricity or this thing that we all need as an underpinning infrastructure in order to do anything.
The original idea of the SIP, in fact, seems to be more in line with that philosophy, that it's public data, that it's a public good, that it is an infrastructure that should support a wide variety of participants using it in different ways. And I think if we solely focus on profit maximization for those providing these market data services, then that gets lost and that doesn't end in the right place in terms of having access to this kind of information.
PETER HAYNES: Well, it isn't surprising, but the SEC actually rejected that exchange-proposed SIP2 fee, thankfully, and sent it back to the exchanges for a do-over. But unfortunately, there seems to be no sense of urgency to try again, and no word from the SIP operating committee that a revised fee proposal is imminent.
Meanwhile, the SEC reproposed its governance rule, and that rule is now final. And in it, the main three exchange groups have lost control of the operating committee.
In fact, it's interesting to read the comments from the big exchanges on that final rule. They're now saying, well, the world has changed from 2020 to 2024, and requiring 15% market share to get two votes is too high, because the new rules on governance won't just give you a vote for every one of the medallions you have. So this is a very interesting debate.
So it is possible that the exchanges will sue again. But my bigger question for you is this: the world is changing here, and I'm a little worried that those smaller exchanges, like IEX and MEMX, who were the ones that abstained from the first fee proposal, might actually become drunk on the prospect of profit maximization from data revenues, no longer be that voice of reason, and potentially fight against fee changes or fee reductions on the SIP alongside the big exchanges. What do you think?
ALLISON BISHOP: Yeah. I definitely share that concern. I certainly have no special insight into how the small exchanges are thinking about this these days. But I think that's a very natural concern that just comes out of the fact that exchanges are for-profit entities. And if you are relying in your system on for-profit entities doing the right thing instead of maximizing profit, then things are going to tend to go bad in the long term even if they hold the line for a while.
And I think, even for small exchanges, doing things differently and fighting the good fight on this front is the right thing to do. It's a differentiator and a way to show that they serve a purpose in existing and being different from the larger exchange families.
But if all the incentives are aligned for them to fall in behind the larger exchanges and collect large data fees, it's hard to imagine, given the history of how things work in our industry, them resisting that pressure forever.
PETER HAYNES: Well, it's interesting, just stepping back, and I know there's private-room stuff that goes on at the SIP operating committee and among its advisors as well. But under the new governance structure, no exchange entity can have more than two votes. You have to have 15% share of volume to get those two votes, and only one of the three major exchange groups is at that level today.
So if each exchange gets a vote, and you have FINRA, LTSE, IEX, MIAX, MEMX, 24X now approved, and potentially Dream and Green, and you only need 2/3 approval now for rule changes, or fee changes and the like, then you do have enough smaller exchanges, if they all went on one side of the ledger, to effect change.
But you can just imagine the back-room discussions that'll go on between the incumbent exchanges and any of the names on that list. So I'm hopeful, but I'm also, I think, practical enough to worry about whether we're actually going to see the benefits of the reduction in governance control.
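To make Peter's vote math concrete, here is a toy tally. The allocation below is an assumption for illustration only, in particular which major group clears the 15% bar and whether every entity listed actually holds a vote; it is not a statement of the operating committee's real composition.

```python
import math

# Toy tally under ASSUMED allocations: one vote per exchange group or
# SRO, two votes for any group with >=15% market share (per the
# conversation, only one major group qualifies today), 2/3 to approve.
votes = {
    "major_group_A": 2,   # assumed to be the one group above 15%
    "major_group_B": 1, "major_group_C": 1,
    "FINRA": 1, "LTSE": 1, "IEX": 1, "MIAX": 1, "MEMX": 1, "24X": 1,
    "Dream": 1, "Green": 1,   # Peter's potential future entrants
}
total = sum(votes.values())
needed = math.ceil(2 * total / 3)
majors = sum(v for k, v in votes.items() if k.startswith("major"))
smaller_bloc = total - majors

print(f"total: {total}, needed for 2/3: {needed}, "
      f"smaller-exchange bloc: {smaller_bloc}")
# total: 12, needed: 8, smaller bloc: 8. Under these assumptions the
# smaller exchanges can carry a change only if every one of them votes
# together, which is exactly Peter's "one side of the ledger."
```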
ALLISON BISHOP: Well, I think, though, at least it makes it possible, right? At least it moves the status quo in some direction. Because without that, where you had this very concentrated control, it was not hard to imagine that things would just keep rolling along the way they were. So I'm also hopeful, but, yeah, I think it's hard to predict where it will go.
PETER HAYNES: And I heard a lot of stories in the old days of the SIP operating committee sitting there, and someone would come in and propose a fee reduction, and they'd go around the room and go, yeah, great idea, great idea, and then one person says no, and it's gone.
ALLISON BISHOP: Right.
PETER HAYNES: And so you never get anything done. So the fees were never reduced from day 1. I know they make a point of the fact they've never raised SIP fees, but Harold Bradley wrote that in a piece for the Kaufman Institute a few years ago, just reminding everyone, they've never lowered fees.
And you're the one who understands data better. My guess is it's a lot cheaper to maintain-- I'm not an engineer like you-- but it's a lot cheaper to maintain that data today than it was 20 or 30 years ago, even if the scale is that much higher.
ALLISON BISHOP: And isn't that a sign that the fees are probably too high, that they've never had to raise them, right?
PETER HAYNES: Exactly.
ALLISON BISHOP: Because raising fees is a way that you recoup costs when you set them too low, right? So the fact that they started so high that they've never had to revisit them is probably a sign that we've been overpaying this whole time.
But, yeah, I mean, like in almost every other industry, over the last decades, we've seen the effects of new technologies reducing costs. We've seen the cost of memory for computation, the cost of physical devices, the cost of servers, the cost of availability and infrastructure through cloud, like, we've seen these costs come down. And I think it is a sign of a broken pricing structure that we have not seen those costs come down in market data.
And on the cloud issue in particular, something funny happened as I was scoping out our data feeds for Proof when we were starting, because we have built our system in the cloud. We are in the AWS cloud, because we believe that, even with the latency trade-off, we can route effectively from the cloud, as long as we use the right routing strategies and the right tools, and that we can operate our business effectively for our customers' outcomes from the cloud.
So we built it that way from the beginning, which gives us huge savings in efficiency from not having to maintain our own physical servers, and also great scalability as we grow: we can spin up new servers intraday and change the distribution of load across our system in real time. It's been a great setup for us as a modernized technology platform.
How this interacts with market data feeds is quite complicated, because many proprietary feeds have provisions saying you pay by the server or you pay by the human user. And our system, of course, is set up as an automated process across servers that are being swapped in and out by Amazon and by our programs.
So even the fee structure that has been set up for market data was not set up to contemplate cloud, and it's not at all clear what happens to the billing when you turn on a proprietary data feed with this kind of structure in a cloud system.
Do you pay for every server the data ever touches? Do you pay by the number of servers it's touching simultaneously? Some other arrangement? Depending on the answer, you can get everything from reasonable fees to crazy fees just for using the cloud.
And so I think it's pretty clear that none of this stuff was designed in a way that mapped well onto emerging technologies, and we haven't really seen the benefits of that as a result.
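A hypothetical illustration of that billing ambiguity, with made-up fee amounts and server counts: the same autoscaling system can be billed wildly differently depending on how a per-server fee schedule interprets "server".

```python
# Made-up numbers throughout; only the shape of the problem is real.
PER_SERVER_FEE = 500          # assumed monthly fee per "server"

concurrent_servers = 10       # instances running at any one moment
distinct_ids_per_month = 300  # instances recycled by autoscaling

by_concurrent = PER_SERVER_FEE * concurrent_servers        # $5,000
by_ever_touched = PER_SERVER_FEE * distinct_ids_per_month  # $150,000

print(f"billed on concurrent servers:   ${by_concurrent:,}")
print(f"billed on every server touched: ${by_ever_touched:,}")
# Same system, same data, a 30x gap, purely from a contract term that
# never contemplated cloud autoscaling.
```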
PETER HAYNES: Yeah. And in that same vein around cost, a lot of people are pointing out that, for the proprietary feeds, the exchanges are actually applying for substantial fee increases.
They're not touching the SIP, but each of them is asking for substantial fee increases to reflect what they're calling inflation. And in some cases, they're getting very big approvals from the commission to raise fees like that. I know that's generating some concern in the industry.
So we're finishing up here. And it's interesting that the incoming chair of the SEC, Paul Atkins, was actually a dissenter on the original Reg NMS rule in 2005, which included market data. One of his main arguments was concern over for-profit exchanges charging serious rents on market data. So he really was prescient in his concern.
I know one of Brett Redfearn's biggest frustrations when he was head of trading and markets was that his efforts on market data reform didn't get done during his administration, and when Gensler took over, they did not finish the projects that Brett and Jay Clayton had started in 2018.
Do you think the new administration will finish what Brett and Jay started and take on the exchanges on market data fees by becoming, at the end of the day, the fee regulator that the SEC clearly needs to be?
ALLISON BISHOP: I certainly hope so. But as an engineer and not a politician, I don't really feel qualified to speculate. But that is certainly a thing I would like to see happen.
PETER HAYNES: Well, yeah, me too, and I guess we can hope for that. I know a lot of people are asking whether Atkins is going to reopen Reg NMS and go back to the debate on trade-throughs, which he was opposed to initially.
And even though the crypto world is spending all its time talking about how great it is to have Atkins in there, I think it's nice to know, from everyone I've talked to, that he's very, very knowledgeable on equity market structure and is going to join that conversation with his feet on the ground already. So hopefully market data is part of that discussion.
But, Allison, I have to say, on behalf of TD Securities, thank you so much for coming on. And I've got to recommend to all the listeners: if you don't follow Allison's blog or haven't read any of her comment letters to the SEC, they're must-reads.
In particular, we're going to title this episode A Market Data Makeover for the US, and I encourage all of you to read the similarly titled blog post Allison wrote about the makeover of the SIP. She tells a great story in there, so it's worth everyone's read.
I know market structure is a fairly nerdy topic, and Allison and I might be two of only a small list of people who are this interested, but hopefully everyone listening today has learned a little bit. I certainly have. So, Allison, thank you very much.
ALLISON BISHOP: Thank you for having me. I really appreciate it.
[MUSIC PLAYING]

This podcast should not be copied, distributed, published or reproduced, in whole or in part. The information contained in this recording was obtained from publicly available sources, has not been independently verified by TD Securities, may not be current, and TD Securities has no obligation to provide any updates or changes. All price references and market forecasts are as of the date of recording. The views and opinions expressed in this podcast are not necessarily those of TD Securities and may differ from the views and opinions of other departments or divisions of TD Securities and its affiliates. TD Securities is not providing any financial, economic, legal, accounting, or tax advice or recommendations in this podcast. The information contained in this podcast does not constitute investment advice or an offer to buy or sell securities or any other product and should not be relied upon to evaluate any potential transaction. Neither TD Securities nor any of its affiliates makes any representation or warranty, express or implied, as to the accuracy or completeness of the statements or any information contained in this podcast and any liability therefore (including in respect of direct, indirect or consequential loss or damage) is expressly disclaimed.


Photo of Peter Haynes

Managing Director and Head of Index and Market Structure Research, TD Securities


Peter joined TD Securities in June 1995 and currently leads our Index and Market Structure research team. He also manages some key institutional relationships across the trading floor and hosts two podcast series: one on market structure and one on geopolitics. He started his career at the Toronto Stock Exchange in its index and derivatives marketing department before moving to Credit Lyonnais in Montreal. Peter is a member of S&P’s U.S., Canadian and Global Index Advisory Panels, and spent four years on the Ontario Securities Commission’s Market Structure Advisory Committee.