Wednesday, March 22, 2023

Why Trusted Blockchains Cannot Support CLOBs

Today's most efficient and liquid exchanges are central limit order books (CLOBs). A CLOB is a computerized system that aggregates and matches buy and sell orders for a particular financial asset, such as a stock, currency pair, or commodity, in real time. Market participants place orders to buy or sell a specific asset, and these orders are listed in the book according to price and time priority. The system automatically matches buyers and sellers at the best available price, filling limit orders in the order they arrived (i.e., price-time priority). Orders outside the current market price sit on the book as resting limit orders, available to traders until they are canceled.
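The matching logic can be sketched in a few lines of Python. This is a toy illustration of price-time priority, not a real matching engine; the class and method names are my own.

```python
from collections import deque

class Clob:
    """Toy central limit order book with price-time priority.

    Each side maps price -> FIFO queue of [order_id, qty]; within a
    price level, earlier orders fill first (time priority)."""

    def __init__(self):
        self.bids = {}  # price -> deque of [order_id, qty]
        self.asks = {}

    def limit(self, order_id, is_buy, price, qty):
        """Match against the opposite side, then rest any remainder."""
        book, opp, crosses = (
            (self.bids, self.asks, lambda best: best <= price)
            if is_buy
            else (self.asks, self.bids, lambda best: best >= price)
        )
        fills = []
        while qty > 0 and opp:
            best = (min if is_buy else max)(opp)  # best opposing price
            if not crosses(best):
                break
            queue = opp[best]
            while qty > 0 and queue:
                resting = queue[0]
                traded = min(qty, resting[1])
                fills.append((resting[0], best, traded))
                resting[1] -= traded
                qty -= traded
                if resting[1] == 0:
                    queue.popleft()       # fully filled, leaves the book
            if not queue:
                del opp[best]
        if qty > 0:
            book.setdefault(price, deque()).append([order_id, qty])
        return fills
```

Two asks resting at the same price fill first-in-first-out: a marketable buy exhausts the earlier ask before touching the later one.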

[Figure: A Limit Order Book]

While these represent the gold standard, they require low latency (i.e., speed) that is only attainable because they are centralized (ergo, CLOB and not just LOB). Centralization allows market makers to physically place servers running their market-making algorithms next to the exchange's servers, enabling communication within milliseconds. No decentralized trading platform can directly compete with these platforms as a price discovery mechanism.

To understand why a CLOB cannot work on a blockchain, it is helpful to understand the economics that drive its equilibrium. A CLOB has three types of traders: uninformed, informed, and market makers. The informed invest in data, models, and hardware to identify temporary mispricings and use their comparative advantage to snipe stale limit orders. The uninformed are ignorant for rational and irrational reasons: they may simply want money to buy a car (liquidity traders), or they may be trading on irrelevant information they believe is valuable (unintentional noise traders). As informed and uninformed traders do not show up simultaneously, market makers arise to provide liquidity continuously by posting resting limit orders to buy and sell.

In equilibrium, all groups generate benefits equal to their costs. The informed trader’s costs—investments in hardware and statistical algorithms—are balanced by revenue from adversely selecting stale market maker limit orders. Uninformed traders pay the market maker by crossing the spread, benefiting from convenient, quick trading. Market makers post knowing they will trade with both types of traders, setting limit orders such that the revenue from the uninformed balances what they lose to the informed.
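As a back-of-the-envelope sketch of that break-even condition (the numbers below are hypothetical, not from the text):

```python
def breakeven_half_spread(uninformed_fills, informed_snipes, loss_per_snipe):
    """Half-spread the maker must earn per uninformed fill so that spread
    revenue exactly offsets adverse-selection losses to the informed."""
    return informed_snipes * loss_per_snipe / uninformed_fills

# Hypothetical day: 900 uninformed fills and 100 informed snipes costing
# $0.05 each -> the maker needs roughly half a cent per uninformed fill.
s = breakeven_half_spread(900, 100, 0.05)
```

Anything that raises losses to the informed, or drives away uninformed flow, forces this break-even spread wider.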

There are many scenarios where latency is costly, but one applied to a market maker providing resting limit orders should suffice. A market maker places a two-sided order to buy or sell 100 shares of XYZ stock, quoted at a bid-ask of $20.17-$20.18, so its bid rests at $20.17. Assume the stock will move up or down $0.05 before the order can be canceled. If the bid gets filled, it will only be because the stock is now trading at $20.12-$20.13, meaning the maker bought at $20.17 what it can now sell only at $20.12, a $0.05 loss; if the price went up $0.05, the $20.17 order to buy would probably not be filled. This is called ‘adverse selection’: conditional upon getting filled, you paid too much or sold too low. It generates a loss profile for market makers like that of sellers of straddles, or a liquidity provider’s impermanent loss.
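The conditional-on-fill asymmetry can be made concrete with a small simulation using the numbers from the example above (the fill model here, where up-jumps never fill and down-jumps always do, is a deliberate simplification):

```python
import random

def stale_bid_fill_pnl(bid=20.17, jump=0.05, trials=100_000, seed=7):
    """Conditional-on-fill P&L of a resting bid too slow to cancel.

    Down-jump: an informed trader hits the stale bid, and the maker can
    now only sell at the new, lower bid (bid - jump).  Up-jump: the
    resting bid is left unfilled and contributes no P&L.  Conditional on
    being filled, the maker therefore always loses the jump size."""
    rng = random.Random(seed)
    pnl, fills = 0.0, 0
    for _ in range(trials):
        if rng.random() < 0.5:            # price jumps down
            pnl += (bid - jump) - bid     # bought at bid, worth bid - jump
            fills += 1
    return pnl / fills
```

Unconditionally the price move is a coin flip, but the book only trades when the flip went against the resting order.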

The effect of higher latency on a central limit order book is a classic example of Akerlof’s ‘Market for Lemons’ (Quarterly Journal of Economics, 1970). [1] In that paper, he analyzes markets where asymmetric information drives the better types out, until the only transactions remaining are those with negative value and the market collapses (i.e., no trades).

The lemons problem applied to limit order books is the following. High latency leads to market makers suffering greater adverse selection, as it amplifies the relative speed advantage of informed traders, so market makers widen their spreads. Higher spreads discourage uninformed traders. With fewer uninformed traders, the market maker must widen his bid-ask spread further, earning more per uninformed trade to cover his losses to the informed. This discourages still more uninformed traders, creating a positive feedback loop until none are left.
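The loop can be caricatured in a few lines. The linear demand curve and all parameters below are hypothetical; the point is only the dynamic of break-even spreads chasing shrinking uninformed flow.

```python
def unravel(uninformed0=1000.0, informed_loss=100.0, elasticity=4.0,
            max_rounds=20):
    """Lemons feedback loop: the maker sets the break-even half-spread
    given current uninformed flow, the wider spread drives uninformed
    traders away (linear demand), and the cycle repeats until no
    uninformed traders remain and the market collapses."""
    u, history = uninformed0, []
    for _ in range(max_rounds):
        if u <= 0:
            break
        spread = informed_loss / u                     # break-even half-spread
        u = max(0.0, uninformed0 * (1 - elasticity * spread))
        history.append((round(spread, 4), round(u, 1)))
    return history
```

With these parameters the spread ratchets from $0.10 to $0.30 in three rounds while uninformed flow falls from 1,000 to zero; with a smaller adverse-selection loss the same dynamic instead settles at a stable interior spread.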

Another way to think about the necessity of liquidity traders focuses on the zero-sum nature of trading without them. With no liquidity traders, the remaining participants are playing the unattractive game of trying to outsmart and out-speed others who have made the same commitment. The Milgrom-Stokey ‘no-trade theorem’ (Journal of Economic Theory, 1982) states that if all traders are rational and this is common knowledge, anyone who makes an offer must have valuable private information, or else he would not be making the offer—so no one takes the other side. Similarly, Grossman and Stiglitz’s ‘Impossibility of Informationally Efficient Markets’ (American Economic Review, 1980) shows how, without liquidity traders, no one has an incentive to put information into markets because the other rational traders infer what he knows via his market demand. There are no trades because every order is presumed to be informed and thus unprofitable for the other side.

A deficiency of liquidity traders creates a positive feedback loop that causes markets to ultimately unravel. There will not be enough liquidity traders to support an active set of market makers, who need the uninformed retail flow to offset their losses to the informed traders. The high spreads and meager volume on decentralized limit order book exchanges are consistent with this result (e.g., Augur).

One can imagine that layer-2 blockchains will eventually become fast and secure enough to prevent this problem. However, even in this case, miners or validators can sequence transactions with some discretion. It takes 60 milliseconds for light to travel from Tokyo to San Francisco, creating a large lower bound on this discretionary time window. Successful market makers on modern CLOBs have reaction speeds of 5 milliseconds, implying the feasibility of front-running such a system with impunity (see Aquilina, Budish, and O’Neill, 2020).

A CLOB has price-time priority, so it fills limit orders first by price, and within a given price using first-in-first-out logic. Even without a minimum tick size, the sequencers could front-run limit orders by posting orders conditional upon the prices in newly arriving orders. There is no way for the layer-2 validators to agree on the time sequence of transactions if the chain is configured to prevent censorship, which requires a globally distributed set of validators. Given the disproportionate advantage of being first on limit order books, this unavoidable sequencing discretion makes transparent competition impossible, enabling and encouraging corruption.
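A minimal sketch of that sequencing discretion (the function and order representation are hypothetical; real sequencers handle signed transactions, not tuples):

```python
def sequence_with_frontrun(mempool, sequencer="SEQ"):
    """A discretionary sequencer orders the batch however it likes.
    Here it inserts its own buy just ahead of every incoming buy, at
    the same limit price; FIFO (time) priority within a price level
    then fills the sequencer first.  Orders are (sender, side, price)."""
    batch = []
    for order in mempool:
        sender, side, price = order
        if side == "buy" and sender != sequencer:
            batch.append((sequencer, "buy", price))   # jump the queue
        batch.append(order)
    return batch
```

No external observer can prove the ordering was manipulated, since any sequence is consistent with plausible network delays.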

Low-latency chains like Solana, meanwhile, are centralized, which invariably leads to corruption, per Acton’s dictum that power corrupts. This centralization is not obvious, as such chains can point to their validator counts (e.g., EOS has 21 block producers), but this Nakamoto coefficient is misleading because the validators on low-latency blockchains have to work closely together, and they are invariably controlled by a central agent. When a blockchain representative proclaims a bald-faced lie about a foundational crypto principle, its developers slide down the slippery slope to more lying and, ultimately, a cesspool of deception. In markets dominated by unaccountable insiders, we should expect every sort of malicious trading tactic (e.g., FTX pumped its Serum token via its low-latency Project Serum exchange on Solana). This leaves blockchain CLOBs only for tokens with no alternatives, like markets for NFTs and shitcoins.


[1] In 1970s slang, a ‘lemon’ is a bad type, as opposed to a good type, a ‘peach.’ In this application, the market maker’s trades with informed traders generate losses (lemons), which he hopes to offset with gains from his trades with the uninformed (peaches).
