Thursday, February 29, 2024

AMM LP Unprofitability: irrationality, volatility premium, or passive trading?

A puzzling aspect of automated market makers (AMMs) is that LPs, in aggregate, lose money, and no one seems to care. Uniswap is the most prominent AMM developer here, currently worth $9B. Uniswap's docs page only indirectly addresses LP profitability, pointing to theoretical papers with no data (link) or anecdotal empirical blog posts from 2019 (link).

If you search this topic and look for an empirical analysis, you generally get a discursive analysis that is not even wrong. For example, one paper states:

"Our supporting data analysis of the risks and returns of real Uniswap V3 liquidity providers underlines that liquidity providing in Uniswap V3 is incredibly complicated, and performances can vary wildly."

This could be said about anything one does not understand. The most focused prominent empirical study was done in 2021 by Topaze Blue, and even there, the best they could say was that half of LPs lose money, which is almost meaningless: By count or capital? Were the losses of greater or lesser magnitude than the profits? Was it only the stupid ones? Invariably, the few empirical investigations out there break the data into a dozen subsamples and present hundreds of points in a scatter plot but no table with a simple "average LP profitability." 

I have been posting about AMM LP profitability since 2022, and the trend is consistent: major pair LPs lose money.[1]

LP profitability is a significant problem. Transformational technologies like the internet and the automobile experienced consistent exponential growth for decades, while AMM usage peaked two years after its introduction and has not recovered. This is related to AMM profitability, which is not simply a question of fees, as indicated by the comparable loss rates for Uniswap's 30 bp (basis point, 0.3%) pool and its 5 bp pool, and those for TraderJoe's 22 bp pool. LPs in the capital-efficient pools consistently lose money.

Interestingly, Uniswap's initial v2 pool, which uniquely has only an unrestricted range, has been consistently profitable for LPs since it started in April 2020, but the new capital-efficient restricted range approach immediately overshadowed it, so it's of limited relevance today. However, this gives a clue as to what drives LP profitability in the common restricted range pools, as it is clearly not fundamental to the blockchain (e.g., MEV).

To recap, the LP pnl can be broken into three parts.

LP pnl = price change of tokens + fees - convexity costs

The effect of the token price change on the underlying pool dominates the other components by a factor of 100. As this risk is orthogonal to the LP's position and hedgeable, we can eliminate this factor to see LP profitability more clearly. That gives us

LP pnl = fees - convexity costs

Fees are easy to see, but convexity costs are subtle and do not show up unless you go out of your way to calculate them. For example, if you deposit 1 ETH and $2000 in a pool, and the price of ETH rises to $2400, you pull out 0.91 ETH and $2191 for a profit of $382 without fees. You made money, but it is not obvious you lost $18 due to convexity costs (see here for videos on calculating pnl).
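The arithmetic in the example above can be checked in a few lines. This is a minimal sketch of a v2-style constant-product pool using the numbers from the text; it ignores fees.

```python
import math

# constant-product pool: x * y = k, with price p = y / x
x0, y0 = 1.0, 2000.0           # deposit 1 ETH and $2000 at p = 2000
k = x0 * y0

p1 = 2400.0                    # ETH rises to $2400
x1 = math.sqrt(k / p1)         # ETH left in the position: ~0.91
y1 = math.sqrt(k * p1)         # USDC in the position: ~2191

lp_value = x1 * p1 + y1                     # value of the withdrawn tokens
profit = lp_value - (x0 * 2000.0 + y0)      # ~$382 vs. the initial $4000 deposit
hodl_value = x0 * p1 + y0                   # value had you just held the tokens
convexity_loss = hodl_value - lp_value      # the hidden ~$18 cost
```

The $18 only appears when you compare against holding the tokens outside the pool, which is why it is easy to miss.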

Cause #1: Irrationality

Crypto is filled with conspicuous irrationality, where celebrities and shills promote coins without a legitimate use case, enabling insiders to make a lot of money. Yet this should be de minimis in liquid markets with open entry and exit. However, there is a subtle selection bias akin to the winner's curse, where an auction winner tends to overpay because the most optimistic bidder necessarily wins. Similarly, perhaps LPs tend to underestimate convexity costs or overestimate volume. If expectations are normally distributed, and the most optimistic become LPs, they could lose money due to their collective overconfidence in their subjective estimations.[2] That would seem just as likely in Uniswap v2, where we see profits, so it cannot be a sufficient explanation by itself.

The capital efficiency in Uniswap v3 levers a v2 LP position. Assume an LP considered a pool where the v2 return on investment, net of convexity costs, generates a 2% APY. Unlike the 100%+ returns promised by many YouTube influencers, such a return is not absurd. With concentrated liquidity, restricting the range to +/- 20% around the current price reduces the required capital by about 90%, like 10x leverage. This turns a 2% return on a v2 pool into a 21% return. Below, we see the effect of restricted ranges on the base 2% APY. Each LP position has the same capital invested, but as the range narrows, the returns are amplified.

Market value and liquidity for various ranges

Current price = 2000

APY for v2 is assumed to be 2%
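A back-of-the-envelope check on the 10x figure: compute the capital needed to fund a given amount of liquidity over a +/- 20% range versus a full-range v2 position, using the standard v3 token amounts for an in-range position. I assume an arithmetic +/- 20% band (the text does not specify arithmetic vs. geometric) and the 2% base APY from above.

```python
import math

p = 2000.0                     # current price
pa, pb = 0.8 * p, 1.2 * p      # +/- 20% range (arithmetic, an assumption)
L = 1.0                        # one unit of liquidity

# capital required to fund L units of liquidity:
v2_capital = 2 * L * math.sqrt(p)                       # full (unrestricted) range
token_a = L * (1 / math.sqrt(p) - 1 / math.sqrt(pb))    # risky token amount
token_b = L * (math.sqrt(p) - math.sqrt(pa))            # numeraire amount
v3_capital = token_a * p + token_b                      # restricted range

leverage = v2_capital / v3_capital       # ~10.4x, i.e., ~90% less capital
v3_apy = 2.0 * leverage                  # a 2% v2 APY becomes ~21%
```

Narrower ranges push the leverage, and thus the apparent APY, higher still, which is the amplification shown in the table above.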

Thus, if one thinks the pool is as profitable as a v2 pool, and given the average range is +/- 10%, that's an attractive return. A minor overestimation can become compelling in this framework. This plays perfectly with the ubiquitous overconfidence bias: not only will the overoptimistic LP see a significant return, but they will also reason they can outperform their LP peers by applying their capital to a more concentrated range than average. For example, if Bob puts down $1k in a 10% range, Alice can put $1k into a 5% range, double Bob's profit, and proudly boast she is the best at LPing.

The problem is subtle because LP profitability is linear in liquidity at the margin, i.e., in a partial equilibrium analysis. In general equilibrium, more liquidity will not change volume much: Bob and Alice get the same revenue split whether they each supply 10 or 100 units of liquidity, but their convexity costs increase 10-fold with certainty. Uniswap v3 is especially sensitive to overconfidence in a way that does not apply to v2, where this mechanism is absent.

Cause #2: Inverse Vega Premium

In tradfi, there's a return premium to being short convexity, often called the volatility premium. One can see it in variance swaps, straddles, and futures on implied volatility (e.g., short VXX ETF). It is present in all developed equity markets and has a similar risk to the stock market, underperforming in crises like the last big recession in 2008 or the brief collapse in March 2020. It makes sense that a short volatility/gamma position will generate a premium like the stock market because it has a similar risk profile, substituting exposure to this risk factor. In contrast, crypto's volatility premium goes the other way. For example, here is a chart of the total return for a few major equity markets from Nomura research.

Here are the current implied volatilities for SPY, the US equity index ETF. Volatilities are higher for the lower strikes, as the market anticipates volatility will increase if the price falls.

Here are the Bitcoin implied volatilities on Deribit. Here, the market expects volatility to rise when the price rises: the crypto implied volatility smirk goes in the opposite direction from the equity smirk.

Intuitively, this makes sense, as volatility rises when crypto is mooning. To document this, I took daily data for the US stock market since 1926 and ETH and BTC data from 2016, then calculated each week's total return and volatility using the 5-ish daily data points. Unlike equities, returns are highest in the top volatility decile and lowest in the bottom volatility decile. Thus, in crypto, being long volatility is like being long the market, which should generate a risk premium.
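The weekly return/volatility decile exercise is simple to reproduce in outline. The sketch below uses synthetic iid returns purely to show the mechanics (the parameters are illustrative, not fitted to BTC/ETH); on real crypto data the top volatility decile would show the highest average returns, which iid data will not.

```python
import numpy as np

rng = np.random.default_rng(0)
n_weeks = 400
daily = rng.normal(0.0, 0.03, size=(n_weeks, 5))   # 5-ish daily returns per week

week_ret = daily.sum(axis=1)                       # weekly total (log) return
week_vol = daily.std(axis=1)                       # weekly realized volatility

# assign each week to a volatility decile (0 = calmest, 9 = wildest)
rank = week_vol.argsort().argsort()
decile = rank * 10 // n_weeks
avg_ret_by_decile = [week_ret[decile == d].mean() for d in range(10)]
```

Run on actual daily price series, the last line is the decile table described in the text.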

If long crypto volatility positions generate a positive return, short volatility generates a negative return. In crypto, unlike equities, being long volatility is like being long the underlying. If the expected return on crypto is positive, so is the return on being long volatility. If short crypto volatility had a positive return, one could short volatility as an LP and then go long crypto, generating a hedged portfolio that makes a profit on both legs, arbitrage. I don't have data on implied volatilities from Deribit, but it would be interesting to know the averages over the past couple of years.

Given a positive expected return for volatility, an LP position is like a standard hedge: costly. At the very least, this lowers the market pressure for positive returns on negative-convexity positions, like a constant product AMM LP position.

Cause #3: LP Positions as Resting Limit Orders

I recently discovered that many LP positions are passive trading tactics (ht @ryskfinance). For example, if I post 1 ETH just above the current ETH price, as soon as the price jumps through my range, I will have sold ETH while getting paid the trading fee instead of paying it. This isn't a free lunch, as statistically, the price will be significantly higher than the range when one removes that position, implying you could have sold it for more, classic adverse selection. The efficacy of this tactic depends on many parameters, so I won't get into whether this is a good idea. I want to see if it adds up to anything substantial.

If many LPs effectively trade this way, they may lose money on the convexity implied by their positions. Still, as they have a different objective, that's not a problem. We need to estimate LP profitability when we remove these players, as they could be biasing the aggregate profitability statistics.

To estimate their effect, I took all the mints and burns for the Uniswap v3 ETH-USDC 5bp pool from Jan 1, 2023, through Feb 14, 2024. I only included LP positions that completed a round-trip, with both an add and a remove in the time frame, where the add and the remove were each single transactions. I classified an LP position as a passive trade if its initial range was entirely above or below the current price and the position was held for less than seven days. I also excluded LP positions with zero token changes, as these were never touched and are irrelevant to LP risk or revenue.
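The screen described above amounts to a simple predicate per round-trip position. A sketch, with hypothetical field names standing in for whatever the mint/burn records actually contain:

```python
from dataclasses import dataclass

@dataclass
class RoundTrip:
    # field names are illustrative, not an actual pool schema
    lower: float           # range lower price bound
    upper: float           # range upper price bound
    price_at_mint: float   # pool price when liquidity was added
    days_held: float
    single_tx_add: bool    # deposit completed in one transaction
    single_tx_remove: bool # withdrawal completed in one transaction

def is_passive_trade(pos: RoundTrip) -> bool:
    # range entirely above or below the price at inception, held under
    # seven days, and both legs done as single transactions
    out_of_range = pos.upper < pos.price_at_mint or pos.lower > pos.price_at_mint
    return (out_of_range and pos.days_held < 7
            and pos.single_tx_add and pos.single_tx_remove)
```

A range posted just above the current price that is burned within a day would classify as a passive trade; a straddling range would not.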

LP positions

Between 1/1/23 and 2/16/24

The table above shows that these passive-trade LP positions accounted for 5% of the total mints over those 409 days. The absolute USDC change for these LPs is not the total amount traded in LP positions, just the net. For example, if the price moved across a range in one transaction and the position was removed in the next block, the LP's net USDC change would equal their gross USDC change. If the price moved into the range and bounced around between the upper and lower price bounds, the gross USDC traded would be much greater than the net. The average duration for passive-trade LP positions was only half a day, and the median range size was 0.2%, so they do not generate significant fees beyond the tokens needed to push the price across their range.

To estimate their effect on the total LP convexity costs, I used the initial and ending prices, the LP range's lower and upper price bounds, and the position's liquidity. For example, in the range below, from a lower price of 125 to an upper price of 175, if the initial price p0 is around 115, the range is above the current price. Such positions hold only one token at inception, as the range is entirely above or below the initial AMM price.

The formula I used for the convexity cost was

Conv cost = liquidity * (sqrt(p0) – sqrt(p1))^2 / sqrt(p0)

Here, p0 is the first price in the range crossed, and p1 is the more extreme final price relevant to the LP position. This captures the cost of an LP position from its negative convexity. Each LP position's pnl assumes a hedge set once at inception and excludes fees (and is thus always a loss). As explained in my post last Tuesday, the static hedge does not change the expected convexity cost, but it eliminates the first-order effect of the position's simple delta, which would otherwise increase the volatility of these data points by a factor of 100. A better estimate would update the hedge ratio daily, but that would take ten times more work for me. My approach, which presumes a static hedge, generates an unbiased estimate that is sufficiently efficient given the thousands of LP positions in my sample.

The tricky thing about restricted ranges is the three cases mentioned above, where a range is above, below, or straddling the current price. These require different formulas for estimating the number of tokens in the position and the convexity cost for the LP over its life. There are three cases for the initial state and when the position is removed, for a total of nine combinations. Below is how these nine cases map into the prices used in the above equation.

price points used for LP positions based on range position
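The nine cases collapse into one rule: outside its range a position holds a single token and its value is linear (or constant) in price, so only the part of the price path inside the range contributes convexity cost, and both prices can be clamped to the bounds before applying the formula. A sketch of my reading of that mapping (not verified against the original spreadsheet):

```python
import math

def convexity_cost(liquidity, lower, upper, p_init, p_final):
    """Convexity cost of a restricted-range LP position, statically hedged.

    Prices are clamped to [lower, upper] because the position's value is
    linear in price outside the range; this reproduces the nine
    initial/final cases in one expression."""
    s0 = math.sqrt(min(max(p_init, lower), upper))   # first price in range crossed
    s1 = math.sqrt(min(max(p_final, lower), upper))  # last relevant price
    return liquidity * (s1 - s0) ** 2 / s0
```

For the 125-175 range above, a move from 115 to 150 picks up cost starting at 125; a move from 115 to 120 never enters the range, so the cost is zero.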

The total costs are listed below. I present three different convexity cost calculations, which gives a sense of the standard error in these approaches. Each relies on different assumptions, but together they give confidence in the approximate magnitude.

Convexity Costs
1/1/23 through 2/16/24


Cashflow: USDCin + ETHin * endPrice – 0.05%*abs(USDCtraded)
Sqrt²: liquidity * (sqrt(endPrice) – sqrt(initPrice))^2 / sqrt(initPrice)
Variance: 2 * liquidity * sqrt(initPrice) * variance / 8
Data were calculated daily and summed from 1/1/23 through 2/16/24
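The sqrt-difference and variance formulas above are the same quantity to second order: for a small log return r, (sqrt(p·e^r) − sqrt(p))²/sqrt(p) ≈ sqrt(p)·r²/4, which equals 2·liquidity·sqrt(p)·variance/8 with variance = r². A quick check with toy numbers (not the pool's):

```python
import math

liquidity, p0 = 1000.0, 2000.0
r = 0.01                                   # a small daily log return
p1 = p0 * math.exp(r)

# sqrt-difference form of the daily convexity cost
sqrt2_cost = liquidity * (math.sqrt(p1) - math.sqrt(p0)) ** 2 / math.sqrt(p0)
# variance form of the same cost
variance_cost = 2 * liquidity * math.sqrt(p0) * r ** 2 / 8

ratio = sqrt2_cost / variance_cost         # ~1.005 for a 1% move
```

The two only diverge materially on large single-day moves, which is one source of the spread between the estimates in the table.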

While the passive-trade LPs represented only about 5% of the total convexity cost, this is significant because LP losses are driven by costs only about 9% greater than revenues. In this period, the average net LP pnl in this pool was about -$13k per day, comprising around $141k in daily fee revenue and $154k in daily convexity costs (see explainer here). If we subtract the passive traders' $3.6MM/409, about $9k per day, from the convexity costs, the daily pnl loss falls to around $4k.

Anyone who has worked with large datasets knows many problems are only discovered through experience. I have only been investigating this effect for a few days; my LP convexity cost numbers are tentative.


AMM LP unprofitability is a problem, but perhaps not as puzzling as I thought. LP positions put on primarily as passive limit orders to buy below or sell above the current price significantly bias standard LP profitability statistics, as these LPs are not providing liquidity to make money as liquidity providers but are instead implementing conditional trades (resting limit orders). This effect seems responsible for most of the LP losses.

While the race to narrow ranges generates negative externalities many LPs do not appreciate, over time, losing money tends to disabuse even the most deluded. The anomalous long volatility premium reduces the tendency for LPs to demand large positive returns for being short volatility, as a premium for being short vol/gamma/convexity would not be an equilibrium, given volatility and returns are positively related in this asset class. I suspect both are relevant, but it isn't easy to know how much. 

I hope more people will address this problem because, as noted initially, very few papers estimate average LP profitability. Often, a yield farm is built on something like a constant product AMM, and the farmer dapp just adds up the fee revenue (and airdrops!) to generate highly misleading APYs. While the recent crypto bull market covers up many bad business models, in the long run, dapps built on LP tokens with a zero or negative long-run return are not going to make it. I realize most people in this space don't care about the long run, though they would never admit it publicly. Nonetheless, many do, and they should prioritize decentralization and sustainable mechanisms. A sustainable AMM must generate profits for its LPs.

There are a bazillion long-tail AMMs, and the ability to permissionlessly create trading pairs on meme coins does seem a sustainable model, in that there is a perennial influx of new coins. Further, as these coins are not listed off-chain, LPs connected to insiders may be able to avoid adverse selection. The main risk there is the idiosyncratic scams frequent in these coins, which makes pulling general statistics a second-order consideration.

[2] The analogy to the winner's curse is not perfect because everyone who wants to become an LP is treated the same on a pro-rata basis, while in an auction, only the top bidder pays for and gets the item. Nonetheless, it highlights the selection bias among buyers.

Tuesday, February 27, 2024

Hedging Negative Convexity

Automated market makers (AMMs) invariably present their liquidity providers (LPs) with convexity costs. Hedging does not eliminate or even reduce these costs, but it does lower their volatility. With lower pnl volatility, an LP does not need as much capital to cover these losses, so considering capital is expensive, it is fair to say hedging reduces costs, though only indirectly.

Consider a pool pairing token A with USDC. If an LP provides 777 units of liquidity, his initial LP position would look as in Table 1 below.

Table 1

To model the LP's position, we can use a binomial lattice. Here, we transform a volatility assumption into an up-and-down move in the lattice. In practice, one uses many little steps, but we will show a large 10% movement for illustrative purposes. One can use the following formulas to create an arbitrage-free recombining lattice. Here, we will assume zero drift, or an expected token A return of zero. 

Table 2

Applying these parameters to a lattice, we start at a price of 100 and then move up or down. Given an up move is the reciprocal of a down move (up*down = 1), the center node just takes one back to the starting price. Note the probabilities are not 50%: the price movements are modeled as lognormally distributed (i.e., exp(x)), so the probabilities adjust for these asymmetric future prices to be consistent with the starting price (E(p) = 100).

Recombining Lattice
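The lattice parameters above can be reproduced directly. With a 10% log move per step, up·down = 1, and the up-probability is set so the expected next-period price equals the current price (zero drift); it comes out near 0.475, not 50%.

```python
import math

sigma = 0.10                               # one-step log move (the "large 10%" step)
u, d = math.exp(sigma), math.exp(-sigma)   # up * down = 1
q = (1 - d) / (u - d)                      # up-probability giving zero drift

# lognormal steps are asymmetric around the start, so q < 0.5
# is what keeps the expected price at 100
expected_price = 100 * (q * u + (1 - q) * d)
```

With these values, the two-step tail nodes each carry roughly a 1/4 probability and the center node roughly 1/2, as used below.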

This approach is convenient because modeling the price of a derivative just involves applying the derivative formula to the various nodes with their distinct prices and then multiplying the derivative values in these nodes by their probabilities. In each period, here portrayed as a row, the probabilities add up to 1.0. Below is a lattice applied to a v2 LP position initially hedged. The token A position in the pool is initially 77.7, so the initial hedge is short 77.7 units of token A.

Static Hedge

In the final period, T=2, we see that the two tail events, each with a 1/4 probability, generate an LP loss, while the middle case, where the price is flat, has a probability of around 1/2. If the LP did not hedge, he would see a gain in the 'up' state and a loss in the 'down' state, while in the hedged case, he would lose money in both states. 

Looking at this in Table 3 below, we can see the expected pnl for the hedged and unhedged LP positions is identical (ignoring fees, which would be unaffected by hedging). These are the payoffs and probabilities at the bottom of the lattice. However, the unhedged LP pnl varies by +/- $1500, while the hedged pnl varies by +/- $75.

Table 3

A dynamic hedge would be more frequent and reduce the LP's risk. The initial hedge is identical to the static hedge, so the LP's pnl is the same in period T=1 for both hedging strategies. However, in period T=1, the LP will adjust his hedge to match his new token A position, decreasing it in the up state from -77.7 to -73.91 and increasing it to -81.68 in the down state. As the hedge is readjusted, to simplify accounting, we will realize the total pnl in period T=1. The LP pnl is not actually realized, but putting the period 1 LP pnl onto the balance sheet makes it easy to see the effect of the hedge in the subsequent period. 

The initial hedged LP position is identical to the above static hedge case, as they had identical positions in T=0.

Dynamic Hedge Path Dependent

In period T=1, the hedge ratio is updated, and we realize the hedge and LP pnl (an expected loss of 19.40). The expected loss in the second period is 19.38, given that the initial loss was realized in period T=1; this is just the sum of the probabilities of the four states times their net pnls. The total expected loss over both periods is 38.79, identical to the static hedge.

More frequent hedging reduces pnl volatility just as the static hedge does, but no hedging scheme changes the total expected LP loss, which is 38.79 whether unhedged, statically hedged, or continuously hedged. You multiply the pnls by their respective probabilities to get the expected value.
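The invariance claim is easy to verify by enumerating the four two-step paths. This sketch uses the post's parameters (liquidity 777, starting price 100, 10% log steps) and the v2 value function 2·L·sqrt(p); all three expected pnls come out at about -38.79.

```python
import math

L_liq, p0, sigma = 777.0, 100.0, 0.10
u, d = math.exp(sigma), math.exp(-sigma)
q = (1 - d) / (u - d)                     # zero-drift up-probability
prob = {u: q, d: 1 - q}

def lp_value(p):                          # v2 LP position value: 2 * L * sqrt(p)
    return 2 * L_liq * math.sqrt(p)

def token_a(p):                           # token A held by the pool: L / sqrt(p)
    return L_liq / math.sqrt(p)

ev_unhedged = ev_static = ev_dynamic = 0.0
for s1 in (u, d):
    for s2 in (u, d):
        p1, p2 = p0 * s1, p0 * s1 * s2
        pr = prob[s1] * prob[s2]
        lp_pnl = lp_value(p2) - lp_value(p0)
        static = -token_a(p0) * (p2 - p0)               # hedge set once at T=0
        dynamic = (-token_a(p0) * (p1 - p0)             # hedge reset at T=1
                   - token_a(p1) * (p2 - p1))
        ev_unhedged += pr * lp_pnl
        ev_static += pr * (lp_pnl + static)
        ev_dynamic += pr * (lp_pnl + dynamic)
```

Both hedges are martingales under the zero-drift probabilities, so their expected pnl is zero; only the spread of outcomes changes.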

If you don't hedge a negatively convex position, you will lose $x with probability p after T periods, while if you hedge continuously, you will lose x*p/T each period. In the example above, the static hedge reduces the pnl variability from about +/- $1500 in the extreme states to +/- $75 with a static hedge, and the dynamic hedge reduces its variance further to a constant $20 each period. 

What about restricted ranges, as in Uniswap v3? Again, this doesn't change the convexity costs for a given amount of liquidity. Consider a range spanning the price movement in this example, a +/- 10% move each period. The capital required for the same liquidity would be reduced by about 90%, a substantial savings. One's hedge size is also reduced, lowering capital requirements further.

Given the same liquidity, the expected convexity cost is identical for an LP whether or not the range is restricted. In the lattice below, the restricted-range position starts with a token A amount of only 7.39 vs. 77.7 in the unrestricted case, so the hedge is only -7.39 vs. -77.7. However, in the final states, the net LP pnls are identical.

v3 restricted range
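That a restricted range leaves the hedged pnl unchanged (for the same liquidity) can be checked directly: with the range spanning the lattice extremes, the v3 position starts with ~7.39 of token A vs. 77.7 for v2, yet the hedged pnl at the final nodes is identical. A sketch, assuming the standard v3 in-range token amounts:

```python
import math

L_liq, p0 = 777.0, 100.0
pa, pb = p0 * math.exp(-0.2), p0 * math.exp(0.2)   # range spanning the T=2 extremes

def v2_tokens(p):
    # full-range amounts: (token A, USDC)
    return L_liq / math.sqrt(p), L_liq * math.sqrt(p)

def v3_tokens(p):
    # in-range v3 amounts for the band [pa, pb]
    a = L_liq * (1 / math.sqrt(p) - 1 / math.sqrt(pb))   # token A
    b = L_liq * (math.sqrt(p) - math.sqrt(pa))           # USDC
    return a, b

def hedged_pnl(tokens, p_final):
    # LP value change minus the static hedge set at inception
    a0, b0 = tokens(p0)
    a1, b1 = tokens(p_final)
    return (a1 * p_final + b1) - (a0 * p0 + b0) - a0 * (p_final - p0)
```

Evaluating `hedged_pnl` at the top and bottom lattice nodes gives the same number for both token functions: the smaller v3 hedge exactly offsets the smaller v3 position.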

Convexity costs are purely a function of gamma times the variance, and LP gamma is a linear function of liquidity. It's prudent to hedge this, but also important to realize the limits to what hedging can do. It cannot eliminate a convexity cost. 

Wednesday, February 21, 2024

Academic CLOB Model

Last week, I commented on University of Chicago professor Eric Budish et al.'s hedge fund sniping model but neglected a more significant point. To recap, Budish models a Centralized Limit Order Book (CLOB), where liquidity providers (LPs) post bids and asks, and takers then take those orders (link). Assuming several high-frequency traders (HFTs) post the bids and asks, any stale resting quote generates a race between the lone HFT LP trying to cancel and several HFTs acting as takers. If the race winner is random, the odds are the LP posting the order will lose. This is the latency-race deadweight loss. He estimates this symmetric-information adverse selection adds 0.4 basis points (0.004%) to the bid-ask spread. Applying that number to all the stocks traded worldwide yields $40B: we would save the world $40B if we switched to sequential auctions. In his words, "continuous markets don't 'work' in continuous time."

The craziest thing about this argument is that he applies it to a novel market mechanism that lowered costs by over 90% after nearly a century of stasis. In the first chart, we see the spread was constant from 1900 to 1990 at around 60 basis points, making some think it was some fundamental equilibrium. The internet enabled an alternative way to trade stocks: the electronic trading revolution from firms like Island, BATS, Archipelago, Instinet, etc. The second chart, in different units and broken up by size groupings, shows the decline continued from 2001 through 2006 and was permanent. The current spread is about 3 basis points. 60 to 3.

Jones Trading Costs 1900-2000

Jones, Hendershott and Menkveld (2008)

In that context, advocating a wholesale change in market structure to eliminate a remaining 0.4 basis point of that spread is a classic example of letting perfection be the enemy of the good. This sort of economic ingratitude is standard, as no matter how much GDP grows, someone can point out something wrong and suggest we replace capitalism with a centrally planned economy, one that, unlike any of the past centrally planned economies, will work better than decentralized economies.

Despite economics' laissez-faire reputation, most economists have been progressives (e.g., AEA founder Richard T. Ely, JM Keynes), viewing competitive behavior as wasteful, a criticism shared by industry leaders who appreciate it as a powerful cartelization device. The essence of free markets is private property and liberty. If people can make unfettered decisions about themselves and their property, it promotes prosperity. There is no trade-off between freedom and prosperity; they go hand in hand. Alas, most people, especially academics, don't trust decentralized results. Since Plato's Republic, they have always thought they could design a better world if given the mandate. Hubris and pride were prominent human vices in the Classical and Biblical canon because of their perennial, pervasive, and pernicious nature. It's a constant battle.

Back to Budish's model: he looks at data circa 2015 from the London Stock Exchange, back when it timestamped in microseconds (now nanoseconds), and focuses on trades where several HFTs were taking or canceling a particular offer (e.g., bid for 100 @ 99.32) within 500 microseconds of each other. He figures the takes that arrived before the LP canceled were unfairly robbing the poor HFT LP, which happened 90% of the time. Indeed, cancel orders were not recorded most of the time in these races, but if several HFTs tried to take simultaneously, he figured the LP probably wanted to cancel.

This inference is silly for several reasons. First, it depends on who is trading and why. This was highlighted by Holmstrom and Myerson (1983): planners do not know everything in an economy with incomplete information, something economists often forget when looking down at their models like God. Many sniped orders were from retail traders who thought they would post a buy order one tick above the best bid instead of crossing the spread. They get up and go to the kitchen, and when they come back, they are filled, though the market has moved much lower. If the retail trader had instead sent a market order and bought at a higher price, they would have had a bigger loss. This is not a deadweight loss.

Another type of trader is the HFT who merely wants to change their inventory, including rectifying a short position (moving from -100 to -90). In that case, they would be willing to cross the spread or post an aggressive limit order; they are not trading for arbitrage, where one tries to buy below the mid. When one posts an aggressive resting limit order above the current best bid, one knows a fill is not guaranteed if the price does not move; otherwise, no one would cross the spread. Marking such resting limit order fills to the mid-price makes them look like a loss to the LP, but they should be marked relative to the alternative, which is crossing the spread.

Another significant problem with his model is that this is not so much an exogenous cost borne by HFT LPs as an intra-HFT transfer. Only a dozen firms dominate HFT, as is usual in elite competitions. Those playing the game are taking and posting resting limit orders probabilistically. Such an order may be taken by a retail rube immediately, sniped by an HFT, or it may sit there while the price moves until it becomes the top of a large queue, the perfect place for a positive expected value order. If you count only the times they get picked off as a loss and consider that an exogenous expense to be eliminated, you are simplifying the game in a way that mischaracterizes it completely.

His batch auction alternative is efficient when one considers a one-period scenario where people have private valuations for an object. With repeated auctions for equities, we must incorporate the complexity of updating valuations based on other valuations, inventory situations, and time preferences. As the other players are doing the same thing, we must update our priors of other people's priors, ad infinitum. There is also the possibility of collusion, like playing poker with two players sharing information, which would radically alter one's interpretation of market orders. The state space quickly grows beyond any closed-form solution. His simple model is an excellent way of illustrating a particular problem, but to presume it is sufficient justification for a policy suggestion, let alone a radical new regulatory mandate, is absurd.

He also says the market can't fix itself, as if it's stuck in a bad Nash equilibrium, and mentions two examples. First, he objects to letting the broker get two cents for their trades from an HFT who processes the orders rather than an explicit two-cent fee charged to the retail trader. The broker gets two cents in either case, but the trader may feel like he is getting a better deal. Retail flow is considered uninformed, as it will always have high latency, so HFT LPs can assume it is not filled with adverse selection and charge a lower spread, creating positive gains from trade. Allowing the broker to swap an explicit fee for an implicit fee via the rebate given by HFTs is an efficient way to capture this. What he presents as an HFT problem is actually the sort of fix markets make.

Budish also mentions co-location as a cost of HFTs because many millions are spent by HFTs wishing to be the fastest. This is perverse because providing privileged but open access, where everyone is treated the same, down to a couple of nanoseconds, is undoubtedly more efficient than distributing this privilege off the books. The latency advantage will exist in any modestly continuous market, so the question is how to administer this. Backroom deals encourage corruption and are cancerous. What Budish presents as a problem and something markets cannot fix is actually an efficient market solution. He has it backward.

During the 90 years of bid-ask stasis, the country was transformed by the telephone, radio, and TV, which radically altered the speed of information flow across the country. Yet it was only with the internet that outsiders could create an alternative to the closed, heavily regulated specialist equity trading system. Consider that the NYSE's alternative, Nasdaq, was found to collude, quoting highly liquid stocks in even quarters and halves (¼, ½, etc.) but rarely odd eighths (1/8, 3/8, etc.), artificially increasing the bid-ask spread. There were more Nasdaq dealers competing for quotes than on the NYSE, but the NYSE was a more efficient mechanism. Those just counting the number of players, as many naively do (Herfindahl index, Nakamoto number), would have never guessed that. Such are the game theoretic equilibria in complicated real-world games.

Instead, Budish emphasizes state coercion, ignoring the precedent of a 90-year regulatory-enforced inefficient market. In his paper, Will the Market Fix the Market, he favorably quotes a top regulator for writing, "Without changing [the] incentives, we cannot and should not expect the market to fix the market." This naïve and hubristic view is natural because big institutions are not going to fly him around to tell them that markets are doing well, though there's an edge case where you get adverse selection without asymmetric information. He created a model that theoretically proved a new inefficiency exists, and he also empirically proved billions of dollars in waste.

Luckily, this policy does not look like it is gaining ground, as Taiwan recently moved from a batched auction market to a standard CLOB. A simple solution, liberty, implies it's best if we generally ignore economists when they come up with specific policies.

Monday, February 19, 2024

Hyperliquid's CLOB

The latest horrid DEX simulacrum is Hyperliquid, a perp CLOB emphasizing shitcoins. It runs on its own dapp-specific L1 using a version of Tendermint, the favorite consensus mechanism for those prioritizing speed.

I listened to a Flirting with Models podcast where founder Jeff Yan spoke with Corey Hoffstein, a quantitative equity portfolio manager. Hoffstein's background and demeanor prevented him from calling out Yan's BS, which is fine because most good podcast hosts are credulous and agreeable; otherwise, no one would go on their shows. Yan spouts many cliches typical of crypto market makers trying to impress people who are only vaguely familiar with the issues. For example, when talking about his earlier trading experience, he said he 'was surprised how inefficient markets were.' This is a great phrase because it implies one was outsmarting people, generating alpha, and since such inefficiencies do exist, the claim is plausible.

Yet one should provide several good examples to demonstrate one is more than just bloviating. In crypto, there was the famous Kimchi premium, where coins traded at a roughly 10% premium in Asian countries. This inefficiency persisted because making the trade required trusted partners in the US and in Japan or Korea, which is non-trivial. In any case, it was over by early 2018 and required connections rather than savvy. Nonetheless, many Sam Bankman-Fried interviewers were blown away by SBF's singular arbitrage example, presumably one of his many clever trades. Looking back at Alameda's tax returns and the fact that most Alameda traders left throughout 2018 (no bonus pool), this was likely his only profitable trade outside of buying coins in the 2021 bubble with customer funds (see here for more). Thus, we have FTX exec Sam Trabucco bragging about their other genius trade idea: buying Doge when Elon Musk tweeted about it. The Kimchi premium was the only inefficiency Yan mentioned on the podcast.

When Yan was asked about the difference between maker and taker high-frequency strategies, he stated that a taker strategy might make only one trade a day. An HFT taker strategy involves sniping the top of the book before an LP can cancel. By definition, it can't do this in large volume, as the top of the book is not big, and the edge is at most the spread. That's a lot of risk and capital for a couple of bucks per day. He described a strategy that has never existed because it makes no sense: a once-a-day HFT taker strategy.

Yan highlighted that Hyperliquid's L1 can do things that would otherwise be impossible, emphasizing that it updates a perp account hourly for funding payments. This is pointless. Given the horizon of perp traders, days or weeks, crediting their accounts hourly instead of waiting until the position is closed changes nothing material. A 50% annualized funding rate is about a 1% boost over a week-long position, which would be a long holding period for perps. No rational trader will be excited by having that trivial amount accelerated into hourly payments. Again, this highlights Yan's cluelessness.
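The funding arithmetic above can be checked in one line; a trivial sketch assuming simple pro-rata accrual of an annualized rate over a week:

```python
# Funding-rate arithmetic: a 50% annualized funding rate, accrued pro rata,
# adds roughly 1% over a one-week holding period.
annual_rate = 0.50
weeks_per_year = 52
weekly_boost = annual_rate / weeks_per_year

print(f"{weekly_boost:.2%}")  # 0.96%
```

Whether that trickle arrives hourly or at position close is immaterial to a rational trader.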

Yan says he then decided to build an exchange on his own L1 because of the Impermanent Loss problem in AMMs. I agree that Uniswap's AMMs are unsustainable because LPs lose more to impermanent loss than they earn in fees, but CLOBs on blockchains are not the solution. A fast L1 will still be slow relative to CEXs, which are centralized; their co-located servers can respond to exchange messages within 5 microseconds. A centralized L1 is pointless, like a private permissioned blockchain. With decentralization, you have geographic diversity among validators, which takes you to around 100 milliseconds even if you restrict yourself to one hemisphere. It will always be a price follower for coins listed on CEXs, so its market makers will be scalped just as they are on AMMs. Its select market makers will make money, however, but not for the reasons they state. [As for the unlisted coins, there is no need for speed, as they are less correlated with the big two coins that move secondary crypto coins around (ETH and BTC). The only people who should be market making shit coins are their insiders.]

The main problem with a dapp-specific L1 is that the chain validators and the protocol are equal partners, as the gas and trade fees all support that one dapp. The incentives of the DEX and L1 insiders are perfectly aligned, so insider collusion is the default assumption. They probably give insiders a latency advantage via effective co-location and prioritization in sequencing transactions within a block. However, as this is officially a decentralized L1, such an advantage would be unacknowledged. As LP cancellations are explicitly prioritized over trades, Hyperliquid insiders can consistently make money as market makers, unlike LPs on Uniswap. 

Hyperliquid conspicuously claims to be decentralized. Yet they currently have centralized control over their L1 validators, their bridge, their oracle, and whoever is running their primary market-making strategy (currently working pro bono, because that's what trustless anonymous crypto insiders do!). They restrict IP access to avoid US regulators, which would not be possible on a genuinely decentralized dapp. While Binance and Bitmex almost surely have insiders making their markets with privileged access, they at least have the decency not to pretend they are decentralized exchanges.

Its disingenuous design encourages the worst in crypto, which is saying something. For example, it wasn't until SBF created his unregulated offshore exchange FTX that things took off for him. He claimed to be trading $300MM/day when he approached VCs for funding in July 2019, though you won't find more than a handful of mentions of the exchange on Reddit or other forums where people talked about their crypto activities. FTX released a White Paper on how many crypto exchanges were faking volume to boost interest, which is a great way to learn about these tactics (and what not to do) while disarming people who might accuse you of faking your own trading metrics. As we now know, SBF and his team were like Alex Mashinsky: liars who emphasized their unique integrity and decentralization, presenting themselves as safer and morally superior to their competitors. We should not be surprised that SBF and Mashinsky moved on to stealing customer funds to gamble on shit coins, because that's what liars do; like the scorpion and the frog, it's in their nature.

Their L1 does not have a native token, so it just takes USDC collateral bridged from Arbitrum to their chain. Users thus bear the hacking risk of the Arbitrum and Hyperliquid bridges and must worry about USDC censorship (if you get flagged, your USDC is effectively zeroed out). With just their trusted version of USDC, the famously fraudulent perp funding rate is the only mechanism that ties their perp prices to spot prices on off-chain assets. Yan mentioned this was discovered in trad-fi, indicating he probably heard stories about how Nobel Laureate Robert Shiller introduced a different version in 1991. He probably does not realize that the Bitmex perp/spot funding rate mechanism is nothing like Shiller's perpetual real-estate futures contract, which, in any case, never caught on in trad-fi because it was fatally flawed. Many perps work fine without the perp/spot funding rate ruse, highlighting its irrelevance. The funding rate mechanism is just an excuse to comfort traders into believing these perp prices are not mere Schelling points but are tied down by arbitrage (see here for more on that).

If you go to Hyperliquid's Discord or search for them on YouTube, most of the content is focused on schemes to get free money via rewards: points that convert into tokens at their airdrop. Everyone is wash trading to collect points, as there are no fees or gas costs. There's also a referral program, and I'm sure many have gamed that, as it's easy to create seemingly unrelated accounts on a blockchain. Their fake volume is already twice that of Bitmex and ten times that of GMX ($1B/day, heh).

If you stick around for the airdrop, take your money and run. 

Tuesday, February 13, 2024

Budish's Plan to Replace CLOBs

I don't keep up with the latest economics journals because I've learned that very little of the new work is fruitful. As with many subjects, the basics are valuable, but the marginal returns have shrunk with the exponential rise in academic output. We aren't in a golden age of economics, if there ever was one.

Yet, I am still fond of economic models that illustrate a point clearly and succinctly, and I stumbled across a model that applies to an area where I have first-hand knowledge. I worked on high-frequency trading algorithms to execute hedges for an electronic equity-options market maker. We were not at the bleeding edge of high-speed trading, so I am not privy to the tactics used by Renaissance, etc., but no one from those firms will talk about what they do anyway. If someone does talk about what they did, it is invariably a smokescreen.

Further, many high-frequency clients want stupid things, like different models for when the market is trending vs. staying in a range. This is a stupid idea because if one knew we were in a trading range, there would be better things to do than apply nuances to a VWAP algorithm. However, if customers want to pay for it, you might as well sell it, and the best snake-oil salesmen believe in their product. Thus, many great firms with access to the best of the best employ deluded people to create and sell such products, useful idiots. They often speak at conferences.

Experienced private-sector people discussing bleeding-edge high-frequency traders (HFTs) are generally deluded or deceptive. This leaves a hole filled by people with no experience, like Michael Lewis. Thus, I am as qualified as anyone who will talk about these matters, even if I have never worked on a successful worm-hole arb-bot from New York to Tokyo. Indeed, one might say my experience in HFT was a failure, as we couldn't compete, and I was part of that. I haven't worked on that problem directly since 2013. However, like a second-stringer, I can better appreciate what doesn't work, which is easy to miss if you are making bank, because then you aren't constantly looking for ways to fix things.

Budish at a16z

Eric Budish is a professor at the University of Chicago and the coauthor of several papers on 'hedge fund sniping' on limit order books, most conspicuously Budish, Cramton, and Shim (2015), and Aquilina, Budish, and O'Neill (2022). I do not want to dismiss his coauthors or put all the blame on Budish, but for simplicity, I will present this work as Budish's alone. This work has been popular: it was mentioned in the press, sponsored by significant financial regulators like the FCA and BIS, and was the basis for talks last year at the NBER (here) and the crypto VC a16z (here).

His work highlights the best and worst parts of economics. He presents a model that makes its assumptions and mechanism explicit, and then tries to support it with data. That makes it subject to rational criticism, unlike most work in the social sciences. On the bad side, it follows in a tradition from Frederick Taylor, the original McKinsey/Harvard MBA, who wrote about Henry Ford's assembly line as if his analytical approach was relevant to that famous business method and generated insights into other areas. It doesn't. 

Budish's big insight is that a profound flaw in Centralized Limit Order Books (CLOBs) generates a deadweight loss. When HFTs compete on CLOBs, they often engage in speed races that inflict costs on LPs (liquidity providers, aka market makers). What is new is that this form of adverse selection is not generated by asymmetric information but by the nature of the CLOB. If the top HFTs are within a Planck length of each other as far as the exchange is concerned, the fastest is arbitrarily chosen. However, in high-frequency trading, the fastest wins, and the losers get nothing (a Glengarry Glen Ross tournament).

An HFT would only snipe the best bid or offer if it made them money, and among HFTs this is a zero-sum game, so the poor liquidity provider suffers the losses. While the LP can try to cancel, he is one, and those who are not him are many, so when thrown into the microsecond blender behind an exchange's gateway, the LP will usually lose the race to cancel before he gets sniped. In equilibrium, the LP passes that cost off to customers.

His solution is to replace the continuous limit order book with discrete-time auctions. This allows players to compete on price instead of time because, in each period, they will all be represented, not just the first one, and the snipers will compete away the profit that was generating a loss for the LP.

Primer on Adverse Selection

In standard models, an LP sets bids and offers. He will buy your shares for 99 and sell them to you for 101, a two-sided market. Liquidity traders come along and buy at 101 and sell at 99. If we define the spread as the difference between the bid and ask (101 – 99 = 2), the LP's profit per trade is half the spread, the trade price relative to the mid. The LP's profit from transacting with liquidity traders is the number of shares he trades times half the spread.

There are also informed traders with private information about the asset's future value. This also goes by the name adverse selection because, conditional upon trading, the LP loses money to informed traders: the trades the LP gets are selected against his bottom line.

The nice thing is that the informed traders discipline the LP, keeping prices near the true market-clearing level. Liquidity traders pay a fee to the LP via the spread for the convenience of instant transformation from cash into the asset or vice versa. The LP has to balance the profits from the liquidity traders against the losses to the informed traders, so that the benefits of liquidity traders offset the costs of adverse selection.

If we assume LP profits are zero, the greater the adverse selection, the greater the spread. This is a real cost, so such is life: information is costly to aggregate when dispersed unevenly across an economy. However, to the degree we can lower asymmetric information, we can lower the spread. In Budish's model, the toxic flow is not informed, just fast and lucky, but the gist is that these traders impose adverse selection costs on LPs just like the informed traders in earlier models.
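The zero-profit logic can be made concrete with a stylized example (my own parameterization, in the spirit of standard adverse-selection models, not from Budish): if a fraction f of trades are informed, each costing the LP an expected loss L, and uninformed trades earn half the spread, the break-even spread solves (1 – f) * S/2 = f * L.

```python
# Stylized zero-profit spread under adverse selection (toy parameterization).
# Uninformed trades earn the LP S/2; a fraction f of trades are informed,
# each costing the LP L. Zero profit: (1 - f) * S/2 = f * L.
def breakeven_spread(f_informed: float, loss_per_informed: float) -> float:
    return 2 * f_informed * loss_per_informed / (1 - f_informed)

# More adverse selection (higher f) forces a wider spread.
assert abs(breakeven_spread(0.2, 1.0) - 0.5) < 1e-12
assert breakeven_spread(0.4, 1.0) > breakeven_spread(0.2, 1.0)
```

Lowering asymmetric information (lower f) shrinks the spread, which is the sense in which the spread is a real but reducible cost.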

Budish, Cramton, Shim model

I will simplify the BCS model to make it easier to read, removing notation and subtleties required in an academic journal but a distraction for my purposes. Hopefully, I will capture its essence without omitting anything crucial. Let us define S as the spread, so S/2 is half the spread. This is the profit the LP makes off liquidity traders [eg, if the bid-ask is 99-101, the spread is 2 and S/2 is 1, so the profit per trade is 1].

 Let us define J as the absolute size of the price change revealed to the HFTs, a number larger than S/2. It can be positive or negative, but all that matters for the LP is its size relative to S/2, because the profit for the sniping HFT is J – S/2 (eg, he buys at p + S/2 an asset now worth p + J, for a profit of J – S/2) and the LP loses J – S/2. In each period, either J is revealed, a jump event, or it is not, and a liquidity trader trades. If the liquidity trader trades, the LP's profit is S/2.

The next trade will come from a liquidity trader or a sniper; as these probabilities sum to 1, we can write them as Pr(jump) and 1 – Pr(jump). Assume there are N HFTs; one decides to be an LP, and the others decide to be snipers, picking off the LP who posts the resting limit orders if a jump event occurs. All of the HFTs are equally fast, so once their orders reach the exchange's gateway for processing, it is purely random which order gets slotted first. Thus, the sniper's expected profit each period is

Pr(jump) * (J – S/2) * (1/N)

That is, the probability of getting a signal, Pr(jump), times the profit, J – S/2, times the probability the sniper wins the lottery among his N peers.

For the LP, the main difference is that he only loses if the other HFTs snipe him. If he cancels in time, he avoids a loss and makes zero, and we can ignore that probability because it is multiplied by zero. But the probability he loses the race is (N-1)/N. This is the crucial point: the upside for each sniper is small, as it is multiplied by 1/N, but the expected loss for the LP is large, multiplied by (N-1)/N. To this loss, we add the expected profit from the liquidity traders. The profitability of the LP is thus

(1 – Pr(jump)) * (S/2) – Pr(jump) * (J – S/2) * ((N-1)/N)
The HFT chooses between being a sniper or an LP, where only one can be an LP. In equilibrium, the profitability of both roles must be equivalent:

Pr(jump) * (J – S/2) * (1/N) = (1 – Pr(jump)) * (S/2) – Pr(jump) * (J – S/2) * ((N-1)/N)

Solving for S, we get

S = 2 * Pr(jump) * J
The profit should be zero in the standard case with perfect competition and symmetric information. The spread is positive even with symmetric information because the lottery is rigged against the LP. It seems we could devise a way to eliminate it, as it seems inefficient to have liquidity traders pay a spread when no one here is providing private or costly information. The 'hedge fund sniping' effect comes from the race conditions, in that any poor LP is exposed to losses in a jump event, as there are more snipers (N-1) than LPs (1).
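A quick numeric sanity check of the simplified model (toy parameter values of my own choosing): with sniper profit Pr(jump) * (J – S/2) * (1/N) and LP profit (1 – Pr(jump)) * (S/2) – Pr(jump) * (J – S/2) * ((N-1)/N), the spread S = 2 * Pr(jump) * J equalizes the two roles.

```python
# Numeric check of the equilibrium spread in the simplified BCS model.
def sniper_profit(p_jump, J, S, N):
    # Probability of a jump, times the snipe profit, times odds of winning.
    return p_jump * (J - S / 2) / N

def lp_profit(p_jump, J, S, N):
    # Half-spread from liquidity traders, minus the expected sniping loss.
    return (1 - p_jump) * S / 2 - p_jump * (J - S / 2) * (N - 1) / N

p_jump, J, N = 0.1, 5.0, 8
S_eq = 2 * p_jump * J  # = 1.0: the candidate equilibrium spread

assert abs(sniper_profit(p_jump, J, S_eq, N) - lp_profit(p_jump, J, S_eq, N)) < 1e-12
```

At any narrower spread, the LP role pays less than sniping and no one would post quotes; at any wider spread, everyone would rather be the LP.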

When you add the costs of speed technology, such as the Chicago-New York fiber optic tunnel, these HFTs must recover this cost, which adds another nice result: speed costs increase the spread. Let c be the per-period cost of the speed technology. Setting the profitability of the sniping HFTs, net of this cost, to zero:

Pr(jump) * (J – S/2) * (1/N) – c = 0

This implies

Pr(jump) * (J – S/2) = c * N
Now we take the equilibrium condition that the profitability of the sniper equals the profitability of the LP:

Pr(jump) * (J – S/2) * (1/N) – c = (1 – Pr(jump)) * (S/2) – Pr(jump) * (J – S/2) * ((N-1)/N) – c

The c cancels out, and we can replace Pr(jump) * (J – S/2) with c * N to get

S = 2 * c * N / (1 – Pr(jump))

In this case, N is endogenous and would depend on the cost function c, so the spread is not exactly linear in c. Given the various flaws in this model discussed below, elaborating on this result is not interesting. The main point is that S is positively related to c, which is intuitive. Again, even in the absence of asymmetric information, we have a large spread that seems arbitrary, which looks like an inefficiency economists can solve.


Budish presents two sets of data to support his model. In 2014, he noted the occasional crossing of the S&P 500 futures (ES) in Chicago with the SPY ETF traded in New York. This arbitrage is famous because HFTs have spent hundreds of millions of dollars creating straighter lines between Chicago and New York, getting the latency down from 16 to 13 to 9 milliseconds, and currently, with microwaves, to around five. I don't think it's possible to get it down much further, but weather can affect microwaves, so perhaps there is still money to be spent. In any case, it's a conspicuous expenditure that seems absurd to many.

There's a slight difference between the futures and the SPY ETF, but this is stable and effectively constant over the day. The bottom line is that the correlation is effectively perfect over frequencies greater than a day. Over shorter durations, the correlations have been rising, though at the very shortest horizons the correlation has been and always will be zero, because the speed of light sets a lower bound of about 4 ms on how quickly information can travel between Chicago and New York. One can imagine various reasons why the markets could become disentangled briefly. Thus, when we look at prices over 250-millisecond intervals, there were periods, almost always under 50 milliseconds, where it was possible to buy futures and sell the ETF for an instant profit. This did not happen frequently, but it generated arbitrage profits when it did.
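The ~4 ms bound is easy to sanity-check; a back-of-envelope sketch, where the ~1,150 km straight-line Chicago-NYC distance and the fiber slowdown factor are my assumptions:

```python
# Lower bound on Chicago <-> New York one-way latency from the speed of light.
# Assumptions (mine): ~1,150 km straight-line distance; light in fiber
# travels at roughly 2/3 the vacuum speed (hence the 1.5x factor).
SPEED_OF_LIGHT_KM_PER_MS = 299_792.458 / 1000
distance_km = 1150

one_way_vacuum_ms = distance_km / SPEED_OF_LIGHT_KM_PER_MS   # ~3.8 ms
one_way_fiber_ms = one_way_vacuum_ms * 1.5                   # ~5.8 ms

print(f"vacuum: {one_way_vacuum_ms:.2f} ms, fiber: {one_way_fiber_ms:.2f} ms")
```

The gap between the fiber and vacuum figures is why microwave links, which travel near vacuum speed through air, beat the fiber tunnel.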

ES vs. SPY over 250 ms

Budish assumes traders can buy one and sell the other whenever these markets cross for more than 4 milliseconds. In his 2015 paper, his 2005-2011 data sample generated an average of 800 such opportunities per day, for an average profit of $79k per day.

In 2020, he presented data on London Stock Exchange stocks and used it to estimate that latency races cost investors $5B a year worldwide. It made quite a splash and was picked up by prominent media outlets such as the Financial Times, Wall Street Journal, and CNBC. Unlike the ES-SPY data, this does not involve strict arbitrage but statistical arbitrage. Using message data from 40 days on the LSE in the fall of 2015, they can see trades, cancellations, and those orders that were not executed because they were late. This gets at the cost of latency. Budish highlights that he can isolate orders sent at approximately the same time, where one order was executed merely due to chance and all the others missed out. More importantly, resting limit orders are sniped by faster trades (which can be limit or IOC, immediate-or-cancel, orders that take liquidity), which is the essence of the BCS model.

He isolated clusters of orders within 500 microseconds (0.5 milliseconds) that targeted the same passive liquidity (quantity and price). Most clusters involved 3 orders, but only 10% included a cancel. As a failed take order implies there is no more quantity at that price, these were all relevant to taking out a small queue. For example, here's a hypothetical case where there is a small limit order to buy at 99 and a larger offer to sell at 103, for a midprice of 101.


Note that if the sniper takes out the limit order to buy at 99, he sells below the midprice both before and after the trade. As the sniper is selling here, they define 'price impact' as how much the mid moves after the trade in the direction of the trade: here, +1 unit, as he pushes the price down by 1. The 'race profit' compares the trade price to the after-trade mid; in this case, it is -1 because the selling price of 99 is one unit below the new midprice of 100.

A case where the sniper would profit could look like the figure below. Here, the sell moves the mid down by 2 in the direction of the trade, for a price impact of +2. The race profit is positive, as the sniper sold for 101, which is +1 over the new midprice of 100.


 By definition, the price impact will be positive because the race takes out the best bid or offer; if the best bid or offer remained, the sniper would not have won the race, as there would be no loser.
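The two hypothetical cases above can be encoded directly; a small sketch (helper names are my own) of the 'price impact' and 'race profit' definitions for a sniper selling into the bid:

```python
# 'Price impact' and 'race profit' for a sniper SELLING into the bid,
# using the hypothetical book numbers from the two cases above.
def price_impact_sell(mid_before: float, mid_after: float) -> float:
    # Mid move in the direction of the trade (down, for a sell).
    return mid_before - mid_after

def race_profit_sell(trade_price: float, mid_after: float) -> float:
    # Sale price relative to the post-trade mid.
    return trade_price - mid_after

# First case: take the bid at 99 when the mid is 101; mid falls to 100.
assert price_impact_sell(101, 100) == 1
assert race_profit_sell(99, 100) == -1   # sold one unit below the new mid

# Second case: sell at 101 when the mid is 102; mid falls to 100.
assert price_impact_sell(102, 100) == 2
assert race_profit_sell(101, 100) == 1   # a profitable snipe
```

Note the asymmetry: price impact is positive by construction, while race profit can go either way depending on where the mid settles.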

Applied to their set of 40 days of liquid LSE stocks, they estimate these latency races are involved in 20% of all LSE volume. So, while races last only 79 microseconds on average, they apply to about 1,000 trades per ticker daily. They estimate an average race profit of 0.5 basis points on a set of stocks where the average spread is 3.0 basis points. Extrapolating to all stocks traded worldwide generates the $5B annual figure.


I sense that no one criticizes this work much because it's a parochial problem involving data that requires money and a lot of time to analyze. Economists specialize, and most do not examine CLOBs, so the Murray Gell-Mann amnesia effect applies: Michael Lewis's Flash Boys informs even academic economic opinion on this issue, as evidenced by Budish's frequent mentions of it. The SBF debacle highlighted that Lewis lacks the discernment to realize when he is dealing with complete frauds whose primary business was making markets, which should warn economists not to take Flash Boys seriously when trying to understand modern markets.

Either/Or vs. Both/And

My first issue is how the HFTs sort themselves into two roles: ex-ante, one chooses to be the liquidity provider, and the others choose to be stale-quote snipers. In practice, most HFTs run a combo strategy of sniping and LPing. If we look at the scenario he outlines as sniping at the LSE, we can see that they are sniping aggressive quotes, and an aggressive quote is essential for getting to the top of a queue and making money as an LP.

Generally, a resting order at the top of the queue has a positive value, while one at the bottom does not. Investing in speed infrastructure is the only way to get to the top of the queue. Consider the LOB below. Here, the top of the bid queue is in yellow at a price of 98; that queue position has a positive value. The position at the end of the queue, in dark blue, generally has a negative value. While there are various scenarios where it pays to stay at the end, the bottom line is that you generally want to get to yellow, and doing so implies one first takes a stab at a new, aggressive level. The yellow ask at the price of 102 is how that happens.


As noted in their paper on the LSE, the top 3 firms win about 55% of races and lose about 66% of them; the figures for the top 6 firms combined are 82% and 87%. Thus, getting to the top of a queue is an important and probabilistic game played among several would-be LPs. Budish did not look at those races. The same firms are most likely competing for queue position and, if they lose, sniping those queues; both roles rely on the same sizeable specialized investment.

Only a handful of firms are playing this game on the LSE in Budish's data, and they are playing with each other. To the extent the stale quotes are from other HFTs, they are playing a zero-sum game among themselves. However, these stale quotes are likely mostly from non-HFTs, such as retail traders on their E-Trade platforms. That most stale quotes are not in the HFT club is consistent with the fact that only 10% of Budish's race clusters included cancel orders.

Even if we assume the sniping only affects fellow HFTs, the latency tax disappears when we think of them probabilistically sniping or posting resting limit orders. Assume each of the N HFTs has a 1/N chance of posting the newest bid-ask level, which is necessary for getting to the top of the next queue, and otherwise joins the snipers; the cost of sniping then cancels out. That is, two symmetric probabilities are applied here: one to becoming the lead on a new tick, the other to reacting to the jump event. The resulting expected profit per HFT (without c) is just

(1/N) * [(1 – Pr(jump)) * (S/2) – Pr(jump) * (J – S/2) * ((N-1)/N)] + ((N-1)/N) * Pr(jump) * (J – S/2) * (1/N) = (1/N) * (1 – Pr(jump)) * (S/2)
So, when providing liquidity, the probability of getting sniped is (N-1)/N, but now this is multiplied by the probability of being on the new aggressive bid, 1/N; the probability of winning the snipe is 1/N, which is now multiplied by the probability of being a sniper, (N-1)/N. This cancels out the cost from the latency race, so it need not be passed off to liquidity traders via a spread. S=0. 
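The cancellation argument can be verified numerically; a sketch with toy values of my own choosing, where each of N HFTs is the lead LP with probability 1/N and a sniper with probability (N-1)/N:

```python
# Expected per-period profit of an HFT who is the lead LP with prob 1/N and
# a sniper with prob (N-1)/N. The sniping gains and losses cancel exactly.
def mixed_role_profit(p_jump, J, S, N):
    lp = (1 - p_jump) * S / 2 - p_jump * (J - S / 2) * (N - 1) / N
    snipe = p_jump * (J - S / 2) / N
    return (1 / N) * lp + ((N - 1) / N) * snipe

p_jump, J, N = 0.1, 5.0, 8

# At S = 0 the expected profit is zero: no latency tax to pass to customers.
assert abs(mixed_role_profit(p_jump, J, 0.0, N)) < 1e-12

# For any S, the profit reduces to (1/N) * (1 - p_jump) * S/2: the sniping
# terms have cancelled, leaving only the liquidity-trader revenue.
S = 1.7
assert abs(mixed_role_profit(p_jump, J, S, N) - (1 - p_jump) * S / 2 / N) < 1e-12
```

Competition then drives the remaining liquidity-trader revenue, and hence S, to zero.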

BCS state that it does not matter whether one always chooses to be the liquidity provider or chooses stochastically. I suspect they think it does not matter because the expected profits of posting liquidity and sniping are equal in equilibrium. However, that assumes the HFTs make a fresh decision as if they were certain to become the top of the next queue. In practice, they have to try to be the lead LP on a queue, and their success is stochastic, so they evaluate the LP and sniping roles as a package deal, applying probabilities to both roles instead of evaluating each in isolation.

Misspecified Objective Function

Another issue is that sniped quotes are assumed to be losses by marking them against a future mid-price. This would be true for a pure LP/sniper; however, many HFTs provide complementary services, like implementing VWAP trading algos for large buy-side clients. A VWAP strategy does not simply cross the spread or jump on the best bid-offer; it employs either tactic when it finds it attractive. It can include posting aggressive bids that are immediately taken, or taking new aggressive bids as an alternative to merely crossing the spread. A resting order subject to HFT sniping leans on HFT liquidity; these other HFTs assist the LP in efficiently implementing his VWAP strategy.

An HFT is in a good position to sell $X of Apple stock at tomorrow's VWAP plus a fee that covers the expected trading fees, price impact, and spread; they profit if they can implement that strategy at a lower cost. Thus, a subset of an HFT's orders may target minimizing trading costs instead of making a profit outright. How much of an HFT's limit-order trading involves this complementary tactic? Who knows, but the fact that Budish has offered no estimate, or even mentioned it, makes it a significant omitted variable.

Sequential 100 ms Auctions are Complicated

As for the alternative, frequent batch auctions held every 100 ms: this is a solution only an academic could love. The current system works very well, as evidenced by the dramatic reduction in spreads and fees since electronic market making arose (no thanks to academics, except Christie and Schultz).

The novel gaming strategies created by this mechanism are not well specified. The model does not even consider the standard case where trades happen on large queues, which is most of the time. One could easily imagine an endgame like the California Electricity market debacle circa 2000-2001, where a poorly implemented auction market was gamed, revealed, and then everyone blamed 'the market.'

Could LPs keep tight markets across instruments and market centers if matching were queued and pulsed like a lighthouse? For the discrete-time auction model to work, all exchanges would have to become frequent batch auctions with their auctions synchronized to within 1 ms. The Solana blockchain, which tries to synchronize at a hundred times that latency, goes down frequently. In the world equity market, such failures would generate chaos.

The Randomizer Alternative

In BCS, they briefly address an alternative mechanism: adding a randomized delay. They note it does not affect the race to the top. However, it does affect the amount firms should be willing to pay for speed. For example, if new orders incurred a random delay of 0 to 100 ms, the benefits of shaving 3 ms off the route from Chicago to New York would be negligible, as it would increase one's chance of winning a latency race by only a few percentage points rather than guaranteeing a win, reducing the benefit of the investment by well over an order of magnitude. In their own model, eliminating these investments directly lowers the spread. If one had to change CLOBs to reduce the allure of wasteful investments in speed, this would be a much simpler and safer approach.
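To see why randomized delay guts the value of speed, consider a toy two-firm race (my own stylized model, not from BCS): each order receives an independent uniform 0-100 ms delay, and one firm has a 3 ms head start.

```python
# Probability the faster firm wins a two-firm race when the exchange adds an
# independent uniform(0, delay_ms) delay to each order. For the faster firm's
# delay X and the slower firm's delay Y, it wins when X < edge + Y; for
# edge <= delay this is 1 - (delay - edge)^2 / (2 * delay^2).
def win_prob(edge_ms: float, delay_ms: float) -> float:
    return 1 - (delay_ms - edge_ms) ** 2 / (2 * delay_ms ** 2)

# With no head start, the race is a coin flip.
assert win_prob(0, 100) == 0.5

# A 3 ms edge now wins only ~53% of races, instead of 100% of them.
print(round(win_prob(3, 100), 3))  # 0.53
```

Without randomization, a 3 ms edge wins every race, so the incentive to spend hundreds of millions on straighter fiber collapses by well over an order of magnitude once the delay is added.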

Talk about Stale Data

The data driving his extrapolated costs are from 2005-2011 for ES-SPY and 2015 for the LSE. Strangely, his 2023 presentations at a16z and the NBER did not update the data. Remember your cell phone's capabilities circa 2005-11? Currently, the major exchanges all generate multiple hardware timestamps and correctly sequence incoming orders to within 5 nanoseconds. If he is confident that his mechanism can save the world $5B a year, getting data from the 2020s would seem an obvious step. While the technology has improved significantly, so have the tactics; no one in this field thinks a strategy backtested on message data from before 2022 is relevant. This is clearly an academic idea.

Co-located Level 2 Tactics != Regional Arbitrage

The microsecond speed race in their LSE data differs significantly from the ES-SPY arbitrage game. Co-located servers involve a trivial expense and are a massive improvement in efficiency: with co-location, you can have hundreds of firms at precisely the same distance from the matching engine, a level playing field. Without co-location, the competition for closest access would be discrete and more subject to corruption; the playing field would not be level, costs would be higher, and each firm would have to maintain its own little data center. The costs and tactics of co-location and regional arbitrage are incomparable, and we should not encourage regulators to treat them the same way.


As mentioned, the nice thing about this proposal is that it is clear enough to highlight its flaws. Those who just want to add a stamp tax to fund college (or health care, etc.) don't even try to justify it with a model; they just know it would tax people in ways that most voters would not feel directly. Nonetheless, like cash-for-clunkers, this is not an economic policy that will do any good.