Tuesday, June 14, 2022

Why Slavery in the Bible Doesn't Bother Me

The abolition of slavery is often considered a shining example of how "the arc of the moral universe is long, but it bends toward justice." Many people cannot understand how a worldview, or a 'real' God, could have any legitimacy today if it ever tolerated such an institution. I am a Christian, and while I agree that slavery is abhorrent today, in its time it was a moral advance. Those asserting that ancient tribes should have condemned it categorically are just naïve prigs.

Ancients had no moral qualms over what we today would consider horrific crimes. Members of outside tribes were so different in language, religion, and manners that they fell outside the universe of obligation a tribe applied to its own members. For example, in Homer's Iliad, King Agamemnon explains to Menelaus why Troy must be annihilated:

"My dear Menelaus, why are you so wary of taking men's lives? Did the Trojans treat you as handsomely as that when they stayed in your house? No; we are not going to leave a single one of them alive, down to the babies in their mothers' wombs, not even they must live. The whole people must be wiped out of existence and none be left to think of them and shed a tear."

Genocide needed no defense when dealing with vanquished foes. Ashoka the Great (c. 260 BC) and the Assyrian king Ashurnasirpal II (883 to 859 BC) boasted of wholesale slaughter. When Rome razed Carthage in 146 BC, roughly 150,000 Carthaginians were killed out of a total population of 200,000 to 400,000. As late as 1850, the Comanche killed all adult males and babies in battles. When a tribe is at its Malthusian limit, its apologists will have no problem justifying such tactics.

Slavery or genocide was necessary because released survivors kept on fighting. For example, in 689 BC, the Assyrian King Sennacherib destroyed Babylon and carried a statue of their god Marduk back to Nineveh. By 612 BC, the Babylonians were strong enough to return and destroy Nineveh, and the 2,000-year-old Assyrian kingdom never recovered. Genocidal conflicts were endless throughout the Bronze Age, so if a tribe like the Hebrews had unilaterally chosen to play nice and release captured soldiers, it would not have survived, and we would not even know about the Bible.

Unlike today, real existential crises existed all the time in ancient history. One of the worst genocides in human history happened in Britain and Ireland when invaders from Spain and France obliterated the natives around 2500 BC. This can be seen in the Y-chromosome data: the frequency of the G marker, once common, is now only about 1 percent, overtaken by the R1b marker, which makes up about 84% of Ireland's male population.

Around 2500 BC, much of the European population's DNA was replaced with that of people from the steppe region near the Black and Caspian seas. This is a nice way of saying the men were killed. The Steppe Hypothesis holds that this group spread east into Asia and west into Europe at around the same time, and ancient-DNA studies show that they made it to Iberia, too. Though 60 percent of the region's total DNA remained the same, the Y chromosomes of the inhabitants were almost entirely replaced by 2000 BC. The history of humanity is not just the history of warfare but of the genocides that accompanied it.

Slavery requires a society complex enough to have a formal justice system that enforces property rights, as highlighted by the many references to slavery in our oldest legal texts, such as the Code of Hammurabi, law codes dating from 2300 to 1700 BC. It is an institution enforced not merely by the owner but by a community, as otherwise, enslaved people would run off at the first opportunity for a better life. Conquering tribes did not take the defeated men home and have them work as a group; they sold them off to individuals, diluting them sufficiently that a slave uprising would be an impossible coordination task. Most people would prefer slavery to execution, making this a moral advance.

The feudal system, applied to a continent united by a shared religion, reduced the need for genocide or slavery in Europe. Serfs would serve their king regardless of which cousin won the job, and in either case, they would be allowed to worship the same god, so it did not matter too much. Yet the entire time, ethics were subject to prudential rationality. For example, ransoming captured nobility was financially prudent and honorable, but this principle was relaxed when costly, as all principles are. Thus, at the battle of Agincourt (1415), the English took so many prisoners that King Henry worried they might overpower their guards; he violated the rules of war by ordering the immediate execution of thousands. The selective application of principles in times of existential crisis is rational and predictable (as Cicero noted, "In times of war, the laws fall silent.").

Prison as punishment is an invention of the early 1800s, an alternative to corporal punishment. Before that, jails existed merely to hold people until trial, as no society could afford to house prisoners for long, which is why the death penalty was so prominent. As late as the 1700s, 222 crimes were punishable by death in Britain, including theft, cutting down a tree, and counterfeiting tax stamps. The alternatives, such as fines or property forfeiture, were beyond the means of the average criminal, and the non-punishment of criminals has never been an equilibrium. As Thomas Sowell has frequently noted, a policy must be judged by its trade-offs, not just its benefits, and these are often manifest in the alternatives.

In Knowledge, Evolution, and Society, Hayek argued that intellectuals do not create morals; morals grow and sustain themselves via natural selection. John Maynard Smith applied this reasoning to animal behavior, outlining how a surviving strategy must be an evolutionarily stable strategy: the outcome of iterated games where payoffs affect the distribution of players in the next round.

Evolutionary Game Theory

Mores, customs, and laws must be consistent with strengthening the tribes that practice them, especially in the days when genocide was common. Productivity growth via social institutions like the division of labor, or technologies like the plow, changes the costs and benefits of human strategies. With more wealth, one can tolerate costs more easily, reducing the need for draconian punishment. This is a good argument for prioritizing policies that increase output over those targeting fanciful existential threats; wealthier societies can afford mores, customs, and laws that in a poor community would be irrelevant because they are infeasible.

The Bible laid the foundation for eliminating slavery once society could afford it. In the Bible, enslaved Christians are considered equal to their masters in moral worth (Gal. 3:28). Masters are to take care of their slaves, and slaves are encouraged to seek freedom (1 Cor. 7:21). In contrast, Stoics of that time, like Seneca and Epictetus, never promoted manumission or the axiom that all men have equal moral status. They noted that slaves were also human, but this just implied slave owners should be nice to them; Antebellum southerners were fond of quoting Seneca. The Philosophes of the 18th century, such as Rousseau or Kant, wrote almost nothing on the issue of slavery as practiced in their time.

The abolitionist movement only started in the late 18th century, when English Christians (e.g., the Clapham Sect) and American Quakers began to question the morality of slavery. The principles underlying the condemnation of slavery revolve around the equality and dignity of all people. This was the primary motivational force of the leading 18th-century British abolitionist William Wilberforce, who viewed all people as "made in the image of God." This stands in contradiction to atheism, which can neither justify nor defend the equality of humanity and its qualitative difference from any other sentient animal. While many Christians used the Bible to justify slavery, the more important point is that the seeds of slavery's eventual overthrow were planted in Christian theology and championed by Christians, not secular philosophy and philosophers.

One can object to Christianity for many reasons, but it is ludicrous to suppose that slavery in the Bible proves it is morally bankrupt. If the Hebrews had not practiced slavery, they either would have perished or would have had to apply the viler tactic of genocide. If the first Christians had campaigned for the immediate abolition of slavery in an empire where 10-20% of the populace were slaves, the Roman government would have crushed them in the first century AD.

It is even more ludicrous to assert that because today's humanist intellectuals have never tolerated slavery, their worldview is morally superior to one based on a text written thousands of years ago that did. A mere century ago, humanist progressives were promoting colonialism, scientific racism, communist tyrannies, and fascism. Unlike slavery in the Bible, these were not the lesser evil policy given constraints of the time, but rather new policies foisted from above that increased human suffering and decreased human flourishing. To say that these errors were not 'real humanism' is special pleading.

Slavery was known in almost every ancient civilization. Outside of that sphere, slavery is rare among hunter-gatherer populations because it requires economic surpluses, the division of labor, and a high population density to be viable. Further, their social organization is too limited to create anything approaching a battalion, so they never engage in what one would call a war, and are thus never presented with the problem of what to do with one thousand enemy soldiers. 

Thursday, June 09, 2022

Uniswap LPs are Losing Money

[I'm posting these on Substack at efalken.substack.com, but I thought I'd post a couple here in case some people don't see those.]

It is easy to ignore the option value when providing liquidity (LP for liquidity provider) because this is second-order relative to the risk from the price of one’s assets changing. Consider an unrestricted range where one provides $500k in ETH and USD at an initial price of $2500. The initial position is represented by the black line, the pool position by the red, and the impermanent loss (IL) is the difference between the two.
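A minimal sketch of that comparison, assuming a v2-style full-range pool whose value is 2·L·√p (the $500k/$500k split at $2500 implies liquidity L = 10,000; the function names are mine):

```python
import math

# Full-range (Uniswap v2-style) position: $500k ETH + $500k USD at p0 = $2500
p0 = 2500.0
liquidity = 1_000_000 / (2 * math.sqrt(p0))   # L = 10,000

def hold_value(p):
    """Black line: just hold the initial 200 ETH and $500k of USD."""
    return 200 * p + 500_000

def pool_value(p):
    """Red line: LP position value, 2 * L * sqrt(p)."""
    return 2 * liquidity * math.sqrt(p)

def impermanent_loss(p):
    """Difference between the two; zero at p0, negative everywhere else."""
    return pool_value(p) - hold_value(p)
```

At $2500 the difference is zero; at $3000 the LP is worth about $4,555 less than simply holding, and the loss is symmetric in that it appears for moves in either direction.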

Another way to see this second-order distinction is by comparing that difference, the IL, with the pnl of the initial position. The orange line seems almost irrelevant. Almost.

It turns out that the IL will cost you around 12.5% annually for a token with a 100% annualized volatility (ETH’s vol is about 90%). Many people are being fleeced because they only look at the fee income in these pools.

The current automated market maker (AMM) LP situation is reminiscent of convertible bonds, where the seller of the bond includes an option for the buyer. If the stock price goes up, the bondholder can exchange his bond for a share of stock. As the value of this option is positive to the buyer, this lowers the bond issuer’s yield, seemingly saving the company money via a lower interest expense. Many ignorant Treasurers reasoned, “if the price goes up a lot, my shareholders will be happy, so they won’t be mad at me for some opportunity cost via dilution. If the price goes down, the shareholders will be happy because I saved the company money by selling a worthless option. Win-win!”

For decades, convertible bonds were underpriced because their option value was undervalued, driven by the ignorant reasoning above. Savvy investors like financial wizard Ed Thorp bought these bonds, and when hedged within a hedge fund, this strategy generated consistent Sharpe ratios near 2.0 without any beta through 2004. Eventually, investment banks became good at separating the options from these convertible bonds via derivatives, which made the value of the options within these bonds more transparent. Competition via arbitrage pushed convertible bond prices up to accurately reflect the value of this option, and the days of easy alpha in convertible bonds were over.

An LP offers a pair of tokens for people to buy or sell at the current price. If you provide ETH and USDC to a pool, you are implicitly offering to buy or sell at every instant. In a limit order book, it would be analogous to offering to buy or sell Tesla at $750 when the current price is $750. If you leave that offer out there for a week, clearly, you will have bought only if the price went down and sold only if the price went up. This is called 'adverse selection' because your standing offer selects the trades that are adverse to you.

Uniswap’s AMM isn’t exactly the same because the quantities offered do not have a fixed price, but the general idea is the same. It is like selling a put and a call at a fixed price, also known as a short straddle.

A short straddle payout looks like this:

The payout for a short straddle is never positive, but the option is sold for a positive amount. In equilibrium, the price should be slightly above the average payoff to compensate the seller for taking on risk, as the seller could potentially lose a lot of money. In contrast, the option buyer can only lose precisely as much as he paid. The risky nature of the option seller's position is reflected in its convexity: the seller is exposed to negative convexity (negative second derivative) while the buyer gets positive convexity. Positive convexity is like a lottery ticket; negative convexity is like asking The Godfather to do you a favor.

The cost of selling a straddle, or any payoff with convexity, can be calculated in two ways (see my earlier post here). One is by looking at the expected payoff of the option and discounting it with the risk-free rate. In the above graph, you would weight the points on the straight lines (intrinsic value) using a probability distribution given the time until expiration, current price, and expected volatility. This is the intuition behind the binomial options model. Another way is by calculating the convexity of the position, called ‘gamma,’ and multiplying it by the underlying price variance. This is the Black-Scholes method. One method assumes the seller does not dynamically hedge and one assumes the seller does, but that does not affect the value of the option sold, just the risk preferences or stupidity of the seller.
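A quick check that the two methods agree, assuming zero rates and an at-the-money one-month straddle on a $2500 token at 100% vol (illustrative parameters; the helper functions are mine):

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def straddle_value(S, K, sigma, T):
    """Black-Scholes straddle (call + put) with zero rates."""
    d1 = (math.log(S / K) + 0.5 * sigma**2 * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    call = S * norm_cdf(d1) - K * norm_cdf(d2)
    put = call - (S - K)              # put-call parity with r = 0
    return call + put

S, K, sigma, T, dt = 2500.0, 2500.0, 1.0, 30 / 365, 1 / 365

# Method 1 (expected-payoff view): one day of time decay from the model price
decay = straddle_value(S, K, sigma, T) - straddle_value(S, K, sigma, T - dt)

# Method 2 (Black-Scholes view): gamma times the one-day price variance
d1 = 0.5 * sigma * math.sqrt(T)      # ATM with r = 0
gamma = 2 * math.exp(-0.5 * d1**2) / (math.sqrt(2 * math.pi) * S * sigma * math.sqrt(T))
theta = 0.5 * gamma * S**2 * sigma**2 * dt
```

The two numbers agree to within a percent or so, the gap being discretization error from using a whole day rather than an instant.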

In application to providing liquidity in Uniswap, a good way to see the value of the options sold is by calibrating a straddle to replicate the pool position. In the attached spreadsheet, I present an ETH-USDC pool with an initial ETH price of $2500. The position was initially valued at $1MM and using the terminology of Uniswap, it has a liquidity=$10k (see my earlier post on how the term liquidity is used by Uniswap).

The IL for this position looks very much like a short straddle, so it seems fruitful to find the pool’s option analog. This is called the ‘replicating portfolio’ method of valuation. If you can replicate the payoffs of asset A with portfolio B, then they should have equal value via arbitrage. Here, the LP position value is

LP position = marketValue(liquidity=10k, p0=$2500)+IL

We know how to replicate and value the first term; it is just a $500k position in ETH and $500k in USD. So we need to find an option position that replicates the IL.

For the LP position, first, we start with the delta. For a full-range position, the pool's value is V(p) = 2·L·√p, so the LP's delta moves linearly in the reciprocal of the square root of the price:

delta = dV/dp = L/√p

The derivative of this with respect to price is the gamma:

gamma = d(delta)/dp = -L/(2·p^1.5)

So, the gamma for a pool position is pretty straightforward.

The gamma for an option, which is identical for a call and a put, is Γ = φ(d1)/(p·σ·√T), where φ is the standard normal density. As we are replicating a straddle, we need to multiply this by 2.

[See here for a definition of these terms. Fun fact: Jimmy Wales’s first Wikipedia post was on the Black-Scholes model. This formula applies to one option, so the notional is the price. To generate the gamma for an arbitrary notional, we multiply this result by N/p, the number of options needed to generate a notional amount of N].

Option values, and their 'greeks' like gamma, are determined by the current price, volatility, time to expiration, notional, and strike. Only the current price is fixed, so this gives us an infinite combination of parameters to work with, but it's helpful to simply assume the volatility is the asset's historical volatility. Let us use 100% for ETH to make it simple (it's probably about 85%, but close enough). Let us also use 1-month for the expiration, as this is the most popular option maturity. This leaves us only with the notional and the strike. We can solve for these parameters with a variety of hill-climbing methods, including the solver in Excel (see spreadsheet for this example).
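For illustration, here is a pure-Python stand-in for that solver (the grid ranges and the per-contract gamma convention are my assumptions): for each candidate strike, the best-fit number of straddle contracts has a closed-form least-squares solution, so a simple grid search over strikes is enough.

```python
import math

L_liq, sigma, T = 10_000.0, 1.0, 30 / 365

def pool_gamma(p):
    # Gamma of a full-range pool position with liquidity L: -L / (2 p^1.5)
    return -L_liq / (2 * p**1.5)

def unit_straddle_gamma(p, K):
    # Black-Scholes gamma (r = 0) for one call plus one put struck at K
    d1 = (math.log(p / K) + 0.5 * sigma**2 * T) / (sigma * math.sqrt(T))
    return 2 * math.exp(-0.5 * d1**2) / (math.sqrt(2 * math.pi) * p * sigma * math.sqrt(T))

grid = [2500 + 10 * i for i in range(-25, 26)]   # roughly a day's worth of price moves

best = None
for K in range(2000, 3001, 10):
    g_u = [unit_straddle_gamma(p, K) for p in grid]
    g_p = [-pool_gamma(p) for p in grid]
    # Least-squares contract count for this strike, then the fit error
    n = sum(a * b for a, b in zip(g_p, g_u)) / sum(b * b for b in g_u)
    err = sum((a - n * b) ** 2 for a, b in zip(g_p, g_u))
    if best is None or err < best[0]:
        best = (err, K, n)

err, K_star, n_star = best
# Shorting n_star straddles struck near K_star replicates the pool's gamma profile
```

With these inputs the fitted strike lands near the current price and the contract count in the mid-30s, matching the pool's -0.04 gamma at $2500 almost exactly.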

Doing so generates the following comparison.

Even though I just targeted the gamma, the fit is almost perfect over the distribution of prices that span a day’s potential price movement. In this case, the straddle value is $21k, and the theta, or daily time decay, is $342.

For the option, the theta is calculated via the Black-Scholes relation (with zero rates):

theta = ½ × gamma × p² × σ² × dt

An option position's theta must equal the daily cost of the IL via arbitrage, as implied by the Black-Scholes framework.

So, in this case, the LP's gamma at p=2500 and liquidity=10,000 is -0.04. The one-day variance is 2500^2*(100%^2)/365=$17,123, so the theta derived from the LP position is $342.
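These numbers can be reproduced in a few lines, assuming the full-range pool gamma is -L/(2·p^1.5), consistent with the -0.04 figure:

```python
liquidity, p, sigma = 10_000, 2_500, 1.0     # 100% annualized vol

gamma = -liquidity / (2 * p**1.5)            # -0.04
one_day_variance = p**2 * sigma**2 / 365     # ~17,123
theta = 0.5 * abs(gamma) * one_day_variance  # ~$342 of decay per day

annual_bleed = theta * 365 / 1_000_000       # ~12.5% of the $1MM position
```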

Alternatively, one can apply probabilities to the pool IL losses over a day, generating $345 (off slightly due to approximation errors).

The bottom line is that this $1MM pool position bleeds $342 a day, which can be calculated in several ways. This adds up to a 12.5% loss over a year.

We can apply this to two of the most popular Uniswap pools: the ETH-USDC 0.05% pool and its higher-fee instantiation, the 0.3% pool. The revenue for LPs is just daily USD volume traded times the fee rate (0.05% or 0.3%). The cost is derived via the gamma (a function of liquidity, price, and volatility). We can pull the liquidity and price directly, and for the volatility, I used 15-minute returns throughout the day (24 × 4, or 96, observations) to get the daily variance. Average daily liquidity encountered by trades, and the daily volume traded for these pools, can be pulled from places like Dune.
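The per-pool test is then just fee revenue versus gamma cost. A sketch with illustrative, made-up inputs (the real volume, liquidity, and variance come from sources like Dune, not these numbers):

```python
def lp_daily_pnl(volume_usd, fee_rate, liquidity, price, daily_var):
    """Fee revenue minus the option-decay cost borne by LPs."""
    revenue = volume_usd * fee_rate
    gamma = liquidity / (2 * price**1.5)   # magnitude of the pool gamma
    cost = 0.5 * gamma * daily_var
    return revenue - cost

# Illustrative numbers only -- not actual pool data
pnl = lp_daily_pnl(volume_usd=500_000, fee_rate=0.0005,
                   liquidity=10_000, price=2_500,
                   daily_var=2_500**2 * 1.0 / 365)
# Negative here: $250 of fees against ~$342 of decay
```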

Uniswap data on USDC-ETH pools (downloadable spreadsheet).

This table shows a persistent LP loss, with revenues around 80% of costs. This estimate is consistent with the results by TopazeBlue/Bancor earlier this year (see p.25), though that paper emphasized that ‘half of the LP providers’ lost money. This assumes LPs did not hedge their positions. More importantly, this should not be a primary takeaway, as it implies that all one has to be is above average to make money as an LP. As everyone thinks they are above average, this is a reckless implication.

There is no way for an LP to make money off these pools as currently structured; there is no trick to make negative gamma disappear. Either the fees need to increase, or the volume needs to grow. The effect of a fee increase is obvious; as for volume, the issue is that these pools need more noise traders, who trade for convenience as opposed to arbitrage. Noise traders offset each other: when people act randomly, some buy and some sell, not affecting the price much at any time. If these pools can get more noise traders, LPs could make profits even while fees and volatility stay the same.

Currently, a lot of naive LPs are giving away money to arbitrageurs.

Tuesday, June 07, 2022

One-Month Trading Strategies

About half of Robeco’s Quantitative Investing team recently published a short paper on monthly trading strategies (see Blitz et everybody, Beyond Fama-French Factors: Alpha from Short-Term Signals). I can imagine these guys talking about this stuff all the time, and someone finally says, “this would make a good paper!”

‘Short-term’ refers to one-month trading horizons. Anything shorter than a month does not significantly overlap with standard quant factors like ‘value’ because it involves different databases: minute or even tick data, level 2 quotes. Such strategies are less scalable and involve more trading, making them more the province of hedge funds than large institutional funds. Alternatively, super-short-term patterns can be subsumed within the tactics of an execution desk, where any alpha would be reflected in lower transaction costs.

The paper analyzes five well-known anomalies using the MSCI World universe, which covers roughly the 2,000 largest traded stocks in developed countries.

  1. Monthly mean-reversion. See Lehmann (1988). A high return in month t implies a low return in month t+1.

  2. Industry momentum. See Moskowitz and Grinblatt (1999). This motivates the authors to industry-adjust the returns in the mean-reversion anomaly mentioned above so it becomes more independent.

  3. Analyst earnings revisions over the past month. See Van der Hart, Slagter, and van Dijk (2003). This signal uses the number of upward earnings revisions minus the number of downward revisions, divided by the total number of analysts, over the past month. It is like industry momentum in that it implies investor underreaction.

  4. Seasonal same-month stock return. See Heston and Sadka (2008). This strategy is based on the idea that some stocks do well in specific months, like April.

  5. High one-month idiosyncratic volatility predicts low returns. See Ang, Hodrick, Xing, and Zhang (2006). This will generally be a long low-vol, short high-vol portfolio.

All of these anomalies initially documented gross excess monthly returns of around 1.5%, which, as expected, are now thought to be a third of that at best; at least they are not zero, the fate of most published anomalies.

Interestingly, Lehmann’s paper documented the negative weekly and monthly autocorrelation that underlay the profitable ‘pairs-trading’ strategy that flourished in the 1990s. Lo and MacKinlay also noted this pattern in 1988, though their focus was testing random walks, so you had to read between the lines to see the implication. DE Shaw made a lot of money trading this strategy back then, so perhaps Bezos funded Amazon with it. A popular version was pairs trading, where, for example, one would buy Coke and sell Pepsi if Coke went down while Pepsi was flat. Pairs trading became much less profitable around 2003, right after the internet bubble burst. Still, this is a great example of an academic paper that proved valuable for those discerning enough to see its real value (note that Lehmann’s straightforward strategy paper was never highly popular in academia).

An aside on DE Shaw. While making tons of money off of this simple strategy, they did not tell anyone about it. They would tell interviewees they used neural nets, a concept like AI or machine-learning today: trendy and vague. I have witnessed this camouflage tactic firsthand several times, where a trader’s edge was x, but he would tell others something plausible unrelated to x. Invariably, the purported edge was more complicated and sophisticated than the real edge (e.g., “I use a Kalman filter to estimate latent high-frequency factors that generate dynamic alphas” vs. “I buy 50 losers and sell 50 winners every Monday using last week’s industry-adjusted returns”).

If each strategy generates a 0.5% monthly return, it is interesting to think about how well all of them do when combined. Below is a matrix of the correlation of the returns for these single strategies. You can see that by using an industry-adjusted return, the monthly mean-reversion signal becomes significantly negatively correlated with industry momentum. While I like the idea of reducing the direct offsetting effects (high industry returns —> go long; high returns —> go short), I am uncomfortable with how high the negative correlation is. In finance, large correlations among explanatory variables suggest redundancy and not efficient diversification.

STR=short-term mean reversion, IND_MOM=industry momentum, REV30D=analyst revision momentum, SEA_SAME=seasonal same month, iVOL=one-month idiosyncratic volatility.

Their multifactor approach normalizes the inputs into Winsorized z-scores and adds them up. This ‘normalize and add’ approach is a robust way to create a multifactor model when you do not have a strong theory (e.g., Fama-French factors vs. Black-Scholes). Robyn Dawes made a case for this technique here; his paper was included in Kahneman, Slovic, and Tversky’s seminal Judgment under Uncertainty, a collection of essays on bias published in 1982. In contrast, multivariate regression weights (coefficients) will be influenced by the covariances among the factors as well as each factor’s correlation with the variable being predicted. These anomalies are known only for their correlation with future returns, not their correlations with each other. Thus, excluding their correlation effects ex ante is an efficient way of avoiding overfitting, in the same way that a linear regression restricts but dominates most other techniques (simple is smart). The key is to normalize the significant predictors into comparable units (e.g., z-scores, percentiles) and add them up.
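The ‘normalize and add’ recipe is a few lines of code. A sketch with hypothetical signal values (the winsorization level of ±3 standard deviations is my assumption):

```python
import statistics

def winsorized_z(xs, clip=3.0):
    """Cross-sectional z-scores, clipped at +/- clip."""
    mu, sd = statistics.mean(xs), statistics.stdev(xs)
    return [max(-clip, min(clip, (x - mu) / sd)) for x in xs]

# Hypothetical raw signals for five stocks
str_rev = [0.8, -1.2, 0.3, 2.5, -0.4]   # short-term reversal signal
rev30d  = [1.1, 0.2, -0.9, 0.5, -0.7]   # analyst revisions signal

# Equal-weight composite: normalize each signal, then just add
composite = [sum(zs) for zs in zip(winsorized_z(str_rev), winsorized_z(rev30d))]
# Rank on `composite`: long the top names, short the bottom
```

The point is that no estimated weights enter: each signal contributes one unit of standardized information, so noisy covariance estimates cannot corrupt the combination.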

I should note that famed factor quant Bob Haugen applied the more sophisticated regression approach to weighting factors in the early 2000s, and it was a mess. This is why I do not consider Haugen the OG of low-vol investing. He was one of the first to note that low volatility was correlated with higher-than-average returns, but he buried that finding among a dozen other factors, each with several flavors. He sold my old firm Deephaven a set of 50 factors circa 2003 using rolling regressions, and the factor loading on low vol bounced from positive to negative; the model showed historical returns of 20+% every year for the prior 20 years, yet it never worked while I watched it.

I am not categorically against weighting factors; I just think a simple ‘normalize and add’ approach is best for something like what Robeco does here, and in general one needs to be very thoughtful about restricting interaction effects (e.g., don’t just throw them into a machine learning algorithm).

To handle transaction costs, the paper documents the costs that would make each strategy generate a zero return. This has the nice property of adjusting for the fact that some implementations have lower turnover than others, so that aspect of the strategy is included within the tests. The y-axis in the chart below represents the one-way transaction cost that would make the strategy generate a zero excess return. Combining signals almost doubles the returns, which makes sense: you should not expect an n-factor model to be n times better than a 1-factor model.
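The break-even metric itself is simple arithmetic: if g is the gross monthly return, the one-way cost c that zeroes the net return solves g = 2 × turnover × c. A sketch with made-up numbers (the factor-of-two convention, charging each unit of turnover on both the buy and the sell, is my assumption):

```python
def breakeven_cost(gross_monthly_return, monthly_turnover):
    """One-way transaction cost that makes the net return zero.
    Assumes each unit of turnover pays the cost twice (buy + sell)."""
    return gross_monthly_return / (2 * monthly_turnover)

# Hypothetical: 0.5% gross per month, the whole book turned over monthly
be = breakeven_cost(0.005, 1.0)   # 0.25% one-way
```

A lower-turnover implementation of the same signal gets a higher break-even cost, which is exactly the adjustment the paper's y-axis builds in.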

From Blitz et al

They note some straightforward exit rules that reduce turnover, and thus transaction costs, more than they lower the top-line returns. Thus, in the chart below, no single anomaly can overcome its transaction costs, but a multifactor approach can. Further, straightforward adjustments to entry and exit rules can increase the net return even while decreasing the gross return (far-right vs. middle columns).

From Blitz et al

They also present data on a long-only implementation and find the gross excess return falls by half, suggesting the results are not driven solely by the short side. This is important because it is difficult to get a complete historical dataset of short borrow costs, and many objectively bad stocks cannot be shorted at any rate.

I am cautious about their results for a couple of reasons. First, consider the chart below from the Heston and Sadka (2008) paper on the monthly seasonal effect. It shows the average lagged regression weighting on prior monthly returns for a single stock. Note the pronounced 12-month spike, as if a stock has a particular best month that persists for 20 years. The stability of this effect over time looks too persistent to be true, suggesting a measurement issue more than a real return.

From Heston and Sadka (2008)

Another problem in this literature is that the ‘alphas,’ or residuals relative to a factor portfolio, are often better thought of as reflections of a misspecified model than as excess returns. Note that the largest factor in these models is the CAPM beta. We know that beta is at best uncorrelated with cross-sectional returns. Thus, you can easily generate a large ‘excess return’ merely by defining it relative to a prediction everyone knows has a systematic bias (i.e., high-beta stocks should do better, but don’t). You could create a zero-beta portfolio, add it to a beta=1.0 portfolio, and seemingly create a dominant portfolio, but that is not obvious: the time-varying idiosyncratic risk of these tilts does not diversify away and could reduce the Sharpe of a simple market portfolio.

US stocks presorted by CAPM beta, 2000-2020

Interestingly, Larry Swedroe recently discussed a paper by Medhat and Schmeling (2021). They found that monthly mean reversion only applies to low-turnover stocks; for high-turnover stocks, the effect is the opposite: month-to-month momentum. I do not see that in my data, but it highlights the many degrees of freedom in these studies. How do you define ‘excess’ returns? What is the size cut-off in your universe? What time period did you use? Did you include Japan (their stock market patterns are as weird as their TV game shows)?

Robeco’s paper provides a nice benchmark and outlines a straightforward composite strategy. If you are skeptical, you might simply add them to your entry and exit rules.

Wednesday, December 02, 2020

Dates of US Bear Markets Since 1873

It's useful to test longer-term rules, such as trend-following, across many cycles. To this end, it is useful to have dates for bull and bear markets. If your sample runs from, say, 2010 to 2019, you will have 2,520 daily data points but no bear markets. You could then 'prove' such a strategy works by noting the significance level of your statistics, but anyone with some knowledge of history would see the error. A strategy optimized over only bull markets is, as they say, 'problematic.' The US stock market has been in a bear market 20% of the time since 1871.

I have identified 24 US bear markets since 1873. I'd like to say it is a purely objective classification, but there are some judgment calls. Basically, I looked for the traditional "20% drawdown" definition. Several bear markets did not actually meet this standard--1990, 1957, 1873--but I included them anyway out of respect for history. For example, the 'Panic of 1873' occurred in the midst of European turmoil, started when the US and many other nations demonetized silver, was the first wave of railroad failures, had many bank failures, and even caused a 10-day closure of the New York Stock Exchange. The recession from 1873-77 was the longest in US history.  
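The mechanical part of that classification, finding 20% peak-to-trough drawdowns, can be sketched as below; the judgment calls (1873, 1884) would still be manual overrides. A simple running-peak scan:

```python
def bear_markets(prices, threshold=0.20):
    """Return (peak_index, trough_index) pairs for each peak-to-trough
    decline of at least `threshold`; a bear ends when a new high is set."""
    bears, in_bear = [], False
    peak_i = trough_i = 0
    for i, p in enumerate(prices):
        if p > prices[peak_i]:
            if in_bear:                      # new high closes out the bear
                bears.append((peak_i, trough_i))
                in_bear = False
            peak_i = trough_i = i            # reset the running peak
        else:
            if p < prices[trough_i]:
                trough_i = i
            if prices[trough_i] / prices[peak_i] - 1 <= -threshold:
                in_bear = True
    if in_bear:                              # still in a bear at sample end
        bears.append((peak_i, trough_i))
    return bears
```

On a toy series like [100, 110, 90, 80, 85, 115, 120], the 110-to-80 slide (a 27% decline) is flagged as one bear market, while a 5% dip is ignored.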

On the other hand, there were some that perhaps are false positives. The 1884 bear market was only down 2% in real terms, though 21% nominally. This bear market is referred to as the 'panic of 1884.' As the US just returned to the gold standard in 1879, many Europeans were skeptical the US could maintain it and were selling their US assets. Many businesses and banks failed. To say this was not a bear market because in real terms the markets were virtually flat seems wrong. 

I put in the prior bull market gain to give better context. It also explains why I have two bear markets from 1937-42, as opposed to the one some have. Given the 68% increase from March 1938 through October 1939, and that the cause of the '37 slide is so different from that of the '39 slide, it does not make sense to consider the entire '37-'42 period a single bear market. 

I used Shiller's data prior to 1926 and Ken French's data afterward. Shiller's data is monthly, which tends to soften cycles, missing the true peaks and troughs. Shiller's data is also a little funky; for example, his February 2020 return is flat while the S&P 500 was down 8%. These discrepancies tend to leak over to other months, however, so for measuring bull and bear market returns they are probably less problematic. 

While not perfect, these dates are useful, at least as a starting point. If you have suggestions on amendments, I would appreciate them. You can download this here.

Corrections thus far: 3/12/20 should be 3/23/20 (Elfenbein)

Start     End       Decline  Prior Rise  Months  Comments
Feb-1873  Nov-1873  -18%     #N/A        10      Left silver, failure of Jay Cooke, railroads, Europe weak
Mar-1876  Jun-1877  -33%     32%         16      End of longest recession
Sep-1882  Jan-1885  -21%     198%        29      Foreign run on US assets due to worry about US gold standard
Jan-1893  Aug-1893  -25%     89%         8       Failure of railroads, banks
Sep-1895  Aug-1896  -19%     34%         12      Double dip from last recession
Sep-1902  Oct-1903  -26%     194%        14      Minor recession
Oct-1906  Nov-1907  -32%     76%         14      A run on Knickerbocker Trust, JPMorgan leads bailout
Nov-1916  Dec-1917  -28%     160%        26      Start of inflation, US entered WW1
Oct-1919  Aug-1921  -23%     60%         23      Prices fall by 50% after rising 100% in war
9/7/29    2/27/33   -84%     635%        43      Great Depression
3/6/37    3/31/38   -51%     416%        14      Short-lived massive retained earnings tax
10/25/39  4/28/42   -31%     68%         31      Start of WW2
5/29/46   6/6/47    -24%     237%        13      End of war transition
8/2/56    10/22/57  -17%     421%        16      Minor recession
12/12/61  6/26/62   -28%     122%        7       Kennedy micro-manages steel price increases
2/9/66    10/7/66   -21%     101%        9       Fed tightens, relents
11/29/68  5/26/70   -37%     71%         19      Collapse of merger wave, tech boom
1/11/73   10/3/74   -48%     88%         22      OPEC oil crisis
11/28/80  8/12/82   -20%     246%        21      Peak inflation, Volcker Fed tightening
8/25/87   12/4/87   -33%     281%        4       Fed tightens to support dollar, market crash
1/2/90    10/11/90  -18%     71%         10      Run-up to Iraq War I, junk bond & Comm RE bust
3/24/00   10/9/02   -50%     575%        31      Collapse of tech bubble, 9/11 attack
10/9/07   3/9/09    -55%     131%        18      Mortgage crisis
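As a back-of-envelope check on the "20% of the time" figure, one can sum the Months column against the full span. The 1871-2020 span is my assumption, and the rows shown here end with the 2007-09 episode, so the share is only approximate:

```python
# Rough sanity check of the claim that the US market has been in bear
# markets ~20% of the time since 1871, using the Months column above.
bear_months = [10, 16, 29, 8, 12, 14, 14, 26, 23, 43, 14, 31,
               13, 16, 7, 9, 19, 22, 21, 4, 10, 31, 18]
total_months = (2020 - 1871) * 12   # ~149 years of monthly data (assumed span)
share = sum(bear_months) / total_months
print(f"{sum(bear_months)} of {total_months} months in bear markets ({share:.0%})")
```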

Monday, November 30, 2020

$1000 Covid Bet with Robin Hanson

"The whole aim of practical politics is to keep the populace alarmed by menacing it with an endless series of hobgoblins, most of them imaginary." ~ HL Mencken

Robin Hanson
In February, Robin Hanson tweeted that he would take bets on the nascent COVID-19 pandemic, and he was generally 'long' the severity. His interest is non-partisan, as he is a seminal proponent of prediction markets for policy debates. The idea is rather simple: forecasts are more accurate when forecasters have to put money on them. Talk is cheap. Interestingly, the biggest obstacle to this idea is legal, as lawmakers discourage these markets by highlighting bizarre edge cases (as with crypto, these usually involve terrorists). More practically, regulators and their industry constituents want to make sure such markets do not encroach on their protected markets.

When covid arose in February, I knew that historically bad flu seasons generate an extra 40k deaths in some years, while high-profile viruses like avian flu tended to be limited (3k deaths in the US). Further, virulence is inversely correlated with contagiousness: people really do not like dying, and so are very good at quarantining those infected by deadly diseases like SARS and Ebola. I knew that death data could be manipulated, as with African AIDS deaths, but I thought death statistics in the USA would be relatively immune to this tactic. Thus, when he gave me a number of 250k deaths by the end of the year, I thought it impossible and offered 10-1 odds on $100 ('impossible' means 10% chance when applied to things I understand at this level; I'm a doctor, but not a real doctor).

I just paid him $1,000. I lost the bet fair and square, because implicit in the bet was that we would use conventional metrics of covid deaths, such as those of the Centers for Disease Control and Prevention (CDC) or the World Health Organization (WHO). I have been following the CDC, and while one page reports 244k, it will pass 250k soon; another page on their site reports 265k. Even if I take the minimum, the result is inevitable. In hindsight, my error was not anticipating that covid would become politicized. Robin was right for the wrong reason (covid deaths are inflated; it is not comparable to the Spanish Flu), but that often happens in bets.

The SARS Effect

In January, China reported the first death from the new covid virus, and by mid-month, the WHO had published a comprehensive set of guidance documents on the new disease. In a prelude to the panic, the CDC, following the WHO's lead, was confident that a new pandemic was at hand. The WHO's initial January report specifically referenced 2003 SARS, the highly lethal respiratory disease that formed the basis for many of the Crisis Response Protocols developed by the health care bureaucracy. Covid was the pandemic our experts had extensively planned for, which proved disastrous.

In the 2003 SARS pandemic, many healthcare workers became infected, and hospital transmission was the primary accelerator of infections, accounting for 72% of cases in Toronto and 55% of probable cases in Taiwan. This pattern is so common and terrifying that there is a special word for it: nosocomial, meaning transmitted in a healthcare facility. While SARS conventionally refers to the 2003 pandemic, it is also a generalized term (Severe Acute Respiratory Syndrome), and our current virus is in the same clade as the 2003 SARS virus: it is the SARS-CoV-2 virus that causes the disease COVID-19 (hereafter, covid). Over the past 17 years, health care institutions created hundreds of detailed guides on how to quarantine, report, and control the next SARS outbreak, with hospital protocols the first line of defense.

Reviews of the SARS experience noted the importance of a detailed protocol for dealing with such diseases. In Toronto, infected health care workers all reported that they had worn the recommended protective equipment, including gowns, gloves, specialized masks, and goggles, each time they entered the patient's room. However, the workers had not been fit-tested for their masks, and one nurse admitted his mask didn't fit well. It was also noted that some of the workers might not have followed the correct sequence in removing their protective equipment (i.e., gloves first, then mask and goggles). 

The emphasis on small details created a bureaucratic mindset that ignored common sense because the motivation was preventing not merely the next SARS, but the next worst-case-scenario SARS  (see The Andromeda Strain or the latest Planet of the Apes series). The focus was on health care workers at the expense of patients, which seems simply self-serving, but it makes sense if your vision of a pandemic comes from dystopic science-fiction movies. If all health care providers die first, everyone else is sure to die next because, without health experts, health experts expect society to revert to Medieval life expectancies. Thus the priority was not so much healing the sick but getting them out of circulation. When the objective is to prevent an existential threat to humanity, virtually any extreme measure with large present costs is justified.

At the beginning of the covid crisis, the CDC recommended health care workers don full Personal Protective Equipment (PPE) for each patient encounter, consisting of the following:

  • A disposable N95 respirator face mask that achieves a seal around the mouth and nose
  • Gloves
  • Eye protection
  • Disposable gown
  • Footwear
The priority was clearly on protecting health care workers, not saving infected patients. This implies reducing contact and making sure patients didn't breathe too much into hospital rooms. Here are some CDC covid protocol recommendations:
  • Intermittent rather than continuous patient monitoring to reduce contact
  • Rapid ventilation to minimize aerosol generation (Rapid Sequence Intubation)
  • Aggressively suppress patient cough through sedation strategies (fentanyl, ketamine, propofol).
  • Reduced suctioning
  • Reduced visitors, and then only with PPE
In many hospitals, if a patient coded (went into cardiac or respiratory arrest), guidelines recommended that the staff with that patient leave the room and don full PPE before administering CPR. These are critical moments: donning full PPE takes a couple of minutes, the difference between life and death. There were also do-not-resuscitate orders, though, like the orders to don full PPE, institutions denied ever having such protocols. Since no visitors were allowed under this protocol, the scandal was never witnessed by family members. These extreme protocols have been silently abandoned.

Early in the crisis, there was a focus on the number of ventilators as a hospital capacity metric. There were calls for transitioning defense contractors to ventilator production; ventilators are tangible cures for clueless politicians and journalists, similar to how Mao emphasized steel production. In fact, more people would be alive today had there been a shortage, as their aggressive and negligent application killed tens of thousands. Usually, 40% of patients with severe respiratory distress die while on ventilators, as these are emergency tactics for the very sick (classic selection bias). Yet in the March covid disaster in New York City, 85% of coronavirus patients placed on the machines died, including 97% of ventilated patients over 65 (see here). As many were placed on ventilators who otherwise would not have been, the implications for excess deaths are fairly direct.

The problems with intubation are known as VALI: Ventilator-Associated Lung Injury. For example, the absolute pressures used to ventilate lungs, and the shearing forces from rapid changes in gas velocity, can traumatize lung tissue. Intubation also increases the risk of pneumonia because the tube that allows patients to breathe can introduce bacteria into the lungs. Pressure and oxygen levels need to be individualized because too much or too little of either damages lungs, requiring frequent monitoring and adjustment. Instead, people were put on ventilators at a higher-than-normal rate and monitored infrequently, and with family members absent, no one could call for a nurse when a patient was in obvious distress.

Drugging patients and putting them on ventilators reduced the risk they would infect health care workers. Additionally, there were reports that some covid patients had a rapid decline of oxygenation levels, and so in anticipation of this, a ventilator first strategy was seen as proactive. A review of experiences in Italy stated that "invasive ventilation is associated with reduced aerosolization and is thus safer for staff and other patients," but also admitted that "it might also be associated with hypoxia, hemodynamic failure, and cardiac arrest during tracheal intubation."

Financial incentives aggravated the overuse of ventilators. In the United States, the government pays approximately $13,000 for a regular COVID-19 patient, but $39,000 for an intubated patient. A ventilator is a cash cow for medical facilities. Given the CDC's official recommendations, no one could second-guess them for being overly aggressive, especially when their aim was to prevent an existential threat. [Left-wing fact-checker Snopes rated this payment factoid as 'mixed,' employing the casuistry that while correct as an approximation, actual payments are not exactly $13k or $39k in every case] 

Over Counting

When covid exploded in Italy the WHO had already implemented an unprecedented policy to count all deaths 'with covid' as deaths 'from covid.' The policy was immediately adopted in the US as well, as Illinois' Public Health Director Ngozi Ezike stated, "even if you died of a clear alternative cause, but you had covid at the same time, it's still listed as a covid death." Early in the pandemic, when there was little data on how virulent this pandemic would be, the CDC emphasized how important it was to label anything plausibly related to covid as a COVID-19 death to "appropriately direct [the] public health response." This is a clear indication that they were interested in maximizing covid deaths from the outset. As Marx advised, the purpose of intellectuals is not merely to interpret history, but to change it.

As you die, your immune system shuts down, allowing many viruses to thrive. These are opportunistic collateral infections, not the cause of death. Pneumonia was often called the 'old man's friend' because it was the immediate cause of death for most old people, whether the real reason was renal failure, cardiovascular disease, or cancer. Testing for the appearance of a particular virus regardless of these co-morbidities is misleading, which is why, historically, no one has ever used the "died with" protocol for attributing the underlying cause of death (UCOD).

Further, a covid diagnosis is very lenient. The CDC not only allows a presumptive diagnosis but, before any significant data existed on this new virus, confidently recommended applying covid to any death remotely plausible: "it is likely that it will be the UCOD, as it can lead to various life-threatening conditions, such as pneumonia ... in these cases, COVID–19 should be reported." Thus in April, when New York City breached 10k deaths, the count included 3,700 who were presumed to have died of covid but were never tested.

The US authorized $150B for covid relief in March, including a 20% add-on to the standard rate for patients diagnosed with covid. If you have been to a hospital out-of-network recently, you learn how much extra you are charged without insurance, the 'standard rate' as defined by 'diagnosis-related groups.' These rates are benchmarks that allow insurers to show you how much you are saving with them. They are also high rates because they have low collection rates, and hospitals are obligated to service an ill person regardless of insurance. Many patients leave and are untraceable, so those who pay subsidize those who do not, a hidden redistributive tax within our health care. A covid diagnosis generates the standard rate, which is a premium rate, and adds a 20% bonus.

If you run a long-term care facility where many patients are at the end of their life, and final days usually entail expensive treatments, it would be financially prudent and entirely legal to diagnose as many decedents as covid as possible. Further, this petty cash grab would avoid media moral censure, as many eager to inflate the death count would consider this a cost worth paying.

While no testing is required for a covid diagnosis at death, the tests themselves are biased. A virus with a low load is often inactive, passive, non-threatening. This phenomenon is the basis for HIV antiretroviral therapy, in that when a person has a sufficiently low viral load, they not only do not get sick, they do not transmit the disease.

The cycle threshold (Ct) in PCR tests is an important cause of false positives. Each cycle doubles the amount of viral fragments, so a test run at 35 cycles generates 1,024 times (2^10) more fragments in the final solution than one at 25 cycles. A recent covid study found that 70% of samples with Ct values of 25 or below could be cultured, indicating an active infection, compared with less than 3% of cases with Ct values above 35. Yet the CDC states Ct values should not be used to determine a patient's viral load because the correlation between Ct values and viral load is imperfect. This objection would obviate just about every health metric, if not all of statistics: is high blood pressure a useless signal because some people with high blood pressure live to 100? The CDC's position is an argument from authority--they reference peer-reviewed science--but if one does not simply defer to credentials and instead follows the logic presented, it exposes a complete lack of credibility. Science as a method is rational and objective; science as an institution is as corrupt as the Medieval church.
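The cycle arithmetic above can be checked directly. The helper function here is purely illustrative, not from any testing specification:

```python
# Each PCR cycle roughly doubles the target fragments, so the gap between
# two cycle-threshold (Ct) values implies a 2^(difference) fold-change in
# starting viral material.
def amplification_ratio(ct_high, ct_low):
    """Fold-difference in amplification between two Ct values."""
    return 2 ** (ct_high - ct_low)

# A sample flagged positive only at Ct 35 started with roughly 1,024x less
# viral material than one flagged at Ct 25.
print(amplification_ratio(35, 25))  # 1024
```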

Nick Cordero
A good example of a spurious covid death is the tragic case of Broadway actor and dancer Nick Cordero. He was promoted as an example of how covid threatens everyone, and the NYT reported he had no underlying health conditions. Yet at some point he contracted pneumonia so severe he was admitted to the hospital at the peak of the New York City covid fiasco. I have had pneumonia twice, and in both cases was just given antibiotics, so his must have been a severe case. Once hospitalized, he was put on a ventilator, given dialysis, and put on a heart-lung bypass machine. His heart stopped for two minutes at one point, he was put in a medically induced coma for six weeks, and his right leg was amputated due to excessive clotting. He was tested several times for covid, including initially, and was always negative, but he eventually tested positive before dying in July.

For the media to portray Cordero as having no underlying health conditions merely because this described him before hospitalization is not just misleading, but intentionally so. The litany of life-threatening complications before his first positive covid test made him one of the least healthy people on the planet. There is clearly a higher truth for the media in his story. It would be interesting to know to what degree the aggressive intubation protocols at his time of admittance factored in his death. It is quite likely he was rapidly intubated and neglected, per SARS protocols, a classic case of iatrogenesis, when medical care harms the patient.

Declines in Elective Surgery and Regular Doctor Visits

To reduce infectious risk to providers and conserve critical resources, most US states enacted a temporary ban on elective surgery from March through May 2020, and various discouragements have continued. Elective surgical cases fall somewhere between vital preventative measures (e.g., screening colonoscopy) and essential surgery (e.g., cataract removal). These surgeries plummeted 60% in April and have subsequently rebounded, though they are still well below last year's levels. Similarly, outpatient visits fell by 50% initially and are still well below previous levels (see here and here).

The effects of healthcare visits and elective surgery on mortality, let alone quality of life, are speculative. Yet many papers supporting Obama's Affordable Care Act noted that increased access to such care had significant effects. Estimates of how much more health care access people had due to Obamacare range from 1 to 5%, and the estimated consequences range from 10k to 50k deaths avoided per year. Given an initial 50% reduction in access and a subsequent 10-20% reduction over the rest of the year, a 100k increase in deaths would be a reasonable estimate given this literature.

Obamacare supporters generally also support the lockdown. They insist a small increase in access to healthcare saved tens of thousands via Obamacare, while this year's radically sharp decline in access to healthcare had no effect worth mentioning when discussing the lockdowns.

Social isolation

People are social, which is why one of the worst punishments in Roman times was exile. Solitary confinement cuts people off from the activities that bring meaning and purpose to life: communal activities and face-to-face social interactions. To suggest that taking this away from people, especially the elderly, is a cost not worth estimating in this pandemic is absurd to anyone who thinks life is about quality as well as quantity.

Yet even if we focus just on quantity, social isolation is a risk factor, associated with functional decline and death. For example, loneliness among heart failure patients nearly quadruples their risk of death and increases their risk of hospitalization by 68%. A meta-study on the effects of social isolation found significant mortality effects, where people in the loneliest quintile had 30% higher all-cause mortality rates.

Suicide deaths are a relevant metric, but national data has a couple-year lag. We know that in 2018 there were 48,000 deaths from suicide and at least 1.4 million attempts, and in 2019, almost 71,000 people died from drug overdoses, many of which were suicide-related. There have been anecdotal reports that suicides are up, and it's concerning that the Social Justice Warriors are quick to lobby Twitter to censor these reports as if any information or even discussion of the costs of the lockdown is dangerous. Our uber-rational elite sees no value in debating the costs and benefits of our extreme response, just like the state-run media in one-party states. 

University Data: 0.0007% Case Fatality Rate

While one can re-label a standard pneumonia death as covid, this is not possible for young people who rarely die of pneumonia. Further, given their excellent health, young people do not put themselves in situations to receive iatrogenic medical treatment or feel the effects of restricted access to health providers.

As mentioned, opportunistic infections are common in people near death, and there are strong incentives, and an easy ability, to label a decedent a covid death regardless of its relevance. This makes the standard CDC data susceptible to massive inflation. An ideal estimation procedure would test a random sample of people and then, for those who test positive, check if they are alive a couple of months later. This removes many of the above-mentioned biases. Universities have done something close to this. Wary of the PR debacle of becoming a covid-death hot-spot, they were well equipped to test their students to keep them from spreading the virus. They would test those arriving, those with minor symptoms, and those without symptoms who were in contact with someone who tested positive. It is not perfectly random in that it misses asymptomatic cases with no known contact with a covid-positive person, but it's the most bias-resistant metric we have.

As of November 22, universities had reported 139,000 positive covid tests among students, with 17 hospitalizations and 1 death: a 0.0007% case fatality rate (CFR). This death rate is one-tenth that of the flu for this demographic. Given the sample size, you can reject the hypothesis that covid has a higher death rate than the flu for this group at any conventional significance level; for healthy young adults, it behaves like a regular coronavirus.
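The significance claim can be checked with an exact binomial tail. This is a sketch: the 0.007% flu CFR benchmark is simply implied by the "one-tenth that of the flu" comparison, not an official figure, and the function name is my own:

```python
import math

# Exact lower-tail binomial test: how likely is it to see at most 1 death
# among ~139,000 positives if the true CFR were flu-like (~0.007%, an
# assumed benchmark implied by the "one-tenth of the flu" comparison)?
def binom_tail_le(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p), summed exactly."""
    return sum(math.comb(n, i) * (p ** i) * ((1 - p) ** (n - i))
               for i in range(k + 1))

n_cases, deaths, flu_cfr = 139_000, 1, 0.00007
p_value = binom_tail_le(deaths, n_cases, flu_cfr)
print(f"P(<= {deaths} death | flu-like CFR) = {p_value:.1e}")
```

With an expected ~10 deaths under the flu-like rate, observing only 1 yields a p-value well below any conventional threshold.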

Despite this anomalous data, over the summer two college deaths received stories in the New York Times, which was eager to highlight covid as a significant mortal threat to everyone. One story highlighted a 350-pound young man who died of a pulmonary embolism and whose initial obituary did not mention covid; the other student had an undetected case of the deadly Guillain-Barre syndrome. These cofactors were not just downplayed, but reversed: the obese young man was described as an athlete (football player), the other as "super healthy." This tendentious narrative highlights that covid is less about covid than something else.

In contrast, the CDC's total covid deaths by age group show 428 deaths for 15-24 year-olds and 1,006 deaths for 18-29 year-olds, which implies a death rate of 0.001% to 0.002% among ALL people in this group. Given the CDC reports that 5% of this demographic has tested positive, this would imply a 0.03% case fatality rate. The case fatality rate for college students who tested positive (0.0007%) is roughly 1/40th of this. Given the large sample size, you can reject the hypothesis that these fatality rates are equal at any conventional significance level.

The simplest and most obvious explanation is that the CDC's death data include many deaths not caused by covid. The CDC's 'died with' protocol not only allows but encourages labeling the cause of death as covid, but this bias can only work if there is a large set of deaths to work on. As 28k 20-somethings have died this year, tagging 428 of them with covid is pretty easy, and there are strong financial incentives to do this. 

Avg Age at Covid Death > Avg Age at Death

Paradoxically, a typical plague affects the young more than the old. While the old die at high rates in a plague, they die at high rates anyway, they're old. The increase in excess deaths centers on the more numerous young, who start at a much lower normal mortality rate. For example, in the non-politicized Avian flu, the average age at death was 48, well below the usual average age at death, which is about 75. Ebola and AIDS killed mostly young people. Older people are more immune to viruses of all sorts, which is why kindergarten teachers rarely get colds, while children need to suffer through the process of getting infected to get immunity. Older people are less likely to socialize or wrestle (competitively or amorously). We should see a significant effect of a deadly new virus among young adults and infants who are more exposed and less biologically prepared for a novel virus strain, but we do not.

Below we see that most Spanish Flu deaths were among those under 45 years old, with a peak at age 27. Though this chart is for select cities, it was a general pattern (see here, here, or here). In contrast, those under 45 represent 3% of total covid deaths according to the CDC, and covid deaths increase by age group, even though the total population starts to fall at 65.

The CDC warehouses a large amount of data, mostly in categories of no significance, as if their purpose is to hide the truth. I could only find case rates in groupings of 10-19 and deaths in 15-24 (etc.), so I had to do some interpolations. Further, the case data by age covered only about half of total cases, so I basically multiplied the case data by 2 to get cases by age group. Using this data we can estimate the case fatality rate for each age group, dividing covid deaths by cases. For those under 45, the mortality rate conditional upon getting covid is less than or equal to the all-cause mortality rate. In other words, if you test positive for covid and are under 45, your risk of dying does not increase. Covid is just a regular cold (coronavirus) for healthy people. I could not find a prior pandemic with an average age at death greater than the all-cause average age at death (75 vs. 73), but suggestions are welcome.

CDC Death and Case Data (see here and here)

Through 11/25

Age        Covid Deaths  All Deaths  Pop (K)  Cases (K)  Covid CFR  Overall Mort Rate
< 1 yr     29            14,582      3,783    27         0.11%      0.39%
1–4 yrs    16            2,718       15,794   111        0.01%      0.02%
5–14 yrs   42            4,366       40,994   289        0.01%      0.01%
15–24 yrs  428           28,020      42,688   1,571      0.03%      0.07%
25–34 yrs  1,812         57,251      45,940   2,208      0.08%      0.12%
35–44 yrs  4,663         80,852      41,659   2,134      0.22%      0.19%
45–54 yrs  12,371        147,270     40,875   1,984      0.62%      0.36%
55–64 yrs  29,888        337,300     42,449   1,921      1.56%      0.79%
65–74 yrs  51,667        512,249     31,483   1,346      3.84%      1.63%
75–84 yrs  64,575        623,712     15,970   989        6.53%      3.91%
> 85 yrs   74,722        771,228     6,605    418        17.88%     11.68%
Total      240,213       2,579,548   328,240  13,000     1.85%      0.79%

Avg age at death: covid 75.6, all-cause 73.0
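The two rate columns can be re-derived from the raw counts; here is the 15-24 row as a check (counts as quoted above, with Pop and Cases scaled up from thousands):

```python
# Re-deriving the table's rate columns for the 15-24 age group:
# CFR = covid deaths / covid cases; overall mortality = all deaths / population.
covid_deaths = 428
all_deaths = 28_020
population = 42_688_000   # Pop column was in thousands
cases = 1_571_000         # Cases column was in thousands

cfr = covid_deaths / cases            # ~0.03%, matching the table
mortality = all_deaths / population   # ~0.07%, matching the table
print(f"Covid CFR {cfr:.2%}, overall mortality {mortality:.2%}")
```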

My personal experience with corona is consistent with the general data. My two college sons tested positive this fall and had only mild symptoms. I practice jiu-jitsu three times a week (when not in lockdown, our current status), and such activity exposes one to 10-20 different biomes each session. You are wrestling with several people who wrestle several other people, so basically everyone shares their viral load with the class. Neither I nor anyone else in our gym has developed covid symptoms, though statistically, it is almost certain we have all been exposed.


I do not doubt that covid represents a novel coronavirus, which like all viruses is lethal to some people. I doubt it is an abnormal one, in that flu seasons regularly vary, and given 325MM Americans, such viruses generate tens of thousands of extra deaths. To anyone who dies from these new viruses, it is a tragedy for them and their loved ones.

We have seen an extra 378k deaths this year over the prior 5-year average. It seems reasonable to attribute 50k of that to covid. The rest, however, is probably the result of increased isolation, lack of standard care, and medical malpractice.

I failed to appreciate how this virus would become a tool of the Left, not merely to replace Trump, but to implement all sorts of comprehensive government policies. Jane Fonda was honest enough to state that covid was "God's gift to the Left," and the Left now has no shame in saying we should 'never let a crisis go to waste' (a tactic initially attributed to right-wingers by Naomi Klein, who considered it unethical).

Asian and African countries do not have Western-style liberal parties. For example, there is no great call for third-world immigration in Japan, and they have a small footprint at Davos. Thus, they have considerably less incentive to inflate covid death counts. They have all passed through this virus the way the US passed through the avian flu, with a cumulative covid death rate as a percent of the population orders of magnitude smaller than in the West. Haiti, South Korea, Cuba, Venezuela, Japan, China, Nigeria, Ethiopia, Congo, Singapore, Zimbabwe, and Vietnam all have trivial covid death rates. These countries vary considerably in economic development, only sharing independence from Western political priorities.

If the covid panickers merely cared about covid and not its broader implications, they would emphasize low-cost, simple correctives, such as recommending vitamins C and D, zinc, aspirin (an anti-coagulant), and exercise. They would also not grant legal and moral exemptions for Black Lives Matter gatherings. The higher truth in this farce is that the various emergency responses to the pandemic pave the way for further institutional changes and progressive policies. If one gives up at the first sign of problems, no policy will ever work, no matter how good it is; thus, leaders of new policies hate criticism, because they are 'all-in,' tied to the policy's success, while outside critics have the luxury of simply saying try something else. The net result is that those in charge discourage mentioning discrediting information, which is why one-party states never have a free press. Currently, many obvious anomalies in the covid narrative are actively suppressed as misinformation that threatens public health. Once suppressing these stories becomes common, it is easier to then suppress criticisms of global warming, immigration policies, or Title IX expansions.

Progressive international organizations like the World Economic Forum and the New Economy Forum have promoted policies with mottos like 'The Great Reset' and 'Build Back Better.' They have seized upon covid as a key justification, in that covid death counts make it easier to convince people this is an existential threat that needs a war-like response. When you dig into their literature, the priorities are straight out of the Communist Manifesto: centralized ownership, the subordination of the family and the individual to the state, and ultimately the elimination of the nation-state in favor of a one-world government.

When the Soviet Union killed 4 million Ukrainians, or when Mao killed tens of millions in the Great Leap Forward, their state-sponsored press highlighted record harvests and anecdotes of the happy and prosperous new socialist man. Western socialists swooned at the efficiency of a well-ordered economy that didn't waste resources on profits and destructive competition. As with covid, the deaths were indirect, allowing those responsible to think these were unrelated to state policy, and as a practical matter, you know you have to break eggs to make an omelet. The fact that those promoting covid are willing to decimate our economy and kill hundreds of thousands to achieve their political objectives highlights what could lie ahead.