Wednesday, December 02, 2020

Dates of US Bear Markets Since 1873

It's useful to test longer-term rules, such as trend-following, across many cycles. To this end, it is useful to have dates for bull and bear markets. If your sample runs from, say, 2010 through 2019, you will have about 2,520 daily data points but no bear markets. You can then 'prove' such a strategy works by pointing to the significance level of your statistics, but anyone with some knowledge of history would see the error. A strategy optimized over only bull markets is, as they say, 'problematic.' The US stock market has been in bear markets 20% of the time since 1871.

I have identified 24 US bear markets since 1873. I'd like to say it is a purely objective classification, but there are some judgment calls. Basically, I looked for the traditional "20% drawdown" definition. Several bear markets did not actually meet this standard--1990, 1957, 1873--but I included them anyway out of respect for history. For example, the 'Panic of 1873' occurred amid European turmoil, started when the US and many other nations demonetized silver, brought the first wave of railroad failures and many bank failures, and even caused a 10-day closure of the New York Stock Exchange. The recession of 1873-77 was the longest in US history.
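The "20% drawdown" rule can be made mechanical. Below is a minimal sketch (my own illustration, using a made-up price series rather than the Shiller/French data) that flags a bear market whenever the decline from the running peak reaches 20%, and dates it from peak to trough:

```python
# Sketch of the traditional "20% drawdown" bear-market rule.
# Hypothetical illustration: the price series is invented, not real index data.

def bear_markets(prices, threshold=0.20):
    """Return (peak_idx, trough_idx, decline) for each drawdown >= threshold."""
    results = []
    peak_i, trough_i = 0, 0
    in_bear = False
    for i, p in enumerate(prices):
        if p > prices[peak_i]:
            # New all-time high: close out any bear market in progress.
            if in_bear:
                results.append((peak_i, trough_i,
                                prices[trough_i] / prices[peak_i] - 1))
                in_bear = False
            peak_i, trough_i = i, i
        elif p < prices[trough_i]:
            trough_i = i
            if prices[trough_i] / prices[peak_i] - 1 <= -threshold:
                in_bear = True
    if in_bear:  # bear market still open at end of sample
        results.append((peak_i, trough_i, prices[trough_i] / prices[peak_i] - 1))
    return results

prices = [100, 110, 95, 85, 90, 120, 118, 92, 130]
for peak, trough, decline in bear_markets(prices):
    print(f"peak at index {peak}, trough at {trough}, decline {decline:.0%}")
```

With monthly data (as with Shiller's series), the same logic applies but, as noted below, the peaks and troughs will be softened relative to daily data.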

On the other hand, there were some that are perhaps false positives. The 1884 bear market was down only 2% in real terms, 21% nominally. Yet this bear market corresponds to what is often called a depression, as opposed to a recession, and is referred to as the 'Panic of 1884.' As the US had just returned to the gold standard in 1879, many Europeans were skeptical the US could maintain it and were selling their US assets. Many businesses and banks failed. To say this was not a bear market because the market was virtually flat in real terms seems wrong.

I put in the prior bull market gain to give better context. It also explains why I have 2 bear markets from 1937-42 as opposed to one, as some have. Given the 68% increase from March 1938 through October 1939, it does not make sense to call that period a bear market. Further, these bear markets had very different causes. 

I used Shiller's data prior to 1926, and Ken French's data afterward. Shiller's data is monthly, which tends to soften cycles, missing the true peaks and troughs. Shiller's data is also a little funky: for example, this year his February return is flat while the S&P 500 was down 8%. These discrepancies tend to leak into adjacent months, however, so for measuring bull and bear market returns they are probably less problematic. Further, prior to 1926, I don't have anything better.

While not perfect, it's useful to have these dates, at least as a starting point. If you have suggestions on amendments, I would appreciate them. You can download this here.

| Start | End | Decline | Prior Rise | Months | Comments |
|---|---|---|---|---|---|
| Feb-1873 | Nov-1873 | -18% | #N/A | 10 | Left silver, failure of Jay Cooke, railroads, Europe weak |
| Mar-1876 | Jun-1877 | -33% | 32% | 16 | End of longest recession |
| Sep-1882 | Jan-1885 | -21% | 198% | 29 | Foreign run on US assets due to worry about US gold standard |
| Jan-1893 | Aug-1893 | -25% | 89% | 8 | Failure of railroads, banks |
| Sep-1895 | Aug-1896 | -19% | 34% | 12 | Double dip from last recession |
| Sep-1902 | Oct-1903 | -26% | 194% | 14 | Minor recession |
| Oct-1906 | Nov-1907 | -32% | 76% | 14 | A run on Knickerbocker Trust, JPMorgan leads bailout |
| Nov-1916 | Dec-1917 | -28% | 160% | 26 | Start of inflation, US entered WW1 |
| Oct-1919 | Aug-1921 | -23% | 60% | 23 | Prices fall by 50% after rising 100% in war |
| 9/7/29 | 2/27/33 | -84% | 635% | 43 | Great Depression |
| 3/6/37 | 3/31/38 | -51% | 416% | 14 | Short-lived massive retained earnings tax |
| 10/25/39 | 4/28/42 | -31% | 68% | 31 | Start of WW2 |
| 5/29/46 | 6/6/47 | -24% | 237% | 13 | End of war transition |
| 8/2/56 | 10/22/57 | -17% | 421% | 16 | Minor recession |
| 12/12/61 | 6/26/62 | -28% | 122% | 7 | Kennedy micro-manages steel price increases |
| 2/9/66 | 10/7/66 | -21% | 101% | 9 | Fed tightens, relents |
| 11/29/68 | 5/26/70 | -37% | 71% | 19 | Collapse of merger wave, tech boom |
| 1/11/73 | 10/3/74 | -48% | 88% | 22 | OPEC oil crisis |
| 11/28/80 | 8/12/82 | -20% | 246% | 21 | Peak inflation, Volcker Fed tightening |
| 8/25/87 | 12/4/87 | -33% | 281% | 4 | Fed tightens to support dollar, market crash |
| 1/2/90 | 10/11/90 | -18% | 71% | 10 | Run-up to Iraq War I, junk bond & Comm RE bust |
| 3/24/00 | 10/9/02 | -50% | 575% | 31 | Collapse of tech bubble, 9/11 attack |
| 10/9/07 | 3/9/09 | -55% | 131% | 18 | Mortgage crisis |
| 2/19/20 | 3/12/20 | -27% | 535% | 2 | Covid |
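The table permits a quick sanity check on the earlier claim that the US market has been in bear markets roughly 20% of the time: sum the Months column and divide by the total months elapsed. A minimal sketch (total period approximated as 1873 through 2020):

```python
# Rough check of the "in a bear market ~20% of the time" claim, using the
# Months column from the table above (durations as listed, in months).
months = [10, 16, 29, 8, 12, 14, 14, 26, 23, 43, 14, 31, 13, 16, 7, 9,
          19, 22, 21, 4, 10, 31, 18, 2]
bear_months = sum(months)            # 412
total_months = (2020 - 1873) * 12    # approximate span of the sample
print(bear_months, bear_months / total_months)  # ~23%, in the ballpark of 20%
```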


Monday, November 30, 2020

$1000 Covid Bet with Robin Hanson

"The whole aim of practical politics is to keep the populace alarmed by menacing it with an endless series of hobgoblins, most of them imaginary." ~ HL Mencken

Robin Hanson
In February, Robin Hanson tweeted that he would take bets on the nascent COVID-19 pandemic, and he was generally 'long' the severity. His interest is non-partisan, as he is a seminal proponent of prediction markets for policy debates. The idea is rather simple: forecasts are more accurate when forecasters have to put money on them. Talk is cheap. Interestingly, the biggest obstacle to this idea is legal, as lawmakers discourage these markets by highlighting bizarre edge cases (as with crypto, these usually involve terrorists). More practically, regulators and their industry constituents want to make sure such markets do not encroach on their protected markets.

When covid arose in February, I knew that historically bad flu seasons generate an extra 40k deaths in some years, and that high-profile viruses like avian flu tended to be limited (3k deaths in the US). Further, virulence is inversely correlated with contagiousness, as people really do not like dying, and so are very good at quarantining those infected by deadly diseases like SARS and Ebola. I knew that data could be manipulated, as with African AIDS deaths, but I thought death statistics in the USA would be relatively immune to this tactic. Thus, when he gave me a number of 250k deaths by the end of the year, I thought it impossible and offered 10-1 odds on $100 ('impossible' means 10% chance when applied to things I understand at this level; I'm a doctor, but not a real doctor).

I just paid him $1,000. I lost the bet fair and square because implicit in the bet was that we would use conventional metrics of covid deaths, such as those of the Centers for Disease Control and Prevention (CDC) or the World Health Organization (WHO). I have been following the CDC, and while one page reports 244k, it will pass 250k soon; another page on their site reports 265k. Even if I take the minimum, the result is inevitable. In hindsight, my error was not anticipating that covid would become politicized. Robin was right for the wrong reason (covid deaths are inflated, it is not comparable to the Spanish Flu), but that often happens in bets.

The SARS Effect

In January, China reported the first death from the new covid virus, and by mid-month, the WHO had published a comprehensive set of guidance documents on this new disease. In a prelude to the panic, the CDC, following the WHO's lead, was confident that a new pandemic was at hand. The WHO's initial January report specifically referenced the 2003 SARS outbreak, the highly lethal respiratory disease that formed the basis for many new crisis-response protocols developed by the health care bureaucracy. Covid was the pandemic that our experts had extensively planned for, which proved disastrous.

In the 2003 SARS pandemic many healthcare workers became infected, and hospital transmission was the primary accelerator of SARS infections, accounting for 72% of cases in Toronto and 55% of probable cases in Taiwan. This pattern is so common and terrifying that there is a special word for it: nosocomial, meaning transmitted in a healthcare facility. While conventionally SARS refers specifically to the 2003 pandemic, it is also a generalized term (Severe Acute Respiratory Syndrome), and our current virus is considered in the same clade as the 2003 SARS virus. It is the SARS-CoV-2 virus that causes the disease COVID-19 (hereafter, covid). Over the past 17 years, health care institutions created hundreds of detailed guides about how to quarantine, report, and control the next SARS outbreak, with hospital protocols the first line of defense.

Reviews of the SARS experience noted the importance of a detailed protocol for dealing with such diseases. In Toronto, infected health care workers all reported that they had worn the recommended protective equipment, including gowns, gloves, specialized masks, and goggles, each time they entered the patient's room. However, the workers had not been fit-tested for their masks, and one nurse admitted his mask didn't fit well. It was also noted that some of the workers might not have followed the correct sequence in removing their protective equipment (i.e., gloves first, then mask and goggles). 

The emphasis on small details created a bureaucratic mindset that ignored common sense because the motivation was preventing not merely the next SARS, but the next worst-case-scenario SARS  (see The Andromeda Strain or the latest Planet of the Apes series). The focus was on health care workers at the expense of patients, which seems simply self-serving, but it makes sense if your vision of a pandemic comes from dystopic science-fiction movies. If all health care providers die first, everyone else is sure to die next because, without health experts, health experts expect society to revert to Medieval life expectancies. Thus the priority was not so much healing the sick but getting them out of circulation. When the objective is to prevent an existential threat to humanity, virtually any extreme measure with large present costs is justified.

At the beginning of the covid crisis, the CDC recommended health care workers don full Personal Protective Equipment (PPE) for each patient encounter, consisting of the following:

  • A disposable N95 respirator face mask that achieves a seal around the mouth and nose
  • Gloves
  • Eye protection
  • Disposable gown
  • Footwear
The priority was clearly on protecting health care workers, not saving infected patients. This implies reducing contact and making sure patients didn't breathe too much into hospital rooms. Here are some CDC covid protocol recommendations:
  • Intermittent rather than continuous patient monitoring to reduce contact
  • Rapid ventilation to minimize aerosol generation (Rapid Sequence Intubation)
  • Aggressively suppress patient cough through sedation strategies (fentanyl, ketamine, propofol).
  • Reduced suctioning
  • Reduced visitors, and then only with PPE
In many hospitals, if a patient coded (went into cardiac or respiratory arrest), guidelines recommended that staff with that patient leave the room and don full PPE before administering CPR. These are critical moments: donning full PPE takes a couple of minutes, the difference between life and death. There were also do-not-resuscitate orders, though, like the orders to don full PPE, institutions denied ever having such protocols. Since no visitors were allowed under this protocol, these scandals were never witnessed by family members. These extreme protocols have been silently abandoned.

Early in the crisis, there was a focus on the number of ventilators as a hospital capacity metric. There were calls to transition defense contractors to ventilator production; tangible output appeals to clueless politicians and journalists, much as steel production did to Mao. In fact, more people would be alive today if there had been a shortage, as aggressive and negligent ventilator use killed tens of thousands. Usually, 40% of patients with severe respiratory distress die while on ventilators, as these are emergency tactics for the very sick (classic selection bias). Yet in the March covid disaster in New York City, 85% of coronavirus patients placed on the machines died, including 97% of ventilated patients over 65 (see here). As many were placed on ventilators who otherwise would not have been, the implications for excess deaths are fairly direct.

The problems with intubation are known as VALI: Ventilator-Associated Lung Injury. For example, the absolute pressures used to ventilate the lungs, and the shearing forces from rapid changes in gas velocity, can traumatize lung tissue. Intubation also increases the risk of pneumonia because the tube that allows patients to breathe can introduce bacteria into the lungs. Pressure and oxygen levels need to be individualized, because too much or too little of either damages the lungs, requiring frequent monitoring and adjustment. People were put on ventilators at a higher-than-normal rate and monitored infrequently. As their family members were absent, no one could call for a nurse when a patient was in obvious distress.

Drugging patients and putting them on ventilators reduced the risk they would infect health care workers. Additionally, there were reports that some covid patients had a rapid decline of oxygenation levels, and so in anticipation of this, a ventilator first strategy was seen as proactive. A review of experiences in Italy stated that "invasive ventilation is associated with reduced aerosolization and is thus safer for staff and other patients," but also admitted that "it might also be associated with hypoxia, hemodynamic failure, and cardiac arrest during tracheal intubation."

Financial incentives aggravated the overuse of ventilators. In the United States, the government pays approximately $13,000 for a regular COVID-19 patient, but $39,000 for an intubated patient. A ventilator is a cash cow for medical facilities. Given the CDC's official recommendations, no one could second-guess them for being overly aggressive, especially when their aim was to prevent an existential threat. [Left-wing fact-checker Snopes rated this payment factoid as 'mixed,' employing the casuistry that while correct as an approximation, actual payments are not exactly $13k or $39k in every case] 

Over Counting

When covid exploded in Italy the WHO had already implemented an unprecedented policy to count all deaths 'with covid' as deaths 'from covid.' The policy was immediately adopted in the US as well, as Illinois' Public Health Director Ngozi Ezike stated, "even if you died of a clear alternative cause, but you had covid at the same time, it's still listed as a covid death." Early in the pandemic, when there was little data on how virulent this pandemic would be, the CDC emphasized how important it was to label anything plausibly related to covid as a COVID-19 death to "appropriately direct [the] public health response." This is a clear indication that they were interested in maximizing covid deaths from the outset. As Marx advised, the purpose of intellectuals is not merely to interpret history, but to change it.

As you die, your immune system shuts down, allowing many viruses to thrive as one nears death. These are opportunistic collateral infections, not the cause of death. Pneumonia was often referred to as the 'old man's friend' because it was the immediate cause of death for most old people, whether the real reason was renal failure, cardiovascular disease, or cancer. Measuring for the presence of a particular virus, regardless of these co-morbidities, is misleading, which is why, historically, no one has used the "died with" protocol for attributing the underlying cause of death (UCOD).

Further, a covid diagnosis is very lenient. The CDC not only allows a presumptive diagnosis but, before any significant data existed on this new virus, confidently recommended applying covid to any remotely plausible death: "it is likely that it will be the UCOD, as it can lead to various life-threatening conditions, such as pneumonia ... in these cases, COVID–19 should be reported." Thus in April, when New York City breached 10k deaths, the count included 3,700 who were presumed to have died of covid but were never tested.

The US authorized $150B for covid relief in March, including a 20% add-on to the standard rate for patients diagnosed with covid. If you have been to a hospital out-of-network recently, you have learned how much extra you are charged without insurance: the 'standard rate' as defined by 'diagnosis-related groups.' These rates are benchmarks that allow insurers to show you how much you are saving with them. They are also high because they have low collection rates: hospitals are obligated to treat an ill person regardless of insurance, and many patients leave and are untraceable, so those who pay subsidize those who do not, a hidden redistributive tax within our health care system. A covid diagnosis generates the standard rate, which is a premium rate, and adds a 20% bonus.

If you run a long-term care facility where many patients are at the end of their life, and final days usually entail expensive treatments, it would be financially prudent and entirely legal to diagnose as many decedents as covid as possible. Further, this petty cash grab would avoid media moral censure, as many eager to inflate the death count would consider this a cost worth paying.

While no testing is required for a covid diagnosis at death, the tests themselves are biased. A virus with a low load is often inactive, passive, non-threatening. This phenomenon is the basis for HIV antiretroviral therapy, in that when a person has a sufficiently low viral load, they not only do not get sick, they do not transmit the disease.

The cycle threshold (Ct) in PCR tests is an important cause of false positives. Each cycle roughly doubles the amount of viral genetic material, so running 35 cycles rather than 25 amplifies the sample by an extra factor of 2^10, or 1024. A recent covid study found that 70% of samples with Ct values of 25 or below could be cultured, indicating an active infection, compared with less than 3% of cases with Ct values above 35. Yet the CDC states Ct values should not be used to determine a patient's viral load because the correlation between Ct values and viral load is imperfect. This objection would obviate just about every health metric, if not all of statistics: is high blood pressure a useless signal because some people with high blood pressure live to 100? The CDC's defense is an argument from authority--they reference peer-reviewed science--but if one does not simply defer to their credentials and follows the logic they present, it exposes their complete lack of credibility. Science as a method is rational and objective; science as an institution is as corrupt as the Medieval church.
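The doubling arithmetic is easy to verify: each additional cycle doubles the fragments, so the gap between Ct 25 and Ct 35 is a 2^10 amplification factor:

```python
# Each PCR cycle roughly doubles the target fragments, so a sample that needs
# Ct=35 to register started with ~1024x less material than one needing Ct=25.
def amplification_ratio(ct_low, ct_high):
    """Extra amplification factor between two cycle thresholds."""
    return 2 ** (ct_high - ct_low)

print(amplification_ratio(25, 35))  # 1024
```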

Nick Cordero
A good example of a spurious covid death is the tragic case of Broadway actor and dancer Nick Cordero. He was promoted as an example of how covid threatens everyone, and the NYT reported he had no underlying health conditions. Yet, at some point, he contracted pneumonia so severe he was admitted to the hospital at the peak of the New York City covid fiasco. I have had pneumonia twice, and in both cases, I was just given antibiotics, so he must have had a severe case. Once hospitalized, he was put on a ventilator, given dialysis, and put on a heart-lung bypass machine. His heart stopped for two minutes at one point, he was put in a medically induced coma for six weeks, and his right leg was amputated due to excessive clotting. He was tested several times for covid, including initially, always negative, but eventually he tested positive before dying in July.

For the media to portray Cordero as having no underlying health conditions merely because this described him before hospitalization is not just misleading, but intentionally so. The litany of life-threatening complications before his first positive covid test made him one of the least healthy people on the planet. There is clearly a higher truth for the media in his story. It would be interesting to know to what degree the aggressive intubation protocols at his time of admittance factored in his death. It is quite likely he was rapidly intubated and neglected, per SARS protocols, a classic case of iatrogenesis, when medical care harms the patient.

Declines in Elective Surgery and Regular Doctor Visits

To reduce infectious risk to providers and conserve critical resources, most states in the US enacted a temporary ban on elective surgery from March through May 2020. Various discouragements have continued. Elective surgical cases fall somewhere between vital preventative measures (e.g., screening colonoscopy) and essential surgery (e.g., cataract removal). These surgeries plummeted 60% in April and have subsequently rebounded, though they are still well below last year's levels. Similarly, outpatient visits fell by 50% initially and are still well below previous levels (see here and here).

The effects of healthcare visits and elective surgery on mortality, let alone quality of life, are speculative. Yet many papers supported Obama's Affordable Care Act by noting that increased access to such care had significant effects. Estimates of how much more health care access people had due to Obamacare range from 1 to 5%, and the consequences range from 10k to 50k deaths avoided per year. Given an initial 50% reduction and a subsequent reduction of 10-20% over the rest of the year, a 100k increase in deaths would be a reasonable estimate given this literature.

Obamacare supporters generally also support the lockdown. They insist a small increase in access to healthcare saved tens of thousands via Obamacare, while this year's radically sharp decline in access to healthcare had no effect worth mentioning when discussing the lockdowns.

Social isolation

People are social, which is why one of the worst punishments in Roman times was exile. Solitary confinement cuts people off from the types of activity that bring meaning and purpose to their life: communal activities and face-to-face social interactions. To suggest that taking this away from people, especially the elderly, is a cost not worth estimating in this pandemic is absurd to anyone who thinks life is about quality as well as quantity.

Yet even if we just focus on quantity, social isolation is a risk factor, associated with functional decline and death. For example, loneliness among heart failure patients nearly quadruples their risk of death, and it increases their risk of hospitalization by 68%. A meta-study on the effects of social isolation found significant mortality effects, where people in the loneliest quintile had 30% higher all-cause mortality rates.

Suicide deaths are a relevant metric, but national data has a couple-year lag. We know that in 2018 there were 48,000 deaths from suicide and at least 1.4 million attempts, and in 2019, almost 71,000 people died from drug overdoses, many of which were suicide-related. There have been anecdotal reports that suicides are up, and it's concerning that the Social Justice Warriors are quick to lobby Twitter to censor these reports as if any information or even discussion of the costs of the lockdown is dangerous. Our uber-rational elite sees no value in debating the costs and benefits of our extreme response, just like the state-run media in one-party states. 

University Data: 0.0007% Case Fatality Rate

While one can re-label a standard pneumonia death as covid, this is not possible for young people who rarely die of pneumonia. Further, given their excellent health, young people do not put themselves in situations to receive iatrogenic medical treatment or feel the effects of restricted access to health providers.

As mentioned, opportunistic infections are common in people near death, and there are strong incentives and easy means to label a decedent a covid death, regardless of its relevance. This makes the standard CDC data susceptible to massive inflation. An ideal estimation procedure would test a random sample of people and then, for those who test positive, check whether they are alive a couple of months later. This removes many of the above-mentioned biases. Universities have done something close to this. Wary of the PR debacle of becoming a covid-death hot-spot, universities were well equipped to test their students in order to keep them from spreading the virus. They would test those arriving, those with minor symptoms, and those without symptoms who had been in contact with someone who tested positive. It is not perfectly random, in that it misses asymptomatic cases not in known contact with a covid-positive person, but it's the most bias-resistant metric we have.

As of November 22, universities had recorded 139,000 positive covid tests among students. There have been 17 hospitalizations and 1 death, a 0.0007% case fatality rate (CFR). This death rate is one-tenth that of the flu for this demographic. Given the sample size, you can reject the hypothesis that covid has a higher death rate than the flu at any conventional significance level, just as with regular coronaviruses.

Despite this anomalous data, over the summer two college deaths received stories in the New York Times, which was eager to highlight that covid is a significant mortal threat to everyone. One story highlighted a 350-pound young man who died of a pulmonary embolism and whose initial obituary did not mention covid; the other student had an undetected case of the deadly Guillain-Barre syndrome. These cofactors were not just downplayed but reversed: the obese young man was described as an athlete (football player), the other as "super healthy." This tendentious narrative highlights that covid is less about covid than something else.

In contrast, the CDC's total covid deaths by age group show 428 deaths for the 15-24-year-old grouping and 1006 deaths in the 18-29 year-old grouping, which implies a death rate of 0.001% to 0.002% among ALL people in this group. Given the CDC reports that 5% of this demographic has tested positive, this would imply a 0.03% case fatality rate. The case fatality rate for college students who tested positive is about 1/25th of this (0.0007%). Given the large sample size, you can reject the hypothesis that these fatality rates are equal at any conventional significance level. 
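The significance claim above can be sketched with a Poisson approximation: if the university sample truly had the CDC-implied ~0.03% case fatality rate, 139,000 positives would be expected to produce about 42 deaths, and observing 1 or fewer would be astronomically unlikely. A back-of-envelope sketch (the Poisson approximation to the binomial is my choice, not a method from the text):

```python
import math

# If university cases had the CDC-implied ~0.03% fatality rate, 139,000
# positives should produce ~42 deaths; compute P(observing <= 1) under
# a Poisson approximation to the binomial.
cases, cdc_cfr = 139_000, 0.0003
lam = cases * cdc_cfr                       # expected deaths, ~41.7
p_at_most_1 = math.exp(-lam) * (1 + lam)    # P(X <= 1) for Poisson(lam)
print(lam, p_at_most_1)
```

The resulting p-value is far below any conventional threshold, consistent with the claim that the two fatality rates differ.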

The simplest and most obvious explanation is that the CDC's death data include many deaths not caused by covid. The CDC's 'died with' protocol not only allows but encourages labeling the cause of death as covid, but this bias can only work if there is a large set of deaths to work on. As 28k 20-somethings have died this year, tagging 428 of them with covid is pretty easy, and there are strong financial incentives to do this. 

Avg Age at Covid Death > Avg Age at Death

Paradoxically, a typical plague affects the young more than the old. While the old die at high rates in a plague, they die at high rates anyway, they're old. The increase in excess deaths centers on the more numerous young, who start at a much lower normal mortality rate. For example, in the non-politicized Avian flu, the average age at death was 48, well below the usual average age at death, which is about 75. Ebola and AIDS killed mostly young people. Older people are more immune to viruses of all sorts, which is why kindergarten teachers rarely get colds, while children need to suffer through the process of getting infected to get immunity. Older people are less likely to socialize or wrestle (competitively or amorously). We should see a significant effect of a deadly new virus among young adults and infants who are more exposed and less biologically prepared for a novel virus strain, but we do not.

Below we see that most Spanish Flu deaths were among those under 45 years old, with a peak at age 27. Though this chart is for select cities, it was a general pattern (see here, here, or here). In contrast, those under 45 represent 3% of total covid deaths according to the CDC, and covid deaths increase by age group, even though the total population starts to fall at 65.



The CDC warehouses a large amount of data, mostly in categories of no significance, as if their purpose is to hide the truth. I could only find case rates in groupings of 10-19 and deaths in groupings of 15-24 (etc.), so I had to do some interpolation. Further, the case data by age covered only about half of total cases, so I multiplied the case data by 2 to get cases by age group. Using this data, we can estimate the case fatality rate for each age group by dividing covid deaths by cases. For those under 45, the mortality rate conditional on getting covid is less than or equal to the all-cause mortality rate. In other words, if you test positive for covid and are under 45, your risk of dying does not increase. Covid is just a regular cold (coronavirus) for healthy people. I could not find a prior pandemic with an average age at death greater than the all-cause average age at death (75 vs. 73), but suggestions are welcome.

CDC Death and Case Data (see here and here)

Through 11/25

| Age Group | Covid Deaths | All Deaths | Pop (K) | Cases (K) | Covid CFR | Overall Mort Rate |
|---|---|---|---|---|---|---|
| < 1 yr | 29 | 14,582 | 3,783 | 27 | 0.11% | 0.39% |
| 1–4 yrs | 16 | 2,718 | 15,794 | 111 | 0.01% | 0.02% |
| 5–14 yrs | 42 | 4,366 | 40,994 | 289 | 0.01% | 0.01% |
| 15–24 yrs | 428 | 28,020 | 42,688 | 1,571 | 0.03% | 0.07% |
| 25–34 yrs | 1,812 | 57,251 | 45,940 | 2,208 | 0.08% | 0.12% |
| 35–44 yrs | 4,663 | 80,852 | 41,659 | 2,134 | 0.22% | 0.19% |
| 45–54 yrs | 12,371 | 147,270 | 40,875 | 1,984 | 0.62% | 0.36% |
| 55–64 yrs | 29,888 | 337,300 | 42,449 | 1,921 | 1.56% | 0.79% |
| 65–74 yrs | 51,667 | 512,249 | 31,483 | 1,346 | 3.84% | 1.63% |
| 75–84 yrs | 64,575 | 623,712 | 15,970 | 989 | 6.53% | 3.91% |
| > 85 yrs | 74,722 | 771,228 | 6,605 | 418 | 17.88% | 11.68% |
| Total | 240,213 | 2,579,548 | 328,240 | 13,000 | 1.85% | 0.79% |
| Avg Age at Death | 75.6 | 73.0 | | | | |
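As a sanity check, the CFR column above can be recomputed from the Covid Deaths and Cases columns (cases are listed in thousands). A minimal sketch using three of the rows:

```python
# Recompute the Covid CFR column from the table: deaths divided by cases,
# with the Cases(K) column converted from thousands.
rows = {
    "15-24": (428, 1_571_000),
    "65-74": (51_667, 1_346_000),
    "> 85":  (74_722, 418_000),
}
for age, (deaths, cases) in rows.items():
    print(age, f"{deaths / cases:.2%}")
```

The recomputed values (0.03%, 3.84%, 17.88%) match the table's CFR column.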

My personal experience with corona is consistent with the general data. My two college sons tested positive this fall and had only mild symptoms. I practice jiu-jitsu three times a week (when not in lockdown, our current status), and such activity exposes one to 10-20 different biomes each session. You are wrestling with several people who wrestle several other people, so basically everyone shares their viral load with the class. Neither I nor anyone else in our gym has developed covid symptoms, though statistically, it is almost certain we have all been exposed.

Conclusion

I do not doubt that covid represents a novel coronavirus, which like all viruses is lethal to some people. I doubt it is an abnormal one, in that flu seasons regularly vary, and given 325MM Americans, such viruses generate tens of thousands of extra deaths. To anyone who dies from these new viruses, it is a tragedy for them and their loved ones.

We have seen an extra 378k deaths this year over the prior 5-year average. It seems reasonable to attribute 50k of that to covid. However, the rest is probably the result of increased isolation, lack of standard care, and medical malpractice.

I failed to appreciate how this virus would become the tool of the Left, not merely to replace Trump, but to implement all sorts of comprehensive government policies. Jane Fonda was honest enough to state that covid was "God's gift to the Left," and the Left now has no shame in saying we should 'never let a crisis go to waste' (this tactic was initially attributed to right-wingers by Naomi Klein and considered unethical).

Asian and African countries do not have Western-style liberal parties. For example, there is no great call for third-world immigration in Japan, and they have a small footprint at Davos. Thus, they have considerably less incentive to inflate covid death counts. They have all passed through this virus the way the US passed through the avian flu, with a cumulative covid death rate as a percent of the population orders of magnitude smaller than in the West. Haiti, South Korea, Cuba, Venezuela, Japan, China, Nigeria, Ethiopia, Congo, Singapore, Zimbabwe, and Vietnam all have trivial covid death rates. These countries vary considerably in economic development, only sharing independence from Western political priorities.

If the covid panickers merely cared about covid and not its broader implications, they would emphasize low-cost, simple correctives, such as recommending vitamins C and D, zinc, aspirin (an anti-coagulant), and exercise. They would also not grant legal and moral exemptions for Black Lives Matter gatherings. The higher truth in this farce is that the various emergency responses to the pandemic pave the way for further institutional changes and progressive policies. For example, if one gives up at the first sign of problems, no policy will ever work, no matter how good it is. Thus, leaders of new policies hate criticism, because they are 'all-in,' tied to that policy's success, while outside critics have the luxury of simply saying they should try something else. The net result is that those in charge discourage mentioning discrediting information, which is why one-party states never have a free press. Currently, many obvious anomalies to the covid narrative are actively suppressed as misinformation that threatens public health. Once suppressing these stories becomes common, it is easier to then also suppress criticisms of global warming, immigration policies, or Title IX expansions.

Progressive international organizations like the World Economic Forum and the New Economy Forum have promoted policies with mottos like 'The Great Reset' and 'Build Back Better.' They also have seized upon covid as a key justification, in that covid death counts make it easier to convince people this is an existential threat that needs a war-like response. When you dig into their literature, the priorities are straight out of the Communist Manifesto: centralized ownership, the subordination of the family and the individual to the state, and ultimately the elimination of the state to a one-world government.

When the Soviet Union killed 4 million Ukrainians, or when Mao killed tens of millions in the Great Leap Forward, their state-sponsored press highlighted record harvests and anecdotes of the happy and prosperous new socialist man. Western socialists swooned at the efficiency of a well-ordered economy that didn't waste resources on profits and destructive competition. As with covid, the deaths were indirect, allowing those responsible to think these were unrelated to state policy, and as a practical matter, you know you have to break eggs to make an omelet. The fact that those promoting covid panic are willing to decimate our economy and kill hundreds of thousands to achieve their political objectives highlights what could lie ahead.

Wednesday, May 06, 2020

Evolutionary Biology of Left-Right

not really true, but funny
It's straightforward to explain why a binary set of competing coalitions within a collective is common. Given the non-linear political payoff to coalition size--eg, when moving from 49% to 51% of the votes--the larger party gains more than it loses by letting the minor group add some of their priorities to their platform. The minority group joining the larger party, meanwhile, will see its priorities relegated to secondary positions but within a ruling or near-ruling party, such that their policies will have a higher chance of actually being implemented. When there are mutual gains from trade the transaction occurs spontaneously, resulting in the bipartisan equilibrium.

A less obvious phenomenon is the left-right political dichotomy. The differences are not as obvious because on many issues the right and left have changed sides, such as censorship and the right to free speech. Classical liberalism dominated the 19th century and emphasized negative rights, which only require others to abstain from interfering with you. These are promoted by left and right for different things: the left champions abortion, euthanasia, and recreational drugs, while the right champions economic transactions, guns, and homeschooling. President Woodrow Wilson was widely seen as a Progressive, but as he led the US into the disastrous WW1, leftist intellectuals sought a new term unsullied by this connection and chose 'liberalism,' highlighting the tricky nature of distilling the essence of right and left political views.

One speculative explanation is that the left-right divide is based, if not in our genes, then on our pathogen aversion intuition formed through evolution (see this book, or this online paper). The physical immune system evolved to defend us when pathogens enter the body. Concurrently, humans and other animals have evolved a behavioral immune system, which motivates us to avoid situations where we might become exposed to infection. Feces and rotting flesh are universally considered bad smells, and unsurprisingly contain a lot of dangerous bacteria.  Avoiding potential sources of infection has been crucial to our survival, and the motivation to avoid infection remains deeply rooted in us to this day.

Supposedly, this behavioral immune system explains the left-right stance on immigration. If you are hypersensitive to infection the last thing you want to do is to interact with a pathogen source. It operates entirely outside conscious awareness, utilizing the emotion of disgust to motivate avoidance of potentially infected objects and people. Thus it would explain why the right discourages sexual freedom because it leads to sexually transmitted diseases, and also why the right sees immigrants as an infection hazard, the way the American Indians should have looked at the Europeans.

If some people see dangers in immigrants via these deep evolutionary roots, it's difficult to reach a mutual understanding with reason-based, rational arguments. The fear comes from deeply ingrained unconscious systems that one can't control any more than one can choose to like the smell of rotting meat. You can discuss whether immigrants are a financial gain or burden to society, but if germophobes are concerned about an entirely different risk they aren't even fully aware of, such arguments will have no resonance.

As usual, this framing casts the right as unreasonable, driven by instinctive heuristics proven irrelevant by science in peer-reviewed journals. Such evolutionary arguments are only popular, if even allowed, among academics for understanding bad right-wing beliefs or tendencies. If you apply such reasoning to why women or African Americans act or believe in a certain way, it would be immediately shot down as sexist or racist if you were explaining anything that was not an obviously good thing. Even for advantageous attributes--say why people of West African descent dominate sprinting--it is 'problematic,' because it provides a slippery slope for explaining disadvantageous attributes, perpetuating oppression. Intellectuals are highly attuned to their particular status hierarchy, needing citations, faculty recommendations, and book blurbs. Suggesting the intolerable status quo is not explained by a malevolent cis-white-Christian-hetero pathology marks you as a potential quisling at best. Tribalism requires identifying enemies, and also potential traitors insufficiently dismissive of existential threats (or, sticking with biology, a fellow ant infected with the Cordyceps fungus).

I don't find the behavioral immune theory of the left-right divide stupid, just tenuous. My aversion to dirt or sticky things has little valence in my thoughts about immigration, but that's my conscious brain working, perhaps puppeteered by a hyperactive behavioral immune response. In any case, the Covid-19 pandemic is a falsification of this theory. If the right were relatively hypersensitive to pathogens, the current left-right stance on easing lockdown restrictions would be the opposite. The right wants to break the lockdown more than the left. One could argue the real factor here is that the right does not value life highly relative to economics, but the lockdown is not just opening up factories, but beaches, parks, and all sorts of places where people congregate.


Tuesday, April 21, 2020

Factor Momentum vs Factor Valuation

I am not a fan of most equity factors, but if any equity factor exists, it is the value factor. Graham and Dodd, Warren Buffett, and Fama and French have all highlighted value as an investment strategy. Its essence is the ratio of a backward-looking accounting value vs. a forward-looking discounting of future dividends. As we are not venture capitalists, but rather stock investors, all future projections are based on current accounting information. To the extent that a market is delusional, as in the 1999 tech bubble, that should show up as an excess deviation from the accounting or current valuation metric (eg, earnings, book value). If there's any firm characteristic that should capture some of the behavioral bias trends among investors, this is it.

Alternatively, there's the risk story. Many value companies are just down on their luck, like Apple in the 1990s, and people project recent troubles too far into the future. Thus, current accounting valuations are low, but these are anomalous and should be treated as such. Alas, most value companies are not doing poorly, they just do not offer any possibility of a 10-fold return, like Tesla or Amazon. Greedy, short-sighted investors love stocks with great upside--ignoring the boring value stocks--and just as they buy lottery tickets with an explicit 50% premium to fair value, they are willing to pay for hope.

There are several value metrics and all tell a similar story now. As an aside, note that it's useful to turn all your value metrics into ratios where higher means cheaper: B/M, E/P, CashFlow/Price, Operating Earnings/Book Equity. This helps your intuition as you sift through them. Further, E/P is better than P/E because E can pass through zero into negative numbers, creating a bizarre non-monotonicity between your metric and your concept; in contrast, if P goes to zero, predicting its future performance is irrelevant.
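A toy sketch of that non-monotonicity, with hypothetical per-share earnings and a fixed price:

```python
# Why E/P beats P/E as a cheapness metric: P/E is non-monotonic when
# earnings cross zero, while E/P stays monotonic. Hypothetical firms,
# price fixed at 100 for all of them.
earnings = [-5.0, -1.0, 1.0, 5.0, 10.0]  # per-share earnings
price = 100.0

pe = [price / e for e in earnings]  # jumps from -20 to -100 to +100 to +20
ep = [e / price for e in earnings]  # rises smoothly: -0.05 ... 0.10

# E/P is monotonic in earnings at a fixed price...
assert ep == sorted(ep)
# ...while P/E is not: it explodes and flips sign as E passes through zero.
assert pe != sorted(pe)
```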

If you rank all stocks by their B/M, take the average B/M of the top and bottom 30%, and put them in a ratio, you get a sense of how cheap value stocks are: (B/M, 80th percentile)/(B/M, 20th percentile). Historically all value ratios are trend stationary. Given B/M ratios move mainly via their market cap and not book value or earnings, this means that value stock performance is forecastable. A high ratio of B/M for the top value stocks over the bottom value stocks implies good times for value stocks, as the M of the value stocks increases relative to the M of the anti-value stocks (eg, growth). All of these value metrics are near historical highs over the past 70 years (see Alpha Architect's charts here).
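The spread metric described above can be sketched as follows; the B/M draws here are hypothetical stand-ins for a real cross-section:

```python
import numpy as np

# Value-spread sketch: (average B/M of cheapest 30%) / (average B/M of
# richest 30%). A high spread means value stocks are unusually cheap
# relative to growth stocks. B/M data below is simulated, not real.
rng = np.random.default_rng(0)
bm = rng.lognormal(mean=-0.5, sigma=0.6, size=1000)  # hypothetical B/M ratios

lo, hi = np.quantile(bm, [0.30, 0.70])
value_bm = bm[bm >= hi].mean()   # top 30% by B/M: "value" stocks
growth_bm = bm[bm <= lo].mean()  # bottom 30%: "growth" (anti-value)

spread = value_bm / growth_bm    # always > 1; its level over time is the signal
print(round(spread, 2))
```

Tracking this ratio through time, rather than its level at a point, is what makes it a timing signal: trend stationarity implies reversion toward its historical mean.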


This is pretty compelling, so much so that last November Cliff Asness at AQR decided to double down on their traditional value tilt. While there are dozens of value metrics today that scream 'buy value now', we have the Ur-metric--Book/Market--going back to 1927 in the US. This suggests we are not anywhere close to the historical extreme, which was much higher for most of the 1930s, when value did relatively well on a beta-adjusted basis.


It's easy to come up with a story as to why the 1930s are not relevant today, but that is throwing out one-tenth of your data just because it disagrees with you.

Yet there's another way to time factors, momentum, whereby a factor's relative performance tends to persist for a couple months at least, and perhaps a year. Momentum refers to relative outperformance as opposed to absolute performance, which is referred to as 'trend following.' Trend following works as well but applies to asset classes like stocks, bonds, and gold, while momentum refers to stocks, industries, or factors.

Year to date, iShares' growth ETF (IWO) outperformed its value ETF (IWN) by 12%. For the past 10 years, growth has outperformed value by 100%. While iShares' growth ETF has a slightly higher beta (1.27 vs 1.05), that does not explain more than 20% of this. Regardless of your momentum definition--3 months, 12 months--value is not a buy based on its momentum, which is currently negative, and has been for over a decade in the US.

In 2019, AQR's Tarun Gupta and Bryan Kelly authored 'Factor Momentum Everywhere' in the Journal of Portfolio Management. They noted that 'persistence in factor returns is strange and ubiquitous.' Incredibly, they found that performance persisted using 1 to 60 months of past returns. I was happy to assume factor momentum exists, but usually saw evidence at the 6-month and below horizon (eg, see Arnott et al). If they found it at 60 months, my Spidey sense tingled: maybe this is an artifact of a kitchen-sink regression where 121 Barra factors are thrown in, generating persistence in alpha? That hypothesis would take a lot of work to test, but at the very least I should see if value factor momentum is clear.

I created several value proxy portfolios using Ken French's factor data:
  • HML Fama-French's value proxy, long High B/M short Low B/M (1927-2020)
  • B/M book to market (1927-2020)
  • CF/P cashflow to price (1951-2020)
  • E/P earnings to price (1951-2020)
  • OP operating profits to book equity (1963-2020)
I applied a rolling regression against the past 36 months of market returns to remove the market beta. As the HML portfolio's beta has gone from significantly positive to negative and back to slightly positive over time, it's useful to make this metric beta-neutral to avoid seeing market fluctuations show up as value fluctuations. Unlike the Barra factors, removing the market factor is not prone to overfitting, and captures something most sophisticated investors do not just understand but actually use.
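A minimal sketch of that beta-neutralization step, using simulated monthly returns in place of Ken French's data (the 36-month window matches the text; all other numbers are hypothetical):

```python
import numpy as np
import pandas as pd

# Beta-neutralizing a value factor: regress its return on the market over a
# rolling 36-month window, then subtract beta * market return each month.
# Simulated data stands in for HML / B/M / CF/P / E/P / OP series.
rng = np.random.default_rng(1)
n = 240  # 20 years of months
mkt = pd.Series(rng.normal(0.006, 0.045, n))                 # market returns
factor = 0.3 * mkt + pd.Series(rng.normal(0.002, 0.02, n))   # value proxy

window = 36
beta = factor.rolling(window).cov(mkt) / mkt.rolling(window).var()  # rolling OLS slope
residual = factor - beta * mkt   # beta-neutral factor return

# With beta stripped out, the residual should be roughly uncorrelated
# with the market, so market swings no longer masquerade as value swings.
print(round(residual.dropna().corr(mkt), 2))
```

Doing this with a rolling window, rather than one full-sample beta, is what handles HML's beta drifting from positive to negative and back over the decades.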

The non-zero beta is just one reason to hate on the HML factor. Another is that it contains a short position; if the short side drives the results, the factor is of little practical interest, because for most factor investors--who have long horizons--short portfolios are not in the opportunity set. Most 'bad' stocks, following the low-vol phenomenon, are not just bad longs but also bad shorts: returns are low, not negative, and volatility is very high. Shorting equity factors is generally a bad idea, and thus an irrelevant comparison because you should not be tempted to short these things.

The result is a set of 5 beta-neutral value proxy portfolios. I then ranked them by their past returns and looked at the subsequent returns. These returns are all relative, cross-sectional, because value-weighted, beta-adjusted returns across groupings net to zero each month by definition. By removing the market (ie, CAPM) beta, we can see the relative performance, which is the essence of momentum as applied to stocks (as defined by the seminal Jegadeesh and Titman paper).
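The ranking procedure can be sketched on simulated factor returns. Since the simulated returns below are i.i.d., no momentum should appear by construction; the point is the mechanics, not the finding:

```python
import numpy as np

# Factor-momentum test mechanics: each month, rank the beta-neutral value
# proxies by trailing return, then record each rank's next-month return.
rng = np.random.default_rng(2)
months, n_factors, lookback = 360, 5, 6
rets = rng.normal(0.002, 0.02, (months, n_factors))  # simulated factor returns

by_rank = [[] for _ in range(n_factors)]
for t in range(lookback, months - 1):
    past = rets[t - lookback:t].mean(axis=0)  # trailing 6-month average
    order = np.argsort(past)                  # worst ... best past performer
    for rank, f in enumerate(order):
        by_rank[rank].append(rets[t + 1, f])  # realized next-month return

winner = np.mean(by_rank[-1])  # past best performer
loser = np.mean(by_rank[0])    # past worst performer
# On i.i.d. data winner and loser earn about the same going forward;
# momentum exists if, on real data, winner - loser is reliably positive.
print(round(winner - loser, 4))
```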

The 12-month results were inconsistent with momentum in the value factor.



Using 6 months, momentum becomes more apparent (6-month ranking, returns annualized).


With the 1-month momentum, factor momentum is clear: past winners continue (returns are annualized).


I'm rather surprised not to find momentum at 12 months, given it shows up at that horizon in the trend-following literature, and would like to understand how Gupta and Kelly found it at 60 months. Nonetheless, it does seem factor momentum at shorter horizons is real.

If we exclude the US 1930s, valuations of value are at an extreme, if we include them they are not. Meanwhile, over the next several months, value's past performance suggests a continuation of the trend.  Given the big moves in value tend to last for over a year (eg, the run-up and run-down in the 2000 technology bubble), it seems prudent to accept missing out on the first quarter of this regime change and wait until value starts outperforming the market before doubling down.

Thursday, April 09, 2020

Fermi's Intuition on Models

In this video snippet, Freeman Dyson talks about an experience he had with Enrico Fermi in 1951. Dyson was originally a mathematician who had just shown how two different formulations of quantum electrodynamics (QED), Feynman diagrams and Schwinger-Tomonaga's operator method, were equivalent. Fermi was a great experimental and theoretical physicist who built the first nuclear reactor and did pioneering work on neutrinos, pions, and muons.

Dyson and a team at Cornell were working on a model of strong interactions, the forces that bind protons and neutrons in the nucleus. Their theory had a speculative physical basis: a nucleon field and a pseudo-scalar meson field (the pion field), which interacted with the proton. Their approach was to use the same tricks Dyson used on QED. After a year and a half, they produced a model that generated a nice agreement with Fermi's empirical work on meson-proton scattering he had produced at his cyclotron in Chicago.

Dyson went to Chicago to explain his theory to Fermi and presented a graph showing how his theoretical calculations matched Fermi's data. Fermi hardly looked at the graphs, and said,
I'm not very impressed with what you've been doing. When one does a theoretical calculation there are two ways of doing it. Either you should have a clear physical model in mind, or you should have a rigorous mathematical basis. You have neither. 
Dyson asked about the numerical agreement between his model and the empirical data. Fermi then responded, 'how many free parameters did you use for the fitting?'  Dyson noted there were four. Fermi responded, 'Johnny von Neumann always used to say with four parameters I can fit an elephant, with five I can make him wiggle his trunk. So I don't find the numerical agreement very impressive.'

I love this because it highlights a good way of looking at models. A handful of free parameters can make any model fit the data, generating the same result as a miracle step in a logical argument. Either you derive a model from something you know to be true, or from a theory with a clear, intuitive causal mechanism.
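Von Neumann's quip is easy to demonstrate in miniature: with four free parameters, a cubic polynomial passes exactly through any four data points, however arbitrary.

```python
import numpy as np

# Four parameters "fitting an elephant": a cubic (4 coefficients) exactly
# interpolates any 4 points with distinct x-values, so perfect in-sample
# agreement carries zero explanatory content.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([0.7, -1.3, 2.9, 0.1])  # arbitrary "experimental" data

coeffs = np.polyfit(x, y, deg=3)     # 4 free parameters
fitted = np.polyval(coeffs, x)

print(np.allclose(fitted, y))        # True: a flawless, meaningless fit
```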

The entire interaction took only 15 minutes and was very disappointing, but Dyson was blessed with the wisdom and humility to take Fermi's dismissal as definitive and went back to Cornell to tell the team they should just write up what they had done and move on to different projects.

While this was crushing, with hindsight Dyson was grateful to Fermi. It saved his team from wasting years on a project that he discovered later would have never worked. There is no 'pseudo-scalar pion field.'  Eventually, physicists replaced the pion with 2 quarks, of which there are 6, highlighting the futility of the physical basis of their approach. Any experimental agreement they found was illusory.

After this experience, Dyson realized he was best suited to simplifying models or connecting axioms to applications like quantum field theory. What was required for the strong interactions at that time was not a deductive solution but an invention--in this case, quarks--and that requires strong intuition. He realized his strengths were more analytic, less intuitive.

Unfortunately, today our best scientists are unconcerned about the ability of free parameters to make a bad theory seem fruitful. In physics, we have inflation, dark matter, and dark energy, things that have never been isolated or fit into the Standard Model. In climate science, an anachronistic and clearly wrong Keynesian macro-model is one of many components (e.g., atmospheric, ocean, vegetation). They fit known data well but are totally unfalsifiable.

Tuesday, April 07, 2020

Decentralized Networks Need Centralized Oracles

I created an open-source contract and web front-end, OracleSwap, because I want to see crypto move back to its off-the-grid roots. I cannot administer it because I have too many fingerprints on it to benefit directly. OracleSwap is a template that makes it easy for programmers to administer contracts that reference objective outcomes: liquid assets or sports betting. Users create long or short swap (aka CFD) positions that reference a unique Oracle contract that warehouses prices (the prototype references ETH/USD, BTC/USD, and the S&P500). The only users in this contract are liquidity providers, investors, and the oracle. The single attack surface is via a conspiring oracle posting a fraudulent price. It contains several innovations, including forward-starting prices (like market-on-close), netting exposure for liquidity providers, and the ability and incentive for cheated parties to zero-out their cheater.

 The White Paper and Technical Appendix describe it more fully, but I want to explain why a centralized, pseudonymous, oracle is better than a decentralized oracle for smart contracts. Many thoughtful crypto leaders believe decentralization is a prerequisite for any dapp on the blockchain, which they define as implying many agents and a consensus mechanism. This is simply incorrect, a category error that assumes the parts must have all the characteristics of the whole. The bottom line is that decentralized oracles are inefficient and distract from the fundamental mechanism that makes any oracle 'trustless.'

 Attack and Censorship Resistance Is the Key 


After the first crusade (1099), the Knights Templar safeguarded pilgrims to newly conquered Jerusalem and quickly developed an early international bank. A pilgrim could deposit money or valuables within a Templar stronghold and receive an official letter describing what they had. That pilgrim could then withdraw cash along the route to take care of their needs. By the 12th century, depositors could freely move their wealth from one property to the next.

The Templars were not under any monarch's control, and even worse, many monarchs owed them money. Eventually, King Philip IV of France seized an attack surface by arresting hundreds of top Templars, including the Grand Master, all on the same day across Europe in 1307. They were charged with heresy, the medieval version of systemic risk, a clear threat to all that is good and noble. A few years later many Templars were executed and the Templar banking system disappeared (unknown Templars somehow fled with their vast fortune, which back then was not digital, and where it went remains a mystery).

 Governments, exchanges, and traditional financial institutions have always fought anything that might diminish their market power. Decentralization is essential for resisting their inevitable attacks, in that if someone takes over an existing blockchain, it can reappear via a hard fork. The present value of the old chain would create a willing and able army of substitute miners if China or Craig Wright decided to appropriate 51% of existing Ethereum miners.

 Vitalik Buterin nicely describes this resiliency in his admirable assessment of his limited power:
The thing with developers is that we are fairly fungible people. One developer goes down, and someone else can keep on developing. If someone puts a gun to my head and tells me to write a hard fork patch, I'll definitely write the hard fork patch. I'll write the GitHub issue, I'll write up the code, I'll publish it, and I'll do everything they say. If I do this and publish a hard fork patch to delete a bunch of accounts, how many people will be willing to download the update, install it and switch to that update? This is called decentralization.
 Vitalik Buterin. TechCrunch: Sessions Blockchain 2018 Zug, Switzerland

The potential for a hard fork in the case of an attack is the primary protection against outsiders. This depends on the protocol having a deep and committed set of users and developers who prioritize essential bitcoin principles--transparency, immutability, pseudonymity, confiscation-proof, and permissionless access--and why decentralization is critical for long-run crypto security.

Outside attacks have decimated if not destroyed several once useful financial innovations. E-gold, Etherdelta, Intrade, and ShapeShift all had conspicuous centralization points, allowing authorities to prosecute, close, or force them to submit to traditional financial protocols. A pseudonymous oracle running scripts on virtual servers across the globe would be impervious to such interference. This inheritance is what makes Ethereum so valuable, in that dapps do not need their own decentralized consensus mechanisms to avoid such attacks.

Any oracle that facilitates derivative trading or sports betting is subject to regulation in most developed countries. Dapp corporations are conspicuous attack surfaces. To the extent Augur and 0x do not compete with traditional institutions, authorities wisely see them as insignificant curiosities. If these protocols ever become competitive with conventional financial institutions—by providing a futures position on the ETH price, for instance—all the traditional fiat regulations will be forced upon them under the pretext of safeguarding the public. Maker and Chainlink are already flirting with KYC, because they know they cannot conspicuously monetize markets that will ultimately generate profits without surrendering to the Borg collective.

 Satoshi needed to remain anonymous at the early stages of bitcoin to avoid some local government prosecuting him before bitcoin could work without him. The peer-to-peer system bitcoin initially emulated, Tor, is populated by people who do not advertise on traditional platforms, have institutional investors, or speak at conferences. Viable dapps should follow this example and focus less on corporatization and more on developing their reputation among current crypto enthusiasts.

Conspiracy-Proofness is Redundant and Misleading 


For cases involving small sums of money, it is difficult for random individuals in decentralized systems to collude at the expense of other participants. The costs of colluding are too high, which eliminates the effect of trolls and singular troublemakers. Yet this creates a dangerous sense of complacency as any robust mechanism must incent good behavior even if there is collusion. If we want the blockchain to handle real, significant transactions someday, this implies cases where there would be enough ETH at stake to presume someone will eventually conspire to hack the system.

Satoshi knew that malicious collusion would be feasible with proof-of-work, just not problematic because it would be self-defeating. In the Bitcoin White Paper, Satoshi emphasized how proof-of-work removed the need for a trusted third party, which is why the term trustless is often attributed to a decentralized network. With proof-of-work, it is not impossible to double-spend, just contrary to self-interested rationality. Specifically, he wrote that anyone with the ability to double-spend 'ought to find it more profitable to play by the rules … than to undermine the system and the validity of his own wealth.'

For the large blockchains like Ethereum and Bitcoin, one needs specialized mining equipment that is only valuable if miners follow the letter and spirit of their law. The capital destroyed by manipulating blocks is a thousand-fold greater than the direct hash-power cost of such an attack. While a handful of Bitcoin or Ethereum mining groups could effectively collude to control 51% of the network, it is not worrisome because it would not be in their self-interest to engineer a double-spend given the cost of losing future revenue. For example, in the spring of 2019, the head of Binance, Changpeng Zhao, suggested a blockchain rollback to undo a recent theft. The bitcoin community mocked him, and he quickly recanted because this would not be in the long-term self-interest of the bitcoin miners or exchanges. Saving $40 million would decimate a $100 billion blockchain, making this an easy decision.

People often mention 'collusion resistance' as a primary decentralization virtue. A better term would be 'conspiracy resistance.' A decentralized system must generate proper incentives even if there is collusion because collusion is invariably possible as, in practice, large decentralized blockchains are controlled by a handful of teams (Michels' Iron Law of Oligarchy). There have been several instances of benign blockchain collusion, which when applied judiciously and sparingly increases resiliency (e.g., vulnerabilities in Bitcoin were patched behind the scenes in September 2018, the notorious Ethereum 2016 rollback in response to the DAO hack). Law professor Angela Walch highlighted episodes of benign collusion as evidence that Bitcoin and Ethereum are not decentralized, and thus should be more regulated by the standard institutions.

Lawyers are keen on technical definitions, but the key point is that conventional regulators could not regulate Bitcoin or Ethereum if they tried, highlighting the essential decentralization of these protocols. If the SEC in the US, or the FCA in the UK, tried to aggressively regulate Ethereum, they would soon find the decision-makers outside their jurisdiction. Similarly, if Joe Lubin and Vitalik Buterin agreed to fold Ethereum into Facebook, miners would fork the old chain and the existing Ether would be more valuable on the new chain. To the extent such a move is probable, the protocol is decentralized, safe from outsiders who do not like its vision for whatever reason.

 Conspiracy resistance all comes down to incentives, making sure that those running the system find running the system as generally understood more valuable than cheating. This same profit-maximizing incentive not only keeps miners honest, but it also protects them from themselves. While blockchains have many things in common, they have very different priorities. Users who prioritize speed prefer EOS; those who prioritize anonymity, Monero; institutional acceptance, Ripple. A quorum of miners who conspire to radically change their blockchain's traditional priorities will devalue their asset by alienating their base, and those who share the new priority will not switch over, but rather highlight that their favorite blockchain has been right all along. Competition among cryptos prevents hubristic insiders from doing too much damage.

 Costly Decentralization 


Quick and straightforward monitoring is essential for creating an incentive-compatible mechanism. For a decentralized oracle, various subsets of agents are at work on any outcome. It is difficult to find a concise set of data on, say, the percentage and type of Augur markets declared invalid, or a listing of Chainlink's past outcome reports. While all oracle reporting exists (immutably!), putting this together is simply impractical for an average user. Further, past errors and cheats are dismissed as anomalies, which lowers the cost of cheating.

The 2017 ICO bubble encouraged everyone in the crypto space to issue tokens regardless of need; how a token would make a dapp more efficient was a secondary concern for investors eager to invest in the next bitcoin. Even if a small fraction of ICO money was applied to research and development, that implies hundreds of millions of dollars of talent and time focused on creating decentralized dapps that could justify their need for tokens. All would have recognized the value of a dependable decentralized oracle, yet none delivered one, a telling failure. The most popular oracles today are effectively centralized: ChainLink and MakerDAO have conspicuous attack surfaces, as both are tightly controlled by insiders. They will continue to be effectively centralized because the alternative would be an Augur-like system that is intolerably inefficient (slow, hackable, lame contracts).

Decentralized oracles that depend on the market value of their tokens to incent good behavior have a significant wedge between how much users must pay the oracle and how much is needed to keep it honest. For example, suppose there is a game such that one needs to pay the reporter 1 ETH so that the net benefit of honestly reporting is greater than a scam the reporter can implement. If only 2% of token holders report on an outcome, this implies we must pay 50 ETH to the oracle collectively (1/0.02), as we have no way to focus the present value of the token onto the subset of token-holders reporting. One could force the reporter to post a bond that would be lost if they report dishonestly, but to make this work it would cap payoffs at trivial levels based on reporter capital, which inefficiently ignores the present value of the oracle, and also implies a lengthy delay in payment.
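The wedge arithmetic is simple enough to spell out; the numbers below are the hypothetical ones from the example:

```python
# Payment wedge for a token-based decentralized oracle: if honest reporting
# must be worth `honest_cost` ETH to whoever actually reports, but only a
# fraction of token holders report, users must pay that cost scaled up
# by 1 / participation, since the token's value cannot be focused on the
# reporting subset alone.
honest_cost = 1.0      # ETH needed to make truth-telling rational (hypothetical)
participation = 0.02   # share of token holders who report on a given outcome

required_fee = honest_cost / participation
print(required_fee)    # 50.0 ETH paid collectively for 1 ETH of incentive
```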

 Another problem with decentralized oracles is they generally serve a diverse set of games. While this facilitates delusions of Amazon-like generality, it makes specific contracts poorly aligned with oracle incentives. The frequency and the size of the payoff will vary across applications so that an oracle fee incenting honesty at all times will be too expensive for most applications. Efficient solutions minimize contextual parameters, implying the best general solution will be suboptimal for any particular use.

 While there are obvious costs to decentralization within an oracle, there are no benefits if the fundamental censorship/attack resistance requirement is satisfied. The wisdom of the crowd is not relevant for contracts on liquid assets like bitcoin or the S&P500. A reputation scoring algorithm is pointless because the most obvious risk is an exit scam, which relies on behaving honestly until the final swindle (Bitconnect).

To align the oracle's payoff space in a cryptoeconomically optimal way, one needs to create an oracle payoff such that the benefit of truthful reporting always outweighs the costs of misreporting. By having the oracle in total control, its revenue from truthful reporting is maximized; by being unambiguously responsible and easy to audit and punish, its costs from misreporting are fully borne by the oracle; by playing a specific repeated game, the cost/benefit calculus is consistent each week; by giving a cheated user the ability and incentive to punish a cheating oracle, the cheat payoff is minimized. These all lead to the efficiency of a single-agent oracle.
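The repeated-game logic above reduces to a simple inequality: honesty is rational whenever the discounted value of the oracle's future fee stream exceeds the one-time gain from cheating. A minimal sketch, with all numbers and parameter names hypothetical:

```python
# Hedged sketch of the single-agent oracle's incentive condition: report
# honestly as long as the present value of future fees (the franchise
# value) exceeds the one-time cheat gain. All parameters are illustrative.

def honest_is_rational(weekly_fee: float, weekly_discount: float,
                       cheat_gain: float) -> bool:
    # PV of a perpetuity of weekly fees at the per-week discount rate
    pv_future_fees = weekly_fee / weekly_discount
    return pv_future_fees > cheat_gain

# e.g. 10 ETH/week in fees at a 0.5% weekly discount -> 2000 ETH franchise
print(honest_is_rational(10.0, 0.005, cheat_gain=1500.0))  # True
```

This is why being 'all-in' matters: concentrating the fee stream on one accountable agent maximizes the left-hand side of the inequality for a given user cost.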

 Fault Tolerance 


Unintentional errors can come from corrupted sources or the algorithm that aggregates these prices and posts a single price to the contract. We often make unintentional mistakes and rely on the goodwill and common sense of those we do business with to 'undo' those cases where we might have added an extra zero to a payment. Credit cards allow chargebacks, and if a bank accidentally deposits $1MM in your account, you are liable to pay it back. The downside is that this process involves third parties who enforce the rule that big unintentional mistakes are reversible, and this implies they have rights over user account balances.

In OracleSwap, the oracle contract itself has two error checks within the Solidity code: first, whether prices moved by more than 50% from their prior value, and second, whether they are exactly the same as their previous value. These constraints catch the most common errors. Off the blockchain, however, is where error filtering is more efficiently addressed in general, and ultimately it should be made into an algorithm, because otherwise one introduces an attack surface via the human who would verify a final report. Thus, using many people to reduce errors just adds back the more subtle and dangerous source of bad prices. OracleSwap uses an automated pull of prices from several independent sources over a couple-minute window, and takes the median. As the contract is targeting long-term investors, a median price from several exchanges will have a tolerable standard error; as the precise feeds and exchanges are unspecified, this prevents censorship; as prices are posted during a 1-hour window that precludes trading, it is easy to collect and validate an accurate price.
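A rough Python sketch of this aggregation logic: take the median across sources, then apply the two sanity checks described above. The 50% threshold comes from the text; the function name and error handling are assumptions for illustration, not the actual contract code.

```python
# Illustrative off-chain price aggregation: median across several
# independent sources, rejecting moves over 50% from the prior posted
# price and prices identical to the prior post (a likely stale feed).
from statistics import median

def aggregate_price(source_prices, prior_price=None):
    px = median(source_prices)
    if prior_price is not None:
        if abs(px - prior_price) / prior_price > 0.50:
            raise ValueError("price moved more than 50% from prior value")
        if px == prior_price:
            raise ValueError("price identical to prior value (stale feed?)")
    return px

print(aggregate_price([99.8, 100.1, 100.3], prior_price=98.0))  # 100.1
```

The median makes a single corrupted source irrelevant, which is exactly the property wanted when the individual feeds are unspecified.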

 A Market: Competing Centralized Agents 


Decentralizing oracles solves a problem they do not have: attack and censorship resistance. An agent updating an oracle contract only needs access to the internet and pseudonymity to avoid censorship. Given that, the best way to create proper oracle incentives is to create a game where the payoffs for honesty and cheating are well parameterized, and outsiders can easily verify if an agent is behaving honestly. Simplicity is essential to any robust game, and this implies removing parties and procedures.

 Markets are the ultimate decentralized mechanism. It is not a paradox that markets consist of centralized agents as it is often the nature of things that properties exist at lower levels but not higher ones. A decentralized market just needs consumers to have both choice and information, and for businesses to have free entry. Markets have always depended upon individuals and corporations with valuable reputations because invariably quality is difficult to assess, so consumers prefer sellers with good reputations to avoid going home with bad eggs or a car with bad wiring. Blockchains are the best accountability device in history, allowing contracts to create valuable reputations for their administrators while remaining anonymous and thus uncensorable.

A set of competing contracts is more efficient than generalized oracles designed for unspecified contracts, or generalized trading protocols designed for unspecified oracles. A simple contract tied to an oracle that is 'all-in' creates clear and unambiguous accountability, generating the strongest incentive for honest reporting.

Tuesday, March 31, 2020

The Real Corporate Bond Puzzle

The conventional academic corporate bond puzzle has been that 'risky' bonds generate too high a return premium (see here). The most conspicuous credit metric captures US BBB and AAA bond yields going back to 1919 (Moody's calls them Baa and Aaa). This generates enough data to make it the standard corporate spread measure, especially when looking at correlations with business cycles. Yet BBB bonds are still 'investment grade' (BBB, A, AA, and AAA), and have an expected loss rate (default probability times loss in the event of default) of only about 25 basis points, based on the 10-year cumulative default rate after the initial rating. Since the spread between Baa and Aaa bonds has averaged about 1.0% since 1919, this generates an approximate 0.75% annualized excess return compared to the riskless Aaa yield. Given the modest amount of risk in BBB portfolios, this creates the puzzle that corporate bond spreads are 'too high.'

HY bonds have grades of B and BB (CCC bonds are considered distressed). Their yields have averaged 3.5% higher than AAA bonds since 1996, yet the implication on returns is less obvious because the default rates are much higher (3-5% annually over the cycle). As a defaulted bond has an average recovery rate of 50% of face, a single default can wipe out many years of a 3.5% premium. 
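The arithmetic behind both spread numbers can be made explicit: expected excess return is roughly the yield spread minus the annual default rate times loss given default (one minus recovery). A quick sketch using the figures from the text, with the function name my own:

```python
# Back-of-the-envelope excess return implied by a credit spread:
# spread minus annual default rate times (1 - recovery rate).
# Inputs below are the approximate figures quoted in the text.

def expected_excess_return(spread: float, annual_default_rate: float,
                           recovery: float = 0.5) -> float:
    return spread - annual_default_rate * (1.0 - recovery)

# BBB vs Aaa: ~1.0% spread, ~25 bp expected loss -> ~0.75% excess return
print(round(expected_excess_return(0.010, 0.005, recovery=0.5), 6))  # 0.0075
# HY: 3.5% yield premium, ~4% default rate, 50% recovery -> ~1.5% gross
print(round(expected_excess_return(0.035, 0.04, recovery=0.5), 6))   # 0.015
```

Note this is gross of transaction and management costs, which is exactly where the later sections show the remaining HY premium disappears.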

Prior to the 1980s all HY bonds were 'fallen angels,' originally investment grade but downgraded due to poor financial performance. Mike Milken popularized the market to facilitate corporate takeovers, and by the 1990s it became common for firms to issue bonds in the B or BB category. In the early 1990s there was a spirited debate as to the actual default rate, and total returns, on HY bonds. This was not merely because we did not have much data on default and recovery rates, but also because bonds issued as HY, instead of falling to HY, might be fundamentally different. Indeed, when I worked at Moody's in the late 1990s I came across an internal report, circa 1990, that guesstimated the default rate for HY bonds would be around 15% annualized. HY bonds were not just risky, but there was a great deal of 'uncertainty' in the sense of Knight or Keynes (winning a lottery vs. the probability Ivanka Trump becomes president).

We now have over 30 years of data on this asset class, and as usual, the risky assets have lower returns than their safe counterparts. There is a HY yield premium, but no return premium.

The primary data we have are the Bank of America (formerly Merrill Lynch) bond indices, which go back to December 1988. Here we see a seemingly intuitive increase in risk and return:

Annualized Returns to US Corp Bond Indices
Bank of America (formerly Merrill Lynch)
December 1988 to March 2020

            HY       BBB      AA
AnnRet      7.85%    7.18%    6.49%
AnnStdev    8.17%    5.42%    4.58%
These indices are misleading. Just as using end-of-day prices to generate a daily trading strategy is biased, monthly price data for these relatively illiquid assets inflate the feasible return. Managers in this space pay large bid-ask spreads, and if they are seen as eager to exit a position--which is usually chunky--this generates price impact, moving the price against them. Add to this the operational expense incurred in warehousing such assets, and we can understand why actual HY ETFs have lagged the Merrill HY index by about 1.4%, annualized.

High Yield ETF Return Differential to BoA High Yield Index

JNK vs. BoA (2008 - 2020):  -1.58%
HYG vs. BoA (2007 - 2020):  -1.28%

JNK and HYG are US tickers; BoA is their High Yield Total Return Index.

With this adjustment, the HY return premium in the BoA HY index disappears relative to investment-grade bonds. In my 2008 book Finding Alpha I documented that over the 1997-2008 period, closed-end bond funds showed a 2.7% return premium for IG over HY bonds.

Via Twitter, Michael Krause informed me about a vast duration difference in the ETFs I was examining, and so I edited an earlier draft for the sake of accuracy.

More recently, we can look at the difference in the HY and IG bond ETFs since then. HYG and JNK have an average maturity of 5.6 years, while the investment-grade ETFs LQD and IGSB have average maturities of 13 and 3 years, respectively. Adjusting for this duration difference implies a 200 basis point (i.e., 2.0%) annualized premium for IG over HY ETFs.

The HY ETFs carry a 50 basis point management fee, versus about 10 bps for the IG ETFs. Given the much greater amount of research needed to manage HY portfolios, this fee reflects a real cost, not something that should be ignored as exogenous or unnecessary.

This generates, actually, a nice risk-return plot: returns are linear in 'residual volatility,' the volatility unexplained by interest rate changes.
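Residual volatility here can be computed by regressing corporate bond index returns on a treasury return series and taking the standard deviation of the regression residual. A sketch of that calculation, using simulated monthly returns (the data and the 0.9 rate beta are made up for illustration):

```python
# Sketch of 'residual volatility': the volatility of corporate bond
# returns left over after removing the interest-rate component via a
# simple OLS regression on treasury returns. Data below are simulated.
import numpy as np

def residual_vol(corp_returns, treasury_returns):
    corp = np.asarray(corp_returns)
    tsy = np.asarray(treasury_returns)
    beta, alpha = np.polyfit(tsy, corp, 1)       # OLS slope and intercept
    residuals = corp - (alpha + beta * tsy)
    return residuals.std(ddof=1)

rng = np.random.default_rng(0)
tsy = rng.normal(0.003, 0.01, 120)                   # fake treasury returns
corp = 0.001 + 0.9 * tsy + rng.normal(0, 0.02, 120)  # rate beta + credit noise
print(round(residual_vol(corp, tsy), 3))             # close to the 0.02 credit noise
```

The recovered residual volatility is close to the simulated credit-noise component, which is the quantity the risk-return plot would put on its x-axis.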