Tuesday, April 21, 2020

Factor Momentum vs Factor Valuation

I am not a fan of most equity factors, but if any equity factor exists, it is the value factor. Graham and Dodd, Warren Buffett, and Fama and French have all highlighted value as an investment strategy. Its essence is the ratio of a backward-looking accounting value to a forward-looking market price, which discounts future dividends. As we are not venture capitalists, but rather stock investors, all future projections are based on current accounting information. To the extent that a market is delusional, as in the 1999 tech bubble, that should show up as an excess deviation from the accounting or current valuation metric (eg, earnings, book value). If there's any firm characteristic that should capture some of the behavioral bias trends among investors, this is it.

Alternatively, there's the risk story. Many value companies are just down on their luck, like Apple in the 1990s, and people project recent troubles too far into the future. Thus, current accounting valuations are low, but these are anomalous and should be treated as such. Alas, most value companies are not doing poorly, they just do not offer any possibility of a 10-fold return, like Tesla or Amazon. Greedy, short-sighted investors love stocks with great upside--ignoring the boring value stocks--and just as they buy lottery tickets with an explicit 50% premium to fair value, they are willing to pay for hope.

There are several value metrics, and all tell a similar story now. As an aside, note that it's useful to turn all your value metrics into ratios where higher means cheaper: B/M, E/P, CashFlow/Price, Operating Earnings/Book Equity. This helps your intuition as you sift through them. Secondly, E/P is better than P/E because E can go through zero into negative numbers, creating a bizarre non-monotonicity between your metric and your concept; in contrast, if P goes to zero, predicting its future performance is irrelevant.
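To make that non-monotonicity concrete, here is a minimal sketch with hypothetical numbers: a money-losing firm gets a negative P/E that sorts as if it were the cheapest stock, while E/P orders the same firms sensibly.

```python
# Hypothetical firms: same price, earnings ranging from negative to strongly positive.
firms = {"A": {"price": 10.0, "eps": -2.0},   # losing money
         "B": {"price": 10.0, "eps": 0.5},    # barely profitable
         "C": {"price": 10.0, "eps": 2.0}}    # genuinely cheap on earnings

for name, f in firms.items():
    pe = f["price"] / f["eps"]   # P/E jumps from negative to large positive as E crosses zero
    ep = f["eps"] / f["price"]   # E/P rises monotonically with cheapness
    print(f"{name}: P/E={pe:+6.1f}  E/P={ep:+.2f}")
```

Sorting ascending by P/E would rank the money-loser as the 'cheapest' stock; sorting descending by E/P ranks the firms the way the concept intends.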

If you rank all stocks by their B/M, take the average B/M of the top and bottom 30%, and put them in a ratio, you get a sense of how cheap value stocks are: (B/M, 80th percentile)/(B/M, 20th percentile). Historically, all such value ratios are trend stationary, ie, they mean-revert. Given B/M ratios move mainly via market cap rather than book value or earnings, this means value stock performance is forecastable. A high ratio of B/M for the top value stocks over the bottom value stocks implies good times ahead for value stocks, as the M of the value stocks increases relative to the M of the anti-value stocks (eg, growth). All of these value metrics are near historical highs over the past 70 years (see Alpha Architect's charts here).
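Here is a sketch of that spread calculation, assuming you have a pandas DataFrame of monthly B/M ratios, one column per stock; the layout and names are my assumption, not any particular dataset.

```python
import pandas as pd

def value_spread(bm_cross_section: pd.Series) -> float:
    """Cheapness of value vs. growth in one month's cross-section of B/M ratios."""
    cheap = bm_cross_section[bm_cross_section >= bm_cross_section.quantile(0.70)]      # top 30% by B/M
    expensive = bm_cross_section[bm_cross_section <= bm_cross_section.quantile(0.30)]  # bottom 30% by B/M
    return cheap.mean() / expensive.mean()   # high ratio => value unusually cheap vs. growth

# bm: DataFrame indexed by month, one column per stock's B/M ratio
# spread = bm.apply(value_spread, axis=1)   # time series to compare against its own history
```

The 'value is cheap' argument is simply that this spread is near the top of its own historical range.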


This is pretty compelling, so much so that last November Cliff Asness at AQR decided to double down on their traditional value tilt. While there are dozens of value metrics today that scream 'buy value now', we have the Ur-metric--Book/Market--going back to 1927 in the US. This suggests we are not anywhere close to a top: the ratio was much higher for most of the 1930s, when value did relatively well on a beta-adjusted basis.


It's easy to come up with a story as to why the 1930s are not relevant today, but that is throwing out one-tenth of your data just because it disagrees with you.

Yet there's another way to time factors: momentum, whereby a factor's relative performance tends to persist for at least a couple of months, and perhaps a year. Momentum refers to relative outperformance, as opposed to absolute outperformance, which is called 'trend following.' Trend following works as well, but applies to asset classes like stocks, bonds, and gold, while momentum applies to individual stocks, industries, or factors.

Year to date, iShares' growth ETF (IWO) has outperformed its value ETF (IWN) by 12%. For the past 10 years, growth has outperformed value by 100%. While iShares' growth ETF has a slightly higher beta (1.27 vs 1.05), that does not explain more than 20% of this. Regardless of your momentum definition--3 months, 12 months--value is not a buy based on its momentum, which is currently negative and has been for over a decade in the US.

In 2019, AQR's Tarun Gupta and Bryan Kelly published 'Factor Momentum Everywhere' in the Journal of Portfolio Management. They noted that 'persistence in factor returns is strange and ubiquitous.' Incredibly, they found that performance persisted using anywhere from 1 to 60 months of past returns. I was happy to assume factor momentum exists, but usually saw evidence at the 6-month-and-below horizon (eg, see Arnott et al). If they found it at 60 months, my Spidey sense tingled: maybe this is an artifact of a kitchen-sink regression where 121 Barra factors are thrown in, generating persistence in alpha? That hypothesis would take a lot of work to test, but at the very least I should see if value factor momentum is clear.

I created several value proxy portfolios using Ken French's factor data:
  • HML: Fama-French's value proxy, long high B/M and short low B/M (1927-2020)
  • B/M: book to market (1927-2020)
  • CF/P: cash flow to price (1951-2020)
  • E/P: earnings to price (1951-2020)
  • OP: operating profits to book equity (1963-2020)
I applied a rolling regression against the past 36 months of market returns to remove the market beta. As the HML portfolio's beta has gone from significantly positive to negative and back to slightly positive over time, it's useful to make this metric beta-neutral so that market fluctuations do not show up as value fluctuations. Unlike the Barra factors, removing the market factor is not prone to overfitting, and it captures something sophisticated investors not only understand but actually use.
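A minimal sketch of that beta-neutralization, assuming a DataFrame of monthly returns with columns like 'hml' and 'mkt_rf' pulled from Ken French's data library; the column names and estimation details here are my assumptions, not a copy of the original calculation.

```python
import numpy as np
import pandas as pd

def beta_neutral(factor: pd.Series, market: pd.Series, window: int = 36) -> pd.Series:
    """Subtract beta * market from each month's factor return, with beta
    estimated over the trailing `window` months (excluding the current month)."""
    out = pd.Series(index=factor.index, dtype=float)
    for i in range(window, len(factor)):
        f = factor.iloc[i - window:i]
        m = market.iloc[i - window:i]
        beta = np.polyfit(m, f, 1)[0]                      # slope of factor on market
        out.iloc[i] = factor.iloc[i] - beta * market.iloc[i]
    return out

# df has monthly columns 'hml' and 'mkt_rf' (market minus risk-free)
# hml_neutral = beta_neutral(df['hml'], df['mkt_rf'])
```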

The non-zero beta is just one reason to hate on the HML factor. Another is that, because it contains a short position, it can be of little interest if the short side is driving the results: for most factor investors--who have long horizons--short portfolios are not in the opportunity set. Most 'bad' stocks, following the low-vol phenomenon, are not just bad longs but also bad shorts: returns are low, not negative, and volatility is very high. Shorting equity factors is generally a bad idea, and thus an irrelevant comparison because you should not be tempted to short these things.

The result is a set of 5 beta-neutral value proxy portfolios. I then ranked them by their past returns and looked at the subsequent returns. These returns are all relative, cross-sectional, because value-weighted, beta-adjusted returns across groupings net to zero each month by definition. By removing the market (ie, CAPM) beta, we can see the relative performance, which is the essence of momentum as applied to stocks (as defined in the seminal Jegadeesh and Titman paper).
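A sketch of the ranking exercise, assuming `returns` is a DataFrame holding the five beta-neutral monthly return series; the lookback and the next-month convention are illustrative assumptions.

```python
import pandas as pd

def factor_momentum(returns: pd.DataFrame, lookback: int = 12) -> pd.Series:
    """Each month, rank the value proxies by their trailing `lookback`-month
    return and record the next month's return by rank (1 = best past performer)."""
    rows = []
    for i in range(lookback, len(returns)):
        past = (1 + returns.iloc[i - lookback:i]).prod() - 1    # trailing compound return
        ranks = past.rank(ascending=False, method="first")      # 1 = strongest past return
        nxt = returns.iloc[i]                                   # the subsequent month
        rows.append({int(ranks[c]): nxt[c] for c in returns.columns})
    return pd.DataFrame(rows).mean() * 12                       # avg next-month return by rank, annualized

# by_rank = factor_momentum(beta_neutral_returns, lookback=12)
```

If factor momentum holds, the rank-1 portfolio's subsequent returns should sit above rank 5's.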

The 12-month results were inconsistent with momentum in the value factor.



Using a 6-month lookback, momentum becomes more apparent (6-month ranking, but returns annualized).


With a 1-month lookback, factor momentum is clear: past winners continue (returns are annualized).


I'm rather surprised not to find momentum at 12 months, given it shows up at that horizon in the trend-following literature, and would like to understand how Gupta and Kelly found it at 60 months. Nonetheless, it does seem factor momentum at shorter horizons is real.

If we exclude the US 1930s, valuations of value are at an extreme; if we include them, they are not. Meanwhile, over the next several months, value's past performance suggests a continuation of the trend. Given the big moves in value tend to last for over a year (eg, the run-up and run-down in the 2000 technology bubble), it seems prudent to accept missing out on the first quarter of this regime change and wait until value starts outperforming the market before doubling down.

Thursday, April 09, 2020

Fermi's Intuition on Models

In this video snippet, Freeman Dyson talks about an experience he had with Enrico Fermi in 1951. Dyson was originally a mathematician who had just shown that two different formulations of quantum electrodynamics (QED), Feynman's diagrams and the Schwinger-Tomonaga operator method, were equivalent. Fermi was a great experimental and theoretical physicist who built the first nuclear reactor and did pioneering work on neutrinos, pions, and muons.

Dyson and a team at Cornell were working on a model of the strong interactions, the forces that bind protons and neutrons in the nucleus. Their theory had a speculative physical basis: a nucleon field and a pseudo-scalar meson field (the pion field) that interacted with the proton. Their approach was to use the same tricks Dyson had used on QED. After a year and a half, they produced a model that agreed nicely with Fermi's empirical work on meson-proton scattering from his cyclotron in Chicago.

Dyson went to Chicago to explain his theory to Fermi and presented a graph showing how his theoretical calculations matched Fermi's data. Fermi hardly looked at the graphs, and said,
I'm not very impressed with what you've been doing. When one does a theoretical calculation there are two ways of doing it. Either you should have a clear physical model in mind, or you should have a rigorous mathematical basis. You have neither. 
Dyson asked about the numerical agreement between his model and the empirical data. Fermi then responded, 'how many free parameters did you use for the fitting?'  Dyson noted there were four. Fermi responded, 'Johnny von Neumann always used to say with four parameters I can fit an elephant, with five I can make him wiggle his trunk. So I don't find the numerical agreement very impressive.'

I love this because it highlights a good way of looking at models. A handful of free parameters can make any model fit the data, generating the same result as a miracle step in a logical argument. Either you derive a model from something you know to be true, or you derive it from a theory with a clear, intuitive causal mechanism.
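Von Neumann's quip is easy to demonstrate with a toy example (arbitrary numbers, nothing to do with Dyson's actual model): four free parameters fit any four data points exactly, so a perfect fit carries no evidence on its own.

```python
import numpy as np

# Four arbitrary "measurements" -- nothing elephant-like about them.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([0.3, -1.2, 2.7, 0.5])

coeffs = np.polyfit(x, y, deg=3)                 # a cubic: four free parameters
print(np.allclose(np.polyval(coeffs, x), y))     # True: a perfect 'fit' with zero explanatory power
```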

The entire interaction took only 15 minutes and was very disappointing, but Dyson was blessed with the wisdom and humility to take Fermi's dismissal as definitive and went back to Cornell to tell the team they should just write up what they had done and move on to different projects.

While this was crushing, with hindsight Dyson was grateful to Fermi. It saved his team from wasting years on a project that, he discovered later, would never have worked. There is no 'pseudo-scalar pion field.' Eventually, physicists described the pion as a quark-antiquark pair (there are six quark flavors), highlighting the futility of the physical basis of their approach. Any experimental agreement they found was illusory.

After this experience, Dyson realized he was best suited to simplifying models or connecting axioms to applications, as in quantum field theory. What was required for the strong interactions at that time was not a deductive solution but an invention--in this case, quarks--and that requires strong intuition. He realized his strengths were more analytic, less intuitive.

Unfortunately, today our best scientists are unconcerned about the ability of free parameters to make a bad theory seem fruitful. In physics, we have inflation, dark matter, and dark energy, things that have never been isolated or fit into the Standard Model. In climate science, an anachronistic and clearly wrong Keynesian macro-model is just one of many components (e.g., atmospheric, ocean, vegetation). These models fit known data well but are totally unfalsifiable.

Tuesday, April 07, 2020

Decentralized Networks Need Centralized Oracles

I created an open-source contract and web front-end, OracleSwap, because I want to see crypto move back to its off-the-grid roots. I cannot administer it because I have too many fingerprints on it to benefit directly. OracleSwap is a template that makes it easy for programmers to administer contracts that reference objective outcomes: liquid assets or sports betting. Users create long or short swap (aka CFD) positions that reference a unique oracle contract that warehouses prices (the prototype references ETH/USD, BTC/USD, and the S&P500). The only users in this contract are liquidity providers, investors, and the oracle. The single attack surface is a conspiring oracle posting a fraudulent price. It contains several innovations, including forward-starting prices (like market-on-close), netting of exposure for liquidity providers, and the ability and incentive for cheated parties to zero out their cheater.

 The White Paper and Technical Appendix describe it more fully, but I want to explain why a centralized, pseudonymous, oracle is better than a decentralized oracle for smart contracts. Many thoughtful crypto leaders believe decentralization is a prerequisite for any dapp on the blockchain, which they define as implying many agents and a consensus mechanism. This is simply incorrect, a category error that assumes the parts must have all the characteristics of the whole. The bottom line is that decentralized oracles are inefficient and distract from the fundamental mechanism that makes any oracle 'trustless.'

 Attack and Censorship Resistance Is the Key 


After the first crusade (1099), the Knights Templar safeguarded pilgrims to newly conquered Jerusalem and quickly developed an early international bank. A pilgrim could deposit money or valuables within a Templar stronghold and receive an official letter describing what they had. That pilgrim could then withdraw cash along the route to take care of their needs. By the 12th century, depositors could freely move their wealth from one property to the next.

The Templars were not under any monarch's control, and even worse, many monarchs owed them money. Eventually, King Philip IV of France seized an attack surface by arresting hundreds of top Templars, including the Grand Master, all on the same day across France in 1307. They were charged with heresy, the medieval version of systemic risk, a clear threat to all that is good and noble. A few years later many Templars were executed and the Templar banking system disappeared [some Templars were somehow able to flee with their vast fortune, which back then was not digital, and it is a mystery where it went].

 Governments, exchanges, and traditional financial institutions have always fought anything that might diminish their market power. Decentralization is essential for resisting their inevitable attacks, in that if someone takes over an existing blockchain, it can reappear via a hard fork. The present value of the old chain would create a willing and able army of substitute miners if China or Craig Wright decided to appropriate 51% of existing Ethereum miners.

 Vitalik Buterin nicely describes this resiliency in his admirable assessment of his limited power:
The thing with developers is that we are fairly fungible people. One developer goes down, and someone else can keep on developing. If someone puts a gun to my head and tells me to write a hard fork patch, I'll definitely write the hard fork patch. I'll write the GitHub issue, I'll write up the code, I'll publish it, and I'll do everything they say. If I do this and publish a hard fork patch to delete a bunch of accounts, how many people will be willing to download the update, install it and switch to that update? This is called decentralization.
 Vitalik Buterin. TechCrunch: Sessions Blockchain 2018 Zug, Switzerland

The potential for a hard fork in the case of an attack is the primary protection against outsiders. It depends on the protocol having a deep and committed set of users and developers who prioritize essential bitcoin principles--transparency, immutability, pseudonymity, resistance to confiscation, and permissionless access--and it is why decentralization is critical for long-run crypto security.

Outside attacks have decimated if not destroyed several once-useful financial innovations. E-gold, EtherDelta, Intrade, and ShapeShift all had conspicuous centralization points, allowing authorities to prosecute them, close them, or force them to submit to traditional financial protocols. A pseudonymous oracle running scripts on remote servers across the globe would be impervious to such interference. This inheritance is what makes Ethereum so valuable, in that dapps do not need their own decentralized consensus mechanisms to avoid such attacks.

Any oracle that facilitates derivative trading or sports betting is subject to regulation in most developed countries. Dapp corporations are conspicuous attack surfaces. To the extent Augur and 0x do not compete with traditional institutions, authorities are wise to simply see them as insignificant curiosities. If these protocols ever become competitive with conventional financial institutions—by providing a futures position on the ETH price, for instance—all the traditional fiat regulations will be forced upon them under the pretext of safeguarding the public. Maker and Chainlink are already flirting with KYC because they know they cannot conspicuously monetize markets that will ultimately generate profits without surrendering to the Borg collective.

 Satoshi needed to remain anonymous at the early stages of bitcoin to avoid some local government prosecuting him before bitcoin could work without him. The peer-to-peer system bitcoin initially emulated, Tor, is populated by people who do not advertise on traditional platforms, have institutional investors, or speak at conferences. Viable dapps should follow this example and focus less on corporatization and more on developing their reputation among current crypto enthusiasts.

Conspiracy-Proofness is Redundant and Misleading 


For cases involving small sums of money, it is difficult for random individuals in decentralized systems to collude at the expense of other participants. The costs of colluding are too high, which eliminates the effect of trolls and singular troublemakers. Yet this creates a dangerous sense of complacency, as any robust mechanism must incent good behavior even if there is collusion. If we want the blockchain to handle real, significant transactions someday, there will eventually be enough ETH at stake that we should presume someone will conspire to hack the system.

Satoshi knew that malicious collusion would be feasible with proof-of-work, just not problematic, because it would be self-defeating. In the Bitcoin White Paper, Satoshi emphasized how proof-of-work removed the need for a trusted third party, which is why the term 'trustless' is often attributed to a decentralized network. With proof-of-work, it is not impossible to double-spend, just contrary to self-interested rationality. Specifically, he wrote that anyone with the ability to double-spend 'ought to find it more profitable to play by the rules … than to undermine the system and the validity of his own wealth.'

For large blockchains like Ethereum and Bitcoin, one needs specialized mining equipment that is only valuable if miners follow the letter and spirit of their law. The capital destroyed by manipulating blocks is a thousand-fold greater than the direct hash-power cost of such an attack. While a handful of Bitcoin or Ethereum mining groups could effectively collude and control 51% of the network, it is not worrisome because it would not be in their self-interest to engineer a double-spend given the cost of losing future revenue. For example, in the spring of 2019, the head of Binance, Changpeng Zhao, suggested a blockchain rollback to undo a recent theft. The bitcoin community mocked him, and he quickly recanted, because this would not be in the long-term self-interest of the bitcoin miners or exchanges. Saving $40 million would decimate a $100 billion blockchain, making this an easy decision.

People often mention 'collusion resistance' as a primary decentralization virtue. A better term would be 'conspiracy resistance.' A decentralized system must generate proper incentives even if there is collusion, because collusion is invariably possible: in practice, large decentralized blockchains are controlled by a handful of teams (Michels' Iron Law of Oligarchy). There have been several instances of benign blockchain collusion, which when applied judiciously and sparingly increases resiliency (e.g., vulnerabilities in Bitcoin were patched behind the scenes in September 2018, and the notorious Ethereum 2016 rollback in response to the DAO hack). Law professor Angela Walch highlighted episodes of benign collusion as evidence that Bitcoin and Ethereum are not decentralized, and thus should be more regulated by the standard institutions.

Lawyers are keen on technical definitions, but the key point is that conventional regulators could not regulate Bitcoin or Ethereum even if they tried, highlighting the essential decentralization of these protocols. If the SEC in the US, or the FCA in the UK, tried to aggressively regulate Ethereum, they would soon find the decision-makers outside their jurisdiction. Similarly, if Joe Lubin and Vitalik Buterin agreed to fold Ethereum into Facebook, miners would fork the old chain and the existing Ether would be more valuable on this new chain. To the extent such a move is probable, the protocol is decentralized, safe from outsiders who do not like its vision for whatever reason.

Conspiracy resistance all comes down to incentives: making sure that those running the system find running it as generally understood more valuable than cheating. This same profit-maximizing incentive not only keeps miners honest, it also protects them from themselves. While blockchains have many things in common, they have very different priorities. Users who prioritize speed prefer EOS; those who prioritize anonymity, Monero; institutional acceptance, Ripple. A quorum of miners who conspire to radically change their blockchain's traditional priorities will devalue their asset by alienating their base, and users who share the new priority will not switch over, but rather note that their favorite blockchain has been right all along. Competition among cryptos prevents hubristic insiders from doing too much damage.

 Costly Decentralization 


Quick and straightforward monitoring is essential for creating an incentive-compatible mechanism. For a decentralized oracle, various subsets of agents are at work on any outcome. It is difficult to find a concise set of data on, say, the percentage and type of Augur markets declared invalid, or a listing of Chainlink's past outcome reports. While all oracle reporting exists (immutably!), putting this together is simply impractical for an average user. Further, past errors and cheats are dismissed as anomalies, which lowers the cost of cheating.

The 2017 ICO bubble encouraged everyone in the crypto space to issue tokens regardless of need; how a token would make a dapp more efficient was a secondary concern for investors eager to invest in the next bitcoin. Even if only a small fraction of ICO money was applied to research and development, that implies hundreds of millions of dollars of talent and time focused on creating decentralized dapps that could justify their need for tokens. All would have recognized the value of a dependable decentralized oracle, yet they were unable to deliver one, a telling failure. The most popular oracles today are effectively centralized: Chainlink and MakerDAO have conspicuous attack surfaces, as they are both tightly controlled by insiders. They will continue to be effectively centralized because the alternative would be an Augur-like system that is intolerably inefficient (slow, hackable, lame contracts).

Decentralized oracles that depend on the market value of their tokens to incent good behavior have a significant wedge between how much users must pay the oracle and how much is needed to keep it honest. For example, suppose there is a game such that one needs to pay the reporter 1 ETH so that the net benefit of honest reporting is greater than any scam the reporter can implement. If only 2% of token holders report on an outcome, this implies we must pay 50 ETH to the oracle collectively (1/0.02), as we have no way to focus the present value of the token onto the subset of token-holders reporting. One could force the reporter to post a bond that would be lost if they report dishonestly, but to make this work it would cap payoffs at trivial levels based on reporter capital, which inefficiently ignores the present value of the oracle, and it also implies a lengthy delay in payment.
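The wedge arithmetic, using the hypothetical numbers from the example above:

```python
# Hypothetical numbers: 1 ETH keeps a single reporter honest, but only 2% of
# token holders actually report, so the token's incentive is spread too thin.
honest_fee_needed = 1.0    # ETH required so one reporter prefers honesty over the best scam
reporting_share = 0.02     # fraction of token holders who report on a given outcome

# Users must overpay by 1/reporting_share because the payment cannot be
# focused on the subset of token holders doing the reporting.
total_fee = honest_fee_needed / reporting_share
print(total_fee)   # 50.0 ETH paid collectively to buy 1 ETH worth of honest reporting
```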

 Another problem with decentralized oracles is they generally serve a diverse set of games. While this facilitates delusions of Amazon-like generality, it makes specific contracts poorly aligned with oracle incentives. The frequency and the size of the payoff will vary across applications so that an oracle fee incenting honesty at all times will be too expensive for most applications. Efficient solutions minimize contextual parameters, implying the best general solution will be suboptimal for any particular use.

 While there are obvious costs to decentralization within an oracle, there are no benefits if the fundamental censorship/attack resistance requirement is satisfied. The wisdom of the crowd is not relevant for contracts on liquid assets like bitcoin or the S&P500. A reputation scoring algorithm is pointless because the most obvious risk is an exit scam, which relies on behaving honestly until the final swindle (Bitconnect).

To align the oracle's payoff space in a cryptoeconomically optimal way, one needs to create an oracle payoff such that the benefit of truthful reporting always outweighs the benefit of misreporting. By having the oracle in total control, its revenue from truthful reporting is maximized; by being unambiguously responsible and easy to audit and punish, the costs of misreporting are fully borne by the oracle; by playing a specific repeated game, the cost/benefit calculus is consistent each week; by giving a cheated user the ability and incentive to punish a cheating oracle, the cheat payoff is minimized. These all point to the efficiency of a single-agent oracle.
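A sketch of that incentive condition in the repeated game; the fee, discount rate, and cheat payoff below are made-up numbers for illustration, not OracleSwap's actual parameters.

```python
def honesty_is_optimal(weekly_fee: float, weekly_discount: float, max_cheat_gain: float) -> bool:
    """In a repeated weekly game, the oracle stays honest when the present value
    of its future fee stream exceeds the one-time gain from posting a false price."""
    pv_honest = weekly_fee / weekly_discount   # perpetuity value of the weekly fee stream
    return pv_honest > max_cheat_gain

# e.g., 10 ETH/week in fees, 0.2% weekly discount rate, 1,000 ETH cheatable in open positions
print(honesty_is_optimal(10.0, 0.002, 1000.0))   # True: 5,000 ETH of future fees vs. a 1,000 ETH cheat
```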

 Fault Tolerance 


Unintentional errors can come from corrupted sources or the algorithm that aggregates these prices and posts a single price to the contract. We often make unintentional mistakes and rely on the goodwill and common sense of those we do business with to 'undo' those cases where we might have added an extra zero to a payment. Credit cards allow chargebacks, and if a bank accidentally deposits $1MM in your account, you are liable to pay it back. The downside is that this process involves third parties who enforce the rule that big unintentional mistakes are reversible, and this implies they have rights over user account balances.

In OracleSwap, the oracle contract itself has two error checks within the Solidity code: first, whether prices move by more than 50% from their prior value, and second, whether they are exactly the same as their previous value. These constraints catch the most common errors. Off the blockchain, however, is where error filtering is more efficiently addressed in general, and ultimately it should be made into an algorithm because otherwise one introduces an attack surface via the human who would verify a final report. Thus, using many people to reduce errors just adds back in the more subtle and dangerous source of bad prices. OracleSwap uses an automated pull of prices from several independent sources over a couple-minute window and takes the median. As the contract is targeting long-term investors, a median price from several exchanges will have a tolerable standard error; as the precise feeds and exchanges are unspecified, this prevents censorship; as prices are posted during a 1-hour window that precludes trading, it is easy to collect and validate an accurate price.
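A sketch of what that off-chain aggregation might look like; the function, thresholds, and quotes are hypothetical, mirroring the checks described above rather than reproducing the actual OracleSwap reporter.

```python
import statistics

def aggregate_price(quotes: list[float], prior: float) -> float | None:
    """Median of several independent exchange quotes, with the two sanity checks
    described above: reject a >50% jump from the prior posted price, and reject
    an exact repeat (a common sign of a stale feed)."""
    price = statistics.median(quotes)
    if prior > 0 and abs(price / prior - 1) > 0.50:
        return None   # implausible jump; investigate before posting
    if price == prior:
        return None   # identical to the last post; likely a stale source
    return price

# quotes pulled from several exchanges over a couple-minute window
print(aggregate_price([212.4, 212.9, 213.1, 211.8], prior=208.0))
```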

 A Market: Competing Centralized Agents 


Decentralizing oracles solves a problem they do not have: attack and censorship resistance. An agent updating an oracle contract only needs access to the internet and pseudonymity to avoid censorship. Given that, the best way to create proper oracle incentives is to create a game where the payoffs for honesty and cheating are well parameterized, and outsiders can easily verify if an agent is behaving honestly. Simplicity is essential to any robust game, and this implies removing parties and procedures.

 Markets are the ultimate decentralized mechanism. It is not a paradox that markets consist of centralized agents as it is often the nature of things that properties exist at lower levels but not higher ones. A decentralized market just needs consumers to have both choice and information, and for businesses to have free entry. Markets have always depended upon individuals and corporations with valuable reputations because invariably quality is difficult to assess, so consumers prefer sellers with good reputations to avoid going home with bad eggs or a car with bad wiring. Blockchains are the best accountability device in history, allowing contracts to create valuable reputations for their administrators while remaining anonymous and thus uncensorable.

A set of competing contracts is more efficient than generalized oracles designed for unspecified contracts, or generalized trading protocols designed for unspecified oracles. A simple contract tied to an oracle that is 'all-in' creates clear and unambiguous accountability, generating the strongest incentive for honest reporting.