Tuesday, March 10, 2020

OracleSwap: An Open-Source Derivative Contract Suite


A couple of years ago, I thought it would be good to create a crypto fund. I soon discovered that as a registered US firm my options were severely limited. I could go long or short a handful of crypto names over-the-counter, but the funding rates for both long and short positions were excessive (e.g., >12%), and 100% margin was required; I could short bitcoin at the CBOE, but I had to put up five times the notional as collateral. No reasonable estimate of alpha can overcome such costs. To use popular exchanges like Deribit or BitMEX would require lying about my domicile, which would violate US regulations related to investment advisers and also diminish my legal rights if such an exchange decided simply not to give me my money back.

So I thought, why not create my own derivatives contract? Ethereum gives users the ability to create simple contracts, and nothing is more straightforward than a futures contract. I figured I could create a contract where the money at risk would be small relative to the notional, and its oracle would be honest because of the present value of this repeated game. The basic idea was simple enough, but the details are important and difficult, which turned this into a 2-year trip (I have sincere empathy for Ethereum's development pace).

Many initial crypto enthusiasts were motivated by the belief that our traditional financial system was corrupted by bailouts and greed. Ironically, the standard floundering blockchain dapp is constrained by its creators' earlier short-sighted greed. Enterprising capitalists discovered that if you sell tokens, you can propose a vague blockchain business model and people will think it will be just like bitcoin, only it would offer insurance, porn, or dentistry. This required corporate structures because even in 2017 no one was gullible enough to invest in a token that funded an individual. Supposedly, the token is for use and decentralized governance, the latter implying all of the desirable bitcoin properties: transparency, immutability, pseudonymity, confiscation-proofness, and permissionless access. Yet consensus algorithms are much easier to apply to blockchains than to cases where essential information exists off the blockchain; non-blockchain consensus mechanisms do not generate all of those desirable bitcoin properties because they are much easier to game.


Decentralization is a good thing, but like democracy, not at every level. A nation of purely independent contractors would never have developed the technology we have today, as things like computer chips and airplanes require hierarchical organization, and hierarchies need centralization. To relegate a market to atomistic, anonymous participants implies either an intolerable base level of fraud or costly adjudication mechanisms that jeopardize security and delay payments. A free market is built on a decentralized economy, which is based on free entry by firms and free choice by consumers. The degree of centralization within those firms is particular to a market, and some of those firms should be large (e.g., banks).

The Coase Theorem highlights that the optimal amount of vertical integration depends on transaction costs related to information, bargaining, and enforcement. This is why firm size varies depending on the product. Naïve types think we should just have small businesses because then we would have no oppression from businesses wielding market power; given our current level of technology, that implies mass starvation. The naïve extension is that we should have large firms, but that they should be zealously regulated by selfless technocrats. This ignores the universal nature of regulators, who protect existing firms under the pretext of protecting the consumer. The latter point is especially relevant because most protocols have some ability to permission access, and regulators will hold them accountable. Large institutions do not like competition, and governments do not like complete independence among their subjects, resulting in either KYC or curiosities like trading CryptoKitties.


The alternative I present is based on the idea that decentralization is basically competition, and that dapps can inherit the essential bitcoin properties simply by being on the blockchain, without tokens and without convoluted consensus algorithms. That makes it cheaper and easier to design a viable product. A pseudonymous ethereum account allows an oracle to develop a reputation because its actions are transparent and immutable; outsiders cannot censor it. Lower costs, crypto-security, and permissionless access provide a valuable way for people to lever, short, and hedge various assets: the initial contract has derivatives on ETHUSD, BTCUSD, and the S&P500.

The result is OracleSwap, an ethereum derivatives contract suite. I have a working version on the web, at oracleswap.co. While it is live on the Ethereum Main Network, it is restricted to margins of 1 or 2 szabo, which even with leverage is well under $0.01 in notional value. It is meant to provide an example. I would be an oracle and liquidity provider myself, but as a middle-aged American, that is not practical. I have fingerprints all over this thing and my friends tend to have good jobs in the highly regulated financial sector, and we would have a lot to lose by violating US regulations (e.g., CFTC SEF regulations within Dodd-Frank, FinCEN, BSA). Yet there are many who can and do invest in exchanges prohibited to US investors, and such investors need better choices.

Many competent programmers have the ability and resources to modify and administer such a contract (you can rent server space for $10/month). The oracle is honest because the present value of oracle revenue is an order of magnitude greater than any one-time cheat. Further, the oracle has economies of scale, so those who are disciplined can create a working product, and by the time they graduate to offering meaningful size, they will have generated a couple-year track record of supplying timely and accurate data.

Several innovations make this work, all focused on radical simplicity, which lowers both direct and indirect costs. The most important innovations are the following:

·         Forward-starting prices

Trades are transacted at the next-business-day closing price. As this contract targets long-term investors, the standard errors generated by differences among various 4 PM ET prices are small and unbiased (for crypto, the median of several sources over different intervals within a 5-minute window; for the SPX, the official closing price). As an institutional investor, I always used next-day VWAP prices. Limit order books generate many complications and provide nothing of interest to long-term investors; day trading blockchain assets is predicated on delusion.
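For example, a crypto settlement price could be formed as follows (a minimal sketch; the source names and prices are hypothetical, not the feeds OracleSwap actually uses):

    import statistics

    def settlement_price(quotes):
        """quotes: (source, price) pairs sampled inside the 5-minute window around 4 PM ET."""
        prices = [p for _, p in quotes if p > 0]   # drop obviously bad reads
        return statistics.median(prices)           # the median is robust to a single bad source

    quotes = [("sourceA", 225.41), ("sourceB", 225.38), ("sourceC", 225.52)]
    print(settlement_price(quotes))                # 225.41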

·         LP netting

The key to market-making capital efficiency is allowing the liquidity provider to net trades. Without a token, this had to be done by netting exposures at the weekly settlement. The LPs are basically mini-exchanges, in that long and short positions are netted against each other. Weekly settlement can handle 200 positions in a single function call, and the settlement can be broken into 200-position chunks, allowing an almost unlimited set of positions for any LP. LPs balance long and short demand by adjusting their long and short funding rates. The Law of Large Numbers implies larger LPs will have more balanced books, allowing them to generate a higher gross-to-net asset ratio, which implies higher returns for a given level of risk and capital; LPs are incented by economies of scale, not delusional token appreciation.
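To see why netting matters, here is a toy illustration (my own numbers, not contract code) of an LP's gross versus net exposure:

    longs  = [5.0, 3.0, 2.0]    # notional of customer longs facing the LP (in ETH)
    shorts = [4.0, 4.5]         # notional of customer shorts facing the LP

    gross = sum(longs) + sum(shorts)        # 18.5: the book the LP earns funding on
    net   = abs(sum(longs) - sum(shorts))   # 1.5: the exposure the LP actually carries
    print(round(gross / net, 1))            # 12.3x gross-to-net

The more balanced the customer flow, the more funding revenue an LP collects per unit of capital at risk.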

·         The oracle


This contract is designed for those who want to stay off the grid, and so its pseudonymous oracle can maintain its anonymity and avoid censorship. Its main costs are fixed, as once the contract, front-end, and automated scripts for updating prices are created, maintenance is trivial. The oracle is kept honest via the repeated game it is playing, and the ability and incentive for users to burn their PNL rather than submit to a fraudulent PNL at settlement. A centralized oracle is much easier to incentivize because it is all-in on the brand value of its oracle contract, as a cheat should eliminate future users.


The only way to cheat involves colluding with an oracle that posts fraudulent prices, so the contract focuses on minimizing the cheat payoff while concentrating the cheat cost on the oracle. An oracle's reputation is black or white: its history of reported prices is easy to monitor, and no rational person would ever use an oracle that cheated even once. All of an oracle's brand value is in the contract due to its pseudonymous nature, so there is less incentive to sell out to seize or protect some traditional brand value (e.g., Steem). While explaining the incentive structure requires more space than I have here, the crucial point is that players have both the ability and the will to decimate a cheat.

Not only are the ethereum contracts open source, but the web3.js front end is as well. By downloading the web front-end, users can eliminate the risk that someone is watching their interactions with the identical front-end hosted at oracleswap.co. Yet it is mainly a template for developers. I hired people to create the basic structure, as I am not a hard-core programmer, but I have modified it endlessly and in the process had to learn a lot about Drizzle/React, JavaScript, Python, and Solidity.

Python is for the APIs that pull prices from data providers and post them to the contract. This has to be automated with error-checking processes and redundancies. You can send questions related to this contract to ericf@efalken.com. I can't promise I'll respond, but I'll try.
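As a rough sketch of what such a script looks like (the data-source URL, contract ABI, and the postPrice function name are placeholders, not the actual OracleSwap interface; error handling and redundancy are stripped out):

    import statistics, requests
    from web3 import Web3

    ORACLE_ADDRESS = "0x..."    # the oracle's account (placeholder)
    ORACLE_KEY     = "..."      # its private key; keep this off any shared machine
    ORACLE_ABI     = [...]      # the contract ABI (placeholder)

    w3 = Web3(Web3.HTTPProvider("https://mainnet.infura.io/v3/YOUR_KEY"))
    contract = w3.eth.contract(address="0xContractAddress", abi=ORACLE_ABI)

    def eth_usd():
        # In production: several independent sources, sanity checks, retries, and alerts.
        r = requests.get("https://api.example.com/ethusd", timeout=5)
        return float(r.json()["price"])

    price = statistics.median([eth_usd() for _ in range(3)])
    tx = contract.functions.postPrice(int(price * 1e6)).buildTransaction({
        "from": ORACLE_ADDRESS,
        "nonce": w3.eth.getTransactionCount(ORACLE_ADDRESS),
    })
    signed = w3.eth.account.sign_transaction(tx, private_key=ORACLE_KEY)
    w3.eth.sendRawTransaction(signed.rawTransaction)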

Links:

This site is not encrypted--http as opposed to https--but as this contract is denominated in szabo, and the website and contract do not ask for any user information such as emails, users can interact via MetaMask or MyCrypto.com without worry. Users can also download the front-end from GitHub (it's open source) and run a fully functional local version.



Technical Document


Excel Worksheet of Technical Document Examples

Monday, February 24, 2020

BitMEX Funding Rate Arbitrage

Last year I wrote about the peculiar BitMEX ether perpetual swap. The average funding rate paid by long ETH swap holders has been 50% annualized since it started trading in August 2018, considerably higher than the 2% rate on the BTC swap. BitMEX makes enough money off its day-trading users via the latency edge it gives insiders, so it lets its rube traders fight over the basis: the shorts get what the longs pay. At 30k feet, it seems you can go long ETH, short the BitMEX ETH perpetual swap, and make 50% annualized with no risk. Arbitrage!

Actually, there's no arbitrage. This 50% funding rate anomaly is just the result of the simplistic pricing algorithm they used, which generates a convoluted payout. That is, as a crypto exchange, BitMEX denominates everything in BTC, but its ETH perpetual swap references the USD price of ETH. This generates the following USD return:

  • ETH swap USD return = [1 + ret(BTC)] × ret(ETH)

where ret(BTC) and ret(ETH) are the net returns for bitcoin and ether. The expected value of this swap, assuming a zero risk premium for both ETH and BTC, is just the covariance of the ETH and BTC returns:

  • E[ETH swap USD return] = cov(ret(ETH), ret(BTC)) = ρ_{B,E}·σ_B·σ_E
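A quick simulation (illustrative numbers only, not BitMEX's actual contract mechanics) confirms the algebra, since E[(1+b)·e] = E[e] + cov(b,e) when both expected returns are zero:

    import numpy as np

    rng = np.random.default_rng(0)
    cov = [[0.0016, 0.0012],    # daily var/cov: ~4% BTC vol, ~5% ETH vol, correlation ~0.6
           [0.0012, 0.0025]]
    b, e = rng.multivariate_normal([0.0, 0.0], cov, size=1_000_000).T

    swap = (1 + b) * e                        # the quanto-style USD payoff described above
    print(swap.mean(), np.cov(b, e)[0, 1])    # both ~0.0012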
This is unfortunate because wary investors have to look at the current funding rate and the expected correlation to make sure they are getting a good deal. Luckily, BitMEX insiders have arbitraged this pretty well historically, so you would have done well by simply ignoring the correlation and funding rate and trusting the arbs to sort it all out for you. If we look at the historical returns on ETH/USD and compare them to the BitMEX ETH swap, we see this fits the data perfectly:



This shows the additive total return, in USD, for someone who was simply long ETH versus someone long the ETH perpetual swap at BitMEX. The differences are insignificant.


Note that this uses BitMEX's published funding rates, which update every 8 hours, BTC and ETH prices from Windex, and a covariance derived from the BTC and ETH returns. Just as arbitrage pricing theory would suggest, the BitMEX ETH swap returns--without the funding rate--are 50% higher (annualized) than the raw ETH returns over this sample period. Yet when you subtract the funding rate, everything comes back into alignment.

In other words, the average annualized funding rate and the annualized covariance have both been about 50%. The 50% earned via the convexity adjustment is taken away by the funding rate (and vice versa for the short).

Several people have contacted me after searching around the web for information on arbitraging BitMEX's ETH swap, thinking I too discovered the arbitrage opportunity. Obviously, I wasn't clear: there's no arbitrage here. I'm supplying a link to an Excel workbook with this data to make this easier to see. 

While this is a nice example of efficient markets, going forward, it's not good to trust anyone on the blockchain, especially when you probably lied about your home country (to trade from the US, one uses a VPN and pretends to be from Costa Rica), and they are domiciled in Seychelles (the Panama of Africa?). 

Wednesday, February 12, 2020

A Simple Equity Volatility Estimator

While short-term asset returns are unpredictable, volatility is highly predictable, both theoretically and practically. The VIX index is a forward-looking estimate of volatility based on index option prices. Though introduced in 1993, it has been calculated back to 1986 so that, when it was released, people could see how it had behaved.



Given that conditional volatility varies significantly over time, it is very useful to have a VIX proxy for cases where one does not have VIX prices. This includes the US before 1986, countries that do not have VIX indices, and estimating the end-of-day VIX. The last case is subtle but important: historical closing VIX prices are taken at 4:15 PM ET while the US equity market closes at 4:00, so using closing VIX prices in daily trading strategies can generate a subtle bias.

First, we must understand the VIX, because there is some subtlety here. It is not really a volatility estimate but a variance estimate presented as a volatility. The VIX is calculated as the square root of the par SP500 variance swap rate with a 30-day term, multiplied by 100 and annualized (i.e., 19.34 means 19.34% annualized). That is, it would be the strike volatility of a 30-day variance swap at inception:
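For reference, the CBOE's calculation takes a strip of out-of-the-money options at strikes K_i (this is the standard published formula, not reproduced from the original post):

    σ² = (2/T) Σ_i (ΔK_i / K_i²) · e^(RT) · Q(K_i)  −  (1/T) · (F/K₀ − 1)²,        VIX = 100·√σ²

where Q(K_i) is the option's quoted midpoint price, F the forward index level, K₀ the first strike below F, R the risk-free rate, and T the time to expiration; two expirations are interpolated to a constant 30 days.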


On September 22, 2003, the CBOE changed the VIX calculation in two ways. First, it began to use SP500 rather than SP100 option prices. This lowered the index to about 97% of its old level because the SP500 is more diversified and less volatile. Second, instead of just averaging the implied volatilities of nearby puts and calls, it used explicit call and put prices in a more rigorous way, because a variance swap's replicating portfolio applies particular weights to out-of-the-money puts and calls:
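Specifically, each out-of-the-money option at strike K_i enters with a weight inversely proportional to the strike squared (the standard variance-swap replication result, stated here for completeness):

    w(K_i) = (2/T) · ΔK_i / K_i²

so low-strike puts receive much larger weights than high-strike calls, which is why the VIX loads heavily on the price of downside protection.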



VIX futures started trading in 2004, and options on these futures started in 2008. Liquid markets make index prices more efficient because nothing motivates like the profit motive (e.g., regardless of your preferences, more money will help you achieve them). The net result is that one should use data since 2004 when analyzing the VIX, even though there is data back to 1986 (which is still useful for some applications).

One can see that the old VIX index was significantly more upwardly biased than the new one. This implies that volatility trading strategies backtested prior to 2004 will look abnormal if you assume the VIX was a true par variance swap price. There should still be a slight positive bias in the VIX due to the variance premium, whereby shorting variance generates a positive return over time. Personally, I think this variance premium is really a consequence of the equity premium, in that short-variance strategies have very strong correlations with being long the market. That is, the variance premium is not an independently priced risk factor, just a consequence of the equity premium given its high beta.

             VIX     Var(VIX)   Actual Vol   Actual Variance
1986-2003    20.91   4.96       17.67        3.12
2004-2019    18.20   4.05       17.99        3.24

             VIX/ActVol   Var(VIX)/ActVar
1986-2003    1.18         1.59
2004-2019    1.01         1.25


As a liquid market price, the VIX is a good benchmark for any equity volatility model. The most common academic way to estimate volatility is some variant of a Garch(1,1) model, which is like an ARMA model of variance:
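In the notation used below (with daily return r and parameters {w, α, β}), the Garch(1,1) recursion is:

    σ²_t = w + α·r²_{t−1} + β·σ²_{t−1}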


The problem is that you need to estimate the parameters {w, α, β} via maximum likelihood, which is non-trivial in spreadsheets. Further, there is little intuition as to what these parameters should be. We know that α plus β should be less than 1, and that the unconditional variance is w/(1-α-β). That still leaves the model highly sensitive to slight misestimates of the parameters, which often produce absurd extrapolations.

For daily data, a simple exponentially weighted moving average (EWMA) version of Garch(1,1) works pretty well, with w=0, α=0.05, and  β=0.95. This generates a decent R2 with the day and month-ahead variance.
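In code, the recursion is tiny (a sketch; annualizing to percent so the output is comparable to the VIX is my choice of units, not something specified in the post):

    import numpy as np

    def ewma_vol(returns, lam=0.95, init_var=1e-4):
        """EWMA variance with w=0, alpha=0.05, beta=0.95; returns is a daily return series."""
        var, out = init_var, []
        for r in returns:
            var = lam * var + (1 - lam) * r**2      # today's daily-variance estimate
            out.append(np.sqrt(var * 252) * 100)    # annualized vol, in percent
        return np.array(out)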

EWMA Vol Estimator on Daily Data


Alas, this has two problems. First, there is a predictable bias in the EWMA because it ignores mean reversion in volatility. Garch models address this via the intercept term, but as mentioned, that term is tricky to estimate and creates non-intuitive, highly sensitive parameters. We can see the bias by sorting the data by VIX into deciles and taking the average EWMA within each: the relative difference between the VIX and the EWMA grows as the EWMA falls. As this bias is fairly linear, we can correct for it with a roughly linear adjustment (the corrected estimate is the EWMA* column in the table below).



US data sorted into VIX deciles
2004-2019

         VIX    EWMA   EWMA*
Low      11.1    8.1   10.6
2        12.7   10.2   13.2
3        14.0   11.4   14.7
4        15.6   12.4   15.8
5        17.1   13.4   16.9
6        18.7   15.0   18.7
7        20.7   17.3   21.2
8        23.0   19.1   23.1
9        25.9   21.3   25.3
High     40.3   39.5   39.7

Secondly, the relationship between returns and VIX movements is asymmetric: positive index returns decrease implied volatility while negative returns increase it, and down moves have roughly twice the impact of up moves. Here are the contemporaneous changes in the VIX and SPY using daily returns since 2003, with the data sorted by SPX return into 20 buckets and the average SPX and VIX percent changes taken within each bucket.



An EWMA would generate a symmetric U-pattern between asset returns and volatility, as 0.01² = (−0.01)², a huge mismatch with real daily VIX changes.

There are a couple of good reasons for this asymmetric volatility response to price changes. As recessions imply systematic economic problems, there's always a chance that negative news is not just a disappointment but reveals a flaw in your deepest assumptions (e.g., did you know you don't need 20% down to buy a house anymore?). This does not happen in commodities because in many of those markets higher prices are correlated with bad news, such as oil shocks or inflation increases. Another reason is that many large-cap companies are built primarily on exponential growth assumptions. Companies like Tesla and Amazon need sustained abnormal growth rates to justify their valuations, so any decline could mark an inflection point back to normal growth, lowering their value by 90%. Again, this has no relevance for commodities.

One can capture this with a piecewise-linear multiplier applied to yesterday's volatility:
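A form consistent with the examples below (my reconstruction; the original formula was presented as an image) is:

    mult(r) = 1 − 2.5·r    if r ≥ 0
              1 − 5.0·r    if r < 0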


For example, if the return was +1%, yesterday's vol is multiplied by 0.975, while if it was down 1%, the adjustment factor is 1.05. While the empirical effect of returns on volatility is not just asymmetric but non-linear (absolute returns have a diminishing marginal impact), squared terms extrapolate poorly, so this piecewise-linear approximation is applied to make the model more robust.

These two adjustments--one for mean reversion, one for the return-implied-volatility correlation--generate the following function for adjusting the simple EWMA:
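The original formula was presented as an image; as a stand-in, here is one plausible reading in code, with the pieces I had to guess (the long-run level and the exact form of the mean-reversion term) labeled as assumptions:

    LONG_RUN_VOL = 18.0    # assumed long-run annualized vol target; not given in the post

    def ericvol_step(prev_vol, ret):
        """prev_vol: yesterday's vol estimate (annualized %); ret: today's index return."""
        mult = 1 - 2.5 * ret if ret >= 0 else 1 - 5.0 * ret    # asymmetric return term (from the +/-1% examples)
        reverted = prev_vol + 0.2 * (LONG_RUN_VOL - prev_vol)  # mean reversion at speed 0.2 (form assumed)
        return reverted * mult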



The first term captures the volatility-return correlation, the second mean reversion; the 0.2 sets the speed at which the volatility estimate moves toward its long-run target given its current level. I'd like to give this a cool name with Latin roots, but given two adjustments it would become German-sized, so I'm just going to call this transformed estimate of the EWMA 'EricVol' for simplicity and clarity. After this transformation, the bias in our vol estimate is diminished:


Vol Estimators sorted by VIX

         VIX    EricVol   EWMA
Low      11.1    10.8      8.1
2        12.7    13.6     10.2
3        14.0    15.0     11.4
4        15.6    16.1     12.4
5        17.1    17.2     13.4
6        18.7    19.2     15.0
7        20.7    21.9     17.3
8        23.0    24.0     19.1
9        25.9    26.7     21.3
High     40.3    43.2     39.5

Comparing daily correlations with VIX changes, we see EricVol is much more correlated than the simple EWMA, especially in the most volatile periods.


Daily Correlation with VIX Changes, 2004-2019

          EWMA   EricVol
2008       29%     82%
Oct-08    -19%     84%
Total      37%     75%

As most volatility trading strategies are linear functions of variance, and the VIX itself is really the square root of its true essence (a variance), I predict squared returns and square the vol estimates in these regressions.
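A sketch of how such an R2 can be computed (the realized-variance construction here is my own; for a univariate regression the R2 is just the squared correlation):

    import numpy as np

    def forward_r2(est_vol, returns, horizon=21):
        """est_vol: annualized vol estimates (%); returns: daily returns; horizon: days ahead."""
        x, y = [], []
        for t in range(len(returns) - horizon):
            x.append((est_vol[t] / 100) ** 2 / 252)                        # forecast of daily variance
            y.append(np.mean(np.square(returns[t + 1:t + 1 + horizon])))   # realized forward variance
        return np.corrcoef(x, y)[0, 1] ** 2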

Regression R2 for predicting day-ahead and 21-day-ahead variance

              VIX     EWMA    EricVol
Day-ahead     33.0%   26.9%   34.3%
Month-ahead   61.1%   58.4%   61.8%

If we look at regressions that predict future variance given our estimates, we see EricVol is significantly better than a simple EWMA. While it does slightly better than the VIX, I doubt this generates significant profits trading, say, the VXX, though readers are free to try.  

You can download a spreadsheet with all this data and the model here. You need to have two columns in a spreadsheet because you have to keep a time-series of EWMA and EricVol, which is annoying, but it's much simpler than fitting a Garch model. Most importantly, its parameters are intuitive and stable.  

Tuesday, February 04, 2020

Factor Risk and Return

Factor returns should reflect risk, in that they have traditionally been interpreted as proxies for some kind of risk not captured by beta. The idea is that perhaps what people really care about is, say, whether there will be another oil shock, and nothing else matters as much; stocks with a high dependence on cheap oil would then be riskier than other stocks. In the early 1980s, this was a common hypothesis, though later people would add things like consumption growth and inflation.

Remember that our conception of risk comes from the idea that because utility functions are concave (have decreasing marginal utility), the higher the expected variance of our wealth, the lower our expected utility. Thus, $1 for certain tomorrow is worth more than a payoff with equal probabilities of $0.50 and $1.50: because of decreasing marginal utility, one enjoys the extra 50 cents less than one suffers from the 50-cent shortfall, so you have to bribe people to make them indifferent to such a gamble; that bribe is the risk premium.
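A worked example with log utility (my illustration) makes the point concrete:

    u($1 for certain) = ln(1) = 0   >   ½·ln(0.5) + ½·ln(1.5) ≈ −0.144

so the risk-averse agent strictly prefers the sure dollar, and the gamble must be sweetened before she is indifferent.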

While most presentations of the CAPM use normal distributions, the initial creators did not adopt this assumption blindly just because it was convenient; there was a lot of focus on the best distributional assumption for asset returns. I took the weekly returns and normalized them by a rolling volatility forecast to capture the heteroskedasticity (otherwise the tails are much fatter). While these deviations from normality are statistically significant, they aren't large: the weekly returns are too frequent at the bottom, offset by an extra number of slightly-above-average returns.

Markowitz considered downside volatility and maximum drawdown among other metrics, and Eugene Fama's dissertation and several subsequent papers investigated the degree to which downside risk outside the normal distribution--aka fat tails--was the big driver. This was all motivated by Benoit Mandelbrot's documentation that many commodity markets had large downside tail returns. If you can get through this paper by Fama (1965), you'll understand why he stopped investigating it: it's boring. [In contrast, the degree to which market crashes overstate 'average' returns is much more interesting, see Barro (2005). But note, this is all about whether the average equity return premium should be reduced by 3%, not a criticism of the CAPM or its spawn.]

You can capture the above deviation from normality in many different ways, but they don't add much practical value. Levy and Markowitz (1979) documented that the Gaussian distribution, for all its faults, is pretty innocuous: all you have to do is increase a person's risk aversion so that they 'feel' the downside risk more strongly, and you capture the effects of fat-tailed declines just as you would with some mixed distribution with more parameters. The costs are low, and the benefits are large. Normal distributions are additive (one normal plus another is still normally distributed), which makes the statistics a lot easier, and a mere two parameters describe the entire distribution (all the higher moments are functions of the mean and variance). Further, when x is normally distributed, E[e^x] = e^(μ + σ²/2), which is useful in all sorts of models (e.g., exponential utility is of the form 1 − e^(−αx)).

I pulled a bunch of portfolio factor data from Ken French's website. It's an awesome resource for anyone into equity risk because it's free and easy to access (no sign-ins or passwords). For the US, I pulled the high and low quintiles within the largest two size quintiles for seven factors: book/market, cashflow/price, earnings/price, volatility, investment (asset growth), prior returns, and accruals.

As a practical matter, most metrics of risk are correlated. Beta and portfolio volatility have incredibly high correlations, so much so they are almost redundant. Minimizing beta or vol is basically the same thing.  The correlation between portfolio volatility and the maximum drawdown is not as high, but still highly significant. To the degree risk is the expected worst-case scenario, volatility/beta should capture a good deal of this.


When we look at the relationship between these risk metrics and returns across the various factor portfolios, there is nothing close to a strong positive correlation. Indeed, to a first approximation the correlation is negative.

For international portfolios, I used book/market, cashflow/price, and earnings/price from several regions. For these, I normalized by subtracting the region-specific volatility and average returns, so that factor performance is comparable.  While risk is no longer negatively correlated with returns, the correlation is weak.


Factor risk premiums do not reflect risk, so they must be explained some other way. Perhaps they are capturing something institutional, like taxes. I came across Phil DeMuth's latest book, The Overtaxed Investor, and he emphasizes the tax advantage of capital gains over dividends. If in equilibrium investors have to be indifferent, perhaps the extra 2% earned by high-dividend stocks is compensation for their higher tax rate. Transaction costs could be another issue, especially for small-cap stocks, in that it may cost a couple of percent more to get in and out of these positions, a cost that would not show up in end-of-day returns but would be real for the small-cap investor. Such explanations have nothing to do with risk.

In the latest American Finance Association Presidential Address, David Hirshleifer presented the following theory by way of an analogy. Moths (supposedly) fly into flames kamikaze-style because they are hard-wired to navigate by the moon. Clearly, this isn't good for the moths; it's a rule of thumb that does not extrapolate well, especially after man discovered fire. Perhaps our monkey brains have similar lacunae. For example, many remember dating someone crazy because she was super hot: not because men like crazy girls, but because crazy girls do hot things that cause men's limbic system to shut down the cerebral cortex (this is now a meme, but young fools will only learn by experience). Similarly, investors could be attracted to risk, not directly, but via the attractive attributes risky stocks tend to have: stories about transforming the workplace, pot-smoking CEOs, or a focus on marketing to investors rather than consumers.

While I agree with most people that many investors are ignorant and susceptible to biases, these suboptimal decision-makers are a poor explanation for equilibrium asset prices; their bad decision-making is mainly evinced by excessive trading. It doesn't take many smart traders to figure out that if they steel themselves against those risky-stock promoters, they can make large risk-adjusted returns. It's not complicated: you don't have to tie yourself to a mast like Odysseus, just don't listen to conference calls or watch CNBC, and instead crunch datasets where all you know about ABC Corp are its objective metrics. Then buy the wallflower stocks and sell the crazy/hot ones. With free entry, arbitrageurs should counteract behavioral biases in asset markets.

Biases, departures from normality, and new factors don't explain why anything reasonably correlated with wealth volatility has no correlation with average returns. This leaves the utility function: with relative (status-based) risk preferences, there should be no risk premium, just as there isn't one. This makes factor investing much less attractive because, without the risk adjustment that accentuates the alpha, you just have the potential to make 2% extra while taking on more risk to do so (by deviating from the market, creating benchmark risk). It's not like there are $20 bills lying on the floor; rather, it's a bunch of change in a fountain, and you can grab a handful if you don't mind getting soaking wet and getting mean looks.