Wednesday, March 21, 2012
Agent Based Models
I was recently at the FactSet Phoenix conference, where Rick Bookstaber gave one of the keynote addresses. He works for the SEC, and like all senior regulators works with the Fed, the IMF, and the whole kitchen sink of regulatory bodies out there. He seemed like a veteran speaker, and made an interesting analogy between investment markets and warfare: in both, the only constant is the competition, not the means.
Then he introduced me to the cutting edge of bank regulation: agent-based models. Bookstaber said these are better than value-at-risk (VAR) because 'anything that works is obsolete.' Clearly this is an exaggeration (I hope).
So what are these models? I found a good example here, by Stefan Thurner, for the OECD ("Better Policies for Better Lives" is their motto). He starts out with 24 pages of words, telling us that this model analyzes millions of simulations of millions of interactions. He notes (p. 7) that 'the models demonstrate that such regulatory measures can, under certain circumstances, lead to adverse effects.' Let's hope that's a feasible scenario.
Anyway, there are a bunch of obvious economic assumptions, like supply equals demand and that demand decreases in price, mixed with a variety of exogenous assumptions, such as that noise trader demand is 'weakly mean-reverting' and informed demand is a stepwise linear function of the asset price and some constant. This is all like Stephen Wolfram's New Kind of Science: the idea that you take some simple assumptions and create some cool time series that look like, or 'are homologous to,' real time series.
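The flavor of this kind of model can be sketched in a few lines. Everything here is my own illustrative guess, not the paper's actual specification: a noise trader whose demand weakly mean-reverts, an informed trader whose demand is a capped (stepwise linear) function of the gap between price and a perceived fundamental, and a price that moves with excess demand.

```python
import random

def simulate(steps=500, seed=42):
    """Toy agent-based price series. All parameter values (0.95, 0.5,
    the +/-5 demand cap, the 0.1 price-impact factor) are illustrative
    assumptions, not taken from any published model."""
    random.seed(seed)
    price, fundamental = 100.0, 100.0
    noise_demand = 0.0
    prices = []
    for _ in range(steps):
        # Noise demand: weakly mean-reverting random process
        noise_demand = 0.95 * noise_demand + random.gauss(0, 1)
        # Informed demand: buy below fundamental, sell above, capped
        gap = fundamental - price
        informed_demand = max(-5.0, min(5.0, 0.5 * gap))
        # Price moves with excess demand (so demand decreases in price)
        price += 0.1 * (noise_demand + informed_demand)
        prices.append(price)
    return prices

series = simulate()
```

A couple dozen lines produce a wiggly series that looks vaguely like an asset price, which is the point: with 20+ free parameters, resembling a real series is cheap.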
This is so abstract that the main point is merely to generate time series that look vaguely like some asset time series, which, given the number of parameters (20+) and rules, is rather simple. What this implies about the current state of Bank of America is anyone's guess. A complex financial company will have hundreds of very different assets, each with its own underwriting criteria, loss curves, and recovery rates, and then several different types of liabilities. VAR isn't relevant here, but neither are agent-based models.
It's fine for academics to do deep research on things like dynamic programming and input-output matrices. These generated a lot of excitement, and Nobel prizes, but were ultimately useless for predicting anything except tenure decisions. VAR is a very useful tool, mainly in making sure crazy rogue traders aren't operating, but mark-to-market books are not the essence of banking. When I was head of capital allocations at KeyCorp, our VAR risk was about 1% of our total economic risk capital, almost nothing. Surely it's higher for the money center banks, but it's still small compared to their balance sheets that aren't being churned.
The idea that 'agent-based models' are the new focus of bank regulation strikes me as a monumental waste of time, up there with the idea that neural nets can discover interesting (as opposed to merely true) theorems. On the other hand, they are going to do something, and something irrelevant is merely a waste of their time and money.
The main problem everyone sees is too big to fail, but regulators see it as something they simply have to maintain given the contagion risk. Why not keep it simple, and just say, you pay an asset tax for every dollar over $100B? Then you don't even have to force anything, just collect the extra revenue on the big, fat companies that probably would be worth more to their investors (as opposed to CEOs) if they were smaller too. Remember, Bob Rubin didn't even know Citi owned $300B+ in mortgages, and he was getting paid $115MM to mind the store, so it's not obvious to anyone what's going on in these $1 trillion behemoths, but we all know the Treasury will be there if they collapse. If all the banks were smaller, we wouldn't be so afraid of contagion risk (who cares if KeyCorp fails?). Banks would be better incented to be prudent investors, and do things like assume collateral can fall in value. With banks smaller, perhaps they wouldn't find lobbying government more important than coming up with better and more efficient business models.
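The whole proposed rule fits in one line of arithmetic, which is the appeal. The $100B threshold comes from the text above; the tax rate is a made-up placeholder:

```python
def size_tax(assets_billions, threshold=100.0, rate=0.001):
    """Hypothetical size tax: `rate` per dollar of assets above
    `threshold` (both in $B). The 0.1% rate is an illustrative
    assumption, not a figure from the post."""
    return max(0.0, assets_billions - threshold) * rate

tax = size_tax(1000.0)  # a $1T bank pays the rate on $900B of assets
```

No examiners, no models, no simulations: the bank reports its balance sheet size, and the incentive to shrink is priced in automatically.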
Perhaps that is a bad idea, but it is simple, and is better than any idea that could conceivably come out of an agent based model.
Complex regulatory schemes generate the appearance of rigor and efficient regulation. Further, this way we can focus on what's really important, like which Boston-based financial conglomerate will win the Barney Frank sweepstakes this November and snag the Dodd-Frank author to their executive committee, winning them a 10 year regulatory pass and signaling to everyone on Capitol Hill how the game is played.
5 comments:
There is one real value to agent-based modeling: it gives you some hint of the uncertainty involved. Take any simulation, and tiny changes in assumptions or initial conditions cause widely varying predictions for the effect of regulations. The actual uncertainty in the world is certainly greater than the uncertainty captured in the model, but at least the model smashes the idea of simple cause and effect, like higher capital requirements = fewer bank failures or bonus clawbacks = more prudent decision-making.
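The sensitivity point is easy to demonstrate without any banking content at all. The classic logistic map, a one-line nonlinear recursion, stands in here for any simulation with feedback: a one-in-a-million change in the starting value produces a completely different trajectory within a hundred steps.

```python
def trajectory(x0, r=3.9, steps=100):
    """Iterate the logistic map x -> r*x*(1-x), a standard example of
    chaotic dynamics; it stands in for any nonlinear simulation."""
    xs = []
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
        xs.append(x)
    return xs

ta = trajectory(0.500000)
tb = trajectory(0.500001)  # initial condition perturbed by one part in a million
max_gap = max(abs(a - b) for a, b in zip(ta, tb))
```

The two runs start indistinguishably close and end up nowhere near each other, which is exactly why point predictions from such models deserve suspicion even when the model itself is useful for mapping uncertainty.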
A German psychologist named Dietrich Dorner has done some fascinating experiments with simple simulation games that demonstrate how bad people are at tasks as simple as controlling the temperature in a meat locker. That's a two-variable problem (current temperature and current setting of the refrigeration) with one input (the setting of the refrigeration) and a simple goal (keep the temperature in a given range). Even with extensive instruction and practice, people do a terrible job at something simple mechanical devices have done for centuries.
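The 'simple mechanical device' that beats the human subjects is just a thermostat. A toy version of the meat-locker problem, with dynamics and parameters of my own invention rather than Dorner's actual experiment, shows how little machinery the task requires:

```python
def run_cooler(steps=50, low=2.0, high=6.0):
    """Toy meat-locker dynamics: ambient heat pushes the temperature up
    each step; a bang-bang thermostat switches the cooling on above
    `high` and off below `low`. All numbers are illustrative."""
    temp, cooling = 10.0, False
    history = []
    for _ in range(steps):
        if temp > high:
            cooling = True
        elif temp < low:
            cooling = False
        temp += -1.5 if cooling else 0.8  # cooling pulls down, ambient pushes up
        history.append(temp)
    return history

history = run_cooler()
```

Two if-statements keep the temperature cycling in a narrow band forever, which is the punch line: the failure Dorner documents is in human intuition about feedback, not in the difficulty of the task.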
The greatest regulation is failure. Too bad we have never tried it.
Well, if they can learn about the limits of their efficiency, that would be great. I just think after all this time, and all their failures, the futility of these clever approaches would be pretty clear. No regulator could have tightened mortgage underwriting or mortgage capital requirements pre-2007, even if they wanted to (which they didn't).
I don't really see any non-handwaving reasons why you think agent-based models won't provide better predictions than equilibrium models.
In addition, I think you think way too little of Leontief linear-relation input-output models. For one thing, they have some impressive predictive successes (predicting copper price increases in World War II even though copper was hardly used) and had the misfortune of being developed at a time before we had the computational power to implement them on a national scale.
To date, there has never been a full linear (public) input-output model estimated for the United States, the EU, or even the Soviet Union during the 1960s heyday of planning (unlike, say, the less computationally demanding MIT-Penn-FRS model, which is applied all over the world).
The descendants of primitive Leontief models have seen extensive use in private sector operations research for modeling short-run supply shocks (and other things), but you'd never know because any system of value would be proprietary until it had no value.
Who knows what we'd find with a full model today?
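For readers who haven't seen one: a Leontief model takes a matrix A of input requirements per unit of output and solves x = Ax + d, i.e. x = (I - A)^(-1) d, for the gross output x needed to meet final demand d. A minimal two-sector sketch with made-up coefficients:

```python
def leontief_output(A, d):
    """Solve x = A*x + d for a 2-sector economy by inverting (I - A)
    directly. A[i][j] = units of good i consumed per unit of output of
    sector j; d = final demand. All coefficients are illustrative."""
    a, b = 1 - A[0][0], -A[0][1]
    c, e = -A[1][0], 1 - A[1][1]
    det = a * e - b * c  # determinant of (I - A)
    x0 = (e * d[0] - b * d[1]) / det
    x1 = (-c * d[0] + a * d[1]) / det
    return [x0, x1]

# Hypothetical economy: e.g. each unit of sector-1 output uses 0.2 of good 0
A = [[0.1, 0.2],
     [0.3, 0.1]]
d = [10.0, 20.0]
x = leontief_output(A, d)
```

The computational burden the comment mentions is real: a serious national table has hundreds of sectors, so (I - A) is a hundreds-by-hundreds matrix to estimate and invert, trivial now but not in the 1960s.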