In the 1970s, large-scale, multi-equation macro models with hundreds of equations were used to forecast the economy. Those efforts were a dead end. Of course, the modelers themselves never admitted that, and those making macro models in the 1970s still make macro models today (see Macroeconomic Advisers LLC). Economics rarely declares an approach a failure; it just stops doing it as the professors grow old and graduate students do not replace them.
The causes of their demise were twofold. First, Chris Sims showed that a three-variable Vector Autoregression (an insanely simpler reduced-form model) forecast just as well. His argument applies to any complicated model: when you have lots of interactions, every variable should appear in every equation, so the model is underidentified because there are more parameters than datapoints. Structural parameters with very different forecast implications are consistent with the same data, so you cannot forecast because you cannot identify the correct model. Secondly, no economic model, especially a large-scale macro model, predicted that the socialist countries would severely underperform the capitalist countries, or that Africa would stagnate, or that the Asian tigers would flourish. Just about every major economic trend--computers, energy shortages, the internet bubble, the mortgage crisis--was seen only after it happened. They also missed the business cycles, and the secular decline in inflation after 1980. If an economist were to tell me the average GDP for Paraguay in 2050, I imagine the more complex the model, the worse the forecast.
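To make the underidentification point concrete, here is a back-of-the-envelope parameter count (a rough sketch with illustrative numbers of my choosing -- quarterly data, four lags -- not Sims' actual specification):

```python
# Rough parameter count: a small Sims-style VAR versus a system in which
# every variable enters every equation. Numbers are illustrative only.

def var_param_count(n_vars, n_lags):
    # each equation has n_vars * n_lags lagged coefficients plus an intercept
    return n_vars * (n_vars * n_lags + 1)

def datapoints(n_series, n_years, obs_per_year=4):
    # quarterly observations on each modeled series
    return n_series * n_years * obs_per_year

small_var = var_param_count(n_vars=3, n_lags=4)     # a 3-variable VAR
big_model = var_param_count(n_vars=200, n_lags=4)   # "hundreds of equations"

print(f"3-variable VAR, 4 lags:      {small_var:,} parameters")     # 39
print(f"200-variable system, 4 lags: {big_model:,} parameters")     # 160,200
print(f"30 years of quarterly data on 200 series: {datapoints(200, 30):,} datapoints")  # 24,000
```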
It is very easy to convince yourself that a complex model is correct, because you can always fine-tune it to the data, and the pieces are never fully specified, so outsiders can't judge how much it was fit to explain the very data it is tested against. For climate models, there are obviously far more degrees of freedom than the two main datapoints driving them: the 0.75 degree Celsius increase in temperature and the 35% increase in atmospheric CO2 from 1900 to 2000. It is very suspicious that the models disagree a lot on specifics, like how much water vapor is expected over Antarctica, but generally agree on the temperature and CO2 implications. For a model with lots of interactions, one should see greater variability than I have seen. I get the sense that to get funding, one's model has to be plausible, where 'plausibility' has already been decided.
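The degrees-of-freedom problem can be shown with a toy calculation (a made-up linear 'model', nothing like an actual climate model): with thirty free parameters and only two calibration targets, wildly different parameter vectors reproduce the targets exactly.

```python
# Toy illustration: many very different parameter vectors hit the same two
# calibration targets (+0.75 C warming, +35% CO2) when there are far more
# free parameters than targets. The linear "model" here is invented.
import numpy as np

rng = np.random.default_rng(0)
n_params = 30                        # stand-in for "hundreds" of uncertain knobs
A = rng.normal(size=(2, n_params))   # toy map from parameters to the two targets
targets = np.array([0.75, 35.0])     # temperature (deg C) and CO2 (%) changes, 1900-2000

previous = None
for _ in range(3):
    x0 = rng.normal(scale=5.0, size=n_params)   # arbitrary starting guess
    # project x0 onto the set {x : A x = targets} -> an exact fit to both targets
    x = x0 + A.T @ np.linalg.solve(A @ A.T, targets - A @ x0)
    dist = 0.0 if previous is None else float(np.linalg.norm(x - previous))
    print("hits both targets:", np.allclose(A @ x, targets),
          "| distance from previous exact fit:", round(dist, 1))
    previous = x
```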
Ronald Prinn has a video on MIT's latest climate models, and he highlighted the economic aspects of the global climate model. Uh oh. He stated the climate models were as uncertain as the economic models. That's a high standard. Prinn admits to big uncertainties in climate models: clouds, which play a large role, are difficult to model. There are also uncertainties about emissions, and about ocean mixing, the churning of cooler and warmer waters, which can bring carbon buried on the ocean floor to the surface. He says there are 'hundreds' of these uncertainties. His solution is to run the model under various changes in assumptions, generating hundreds of thousands of forecasts to estimate the probability of various amounts of climate change. As he states, “in the Monte Carlo sense, building up a set of forecasts on which we can put a measure of the odds of being correct or incorrect.”
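In outline, the exercise Prinn describes looks something like the sketch below -- the one-line 'climate model' and the distributions are my own placeholders, chosen only to show the procedure of sampling assumptions and tabulating odds:

```python
# Minimal sketch of the Monte Carlo exercise: sample uncertain assumptions,
# run a (toy, invented) forecast for each draw, and report the spread.
import numpy as np

rng = np.random.default_rng(42)
n_runs = 100_000

sensitivity = rng.lognormal(mean=np.log(3.0), sigma=0.4, size=n_runs)   # deg C per CO2 doubling (assumed)
emissions_growth = rng.normal(1.0, 0.3, size=n_runs)                    # relative to a baseline path (assumed)
ocean_uptake = rng.uniform(0.3, 0.7, size=n_runs)                       # fraction of CO2 absorbed (assumed)

# toy forecast of warming by 2100 as a function of the sampled assumptions
warming_2100 = sensitivity * emissions_growth * (1.0 - ocean_uptake)

for q in (5, 50, 95):
    print(f"{q}th percentile warming: {np.percentile(warming_2100, q):.2f} C")
print(f"P(warming > 2 C): {(warming_2100 > 2.0).mean():.3f}")
```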
But enumerating changes in assumptions and then looking at the effects is neither an unbiased nor an asymptotically consistent estimation process. Why should we believe the model is correct even if we knew the assumptions? You have, by Prinn's estimate, 'hundreds' of uncertain parameters and processes, and you are forecasting over 100 years. Read Sokolov's outline, and you merely get graphics of 20 to 30 categories interacting in some massive feedback loop; you have these for Land, Ocean, Urban areas, and the Atmosphere, broken down by 20 regions, and then assumptions on technological change (nuclear vs. biofuel growth) as well as economic growth. Looking at Sokolov's description, the group appears to be using a macro model within the super global climate model:
The current EPPA version specifically projects economic variables (GDP, energy use, sectoral output, consumption, etc.) ... Special provision is made for analysis of uncertainty in key human influences, such as the growth of population and economic activity, and the pace and direction of technical change.
Add to that the sociology-like graphs with big arrows pointing A --> B --> A, enumerating sectoral employment growth in Modesto, and it looks a lot like those old, discredited large-scale macro models! Using a multi-equation macro model inside the Global Climate Model is like using a CDO-squared formula to manage your portfolio.
I'm not saying there are much better models out there, just that these models are known to be useless for long-term forecasting, and specifying every sector of the economy in such a manner seems like deliberate spurious precision, given the known way such precision is treated by outsiders (e.g., journalists) and the wealth of experience among insiders on how fruitless this approach is. These complex 'structural models' are like the early days of flight, when people would strap on bird wings because flapping emulated nature. Now we know that is a dead end, and we use nothing like nature to fly. Specifying input-output relations by sector would be a reasonable approach if it had not been so thoroughly tried, tested, and found lousy.
Econometrician Arnold Zellner noted, after a lifetime of forecasting:
I do not know of a complicated model in any area of science that performs well in explanation and prediction and have challenged many audiences to give me examples. So far, I have not heard about a single one... It appears useful to start with a well understood, sophisticatedly simple model and check its performance empirically in explanation and prediction.
Instead of evaluating the models by how much they agree under various assumptions, I would want to see some step-forward, out-of-sample forecasting. That is, run the model through time T using data only from before T, and see how it does in period T+1. Do this through time and you get a step-forward forecast record. I haven't seen this. There are data on changes in CO2 and temperature from ice cores, but there is a causation problem: say the sun (or something else exogenous) causes both the warmth and the CO2 rise. Such data will not let you estimate the effect of an exogenous CO2 rise on temperature unless you can identify events with clearly exogenous increases in CO2, as from a volcano. For example, weight is correlated with height in humans: taller people tend to be heavier. But if you gain weight, that won't make you taller. The correlation we see over time in individuals, or cross-sectionally, does not imply that height is caused by changes in weight.
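For concreteness, the step-forward test I have in mind looks like this sketch: the 'model' is just an AR(1) fit to a made-up placeholder series, but the procedure (re-fit on data through T only, forecast T+1, roll forward, score) is the point.

```python
# Step-forward, out-of-sample test: refit on data through T, forecast T+1,
# advance T, repeat, and score the forecasts. Series and model are placeholders.
import numpy as np

rng = np.random.default_rng(1)
y = np.cumsum(rng.normal(0.02, 0.1, size=120))   # made-up series (e.g., a temperature anomaly)

errors = []
for T in range(60, len(y) - 1):
    past = y[:T + 1]                              # data available at time T only
    X = np.column_stack([np.ones(T), past[:-1]])  # fit y_t = a + b * y_{t-1}
    a, b = np.linalg.lstsq(X, past[1:], rcond=None)[0]
    forecast = a + b * y[T]                       # one-step-ahead forecast of y[T+1]
    errors.append(forecast - y[T + 1])

print("out-of-sample RMSE:", round(float(np.sqrt(np.mean(np.square(errors)))), 4))
```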
So many Global Warming proponents emphasize that this is all about the facts and impugn the motives of global warming skeptics, yet the facts and theories underlying Global Warming scenarios are actually quite tenuous. There have been no important human-created trends or events that were foreseen by a consensus of scientists. Adding that fact to the model implies we should ignore these forecasts.
7 comments:
Much money and effort were invested in building the World Model for the Club of Rome in the seventies. Among other prophetic things, it showed that we would run out of oil before 1992. The World Bank assumed it would be so, and that hunger would haunt the land. World modelling is an eternal illusion in the same category as socialism, and each generation has to discover that it does not work, as you are doing now.
I was a FORTRAN programmer back in the early 70s and was involved with several projects while in grad school.
One was doing the programming for an 'input/output' model of the Iranian economy that a team in our Econ Dept was building at the request of the Shah's govt.
The other was programming an equally complicated model a professor in the Physics Department was building to forecast 'monsoons' on the Indian subcontinent.
Superficially, both models were pretty much the same -- they took thousands of historical data observations, put them through some mathematical model, and output some forecasts.
Neither, as it turned out, was able to predict successfully.
However, the results from the monsoon model gave scientists much input on which factors could be ignored and which additional factors needed to be considered. It took them another ten years, but the ability to predict the monsoon greatly improved.
Yes, it is pure outright socialism to work on complicated models in physics or economics.
More degrees of freedom --> bigger data sets . . .
Noble Mandarin -- what do you want?
Great minds... I wrote a piece on my Streetwise Professor blog in December '06 pointing out the similarities between climate models and the big macro models. I was initially very interested in macro models when I started grad school at Chicago, but soon became very disillusioned with them. Unfortunately, it seems that climate scientists are more enamored with their models than economists were of theirs, as economists pretty much ditched the big models in a decade or so, whereas climate scientists just keep making theirs bigger without realizing that (as Lucas pointed out about macro models) the models have intrinsic flaws that can't be fixed by adding equations and parameters.
The primary problem with macro models from the 1970s is that nobody paid attention to the requirement for the model to have a sensible 'steady state' - there was an admixture of variables of different orders of integration.
A 'steady state' has 'sensible' characteristics which stem explicitly from theory - if left untouched, the economy will eventually wind up on a path where all real variables grow at the same rate and all nominal variables grow at the same rate, with growth rates given by the rates of technical progress and population growth. Relative prices in a steady state are constant.
This results in a set of stylised facts that were not satisfied by 1970s macro models - for example 'money neutrality' (that a 1% increase in the money supply will not change long-run real outcomes... it will result in a 1% increase in all prices).
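[As an illustration of that neutrality property -- a toy calculation using the simplest quantity-theory relation, not anything taken from an actual model:]

```python
# Toy money-neutrality check using M * V = P * Y: a 1% rise in the money
# stock raises the price level 1% and leaves real output unchanged.
M, V, Y = 100.0, 5.0, 250.0        # money stock, velocity, real output (arbitrary units)
P = M * V / Y                      # implied long-run price level

M_new = M * 1.01                   # 1% increase in the money supply
P_new = M_new * V / Y              # velocity and real output unchanged in steady state

print(f"price level change: {100 * (P_new / P - 1):.2f}%  (real output Y unchanged)")   # -> 1.00%
```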
A second problem is that right up until the 1990s most behavioural equations were estimated outside of their simultaneous context - variables which were endogenous to the system as a whole were treated as being exogenous during estimation.
When I re-estimated a 'modern' macro model using 3SNLS, there were significant changes in the parameter vector - as one would expect - and the forecasting capability of the model improved (assuming the forecaster made perfect guesses at the exogenous variables). That is, running the model forward from the estimation dataset, and giving the model actual values for the exovars, resulted in paths for the endovars that were a decent fit to the actual values.
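[The bias from estimating one equation of a simultaneous system in isolation can be seen in a small illustrative simulation; a simple IV estimator stands in here for a full-system method like the 3SNLS mentioned above.]

```python
# Single-equation OLS vs. an instrumented estimate in a simultaneous system:
# treating an endogenous regressor as exogenous biases the parameter.
import numpy as np

rng = np.random.default_rng(7)
n = 5_000
z = rng.normal(size=n)          # genuinely exogenous instrument
u = rng.normal(size=n)          # shock to the equation of interest
v = rng.normal(size=n)          # other shock

x = 0.8 * z + 0.5 * u + v       # x is endogenous: it responds to u
y = 2.0 * x + u                 # true coefficient on x is 2.0

beta_ols = (x @ y) / (x @ x)    # single-equation OLS, x treated as exogenous
beta_iv = (z @ y) / (z @ x)     # simple IV estimate using z

print(f"OLS estimate: {beta_ols:.3f}  (biased away from 2.0)")
print(f"IV estimate:  {beta_iv:.3f}  (close to the true 2.0)")
```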
Furthermore, 1970s models assumed better-than-unity stimulus from government spending, and had no intertemporal budget constraint - that is, government debt could increase without bound (and people would not be 'fooled', either).
Attacking the modelling paradigm because a bunch of people didn't do it very well 30 years ago is simply stupid - it is like saying that supersonic flight is impossible based on an analysis of a Sopwith Camel.
To the guy who worked on an IO model - congrats for contributing to a 1920's (Leontief) technology. You might not be aware that Leontief's grad student from the 1970s (Peter Dixon, who was my PhD supervisor) revised and extended IO modelling to incorporate technical/preference change and intertemporality... and is currently building a CGE model for the US State department.
As to the idea that a macro model is 'underidentified' because it has more parameters than datapoints: well, that just means the model you're looking at is badly designed. The model I worked on - TRYM - had 13 behavioural equations, a dozen or so reaction functions and 80 identities (things which MUST hold - the components of GDP must add to GDP, for example). It was identified - by design. VAR models - the tools of frustrated mathematicians - are helped inordinately by the fact that steady-state type ideas are sensible - since the VAR model is really just a bare-bones reduced-form expression designed for people who think economics is glorified mathematics.
Your statement about out-of-sample performance also betrays some ignorance about how modelling is done.
As a very first step, assume you have a modern, properly identified model with a stable steady state, and you want to perform a forecast.
How do you close the model? That is, how do you choose your exogenous variables (the variables over which the agents in the model have no control)?
Then, how do you forecast the EXOGENOUS variables?
In reality, there ARE some things that are determined outside any model: until recently the modelling fraternity simply used the estimation closure as its forecasting closure (not necessarily sensible) and then came up with a 'point forecast' of the matrix of exogenous variables required to close the model.
Part of my (unfinished) PhD was to think a bit about the silliness of point-forecasting exovars: it makes no sense to use the mean if the distribution of the exovars might be skewed. Do you use the mode, perhaps? Not so fast - the mode of an asymmetric multivariate distribution is not necessarily the vector of the modes of the individual variables.
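[A quick numerical check of that last claim, using a made-up three-component normal mixture: the grid point that maximises the joint density is not the vector of the points that maximise the marginals.]

```python
# Made-up bivariate mixture: the joint mode is near (0, 0), but the vector of
# marginal modes is near (0, 2), where the joint density is much lower.
import numpy as np

def normal_pdf(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

# (weight, mean_x, mean_y, sd) for three independent-component normals
components = [(0.40, 0.0, 0.0, 0.6), (0.30, 2.0, 2.0, 0.6), (0.30, 0.0, 2.0, 0.8)]

grid = np.linspace(-2.0, 4.0, 601)
X, Y = np.meshgrid(grid, grid, indexing="ij")
joint = sum(w * normal_pdf(X, mx, sd) * normal_pdf(Y, my, sd)
            for w, mx, my, sd in components)

ix, iy = np.unravel_index(joint.argmax(), joint.shape)
marg_x, marg_y = joint.sum(axis=1), joint.sum(axis=0)   # marginals on the grid

print("joint mode:              ", (round(grid[ix], 2), round(grid[iy], 2)))
print("vector of marginal modes:", (round(grid[marg_x.argmax()], 2),
                                    round(grid[marg_y.argmax()], 2)))
```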
My solution was to perform STOCHASTIC SIMULATION - but based on a set of realistic scenarios (which most US economists, being frustrated mathematicians first and foremost, don't think hard enough about).
Then you do a gazillion simulations, and you get a distribution for the endovars - and you can then run a 'Bayesian ruler' over them to see if they make sense.
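[In outline, that procedure looks something like the sketch below; the one-line 'model', the two exogenous drivers, and their scenario distributions are placeholders, not TRYM or the actual 1998 scenarios.]

```python
# Stochastic simulation in outline: draw exogenous-variable paths from scenario
# distributions, run the model forward for each draw, summarise the endovars.
import numpy as np

rng = np.random.default_rng(1998)
n_sims, horizon = 100_000, 10                                     # ten-year horizon

# scenario distributions for two exogenous drivers (illustrative numbers only)
world_growth = rng.normal(0.03, 0.015, size=(n_sims, horizon))    # trading-partner growth
terms_of_trade = rng.normal(0.0, 0.05, size=(n_sims, horizon))    # terms-of-trade shocks

# placeholder model: domestic GDP growth responds to the exogenous drivers plus noise
gdp_growth = (0.02 + 0.6 * world_growth + 0.2 * terms_of_trade
              + rng.normal(0.0, 0.01, size=(n_sims, horizon)))

annualised = (1.0 + gdp_growth).prod(axis=1) ** (1.0 / horizon) - 1.0
any_negative_year = (gdp_growth < 0).any(axis=1)                  # crude recession flag

print(f"median annualised growth: {np.median(annualised):.1%}")
print(f"P(at least one negative year): {any_negative_year.mean():.1%}")
```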
When I did that (in 1998, with the press all a-twitter about the Asian currency crisis, the Russian default and LTCM), I constructed what I thought were sensible scenarios for the Australian economy, estimated some distributions for those scenarios, and ran the thing a bazillion times. The result was that the best guess for GDP for the Australian economy over the next 10 years was annualised growth of 4.1%; there was less than a 5% chance of recession, and all of the probability mass said that unemployment was likely to FALL.
Beginner's luck, perhaps. Let's just say that when I presented it at a grad seminar, I got lots of raised eyebrows, based on the idea that the crisis would 'cause a depression in Asia'.
ALL THAT being said, I absolutely agree that climate models are garbage - for the simple reason that a lot of the 'data' used to generate parameters is itself generated by a method with a MAPE (mean absolute percentage error) that any sensible modeller would find unacceptable - and the degree of simultaneity in the system is MUCH greater (nobody can agree on the causality chain between temperature and CO2 concentration, for example: in the economic context it is like not knowing whether or not government spending in 1940 affects GDP in 1920).
Cheerio
GT
GT,
very thoughtful. My experience circa 1987-90 was that no one was doing better than a VAR with lagged differences of GDP and the Fed Funds rate. I do a lot of out-of-sample testing currently, but not of complex interactive systems like economies or climates.
Maybe you can get them to work better. Good luck with that (seriously).
E