In their book Against Intellectual Monopoly, Boldrin and Levine note that there are no a priori grounds for their argument. That is, there are offsetting costs and benefits to intellectual property, and so it is a matter of empirically estimating these costs and benefits. In the book's case, the focus is mainly on patents, as opposed to confidentiality agreements, non-compete agreements, or trade secrets.
I think this is really true for almost any economic debate. Theoretically, there's a case for quotas: assume sufficiently high increasing returns to scale, add some spill-over effects, and you have the case for quotas (this kind of pretty reasoning led to Krugman's fall to the Dark Side). The issue is empirical: if you give a legislature the ability to grant quotas, to what degree will they be used for this purpose, as opposed to pure rent generation via government fiat?
Thus, theory is nice merely because it tells us what variables to look at when doing an empirical analysis. In practice, with enough data, the variables speak for themselves, and it will be obvious what they are saying. The problem is merely that there are an infinite number of potential effects, and potential interesting variables to control for. For example, looking at stocks, we may be interested in their annualized returns and volatility. If stock returns are lognormally distributed, this completely defines their distribution. If they have fat tails, we need further data, higher moments, or extremum statistics. If markets are not efficient, perhaps it helps to look at auto-correlation in returns over various horizons (daily, weekly), or various technical patterns (head-and-shoulders). The state space is infinite; you need a theory to constrain it.
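A toy simulation (my own sketch, not from the post) makes the point concrete: if you search an unconstrained state space of candidate "predictors" of returns, some will look statistically significant by pure chance.

```python
# Toy illustration: data mining over many candidate "predictors" of returns.
# With enough candidates, some appear statistically significant by pure chance.
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_candidates = 250, 200          # one year of daily returns, 200 junk signals
returns = rng.normal(0, 0.01, n_obs)    # returns with no true predictability

significant = 0
for _ in range(n_candidates):
    signal = rng.normal(size=n_obs)     # a random series, unrelated to returns
    r = np.corrcoef(signal, returns)[0, 1]
    t_stat = r * np.sqrt((n_obs - 2) / (1 - r**2))
    if abs(t_stat) > 1.96:              # the usual 5% two-sided cutoff
        significant += 1

# By construction, roughly 5% of the junk signals will "work".
print(f"{significant} of {n_candidates} junk signals look significant")
```

None of the 200 signals has any real relation to returns, yet around ten clear the usual significance bar, which is why the search has to be constrained by theory rather than run over everything.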
Similarly, when looking at what affects the effects, you need to control for other things. You might look first at how the 'market' affects returns contemporaneously, or industry effects. You might look at size, or value factors. Again, the state space is infinite, and you need a theory, a story.
So, theory is very useful, but usually theory merely suggests something to look at. The data then say how the functional form fits. If theory says variance, but it turns out that the result is really a function of the square root of variance (volatility), you can be sure that in 10 years no one will remember the theories that proposed variance, and it will all appear an unbroken advance in science.
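A hypothetical sketch of that variance-versus-volatility point (my numbers, not the author's): suppose theory proposes variance as the driver, but the data were actually generated from volatility. A simple comparison of fits lets the data pick the functional form.

```python
# Toy example: theory says "variance", but the outcome is truly driven by
# volatility (the square root of variance). Compare the two fits.
import numpy as np

rng = np.random.default_rng(1)
variance = rng.uniform(0.0001, 0.25, 500)        # hypothetical per-asset variances
volatility = np.sqrt(variance)
y = 2.0 * volatility + rng.normal(0, 0.01, 500)  # outcome driven by volatility

def r_squared(x, y):
    """R^2 from a one-variable least-squares fit of y on x."""
    coeffs = np.polyfit(x, y, 1)
    resid = y - np.polyval(coeffs, x)
    return 1 - resid.var() / y.var()

print(f"variance fit   R^2: {r_squared(variance, y):.3f}")
print(f"volatility fit R^2: {r_squared(volatility, y):.3f}")
```

The volatility fit explains noticeably more of the variation, even though theory only said "look at the second moment" — the data settle the functional form.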
So, I don't get too excited by proofs, or the precise nature of the functional forms. Just identify what is important as an input and output, and then roll up one's sleeves and see what the data say.
For people who hate models, I think the key thing to remember is that they provide a useful scaffold to fit real data. That is, if you fit data without theory, you need a lot of it so the fit is not really bumpy, and without focusing on a small set of data, the combinations of potentially interesting data are simply too big.
to affect is to effect an effect. (kudos for getting it right.)
you need a theory, a story.
I'm with you on everything in this post but the two words after the comma here. A theory is not a story, a story not a theory.
What's the essential difference, Michael?
Sometimes a story is just a story.
I would add that I am sceptical of any empirical analysis without a theory, no matter the statistical significance or the sophistication of the empirical method. As you mentioned, the state space is infinite, and finding something that is highly unlikely becomes just a matter of searching long enough.
if i improve the wheel, i do it for myself. chances are someone else will come up with the same improvement on their own, at around the same time. (a keen knowledge of IP, from inside the industries that produce it, would confirm this is often the case)
is anyone going to share their improvement with others?
will some people not share their improvement?
someone, at some point, must have asked: how do we encourage people to share their improvements, and if necessary, to produce improved wheels for distribution?
a legal right to exclude others is one way.
but it will continue to be questioned because, with IT, it's very easy to share the improvement and many people can easily implement it into a product, develop it and distribute it, but with biopharma, only a few entities can implement the idea and even fewer can develop, produce and distribute products to which it applies.
in short, IT production is relatively fast, generally low risk and requires little overhead while biopharma production is relatively slow, extremely high risk and requires massive overhead. even the people in these industries have very different respective views about IP.
yet in reading the papers being written about IP, it seems many have no trouble talking about IP as being a general concept that can be applied across each of these industries (that together produce the lion's share of IP).
i think it is fraught with difficulties to approach an analysis of IP this way.
focus on what's being produced, and what it takes to produce it.