Tuesday, August 31, 2010

Would New Orleans levees hold for a second Katrina?

Christian Science Monitor article "Would New Orleans levees hold for a second Katrina?" (August 29th, 2010) offers only a partial discussion of the costs and distribution of destructive floods.
The Corps says the reinforcements are built to provide a defense against a 100-year storm surge, which means protection against flooding that in any given year, may have a 1 percent chance of taking place. For a peak storm surge, such as one that may occur once every 500 years, the system is designed to allow overtopping, where a storm sends waves over the top of the wall.
Mr. Barry says the levees should be constructed to withstand a 1,000-year flood, adding that Holland enjoys a 10,000-year protection standard.
The article should have recognized and discussed the nature of the flood distribution that a levee system has to protect against, but did not. Specifically, we can imagine that a number of floods or hurricanes occur in New Orleans every year, with each flood's destructive power drawn from some distribution. (Note that our general point will hold for any distribution in the exponential family, which includes the normal, gamma, Weibull, binomial, Poisson, and more.) What we need to be concerned about is not that distribution itself, but the distribution of the maximum flood in a given year.

To illustrate our point, we can imagine hurricane destructiveness distributed as a normal with mean 5 and standard deviation 0.6.  Each year ten floods are sampled from this distribution.  The yearly maximum will be distributed approximately as a Type-I (Gumbel) extreme value distribution.  These two are depicted graphically below--a single flood's distribution in blue, and the maximum of that season in red (click to enlarge).
The important thing to note is how taking the maximum positively skews the distribution we're considering, producing a long-right-tailed distribution for the worst flood of the year--and it is this distribution, not that of a single flood, that we need to consider when making optimal flood insurance decisions.
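A minimal simulation makes the skew concrete, using the normal(5, 0.6) flood distribution and the ten floods per year assumed above:

```python
import random
import statistics

random.seed(0)

# Each year ten floods are drawn from N(5, 0.6); the year's worst flood is
# the maximum of the ten. Simulate 10,000 draws of both quantities.
single = [random.gauss(5, 0.6) for _ in range(10_000)]
seasons = [max(random.gauss(5, 0.6) for _ in range(10)) for _ in range(10_000)]

print(f"mean of a single flood:   {statistics.mean(single):.2f}")
print(f"mean of the yearly worst: {statistics.mean(seasons):.2f}")
```

The yearly maximum sits well to the right of a typical flood, and its distribution carries the long right tail that levee design must price.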

In this light, the article's discussion of 100-, 1,000-, and 10,000-year floods should be interpreted accordingly--as the costs of protection rise, the marginal benefits of additional protection become dramatically smaller.

Sunday, August 29, 2010

What if the end isn't near?

USA Today article "What if the end isn't near?" (August 23rd, 2010) discusses a large subpopulation in America that ostensibly believes that the Second Coming of Christ will occur within the next forty years. The article is deeply concerned about this belief and its effects on public policy (e.g. if nuclear disarmament or global warming are long-term threats, we need not spend resources on them, as the world ends before they become problems).
A new poll from the Pew Research Center for the People and the Press finds that roughly four in 10 Americans believe the Second Coming will happen by 2050.
Thankfully, Wigg-Stevenson and many new-breed evangelicals like him are refusing the kind of end-times bait that lets believers off the hook — off the hook of inspired social action that can make their faith a powerful blessing to their society and their time.
Corrections, from its own a priori beliefs, finds this statistic difficult to believe. The proper economic method for discerning beliefs is to watch what individuals do, not what they say. Our a priori beliefs are so strong that Corrections suggests that individual economic activity simply doesn't match up with these beliefs--people are professing things to pollsters that they don't believe.

Corrections ventures out of its area of expertise into Christian eschatology to understand this poll figure. Any corrections are welcome; the purpose here is just to get a grasp on what individuals might believe, as various interpretations impact economic behavior. There are five important events or periods relevant to the Christian End Times: 1) The First Coming, 2) The Tribulation, 3) The Second Coming, 4) The Millennial Reign, and 5) The Last Judgement.
  1. The First Coming kicks off the sequence of events, bounding the sequence of events and marking the beginning of the "countdown".
  2. The Tribulation is a period of time after the Rapture (the taking of Christians to Heaven, and their disappearance on Earth). In this period, on many accounts, the Four Horsemen of the Apocalypse arrive and many individuals die.
  3. The Second Coming is the arrival of Christ on earth.
  4. The Millennial Reign is the Thousand-Year Reign of Christ before Judgement Day.
  5. Judgement Day is the point at which all economic activity ceases (e.g. August 29, 1997 as Judgement Day would signal the cessation of all economic activity, as individuals are separated into good and bad, and sent to the afterlife).
First, we stipulate that all individuals believing in the Second Coming are Christians. In our understanding, there are several ways to interpret "The Millennium," mentioned in the Bible before the Last Judgement (after which we suppose all economic activity to cease). These beliefs can be broken down into two categories, each with two sub-categories. The first is Premillennialism, which includes both Post-tribulational Premillennialism and Pre-tribulational Premillennialism. These believers hold that there is economic activity after the Second Coming--that the Second Coming occurs before the Millennial Reign.

In this case, these individuals do not believe that economic activity will cease upon the Second Coming. (Though Pre-tribulational Premillennialists may believe that the Rapture will remove them or others from economic activity upon the Second Coming.) Neither of these views allows for the end of the world before 2050, as each requires at least a Millennial Reign first.

The second category comprises individuals who believe the Second Coming and the Last Judgement will be concurrent--in this case, all economic activity ceases. Included here are Postmillennialists and Amillennialists, the former thinking that the Millennial Reign will occur before the Second Coming (and may have been happening for some time), and the latter believing that the Bible refers only to a "symbolic" Millennial Reign. Both allow for the end of the world to occur in or before 2050. In any case, the article can only be concerning itself, as far as Corrections can see, with Postmillennialists and Amillennialists, as it would be difficult for Premillennialists of either stripe to believe the world will end by then, given the requirement of a Millennial Reign that has not yet occurred--these people should still be willing to invest in their own and their children's futures, because even if the Second Coming happens by 2050, the world does not end then.

Do people act as if the world will end by 2050 rather than at an indeterminate time? Corrections suggests not. To understand why, we merely need to recognize that such individuals would have starkly different consumption patterns. Take two individuals, starting out with the same consumable resource. They enjoy consuming it, but if they do not consume it, it grows or reproduces at some rate. An example of this might be herding animals, or saving money (which grows at the real interest rate). Individuals are impatient, but also want to smooth consumption. One individual believes in an infinite-horizon world, where they save for themselves and future generations. Another believes the world will end in forty periods. How would their consumption patterns look? We solve the dynamic programming problem of when to sell herd stock for both individuals. Their stock of animals and number of animals sold is displayed graphically below (click to enlarge):
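The two herders' problems can be sketched under log utility, assuming an illustrative discount factor beta = 0.95 and gross growth rate R = 1.08 on the unconsumed stock (both numbers are ours, not the post's). With log utility the optimal policy has a closed form: the infinite-horizon herder consumes the fraction (1 - beta) of the stock each period, while a herder with k periods remaining consumes the larger fraction (1 - beta) / (1 - beta**k).

```python
beta, R = 0.95, 1.08  # illustrative discount factor and gross growth rate

def simulate(horizon, s0=100.0, periods=40):
    """Track (stock, consumption) each period under the optimal log-utility rule."""
    stock, path = s0, []
    for t in range(periods):
        if horizon is None:                  # infinite-horizon believer
            frac = 1 - beta
        else:                                # world ends after `horizon` periods
            k = horizon - t                  # periods remaining, incl. this one
            frac = (1 - beta) / (1 - beta ** k)
        consumption = frac * stock
        path.append((stock, consumption))
        stock = R * (stock - consumption)    # the unconsumed remainder grows
    return path

infinite = simulate(None)
finite = simulate(40)
print(f"final stock, infinite horizon:  {infinite[-1][0]:.1f}")
print(f"final stock, 40-period horizon: {finite[-1][0]:.1f}")
```

The end-of-the-world believer consumes more from the very first period and runs the herd down toward zero, while the dynastic saver lets the stock compound--exactly the divergence the post describes.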

As one can see, savings and consumption patterns are starkly different in the two groups quite quickly--people who have dynastic preferences and solve an infinite-horizon problem (or something approximating it) are able to take advantage of exponential growth in a way that finite-horizoned individuals cannot. The question is whether we see this sort of behavior among the 40% of the population the Pew Research Center claims. It is also worth noting that the difference would be enlarged by any comparison starting before today's date (we assume the same resources today--had individuals started with the same resources five years ago, the difference would be even more noticeable, because there is more time for divergence).

How might we see this in public policy? Any individual believing that Judgement Day will happen before 2050 and born after 1983 will never see any Social Security benefits, while still paying in to Social Security and other retirement programs. Indeed, individuals born before 1983 will not come remotely close to recouping their contributions, and should be just as opposed to the program.

Such individuals should not be saving for retirement, and certainly not be taking care of their bodies--many of this 40% who believe the world will end by 2050 should begin smoking, and planning for a family may be seen as mildly short-sighted.

In summary, Corrections believes that the lack of evidence of such behavior among this supposed 40% of the population--the absence of articles noting an incredible rise in unhealthy behavior and a collapse in saving--is evidence that individuals do not believe what they claim to believe in surveys. Corrections might further note that while one may joke about short-sightedness among Americans today, the question is whether people are behaving with the degree of extremity necessary to act as if the world were going to end within forty years.

Saturday, August 28, 2010

The Mackerel Wars: Europe's Fish Tiff With Iceland

Time Magazine article "The Mackerel Wars: Europe's Fish Tiff With Iceland" (Friday, August 27th, 2010) discusses optimal common-pool resource exploitation but speaks of "sustainable" fishing as though a balanced stock required a single and unchanging level of fishing.
The Marine Stewardship Council, which issues fishery certification programs, said that if the fishing continued at this rate, mackerel would start to fall below sustainable levels by 2012.

Let's imagine governments, along with the alphabet soup of NGOs the Time article mentioned (SFM, WWF, MSC, FIFVO, CFP, and PEG), are capable of optimally exploiting a common-pool resource. We can further imagine there are two possible stochastic states of the world: high price for mackerel and low price for mackerel. Additionally, there is a stock of fish that can be consumed, and after consumption, the remaining stock multiplies.

In this case, we might have a dynamic programming problem summarized by a value function with fish stock S, consumption C, price-state $$\theta$$, price $$P(C,\theta)$$, and growth rate r:

With the law of motion for fish stock:
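The value function and law of motion appeared as images in the original post; one plausible specification, consistent with the variables named above and assuming a discount factor $$\beta$$, is:

```latex
V(S,\theta) = \max_{0 \le C \le S} \Big\{ P(C,\theta)\,C + \beta\, \mathbb{E}\big[\, V(S',\theta') \mid \theta \,\big] \Big\},
\qquad
S' = (1+r)\,(S - C).
```

That is, today's revenue from catch C plus the discounted expected value of carrying the remaining stock, which regrows at rate r, into the next price-state.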

In this case, with appropriate parameters and functional forms, solving the value function above may yield a consumption pattern with a non-singleton ergodic set for fish stock that lies above zero. That is, in "low" price moments, we "under-fish"; in "high" price moments, we "over-fish." Below, we graph next period's stock against this period's stock under the high-price and low-price states (click to enlarge).

As one can see, the ergodic set stretches from where the 45 degree line (stock today is the same as stock tomorrow) intersects with a low shock (a stock cannot possibly go below this point, even with an infinite series of low shocks), to where the 45 degree line intersects with a high shock (a stock cannot possibly go above this point, even with an infinite series of high shocks).

The important stylized fact to understand from this is that it is possible to have not a "steady state" of fish but instead an ergodic set of possible levels of fish--a policy that optimally exploits the resource, dipping into the stock when prices are high and letting it rebuild when prices are low.
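That ergodic-set behavior can be sketched with a stylized simulation, assuming logistic regrowth toward a carrying capacity and a state-dependent harvest rule (all parameters are illustrative, not the solution to the value function above):

```python
import random

random.seed(1)

# The stock regrows logistically toward carrying capacity K; a larger
# fraction is harvested when the price is high. Parameters are illustrative.
r, K = 0.4, 100.0
harvest_frac = {"high": 0.25, "low": 0.05}

stock, path = 50.0, []
for t in range(5_000):
    state = random.choice(["high", "low"])
    escapement = stock * (1 - harvest_frac[state])   # what is left in the sea
    stock = escapement + r * escapement * (1 - escapement / K)
    path.append(stock)

burned = path[1_000:]  # discard the transient
print(f"long-run stock range: [{min(burned):.1f}, {max(burned):.1f}]")
```

The stock never settles at a single steady state: it wanders within a bounded band whose edges are the fixed points of the all-high and all-low harvest maps, mirroring the ergodic set described above.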

Friday, August 27, 2010

One number can't illustrate teacher effectiveness

Los Angeles Times article "One number can't illustrate teacher effectiveness" (August 25th, 2010) criticizes Richard Buddin's scoring of Los Angeles teachers, arguing that it isn't taking everything into account and that there's an "ethical issue" publishing his research:

Given analytic weaknesses, the ethical question that arises is whether The Times is on sufficiently firm empirical ground to publish a single number, purporting to gauge the sum total of a teacher's effect on children.

Corrections had a number of issues with the article's criticism of Buddin's procedure. However, the more interesting issue was the author's raising of "ethicality" in the Times's decision to publish Buddin's research. Theory suggests that unions may raise short-run pay, but that they flatten skill differentials. Perhaps to that end, teachers' unions have nearly uniformly opposed the ranking of teachers and anything that might help that process.

One reason for this opposition is that fellow travelers might then criticize any empirical ranking that would break a union's power by encouraging best practices and weeding out bad teachers. What, then, might be a strategy to break this cartel of unions withholding statistics and education professors criticizing studies that use bad statistics? To publish the best studies we can using the statistics we are given. Why?

Let us imagine that there are two types of teachers: good teachers and bad teachers. Both have the same baseline utility, and both their utilities may be decreased by increased supervision. However, given that they will be ranked, good teachers would rather have good statistics. The idea is displayed graphically below (click to enlarge).

What might this do? If they are not being graded, both good teachers and bad teachers prefer to obstruct the collection of good statistics. If they are being graded, good teachers now prefer good statistics while bad teachers like them even less. A wedge has now been created, and it is an empirical question whether good teachers will be able to outvote bad teachers in the quest for the collection of better statistics.

In this vein, publishing well-done, competent research that admits its flaws (which an article criticizing it then rehashes) and encourages the destruction of union power to the detriment of bad teachers and benefit of students would appear the only "moral" choice.

Thursday, August 26, 2010

Why California should just say no to Prop. 19

LA Times OpEd "Why California should just say no to Prop. 19" (August 25th, 2010) deceives readers.
A 2004 meta-analysis published in the journal Drug and Alcohol Review of studies conducted in several localities showed that between 4% and 14% of drivers who sustained injuries or died in traffic accidents tested positive for delta-9-tetrahydrocannabinol, or THC, the active ingredient in marijuana. Because marijuana negatively affects drivers' judgment, motor skills and reaction time, it stands to reason that legalizing marijuana would lead to more accidents and fatalities involving drivers under its influence.
The meta-analysis the article mentions is "A review of drug use and driving: epidemiology, impairment, risk factors and risk perceptions," Drug and Alcohol Review, Volume 23, Issue 3, by Erin Kelly, Shane Darke and Joanne Ross, available here (gated). The analysis notes explicitly that "There is inconsistent evidence regarding the impairing effects of cannabis in field studies." It lists four field studies and notes that three of them indicate no significant impact of cannabis consumption on driving. The review concludes, "the relationship between THC and street driving performance is equivocal." Any reader inclined towards believing what the data are telling them, rather than what they wish they could find, would conclude not that "legalizing marijuana would lead to more accidents and fatalities involving drivers under its influence," as the LA Times piece has, but rather that we shouldn't expect legalizing marijuana to change the number of accidents and fatalities. The OpEd's conclusion is ridiculous and unsupported by the very evidence it cites.

In addition to the egregious error discussed above, the article also provides readers with a foolish analysis of negative externalities.
The current healthcare and criminal justice costs associated with alcohol and tobacco far surpass the tax revenue they generate, and very little of the taxes collected on these substances is contributed to offsetting their substantial social and health costs. For every dollar society collects in taxes on alcohol, for example, we end up spending eight more in social costs. That is hardly a recipe for fiscal health.
This analysis implies that the pleasure people get from drinking is worthless. Drinkers and non-drinkers alike deserve to have their utility taken into account when analyzing the costs and benefits of a liquor tax. Basic economic theory tells us that social surplus is maximized when the marginal social cost of drinking equals the marginal social benefit of drinking. If we are counting drinkers as members of society, then it follows that some level of drinking is optimal. The figure below demonstrates how a Pigouvian tax allows us to arrive at the socially optimal level of alcohol consumption (click here to enlarge).

In equilibrium, the level of the tax just measures the difference between marginal private cost (cost of drinking to drinkers) and marginal social cost (cost of drinking to society, including alcohol-related externalities). Taxes on goods with externalities are meant to regulate consumption to socially optimal levels. As in the figure above, it is possible to draw social and private cost curves that yield the result discussed in the article--tax revenue is one eighth the area between social cost and private cost curves. The red shaded area represents the tax revenue and the blue shaded area represents the difference between social and private cost. This is a socially optimal (but not necessarily revenue maximizing) tax.
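The mechanics can be sketched with an illustrative linear example: marginal benefit of drinking MB(q) = 10 - q, constant marginal private cost of 2, and a constant marginal external cost of 1 per unit (all numbers hypothetical, chosen only to make the arithmetic transparent). The Pigouvian tax is set equal to the marginal external cost, so taxed drinkers choose the social optimum, where MB(q) = MPC + ext.

```python
mb = lambda q: 10 - q   # marginal benefit of drinking (inverse demand)
mpc = 2.0               # marginal private cost, borne by drinkers
ext = 1.0               # marginal external cost, imposed on others

q_private = 10 - mpc          # untaxed drinkers set MB(q) = MPC
q_social = 10 - (mpc + ext)   # planner's optimum sets MB(q) = MPC + ext
tax = ext                     # the tax that decentralizes the optimum

print(f"private q: {q_private}, social optimum: {q_social}, tax: {tax}")
```

Note that with a constant externality, optimal tax revenue (tax times quantity) exactly equals the external cost at the optimum; with a rising marginal external cost, revenue can fall well short of total external cost even at the optimal tax--which is why the article's one-to-eight ratio, by itself, says nothing about whether the tax is set correctly.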

Wednesday, August 25, 2010

Case of soup-kitchen thief fuels critics of three-strikes laws

Christian Science Monitor article "Case of soup-kitchen thief fuels critics of three-strikes laws" (August 19th, 2010) discusses California's three-strikes law, a law that requires that an individual be sentenced to life imprisonment after being convicted of a third felony. It leaves relatively undiscussed the statistical rationale for such a law.

Los Angeles Superior Court Judge Peter Espinoza said Taylor’s sentence was one of many third-conviction cases that brought “disproportionate” sentences and “resulted in, if not unintended, then at least unanticipated, consequences.”


But stories like Taylor’s are useful in illustrating the problems behind the law's implementation, says Ms. Levinson. The challenge for critics will be trying to prove that society does not benefit from decades-long or even life sentences for nonviolent – or at least not serious – crimes.

Corrections suggests that it may make sense to permanently incarcerate individuals who have previously been convicted of only minor crimes, because the likelihood that they have committed unobserved and serious crimes is high. It is worth noting that, truncating the distribution of claims at the 90th percentile, burglars appear to commit about 38.1 burglaries per year (Visher 1986, reported by the NSW Bureau of Crime Statistics and Research). The Senate Congressional Record from 2004 indicates that rapists commit between 8 and 10 rapes, on average, before being caught (Congressional Record, page 22999). Car thieves in England appear to steal 45 cars before they're 18, and 94 when they're older (Car Theft: The Offender's Perspective, Light, Nee and Ingham, 1993, page 11). Between 545 and 707 metric tons of cocaine were shipped toward the United States in 2007; 259 metric tons were caught by U.S. authorities, either internally, at arrival zones, or in transit. Given the discrepancy, Corrections conjectures that distributors of cocaine go through hundreds or thousands of transactions without being caught.

Perhaps some of these statistics are individually not dependable, but their consistency and magnitude serve as more general suggestive evidence. Given someone has been convicted of one crime, the likelihood that they have committed an order of magnitude more without having been caught appears rather large. The idea that crimes someone is convicted for are simply a signal for the crimes they have committed is the impetus for the point Corrections makes here.

Let us imagine, before we have caught and convicted an individual, that we have some probability distribution from which we believe they are sampled--some likelihood that they are a career criminal, an irredeemable recidivist. This is our "prior probability," a distribution between zero and one, over the chance that someone will be a recidivist, displayed graphically below (the image is rather small; click to enlarge).

Someone's criminal act or acts provide data on the likelihood that the person will be a recidivist. One example of this might be their being caught and convicted of breaking into a building. The hypothetical probability we might place on someone having committed a major crime, given they have been caught committing a serious crime like breaking and entering, is displayed graphically below.

We thus have the distribution of the data on the probability of another criminal act, displayed graphically below (click to enlarge).

We can combine the data on our parameter of interest--in this case, the likelihood that someone will commit another criminal act--with our prior beliefs about that parameter (click to enlarge).

We can summarize the Bayesian update by displaying our prior, data, and posterior in one graph, below (click to enlarge). For those interested, in this case we took the data to be drawn from a binomial distribution and assumed its conjugate prior, a beta distribution (which makes the posterior a beta distribution as well--our assumptions are made merely for mathematical convenience and do not drive the general point).
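The beta-binomial update just described can be sketched in a few lines, assuming an illustrative Beta(2, 5) prior on the recidivism probability and data of three convictions in three observed opportunities (all numbers hypothetical, not the post's figures):

```python
a0, b0 = 2, 5                  # prior: Beta(a0, b0) on the recidivism probability
convictions, opportunities = 3, 3

# conjugacy: the posterior is Beta(a0 + successes, b0 + failures)
a_post = a0 + convictions
b_post = b0 + (opportunities - convictions)

prior_mean = a0 / (a0 + b0)
post_mean = a_post / (a_post + b_post)
print(f"prior mean: {prior_mean:.3f}, posterior mean: {post_mean:.3f}")
```

Even a skeptical prior shifts substantially toward "recidivist" after only a handful of observed convictions, which is the engine behind the post's argument.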

To summarize the point Corrections is making: given that someone has been convicted of two felonies previously and is now convicted of a third, the likelihood that they have committed dozens or hundreds of other felonies and crimes is high. Convicting and imprisoning these career criminals for life, for their three observed and dozens of unobserved felonies and crimes, may simply be the optimal response of a justice system that is sentencing people for their likely crimes (the crimes they were convicted of, and the likely crimes for which they were never caught).

For those concerned that we are convicting individuals for crimes they have only likely committed, we remind them that this is precisely what the judicial system does currently. The question is how to decide the threshold for punishment.

Corrections finally notes that it is possible punishments already take this into account--that is, the penalty for marijuana distribution, in locations (excepting those with marijuana legalization or decriminalization) are deliberately punishing crimes for which one has not yet been caught. Our discussion took place under the assumption that punishments reflect only the crime for which one was convicted.

The Littlest Redshirts Sit Out Kindergarten

New York Times article "The Littlest Redshirts Sit Out Kindergarten" (August 20th, 2010) discusses the "redshirting" of kindergartners, the practice of holding them back a year so they have an age advantage. Corrections is dubious that the practice could become a problem, and suggests that it will wane, despite showing "no signs" of doing so.
“Redshirting” of kindergartners — the term comes from the practice of postponing the participation of college athletes in competitive games — became increasingly widespread in the 1990s, and shows no signs of waning.
The Times doesn't articulate the tradeoffs that altruistic parents face when deciding when their children will enter school. Children gain some initial advantage from entering kindergarten later, because they are older and more mature, and they may gain a measure of happiness by not entering school immediately. What they lose is a year of their life that they might have spent working or in retirement. There are two important empirical questions the Times should have addressed. First, whether there is an advantage to entering kindergarten late, and if so, the time profile of that benefit. Second, whether the net present value of the benefits of entering late is greater than, or less than, the net present value of the earnings forgone by delaying entry.

What are the benefits to entering class late, assuming there are any? On the one hand, if the "alpha dogs" of a class get a larger share of the resources, confidence, and attention, then we might expect the benefits of late enrollment to explode over time. Alternatively, if students enter with a fixed advantage and all students learn equally over time, then the benefit of being a year older than one's peers decays over time. Two prototypical time paths are displayed graphically below (click to enlarge). The plot simply shows an advantage, measured initially at 1, and its decay or growth over time. The black line separates the two answers to our second question: if a path stays above the black line, benefits grow or stay constant; below it, benefits decay or stay constant.

Evidence indicates that the blue line of decaying benefits is the empirical reality. Elder & Lubotsky find that benefits are relatively short-lived in "Kindergarten Entrance Age and Children's Achievement," Journal of Human Resources (2009) (gated) (ungated). The authors use exogenous changes in state age cutoffs, and the consequent differences between predicted and actual entrance ages, to produce identification (a counterfactual).

Elder & Lubotsky indicate that there are benefits, however fleeting. What are the costs? Earnings rise as one gets older (falling as one nears retirement age). Inspired by "Empirical Age-Earnings Profiles" (Kevin M. Murphy and Finis Welch, Journal of Labor Economics, April 1990), Corrections offers a similar treatment, using historical cohort averages of earnings from the Current Population Survey (available at the Census Bureau). We use the data (not plotted) to fit a cubic polynomial of earnings over age for cohorts born in 1940 or 1950, displayed graphically below. The first plot shows earnings (all earnings in current dollars) by age (click to enlarge); the second plot shows earnings by year (click to enlarge). Both plots use median data from males only (all races).

To overcome the cost of putting off one's earnings profile by one year, how much would an individual born in 1940 have to be paid? In this primitive analysis, ceteris paribus, if the net present value of putting off one's education is greater than $8,500, an individual should do it.
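The flavor of that calculation can be sketched as follows, with a hypothetical hump-shaped earnings profile and a 5% discount rate standing in for the post's fitted cubic (the $8,500 figure above comes from the post's own data, not from this sketch):

```python
def earnings(age):
    # hypothetical hump-shaped profile, peaking at $60,000 around age 50
    return max(0.0, -40 * (age - 50) ** 2 + 60_000)

i = 0.05  # annual discount rate (illustrative)

# present value, discounted back to the kindergarten decision at age 6,
# of working from age 22 until retirement at 65
pv_now = sum(earnings(a) / (1 + i) ** (a - 6) for a in range(22, 65))
pv_delayed = pv_now / (1 + i)   # redshirting shifts the whole profile one year later
cost_of_delay = pv_now - pv_delayed

print(f"NPV cost of a one-year delay: ${cost_of_delay:,.0f}")
```

Shifting the entire profile one year costs the fraction i/(1+i) of its present value--the benefit of entering late must clear that hurdle to be worthwhile.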

Corrections might further add that even if the trend has been increasing, it is likely to find some equilibrium. As the proportion of "alpha dogs" increases, their allotment of resources above the baseline presumably decreases--the benefits of postponing decrease, while the costs, as discussed above, remain the same. This leads to an interior equilibrium (in the figure below, the equilibrium proportion of late entrants is .247, the point of intersection), as displayed below (click to enlarge). At that point, no net benefit is gained by waiting, and individuals are indifferent between waiting and not.

Monday, August 23, 2010

Free That Tenor Sax

New York Times editorial "Free That Tenor Sax" (August 21st, 2010) espouses a shift in U.S. copyright law. Specifically, it advocates shortening the copyright term so that a work is protected only during an author's life, rather than the author's life plus seventy years.

Copyright laws are designed to ensure that authors and performers receive compensation for their labors without fear of theft and to encourage them to continue their work. The laws are not intended to provide income for generations of an author’s heirs, particularly at the cost of keeping works of art out of the public’s reach.

Corrections should first note the patent falsity of this statement. The law protects a work for an author's life plus seventy years. To argue that the law is only meant to protect a work during an author's life, but not past it, is the sort of socialist self-deception the New York Times editorial board has made a habit of. The position of the Times is ludicrous.

But more important than this deliberate deception by the Times are the false economic implications behind its statement. The Times appears to believe that an author prefers monetary reward only during his lifetime. Authors are not so selfish as to only desire profits in their lifetime--they have dynastic preferences, and are altruistic towards their heirs.

When deciding how hard to work, authors care about the net present value of profits--that is, total profits over all time, discounted to the present period. In the current paradigm, we might suppose that profits look like this (click to enlarge):

The Times wishes to change this to a value-stream following this model: (click to enlarge):

If all authors care about is the shaded area--their total profits--then we can see why the Times's idea serves as an assault on art: it corrodes and shrinks an artist's livelihood and joy from his work.

Yet the point Corrections is espousing holds even if authors did not care about their children. All an author needs in order to capture the net present value of all future profits is to sell the continuing rights to his work before his death. In this manner, all that matters is the total profits a work can generate--the author can obtain the net present value of his work's entire stream of profits today by selling the rights to another individual. Indeed, a work's copyright could span many generations and such liquidation would still be possible.
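How much value does the shortening destroy? A sketch, assuming an illustrative constant annual profit stream, forty remaining years of life for the author, and a 5% discount rate (all three are our assumptions, not the editorial's):

```python
i = 0.05        # annual discount rate
d = 1 / (1 + i)

def npv(T):
    """Net present value of a constant profit stream of 1 per year for T years."""
    return (1 - d ** T) / (1 - d)

life_only = npv(40)      # protection only during the author's remaining life
life_plus_70 = npv(110)  # the current regime: life plus seventy years
share_lost = 1 - life_only / life_plus_70

print(f"share of copyright value removed by the change: {share_lost:.1%}")
```

Under these assumptions the shortened term removes roughly a seventh of a work's value--a nontrivial tax on creation, though discounting means the distant post-mortem years contribute far less than their count suggests.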

What the Times is suggesting would destroy a portion of the incentives authors have to create original works, in return for a few works entering the public domain now. This is, in effect, a tax on the value of all authors' works. If ever an organization was willing to kill the infinitely-lived goose for its golden egg, the New York Times is it.

Indeed, we might note that because an author is a durable-goods monopolist, a known and nearby end of copyright subjects him to the logic of the Coase Conjecture (gated) (not to be confused with the Coase Theorem): profits are decreased further relative to what they would otherwise have been, because consumers are willing to put off their consumption during an author's lifetime when they know the end of copyright is near.

Sunday, August 22, 2010

Foreclosures Grind On

New York Times editorial "Foreclosures Grind On" (August 19th, 2010), suggesting that the government intervene to help those who can't afford their mortgages avoid foreclosure, notes that:
Another big problem is that many lenders, whose participation in the program is voluntary, have been reluctant to aggressively rework bad loans. Reducing a loan’s principal balance — rather than lowering interest levels or extending payout periods — is often the best chance of keeping underwater borrowers in their homes.
The entire article is predicated on the assumption that, somehow, borrowers were prey for lenders. Any reasoning individual could see the situation for what it really was--those who couldn't afford to own homes taking advantage of the opportunity to live in them for a short time. The number of homeowners skyrocketed in the past decade, as the figure below shows, and now appears to be falling back to historical levels (click here to enlarge). For some reason lost upon us, the New York Times editorial suggests that the government intervene to maintain these apparently unsustainably high levels of homeownership.

Lenders suffered after housing prices fell, not ineligible homeowners, who entered their contracts just as they will leave them (with nothing). It is unclear to Corrections why taxpayers should fund those who have already enjoyed stays in homes well beyond their means--it would seem that for nearly a decade they have gotten more than they paid for.

Saturday, August 21, 2010

U.S. Farmers Wary of Gaining From Russia's Woes

New York Times article "U.S. Farmers Wary of Gaining From Russia's Woes" (August 19th, 2010) offers a series of quotes from U.S. farmers that simply do not make sense when taken at face value. Specifically, wheat prices have gone up because Russia, a large producer of wheat, appears to have banned exports in the coming year. The New York Times quotes confused farmers that appear to suggest that they don't want to plant wheat, due to uncertainty about Russia's next moves:

Mr. Schroder said he feared that wheat prices were being driven by speculators, as was the case a few years ago, just before the recession, when the price soared and then crashed.

“What is this wheat market? I don’t have a clue, and I’m a professional wheat farmer,” he said. “There’s a complete lack of transparency.”

The problem the Times is pointing out is that farmers only know the current price, while their planting decisions should be based on prices in the future. Even strongly autoregressive prices leave farmers facing large variability at planting horizons. Corrections depicts such a movement graphically below, along with 95% error bands (click to enlarge).
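To make the variability concrete, here is a minimal sketch of the idea (the AR(1) parameters below are purely illustrative assumptions, not estimates from wheat data), showing how wide 95% error bands become as the horizon lengthens:

```python
import numpy as np

rng = np.random.default_rng(0)

# AR(1) price process: p_t = mu + phi * (p_{t-1} - mu) + eps_t.
# Parameters are illustrative assumptions, not estimates from wheat data.
mu, phi, sigma = 5.0, 0.9, 0.5
T, n_paths = 50, 10_000

paths = np.empty((n_paths, T))
paths[:, 0] = mu
for t in range(1, T):
    paths[:, t] = mu + phi * (paths[:, t - 1] - mu) + rng.normal(0, sigma, n_paths)

# 95% error bands at each horizon: the farmer's uncertainty about future prices.
lower = np.percentile(paths, 2.5, axis=0)
upper = np.percentile(paths, 97.5, axis=0)

# The bands widen with the horizon before leveling off at the process's
# unconditional spread, sigma / sqrt(1 - phi^2).
print(upper[1] - lower[1])    # narrow one period ahead
print(upper[-1] - lower[-1])  # far wider at a planting horizon
```

Even with strong persistence (phi = 0.9), the bands at a planting horizon are several times wider than one period ahead--exactly the uncertainty the quoted farmers face.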

A naive Times writer might instinctually think this represented a market failure. This is incorrect. Indeed, commodities futures markets, perennially despised by market-opposing individuals, are the market solution. However, the good news for farmers is that they do not have to bear any of the turbulence or non-transparency in the market at all. All they have to do is look at futures prices to plan their planting patterns. Why might this be?

Historical knowledge that appears to have been lost, even by some ideologically-motivated economists, is the very reason commodities markets, and commodities futures markets, were created. The Chicago Board of Trade (CBOT), to Corrections' knowledge the oldest still-operating futures market, was created in 1848 to help farmers cope with fluctuating wheat prices.

The problem was as follows: farmers are often poor and unwilling to bear the risk of producing wheat and holding it until it is to be sold at some unknown price. What the Chicago Board of Trade did, and still does, is homogenize a good--in this case, sort wheat into bundles of the same quality--and allow a futures contract between speculators/investors, who are willing to bear the risk farmers don't want, and farmers, who are able to lock in the current price for their wheat. In this manner, all a farmer has to do is sell a futures contract in order to take all market-based uncertainty out of his decision to plant wheat--in selling the contract, he has, in effect, paid someone to bear his risk. This allows him to plant the most valuable crop, even if its future prices are highly variable.

Therefore, farmers now only need to look at current futures prices--they should not care about the current price or what they think might happen to the price, only what the current futures price is. If we examine CBOT's wheat futures prices for July 2011, we see that the current futures price of wheat is elevated (click to enlarge).

Farmers can lock in this price now. No risk necessary. It is important to note that a very large proportion of farmers do this every year, creating the massive derivative markets we have today. The only "risk" that a farmer would take is that he might miss out on higher prices now. Indeed, this is precisely the example the Times gives:

Another brake on any irrational exuberance over wheat will be farmers’ own suspicions, despite the incentives of higher prices.

Some think they are being played, and that the big run-up is partly, or largely, just market manipulation — like the increase in 2007 and 2008 that drove wheat prices more than twice as high as they are now before a gut-wrenching crash during the global recession.

'I hate to sound negative, but I’ve been burned so many dang times on wheat that I think I’m done,' said Olea McCall, who farms about 4,000 acres near the Kansas border, mostly in corn, wheat and sorghum. Mr. McCall said his attitude was not helped by missing out on the new rise in prices.

'I sold at 4, and three weeks later went to 6,' he said, referring to the price in dollars per bushel.

The Times continues:
“It took 20 years to sort the market out after [the similar 1972-73 Soviet Union crop failure],” Mr. Stulp said of the 1972-73 price bubble.

As much as the Times might write about "irrational" "bubbles" harming farmers, its own quotes expose the ridiculousness of the concept. Farmers need not fear any bubbles--they need only lock in their high prices with futures contracts and allow speculators to bear any "bubble" that might be present.

Friday, August 20, 2010

Israelis don't need lectures from American leaders

Seattle Times syndicated column "Israelis don't need lectures from American leaders" (August 19th, 2010) by George Will offers a curious observation. Specifically, that under threat of school buses exploding, Israeli parents would send their children on separate buses to lessen the likelihood of losing two children.
During the onslaught, which began 10 Septembers ago, Israeli parents sending two children to a school would put them on separate buses to decrease the chance that neither would return for dinner.
Corrections finds this rather peculiar. Either sending both children on the same bus or sending them on different buses can be justified depending on the parent's risk aversion. However, Corrections would expect state-dependent utility to cause parents to send both their children on the same school bus (enough to make us curious about Will's claim).

The idea is as follows. Parents cannot reduce the expected number of children killed by a terrorist device or gunfire by splitting them up or putting them together. All they can do is choose the spread of their losses. Let us imagine, assuming tractable, rather than realistic, numbers, that there is a 50% chance any given bus is blown up.

In this case, parents are deciding between two options, given they send their children to school.  If they send both together, then they have a 50% chance of losing none, and a 50% chance of losing both.  There is a 0% chance of losing just one, as both will live or die together.  However, if they send both apart, then each has an independent 50% chance of being killed.  With independent chances (assuming negatively correlated shocks, or negative covariance between attacks, would only strengthen our point), their probabilities will now be a 25% chance of losing zero children, a 50% chance of losing one, and a 25% chance of losing both.
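These probabilities can be verified with a short sketch, using the tractable 50% figure from above:

```python
from itertools import product

p = 0.5  # tractable, illustrative probability that a given bus is attacked

# Together: both children ride one bus, so they live or die together.
together = {0: 1 - p, 1: 0.0, 2: p}

# Apart: two independent buses (1 = that bus is attacked).
apart = {0: 0.0, 1: 0.0, 2: 0.0}
for b1, b2 in product([0, 1], repeat=2):
    prob = (p if b1 else 1 - p) * (p if b2 else 1 - p)
    apart[b1 + b2] += prob

print(together)  # {0: 0.5, 1: 0.0, 2: 0.5}
print(apart)     # {0: 0.25, 1: 0.5, 2: 0.25}

# The expected number of children lost is identical; only the spread differs.
mean = lambda dist: sum(k * v for k, v in dist.items())
print(mean(together), mean(apart))  # 1.0 1.0
```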

The two options are displayed graphically below (click to enlarge).  The red line indicates the decision to send both children apart.  The blue line indicates the decision to send both children together.  As we could calculate, the expected number of children killed is the same: one child.  What is different is the spread of the children killed.

It is up to each individual to discover which choice they prefer.  However, Corrections would have conjectured that parents would choose to keep their children together, because they have state-dependent utility.  Far from dismissing or trivializing the pain a parent would feel, Corrections is conjecturing that, if human beings have some maximum level of pain, and the loss of a child raises one to that level, then individuals are at a "censoring" point for their "gambles" with their children's lives.

Our stylized conjecture about the pain of losing a number of children is displayed graphically below (click to enlarge).  The idea is as follows:  a parent hits a "maximum pain" point as soon as a single child dies.  The loss of another cannot raise their pain past the point at which it is already.  (We note that our general point also survives any two-children-lost pain-level adjustment that does not raise the pain loss of two children to greater than double the loss of one). If we then plot the expected pain of each decision, we see that sending both children on the same bus, because it is "upward censored", gives an expected pain level of .5.  However, sending both children on different buses  gives an expected pain level of .75. The two expected levels of pain are also displayed on our graph, blue denoting the pain of sending children apart, and red denoting the pain of sending children together.
As we can see, the expected pain of sending children together would appear to be less than the expected pain of sending them apart, due to the "censoring" of pain--the pain of the loss of two is less than double the pain of the loss of one.
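The expected-pain comparison can be checked directly under our stylized censored pain function:

```python
# Stylized "censored" pain: losing one child already reaches the maximum
# pain level (normalized to 1.0), and losing two cannot exceed it.
pain = {0: 0.0, 1: 1.0, 2: 1.0}

together = {0: 0.5, 1: 0.0, 2: 0.5}    # both children on one bus
apart = {0: 0.25, 1: 0.5, 2: 0.25}     # independent buses

expected_pain = lambda dist: sum(dist[k] * pain[k] for k in dist)
print(expected_pain(together))  # 0.5
print(expected_pain(apart))     # 0.75
```

Any pain function with pain(2) < 2 × pain(1) preserves the strict ranking; at exactly double, the parent is indifferent.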

However, Israeli parents, according to Will's article, are not behaving in the way Corrections would suggest.  Faced with theory and reported empirics not meeting, methodologically, several options are open to our understanding:
  1. Parental utility does not fall into the broad class of relative utilities we conjecture.
  2. Will's facts are wrong.
  3. Will's facts are right, his reasons are wrong (reasons we miss).
  4. Our broader framework of expected utility maximization is incorrect.
Corrections ascribes the failure of our model most likely to Option 2 or Option 3.  Supporting Will's claim, a cursory examination of the ~170 bus casualties in Israel in the last 10 years yields few people with the same last name killed in the same explosion or gunfire, and fewer same-age siblings.  Suggesting Option 2 is the likelihood that the observation came purely from anecdote.

Thursday, August 19, 2010

Academic Bankruptcy

New York Times OpEd "Academic Bankruptcy" (August 14th, 2010) makes the argument that colleges are spending too much, without really considering the economic landscape for such institutions.

Rather than learning to live within their means, Columbia University, where I teach, and New York University are engaged in a fierce competition to expand as widely and quickly as possible.

The article continues, mustering projections for future tuition without considering the forces at work:
With unemployment soaring, higher education has never been more important to society or more widely desired. But the collapse of our public education system and the skyrocketing cost of private education threaten to make college unaffordable for millions of young people. If recent trends continue, four years at a top-tier school will cost $330,000 in 2020, $525,000 in 2028 and $785,000 in 2035.

The "paying customers" of a college are its students. As the figures below make clear, college enrollment has continued to increase over time (click to enlarge 1, 2).

Given the acknowledged increase in the price of tuition, and the increase in the number of college degrees purchased, we can conclude with certainty that demand for education has been increasing over time: a simultaneous rise in both price and quantity partially identifies a rightward shift in demand.
In addition, a high tuition price does not make education "unaffordable." When students see that the returns to skill are high (that education is valuable), they can borrow against their future earnings until it is not worthwhile to do so. What determines the price of education? In a human capital model (rather than a signaling model), the price of education will be equal to the value of the increase in productivity it provides students. This increase is determined largely by faculty quality. So long as talented faculty are scarce but provide students with a large increase in productivity, the price of tuition will remain high.

Nothing in the article ties the price of education with increases in the productivity of students. If students see that they are not learning anything, and so realize that their future wages will not increase enough to justify tuition, they will not attend college. An aggregation of such decisions will decrease the demand for education and cause tuition to fall.

Wednesday, August 11, 2010

Sorry, Kid: No License, No Lemonade

New York Times article "Sorry, Kid: No License, No Lemonade" (August 6th, 2010) offers a concise display of the deeper, recurrent misunderstanding of economics the Times represents. The article discusses a child's lemonade stand shut down by County health inspectors.

Julie Murphy, a 7-year-old Oregonian, set up a lemonade stand on July 29 at an art fair in northeast Portland. County health inspectors shut her down, however, telling Julie and her mother, Maria Fife, that they needed a temporary restaurant license, which costs $120. The penalty for selling food without a permit, they warned, was $500.

Discussing this degree of regulation, the Times only gives an open-ended, if suggestive, quote.

The Health Department employees were doing their jobs, he said, and “there’s a reason those laws exist,” but “a 7-year-old selling lemonade isn’t the same as a grown-up selling burritos out of a cart.” As for the health inspectors, Mr. Cogen said he had “engaged them in a conversation” about professional discretion.

There are indeed reasons that such regulations exist, but they aren't the safety of the public. Health regulations that involve heavy lump-sum taxation are inevitably supported and enhanced, if not created, by business interests, rather than consumers. The New York Times, as a liberal flagship, offers the standard idea: organized government protects non-organized consumers from organized business interests. The Stiglerian economic analysis offers instead: organized government and organized business combine to fleece non-organized consumers through the destruction of competitive forces. Indeed, even regulatory agencies that may have been created in response to market failures are subject to "regulatory capture," eventually corrupting the very organizations intended to "police" them.

The manner in which such lump-sum regulations impact market quantity is displayed graphically below (click to enlarge), assuming fixed-price heterogeneity and an increasing marginal cost of production. As we can see, the quantity provided is reduced and the price is increased. Both consumers and businesses are hurt, as their surplus is reduced.
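A minimal numerical sketch of the effect, using made-up valuations and costs, under the simplifying assumption that each seller supplies one unit (so the lump-sum fee determines entry): the fee drives out marginal sellers, raising price, cutting quantity, and shrinking both surpluses.

```python
import numpy as np

values = np.arange(1, 101)   # hypothetical buyer valuations, one unit demanded each
costs = np.arange(1, 101)    # hypothetical seller costs, one unit supplied each

def equilibrium(fee):
    """Scan prices for the lowest one at which supply covers demand."""
    for p in range(1, 102):
        supply = int(np.sum(costs + fee <= p))   # sellers covering cost plus fee
        demand = int(np.sum(values >= p))        # buyers valuing the good at p or more
        if supply >= demand:
            q = min(supply, demand)
            consumer_surplus = int(np.sum(np.sort(values)[::-1][:q] - p))
            producer_surplus = int(np.sum(p - (np.sort(costs)[:q] + fee)))
            return p, q, consumer_surplus, producer_surplus

no_fee = equilibrium(0)      # (price, quantity, CS, PS) without the license fee
with_fee = equilibrium(20)   # the same market with a $20 lump-sum fee
print(no_fee)    # (51, 50, 1225, 1275)
print(with_fee)  # (61, 40, 780, 820)
```

Price rises from 51 to 61, quantity falls from 50 to 40, and both consumer and producer surplus fall--the distortion the diagram depicts.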

This is not necessarily the end of the story. Product quality may be endogenous, chosen along a spectrum of production price-quality combinations. Instead of a supply-and-demand analysis, we might look at quality being distorted rather than quantity, with similar utility impact. The distortion is the focus, rather than its dimension.

We might add that the "professional discretion" Mr. Cogen refers to is the source of the sort of corruption Corrections is referring to. To extend a quote of Gary Becker's to one of our own, discretionary power is the most corruptive sort of government power.

Corrections thinks the lesson to be learned is that, as a rule, whatever power created by a government for whatever reason will inevitably be used to destroy competition and harm consumers in the long run. The only stable source of benefit to consumers is through competition, not government regulation, which inevitably destroys competition.

Monday, August 9, 2010

How to Lose an Election Without Really Trying

New York Times opinion "How to Lose an Election Without Really Trying" (August 7th, 2010) discusses political "amnesia", a concept that sounds particularly non-economical. Corrections suggests an alternative model.
Betting on amnesia is almost always a winning, not a losing, wager in America. Angry demonstrators at health care town-hall meetings didn’t remember that Medicare is a government program, and fewer and fewer voters of both parties recall that the widely loathed TARP was a Bush administration creation supported by the G.O.P. Congressional leadership.
There may be a real reason for political "amnesia." We might take a "regime switching" model as an explanation. The Republican Party can be in one of two states: one in which most Republicans want to reduce government intervention, and one in which they do not.  Individuals do not know what state or "regime" Republicans are in, but have signals.  (Regular readers will see the familiarity between this regime switching model and our earlier article introducing the Kalman Filter.)

In any case, we can generate a random variable in which Republicans are in a "regime."  They have a 95% chance of staying in whatever regime they are currently in next period, and a 5% chance of switching regimes.  In our case, we have a signal with noise which broadly tracks the true regime (because of the noise, we can get "false" signals).  In this case, we observe the following signal.  As the blue line is close to one, we see high legislative activity and Republicans are likely to be in a pro-government mood, though they may or may not be.  Using the blue line, our probabilities, and a standard regime switching model, as James Hamilton outlines here (gated) and here (ungated), we can make a "best guess" of what our regime is.  Graphically below, we display our signal in blue and our "best guess" as a red dotted line.  The red line is the "probability" we assign to each state (click to enlarge).

A measure of our success is the following graphical display, in which we again graph the probability that we assign to each state, while also graphing the "truth" (something we wouldn't ordinarily observe) (click to enlarge).  We call this a 'Hamiltonian' regime switch simply because we're following Hamilton's outline, not in relation to the mathematical concept.  Note that times when our guess (red dotted line) spikes while our regime (solid blue) doesn't change reflect noise that led us to believe regimes switched when they did not.  Also note that we are (asymptotically) efficient with our estimator--linear weightings cannot do better, ex ante.
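The simulation and filter described above can be sketched as follows (the noise level, series length, and seed are our own illustrative choices; the recursion follows Hamilton's filter):

```python
import numpy as np

rng = np.random.default_rng(2)

# Two-state Markov chain: regime 0 (anti-intervention) vs. regime 1
# (pro-intervention), with 95% persistence as in the text.
T = 200
P = np.array([[0.95, 0.05],
              [0.05, 0.95]])
regime = np.empty(T, dtype=int)
regime[0] = 0
for t in range(1, T):
    regime[t] = rng.choice(2, p=P[regime[t - 1]])

# Noisy signal centered on the true regime (noise level is our own choice).
noise_sd = 0.5
signal = regime + rng.normal(0, noise_sd, T)

def normal_pdf(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

# Hamilton filter: recursively form P(regime_t | signals through t).
prob = np.empty((T, 2))
prior = np.array([0.5, 0.5])                    # unconditional probabilities
for t in range(T):
    pred = prior @ P if t > 0 else prior        # prediction step
    like = normal_pdf(signal[t], np.array([0.0, 1.0]), noise_sd)
    post = pred * like
    post /= post.sum()                          # update step
    prob[t] = post
    prior = post

# The filtered probability should track the unobserved regime closely.
accuracy = np.mean((prob[:, 1] > 0.5) == (regime == 1))
print(accuracy)
```

With no further signals, iterating the prediction step alone (`prior @ P` repeatedly) drives the probabilities back toward 50/50--the convergence to unconditional probabilities discussed below.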

This modeling situation appears to be more appropriate than suggesting "amnesia."  We can extend this situation in the case of having no signal as well.  In the case of having no data and predicting what regime or state we are in, or the case of forecasting what state we will be in at some future period, probabilities will slowly converge to our unconditional probabilities--a 50/50 probability of being in Regime 1 or Regime 2.

This seems to be an adequate story for voter "amnesia."  Voters observe strong signals of the regime Republicans are in when they have legislative power.  This may be the Medicare Prescription Drug Improvement and Modernization Act of 2003, for instance.  In such a case, voters understand with a clear signal "where" Republicans are.  When they are out of power for a time, or with a noisy signal, they may be less sure than they were two years ago--they recognize the regime can switch.

Corrections suggests that this sort of model is more satisfactory and economical than a model positing "amnesia" in voters.

Friday, August 6, 2010

Hiroshima 65 years later: US attends ceremony, but offers no apology

Christian Science Monitor article "Hiroshima 65 years later: US attends ceremony, but offers no apology" (August 6th, 2010) offers a mildly confusing omission when discussing the August 6th and August 9th, 1945 atomic bombings of Hiroshima and Nagasaki.

Some Japanese still want an apology for the atomic bombings of Hiroshima and Nagasaki, while others complained about the absence of President Obama.

Corrections is confused at their desire for an apology. It would appear to any student of coetaneous documentation that nuclear bombing was a Pareto-improving (if unilateral) decision. Surrender was not an immediate option. Waiting for surrender would have caused more continuous Chinese civilian deaths in Manchuria. Waiting would also have given the Soviet Union the opportunity to invade; it is worth recalling the mass rapes of East Germans in the aftermath of the Soviet invasion (approximately 240,000 East German women died in connection with Soviet rapes, out of a population of around 19 million), to say nothing of massive German deportations to the Gulag and tens of thousands of deaths in Speziallager as the war ended. A planned invasion, Operation Downfall, composed of two separate invasions, Operation Olympic and Operation Coronet, would have caused anywhere from 500,000 to 1,000,000 Japanese deaths, along with between 109,000 and 800,000 Allied deaths. Comparably, the bombings caused between 8 and 13 Allied deaths (POWs held in Nagasaki) and between 150,000 and 246,000 Japanese deaths. We forego discussions of other paths of history (for example, the singular bombing of Hiroshima, or a "display" bombing for Japanese observers), suggesting merely that examination of wartime documents offers similar analyses.

The decisions are graphically represented below (click to enlarge). It is difficult to view without opening in a larger window, due to the scope of differences in deaths. The x-axis represents Allied deaths, while the y-axis represents Japanese deaths. The grey-blue area represents the maximum and minimum estimates of Secretary of War Henry Stimson. The green bars represent the Joint Chiefs' estimate from earlier in the year. The red bars represent the actual deaths from the nuclear bombings. Finally, note that the 8-13 Allied deaths are not displayed in red because they do not render in a manner discernible from zero.

As we can see from the diagram, the nuclear bombings were clearly the favorable outcome for almost any weighting of Japanese and Allied deaths. This analysis foregoes discussion of casualties, rather than fatalities, which would only strengthen the point. Indeed, as an interesting side fact, 500,000 Purple Hearts were produced in anticipation of the invasions--these have instead been used for all Purple Hearts given out in the Korean War, Vietnam War, Operations Desert Shield/Storm, Operation Enduring Freedom, and Operation Iraqi Freedom/New Dawn. There are still approximately 100,000 Purple Hearts left over from this anticipatory production run.
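The claim that the bombings dominate for almost any weighting can be checked directly against the death ranges quoted above, comparing the worst-case bombing toll against the best-case invasion toll:

```python
# Fatality ranges quoted above (casualties would only strengthen the point).
invasion = {"japanese": (500_000, 1_000_000), "allied": (109_000, 800_000)}
bombing = {"japanese": (150_000, 246_000), "allied": (8, 13)}

# For each weight w on Japanese deaths, compare the worst-case bombing toll
# against the best-case invasion toll.
dominated = {}
for w in (0.0, 0.25, 0.5, 0.75, 1.0):
    bombing_worst = w * bombing["japanese"][1] + (1 - w) * bombing["allied"][1]
    invasion_best = w * invasion["japanese"][0] + (1 - w) * invasion["allied"][0]
    dominated[w] = bombing_worst < invasion_best

print(dominated)  # True at every weighting
```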

It would appear to Corrections an apology by the United States for the use of nuclear weapons does not appear appropriate, given the Pareto-improving nature of the decision to bomb Hiroshima and Nagasaki. Rather, from our analysis it appears that the Japanese should offer an expression of gratitude for the bombings, which saved hundreds of thousands of Japanese lives.

Thursday, August 5, 2010

Lower the voting age to 10

Washington Post editorial "Lower the voting age to 10" (August 5th, 2010) argues that in order to lower the discount rate applied to politicians' decisions, the voting age should be lowered to 10. Corrections notes that this suggestion is not likely to have any discernible impact, due to intergenerational altruism (this applies both to the possibly facetious specific suggestion and to the general idea).

There are about 35 million Americans ages 10 to 17. Giving them the vote would transform our political conversation. It would introduce the voice we're sorely missing -- a call to stewardship, of governing for the long run, via the kind of simple, "childlike" questions that never get asked today.

Letting alone the likely real manipulative intent of the suggestion, this is not likely to actually have a real impact on political discount rates. The average inheritance is currently $90,000, but lifetime income transfers from parents far outstrip this figure, and it is important to note, as "The Role of Intergenerational Transfers in Aggregate Capital Accumulation" (Kotlikoff and Summers, 1981) does, that the bulk of the $3.884 trillion of U.S. wealth holdings in 1974 consists of accumulated intergenerational transfers.

Parents are currently transferring their preferred amount to children. Let us imagine, for the sake of the author's argument, that children do not care about parents at all, while parents clearly care about their children, to the tune of a significant portion of their lifetime wealth. Furthermore, money is fungible. The decision to give children the vote is, by the author's point, equivalent to giving them a small transfer of wealth, which will have no impact on the total wealth they will be given; it simply changes the form in which they receive it.

To illustrate the point, let us say that a child is receiving $1,000 in lifetime wealth from their parent. Their parent also makes political decisions that cost the child $100. Giving the child the vote is equivalent to forcing the parent to transfer $100 in cash--they will simply transfer $900 in other cash and $100 in forced transfer, rather than $1000 in other cash.
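The fungibility argument reduces to a one-line offset, sketched with the hypothetical figures above:

```python
# The text's hypothetical figures: $1,000 of intended lifetime transfers,
# and a vote that forces $100 of transfers through policy.
intended_total = 1_000

def cash_transfer(forced_transfer):
    # With an interior solution, the altruistic parent offsets any forced
    # transfer dollar for dollar.
    return intended_total - forced_transfer

for forced in (0, 100):
    total = forced + cash_transfer(forced)
    print(forced, cash_transfer(forced), total)  # total is 1000 either way
```

The offset breaks down only at a corner solution, where the forced transfer exceeds what the parent intended to give--hence our claim that an interior solution neutralizes the proposal.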

Corrections suggests that altruism is so strongly present in parents that an interior solution is inevitable. The only feasible argument (one the author did not make) is one of public goods and private transfers.