Saturday, June 5, 2010

Can a Soda Tax Save Us From Ourselves?

The New York Times article "Can a Soda Tax Save Us From Ourselves?" (June 4th, 2010) by Greg Mankiw offers an idea that appears clever at first glance, but ultimately fails empirical inspection, as far as Corrections can discern. Mankiw notes, quite correctly, that most "sin" taxes are rejected prima facie on the first principles of economics--the taxed goods feature neither externalities, nor information asymmetries, nor monopoly. He then offers the idea that smokers or soda-drinkers impose negative externalities on themselves, as a possible justification for a soda tax.

There is, however, an altogether different argument for these taxes: that when someone consumes such goods, he does impose a negative externality — on the future version of himself. In other words, the person today enjoys the consumption, but the person tomorrow and every day after pays the price of increased risk of illness.

This raises an intriguing question: To what extent should we view the future versions of ourselves as different people from ourselves today?

Corrections sees this as a theoretical possibility, but one that does not hold in reality. If individuals feel altruism, a love of others, then surely most feel a sense of philauty, a love of self. In that case, we should expect to see transfers from individuals-now to individuals-tomorrow at least as often as we see consanguineous transfers.

However, there is a deeper, Stiglerian point to be made here. All the conditions necessary for the Coase Theorem to hold are present, as far as Corrections can discern: there is no problem of enforcement, negotiation, or property rights. An individual will therefore reach a Pareto-optimal outcome without government intervention; he is capable of bargaining with himself in the future.

Furthermore, most individuals save. For readers who are convinced that Americans do not save, U.S. net national savings (roughly adjusted for inflation to 2010 dollars) is depicted below.
Before Professor Mankiw's idea is accepted, it must be explained why most individuals save, if individuals tomorrow are entities distinct from individuals today. Should this be counted as charity? Given that most save, if an individual knew that his future self valued health at more than his present self valued soda, he could simply have his future self "pay" him not to drink soda by saving less--an efficient, Coasian transfer. Corrections conjectures that Professor Mankiw's idea is not robust to these considerations.
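Such a transfer can be sketched numerically. The utility figures below are invented purely for illustration; nothing in the post pins them down:

```python
# Toy Coasian bargain between a now-self and a future-self.
# All numbers are hypothetical, chosen only to illustrate the mechanism.
soda_value_now = 2.0      # utility the now-self gets from drinking the soda
health_cost_future = 5.0  # utility the future-self loses if the soda is drunk

# The future-self "pays" the now-self by saving less, shifting t units of
# consumption from tomorrow to today in exchange for the soda going undrunk.
t = 3.0  # any t with soda_value_now < t < health_cost_future works

now_self_gain = t - soda_value_now         # paid off, forgoes the soda
future_self_gain = health_cost_future - t  # avoids the health cost, net of payment

assert now_self_gain > 0 and future_self_gain > 0  # a Pareto improvement
```

Because any payment strictly between the two valuations leaves both selves better off, the supposed externality is internalized without a tax.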

Addendum: Corrections now recognizes that our point is robust even if an individual is not saving, so long as he can go into debt that his future-self must pay off. That is, the presence of savings is not even a necessary condition for Mankiw's point to collapse, as we first suggested.


  1. I googled the article and you are one of the first hits. I like the Coase theorem application, but don't you think there are a lot more problems with his argument than internal inconsistency? Like, it is absurd?

  2. Thank you for your comment!

    Mankiw's post more generally made a point that Corrections agrees with, and the future-self past-self externality argument was, as we read it, a means to an end. The means was setting up some economic circumstance through which we might possibly view taxation as a good idea. The end was correctly (if lightly) suggesting that having the government play the role of parent is a grievous error.

    Our objection was not to Mankiw's main point as we read it, which is, by laying out a paternalist's view in sympathetic terms, displaying its ludicrous nature. Our objection was that even his "economist-sympathetic" argument for paternalism did not hold.

  3. wheninrome15 steps up to the plate...

    I'm going to have to side with Mankiw on this one, but his argument is not completely clear, so I can see why you would go in this direction with it. Maybe he is just aiming at a more general audience, but if we're going to do the real deal, we have to address the elephant in the room, namely dynamic inconsistency. Below are my notes from thinking the matter through, hope they will benefit you as well.

    To frame this issue, let's first consider a 2-period model with discounting (with 2 periods it doesn't matter what sort of discounting is going on, geometric, hyperbolic or otherwise). The agent maximizes u1(x1)+du2(x2) subject to some budget constraint (say, x1+x2=M) where d is the discount factor.
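    A minimal numerical sketch of this problem, assuming log utility and made-up values of d and M (the comment leaves both unspecified):

```python
from math import log

d, M = 0.8, 10.0  # hypothetical discount factor and lifetime endowment

# The agent maximizes u1(x1) + d*u2(x2) subject to x1 + x2 = M.
# Here u1 = u2 = log; solve by brute-force search over a fine grid.
def lifetime_utility(x1):
    return log(x1) + d * log(M - x1)

grid = [M * k / 100_000 for k in range(1, 100_000)]
x1_star = max(grid, key=lifetime_utility)

# With log utility the optimum has the closed form x1* = M / (1 + d),
# so the agent consumes more today than tomorrow whenever d < 1.
assert abs(x1_star - M / (1 + d)) < 1e-3
```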

    In solving this problem, we know that, starting from some allocation (x1, x2), the agent moves an extra dollar to the second period precisely when the transfer causes du2 to go up by more than u1 goes down. If we instead think in a “multiple agents” framework, then such bargaining occurs precisely when agent 2, with utility function du2, gains more from an extra dollar than agent 1, with utility function u1, loses. So you could think of this as Coasian bargaining, but the second-period agent is _not_ someone with utility function u2; rather, he has utility function du2.
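    A quick check of this marginal condition, again assuming log utility and hypothetical parameters: transfers stop exactly where agent 1's marginal loss equals the marginal gain of the agent with utility function du2, not u2:

```python
d, M = 0.8, 10.0  # hypothetical discount factor and endowment

# For u1 = u2 = log, the optimum of max log(x1) + d*log(M - x1) is
# x1* = M / (1 + d) (set the derivative 1/x1 - d/(M - x1) to zero).
x1_star = M / (1 + d)
x2_star = M - x1_star

mu1 = 1 / x1_star        # u1'(x1*): what agent 1 loses from a marginal dollar
mu2 = d * (1 / x2_star)  # d*u2'(x2*): what the agent with utility du2 gains

# At the bargaining outcome the two marginal (dis)utilities are equal.
assert abs(mu1 - mu2) < 1e-9
```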

    [Sidenote: By the way, in one sense it is an illusion that Coasian bargaining is occurring here. Why are the agents trading with utility functions u1 and du2 rather than u1 and u2? The problem is a total unilateral lack of property rights. Agent 1 can steal whatever he wants from agent 2 (provided he has free access to credit, he can even go into debt, which agent 2 will be forced to repay). Agent 2's share is completely determined by agent 1's altruism. If d=0, for example, then agent 2 is simply screwed, unless we really are thinking of him as an agent with utility function du2=0. Another clue is that the outcome is completely independent of the initial assignment of property rights (i.e. period 1 and 2 endowments). But for the present purposes it is actually somewhat useful to continue to suppose Coasian bargaining is occurring, so let's keep it.]

    So, when you say that Coasian bargaining will occur, let us be clear that you mean between agents with utility functions u1 and du2, _not_ u1 and u2. When Mankiw uses the word “externality,” he does not mean that perfect bargaining isn't taking place, but rather that it is taking place at the exchange rate of 1 to d rather than 1 to 1. Coase does not say what the exchange rate should be; the theorem simply says that, given the exchange rate, trade will occur.

    [CONTINUED IN NEXT COMMENT...apparently there is a 4096-character limit...I know, it's ridiculous that this is necessary...]

  4. To say that agent 1 imposes an externality on agent 2 is to say that agent 1 is not fully weighing the effect of his actions on agent 2's utility. But how much should he be weighing it? How much, really, should we care about agent 2? Here we are discounting his native utility function u2 by a factor of d, but maybe that's just reality: we might think discounting _should_ be going on. Perhaps people are fine with the fact that they don't care about tomorrow as much as they care about today. It would be a poor social planner who made it his goal to eliminate discounting that people really wanted around. It's hard to construct a defensible argument that people shouldn't be geometrically discounting. In a multiperiod model with discounting (1, d, d^2, d^3, ...), there is no job for a social planner.

    But on the other hand it's pretty easy to take issue with hyperbolic discounting and the like. Dynamic inconsistency is rotten, and there is value in helping people to eliminate it [see endnote, but don't read it till you finish this paragraph]. Once you have dynamic inconsistency, everything you're saying simply flies out the window. Coase does not answer the question of what the relative price should be; you have to decide that for yourself, you have to take a stand. In the world of dynamic inconsistency, “the” utility function is no longer well-defined, because it depends on the perspective in time that you choose! You may want a “t = minus infinity” perspective or a “t = 10 periods ago” perspective or a “t = now” perspective...but once you pick it you're stuck: you have to evaluate everything from that perspective. It is no longer simply maximizing utility, but rather maximizing utility with respect to time t.

    Go ahead and treat the agent's weights of (1, bd, bd^2, bd^3, ...) as gospel if you like (that's quasi-hyperbolic discounting from t = now; the b captures the notion that the agent discounts in the usual geometric way except that he discounts all future periods by an additional factor of b < 1, i.e. he really cares about now). Have that be your criterion for how resources ought to be allocated if you like...but all the agent's plans will just fly out the window next period, won't they? If agent 1 wants to stop agent 2 from gorging at agent 3's expense, Coase will not save him! Agent 1 wants the terms of trade between periods 2 and 3 to be bd to bd^2 (i.e. 1 to d), but next period they will simply be 1 to bd.
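    That plan reversal can be sketched with hypothetical quasi-hyperbolic parameters (the values of b, d and the two rewards are assumptions, not from the comment):

```python
b, d = 0.5, 0.9           # hypothetical quasi-hyperbolic weights (1, b*d, b*d^2, ...)
early, late = 10.0, 12.0  # reward available in period 2 vs. period 3

# From period 1, both rewards lie in the future, so the extra factor b
# cancels and the comparison is effectively at terms of trade 1 to d.
prefers_late_at_1 = b * d * early < b * d**2 * late

# From period 2, "early" is now and "late" is one period away, so the
# terms of trade become 1 to b*d: the future is discounted extra-harshly.
prefers_late_at_2 = early < b * d * late

# The period-1 plan (wait for the bigger reward) flies out the window.
assert prefers_late_at_1 and not prefers_late_at_2
```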

    One solution is for agent 1 to precommit, and indeed in a world with perfect information and frictionless, complete commitment, there is no dynamic inconsistency problem. But in reality people are often naive or inert. Thus, the reasoning goes, people can potentially be helped by a soda tax that lets them resist soda that isn't really maximizing their utility (with respect to the point in time that you have decided they really care about).

    In matters such as these, protecting people from themselves is always always always about dynamic inconsistency. So if your argument does not go there, then you can be sure that it's not getting to the heart of the matter. The point of this is not at all to convince you that soda taxes are a good thing on net; that will come down to Mankiw's last sentence. But that is Mankiw's point too, that it comes down to his last sentence. I do not think his argument stops short of that.


  5. [Endnote: I said that with dynamically consistent discounting, it's hard to argue that a person should do something else. The reason is that, for any proposed alternative, you would be telling them to do something they'd never want to do, no matter what perspective in time they were looking at it from. But with dynamic inconsistency, their decision about what's best depends on their perspective in time, and so in fact you _have_ to make a judgment call about what perspective to call best. And once you pick the perspective, you are forced to concede that the agent -- who does _not_ stick with the perspective you picked (or any perspective, for that matter) -- is doing things that do not maximize his utility.]

    ok now I'm really done

  6. We have responded to your comment here, wheninrome15:

    We appreciate the opportunity to engage in the subject on a deeper level!