Conceptual Problems for Consequentialism

A Guest Post by John & Oskar

"Let us look more closely at the type of economy which is represented by the 'Robinson Crusoe' model, that is an economy of an isolated single person or otherwise organized under a single will. This economy is confronted with certain quantities of commodities and a number of wants which they may satisfy. The problem is to obtain a maximum satisfaction. This is . . . indeed an ordinary maximum problem, its difficulty depending apparently on the numher of variables and on the nature of the function to he maximized; but this is more of a practical difficulty than a theoretical one . . .

Consider now a participant in a social exchange economy. His problem has, of course, many elements in common with a maximum problem. But it also contains some, very essential, elements of an entirely different nature. He too tries to obtain an optimum result. But in order to achieve this, he must enter into relations of exchange with others. If two or more persons exchange goods with each other, then the result for each one will depend in general not merely upon his own actions but on those of the others as well. Thus each participant attempts to maximize a function (his above-mentioned 'result') of which he does not control all variables. This is certainly no maximum problem, but a peculiar and disconcerting mixture of several conflicting maximum problems. Every participant is guided by another principle and neither determines all variables which affect his interest.

This kind of problem is nowhere dealt with in classical mathematics. We emphasize at the risk of being pedantic that this is no conditional maximum problem, no problem of the calculus of variations, of functional analysis, etc. It arises in full clarity, even in the most 'elementary' situations, e.g. when all variables can assume only a finite number of values.

A particularly striking expression of the popular misunderstanding about this pseudo-maximum problem is the famous statement according to which the purpose of social effort is the 'greatest possible good for the greatest possible number'. A guiding principle cannot be formulated by the requirement of maximizing two (or more) functions at once.

Such a principle, taken literally, is self-contradictory. (In general one function will have no maximum where the other function has one.) It is no better than saying, e.g., that a firm should obtain maximum prices at maximum turnover, or a maximum revenue at minimum outlay. If some order of importance of these principles or some weighted average is meant, this should be stated. However, in the situation of the participants in a social economy nothing of that sort is intended, but all maxima are desired at once—by various participants.

One would be mistaken to believe that it can be obviated, like the difficulty in the Crusoe case . . . by a mere recourse to the devices of the theory of probability. Every participant can determine the variables which describe his own actions but not those of the others. Nevertheless those 'alien' variables cannot, from his point of view, be described by statistical assumptions. This is because the others are guided, just as he himself, by rational principles—whatever that may mean—and no modus procedendi can be correct which does not attempt to understand those principles and the interactions of the conflicting interests of all participants.

Sometimes some of these interests run more or less parallel—then we are nearer to a simple maximum problem. But they can just as well be opposed. The general theory must cover all these possibilities, all intermediary stages, and all their combinations."

—John von Neumann & Oskar Morgenstern, Theory of Games and Economic Behavior (pp. 10-11)
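The passage's claim that "maximizing two (or more) functions at once" is self-contradictory can be seen even in a toy finite case. A minimal sketch, with two invented functions over the same set of options:

```python
# Two made-up "utility" functions over the same finite set of options.
# In general one function's maximizer is not the other's, so the demand
# "maximize both at once" picks out no option at all.
options = [0, 1, 2, 3, 4]
f = lambda x: -(x - 1) ** 2   # peaks at x = 1
g = lambda x: -(x - 3) ** 2   # peaks at x = 3

argmax_f = max(options, key=f)  # 1
argmax_g = max(options, key=g)  # 3
assert argmax_f != argmax_g  # no single option maximizes both functions
```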



Isn't this equivalent to what I've said for years on Catallarchy - that the utilities of individuals cannot validly be weighed against each other?


And it's old hat. This is the fundamental idea of game theory, which has been around for a while.


The book the quote comes from has just been released in a 60th anniversary commemorative edition.


And since Catallarchy has only existed for a few years, it has been old hat for the entire period John T. Kennedy is talking about.

I haven't been here as long

I haven't been here as long as everyone else, but your argument seems not to be that utilities "cannot be weighed against each other," but rather that they shouldn't be.

Squeezing out of it

Utilitarians may be able to squeeze out of it by asserting the theoretical possibility of weights (just as solipsists can endlessly defend the theoretical possibility that only they themselves exist, and just as angel faddists can comfortably maintain their belief that angels are all around us), but von Neumann and Morgenstern's description of the phenomenon highlights features which do not lend themselves to such a treatment.

The very way you worded your comment implies, by omission, that there is no problem where there is in fact a very big one. The claim is not merely that utilities should not be weighed, in the sense that dogs should not be fed chocolate or that small children should not be left unattended, but that weighing does not describe the actual situation, in which "nothing of that sort is intended, but all maxima are desired at once—by various participants." To weigh, you (i.e. the utilitarian, not Scott) must first assign weights, and it is you in your ivory tower who assign them, because the people down below your lofty perch are not assigning weights at all; each one is pursuing his particular maximum with gusto.

These weights are something you (the utilitarian) import into the subject matter. In writing about what "the general theory must" do, von Neumann and Morgenstern argue that the subject matter itself demands, for its proper understanding, a quite different treatment: a game-theoretic approach, as we have now come to call it, which does not artificially seek to derive a single maximum from the various individual utility functions but which recognizes and deals with their separateness and their interplay.
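That separateness and interplay can be sketched as a toy two-player game, in which each participant's best choice depends on a variable he does not control. The payoff numbers below are invented purely for illustration:

```python
# A hypothetical 2x2 game: each player's payoff depends on BOTH players'
# choices, so neither faces an ordinary maximum problem.
# payoffs[(row_choice, col_choice)] = (row player's payoff, column player's payoff)
payoffs = {
    ("A", "A"): (3, 1),
    ("A", "B"): (0, 0),
    ("B", "A"): (0, 0),
    ("B", "B"): (1, 3),
}

def best_response_row(col_choice):
    # The row player controls only his own variable; the column player's
    # choice is, from his point of view, an "alien" variable.
    return max(["A", "B"], key=lambda r: payoffs[(r, col_choice)][0])

def best_response_col(row_choice):
    return max(["A", "B"], key=lambda c: payoffs[(row_choice, c)][1])

# The row player's optimum shifts with the column player's choice:
print(best_response_row("A"))  # A
print(best_response_row("B"))  # B
# And no single cell maximizes both payoffs at once: (A, A) is best for
# the row player, (B, B) is best for the column player.
```

Each player's "maximum problem" is well defined only once the other's choice is fixed, which is exactly why the passage says no recourse to statistics over the "alien" variables will do.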

The utilitarians are engaged in wishful thinking because of a failure of imagination. The subject matter itself does not come with weights that would allow them to engage in the activity that defines utilitarianism: summing the utility functions to come up with a global utility function. They wish such weights into being. They could have proceeded in some other way, but their imagination failed them. Summing the utilities was the best they could imagine at the time.

Game theoretic approaches to morality exist, and in fact (I would argue) come very close to describing the real basis for people's moral beliefs (i.e., they are adaptations to a game which has been played repeatedly for hundreds of millions of years). The utilitarian approach is now as obsolete as phlogiston and the luminiferous aether.

The utilitarians maintain the possibility of weighing utilities because they need it to be so in order to proceed. That is wishful thinking: believing something because you need it to be true in order to proceed in the way that you, through a failure of imagination, have chosen.


I don't understand why this is problematic for the utilitarian (or consequentialist, as we style him here). I do understand why "the greatest good for the greatest number" is problematic. But as to maximizing more than one utility, it seems the consequentialists' strongest move (and one many make) is simply denying that distribution is relevant: only the total utility, summed over all individuals, matters. That is only one maximum, by my count.  So I don't see the conceptual problem.

"the total utility, summed

"the total utility, summed over all individuals" is an attempt to define "the greatest good for the greatest number". Since you recognize the later is problematic, the former must be.

Utility is not a number; it is a set of descriptions of an individual's preference for one choice over another. It can be neither summed nor compared across individuals.
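A minimal sketch of why ordinal preference numbers cannot be summed across people. The names and the particular numbers below are invented; the point is that any strictly increasing relabeling encodes the same preferences, so any "social sum" built from the labels is an artifact of the encoding:

```python
# Ordinal preferences: only the ORDER matters.
alice = {"apple": 2, "banana": 1}              # Alice prefers apple to banana
alice_rescaled = {"apple": 1000, "banana": 1}  # the SAME preferences, relabeled

bob = {"apple": 1, "banana": 2}                # Bob prefers banana to apple

# Both encodings order Alice's choices identically...
assert (alice["apple"] > alice["banana"]) == \
       (alice_rescaled["apple"] > alice_rescaled["banana"])

# ...but a "total utility" swings wildly with the arbitrary encoding:
print(alice["apple"] + bob["apple"])           # 3
print(alice_rescaled["apple"] + bob["apple"])  # 1001
```

Nothing in either person's behavior distinguishes the two encodings, so nothing in the subject matter picks out which sum is the "real" one.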

It's a problem because

It's a problem because resources, and individual valuations of those resources, are heterogeneous in quantity, space, and time. Most of that information is unknown to any central authority; not only is the mere act of trying to gather it costly, but people have an incentive to lie to such an authority.

You are imagining that there is some maximum and that it is then distributed unevenly. That's not the problem. The problem is that you don't know what the maximum value is in the first place, and distributing the goods doesn't give you that information either.

Even if you assumed you had all the exact details, it then becomes some kind of bizarre recursive backwards bin-packing problem where the objects and bins can grow and shrink depending on how you pack them. If you stuff too many into a single bin, the size of the objects actually shrinks as the bin expands. For instance, you could give everything to me, a single bin, which would certainly expand my utility but would make each item of much less value to me. Give me one razor blade and I will value it greatly, but give me a billion and I might actually see them as trash that needs disposing of.
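The razor-blade point is diminishing marginal valuation: each extra unit is worth less than the last, so "the size of the objects shrinks as the bin expands." A toy sketch, using an invented logarithmic valuation (a fuller model would even let the marginal value go negative, as with the billion blades):

```python
import math

# Invented diminishing-returns valuation: total value grows like log(1 + n),
# so each additional blade adds less than the one before.
def total_value(n_blades):
    return math.log1p(n_blades)

def marginal_value(n_blades):
    # Value added by the n-th blade.
    return total_value(n_blades) - total_value(n_blades - 1)

print(marginal_value(1))              # value of the first blade (log 2)
print(marginal_value(1_000_000_000))  # value of the billionth blade: ~1e-9
```

The packing interacts with the valuations: how much "utility" an allocation yields depends on the allocation itself, which is part of why no central planner can read the maximum off the raw data.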

This is what Hayek was talking about when he said that socialism can't do economic calculation.

The problematic part

I would say that the core problematic part is the assignment of weights. Until you do that, you can't blithely talk about such a creature as "the total utility" or about summing.

While it is theoretically possible to assign weights, the weights are not in evidence but rather must be imported into the picture by the utilitarian. Utilitarians evidently believe that the weights are somehow implicit in the reality, they evidently believe that they are not just making it all up. But this belief is rather like the belief of angel faddists that there are angels all around us. The utilitarians might be right - there might be objective weights - but by the same token, the angel faddists might also be right.

Edit - Arthur is right that utility is an ordering of preferences, not a cardinal number. This adds to the difficulty of summing, and "difficulty" is an understatement. Brian may be on to something, but I think it needs to be expanded - i.e., is there a connection between the theoretical hurdles of combining utilities and the socialist calculation problem? Sounds promising, but it needs to be fleshed out a bit.

By the way, I really hope you see that, "everyone is equally important" is not an answer to the problem of assigning weights. It's not a matter of deciding who is more important than whom. Even if we stipulate equal importance the problem remains.

"I would say that the core

"I would say that the core problematic part is the assignment of weights."

This seems to be a different issue than the one Matt's going after. If the only problem is interpersonal comparison of utility, then this adds nothing new.

The core problem

...really is the definition of utility.

I may be beating a dead horse here, but:

- utility is not happiness; it's what's being maximized ex ante through purposeful action.

- utility is not the utility function. Indeed, the set of preferences known as utility can be conveniently mapped onto a multivariate utility function; better yet, there is always at least one such function that accounts for risk preference in a nice way. But while this is definitely cool, the actual values of the function are a mathematical artifact, free of any empirical meaning by themselves.

- utility is not money. While adding a specific amount of money can make me tip from one choice to the other, it is not a quantification of how much I prefer one situation to another, since I do not value money as such but different situations, each with a specific amount of money in it.

There is room for summing and comparing happiness between different persons, but that implies measuring happiness (as a neurological process) and it must be kept in mind that not everyone seeks happiness.
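The middle point above — that the function's values are a mathematical artifact — is the standard observation that a von Neumann-Morgenstern utility function is unique only up to a positive affine transformation. A sketch with invented outcomes and lotteries:

```python
# u and v = 2*u + 7 encode the SAME preferences over lotteries, so the raw
# numbers carry no empirical meaning by themselves. Outcomes and lotteries
# here are invented for illustration.
u = {"win": 10.0, "lose": 0.0, "draw": 4.0}
v = {k: 2 * x + 7 for k, x in u.items()}  # positive affine transform of u

def expected_utility(util, lottery):
    # lottery: dict mapping outcome -> probability
    return sum(p * util[outcome] for outcome, p in lottery.items())

lottery_a = {"win": 0.5, "lose": 0.5}
lottery_b = {"draw": 1.0}

# Both representations rank the lotteries identically:
prefer_u = expected_utility(u, lottery_a) > expected_utility(u, lottery_b)
prefer_v = expected_utility(v, lottery_a) > expected_utility(v, lottery_b)
assert prefer_u == prefer_v
```

Since any positive affine rescaling of one person's function leaves his choices unchanged while arbitrarily reweighting his contribution to a "social sum," the sum is not pinned down by anyone's behavior.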


I would be surprised if it was new

After all, the quote is not new, in the sense of its literal age. And it hasn't been hiding in obscurity all these years - in fact the ideas have been tremendously influential.

Yes, but I doubt Matt would

Yes, but I doubt Matt would trouble us with a dusty old point.  We've been over the interpersonal comparison deal many times.

Distribution irrelevant to whom?

<i>"it seems the consequentialists' strongest move (and one many make) is
simply denying that distribution is relevant: only the total utility,
summed over all individuals, matters.."</i>


By this theory: if we can save X individuals by carving you up for parts, there ought to be a number X at which carving you up for parts increases total utility and thus becomes efficient and desirable, correct?

How would we find that number?

Rule consequentialism is a

Rule consequentialism is a bit more sophisticated than that. If we allow carving people up for parts, there will be very little sleeping, which is a bad thing...

On the other hand, if killing an innocent person can save the world from a certain nuclear holocaust (so X ~ 6,000,000, for example), many people would find that acceptable. To them, the rule means a 1-in-6,000,000 chance of dying in the event of a nuclear holocaust instead of a certainty.

I am not a consequentialist though for reasons that would take a long post to detail.

John, This isn't a strong


John, this isn't a strong argument. It may well be difficult to find the exact point of balancing utility, but such problems with vagueness plague our rights-based theories as surely as they do utilitarianism. E.g., how do we determine how to punish someone who has violated another's rights?