Discussion about this post

Ralph Wedgwood:

The abstract structure that you describe here is very interesting. It arises whenever we have one value that is a kind of "compromise" between some other conflicting values, where this "compromise" value should seek to minimize its distance from all of the conflicting values. This structure will have many applications besides the one that you focus on, where the conflicting values are the "welfare levels" (or "utilities") of individuals, and the value that strikes a compromise between them is labelled the "moral value" of the "world" or "welfare distribution".
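To make the structure concrete, here is a minimal numerical sketch (my own illustration, not the post's): when the conflicting values are real numbers, minimizing total squared distance makes the compromise their mean, while minimizing total absolute distance makes it their median, so even the choice of metric shapes what "compromise" means.

```python
# Minimal sketch of the "compromise" structure: pick the single value that
# minimizes its total distance from several conflicting values. The numbers
# and the grid-search approach are illustrative assumptions, not the post's.
import numpy as np

values = np.array([1.0, 4.0, 10.0])    # e.g. individual welfare levels

candidates = np.linspace(values.min(), values.max(), 10001)

# Squared distance: the minimizer is the mean of the conflicting values.
sq_loss = ((candidates[:, None] - values) ** 2).sum(axis=1)
print(candidates[sq_loss.argmin()])    # ~5.0, i.e. values.mean()

# Absolute distance: the minimizer is the median instead.
abs_loss = np.abs(candidates[:, None] - values).sum(axis=1)
print(candidates[abs_loss.argmin()])   # ~4.0, i.e. the median
```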

I actually suspect that some of the other applications of this abstract structure will prove more illuminating than the one that you focus on. (This is because I doubt that the notion of the "moral value of a world" has the kind of practical significance for the decision-making of ethically virtuous agents that you seem to assume it has.)

A second issue is that I am not sure how plausible this sort of approach will be if it is used to compare worlds with different populations. If the same set of conflicting values ranks all the alternatives, it is easy to see how there can be a "compromise" between these conflicting rankings. But if some of these conflicting values are completely silent about some of the alternatives (for instance, an individual's welfare level says nothing about worlds in which that individual never exists), the idea of a "compromise" becomes murkier, as the sketch below illustrates.
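One way to see the murkiness: if a silent value is handled by averaging only over the values that do speak, the resulting ranking can differ from the one you get by treating silence as a neutral default score. The toy numbers and both repair rules below are my own illustrative assumptions, not from the thread.

```python
# Toy illustration of the variable-population worry: one of three values is
# silent (None) about alternative B. Two natural ways of extending the
# "compromise" disagree about the overall ranking.

scores = {
    "A": [2.0, 2.0, 2.0],     # every value evaluates A
    "B": [1.0, 4.0, None],    # the third value is silent about B
}

def ignore_silent(vals):
    """Average only over the values that speak."""
    spoken = [v for v in vals if v is not None]
    return sum(spoken) / len(spoken)

def silence_as_zero(vals):
    """Treat silence as a neutral score of zero."""
    return sum(0.0 if v is None else v for v in vals) / len(vals)

for name, rule in [("ignore silent", ignore_silent),
                   ("silence = 0", silence_as_zero)]:
    ranking = sorted(scores, key=lambda alt: rule(scores[alt]), reverse=True)
    print(name, "->", ranking)
# ignore silent -> ['B', 'A']   (B scores 2.5 vs A's 2.0)
# silence = 0   -> ['A', 'B']   (B scores ~1.67 vs A's 2.0)
```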

John Quiggin:

If those making the contract are uncertainty-averse/pessimistic rank-dependent utility (RDU) maximizers, then the prioritarian solution arises, with utilitarianism as the limiting case of preferences linear in the probabilities. Ebert ("Rawls and Bentham reconciled") was the first to spell this out, I think.
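For readers unfamiliar with RDU: the value of an equal-probability welfare distribution is a weighted sum of welfare levels, with the decision weights obtained by applying a probability-weighting function w to decumulative probabilities. A convex w (the pessimistic case) overweights the worst-off positions, giving the prioritarian flavour, while the identity w recovers the utilitarian average. Below is a minimal sketch; the equal-probability setup and the particular choice w(p) = p² are my own illustrative assumptions.

```python
# Sketch: rank-dependent utility (RDU) of an equal-probability welfare
# distribution. The decision weight on each welfare level x is
# w(P(outcome >= x)) - w(P(outcome > x)). With w(p) = p this reduces to
# the utilitarian mean; a convex w such as w(p) = p**2 overweights the
# worst positions (the pessimistic case). Names and numbers are mine.

def rdu(welfare_levels, w):
    xs = sorted(welfare_levels)            # ascending: worst level first
    n = len(xs)
    value = 0.0
    for i, x in enumerate(xs):
        at_least = (n - i) / n             # P(outcome >= xs[i])
        strictly_better = (n - i - 1) / n  # P(outcome >  xs[i])
        value += (w(at_least) - w(strictly_better)) * x
    return value

welfare = [1.0, 4.0, 10.0]
print(rdu(welfare, lambda p: p))       # 5.0: linear w, the utilitarian mean
print(rdu(welfare, lambda p: p ** 2))  # 3.0: convex w favours the worst off
```

As w becomes more convex (say w(p) = p**k with large k), nearly all the weight lands on the worst-off position, approaching Rawlsian maximin; the linear case is Bentham. That spectrum is the reconciliation the Ebert reference points to.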

6 more comments...
