The abstract structure that you describe here is very interesting. It arises whenever we have one value that is a kind of "compromise" between some other conflicting values, and this "compromise" value should seek to minimize its distance from all these conflicting values. This structure will have many applications besides the one that you focus on - where the conflicting values are the "welfare levels" (or "utilities") of individuals, and the value that strikes a compromise between them is labelled "the moral value" of the "world" or "welfare distribution".
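To make the distance-minimization structure concrete, here is a minimal sketch (the function names, numbers, and choice of metric are my own illustration, not anything from the post): given each conflicting value's assessment of a set of alternatives, the compromise valuation is the one minimizing total distance to all of them.

```python
import numpy as np

def compromise(value_profiles: np.ndarray, metric: str = "L2") -> np.ndarray:
    """Compromise valuation minimizing total distance to the conflicting
    valuations.

    value_profiles has shape (n_values, n_alternatives): row i is value i's
    assessment of every alternative.  For squared Euclidean (L2) distance
    the minimizer is the pointwise mean; for city-block (L1) distance it
    is the pointwise median.
    """
    if metric == "L2":
        return value_profiles.mean(axis=0)
    if metric == "L1":
        return np.median(value_profiles, axis=0)
    raise ValueError("only L1 and L2 are illustrated here")

# Three conflicting valuations of four alternatives (illustrative numbers):
profiles = np.array([[3.0, 1.0, 4.0, 1.0],
                     [2.0, 2.0, 2.0, 2.0],
                     [0.0, 5.0, 1.0, 3.0]])
print(compromise(profiles, "L2"))  # ≈ [1.67 2.67 2.33 2.0]
print(compromise(profiles, "L1"))  # [2. 2. 2. 2.]
```

Note how much turns on the choice of metric: squared Euclidean distance yields utilitarian-style averaging, while L1 distance yields the pointwise median, which ignores how extreme the outlying assessments are.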
I actually suspect that some of the other applications of this abstract structure are going to be more illuminating than the one that you focus on. (This is because I doubt that this notion of the "moral value of a world" has the kind of practical significance for the decision making of ethically virtuous agents that you seem to assume it has.)
A second issue is that I am not sure how plausible this sort of approach will be if it is used to compare worlds with different populations. If the same set of conflicting values ranks all the alternatives, it is easy to see how there can be a "compromise" between these conflicting rankings. But if some of these conflicting values are completely silent about some of the alternatives, the idea of a "compromise" becomes a bit murkier...
Yes, you’re right about the variable population case. Not very clear what to say there. And those questions about population ethics are tricky for contractualism in general, since it’s unclear whom you’re asking to agree to the contract.
You’re right about the other applications too! I’ve actually used the idea a couple of times for judgment aggregation problems. Here: https://philpapers.org/rec/PETAIA. And here: https://philarchive.org/rec/PETAAW-3. If you’re interested! I got the idea originally from this paper: https://philpapers.org/rec/KONLBM-2
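For readers who want the shape of the judgment-aggregation application: the same distance-minimization structure appears there as distance-based merging, as in the logic-based merging literature linked above. A toy sketch, where the agenda, judgment sets, and consistency predicate are purely illustrative:

```python
from itertools import product

def merge_judgments(judgment_sets, consistent):
    """Distance-based judgment aggregation: among all logically consistent
    0/1 verdicts on the agenda, return those minimizing the total Hamming
    distance to the individual judgment sets."""
    n = len(judgment_sets[0])
    hamming = lambda a, b: sum(x != y for x, y in zip(a, b))
    candidates = [c for c in product((0, 1), repeat=n) if consistent(c)]
    score = lambda c: sum(hamming(c, j) for j in judgment_sets)
    best = min(map(score, candidates))
    return [c for c in candidates if score(c) == best]

# Doctrinal-paradox agenda (p, q, p-and-q): issue-by-issue majority gives
# (1, 1, 0), which is inconsistent.  Distance-based merging repairs it,
# though here the repair is tied three ways.
consistent = lambda c: c[2] == (c[0] and c[1])
judges = [(1, 1, 1), (1, 0, 0), (0, 1, 0)]
print(merge_judgments(judges, consistent))  # [(0, 1, 0), (1, 0, 0), (1, 1, 1)]
```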
I’d be interested to hear if you had other sorts of case in mind…
The other applications that I had in mind were principally to the aggregation of reasons for action into an overall "compromise" judgment of how much reason one has for each action "all things considered". See: https://philpapers.org/rec/WEDTRA-2. In that paper, I argued for a Harsanyi-style aggregation, as a weighted sum of the measures of the conflicting values (I tried to say a bit more about where the "weights" come from here: http://tinyurl.com/2cf9ry3e).
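A minimal sketch of the Harsanyi-style weighted-sum aggregation just described (the weights and reason-strengths below are illustrative placeholders, not values from either paper):

```python
def overall_reason(strengths, weights):
    """Weighted-sum (Harsanyi-style) aggregation: the all-things-considered
    reason for an action is a weighted sum of the measures that the
    conflicting values assign to it."""
    return sum(w * s for w, s in zip(weights, strengths))

weights = [0.5, 0.3, 0.2]  # where these come from is the hard question
print(overall_reason([4.0, -1.0, 2.0], weights))  # 2.1
print(overall_reason([1.0,  3.0, 0.0], weights))  # 1.4
```

Worth noting: with weights summing to one, this weighted sum is exactly the minimizer of the weight-adjusted squared distances to the conflicting measures, so the Harsanyi-style rule is the special case of the distance-minimization structure under a weighted Euclidean metric.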
But it would be interesting if different sorts of aggregation were plausible in other contexts...
If those making the contract are uncertainty-averse (pessimistic) rank-dependent utility (RDU) maximizers, then the prioritarian solution arises, with utilitarianism as the limiting case of preferences linear in the probabilities. Ebert ("Rawls and Bentham reconciled") was the first to spell this out, I think.
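A sketch of that Ebert-style point, under assumptions of my own choosing: contractors stand behind an equiprobable veil of ignorance, and the convex (hence pessimistic) weighting function w(p) = p**gamma is applied to decumulative probabilities. Pessimism overweights the worst-off positions, giving a prioritarian, rank-weighted evaluation; gamma = 1 is the linear, utilitarian case, and large gamma approaches Rawlsian maximin.

```python
def rdu_value(welfares, gamma=2.0):
    """Rank-dependent evaluation of a welfare distribution behind an
    equiprobable veil of ignorance, with weighting function
    w(p) = p**gamma on decumulative probabilities (gamma > 1 = pessimism)."""
    u = sorted(welfares)              # worst-off first
    n = len(u)
    w = lambda p: p ** gamma
    # decision weight on the i-th worst position:
    #   pi_i = w(P(doing at least as well as position i))
    #        - w(P(doing strictly better than position i))
    return sum((w((n - i) / n) - w((n - i - 1) / n)) * ui
               for i, ui in enumerate(u))

dist = [1.0, 5.0, 9.0]
print(rdu_value(dist, gamma=1.0))    # 5.0    -- plain utilitarian average
print(rdu_value(dist, gamma=2.0))    # ≈ 3.22 -- priority to the worst-off
print(rdu_value(dist, gamma=50.0))   # ≈ 1.0  -- approaching maximin
```

The utilitarian limiting case mentioned in the comment is the gamma = 1 line: once the weighting is linear in the probabilities, the rank-dependence washes out and only the average matters.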
Possibly related is Peter Wakker's new paper: https://personal.eur.nl/wakker/pdf/deu.pdf
Ah, that’s very interesting! Thanks, both!
This looks fascinating. John Horowitz and I tried this with Time and Risk, decades ago, but I can't remember where we came out.