Great article! Quick question: is there a way to allow rational risk aversion into a utility function while still leaving room to call certain degrees of risk aversion irrational? (Say, for instance, person x prefers a 100% chance of $1 over a 99% chance of $1,000 -- presumably an irrational preference.) Under the assumption that risk aversion is rational, it seems like we are forced to say that if person x is just that risk averse in this case (and in other seemingly very irrational cases), they are still rational.
Yeah, nice point! As long as the utility function is sufficiently concave in money, the preference you describe is rationalizable as expected utility maximization. I tend to be pretty Humean about utilities--i.e., they're not rationally criticizable, though they are morally criticizable, for instance, if you assign high utility to harms to innocents, or low utility to the welfare of others. But I can see the force of the intuition that the sort of utility function that rationalizes the preference you describe is somehow rationally suspect. My feeling about these sorts of intuitions is that they are coloured by our actual experience of people in the world: our experience teaches us that someone who acts as if this is their utility function is likely to be reasoning badly in other ways. So we use their preference over these options as a proxy for judging them, appealing to a correlation we've previously observed between this sort of utility function and poor reasoning about other things.
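Just to make the concavity point concrete, here is a minimal sketch. It assumes a hypothetical power utility u(x) = x**alpha (concave for alpha in (0, 1]) and assumes the gamble pays $0 the other 1% of the time; neither assumption is from the exchange above, they're just the simplest way to run the numbers.

```python
import math

# Hypothetical power utility u(x) = x**alpha; alpha in (0, 1] makes u concave.
def u(x, alpha):
    return x ** alpha

def eu_sure(alpha):
    # Expected utility of the sure option: 100% chance of $1.
    return u(1, alpha)

def eu_gamble(alpha):
    # Expected utility of the gamble: 99% chance of $1,000, else $0 (assumed).
    return 0.99 * u(1000, alpha) + 0.01 * u(0, alpha)

# Indifference point: u(1) = 0.99 * u(1000)  <=>  1000**alpha = 1/0.99.
alpha_star = math.log(1 / 0.99) / math.log(1000)
print(f"threshold alpha = {alpha_star:.6f}")  # ~ 0.001455

for alpha in (0.001, 0.01, 0.1):
    prefers_sure = eu_sure(alpha) > eu_gamble(alpha)
    print(f"alpha = {alpha}: prefers the sure $1? {prefers_sure}")
```

On these assumptions, only alphas below roughly 0.00145 rationalize the preference--a utility function so flat that $1,000 carries barely 1% more utility than $1. So the preference is formally rationalizable, but only by an extraordinarily flat utility function, which perhaps feeds the intuition that something is off.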