Discussion about this post

Matt Weiner:

I'm not sure the unmarked clock example works as a case where you'll pay not to receive evidence, because it depends on the fact that you'll bet according to your credences. But if my credence is non-luminous, how do I bet on it?

I guess I can make sense of the idea of non-luminous revealed credences, where your actions show that you believe (or are uncertain) about something that you didn't know you believed. But that seems uncomfortable in this case, because as soon as I take the bet I slap my forehead and say "Crap! Since I just took this bet, I now realize that my credence is 2/3 that the number is odd, which means it must in fact be even!" Or something like that.

In any case, it seems like making this work for the unmarked clock case requires something that winds up not as psychologically plausible as the original unmarked clock case? There will definitely be cases where you can know that receiving information will be bad for you because you systematically misevaluate the information, like certain Sleepy Detective cases. But then those may involve weakness of the will and/or misevaluation of so-called higher-order evidence, and it's maybe less clear that we can say there's no irrationality involved?

Also thanks for the Janina Hosiasson link, very cool! Though now I've looked at her Wikipedia entry and I'm depressed.

Kenny Easwaran:

“the teleological framework allows us to assess whether forgetting is rational from your point of view—if you were to have control over doing it, would you be rational to choose it?”

I would put things slightly differently. The rationality of *choosing* to do something is sometimes quite different from the rationality of *being such that* you do it.

This is related to Williamson’s statement that “forgetting is not irrational; it is just unfortunate.” Something similar is sometimes said about being the sort of agent that has the ability to freely choose in Newcomb problems or games of chicken, and thus two-boxes or swerves.

My view is that finite physical beings like humans don’t have the abilities that classic causal decision theorists assume, of being able to freely choose at every moment what behavior we bring about - but we do have some abilities that are incompatible with these, of being able to form habits and set up our future attention.

It would be irrational to choose to one-box or to choose to swerve, but being the kind of being that one-boxes or swerves is often at least partly in our control (just like being the kind of person who remembers four-digit numbers, or who remembers them to two significant figures, or who remembers the last digit, or who has a notepad that important numbers are written down on), and it can thus be evaluated for rationality, just as classic causal decision theorists want to evaluate the actions themselves.

