I'm not sure the unmarked clock example works as a case where you'll pay not to receive evidence, because it depends on the fact that you'll bet according to your credences. But if my credence is non-luminous, how do I bet on it?
I guess I can make sense of the idea of non-luminous revealed credences, where your actions show that you believe (or are uncertain about) something you didn't know you believed. But that seems uncomfortable in this case, because as soon as I take the bet I slap my forehead and say "Crap! Since I just took this bet, I now realize that my credence is 2/3 that the number is odd, which means it must in fact be even!" Or something like that.
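Just to make that worry concrete, here's a minimal sketch of the inference I have in mind, assuming a 60-notch clock, evidence consisting of the three-notch window centered on the true position, and uniform credence over that window (the particular numbers are my own stipulations, not anything from the original case):

```python
# Minimal sketch of the unmarked-clock parity point, on the stipulated
# assumptions: 60 notches, evidence = the three notches centered on the
# truth, and credence spread uniformly over that evidence set.

NOTCHES = 60

def credence_in_odd(true_position):
    """Credence that the hand is on an odd notch, given the three-notch evidence window."""
    evidence = [(true_position + d) % NOTCHES for d in (-1, 0, 1)]
    return sum(1 for n in evidence if n % 2 == 1) / len(evidence)

for true_position in (14, 15):
    parity = "odd" if true_position % 2 == 1 else "even"
    print(f"true position {true_position} ({parity}): "
          f"credence in 'odd' = {credence_in_odd(true_position):.2f}")

# Whenever the credence in 'odd' comes out 2/3, the true notch is even,
# and vice versa. So acting on that credence (and thereby noticing it)
# would tell me the parity outright, which is the forehead-slap problem.
```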
In any case, it seems like making this work for the unmarked clock requires something that winds up less psychologically plausible than the original unmarked clock case? There will definitely be cases where you can know that receiving information will be bad for you because you systematically misevaluate the information, like certain Sleepy Detective cases. But then those may involve weakness of the will and/or misevaluation of so-called higher-order evidence, and it's maybe less clear that we can say there's no irrationality involved?
Also thanks for the Janina Hosiasson link, very cool! Though now I've looked at her Wikipedia entry and I'm depressed.
Yeah, I know what you mean. It's pretty unclear what to say about these non-luminous credence cases. I think Daniel Greco has an interesting picture of what's going on in his Fragmentation and Higher-Order Evidence paper. And I also like Kenny's approach, which he describes below. So I'm not really wedded to the sort of picture Williamson is spelling out myself. But I thought it was an intriguing feature of the forgetting framework that if something like this happens, we get the odd disconnect I describe.
I actually think Williamson is saying something really tendentious in saying that having vision accurate to within one notch means that one has “evidence” that consists of a proposition that includes three notches. I think that “evidence” is best understood as a kind of skill (know-that is a type of know-how), and that what you want here is a skill of most effectively using whatever abilities of vision you have to make effective bets (or whatever else it is you’re doing on the basis of your credences). Phrasing it in terms of a proposition seems to be misleading.
“the teleological framework allows us to assess whether forgetting is rational from your point of view—if you were to have control over doing it, would you be rational to choose it?”
I would put things slightly differently. The rationality of *choosing* to do something is sometimes quite different from the rationality of *being such that* you do it.
This is related to Williamson’s statement that “forgetting is not irrational; it is just unfortunate”. This is also sometimes what people say about being the sort of agent that has the ability to freely choose in Newcomb problems or games of chicken, and thus two-boxes or swerves.
My view is that finite physical beings like humans don’t have the abilities that classic causal decision theorists assume (being able to freely choose at every moment what behavior we bring about), but we do have some abilities that are incompatible with these: being able to form habits and set up our future attention.
It would be irrational to choose to one-box or to choose to swerve, but being the kind of being that one-boxes or swerves is often at least partly in our control (just like being the kind of person who remembers four-digit numbers, or who remembers them to two significant figures, or who remembers the last digit, or who keeps a notepad that important numbers are written down on), and it can thus be evaluated for rationality, just as classic causal decision theorists want to evaluate the actions themselves.
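For what it’s worth, here is a toy calculation of the asymmetry I have in mind, with a made-up predictor accuracy and the usual million/thousand payoffs (all the numbers are illustrative stipulations, not anything argued for above):

```python
# Toy illustration of the disposition/act asymmetry in Newcomb's problem.
# The 0.99 predictor accuracy and the $1M/$1K payoffs are stipulated for
# the example only.

ACCURACY = 0.99      # chance the predictor correctly anticipates your disposition
MILLION, THOUSAND = 1_000_000, 1_000

# Expected payoff of *being* a one-boxer vs *being* a two-boxer, where the
# prediction tracks the disposition you in fact have.
ev_one_box_type = ACCURACY * MILLION + (1 - ACCURACY) * 0
ev_two_box_type = ACCURACY * THOUSAND + (1 - ACCURACY) * (MILLION + THOUSAND)

print(f"being a one-boxing type: {ev_one_box_type:,.0f}")
print(f"being a two-boxing type: {ev_two_box_type:,.0f}")

# By contrast, once the boxes are filled, the *act* of two-boxing dominates:
# it adds $1,000 whatever the opaque box contains. That is the sense in which
# evaluating the choice and evaluating the disposition come apart.
```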
Yeah, I think I agree with that, Kenny. I want to say something similar about flip-flopping objections to permissivism: it's bad to be the sort of person who flip-flops, but not bad to actually flip-flop in a particular case. I guess one difference in this case is that how we assess the forgetting episode is going to be the same from the point of view of the prior and from the god's-eye point of view looking down at the sort of person I am.
This is a very interesting take on how we can model estimates of the cost of storing information! I was unaware of Hosiasson's work on the value of information, thanks for that. Currently, I am working on a chapter on simplified reasoning where I review a framework worked out by Harsanyi in a 1985 paper. My take ends up looking very similar to at least part of the setup for calculating the Brier score, but I am focusing only on practical value. I think an interesting question in these cases (one that I myself don't attempt to answer) is what measures of epistemic and practical value make them commensurable. Intuitively, they are commensurable, but when one begins looking at the literature things look more difficult, since what's going on seems to be pure estimation happening at a sub-personal level.
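Roughly the kind of comparison I have in mind is sketched below, though this is just an illustrative toy with made-up stakes and credences, not Harsanyi's own formalism:

```python
# Toy comparison of an epistemic measure (Brier-style penalty) and a
# practical measure (expected net payoff of acting on the credence).
# The credences, stake, and cost are made up purely for illustration.

def brier_penalty(credence, truth):
    """Epistemic inaccuracy: squared distance from the truth value (1 or 0)."""
    return (credence - truth) ** 2

def expected_net_payoff(credence, stake_if_true, cost):
    """Practical value, by the agent's own lights, of taking the bet iff it looks worthwhile."""
    take_bet = credence * stake_if_true > cost
    return credence * stake_if_true - cost if take_bet else 0.0

truth = 1  # suppose the proposition is in fact true
for credence in (0.55, 0.9):
    print(f"credence {credence}: Brier penalty {brier_penalty(credence, truth):.3f}, "
          f"expected net payoff of acting {expected_net_payoff(credence, 100, 50):.1f}")

# The two measures move in the same direction here but live on different
# scales (squared error vs. payoff units); the commensurability question is
# what, if anything, puts them on a common one.
```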