15 Comments
Jun 11 · Liked by Richard Pettigrew

Like Quentin, I’m a little confused by the clock example (I also can’t access Williamson’s paper, so apologies if I’ve missed something obvious).

Suppose it’s 2 o’clock: when I update with the clock evidence, I know that it’s (1,2,3 o’clock), and I also know that [if it’s (1,2,3 o’clock) then it’s 2 o’clock]. It follows that I should know that it’s 2 o’clock after updating, via the epistemic closure principle. This remains true even if I don’t know my evidence and don’t know that I know it’s (1,2,3 o’clock).

Now it’s possible to deny epistemic closure in this case, but then how am I able to know before updating that I shouldn’t take the bet? In order to know that I shouldn’t bet, it would seem that I have to know that:

If [(It’s 1,2,3 o’clock) & (if it’s 1,2,3 o’clock then it’s 2 o’clock)] then it’s 2 o’clock. For otherwise I couldn’t know that my bet would be erroneous in the 2 o’clock case.

But if I have deductive closure when forming my prior, then I surely have it when forming my posterior. Unless we’re postulating a case of knowledge loss, but then I don’t see how that’s supposed to be an argument against the principle of total evidence.

P.S. Sorry for any silly mistakes on my part, it’s late here! 😜

author

No silly mistakes! This stuff is extremely weird! And there are probably more plausible versions of the case that describe the evidence you get differently, but they're quite a lot more involved, and I was trying to avoid that complexity (Cian Dorr has some nice examples in unpublished work, so hopefully that will come out soon). But to answer your question about epistemic closure. You don't know the following conditional: if it's (1,2,3), then it's 2. What you know is: if *my evidence is that it's (1,2,3)*, then it's 2. And since you don't know what your evidence is, you can't use closure of knowledge under known implication to infer the time.

Jun 11 · Liked by Richard Pettigrew

Interesting. Does the person know that they have at least received generic evidence from the clock (e.g. do they know that they looked at the clock)? If so, then it seems they shouldn’t in fact reason that it’s 2/3 probable that the time is odd (in the 2 o’clock case), since they know that in generic clock cases where they receive evidence about the time, they will be wrong to reason in that way.

If, on the other hand, they don’t know that they received generic evidence from the clock, then it seems to me wrong to conclude that this violates the total evidence requirement (TER). Yes, the conclusions reached will be wrong in those possible worlds where the evidence of the time comes from receiving this kind of specific clock data, but this is surely counterbalanced by all those possible worlds where evidence of the time can come from clocks that are tuned in some other way, and so forth. So it’s not clear to me that the average person across all possible worlds wouldn’t be better off using TER, given the same epistemic data.
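
To make the reasoning concrete, here’s a rough sketch in Python of the naive calculation in the 2 o’clock case. It assumes the evidence rule the example seems to rely on, namely that glancing at the clock when the true hour is t leaves you knowing only that it’s t-1, t, or t+1; the bet on "odd" is just for illustration:

```python
from fractions import Fraction

def evidence(true_hour, n_hours=12):
    """Assumed evidence rule: glancing at the unmarked clock when the true
    hour is t leaves you knowing only that it's t-1, t, or t+1 (mod 12)."""
    prev_h = ((true_hour - 2) % n_hours) + 1
    next_h = (true_hour % n_hours) + 1
    return {prev_h, true_hour, next_h}

true_hour = 2                    # the 2 o'clock case discussed above
signal = evidence(true_hour)     # {1, 2, 3}

# Naive update: condition a uniform prior on the *proposition* "the hour
# is in {1, 2, 3}", ignoring how that evidence was generated.
posterior = {h: Fraction(1, len(signal)) for h in sorted(signal)}
p_odd = sum(p for h, p in posterior.items() if h % 2 == 1)
print(f"signal = {sorted(signal)}, naive P(hour is odd) = {p_odd}")  # 2/3

# But under this rule the signal {1, 2, 3} is generated only when it is
# actually 2 o'clock, so a bet on "odd" accepted at those 2/3 odds is
# guaranteed to lose in this case.
```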

Jun 12 · Liked by Richard Pettigrew

To add to the second part: this would put us in the strange situation of not wanting to learn new info, even though we should rationally rely on our posteriors even after we’ve acquired our new info. While this is no doubt weird, I don’t see that it violates TER, given that TER doesn’t apply to learning but rather to what information you should reason with. So I take cases such as the clock to be more indicative of interesting psychological facts about the person than of a refutation of TER.

This also neatly ties into anthropic reasoning. Not sure how familiar you are with things like the self-indication assumption, but basically SIA tells us to reason as though we were randomly sampled from all possible observers given the same epistemic data (as opposed to all actual observers, which is the self-sampling assumption, SSA). I think this case neatly demonstrates that TER requires us to use SIA and is incompatible with SSA. I’m fine with that, I think SSA sucks anyways!

In summary, if we adopt SIA, we should reason as though we were an average person sampled from all possible observers receiving our epistemic data. And as I noted in my last comment, it’s not obvious that our posteriors lead us astray here. For every elaborate scenario you can construct where our posteriors lead us astray, I can construct one with the exact same epistemic data where they are advantageous. Assuming, of course, that the person doesn’t know they received evidence from the clock.

Jun 10 · Liked by Richard Pettigrew

There's something weird with the clock example. You know that "if it's actually 3, then I'll know it's either 2, 3, or 4 (but nothing more)". The paradox seems to imply the converse: "If I know it's either 2, 3, or 4 (but nothing more), then it's actually 3". This is how you can be sure that you'll lose your bet. But knowing the converse leads to a contradiction, because then you can infer from the fact that you know it's either 2, 3, or 4 (but nothing more) that it's exactly 3, which contradicts the "nothing more" clause. So, I think this example doesn't make sense, it's ill-posed, and I'm wondering if you couldn't find a similar problem with the previous cases.

author

Thanks, Quentin! If you have epistemic access to your evidence or to your credences, then you can do the sort of thing you describe. But we're thinking about cases in which you don't have that sort of access. Now, you might think that's not possible, that credences are introspectible. If that's so, then you're right that the paradox doesn't arise. But we're working in the sort of framework that, say, Tim Williamson describes, in which evidence and other mental states are not introspectible. In those cases, it's actually 3 iff your evidence is that it's 2, 3, or 4. If you can then learn your evidence is that it's 2, 3, or 4, then you can of course infer that it's 3 and update to that. But if you can't learn what your evidence is, that isn't open to you. That's what I'm trying to gesture towards in the 'A Way Out?' section.
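
Here’s a rough sketch of that difference, reusing the same assumed evidence rule (the signal {t-1, t, t+1} is generated exactly when the hour is t). Conditioning on the proposition "it's 2, 3, or 4" spreads credence over three hours, while conditioning on "my evidence is that it's 2, 3, or 4" pins down 3; the trouble is that this second update is only available if you can learn what your evidence is:

```python
from fractions import Fraction

HOURS = range(1, 13)  # assumed 12-hour clock

def evidence(t):
    """Signal generated when the true hour is t: {t-1, t, t+1} (mod 12)."""
    return {((t - 2) % 12) + 1, t, (t % 12) + 1}

signal = {2, 3, 4}

# (a) Conditioning on the proposition "the hour is 2, 3, or 4":
post_on_proposition = {t: Fraction(1, len(signal)) for t in sorted(signal)}

# (b) Conditioning on "my evidence is {2, 3, 4}", i.e. on the set of hours
#     that would generate exactly this signal:
compatible = [t for t in HOURS if evidence(t) == signal]
post_on_evidence = {t: Fraction(1, len(compatible)) for t in compatible}

print(post_on_proposition)  # {2: Fraction(1, 3), 3: Fraction(1, 3), 4: Fraction(1, 3)}
print(post_on_evidence)     # {3: Fraction(1, 1)} -- only hour 3 generates {2, 3, 4}
```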

Jun 10 · Liked by Richard Pettigrew

Ok, I see. But then, if you cannot introspect in the way you describe, you also cannot know that learning new evidence is bad for you (even if that's the case), right? Only an all-knowing agent knows that learning this piece of evidence would lead you astray. Or am I missing something? And if I'm right, how is this different from any case where partial evidence leads one astray?

author

So the idea is that, beforehand, you know what evidence you'll get in each state of the world and how your credences will change in response to that evidence. And after you get the evidence, your credences do indeed respond in that way. And indeed at that later time you perhaps have very good reason to think that they have responded that way. But what you don't know is exactly how it is that they've responded. So beforehand, you know: if the scarf is rose, my credences will be...; if the scarf is peach, my credences will be...; and so on. And afterwards, you know: if the scarf is rose, my credences currently are...; if the scarf is peach, my credences currently are...; and so on. But what you don't know afterwards is what your credences actually are. So you can't use information about them to learn more about the colour of the scarf.

Jun 11 · Liked by Richard Pettigrew

Ok, so you know the dynamics of your belief system without knowing its actual state? Honestly, I can't see what motivates such a setting (it sounds very implausible to me, and the counterintuitive result looks like an artefact of it), but I suppose it's because I'm not familiar with the literature. Thanks anyway for the response!

I'm in a rush at the moment, but:

1. Blackwell 1951 gave a much clearer statement than Hosiasson and (AFAICT) completely anticipated Good.

2. I suspect that a version of the principle of restricted choice is relevant here, and would lead to a positive value of info in the scarf example: https://en.wikipedia.org/wiki/Principle_of_restricted_choice

author

Thanks, John! I'll have to think about the restricted choice suggestion. And yes, absolutely, Blackwell was there very early. I talk about that in the longer notes. As I say, I think it's very tricky to pinpoint the origin of an idea, and indeed Ramsey has unpublished notes that essentially give the idea in the 1920s, and there's some suggestion he gets it from the notes of Charles Sanders Peirce, which his schoolmaster Ogden showed him. But I think the key insights are there in Hosiasson. She just doesn't have the clear idea of personal probability at that point, which essentially no-one did, except perhaps Ramsey. That's one of the things that's clearer in Blackwell.

Rereading, I don't think it's restricted choice. Rather it's that, given the description of the problem, getting the signal "peach or pink" is possible only if the true colour is pink. So, on receiving this signal, you should refuse the bet.

author

That's right, and of course if you have access to your posterior probabilities, you'll see you learned 'peach or pink', know that this could only happen if the scarf is actually pink, and so update again on this new evidence. But I'm thinking about cases in which we can't introspect our probabilities, and so we're uncertain about them as well.

On priority, I agree that the ideas were developing in the first half of the twentieth century before Blackwell formalized things properly; Good didn't add anything, AFAICT. Savage completed the picture for expected utility based on subjective probability, allowing everything to be derived from preferences (provided you satisfy the axioms and are fully aware of all possibilities).

author

Yes, I think Good pretty much acknowledges this. He cites Raiffa, but neither Blackwell nor Savage.
