3 Comments
Jun 24 · edited Jun 24 · Liked by Richard Pettigrew

Awesome post! I want to raise a methodological complaint similar to the one I raised last time.

It seems to me that if you're broadly functionalist about evidence and credences, then you should at least find it puzzling to think there are cases where one rationally updates on some evidence and ends up with new credences as a result, but those new credences shouldn't (and won't?) be used to guide one's actions. We should want to ask metaphysical questions: if the agent starts with prior credences P (which guide her actions) and ends up with posterior credences P*, but P is still the map by which she steers (and should steer), in what sense have her credences *changed*? What could make that true?

I float a suggestion here (https://philpapers.org/archive/GREFAH.pdf) that crucially relies on the idea of fragmentation. We can make sense of an agent who has some credences but doesn't act on them or know what they are if we imagine different classes of action, such that her actions in one class (e.g., visuomotor tasks, like reaching out to touch the clock hand) are guided by one set of credences, while her actions in another class (e.g., consciously accepting and rejecting bets) are guided by a different set of credences, one that isn't confident about what the first set is. On that view, it's not really the case that she has an unequivocal credence in the position of the clock. Rather, there's her visuomotor credence and her conscious bet-evaluation credence, and they differ. But this interpretation at least lets us make sense of her not knowing (consciously) what her (visuomotor) credence is.
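
To see how the two fragments can come apart in what they recommend, here's a toy numerical sketch; the 60-position clock, the particular numbers, and the two stylized acts are illustrative assumptions of mine, not anything from the post or the paper:

```python
# Toy sketch of the fragmentation picture for the unmarked clock.
# The 60-position discretization, the numbers, and the two stylized
# acts are illustrative assumptions, not anything from the post.

positions = range(60)
TRUE_POSITION = 23  # suppose the minute hand actually points here

# Visuomotor credence: sharply peaked on the true position.
visuomotor = {p: 0.0 for p in positions}
visuomotor[TRUE_POSITION] = 0.8
visuomotor[TRUE_POSITION - 1] = visuomotor[TRUE_POSITION + 1] = 0.1

# Conscious, bet-evaluating credence: spread over a window, since
# introspection can't recover the exact position of the hand.
conscious = {p: 0.0 for p in positions}
for p in range(20, 27):
    conscious[p] = 1 / 7

def expected_utility(credence, utility):
    """Expected utility of an act under a credence function over positions."""
    return sum(credence[p] * utility(p) for p in positions)

def reach_at_23(p):
    """Reach out to touch the hand at 23: succeeds only if it's really there."""
    return 1.0 if p == 23 else -1.0

def bet_on_23(p):
    """Accept a 2:1 bet that the hand is exactly at 23."""
    return 2.0 if p == 23 else -1.0

print(expected_utility(visuomotor, reach_at_23))  # 0.6: reaching looks good to the visuomotor fragment
print(expected_utility(conscious, reach_at_23))   # about -0.71: too risky by conscious lights
print(expected_utility(conscious, bet_on_23))     # about -0.57: the conscious fragment declines the bet
```

The point of the numbers is only that a single agent can house two credence functions that license different acts in different action classes.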

But absent any interpretation like that, I think functionalists about evidence/credence should find these cases really puzzling, and should be wondering what concrete set of behavioral dispositions could reasonably be captured by the models you offer of either the pink scarf case (I think; I'm less confident about this one, not having thought about it as much) or the unmarked clock.

Author (Richard Pettigrew)

Wonderful, Daniel! Thank you so much for the link. In the original version of the post, I actually put it in terms of fragmentation! There, I was thinking that the tension is not between priors and posteriors but between two sorts of posterior: the consciously accessible and the consciously inaccessible (obviously way too crude, but you know what I mean). So I think I'm very much on board with your suggestion here. Is your view also that you *should* use the inaccessible ones for visuomotor tasks and the accessible ones for accepting bets? I can see that this is an appealing picture descriptively, but I wonder how satisfying it is normatively.

Of course, you might think that ought implies can, and that if each set is really inaccessible to the other, then no normative question arises: you should just do the only thing you can do, which is follow the inaccessible ones for visuomotor tasks and the accessible ones for conscious bet acceptances. But this doesn't seem obviously true to me. It's true that I can't see what my inaccessible credences are like, but I could hand decisions over to them, such as visuomotor ones. I could think consciously: should I reach out, or should I leave this one to my subpersonal, inaccessible self? In that situation, your accessible credences will say 'stick with me, those inaccessible guys are hopeless'; but of course your inaccessible credences will sometimes say the opposite (though not in the Unmarked Clock case).

Jun 26 · edited Jun 26 · Liked by Richard Pettigrew

I think I want to agree with everything you just said. I don't want to say that no normative questions arise once you make this move, but I think it's delicate to say just what they are.

But I do think that once you're thinking of things this way, you should regard it as misleading to model situations like this in terms of decision theory, with a single agent who has a single set of credences, including non-trivial higher-order credences, and is confronting various decisions. Clearer, I think, to model them in terms of game theory, with two (or more, depending on the case) agents with aligned interests, but who can't communicate. Then a game might involve an agent who faces a question about whether to take some action herself, or delegate the choice to another agent with different information. (Sort of like a principal/agent problem.)
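
To make the delegation structure concrete, here's a minimal sketch of that kind of game, written from the point of view of the fragment deciding whether to hand over the choice; the three-cell state space, the 0.9 reliability figure, and the payoffs are all assumptions made up for illustration, not a model from the thread:

```python
# Toy sketch of the delegation game: two fragments with aligned payoffs
# who can't communicate, seen from the conscious fragment's point of view.
# The state space, the reliability figure, and the payoffs are
# illustrative assumptions only.

states = ["left", "centre", "right"]

# The conscious fragment's credence over where the hand is, and its
# credence that the visuomotor fragment would hit the right cell if
# the choice were handed over to it.
conscious_credence = {"left": 1 / 3, "centre": 1 / 3, "right": 1 / 3}
visuomotor_reliability = 0.9

def payoff(guess, state):
    """Shared payoff (interests are aligned): 1 if the guess is right, -1 otherwise."""
    return 1.0 if guess == state else -1.0

# Option A: act on the conscious credence, guessing the most probable cell.
own_guess = max(states, key=lambda s: conscious_credence[s])
eu_act_myself = sum(conscious_credence[s] * payoff(own_guess, s) for s in states)

# Option B: delegate to the visuomotor fragment, which (by the conscious
# fragment's lights) succeeds with probability 0.9.
eu_delegate = visuomotor_reliability * 1.0 + (1 - visuomotor_reliability) * (-1.0)

print(round(eu_act_myself, 2))  # -0.33: acting on a flat credence looks bad
print(round(eu_delegate, 2))    # 0.8: handing the choice over looks better here
```

On this game-theoretic reading, the normative questions then look like questions about when it's rational for one fragment to hand over control, given its credences about the other's information.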
