In last week’s post, I wrote about a puzzle that arises when you receive evidence you’d rather you hadn’t received. In this post, I’d like to explore a potential solution. I’ll begin by presenting the examples that generate the puzzle again, and then move to the solution. Those familiar with the previous post can skip to the section called ‘The Self-Recommending Solution’ without missing anything.
![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fec7627a7-a03a-428c-bead-565a40fb4d4b_666x800.jpeg)
The first example: the pink scarf
I’ll focus on two different evidential situations. The first we might call Pink Scarf. Here are the details:
I have just bought a new scarf, which is in my bag.
You know that it is pink, but you know nothing more than that.
You sort pink items into one of four finer-grained colour categories: coral, peach, rose, and flamingo.
At the moment, you think it equally likely that my scarf belongs to each of these categories. That is, your prior credences P are given by the following table:
\(\begin{array}{r|cccc} & \textit{Coral} & \textit{Peach} & \textit{Rose} & \textit{Flamingo} \\ \hline P & 1/4 & 1/4 & 1/4 & 1/4 \end{array}\)
But you know that, if you see the scarf, your credences will change. You won’t come to learn exactly which shade of pink it is, because your perceptual system is not sufficiently sensitive to tell you that. If the scarf is coral, your perceptual system can’t tell whether it’s coral or peach or rose, but it can tell it’s not flamingo; and similarly if it’s peach. So, if it’s coral or peach, you’ll learn it’s coral or peach or rose, and you’ll update by conditionalizing on this information, so that your posterior divides credence equally between those three possibilities. If the scarf is rose, your perceptual system can’t tell whether it’s peach, or rose, or flamingo, but it can tell it’s not coral; and similarly if it’s flamingo. So, if it’s rose or flamingo, you’ll learn it’s peach or rose or flamingo, and again you’ll update by conditionalizing on this information, dividing credence equally between those possibilities. So the following table gives your prior and then your different possible posteriors:
\(\begin{array}{r|cccc} & \textit{Coral} & \textit{Peach} & \textit{Rose} & \textit{Flamingo} \\ \hline P & 1/4 & 1/4 & 1/4 & 1/4 \\ \hline P_\textit{Coral} & 1/3 & 1/3 & 1/3 & 0 \\ P_\textit{Peach} & 1/3 & 1/3 & 1/3 & 0 \\ P_\textit{Rose} & 0 & 1/3 & 1/3 & 1/3 \\ P_\textit{Flamingo} & 0 & 1/3 & 1/3 & 1/3 \end{array}\)
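The update rule described above can be sketched in a few lines of Python. This is my own illustration, not anything from the post: the helper name `conditionalize` and the dictionary representation of a credence function are assumptions for the sake of the example.

```python
from fractions import Fraction

def conditionalize(prior, evidence):
    """Conditionalize a credence function on a proposition.

    `prior` maps worlds to credences; `evidence` is the set of worlds
    compatible with what you learn. Worlds outside the evidence get
    credence 0; the rest are renormalized.
    """
    total = sum(p for w, p in prior.items() if w in evidence)
    return {w: (p / total if w in evidence else Fraction(0))
            for w, p in prior.items()}

shades = ["coral", "peach", "rose", "flamingo"]
prior = {s: Fraction(1, 4) for s in shades}

# If the scarf is coral (or peach), you learn {coral, peach, rose}:
posterior = conditionalize(prior, {"coral", "peach", "rose"})
for shade, cred in posterior.items():
    print(shade, cred)  # 1/3 each on coral, peach, rose; 0 on flamingo
```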
That table gives your credences in the different possible shades of pink—let’s call those your first-order credences; they’re about the colour of the scarf. But what about your second-order credences, that is, your credences about what your first-order credences are? We can assume that, before seeing the scarf, you’re certain what your credences are at that time—that is, you’re certain your prior is P, which divides credence equally between the four shades; in the jargon, your prior is luminous to you. But after seeing the scarf, you won’t know what your credences are at this later time; you won’t have perfect access to this information. For instance, if the scarf is coral, you’ll assign a first-order credence of 1/3 to each of coral, peach, and rose, and 0 to flamingo, but you won’t know that this is what your first-order credences are. After all, if you did know this, you could simply use that information together with your knowledge of the learning situation, including your knowledge of what evidence you’d get in what worlds, to discover further first-order facts. If the scarf were coral, you could simply check what your first-order credences are, notice they assign credence 0 to flamingo, conclude that the scarf is coral or peach since only in those situations do you have a posterior that assigns 0 to flamingo, and update to a different credence function, one that divides credence equally over just coral and peach. And similarly for the other shades. So we must assume that your posterior credences after seeing the scarf will not be luminous to you.
Now let’s think about the following decision: you have to choose whether to accept or reject a bet that the scarf is either peach or rose; it’ll pay out £10 if it is, £0 if it’s not; and it’ll cost £6. So the payoffs are as follows:
\(\begin{array}{r|cccc} & \textit{Coral} & \textit{Peach} & \textit{Rose} & \textit{Flamingo} \\ \hline \textit{Accept} & -6 & 4 & 4 & -6 \\ \textit{Reject} & 0 & 0 & 0 & 0 \end{array}\)
The puzzle of unwanted evidence
That’s the first example. Its interesting feature is this: even though looking at the scarf is sure to give you evidence about its shade, and even though it’s guaranteed that this evidence is both true and stronger than the evidence you had beforehand, if you are given the choice of facing the decision with your prior credences or having a look at the scarf and making it with the posterior credences you’ll thereby come to have, your priors will prefer you to face the decision with them. After all, your priors will reject the bet: accepting has an expected utility of -1 relative to your priors; rejecting has an expected utility of 0. But, whatever shade the scarf is, the posteriors you’ll have after seeing it will prefer accepting the bet: accepting has an expected utility of 2/3 relative to both of the possible posteriors; rejecting has an expected utility of 0.
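For anyone who wants to check the arithmetic, here is a minimal Python sketch of the expected-utility comparison just described (the variable names are mine):

```python
from fractions import Fraction

F = Fraction
# Payoffs for accepting the bet that the scarf is peach or rose:
# it costs £6 and pays £10 if true, so net +4 if it wins, -6 if it loses.
payoff = {"coral": -6, "peach": 4, "rose": 4, "flamingo": -6}

def eu(credence, payoff):
    """Expected utility of accepting, relative to a credence function."""
    return sum(credence[w] * payoff[w] for w in credence)

prior = {s: F(1, 4) for s in payoff}
# The two possible posteriors after seeing the scarf:
post_low  = {"coral": F(1, 3), "peach": F(1, 3), "rose": F(1, 3), "flamingo": F(0)}
post_high = {"coral": F(0), "peach": F(1, 3), "rose": F(1, 3), "flamingo": F(1, 3)}

print(eu(prior, payoff))      # -1: the prior rejects the bet
print(eu(post_low, payoff))   # 2/3: both possible posteriors accept it
print(eu(post_high, payoff))  # 2/3
```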
So far, that is a curiosity, but it’s not necessarily a puzzle. The puzzle arises because sometimes we simply do gain evidence that we’d prefer not to have gained. Some evidence we actively gather; some we simply receive without asking for it. I might simply bring my scarf out of my bag and you might simply see it. And some of the evidence we actually receive we might have thought, ahead of time, we’d rather not receive, given the choice we’re about to make. You feel like that about seeing my scarf in Pink Scarf. In such cases, the question arises: which credences should guide your decision whether to accept or reject the bet? Your priors or your posteriors? Typically, we say it must be your posteriors, but in this case your priors thought that wasn’t the wise choice, so there seems at least to be a question.
The second example: the unmarked clock
As I mentioned last time, a natural answer comes from the so-called Principle of Total Evidence. Surely you should choose using your most informed credences, and those are your posteriors; to choose using your priors is to throw away some of your evidence. (It’s worth noting that the value of information framework in which we’re working here was first introduced into the philosophy literature by Janina Hosiasson, in a 1931 paper in Mind entitled ‘Why do we prefer probabilities relative to many data?’.)
However, the next example suggests that’s wrong. Let’s call it Unmarked Clock. It’s based on an example from Tim Williamson.
I have just bought a tiny new clock, which is mounted high up on my office wall.
You know that it is there; you know it is unmarked and has a single hand; you know the hand points to only one of four points: 1, 2, 3, or 4.
At the moment, you think it equally likely that the clock points at each point, so your credence that it points at 1 is 1/4, as is your credence it points at 2, and so on. That is, your prior credences are given by the following table:
\(\begin{array}{r|cccc} & 1 & 2 & 3 & 4 \\ \hline P & 1/4 & 1/4 & 1/4 & 1/4 \end{array}\)
But you know that, if you see the clock, your credences will change. You won’t come to learn exactly which number it points at, because your eyesight and your perceptual system are not sufficiently sensitive to tell you that. In particular, your perceptual system has a margin of error. If the clock points at 1, your perceptual system can’t tell whether it points at 4, 1, or 2, but it can tell it isn’t pointing at 3; so, if it does point at 1, you’ll learn it points at 4, 1, or 2, and you’ll update by conditionalizing on this information, so that your posterior divides credence equally between those three possibilities. If the clock points at 2, your perceptual system can’t tell whether it points at 1, 2, or 3, but it can tell it isn’t pointing at 4; so, if it does point at 2, you’ll learn it points at 1, 2, or 3, and again your posterior divides credence equally between those three possibilities. And so on. So the following table gives your prior and then your different possible posteriors:
\(\begin{array}{r|cccc} & 1& 2 & 3 & 4 \\ \hline P & 1/4 & 1/4 & 1/4 & 1/4 \\ \hline P_1 & 1/3 & 1/3 & 0 & 1/3 \\ P_2 & 1/3 & 1/3 & 1/3 & 0 \\ P_3 & 0 & 1/3 & 1/3 & 1/3 \\ P_4 & 1/3 & 0 & 1/3 & 1/3 \end{array}\)
That table gives your first-order credences. What about your second-order credences? Again, we assume your priors are luminous to you, so that your second-order credences are certain you have the prior you actually have. But your posteriors are again not luminous to you. For again, if they were, you could learn more about the first-order facts. Indeed, in this case, you could learn exactly where the clock hand points.
Now let’s think about the following decision, which has three options: the first is a bet that the clock points at an even number, which pays out £10 if that’s true and £0 if it doesn’t and costs £6; the second is a bet it points at an odd number, same payoffs and price; and the third option is to reject both bets. Here is the pay-off table:
\(\begin{array}{r|cccc} & 1 & 2 & 3 & 4 \\ \hline \textit{Even} & -6 & 4 & -6 & 4 \\ \textit{Odd} & 4 & -6 & 4 & -6 \\ \textit{Reject} & 0 & 0 & 0 & 0 \end{array}\)
The prior prefers to reject both bets; the posterior you’ll have if the hand points at an odd number prefers to pay for the bet on even, which will of course lose in that situation; and the posterior you’ll have if the hand points at an even number prefers to pay for the bet on odd, which will of course lose in that situation. So the prior prefers not to gain the evidence, because doing so will lead you to lose money for sure. But each possible posterior prefers that you use it to make the decision.
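The sure loss is easy to verify mechanically. Here is a Python sketch (the dictionary encoding and the `posterior` helper are my own): in each of the four worlds, the option the posterior prefers in fact pays out -6.

```python
from fractions import Fraction

F = Fraction
worlds = [1, 2, 3, 4]
# Net payoffs of the three options in each world:
payoffs = {"even":   {1: -6, 2: 4, 3: -6, 4: 4},
           "odd":    {1: 4, 2: -6, 3: 4, 4: -6},
           "reject": {w: 0 for w in worlds}}

def posterior(w):
    """Posterior if the hand points at w: 1/3 on each of w-1, w, w+1 (mod 4)."""
    seen = {((w - 2 + d) % 4) + 1 for d in (0, 1, 2)}
    return {v: (F(1, 3) if v in seen else F(0)) for v in worlds}

def eu(cred, pay):
    return sum(cred[w] * pay[w] for w in worlds)

for w in worlds:
    cred = posterior(w)
    best = max(payoffs, key=lambda o: eu(cred, payoffs[o]))
    # Whatever the world, the posterior's preferred option loses £6 there:
    print(w, best, payoffs[best][w])
```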
Now suppose we go along with the Principle of Total Evidence. Then we will lose money for sure; and this is completely foreseeable and agreed upon by everyone. This suggests the Principle of Total Evidence is not a good guide in such cases. It seems we must fall back on our priors in such cases and reject both bets.
The self-recommending solution
Situations in which you learn something you’d rather not have learned, update on it rationally, but don’t know what you’ve learned or what credences it has led you to have, put you in a strange position. There are two points of view from which you might approach any reasoning you need to do or any decisions you need to make. In Unmarked Clock, for instance, there is the point of view of your prior—that is, P—which assigns equal credence to each of the four points on the clock. And there is the point of view of your actual posterior, that is, P1 if the hand points to 1, P2 if it points to 2, and so on. From which of these points of view should you make your decision between Even, Odd, and Reject?
Each has points in its favour. Your posteriors were obtained from your priors by updating in the rational way upon true evidence that is stronger than the evidence on which your priors were based. However, by the same token, you might think your posteriors obtain what legitimacy they have from their roots in your priors, and your priors wished you not to receive the evidence that led to them. What’s more, your first-order posteriors lack certain evidence your priors have, namely, the evidence about the way in which they were formed. They lack that evidence not because you’ve forgotten it—you haven’t; you still know it—but because, not knowing what your posteriors are, you don’t know what effect the information about how they were formed should have on them, and so they stay simply as they were when you first updated to them.
My initial reaction to this puzzle was to say that, for this reason, it is rationally permissible to use your priors and rationally permissible to use your posteriors. But the Unmarked Clock case persuaded me this isn’t right. In this case, your prior and your posterior are not simply two legitimate points of view from which to act with nothing to tell between them: your posterior is not a legitimate point of view because of the way in which it was formed. But how to circumscribe the cases in which it is a legitimate point of view and cases in which it is not?
My proposal is this: we ask whether the point of view in question is self-recommending. If it is, it is permissible; if it isn’t and there is an alternative point of view available that is self-recommending, it is not permissible; if it isn’t self-recommending, but nor is any alternative point of view, then it is permissible.
What do I mean by self-recommending? Your prior recommends itself if it would prefer not gathering the evidence to gathering it; that is, if it would prefer to make the decision using itself rather than using the posterior you would have after gathering the evidence, whatever that posterior may be. And the posterior recommends itself if it would prefer to choose using the posterior, whatever it may be, rather than using the prior. Of course, in the second case, we’re not asking whether the posterior would like to gather the evidence or not—the evidence has been gathered, and the posterior knows that. But, because the posterior doesn’t know what posterior has resulted from gathering the evidence, we can still ask it whether it would prefer to use the posterior, whatever it is, rather than the prior.
Consider the case of the Unmarked Clock. If you ask one of the posteriors in this case whether it would prefer to choose using the prior or using the posterior, whatever it may be, it will prefer to use the prior. So the posteriors are not self-recommending. After all, they can see as well as anyone that using the posterior, whatever it may be, leads to utility -6 in all worlds: the way the posteriors are formed guarantees they have a sort of anti-expertise with respect to the choice between Even, Odd, and Reject, always choosing the bet that will lose. Using the prior, on the other hand, leads to utility 0 in all worlds. And so, whatever probabilities the actual posterior assigns, it prefers to choose using the prior. In a sense, this allows the posteriors to incorporate the information about the way they were formed without needing to know what they themselves are.
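The self-recommendation test can itself be computed. In this Python sketch (the policy encoding and all names are mine, not the post’s), a policy is represented by its world-by-world outcome, and each posterior compares, by its own lights, the policy “choose with the posterior, whatever it is” against the policy “choose with the prior”.

```python
from fractions import Fraction

F = Fraction
worlds = [1, 2, 3, 4]
payoffs = {"even":   {1: -6, 2: 4, 3: -6, 4: 4},
           "odd":    {1: 4, 2: -6, 3: 4, 4: -6},
           "reject": {w: 0 for w in worlds}}

def posterior(w):
    """Posterior if the hand points at w: 1/3 on each of w-1, w, w+1 (mod 4)."""
    seen = {((w - 2 + d) % 4) + 1 for d in (0, 1, 2)}
    return {v: (F(1, 3) if v in seen else F(0)) for v in worlds}

def eu(cred, outcome):
    return sum(cred[w] * outcome[w] for w in worlds)

def best_option(cred):
    return max(payoffs, key=lambda o: eu(cred, payoffs[o]))

prior = {w: F(1, 4) for w in worlds}

# Outcome, world by world, of the policy "choose with your posterior":
use_posterior = {w: payoffs[best_option(posterior(w))][w] for w in worlds}
# Outcome of the policy "choose with your prior" (the prior rejects both bets):
use_prior = {w: payoffs[best_option(prior)][w] for w in worlds}

# Each possible posterior assigns higher expected utility to the prior's
# policy, so no posterior is self-recommending here:
for w in worlds:
    print(w, eu(posterior(w), use_posterior), eu(posterior(w), use_prior))
```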
So, in summary, here is the proposed solution to the problem of unwanted evidence. There are four possible situations:
Prior and posterior agree that using the posterior, whatever it may be, is best. In this case, you’re required to use the posterior. Standard cases of learning factive and partitional evidence are like this.
Prior and posterior agree that using the prior is best. In this case, you’re required to use the prior. As we saw above, Unmarked Clock is such a case.
Prior thinks prior is best, and posterior thinks posterior is best. In this case, you’re permitted to use either prior or posterior. Pink Scarf is such a case. The prior prefers you not to gather the evidence; each of the posteriors prefers you to use the posterior, whatever it may be.
Prior thinks posterior is best, and posterior thinks prior is best. If we hold fixed the decision to be made in Pink Scarf, but give the following prior and posteriors, we get an instance of this:
\(\begin{array}{r|cccc} & \textit{Coral} & \textit{Peach} & \textit{Rose} & \textit{Flamingo} \\ \hline P & 1/8 & 1/8 & 3/8 & 3/8 \\ \hline P_\textit{Coral} & 1/3 & 2/3 & 0 & 0 \\ P_\textit{Peach} & 2/3 & 1/3 & 0 & 0 \\ P_\textit{Rose} & 0 & 0 & 2/3 & 1/3 \\ P_\textit{Flamingo} & 0 & 0 & 1/3 & 2/3 \end{array}\)
I’m not entirely sure what to say about this case. You might say it’s a rational dilemma, and doing either is irrational. I hold there can be no rational dilemmas, so I’m inclined to say instead that, in these cases, you’re permitted to use either prior or posterior.
Awesome post! I want to raise a methodological complaint similar to the one I raised last time.
It seems to me that if you're broadly functionalist about evidence and credences, then you should at least find it puzzling to think there are cases where one rationally updates on some evidence, and ends up with new credences as a result, but the credences one obtains by rationally updating on one's evidence shouldn't (and won't?) be used to guide one's actions. We should want to ask metaphysical questions--if the agent starts with prior credences P (which guide her actions), and ends up with posterior credences P*, but P is still the map by which she steers (and should steer), in what sense have her credences *changed*? What could make that true?
I float a suggestion here (https://philpapers.org/archive/GREFAH.pdf) which crucially relies on the idea of fragmentation. We can make sense of an agent who has some credences, but doesn't act on them or know what they are, if we imagine different classes of action, whereby her actions in one class (e.g., visuomotor tasks, like reaching out to touch the clock hand) are guided by one set of credences, but her actions in another class (e.g., consciously accepting and rejecting bets) are guided by a different set of credences, which isn't confident about what the other set is. On that view, it's not really the case that she has an unequivocal credence in the position of the clock. Rather, there's her visuomotor credence, and her conscious bet-evaluation credence, and they differ. But this interpretation at least lets us make sense of her not knowing (consciously) what her (visuomotor) credence is.
But absent any interpretation like that, I think functionalists about evidence/credence should find these cases really puzzling, and should be wondering what concrete set of behavioral dispositions could be reasonably captured by the models you offer of either pink scarf (I think--I'm less confident about this one, not having thought about it as much) or the unmarked clock.