Jane is investigating a murder at the local parsonage. She has enumerated a list of people who might have done it, ruled out all but Lawrence on the basis of alibi or lack of motive, discerned a motive for Lawrence, caught him in a lie that’s inexplicable unless he’s guilty, constructed a timeline for the murder, and so on. Indeed, Lawrence is the killer. Jane becomes very confident that Lawrence is the killer on the basis of this reasoning.
Jack has been approached by the henchman of a murderous dictator, who has promised that he will cause untold misery and suffering to Jack and everyone Jack cares about unless Jack becomes very confident that Lawrence is the killer. Jack has DNA evidence that makes it extremely unlikely that Lawrence is the killer. However, in fact, Lawrence is the killer—the DNA evidence is misleading. Jack becomes very confident that Lawrence is the killer in response to the threat and in spite of his evidence.
Jen has been approached by Carrie Jenkins’ Truth Fairy, who has promised that if Jen becomes very confident that Lawrence is the killer, and very confident in each proposition in a given consistent set, then the Truth Fairy will make each of those propositions in the set true, and will thereby make her credences in them very accurate. Like Jack, Jen has DNA evidence that makes it extremely unlikely that Lawrence is the killer. However, in fact, Lawrence is the killer. Jen becomes very confident that Lawrence is the killer in response to the Truth Fairy’s promise and in spite of her evidence.
Most people, whether epistemologists or not, will say there is something right about Jane’s credences that is not right about Jack’s or Jen’s; they will say there’s a sense in which Jane should have her credences, while Jack and Jen shouldn’t. Not that there is nothing right about Jack’s or Jen’s credences, nor that there is no sense in which they should have them. Just that there is some sense in which they should not—and, in that sense, Jane should.
So, we have a pre-theoretic sense of the distinction between these cases. There isn’t necessarily a standard pre-theoretic term we use to draw the distinction in general, but we do have a standard way of speaking that we use to make the complaint against beliefs like Jack’s and Jen’s: we say ‘You only believe that because…’. So someone might say to a believer, ‘You only believe in your chosen religion because you were brought up in it’. This suggests your belief is not right in the same way Jane’s belief is right, and that there’s a sense in which you shouldn’t have it, and in this sense Jane should have hers. Similarly: ‘You only believe your friend is innocent of the betrayal of which they’re accused because it would upset you too much to believe otherwise’.
In epistemology, we tend to mark the distinction by talking of a purely epistemic sense of ‘should’, on the one hand, and other senses of ‘should’, on the other—a pragmatic sense, a moral one, an aesthetic one, an all-things-considered one, for instance. Similarly, we talk of a purely epistemic ought, in contrast with pragmatic, moral, aesthetic, and all-things-considered oughts. For instance, there’s clearly a sense in which Jack should believe Lawrence is the killer. What matters a false belief about a murder at a parsonage next to the suffering of himself and his loved ones? So, perhaps in the all-things-considered sense, he should believe it. But, we say, epistemically speaking, he should not believe it. Or we might put the point in terms of justification or rationality or reasons or duties: epistemically speaking, Jane’s belief is justified, but Jack’s isn’t; epistemically speaking, Jane’s belief is rational, but Jack’s isn’t; Jane has purely epistemic reason for her belief, but Jack doesn’t, and indeed has strong epistemic reason against it; and so on.
My question here is two-fold: first, how do we distinguish the purely epistemic senses of should, ought, justification, rationality, and so on, from the other senses? And second, why do we draw this distinction at all? To answer the former, I draw on work in epistemic utility theory, in particular, a proposal by Jason Konek and Ben Levinstein—and I suggest it might illuminate a distinction that Kurt Sylvan draws in his epistemic Kantianism. To answer the latter, I take my lead from a suggestion by Sinan Dogramaci and Sophie Horowitz, who in turn are following an approach pioneered by Edward Craig. Where Dogramaci and Horowitz take that approach in one direction, I take it in a different but closely related one; and I try to say what I find unsatisfying about Craig’s original conclusions.
What is epistemic rationality?
I’ll focus throughout on epistemic rationality, but I imagine what I say could be translated straightforwardly to shoulds and oughts and justification and reasons.
I think there are two components to the distinction, and they come out by attending to the cases of Jack and Jen. When we contrast Jack with Jane, we might be tempted to say that the distinction between purely epistemic rationality and other sorts of rationality arises from a more fundamental distinction between purely epistemic values and other values. To be rational, we might say, speaking rather roughly, is to do the best thing available to you in the light of your ends—this is sometimes put by saying rationality requires us to take the best means to our ends, but we’ll see below why that’s a bit stronger than we want. Jack’s confidence that Lawrence is the killer is rational in this general sense: it is the best option available to him in the light of his ends. His ends include having accurate credences and avoiding the suffering of himself and his loved ones, but the latter are much more important ends for him and they weigh more heavily in his calculations. But, we might then think, to be purely epistemically rational is to do the best thing available to you in the light of your purely epistemic ends, where these are separated out from your other ends.
It’s natural to take having accurate credences as your sole epistemic end; that is, it’s natural to take accurate credences to be the sole fundamental source of epistemic value. That’s the veritist position. But there are others around, such as ones based on the value of respecting evidence or having knowledge. I’ll just focus on the veritist position here, but again what I say might generalize.
However, what I’ve said so far—that purely epistemic rationality is simply a matter of doing the best thing available to you in the light of your purely epistemic ends—can’t be the whole story, because of cases like Jen’s.[1] After all, it seems plausible that she does the best thing available to her in the light of her epistemic ends. If she becomes very confident that Lawrence is the killer, then from her point of view, she risks great inaccuracy with respect to that credence—and given the DNA evidence she has, she indeed expects great inaccuracy for that credence. But she also expects to become very accurate with respect to the other credences. So, assuming the latter gain outweighs the former loss, it’s natural to think becoming very confident Lawrence is the killer is the best thing to do in the light of her purely epistemic ends. And yet we want to say that she is not epistemically rational.
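To see how the arithmetic might go, here is a minimal numerical sketch, with entirely made-up numbers, assuming the Brier score as the measure of inaccuracy and a set of ten propositions in the Truth Fairy’s offer; nothing in the argument hangs on these particular choices.

```python
# A minimal numerical sketch of Jen's choice, evaluated qua act.
# Inaccuracy is measured by the Brier score, (truth_value - credence)^2,
# summed over propositions; lower is better. All numbers are invented.

def expected_brier(credence, prob_true):
    """Expected Brier penalty for a single credence, given the
    probability that the proposition is true."""
    return prob_true * (1 - credence) ** 2 + (1 - prob_true) * credence ** 2

P_LAWRENCE = 0.05  # probability of Lawrence's guilt, given the DNA evidence
N_SET = 10         # number of propositions in the Truth Fairy's set

# Act 1: refuse the offer. Jen keeps credence 0.05 in 'Lawrence did it'
# and, say, 0.5 in each proposition in the set, whose truth she takes
# to be a coin flip.
refuse = expected_brier(0.05, P_LAWRENCE) + N_SET * expected_brier(0.5, 0.5)

# Act 2: accept the offer. Jen adopts credence 0.95 in 'Lawrence did it'
# and 0.95 in each proposition in the set; the Fairy then makes every
# proposition in the set true, so those ten credences are near-perfect.
accept = expected_brier(0.95, P_LAWRENCE) + N_SET * expected_brier(0.95, 1.0)

print(f"expected inaccuracy if she refuses: {refuse:.4f}")  # ~2.55
print(f"expected inaccuracy if she accepts: {accept:.4f}")  # ~0.88
# Qua act, accepting minimizes expected inaccuracy: the guaranteed gain
# on the set swamps the expected loss on the murder proposition.
```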
To respond to that, we say, borrowing terminology from Kurt Sylvan, that epistemic rationality is not about doing the thing that best promotes your epistemic ends; it’s not about doing the thing that best promotes accuracy. Rather, it’s about doing the thing that best respects those ends. Or, to put it differently, using the language that Jason Konek and Ben Levinstein favour: when we evaluate the rationality of your credences, and we ask which sets of credences are best in light of our epistemic ends, we determine which is best by evaluating them as doxastic states rather than doxastic acts. When we epistemically evaluate a set of credences qua doxastic state, we ask how well the state represents the world—that is, how accurate it is. When we epistemically evaluate it qua doxastic act, we ask how well it promotes accurate credences.
We can now say why Jen’s credence is epistemically irrational. She expects the doxastic act of becoming very confident that Lawrence is the killer to produce the most accurate credences; or, perhaps better, to produce a world in which her credences are most accurate. But that’s not the expectation to which we should appeal when we evaluate her credences qua doxastic state—and it’s qua doxastic state that we must evaluate them to discern their epistemic rationality. Instead, we should ask, from the point of view of Jen’s credences, whether there is an alternative state that she expects to be more accurate. And there is! For recall: she has evidence that makes it extremely unlikely that Lawrence is the killer. So the chances relative to that DNA evidence expect an alternative credal state, one that gives very low credence to Lawrence’s being the killer while retaining high confidence in the propositions the Truth Fairy has arranged to be true, to be more accurate than the state she actually has, which gives high credence both to Lawrence’s being the killer and to the propositions in the set. Of course, were she actually to have those alternative credences, they wouldn’t be so accurate, since the Truth Fairy wouldn’t then arrange for the propositions to be true. But that’s irrelevant to the evaluation qua doxastic state, for we’re not asking about the ability of the alternative credences to promote accuracy; rather, we’re asking how accurate they are.
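Here is the companion sketch for the evaluation qua state, with the same made-up numbers as before: we hold the evaluation probabilities fixed, given the DNA evidence and what the Fairy has actually done, and simply score each candidate credal state by its expected accuracy.

```python
# Evaluating candidate credal states qua states: fixed probabilities,
# no regard to the counterfactual effects of adopting either state.
# All numbers are invented, and the Brier score is assumed.

def expected_brier(credence, prob_true):
    """Expected Brier penalty for one credence, given the probability
    that the proposition is true."""
    return prob_true * (1 - credence) ** 2 + (1 - prob_true) * credence ** 2

P_LAWRENCE = 0.05  # probability of Lawrence's guilt, given the DNA evidence
N_SET = 10         # propositions the Fairy has in fact made true

def state_score(state, probs):
    """Total expected inaccuracy of a credal state under fixed probabilities."""
    return sum(expected_brier(c, p) for c, p in zip(state, probs))

probs     = [P_LAWRENCE] + [1.0] * N_SET  # evaluation probabilities, held fixed
jen_state = [0.95] + [0.95] * N_SET       # Jen's actual credences
alt_state = [0.05] + [0.95] * N_SET       # defers to the DNA evidence instead

print(f"Jen's state:       {state_score(jen_state, probs):.4f}")  # ~0.88
print(f"alternative state: {state_score(alt_state, probs):.4f}")  # ~0.07
# Qua state, the alternative is expected to be far more accurate. The fact
# that, had Jen adopted it, the Fairy would not have made the set true is
# simply not part of this evaluation.
```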
So that’s it. That’s the account of epistemic rationality. I think it’s pretty much the view that Konek and Levinstein take in their paper; and, while I can’t be so confident that Sylvan would find it appealing, it strikes me as an interesting explication of the distinction between promoting accuracy and respecting it, on which he builds his epistemic Kantianism.[2]
Why is epistemic rationality?
Now, then, I can come to my main question: why do we draw this distinction? It is this sort of question that Edward Craig attempted to answer concerning the distinction between knowledge and non-knowledge in his Knowledge and the State of Nature. Sometimes, he advertises this question as a new approach we might take to the old question of how to analyse knowledge; sometimes, he thinks it’s a further question we would still wish to answer should such an analysis be given. Having sketched an analysis of epistemic rationality in the previous section—one due to Konek and Levinstein, and possibly countenanced by Sylvan—the question still remains why we find it useful to identify this property and use it to draw distinctions, setting certain sets of credences on one side and others on the other.
The question is a puzzle because, at first sight, knowing whether credences are epistemically rational or not does not settle the question whether we should have them. For instance, it’s pretty clear Jack should become very confident that Lawrence is the killer, if he possibly can. And I suspect Jen should too, unless the accuracy of her credence about the identity of the murderer is very important for her to get other things she wants further down the line. The point is that, according to the all-things-considered should, Jack and Jen should become confident that Lawrence is the killer. Knowing that, epistemically, they should not doesn’t really help much. Why, then, are we also interested in this type of should that pays attention only to certain of Jack’s and Jen’s goals—namely, their epistemic ones—and is determined by that restricted set of goals in a particular non-standard way—namely, by asking what respects those goals rather than what promotes them?
Funnily enough, I think the answer is very close to the answer that Craig gave to his questions: the distinction is useful when we want to mark credences as ones from which we wish to learn. I’ll come back to this below, where I’ll try to fill in a little more detail, but first I’d like to give a short critique of Craig’s suggestion. (I doubt this is a novel critique, but I haven’t seen it.)
Why is knowledge? Craig’s answer
Craig takes himself to be offering a new route into an analysis of knowledge. And roughly speaking, his account is that we need the concept of knowledge to help us identify from whom we can learn something about which we’re inquiring. But, as he points out himself, if that’s what we need it for, why isn’t knowledge simply true belief? Or perhaps not even true belief, but true utterance? For in order to learn a proposition from someone, I just have to know that they believe it and their belief is true; or that they utter it and their utterance is true.
But that would be an analysis of ‘S knows that p’, and Craig doesn’t take himself to be considering the practice of making that sort of ascription, or at least not primarily. Rather, he takes himself to be considering the practice of making ascriptions of the form ‘S knows whether p’. That is, his primary target is the practice of making ascriptions of knowledge-whether, not the practice of making ascriptions of knowledge-that. If the practice were merely to say ‘S knows that p’, then whatever information that ascription provides to someone inquiring whether p could just as well be communicated by asserting ‘p’, and a good deal more efficiently. But the practice is to identify where to look to discover the truth about p, without specifying whether p is true or not. And that’s when you say ‘S knows whether p’. Because that doesn’t tell you whether p is true or false, but it does point you to the next step in your investigation, providing S is accessible and in a talkative mood.
The problem is that, again, if that is the purpose of ascriptions of the form ‘S knows whether p’, then the correct analysis of these ascriptions must just be ‘Either: (i) S believes p and p is true, or (ii) S believes not-p and p is false’. After all, to know that asking S will settle your inquiry into p, you just need to know that S has a belief in p or a belief in not-p, and whichever it is, they’re right about it. You don’t need to know that either (i) S knows p or (ii) S knows not-p. That’s far more information than you need.
So why does Craig think that, if this is why we ascribe knowledge-whether to people, the Nozickian ‘tracking’ analysis of knowledge is pretty close to being right? Here’s what he says:
We have to remember that the inquirer’s knowledge of the actual world is bound to be highly incomplete. It is not only that he doesn’t yet know whether p; there will be all sorts of things about himself, the environment and the potential informant of which he is ignorant. […] [T]here are indefinitely many different possible worlds any one of which, so far as he knows, might be the actual world. His concern with getting the right information in the actual world will therefore lead him to hope for an informant who will give him the truth about p whichever of all these possibilities is realised. Which is to say, if you like the jargon, that he wants an informant who will give him the right answer in a range of possible worlds. (20)
But I think this gets things wrong. All you need to know in order for S’s assertion about p to inform your inquiry is that it will be true, whichever way it goes—you just need to know S got it right. And that is much weaker than knowing that S would have got it right had things been different. If Callum knows p, and also knows that Karen believes p as a result of wishful thinking, and Callum says to Corinne, ‘Karen’s right about whether p, just ask her’, then Corinne has all the information she needs to complete her investigation into p. It doesn’t matter a jot that Karen’s belief doesn’t count as knowledge because she’d have believed p whether it was true or not.
Of course, Craig is right that we typically get evidence that someone is right in their belief about a proposition by discovering that their belief tracks its truth. But the question of what typically gives evidence for a particular proposition is distinct from the question of what that proposition means. I typically discover the time by consulting a watch, but the proposition that the time is 4:33pm doesn’t mean that my watch reads that time.
Why epistemic rationality? Dogramaci and Horowitz’s answer
The phenomenon that Dogramaci and Horowitz wish to explain is subtly different from the one I wish to explain. They wish to explain not only the fact that we make ascriptions of epistemic rationality and irrationality, but also the fact that we use the latter to apply social pressure on people to be epistemically rational—‘Don’t be stupid!’ is one of their examples of such ascriptions! What’s more, they are interested in the cases in which we ascribe epistemic rationality to distinguish it from epistemic irrationality, rather than to distinguish it from pragmatic or all-things-considered rationality.
They conjecture that this practice is socially useful because it allows us to learn from each other. But, they argue, it can only do this if rationality is very demanding: there must be a single epistemically rational prior credence function, and our posteriors are epistemically rational just in case they result from updating this unique rational prior on the evidence we’ve obtained. Only then can we use judgments of epistemic rationality in the way they want: roughly, if I learn you believe p, and your belief is epistemically rational, then I should also believe p. And they conclude that rationality is demanding in this sense.
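Here is a toy reconstruction of that picture, with made-up numbers rather than anything drawn from their paper: a single rational prior over four worlds, with a posterior counting as rational just in case it comes from conditioning that prior on the agent’s total evidence.

```python
# A toy version of the Dogramaci-Horowitz picture (my reconstruction,
# with invented numbers): one rational prior, and posteriors rational
# just in case they come from conditioning it on total evidence.

RATIONAL_PRIOR = {"w1": 0.4, "w2": 0.3, "w3": 0.2, "w4": 0.1}

def conditionalize(prior, evidence):
    """Condition a prior on evidence, represented as a set of worlds."""
    total = sum(p for w, p in prior.items() if w in evidence)
    return {w: (p / total if w in evidence else 0.0) for w, p in prior.items()}

# Your total evidence is E = {w1, w2}; your rational posterior is fixed:
your_posterior = conditionalize(RATIONAL_PRIOR, {"w1", "w2"})

# So if I learn that your credences are rational and that your evidence
# includes mine, I can simply adopt your posterior as my own.
print(your_posterior)  # {'w1': 0.571..., 'w2': 0.428..., 'w3': 0.0, 'w4': 0.0}
```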
I won’t go into their argument further here, but Jonathan Weisberg and I have said a little about why we think the conclusion is too strong in Section 8 of this paper.
Why epistemic rationality? My answer
Like Craig, and like Dogramaci and Horowitz, I think we make distinctively epistemic evaluations—such as the evaluation of epistemic rationality—in order to help us identify when to learn what from the assertions of the people we encounter. Why is it important to distinguish the epistemic rationality of some credences from their pragmatic rationality, then? For two reasons.
First: for each of us, the pragmatic side of the division between epistemic values and pragmatic values is populated by a distinctive, idiosyncratic set of values. You care most about these people, I care most about these other ones; you care about football, I care about musical theatre; and so on. So credences that are pragmatically or all-things-considered rational for you may not be pragmatically or all-things-considered rational for me. Learning they’re pragmatically or all-things-considered rational for you of course gives me some evidence, but it’s complex and I’d need to do a lot of unpacking to use it to update my own credences. On the other hand, if I learn your credences are epistemically rational—and, let’s say, I know you have all the evidence I have and more, and we began with roughly the same priors—then I can simply take on your credences as mine. And that’s because the same values—indeed, just the single value of accuracy—sit on the epistemic side of the division for both of us.[3]
Second: even if we have the same pragmatic values, or roughly similar—both care about the same people, both love musical theatre—then it is still the case that what is pragmatically or all-things-considered rational for you might not be for me. After all, it might easily be that your credences have some causal power that mine don’t. The murderous dictator who threatens Jack might not give a hoot what credences I have. And so learning that it is pragmatically or all-things-considered rational for Jack to be confident it’s Lawrence doesn’t make it pragmatically or all-things-considered rational for me to be similarly confident. But, should I learn that Jack is very confident Lawrence didn’t do it and that this confidence is epistemically rational for Jack—for remember, Jack has extremely compelling DNA evidence against Lawrence’s guilt—under many conditions that would make it epistemically rational for me to have that confidence as well.
So, it’s an important practice to mark when credences of others that we learn are epistemically rational, rather than pragmatically or all-things-considered rational. Doing so improves the quality of the credences we acquire by attending to that information. And that’s the reason we pick out this particular type of ought and should and epistemic evaluation.
Coda on the value-free ideal in science
It is something like these reasons that underpin what is true in the value-free ideal in science. Of course, the role of values cannot be eliminated in lots of aspects of science: what topic you as a scientist investigate is determined by what you think is important, the incentives around you, and so on; the way you gather evidence will also be guided by your values; and indeed Stephanie Harvard and Eric Winsberg make a compelling case that the way you represent complex systems in the models you use to make predictions about them or to try to understand them will be driven again by what you consider important, as will the trade-offs you’re prepared to tolerate between accurate prediction concerning this variable and accurate prediction concerning this other one.
But there is another way in which values might enter into science.[4] The probabilities a scientist reports at the end of their investigation might not be the ones that are epistemically rational in the light of their evidence. They might instead be the ones that are pragmatically rational for these scientists, or that these scientists judge to be pragmatically rational for the people to whom they communicate their results—the ones they judge will lead these people to make pragmatically better choices.
In these cases, it seems to me, what goes wrong is two-fold: first, the scientists in question are not best placed to judge what values their audience have, and so are not best placed to judge what decisions they will make on the basis of which credences; second, the scientists in question are not best placed to predict what decisions their audience will face with the credences they receive from the scientists. For it is a crucial feature of credences that we set them and then we use them for whatever decision we face. They are an all-purpose part of our decision-making toolkit. We don’t use different credences depending on the choice we face. And so if you’re stating the results of scientific investigation, and you know people will form particular credences on the basis of your statement, then you must bear in mind that those credences could well be used to face any number of decisions you haven’t even considered. It then seems important to have a default convention on which no-one tries to predict how people will use their credences or what decisions they will face with them, and on which everyone instead simply reports the credences that are epistemically rational in the light of their evidence.
I take this to be one of the aims of epistemic utility theory, or accuracy-first epistemology, as it’s sometimes known. It is to discover and justify norms that govern the credences we report to one another under this convention, where what we report should be the epistemically rational credences. Perhaps Probabilism is not a general norm for credences, since it doesn’t apply to the person who is threatened with suffering if they remain probabilistically coherent; and similarly for Conditionalization. But it does apply to the credences we should report to one another in the practice of science, because those should be the epistemically rational ones.
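For concreteness, here is a toy statement of Probabilism in the finite case; this is the standard formulation, not anything specific to the reporting convention just described, and Conditionalization is just the conditionalize rule from the earlier sketch.

```python
# Probabilism, finite case: credences over mutually exclusive, jointly
# exhaustive worlds are non-negative and sum to one. (Standard
# formulation; nothing here is specific to any one paper.)

def satisfies_probabilism(credences):
    """Check whether a credence assignment over a partition of worlds
    obeys the probability axioms in the finite case."""
    values = list(credences.values())
    return all(c >= 0 for c in values) and abs(sum(values) - 1.0) < 1e-9

print(satisfies_probabilism({"guilty": 0.95, "not guilty": 0.05}))  # True
print(satisfies_probabilism({"guilty": 0.95, "not guilty": 0.25}))  # False
```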
Notes

1. In fact, I’ve argued before that it is the whole story, but the Craigian considerations I’ll describe at the end persuade me I was wrong about that. In that earlier paper, I offered an error theory for our intuitive judgment that Jen is not epistemically rational in the case described at the beginning of the post. I suggested that to be epistemically rational is to do the thing that best promotes accurate credences—what Sylvan would call genuine veritist consequentialism—and so Jen is epistemically rational. However, I continued, it is often difficult and time-consuming to calculate which option best promotes accurate credences in a given case—the cluelessness worry about ethical consequentialism recognises this—and so, in order to make our intuitive judgments quickly, we fall back on a heuristic that gets things right in most cases we actually encounter, but trips up on cases like Jen’s that are a little less everyday. The heuristic is simply to ask whether the credences are supported by the person’s evidence. In most cases, this is a good proxy for whether the credence best promotes accuracy.
2. Konek and Levinstein take their view to be consequentialist, while Sylvan takes his not to be. But I think the difference there is terminological rather than substantial.
3. As Jonathan and I show in the paper mentioned above (Section 8, here), you can actually drop the shared prior assumption—providing I know what prior you did have, I can factor it out when I incorporate your credences with mine. (There’s a sketch of how this works just after these notes.)
4. From reading Liam Kofi Bright’s exegesis of W. E. B. DuBois’s writings on this, I see that he had this worry. As Bright says: “a large part of [DuBois’s] complaint concerning American historians dealing with the US Civil War and Reconstruction era was that they had let considerations of promoting social harmony and guiding policy shape what information they presented.”
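To illustrate the factoring idea in note 3, here is a hedged sketch; the function names and numbers are mine, not the paper’s. Since your posterior is your prior multiplied by the evidential impact and then renormalized, dividing your posterior by your prior recovers that impact, which I can then apply to my own prior.

```python
# My gloss on factoring out a prior, with invented numbers: recover the
# evidential impact of your update, then apply it to my prior instead.

def normalize(d):
    total = sum(d.values())
    return {w: v / total for w, v in d.items()}

def factor_out(your_posterior, your_prior, my_prior):
    """Transfer the evidential impact of your update onto my prior:
    posterior/prior is proportional to the evidence's likelihood."""
    return normalize({
        w: my_prior[w] * (your_posterior[w] / your_prior[w])
        for w in my_prior
    })

your_prior     = {"w1": 0.5, "w2": 0.5}
your_posterior = {"w1": 0.8, "w2": 0.2}  # after your evidence
my_prior       = {"w1": 0.3, "w2": 0.7}

print(factor_out(your_posterior, your_prior, my_prior))
# {'w1': 0.631..., 'w2': 0.368...} -- my prior, updated on your evidence
```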
Comments

This is an interesting set of thoughts, but I’m not sure it quite distinguishes the Truth Fairy case, unless we know something about how the Truth Fairy operates. Depending on how we imagine it, I can go more towards the bullet-biting response that you went for earlier, or more in the direction suggested here.
Does the Truth Fairy give lots of free true beliefs to everyone who forms this belief, or just to Jen in particular? If the Truth Fairy is on the lookout for people who form this belief, and gives all these people lots of free truths (or free evidence, or whatever), then I’m inclined to say that Jen is a good person to follow, even if we know she believes this only because of the Truth Fairy’s offer. On the other hand, if the Truth Fairy’s offer stands only for Jen, and not for everyone, then we shouldn’t follow Jen, and so we shouldn’t call her epistemically rational.
This matters because I suspect that a lot of non-idealized science works in Truth-Fairy-like ways. Should I reject a hypothesis just because of a single falsifying piece of anomalous evidence? (Maybe it’s anomalies in the orbit of Uranus, or of Mercury, or whatever.) Once I have the anomalous evidence, I’m sure that the theory plus auxiliaries is incorrect. However, I also know from experience that holding on to the theory plus auxiliaries gives me lots and lots of true beliefs about everything else, while giving it up in a Popperian way deprives me of lots of truth. Any time I hear people talking about theoretical virtues like simplicity, explanatoriness, or whatever, I imagine them working something like this: this is a kind of virtue that doesn’t make the theory itself any more likely to be true, but that, from experience, seems to mean the theory is likely to give us access to lots of other truths, and thus it might be epistemically rational to seek these virtues.
Consider instead the question: should J accuse Lawrence? For each value of J, the answer seems to be ‘Yes, probably’.
Now suppose that (as stipulated) people can form beliefs by an act of will, and combine that with the (at least as plausible) stipulation that others can detect lies. Then, to accuse Lawrence successfully, J must form the belief that he is guilty.