In a recent paper, ‘Practical reasons to believe, epistemic reasons to act, and the baffled action theorist’, Nomy Arpaly asks two questions: First, can we ever have non-epistemic reasons for believing? For instance, can we have a pragmatic reason to believe something? Does the fact that I will be happier if I have a particular belief give me reason to have it? And, second, can we ever have epistemic reasons for doing anything other than believing? For instance, can we have epistemic reasons for inquiring? Does the fact that I will, in expectation, have more accurate credences if I carry out a particular experiment give me an epistemic reason to do that?
Arpaly answers no to both questions: beliefs are the only things for which you can have epistemic reasons, and the only reasons for belief you can have are epistemic ones. I want to argue against the first of these claims in particular.
Let’s take the specific question whether we have epistemic reasons to inquire in particular ways. A couple of posts ago, I described Graham Oddie’s version of I. J. Good’s Value of Information theorem, which shows that, in certain familiar cases, inquiring will result in more accurate credences in expectation. Does this give us epistemic reason to inquire in these cases? Arpaly says no. She says what here looks like an epistemic reason for inquiring is in fact an instrumental reason for someone whose goal is accuracy. Arpaly agrees that the following is a norm: If your goal is accurate belief, you should inquire in such-and-such a case. But it is not an epistemic norm any more than the following is an aesthetic norm: If you want a lovely house, you should save money until you can afford one. They are both instrumental norms that explain what means we have reason to take if we have certain ends.
My response is that, in the end, all epistemic norms are instrumental norms for those who value accuracy. This is the core tenet of accuracy-first epistemology, with its veritist axiology for beliefs and credences and its teleological conception of rationality and epistemic normativity. And that’s what I want to argue for in this post.
We can get a sense of what Arpaly takes an epistemic reason to be by considering her Sinking Heart Intuition, which she uses to argue against Susanna Rinard's claim that there are practical reasons for believing.
Imagine that you have cancer and you do not yet know if the course of chemotherapy you have undergone will save you or not. You sit down at your doctor’s desk, trying to brace yourself for news, aware that at this point there might be only interim news—indications that a good or a bad outcome is likely. The doctor says there are reasons to be optimistic—to believe that everything will come out OK. Though you are still very tense, you perk up and you feel warm and light all over. You ask what the reasons are. You’re all ears. In response, the doctor tells you about ironclad scientific results showing that optimism is good for the health of cancer patients.
Your heart sinks. You experience a very bitter disappointment and will probably be angry at the doctor for the misleading way he put his point.
According to Arpaly, what the doctor gives you is not an epistemic reason for optimism—that is, for high credence that the chemo worked. It is not even a practical reason for believing. What he gives you instead is a practical reason for trying to bring yourself to be optimistic. For Arpaly, there can be no practical reasons for believing or having high credence. There can only be epistemic reasons. And an epistemic reason for having high credence that the chemo worked is the sort of thing you were expecting to hear when he said there's reason to be optimistic, namely, some fact that stands in the evidential support relation to the proposition that the chemo worked.
So Arpaly would like to distinguish an epistemic reason to believe a proposition, which is simply a fact that supports that proposition, from an instrumental reason to do things that will lead you to have a belief with certain desirable properties, such as being true or being accurate or counting as knowledge. But I think there are two related problems with this.
First, it isn’t clear why a fact that evidentially supports a proposition counts as an epistemic reason for belief on Arpaly's view, rather than an instrumental reason for someone whose goal is to have evidentially-supported beliefs. Arpaly’s view requires that there is a goal whose fulfilment gives reason to believe, but which is sufficiently closely tied to the notion of the epistemic that it gives you epistemic reason to believe whether or not it is one of your own goals, and so is not merely an instrumental reason for belief for those who have the goal. But why think having evidentially-supported beliefs is such a goal, while having accurate beliefs is not? What is so special about the goal of having beliefs that are well supported by your evidence that fulfilling it gives a special sort of reason for belief that is independent of your own goals? Indeed, it seems to me that the situation is almost exactly the other way round: it is the goal of having true beliefs or accurate credences that is so closely tied to the notion of the epistemic that being a good means to fulfilling that goal provides epistemic reasons for belief regardless of whether it is explicitly your goal; the goal of having evidentially-supported beliefs derives from this primary epistemic goal.
This brings us to the second problem with Arpaly’s argument. There are two ways in which the goal of having evidentially-supported beliefs is a secondary goal that derives from the primary goal of accuracy. First, when we ask why we want beliefs that are supported by our evidence, we answer that they are more likely to be true than beliefs that are not. We have the goal of apportioning our beliefs to the evidence precisely because it serves the more fundamental goal of having accurate beliefs—it is the best means to that end, in some sense. Second, I want to argue that, when we ask what it means to have beliefs that are supported by our evidence, we ultimately give an analysis that appeals crucially to the accuracy of the beliefs we thereby have. I’ll argue for this now by working through alternative analyses of the evidential support relation and showing that they either don’t work or can’t be shown to stand in the normative relationship to rational credence that they must stand in to do the work Arpaly needs them to do.
Arpaly assumes that there is a relation of evidential support that holds between propositions: It seems that there is an external world supports There is an external world; Ada says that it is raining supports It is raining; The CT scan shows no tumour supports The chemo worked; and so on. This relation comes in degrees, so that It seems that there is an external world supports There is an external world strongly, perhaps, while, if Ada is not a wholly reliable witness, Ada says that it is raining might support It is raining rather weakly. Arpaly thinks that, when this relation holds strongly enough between your evidence and a proposition, you have an epistemic reason to believe the proposition. How should we understand this relation?
The standard approach is to posit something like an evidential probability function, which takes a body of evidence and a proposition and returns a measure of how likely that evidence makes that proposition.1 A proposition is supported by a new piece of evidence relative to some background evidence if its evidential probability given the background evidence and the new evidence is greater than its evidential probability given the background evidence alone, and the degree of support increases with the difference the new evidence makes.
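To fix ideas, here is the standard regimentation (the notation is mine): write $\Pr$ for the evidential probability function, $B$ for the background evidence, and $E$ for the new piece of evidence. Then $E$ supports $X$ relative to $B$ just in case

$$\Pr(X \mid B \wedge E) > \Pr(X \mid B),$$

and the degree of support might be measured by, for instance, the difference $\Pr(X \mid B \wedge E) - \Pr(X \mid B)$.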
Accounts of evidential probabilities fall into three categories: (i) on the first, they are generalized logical consequence relations; (ii) on the second, they are the credences that rationality requires; (iii) on the third, they are primitive. I want to argue that the first and third are untenable, and the second is best understood by appealing to the accuracy of the credences. So, on our best understanding, the evidential support relation is ultimately analysed in terms of evidential probabilities, which are in turn ultimately analysed in terms of the accuracy of the credences to which they give rise. And so, if it is true that accuracy does not give epistemic reason to believe but instead gives instrumental reason to believe for those with that goal, then evidential support ultimately also gives only instrumental reason to believe for those with the goal of accuracy.
Evidential probabilities as logical probabilities
We find the idea of evidential probabilities first explicitly described in John Maynard Keynes’ A Treatise on Probability—though perhaps we see the beginnings of the idea in Hume’s talk of the wisdom of proportioning belief to evidence in Section 10 of the Enquiry. Keynes thought that the evidential support relation was a generalization of the logical consequence relation. Take the following two arguments:
(1) Everything is material; therefore, Bluebell is material.
(2) Nearly everything is material; therefore, Bluebell is material.
The premise of the first logically entails its conclusion, while the premise of the second strongly supports but does not entail its conclusion. Keynes thinks we are positing the same relation between premise and conclusion in the two cases; it’s just that the relation holds with maximal strength in the first instance and with high-but-not-maximal strength in the second. And he proposes to analyse this relation using logical probabilities. The degree of evidential support given to the conclusion by the premise is the logical probability of the conclusion conditional on the premise.
Positing these logical probabilities and using them to understand talk of evidential support and the strength of non-deductive argument is all very well, but if we are to use them to understand the sorts of epistemic reason that Arpaly posits, we need bridge principles between them and our rational beliefs and credences. Perhaps the principle says: If you believe the premises of an argument, and the logical probability of its conclusion given the conjunction of those premises lies above a particular threshold, then you should believe the conclusion. Or perhaps it says: Your credence in a proposition should be equal to its logical probability conditional on your total evidence. The problem is that it's hard to see why either principle should hold.
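Stated schematically (the labels and the threshold $t$ are mine): the doxastic version says that if you believe $P_1, \dots, P_n$ and the logical probability $\Pr(C \mid P_1 \wedge \dots \wedge P_n)$ exceeds some threshold $t$, then you should believe $C$; the credal version says that your credence function $c$ should satisfy

$$c(X) = \Pr(X \mid E) \quad \text{for every proposition } X,$$

where $E$ is your total evidence.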
To see this, consider how we might justify a bridge principle that connects full logical entailment and belief. Take the following principle: You may believe anything logically entailed by your evidence. It has a pretty straightforward motivation: if a set of propositions logically entails a proposition, then whenever the former are all true, the latter is also true. And so, if we assume evidence is factive, the bridge principle can be justified by noting that believing a consequence of your evidence will always lead to a gain in true beliefs, and this is always permissible.2 The problem is that, if we move from what for Keynes is the limit case in which full logical consequence holds between premises and conclusion to the more general case in which the logical probability of the conclusion given the premises is not maximal, we have no recourse to this sort of justification for our bridge principles. Of course, we might point out that, in those cases, following the bridge principle leads to beliefs that are likely to be true; but the probability involved is the evidential probability in question, and we’ve been given no reason to think that having beliefs that are likely to be true relative to those probabilities is something desirable. For any terrible way of forming beliefs, there is some probability relative to which the beliefs formed are likely to be true, but that doesn’t make it a good way of forming beliefs!
Part of the problem here is that Keynes does not tell us enough about what makes a particular proposition have a given logical probability given a specific body of evidence. And so we cannot assess whether we have epistemic reason to align our credences or beliefs with it. But it's interesting to note that a similar problem arises for Rudolf Carnap, who is absolutely explicit about what determines logical probabilities.
It is worth noting upfront that Carnap takes his logical probabilities to be relative to a framework we choose, which includes a linguistic component. And indeed he later became reasonably permissive and pragmatic about how you pick logical probabilities within your chosen framework for your inductive purposes. But let’s focus on his most well-known suggestion. Suppose the framework we have chosen has a simple language containing only a single predicate F and two constants a and b. Then a state description specifies, for each of a and b, whether F holds of it or not. So there are four state descriptions: Fa&Fb, Fa&~Fb, ~Fa&Fb, and ~Fa&~Fb. Carnap then considers a sort of symmetry: two state descriptions are symmetrically related if one is the result of taking the other and permuting the constants. So Fa&~Fb is symmetrically related to ~Fa&Fb, because you obtain the first from the second by swapping a and b, but Fa&Fb is symmetrically related to neither. Carnap makes two claims about this symmetry: first, any two state descriptions that are so related should receive the same probability; second, if we take equivalence classes of state descriptions under this relation, and for each one take the disjunction of the state descriptions it contains and call these disjunctions structure descriptions, then each structure description should receive the same probability. So, in our case (a short computational sketch follows the list):
the equivalence classes are {Fa&Fb}, {Fa&~Fb, ~Fa&Fb}, and {~Fa&~Fb};
the structure descriptions are Fa&Fb, Fa&~Fb v ~Fa&Fb, and ~Fa&~Fb; and
the logical probability function is P(Fa&Fb) = P(Fa&~Fb v ~Fa&Fb) = P(~Fa&~Fb) = 1/3 and P(Fa&~Fb) = P(~Fa&Fb) = 1/6.
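For concreteness, here is a minimal Python sketch (my own illustration; the representation is not Carnap’s) that computes this probability function, often called m*, for the toy language above:

```python
from itertools import product

constants = ("a", "b")

# A state description assigns F or ~F to each constant.
state_descriptions = [dict(zip(constants, vals))
                      for vals in product([True, False], repeat=len(constants))]

# With a single unary predicate, permuting the constants preserves exactly
# the number of constants satisfying F, so that number indexes the
# equivalence classes, i.e. the structure descriptions.
structures = {}
for sd in state_descriptions:
    structures.setdefault(sum(sd.values()), []).append(sd)

# Each structure description gets equal probability, shared equally among
# the state descriptions it contains.
m_star = {}
for members in structures.values():
    for sd in members:
        m_star[tuple(sorted(sd.items()))] = 1 / (len(structures) * len(members))

for sd in state_descriptions:
    label = "&".join(("" if sd[c] else "~") + "F" + c for c in constants)
    print(f"P({label}) = {m_star[tuple(sorted(sd.items()))]:.3f}")
# P(Fa&Fb) = 0.333, P(Fa&~Fb) = 0.167, P(~Fa&Fb) = 0.167, P(~Fa&~Fb) = 0.333
```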
Carnap thought that probability functions that respect these symmetries count as logical because the full logical consequence relation respects these symmetries as well: it is invariant under permutation of constants. But of course this doesn’t help us say why the bridge principles between logical probability and rational credence should hold. For whatever epistemic normativity the logical consequence relation has comes not from its invariance under permutation of names, but from its truth preservation, as we saw above. Logical probabilities do not inherit any of the epistemic lustre of the logical consequence relation in virtue of both respecting the same symmetries.
The Teleological Objection
And so there is an objection to Keynes’ and Carnap’s approaches to logical probability that is a particular instance of what we might call the teleological objection. It asks: what is good about matching your credences to the logical probabilities conditional on your evidence? What goes wrong for you if you don't? If, for instance, I could see that I would have more accurate credences were I not to match the logical probabilities, either for sure or in expectation, what reason would I have to continue to match them? And if there is no way this could happen, is it not the fact that I couldn't foreseeably improve my accuracy for sure or in expectation by moving from these credences that renders them rationally permissible, and not the fact that they are logical in the senses that Keynes and Carnap consider?
I see these questions as belonging to a broad class of objections, some of which we make in the moral realm, some in the realm of prudential rationality, and some in the realm of epistemic rationality. For instance, in the case of prudential rationality, if someone tells you to eschew gluttony and practice temperance, to consume enjoyable things in moderation and not too much, it is reasonable to reply: ‘But why? What benefit does this self-denial secure for me? If I gain more enjoyment from my current consumption than I would from temperance, what reason have I for adopting temperance?’ And we might see Mill’s Harm Principle as furnishing a similar style of objection in the case of morality. In response to those who maintain that polyamorous relationships are immoral, we might appeal to Mill’s principle to reply: ‘But why? What harm is done to any person by engaging in them? What protection from harm do we secure by avoiding them?’ Similarly in the epistemic case: when we are told to match our credences to the logical probabilities conditional on our evidence, we want to know why; we want to know what good this secures for us and what bad it avoids.
Of course, there are moral cases in which the Harm Principle doesn’t seem to give the right result. Suppose someone has committed egregious wrongs in their life, harming many others. Living in almost complete isolation now, they are nonetheless extremely happy. Those who think their past wrongs make them undeserving of this happiness would prefer that they be unhappy, perhaps even that they suffer, even though, because of their isolation, this would not make anyone else happier. The Harm Principle objects to that: ‘What good is done by making them suffer? None. What benefit accrues to anyone? None. So there is no reason to prefer that situation.’ But those who think this person doesn’t deserve the happiness they have think that it is right for people to receive happiness in proportion to what they deserve.
I suspect there are some who feel like this about beliefs: even if we could gain greater accuracy by going beyond our evidence, we shouldn’t, because in some sense we don’t deserve that greater accuracy since we don’t have evidence that warrants us getting it. But I think when this motivation is spelled out like this, it becomes clear that it is not reasonable. The accuracy you obtain for your beliefs is not like the happiness a person obtains for themselves. It is not something that has to be earned or deserved. There is no sense in which it is bad to get an undeserved amount of accuracy by going beyond your evidence. I think this is the heart of William James’ objection to W. K. Clifford’s maxim: “It is wrong always, everywhere, and for anyone to believe anything on insufficient evidence”.
Carnap himself agrees that the symmetry he demands for logical probabilities cannot also be demanded of rational credence:
symmetry must be required for a [logical probability function] only because it is meant to represent credibility, not credence. A non-symmetric credence function may still be rational.
So, in sum: the logical probability functions of Carnap and Keynes do not furnish us with the sort of evidential probabilities that in turn can give the facts about evidential support that Arpaly takes to be the only facts that give epistemic reason to believe.
Evidential probabilities as rational credences
Perhaps in partial recognition of this, most subsequent accounts of evidential support (with the notable and influential exception of Tim Williamson’s, to which we’ll return below) do not posit a separate thing called an evidential probability function and then seek bridge principles that relate that thing to rational credences. They instead say something like this: for any body of evidence, there is a unique credence function that is rationally required for anyone who has that body of evidence; a new piece of evidence supports a proposition against a background body of evidence if the unique credence function required for someone with the background and the new evidence assigns a higher probability to that proposition than the unique credence function required for someone with the background evidence only. Clearly, given this approach, there is no need for a bridge principle, and so the burden falls instead on the claim that there is always such a unique credence function. How might we argue for this? There are at least four arguments available. The first two fall to a version of the teleological objection discussed above; the second two don’t, but they rely on unwarranted assumptions about what risks rationality requires us to avoid.
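In symbols (my regimentation, not theirs): writing $c_B$ for the unique credence function rationally required of someone whose total evidence is $B$, the proposal says that $E$ supports $X$ against background $B$ just in case

$$c_{B \cup \{E\}}(X) > c_B(X).$$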
The first argument is due to E. T. Jaynes. We might see him as an inheritor of W. K. Clifford’s legacy. Like Paris and Vencovská and Jon Williamson, whose arguments we’ll consider shortly, Jaynes conceives of our evidence not primarily as a set of propositions on which we should condition our prior credences, but as a set of constraints placed on our new credences once we acquire it. So, for instance, learning a proposition is true constrains your future credences by demanding that you assign maximal credence to the learned proposition. Jaynes then claims that rationality requires us to respect our evidence by satisfying the constraints it places on our credence function, and moreover to pick, from among the credence functions that respect our evidence, the one that is maximally unbiased or maximally uncertain. He then gives an argument for measuring the uncertainty or lack of bias in a probability function using so-called Shannon entropy. Putting all this together, he arrives at the inference principle known as MaxEnt: your credence function should be the one, among those that satisfy the constraints placed by your evidence, that maximizes Shannon entropy.
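To see the rule in action, here is a minimal sketch (the three-possibility space and the particular constraint are my own toy example; it uses scipy’s general-purpose optimizer rather than any dedicated MaxEnt routine):

```python
import numpy as np
from scipy.optimize import minimize

# Toy example: three mutually exclusive, exhaustive possibilities w1, w2, w3.
# Suppose your evidence constrains your credences by demanding P(w1) + P(w2) >= 0.8.

def neg_shannon_entropy(p):
    # Negated, since scipy minimizes; clip to avoid log(0).
    p = np.clip(p, 1e-12, 1.0)
    return float(np.sum(p * np.log(p)))

constraints = [
    {"type": "eq", "fun": lambda p: np.sum(p) - 1.0},      # probabilities sum to 1
    {"type": "ineq", "fun": lambda p: p[0] + p[1] - 0.8},  # the evidential constraint
]

result = minimize(neg_shannon_entropy, x0=np.full(3, 1 / 3),
                  bounds=[(0.0, 1.0)] * 3, constraints=constraints,
                  method="SLSQP")
print(np.round(result.x, 3))  # [0.4 0.4 0.2]: as uniform as the evidence allows
```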
The problem with Jaynes’ suggestion is, again, that we are given no reason to think that unbiased or uncertain credence functions are better than biased or more certain ones. As before, the teleological objection looms: What do we gain by restricting ourselves in the way required? If by becoming more biased or more certain I might increase the accuracy of my credences, what reason would I have not to do so?
The second argument, developed over a series of papers by Jeff Paris and Alena Vencovská, faces the same problem. Instead of offering a measure of uncertainty or lack of bias and then saying you should maximize that within the bounds placed on your credence function by your evidence, they give axioms for an inference rule that takes such constraints and returns a set of probability functions that satisfy them. And they show that the only inference rule that satisfies those axioms is the MaxEnt rule we just described. But Paris and Vencovská’s justification relies on an assumption similar to Jaynes’: as they put it, the result of the inference rule “should not make any assumptions beyond those contained in [the set of constraints]”. In particular, if the constraints imposed by your evidence do not demand that you treat a set of propositions as dependent on one another in any way, then your credences should treat them as independent. Again, the teleological objection arises: Why is it better not to go beyond the evidence in the way they describe? What does this gain for you? What bad things does it ensure you avoid?
We cannot raise the teleological objection against the final two arguments, since they are themselves teleological—that is, they argue that there is a uniquely rational credence function precisely by identifying the benefits that accrue to those who adopt it. Jon Williamson shows that the credence function demanded by the MaxEnt principle—the one considered uniquely rational by Jaynes and Paris & Vencovská—minimizes worst-case expected loss for a particular loss function, and argues that this is the loss function we should assume we face in the absence of any evidence. Roughly speaking, the idea is that maximizing entropy within the limits of your evidence will lead you to make fewer risky choices that go wrong in their worst-case scenario, while the further you lie from the entropy-maximizing credence function, the more such choices you will make. So, if we are cautious, we should try to stay as close to the MaxEnt credence function as possible within the limits set by our evidence. But of course this will be rationally compelling only if we are rationally required to be cautious in this way, and I see no reason why rationality would demand that. Why not instead try to minimize the best-case rather than the worst-case loss? Why would rationality require one rather than the other?
A similar objection can be raised against my own argument in this area. It seeks to establish not the MaxEnt principle in full generality, but the version of the Principle of Indifference that says you should divide your credences equally over all the possibilities grained as finely as your language or conceptual scheme allows. Like Oddie’s epistemic version of the Value of Information theorem, my argument appeals to strictly proper epistemic utility functions. The argument turns on the following fact: the credence function whose epistemic utility in its worst-case scenario is highest is the uniform distribution; that is, any other credence function has less epistemic utility in its worst-case scenario than the uniform distribution has in its worst-case scenario. But of course the same question arises about this argument as arose for Jon Williamson's, namely, why would rationality require such attention to the worst-case scenario? Why not pick a credence function whose best-case epistemic utility is maximal?
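Here is a small numerical check of the fact the argument turns on (a sketch under assumptions of my own: it uses the Brier score as the strictly proper measure, reads accuracy as negative Brier penalty, and fixes four possibilities):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4  # four mutually exclusive, exhaustive possibilities

def brier_accuracy(credence, world):
    # Accuracy as negative Brier penalty: the ideal credence function at a
    # world is the indicator function of that world.
    truth = np.zeros(len(credence))
    truth[world] = 1.0
    return -float(np.sum((credence - truth) ** 2))

def worst_case(credence):
    # The epistemic utility of the credence function in its worst-case world.
    return min(brier_accuracy(credence, w) for w in range(len(credence)))

uniform = np.full(n, 1 / n)
print(worst_case(uniform))  # -0.75 for n = 4

# Every other credence function does worse in its worst case:
for _ in range(10_000):
    p = rng.dirichlet(np.ones(n))  # a random probability vector
    assert worst_case(p) <= worst_case(uniform) + 1e-12
```

Behind the numerical check is a closed form: for the Brier score, the worst-case accuracy of a credence function $p$ over $n$ possibilities is $-(\sum_i p_i^2 + 1 - 2\min_i p_i)$, and this is uniquely maximized by the uniform distribution.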
The upshot is this: if we take evidential probabilities to be logical probabilities, in the way that Keynes and Carnap did, or if we take them to be the most uncertain or unbiased credences that satisfy our evidential constraints, or the credences that do not go beyond what those evidential constraints demand, in the way Jaynes and Paris & Vencovská do, we must say why credences are rationally required to align with them; on the other hand, if we take them to be those that get us some pragmatic or epistemic good in the most cautious way possible, we must say why we should be so cautious. And so none of these proposals gives us the sort of evidential probabilities that Arpaly needs to support the distinction between epistemic reasons and purely instrumental reasons for people with particular epistemic goals.
Evidential probabilities as measures of intrinsic plausibility
There is one approach to evidential probabilities that I haven’t discussed yet. It is due to Tim Williamson. He proposes an account of evidential probabilities that lies somewhere between the logical account offered by Carnap and the reductive account that identifies them with rational credences. For Williamson, as for Keynes, evidential probabilities are pretty much taken as primitive. He says the evidential probability of a proposition measures its “intrinsic plausibility” in the absence of evidence, but he says little more than that.
Why does Williamson do this? On the one hand, he is moved by Nelson Goodman’s new riddle of induction and so rejects Carnap’s syntactic approach to logical probabilities. On the other hand, he points out that analysing evidential support in terms of evidential probabilities, and then analysing evidential probabilities in terms of rationally required credences, gives implausible verdicts. Take someone, for instance, who is very good at introspecting the credences they assign to logical truths. They hold the proposition before their mind and they get a strength of feeling about it, and that gives them a very reliable indication of the strength of their credence in it; and they’re fully aware of how reliable this process is. Now, it’s natural to think that there are at least some logical truths to which rationality requires us to assign very high credence—those with a low degree of complexity, for instance. Suppose p is one of these simple logical truths, and suppose our reliable introspector has strong evidence that she has a low credence in p. Then we want to say that this evidence supports the proposition that she has a low credence in p, and so the evidential probability that she has a low credence in p is high conditional on her evidence. And yet that’s not the credence she’s rationally required to have. Instead, by hypothesis, rationality requires her to be certain of p, and therefore to have strong evidence that she’s certain of p, and so to have a low credence that she has a low credence in p. So evidential probability and rationally required credence seem to come apart.3
Between them, these two worries lead Williamson to his primitivist position: unconditional evidential probabilities measure intrinsic plausibility absent evidence, and the evidential probability of a proposition given a body of evidence is the evidential probability of the proposition conditional on the evidence, where these conditional probabilities are defined from the unconditional probabilities in the usual way. But of course this faces the teleological objection just as much as the logical probabilities of Carnap and Keynes or the unbiased probabilities of Jaynes and Paris & Vencovská. Why should we match our initial credences to this measure of intrinsic plausibility? What good does that obtain for us? What is it about this plausibility that we value epistemically and that we would forego were we not to match our credences to the evidential probabilities conditional on our evidence?
Of course, you might hold that to proportion your initial credence in a proposition to its intrinsic plausibility is an epistemic good in itself, and so there is something that you gain by doing so. And that would answer the teleological objection, if we could understand intrinsic plausibility in such a way that it is an epistemic good. But that pushes us to ask what Williamson could possibly mean by the intrinsic plausibility of a proposition absent any evidence.
What do we say when we describe a proposition as plausible? There are descriptive and prescriptive uses. Sometimes, we mean to say that, upon considering it, a person will assign it high credence—e.g., the defendant’s version of events is plausible. Sometimes, though, we say that such a proposition seems plausible, signalling that we intend a prescriptive meaning on which there can be a gap between what a person actually comes to give high credence to and what is truly plausible—the defendant’s version of events seems plausible until you think about it for a minute and realise it doesn’t add up. Both of these uses have a natural relational analysis: a proposition is plausible if it does or should receive high credence when it is considered. Williamson signals that he wants to avoid such a relational analysis by talking about the intrinsic plausibility of a proposition: not the credence a person would or should have in it were they to consider it, but some measure of something that doesn’t rely on a person considering the proposition and their response to doing so. But what reason have we to think that there is such a measure? What reason have we to think that there is any sense to a non-relational version of the concept?
Presumably, Williamson would reply that, as his argument described above shows, we use the concept in everyday life in a way that cannot be reduced to talk of rationally required credence, and that gives us reason to think there is a coherent non-relational version of the concept we’re picking out. I think that argument would fail even if Williamson’s argument against reductionism were to succeed: it’s quite plausible that our everyday concepts do not have a fully satisfactory analysis that covers even rather pathological applications such as the one at the heart of Williamson’s argument, but that doesn’t give us evidence that there is some completely coherent primitive concept independent of these analyses; the everyday concept of truth is likely a bit like this. But in any case, the argument against reductionism fails, as we can see by following Jennifer Carr’s account of the relationship between ideal and non-ideal epistemology, which draws on Angelika Kratzer’s semantics for ‘must’ and ‘can’. On Kratzer’s account, our modal notions—in our case, the notion of rational requirement—are based not simply on a description of the ideal, but on an ordering of the non-ideal states. This allows us to ask not only what is necessary or required simpliciter, but also what is necessary or required given certain constraints: what is required simpliciter is what holds in the ideal state; what is required given certain constraints is what holds in all the highest-ranked states in which the constraints hold. So, in Carr’s example, we can understand both the norm You should return your library books on time, and the norm If you don’t return your library books on time, you should pay the fine.
How does this help us to respond to Williamson’s argument? Well, one way to understand his example is like this. The person in question has low credence in the simple logical truth p and so has evidence that they have a low credence in p. But they are rationally required not to have this evidence, because they’re rationally required to have high credence in p, and if they were to have high credence in p, then they’d have evidence that they had that instead. So, in the ideal situation—the one at the top of Kratzer’s ordering—this person has different evidence from the evidence they in fact have. And that’s what seems to cause problems. But, when we ask for the evidential support that one proposition gives another, we ask not what credence a person assigns to it in the ideally rational situation that inhabits the top of Kratzer’s ordering, but what they assign to it in the highest level of that ordering at which their evidence is as it actually is. Of course, nearly always, these levels are the same; but, in pathological cases, such as Williamson’s, in which you have the evidence you actually have because you’re not fully rational, they are not. And so I think there is no reason to posit the sorts of evidential probability that Williamson does; we need only tweak the reductive account of evidential probability using the tools that Kratzer and Carr give us.
Evidential probabilities as rational credences redux
We embarked on this journey through the range of possible accounts of evidential probabilities in order to see whether there is anything that might play the required role in Nomy Arpaly’s distinction between an epistemic reason to believe a proposition—which, for her, is a fact that provides evidential support for the proposition—and an instrumental reason to believe it for someone who has certain epistemic values. I submit that there is nothing that can play this role. All the accounts either fall to the teleological objection or make unwarranted assumptions about the attitude to risk that rationality requires you to take.
In the end, then, there is a range of rationally permissible prior credence functions—how wide depends on just how radical your subjectivism is; there may be only one, though attempts to show as much have so far failed. One proposition supports another relative to a prior credence function if the credence assigned to the second conditional on the first is greater than the unconditional credence assigned to the second. And this is the only sort of evidential support relation there is: the only objective evidential support facts are those that hold relative to every rationally permissible prior, such as that a conjunction supports each of its conjuncts, or perhaps that The objective chance of rain is high supports It will rain. The first of these is demanded of every rational prior by Probabilism, the second by the Principal Principle, and we have accuracy arguments for both.
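To see why the first example qualifies, here is the one-line derivation (using only the probability axioms): for any rationally permissible prior $c$ with $c(A \wedge B) > 0$ and $c(A) < 1$,

$$c(A \mid A \wedge B) = \frac{c(A \wedge (A \wedge B))}{c(A \wedge B)} = \frac{c(A \wedge B)}{c(A \wedge B)} = 1 > c(A),$$

so any conjunction supports each of its conjuncts relative to every such prior.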
What determines which prior credence functions are rationally permissible? I think there are pragmatic considerations that show which credence functions are rationally permissible from the pragmatic point of view, and there are epistemic considerations that show which are rationally permissible from the epistemic point of view. On the epistemic side, I think it is considerations of accuracy or epistemic value of the sort adduced in Oddie’s version of the Value of Information theorem that determine what is epistemically rational.4 And so, to the extent that there are facts about evidential support, and therefore epistemic reasons of the sort that Arpaly seeks and epistemic norms in her sense, they are ultimately just instrumental reasons and instrumental norms that govern those who value accuracy.
On the view I’m sketching, what makes a reason epistemic is that it is grounded in the epistemic value of doxastic states that are closely connected to whatever the reason is a reason for; and what makes a norm epistemic is that it holds because of facts about epistemic value. For me, those facts are facts about the accuracy of the belief; but for others they might be something else, such as knowledge or understanding or wisdom.
Horowitz’s Overgeneration Objection
One worry about this sort of view, which Sophie Horowitz has raised, is that it over-generates epistemic reasons and epistemic norms.5 A detective is settling in for a long night working through the evidence against a suspect in order to decide whether or not to charge them in the morning. If she keeps drinking coffee, she’ll power on through to the early hours and read all of the relevant evidence; if she doesn’t, she’ll fall asleep at her desk at 4am and miss out on much of the evidence. On the view that I favour, the detective has epistemic reason to take in all of the evidence she can. But if that’s so, Horowitz challenges, surely she also has epistemic reason to drink coffee. And surely that’s absurd! While she has instrumental reason to drink the coffee, given that she values being accurate and drinking coffee will serve that end, she doesn’t have epistemic reason to do so.
In the end, I’m happy to bite the bullet here. The notion of an epistemic reason is not a pre-theoretic one; it is closer to a technical notion used in epistemology. So I don’t think we need to try too hard to respect whatever our philosophical intuitions are concerning its usage. And in the end, what harm is done by allowing lots of reasons to count as epistemic? It is not an accolade whose prestige we must preserve by awarding it only sparingly.
Having accurate credences vs making credences that are accurate
Nonetheless, it would be foolish to deny that there is some distinction to which Arpaly’s Sinking Heart Intuition draws attention. Even if we categorize both sorts of reason as epistemic, the reason for believing that one is cancer-free that is given by the fact that the CT scan shows no tumour is surely different from the reason for believing that one is cancer-free that is given by the fact that having that belief will cause it to be more likely that one is indeed cancer-free. I agree! But we can accommodate this within the teleological account of epistemic rationality that I’ve been sketching. We need only appeal to a distinction that Jason Konek and Ben Levinstein draw between the epistemic state one is in when one has particular credences, on the one hand, and the epistemic act of adopting that epistemic state, on the other. Epistemic states are evaluated by how much epistemic utility they have—for the accuracy-first epistemologist, how much accuracy they have. In contrast, epistemic acts are evaluated by how much epistemic utility they produce—for the accuracy-first epistemologist, how much accuracy they lead to. Konek and Levinstein hold that evaluations of epistemic rationality should pay attention only to the assessment of credences as epistemic states, and not to the acts of adopting them. But I think they can be assessed in both ways. And it is these two ways that are in play in Arpaly’s Sinking Heart example.
Some, such as Tim Williamson, take the evidential probability function to be primarily an unconditional probability function P(-) that measures how likely a proposition is absent any evidence, and then define the evidential probability of X given E to be the ratio P(X&E)/P(E) whenever P(E) > 0. Some, such as Janina Hosiasson-Lindenbaum and Rudolf Carnap, take it to be a binary, primitive conditional probability function P(- | -) and define the evidential probability of X given E to be P(X | E). And some, such as E. T. Jaynes, Jeff Paris and Alena Vencovská, and Jon Williamson, take a body of evidence E to impose constraints on the credence functions of those who have that evidence, take the evidential probability function given E to be a particular probability function P_E(-) that satisfies those constraints, and define the evidential probability of X given E to be P_E(X). We'll meet all of these below.
See Anna-Maria Asunta Eder’s excellent article for a detailed analysis of this argument.
Carolina Flores and Elise Woodard refer to Horowitz’s objection in their paper about epistemic reasons for inquiry.
I can't see the point of worrying about whether our reason for valuing information is epistemic or some other kind of reason. But if I were worried about it, I'd invent a new label like meta-epistemic.