Hi Richard, just a note of surprise about how you ended up glossing "K>TB" under Clarifying the Claim. I would have thought "K>TB" amounted to the claim that (for instance) my knowing that p would be epistemically better than my merely truly believing that p. (Here "my knowing that p" and "my merely truly believing that p" pick out possible states of affairs.) I worry that the way you put things is subject to the usual problems with counterfactual analyses (since it appeals to the "closeness" relation). E.g., to change the context, consider the claim that ecstasy is prudentially better than mere pleasure. I would gloss that as saying something like: my being in ecstasy would be better for me than my experiencing mere pleasure. On the other hand, we can dream up an exotic state of affairs, perhaps involving an evil demon, in which I'm in ecstasy, but if I were experiencing mere pleasure I would have stronger friendships. So (it seems) this is a state of affairs in which I'm in ecstasy, but the closest state of affairs in which I'm experiencing mere pleasure is one in which I'm (arguably) better off overall. I don't think this is a counterexample to the intuitive claim that ecstasy is better than pleasure; all it shows is that there are tradeoffs. Subtler examples might show failures of separability, instead of tradeoffs. I'm not sure whether this affects your arguments very much, though it might open up some gaps.
Thanks, Teru! Yes, I know what you mean. I’m not quite sure how to understand the betterness relation between the sparsely specified states of affairs that you mention, though? (I initially wanted to go this way, but wasn’t sure I could make sense of it.) Are you thinking of a Jeffrey-style approach, where there are desirability relations between any propositions/states of affairs, regardless of how fully specified they are? Or are you thinking of a sort of ceteris paribus approach? My worry about the first was that you’d get the sorts of problems you allude to: if you have high credence you’ll have strong friendships conditional on experiencing mere pleasure, you’ll get that mere pleasure is better than ecstasy because of Jeffrey’s axioms for desirability. My worry about the second was that it assumes we know what to hold fixed when making other things equal, and that’s exactly what I was trying to explicate with the closeness stuff. So, basically, I totally understand your worry, but I think I’m not understanding how the solution would go?
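To illustrate that first worry with a toy calculation (the numbers and the friendship/no-friendship partition are made up purely for illustration): on Jeffrey's theory, the desirability of a proposition is the credence-weighted average of the desirabilities of the ways it could come true, so

\[ V(\text{pleasure}) = V(\text{pleasure} \wedge \text{friendships})\,P(\text{friendships} \mid \text{pleasure}) + V(\text{pleasure} \wedge \neg\text{friendships})\,P(\neg\text{friendships} \mid \text{pleasure}). \]

With, say, \(P(\text{friendships} \mid \text{pleasure}) = 0.9\), \(V(\text{pleasure} \wedge \text{friendships}) = 10\), and \(V(\text{pleasure} \wedge \neg\text{friendships}) = 2\), we get \(V(\text{pleasure}) = 9.2\); if ecstasy isn't correlated with friendships, e.g. \(V(\text{ecstasy}) = 0.5 \cdot 12 + 0.5 \cdot 5 = 8.5\), then mere pleasure comes out as more desirable than ecstasy, even though ecstasy is intuitively the better state.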
I was definitely thinking of something close to the "ceteris paribus" approach, although I think there are some different (better?) ways of getting at more or less the same idea.
One version of the story is that certain states of affairs (like perhaps my knowing that p) are intrinsically valuable. The intrinsic value of simple states of affairs contributes (perhaps additively, but perhaps not) to the intrinsic value of more complex states of affairs (like my having such-and-such total doxastic state). It would be natural to think that the value of a complex state of affairs is grounded in the value of the simple states of affairs that compose it. That's in contrast with the "ceteris paribus" approach, which is naturally understood as defining the value of simple states of affairs in terms of the value of more complex ones.
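Roughly, and just as a sketch (the additive case is only one option): if a complex state X decomposes into simple states \(s_1, \dots, s_n\), the grounding picture says something like \(V(X) = f(v(s_1), \dots, v(s_n))\), with the additive special case \(V(X) = \sum_i v(s_i)\), where \(v\) assigns intrinsic value to simple states and \(f\) is some aggregation function. The ceteris paribus approach would instead run the explanation in the other direction, recovering \(v\) from comparisons among complex states that differ only in one simple respect.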
A different but compatible story is in terms of respects or dimensions of value. Suppose that X and Y are total doxastic states (or whatever the fine-grained epistemic outcomes are), and suppose
(R): X involves knowledge that p, whereas Y only involves true belief that p.
Then we might say that X is better than Y with respect to p, or something like that. Alternatively, one can speak in "pro tanto" terms. If (R) is true, then X is pro tanto better than Y; that is, it is better in _this_ respect or as far as _this_ consideration goes, but of course there might be other considerations, and our evaluative theory would ideally go on to explain what these considerations are and how they combine.
If you'll buy for a moment that "X is better than Y" means that one ought to prefer X over Y, then the thought is that (R) is a (pro tanto) reason to prefer X over Y. Of course, it's tempting to then say things like, if (R) is true, ceteris paribus one ought to prefer X over Y. But I think "ceteris paribus" must mean "all other relevant considerations being equal", and I think it's the job of the evaluative theory to tell us which considerations are the relevant ones, rather than there being some theory-neutral explanation.
Anyway, I'm not sure any of this counts as a "solution" to anything, but I do think there is some usable notion in this neighbourhood!
One thing about Asa in Fake Barn County - if you think they're still in Fake Barn County, then you might legitimately wish they had passed through Real Barn County, since that would reduce their risk of forming new false beliefs. But if you think they're done with the trip, then it no longer matters.
I have parallel thoughts about Newcomb problems - if I think someone's going to face a Newcomb problem, I can hope on their behalf that they're a one-boxer, but if I think they just did it, I might hope on their behalf that they two-boxed.
There's another prominent view that you don't discuss: that knowledge is valuable because it's an (epistemic/intellectual) achievement. As Sosa has put it (in various places), it's like hitting the target with your arrow as a result of your skill, except that the skill in question is intellectual rather than archery skill. Of course, some known beliefs, like some arrow shots, are trivially easy, and you needn't have been skilful to make them...but skill is a somewhat modal idea (yes, it was an easy shot, but I'd still have made the shot even if it had been harder, unlike the lucky drunk who wouldn't have, or whatever). Something like that.
There's also something quite nice about that view: achievements come in different sizes (from the trivial to the grand), and that captures the idea that some knowledge is much more valuable than mere true belief while other knowledge is (perhaps) only trivially so.
Edward Craig has a nice discussion of this passage from the Meno in his lectures (in German) "Was wir wissen können". Essentially, he thinks that Plato is asking the right question but comes up with an unconvincing answer, because knowledge can be destroyed by misleading defeaters just as much as true belief can. Instead, he suggests that what's more valuable about knowledge is that it is recognizable as being reliable. True beliefs (that aren't knowledge) are often not reliable, and when they are they are sometimes not recognizable as such. Being recognizable as reliable allows us to mark the information as trustworthy and pass it on to others, among other things.
Ah, that’s fascinating! Thanks! I’ll try to track those down (though my German is non-existent). I’ve never understood Craig’s view, so this would be a good opportunity for me to try to get to grips with it.
The problem with unjustified true belief is that you can't know when the conditions for your belief have changed. Hence, if you act on unjustified beliefs, you will be lucky sometimes and unlucky at others.
That's also why there should be no such thing as justified false belief. In the typical Gettier examples purporting to show that justified true belief isn't knowledge, your belief could change from true to false and you would have no way of detecting this.
Thinking about the barns example: if you are in a county where there are both real and fake barns, you can't be justified in believing that what looks like a barn front from the road has a real barn behind it. That's true even in counties where all the barns are real, unless you know you are in such a county.