
This is an interesting set of thoughts - but I'm not sure it quite distinguishes the Truth Fairy case, unless we know something about how the Truth Fairy operates. Depending on how we imagine it, I can go more towards the bullet-biting response that you went for earlier, or more in the direction suggested here.

Does the Truth Fairy give lots of free true beliefs to everyone who forms this belief, or just to Jen in particular? If the Truth Fairy is on the lookout for people who form this belief, and gives all these people lots of free truths (or free evidence, or whatever), then I'm inclined to say that Jen is a good person to follow, even if we know she just believes this because of the Truth Fairy's offer. On the other hand, if the Truth Fairy's offer only stands for Jen, and not for everyone, then we shouldn't follow Jen, and so we shouldn't call her epistemically rational.

This matters because I suspect that a lot of non-idealized science works in Truth-Fairy-type ways. Should I reject a hypothesis just because of a single falsifying piece of anomalous evidence? (Maybe it's anomalies in the orbit of Uranus, or of Mercury, or whatever.) Once I have the anomalous evidence, I'm sure that the theory plus auxiliaries is incorrect. However, I also know from experience that holding on to the theory plus auxiliaries gives me lots and lots of true beliefs about everything else, while giving it up in a Popperian way deprives me of lots of truth. Any time I hear people talking about theoretical virtues like simplicity, explanatoriness, or whatever, I imagine them working something like this: a virtue of this kind doesn't make the theory any more likely to be true itself, but from experience it seems to mean that the theory is likely to give us access to lots of other truths, and thus it might be epistemically rational to seek these virtues.


Hey Kenny! Yes, I completely agree that it doesn't capture everything. I'm not sure I fully understand the rules of the game for these Craig-style analyses, but I *think* it should be OK if the cases for which it doesn't make sense are few and far between. In the sort of case you're imagining, where the Truth Fairy is on the lookout for people who form the belief and will make them the same offer it makes to Jen, you'd lose out badly from not considering Jen's beliefs epistemically rational and so not taking them on. So that would provide pressure to create a different concept. But of course there's pressure in the other direction to create a simple, usable concept. So you might think that, as long as such cases are rare enough, the latter pressure wins out.

On the non-idealized science case: that's really interesting! I think I imagined that, in such cases, we treat it a bit like the Preface Paradox. When we learn the whole theory can't be perfectly true, we needn't reduce our credences in many of its consequences back down to 0.5, but rather just reduce them a bit, because we recognise that, while the theory can't be true, it's likely approximately true, or something like that. I take it that's the idea behind something like ontic structural realism. You learn the theory isn't true, but you've still got good reason to think that certain structural facts it posits are true, and so you keep a reasonably high credence in those.


My thought is that believing in outdated paradigms that haven't been replaced by something better, or believing theories because of their theoretical virtues, is something like a simple, usable concept that plays the role of the Truth Fairy: it gives you lots of true beliefs despite not being true itself.


Consider instead the question: should J accuse Lawrence? For each value of J, the answer seems to be "Yes, probably."

Now suppose that (as stipulated) people can form beliefs by an act of will, and pair that with the (at least as plausible) stipulation that others can detect lies. Then, to accuse Lawrence successfully, J must form the belief that he is guilty.
