Genealogical anxiety, philosophical vertigo, and awareness growth
It is an inevitable but interesting feature of creatures like us that we never knew or now don’t remember the causal origin of many of our beliefs; and, for those we do, the causal origin often runs back to a more fundamental belief and the causal origin of that is opaque to us, so that even if the proximate causal origin is known the ultimate origin is not. We form many fundamental beliefs when we are too young to reflect on how we are forming them; and even when we are old enough to so reflect, there are often too many demands on our time to do so; and even when we do reflect, we often forget shortly thereafter what we concluded about them. What’s more, this opaqueness isn’t something to which we turn our attention very often, except when those origins are made apparent to us, or the question of their standing is raised. As a result, it can be disconcerting when we do reflect on it. Not always, of course: if I draw your attention to the origin of your high confidence that you’re reading a blogpost, and describe the reliable perceptual experience and subsequent reliable inferences you carried out to establish it, you nod smugly and retain it. This is a very simple instance of what Bernard Williams calls a vindicatory history of a belief. But there are others that are less triumphant. If I draw your attention to the origin of your high confidence that the worst-off should receive greatest priority when making decisions that will affect a group, describe the series of evolutionary and cultural factors that led to it, and note that none of those was sensitive to the truth of the moral claim, so that you would have been highly confident in it even if it weren’t true, then this is certainly not vindicatory, and indeed you might think it gives you grounds to become less confident—perhaps even 50-50. And even if you don’t end up becoming less confident, you’ll likely feel a bit uneasy about your high confidence, at least for a little while. Epistemologists have discussed this under the heading of irrelevant causal influences on your beliefs.
In this post, I want to explore what such origin stories tell us about our beliefs, what this sense of unease is that we feel and what its source is, and how we should respond to it. I want to agree with Roger White, Miriam Schoenfield, and Amia Srinivasan that even the less than triumphant origin stories don’t require us to become less confident, though they do permit it. Srinivasan calls the unease we feel genealogical anxiety; I want to liken it to the philosophical anxiety that Stanley Cavell thinks we feel in response to sceptical arguments, even if we hold with Wittgenstein that they are the result of misusing language—John McDowell calls this feeling philosophical vertigo, though he thinks, and he thinks Cavell should think, that it is not an apt response.
To get to this conclusion, I want to argue that, when you learn the causal origins of your beliefs, two things happen: first, you experience what epistemologists and decision theorists have recently started to call awareness growth, where you come to consider propositions to which you haven’t assigned credences in the past;[1] second, you gain some new evidence to which you must respond. Our best account of how to respond to awareness growth explains why a number of responses are permissible; and since sceptical arguments also give rise to awareness growth, we can see why both give rise to the same sort of vertigo.
So suppose you are highly confident in a particular moral claim—perhaps that the worst-off should receive greatest priority when making decisions that will affect a group. When I look at its origins, I see that you have it because of an evolutionary process—it is hard-wired into your cognitive apparatus as a trait that has been selected for at the group level. I see that the evolutionary process that led to it would have gone this way whether or not the moral claim is in fact true. I tell you my findings and you accept that I’ve got this right. What happens next? (Of course, we never learn this sort of etiological detail about our beliefs, but let’s stick with the ideal case for now.)
One peculiarity that Roger White returns to again and again throughout his excellent paper on this topic is that it seems odd that we talk as if knowledge of the causal origins of your high confidence in the moral claim might lead you to reduce that confidence and provoke genealogical anxiety, but your ignorance of its origins, which presumably characterises your situation before I tell you about them, doesn’t have that effect. Is it that you previously made an explicit assumption that its origins are sensitive to the truth and my discoveries undermine that assumption? But if so, what possibly warrants this assumption?
My hypothesis is that, in the cases where learning the truth-insensitive causal origins of a high level of confidence leads to genealogical anxiety, we haven’t ever considered the higher-order question about the process by which we came to have that confidence before. So, these cases involve what philosophers and economists have come to call awareness growth, which happens when you become aware of propositions you haven’t entertained before—or perhaps you’ve entertained them, but your attitudes to them haven’t stuck around and you currently have no attitudes to them. In the case in question, you become aware of propositions of the form: My high confidence in this particular moral claim was formed by such-and-such a process. And now that you’re aware of them, you’d better assign credences to them, and to at least some logical combinations of them and propositions to which you already assign credences, and therefore have at least some conditional credences in old propositions given these new ones and vice versa. And the whole lot had better cohere with one another in the ways the probability calculus demands, so that, for instance, you can’t end up assigning high credence to the moral claim, high credence to the claim that it was formed by a truth-insensitive process, and only middling credence to the moral claim conditional on the claim about the process—the laws of probability won’t allow this.[2]
One attraction of this picture is that it responds to White’s puzzlement. Ignorance does not fully characterise your situation before I tell you about the causal origin story—unawareness does. Ignorance is the situation in which you consider the different hypotheses, but you don’t know which is true; unawareness is the situation in which you don’t even consider the hypotheses. When I tell you, I both make you aware of the hypotheses about the causal origins and give you information about which is true. Even if I were just to make you aware of the hypotheses, I’d still precipitate some of the genealogical anxiety, and might still prompt you to revise your high credence, though perhaps not to the same extent.
So how should you set these new credences after you become aware of the hypotheses about the causal origins of your high confidence in the moral claim? We might hope that a general answer will be forthcoming from Bayesian epistemology. After all, doesn’t that tell us how we should update our credences when we acquire new evidence? Actually, as I argue in this paper, things are rather subtle. This will be a little quick, but the details don’t matter too much here.
There are essentially three versions of the Bayesian norm for updating, which is called Bayes’ Rule or Conditionalization or something in the vicinity. The first is a narrow scope norm that tells you not how you should update but how you should plan to update, given the current credences you have: If you have so-and-so current credences, and you know your future evidence will come from a particular set of propositions but you don’t know which one, then you ought to plan to update upon receipt of that evidence as follows. The second is a wide scope norm that governs these plans: If you know your future evidence will come from a particular set of propositions but you don’t know which one, you ought to have current credences and an updating plan that are related as follows. The third is a narrow scope norm that tells you how to actually update when the evidence comes in: If you have so-and-so current credences, and you actually receive such-and-such evidence, then you should update as follows.
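To fix ideas, here is that third norm in symbols; the notation is mine, not anything drawn from the literature I mention. Write c for your current credence function, and suppose the evidence you actually receive is the proposition E, with c(E) > 0. Then the rule says that, for any proposition X, your new credence in X should be your old credence in X conditional on E:

$$c_{\text{new}}(X) \;=\; c(X \mid E) \;=\; \frac{c(X \,\&\, E)}{c(E)}.$$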
The first two don’t apply in the case of awareness growth, since you can’t plan how to update in the face of awareness growth. After all, as soon as you’re aware of the possible propositions of which you might become aware, well, you’re already aware of them and your awareness has grown! The third is more promising, but it comes with a caveat: the arguments in favour of it require that you still take your prior credences to have normative authority over your decisions at the point that you’re setting the new credences. If you’re just responding to new evidence, and that evidence comes in the form of a proposition you’ve entertained before, this just means that learning that evidence itself doesn’t rob your current credences of their normative authority—and typically that’s true. But awareness growth can rob your current credences of their normative authority. Why? Well, you realise that when you set your ur-priors—that is, your credences at the beginning of your epistemic life before you gathered any evidence, the credences from which your current credences have evolved by updating on the evidence you’ve gathered—you set them over the original set of propositions that didn’t include the new propositions of which I’ve just made you aware. And you realise that how you set priors is often sensitive to the propositions of which you’re aware when you set them: for instance, if I introduce you to a new possibility you’ve never considered, such as that a coin might land on its edge rather than heads or tails, then you will want to go back to the drawing board and assign new priors, now taking this third possibility into account.
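A toy illustration of that last point, with numbers that are mine and purely illustrative. Before the edge possibility comes into view, my priors over the toss might be

$$c_{\text{old}}(\text{Heads}) = c_{\text{old}}(\text{Tails}) = 0.5.$$

Once it is on the table, I don’t simply bolt it on; I go back and redo the assignment over the enlarged set of possibilities, perhaps as

$$c_{\text{new}}(\text{Heads}) = c_{\text{new}}(\text{Tails}) = 0.499, \qquad c_{\text{new}}(\text{Edge}) = 0.002.$$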
Returning to our target case, how might we set these new ur-priors? There are many options. If your current credences retain their normative authority, they will tell you how to fix your new credences in the propositions you’ve always entertained. Indeed, they’ll tell you to keep them the same. And then you’re free to fix your credences in the new propositions as you like, so long as they obey the constraints of rationality, such as the probability axioms. And, having done that, you can update on your new evidence. So, you might choose to set your new conditional credences in the moral claim given the origins of your original high credence in it to be high or to be middling, and so end up with either a high or a middling credence in it after updating on the evidence I gave you.
If, on the other hand, your current credences lose their normative authority in the light of your awareness growth, they place no constraints on your new credences and you are simply back in the situation of picking an ur-prior, this time for the expanded set of propositions. And then you might choose new ur-priors that give a middling credence to that moral claim, or even ones that give a high credence to it, but a middling credence to it conditional on its original instantiation being formed in truth-insensitive ways, and so you’ll end up with a middling credence once you update these new ur-priors on the evidence I’ve given you. In that case, my genealogical story has led you to reduce your credence in the moral claim.
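To put toy numbers on that last route (the numbers are mine, purely for illustration): write M for the moral claim and O for the proposition that your original high credence in M was produced by a truth-insensitive process, and suppose the evidence I’ve given you is O. If your new ur-priors have c(M) = 0.9 but only c(M | O) = 0.5, then updating on O by conditionalization leaves you with

$$c_{\text{new}}(M) \;=\; c(M \mid O) \;=\; 0.5,$$

a middling credence where before you had a high one.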
But equally you might choose ur-priors that were very much like your old ones, giving high credence to the moral claim and also high credence to it conditional on the genealogical facts I’ve since told you, so that you retain high credence in it even after you update on all of your evidence. But, you might ask: how can that be rational? On what basis can you set high credence in it? What is your justification for doing so? I think this is the key insight running through the White-Schoenfield-Srinivasan papers. You can think: yes, the process that got me here was insensitive to the truth, but gosh I’m lucky it did get me here since this is the truth. So, in this case, you just take the truth of the moral claim to be basic, and not something you justify by appealing to something else, such as your moral intuitions, how the claim seems to you, or whatever. This is essentially what we do in the face of sceptical scenarios. We note we’d have the same beliefs if we were brains in a vat, but we retain our high credence that we’re not. As Srinivasan puts it, we attribute to ourselves a sort of doxastic luck: lucky we’re not brains in a vat; lucky the evolutionary process bequeathed us true moral beliefs.
So, in short, you are permitted to retain your high credence in the moral claim, or to lower it, just as White and Schoenfield and Srinivasan say.
Whence the feeling of anxiety in this picture? I think it’s partly that, outside of philosophy, we don’t often face the fact that there are different legitimate starting points for our epistemic journey—different ur-priors we might have before the evidence comes in. We don’t face this fact as children when we begin to set our credences, because we do so unreflectively—I hear my parents say that such-and-such is morally wrong, and I simply adopt a high credence that it is wrong; I don’t have a prior conditional credence that the thing is wrong given my parents say it is. And then we don’t tend to face this fact as we acquire new evidence, for the evidence we acquire tends not to call into question the normative authority of the credences we have.
But, in philosophy, we are often in the business of creating awareness growth: it’s what Descartes did when he described the malicious demon scenario in the First Meditation; it’s what Sharon Street does when she draws attention to the truth-insensitive evolutionary origins of our ethical beliefs; and it’s what the experimental philosophers do when they tell us about the sensitivity of our Gettier intuitions to our cultural background—it’s also, I think, what’s going on in the cases that hermeneutical injustice is intended to cover, where new concepts and other hermeneutical resources become available, but that’s for another time. When our awareness grows in these ways, our current credences can lose their normative authority: we realise we assigned high credence to the existence of the external world because it was the only hypothesis we considered, but now that another is on the table, we need to think about how we’d set that credence in the presence of the sceptical hypothesis. And that creates anxiety, because we see the fragility of our epistemic practices.
I think this is what Cavell talks of as the terror induced by sceptical arguments, and what McDowell calls philosophical vertigo. Here is Duncan Pritchard describing it:
The metaphor [of vertigo] is apt, for it seems that this anxiety [that Cavell describes] is specifically arising as a result of a kind of philosophical ‘ascent’ to a perspective overlooking our practices, and hence to that extent disengaged from them (as opposed to the ordinary pre-philosophical perspective in which one is unself-consciously embedded within those practices). (‘Cavell and Philosophical Vertigo’)
Cavell is often talking about Wittgensteinian scepticism about our use of language, which is why Pritchard is talking about practices here, but we can translate the thought to our case. The practices are our current credences, which encode our view of the world. Typically, we are unself-consciously embedded or immersed in them, which just means that we take them to have normative authority over our updating behaviour and our decision-making. Awareness growth can lead them to lose their normative authority, and a sort of ascent ensues, where we sit above those practices and have to choose new ones (or retain our old ones, as we often do in the face of sceptical arguments, even if we retain some of the vertigo after doing so, now that we are aware we might have chosen differently).
Pritchard and McDowell think that the anxiety arises because we become aware that there are what Wittgenstein calls hinge propositions, that is, propositions we believe but where our belief in them is not amenable to rational evaluation. They are the completely basic beliefs that underpin all the others. I think that’s wrong. What we become aware of is not our beliefs in hinge propositions, but our ur-prior credences. And we don’t become aware that they are not susceptible to rational evaluation, for they are—you can easily have irrational ur-priors, for instance by having probabilistically incoherent ones. What you become aware of is that there are many of them that rationality permits. And you become aware that the ones you picked were sensitive, among other things, to the propositions over which you assigned them. And so, in many ways, you might have chosen differently. Why does this induce anxiety? Because, once chosen, you inhabit these ur-priors, immerse yourself in them, and they constitute the standpoint from which you approach the world, updating them on the evidence, of course, but always having credences that have evolved from them and didn’t evolve from some other permissible ur-priors you might have had. And the anxiety arises because of the gap between your commitment to them when you inhabit them, and the sense that they are no better from the point of view of rationality than any of the other permissible ones.
So I hope I’ve made a case that learning causal origin stories for your beliefs typically involves both awareness growth and learning evidence; it’s the awareness growth that often leads to the anxiety; and the anxiety is akin to the sort of terror or philosophical vertigo that Cavell and McDowell describe as a common response to sceptical arguments.
But surely something is left out here? If it’s awareness growth that does this, why doesn’t it happen all the time? Surely I’m forever becoming aware of new propositions and assigning credences to them? As I mentioned above, I might come to think of the possibility that a coin lands on its edge when before I hadn’t considered this. Why am I not forever experiencing this doxastic sort of anxiety? I think the reason is that the awareness growth that occurs in those more quotidian cases is much more local in two ways. First, there aren’t so many other propositions that naturally come to mind when I become aware of the proposition, This coin will land on its edge; or, if there are—such as if I reconsider my credences in the possible future outcomes of tossing any coin in existence—they are all pretty similar and can be treated in the same way and all at once. Second, there aren’t so many other propositions whose credences are based on my credences in the propositions I’ll revise as a result of the awareness growth—not many of my other credences are based on the outcomes of particular coin tosses. In contrast, when you come to entertain the hypotheses about the causal origin of your high credence in the moral claim, this leads you to entertain hypotheses about the causal origin of many—perhaps all—of your other fundamental beliefs, both moral and theoretical; and, if you do feel moved to change those credences, that will have enormous downstream effects, because so many other credences are based on them.
[1] This book by Katie Steele and Orri Stefánsson in the Cambridge Elements series is an excellent overview.
[2] For instance, if I’m 95% confident in X and 95% confident in Y, then my credence in X conditional on Y must be at least 94.73%, because my credence in X&Y must be at least 90%, and 0.9/0.95 is greater than 0.9473.
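In symbols, with c my credence function (the steps are just the probability axioms): since X & not-Y entails not-Y, we have c(X & ¬Y) ≤ c(¬Y) = 0.05, and so

$$c(X \,\&\, Y) \;=\; c(X) - c(X \,\&\, \neg Y) \;\ge\; 0.95 - 0.05 \;=\; 0.9, \qquad c(X \mid Y) \;=\; \frac{c(X \,\&\, Y)}{c(Y)} \;\ge\; \frac{0.9}{0.95} \;\approx\; 0.947.$$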