Willing practical reasons
In a series of papers over the past ten years, Ruth Chang has argued that it is possible to create a reason for yourself by an act of your will; that is, it is possible to make it the case by a mere act of will that a consideration that was not a reason for you before is a reason for you now.1
Now, in one sense, this is uncontroversial. By promising to pick up my godson from school, I make the fact that his school day finishes at 3pm a reason for me to be at the school gates at 3pm. But this isn’t the sort of case Chang has in mind. In this case, I will that I do some external action that affects the world—I will that I openly and explicitly make a promise to my godson’s parents—and I do that thing, and it is this act and the fact that it affects people and their actions in a particular way that changes the normative landscape, not the mere act of willing it. I make it the case that his parents will not be at the school gates at 3pm, since they trust that I will be instead, and so if I break my promise and don’t turn up, no one will be there for him, and I create a danger or distress for my godson that wouldn’t have been present if either I hadn’t made the promise in the first place, or if I had made it and kept it. There’s no mystery about how we can create reasons for ourselves in that case.
But Chang is interested in cases in which we can create reasons for ourselves without performing any external action that affects any other person. Indeed, on her view, I can simply stipulate privately to myself that something is a reason for me, and it will thereby become one. She likens this power to the power we have to take a string of letters that has not hitherto been given a meaning in our language and to stipulate a meaning for it. If I will that ‘to glig’ means to befriend a cat in the street, then I have, by that act of will alone, and not because of any onwards effects of that act, changed the normative landscape a little. I have made it the case that I use the term incorrectly if I use it to describe a situation in which I do not befriend a cat in the street. Presumably this is akin to the power we have when we specify the rules of a solitary game we’re about to play, or the form of poem we’re about to write: by willing that those are the rules of the game, we make it the case that we act wrongly if we deviate from the rules; and if we specify that we’re writing a poem in this form, we make it the case that we write wrongly if we break from that form.
Here is one of Chang’s central examples. Chris and Kiran have been dating for some months. It’s reasonably serious, but while Kiran is fully committed, Chris is not—he could see a future with Kiran, but he hasn’t yet determined that Kiran is the one for him. One evening, it emerges that Kiran needs a kidney transplant, and also that Chris would be a suitable donor. As it stands, and given the current nature of their relationship, the fact that Kiran needs a kidney does not give Chris a strong reason to donate one of his. It gives some reason, surely, but not a strong one. However, later that evening, when he has parted from Kiran, Chris decides to do something he has been considering for some time. He decides to commit fully to Kiran in the way that Kiran has already committed fully to him. As Chang points out, this is a moment with which we are familiar from Hollywood movies, usually occurring just as the person to whom the commitment is given is about to board a flight to Arizona to start a new life on a ranch.
Chris’ commitment is an act of will. And part of what it involves is creating reasons for himself. By this act of will, he transforms considerations that before gave him no or little reason to do something into strong reasons to act. In particular, perhaps it transforms the fact that Kiran needs a kidney from a weak reason for him to donate to a strong reason. And it does this in much the same way that my act of defining the verb ‘to glig’ makes it the case that I use it wrongly if I stray from that definition, and my act of specifying the rules of my solitary game makes it the case that I act wrongly by breaking them, and my act of specifying the poetic form in which I’m writing makes it the case that I write wrongly by deviating from that form.
It’s important that Chris’ act of will is entirely private. He hasn’t communicated it to Kiran. He has not promised Kiran his kidney, and so he has not created the reason for himself in that familiar way. Rather, he did something internal by an act of will and thereby made a weak reason into a strong reason. He did this by stipulating that it is a strong reason for him now.
This leads Chang to what she calls a hybrid account of the reasons we have: there are world-given reasons and there are will-given reasons. The fact that someone has tripped and fallen on the pavement in front of you provides a world-given reason for you to offer to help them back on their feet—you don’t need to do any internal willing in order to make that fact a strong reason for you. The fact that Kiran needs a kidney gives Chris a weak world-given reason to donate, just as all suffering that we might alleviate gives us a weak reason to alleviate it. But, when Chris commits fully to Kiran, he transforms that weak world-given reason into a strong will-given reason—or perhaps we want to say that it’s now a hybrid reason with a particular strength, a small part of its strength donated by the world and a larger part donated by Chris’ will; the metaphysics doesn’t matter too much.
That’s Chang’s thesis. Is it true? We’ll come back to that later. First, I’d like to discuss a particular application of the idea in epistemology proposed by Laura Callahan.
Willing epistemic reasons
I first heard about Chang’s view through Laura Callahan’s work on what she calls epistemic existentialism.2 And I was reading that because I’m interested in a series of puzzles that arise for epistemic permissivism. Very roughly, this is the view that two people with the same evidence can rationally disagree; that is, in at least some cases, your evidence does not determine a unique doxastic response—permissivists differ on how often the evidence is permissive in this way and on how wide the range of permissible doxastic responses is.3 Suppose you and I share exactly the same evidence about a virus. You might be 80% sure it’s transmitted through the air, I might be 65% sure, and yet we might both be responding to our shared evidence in ways that rationality permits—such evidence is complex, after all, and different components of it pull in different directions and can be given different weights. Such permissivism stands in contrast to, say, the view that there are evidential probabilities, together with the accompanying norm that rationality requires our credences to match them.
One of the puzzles about this view is the so-called flip-flopping objection, which I know from Roger White.4 Let’s suppose I’ve got a certain total body of evidence, and it permits a range of rational responses. Suppose my current doxastic state is among those rationally permitted responses—perhaps I have credence 80% in a particular proposition. But there are alternative states that are also rationally permitted—perhaps one such state assigns 65% to the same proposition. Suppose I now suddenly flip to one of those alternatives without any external stimulus or internal reasoning or reevaluation. And suppose five minutes after that, I flip back, again unprompted. And I keep doing this all day so that by sundown, I arrive back where I started. The puzzle asks us to agree that you would judge me irrational. And yet the permissivist says that, at each time, my credences are rationally permissible.
One way to put this puzzle is to say that the permissivist recognises no reason for you to stick with your rationally permissible doxastic state; they recognise no reason for you not to switch to another rationally permissible doxastic state. And yet we seem to think that there is a reason, since we judge someone irrational for flip-flopping between the states. Putting it in this way, we can see how Chang’s proposal might help, and Laura Callahan argues that we should appeal to it to help us in exactly this way. When I adopt one rationally permissible doxastic state rather than another, I decide to commit to this state in much the way that Chris decided to commit to Kiran. Commit to it how? I commit to do what is best by its lights, perhaps, both what is best doxastically speaking, and what is best pragmatically. Perhaps I commit to responding to evidence in the way it demands—if you’re a Bayesian, perhaps this involves committing to updating on it using Bayes’ Rule. Perhaps I commit to using it to represent the world until new evidence emerges. That sort of thing. And, as with Chris’ commitment to Kiran, committing to this doxastic state is an act of will; it is internal, in that I do it privately and it requires no uptake by anyone else; and by doing it, I create reasons for myself where there were no reasons before.
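The paragraph above mentions that, for a Bayesian, committing to a doxastic state might involve committing to update on new evidence using Bayes’ Rule. As a minimal sketch of what that committed-to update looks like (the numbers are purely illustrative and not from the text):

```python
# A minimal sketch of the update a Bayesian commits to: on receiving
# evidence E, replace your credence in a hypothesis H with the
# conditional credence P(H | E) given by Bayes' Rule.
# All numbers below are hypothetical, chosen only for illustration.

def bayes_update(prior_h, likelihood_e_given_h, likelihood_e_given_not_h):
    """Return the posterior P(H | E) from P(H), P(E | H), and P(E | ~H)."""
    marginal_e = (prior_h * likelihood_e_given_h
                  + (1 - prior_h) * likelihood_e_given_not_h)
    return prior_h * likelihood_e_given_h / marginal_e

# e.g. I start 80% sure the virus is airborne, and the new evidence is
# twice as likely if it is airborne than if it is not.
posterior = bayes_update(0.8, 0.6, 0.3)
print(round(posterior, 3))  # 0.48 / 0.54 ≈ 0.889
```

Committing to the 80% credence, on Callahan’s picture, would then include committing to responses of this shape when the evidence arrives.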
So the permissivist is right to say that, prior to this commitment, I had no reason to believe in one rationally permissible way rather than another. But, having adopted one doxastic state over another, and willed my commitment to it, I now have reason to view the world that way rather than the other, to respond to evidence in the way that doxastic state demands, and to act in the way that doxastic state tells me to.
Now how does this answer the flip-flopping objection? Why does committing to a particular doxastic state give you reason not to move to another? I think there are epistemic and pragmatic answers we might give to this.
On the epistemic side, at least for credences, there are good reasons to think that the correct way to measure the epistemic value of credences is what formal epistemologists call immodest, or strictly proper. This means that any set of credences that satisfies the axioms of the probability calculus expects itself and itself alone to be best; that is, its expected epistemic value, from its own point of view, is higher than the expected epistemic value of any other set of credences from that same point of view.5 And if that’s the case, then if by committing to a set of credences you thereby commit to whatever they demand you do, then they demand you not move to alternative credences since, in expectation, that will lose you epistemic value. So you have reason not to flip-flop.
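The strict propriety claim above can be checked numerically with the Brier score, one standard strictly proper measure of epistemic loss (so epistemic value is negative Brier loss); the particular credence 0.8 and the grid search are just illustrative choices of mine:

```python
# From the point of view of a credence p in a proposition, the expected
# Brier loss of holding credence q is  p*(q-1)**2 + (1-p)*q**2,
# and strict propriety says this is uniquely minimised at q = p:
# each probabilistic credence expects itself, and itself alone, to do best.

def expected_brier_loss(p, q):
    """Expected Brier loss of credence q, computed from credence p."""
    return p * (q - 1) ** 2 + (1 - p) * q ** 2

p = 0.8
losses = {q / 100: expected_brier_loss(p, q / 100) for q in range(101)}
best_q = min(losses, key=losses.get)
print(best_q)  # 0.8
```

Since the expected loss of moving to any alternative credence is strictly higher, a commitment to doing what your credences demand includes, by their own lights, a demand not to move.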
On the pragmatic side, again for credences, suppose you aren’t sure what decisions you’ll face with your credences in the future. And suppose you say that the pragmatic value of a set of credences, given a decision problem and given a state of the world, is the pragmatic value at that world of whatever option those credences will lead you to choose from among those available in that decision problem. So, if you face the decision whether or not to take an umbrella when you leave the house, and your credences lead you not to take it, then their pragmatic value at a world at which it rains is the pragmatic value of being in the rain with no umbrella, while at a world at which it doesn’t rain, their pragmatic value is the pragmatic value of being in the dry with no umbrella. Let’s take this account as given. Then, building on a suggestion by Mark Schervish, Ben Levinstein shows that, under certain plausible assumptions about your uncertainty about which decision problem you’ll face, this measure of pragmatic value will also be immodest, or strictly proper, and so again your current credences will judge any alternative to be worse than they judge themselves to be, in expectation. And so they demand you stick with your current credences.
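Here is a toy instance of the Schervish/Levinstein idea, under simplifying assumptions of my own rather than theirs: the only decisions you might face are offers to buy, at a price t, a bet that pays 1 if the proposition is true, with t uniformly distributed on [0, 1]. With credence c you buy exactly when c > t, and integrating the payoff over t gives an expected pragmatic value, from the vantage point of credence p, of V(c, p) = p·c − c²/2, which is uniquely maximised at c = p:

```python
# Toy pragmatic-propriety check. Under the assumptions stated above,
# the expected payoff of acting on credence c, computed from the point
# of view of credence p, is  V(c, p) = p*c - c**2 / 2.
# Strict propriety: this is uniquely maximised at c = p.

def expected_pragmatic_value(c, p):
    """Expected bet payoff of acting on credence c, judged from credence p."""
    return p * c - c ** 2 / 2

p = 0.8
values = {c / 100: expected_pragmatic_value(c / 100, p) for c in range(101)}
best_c = max(values, key=values.get)
print(best_c)  # 0.8
```

So even pragmatically, in this toy setting, your current credences judge every alternative to be worse than themselves in expectation.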
The normative force of will-given reasons
So, if Chang is right that we can create reasons by a mere act of will, and if Callahan is right that, in particular, we can create epistemic reasons to do whatever our adopted credences say we should do merely by committing to them by an act of will, then we can solve the flip-flopping objection. And so now we must ask: is Chang right that there are will-given reasons; and, if so, is Callahan right that epistemic reasons are among them?
A natural worry about Chang’s proposal is that will-given reasons have no force because, should you wish to act against their recommendation, you can simply will that they are no longer reasons for you. And indeed Chang’s analogy with privately stipulated definitions of words rather invites this objection, for the worry just raised parallels one reading of Wittgenstein’s argument against the possibility of a purely private language in the Philosophical Investigations (§§244-271).6 Chang herself considers something like this objection, and writes:
There is an immediate worry, however, that needs to be addressed. If a commitment is essentially a matter of willing something to be a reason, then the exit costs of commitments seem implausibly low. Just as you can stipulate, as a matter of will, the meaning of a word, so too can you ‘unstipulate’ it as a matter of will.
She responds as follows:
Suppose you and Harry have been in a loving, committed relationship for many years. But you start to grow apart and no longer have common interests or are able to share deeply felt emotions. You begin to feel dissatisfied with the relationship and entertain thoughts of what life would be like without Harry. What might you do? […] You might […] abandon your commitment—you might no longer will Harry’s interests to be normative for yourself. We sometimes call this ‘falling out of love’. This coming in and out of existence as a matter of will is in the nature of these commitments. You can make them and you can unmake them as a matter of will.
It does not follow, however, that a commitment’s exit costs are implausibly low. For one thing, any reasons you might have had to make a commitment in the first place may persist. For another thing, your commitment to Harry will […] typically have downstream effects: for example, Harry’s expectation, and subsequent reliance, on the fact that you have the kind of relationship in which you will subsidize his theatre-goings, empty his bedpan, and offer your kidney if he needs one. These downstream effects can give you reasons you wouldn’t have otherwise had—reasons to make amends or even to meet some of Harry’s expectations—despite the fact that you have withdrawn your commitment. So the exit costs of uncommitting—including the costs of discharging downstream reasons—can be very high. Lawyers call it alimony.
And of course we might say something similar in the epistemic case. After all, my commitment to a particular set of credences will also have downstream effects. In conjunction with my values, I might have been using these to make decisions about which means to take in pursuit of my long-term projects. But other means might be better by the lights of the alternative credences that I flip to, and so I abandon the original means and pursue these alternative ones. Then I flop back to my original credences and return to taking the means I originally endorsed, but now having lost time and other resources. In the end, I am in a situation that is strictly worse than the one I would have been in had I simply stuck to my credences from the start—I pursue the means I originally pursued, but now minus the time and energy I spent on the alternative means.
I think there are a couple of problems with this response. First: while these downstream effects do indeed give you reason to continue to act in committed ways towards Harry—though of course reasons that might be outweighed by other ones—these are not the reasons you created as an act of will. These are reasons created by outward actions you have taken as a result of that commitment that have had an effect on the world. Importantly, they are world-given reasons. The world gives them in part because of something you previously did, but they are world-given all the same. They are like my reason to turn up at my godson’s school gates at 3pm having promised to pick him up. Recall: those reasons are given by the danger or distress that my godson will be in if I don’t, and the fact that, by promising to pick him up, I created an expectation I would and thereby made it very likely that no one else would be there for him. Similarly, whatever these reasons are that I create by the downstream effects of my willing to commit to Harry, they aren’t the original reasons I created, and those are still there to be overturned by me. Their replacement by world-given reasons doesn’t bolster their normative strength—it simply replaces it.
The second problem is that, once we see that these reasons we have later to continue to act as if we’re still committed are different reasons for the same actions, we can ask what the normative landscape looks like in cases in which we make one of these commitments without it then generating any actions that will have downstream effects that produce new world-given reasons, at least for some period of time. Within that period of time, we can ask whether these putative will-given reasons have any normative force.
To test this, return to Chris and Kiran. Sitting at home alone after their drinks, Chris commits to Kiran and by doing so stipulates that Kiran’s need for a kidney and Chris’ donor match give him reason to donate. Since time is of the essence, they also give him reason to call Kiran now to tell him that he wishes to do this. And yet, as he reaches for his phone, he decides not to do that. According to Chang’s picture, he’s acted against a reason he has; he’s not done what he should; he’s not done what he has decisive reason to do. And yet it’s hard to see why. Why should his decision from five minutes ago to commit to Kiran trump his decision now not to call and tell him? What could possibly give it this normative authority? In this case, since Chris hasn’t yet acted externally on his new commitment, he’s generated no downstream effects, and they’ve generated no world-given reasons. And in this case, it seems we just don’t have any reason at all.
Chang writes of your commitment to Harry, who is now in hospital and whose bedpan is full:
Before you commit to Harry, the fullness of his bedpan strikes you as not your problem. After you commit to him, it strikes you as providing you with a special reason to empty his bedpan.
But what if it doesn’t strike you as providing a special reason? Why then would the putative will-given reason have any force or authority? Why would it set a normative standard by which you are judged? After all, it only became a reason because you previously willed it to be. Now it doesn’t strike you as a reason. Why would your past self win out in this clash of wills?
The problem is just as clear in the epistemic case. Suppose that my evidence doesn’t determine unique credences about what was the decisive catalyst for the French Revolution. I settle on some credences, but there are alternatives that are also rationally permissible. Now I’m a philosopher, not an historian, and so my opinions about this matter remain private for a long time, and they make no difference to how I behave, since the truth on this matter makes no difference to the decisions I face. So again, for a reasonable time period, my epistemic commitment doesn’t generate any downstream effects that might then generate world-given reasons to stick with these credences. If I then flip to one of the rationally permissible alternative credences during this period, Callahan’s application of Chang’s view says that I’ve gone against the reasons I earlier created by my commitment. But again it’s hard to see why.
Of course, it’s possible that you continue to endorse your earlier decision to commit to the original credence. And, if you do that, and yet let your credence shift to another rationally permissible one, then we can judge you negatively. But that’s because the credences you endorse don’t line up with the credences you have. So you can be criticized for an incoherence between current endorsement and current behaviour. Callahan and Chang, however, think you can be criticized for an incoherence between past commitment and current behaviour, and it’s hard to see what goes wrong with someone who exhibits only that.
The problem with flip-flopping
So what is going on with flip-flopping? I think it’s one of these cases in which each action in a sequence is rationally permissible, and yet it wouldn’t be rational to choose ahead of time to perform the whole sequence. Georgi Gardiner has a good discussion of how something like this can happen in the ways in which we direct our attention: it might be that each time you direct your attention to a particular feature of a situation, it’s rationally permissible, and yet the pattern of directing your attention in that way every single time shows a certain sort of irrationality—if you could choose your pattern of directing attention ahead of time, you shouldn’t choose this.
What’s irrational about choosing the flip-flopping pattern ahead of time?7 Here, we can appeal to another fact about immodest or strictly proper ways of measuring the epistemic value of credences. Suppose I flip from credence p in a proposition to credence q in that same proposition. Then there is an alternative credence r such that, if I’d had that credence at each point in time rather than p then q, I’d have greater total epistemic utility for sure. That is, in the language of decision theory, the sequence p, q is strongly dominated by the sequence r, r. And so it would surely be irrational to choose it. And the same goes for any sequence of credences that isn’t constant.
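The dominance claim above can be made concrete with the Brier score as the measure of epistemic loss. If you flip from credence p to credence q (with p ≠ q), then the constant sequence r, r with r = (p + q)/2 incurs strictly lower total loss however the proposition turns out, since (p − x)² + (q − x)² − 2(r − x)² = (p − q)²/2 > 0. The particular values 0.8 and 0.65 are the illustrative credences from earlier:

```python
# Check that the flip-flopping sequence p, q is strongly dominated by
# the constant sequence r, r with r = (p + q) / 2, using Brier loss.

def brier_loss(c, x):
    """Brier loss of credence c when the proposition's truth value is x."""
    return (c - x) ** 2

p, q = 0.8, 0.65          # the two rationally permissible credences
r = (p + q) / 2           # the dominating constant credence
for x in (0, 1):          # however the proposition turns out...
    flip_flop = brier_loss(p, x) + brier_loss(q, x)
    constant = 2 * brier_loss(r, x)
    print(x, flip_flop > constant)  # ...the constant sequence loses less
```

The same midpoint construction works for any non-constant sequence of credences, which is what makes choosing the flip-flopping pattern ahead of time irrational.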
It might seem that this result gives us reason to stick with our credences, absent new evidence. But consider the matter from the point of view of someone with rationally permissible credence p, who is about to switch to rationally permissible credence q. Having just learned the dominance result I just described, you run to warn them: Don’t switch!, you say. You’ll be dominated! But what the result shows is that, if they switch from p to q, there will be r such that r, r is guaranteed to be better than p, q. But, at the time when you warn them, there’s no way they can choose r, r. After all, p is already their first credence; they can’t make it have been r instead. And there is no r such that p, r strongly dominates p, q. And so their action of flipping from p to q is not irrational, because there isn’t an action available to them at the time they flip that dominates their action. And yet, by flipping, they have made it such that there is some alternative sequence of credences they might have had instead that does dominate them. And so we have a strange situation in which we can give no reason not to flip, and yet we can give reason not to be someone who flips.
Funnily enough, I think this does create room for something like the sorts of epistemic commitments that Callahan proposes. I think we might see epistemic commitments as something like what Julia Staffel calls terminal attitudes. Staffel distinguishes transitional attitudes and terminal attitudes.8 Transitional attitudes are the ones you form in the course of inquiry, but before the inquiry is complete; terminal ones are the ones you form at the end of inquiry, once your view is settled. My own view is that we don’t really have terminal attitudes because there’s never a definitive end to inquiry on a topic: we could nearly always continue to improve our doxastic situation with respect to it; and the features by which Staffel marks out the terminal—such as our willingness to use those attitudes in reasoning if we have to—are in fact shared by the attitudes we take throughout inquiry—the ones Staffel calls transitional.9 But I think we do treat certain attitudes as if they were terminal. That is, we mark them as terminal—defeasibly terminal, for sure, but terminal for the time being. We do this when the costs of further inquiry are too great or the expected gains too small. We say that we’ve closed inquiry on this topic and settled our view on it. And we do this to try to bind ourselves to not undertaking further inquiry on it, which we currently expect not to be worth it.
I think we do something similar when we commit to certain rationally permissible credences rather than others. We try to close off further consideration of which of the rationally permissible vantage points to adopt. Again, we try to bind ourselves to not flip-flopping. And the dominance argument given above against being a flip-flopper explains why it’s good to have such a psychological mechanism—it makes it more likely that we live a doxastic life that isn’t dominated from the point of view of epistemic value. So the commitment itself doesn’t give you reason not to change your mind. It’s rather a mechanism we use by which we try to ensure that we don’t continue to relitigate the question of which rationally permissible doxastic state to adopt.
I’ll use ‘doxastic’ throughout to mean pertaining to belief or credence.
Here’s the passage from pages 449-50, as quoted by Chris Meacham:
[I]f I really do judge that believing P in this situation would be rational, as would believing ¬P, then there should be nothing wrong with my bringing it about that I have some belief or other on the matter. But then it surely cannot matter how I go about choosing which belief to hold, whether by choosing a belief that I’d like to hold, or flipping a coin, or whatever. … [Adopting beliefs this way would be irrational, so] we have reached the conclusion that I cannot rationally accept the extreme permissivist thesis…
I discuss this suggestion in a bit more detail in Chapter 10 of Epistemic Risk and the Demands of Rationality.
I found the part about Chris and Kiran interesting. One of the central theses of common-sense ethics is that you should place higher weight on the welfare of people you have certain relationships with, and in proportion to the closeness of those relationships.
A major unanswered question for common-sense ethics is the *dynamic* network-formation question of how we should decide whether to form relationships given that doing so will change the weights in our moral utility function. So, for example, if the act-of-will in the story (i.e. "committing") would require Chris to place greater weight on Kiran's welfare (or at least give him a reason to do so), how does that affect whether or not he should perform the act-of-will in question?
I haven't read the whole thing, but the kidney example (which resonates with previous decisions of mine, though not kidney-related) seems to me to be backwards. Chris has been deferring a decision about the relationship, and Kiran's need for a kidney forces him to make the choice, one way or the other.