Last week, Stephen Darwall, a moral philosopher at Yale University, published a critique of effective altruism in the journal Liberties (it’s behind a paywall, but if you sign up for an account with the journal, you can read two articles for free each month).
I agree with the criticisms that you make of Darwall's arguments here. But the problem is that those arguments don't get at (what I at least regard as) the deepest problems with "effective altruism". The deepest problem is the implication that it is *wrong*, or at least *irrational*, to give £1000 to institutions like the National Trust - and that, instead, every penny that we spend in charitable giving must be devoted to "doing the most good" as judged from a completely impartial perspective (what the great utilitarian Sidgwick called "the point of view ... of the Universe"). I just don't accept that. Instead, I believe that it is at least permissible for you to give greater weight in your decision-making to the needs of those with whom you have closer ties - yourself, your friends and family, your neighbours, your compatriots, and your contemporaries. The alternative is to agree with William Godwin that, when you can only save one person from a fatal fire, you should choose to save the great philanthropist Archbishop Fenelon rather than your own mother...
But I think most effective altruists would agree with you! I think there’s been this problem that some of the most visible ones are true equal-weight total hedonic utilitarians, and it’s sort of become assumed that that’s a defining feature. But the official line is only that you should give 10% of your earnings (if your earnings are above a certain amount) and that you should try to do the most good you can with that 10%. And that definitely doesn’t require complete impartiality. One thing they often point out is just how extreme the partiality would have to be not to do this, given the difference between what £5,000 can do when donated to an international health charity (save a life) and when spent on yourself or your loved ones. That’s not to say such partiality is morally impermissible. But I think their point is that, when people think it through, comparing the things they could do with different amounts of money, they often discover they aren’t partial to that extent.
Thanks, Richard! Well, I certainly agree (a) that most of us should give a lot more than we do, and (b) that our donations should include international health charities that save lives in the poorest parts of the world. But I am also quite happy with making donations to museums in the UK, Oxford colleges, and the like. (Perhaps the degree to which I am partial to those who are closer to me would shock you...?)
However, I certainly disagree with the thesis that you ascribe to the effective altruist here:
"It is precisely the recognition that their distance from you—physically, temporally, socially—does not diminish .... the strength of obligation you have to bring them happiness that drives the effective altruist."
I guess the sort of partiality that I endorse comes out particularly clearly with respect to people existing in the remote future. I believe that, as people get further and further away from you in time, it eventually becomes permissible for you to treat your reason to help them as trivial. (This is part of how I want to respond to the problem of "fanaticism"; I also hold that, quite generally, the values that guide rational choice have to be bounded somehow...)
I don't know about EA, but as far as utilitarianism goes, I sort of like Alastair Norcross's "scalar ethics" approach, which posits "reasons without demands". It gets rid of the ideas of permissible/impermissible, right and wrong, etc. Instead, there are just better and worse actions.
It would be great if I acted in a perfectly selfless and impartial way (if that's even possible). But if I mostly do, that's still pretty good, and the more the better!
It's a bit weird and not entirely satisfying, but then again neither is anything else.
"For instance, it might lead us to ignore those who do not have the capacity to hold us accountable or show us respect". Thank you!
It's exciting to finally see a legit formal epistemologist engage with normative ethics!
Utilitarianism is sometimes criticized for being too foundationalist, thereby treating its fundamental premise as unrevisable, without engaging in reflective equilibrium with competing intuitions. I think that critique confuses the concept of a theory with that of belief in the theory. But I'll leave it to the epistemologists to adjudicate those issues.
Thinking of adding "legit formal epistemologist" to my bio!
If you want to help poor people, give them money. With appropriate qualifications about public goods and distribution within households, that's the dominant view among economists these days. Charities like GiveDirectly, which pursue this path, are recommended by effective altruists. In effect, this is a consequentialist way of incorporating some of the critiques you mention, such as concerns about autonomy.