In August 2009, I had an email discussion with some friends about the relative importance of a few topics that are commonly seen as worth pursuing for activism. I'm reproducing that email below, with some modifications to update references and to slightly revise my own views.
Combating global warming
Possibly the main impact of climate change is its effect on existential risk -- if not directly through runaway climate change, then indirectly insofar as ecological burdens exacerbate political conflicts and increase chances of nuclear war.
As for the direct effects of global warming on the earth, I'm not totally sure whether they're net positive or negative. The direct human impact is almost certainly negative, but the wild-animal impact is less obvious: Will global warming increase or decrease the net wild-animal population of earth? I go into some details in this piece, but my conclusion from a few hours of research is that I still can't tell either way. Tentatively, I'm guessing that climate change is bad because "Global Warming Could Trigger Insect Population Boom," and most of those insects would live short lives before dying painfully at a few days or weeks of age.
My main reaction to global warming, though, is the following: Since so many people are concerned about it, and it's now a major political issue, the marginal impact of your involvement will be really small. It's much better for utilitarians to focus their energies on big-picture questions that the general public misses.
Ending aging
In general, I'm skeptical of claims that this is a utilitarian cause, because I think a lot of people have obvious ulterior motivations for wanting to support it. As a classical utilitarian, I see organisms as just buckets in which to hold positive emotions, and it doesn't matter which buckets you use to store them or how often they're replaced. That's an approximation, since in practice death causes personal anguish, pain to the elderly person, etc., but the point is that I don't view "saving lives" for its own sake as intrinsically valuable.
There are two ways in which I can see at least an attempt at a utilitarian justification for life extension:
- If you want to reduce wild-animal suffering by way of environmental destruction -- fewer habitats mean fewer animals born into short, painful lives -- then life extension might not be a bad lever, since longer human lifespans imply greater environmental burdens (pending considerations about increasing insect populations through climate change).
- This is the standard argument: Extending lifespans will lead people to care more about the far-distant future and so work toward reducing existential risk. This argument seems mildly plausible, and since I think the ending-aging project itself may be fairly cost-effective (in the sense of having high leverage for marginal donations), working on aging might be an okay way to prevent existential risk. Or maybe not -- the increased environmental burdens might make conflicts worse. And more people surviving implies more people in total (especially in the developed world), which means more brains that can think up ways to destroy the world per unit time. But it also means more brains to work on space colonization to reduce risk.
Stopping wildlife suffering
Well, this is of course my favorite option. Mainly what I would focus on here is promoting concern for animal suffering, such as through veg outreach in the short term. It might make sense also to advance ideas like humane insecticides to push the envelope on people's moral sympathies in a way that still allows for concrete action today.
Among hard-core anti-speciesists, we can be more explicit about the fact that suffering in nature can hurt just as much as suffering due to human cruelty. I think there are a number of people who would latch on to the cause if there were a group out there working on it. I've met probably 15-20 people who now care passionately about wild-animal suffering, and most of the time it was because of the influence of other people they knew. (One friend said that my piece on the topic helped reassure him that he wasn't crazy.)
Avoiding astronomical waste
Bostrom is right that if your priority is creating vast numbers of minds, rather than preventing massive amounts of suffering, then you should probably focus on existential risk. Plus, existential risk is a much easier sell to most people than a utilitronium shockwave, which is my desired outcome. Unfortunately, most people I talk to -- even at SIAI, etc. -- actively oppose pure utilitronium.
Human enhancement
As with aging, I have a hard time seeing the obvious utilitarian benefits here. Even more than with negligible senescence, I'm skeptical of the marginal returns, because I think most of this technology will probably be developed anyway for selfish reasons.
What about intelligence enhancement? Well, it's not clear to me whether that's good or bad. Smarter people would have more ability to design super-bugs, etc. per unit time, which means less reaction time for defenses against disasters, to the extent that there may be an inherent asymmetry between offense and defense. Some people (not at SIAI, but elsewhere) claim that intelligence enhancement would even improve morality, but I doubt that very much. I'm even sometimes worried that improving people's comfort level in general could be harmful.
One form of enhancement that I do think is worth exploring is changes that make people more empathetic and more utilitarian. (See the above link for more on that topic, too.) If widely deployed, this could potentially trump even promoting concern for wild-animal suffering, because the latter would follow from an increased capacity for empathy. But I have a hard time imagining how someone would do this: How do you go around telling people to change their children to make them more utilitarian, unless the parents are already hard-core utilitarians? If it could be done, though, I would be interested!
Friendly AI
I favor this more than generically reducing extinction risk. However, I'm still ambivalent about it out of concern that friendly AI could lead to more wild-animal (and other) suffering than, say, paperclipping.
For one thing, different people have different ideas of what a "nice future" would look like. For some people, a good future means propagation of life throughout the universe. For deep ecologists, it means preserving the cruelty of untouched natural habitats. (Ned Hettinger: "Respecting nature means respecting the ways in which nature trades values, and such respect includes painful killings for the purpose of life support [...].") For many others, it includes creating lab universes (if physically possible). And there will almost certainly be suffering for instrumental reasons like terraforming and simulations for scientific purposes.
What's more, that's only talking about a future in which relatively "good" people take control. But reducing extinction risk also means increasing the chance of really bad things arising from planet earth, including war-torture, savage religious-type ideologies, suffering simulated slaves, etc. We may be able to shift the course of the future somewhat, but much of it will be out of our hands and steered by Darwinian forces, so our probabilities for these undesirable outcomes never get even close to zero. Increasing the odds that humans survive necessarily means increasing the odds of really bad things by some non-trivial amount.
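To spell out the arithmetic behind this point (a rough sketch I'm adding here, using symbolic probabilities rather than anyone's actual estimates), we can decompose the chance of a dystopian outcome as

$$P(\text{dystopia}) = P(\text{survival}) \times P(\text{dystopia} \mid \text{survival}).$$

If the Darwinian forces just mentioned keep P(dystopia | survival) bounded away from zero, then any intervention that raises P(survival) also raises P(dystopia) roughly in proportion -- which is all the claim above requires.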
(At the time of writing the original email, I said that I met someone just last week who told me his moral objective function consisted in propagating life as much as possible, even though he agreed that wild animals probably endure net suffering.)
Summary: Which considerations are most important for utilitarian organizations to focus on?
My predictable answer is that the most important thing to get right (and probably the most important thing to work on, at least indirectly, depending on cost-effectiveness considerations) is to steer humanity's moral, economic, and psychological values in the direction we want. To the extent that happens, we don't have to worry too much about the rest ourselves (e.g., technical details of implementation) because that will come along for the ride with any superintelligent future civilization.
Of course, "steering values in the right direction" is a broad charter, and in many cases, the best way to promote values may be to focus on concrete projects. (Beliefs often follow actions rather than preceding them.)
There are a few main ways I envision for changing society's values: (1) straightforward social movements (e.g., civil rights, women's liberation, animal rights); (2) changing our biological/psychological constitution (e.g., reducing tendencies toward aggression and sadism, enhancing the ability to feel others' pain); and (3) influencing a seed AI. Chances are that (1) would have a big role to play in accomplishing (3).
I place more weight on influencing the values of future civilization than many people do, because I'm a metaethical emotivist and am not sure whether people in the future will feel the way I do on ethical questions (notably because many people in the present don't feel the way I do about them!) -- questions like whether it's okay to create new wildlife that will suffer (I think we shouldn't) and whether bugs would be better off not existing (I think they would).
Suggestions on meme spreading?
I don't have as many recommendations for reading as I'd like. We started a discussion on the topic on Felicifia, but it doesn't have a lot of concrete points. I hear that Nick Cooney's Change of Heart: What Psychology Can Teach Us About Spreading Social Change is a nice synthesis of research, focused especially on vegetarianism and concern for animals.
Religions provide some interesting case studies for spreading and preserving strong ideological views that can often differ significantly from evolutionary drives. That said, we don't necessarily want to replicate many of the dark arts that religions employ, because we care about actually reducing suffering in the universe, which requires rationality and sound epistemology, not just "following the party line" for all eternity.
Any recommendations for utilitarian lifestyle?
I made some observations here, and we've had a number of discussions of this type on various Felicifia forums.
A few sound bites:
- Have utilitarian friends who keep you interested in what matters most.
- Don't let the best be the enemy of the good -- e.g., with meat consumption, not wasting time on frivolities, how much to donate, etc. (This is a point that LadyMorgana has mentioned.)
- Watch some videos of animal suffering every once in a while.
- Make public commitments about your intentions to do good to reduce your risk of future relapse.
- While you're doing great work for so many sentient organisms, make sure to have fun in the process. Play, laugh, and smile.