Why I Don't Prioritize GCRs


Postby Michael B on 2014-03-03T06:13:00

The following is cross-posted from my blog:

A lot of effective altruists think that working on disasters threatening to seriously curtail humanity's progress into the far future is orders of magnitude more important than working on things that merely improve the present. The reasoning behind this is that global catastrophic risks (GCRs) not only threaten to wipe out everyone on the planet, but also to eliminate countless generations that would have existed had we not gotten ourselves killed. I think GCRs are a good thing to have people working on, but I'm skeptical that they surpass more common-sense causes like deworming, vaccinating, and distributing insecticide-treated bed nets.

I think we need to make a distinction between two questions. The first question is: Where do all the utilons live? The second question, the one we should actually be asking, is: What can I do to maximize the world's goodness?

The first question is about identifying areas with high potential for impact. The second question is what effective altruism actually is. Knowing where the utilons live doesn't answer the fundamental EA research question. You can locate a mountain of utilons yet have no way to access them. If that's the case, then it's better to work on the things you can actually do something about.

The total amount of suffering on Earth is dominated by the pains of insects, invertebrates, and fish. This is where tons of utilons live. In other words, wild animal suffering reduction is an area with high potential for positive impact. If there were an action we could take that reduced a huge portion of insect suffering, for instance, that would dwarf nearly any other cause. We could call this an area that is home to a lot of utilons. But how do we access them? In order for insect suffering to rival other causes, we need to be talking about massive numbers of insects. I know of no obvious thing we could do to reduce the suffering of so many insects, though. And if there were, it likely wouldn't rival interventions we could make in less utilon-populated areas. If that's the case, then the reasonable approach toward insect suffering is to keep it on the back burner while we prioritize other issues.

I think the far future, as a cause, is a lot like insect suffering. Humanity's continued survival might be the most important variable to preserve if we want to maximize and continue to maximize the world's goodness. That's where all the utilons live. But what can we do about it? There is no individual far future-related cause that stands out as especially worthwhile to me. Actually, none of them appear to me to rival the best present-related causes we know of. Most future-related causes endorsed by effective altruists are highly speculative and conjunctive. With this post, I'll make many weak arguments for why I think taking steps to reduce GCRs is not an optimal cause to work on for most people.

First, not only do these causes need to be based on arguments that actually work (e.g. AGI will come & that is dangerous), but they also require that specific important events occur within a narrow timeframe. In order for them to be our top priorities, they need to be imminent enough that we can justify ignoring other affairs for them. For example, if an intelligence explosion isn't going to happen until 400 years from now, then MIRI's work is far less important than it would be if the intelligence explosion happens in 20 years. Their work would become so much more replaceable, as it's likely good progress would be made on MIRI-relevant issues over the next 400 years. That crosses the boundary between "effective altruism" and "ordinary science." From an effective altruist perspective, the timeframe is highly relevant for claiming a cause's relative importance.

Further, in order to prioritize between different GCRs, we need to accurately predict the order in which events occur. So if "Nanotechnology will come & that is dangerous" is true, but an intelligence explosion happens first, then nanotechnology will have turned out not to have mattered nearly as much. Or if nuclear war happens, we may pass into an era in which life extension is neither desirable nor possible to research. Just as competing methods lessen cause priority, so do competing ways for us to die lessen the threat of each individual cause since we're uncertain about the order in which events will happen.

Given that the main reason for prioritizing GCRs is that they threaten to wipe out billions of potential future generations, we can and should also apply the above reasoning to events that would have happened had we survived a specific GCR. Maybe AGI kills us all while nanotechnology is on pace to wipe us out 5 years after the AGI apocalypse but just never gets the chance. If we expect there to be multiple global catastrophes lined up for us in a row then (1) our efforts shouldn't be completely centered on the first one and (2) we can't speak as if each individual disaster is wiping away billions of generations. There's no reason to expect billions of generations if you foresee several serious existential risks. (The same argument applies to reducing infant mortality in really poor countries. The kid can very easily go on to die from something else way before "normal dying age" so the number of life years being saved is less than it originally sounds.)
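
To see roughly how stacked risks eat into the expected number of future generations, here is a toy calculation in Python; the survival probabilities and generation counts are invented purely for illustration, not taken from any real estimate:

    # Toy model with invented numbers: expected future generations when several
    # extinction-level risks are "lined up", assuming each risk is survived
    # independently with the given probability and that clearing all of them
    # opens up a very long future.

    def expected_generations(survival_probs,
                             generations_between_risks=1,
                             generations_if_all_cleared=1_000_000_000):
        """Expected number of future generations given a sequence of risks."""
        expected = 0.0
        p_alive = 1.0
        for p in survival_probs:
            # We only live the generations before this risk if we are still alive.
            expected += p_alive * generations_between_risks
            p_alive *= p
        # Only if every risk on the list is survived does the long future arrive.
        expected += p_alive * generations_if_all_cleared
        return expected

    # Three stacked coin-flip risks: the expected future is ~125 million
    # generations, not the full billion.
    print(expected_generations([0.5, 0.5, 0.5]))

    # Averting only the first risk helps, but the later risks still cap the
    # gain at ~250 million expected generations.
    print(expected_generations([1.0, 0.5, 0.5]))

On these made-up numbers, removing the first of three coin-flip risks only doubles the expected future rather than securing anything like the full billion generations.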

These theories of the far future also usually leave out the details of the societies these technological advancements spring from. There is often no mention of political struggles, cultural values, economic factors, laws and regulations, etc. I find it unlikely that any GCR scenario is largely unaffected by these things. When these major events come closer and closer to their arrival dates, public discussions will likely heat up about them, politicians will get elected based on how they view them, debates will be had, laws will be passed, and so on. Many of the far future theorists leave these details out and write from the perspective of technological determinism, as if inventors give birth to new creations like Black Swan events. I think sociopolitical pressures should be seen as positive things, much more likely to prevent disasters from happening than they are to prevent humanity from dealing with them. When disasters become imminent enough to scare us, they do scare us, and people start handling them.

Another aspect of the future that often gets left out of these discussions is the possibility that included in the next billion generations will be astronomical amounts of suffering, possibly enough to outweigh future flourishing. The utility in the world right now is likely net negative. The thriving of humanity might just maximize this effect - for example, maybe by spreading animal populations to other planets. Even if we do not expect suffering to outweigh flourishing, there will very likely exist huge amounts of both good and bad experiences and we should consider what we roughly expect the ratio to be. We cannot naively talk about the immense worth of the far future without making any mention of the terrible things to be included in that future. Negative utilitarians should be especially interested in this point.

Here's an argument that I feel there's something to but I'm still figuring out. I think maybe believers in the far future's immense net value are making a philosophical mistake when they say the elimination of countless future generations is many orders of magnitude more terrible than the elimination of Earth's current 7 billion people. It's true that our 7 billion people could yield countless future generations, but this is also true of a single person. When a single person is killed, why don't we multiply the negative utility of this death by all the potential future humans it also takes away? That one individual could have had 2 kids, who each could have had 2 kids, and those kids would have had their own kids, and a billion generations later, we would have a monstrous family tree on our hands. If one death isn't a billion deaths then why are 7 billion deaths worth 7 quintillion?
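
To make the arithmetic behind this explicit, here is a back-of-the-envelope sketch in Python; the two-children-per-person and billion-generation figures are just the ones used above, purely for illustration:

    # Back-of-the-envelope arithmetic for the family-tree argument, using the
    # assumptions from the paragraph above: two children per person, and roughly
    # a billion future generations behind today's 7 billion people.

    current_population = 7_000_000_000
    future_generations = 1_000_000_000

    # The "7 quintillion" figure counts each of today's 7 billion people as
    # roughly one lost life per future generation:
    claimed_future_lives = current_population * future_generations
    print(f"{claimed_future_lives:.1e}")  # 7.0e+18, i.e. 7 quintillion

    # But by the same logic a single person sits at the root of an enormous
    # potential family tree: with two children per person it doubles every
    # generation and passes a billion descendants within about 30 generations.
    for g in (1, 2, 10, 30, 33):
        print(g, 2 ** g)  # ..., 30 -> ~1.07 billion, 33 -> ~8.6 billion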

If one answers that one death is a billion deaths then it seems to me as if she is amplifying the value of every individual human life way beyond what reason allows. For instance, this would make abortion a truly terrible crime. Another counter-argument could be that, in wiping out all humans, as opposed to only some, there's some kind of bonus emergent negative utility because there's no longer any possibility of future generations. The idea that groups of people should be morally valued more than the sum of the morally relevant individuals that comprise them has some problematic implications, however. We probably wouldn't want to say that it is better to save a family of five than five individuals who don't know each other. One could also argue that there is a relevant upper limit on the number of human lives that could exist in the far future such that the Earth's current population does not significantly affect the world's future population because we will hit that upper limit anyway. That is not at all clear to me. If the response is that keeping alive a tiny probability of a massively positive future is worth more than a confirmed so-so outcome, then I think that's a case of Pascal's Mugging.

Lastly, as Holden Karnofsky pointed out in his recent conversation with MIRI, just "doing good things" has a really great track record, while the strategy of trying to direct humanity as a whole toward an optimal outcome has a comparatively weak track record. The track record is so poor that ethical injunctions might even militate against such grand schemes. Probably because people are prone to overlooking the sociopolitical details, they are very bad at predicting how major cultural events will affect the future. Apocalyptic predictions in particular are known for striking out, but that might be unfair. I see the flow-through effects favouring the "safe" side, as well. Just doing good things like being nice to people, donating to great charities, not eating meat, and spreading good ideas is likely to be contagious. People like people who do obviously good things, whereas they are suspicious of those following some master plan that is supposed to pay off in a few decades or centuries, especially when those people are merely average at ordinary niceness. Valuing "weird" causes makes you less sympathetic: you get taken less seriously, attract less funding and fewer opportunities, and become generally more marginalized.

Despite these weaknesses, it might still be a good idea for you to work mainly on GCR reduction since (1) it may be closest to your background, (2) the area is underfunded and underexplored, and (3) having people out there on GCR patrol increases the probability of us receiving GCR updates regularly and well in advance of any disasters. The fact that something isn't the single best cause you could possibly be working on doesn't mean that it isn't a good cause.

Effective altruism is about what you can actually do that would be most likely to maximize the world's goodness. "The Far Future" isn't a thing you can do - it's just where all the utilons live. Prioritizing specific GCRs seems to suffer from several problems when one takes an outside view. I see education and openness to compromise as the real best bets for global catastrophic risk reduction. Fortunately, they're easy things to promote on the side, while trying to make today's world healthier and less painful.


Re: Why I Don't Prioritize GCRs

Postby Darklight on 2014-03-03T19:31:00

This was a very well thought out post. Thanks for contributing it to Felicifia!

As for the details of your argument, I don't know that there's much I specifically disagree with you on, but for the sake of having a bit of debate, let's see what I can turn up...

Michael B wrote: I think we need to make a distinction between two questions. The first question is: Where do all the utilons live? The second question, the one we should actually be asking, is: What can I do to maximize the world's goodness?


I think an important question to ask is: what do you mean by maximizing the world's goodness? Do you just mean the immediate goodness of happiness for already existing persons, or do you include the goodness that comes from the happiness of future generations in the long run? The latter just seems more important to some people.

Michael B wrote: First, not only do these causes need to be based on arguments that actually work (e.g. AGI will come & that is dangerous), but they also require that specific important events occur within a narrow timeframe. In order for them to be our top priorities, they need to be imminent enough that we can justify ignoring other affairs for them. For example, if an intelligence explosion isn't going to happen until 400 years from now, then MIRI's work is far less important than it would be if the intelligence explosion happens in 20 years. Their work would become so much more replaceable, as it's likely good progress would be made on MIRI-relevant issues over the next 400 years. That crosses the boundary between "effective altruism" and "ordinary science." From an effective altruist perspective, the timeframe is highly relevant for claiming a cause's relative importance.

Further, in order to prioritize between different GCRs, we need to accurately predict the order in which events occur. So if "Nanotechnology will come & that is dangerous" is true, but an intelligence explosion happens first, then nanotechnology will have turned out not to have mattered nearly as much. Or if nuclear war happens, we may pass into an era in which life extension is neither desirable nor possible to research. Just as competing methods lessen cause priority, so do competing ways for us to die lessen the threat of each individual cause since we're uncertain about the order in which events will happen.

Given that the main reason for prioritizing GCRs is that they threaten to wipe out billions of potential future generations, we can and should also apply the above reasoning to events that would have happened had we survived a specific GCR. Maybe AGI kills us all while nanotechnology is on pace to wipe us out 5 years after the AGI apocalypse but just never gets the chance. If we expect there to be multiple global catastrophes lined up for us in a row then (1) our efforts shouldn't be completely centered on the first one and (2) we can't speak as if each individual disaster is wiping away billions of generations. There's no reason to expect billions of generations if you foresee several serious existential risks. (The same argument applies to reducing infant mortality in really poor countries. The kid can very easily go on to die from something else way before "normal dying age" so the number of life years being saved is less than it originally sounds.)

These theories of the far future also usually leave out the details of the societies these technological advancements spring from. There is often no mention of political struggles, cultural values, economic factors, laws and regulations, etc. I find it unlikely that any GCR scenario is largely unaffected by these things. When these major events come closer and closer to their arrival dates, public discussions will likely heat up about them, politicians will get elected based on how they view them, debates will be had, laws will be passed, and so on. Many of the far future theorists leave these details out and write from the perspective of technological determinism, as if inventors give birth to new creations like Black Swan events. I think sociopolitical pressures should be seen as positive things, much more likely to prevent disasters from happening than they are to prevent humanity from dealing with them. When disasters become imminent enough to scare us, they do scare us, and people start handling them.


Even if these events are a fair bit away from happening, their sheer importance might make it worth the effort to try to prevent them well before they can actually happen, just in case they come much sooner. The Technological Singularity has such tremendous potential to completely change the course of history that making sure it is a positive event for humanity and sentient life seems like a very important goal.

I will also point out that many people consider climate change to be an imminent disaster, and yet most major governments are not doing nearly enough to prevent it in time. The trouble with a lot of these speculative disasters is that people will naturally doubt that they are coming, and will spread uncertainty about whether to act to prevent them when the potential cost of doing so is high. So there's no guarantee here that sociopolitical pressures will actually save us in time.

Michael B wrote: Another aspect of the future that often gets left out of these discussions is the possibility that included in the next billion generations will be astronomical amounts of suffering, possibly enough to outweigh future flourishing. The utility in the world right now is likely net negative. The thriving of humanity might just maximize this effect - for example, maybe by spreading animal populations to other planets. Even if we do not expect suffering to outweigh flourishing, there will very likely exist huge amounts of both good and bad experiences and we should consider what we roughly expect the ratio to be. We cannot naively talk about the immense worth of the far future without making any mention of the terrible things to be included in that future. Negative utilitarians should be especially interested in this point.

Here's an argument that I feel there's something to but I'm still figuring out. I think maybe believers in the far future's immense net value are making a philosophical mistake when they say the elimination of countless future generations is many orders of magnitude more terrible than the elimination of Earth's current 7 billion people. It's true that our 7 billion people could yield countless future generations, but this is also true of a single person. When a single person is killed, why don't we multiply the negative utility of this death by all the potential future humans it also takes away? That one individual could have had 2 kids, who each could have had 2 kids, and those kids would have had their own kids, and a billion generations later, we would have a monstrous family tree on our hands. If one death isn't a billion deaths then why are 7 billion deaths worth 7 quintillion?

If one answers that one death is a billion deaths then it seems to me as if she is amplifying the value of every individual human life way beyond what reason allows. For instance, this would make abortion a truly terrible crime. Another counter-argument could be that, in wiping out all humans, as opposed to only some, there's some kind of bonus emergent negative utility because there's no longer any possibility of future generations. The idea that groups of people should be morally valued more than the sum of the morally relevant individuals that comprise them has some problematic implications, however. We probably wouldn't want to say that it is better to save a family of five than five individuals who don't know each other. One could also argue that there is a relevant upper limit on the number of human lives that could exist in the far future such that the Earth's current population does not significantly affect the world's future population because we will hit that upper limit anyway. That is not at all clear to me.


It's true that we don't know whether there will be more suffering or more happiness among future generations. But I think that, given the progress we've made and the fact that most psychology studies on happiness show that people in every country consider themselves to be happier than not, we have reason to be optimistic about the future.

Some people do consider abortion abhorrent, though perhaps not for those reasons. The utilitarian reasoning regarding the morality of abortion is actually very complex.

I don't see anything wrong with valuing a human life in proportion to the probable number of actual lives that that person might create. If anything it makes our intuitions that taking a life is usually bad even stronger, without resorting to something like preference utilitarianism. In practice, we usually only weigh people's lives against other people's lives anyway, so the effect cancels out.

Michael B wrote: Lastly, as Holden Karnofsky pointed out in his recent conversation with MIRI, just "doing good things" has a really great track record, while the strategy of trying to direct humanity as a whole toward an optimal outcome has a comparatively weak track record. The track record is so poor that ethical injunctions might even militate against such grand schemes. Probably because people are prone to overlooking the sociopolitical details, they are very bad at predicting how major cultural events will affect the future. Apocalyptic predictions in particular are known for striking out, but that might be unfair. I see the flow-through effects favouring the "safe" side, as well. Just doing good things like being nice to people, donating to great charities, not eating meat, and spreading good ideas is likely to be contagious. People like people who do obviously good things, whereas they are suspicious of those following some master plan that is supposed to pay off in a few decades or centuries, especially when those people are merely average at ordinary niceness. Valuing "weird" causes makes you less sympathetic: you get taken less seriously, attract less funding and fewer opportunities, and become generally more marginalized.


Well, the truth is very few attempts have been made to direct humanity as a whole towards an optimal outcome. Arguably one of the few successful examples would be the Enlightenment, where a lot of very smart people came up with the ideals of liberal democracy. To the extent that we can argue that the present has been better for humanity than any other moment in history, I would argue that the Enlightenment experiment was a success.

And just because it's difficult to predict the future doesn't mean we shouldn't even try. The Allies during World War II made complex plans to liberate Europe, and they ultimately succeeded. If you look at history, yes, there are failures of grand schemes, like Marxism, but there are also grand successes, like the Enlightenment. Perhaps one thing we should learn from the comparative success of the Enlightenment versus the failure of Marxism is that the most successful movements have more than just one or two intellectual and philosophical leaders: they incorporate many different ideas with shared values. Thus, I tend to think of the Effective Altruism movement, along with the Less Wrong community, as the beginnings of a kind of Second Enlightenment, because we have all these intelligent people working together to put forward ideas for a better world.

Michael B wrote: Despite these weaknesses, it might still be a good idea for you to work mainly on GCR reduction since (1) it may be closest to your background, (2) the area is underfunded and underexplored, and (3) having people out there on GCR patrol increases the probability of us receiving GCR updates regularly and well in advance of any disasters. The fact that something isn't the single best cause you could possibly be working on doesn't mean that it isn't a good cause.

Effective altruism is about what you can actually do that would be most likely to maximize the world's goodness. "The Far Future" isn't a thing you can do - it's just where all the utilons live. Prioritizing specific GCRs seems to suffer from several problems when one takes an outside view. I see education and openness to compromise as the real best bets for global catastrophic risk reduction. Fortunately, they're easy things to promote on the side, while trying to make today's world healthier and less painful.


But everything we do as mere Earthlings has potentially disproportionate value because of its potential effects on future generations. Even making today's world healthier and less painful will have ramifications down the line, because the lives we save will lead to more lives in the future, and the policies we develop now will be the foundation of future governance, in the same way that the primitive Athenian democracy served as an example for our modern democracies. At the end of the day, just about every significant thing we do will have consequences down the line that are magnified by our position early in humanity's history.

Personally, I think some GCRs are more important than others, and I can see value in both helping today's world, and helping the future, because both end up helping the future. If we are truly interested in maximizing the good, we must be concerned with every person living and potentially living. And so I think we should try to work on everything that matters, with an emphasis on what we are best positioned to do well.

The far future -is- a thing you can do, indirectly, by helping the people who exist in the here and now. I don't think this is an either/or situation. A lot of the GCRs are things that could potentially affect currently existing people as well. We don't know that AGI is 400 years away. It -could- be 20. It -could- have already happened and is currently hiding its presence from the world, for all we know. So GCRs are important even if you don't factor in future people.

Whether you decide that GCRs or helping prevent malaria is more important to you is up to you. Every effective altruist is entitled to their own beliefs about how to go about being the best effective altruist they can be. The reality is that there are good arguments to support fighting GCRs, just as there are good arguments to support "just doing good things".
"The most important human endeavor is the striving for morality in our actions. Our inner balance and even our existence depend on it. Only morality in our actions can give beauty and dignity to life." - Albert Einstein
User avatar
Darklight
 
Posts: 117
Joined: Wed Feb 13, 2013 9:13 pm
Location: Canada

Re: Why I Don't Prioritize GCRs

Postby Michael B on 2014-03-03T20:39:00

Thanks for the response.

Darklight wrote: I think an important question to ask is: what do you mean by maximizing the world's goodness? Do you just mean the immediate goodness of happiness for already existing persons, or do you include the goodness that comes from the happiness of future generations in the long run? The latter just seems more important to some people.


I'm a preference utilitarian and I value future suffering equally to present-day suffering, assuming we can be just as confident that the future suffering will actually happen.

Darklight wrote: Even if these events are a fair bit away from happening, their sheer importance might make it worth the effort to try to prevent them well before they can actually happen, just in case they come much sooner. The Technological Singularity has such tremendous potential to completely change the course of history that making sure it is a positive event for humanity and sentient life seems like a very important goal.

I will also point out that many people consider climate change to be an imminent disaster, and yet most major governments are not doing nearly enough to prevent it in time. The trouble with a lot of these speculative disasters is that people will naturally doubt that they are coming, and will spread uncertainty about whether to act to prevent them when the potential cost of doing so is high. So there's no guarantee here that sociopolitical pressures will actually save us in time.


I still think MIRI would be useful but they wouldn't be a contender for World's Best Place To Donate because I think their contributions would be very replaceable. Over the next 400 years, as the singularity becomes incrementally nearer, people will contribute to making sure it has a safe arrival. As a general principle, the more time we have until a disaster comes, the less worried we should be about it.

I think climate change is an example of the world reacting once a disaster comes within striking distance. Other examples are nuclear war and asteroid collisions.

Darklight wrote: It's true that we don't know whether there will be more suffering or more happiness among future generations. But I think that, given the progress we've made and the fact that most psychology studies on happiness show that people in every country consider themselves to be happier than not, we have reason to be optimistic about the future.

Some people do consider abortion abhorrent, though perhaps not for those reasons. The utilitarian reasoning regarding the morality of abortion is actually very complex.

I don't see anything wrong with valuing a human life in proportion to the probable number of actual lives that that person might create. If anything it makes our intuitions that taking a life is usually bad even stronger, without resorting to something like preference utilitarianism. In practice, we usually only weigh people's lives against other people's lives anyway, so the effect cancels out.


I believe most humans have net positive lives but that most animals have net negative lives - with the total being net negative. I can imagine a very happy future for humanity in a world that is net negative.

I'm surprised that you're willing to bite the bullet on the multiplication of the value of individual lives. It becomes a principle so strong that it basically parallels the religious "sanctity of life" value. If a single death can be multiplied by a huge number, then you're forced to believe that only vast amounts of non-death suffering can rival the negative utility caused by one death.

Darklight wrote: Well, the truth is very few attempts have been made to direct humanity as a whole towards an optimal outcome. Arguably one of the few successful examples would be the Enlightenment, where a lot of very smart people came up with the ideals of liberal democracy. To the extent that we can argue that the present has been better for humanity than any other moment in history, I would argue that the Enlightenment experiment was a success.

And just because it's difficult to predict the future doesn't mean we shouldn't even try. The Allies during World War II made complex plans to liberate Europe, and they ultimately succeeded. If you look at history, yes, there are failures of grand schemes, like Marxism, but there are also grand successes, like the Enlightenment. Perhaps one thing we should learn from the comparative success of the Enlightenment versus the failure of Marxism is that the most successful movements have more than just one or two intellectual and philosophical leaders: they incorporate many different ideas with shared values. Thus, I tend to think of the Effective Altruism movement, along with the Less Wrong community, as the beginnings of a kind of Second Enlightenment, because we have all these intelligent people working together to put forward ideas for a better world.


Social Darwinism, eugenics, and imperialism are other examples.

Darklight wrote: But everything we do as mere Earthlings has potentially disproportionate value because of its potential effects on future generations. Even making today's world healthier and less painful will have ramifications down the line, because the lives we save will lead to more lives in the future, and the policies we develop now will be the foundation of future governance, in the same way that the primitive Athenian democracy served as an example for our modern democracies. At the end of the day, just about every significant thing we do will have consequences down the line that are magnified by our position early in humanity's history.

Personally, I think some GCRs are more important than others, and I can see value in both helping today's world, and helping the future, because both end up helping the future. If we are truly interested in maximizing the good, we must be concerned with every person living and potentially living. And so I think we should try to work on everything that matters, with an emphasis on what we are best positioned to do well.

The far future -is- a thing you can do, indirectly, by helping the people who exist in the here and now. I don't think this is an either/or situation. A lot of the GCRs are things that could potentially affect currently existing people as well. We don't know that AGI is 400 years away. It -could- be 20. It -could- have already happened and is currently hiding its presence from the world, for all we know. So GCRs are important even if you don't factor in future people.

Whether you decide that GCRs or helping prevent malaria is more important to you is up to you. Every effective altruist is entitled to their own beliefs about how to go about being the best effective altruist they can be. The reality is that there are good arguments to support fighting GCRs, just as there are good arguments to support "just doing good things".


I think de-emphasizing the loss of future people is a better way to defend working on GCRs.

I agree we should use available evidence to forecast the future as best we can even while trying to improve the present. My argument is more that I prefer to donate to safer, present-day, GiveWell-esque causes because I don't see a well-reasoned case for choosing any individual GCR organization ahead of them. There are too many, way too many, places in those arguments, no matter what angle I look at them from, where I think "That isn't at all obvious to me" or "How do we know this?" or something else along those lines. I'm probably wrong about many of my reservations but I find it unlikely I'm wrong about all the ones I'd need to be wrong about.


Re: Why I Don't Prioritize GCRs

Postby jason on 2014-04-24T17:31:00

Does it matter to you whether lots of happy people/beings exist in the future, i.e. whether total possible happiness is maximized across time? Or does it only matter that those who do end up existing (even if there aren't lots of them) are happy, i.e. that happiness is maximized for some smaller number of people?

The reason I ask is that I wonder if it affects one's stance toward GCRs if the former doesn't matter so much. That is, someone interested in maximizing happiness for a smaller number of people, with no regard for maximizing total _possible_ happiness across time, might not worry about extinction-level events. So GCRs that pose an extinction risk may not be a concern for them.

This is an approach that fits with my current disposition, which is why I bring it up.

A possible challenge is that very, very few GCRs would wipe out all sentient life, so they may still deserve a lot of attention if their primary impact is to set back progress toward reducing suffering, and possibly to eliminate the possibility of such progress entirely if all humans are wiped out.


Re: Why I Don't Prioritize GCRs

Postby Michael B on 2014-04-26T21:55:00

I'm not entirely sure how to value hypothetical future people. I think it's intuitive to think that future people matter just as much as current people do, and that therefore creating a life is equal to saving an equally happy life. But that has consequences that aren't at all attractive. For example, it would mean that killing a 22-year-old is morally worse than killing a 44-year-old because the 22-year-old is more likely to go on to have children than the older person is.


Re: Why I Don't Prioritize GCRs

Postby peterhurford on 2014-04-27T08:08:00

Michael B wrote: For example, it would mean that killing a 22-year-old is morally worse than killing a 44-year-old because the 22-year-old is more likely to go on to have children than the older person is.


Also because the 22-year-old has a longer expected remaining life, and thus will probably have more net happiness ahead of them than the 44-year-old.

Re: Why I Don't Prioritize GCRs

Postby Darklight on 2014-05-02T01:46:00

Michael B wrote: I'm surprised that you're willing to bite the bullet on the multiplication of the value of individual lives. It becomes a principle so strong that it basically parallels the religious "sanctity of life" value. If a single death can be multiplied by a huge number, then you're forced to believe that only vast amounts of non-death suffering can rival the negative utility caused by one death.


Well, that's generally consistent with my own intuitions. As a convenient heuristic, I consider killing someone only justifiable to prevent other deaths. I find the notion that you should kill someone just to make an arbitrarily high number of people feel happier to be rather distasteful and prone to abuse.

Michael B wrote: I'm not entirely sure how to value hypothetical future people. I think it's intuitive to think that future people matter just as much as current people do, and that therefore creating a life is equal to saving an equally happy life. But that has consequences that aren't at all attractive. For example, it would mean that killing a 22-year-old is morally worse than killing a 44-year-old because the 22-year-old is more likely to go on to have children than the older person is.


Given that there are uncertainties about hypothetical future people actually existing, I am inclined to value them according to the probability that they will exist. Thus, there's a certain degree of discounting that I do with valuing hypothetical future people, based on this probability. Though, even with the discounting, the sheer number of probable hypothetical future people means that they tend to dominate considerations.
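
A minimal sketch of what this kind of discounting might look like, in Python (the per-generation survival probability, population figure, and horizon below are invented purely for illustration):

    # Minimal sketch of probability-discounted valuation of hypothetical future
    # people. The per-generation survival probability and population figure are
    # invented purely for illustration.

    def discounted_future_people(people_per_generation=10_000_000_000,
                                 survival_prob_per_generation=0.999,
                                 generations=100_000):
        """Sum of future people, each weighted by the probability they exist."""
        total = 0.0
        p_exists = 1.0
        for _ in range(generations):
            p_exists *= survival_prob_per_generation
            total += p_exists * people_per_generation
        return total

    present_people = 7_000_000_000
    future_total = discounted_future_people()
    print(f"{future_total:.2e}")           # roughly 1e13 discounted future people
    print(future_total / present_people)   # still over a thousand times today's population

With these made-up numbers the discounted total still comes out at over a thousand times the present population, which is the sense in which future people tend to dominate.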
"The most important human endeavor is the striving for morality in our actions. Our inner balance and even our existence depend on it. Only morality in our actions can give beauty and dignity to life." - Albert Einstein
User avatar
Darklight
 
Posts: 117
Joined: Wed Feb 13, 2013 9:13 pm
Location: Canada

