Global Catastrophic Risk Institute

Global Catastrophic Risk Institute

Postby sethbaum on 2012-09-15T18:59:00

hi all,
I intend this as a general-purpose discussion thread for my main current project, the Global Catastrophic Risk Institute. I'm putting this under the philanthropy section because GCRI is a charity (US 501c3) and does accept donations, and because this seems like the best place within the forum for this discussion. However, I'll be posting more than just info about donating to GCRI here, and likewise I'm interested in your thoughts on the organization as a whole.

As some personal background, GCRI for me is the culmination (or at least most recent phase) of my studying how to be an effective utilitarian for the last ~6-7 years, beginning with the original Felicifia site, http://felicifia.blogspot.com. I still think an online space for discussing utilitarianism is really important and I feel bad that I haven't been as active with Felicifia in recent years. Likewise I'm delighted that Felicifia continues to thrive - kudos to you all for that. Meanwhile, as a 'total' utilitarian (see details of my views here) I find that reducing the risk of global catastrophe is what I (/we) should prioritize now. (Here I won't get into the GCR vs. existential risk debate except to note that I care mainly about catastrophes of astronomical significance.) GCRI is the organization that I've developed (with Tony Barrett) to try to do this most effectively. We designed GCRI to welcome a range of views but meanwhile will focus on the astronomically significant side of GCR.

In this discussion thread, I'll try to keep you updated with some of the big happenings with GCRI. You can also sign up for our new email newsletter. I also welcome your questions, comments, concerns, ideas, etc. I'll try to keep up with the discussion here but I apologize in advance if I get distracted with other stuff. (It happens! Sorry! :) ) If you want to reach me please email - I'm much better with email.

Some questions I have for you:
1. What do you think about the idea of prioritizing GCR (/xrisk) reduction over other things we can do to help? Why?
2. What do you think about the GCRI concept as a means for reducing GCR? I'm especially interested in your criticisms and suggestions for improvement. Don't hold back on those!! :)
3. What specific projects would you like to see GCRI pursue? What might you like to participate in?

thanks,
Seth

sethbaum
 
Posts: 33
Joined: Tue Nov 11, 2008 4:07 am

Re: Global Catastrophic Risk Institute

Postby Hedonic Treader on 2012-09-16T00:32:00

sethbaum wrote:Meanwhile, as a 'total' utilitarian (see details of my views here) I find that reducing the risk of global catastrophe is what I (/we) should prioritize now.

Yes, it is a common idea that total utilitarians should focus on reducing existential risk. However, it's not clear to me that they should favor reduction. This assumes that life in the future will contain more positive than negative utility, which is far from certain even if you have no negative-leaning bias in your utilitarian inclination. I find it likely, at the very least, that non-consensual violence, the deliberate infliction of suffering and large-scale power asymmetries will be phenomena that will play big roles in the (post-)human future. This should be taken very seriously not just by negative, but also by total utilitarians. Also, optimism bias is our enemy. :)
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon
Hedonic Treader
 
Posts: 342
Joined: Sun Apr 17, 2011 11:06 am

Re: Global Catastrophic Risk Institute

Postby sethbaum on 2012-09-16T17:07:00

Hedonic Treader wrote:However, it's not clear to me that they should favor reduction. This assumes that life in the future will contain more positive than negative utility, which is far from certain even if you have no negative-leaning bias in your utilitarian inclination. I find it likely, at the very least, that non-consensual violence, the deliberate infliction of suffering and large-scale power asymmetries will be phenomena that will play big roles in the (post-)human future. This should be taken very seriously not just by negative, but also by total utilitarians. Also, optimism bias is our enemy. :)


It is indeed the case that my own concern about extinction depends on survival having positive expected value. If it had negative expected value, then I'd say we should increase the probability of extinction. I also agree that it is not certain that there would be positive value, but I think positive value is the more likely outcome.

Meanwhile, I think efforts to understand whether there would be positive or negative value can be every bit as important as efforts to understand extinction. Indeed, I would consider events bringing about global-scale negative value to fit within the set of global catastrophes and thus also within GCRI's domain. (GCRI as an organization takes an inclusive approach to what does or doesn't count as a global catastrophe, so if someone wants to argue for a particular definition of global catastrophe, we're happy to hear it out.)

Are you familiar with any research exploring the possibility of global-scale negative value? I'd be interested to see this.

sethbaum
 
Posts: 33
Joined: Tue Nov 11, 2008 4:07 am

Re: Global Catastrophic Risk Institute

Postby RyanCarey on 2012-09-16T21:21:00

Thanks for your work on GCRI, Seth!
As an outsider, it's hard to pick anything that you can clearly improve at this stage. Research, education and networking sound like a good core of activities. Perhaps you should get a new, prettier web design if you're going to put lectures online and largely communicate in a decentralised and global fashion?

That also sums up my view of global negative value. Whether the trajectory of humanity is heading toward positive or negative value is an important area of research.

And remember that just because it's important for Felicifia to continue to exist doesn't mean that participating is every potential member's optimal action!!

Good luck reducing risk, Seth :D
You can read my personal blog here: CareyRyan.com
RyanCarey
 
Posts: 682
Joined: Sun Oct 05, 2008 1:01 am
Location: Melbourne, Australia

Re: Global Catastrophic Risk Institute

Postby Pablo Stafforini on 2012-09-16T23:19:00

Hi Seth,

sethbaum wrote:It is indeed the case that my own concern about extinction depends on survival having positive expected value. If it had negative expected value, then I'd say we should increase the probability of extinction.

We can also work on increasing the expected value of colonization in scenarios where humans do survive. Is there any principled reason for thinking that we will have a greater impact on future sentience by striving to alter the probability of extinction, rather than by striving to increase expected value conditional on human survival?

This, incidentally, seems to me to be perhaps the most important research question for folks interested in existential risk, including you and the Institute over which you preside.

sethbaum wrote:Are you familiar with any research exploring the possibility of global-scale negative value? I'd be interested to see this.

Brian has written a great post on possible dystopic future scenarios.

Good luck with the Institute. It seems like a great initiative.
"‘Méchanique Sociale’ may one day take her place along with ‘Mécanique Celeste’, throned each upon the double-sided height of one maximum principle, the supreme pinnacle of moral as of physical science." -- Francis Ysidro Edgeworth
Pablo Stafforini
 
Posts: 177
Joined: Thu Dec 31, 2009 2:07 am
Location: Oxford

Re: Global Catastrophic Risk Institute

Postby sethbaum on 2012-09-17T00:18:00

Pablo Stafforini wrote:We can also work on increasing the expected value of colonization in scenarios where humans do survive. Is there any principled reason for thinking that we will have a greater impact on future sentience by striving to alter the probability of extinction, rather than by striving to increase expected value conditional on human survival?


This is a great question, and I agree it's very important. I can give a partial answer. Extinction threats come with a certain urgency and irreversibility. If humanity goes extinct, then it will never have the chance to work towards a positive colonization. Thus in broad terms I'd view preventing extinction as the core role for our era of civilization, so that future eras can work towards positive colonization. In mathematical terms this would be the solution to a dynamic optimization problem.
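
To give a flavor of what I mean (a toy two-period formulation of my own, not a worked-out model): let p(e) be the probability that civilization survives our era given risk-reduction effort e, let v(e) be the value realized during our era, and let V* be the value a surviving future era can achieve by optimizing its own choice a. Then the problem has the rough shape

    \max_{e} \; \big[ v(e) + p(e)\, V^{*} \big], \qquad V^{*} = \max_{a} \; \mathbb{E}\big[ V(a) \big],

and the point is that raising p(e) preserves the option for future eras to solve the inner maximization, i.e. to work towards positive colonization.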

But this is only a partial answer. We'll always have to make our decisions on a case-by-case basis. It may well be that some people today have opportunities to achieve greater expected value through activities other than reducing extinction risk. Working towards a positive Singularity may be one such opportunity. Working towards space colonization may be another, though I think this is more readily postponed. I personally keep an eye out for such opportunities even while I work on GCRI, but meanwhile I view GCRI as among the most effective opportunities for myself, and for many others.

That's my answer. Do you have your own answer? If so, I'd be curious to hear it.

sethbaum
 
Posts: 33
Joined: Tue Nov 11, 2008 4:07 am

Re: Global Catastrophic Risk Institute

Postby Ruairi on 2012-09-17T16:54:00

sethbaum wrote:It is indeed the case that my own concern about extinction depends on survival having positive expected value.
....
Meanwhile, I think efforts to understand whether there would be positive or negative value can be every bit as important as efforts to understand extinction. Indeed, I would consider events bringing about global-scale negative value to fit within the set of global catastrophes and thus also within GCRI's domain.


Awesome! :D

sethbaum wrote:Are you familiar with any research exploring the possibility of global-scale negative value? I'd be interested to see this.
....
This is a great question, and I agree it's very important. I can give a partial answer. Extinction threats come with a certain urgency and irreversibility. If humanity goes extinct, then it will never have the chance to work towards a positive colonization. Thus in broad terms I'd view preventing extinction as the core role for our era of civilization, so that future eras can work towards positive colonization. In mathematical terms this would be the solution to a dynamic optimization problem.


Lukas Gloor and some others are currently writing a piece on this subject, and they, Brian Tomasik, and the rest of the people on Felicifia seem to be the (only?) people seriously considering this possibility (!!!).

Brian and Holly had a discussion about urgency and irreversibility, but I'm afraid I don't remember the name of the thread; maybe someone else does? I don't think this point stands up. I'm not really sure where to begin on why I think this; it simply doesn't seem to make sense. It seems like saying: "It looks like my life is going to be unimaginably horrible, but I'm going to avoid a series of risks this year which could kill me so I can properly assess and work toward a better life after this year; also, the risks this year may be the only big chances I have at dying for a very long time." No? :P

Anyway hopefully someone else will give a more complete answer as I think this question is ridiculously important!

Fredrik Bränström also expressed interest in questions about future utility :D

EDIT: If you have answers to my questions here please let me know! :D!
Ruairi
 
Posts: 392
Joined: Tue May 10, 2011 12:39 pm
Location: Ireland

Re: Global Catastrophic Risk Institute

Postby Hedonic Treader on 2012-09-17T18:44:00

Ruairi wrote:It seems like saying: "It looks like my life is going to be unimaginably horrible, but I'm going to avoid a series of risks this year which could kill me so I can properly assess and work toward a better life after this year; also, the risks this year may be the only big chances I have at dying for a very long time." No? :P

Yes, Ruairi, this doesn't make sense to me either. Before doing anything about existential risk, I'd have to know not only how to affect it, but also in what direction I want to affect it in the first place. I think it's wishful thinking to count on the benevolence of other/future people in preventing negative utility, unless we already have a reason to think that they will.

sethbaum wrote:Are you familiar with any research exploring the possibility of global-scale negative value? I'd be interested to see this.

Yes, me too. I don't know of any academic research, only the well-known discussions in the transhumanist/x-risk-aware/utilitarian online communities, the ecosystem/malthusianism suffering analyses of wild animals by Brian, as well as some vague pointers from antinatalists like Benatar (mostly just some statistics, quoted via muflax):

Benatar wrote:Whether or not one accepts the pessimistic view I have presented of ordinary healthy life, the optimist is surely on very weak ground when one considers the amount of unequivocal suffering the world contains. […]

Consider first, natural disasters. More than fifteen million people are thought to have died from such disasters in the last 1,000 years. In the last few years, flooding, for example, has killed an estimated 20,000 annually and brought suffering to ‘tens of millions’. The number is greater in some years. In late December 2004, a few hundred thousand people lost their lives in a tsunami.

Approximately 20,000 people die every day from hunger. An estimated 840 million people suffer from hunger and malnutrition without dying from it. That is a sizeable proportion of the approximately 6.3 billion people who currently live.

Disease ravages and kills millions annually. Consider plague, for example. Between 541 CE and 1912, it is estimated that over 102 million people succumbed to plague. Remember that the human population during this period was just a fraction of its current size. The 1918 influenza epidemic killed 50 million people. Given the size of the current world human population and the increased speed and volume of global travel, a new influenza epidemic could cause millions more deaths. HIV currently kills nearly 3 million people annually. If we add all other infectious diseases, we get a total of nearly 11 million deaths per year, preceded by considerable suffering. Malignant neoplasms take more than a further 7 million lives each year, usually after considerable and often protracted suffering. Add the approximately 3.5 million accidental deaths (including over a million road accident deaths a year). When all other deaths are added, a colossal sum of approximately 56.5 million people died in 2001. That is more than 107 people per minute. […]

Although much disease is attributable to human behaviour, consider the more intentionally caused suffering that some members of our species inflict on others. One authority estimates that before the twentieth century over 133 million people were killed in mass killings. According to this same author, the first 88 years of the twentieth century saw 170 million (and possibly as many as 360 million) people ‘shot, beaten, tortured, knifed, burned, starved, frozen, crushed, or worked to death; buried alive, drowned, … [hanged], bombed, or killed in any other of the myriad ways governments have inflicted death on unarmed, helpless citizens and foreigners’.

[…]

Nor does the suffering end there. Consider the number of people who are raped, assaulted, maimed, or murdered (by private citizens, rather than governments). About 40 million children are maltreated each year. More than 100 million currently living women and girls have been subjected to genital cutting. Then there is enslavement, unjust incarceration, shunning, betrayal, humiliation, and intimidation, not to mention oppression in its myriad forms. […]


Of course, none of this takes into consideration the possibilities, let alone well-defined probabilities, of hedonic enhancement, or a more systematic connection between the darwinian dynamics affecting sentient beings and the utilitarian calculus, e.g. projected into a posthuman future. I'd say it's very hard to do this.

For completeness, even though you've probably seen it in other threads already, here is the pointer to Carl Shulman's useful post on hedonium vs dolorium.
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon
Hedonic Treader
 
Posts: 342
Joined: Sun Apr 17, 2011 11:06 am

Re: Global Catastrophic Risk Institute

Postby sethbaum on 2012-09-18T00:02:00

Thanks for the discussion. Looks like we have some different beliefs on the expected value of existence (I estimate it to be decidedly positive) and perhaps also on the capacity of future people to help out. Note that part of my project is to raise enduring awareness of these ideas, so that future people become more likely to help out.

sethbaum
 
Posts: 33
Joined: Tue Nov 11, 2008 4:07 am

Re: Global Catastrophic Risk Institute

Postby Hedonic Treader on 2012-09-18T03:50:00

sethbaum wrote:Thanks for the discussion. Looks like we have some different beliefs on the expected value of existence (I estimate it to be decidedly positive)

Based on what? What reason other than optimism bias could you possibly have to assume it's going to be positive, and not even just slightly positive in expectation, but decidedly so?
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon
Hedonic Treader
 
Posts: 342
Joined: Sun Apr 17, 2011 11:06 am

Re: Global Catastrophic Risk Institute

Postby sethbaum on 2012-09-19T01:30:00

Interesting that after all these years, the Felicifia community still doesn't have consensus on this point. I'm guessing I won't be able to create that consensus in this thread...

One quick point: Benatar wrote "An estimated 840 million people suffer from hunger and malnutrition without dying from it. That is a sizeable proportion of the approximately 6.3 billion people who currently live." This means that roughly 87% are not suffering from hunger. Even if the 840 million are in a state worse than death (I doubt it, but I haven't looked into it), the overwhelming majority could still be in a positive state. Of course this quick analysis oversimplifies. But Benatar's data doesn't even support his argument.
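
(For the record, here is the quick arithmetic behind that 87% figure, using Benatar's own numbers - a toy check only:)

    # Quick check of the shares implied by Benatar's own figures
    hungry = 840e6        # people suffering hunger/malnutrition (Benatar's estimate)
    population = 6.3e9    # approximate world population he cites
    share = hungry / population
    print(f"hungry: {share:.1%}, not hungry: {1 - share:.1%}")
    # -> hungry: 13.3%, not hungry: 86.7%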

And it is possible to change the trajectory of human civilization on these issues. For example it turns out that US meat consumption has been declining for several years now: http://vegnews.com/articles/page.do?pageId=4916&catId=1
I was pleasantly surprised to learn this. Likewise my own plans involve spreading ideas about altruism, utilitarianism, etc, which make future people more likely to attend to the big issues they face.

sethbaum
 
Posts: 33
Joined: Tue Nov 11, 2008 4:07 am

Re: Global Catastrophic Risk Institute

Postby peterhurford on 2012-09-19T03:52:00

sethbaum wrote:Interesting that after all these years, the Felicifia community still doesn't have consensus on this point. I'm guessing I won't be able to create that consensus in this thread...


I always thought there was a loose coalition between the realists, who recognize that we really can't extrapolate our predictions that far into the future and that it could easily go either way, and the idealists, who latch onto existing positive trends of moral progress and suggest things will work out. (I'm not sure which of the two camps I fall into.)
Felicifia Head Admin | Ruling Felicifia with an iron fist since 2012.

Personal Site: www.peterhurford.com
Utilitarian Blog: Everyday Utilitarian

Direct Influencer Scoreboard: 2 Meatless Monday-ers, 1 Vegetarian, and 2 Giving What We Can 10% pledges.
peterhurford
 
Posts: 410
Joined: Mon Jul 02, 2012 11:19 pm
Location: Denison University

Re: Global Catastrophic Risk Institute

Postby Hedonic Treader on 2012-09-19T05:24:00

sethbaum wrote:One quick point: Benatar wrote "An estimated 840 million people suffer from hunger and malnutrition without dying from it. That is a sizeable proportion of the approximately 6.3 billion people who currently live." This means that 87% are not suffering from hunger. Even if the 840 million are in a state worse than death (I doubt it, but I haven't looked into it), then the overwhelming majority is in a positive state. Of course this quick analysis oversimplifies. But Benatar's data doesn't even support his argument.

But now you pick one data point and pretend that his entire pessimistic position relies on it. Even just taking the 87% figure, being hungry really sucks; it's not like you can say the experience of starving is as bad as eating a cake is good. It would take a shitload of pleasure to get me to consent to being in a state of starvation 13% of the time! And that's just one data point; it doesn't end there, by a long shot. The people who aren't currently starving are still going to die in agony with high probability; well-fed people who are technically fine often still go through a good proportion of their day in an annoyed or bored mood; and most of the things even wealthy people do are essentially involuntary - people do them not necessarily because they can rationally expect something in return that makes it worth it, but because they kind of have to.

I think it's a warning sign that most religions, social ideologies and legal systems have come up with quite severe punishments and coercion against suicide - the idea seems to be that, while it essentially is recognized as a bad deal for the individual, society wants them alive and functioning so that the system can perpetuate itself. So people are made exactly as miserable as they can be made given the context, in order to extract their usefulness as soldiers, drones, gene carriers, and so forth. And optimism bias and anti-suicide norms/laws are part of the mix that keeps this running at a level of efficiency that the individual wouldn't rationally choose if they were a self-interested hedonist. I have literally facepalmed countless times when people essentially said they look forward to being compensated in heaven for this shitty earthly life when it is hopefully over soon.

sethbaum wrote:And it is possible to change the trajectory of human civilization on these issues. For example it turns out that US meat consumption has been declining for several years now: http://vegnews.com/articles/page.do?pageId=4916&catId=1
I was pleasantly surprised to learn this. Likewise my own plans involve spreading ideas about altruism, utilitarianism, etc, which make future people more likely to attend to the big issues they face.

As Peter points out, projecting optimistic trends into a utopian paradise is just not a very solid way to handle the data. I saw a documentary about Enron today. The sheer denial of reality by so many people, until a multi-billion-dollar crash hit them right in the face, was a scary reminder of how overconfident humans are in these things. I'm skeptical about the human ability to plan the future in any robust way, and I'm also skeptical about altruism. I don't think humans are very altruistic by nature, and I don't think we have strong, robust ways to create more altruism in the world by trying to convince people. It's entirely possible that the effect is there, but I don't see how it could possibly make up for the fact that, say, future wars will be bigger and victimize a hell of a lot more sentient beings.
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon
Hedonic Treader
 
Posts: 342
Joined: Sun Apr 17, 2011 11:06 am

Re: Global Catastrophic Risk Institute

Postby sethbaum on 2012-09-19T10:22:00

Yup, I'm still guessing I won't be able to create that consensus in this thread...

sethbaum
 
Posts: 33
Joined: Tue Nov 11, 2008 4:07 am

Re: Global Catastrophic Risk Institute

Postby Ruairi on 2012-09-19T12:09:00

sethbaum wrote:Yup, I'm still guessing I won't be able to create that consensus in this thread...


Do some research and convince us so! :D!
Ruairi
 
Posts: 392
Joined: Tue May 10, 2011 12:39 pm
Location: Ireland

Re: Global Catastrophic Risk Institute

Postby Ruairi on 2012-09-19T12:31:00

sethbaum wrote:a state worse than death


What is your 0 utility point?
Ruairi
 
Posts: 392
Joined: Tue May 10, 2011 12:39 pm
Location: Ireland

Re: Global Catastrophic Risk Institute

Postby Hedonic Treader on 2012-09-19T14:36:00

sethbaum wrote:Yup, I'm still guessing I won't be able to create that consensus in this thread...

That's because you're declaring a position (decidedly positive value) without defending it. If I wanted to defend your position, I would probably focus on hedonic enhancement and the hedonium over dolorium argument. However, it would be a non sequitur to declare those arguments decisive, since the total expected value still depends on the probability of the implementation of these techniques. In my experience, people would like to be able to control their own pain sensitivity, but most say they would object to manipulating their children so that they can control their pain sensitivity, even if the technology were otherwise perfectly harmless. And imagine we could create hedonium right now, given our current state of technology. How many resources would humanity put toward that end? Would this outweigh the suffering on earth now? I think not even billionaires would do it. Most of them would just buy another yacht. And this is not a moral accusation or anything; it is just the way I think the world currently actually works.

Another approach to convincing me of a decidedly positive future value would be to analyze the human brain, create a formal description of what suffering actually is, and then prove mathematically that it is inferior in all functional ways to some other implementation paradigm for the same functions - i.e. a paradigm that does not contain anything we would recognize or categorize as suffering. In other words, an actual proof that suffering will be phased out or reduced in the future not because people will be more ethical, but because it is functionally inferior to something else, in the same way in which mechanical typewriters are inferior to digital word processors, and maybe factory-farmed meat could be inferior to cheap, healthy cultured meat (note that this, too, has yet to be proven).
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon
Hedonic Treader
 
Posts: 342
Joined: Sun Apr 17, 2011 11:06 am

Re: Global Catastrophic Risk Institute

Postby Pablo Stafforini on 2012-09-20T02:17:00

Hedonic Treader wrote:If I wanted to defend your position, I would probably focus on hedonic enhancement and the hedonium over dolorium argument. However, it would be a non sequitur to declare those arguments decisive, since the total expected value still depends on the probability of the implementation of these techniques.

Isn't Shulman's argument supposed to be that these probabilities, even if small, can be ignored given the astronomically greater quantities of positive and negative affect involved in hedonium and dolorium, relative to other scenarios? If you think hedonium is more likely than dolorium, or vice versa, this seems all you need to know, as a classical utilitarian, to believe that the future will be good or bad on the whole. (Things are different for "non-classical" utilitarians, like Brian, as he himself notes in the comments to Carl's post.)
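
(A toy numerical illustration of that dominance claim - all numbers invented purely for illustration:)

    # Toy illustration: a small probability times an astronomical magnitude
    # can swamp the rest of the expected-value calculus.
    p_hedonium = 1e-4   # invented probability of a hedonium-dominated future
    p_dolorium = 5e-5   # invented probability of a dolorium-dominated future
    magnitude = 1e12    # affect of either extreme, in arbitrary units
    v_ordinary = 1.0    # affect of an ordinary future, same units
    p_ordinary = 1 - p_hedonium - p_dolorium
    ev = (p_hedonium - p_dolorium) * magnitude + p_ordinary * v_ordinary
    print(ev)  # ~5e7 > 0: the sign tracks whichever extreme is more likely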
"‘Méchanique Sociale’ may one day take her place along with ‘Mécanique Celeste’, throned each upon the double-sided height of one maximum principle, the supreme pinnacle of moral as of physical science." -- Francis Ysidro Edgeworth
Pablo Stafforini
 
Posts: 177
Joined: Thu Dec 31, 2009 2:07 am
Location: Oxford

Re: Global Catastrophic Risk Institute

Postby Hedonic Treader on 2012-09-20T02:51:00

Pablo Stafforini wrote:Isn't Shulman's argument supposed to be that these probabilities, even if small, can be ignored given the astronomically greater quantities of positive and negative affect involved in hedonium and dolorium, relative to other scenarios? If you think hedonium is more likely than dolorium, or vice versa, this seems all you need to know, as a classical utilitarian, to believe that the future will be good or bad on the whole.

If you expect a big future with only a small expected proportion of total resources being put to hedonium/dolorium, it's not automatically clear that this dominates the calculus. I don't think you can just dismiss the probabilities. Imagine we expect a scenario that starts with a period in which some share of resources is allocated according to current human-like values (making hedonium and dolorium relatively probable), but the greater future is generally determined by some more alien and darwinian optimization process spanning several galaxies (say). In such a scenario, it might be perfectly possible that the resource equivalent of a Dyson sphere is used on hedonium/dolorium without it dominating the total calculus. Even if the rest of the colonization process isn't as optimized for total hedonism as hedonium/dolorium, it's not clear that the sentience density * total quantity of the rest is automatically negligible. Hedonium/dolorium are clearly not optimized for colonization/reproduction, which means they will be selected against if there is reproductive competition.
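
(The same kind of toy model, rerun with this counter-scenario's invented numbers:)

    # Toy counter-scenario: a bounded hedonium allocation need not dominate
    # when the non-optimized remainder of colonization is vastly larger.
    hedonium_units = 1.0    # one Dyson sphere's worth of resources
    intensity_gain = 1e6    # hedonium affect per unit resource vs ordinary sentience
    rest_units = 1e9        # resource units in the darwinian colonization wave
    rest_density = 0.01     # fraction of those resources running sentience at all
    rest_affect = -0.5      # net affect per sentient resource unit (possibly negative)
    total = hedonium_units * intensity_gain + rest_units * rest_density * rest_affect
    print(total)  # 1e6 - 5e6 = -4e6: here the remainder dominates the calculus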
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon
Hedonic Treader
 
Posts: 342
Joined: Sun Apr 17, 2011 11:06 am

Re: Global Catastrophic Risk Institute

Postby CarlShulman on 2012-09-23T07:31:00

Hedonic Treader wrote:Hedonium/dolorium are clearly not optimized for colonization/reproduction, which means they will be selected against if there is reproductive competition.


What's your reasoning? With respect to the speed of colonization waves, probes carrying a non-reproductive purpose for the eventual use of resources needn't suffer competitively. See this blog post (and Robin Hanson's agreement in the comments):

http://reflectivedisequilibrium.blogspo ... seems.html

CarlShulman
 
Posts: 32
Joined: Thu May 07, 2009 2:01 pm

Re: Global Catastrophic Risk Institute

Postby sethbaum on 2012-10-05T23:56:00

Hi all,

I'm writing with some updates from GCRI, and then some comments on the discussion in this thread.

********** GCRI updates

The big update is that we now have a redesigned website, including a new blog. See in particular our Get Involved page and the October 2012 Newsletter, both of which list some opportunities for collaborating with us on various projects. (Sign up for newsletters here.)

More generally, GCRI is currently in a pretty good position to take on new collaborators/volunteers, who can be either established researchers/professionals or students interested in a GCR career. Drop me an email if you're interested.

********** Comments on the discussion

First, I should clarify: I meant that I estimate survival as having decidedly positive expected value. I do not claim certainty here, nor do I rule out the possibility of negative value. I regret that I'm not in a position to articulate my estimate in detail here.

Even if I'm not responding point-by-point in this discussion, please note that I am reading the arguments, and a lot of the links. I'm glad to be catching up on the dialog that's been going on here. Thanks for your patience as I get back up to speed.

Ruairi, you asked about my 0 utility point. I respond here.

And I am building the ideas in this discussion into my research. As a quick teaser, Tony and I are currently working on a value-of-information paper. In it, we may be able to comment on the value of short-term survival as a means of reducing uncertainty about the net value of long-term survival. Here's an initial sketch. Consider two factors:

A) The current expected value of permanent survival
B) The value of temporary survival to gain information about the value of permanent survival

As long as A is positive, survival is the recommended option and B is a moot point. (I estimate A as being "decidedly" positive but again do not have 100% confidence in this estimate.) However, for anyone who estimates A as negative, there could be some analysis comparing A and B to inform the decision about current survival.

Regarding Ruairi's idea that "this year may be the only big chances I have at dying for a very long time", we might add:

C) The probability that survival is reversible.

And for completeness:

D) The probability that extinction is reversible.

As long as C>D (presumably D=0) there would appear to be some option value in survival. But we haven't done the full analysis yet...
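
(To make the comparison concrete, here is a toy coding-up of factors A-D - my rough sketch only, not the full analysis:)

    # Toy decision sketch for factors A-D above (an invented formalization).
    def recommend(A, B, C, D=0.0):
        """A: expected value of permanent survival.
        B: value of temporary survival as information about A.
        C: probability that survival is reversible; D: same for extinction."""
        if A > 0:
            return "survive"  # B is moot when A is already positive
        # A <= 0: temporary survival can still win if the information value,
        # usable only while the choice stays reversible (C > D), outweighs -A.
        option_value = B * (C - D)
        return "survive for now" if option_value > -A else "prefer extinction"

    print(recommend(A=-1.0, B=5.0, C=1.0))  # -> survive for now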

sethbaum
 
Posts: 33
Joined: Tue Nov 11, 2008 4:07 am

