Existential Risk

Whether it's pushpin, poetry or neither, you can discuss it here.

Existential Risk

Postby RyanCarey on 2012-08-27T10:38:00

Which of the following tasks is most important?
1. Reducing existential risk
2. Researching kinds and magnitudes of existential risk, and ways to reduce them
3. Evaluating the trajectory of humanity, such as to project its future
4. Deciding on our values.

[cross-posted to facebook utilitarians group]
You can read my personal blog here: CareyRyan.com
RyanCarey
 
Posts: 682
Joined: Sun Oct 05, 2008 1:01 am
Location: Melbourne, Australia

Re: Existential Risk

Postby Hedonic Treader on 2012-08-27T11:01:00

4 is completed for me and should be for most of you by now.
3 is important but hard due to the chaotic nature of the system and various tipping points ahead. I think the most important question is whether a big future is good or bad; the rest is probably too detailed to project meaningfully.
2 seems like a prerequisite for 1.
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon
Hedonic Treader
 
Posts: 342
Joined: Sun Apr 17, 2011 11:06 am

Re: Existential Risk

Postby Nap on 2012-08-27T13:35:00

Hedonic Treader wrote:4 is completed for me and should be for most of you by now.


I think he means as a species; clearly in that light we have not. Besides, that should never truly be "completed". You don't complete something like that. You might temporarily come to terms with it, but you can never complete it.

I think something more important than that is unifying in bigger ways. I guess 4 goes with that, but it's more than just deciding on values; it's also acting on them.
When did empathy become a mental illness?
Nap
 
Posts: 53
Joined: Tue Jul 10, 2012 4:25 am

Re: Existential Risk

Postby peterhurford on 2012-08-28T19:54:00

I pick "5.) Effectively answering this question". It's still an open question even among people who are finished with 4.

On another note, why is this topic entitled "Existential Risk"? Isn't that presupposing an answer?
Felicifia Head Admin | Ruling Felicifia with an iron fist since 2012.

Personal Site: www.peterhurford.com
Utilitarian Blog: Everyday Utilitarian

Direct Influencer Scoreboard: 2 Meatless Monday-ers, 1 Vegetarian, and 2 Giving What We Can 10% pledges.
peterhurford
 
Posts: 410
Joined: Mon Jul 02, 2012 11:19 pm
Location: Denison University

Re: Existential Risk

Postby Pablo Stafforini on 2012-08-29T03:12:00

RyanCarey wrote:1. Reducing existential risk
2. Researching kinds and magnitudes of existential risk, and ways to reduce them
3. Evaluating the trajectory of humanity, such as to project its future
4. Deciding on our values.

A recent discussion with Jesper Östman made me realize that the concept of existential risk might be an impediment to clear thinking on these issues. Consider these two alternative ways of making the world a better place:

A. Reduce the risk of human extinction.
B. Reduce the risk that, if humanity does not become extinct, humans will eventually create Dolorium.

These are very different ways of improving the world, and it's very likely that they require very different behaviors from us (e.g. AI research versus meme spreading). However, because both will count as instances of "permanently and drastically curtail[ing] humanity's potential," they both fall under the category of "existential risk reduction". As a consequence, people using the concept of existential risk might fail to appreciate the important ways in which these two approaches differ from one another. Furthermore, the similarity between the words 'existential' and 'extinction' is likely to cause folks to assume, without argument, that the most effective way to reduce existential risk is to reduce the risk of human extinction. Given that (B) is not clearly a suboptimal way to ensure that humanity realizes its full potential for successful development, this assumption is unwarranted.

So, to go back to your question, I'd list both (A) and (B) as top candidates for "tasks [that are] most important."

peterhurford wrote:I pick "5.) Effectively answering this question". It's still an open question even among people who are finished with 4.


This relates to something I think about occasionally, without making much progress. How "meta" should we go? If the second-order task of deciding which first-order task we should focus on is itself as important as any of these first-order tasks, isn't the third-order task of deciding which of these first- and second-order tasks are more important itself plausibly as important as these lower-level tasks? Etc.
"‘Méchanique Sociale’ may one day take her place along with ‘Mécanique Celeste’, throned each upon the double-sided height of one maximum principle, the supreme pinnacle of moral as of physical science." -- Francis Ysidro Edgeworth
Pablo Stafforini
 
Posts: 177
Joined: Thu Dec 31, 2009 2:07 am
Location: Oxford

Re: Existential Risk

Postby peterhurford on 2012-08-29T20:55:00

Pablo Stafforini wrote:This relates to something I think about occasionally, without making much progress. How "meta" should we go? If the second-order task of deciding which first-order task we should focus on is itself as important as any of these first-order tasks, isn't the third-order task of deciding which of these first- and second-order tasks are more important itself plausibly as important as these lower-level tasks? Etc.


Well, it would be infinitely meta to answer that question. And then "ω+1" meta to figure out how to effectively answer "how meta should we go?". And so on and so forth through all the ordinals.
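The ordinal quip above can be made precise with a hypothetical indexing of the questions (my notation, not from the thread):

```latex
\begin{align*}
Q_0 &:= \text{``What should be our priority?''} \\
Q_{n+1} &:= \text{``How should we effectively answer } Q_n\text{?''} \\
Q_\omega &:= \text{``How meta should we go?''} \quad \text{(the question about all finite levels at once)} \\
Q_{\omega+1} &:= \text{``How should we effectively answer } Q_\omega\text{?''}
\end{align*}
```

On this reading the regress continues through every ordinal, which is why a stopping rule (such as the appeal to knowing one's values below) is needed rather than further ascent.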

I think, however, that knowledge of one's values is all that is needed to be capable of answering "What should be our priority?", so I don't think you need to meta-level it out any further.
peterhurford
 
Posts: 410
Joined: Mon Jul 02, 2012 11:19 pm
Location: Denison University

Re: Existential Risk

Postby Pablo Stafforini on 2012-08-29T22:42:00

peterhurford wrote:I think, however, that knowledge of one's values is all that is needed to be capable of answering "What should be our priority?", so I don't think you need to meta-level it out any further.

Perhaps I'm misunderstanding you here, but it seems clear that, to answer that question, we also need knowledge about the real world (specifically, knowledge about the expected consequences of our acting in various ways).
Pablo Stafforini
 
Posts: 177
Joined: Thu Dec 31, 2009 2:07 am
Location: Oxford

Re: Existential Risk

Postby peterhurford on 2012-08-30T05:13:00

Pablo Stafforini wrote:
peterhurford wrote:I think, however, that knowledge of one's values is all that is needed to be capable of answering "What should be our priority?", so I don't think you need to meta-level it out any further.

Perhaps I'm misunderstanding you here, but it seems clear that, to answer that question, we also need knowledge about the real world (specifically, knowledge about the expected consequences of our acting in various ways).


We definitely need that real world knowledge to answer "What should be our priority, given utilitarianism?". But that's not the same kind of real world knowledge we would need to answer "How can we go about answering 'What should be our priority?'?" or even "How should we go about answering 'What should be our priority, given utilitarianism?'?".
peterhurford
 
Posts: 410
Joined: Mon Jul 02, 2012 11:19 pm
Location: Denison University

Re: Existential Risk

Postby Jesper Östman on 2012-08-31T02:08:00

Pablo: Agreed. Still, it is a substantial question whether different actions are optimal for these goals. At the moment for example I suspect effective altruist movement building is better than either direct AI-research or direct meme-spreading.

Jesper Östman
 
Posts: 159
Joined: Mon Oct 26, 2009 5:23 am

Re: Existential Risk

Postby Bruno Coelho on 2012-09-04T12:28:00

Jesper Östman wrote:I suspect effective altruist movement building is better than either direct AI-research or direct meme-spreading.


Like any other activism, these feel more effective because we see people doing things and cooperating for a "better world".

For example, on 01/09 I ran a THINK chapter. Where I live, only I know about these things, because most of my friends think "left" activism is the better way to combat the "system", i.e. traditional political disagreement. However, political disagreement normally degenerates into vacuous speech, with no or minimal epistemic gain.

These epistemic limitations could apply to effective altruism as well, and I presume most participants know this, which explains the "no specific cause" stance.

On existential risks, if you are already aware of them, it is better to concentrate on a few of them and do analysis. There are two ways to arrive at a priority ordering: authority or original research. Methodological limitations in value theory fall into the latter; if solved, they could generate recommendations for possible interventions.

Bruno Coelho
 
Posts: 7
Joined: Fri Jul 27, 2012 7:41 am
Location: Brazil
