--------
So, no, I don’t think that humans are unified at a base level; I just think all humans are pretty much the same once you get down to the level of implicit values.
Got it. Yeah, I think that's mostly correct. Still, I think the fine-grained differences could lead to diametrically opposed results: consider, for instance, the difference between the negative-utilitarian volition (caring only about reducing suffering) and the "panbiotic" volition (caring only about promoting the spread of life). I know different people who subscribe to each of these philosophies.
CEV itself also remains completely underspecified even given a single set of volitions (say, a single person). How do you weight the conflicting impulses? Which get to take control over others? There are thousands of ways these conflicts could be resolved, and exactly which is chosen depends on the whims and imagination of the seed programmers.
But I can see that different nation-states or interest groups (or species) might receive different proportions of the pie depending on their pre-singularity influence (even if implicit values were implemented, I can see different results depending on who gets extrapolated).
Cool.
I was working with the I. J. Good version of the Singularity as “intelligence explosion”. With that definition your scenario doesn’t count as a singularity, but under others it might; it’s probably irrelevant anyway.
Fair enough. I'll buy that scenarios in which humans don't advance to a Type II or III civilization can probably be left out of the calculation.
“There are *lots* of implicit values other than the CEV of humanity…” I don’t understand this, sorry. Can you elaborate? From what I can understand it seems like an important point.
All I meant was that it seems like there are lots of AI optimization targets besides "maximize paperclips (etc.)" and "advance human CEV." If human CEV is a so-called "implicit value" that's more nuanced than paperclip maximization, then there are tons of other "implicit values" that also wouldn't lead to paperclipping. One example could be to “Promote what biological life would become if it were allowed to flourish to its fullest extent.” That's a (vague) implicit value different from CEV that an AI could optimize. Is it too vague to be well-defined? Maybe, but I think CEV is just as vague. So if CEV counts as a non-paperclipping optimization target, then this should as well.
The overall point was just to challenge the dichotomy between "paperclip maximization" and "CEV," as though those were the only two possibilities for an intelligence explosion.
--------
Some concluding thoughts:
1. Since I lean toward negative utilitarianism, the payoff table for me looks something like this:
a. Ordinary human extinction: value ~= 0.
b. Human extinction by paperclipping: value ~= 0 (subject to further thought on the matter).
c. Human survival: value could be negative, because there are humans who want to spread life, create human-like minds (more prone to suffering than paperclips are!), and maybe cause suffering to one another during power struggles, out of religious motivation (imagine if fundamentalist Christians/Muslims got hold of simulation resources), or for fun (see, e.g., "torturing sims," or more real-life examples).
I put "subject to further thought on the matter" next to point (b) because it may be that paperclippers would cause suffering as well. For example, paperclippers might want to create lab universes because those universes will contain more paperclips (and more paperclip-maximizing AIs). Those universes would also contain infinitely many suffering wild animals.
2. Another reason that it makes sense for me to focus on wild animals is the question of leverage. Suppose it is the case that creating friendly AI is far more important than ensuring future concern for wild animals. Even if so, there are *lots* of people (comparatively speaking) working on friendly AI and existential risk, whereas practically no one is (explicitly) focusing on the implications of humanity's survival for wild animals.
The phrase "preventing existential risk" can have lots of meanings. It may refer to reducing the chance of human extinction (e.g., by asteroids or nuclear war). However, it can also mean "shaping humanity's future trajectory in such a way that bad values don't take over." Preventing paperclippers is (to most people) one example of how to do this. To me, preventing life-spreading values is another way. So you could call "raising concern for wild animals" one of the fronts in the battle against existential risk.
I guess the main question is: what specific projects do you think would be better to work on? Preventing nuclear war? Preventing paperclippers? Lobbying for use of CEV by whoever develops the first AGI?
Cheers!
Alan