I'm currently a researcher for the Singularity Institute because I think that producing research helpful for creating Friendly AI is the most important (and most satisfying) thing I can do with my time. (For the basics, see my Singularity FAQ.)
Some members of this forum have expressed doubts about the worthiness of the Friendly AI project. If I'm right about the value of Friendly AI, then I'd like to persuade others of its value. If I'm wrong about the value of Friendly AI, then I'd like to be persuaded of that so I can spend my time doing something else.
I'd like to focus this discussion not on the plausibility of an intelligence explosion or on the many other possible topics, but on the issue of whether Friendly AI would be 'good' for the universe.
Alan Dawrst, in particular, has expressed some misgivings about SI's mission:
A main reason why I’m less enthusiastic about SIAI is that the organization’s primary focus is on reducing existential risk, but I really don’t know if existential risk is net good or net bad. As I said in one Felicifia discussion: “my current stance is to punt on the question of existential risk and instead to support activities that, if humans do survive, will encourage our descendants to reduce rather than multiply suffering in their light cone. This is why I donate to Vegan Outreach, to spread awareness of how bad suffering is and how much animal suffering matters, with the hope that this will eventually blossom into greater concern for the preponderate amounts of suffering in the wild.”
“Safe AI” sounds like a great goal, but what’s safe in the eyes of many people may not be safe for wild animals. Most people would prefer an AI with human values over a paperclipper. However, it’s quite possible that a paperclipper would be less likely to cause massive suffering than a human-inspired AI. The reason is that humans have motivations to spread life and to simulate minds closer to their own in mind-space; simulations of completely foreign types of minds don’t count as “suffering” in my book and so don’t pose a direct risk. (The main concern would be if paperclippers simulated human or animal minds for instrumental reasons.) In other words, I might prefer an unsafe AI over a “safe” one. Most unsafe AIs are paperclippers rather than malevolent torturers.
I'd like to clarify my understanding of this position. Are we using total utilitarianism or average utilitarianism for the moral calculus? Negative or positive utilitarianism? Are we using a person-affecting view or not? Is there a special concern for terrestrial animal suffering, or for suffering in general? (We may be approaching a transition point after which most conscious minds will run not on meat but on non-meat substrates; is there a reason to care more about the suffering of minds that run on meat?)
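To make the first of these distinctions concrete, here is one standard way to formalize them (my notation and simplification, not anything Dawrst has committed to):

$$U_{\text{total}} = \sum_{i=1}^{n} u_i, \qquad U_{\text{average}} = \frac{1}{n} \sum_{i=1}^{n} u_i, \qquad U_{\text{negative}} = \sum_{i=1}^{n} \min(u_i, 0),$$

where $u_i$ is the welfare of mind $i$ and $n$ is the number of minds counted. These aggregation rules can give sharply different verdicts about futures containing astronomically many minds, which is why I'm asking.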
I hope Mr. Dawrst will be interested in engaging with me directly, just so the conversation stays manageable, but of course others are welcome to join as well.
Cheers,
Luke