Some questions for study

Postby Brian Tomasik on 2011-06-16T09:07:00

Inspired by a friend's request, I put together a list of research topics in which I'm interested. I don't pretend that they're sorted by order of importance to the world; some perhaps are just my own curiosities....

Feel free to add to the list!

* Which kinds of environmental policies help/hurt wild animals overall? Presumably, say, rainforest destruction prevents net suffering over the long term. Would global warming cause net suffering by increasing animal populations?
* Are there known examples of pesticides that are more humane than others? And how does death by an organophosphate for an insect compare with death by a parasite, disease, dehydration, etc.?
* Which animals can suffer? Insects? Copepods?
* In general, what are the neural operations that constitute the type of "conscious" suffering that we care about, rather than reflex nociception?
* What types of computer programs would fit these criteria for conscious suffering? Is there a risk that suffering programs could be instrumentally useful for advanced civilizations?
* What are the most effective ways to promote concern for animal suffering? How can we make sure people care about all animal suffering and not just that which is human-caused?
* Enumerate possible trajectories for the future of humanity. What do they imply for the amount of suffering on earth? On other planets? In simulations?
* What does the doomsday argument imply about efforts to shape post-human civilization?
* What are the most effective ways to promote the humane slaughter of chickens and fish? Political advocacy? Distributing literature?

Re: Some questions for study

Postby Gedusa on 2011-06-16T12:50:00

Just to clarify, some of these might count as suggestions to be added to the wiki in due course, although a lot of them are open questions.

Okay here are general topic interest areas for me:

- AI: Paths to friendliness, whether friendliness is possible. What knowledge will help us to create Friendliness, and what will make paperclippers more likely? (E.g. nanocomputers make paperclippers more likely but leave the probability of FAI largely unchanged).
- How long before disruptive technologies hit? (AI, nanotech, genetic engineering, space travel, VR etc.)
- What does the Fermi Paradox imply about the likelihood of various extinction events?
- What can anthropic reasoning tell us about the likelihood of various extinction events?
- What extinction events would destroy the biosphere and/or most of the suffering in our light cone? Can I increase the probability of "good" x-risks relative to "bad" x-risks without increasing the probability of x-risks overall?
- What is the likelihood that future civilizations have positive utility? (Is reducing x-risks a good idea?)
- How much suffering is there in our light-cone that we could prevent?
- Does research in any of these areas overlap with my skills?
- How much are other people working on these areas? (Where is the best point of leverage at which to put my efforts?)

General point: Whether there is anything I've missed. Whether my morality would change a little if I thought about it more. (Love this quote from Bostrom: "If we have overlooked even just one such consideration, then all our best efforts might be for naught---or less. When headed the wrong way, the last thing needed is progress. It is therefore important to pursue such lines of inquiry as have some chance of disclosing any crucial consideration to which we might have hitherto been oblivious")

Oh, and I second most of Alan's suggestions. Particularly regarding the Doomsday Argument.
World domination is such an ugly phrase. I prefer to call it world optimization

Re: Some questions for study

Postby Brian Tomasik on 2011-06-17T16:14:00

Great questions, Gedusa! I could have added most of those to my list as well.

Gedusa wrote:What extinction events would destroy the biosphere and/or most of the suffering in our light cone?

A false vacuum decay is my favorite, since it would destroy our entire future light cone. This is about as good as it gets short of a utilitronium shockwave.

From the Wikipedia article:
One scenario is that, rather than quantum tunnelling, a particle accelerator, which produces very high energies in a very small volume, could create sufficiently high energy density as to penetrate the barrier and stimulate the decay of the false vacuum to the lower-energy vacuum. Hut and Rees,[5] however, have determined that because we have observed cosmic ray collisions at much higher energies than those produced in terrestrial particle accelerators, these experiments will not, at least for the foreseeable future, pose a threat to our vacuum. Particle accelerators have reached energies of only approximately seven tera electron volts (7×10^12 eV). Cosmic ray collisions have been observed at and beyond energies of 10^18 eV, the so-called Greisen–Zatsepin–Kuzmin limit. John Leslie has argued[6] that if present trends continue, particle accelerators will exceed the energy given off in naturally occurring cosmic ray collisions by the year 2150.
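
For a rough sense of scale, the comparison implied by the figures in that passage works out as follows (the energies are the ones the quote cites; only the little calculation is illustrative):

# Rough comparison of the collision energies mentioned in the quoted passage.
accelerator_energy_ev = 7e12   # ~7 TeV, the accelerator figure from the quote
cosmic_ray_energy_ev = 1e18    # GZK-scale cosmic ray collisions, per the quote

ratio = cosmic_ray_energy_ev / accelerator_energy_ev
print(f"Cosmic rays exceed accelerator energies by a factor of ~{ratio:.0e}")
# ~1e+05 -- the gap behind Hut and Rees's argument that accelerators pose
# no near-term vacuum-decay threat.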


Gedusa wrote:How much suffering is there in our light-cone that we could prevent?

Yes! At one point I attempted a Drake-equation calculation (though I don't have it offhand). Of course, the results are questionable given the Fermi paradox.
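
A Drake-style estimate of this kind might look roughly like the sketch below; every parameter value is just an illustrative placeholder, not a figure from the original calculation or from this thread:

# Hypothetical Drake-style estimate of sentient-life-years in our future light cone.
# All values below are made-up placeholders for illustration, not claims.
stars_reachable = 1e22                 # stars in the reachable universe
frac_with_habitable_planet = 0.1
frac_habitable_developing_life = 0.01
sentient_animals_per_biosphere = 1e18  # rough, insect-dominated guess
biosphere_lifetime_years = 1e9

expected_sentient_life_years = (
    stars_reachable
    * frac_with_habitable_planet
    * frac_habitable_developing_life
    * sentient_animals_per_biosphere
    * biosphere_lifetime_years
)
print(f"~{expected_sentient_life_years:.0e} sentient-animal-years")  # ~1e+46 with these placeholders
# The Fermi paradox cuts against the middle factors: if life were this
# common, we might expect to see evidence of it.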

Re: Some questions for study

Postby Gedusa on 2011-06-17T16:46:00

A false vacuum decay is my favorite, since it would destroy our entire future light cone. This is about as good as it gets short of a utilitronium shockwave.

Ohh, that is a good scenario. Except I'd replace "utilitronium shockwave" with "lots of humans living worthwhile lives" :P
I wonder if there's actually any way we could trigger this state. Physicists seem largely skeptical of whether it's even possible in principle and also about us being able to trigger it in the near future. It also seems one of the less likely x-risks overall. The best scenario would be to have a big particle collider ready to fire if the future seemed likely to permanently contain more suffering than happiness. Though that's unlikely.... Still, it would allow me to increase the probability of a "good" x-risk happening as opposed to a "bad" one fairly simply: fund particle accelerators.

Of course, the results are questionable given the Fermi paradox.

Does this mean that the results of your equation might be wrong because they were based on the Drake Equation, which was itself wrong as demonstrated by the Fermi Paradox? 'Cause that's what I got from that sentence.
World domination is such an ugly phrase. I prefer to call it world optimization

Re: Some questions for study

Postby Brian Tomasik on 2011-06-18T13:05:00

Gedusa wrote:It also seems one of the less likely x-risks overall.

Definitely. I think it's extremely unlikely.

Gedusa wrote:Does this mean that the results of your equation might be wrong because they we based on the Drake Equation which was itself wrong as demonstrated by the Fermi Paradox? 'Cause that's what I got from that sentence.

Exactly what I was trying to say.

Re: Some questions for study

Postby Hedonic Treader on 2011-06-18T15:43:00

Some additions:

- What methods of safe, affordable hedonic enhancements are possible with current tech or on the horizon in the near term?
- Can the ability to consciously control or reduce physical pain, fear, distress, feelings of suffocation etc. become implemented in the human brain as a general ability, with safe affordable interventions, today or in the upcoming decades? Could human suffering become voluntary with current or emerging tech?
- What side-effects should be expected from the adoption of such interventions, on the individual or social level?
- What political obstacles would be expected to their wide-spread adoption? How could they be overcome?
- Are there realistic ways to use money or effort of private individuals to affect existential risks?
- Can non-human animals be changed in such a way that they perceive greatly reduced or no suffering, and/or enhanced subjective well-being, with current or emerging tech? How does this affect behavioral adaptivity? Can it be used for farm or lab animals that would otherwise suffer?
- Is David Pearce's "gradients of well-being" vision a generally feasible option of mind-design, or does it conflict with behavioral adaptivity on a fundamental level?
- Can consciousness become voluntary as a fundamental principle of mind-design (i.e. can minds be designed so that they can always opt out of consciousness, independent of the physical context)?
- Can existence become voluntary as a fundamental principle of mind-design (i.e. can minds be designed so that they can always opt out of existence, independent of the physical context)?
- Is there something fundamental about the causality of universal darwinism (exponential growth of replicators with variability, competing for limited resources) that fixes the utility average of conscious entities that come to exist within it around an equilibrium?
- If so, is it net-positive or net-negative?
- If so, is there any potential way to break the darwinian paradigm by fixing conscious life into a state-space of higher utility without destroying its existential foundations?
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon

Re: Some questions for study

Postby Hedonic Treader on 2011-06-18T15:52:00

Alan Dawrst wrote:A false vacuum decay is my favorite, since it would destroy our entire future light cone. This is about as good as it gets short of a utilitronium shockwave.

In the novel "Manifold: Time", this method is used to create an astronomically large number of new universes with gazillions of sentient life forms. Ironically, this was done by beings who already had a lossless substrate to run consciousness forever in a controlled way. They gave it up to create more darwinism, because the total state-space of possible thoughts and experiences was limited. It's worth noting that the author didn't seem to question the ethics of this; his characters act as if the moral worth of this decision were obvious.
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon

Re: Some questions for study

Postby davidpearce on 2011-06-19T12:14:00

Hedonic Treader wrote:Is David Pearce's "gradients of well-being" vision a generally feasible option of mind-design, or does it conflict with behavioral adaptivity on a fundamental level?

On an individual level, we know that high functioning life based entirely on gradients of well-being is feasible from contemporary cases of extreme hyperthymia. What's less clear is the interpersonal dynamics of any future society where everyone lives permanently above Sidgwick's "hedonic zero".

My own best guess is that existential risk, for example, is dramatically reduced if we engineer lifelong and generically hypervaluable states for all. But a counter-argument would be that (comparatively) low mood - and its concomitant subordinate behaviour http://www.biopsychiatry.com/depression/index.html - is indispensable to the functioning of social primate societies. Recall too how Aldous Huxley anticipated evolutionary psychiatry in a short passage spoken by the World Controller in Brave New World (1932):
'Mustapha Mond smiled. "Well, you can call it an experiment in rebottling if you like. It began in A.F. 473. The Controllers had the island of Cyprus cleared of all its existing inhabitants and re-colonized with a specially prepared batch of twenty-two thousand Alphas. All agricultural and industrial equipment was handed over to them and they were left to manage their own affairs. The result exactly fulfilled all the theoretical predictions. The land wasn't properly worked; there were strikes in all the factories; the laws were set at naught, orders disobeyed; all the people detailed for a spell of low-grade work were perpetually intriguing for high-grade jobs, and all the people with high-grade jobs were counter-intriguing at all costs to stay where they were. Within six years they were having a first-class civil war. When nineteen out of the twenty-two thousand had been killed, the survivors unanimously petitioned the World Controllers to resume the government of the island. Which they did. And that was the end of the only society of Alphas that the world has ever seen." '


Re: Some questions for study

Postby Hedonic Treader on 2011-06-19T14:10:00

Hi David, thanks for your reply.

davidpearce wrote:On an individual level, we know that high functioning life based entirely on gradients of well-being is feasible from contemporary cases of extreme hyperthymia. What's less clear is the interpersonal dynamics of any future society where everyone lives permanently above Sidgwick's "hedonic zero".

So social behavior may be one level of potential maladaptivity. I wonder to what degree this could be fixed by mental modes of stronger impulse control, goal-orientation, and a mix of social transparency, reputation networks, and elicitation of cooperation by anticipated social outcomes. This is something that transparent fair markets may do for relatively selfish agents, even if their moods are generally good. I'm not sure that sub-zero moods are ever needed to maintain social functionality (i.e. people wishing they were unconscious or didn't exist during any momentary experience). And Huxley's description is, of course, fictional evidence; it's not clear yet that all-happy persons losing the market game wouldn't agree to go tilling if the alternative is civil war (and potentially losing their all-happy lives).

But there is an additional aspect to the gradients approach: I'm assuming even people with adaptive cases of hyperthymia could feel significantly sub-zero hedonic states when they get physically hurt, or when they are suffocating, or when they are subject to extreme temperatures etc. A "gradients of well-being" solution working with purely above-zero affective states would need to include modes of replicating these immediate aversive functions.

Let's take the specific example of suffocation. Holding one's breath or being unable to breathe results in a very unpleasant feeling of suffocation that can - at least for most people I guess - become quite excruciating. I've read that this is partly a matter of training; some meditation techniques can reduce it, and I guess we could use technology to implement an off-switch or mental dimmer for these types of unpleasantness. So if you could switch off your pain, fear, or suffocation, you wouldn't be forced to feel these modes of badness even if the situation is physically uncontrollable. What will prevent people from hurting themselves then? The knowledge that it's bad for them, combined with a strong will to live and prosper, is one reason. I wonder if this is sufficiently sustainable in the evolutionary process. Alleles encoding for the ability to ignore your own pain at will were probably strongly selected against. To a limit, it may be a useful ability, but if it leads to people (e.g. children) switching their pain off instead of addressing integrity damage, that's clearly maladaptive. And if it's maladaptive, any equivalent hedonic enhancements will memetically be selected against by the most successful societies.

Maybe the solution is more impulse control, goal-oriented mental states, and explicit strategic thinking, combined with the voluntary ability to switch off or dim down error signals like pain or suffocation. Maybe it really is adaptive for intelligent, self-controlled mature agents. But it may be a hard problem to find a comprehensive solution to prevent the suffering of children or non-human animals who lack the "strategic thinking" component.

If it turns out that a) making such suffering completely voluntary is statistically maladaptive, and b) it can't be "out-sourced" to workarounds like exoskeletons with non-sentient computer chips that do the aversion for you, force your lungs to breathe, avoid noxious stimuli etc. (if people were even willing to give up such degrees of control to such sub-systems), the gradients approach would have to implement these aversive functions by relying on subjective goodness instead of badness. For example, the agony of suffocation would have to be replaced by an overwhelming lust for breathing to motivate the adaptive behavior. The perceived badness of physical pain would have to be replaced by a lust for avoiding the noxious stimuli etc. It seems that this is hypothetically possible, but an important part of negative feedback learning is that touching the hot plate or breathing through the plastic bag was a bad idea to begin with. If the resolution of such potentially harmful situations is no longer badness-driven but goodness-driven, the learning effect might be maladaptive. It is not clear to me that there necessarily is a fundamental solution to this problem.
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon

Re: Some questions for study

Postby Gedusa on 2011-06-19T15:31:00

Can non-human animals be changed in such a way that they perceive greatly reduced or no suffering, and/or enhanced subjective well-being, with current or emerging tech?

Yes. Discussed on Sentient Developments, though I can't find the link just now. Only works for physical pain though (not boredom or depression or whatever). Probably quite useful for farm animals.

Can consciousness become voluntary as a fundamental principle of mind-design...Can existence become voluntary as a fundemental principle of mind-design

Don't get the difference, sorry.

My own best guess is that existential risk, for example, is dramatically reduced if we engineer lifelong and generically hypervaluable states for all.

Possibly; I can certainly imagine existential risks falling if we tinkered with altruism (e.g. increased oxytocin). The question immediately becomes whether the technology to do so will come before most existential risks would occur. I'm skeptical: the genetic technology required to do this sort of stuff seems a) a way off and b) likely to pose existential risks of its own before its benefits (generations of happy altruists) are realised. Though non-genetic technology could come sooner (mood-enhancers or whatever), I'm given to doubt widespread adoption of drug-based ways of mood-enhancing.
World domination is such an ugly phrase. I prefer to call it world optimization

Re: Some questions for study

Postby Hedonic Treader on 2011-06-19T15:40:00

Gedusa wrote:
Can consciousness become voluntary as a fundamental principle of mind-design...Can existence become voluntary as a fundemental principle of mind-design

Don't get the difference, sorry.

Temporary vs. permanent shut-down of consciousness. I would like to have both abilities independent of physical context, i.e. it has to work when you find yourself in a box or are otherwise incapacitated.
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon

Re: Some questions for study

Postby davidpearce on 2011-06-19T19:13:00

Hedonic Treader, a nice analysis. That's one reason why phasing out suffering will (probably) be a long-drawn-out affair. But even today, choosing benign variants of as few as two genes
http://www.opioids.com/pain/scn9a.pdf
http://www.reproductive-revolution.com/comt.pdf
http://www.reproductive-revolution.com/ ... truism.pdf
for our prospective children via preimplantation genetic diagnosis could prevent untold suffering without compromising health and adaptability.
In a decade or two, presumably more sophisticated choices will be feasible too - and (hopefully) user-friendly software packages for prospective parents to match.

Gedusa, I'm inclined to agree. Some of the gravest forms of man-made existential risk will loom large a long time before genetically phasing out suffering could make more than a dent in the scale of the threat.
But over what kind of timescale may we anticipate the Era Of Existential Risk? Yes, self-sustaining bases on the Moon and Mars later this century will presumably diminish many man-made (and 'narrow-AI'-made) existential risks. But maybe the future of intelligent life in the universe can be safeguarded only if and when (post)humans colonise other solar systems. Such colonization will be extraordinarily difficult. Extrasolar colonization may take centuries or millennia or (some sceptics say) for ever. Even on more optimistic projections, a long time will elapse before we're in any position to embark on such colonizing missions. IMO the likelihood of our reaching that stage of evolutionary development would be greatly enhanced by genetic source code editing to phase out unpleasant experience in any guise. Alas evolution "designed" men to be hunter-warriors - and IMO thermonuclear war is likely unless we genetically modify human nature ASAP. In practice, I suspect we'll be too late to avert global catastrophe if not human extinction.


Re: Some questions for study

Postby Hedonic Treader on 2011-06-19T20:37:00

I wonder if anyone has ever tried something like this: You implant a chip into the part of the anterior cingulate cortex that's responsible for representing the badness aspect of pain. That chip should have the ability to inhibit activation in that region, but not excite it artificially. It's connected to a control mechanism of some kind, maybe something as simple as an interface with a percentage value.

The default is 100%, that's normal pain sensitivity. Choosing any lower percentage causes the chip to inhibit the activation in the ACC with respective levels of inhibition. So 50% means a 50% activation reduction, 0% means temporary pain asymbolia.

Clearly, you won't get very many healthy volunteers for such an invasive intervention, but generally speaking: Would we have the tech for a proof of concept? Are there any countries with libertarian legislation concerning voluntary surgery of this kind on otherwise healthy people? Or maybe this could be legally tried on a consenting person who requires associated brain surgery anyway.
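
To make the proposed control interface concrete, here is a toy sketch of the percentage-to-inhibition mapping described above. The function name and the clamping behaviour are made up for illustration, and no claim is made that real ACC activity scales this simply:

# Toy model of the proposed pain dial: 100 = normal pain signalling,
# 50 = the chip inhibits half of the ACC activation, 0 = full inhibition
# (temporary pain asymbolia). Purely illustrative.
def effective_acc_activation(raw_activation: float, dial_percent: float) -> float:
    """Scale the ACC's pain-related activation by the dial setting (clamped to 0-100)."""
    dial_percent = max(0.0, min(100.0, dial_percent))
    return raw_activation * (dial_percent / 100.0)

print(effective_acc_activation(raw_activation=0.9, dial_percent=50))  # -> 0.45, half strength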
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon

Re: Some questions for study

Postby Gedusa on 2011-06-19T22:06:00

@ Treader
Your plan could be tested in animal models without necessarily resorting to human trials. Simply test the animal's response to noxious stimuli with the inhibition chip set to various levels. If you can decrease the animal's apparent response then it would seem you have a decent proof-of-concept. Though you'd probably still have to establish some sort of precedent in humans. If it worked, couldn't it be used to treat chronic pain disorders?
Also, a similar idea would be for people to have a "wirehead chip" in their pleasure center which would activate in the presence of severe pain signals and "white out" the pain. Though that might have side-effects...

@ Dave
IMO the likelihood of our reaching that stage of evolutionary development would be greatly enhanced by genetic source code editing to phase out unpleasant experience in any guise

Yeah, I agree in some ways. If we got to a stage where we were well established throughout the solar system, then such genetic interventions would probably result in an increased likelihood of colonizing other solar systems. However, I think the reduction in existential risk you mentioned from colonizing parts of the solar system would be pretty drastic. I think, as you seem to, that the main problem is getting to the stage where we colonize the solar system. If we can do that then we'll probably have made it past the worst problems. So I'm worried about getting through this century, and share your pessimism regarding the likely outcome (though nuclear war isn't my top risk).
World domination is such an ugly phrase. I prefer to call it world optimization

Re: Some questions for study

Postby Brian Tomasik on 2011-06-20T13:51:00

Hedonic Treader wrote:Ironically, this was done by beings who already had a lossless substrate to run consciousness forever in a controlled way. They gave it up to create more darwinism, because the total state-space of possible thoughts and experiences was limited. It's worth noting that the author didn't seem to question the ethics of this; his characters act as if the moral worth of this decision were obvious.

Uggh; how awful. :cry:

Incidentally, I'm not aware of any connection within real physics between false-vacuum decay and creating new universes. Is there one, or was that just a sci-fi invention?

Re: Some questions for study

Postby Hedonic Treader on 2011-06-20T21:02:00

Alan Dawrst wrote:Incidentally, I'm not aware of any connection within real physics between false-vacuum decay and creating new universes. Is there one, or was that just a sci-fi invention?

I'm not sure. If there is one, it's probably mostly speculative. My impression was that Baxter bases his plots on concepts and ideas within real physics, but includes the application of very speculative ones. The main idea in the novel is that universes "reproduce" in a quasi-darwinian way by creating baby universes in big crunches and black holes. The deliberately triggered vacuum decay is depicted as a super-efficient version of the same principle.
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon

Re: Some questions for study

Postby Gedusa on 2012-02-25T16:19:00

I have more questions for study! Lots on the simulation argument.

  • What does the simulation argument tell us about the way that future civilization will develop?
  • What are the likely values of simulators? Can the world we observe give us any significant clues?
  • What is the probability that simulators can be bargained with?
  • Is the simulation argument valid?
  • Is there any weird game theory that applies to this argument?
Also:
  • Are there any likely sets of values that all evolved beings will share? Or are there values that a significant subset of evolved beings will share? (This is distinct from obvious things like survival)
  • Are there any weird alterations to depressing arguments such as the Great Filters explanation of the Fermi Paradox or the Doomsday Argument which would make these arguments less depressing? E.g. the Doomsday Argument is also consistent with the number of births dropping to zero due to mass adoption of virtual reality, and the Fermi Paradox is consistent with lots of aliens hiding from each other, because hiding increases the probability that others are hiding too and that extinction isn't inevitable - whereas expansion means that The Silence is unexplained and extinction more likely. (Sorry, word vomit)
  • Are such alterations actually likely to be true? (I don't really think the aliens one is...)

Oh and an interesting random thing I found. That's Anders Sandberg posting at the top, and later in the thread he says some pretty interesting/insane things:
Basically, our argument boils down to 1) SETI people have been thinking too small, 2) physics seem to allow spamming the universe with relatively small resources, 3) this means the great silence is at least a million times more deafening than previously thought. 4) the real explanation for that will hence be (in some sense) a million times more extreme than previously thought.

AND
*Personally*, since I think life is not too hard and intelligence not too unusual, that civilizations do not converge strongly yet are not rapidly killed off by self-made xrisks (since they are so different), and that our paper is realistic... I end up thinking that the aliens might already be here via Bracewell probes. Most likely in the form of a few extremely hard to spot structures out in the Kuiper belt that are enforcing their claim to the solar system. Whether that means running a full interdict, just preventing attempts at spamming the universe, or some form of welcome to whatever game-theoretic alliance that makes sense once humans make proper contact, I don't know.

*Personally* I think he's being insanely optimistic. Solutions generally aren't that nice. But perhaps I'm just being an old (well... not really) cynic.
World domination is such an ugly phrase. I prefer to call it world optimization

Re: Some questions for study

Postby Brian Tomasik on 2012-02-25T16:58:00

Gedusa wrote:Are there any likely sets of values that all evolved beings will share? Or are there values that a significant subset of evolved beings will share? (This is distinct from obvious things like survival)

Steve Omohundro suggests some common drives (if not exactly 'values') of intelligent agents in "The Basic AI Drives":
We identify a number of “drives” that will appear in sufficiently advanced AI systems of any design. We call them drives because they are tendencies which will be present unless explicitly counteracted. We start by showing that goal-seeking systems will have drives to model their own operation and to improve themselves. We then show that self-improving systems will be driven to clarify their goals and represent them as economic utility functions. They will also strive for their actions to approximate rational economic behavior. This will lead almost all systems to protect their utility functions from modification and their utility measurement systems from corruption. We also discuss some exceptional systems which will want to modify their utility functions. We next discuss the drive toward self-protection which causes systems to try to prevent themselves from being harmed. Finally we examine drives toward the acquisition of resources and toward their efficient utilization.


Gedusa wrote:Are there any weird alterations to depressing arguments such as the Great Filters explanation of the Fermi Paradox or the Doomsday Argument which would make these arguments less depressing?

I don't think they're too depressing, because at least we don't find ourselves in a universe filled with massive numbers of beings enduring brutality. Well, we do find such a situation among biological life on earth and probably other habitable planets, but it's not on the scale of "Astronomical Waste" numbers.

Re: Some questions for study

Postby Gedusa on 2012-02-25T17:26:00

Ah cool. That pretty much answers that question :)
I don't think they're too depressing, because at least we don't find ourselves in a universe filled with massive numbers of beings enduring brutality. Well, we do find such a situation among biological life on earth and probably other habitable planets, but it's not on the scale of "Astronomical Waste" numbers.

Yeah, we have a difference of values/opinion here which we've gone over a lot. It's depressing for me as I care more about humanity surviving and doing things according to my values. It's not for you though.

Still I suppose I could've been clearer: Are there any alterations to these arguments which would result in their conclusions being refuted?
World domination is such an ugly phrase. I prefer to call it world optimization

Re: Some questions for study

Postby wallowinmaya on 2012-02-27T22:35:00

I think there are no general refutations of the Great Filter Argument.

But there is an argument which shows that uFAI is not as dangerous as it seems.

In short: Paperclippers can't be common 'cause we see no paperclips.

Here are some posts by Katja Grace on this topic:

- http://meteuphoric.wordpress.com/2010/11/05/light-cone-eating-ai-explosions-are-not-filters/
- http://meteuphoric.wordpress.com/2010/11/11/sia-says-ai-is-no-big%C2%A0threat/


Re: Some questions for study

Postby Brian Tomasik on 2012-03-18T12:44:00

More on AI drives from Nick Bostrom (found via Pablo):
This paper discusses the relation between intelligence and motivation in artificial agents, developing and briefly arguing for two theses. The first, the orthogonality thesis, holds (with some caveats) that intelligence and final goals (purposes) are orthogonal axes along which possible artificial intellects can freely vary—more or less any level of intelligence could be combined with more or less any final goal. The second, the instrumental convergence thesis, holds that as long as they possess a sufficient level of intelligence, agents having any of a wide range of final goals will pursue similar intermediary goals because they have instrumental reasons to do so.

I agree with both of the theses, though I admit I haven't yet read the paper. :?

Re: Some questions for study

Postby Arepo on 2012-03-19T12:37:00

The former sounds unlikely to me. Given your emotivist metaethics, it's probably not surprising that we disagree on whether intelligence will tend to lead to util, but it seems unlikely that hyperintelligent agents won't at least be able to discern some 'bad' goals - e.g. the absolute protection of rights under the banner of altruism, or other goal combinations which turn out to be contradictory but not necessarily obviously so.

Also not read the paper, though, so I'll try and do so later.
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.

Re: Some questions for study

Postby Arepo on 2012-03-19T19:39:00

Read the 'Orthogonality Thesis' part of the paper, and it didn't persuade me for three reasons:

1) Bostrom's argument seems to be that since we can conceive of the two questions as spectra, we can overlay them orthogonally, and then treat any point on the graph as possible. This seems like sophistic reasoning, given that we could do the same thing with, say, 'intelligence' and 'likelihood of believing that 2+2 = 4', or suchlike comparisons. Using Quinean reasoning, one could even overlay 'intelligence' and 'the capacity for instrumental reasoning' (Bostrom's working definition of it), to the same effect. So his argument boils down to little more than assertion.

2) (really an extension of 1) If one does accept the reasoning, it still tells us nothing about probability. After all, one could imagine a hyperintelligent being glitching (presumably) long enough to think 2+2 = 5, but logical possibility is of infinitesimal interest in guiding our judgement on things like AI behaviour. One presumes that as AIs get smarter on any relevant definition, they become significantly less likely to make inferential errors. So if one assumes the probability of ethics being the conclusion of inference is >0, we can expect better ethical judgement (to some degree) from smarter AIs.

3) Even if we think the probability *is* zero (or is remote), we might still think that better intelligence will in practice correlate with other tendencies, for any number of reasons - greater mass/greater or diminishing vocabulary/greater energy use etc. It certainly won't exist in a vacuum, so it's bound to have some correlates unrelated to inferential ability, each of which in turn will have correlates. So it's far too early to say - as Bostrom seems to be implying - that AI is equally as likely to have any self-consistent motivation as any other.
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.

