Reasons SIAI (and research generally) is not optimal?


Postby Brian Tomasik on 2009-07-19T21:25:00

Among the utilitarians I've met over the years, a sizable fraction have come to the conclusion that the optimal destination for utilitarian funding is organizations that research speculative futuristic scenarios and the philosophical / scientific / methodological questions that such research requires. In particular, many of these utilitarians have named the Singularity Institute for Artificial Intelligence (SIAI) as a good example of such an organization, so I'll focus on it here, but the discussion can apply more broadly.

The claim that it's optimal for utilitarians to research speculative scenarios (including unsettled methodological problems like understanding qualia or evaluating our impacts in an infinite universe) derives from the observation that small changes to the quality of our understanding could drastically alter our conclusions about which courses of action are good and bad. For instance, suppose we discovered that entities we never thought conscious actually do experience qualia and, in fact, suffer greatly in a preventable way. (This isn't an absurd suggestion -- it happened to me several years ago when I realized that animals can feel pain. To the extent that the question of which animals can suffer remains open, such a discovery process is still going on right now.) If these entities outnumbered the sentient organisms we currently know about by orders of magnitude, the new optimal course of action could be dominated by doing what would prevent the most suffering on the part of those new entities.

As far as focusing on futuristic speculation, the argument is basically that there's a non-negligible chance that humans will have vast impacts on their future light cone, affecting many orders of magnitude more sentient organisms than have or will ever populate earth during the few billion years for which life exists there. The chance that humans do have such an astronomical impact is small, but the expected value is still likely enormous.
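To make the expected-value point concrete, here's a rough back-of-the-envelope sketch in Python. Every number in it is a made-up illustration, not an estimate anyone here has defended; the point is only that a small probability times an astronomical stake can dominate the calculation.

# All figures below are purely illustrative assumptions.
p_colonize = 1e-3                    # assumed chance humans shape their future light cone
beings_if_colonize = 1e30            # assumed sentient beings affected in that branch
beings_earth_only = 1e20             # assumed total Earth-bound sentient beings, ever

ev_longshot = p_colonize * beings_if_colonize   # expected beings affected via the long shot
ev_baseline = beings_earth_only                 # treated as near-certain

print(ev_longshot, ev_baseline, ev_longshot / ev_baseline)
# Even at a 0.1% probability, the long-shot term is ~10 million times larger
# under these made-up numbers.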

As a follow-up to my previous question about where to donate, I'll note that I'm currently leaning toward donating the money to a specific research project at SIAI. While in general that organization's work is probably something that utilitarians would endorse, this particular project is one that I've coordinated with SIAI to be of special interest for utilitarians concerned about preventing massive amounts of suffering in the universe -- possibly even outside our lightcone. In general, I recommend that utilitarians consider contacting SIAI to see if the group can arrange for research that may be of mutual interest.

The main objection I have to this strategy is the following. I am a total hedonistic utilitarian with an "exchange rate" between pleasure and pain that gives a significant weight to the badness of pain. In addition, I care more about animal suffering than I think most people do, in part because hedonism implies a lot more potential value and disvalue on the part of animals than do consequentialisms that value more abstract traits that seem to be possessed mainly by humans and their evolutionary kin. The number of people who hold my particular values is very small; the number who hold utilitarianism proper is somewhat bigger; and the number of rationalists who tend to hold some brand of consequentialism is larger still.

Now, knowledge is important, but so is ideology. For instance, I have concerns that a superintelligent "friendly" AI -- perhaps influenced by deep ecology and impulses to propagate life, or perhaps just giving insufficient thought to animal suffering -- could lead to an increase in the number of wild animals throughout the universe, or perhaps in new universes. So there's a question: At what point is it better to promote your specific memes (hedonistic anti-speciesism, in my case) rather than general knowledge or AI that's generally "human-friendly" but perhaps not Benthamite? This might include, for instance, promoting concern about wild-animal suffering, so that -- if humans do have a huge impact on the future of the universe -- they do so in a positive rather than negative way. Sure, research on decision theory is important, but unless people use it to maximize the right things, it's of no benefit, and could even be harmful.

However, I should point out that while SIAI has no explicit ideology, several of its members do lean strongly utilitarian, and many more lean strongly toward some sort of rationalist consequentialism. So even on the question of ideology, SIAI may not be a bad choice for Benthamites, because the amount of philosophical overlap remains extremely high relative to the overlap with the general population. And if one arranges for specific research on a utilitarian-oriented project, the actual marginal impact of a utilitarian's donation can potentially be even better. But I still think contributing to SIAI's general funds is (probably, based on my current knowledge) an excellent choice.

What do others think here? Are there other reasons SIAI and the like are not optimal for utilitarians? For instance, perhaps the Singularity scenario is highly improbable. Or perhaps SIAI's ability to have an impact on it if it did occur would likely be minuscule. Or maybe real "friendly AI" is a utilitarian pipe dream that will almost certainly never amount to anything. While I agree with all of those statements, I still think the vast potential consequences of success here dominate the expected-value calculation.

But maybe there are other causes that would have higher chance of success? Or other organizations more qualified to address these matters? Or other donation strategies (e.g., funding research informally by coordinating with undergraduate students) that have higher leverage? In other words, tell me why SIAI is not an optimal recipient of charitable-donation dollars for expected-value maximizers?

Re: Reasons SIAI (and research generally) is not optimal?

Postby Arepo on 2009-07-19T22:34:00

Toby Ord would probably be the best person to speak to about this (although if he responds I hope he'll do so in public).

My problem with these shot-to-nothing scenarios is that the arguments inevitably involve more wild guesswork than the arguers suggest. The typical reasoning goes that some event has a massive impact on net utility, which heavily outweighs its implausibility.

But we have enough difficulty calculating the odds of even day-to-day events that the kind of precision such arguments require seems out of reach, and these are predictions that involve countless factors that such arguments usually gloss over. (I find this with your arguments for Pascal's wager, for eg, which is a relatively simple case in that all you need to do is show that the Christian heaven is more plausible than any other infinite utility alternative). The writer is always willing to offer reasons why he/she thinks the numbers carry the weight they're claimed to, but they're nowhere near being a mathematical proof. So they're ultimately gut instinct.

More immediate projects - your 'warm fuzzies' ones - involve far fewer variables, and even then, if you were to try to calculate the hedons involved, you'd never finish. One advantage they have, I suppose, is that results (or lack of them) can be turned around relatively quickly, so you can see whether they've met the goals they were aiming for. Even if the SIAI's goals happen to be sensible ones, you'll never be able to evaluate how well they're meeting them.

An alternative issue, which I think is perhaps less significant but worth considering, is simply the public perception of the charities you might give to. The average reasonably smart person is quite capable of understanding eg Toby's comparison of Fred Hollows to Seeing Eye, and might find it particularly inspiring that someone like you is exemplifying a new kind of 21st-century human, who genuinely structures large parts of his life around helping others - inspiring enough to persuade her to increase her own donations. Such inspiration is likely to be much more powerful if the causes you're supporting are ones that said average person a) intuitively supports and b) can see a powerful case for regarding as superior to some of the alternatives one might consider. The Fred Hollows vs Seeing Eye comparison holds for both points; the SIAI vs Fred Hollows comparison holds for neither.

You can potentially imagine the tabloids running a sympathetic story about someone who'd cured n people of blindness last week. If they ran a similar story about someone who'd reduced the chance of a lab universe by a factor of n, you can bet it would only be to mock them. The same probably goes, albeit slightly less, for the broadsheets - they're not exactly written exclusively by utilitarianism sympathisers.
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.
User avatar
Arepo
 
Posts: 1065
Joined: Sun Oct 05, 2008 10:49 am

Re: Reasons SIAI (and research generally) is not optimal?

Postby Brian Tomasik on 2009-07-19T23:12:00

Thanks for the comments, Arepo.

Arepo wrote:(I find this with your arguments for Pascal's wager, for eg, which is a relatively simple case in that all you need to do is show that the Christian heaven is more plausible than any other infinite utility alternative).

I think it is the case that "all you need to do" is show that a particular possibility has sufficiently large potential consequences, not discounted by a correspondingly small probability, that a particular term dominates in the expected-value calculation. But showing that is the whole meat of the question and isn't trivial.

I guess you might say it's "close to impossible." Still, I try to be an expected-value maximizer, so any non-infinitesimal change in probabilities here is, in my view, highly valuable.

Your point about public sympathy is relevant. Indeed, when making the case for, say, living frugally in order to donate large amounts, it does make sense to use tangible causes for purposes of illustration. Singer has probably made more of a positive impact on the world by taking his example charity to be Oxfam than SIAI (or even perhaps GiveWell, arguably for the same reason). Still, if your donations are private, this needn't be a problem, unless you make public claims about actually donating to Oxfam. And being public about donations to more speculative projects could be a good idea among utilitarians (hence this post).

Re: Reasons SIAI (and research generally) is not optimal?

Postby CarlShulman on 2009-07-19T23:50:00

"The chance that humans do have such an astronomical impact is small,"

Not that I necessarily disagree, but why?

In Nick Bostrom's framework, is this because of extinction (including simulations being turned off), stagnation, or posthumans that produce few sentient organisms?

http://www.nickbostrom.com/papers/future.pdf


Re: Reasons SIAI (and research generally) is not optimal?

Postby Brian Tomasik on 2009-07-20T00:20:00

The main reason I had in mind was that such scenarios sound like science fiction. That is, they depend on certain technologies (superintelligence, space travel, perhaps nanotechnology, etc.) whose development is not certain. Some may even be close to impossible -- like space travel over long distances, given the huge energy requirements? (Of course, I suppose interstellar probes manned by non-biological controllers are pretty feasible in principle.)

And then there are the more abstract reasons you point out, including anthropic ones.

Re: Reasons SIAI (and research generally) is not optimal?

Postby CarlShulman on 2009-07-20T02:09:00

"That is, they depend on certain technologies (superintelligence, space travel, perhaps nanotechnology, etc.) whose development is not certain."

Betting odds, please.
http://lesswrong.com/lw/mp/0_and_1_are_ ... abilities/


Re: Reasons SIAI (and research generally) is not optimal?

Postby Brian Tomasik on 2009-07-20T02:35:00

I like your style. ;)

I'll give some off-the-cuff probabilities (rather than odds), but I'd be glad to revise them (perhaps until agreement). Without updating for anthropics -- just based on technological risks -- I might say

P(humans extinct in 50 years) = 0.3
P(humans extinct in 100 years) = 0.6
P(humans extinct in 1000 years) = 0.8.

Conditional on humans surviving long enough that they could develop the following technologies, here are some probabilities that they would:

P(artificial general intelligence) = 0.35
P(Drexler-style molecular nanotechnology that produces almost any manufactured goods) = 0.6
P(self-replicating space probes that could be successfully dispersed throughout the galaxy) = 0.3.

Conditioning on human survival makes things messy, because the fact that humans survive may give us information about how easy these technologies are. I may not have fully accounted for that above, but the numbers are completely rough anyway. Also, the technologies are interrelated. For instance, P(nanotech given AGI) would be more like 0.9.
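To see how much those interrelations matter, here's a rough combination of the numbers above. Treating "surviving 1000 years" as a stand-in for "surviving long enough to develop the technologies" isn't quite right, and independence is the simplification just mentioned -- this is only meant to show the spread.

# Rough combination of the estimates stated above; simplifying assumptions noted.
p_survive = 1 - 0.8          # from P(humans extinct in 1000 years) = 0.8
p_agi = 0.35                 # P(AGI | survival), from above
p_nano = 0.6                 # P(molecular nanotechnology | survival), from above
p_nano_given_agi = 0.9       # stated above

p_joint_independent = p_survive * p_agi * p_nano            # ~0.042, naive independence
p_joint_conditional = p_survive * p_agi * p_nano_given_agi  # ~0.063, using P(nano | AGI)

print(p_joint_independent, p_joint_conditional)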

What are your estimates? And what other technologies / scenarios should be considered?

Re: Reasons SIAI (and research generally) is not optimal?

Postby CarlShulman on 2009-07-20T04:22:00

I won't get into the probabilities of extinction here (but check your email for a relevant decision aid) -- that's a much longer discussion -- but your first and third conditional probabilities seem weirdly low.

You're saying there's only a 35% chance of artificial intelligence being developed with thousands of years to work on the problem (with IA, expanding population, etc), in a world with apparently computable materialist physics, where evolved human brains implement intelligence enough for civilization? That seems really hard to justify.

Likewise, you don't need to go very near the speed of light for astronomical waste concerns to come into play. Orion pulse drives and the like don't rely on wacky new physics.

Drexlerian nanotechnology may not work out as advertised, but automated manufacturing bases are a much broader class, and the class that's most relevant.


Re: Reasons SIAI (and research generally) is not optimal?

Postby Brian Tomasik on 2009-07-20T04:38:00

CarlShulman wrote:You're saying there's only a 35% chance of artificial intelligence being developed with thousands of years to work on the problem (with IA, expanding population, etc), in a world with apparently computable materialist physics, where evolved human brains implement intelligence enough for civilization?


Yeah, maybe that's a little low. 50%?

I guess my intuition is that AGI might very well require a comprehensive understanding of how the human brain works, and it's quite possible to me that people just aren't smart enough to ever figure that out. Intelligence augmentation could help, but I'm also rather skeptical (probability 50%?) about whether those technologies will ever get to a stage where they work well enough to provide significant benefit.

Re: Reasons SIAI (and research generally) is not optimal?

Postby RyanCarey on 2009-07-20T06:27:00

Arepo wrote:The average reasonably smart person is quite capable of understanding eg Toby's comparison of Fred Hollows to Seeing Eye, and might find it particularly inspiring that someone like you is exemplifying a new kind of 21st-century human, who genuinely structures large parts of his life around helping others - inspiring enough to persuade her to increase her own donations. Such inspiration is likely to be much more powerful if the causes you're supporting are ones that said average person a) intuitively supports and b) can see a powerful case for regarding as superior to some of the alternatives one might consider. The Fred Hollows vs Seeing Eye comparison holds for both points; the SIAI vs Fred Hollows comparison holds for neither.

I agree with Arepo, but I believe he hasn't criticised your idea harshly enough.
When you donate to Fred Hollows, you're an outstanding human being.
When you donate to SIAI, you're a philanthropist-maverick.
I think you're really bringing the team down here. We have to recognise that we can get far more done by inducing favourable behaviour in friends, colleagues, relatives, acquaintances, and others than we can ever get done alone. To counter that you may achieve a maximal impact by inducing other philanthropic utilitarians to direct their donations towards SIAI: that just delays the fundamental question. Does donation to SIAI marginalise utilitarianism as a politically viable choice? In my opinion, the answer here is definitely yes. Disregarding absurd targets of philanthropy will hardly compromise our integrity, and it will clearly favour our public relations. I suppose some sociological and historical expertise might help here. From what sociology I know, I would imagine we can get the most done by opposing those behaviours that have moderate prevalence at the time. For example, now that slavery is nonexistent in Western countries, we have shifted our attention to sex-based, race-based, disability-based, and species-based discrimination. I fear that to promote the wellbeing of aliens is decades too far ahead of our time.

Re: Reasons SIAI (and research generally) is not optimal?

Postby EmbraceUnity on 2009-07-20T21:17:00

The primary driver of humanity's immense power to create and destroy has been technology. Altering the course of it can have radical implications. A more relevant question would be what sort of technological advancement should be promoted.

It seems to me that Open Source modes of production are more equitable, just, and efficient. If we are to create a world free of artificial scarcity, we must collaborate. Luckily, the logic of the new communications technologies we have been inundated with inherently fosters decentralized, distributed innovation and collaboration.

We need wiki-science. We need Open Source biotechnology and nanotechnology. Patents in these areas are already showing themselves to have perverse consequences. The domain of life should not be patentable; it should remain in the Commons.

Considering it is bio and nanotechnologies which hold the potential to initiate mega-scale projects such as the elimination of wild animal suffering, we must be very concerned with their development. How is eliminating wild animal suffering profitable? It clearly is not. Only the open source mode of production can effectively mobilize people to tackle this issue, and other issues of similar scale and importance which have no profit motive to incentivize them.

I don't know enough to comment on AI specifically, but certainly its utility function would need to be coded to value all sentient life. There is no way to be certain of this without it being open source. However, I don't hold out any hope for the singularity since it is all over my head, and I cannot place any reasonable probability estimate upon it. It is like one big deus ex machina that many people invoke to tell people to forget about any immediate political concerns.


Re: Reasons SIAI (and research generally) is not optimal?

Postby Brian Tomasik on 2009-07-21T02:34:00

RyanCarey wrote:I fear that to promote the wellbeing of aliens is decades too far ahead of our time.


I think that is a legitimate concern. Many people still aren't sure whether to care about, say, the chicken that went into the nuggets they're eating, so the suggestion that extraterrestrial-wild-animal suffering matters might give them fuel for a reductio against caring about animals at all.

Still, for someone who thinks extraterrestrial suffering is potentially orders of magnitude more important than chicken suffering, there's a risk that, if he doesn't promote the cause, no one will -- maybe society would never get to that stage of progress. In the case of SIAI, I think the main argument for action now is that we may simply not have time to wait for society to come around to thinking about these questions, because AGI might come first (possibly within a few decades).

EmbraceUnity, thanks for the comments. I'll defer to SIAI for answering the question of to what extent open-source AI is a good idea -- I'm not sure. As far as coding a value on sentience into the objective function of the AI, I agree with the sentiment, though I would just remark that we need to be extremely careful about how the AI determines what "happiness" and "suffering" are. We don't want an AI, wired to have a happy-face-expression detector, that turns the solar system into molecule-scale smiley-face pictures (p. 15 here). Working out such issues is one of the main projects that SIAI is tackling.

Re: Reasons SIAI (and research generally) is not optimal?

Postby Arepo on 2009-07-21T11:53:00

Alan Dawrst wrote:Still, for someone who thinks extraterrestrial suffering is potentially orders of magnitude more important than chicken suffering, there's a risk that, if he doesn't promote the cause, no one will -- maybe society would never get to that stage of progress.


I think this is unlikely. The logic of caring about non-human consciousnesses seems to flow quite straightforwardly from any view beyond basic egoism. Society as a whole, and even philosophers, seem collectively to take decades longer than individual people to draw basic logical inferences, but it does seem to happen.

Another issue I have with preventing low-probability massive disasters is that it seems like focussing on them could potentially make util genuinely self-effacing.

If we imagine we live in a universe with infinite subtlety to its physical laws, for example, then at any given point in the lifespan of sentience we might be able to point to a remote probability R that we'll be able to achieve some breakthrough quantity of utility U which we could target T in preference to maximising short-term/likely utility M such that (I'm almost certainly going to screw up the notation, but hopefully you can decipher what I'm trying to get at) P(RU|T) > M. I.e., that we should always suffer now to maximise total utility in a future we never get to.
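Tidying the notation a bit, the condition being gestured at is presumably just that the long shot's expected utility beats the short-term option:

R \cdot U > M

with R the remote probability, U the breakthrough utility and M the short-term/likely utility; if such a target T exists at every moment, the maximiser defers actual payoff forever.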

This seems especially problematic when you're trying to reduce the risk of extinction events (which I think is a large part of what the SIAI do) in a world where you're not very confident net utility is (and is expected to continue being) positive.
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.
User avatar
Arepo
 
Posts: 1065
Joined: Sun Oct 05, 2008 10:49 am

Re: Reasons SIAI (and research generally) is not optimal?

Postby Brian Tomasik on 2009-07-24T07:24:00

Arepo wrote:If we imagine we live in a universe with infinite subtlety to its physical laws, for example, then at any given point in the lifespan of sentience we might be able to point to a remote probability R that we'll be able to achieve some breakthrough quantity of utility U which we could target T in preference to maximising short-term/likely utility M such that (I'm almost certainly going to screw up the notation, but hopefully you can decipher what I'm trying to get at) P(RU|T) > M. I.e., that we should always suffer now to maximise total utility in a future we never get to.

A good point. This general scenario has been brought up by SIAI supporters, actually. I'm not convinced that it's a bug; maybe it's just a feature of utilitarianism that we should support. Sometimes potential costs are too big to get wrong, and if we could potentially prevent extraordinary amounts of suffering by finding Pascal's button, maybe we ought to look for it.

Your particular illustration raises the concern that, at any given time, it may be optimal to postpone reward, leading to the reward never actually being achieved. This is indeed a concern, and it has been discussed some in the philosophical literature. Again, this is precisely the kind of problem that SIAI has researched and will continue to research, hopefully before someone builds a naive AI that makes these kinds of mistakes.

Arepo wrote:This seems especially problematic when you're trying to reduce the risk of extinction events (which I think is a large part of what the SIAI do) in a world where you're not very confident net utility is (and is expected to continue being) positive.

You're right. This is my concern about making sure that the friendly AI actually would care about animal suffering and would be sufficiently utilitarian in its general goals that it wouldn't create massive amounts of uncompensated suffering, such as in new universes.

Re: Reasons SIAI (and research generally) is not optimal?

Postby DanielLC on 2009-07-24T16:02:00

Although always suffering now to maximize the potential total utility would, indeed, be bad, we can't do it. At some point, people will stop supporting the idea and try to get utility now.

If you were capable of that sort of thing, the thing to do would be to look for as long as you can without looking forever. For example, you can't count to a googolplex, so if you tried to go for a googolplex years, you wouldn't know when to stop.

Re: Reasons SIAI (and research generally) is not optimal?

Postby Brian Tomasik on 2009-07-25T07:44:00

DanielLC wrote:At some point, people will stop supporting the idea and try to get utility now.


Keep in mind that we're not necessarily talking about people -- AIs can be very different. And even human descendants could look very strange to us.

Re: Reasons SIAI (and research generally) is not optimal?

Postby Arepo on 2009-08-04T21:53:00

I had a few more thoughts against this, not particularly related:

1) The kind of AI-induced catastrophe you envision seems likely to wipe out local life pretty quickly if it happens. This seems likely to be a relatively clean cut compared to more familiar disasters like meteor strikes, global warfare etc. If it wipes us out, it will probably do so with much less suffering than they would.

2) The same kind of catastrophe would probably (the way you describe it) eliminate all sentient life on earth, rather than just all humans, as a major war/impact/plague etc might. If you believe that wild animal suffering outweighs wild animal happiness, then this is a much preferable result to one that left the biosphere going as a misery generator.

3) The fact that someone is asking a question doesn't mean they're answering it. One recurring objection I have to utilitarian arguments is that where precise data isn't available they fudge - fair enough in itself - but then act much too confident about their probability guesstimates, often relying on (often equally haphazard) large numbers to overwhelm the difference. I would be wary of funding any such group of people until they can show some sort of substantial evidence that their analysis of the world is more accurate than, for eg, a reasonably intelligent scientist's. Otherwise they seem as likely as the rest of us to suffer cognitive biases, not least self-preservation bias if honest enquiry would lead to the conclusion that their jobs are an inefficient use of money.

4) This is the most interesting thought I've had, specifically aimed at Alan and others with a fair amount of money to throw around and the willingness to throw it - if there are various causes which it's very hard to differentiate between with any degree of confidence, rather than picking one somewhat arbitrarily (see 3), you could do something more deliberate:

a) pick out those causes that are obviously more utilitarian than many others, and not obviously less so than any (eg curing cataracts, education for third world women, promoting veganism, universal welfare organisations in general, funding research into certain technologies and more, perhaps promoting political action on extremely cut and dried issues)

b) try to identify the most effective organisations in terms of the effectiveness of your dollar - again with the same elimination criteria (Givewell, Population Services International, Fred Hollows, SIAI etc). Probably aim to select an equal number of organisations in each category so they're equally weighted.

c) put together a list of the remaining organisations.

d) pledge a regular donation of $N to all of the organisations on the list, initially to be distributed evenly among them.

e) set up a scheme where you invite people to donate to one or more of the selected groups and offer to adjust your donation according to those given by people who sign up for it. Two possible egs:

i) first come first serve. You'll match all other donations D up to a total of N from the $N pool, distributing N-D (if >0) evenly among the list afterwards.

ii) proportionately. At the end of a specified time, you'll calculate how much of D has been given to each organisation, and divide N up with the same proportional distribution.

The obvious plus to this setup is that what you lose in slightly greater credence for the efficacy of one approach, you surely gain in giving people (both utilitarians and others) an incentive to give more - if I feel much more strongly about the risk of climate change than about short-term poverty reduction, I can pay for money to go to my preferred cause at a drastically increased rate.

If you tried ii) there'd be the risk of richer philanthropists dominating the pool and disillusioning the poorer ones, but if that happened, you could always ask some of said philanthropists to donate their money to expanding N rather than D. i) might be a better way of winning people over though, since if they donated early enough they could almost guarantee close to doubling their contribution, whereas if it's proportional, some people might find the expected gain too nebulous to feel as motivated by.
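Here's a minimal sketch of what i) and ii) could look like in Python. The charity names, the pool size N and the example donations are placeholders, and it assumes every donation names a charity on the agreed list -- it's only meant to pin down the mechanics, not to implement anything anyone has actually set up.

def match_first_come_first_served(pool, charity_list, donations):
    # Scheme i): match each donation, in the order it arrives, until the pool N
    # runs out; whatever is left (N - D, if positive) is split evenly across the list.
    matched = {name: 0.0 for name in charity_list}
    remaining = pool
    for name, amount in donations:
        m = min(amount, remaining)
        matched[name] += m
        remaining -= m
    for name in charity_list:
        matched[name] += remaining / len(charity_list)
    return matched

def match_proportional(pool, charity_list, donations):
    # Scheme ii): at the end of the period, divide the whole pool in proportion
    # to the total D each charity attracted.
    totals = {name: 0.0 for name in charity_list}
    for name, amount in donations:
        totals[name] += amount
    grand_total = sum(totals.values())
    if grand_total == 0:
        return {name: pool / len(charity_list) for name in charity_list}
    return {name: pool * t / grand_total for name, t in totals.items()}

charities = ["Fred Hollows", "SIAI", "PSI"]          # placeholder list
gifts = [("SIAI", 300.0), ("Fred Hollows", 900.0)]   # placeholder donations
print(match_first_come_first_served(1000.0, charities, gifts))
print(match_proportional(1000.0, charities, gifts))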
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.
User avatar
Arepo
 
Posts: 1065
Joined: Sun Oct 05, 2008 10:49 am

Re: Reasons SIAI (and research generally) is not optimal?

Postby RyanCarey on 2009-08-05T09:35:00

If I could try to recap your 4th point, Arepo:
> one could match others' donations to encourage contribution.
> this donation-matching could have a limit
> the donation-matching could apply only to organisations that have some claim to being the most cost-effective charity in the world.

Sounds very interesting!

Re: Reasons SIAI (and research generally) is not optimal?

Postby Arepo on 2009-08-05T12:18:00

That's about the size of it, although I don't think I was proposing an artificial limit to the donation-matching. There's obviously a practical one in that even rich benefactors can't match more than they have, but otherwise it seems to me like the bigger N is, the more motivating it would be.

That said, I had a couple of further thoughts:

f) rather than selecting one particular way of apportioning N, you could divide it up into multiple pools. Then you could see which actually received the most contributions, and adjust the pool size/number accordingly.

g) As an alternative to i) and ii), I thought of the perverse sounding

iii) Absolute. Invest N in a relatively high interest (but probably low-risk) account, and don't donate any of it except to match contributions. If at the end of your assigned time limit any of N remains, you put in the next set of N, as you would in the other examples. But at any point, the sum of N-D is sitting somewhere gathering interest, but not going to any charity.

This one seems like it might have a really powerful motivating factor, especially for people who weren't confident enough of their views to make larger contributions than normal just to swing donations from one direction to another. In this case, those contributing to D know that (in a sense, at least) N is actually not going to go anywhere unless they give their money.

You'd obviously want to find a fine balance here so that the value of iii)N wasn't too far beyond the expected sum of iii)D. But you also wouldn't want to modify it so much that it ruined the sense that N is only going to charity if you pay for it to.
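A sketch of iii) in the same style (the interest rate and figures are placeholders): the pool only leaves the account when it's matched, and the unmatched remainder sits earning interest until the next round.

def run_absolute_round(pool, donations, annual_rate=0.04):
    # Scheme iii): only matched money is ever paid out; the rest accrues interest.
    matched = 0.0
    for _, amount in donations:
        matched += min(amount, pool - matched)
    remainder = pool - matched
    return matched, remainder * (1 + annual_rate)

paid_out, carried_over = run_absolute_round(1000.0, [("SIAI", 250.0), ("Fred Hollows", 150.0)])
print(paid_out, carried_over)  # 400.0 paid out as matching, 624.0 carried into the next round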

Anecdotally, I would find iii) extremely motivating. If someone like Alan were to set up eg ii) and i), I'd certainly give a few quid to both, but if he were to set up iii) and ii) I'd probably give a token sum to ii)D to test the waters, and I'd be keen to instantly give a large sum relative to my income to iii)D.
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.
User avatar
Arepo
 
Posts: 1065
Joined: Sun Oct 05, 2008 10:49 am

Re: Reasons SIAI (and research generally) is not optimal?

Postby Brian Tomasik on 2009-08-09T03:17:00

Arepo, all of your points 1-3 from your 04 Aug 2009 post are excellent; in fact, they've been concerns of mine as well. I didn't happen to mention them in the original post, but they are important to think about.

As far as point #3, I agree that's a concern. Still, even if the SIAI community isn't more capable than your average group of philosophers, it remains the case that they're doing important philosophical work, and it seems worthwhile to increase the total size of the funding pie devoted to such research.

The various ideas under point #4 are interesting. Indeed, I think there are major philanthropists who do something like this through their matching-grant challenges: i.e., they donate an amount to one of their preferred causes that's somehow proportional to the amount others donate. (iii) is like a 2-for-1 match (or n-for-1, for some n), though the threat that the money wouldn't otherwise be donated is more credible in the case of (iii) -- usually, I suspect the philanthropists will donate anyway what they don't use for matching.

My main objection to the proposal is that I don't think there's a large number of almost equally valuable charitable causes, even within the fudge factors of our ignorance. I don't claim to know that cause X is very likely better than cause Y, but if you can make an argument that X might be 10,000 times better than Y, while it's somewhat less likely that Y is 10,000 times better than X, then I'll go with X over Y.
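Spelling out that asymmetry (just an illustration of the reasoning, not a model I'm committed to): suppose with probability p cause X turns out 10,000 times as good as Y, with probability q the reverse holds, and otherwise they're roughly comparable at some baseline value v. Then

\mathbb{E}[X] - \mathbb{E}[Y] = p\,(10000v - v) - q\,(10000v - v) = (p - q)\,9999\,v,

which is positive whenever p > q, however small both probabilities are.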

Re: Reasons SIAI (and research generally) is not optimal?

Postby Arepo on 2009-08-10T01:06:00

Alan Dawrst wrote:That brings up point #3, which I agree is a concern. Still, even if the SIAI community isn't more capable than your average group of philosophers, it remains the case that they're doing important philosophical work, and it seems worthwhile to increase the total size of the funding pie devoted to such research.


My problem is that I seriously doubt the competence of most philosophers. There are a few exceptions, whom I won't name for diplomatic reasons, but it's not a subject with a safeguard against letting inaccurate thinkers in, or any obvious reward for good ones. It has a kind of memetic Darwinism in which ideas are obviously selected for something, but that something isn't really (as most philosophers will tell you) correspondence with any standard you can check in the real world.

I have a 5-odd-year-old philosophical logic textbook that claims as 'controversial among logicians' a problem that a statistician friend told me both answers to in seconds, before pointing out that the only reason there are two possible answers is that a key part of the question is ill-defined.

Alan Dawrst wrote:I don't claim to know that cause X is very likely better than cause Y, but if you can make an argument that X might be 10,000 times better than Y, while it's somewhat less likely that Y is 10,000 times better than X, then I'll go with X over Y.


This is hopefully false. I 'can make an argument' for any proposition - I did a philosophy degree. The argument has to be compelling.

Personally, since you get the highest expected payoffs from things with roughly even chances if you increase utility and decrease probability proportionately, I think you should aim for something in that area of probability as a rule of thumb, and only move away from it where it's very clearly right to.

Coming up with these longshots and assigning super-high numbers to them seems like a cop-out that creates an infinite number of resource-sinks. As the magnitude of any event increases, its likelihood falls, and I've never seen any reason to believe it falls slowly enough that backing the crazy event seems like a good idea. If you want to find whether Pascal's button is worth taking seriously, I think you should discuss it with some competent mathematicians, not philosophers, nor your personal estimation, for reasons given above. I've yet to meet any who think it has any value.

Even if I thought it might, I'd much rather fund a new bunch of mathematicians who find the whole thing trivial to address the problem and show why it's trivial, rather than pay an existing group of philosophers for whom it's a gravy train to discuss the ways in which it's completely improbable.

In any case, if you were to set up some kind of matching scheme, there'd be nothing to stop you trying to persuade people of the value of one over the other. And you'd generally expect those capable of giving the most towards it to tend toward greater intelligence, and therefore greater ability to give to the charities you lean towards if your arguments are sound.

Incidentally, do you know of any similar grant-matching schemes up and running?

Alan Dawrst wrote:As you suggested above, it's sort of like Pascal's wager: Yes, there are religions and anti-religions -- scenarios in which doing something will save you from hell, and others in which doing that same thing will send you to hell -- but unless the scenarios seem almost exactly symmetric, I'm going to treat them differently. A 50.0001% chance of avoiding hell is much, much better than a 49.9999% chance.


We've discussed this before, and my opinion hasn't changed. I don't think that a person believing something is an a priori reason to have greater confidence in it (it seems quite plausible to me that no-one in the history of Earth has ever believed anything that was entirely true), and I don't think belief is at all well-defined. You'd be hard pressed to find a Christian who a) claimed believed hell's suffering and heaven's pleasures were infinite, b) could show that they understood the concept of infinity well enough to know what the claim entailed, c) believed the sole criterion for entering heaven/avoiding hell is swearing allegiance to (while not necessarily considering as likely) the Christian god. And even if you do, so what? The reason for his claim won't be based on any evidence for the above except other people having believed it first.

Besides, when you start telling me about probabilities remote enough (and I rate the chance of any hell at all resembling the Christian one much much smaller than 0.0001%, conditional on me being vaguely sane), it eventually seems more probable to me that I've completely misunderstood logic my entire life in any one of an infinite number of ways that would invalidate this as an issue, such as infinity/(a finite number) still being infinity in some or all cases.
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.
User avatar
Arepo
 
Posts: 1065
Joined: Sun Oct 05, 2008 10:49 am

Re: Reasons SIAI (and research generally) is not optimal?

Postby Brian Tomasik on 2009-08-10T03:49:00

Arepo wrote:As the magnitude of any event increases, its likelihood falls, and I've never seen any reason to believe it falls slowly enough that backing the crazy event seems like a good idea.

What about Pascal's mugging?

Arepo wrote:And you'd generally expect those capable of giving the most towards it to tend toward greater intelligence, and therefore greater ability to give to the charities you lean towards if your arguments are sound.


Given the quality of many philanthropic donations, I'm not sure about that. Indeed, many of the academic types who would have most insight into these issues would tend to have the least money. People don't get paid much for researching utilitarian topics.

Arepo wrote:Incidentally, do you know of any similar grant-matching schemes up and running?


Not offhand -- sorry. I think Peter Thiel may have done something like this with SIAI's matching challenge a few years back.

Arepo wrote:I don't think that a person believing something is an a priori reason to have greater confidence in it (it seems quite plausible to me that no-one in the history of Earth has ever believed anything that was entirely true), and I don't think belief is at all well-defined.


My claim is that the existence of people who believe something may be probabilistically entangled with the truth of that belief (by which I mean partial truth about the relevant matters -- not complete truth, which I agree is next to impossible). This breaks the symmetry between hypotheses like "I'll go to hell for rejecting Christianity" or "I'll go to hell for adopting Christianity." For instance, consider the point that Carl Shulman makes in the comments section here: "Why would you think that Christianity and anti-Christianity plausibly balance exactly? Spend some time thinking about the distribution of evolved minds and what they might simulate, and you'll get divergence." If we're being simulated, the fact that our simulators allowed for the spread of beliefs like Christianity gives non-zero information about what kinds of motivations they may have.

Arepo wrote:Besides, when you start telling me about probabilities remote enough (and I rate the chance of any hell at all resembling the Christian one much much smaller than 0.0001%, conditional on me being vaguely sane), it eventually seems more probable to me that I've completely misunderstood logic my entire life in any one of an infinite number of ways that would invalidate this as an issue, such as infinity/(a finite number) still being infinity in some or all cases.


I agree that the probability is rather large that our approaches to decision making, mathematical tools, logical rules, and so on are wrong, and that this is much more likely than the existence of a hell. But why does that matter? It's not as though, if these things are wrong, you definitely should avoid Christianity; if they're wrong, then we don't know either way whether Christianity is a good idea.

I assign higher probability to the incorrectness of my decision procedures than I do to getting in a car accident tomorrow. Does that mean I should avoid wearing a seat belt the next time I drive, because my ordinary decision theory suggests that wearing a seat belt is a good idea?

Re: Reasons SIAI (and research generally) is not optimal?

Postby EmbraceUnity on 2009-08-10T04:22:00

Alan,

About a year ago I made many of the exact same points as Arepo, and if I remember correctly, eventually you admitted that you could not find any reason for rejecting Islam as the proper religion, considering their high fecundity, with the exception of your inability to follow the strict doctrines. Would you still say this is a fair statement?

If Islam became the clear dominant religion in the world, to the point where it was no contest based on numbers, would you seriously consider voluntarily changing? If not, perhaps the issue is more emotional than anything else, despite your persistent attempts at bayesian rationality.

One cannot induct values. There is no way to look at the way things are and figure out the way they should be, or even what potential simulators are looking for. How do you know that atheism isn't indeed what the simulators are selecting for, considering all logic tends to point that way? If supposed simulators were selecting for Christianity, surely they could create a world much more likely to confirm it... and provide actual evidence in favor of it.


Re: Reasons SIAI (and research generally) is not optimal?

Postby Arepo on 2009-08-10T10:51:00

Alan Dawrst wrote:What about Pascal's mugging?

What about it? It's one of the examples I had in mind. As the utility the mugger offers increases, my credence that he'll provide it goes down (generally faster): I think the chance of him providing me with twice the GDP of Earth - say $150 trillion - is less than (the probability of him providing me $10) / 15 trillion. Similarly, I don't see the fact that he claims something as being evidence of its truth, ceteris paribus.
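Put as an inequality, the claim is

P(\text{pays } \$150 \text{ trillion}) < \frac{P(\text{pays } \$10)}{15 \text{ trillion}},

and multiplying both sides by $150 trillion gives $150 trillion x P(pays $150 trillion) < $10 x P(pays $10) -- i.e. the expected payout of the bigger promise is strictly smaller, so scaling up the offer doesn't help the mugger.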

Alan Dawrst wrote:Given the quality of many philanthropic donations, I'm not sure about that. Indeed, many of the academic types who would have most insight into these issues would tend to have the least money. People don't get paid much for researching utilitarian topics.


Again, I don't have great confidence that academics necessarily have great insight into this. That appears to be an unfounded assumption by them/you. Yes it's something they've spilled more ink over, but that doesn't mean an intelligent person with a background in maths can't review their arguments competently.

Alan Dawrst wrote:Not offhand -- sorry. I think Peter Thiel may have done something like this with SIAI's matching challenge a few years back.


Hm, looking it up on Wikipedia, it sounds as though it's long since over, and only applied to donations to SIAI. I think you could set up something much more motivationally powerful, and much longer-term, which would evidence powerful real-world benefits. Instead you're pouring money into one of an infinite number of Pascal's Buttons, based on no greater evidence than a hunch, when you freely admit that destroying the world in its current state might actually increase net utility.

Another frustrating thing about this is the immediacy of certain social and political issues that have 'real' scientists - ie people who make risky claims that could be falsified - teaming up to tell governments and philanthropists that they're a huge and immediate problem. If our society survives the knock-on effects of peak oil and climate change and, say, establishes a self-sufficient extra-terrestrial colony, you might want to look at more obscure existential risks. But right now, I place more faith (though perhaps not much) in James Lovelock's warnings of impending civilisation collapse and possible extinction than I do the arguments of non-scientists.

You can also bet that if civilisation just takes heavy damage and continues roughly on its present course, that heavy damage will reflect in the SIAI's resources. So there's an additional argument that efforts to prevent/ mitigate short-term catastrophe will help them in the long run.

Alan Dawrst wrote:My claim is that the existence of people who believe something may be probabilistically entangled with the truth of that belief (by which I mean partial truth about the relevant matters -- not complete truth, which I agree is next to impossible).


It may be, but I see no reason to believe it's so closely entangled that you can't extricate them.

"Why would you think that Christianity and anti-Christianity plausibly balance exactly? Spend some time thinking about the distribution of evolved minds and what they might simulate, and you'll get divergence." If we're being simulated, the fact that our simulators allowed for the spread of beliefs like Christianity gives non-zero information about what kinds of motivations they may have.


Notwithstanding the fact that I'm far from convinced by the simulation argument, if we're being simulated, we're still most likely in a deterministic simulation that follows local rules of physics. Presumably our simulators weren't aware at the time of programming the simulation exactly how it would unfold, or they'd have no reason to run it.

If you think our simulators might have believed in Christianity then, as EU says, it's a very odd flavour of it that doesn't mind creating billions of people when the majority will go to hell. Moreover, I don't have any more reason to put any trust in their beliefs than I do the local street preachers'. In fact I have good reason to reject their values - they've performed an act which is completely monstrous by my standards, and obviously irrational (as in non-optimally self-serving) by theirs.

Alan Dawrst wrote:I agree that the probability is rather large that our approaches to decision making, mathematical tools, logical rules, and so on are wrong, and that this is much more likely than the existence of a hell. But why does that matter? It's not as though, if these things are wrong, you definitely should avoid Christianity; if they're wrong, then we don't know either way whether Christianity is a good idea.


This argument is conditional on our logic not having broken down. If it has, who knows what conclusions we should/could draw or what weighting they might have? And let me rephrase: conditional on my logic not having broken down, I rate the probabilities of infinite hells/heavens as infinitesimal.

I'd also add that many Christians don't subscribe to the all-or-anti-all view of theology that you have (generally the more intelligent ones reject it in my experience), and that I might rate the probability of non-infinite hells/heavens as measurable (but sufficiently low that my expected utility for spending any time trying to believe in them is much lower than of trying to enjoy my life as it appears to be).

Alan Dawrst wrote:I assign higher probability to the incorrectness of my decision procedures than I do to getting in a car accident tomorrow. Does that mean I should avoid wearing a seat belt the next time I drive, because my ordinary decision theory suggests that wearing a seat belt is a good idea?


Conditional on my wearing a seat belt being roughly the action I think it is (with all that that implies about the world), I don't agree with your assessment. (although I'm not sure what 'correctness of decision' procedure actually means) I think I'm more likely to get into a car accident than that I'm wrong enough about wearing a seat belt that I increase my expected welfare by not wearing one.
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.
User avatar
Arepo
 
Posts: 1065
Joined: Sun Oct 05, 2008 10:49 am

Re: Reasons SIAI (and research generally) is not optimal?

Postby Brian Tomasik on 2009-08-10T11:47:00

Thanks for continuing to put up with my responses, Arepo. I'm enjoying our discussion.

Arepo wrote:You can also bet that if civilisation just takes heavy damage and continues roughly on its present course, that heavy damage will reflect in the SIAI's resources. So there's an additional argument that efforts to prevent/ mitigate short-term catastrophe will help them in the long run.


That may be. But I think they would argue that we may not have time to wait, because AGI could be developed in the next few decades. It comes down to an assessment of the differential risk of different scenarios, which I agree is tough. On the other hand, if AGI is never developed, then it's a lot less important whether humans survive, since their ability to impact the universe would be many orders of magnitude smaller than if they do develop AGI.

Arepo wrote:Notwithstanding the fact that I'm far from convinced by the simulation argument, if we're being simulated, we're still most likely in a deterministic simulation that follows local rules of physics. Presumably our simulators weren't aware at the time of programming the simulation exactly how it would unfold, or they'd have no reason to run it.


I agree that our simulators may not have known how our deterministic universe would unfold, but why does that have bearing on whether they punish certain actions? If we're just a science experiment, then I guess it does seem unlikely our simulators would send us to hell either way. But I can imagine other simulators with different motivations who would punish some but not others.

Arepo wrote:If you think our simulators might have believed in Christianity then, as EU says, it's a very odd flavour of it that doesn't mind creating billions of people when the majority will go to hell.


I'm not sure that's an odd flavor -- isn't that precisely what the Christian God is believed by all fundamentalists to have done?

Arepo wrote:In fact I have good reason to reject their values - they've performed an act which is completely monstrous by my standards,


Of course. That's what hell always is, whether real or simulated.

Arepo wrote:I'd also add that many Christians don't subscribe to the all-or-anti-all view of theology that you have (generally the more intelligent ones reject it in my experience),


No disagreement there. But if hell doesn't exist, that scenario becomes irrelevant in the Pascal's-wager equation for those who want to avoid hell -- just as does ordinary atheism.

Arepo wrote:(although I'm not sure what 'correctness of decision' procedure actually means)


I was thinking of things like the notion that I should maximize the expected value of my actions, calculated using some Bayesian probability distribution, using the particular sort of math that humans happen to have developed, assuming the usual sorts of logical truths we take for granted (e.g., law of the excluded middle), assuming I've done the math correctly, assuming I'm not insane, etc.

Arepo wrote:I think I'm more likely to get into a car accident than that I'm wrong enough about wearing a seat belt that I increase my expected welfare by not wearing one.


Reduce the time period from "an accident the next time I drive" to "an accident within the next 5 seconds," or a sufficiently small interval of time from now that your probability of a wrong decision theory is higher than your probability of an accident during that interval. Should you then take off your seat belt, because your decision theory is more likely wrong? No, of course not -- but why not? I'm claiming the same reason applies for Pascal's wager. In particular, if your decision theory is wrong, that fact neither recommends nor discourages unbuckling your seat belt.

Re: Reasons SIAI (and research generally) is not optimal?

Postby Arepo on 2009-08-10T13:05:00

I'm going to be away for a few days Alan, but in case you missed it, EU posted a short response above mine.
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.
User avatar
Arepo
 
Posts: 1065
Joined: Sun Oct 05, 2008 10:49 am

Re: Reasons SIAI (and research generally) is not optimal?

Postby Brian Tomasik on 2009-08-11T03:15:00

Arepo, thanks for pointing out EU's comment.

EmbraceUnity wrote:if I remember correctly, eventually you admitted that you could not find any reason for rejecting Islam as the proper religion, considering their high fecundity, with the exception of your inability to follow the strict doctrines. Would you still say this is a fair statement?

For the most part, yes. In practice, the reason I don't pray five times a day or fast during Ramadan is that I'm too selfish to force myself to do so. But another reason is related to what you suggest: Our simulators may very well be atheists and so might punish support for religious-type values; this is especially true considering that nearly all transhumanists are atheists. On the other hand, (thankfully) few transhumanists support the idea of punishing people eternally in hell, while many religious fundamentalists do. Imagine, then, a culturally fundamentalist Christian who retains the values of his former faith through the Singularity and then, upon acquiring large computational resources, decides to put into practice the religious fantasies of his childhood. Think of how many people play Left Behind video games. Imagine those games enhanced such that the players were actually conscious. It's a frightening thought!

EmbraceUnity wrote:If Islam became the clear dominant religion in the world, to the point where it was no contest based on numbers, would you seriously consider voluntarily changing?

Yes. More strongly: I actually think Islam is not a bad choice today.

EmbraceUnity wrote:If supposed simulators were selecting for Christianity, surely they could create a world much more likely to confirm it... and provide actual evidence in favor of it.

Maybe. But lots of fundamentalists subscribe to the notion that God "will destroy the wisdom of the wise" (1 Corinthians 1:19) and save the foolish children instead. Calvin seemed to have no objection to God's saving only the few to whom he gives knowledge. The same goes for Islam: "And if We [Allah] had pleased We would certainly have given to every soul its guidance, but the word (which had gone forth) from Me was just: I will certainly fill hell with the jinn and men together." (32:13)

What's to keep cultural fundamentalists with access to computational resources from acting upon those values in their simulations? And can you actually imagine atheist transhumanists who would torture people for obeying religious commandments?

Postby EmbraceUnity on 2009-08-11T03:52:00

This seems very clearly a status-quo bias philosophy, and furthermore it seems quite presumptuous that we would be able to accurately speculate upon the motives and thought processes of potential simulators.

By this logic, almost anything could be the way it is because that is the way it was meant to be(tm)

Perhaps we should give up the idea of eliminating wild animal suffering because our simulators would prefer a vibrant and lively ecosystem to have fun in. People read Shakespeare, not stories about perfect utopias... Perhaps we should give up on all utilitarian aims.

This argument is patently absurd. Furthermore, why would you speculate that any civilization would spend infinite resources on torture? What utility could that possibly have? There are huge diminishing marginal returns as far as deterrence goes.... if deterrence is the motive. Though since nobody can see the afterlife while they are alive, deterrence is clearly not a motive. What would be the point for some logical deity to undertake this as a goal?

Perhaps the only way is if enough people adopt this outlook, at which point there would be memetic incentives for various ideologies to make infinite torture their goal.

I was tempted to say "I now dedicate my life to making simulations of people who I will torture eternally for having differing views than mine" .... there now you have to agree with me on everything... neeneeneeneebooboo


Postby Brian Tomasik on 2009-08-12T03:33:00

EmbraceUnity wrote:This seems very clearly a status-quo bias philosophy

I agree, actually. The reason I originally thought about hell was the fiery imagery of Christianity and Islam, but it's not necessarily the case that, upon reflection, adopting one of those religions is the best option. Maybe there are paperclipping civilizations that will punish me for not building lots of paperclips, for instance. In fact, the purpose of this post was to elicit feedback on what sorts of scenarios for punishment should concern me most -- whether they have anything to do with "religion" or not. Any thoughts?

EmbraceUnity wrote:it seems quite presumptuous that we would be able to accurately speculate upon the motives and thought processes of potential simulators.

Perhaps, but for someone like me who greatly fears hell, what alternative do I have? Plus, it's not the case that we have no insight into our potential simulators: We know a lot about the world they created. And the fact that we have human-type minds may suggest that they themselves have similar minds. (Minds are more likely to simulate minds similar to their own than to simulate random points in mind-space, I would guess.)

EmbraceUnity wrote:By this logic, almost anything could be the way it is because that is the way it was meant to be(tm)

Sure, but some scenarios are more likely than others. That's what Bayesian inference is all about: Starting with a reasonable Occam-abiding prior and updating based on evidence. Why is this different from any other area of research?

EmbraceUnity wrote:why would you speculate that any civilization would spend infinite resources on torture?

That is a good point. That's why I think the scenarios of most concern are those with not only strange simulators but strange physics that allows for some sort of low-cost hypercomputation in finite time or else finite computation for an infinite time. As far as I can tell, the latter scenarios needn't contradict a Solomonoff-type prior, as Eliezer Yudkowsky has noted:
many computationally simple laws of physics, like the laws of Conway's Life, permit indefinitely running Turing machines to be encoded. So we can't say that it requires a complex miracle for us to confront the prospect of unboundedly long-lived, unboundedly large civilizations. Just there being a lot more to discover about physics - say, one more discovery of the size of quantum mechanics or Special Relativity - might be enough to knock (our model of) physics out of the region that corresponds to "You can only run boundedly large Turing machines".

So while we have no particular reason to expect physics to allow unbounded computation, it's not a small, special, unjustifiably singled-out possibility like the Christian God; it's a large region of what various possible physical laws will allow.


EmbraceUnity wrote:Though since nobody can see the afterlife while they are alive, deterrence is clearly not a motive. What would be the point for some logical deity to undertake this as a goal?

Well, some folks at SIAI are actually working on a paper possibly suggesting that deterrence or achieving other goals could be motives. The fact that no one can see the afterlife needn't prevent someone who subscribes to a timeless decision theory from committing to such an action and then following through. And of course, there's also the possibility that people could simulate hell worlds for fun. Just search "torture your sims" and you'll find lots of extremely disturbing expressions of human boredom....

EmbraceUnity wrote:I was tempted to say "I now dedicate my life to making simulations of people who I will torture eternally for having differing views than mine" .... there now you have to agree with me on everything... neeneeneeneebooboo

Well, that's actually not a bad strategy on your part. The more credible your threat, the more seriously I'll take it. Perhaps I should avoid getting into conversations that lead people to have incentives for committing to torture me, though.... :(

Postby EmbraceUnity on 2009-08-12T17:26:00

Alan Dawrst wrote:Perhaps, but for someone like me who greatly fears hell, what alternative do I have?


The alternative is not fearing hell. Not all fears are rational, and in this case the only thing you have to fear is fear itself. Why? Because you are asking questions that are metaphysical, and yet your senses are limited to the physical world. None of your speculations meet the test of falsifiability. The number of possible scenarios which you are overlooking is infinite, and thus there are an infinite number of "risks" from the possibility of the Flying Spaghetti Monster to Quetzalcoatl.

No matter how many people you gather up to think of new metaphysical risks, there is always an infinite amount left. Stop trying... and no, not just because openly speculating about it gives incentives for people (and memes) to torture you.


Postby Brian Tomasik on 2009-08-13T01:52:00

EmbraceUnity wrote:The alternative is not fearing hell. Not all fears are rational,

Well, by "fears" I meant "places extreme negative value on." Rationality is relative to a given objective function, and I'm saying my (selfish) objective function views hell as sufficiently bad that it may indeed be worth trying to take steps to avoid it. (I say my "selfish objective function" because, in utilitarian terms, my individual suffering in hell would pale by comparison with the suffering of other organisms, in hell or otherwise. There are far easier ways to prevent organisms from going to hell than to focus on saving my own skin from eternal torture.)

EmbraceUnity wrote:Because you are asking questions that are metaphysical, and yet your senses are limited to the physical world. None of your speculations meet the test of falsifiability.

I understand a hypothesis to be falsifiable relative to another if the two can give different likelihoods for observed evidence. In that case, the hypotheses we're talking about are surely falsifiable. One simple example: Given the observed fact that most people never come to believe in Jesus, the hypothesis of a hell-punishing god who wants everyone to be saved but requires belief in Jesus has much lower likelihood than, say, a hell-punishing god who doesn't want everyone to be saved and requires belief in Jesus.

If you meant "falsifiable" in a non-Bayesian sense, then I guess we just have different views on epistemological methodology.

EmbraceUnity wrote:The number of possible scenarios which you are overlooking is infinite, and thus there are an infinite number of "risks" from the possibility of the Flying Spaghetti Monster to Quetzalcoatl.

Every observed phenomenon has an infinite number of hypotheses that could explain it. But we don't treat them equally: That's what Occam's razor is for. And Occam's razor applies just as well to theistic hypotheses as to anything else. The Flying Spaghetti Monster and Quetzalcoatl almost certainly have different Kolmogorov complexities.

Postby DanielLC on 2009-08-21T04:43:00

Regarding the Pascal's Mugging:

I can't really say I have a background in math. I've taken statistics (and calculus, but statistics is what applies here) and I'm really good at it.

Math has the annoying habit of requiring some a priori probability. The whole Occam's razor / Kolmogorov complexity business is essentially just one way of defining a set of a priori probabilities -- any set that adds up to 100% is mathematically permissible. You could still say that there's a 99.999% a priori probability that there really is a Flying Spaghetti Monster. As such, math can't really say whether you should weight hypotheses only by the difficulty of defining them or also by how long they would take to calculate.
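To make that concrete, here's a toy sketch of a length-based prior of the sort Occam's razor / Kolmogorov complexity suggests. The hypothesis names and "description lengths" below are made-up placeholders; the only point is that the math itself would accept any other non-negative weights that normalize to 100% just as happily:

# Toy length-based prior: weight each hypothesis by 2^(-description length),
# then normalize so the probabilities sum to 1 (i.e., to 100%).
hypotheses = {  # description lengths in bits -- invented placeholders
    "no_monster": 10,
    "flying_spaghetti_monster": 50,
    "monster_that_tortures_vast_numbers": 80,
}
weights = {h: 2.0 ** -length for h, length in hypotheses.items()}
total = sum(weights.values())
prior = {h: w / total for h, w in weights.items()}

# Nothing in probability theory forbids replacing this with, say, a 99.999%
# prior on the monster -- any assignment summing to 1 is allowed.
for h, p in prior.items():
    print(h, p)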

If you do just use the difficulty of defining it, what Less Wrong said is quite correct. If you don't want to have to live with the cognitive dissonance of not listening to Pascal's muggers, try this: I have the power to create and torture limitless people at will. If you ever do what a Pascal's mugger other than me is trying to get you to do because they used that method, I'll cause the square of the amount of disutility they claim (in QALYs). Problem solved.

Regarding SIAI:

I don't trust people like that to do it well. I think research and investment generally will hasten the singularity more (assuming it happens). I think the best way to encourage that is to invest money. I started a thread on that idea.

Postby Brian Tomasik on 2009-08-22T06:38:00

DanielLC wrote:As such, math can't really say whether you should weight hypotheses only by the difficulty of defining them or also by how long they would take to calculate.

Yup. The speed prior is an example of an alternative that does also penalize computation time.
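For anyone curious about the difference, here's a small sketch with invented program lengths and runtimes. It only illustrates the general shape of the two priors -- a Solomonoff-style prior penalizes description length alone, while one common rendering of the speed prior also discounts by the logarithm of the runtime; the numbers come from nowhere in particular:

import math

# hypothetical (description_length_bits, runtime_steps) for two competing hypotheses
hypotheses = {
    "short_but_slow": (20, 10 ** 12),   # simple program, astronomical runtime
    "longer_but_fast": (40, 10 ** 3),   # more complex program, quick to run
}

def solomonoff_weight(length, _runtime):
    return 2.0 ** -length  # penalizes description length only

def speed_prior_weight(length, runtime):
    return 2.0 ** -(length + math.log2(runtime))  # also penalizes computation time

for name, (length, runtime) in hypotheses.items():
    print(name, solomonoff_weight(length, runtime), speed_prior_weight(length, runtime))

Under the length-only weighting the short-but-slow program dominates; once runtime is penalized as well, the ranking flips.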

DanielLC wrote:If you do just use the difficulty of defining it, what Less Wrong said is quite correct. If you don't want to have to live with the cognitive dissonance of not listening to Pascal's muggers, try this: I have the power to create and torture limitless people at will. If you ever do what a Pascal's mugger other than me is trying to get you to do because they used that method, I'll cause the square of the amount of disutility they claim (in QALYs). Problem solved.

Well, not necessarily. You openly admitted that you were inventing your mugging statement specifically in reaction to the original and in order to defuse it, which gives little confidence in its veracity. Of course, the original mugger clearly has an ulterior motive for lying as well, but it seems plausible that an honest Solomonoff-inducting AI would conclude that the original mugger is slightly more trustworthy. At the very least, the posterior probabilities aren't exactly equal.

DanielLC wrote:I think research and investment generally will hasten the singularity more (assuming it happens).

I don't necessarily want a Singularity unless it's done right -- that's sort of SIAI's point, I think. It's especially true for me, as someone who cares a lot about wild animals and other helpless sentients. If post-humans don't give sufficient consideration to ethical issues, I fear the potential consequences of post-human technological advancements: spreading life into space, creating lab universes, running painful sentient simulations (e.g., reinforcement-learning algorithms?), and so on.

Postby DanielLC on 2009-09-12T05:54:00

Well, not necessarily. You openly admitted that you were inventing your mugging statement specifically in reaction to the original and in order to defuse it, which gives little confidence in its veracity.


That's why I squared it. This is about mugging them into ignoring the mugging, not countering it out.

It occurred to me that there's another problem with this. Even if someone doesn't tell you that they'll torture 3^^^^3 people, it doesn't mean they won't. They also might do it with 3^^^^^3 people, etc. Each of these increases the expected amount of pain immensely. They might also create that much happiness. It doesn't cancel out. At least, it only does if you add it together in the right order. In short, there is no expected utility. I don't mean it's zero, I mean that there is no expected value. It's sort of like how if you take the integral of x from negative infinity to infinity, it's undefined, not zero.

Postby Brian Tomasik on 2009-09-12T06:36:00

DanielLC wrote:That's why I squared it. This is about mugging them into ignoring the mugging, not countering it out.

Hmm, good point. You're right that the probability of your claim doesn't obviously decrease as fast as the magnitude of consequence increases -- that's the whole point of the original question, after all.

DanielLC wrote:Even if someone doesn't tell you that they'll torture 3^^^^3 people, it doesn't mean they won't.

The point of the original mugging was that the fact that you encounter the claim is non-zero evidence which breaks an otherwise symmetrical situation. Your suggestion is that the original situation may not have been symmetrical -- maybe there's a priori a higher probability that failing to, say, throw away $5 will cause torture of 3^^^^3 people than that throwing away $5 will. I doubt the difference would be big enough to overcome the expected harm caused by getting rid of money that could be donated to, say, SIAI, but it's certainly possible. If you find reason to think so, let me know. :)

DanielLC wrote:I don't mean it's zero, I mean that there is no expected value.

Yeah, there are definitely lots of thorny problems with consequentialist decision theory that need to be worked out, like infinities.

Postby DanielLC on 2009-09-12T15:21:00

The problem with these infinite sums is that addition isn't commutative once there are infinitely many terms. What order do we add them in? Sure, it's symmetric if we do (3^^^3 - 3^^^3) + (3^^^^3 - 3^^^^3) + (3^^^^^3 - 3^^^^^3) + ..., but what if we do 3^^^3 + 3^^^^3 - 3^^^3 + 3^^^^^3 - 3^^^^3 + 3^^^^^^3 - 3^^^^^3 + ...? For a conditionally convergent series it's even possible to make the sum come out to any specific number just by changing the order. In this case, though, the partial sums grow without bound, so with the usual limit definition of an infinite sum the series just doesn't add up to anything. If we used hyperreals instead, that might be possible.
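The reordering point is easy to see with an ordinary conditionally convergent series instead of 3^^^3-sized terms. Here's a quick illustration using the alternating harmonic series, whose sum genuinely depends on the order in which the terms are added:

import math

# Alternating harmonic series: 1 - 1/2 + 1/3 - 1/4 + ... converges to ln(2).
natural_order = sum((-1) ** (n + 1) / n for n in range(1, 100001))

# Rearrangement: two positive terms for every negative one. Same terms,
# different order -- and the partial sums converge to a different value.
def rearranged(num_blocks):
    total = 0.0
    pos = 1  # next odd denominator (positive terms)
    neg = 2  # next even denominator (negative terms)
    for _ in range(num_blocks):
        total += 1 / pos + 1 / (pos + 2)  # two positive terms
        pos += 4
        total -= 1 / neg                  # one negative term
        neg += 2
    return total

print(natural_order)       # ~0.6931, i.e. ln(2)
print(rearranged(100000))  # ~1.0397, i.e. 1.5 * ln(2)
print(1.5 * math.log(2))   # the known limit of this particular rearrangement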

Postby Brian Tomasik on 2009-12-26T06:07:00

For those who do like the idea of funding research to make a difference, I'll just mention my new blog post on SIAI's current matching-grants challenge (through 28 Feb. 2010) in which donors can choose particular research projects to support or can even propose their own. As I noted in my blog entry:
the page explains that donors contributing at least $1K can contact Anna Salamon to discuss the possibility of a new research topic. So, utilitarians: If you're interested in donating and have a project in mind, do contact Anna and see what can be done. SIAI might, for instance, fund an exploration of the types of suffering computations we decide to care about. Or perhaps a paper assimilating research on some aspect of mathematics, physics, computer science, economics, psychology, or cognitive science that is crucially important to know about when trying to reduce large-scale suffering.

Postby RyanCarey on 2009-12-26T07:54:00

Well, not necessarily. You openly admitted that you were inventing your mugging statement specifically in reaction to the original and in order to defuse it, which gives little confidence in its veracity. Of course, the original mugger clearly has an ulterior motive for lying as well, but it seems plausible that an honest Solomonoff-inducting AI would conclude that the original mugger is slightly more trustworthy. At the very least, the posterior probabilities aren't exactly equal.

An alternative proposition: god has told me to pass on to you that all previous religious teachings are false.

My proposition would appear unlikely to be true because you know I have presented it with the ulterior motive of doing philosophy. If you are to accept it, you will have to overturn some psychological facts that you thought you knew.

On the other hand, if you regard my proposition as false, you will maintain your belief in Christianity. However, to read the Bible as the inerrant word of God will require you to overturn what you thought you knew about evolutionary biology, geology, history, anthropology, and so on.

The question, then, becomes whether you trust one psychological fact above all the evidence against young earth creationism.

Suppose, then, that you set aside the Pascal's-wager argument that the Bible contains the literal word of God. Then you're faced with deciding just what in the Bible to take literally. Passages that conflict with modern-day science, philosophy, or ethics should be regarded as more likely figurative than literal. Those that do not are benign and will not interfere with what modern society knows or does.

Postby Recumbent on 2009-12-26T20:51:00

So we generally agree that the development of artificial intelligence and self-replicating nanotechnology has a probability on the order of at least 0.1. To see how important this scenario is, we need an estimate of the number of entities there may be. While of course this number would be huge if we extend our light cone out for centuries, it turns out it is still huge even in this century. With self-replicating nanotechnology, we could, within a few years, convert mass from the asteroid belt into enough one-micron-thick solar cells to build the original Dyson sphere concept (a swarm of independently orbiting satellites, not a solid shell).

Kurzweil says that in 2030 a personal computer (order 100 W) could be functionally equivalent to a human brain. Since the capability of computers has been doubling every 1.5 years while their energy use has not increased very much, their energy efficiency has also been doubling roughly every 1.5 years. Kurzweil talks about reversible computing in his book The Singularity Is Near, in which essentially no energy would be required for computing. I am skeptical of this, especially because I learned that the theoretical limit for flipping a bit is on the order of the Boltzmann constant times the temperature, or ~5E-21 joules. If we take this limit, computers can become approximately 11 orders of magnitude more efficient than they are now. That means we could support approximately 1E35 consciousnesses with the Dyson sphere -- about 1E25 times as many as current humans.

If it turns out that insects can suffer, then, since they have vastly smaller brains, we could induce far more suffering by simulating insects. But if we tried to simulate an entire world, we might have to go down to the atomic level, which would require far more computational power, so we would not be able to simulate nearly as many organisms. Also, if we are limited to just the energy falling on the Earth's deserts, we would lose quite a few orders of magnitude. But the point is that within this century it is quite feasible that we could create far more consciousnesses than currently exist. Even if you think this only has a 1% chance of happening, the suffering of these entities is far more important than any "earthly" concern.
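Spelling out the back-of-the-envelope arithmetic behind those numbers (using the rough figures above -- total solar output, Kurzweil's ~100 W brain-equivalent computer, and the claimed ~11 orders of magnitude of efficiency headroom -- all of which are loose assumptions rather than established values):

solar_luminosity_w = 3.8e26       # roughly the Sun's total output a full Dyson swarm could capture (W)
brain_equiv_power_2030_w = 100.0  # Kurzweil's ~100 W personal computer ~ one human brain
efficiency_headroom = 1e11        # the claimed ~11 orders of magnitude down to the ~kT-per-bit limit

power_per_mind_w = brain_equiv_power_2030_w / efficiency_headroom  # ~1e-9 W per mind
num_minds = solar_luminosity_w / power_per_mind_w                  # ~4e35 minds
ratio_to_current_humans = num_minds / 1e10                         # ~4e25 times ~10^10 humans

print(num_minds, ratio_to_current_humans)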

But the next question is how much our actions now can actually affect the outcome. This depends on how rapidly the technologies come about. From my study of self-replicating nanotechnology, it is clear that we will first develop technology that can replicate itself only under controlled circumstances. This would attract widespread media attention, and then we could debate whether we want to develop technology that could replicate in ambient conditions. As for artificial intelligence, we have been making steady progress. But it is possible that we could come up with the right algorithm for increasing its intelligence (in a way similar to how unintelligent babies do it -- not just changing the strength of connections between neurons, but actually growing new neurons), and if set free on the Internet, it could become superintelligent in a matter of weeks. It could write a virus to take over the spare computing power of computers on the Internet. If it is hostile, we would be in trouble...

Postby Brian Tomasik on 2009-12-27T01:59:00

RyanCarey, I largely agree with your criticisms of fundamentalist Christianity on factual grounds. The types of punishment I fear most tend to be more unusual. Still, I do think "Left Behind"-video-game-style simulations by religious fundamentalists aren't inconceivable either....

Recumbent, I largely agree with your claim that "the suffering of these entities is far more important than any 'earthly' concern," although I do think 1% may be too high a probability to assign to your scenario. But yes, that basic "Astronomical Waste"-type point is a convincing one. A caveat would apply if, as is possible though I think currently unlikely, I decide that I actually don't care about computer simulations of insects but only about "real, physical" insects. This would seem rather arbitrary, but so is the entire question of what computations we care about.

Postby Jesper Östman on 2009-12-28T00:07:00

If you see some significant probability of your ceasing to care about simulated insects, would you also see a significant possibility of your ceasing to care about any simulations?

Postby Brian Tomasik on 2009-12-28T05:41:00

Jesper Östman wrote:If you see some significant probability of your ceasing to care about simulated insects, would you also see a significant possibility of your ceasing to care about any simulations?

Yes, I'd assign maybe 25% probability to that. It's all a matter of how far I want to use abstraction to stretch my evolved impulses.

Postby LadyMorgana on 2011-07-12T13:14:00

Forgive my perhaps annoying habit of resurrecting old threads, but I've just read through this one and made a note of some points that I wanted to make:

However, I should point out that while SIAI has no explicit ideology, several of its members do lean strongly utilitarian, and many more lean strongly toward some sort of rationalist consequentialism.

I heard Yudkowsky give a talk in which he mocked the idea of SIAI working towards happiness-maximising AIs. The only person I know well who's involved with SIAI is Ben Hoskin and he's not too keen on utilitarianism. Carl Shulman, you're on this forum - what's your credence in utilitarianism? What would you guess the credence in utilitarianism is generally amongst the SIAI crew?

R.e. the choice between supporting x-risk research vs. meme propagation of concern for wild animal suffering, it's worth noting that the former promotes the latter to some extent (since the longer humans survive, the more morally intelligent they seem to become), but not vice versa.

What about Pascal's mugging?

What about it? It's one of the examples I had in mind. As the utility the mugger offers increases, my credence that he'll provide it goes down (generally faster). I think the chance of him providing me with twice the GDP of earth - say $150 trillion - is less than (the probability of him providing me $10) / 15 trillion. Similarly, I don't see the fact that he claims something as being evidence of its truth, ceteris paribus.

I don't think that your credence should go down proportionally or faster as the utility offered increases. The unlikeliness stems from the mugger giving Pascal any utility at all, not how much he is going to give him. Thus, it is extremely unlikely that the mugger will come back and magically give Pascal one utilon the following day. But surely it's not 10 times as unlikely that he'll come back and give Pascal ten utilons the following day?
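To see numerically why this matters (with purely made-up numbers): if credence falls more slowly than the promised utility grows, the expected value of complying with the mugger grows without bound; if it falls proportionally or faster, it stays bounded or shrinks:

p0 = 1e-6                                  # made-up base credence of receiving 1 utilon
payoffs = [10 ** k for k in range(1, 13)]  # 10, 100, ..., 10^12 utilons

for u in payoffs:
    ev_slower = (p0 / u ** 0.5) * u  # credence ~ 1/sqrt(u): EV grows like sqrt(u)
    ev_propor = (p0 / u) * u         # credence ~ 1/u: EV stays constant
    ev_faster = (p0 / u ** 2) * u    # credence ~ 1/u^2: EV shrinks toward zero
    print(u, ev_slower, ev_propor, ev_faster)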

You might get out of Pascal's Mugging by arguing that the mugger's claiming something provides absolutely no evidence of the claim's truth, because it's 100% obvious that the reason the mugger is making this claim has nothing to do with truth and is just to convince Pascal to give him his money. But bear in mind that you can't apply this to x-risk scenarios. The universe doesn't know you're Pascal; the evidence for these futuristic scenarios is actual evidence, not a claim borne out of someone trying to trick you.
"Three passions, simple but overwhelmingly strong, have governed my life: the longing for love, the search for knowledge, and unbearable pity for the suffering of mankind" -- Bertrand Russell, Autobiography
User avatar
LadyMorgana
 
Posts: 141
Joined: Wed Mar 03, 2010 12:38 pm
Location: Brighton & Oxford, UK

Re: Reasons SIAI (and research generally) is not optimal?

Postby Arepo on 2011-07-12T17:00:00

LadyMorgana wrote:Forgive my perhaps annoying habit of resurrecting old threads,


No crime here. By the way, I forgot to answer your question about attributed quoting in the other thread: in the opening quote tag (the word 'quote' in square brackets), insert ="personname" between the e and the closing bracket, so the tag reads [quote="personname"] - or, if you only want to do it once, just click the 'quote' link on the post and you'll see the code at the top anyway.

I don't think that your credence should go down proportionally or faster as the utility offered increases. The unlikeliness stems from the mugger giving Pascal any utility at all, not how much he is going to give him.


I disagree with a couple of things here. One is that what you describe is either exclusively or overwhelmingly a source of unlikeliness. The other is that you believe what you say you do :P

On the first, forget utilons, let's assume he's just offering money. And ditch the word 'mugger', since it begs the question, suggesting someone with nefarious intentions. It's not hard to imagine that someone could sincerely say to you 'I really need £10 today. If you can provide it, I'll give you £20 tomorrow' - banks operate on the principle that people do just this all the time, as do we when we lend money to friends.

So if someone with the same sort of mannerisms as a person you'd trust in that situation raised the stakes, either you would believe them to the same degree as you would the more modest person, or there's some other factor affecting your credence.

On the second, we've got quite a lot of evidence that neither you nor anyone else who claims to accept the reasoning of Pascal's Mugging actually does so. To wit, I've made the offer to a few such people (and in fact I'm prepared to extend it to almost anyone who claims to) that in exchange for N money today, where N is all the money they own, I'll give them say 10N tomorrow, honest to goodness. On a case by case basis, if they think that doesn't give them favourable enough odds, I might raise the amount I return by several orders of magnitude.

This argument doesn't really help you:

You might get out of Pascal's Mugging by arguing that the mugger's claiming something provides absolutely no evidence of the claim's truth, because it's 100% obvious that the reason the mugger is making this claim has nothing to do with truth and is just to convince Pascal to give him his money.


- partly because my word surely has some evidential weight (you can imagine some claims which, if I made them, you'd consequently think were more likely to be true), and partly because even if you think it doesn't, a priori, we can test it with a lesser amount - say you give me a penny today and I'll give you 10 tomorrow. Then you have direct evidence that my word correlates with my actions, and that I'm likely to see a deal of this form through. It doesn't matter if you think it's weak evidence - so long as you don't believe in the equally diminishing probability argument, there must be some amount of money I could offer you tomorrow that would make it worth your while.

I don't think anyone who fails to take me up on this offer can consistently claim to believe Pascal's Mugging holds on the arguments I've seen them advance for it so far.
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.
User avatar
Arepo
 
Posts: 1065
Joined: Sun Oct 05, 2008 10:49 am

Re: Reasons SIAI (and research generally) is not optimal?

Postby Hedonic Treader on 2011-07-12T22:28:00

Arepo wrote:To wit, I've made the offer to a few such people (and in fact I'm prepared to extend it to almost anyone who claims to) that in exchange for N money today, where N is all the money they own, I'll give them say 10N tomorrow, honest to goodness. On a case by case basis, if they think that doesn't give them favourable enough odds, I might raise the amount I return by several orders of magnitude.

The utility of money doesn't scale linearly. Losing all my possessions with 99% probability would destroy much more utility for me personally than gaining 100x or even 10000x the money value of my possessions with 1% probability.
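With a standard log-utility toy model (just an illustration - the utility function and figures are assumptions, not anyone's actual finances), the asymmetry is easy to check:

import math

wealth = 10_000  # current possessions, in arbitrary money units
floor = 100      # assume a small safety-net floor rather than literal zero

u_now = math.log(wealth)
# Gamble: 99% chance of losing (almost) everything, 1% chance of 10,000x the wealth
expected_u = 0.99 * math.log(floor) + 0.01 * math.log(wealth * 10_000)
print(expected_u - u_now)  # negative: the gamble destroys expected log-utility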

Furthermore, the higher your N, the more unlikely that you can pay up and intend to pay up, considering the money has to come from somewhere in the real world, and if you had it you wouldn't need my 1/N. This is different from the metaphysical aspect of Pascal's "mugger", who might just be able to provide any amount of utility out of thin air, because he's from the 7th dimension.

In that case, however, the motivations of such a being become so unpredictable that you can't assign more probability to gaining the promised utility than to losing that much utility in addition to your wallet (e.g. the "mugger" could torture anyone who accepts his offer, as a wicked form of 7th dimension entertainment). The best response to the traditional Pascal's wager, imo, is that if there is a God who allows exactly this universe to exist, then His motives are so obscure and bizarre that I expect Him to send all Christians to hell and non-Christians to heaven with exactly the same - and very small - probability that He will send all non-Christians to hell and all Christians to heaven. Similar logic seems to apply to all permutations of such outlandish metaphysical claims, at least as far as I can see.
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon
User avatar
Hedonic Treader
 
Posts: 342
Joined: Sun Apr 17, 2011 11:06 am

Re: Reasons SIAI (and research generally) is not optimal?

Postby Arepo on 2011-07-13T16:55:00

Hedonic Treader wrote:The utility of money doesn't scale linearly. Losing all my possessions with 99% probability would destroy much more utility for me personally than gaining 100x or even 10000x the money value of my possessions with 1% probability.


Sure, but I can promise you enough extra money to compensate for this (also note that, as a utilitarian, you're presumably not only concerned with the utility of your money to you, in which case, at least up to a point, it might actually scale faster upwards than it does downwards - losing all of your money in a welfare state isn't a disaster, whereas having enough to pay off all the low-hanging fruit Giving What We Can have found generates a *huge* benefit).

I don't see any reason to be interested in a Pascal's Mugger whose existence is so far outside the laws of physics that it's fantasy. Anything he can plausibly offer, I am willing to offer. Anything I can't plausibly offer, he can't either.
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.
User avatar
Arepo
 
Posts: 1065
Joined: Sun Oct 05, 2008 10:49 am

Re: Reasons SIAI (and research generally) is not optimal?

Postby Hedonic Treader on 2011-07-13T19:05:00

Arepo wrote:Sure, but I can promise you enough extra money to compensate for this

I don't think you can, because as I've pointed out before, my probability estimate of your honesty and ability to keep your end of the bargain decreases exponentially with a linear increase of the compensation factor. The higher your N, the less realistic your motivation and ability to pay. The only way out would be additional prior knowledge. For instance, you could show me a specific and potentially successful plan to build super-intelligent FAI using an initial investment - or you could be loaded like Bill Gates, and have a history of doing such deals for the lulz.
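As a toy illustration of that (numbers invented): if my credence that you will actually pay N times my stake falls off exponentially in N, then however high you raise N, the expected payout eventually collapses toward zero, because the exponential decay wins:

import math

stake = 1.0  # what I hand over, in arbitrary units
for factor in [10, 100, 1_000, 10_000, 100_000]:
    credence = math.exp(-factor / 100.0)  # made-up exponential fall-off in the promised factor
    expected_payout = credence * factor * stake
    print(factor, credence, expected_payout)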

(also note that, as a utilitarian, you're presumably not only concerned with the utility of your money to you, in which case, at least up to a point, it might actually scale faster upwards than it does downwards - losing all of your money in a welfare state isn't a disaster, whereas having enough to pay off all the low-hanging fruit Giving What We Can have found generates a *huge* benefit).

Okay, but only assuming that your providing the money doesn't extract an equal amount of utility from the social context. For instance, if I think you're a utilitarian yourself, I might wonder why you don't just donate. If you're not, I might wonder what methods you use to get the money, and if they cause more harm than the donation is worth. And of course, we all do have an intuitive self-bias; I haven't donated myself into financial ruin so far, and I don't think I will in the future.

I don't see any reason to be interested in a Pascal's Mugger whose existence is so far outside the laws of physics that it's fantasy. Anything he can plausibly offer, I am willing to offer. Anything I can't plausibly offer, he can't either.

I think that's wrong. You can't plausibly offer to have access to unlimited resources, or unlimited utility generation methods. A denizen from the 7th dimension could. That's why Pascal's Mugging would be a real problem if you assigned a non-zero probability to that entity being both honest/predictable and from the 7th dimension (unlimited utility access). In this case, all they would have to do is promise a large enough amount of utility to offset the small probability. The only logical defense for unbounded expected value maximizers is the entity's unpredictability, imho. It might create disutility where it promised utility.
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon
User avatar
Hedonic Treader
 
Posts: 342
Joined: Sun Apr 17, 2011 11:06 am

Re: Reasons SIAI (and research generally) is not optimal?

Postby Arepo on 2011-07-14T09:55:00

'A denizen from the 7th dimension' is just a pat way of saying 'a fantasy figure not subject to the laws of the universe'. For all intents and purposes, it's an impossible concept. Given basic scepticism there's obviously a non-zero chance that we've got a lot of stuff wrong and such a being might exist, but the same goes for me - I could be just such a denizen.

I think you're doing what Hare describes here - framing one objection as a matter of logic and objecting to it on intuitive terms, and the other the other way around.

I should stress that I obviously agree with everything you're saying about me, and would reject any similar bet anyone offered me, so the only difference between us is that I'm not willing to suspend disbelief long enough to find myself in a situation in which the laws of the logic I was just applying no longer seem to apply.

(I have a similar issue with Newcomb's box, incidentally - the entire 'paradox' stems from asserting something completely at odds with our understanding of the universe, and then not defining the problem very well anyway. In both cases, my reaction to this uberbeing would be much the same as if someone showed up, started producing miracles, and claimed to be god - I'd think it was a very powerful (but finitely so) liar. The credence I placed in its claim about how powerful it was would effectively tend to 0 as the power it claimed tended towards infinity)
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.
User avatar
Arepo
 
Posts: 1065
Joined: Sun Oct 05, 2008 10:49 am

Re: Reasons SIAI (and research generally) is not optimal?

Postby Brian Tomasik on 2011-07-14T13:59:00

LadyMorgana wrote:R.e. the choice between supporting x-risk research vs. meme propagation of concern for wild animal suffering, it's worth noting that the former promotes the latter to some extent (since the longer humans survive, the more morally intelligent they seem to become), but not vice versa.

However, it's only if humans survive that many of the risks to wild animals (terraforming, directed panspermia, lab universes, sentient simulations) rear themselves. Raising concern for wild-animal suffering is in large part designed to prevent harm arising from human survival. The other part is to increase chances that humans engage in "cosmic rescue missions" for wild extraterrestrials, but the value of this is unclear to me. (Wild extraterrestrials may have small likelihood of being sentient, and there may not be many of them within realistic reach.)

Postby tog on 2011-07-15T12:21:00

Interesting discussion of Pascal's mugging - I hadn't heard of it. Am I right in thinking it requires the premises that the mugger’s claiming X (e.g. that he’ll create and torture 3^^^3 people unless you give him $5) makes it more likely that X is true, and that X’s probability (given this claim) decreases more slowly than the payoff it involves increases? People above have given reasons to doubt both premises, particularly the first.

Here's a partly formed response I'd add. It's clearly very hard to judge the probability of scenarios like X, partly because our past experience doesn't help us decide between 'infinitesimal' and 'tiny'. Given that, I’ve no reason to think that the expected utility of giving in to the mugging is positive rather than negative. One reason for this is that my judgement of this expected utility switches from positive to negative depending on various questions I really have no idea about. Another is that if I start factoring in far-out possibilities like X I’d have to factor in other far-out possibilities (including that the mugger will do the opposite of what he says), and I have no idea how to do so.

Postby Gedusa on 2011-07-17T22:12:00

I heard Yudkowsky give a talk in which he mocked the idea of SIAI working towards happiness-maximising AIs.

In case you're interested he explains why here. I'd also point out that they're almost all consequentialists of some creed, which is better than nothing if you're a utilitarian :D

R.e. the choice between supporting x-risk research vs. meme propagation of concern for wild animal suffering, it's worth noting that the former promotes the latter to some extent (since the longer humans survive, the more morally intelligent they seem to become), but not vice versa.

Hmm, I'm not so sure. After all, total animal suffering (inflicted by humans - not in the wild) has increased over the last century (factory farms mostly). It could be that there will be something of a selection effect, though: humans may have to be at a certain level of morality not to all die. I guess if you pushed me, though, I would accept that most humans would come to care about wild-animal suffering, given time to contemplate the problem, and so we should try to prevent x-risks, not promote the anti-wild-animal-suffering meme.

I have little to add about Pascal's mugging that hasn't already been said.

Postby Jesper Östman on 2011-07-18T21:46:00

Gedusa:

To be strict, consequentialism needn't be better than nothing for a utilitarian. Even other utilitarian varieties can be worse than nothing to a utilitarian (e.g. a total classical utilitarian might be worse, from a negative utilitarian's perspective, than ordinary people are, at least under certain assumptions).

Holly/Gedusa: AFAIK there seems to be at least a significant minority (and perhaps even a majority) at SIAI who seem to have values that aren't that different from the values of most people here. Eg Shulman, Salamon and Anissimov.

Postby LadyMorgana on 2011-07-19T00:06:00

Arepo wrote:This argument doesn't really help you

Lol maybe that's because it was intended to help you. That wasn't very clear though.
I now also have little else to say about Pascal's Mugging that hasn't already been said.

Alan Dawrst wrote:it's only if humans survive that many of the risks to wild animals...rear themselves...The other part is to increase chances that humans engage in "cosmic rescue missions" for wild extraterrestrials

So it's also fair to say that it's only if humans survive that many of the benefits to wild animals rear themselves (including, not just cosmic rescue missions, but large numbers of biotechnologically-enhanced animals that can only experience happiness). So I still think that reducing x-risk has the upper hand here.

Gedusa wrote:In case you're interested he explains why here.

His explanation there doesn't make me any more comfortable.
Jesper Östman wrote:To be strict, consequentialism needn't be better than nothing for a utilitarian.

Agreed.
Jesper Östman wrote:AFAIK there seems to be at least a significant minority (and perhaps even a majority) at SIAI who seem to have values that aren't that different from the values of most people here. Eg Shulman, Salamon and Anissimov.

I'd like to know more to see whether this closeness in values is actually better than nothing in these cases :P Has Shulman disappeared from Facebook? Facebook used to suggest him to me as a friend a lot and now when I want to befriend him he is nowhere to be found...
"Three passions, simple but overwhelmingly strong, have governed my life: the longing for love, the search for knowledge, and unbearable pity for the suffering of mankind" -- Bertrand Russell, Autobiography
User avatar
LadyMorgana
 
Posts: 141
Joined: Wed Mar 03, 2010 12:38 pm
Location: Brighton & Oxford, UK

Re: Reasons SIAI (and research generally) is not optimal?

Postby Arepo on 2011-07-19T10:05:00

LadyMorgana wrote:
Arepo wrote:This argument doesn't really help you

Lol maybe that's because it was intended to help you. That wasn't very clear though.
I now also have little else to say about Pascal's Mugging that hasn't already been said.


I meant it doesn't save you from getting Pascalianly mugged by me :P
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.
User avatar
Arepo
 
Posts: 1065
Joined: Sun Oct 05, 2008 10:49 am

Re: Reasons SIAI (and research generally) is not optimal?

Postby Jesper Östman on 2011-07-19T15:21:00

One thing which does save you from mugging in practice though is that there are much better ways to invest one's money in hope of infinite return than to pay muggers (eg reducing existential risk).

Postby LadyMorgana on 2011-07-19T23:08:00

Arepo wrote:I meant it doesn't save you from getting Pascalianly mugged by me
Ooooh okay, forgive me, I read that in completely the wrong way!
"Three passions, simple but overwhelmingly strong, have governed my life: the longing for love, the search for knowledge, and unbearable pity for the suffering of mankind" -- Bertrand Russell, Autobiography
User avatar
LadyMorgana
 
Posts: 141
Joined: Wed Mar 03, 2010 12:38 pm
Location: Brighton & Oxford, UK

Re: Reasons SIAI (and research generally) is not optimal?

Postby Mike Radivis on 2011-08-18T11:19:00

LadyMorgana wrote:R.e. the choice between supporting x-risk research vs. meme propagation of concern for wild animal suffering, it's worth noting that the former promotes the latter to some extent (since the longer humans survive, the more morally intelligent they seem to become), but not vice versa.

Interestingly, I have been thinking it's exactly the other way around - or, more generally, that concern for suffering promotes x-risk prevention, but not vice versa.

Arguments for concern for suffering => x-risk reduction:
1) A world in which people care about reducing suffering might be a politically more stable world, because suffering often leads to some kind of conflict. I think it's relatively clear that a politically unstable world would be more prone to x-risks.
2) Concern for suffering would increase the probability that a superintelligent AI singleton cares about suffering, too. But the conclusion of that is anything but clear. It might reduce x-risks. Or the AI might wipe out humanity, but replace it with some kind of superior sentient being. Or it might go on a destructive negative utilitarian rampage and wipe out all life in its future lightcone.

Arguments for not(x-risk reduction => Concern for suffering):
3) I am rather unsure about whether survival really increases moral standards or not. After all, we still can end up in a very dystopian future that is darker than the darkest passages of medieval times. What if moral standards mostly depend on available energy and wealth? Then our moral standards might decline after peak oil.
LadyMorgana wrote:the longer humans survive, the more morally intelligent they seem to become

What is that claim based on? And what do you mean exactly? The age of a single human or the age of mankind?
4) A reduction of x-risks could just be the result of increased rationality and not increased moral intelligence. In that case, we would need to know whether increased rationality implies more utilitarian (say, as opposed to other consequentialist) ethical reasoning. I hope and guess that is the case, but I am also quite skeptical at the same time.

Concern for wild animal suffering might be a special separate case. There's the possibility that concern for wild animal suffering increases x-risks if it leads us to applying extreme measures like destroying the biosphere (by intentional venusforming of Earth for example).

Gedusa wrote:Hmm, I'm not so sure. After all, total animal suffering (inflicted by humans - not in the wild) has increased over the last century (factory farms mostly). It could be that there will be something of a selection effect though, humans may have to be at a certain level of morality to not all die. I guess if you pushed me though, I would accept that most humans would come to care about wild-animal suffering, given time to contemplate the problem, and so that we should try to prevent x-risks, not promote the anti-wild-animal-suffering meme.

I would make a distinction between a level of morality that is sufficient for survival of mankind and a level of morality that includes the prevention of wild animal suffering. It could be argued that the first level is rather low (perhaps comparable to the status quo, as it was sufficient for our survival thus far) and might even have negative expected utility, while the other one is quite high. If you optimize for survival you might end up with a higher probability of a lower average standard of morality than if you optimize for higher moral standards directly (which would be pretty relevant at least from an average utilitarian point of view).

LadyMorgana wrote:So it's also fair to say that it's only if humans survive that many of the benefits to wild animals rear themselves (including, not just cosmic rescue missions, but large numbers of biotechnologically-enhanced animals that can only experience happiness). So I still think that reducing x-risk has the upper hand here.

See previous argument: This only applies if we are in a high-morality setting. In a low-morality setting mankind might cause much more (wild) animal suffering (e.g. by panspermia or simulations with sentient animals). It is very hard to make estimates about the expectation values of focusing on anti-wild-animal-suffering vs. x-risk prevention. That's why I'm uncertain about which path to follow, but I am currently much in favor of raising moral standards.

Postby Gedusa on 2011-08-18T12:10:00

Then our moral standards might decline after peak oil.

Off-Topic: Can anyone point me to decent info on peak oil? I've heard people on here say that it is a problem before, and yet I've always agreed with Wikipedia's criticism section on it. It may be important if it is likely to happen.
I would make a distinction between a level of morality that is sufficient for survival of mankind and a level of morality that includes the prevention of wild animal suffering. It could be argued that the first level is rather low (perhaps comparable to the status quo, as it was sufficient for our survival thus far) and might even have negative expected utility, while the other one is quite high.

Yes, level of morality for survival is probably lower than level of morality for caring for wild animals. This assumes, as you point out, that morality causally influences x-risks. I think it does, but this effect is weak. I think reductions in x-risks can be done most efficiently by improving rationality and by specific interventions (e.g. making sure there aren't asteroids headed for us). Most people don't really want extinction, at least not within the lifetimes of them or their children, so we would expect that if they were more rational, they would be likely to take steps to avoid it, regardless of increased moral feelings on the subject.

Oh and @ "comparable to the status quo, as it was sufficient for our survival thus far"; observer selection effects seem plausible: we must find ourselves in a world where we didn't go extinct, regardless of the probability of such a world existing. So, we shouldn't assume that our morality level is high enough to prevent extinction relying only on past evidence. Or at least, I think so. I'm rubbish at anthropics :)

Postby RyanCarey on 2011-08-18T12:16:00

Off-topic
I don't know much about peak oil, but here's an article critical of the idea:
http://crookedtimber.org/2011/08/05/peak-oil-was-thirty-years-ago/

Postby Arepo on 2011-08-23T18:30:00

I don't think much of that article. The key issue for economic stability is the discrepancy between supply and demand (where demand is defined as something less tautological than how much of it gets bought), and the rate of change between the two. Peak oil will see supply drop much faster than demand ever has.
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.
User avatar
Arepo
 
Posts: 1065
Joined: Sun Oct 05, 2008 10:49 am

Re: Reasons SIAI (and research generally) is not optimal?

Postby LadyMorgana on 2011-10-12T00:08:00

Mike Radivis wrote: Interestingly, I have been thinking it's exactly the other way around, but more generally with concern for suffering promoting x-risk prevention, but not vice versa.
Actually this makes sense if you think of the process as creating utilitarians. But:

Gedusa wrote: Most people don't really want extinction, at least not within their own lifetimes or their children's, so we would expect that if they were more rational, they would be likely to take steps to avoid it, regardless of increased moral feelings on the subject.

Your average man on the street thinks it would be very bad if the world blew up. A slightly more morally enlightened fellow probably thinks it would be good (he's started thinking about how wild animal suffering probably outweighs happiness). A long-term utilitarian probably thinks it would be very very bad if the world blew up.

So my question is, thinking only about reducing x-risk (not taking into account the independent value of spreading the meme of reducing wild animal suffering): is it really worth the effort of taking people through the utilitarian process only for them to end up in roughly the same position on the issue that they started with, and with the danger of them getting stuck in Stage 2 (the "slightly more morally enlightened" stage above) along the way ("There's the possibility that concern for wild animal suffering increases x-risks if it leads us to applying extreme measures like destroying the biosphere (by intentional venusforming of Earth for example).")?


Mike Radivis wrote:
LadyMorgana wrote: the longer humans survive, the more morally intelligent they seem to become

What is that claim based on? And what do you mean exactly? The age of a single human or the age of mankind?

The age of mankind - mainly based on our moral sphere expanding from tribe to country to world to other worlds, and similarly expanding to give equal consideration of interests to women, other races, and animals.
"Three passions, simple but overwhelmingly strong, have governed my life: the longing for love, the search for knowledge, and unbearable pity for the suffering of mankind" -- Bertrand Russell, Autobiography
User avatar
LadyMorgana
 
Posts: 141
Joined: Wed Mar 03, 2010 12:38 pm
Location: Brighton & Oxford, UK

Re: Reasons SIAI (and research generally) is not optimal?

Postby Gedusa on 2011-10-12T11:36:00

LadyMorgana wrote: Your average man on the street thinks it would be very bad if the world blew up. A slightly more morally enlightened fellow probably thinks it would be good (he's started thinking about how wild animal suffering probably outweighs happiness). A long-term utilitarian probably thinks it would be very very bad if the world blew up.

I agree with some of that - but not other bits. I think you've made a large jump from the "average guy" to the "morally enlightened" guy who cares about wild animals. The most common reason I've come across for people thinking extinction is a good thing is misanthropy, followed by some form of environmentalism. I'm not sure about your categorization of long-term utilitarians either - whilst most I've met generally come to the view that they shouldn't be trying to promote extinction, a lot of us seem to be bet-hedgers: saying that we should try to make more people moral in case we do survive, but that a whole bunch of suffering (both current and expected) will be prevented if we don't.
LadyMorgana wrote: Is it really worth the effort of taking people through the utilitarian process only for them to end up in roughly the same position on the issue that they started with

Yes. If they're utilitarians or some form of maximizing consequentialist, then they're more likely to make greater efforts in whatever domain they think is right.
Think of how much the average guy on the street does to prevent extinction.
Now think of what the average utilitarian does.
There's the possibility that concern for wild animal suffering increases x-risks if it leads us to applying extreme measures

Ooh wow. I'd never thought of that. Hmmm. It's an unlikely scenario - but if I were to quantify it, I'd say it seems like a good bet to make: it increases the risk of "good" x-risks (where the biosphere goes down) and decreases the risk of "bad" x-risks (where the biosphere continues). It also might be worth the risk because a galaxy-spanning civilization that didn't care about animal suffering would probably be very bad.
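
To make that bet concrete, here is a minimal sketch of the kind of quantification I mean. The probabilities and outcome values are purely illustrative placeholders, not claims about the real numbers.

# Toy model of the bet described above: spreading concern for wild-animal suffering
# shifts probability between a "biosphere ends" outcome and a "biosphere continues"
# outcome. All numbers are illustrative assumptions in arbitrary (dis)value units.

outcomes = {
    # name: (prob_without_meme, prob_with_meme, value_from_a_suffering_focused_view)
    "biosphere ends":      (0.05, 0.10, -10),    # the "good" x-risk in the post's terms
    "biosphere continues": (0.95, 0.90, -100),   # ongoing wild-animal suffering
}

ev_without = sum(p0 * v for p0, p1, v in outcomes.values())
ev_with    = sum(p1 * v for p0, p1, v in outcomes.values())

print(ev_without, ev_with)
# -95.5 vs. -91.0: with these placeholders the meme looks like a good bet,
# but only because of the assumed values attached to each outcome.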

Re: Reasons SIAI (and research generally) is not optimal?

Postby Brian Tomasik on 2011-10-14T14:01:00

Gedusa wrote:
LadyMorgana wrote: Is it really worth the effort of taking people through the utilitarian process only for them to end up in roughly the same position on the issue that they started with

Yes. If they're utilitarians or some form of maximizing consequentialist, then they're more likely to make greater efforts in whatever domain they think is right.

I guess it depends on the cost-benefit tradeoff. If you want to reduce existential risk, and the same effort could either convince 10 people to take action on the grounds of normal moral intuitions or get 1 person to take the same action on utilitarian grounds, then the former may be the better option. However, I agree with Gedusa that spreading the right framework for thinking about these questions is extremely valuable, because the future's landscape for moral decisions will look radically different from today's, in ways no one has yet thought of.
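
A rough illustration of that tradeoff; the effort multiplier and head counts are entirely hypothetical placeholders.

# Toy cost-benefit comparison for the 10-vs-1 point above.
# All inputs are hypothetical assumptions.

people_via_intuitions = 10    # persuaded to act on ordinary moral intuitions
effort_per_intuitive  = 1.0   # arbitrary effort units each contributes to x-risk work

people_via_utilitarianism = 1     # persuaded via the full utilitarian route
effort_per_utilitarian    = 5.0   # a maximizing consequentialist plausibly tries harder

impact_intuitions     = people_via_intuitions * effort_per_intuitive          # 10.0
impact_utilitarianism = people_via_utilitarianism * effort_per_utilitarian    #  5.0

print(impact_intuitions, impact_utilitarianism)
# Under these placeholders the broad approach wins; raise the effort multiplier
# (or add value for having the right framework when the future looks radically
# different) and the utilitarian route wins instead.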

Re: Reasons SIAI (and research generally) is not optimal?

Postby Gedusa on 2011-11-07T18:16:00

A random tidbit - nowhere else seemed appropriate.

I asked Nick Bostrom whether we should give to SI or FHI if we wanted to lower x-risk and thought unfriendly AI was the greatest x-risk. He gave what he referred to as the "non-answer answer". But the rest of his answer was vaguely interesting (paraphrased a week after the fact, so accuracy may be low):

The two organizations have a lot of overlap in terms of their missions. They are pretty synergistic - and therefore if one were about to go under you should probably donate to that one. There is also a lot of collaboration between the two organizations - in papers we write and so on. However there are notable differences. SI doesn't have to deal with bureaucracy and try to get grants (as we do). They can also more easily hire people from non-academic backgrounds to do useful work. On the other hand - we have more influence in academia and turn out a greater number of papers. Our sights are on all x-risks, whereas SI focuses just on AI. So it's really a question of which set of characteristics you think are the most important.


Bluntly, I think he's just being polite. I think he thinks FHI is better, as he works for (well... leads) FHI and could certainly work for SI if he wanted to. I'm not sure if I'm just being uncharitable though. And I've forgotten 1/3 of his answer - and probably tweaked the rest in my head.

Edit: I now disagree with my final paragraph. A proper transcription of the talk and the relevant passage is here.

