Expected harm: AIs vs climate change


Postby Arepo on 2009-10-18T13:06:00

I just discovered a blog post which makes a strong case (stronger for the fact that I already shared its opinions, naturally ;)) for concerning ourselves more with climate change. I think his concluding remark is very pertinent, and one many people seem to gloss over:

pozorvlak wrote:Given that, and I can't emphasise this enough, my disaster scenario is already happening, I think the onus is on you to explain why it's so overwhelmingly probable that we'll be saved at the last minute by a deus ex machina.


I think there are a few reasons not to take the AI risk as seriously as many transhumanists claim - I recently had an interesting discussion on Facebook about it, which I'll paste here:

David Pearce
If a superintelligent being were to convert your matter and energy into a superhappy, quasi-immortal supergenius, would that act be friendly or unfriendly? After all, if offered a wonderpill today that promised life-long happiness, cognitive enhancement and eternal youth, you'd probably take it. Ah, you might reply, the difference is consent. Yet just as small children and pets don't always know their own best interests, maybe most humans don't either. The responsible caregiver in each case may feel duty-bound to intervene. [I should add I don't think this scenario is likely. But I trust the judgement of a posthuman superintelligence more than mine.]

This probably isn't the kind of "unfriendliness" most analysts have in mind. But some conceptions I've read of so-called "Superintelligence" strike me as quite dumb - mere SuperAspergers or glorified idiots savants that lack a capacity for empathetic understanding of other sentient beings.


Sasha Cooper
If the singularity unfolds anything like singularitarians believe it will, it seems very unlikely that this mind emulation thing will be more than a momentary fad.

Super-smart AIs have no obvious reason to worry about the evolved 'personal identity' fiction many philosophers desperately cling to. And it's surely going to be far more efficient to have a single utility-monster mind than a bunch of irrational self-serving duplicates of the deeply flawed products of natural selection all fighting each other in the virtual world.

So it's not obvious that 'friendly' and 'unfriendly' AIs would amount to anything radically different in the long run. A friendly one will want to maximise welfare, and David's scenario seems like a pretty obvious way of it doing so. An unfriendly one, of the type many transhumanists envisage, is basically an obsessive pursuer of a non-utilitarian goal (the idiot savant you're talking about, David?).

But if, as I believe, 'intelligence' of the kind we're talking about here entails emotion and if, as I also believe, emotion combined with accurate reasoning implies utilitarianism, then any AI smart enough to wipe us out in pursuit of its goals is smart enough to realise (as some of us have done) that its desires are self-contradictory and to change itself (as we've yet to manage) into something with more coherent - or utilitarian - desires. Perhaps it will have wiped out all life on earth by then, but if we accept something resembling Bostrom's astronomical waste argument (http://www.nickbostrom.com/astronomical/waste.html), then the sudden demise of human life is a - drop in the ocean doesn't capture it - speck of dust floating in a supercluster.

Meanwhile, if today's futurists turn out to be as unreliable as yesterday's (is there any reason to think they won't?), maybe the future will look nothing like their predictions. Perhaps due to some inherent limitation we're not yet aware of we'll just continue to develop and expand gradually. In which case in the meantime we'd have done much better to stop throwing money at organisations like SIAI, whose founder - against every mathematician I've asked - actually thinks Newcomb's Paradox is a real problem <http://lesswrong.com/lw/nc/newcombs_problem_and_regret_of_rationality/>, and to put it back into more conventional tasks run by people who actually accept the scientific and mathematical consensus on major issues, and try to improve the world we can reliably model.

Maybe that's the only chance we have to persuade a benevolent AI that we're worth keeping alive.


David Pearce
Sasha, excellent post, just one clarification. Yes, I worry about the possible unfriendliness of what (for lack of an existing term) I call a SuperAsperger (as distinct from a true SuperIntelligence). A mindblind SuperAsperger may act hostile or indifferent. But a SuperAsperger may instead be a utilitarian bent on converting the accessible cosmos into orgasmium - a far more likely outcome IMO than the oft-invoked paperclip maximiser.


Jesper Östman
Sasha, very interesting post! Your point that 'friendly' and 'unfriendly' AI may amount to the same thing is both good and important.

I believe philosophers are among the people most likely to give up fictions like personal identity, at least the more talented philosophers like Hume or Parfit. Perhaps some (?) Buddhists believe this too. However, if you truly have embraced the fact that personal identity is a fiction, how come you think it is of any importance to convince an AI to keep "us" alive?

Maybe all the predictions of those futurists are wrong. Let us assume that you are right and we have a reason to assume that. How strong is that reason - are we justified in giving the thesis that they are wrong a 60% credibility, 80%, 90%? Even if we are justified in holding that belief to a 99% degree, will not the huge disutility of a truly hostile (non-utilitarian) superintelligence justify spending a considerable amount of money on trying to minimize that risk?

The same goes for your thesis that AI will have emotions and (any conceivable set of) emotions combined with superintelligence will lead to utilitarianism. You may have good evidence for this thesis, but how good? Good enough to risk everything without a second thought on the matter?

Perhaps the SIAI is not doing a good job, perhaps they are. In any case, in a time where we spend much larger amounts of resources than SIAI get on things like soccer or fashionable clothing, is it really rationally justified to spend nothing at all on trying to avoid possible unexpected disasters?


David Pearce
If the universe had an "off" button, the negative utilitarian would seem obliged to press it.
If the universe had a "convert to orgasmium" button, the classical utilitarian would seem obliged to press it.
Whether the negative utilitarian can have a principled reason for pressing one button rather than the other is unclear. Either way, the outcome is the destruction of ourselves and the world as we understand it.

Despite speculation about "end of the world" high-energy particle accelerator experiments, the universe doesn't appear to have an "off" button. But conversion of the accessible universe into orgasmium seems feasible, in principle at least. So perhaps the policy prescriptions of NU and classical utilitarianism converge. The conversion job wouldn't even need SuperIntelligence, just a SuperAsperger.


Sasha Cooper

Re David: I’d got the impression from your writing that you were very much in favour of the orgasmium button yourself. As you say, it seems to be the logical conclusion of CU, so I don’t really think we can justify calling it a SA, or dumb for reaching it simply because the conclusion happens to cause us some anxiety.

Re Jesper: It was actually through reading Reasons and Persons that I finally realised the problem with personal identity concepts, which was quite peculiar, since Parfit seems unwilling to follow his conclusions all the way. Having demolished usual ideas of personal identity he still seems to cling to the belief that there’s this other one hiding just around the corner - that psychological continuity actually matters in a non-utilitarian way. He’s now apparently become a prioritarian, so while I have a lot of respect for some aspects of his writing it turns out we have quite different worldviews after all.

Re why I think it important to persuade an AI to keep us alive, I have two answers:

1) I don’t, in the same sense I don’t think it’s intrinsically important that I pass on my genes – or at least go through the motions of doing so. But I’m fundamentally programmed to desire these things emotionally.

2) More to the point, I want to persuade people who do think it’s intrinsically important to put their efforts into more immediate goals. So it’s their sense of self-preservation I’m arguing from, not necessarily mine.

Re spending money on the risk of me being wrong. Sure, but there are plenty of other existential risks facing us in the more immediate and more foreseeable future. If any of them is greater than the threat of death-by-AI (or rather, if we can prevent more possible-world-extinctions per dollar), then we should simply put all our money into that project until the point where it becomes more efficient to divert it elsewhere.

Given our uncertainty about the future it doesn’t seem crazy to put *some* money into various options (although even that’s not necessarily sensible: http://www.slate.com/id/2034/). But given how rapidly our uncertainty about the future multiplies out as we go further into it, it seems like (all things being roughly equal) we should strongly prioritise those projects designed to prevent existential threats nearer to our time.

With something like SIAI, I doubt there’s much to gain by giving them more money than they already have. Using pure maths as a model, I’m told that most advances in any given field typically come from two or three hyper-geniuses at the top, and everything else is window-dressing. Since the SIAI’s work is still largely abstract, I suspect the same applies to them. In which case, once they have enough money to afford the appropriate hyper-geniuses, they either fund them, in which case they obviously don’t need any more, or they fund other people, in which case they obviously don’t deserve any more.

So what I object to about SIAI is not their existence or the fact of their funding per se, but their absorption of funds from utilitarian donors (see eg this thread viewtopic.php?f=25&t=170) which might have - to give one alternative - cured hundreds of people of lifelong blindness (see Toby Ord’s superb post http://www.facebook.com/group.php?gid=4 ... topic=3320), alleviating suffering with *vastly* lower risk than giving to futurists, and reducing existential risk with (IMO) a comparably high probability by allowing more of the currently living humans to contribute to global welfare and - perhaps most importantly - by making global society that much more harmonious.


Sasha Cooper
I had a couple of other thoughts about this today:
---
An egoistic AI might again be indistinguishable from a benevolent or super-aspergers one since, again, its goal of maximising its own welfare would turn it for selfish reasons into the same utility monster that the other two would turn into for instrumental reasons.
---
If, as I suspect, the biosphere experiences net negative welfare (something I agree with Alan Dawrst on: http://www.utilitarian-essays.com/suffering-nature.html) then of the two SIAI-relevant scenarios, i) AI research goes ahead unrestricted and a SA wipes out the biosphere before deciding to maximise (its) welfare and ii) SIAI get involved and delay research into such an AI until they’re sure it will behave like they want it to, i) is actually far preferable. You get rid of a huge source of suffering earlier than SIAI would let you.
It also saves a lot of potential harm from the conceit that we can envision a perfect universe better than superintelligences, whom that conceit might drive us to handicap.
---
If you tell a perfectly logical being to maximise multiple variables (or to maximise one variable while never flipping another variable from 0 to 1 - ie never ‘violating rights’), it’s going to either crash, or assign a weighting to each instruction that allows it to convert them into one variable. It seems unlikely that imagining any variable pursued to its ultimate conclusion will give us a picture we’re intuitively comfortable with if we like the thought of any kind of diversity (as we seem to have evolved to). No amount of SIAI research is likely to change that, suggesting we’ll either end up with a universe we’d currently find intuitively unappealing or we’ll never develop AI.
If you only tell it to satisfice variables, you’ll probably still have the same problem.
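A minimal sketch (with entirely made-up variable names and weights) of what converting several instructions into one variable looks like in practice:

```python
# Minimal sketch of collapsing several 'instructions' into one scalar to
# maximise. The variable names and weights are made up for illustration only.

def scalarised_objective(welfare, rights_violations, diversity,
                         w_welfare=1.0, w_rights=-1e6, w_diversity=0.01):
    # A hard rule like 'never violate rights' becomes, in practice, just a
    # very large negative weight - i.e. one more term in the same sum.
    return (w_welfare * welfare
            + w_rights * rights_violations
            + w_diversity * diversity)

# A maximiser of this single number will trade away the lightly weighted
# terms (here, diversity) whenever doing so buys more of the heavily
# weighted one - the 'one variable pursued to its ultimate conclusion' worry.
print(scalarised_objective(welfare=1e9, rights_violations=0, diversity=5))
```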
---
None of these thoughts are meant to be an argument for anything more profound than diverting utilitarians’ resources away from AI-related research towards more immediate (ie. less risk-discounted) suffering-prevention.


Jesper Östman
Misc:
Regarding Parfit, yeah I agree that Parfit's view that psychological continuity matters some is ultimately unjustifiable.
Thanks for the links to Ord's post and the felicifia thread, they were very stimulating!

The main question:
I take the main question of our discussion to be if we as utilitarians should use our resources to minimize current (human) suffering (and/or maximize happiness) or to minimize existential risk. (A question we have not yet discussed is whether we should prioritize minimizing current human suffering or current animal suffering.)

We can agree that if we give money today to the causes Toby Ord promotes we can with a very high probability (perhaps 0.95, but we can assume it is 1) get utility at a rate of up to 7 dollars per DALY. That is, of course, very good.

How good would it be, by comparison, to reduce existential risk? I believe that sooner or later it will be possible to convert into hedonium *a lot* of the energy available in (1) our planet, (2) our solar system, (3) our light-cone, (4) perhaps even more than that, depending on what the true physics will be. Even on the most conservative of these scenarios, (1), the utility will be astronomical.

Assuming (1) is technologically possible, anything we can do to raise the probability of it happening, even by what we consider to be an extremely small amount, such as 0.001 (or much less than that), will have a huge expected utility.

A superintelligent AI may not prevent it, but perhaps instead promote it. Still, I believe there is at least some risk, even if it may be small, that such an AI will prevent it (perhaps through maximizing something else). However, other existential risks such as the extinction of the human race or the crippling of our science will definitely prevent it from happening.

If we are able to use our resources to somewhat reduce the probability of such catastrophes this will give us a *huge* gain in expected utility.

Thus, if the odds of existential risk are not astronomically low and we have some ability to affect them, spending resources on that would be *extremely* good.

But if that is the case we seem to have a good reason to give priority to reducing existential risks rather than to reducing immediate suffering.
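A rough sketch of the expected-value arithmetic behind this argument - every number below is a placeholder rather than a figure anyone in this discussion has committed to:

```python
# Back-of-the-envelope version of the argument above. Every number is a
# placeholder, not a figure anyone in this discussion has committed to.

budget = 1e6                      # a hypothetical $1m to allocate
dollars_per_daly = 7.0            # the ~$7/DALY rate quoted above

# Option A: spend it on relieving current suffering.
value_now = budget / dollars_per_daly            # ~143,000 DALYs

# Option B: spend it on existential risk, where success unlocks an
# astronomically large future. Suppose the accessible future is worth 1e20
# DALY-equivalents and the spending shifts its probability by only 0.001.
astronomical_value = 1e20
probability_shift = 1e-3
value_risk_reduction = astronomical_value * probability_shift

# On these made-up inputs option B dwarfs option A, which is the force of
# the argument; the reply below attacks the inputs rather than the algebra.
print(value_now, value_risk_reduction)
```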


David Pearce
The financial cost of effectively eliminating most existential risks is IMO quite low. Self-sustaining bases on Mars and the Moon might cost a few hundred million dollars to establish. This compares favourably with, say, the 700+ billion dollars just spent rescuing the banks.

Based on some fairly modest assumptions, by far the biggest source of avoidable suffering in the world today is factory farming. Mass-produced cultured meat
http://www.new-harvest.org/
could in theory deliver global veganism in a few decades. Alas the research is shamefully under-funded. This is mainly because the two or three decades (probably) needed to deliver gourmet steaks - as distinct from artificial mincemeat - are beyond the time-horizon of most commercial investors.


Sasha Cooper
Jesper -> I agree with most of the logic, but not the input. I don't want to assume (1) is technologically possible - the probability of it being so seems like it should be part of the sum.

But the main problems I have with this kind of 'here's such a big number that even when you multiply it by a tiny probability you still end up with a massive number' reasoning are (roughly) that a) as your large number grows, the probability of attaining it by the specified action decreases at a rate that seems likely to be comparable and b) as your large number grows, the variables stack so that the probability of attaining it via alternative, but loosely related, routes grows.
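A toy illustration of objection (a) - the decay model here is invented purely to show the shape of the claim, not asserted as the true relationship:

```python
# Toy version of objection (a): if the probability of actually securing a
# payoff falls off about as fast as the payoff grows, expected value stops
# exploding. The k/payoff decay is an invented assumption for illustration.

def expected_value(payoff, k=1.0):
    probability = min(1.0, k / payoff)   # assumed decay of attainability
    return probability * payoff

for payoff in (1e3, 1e9, 1e20):
    # Under this assumption every scale gives the same expected value (k),
    # so 'huge payoff times tiny probability' loses its apparent force.
    print(payoff, expected_value(payoff))
```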

For example, giving money to fight poverty (if you can do so without increasing overpopulation) IMO reduces existential risk at least as much as giving to a group like SIAI. Generally speaking I'd much rather give money to promote use of technologies we already have than ones we might get, since resource depletion from peak oil, and climate change, seem likely to retard our technology (though not our science) in the near future.

Funding space programs seems more plausible than funding most other science, since there's a pretty broad scientific consensus that the technology to make them work really is around the corner, since we'll probably have to look to space for more resources anyway, and for the reason David gives (though I wonder whether a 'permanent' moonbase could really sustain itself indefinitely if we wiped out all life on Earth).

I tend to think we should fund climate change prevention programs though, since there's a massive scientific consensus on it, it's just around the corner, and its effects seem likely to create serious political tensions, ramping up existential risk by giving us increased motivation to blow each other up. It's also likely to cause a lot of extra poverty, in proportion to its severity.

That's not a very well informed opinion though, and I have no idea which programs would be most cost-effective (and given Toby's research elsewhere I imagine there'll be orders of magnitude between them, so I'm reluctant to commit to anything without more info).


I've also subsequently thought that if AI turns out to be reasonably friendly (in a more conservative way) after all, delays from SIAI might (probably would?) still turn out to be hugely costly if you expect the kind of exponential growth many singularitarians do, since even slowing down the development of AI by two minutes will mean that from then until extinction, we're two minutes behind on our exponential curve. That's a lot of lost utility.
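A toy calculation of the 'two minutes behind on the exponential curve' point - the growth rate and time horizon are arbitrary placeholders, not predictions:

```python
import math

# Toy calculation of the cost of a short delay under exponential growth.
# The growth rate and horizon are arbitrary placeholders, not predictions.

r = 1.0                          # utility grows like exp(r * t), t in years
T = 100.0                        # years from now until some fixed cut-off
delay = 2.0 / (60 * 24 * 365)    # two minutes, expressed in years

undelayed = (math.exp(r * T) - 1) / r           # total utility with no delay
delayed = (math.exp(r * (T - delay)) - 1) / r   # same curve started 2 minutes late

# The loss is a tiny *fraction* of the total, but the total is so large that
# the absolute loss (roughly exp(r*T) * delay) is still enormous.
print(undelayed - delayed, (undelayed - delayed) / undelayed)
```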
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.
User avatar
Arepo
 
Posts: 1065
Joined: Sun Oct 05, 2008 10:49 am

Re: Expected harm: AIs vs climate change

Postby larry on 2009-10-18T13:52:00

The thing about AI is that there really isn't that much doubt of it occurring. I think I saw a post on here where someone stated the likelihood as 0.30, or something to that effect. 0.30 in all of the foreseeable future? Do they mean on earth, or in any part of the universe that will be in earth's light cone, for... the entire future? I have no idea how that number was established, but the likelihood is more in the range of 0.999, for example. It is an inevitable result of the laws of the universe. Evolution is the driver. Something will become more intelligent than current humans, and the word "artificial" is largely irrelevant; it will be some form of intelligence. Either transhumans, machines, alien contact, or an engineered species, etc. If it is not human-equivalent, DNA-based sentience then we can call it artificial if you like, but it will just be intelligence in my mind; I don't care about the semantics.

Intelligence increases over time; I believe Ray Kurzweil is fond of the charts showing all of the trends, and the rates are increasing and have been since before the time of the first organisms. So we have to have the confidence to believe they will continue, because that is what we know at present. At any rate, yes, it is very critical that we anticipate that the future will have exponentially more intelligence than we have now, and plan our utility models at this time to take these trends into account when making current decisions. The main question mark, as you mentioned, is the factor of exponentiation, as we don't know how fast things are coming. And that makes a massive difference in the long term. We just have to put into our models that we don't know that, not make assumptions about things we do not know with enough certainty, and concentrate on things that we do. Because when you concentrate on things with massive uncertainty, there is too great a danger that you are working in the wrong direction. But yes, plan for it to come with reasonable certainty.

As for climate change: that is the example of something we are unsure of. The science on that is still in its infancy. The models the climate scientists are coming up with are not coherent yet, nor well proven or reasoned, and the probabilities of the various scenarios are scattered. The costs in the present to achieve the utility in the future are, once again, in my opinion, a very big risk of misallocation. So in this case, putting undue effort into making changes in the here and now to prevent climate change is premature, as that effort could be expended on more definite utility-producing efforts. Improving the models, on the other hand, can be more useful, depending on the costs of researching them.

As I said before, the massive intelligence is coming; if we expend energy now to solve problems that we don't understand, such as climate change, we are delaying the intelligence boom itself, which will consider problems like climate change trivial. I think that is the main point. Determining the probabilities with more accuracy is sometimes of higher benefit than expending resources to act, when the uncertainty of outcome is very, very high.


Re: Expected harm: AIs vs climate change

Postby Arepo on 2009-10-18T16:30:00

I disagree with a lot of that:

larry wrote:The thing about AI, is that there really isn't that much doubt of it occurring. I think I saw a post on here where someone has stated the likelihood of it being 0.30 or to that effect. 0.30 in all of the foreseeable future? Do they mean on earth or in any part of the universe that will be in earth's light cone, for... the entire future?


That does sound like a very conservative estimate. But note three of the differences between the CC and AI problems:

1) CC's biggest risk to our existence is within 100 years or so. If we survive that long without massive technological collapse, we can move onto other worries. Vs AI, which becomes a risk whenever it appears - which we really can't realistically guesstimate, since we have no idea what the breakthrough that creates it would be, and very little idea about how issues like peak oil and CC will affect our technological capacity.

More important is the window of opportunity:

2) For CC this is now. The cost-effectiveness of all our options drops rapidly as time goes on. For AI it's not clear when this is, partly because of our uncertainty about when it will happen and partly because of our uncertainty of what, if anything, to do about it. If we're going to develop AI within the next decade or two, chances are nothing SIAI will have done will matter, since it will just be some guys playing around on a PC somewhere, or some natural evolution from the interplay of viruses on the web, or whatever. If we're not going to develop it for over a century then we'll probably have time on the other side of our CC-activities to do what SIAI are trying to do now.

3) We already know at least some of what we can do to reduce the harm of CC. We can put this into practice now. For all we know (as seems quite likely, even), there's nothing we can do to significantly alter the existential risks posed by AI. It is, after all, going to be far smarter than the researchers. The most likely thing we might be able to achieve is stalling its arrival, and see the OP for why I don't think that's a big deal.

It is an inevitable result of the laws of the universe. Evolution is the driver. Something will become more intelligent than current humans, and the word "artificial" is largely irrelevant, it will be some form of intelligence.


I think any evolutionary theorist would dispute this. Intelligence prevails when the conditions are right for it, and increases iff some mechanism arises for increasing it. Evolution doesn't magically provide the latter, and the former only holds in select corners of the biosphere.

Intelligence increases over time, I believe Ray Kurzweil is fond of the charts showing all of the trends, and the rates are increasing and have been since before the time of the first organisms. So we have to have the confidence to believe they will continue, because that is what we know at the present.


I've seen Nick Bostrom assert something similar about (pre)historical GDP, and it seems completely unconvincing. The last 100/200/6000 years have all been anomalies in the history of the world - black swans of some type. You can't infer anything about them from looking more broadly at the surrounding history - or if you claim you can, you need to actually show detailed evidence of why there's a valid analogy. What's more, you need to define your terms very carefully. Perhaps species have tended towards greater ability to manipulate symbols in some way - but that doesn't mean we'll create something resembling an AI just by getting better at manipulating symbols.

As for climate change. That is the example of something we are unsure of. The science on that is still in its infancy.


I find this a really bizarre assertion following such assured statements about AI, which we really know nothing about. I can't speak with personal authority, but every climate scientist I've seen opine claims that the basic principle of the greenhouse effect is one of the best-tested conclusions in modern science. Naomi Oreskes showed the consensus on anthropogenic CC to be overwhelming 5 years ago, and it's only increased since then.

The details of what overall effects it will have are obviously hazy, but much less so than the details of what a super-AI would do. Many - perhaps most - AI researchers in the world are just getting on with their jobs without any apparent expectation that they're bringing about the end of the world.

I think that is the main point. Determining the probabilities with more accuracy is sometimes of higher benefit than expending resources to act, when the uncertainty of outcome is very very high.


Agreed, but at any given time the option of researching probabilities further is just one of the actions available. When you have high uncertainty of one outcome and high confidence of another, you shouldn't automatically throw resources at the former - especially when the latter is an imminent threat which you know how to combat. If anything you should bias towards the latter.
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.
User avatar
Arepo
 
Posts: 1065
Joined: Sun Oct 05, 2008 10:49 am

Re: Expected harm: AIs vs climate change

Postby larry on 2009-10-18T17:49:00

Arepo wrote:I find this a really bizarre assertion following such assured statements about AI, which we really know nothing about. I can't speak with personal authority, but every climate scientist I've seen opine claims that the basic principle of the greenhouse effect is one of the best-tested conclusions in modern science. Naomi Oreskes showed the consensus on anthropogenic CC to be overwhelming 5 years ago, and it's only increased since then.


Maybe I misunderstand and maybe you jumped to conclusions.

Probably it was my fault and I wasn't clear. I have no doubt that the makeup of the atmosphere affects the climate - I didn't state anything to the contrary (or did I?). The point I was trying to make, which I thought was clear or implied, is: if we try to control that, at what degree of effort does it become non-utilitarian? Or would it be best to use the effort in another direction until we are more sure of the science behind CC - the science that tries to trace out the results of any climate disruptions - and the outcomes are more proven and quantifiable?

Arepo wrote:but that doesn't mean we'll create something resembling an AI just by getting better at manipulating symbols.


Manipulating symbols - does that mean writing software? Intelligence is not necessarily created by writing software. Increasing itself is an inherent quality of intelligence: when it exists it finds ways to increase itself, and it increases on its own. Software has not been around very long, yet intelligence has increased for a very, very long time. If you mean that man, by trying to increase the rate at which intelligence is increasing, is wasting his time, that is more sensible to me, and that I would consider. But I have no doubt that man can increase the rate at which intelligence is increasing; he has already done it, and he continues to do so.

Arepo wrote:I think any evolutionary theorist would dispute this. Intelligence prevails when the conditions are right for it, and increases iff some mechanism arises for increasing it.


I think you should be more careful with your wording: "any" implies every single one believes this, which we know cannot be true. I don't think you really believe this; it just slipped out, maybe. It hurts the credibility of the argument to say something is definitely true - I thought we were trained in philosophy. Cogito, ergo sum? :P


Re: Expected harm: AIs vs climate change

Postby Arepo on 2009-10-19T20:11:00

larry wrote:Probably it was my fault and I wasn't clear. I have no doubt that the makeup of the atmosphere affects the climate - I didn't state anything to the contrary (or did I?). The point I was trying to make, which I thought was clear or implied, is: if we try to control that, at what degree of effort does it become non-utilitarian?


It's hard to be sure, but IIRC Nicholas Stern's report concluded that the expected harm of runaway climate change would be about 1/5 of the world's economic production (he subsequently revised the figure up to 1/3 - I don't have links to hand, but can dig them out if you need them), so we should be willing to give up to that much to prevent it, plus presumably a little more to insure against the existential risk of a worst-case scenario.
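The arithmetic behind "willing to give up to that much", sketched with the Stern figures quoted above (the normalisation to 1 is arbitrary):

```python
# The arithmetic behind 'willing to give up to that much'. World output is
# normalised to 1; the damage fractions are the Stern figures quoted above.

world_output = 1.0
expected_damage_original = 0.20     # ~1/5 of output (original estimate)
expected_damage_revised = 0.33      # ~1/3 of output (revised estimate)

# If that really is the *expected* damage, then any prevention programme
# costing less than it has positive expected value, before adding a premium
# for the worst-case (existential) tail.
print(expected_damage_original * world_output,
      expected_damage_revised * world_output)
```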

Or would it be best to use the effort in another direction until we are more sure of the science behind CC - the science that tries to trace out the results of any climate disruptions - and the outcomes are more proven and quantifiable?


It's pretty well established that the costs of dealing with it will shoot up the longer we wait. Against that, the risk of overspending in the present doesn't seem too serious, especially given how much we're actually underspending. You don't actually need it to be precisely quantifiable for that logic to hold, which is just as well, because it doesn't seem like we'll have that level of sophistication in modelling complex systems for decades or even centuries.

Conversely, why doesn't this logic apply to AI research? Let's wait for someone to quantify the size of the threat and the value of research before we put any more of our own money into it (groups like SIAI have a lot of rich backers, so they're never going to run out of funding entirely).

larry wrote:But I have no doubt that man can increase the rate at which intelligence is increasing; he has already done it, and he continues to do so.


I think that's probably true, but it's not really supported by the claim that intelligence happens to have been a successful evolutionary trait in a few niches.

larry wrote:I think you should be more careful with your wording: "any" implies every single one believes this, which we know cannot be true. I don't think you really believe this; it just slipped out, maybe. It hurts the credibility of the argument to say something is definitely true - I thought we were trained in philosophy. Cogito, ergo sum?


This doesn't seem very important. For the record, I'm a radical epistemological sceptic, meaning that there's (probably) nothing I'm certain of, nor even anything I'm actually confident of, since any statement of probability or confidence bounds is uncertain. In practice I use words like 'know' and 'sure' when I have high confidence because it's a lot more practical than hedging every aspect of everything I say, including the hedge (and the hedge of the hedge, ad infinitum).

So when I say 'any', I obviously don't claim to know for sure what would happen if you asked any evolutionary theorist. But I'm very confident that if you asserted that anything about evolution was 'inevitable', they would contradict you, saying that adaptations occur by a mixture of chance mutation and natural selection. So the fact that intelligence has found a niche in some cases is no more relevant than that gigantism has. Your logic seems to be equivalent to a diplodocus saying 'I can see that species tend to get larger, so it's inevitable that in a few million years an organism will evolve that's heavier than the planet I live on'.

To put it another way, this :P
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.
User avatar
Arepo
 
Posts: 1065
Joined: Sun Oct 05, 2008 10:49 am

Re: Expected harm: AIs vs climate change

Postby larry on 2009-10-20T00:36:00

Arepo wrote:It's hard to be sure, but IIRC Nicholas Stern's report concluded that the expected harm of runaway climate change would be about 1/5 of the world's economic production (he subsequently revised the figure up to 1/3


Oh, yes, here is a page about the report. The criticisms section sounds very similar to what I posted a bit earlier; I should have just linked to it and cut and pasted from there to start with. They are trying to predict economies to the end of the century? They can't even get them in the ballpark 2 years in advance. Take GDP numbers for this year: go check the forecasts against the actuals predicted just a couple of years ago. Check the predicted surplus - whoops, I mean the beyond-imagination 1.8 trillion deficit, etc.

Stern_Review

The track record for economic forecasts is atrocious. I admit the models that try to predict economies are beyond complex. Throw in another variable like climate change, with wildly varying forecasts modifying the basic assumptions, and it is laughably inane.

Anyway, yes, I will drop it; we don't agree, obviously, but thanks for the banter. I am sure we agree on more than we disagree on. But on this one, we just diverge. Time will tell.


Re: Expected harm: AIs vs climate change

Postby Arepo on 2009-10-20T11:36:00

I'm the first to agree that the future is unpredictable. What I don't understand here is where the claim comes from that the emergence of AI, something on which we have basically no data, is a greater threat than something on which we have lots of data (out of an admittedly huge amount of potential data). After all, AI development is subject to economics too.

Re economic forecasts being difficult, that's not necessarily relevant to the claim that climate change will damage the economy by a particular amount. Things can predictably affect the economy without being part of its conventional models - eg. if we suddenly proved that the sun would go nova next Tuesday, we could confidently predict that the economy would shrink by a fraction of 1 next Tuesday. We could also use that to advise policy - ie if there was something with a greater than 0 chance of preserving us, and which cost less than 100% of our total GDP between next Tuesday and the end of the universe, we would want to pursue it (assuming we expected net positive utility if we survived).
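The structure of that policy rule, sketched with made-up numbers:

```python
# The structure of the policy rule in the nova example; all quantities are
# placeholders, chosen only to make the inequality concrete.

p_success = 0.01        # chance the intervention actually preserves us
value_if_saved = 1e6    # net positive utility expected if we survive (arbitrary units)
cost = 100.0            # what the intervention costs before next Tuesday

# Pursue the intervention iff its expected benefit exceeds its cost.
pursue = p_success * value_if_saved > cost
print(pursue)           # True for these made-up numbers
```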

I don't want to put words into your mouth, but as far as I can see your claim must amount to one of two things:

1) We should spend more on general risk analysis now, even though a) we have good evidence that spending more on climate change now (assuming payment comes from a fixed pool) will, in expectation, save us more than it costs us later (+ reduce short-medium-term existential risk), and b) we have little or no evidence about the value of further analysis.

2) We should spend more on AI research specifically, even though we have no evidence that AI is an existential risk, no reason to believe that we can do anything about it (besides not develop it) if it is, and little reason as utilitarians to think that we should if we could.

I don't like agreeing to disagree before we've even isolated the disagreement. If you're bored of the conversation, fine, but if not I would like you to post some short numerical/expected utility claim showing very roughly what you think the benefits of spending on analysis (or whatever you're advocating) vs CC reduction are. Pull the numbers out of thin air, if you like - we can quibble about them later - I just want to see the structure of your argument.
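For example, something with the shape of the following sketch - every number in it is pulled out of thin air, purely to show the skeleton I mean:

```python
# Skeleton of the comparison I'm asking for - every number is pulled out of
# thin air, purely to show the shape of the argument.

def expected_benefit_per_dollar(prob_success, benefit_if_success, cost):
    return prob_success * benefit_if_success / cost

cc_mitigation = expected_benefit_per_dollar(prob_success=0.5,
                                            benefit_if_success=1e4,  # placeholder units
                                            cost=1e3)
risk_analysis = expected_benefit_per_dollar(prob_success=0.05,
                                            benefit_if_success=1e5,
                                            cost=1e3)

# Whichever comes out higher gets the marginal dollar; the real argument is
# then about the inputs, which is exactly where I'd like to isolate the
# disagreement.
print(cc_mitigation, risk_analysis)
```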
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.
User avatar
Arepo
 
Posts: 1065
Joined: Sun Oct 05, 2008 10:49 am

