Holden Karnofsky's criticisms of SingInst

Postby tog on 2012-05-11T09:24:00

In case you haven't spotted it, this is interesting: http://lesswrong.com/lw/cbs/thoughts_on_the_singularity_institute_si/

Re: Holden Karnofsky's criticisms of SingInst

Postby Arepo on 2012-05-14T15:54:00

Very much. It echoes a lot of my concerns with them.
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.
User avatar
Arepo
 
Posts: 1065
Joined: Sun Oct 05, 2008 10:49 am

Re: Holden Karnofsky's criticisms of SingInst

Postby tog on 2012-05-15T08:30:00

Alan, I'd be interested to hear your thoughts on this, since I gather you used to donate to SingInst?

Re: Holden Karnofsky's criticisms of SingInst

Postby Brian Tomasik on 2012-05-18T14:17:00

I don't donate to SingInst at the moment. See this thread, including the big quote in lukeprog's first post. Currently I donate to Vegan Outreach and The Humane League.

I also liked Holden's post a lot, but I actually disagreed with some of his points -- e.g., his suggestion that SIAI should get more authoritative endorsements and especially the suggestion that it try to commercialize its work. SIAI is a philosophy organization, and it would be wasteful to work on narrow AI stuff that won't serve any purpose. No one has told Peter Singer that he needs to start a company to prove his credentials.

I thought the "tool AI" point was well taken and needs to be better addressed by SIAI.

All of that said, one of my biggest differences with Holden is that I'm not sure if preventing existential risk is a good idea in the first place. :)

Re: Holden Karnofsky's criticisms of SingInst

Postby Gedusa on 2012-06-12T16:19:00

Eliezer has written a post trying to meet one of Holden's objections in particular, namely: "Why don't we just use a tool AI / Google Maps-type AGI, instead of an agent AGI with a utility function?"

(And for those who - like me! - have a tremendously hard time understanding dense material like this, there is a comment summarizing the post.)

Re: Holden Karnofsky's criticisms of SingInst

Postby Arepo on 2012-06-13T11:43:00

Seems like a decent reply overall, albeit against the weakest part of Holden's critique. I found the fourth point deeply unconvincing, though. Holden has said what his 'extra information' is - to wit, that the world's best experts would normally test a complicated programme by running it, isolating what (inevitably) went wrong by examining the results it produced, rewriting it, and then doing it all again.

Almost no programmes are glitch-free, so this is at best an optimization process, and one which - as Holden pointed out - you can't apply to this type of AI. If (/when) it goes wrong the first time (and assuming SIAI's general predictions about unfriendly AI are right), you don't get a second chance.
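
To make the contrast concrete, here's a minimal sketch of that loop in Python (purely illustrative - the function names are hypothetical placeholders, not anyone's actual code):

    # Ordinary engineering: run the programme, inspect what went wrong,
    # rewrite, and repeat until the tests stop failing.
    def iterate_until_correct(build, run_tests, fix):
        program = build()
        while True:
            failures = run_tests(program)     # run it and examine the results
            if not failures:
                return program                # only ship once nothing breaks
            program = fix(program, failures)  # rewrite and try again

    # The scenario Holden worries about: the first real run *is* the test,
    # so there is no loop - and no second chance if it goes wrong.
    def deploy_once(build, run_for_real):
        program = build()
        return run_for_real(program)

The safety story for normal software rests on being able to go round that loop as many times as you need; the worry is precisely that an agent-style AGI only ever gives you the second function.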
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.
User avatar
Arepo
 
Posts: 1065
Joined: Sun Oct 05, 2008 10:49 am

Re: Holden Karnofsky's criticisms of SingInst

Postby Arepo on 2012-06-13T12:23:00

Having said that, I think the Pascalian reasoning behind SIAI’s mission doesn't work. Unless we expect their mission to succeed, and to succeed soon, we could get a lot of compounding benefit just by funnelling resources into the best conventional charities, which would put us in a much better position to divert resources to FAI research later.

I also think their value structure is broken – CEV is just preference utilitarianism, less well defined, and preference utilitarianism IMHO is already incoherent, simply because ‘a preference’, once defined, is either i. still vague, ii. expressible as happiness/unhappiness with some superfluous details bolted on, or iii. part of a psychotic value system that removes emotion from the picture altogether, allowing total emotional torture in the name of expediting ‘goal-seeking behaviour’.

I have never seen them give any argument against (relatively) simple hedonistic util as a value system for an AI other than:

a) people don’t always want happiness (which begs the question – what people want is the sole criterion for judging the worth of something only if you already accept some form of preference consequentialism);

b) it would lead to a utilitronium shockwave that would destroy life on earth as we know it (begging the question again, by assuming that’s a bad outcome); or

c) it might lead to an AI tiling the universe with smiley faces (basically a complaint that HU is ill-defined – which might be fair comment in itself, but is ludicrous once it includes the suppressed premise that PU – sorry, CEV – is clearer).

I’m sure I sound quite aggressive here (I’m a bit fed up with the glib dismissals I get every time I raise the above objections – dismissals that basically just repeat the arguments I’ve criticised), but I really would love to know if anyone can point to an argument they’ve made that refutes – or at least takes into account – these kinds of concerns.
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.
User avatar
Arepo
 
Posts: 1065
Joined: Sun Oct 05, 2008 10:49 am

