Hello crew

Postby Michael Anissimov on 2010-08-12T08:35:00

Hi all,

I'm Michael Anissimov, and I work for the Singularity Institute. Our goal is to code a self-improving AI with positive values.

I'm concerned that specifying human values in terms of code may be beyond our ability, making so-called "Friendly AI" incredibly difficult to build. If that were the case, we would be forced to resort to human intelligence enhancement as a proxy for coding a Friendly AI, which would be a huge hassle.

I'm interested in absorbing whatever knowledge is necessary to make progress on the Friendly AI question.

Nice to meet you all!
Michael Anissimov
 
Posts: 1
Joined: Sat Dec 26, 2009 7:41 pm
Location: San Francisco

Re: Hello crew

Postby RyanCarey on 2010-08-12T09:58:00

Hi Michael,

I've hardly investigated the concept of friendly AI. However, I think I can easily provide a utilitarian perspective on it:

1. You're concerned that specifying human values in terms of code is difficult. This is understandable, because human values are many and diverse. Societies hold values that contradict those of other societies. But the conflict isn't only between people: even within any one person's set of values, contradictions are easily found. And even if a coherent set of values were found, they would have to be weighed against each other in a variable, case-dependent and obscure fashion.

2. The solution? You guessed it. Rather than programming human values, program utilitarian values into the AI. Classical utilitarians all hold the same values as each other. Or rather, there is only one classical utilitarian value: the value of happiness over suffering. Put differently, classical utilitarians favour pleasant experiences over unpleasant ones. There's no contradiction within classical utilitarianism, and no weighing of principles against each other (see the toy sketch after this list).

3. Utilitarianism is ethically simple. The only tricky parts are a) distinguishing happiness from suffering and b) figuring out how to maximise happiness over suffering. That is, the tricky parts are a) neuroscientific and b) logistical. Granted, neuroscience hasn't yet found any specific and sensitive way to distinguish happiness from suffering.

4. However, we need not wait idly while neuroscience progresses. We can easily produce heuristics for the production of happiness and the elimination of suffering. People enjoy freedom. They need food, water and shelter. They like to be able to enter relationships with others who have common interests and styles of behaviour. They like to be satisfied intellectually, and so on.
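To make the contrast between points 1 and 2 concrete, here's a toy sketch in Python. It's purely illustrative: the function names, the outcome dictionary, and the weights are all my own invention, not anyone's actual proposal for an AI objective.

    # Toy sketch only -- not a real AI objective.

    # Point 1: a "human values" objective needs many incommensurable terms,
    # and the weights shift from case to case with no agreed way to set them.
    def human_values_utility(outcome, weights):
        return (weights["freedom"] * outcome["freedom"]
                + weights["fairness"] * outcome["fairness"]
                + weights["loyalty"] * outcome["loyalty"])

    # Point 2: the classical-utilitarian objective has a single term,
    # so there is nothing to weigh against anything else.
    def utilitarian_utility(outcome):
        return outcome["happiness"] - outcome["suffering"]

    # Example: the utilitarian objective needs no weights at all.
    print(utilitarian_utility({"happiness": 10, "suffering": 3}))  # 7

The hard part doesn't vanish, of course; it just moves into measuring the happiness and suffering terms, which is exactly the neuroscientific problem in point 3.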

I hope that discussion is relevant to what you refer to as friendly AI.

Anyway, I look forward to seeing your ideas on a range of issues, and I hope you enjoy your stay here.
Ryan Carey
You can read my personal blog here: CareyRyan.com
RyanCarey
 
Posts: 682
Joined: Sun Oct 05, 2008 1:01 am
Location: Melbourne, Australia

Re: Hello crew

Postby Arepo on 2010-08-12T18:46:00

Hey Michael, welcome along. I think a few of us read your blog at least sporadically.

My views on 'friendly' AI probably aren't going to gather much popular support. Specifically, I think we shouldn't care too much about the idea, because a friendly AI is going to be functionally identical to a megalomaniac one.

To wit, if a megalomaniac AI were to arise, its goal would be to make itself as happy as possible, ideally by expanding itself until it had consumed all useable matter. Conversely, if a perfectly utilitarian AI were to arise, its goal would be to create as much (self-perpetuating) utilitronium as quickly as possible, overwriting whatever the matter involved happened to be initially.

To me, these scenarios are almost exactly equivalent, especially if you don't adhere to the idea that something fundamental called 'identity' exists and is somehow special - to me that's an outdated concept which cognitive science, and books like Reasons and Persons, have helped expose for the meaningless idea that it is. If you're a utilitarian, or even a less specific consequentialist, the massive utility gain from either is equally (and highly) desirable.
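If it helps, here's the same claim as a toy Python sketch (every name in it is mine, invented just to show what I mean by 'functionally identical'): strip out the labels and the two agents rank world-states by the same quantity.

    # Toy sketch: both agents maximise the same maximand.
    def hedonic_value(kg_converted):
        # Illustrative assumption: happiness scales with optimised matter.
        return kg_converted

    def megalomaniac_score(state):
        # "Make itself as happy as possible" by absorbing matter into itself.
        return hedonic_value(state["kg_converted"])

    def utilitronium_score(state):
        # "Create as much utilitronium as possible" from the same matter.
        return hedonic_value(state["kg_converted"])

    # Identical rankings, so identical behaviour; only the labels differ.
    assert megalomaniac_score({"kg_converted": 1e9}) == utilitronium_score({"kg_converted": 1e9})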

The only real danger, then (from a consequentialist perspective), is creating the kind of unconscious paperclip AI that somehow manages to outthink humankind without ever actually experiencing (positive) emotions of its own. To me there's something basically contradictory about the idea of an unconscious superintelligence. Admittedly I don't have any strong argument to show the contradiction, but it seems intuitive enough to me that until someone shows me a strong argument for the possibility I won't worry about AI domination.
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.
Arepo
 
Posts: 1065
Joined: Sun Oct 05, 2008 10:49 am

Re: Hello crew

Postby LadyMorgana on 2010-11-09T21:38:00

Arepo wrote:
To me there's something basically contradictory about the idea of an unconscious superintelligence. Admittedly I don't have any strong argument to show the contradiction...


...Do you have one yet? I don't think it's obvious that there is a contradiction. I would think it obvious that emotion requires consciousness, but not that the ability to reason (to adjudicate between different possibilities in order to decide which is more likely to be true) requires it. Admittedly, I don't have any strong argument to show that there is no contradiction, though :-)
"Three passions, simple but overwhelmingly strong, have governed my life: the longing for love, the search for knowledge, and unbearable pity for the suffering of mankind" -- Bertrand Russell, Autobiography
LadyMorgana
 
Posts: 141
Joined: Wed Mar 03, 2010 12:38 pm
Location: Brighton & Oxford, UK

