Hi all,
I'm Michael Anissimov, and I work for the Singularity Institute. Our goal is to code a self-improving AI with positive values.
I'm concerned that specifying human values in code may be too difficult for us to achieve, and that so-called "Friendly AI" may therefore be incredibly hard to build. If that turns out to be the case, we would be forced to fall back on human intelligence enhancement as a stepping stone toward Friendly AI, which would be a huge hassle.
I'm interested in absorbing whatever knowledge is necessary to make progress on the Friendly AI question.
Nice to meet you all!