Welcome to steven0461

Welcome to steven0461

Postby Arepo on 2008-11-21T12:55:00

'Bout time we started welcoming people properly...

So hi Steven - will you tell us a bit about yourself? I'm guessing from your blog you're a transhumanist, but I'm curious what that actually means to you - thus far I'm ambivalent about THism, mainly because it's difficult to find out what it actually implies. Presumably few rational people would challenge the idea that using technology to improve our quality of life is a sensible thing to do in itself, but the salient questions must be about how to weigh the risks of technologies whose consequences we don't fully understand vs their expected benefits, and about how much we should focus on first-world-improving technologies vs implementing (eg) basic sustainable agriculture in the third world.

Most of the TH literature I've seen is quite vague on such questions...
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.

Re: Welcome to steven0461

Postby faithlessgod on 2008-11-21T16:02:00

Hi Steven, welcome!

I too would be interested in how transhumanism relates to utilitarianism, and in particular to issues concerning animals and the environment.

PS Arepo what do you think of my sig? ;) ;)
Do not sacrifice truth on the altar of comfort

Re: Welcome to steven0461

Postby steven0461 on 2008-11-21T20:05:00

Hi all, and thanks for the welcome! Just wrote a long reply on transhumanism which I'll put in a separate post.

Other than the futurist stuff I'm extremely interested in applying insights from math, statistics, econ, philosophy, and so on to the problem of how we as humans can get more skilled at figuring out the truth when we have less than complete information (which is just about always). I highly recommend Overcoming Bias as a source of insights.
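To give the flavour of the sort of insight I mean, here's the simplest possible example: Bayes' theorem, which says exactly how far a piece of evidence should move a belief. A toy sketch in Python; all the numbers are invented for illustration:

```python
# Toy Bayesian update: how far should one piece of evidence
# move a belief? All numbers invented for illustration.
prior = 0.01               # P(hypothesis) before seeing the evidence
p_e_given_h = 0.90         # P(evidence | hypothesis)
p_e_given_not_h = 0.10     # P(evidence | not hypothesis)

# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)
p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
posterior = p_e_given_h * prior / p_e

print(f"posterior = {posterior:.3f}")  # ~0.083
```

Evidence nine times likelier under the hypothesis than under its negation still only moves a 1% prior to about 8%; unaided intuition tends to jump straight to "probably true", which is exactly the sort of mistake OB keeps hammering on.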

I'm a consequentialist; my best guess is that what we should be maximizing is some function mostly of positive conscious experiences and the absence of negative ones, though not just those things. I don't believe the value of a 4D universe depends only on the happiness of the mind states contained within it... "organic unities" and so on. I'm not sure how exactly, but I don't think it's just desire satisfaction, either.


Re: Welcome to steven0461

Postby steven0461 on 2008-11-21T20:09:00

Yeah, there's some vagueness as to what transhumanism is. I don't think the label matters much, but to the extent that I'd call myself a "transhumanist", it's because I take very seriously the possibility that in the not-too-distant future we'll be able to use technology not just to improve the quality of human life, but to do so by changing what you might call the human hardware defaults; and I think this could work out to be a wonderful thing, depending on whether we're wise about it. On the other hand, things like artificial general intelligence, molecular nanotechnology, and maybe cryonics don't fall under "changing human hardware defaults" but would still be considered "transhumanist" subjects, so really it's just a cluster of advanced technologies that a particular subculture of people is interested in.

There are indeed huge risks from these technologies, and people should focus on them a lot more than they're currently doing. See here and here for some writings by transhumanists who agree. I (and many transhumanists) would not endorse a version of transhumanism that made blanket statements like "technology is good", or that said we should push ahead with all technologies as fast as possible. I do think the most helpful strategy is going to be to think very hard about the possibilities in advance, and to push those technologies that we can expect to mitigate rather than worsen the risks (e.g. friendly AI), rather than to declare (as many people have done) either that it's too silly to consider or that we should somehow try to keep the whole technology genie in the bottle. I also think that in the long run, if the world does get into some safe stable state, life will be better (and not somehow "meaningless") with things like extended lifespans and technologically augmented happiness.

On the question of "how much we should focus on first-world-improving technologies vs implementing (eg) basic sustainable agriculture in the third world", I would say that the most utilitarianly useful transhumanist projects are those trying to safeguard the future for human civilization as a whole (the future being much bigger than the present). But even in cases where at first they'll be helpful only to currently-living individuals in the western world (like SENS), it's arguable that the return in expected human welfare is high, because so few resources are being expended on them. (I haven't done any calculations or anything on that, though.) Anyway, I'm not sure this isn't moot; energy invested in transhumanist projects would mostly be substituted out of less worthy pursuits than helping the third world.
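To make "much bigger" concrete, here's the usual back-of-envelope version of that argument, with every number invented purely for illustration: if the accessible future could hold on the order of 10^16 lives, then even a minuscule reduction in extinction probability dominates in expectation:

\[
\Delta\mathbb{E}[\text{lives}] \;=\; \Delta p \times N_{\text{future}} \;=\; 10^{-6} \times 10^{16} \;=\; 10^{10}
\]

Ten billion expected lives from a one-in-a-million shift; the "bigness" premise is doing essentially all the work.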

On the topic of transhumanism + utilitarianism + animals you can't beat the HedWeb. On the environment I'm probably not the right guy to ask, as I tend to think direct technological risks like human-hostile AI and high-tech warfare are far more dangerous than environmental risks. To the extent that transhumanist technologies can create greater problem-solving ability and/or moral virtue in humans and/or machines, I would expect that to be useful for any issue including third-world poverty and environmental ones.


Re: Welcome to steven0461

Postby Arepo on 2008-11-22T01:37:00

steven0461 wrote:Other than the futurist stuff I'm extremely interested in applying insights from math, statistics, econ, philosophy, and so on to the problem of how we as humans can get more skilled at figuring out the truth when we have less than complete information (which is just about always). I highly recommend Overcoming Bias as a source of insights.


I read OB from time to time, but I still haven't quite figured out what its theme is supposed to be. Most of what I've seen there isn't obviously to do with biases.

I'm a consequentialist; my best guess is that what we should be maximizing is some function mostly of positive conscious experiences and the absence of negative ones, though not just those things. I don't believe the value of a 4D universe depends only on the happiness of the mind states contained within it... "organic unities" and so on. I'm not sure how exactly, but I don't think it's just desire satisfaction, either.


Would you care to expand on this in the util forum? IMO the word 'value' doesn't really have an intelligible referent, except when you use it to mean 'that which humans [x]' - where x is the verb that best describes your consequentialist goals. (Something like 'enjoy' for Ryan and me, 'desire' for faithlessgod, and so on.)

push those technologies that we can expect to mitigate rather than worsen the risks (e.g. friendly AI), rather than to declare (as many people have done) either that it's too silly to consider or that we should somehow try to keep the whole technology genie in the bottle.


Agreed in principle. That said, the possibility of a hostile AI doesn't worry my classical util sensibilities much. As long as there's happiness being experienced, it's not really important who or what is doing the experiencing. Sure, if the AI killed us all it would make for a pretty unpleasant few years, but assuming it then got on with the task of terraforming earth into a happiness generator, the trade-off would quickly be repaid - factory farming and bloody law of the jungle all gone overnight.
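(Crudely, and with made-up symbols rather than a real model: if the takeover costs a one-off disutility C and the happiness generator then produces utility at rate h, the ledger is

\[
U(t) = -C + h\,t > 0 \quad\Longleftrightarrow\quad t > C/h,
\]

and for an h as large as a planet-scale happiness generator implies, C/h is tiny - hence "quickly repaid".)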

Also, since I believe that CU is in a sense the best view, I find it hard to imagine a hyperintelligent AI being actively evil.

That said, I can imagine one that was a bit confused in its early life accidentally wiping us out by stretching its metaphorical legs, which would be a shame - but that doesn't seem anywhere near as likely as us accidentally doing it to ourselves.

(the future being much bigger than the present)


Maybe not that much bigger :P

(by the way, am I the only one who finds Bostrom incredibly frustrating? So many people I agree with on so many things rate his arguments very highly, but so much of what he writes seems devoid of purpose - I find the doomsday argument particularly banal)

even in cases where at first they'll be only helpful to currently-living individuals in the western world (like SENS) it's arguable that the return in expected human welfare is high because there's so little in resources being expended on them.


I'm not entirely sure what you mean here. Do you mean in a law of diminishing returns sense, that 100 more dollars spent on any 100 dollar research program is likely to create more benefit than 100 more dollars spent on a multimillion dollar program? If so, I'd say that's not obviously true, especially when compared to the relative cheapness of providing basic services for people in the third world. Maybe worth a thread in itself, if you're up for making the argument?

On the topic of transhumanism + utilitarianism + animals you can't beat the HedWeb.


Yeah, I like a lot of David Pearce's writings (though I prefer the non-TH stuff). I think he owns the felicifia.org domain, incidentally.
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.

Re: Welcome to steven0461

Postby Arepo on 2008-11-22T01:38:00

faithlessgod wrote:PS Arepo what do you think of my sig? ;) ;)


Eerily familiar :?
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.

Re: Welcome to steven0461

Postby steven0461 on 2008-11-22T18:34:00

Arepo wrote:Sure, if the AI killed us all it would make for a pretty unpleasant few years, but assuming it then got on with the task of terraforming earth into a happiness generator, the trade-off would quickly be repaid - factory farming and bloody law of the jungle all gone overnight.

Also, since I believe that CU is in a sense the best view, I find it hard to imagine a hyperintelligent AI being actively evil.

I don't think this outcome is either the most desirable or at all likely. CU may be the "best" view, but whatever "best" means here, it's not a kind of "best" that forces all possible intelligent agents to implement it. There's nothing incoherent about imagining an AI that's set up to maximize the number of, say, bicycles in the universe, to take a silly example (you can describe such an agent as a mathematical system without logical contradictions). It can be shown, by the way, that all goal systems, even innocent-seeming ones, will give the AI a number of "drives" - like acquiring computing resources - that (if the AI were sufficiently powerful) would cause the extinction of life on Earth as a side effect. No active evil needed, just indifference; a regard for happiness does not spring up in an AI without specific effort, and I'm confident that any argument you might have to the effect that it does ("psychological hedonism" and the like?) is confused.
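To sketch what I mean by "describe one as a mathematical system": the agent below is a perfectly ordinary expected-utility maximizer, and nothing in its machinery cares whether the goal is happiness or bicycles - the goal is just a plug-in parameter. (A toy illustration; every name and number here is invented.)

```python
# A toy expected-utility maximizer. The agent's machinery is identical
# whatever the goal; only the utility function differs.
# Everything here is invented for illustration.

def choose_action(outcomes, utility):
    """Return the action with the highest expected utility.

    outcomes: dict mapping action -> list of (probability, world) pairs,
              where a world is a dict of features.
    utility:  function from a world to a number.
    """
    def eu(action):
        return sum(p * utility(world) for p, world in outcomes[action])
    return max(outcomes, key=eu)

outcomes = {
    "convert_matter_to_factories": [(1.0, {"bicycles": 10**12, "happy_minds": 0})],
    "leave_biosphere_alone":       [(1.0, {"bicycles": 10**3,  "happy_minds": 10**10})],
}

bicycle_maximizer   = lambda world: world["bicycles"]
happiness_maximizer = lambda world: world["happy_minds"]

print(choose_action(outcomes, bicycle_maximizer))    # convert_matter_to_factories
print(choose_action(outcomes, happiness_maximizer))  # leave_biosphere_alone
```

Both agents are equally coherent; the first wipes out the biosphere as a side effect without a flicker of malice.
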
Do you mean in a law of diminishing returns sense, that 100 more dollars spent on any 100 dollar research program is likely to create more benefit than 100 more dollars spent on a multimillion dollar program? If so, I'd say that's not obviously true, especially when compared to the relative cheapness of providing basic services for people in the third world.

Yes, I mean in a law of diminishing returns sense. It seems to me that if one effort is much smaller than another for reasons other than infeasibility (an important reason in this case being that life extension is considered an issue for SF weirdos only, rather than a generally accepted humanitarian goal), then there's going to be far more low-hanging fruit there. I agree that this isn't very rigorous. From a pure utilitarian perspective, avoiding "existential risks" more or less trumps all else anyway, because of the aforementioned bigness of the future; but on the other hand, taking that view consistently leads to somewhat monstrous-seeming conclusions, and for "indirect-utilitarian" reasons solving more immediate problems should probably not be seen as worthless.
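The low-hanging-fruit point can be put in toy numbers. Suppose - purely for illustration, real funding curves are messier - that total benefit grows logarithmically with funding F, so the marginal benefit of an extra dollar is proportional to 1/F:

```python
# Toy diminishing-returns model: benefit B(F) = k * log(F), so the
# marginal benefit of one more dollar is B'(F) = k / F.
# Figures invented purely for illustration.
tiny_program = 100          # a $100 research effort
huge_program = 10_000_000   # a $10M research effort

# Under this model, the same marginal dollar does huge/tiny times
# more good at the tiny programme than at the huge one:
print(f"{huge_program / tiny_program:,.0f}x better at the margin")  # 100,000x
```

Of course the log model is doing all the work there; whether life extension versus third-world basic services actually sits on such a curve is exactly the empirical question you're raising.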

Re: what "value" means, I'm just using it as meaning whatever is a good outcome, whatever is "moral", whatever we "should" strive toward. I think I agree with the metaethical positions that Eliezer Yudkowsky has been setting out on Overcoming Bias; it's kind of complicated and I can't really find a single good reference, maybe try here.


Re: Welcome to steven0461

Postby Arepo on 2008-11-23T12:41:00

steven, can I persuade you to add to the recommended reading thread? So far all the suggestions are mine, so it would be nice to get some fresh ideas in there.

It would also be really helpful for categorisation (feel free to suggest new or alternative headings in that thread) - few people will have time to read through all your links in order, but for anyone wanting to research a specific subject, categorised links should come in very useful.
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.

