steven0461 wrote: Other than the futurist stuff, I'm extremely interested in applying insights from math, statistics, econ, philosophy, and so on to the problem of how we as humans can get more skilled at figuring out the truth when we have less than complete information (which is just about always). I highly recommend Overcoming Bias as a source of insights.
I read OB from time to time, but I still haven't quite figured out what its theme is supposed to be. Most of what I've seen there doesn't obviously have much to do with biases.
steven0461 wrote: I'm a consequentialist; my best guess for what we should be maximizing is some function mostly of positive conscious experiences and the absence of negative conscious experiences, but not just those things. I don't believe the value of a 4D universe depends only on the happiness of the mind states contained within it... "organic unities" and so on. I'm not sure exactly how, but I don't think it's just desire satisfaction, either.
Would you care to expand on this in the util forum? IMO the word 'value' doesn't really have an intelligible referent, except when you use it to mean 'that which humans [x]' - where x is the verb that best describes your consequentialist goals. (Something like 'enjoy' for Ryan and me, 'desire' for faithlessgod, and so on.)
steven0461 wrote: ...push those technologies that we can expect to mitigate rather than worsen the risks (e.g. friendly AI), rather than to declare (as many people have done) either that it's too silly to consider or that we should somehow try to keep the whole technology genie in the bottle.
Agreed in principle. That said, the possibility of a hostile AI doesn't worry my classical util sensibilities much. As long as there's happiness being experienced, it's not really important who or what is doing the experiencing. Sure, if the AI killed us all it would make for a pretty unpleasant few years, but assuming it then got on with the task of terraforming earth into a happiness generator, the trade-off would quickly be repaid - factory farming and the bloody law of the jungle all gone overnight.
Also, since I believe that CU is in a sense the best view, I find it hard to imagine a hyperintelligent AI being actively evil.
Still, I can imagine one that was a bit confused in its early life accidentally wiping us out while stretching its metaphorical legs, which would be a shame - but that doesn't seem anywhere near as likely as us accidentally doing it to ourselves.
Maybe not that much bigger.
(By the way, am I the only one who finds Bostrom incredibly frustrating? So many people I agree with on so many things rate his arguments very highly, but so much of what he writes seems devoid of purpose - I find the doomsday argument particularly banal.)
steven0461 wrote: ...even in cases where at first they'll only be helpful to currently-living individuals in the western world (like SENS), it's arguable that the return in expected human welfare is high because there's so little in the way of resources being expended on them.
I'm not entirely sure what you mean here. Do you mean it in a law-of-diminishing-returns sense - that 100 more dollars spent on a 100-dollar research program is likely to create more benefit than 100 more dollars spent on a multimillion-dollar program? If so, I'd say that's not obviously true, especially when compared to the relative cheapness of providing basic services for people in the third world. Maybe worth a thread in itself, if you're up for making the argument?
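For concreteness, here's the arithmetic under one common modelling assumption - logarithmic returns to funding, which is my illustrative assumption rather than anything you've stated. If the benefit of a program funded at level $x$ is $B(x) = k \ln x$, then an extra \$100 adds

$\Delta B = k \ln\left(\frac{x + 100}{x}\right)$

For a \$100 program that's $k \ln 2 \approx 0.69k$; for a \$10{,}000{,}000 program it's $k \ln(1.00001) \approx 0.00001k$ - a ratio of roughly 70,000 in favour of the small program. So the claim goes through if returns really are logarithmic; my doubt above is precisely about whether that assumption holds once you price in alternatives like basic services in the third world.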
steven0461 wrote: On the topic of transhumanism + utilitarianism + animals you can't beat the HedWeb.
Yeah, I like a lot of David Pearce's writings (though I prefer the non-transhumanist stuff). I think he owns the felicifia.org domain, incidentally.