I see a lot of you talking about simulations and about creating or losing util in them. What about starting a life simulation NOW, with a digital entity constructed using current AI technology that we assume can experience as much happiness as possible? I picture it like a whole bunch of really happy insects, as many of them as possible. It might create warm fuzzies. The downside I can see is the effort it would take to create such a simulation and the energy spent running it on your computer, which might make the whole thing a net loss. Something to think about. Maybe tell your programmer friends to get on it.
Simulations
6 posts
Re: Simulations
A thought occurred to me after I posted, a bit of an extension of this.
If you could reduce pain and pleasure to just a class of algorithms or physical relations, maybe what matters is not the result or output of an algorithm but how it reaches it, that is, how it is implemented. If I were playing with a doll, and my body were the brain that moved the doll around in a dollhouse, I could make the doll cry or react to pain or otherwise give the illusion of suffering, but it seems absurd to me that the doll is experiencing suffering when I (the brain) am not suffering from how I am playing with the doll. But maybe if the doll did the same thing but reached it in a different way, the doll falling out of the dollhouse would cause suffering.
I have trouble explaining my thoughts. Maybe you can come up with your own questions and answer them.
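The closest I can get to pinning the distinction down is a toy sketch like the one below. It is not meant as a real model of suffering; the two classes and the event name are just things I made up to show the same outward behavior coming from different internal routes.
```python
# Two "dolls" that produce the same outward behavior when they fall,
# but arrive at it by very different internal routes.

class LookupDoll:
    """Maps each event straight to a canned response,
    like me moving the doll's limbs from outside."""
    RESPONSES = {"falls_out_of_dollhouse": "cries"}

    def react(self, event):
        return self.RESPONSES.get(event, "does nothing")


class ProcessDoll:
    """Keeps an internal 'damage' state and derives its response from it."""
    def __init__(self):
        self.damage = 0

    def react(self, event):
        if event == "falls_out_of_dollhouse":
            self.damage += 10  # internal state actually changes
        return "cries" if self.damage > 5 else "does nothing"


for doll in (LookupDoll(), ProcessDoll()):
    print(type(doll).__name__, doll.react("falls_out_of_dollhouse"))
# Both print "cries": same output, different implementation.
```
My question is whether anything could ever hang, morally, on the difference between those two react methods.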
-
strangeopendoor - Posts: 3
- Joined: Wed Dec 08, 2010 7:12 pm
Re: Simulations
Your first post reminds me of Norn torture. According to the wiki, the majority of the people who are against it consider norns sentient.
My best guess for what matters is based on operant conditioning: if something makes you happy, you do it more. So if you start doing whatever you were doing more and more, that suggests it makes you happy. I doubt that's the whole picture, though.
As for your thought experiment, I doubt you'd be biased towards putting the doll in a given situation unless doing so makes you happy. If you like torturing the doll, and it ends up getting tortured more and more, that would seem to suggest it likes it. That said, I still think how it's implemented has some effect; I can't imagine it mattering if you use a look-up table.
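Crudely, the frequency inference I have in mind looks something like this. It's only a toy, and the situation names are invented for the example:
```python
# Toy version of the inference: situations the doll ends up in more and
# more often are (weakly) taken as ones the doll-plus-player system "likes".
from collections import Counter

def infer_preferences(event_log):
    """event_log: chronological list of situations the doll was put in."""
    half = len(event_log) // 2
    early, late = Counter(event_log[:half]), Counter(event_log[half:])
    return [situation for situation in late
            if late[situation] > early.get(situation, 0)]

log = ["tea_party", "tea_party", "tortured", "tortured", "tortured", "tortured"]
print(infer_preferences(log))  # ['tortured'] -- its frequency is rising
```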
Consequentialism: The belief that doing the right thing makes the world a better place.
-
DanielLC - Posts: 703
- Joined: Fri Oct 10, 2008 4:29 pm
Re: Simulations
Interesting link, DanielLC. Interesting points you've brought up too, strangeopendoor. I think no one is ready to create a simulation and try to ensure its wellbeing, because no one is convinced that any software created so far is at all sentient. Personally, I think we'd be silly to discuss it in anything but the most theoretical philosophical terms, lest we be seen as insane! Still, it's an interesting topic.
You can read my personal blog here: CareyRyan.com
-
RyanCarey - Posts: 682
- Joined: Sun Oct 05, 2008 1:01 am
- Location: Melbourne, Australia
Re: Simulations
I don't think it's insane to contemplate carrying out such a proposal (I've considered it myself), but I agree with RyanCarey that any program we could write with our current understanding of consciousness would almost certainly not be sentient in the way that we mean the term. Of course, we can choose the types of algorithms and implementations that we care about, but I do think there's a class of self-reflective, aware processes that we could agree should count as happiness or suffering in an ethical sense. I doubt that any current computer programs meet those criteria.
Existing animals like insects provide interesting case studies for exactly where these boundaries should be drawn. Presumably unconscious reflex responses don't matter morally, but what additional machinery is required for an organism to have "the feeling of what it's like to be conscious"?
-
Brian Tomasik - Posts: 1130
- Joined: Tue Oct 28, 2008 3:10 am
- Location: USA
Re: Simulations
DanielLC wrote: Your first post reminds me of Norn torture. According to the wiki, the majority of the people who are against it consider norns sentient.
Insane. I didn't expect any less from a species that drinks the blood of a tortured one out of a thousands-of-years-old book every last day of the week. Cuckoo!
-
Gee Joe - Posts: 93
- Joined: Tue Feb 09, 2010 4:44 am
- Location: Spain. E-mail: michael_retriever at yahoo.es