The Brights and Morality


Postby faithlessgod on 2008-11-07T13:06:00

RyanCarey said:
I have read your piece on the Brights and morality on your double standards blog. While I am only a bright in so far as I have made a few posts on their forum, I feel I should point out that the dispute might simply be a misunderstanding.

They've said that they believe morality is evolved and biologically underpinned. You've countered that morals are a matter of philosophical enquiry, not empirical enquiry. But as far as I'm concerned, both statements are true. That is, moral opinions occur through natural processes which can be empirically studied, and moral truth can be obtained through philosophy. I daresay I don't think the Brights necessarily intended to promote intuitionism (the idea that naturally occurring morals are good morals) or the idea that 'is' equals 'ought'. Unless you reject the branch of psychology which studies the formation of moral opinion, I don't see what the problem is.

First, note that my original post is at http://impartialism.blogspot.com/2008/1 ... ality.html
Secondly, this thread is not specifically about my concern over the Brights; that point is not relevant here, but I do want to respond to what Ryan understands above. In another forum's thread on this topic I said:

1. Plenty of naturalists reject the possibility of a science of morality - we are united in disagreeing with them.
2. Other naturalists, me included, disagree with the definition of morality presented in the first draft statement of the morality project - and it is quite irrelevant how many scientific studies are cited; it is a question over what is the target domain to be investigated that could be called morality. The definition specifies something which is not, in my view and others' (e.g. Bobsie's, whether he thinks there could be a science of morality or not), morality.
3. This is only an issue with respect to the Brights movement to the degree that this is officially endorsed by the movement, which, as it currently stands in its presentation on the main site, it appears to be.

Point 3 is irrelevant here. It is point 2 over which there is confusion. To re-emphasize it, I will re-quote: "it is a question over what is the target domain to be investigated that could be called morality". I am a consequentialist and utilitarian, but of a particular type called Desire Utilitarianism. This is an ethically reductive, naturalist form of realism; in other words, I argue that a science of morality (for want of a better phrase) is possible. I accept that there is a branch of psychology studying moral thinking, but that is not the same as studying morality empirically. Related, of course, but not the same.

"They've said that they believe morality is evolved and biologically underpinned" - yes, and so is astronomy, astrology, pigeon shooting, whatever. It is not saying very much, AFAICS. It is either trivially true or substantively misplaced, and it is the latter I am highlighting here. What morality is, is a physical and material process amenable to empirical analysis - moral rules and codes being one of the outcomes of such a process, and these also being amenable to empirical analysis. The process itself is the effects that people have on each other through their social interactions, which includes their reasoning over and application of "maps" about what they are doing and what is good/bad and right/wrong. But without a "territory" - the actual physical, material effects we have on each other - there is no underlying problem of morality. My issue with the Brights' definition is that it defines all meaning out of the term, focusing instead on how we produce maps. In other words, their definition disregards consequences - the territory - however we might differ amongst ourselves as to what those are.

Since we are all utilitarians here, what does everyone else think is "the target domain to be investigated that could be called morality"?
Do not sacrifice truth on the altar of comfort
faithlessgod
 
Posts: 160
Joined: Fri Nov 07, 2008 2:04 am
Location: Brighton, UK

Re: The Brights and Morality

Postby RyanCarey on 2008-11-08T00:29:00

faithlessgod wrote:"it is a question over what is the target domain to be investigated that could be called morality". I am a consequentialist and utilitarian, but of a particular type called Desire Utilitarianism. This is an ethically reductive, naturalist form of realism; in other words, I argue that a science of morality (for want of a better phrase) is possible. I accept that there is a branch of psychology studying moral thinking, but that is not the same as studying morality empirically. Related, of course, but not the same.

"They've said that they believe morality is evolved and biologically underpinned" - yes, and so is astronomy, astrology, pigeon shooting, whatever. It is not saying very much, AFAICS. It is either trivially true or substantively misplaced, and it is the latter I am highlighting here. What morality is, is a physical and material process amenable to empirical analysis - moral rules and codes being one of the outcomes of such a process, and these also being amenable to empirical analysis. The process itself is the effects that people have on each other through their social interactions, which includes their reasoning over and application of "maps" about what they are doing and what is good/bad and right/wrong. But without a "territory" - the actual physical, material effects we have on each other - there is no underlying problem of morality. My issue with the Brights' definition is that it defines all meaning out of the term, focusing instead on how we produce maps. In other words, their definition disregards consequences - the territory - however we might differ amongst ourselves as to what those are.

Since we are all utilitarians here, what does everyone else think is "the target domain to be investigated that could be called morality"?

I have read the thread in which you discuss these things on the Brights Board and I have read your post here a few times over. I think that I am finally starting to understand your position. So let me summarise my understanding of it:
The Brights are designing a statement on the naturalistic origins of morals. You do not dispute that morals have naturalistic origins. The Brights produce a definition of morals which emphasises evolution, biological underpinnings and experience. You do not dispute that these are elements of a branch of psychology which studies morals, but morality means something different to you. Morality, to you, is something that can be pursued scientifically, like astronomy. You do not dispute that we are compelled to study astronomy because of our biological makeup either; however, to define astronomy as such would be silly. Astronomy is the study of celestial objects, and morality is the study of right and wrong. Specifically, you resent the fact that they omit any discussion of consequences.

Is that correct?
You can read my personal blog here: CareyRyan.com
RyanCarey
 
Posts: 682
Joined: Sun Oct 05, 2008 1:01 am
Location: Melbourne, Australia

Re: The Brights and Morality

Postby faithlessgod on 2008-11-08T19:21:00

RyanCarey wrote:Astronomy is the study of celestial objects, and morality is the study of right and wrong. Specifically, you resent the fact that they omit any discussion of consequences.

Is that correct?

Yes indeed - apart from the "resent" element; "concerned" rather than "resent", maybe. Without consequences there is nothing to talk about: right and wrong refer to actions, and we understand - can evaluate - whether they are right or wrong, at least partly, from the consequences (with a different emphasis depending on whether one is a consequentialist, deontologist, virtue ethicist etc.). To exclude any notion of consequences is to define away the problem space to which solutions could be addressed. The Brights' definition is an error of philosophy, not science - and a popular error to boot, shared by subjectivists, relativists, non-cognitivists and so on.

Re: The Brights and Morality

Postby RyanCarey on 2008-11-09T00:45:00

I see a place for both the psychological study of moral opinion and the pursuit of moral truth. The pursuit of moral truth ought to be scientific too. I agree that moral opinions arise because of the consequences of actions. But I don't see the Brights' definition of morality and our view of morality as incompatible.

Are they trivialising the pursuit of moral truth? In my opinion, they're merely completing the mission of their project. Reality About Human Morality is about the ‘is’, not the ‘ought’ of morality. Look at their definition of morality. Motives, intentions and actions. Do you really disagree with the statement that ‘Motives, intentions and actions are an evolved repertoire of cognitive and emotional mechanisms with distinct biological underpinnings, as modified by experience'? They make no claims about ethical truth. They are not getting bogged down by competing views of what the 'problem space' is because they are not looking for solutions.

Specifically, consequences can underlie ‘evolution’. Consequences can also be included in ‘modification by experiences’. People experience consequences of others’ actions and they act to impart consequences on others.

Your complaint here is that they have failed to emphasise consequences. Personally, I mightn't mind some extra emphasis. But if they did give consequences special emphasis, mightn't some deontologists and moral subjectivists be alienated?

Re: The Brights and Morality

Postby rob on 2008-11-09T19:36:00

I found their definition of morals completely lacking in substance and quite circular. They essentially say that doing what is moral is doing what is right. I'd like to see how they define "right"... lemme guess: "doing what is moral"?

If they have a naturalistic definition, I'd love to see it. I believe there is one, and it isn't really that complicated, but the great majority of definitions of such things (including things like "happiness" and "good" and "right" and such) all seem subjective and circular to me.

rob
 
Posts: 20
Joined: Sun Nov 09, 2008 5:29 pm
Location: San Francisco

Re: The Brights and Morality

Postby faithlessgod on 2008-11-10T09:58:00

RyanCarey wrote:I see a place for both the psychological study of moral opinion and the pursuit of moral truth. The pursuit of moral truth ought to be scientific too. I agree that moral opinions arise because of the consequences of actions. But I don’t see the brights definition of morality and our view on morality as incompatible.

OK, what I am arguing towards - using the Brights' definitions - is ethics as a science: constrained by philosophy, but not just philosophy. Now, they give two definitions of morality:

    1."Morality (noun): a set of motives, intentions, and/or actions of an individual or a group conforming to principles or standards of right conduct."
    2."Morality is an evolved repertoire of cognitive and emotional mechanisms with distinct biological underpinnings, as modified by experience. (23 studies)"
As I understand it, 2 is meant to be based on 1, but it entirely misses the "conforming to principles or standards of right conduct" part of the core definition. That is, according to their own definitions it is incomplete, misleading and incorrect.

RyanCarey wrote:Are they trivialising the pursuit of moral truth? In my opinion, they're merely completing the mission of their project. Reality About Human Morality is about the ‘is’, not the ‘ought’ of morality. Look at their definition of morality. Motives, intentions and actions. Do you really disagree with the statement that ‘Motives, intentions and actions are an evolved repertoire of cognitive and emotional mechanisms with distinct biological underpinnings, as modified by experience'? They make no claims about ethical truth. They are not getting bogged down by competing views of what the 'problem space' is because they are not looking for solutions.

Then this is not a science of morality but a science of moral psychology, reasoning and moral persuasion. I have no beef with that; indeed I suspect - if I could even find the list of those 23 studies, another suspected evasion by the Brights - that is exactly what they are about. I am guessing that none of this is about standards of right and wrong, and I fail to see how one can claim a science of morality without tackling that; this is supported by their own definition 1.

RyanCarey wrote:Specifically, consequences can underlie ‘evolution’. Consequences can also be included in ‘modification by experiences’. People experience consequences of others’ actions and they act to impart consequences on others.

I agree that a broad conception of evolution includes consequences - but only evolutionary consequences, not necessarily moral consequences. This needs to be argued for, and it is nowhere argued here. To presume that evolutionary value and moral value are the same is another mistake, IMHO.

RyanCarey wrote:Your complaint here is that they have failed to emphasise consequences. Personally, I mightn't mind some extra emphasis But if they did give consequences special emphasis, mightn't some deontologists and moral subjectivists be alienated?

That is a key problem with this project. As it stands this, ironically, alienates moral objectivists such as myself - ironic given that it is meant to be about science. It does, sort of, support a moral subjectivist approach, and if it does then it completely fails to address the criticism of moral subjectivism and moral relativism by divine-command moral absolutists. Just saying "we have science and evolution behind us" is hardly going to endear us to those who reject evolution! (Remember what the intention of this project was!) IMHO, moral subjectivism and moral relativism imply there is no science of morality. Another query: if moral subjectivism were true, then surely it would be invisible to natural selection! The more I look at this as a definition of "morality" rather than of "moral reasoning" etc., the more confused it seems to be.

Re: The Brights and Morality

Postby faithlessgod on 2008-11-10T10:01:00

rob wrote:I found their definition of morals completely lacking in substance and quite circular. They essentially say that doing what is moral is doing what is right. I'd like to see how they define "right"... lemme guess: "doing what is moral"?

If they have a naturalistic definition, I'd love to see it. I believe there is one, and it isn't really that complicated, but the great majority of definitions of such things (including things like "happiness" and "good" and "right" and such) all seem subjective and circular to me.

Yes, the more I look at this, the more confused they appear to be. I totally agree with you.

(I am interested but maybe for another thread what is your naturalistic definition of morality? This might be a good topic for us utilitarians to explore.)

Re: The Brights and Morality

Postby faithlessgod on 2008-11-12T10:38:00

As I said in the other thread on a naturalistic basis of morality, I am posting the on-topic part of my reply that is relevant here:
... But you end up making the point of my main issue with the Brights. "[D]ocument[ing] the origins of a sense of right conduct" is not the same as morality. "They are looking to explore how motives, intentions, and/or actions [work]" - yes, and that also is not morality. This point is also made by the clash between their two definitions - it is bad philosophy and not scientifically justifiable...

Re: The Brights and Morality

Postby Arepo on 2008-11-12T14:17:00

While I agree that the Brights' idea of morality is at best incomplete, it seems futile to criticise them for it. Just as our goal here is to promote consequentialism, so theirs is to promote nontheism. We might think these goals compatible, but to promote something well, you have to appeal to all its advocates - and many nontheists are also nonconsequentialist.

So criticising their ambiguity on ethics is like criticising Felicifia for its ambiguity on religion. The relevant question (for utilitarians) is not 'do they profess to share our goals?' but 'do their activities coincide with our goals?' If you think that the Brights' methods for promoting secularism will improve the world, you should probably support them. If you think any of these things aren't true, then you probably shouldn't.

You might try to persuade the organisation or its members to change their view on ethics, but given that ethics aren't one of their major concerns, it seems unlikely that the wording of the paragraph in question will drastically change their activities. So it seems excessive to quit over it...
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.
Arepo
 
Posts: 1065
Joined: Sun Oct 05, 2008 10:49 am

Re: The Brights and Morality

Postby rob on 2008-11-12T17:29:00

Arepo wrote:While I agree that the Brights' idea of morality is at best incomplete, it seems futile to criticise them for it. Just as our goal here is to promote consequentialism, so theirs is to promote nontheism. We might think these goals compatible, but to promote something well, you have to appeal to all its advocates - and many nontheists are also nonconsequentialist.


I see no problem in criticising them for it. Honestly, I don't think they understand where their definition is lacking, because they haven't seen a non-circular definition out there. Why not try to come up with one that makes sense? I don't see that as futile at all.


Re: The Brights and Morality

Postby RyanCarey on 2008-11-12T21:48:00

To summarise my position:
Some fundamentalists ask how we operate without religion. They say, 'how do you even function?'. So the Brights create a project for explaining how we choose actions. It's a project about morality. Morality can conventionally mean motivation (moral opinion) or ethics (the scientific pursuit of moral truth). In their morality project, they mean to observe motivation, not to create an ethical system. It's as if they're trying to study appetite without caring about our catalogue of favourite foods. I think you might be happier about the situation if you understood 'reality about human morality' to be a project about motivation, not ethics.

Re: The Brights and Morality

Postby rob on 2008-11-13T02:11:00

Well let me summarize my own position on the Brights' project: I think it's a great idea to try to find a naturalistic way to view morality, so I am not criticizing them for that. I also don't see any reason why such a thing is more specific to them than it is to Utilitarians.

I just think what the Brights have come up with so far is completely lacking in substance. It is perfectly circular, as they use synonymous, subjective terms to define other subjective terms ( http://www.the-brights.net/action/activ ... tions.html ), or they simply avoid saying what morality is altogether ( http://www.the-brights.net/action/activ ... ation.html ). I don't know how receptive they are to new input, but I can't imagine they'd have an issue with an attempt at addressing that problem. (Well, lots of people tend to be defensive! :) )

OK, here is my 20-words-or-less definition of "morality":

behavior that gives significant priority to the goals of others relative to one's own goals

There is not a single thing that I would consider to be "morally good" that doesn't fit this definition. Cheating, lying, stealing, hurting and killing all happen because the perpetrator does not prioritize the goals of others (their victims) highly. Accepting a high-paying job writing a spam-bot is another example. Helping a person with a flat tire, even if you are late to work and even though you don't expect to have the favor repaid, is an example of high prioritization of others' goals. Donating to charity or volunteering at the homeless shelter is as well. As is returning a wallet without keeping the money for yourself.

There are edge cases, of course, as there are for most definitions (for instance, does the definition of "building" include the Eiffel Tower?). What counts as "significant" is an important question. The important point there is that the more priority given to the goals of others relative to one's own goals, the more "morally good" the behaviour. Another question is: who counts as "others"? Animals? Bugs? The unborn? While I don't try to clarify the edge cases so much, the center cases are pretty clear-cut and unambiguous.

My definition is not restricted to the behaviour of humans and animals; it could apply to machines as well. Just for kicks, I'll illustrate it with members of a colony of autonomous robots sent to terraform a planet to prepare it for humans. Let's assume that while the robots are sophisticated and skilled at figuring out how to do their assigned task, they are not what anyone would consider sentient or conscious.

Say there is a landscaping robot and a cable-laying robot (among many others). The cable-laying robot's algorithm doesn't take into account that burying a cable might mess up the work of the landscaping robot - all it considers is whether it gets its cable buried. This isn't so much "on purpose" as it is just the simplest way to program the robots. At first it seems to work pretty well, when there are only a few robots spread over a wide area.

Eventually, though, back on Earth they see that the robots' work is not getting accomplished as efficiently as it would if the robots were able to take into account the goals of the other robots. For instance, they notice that right after the landscaping robot completed a Japanese garden, the cable-laying robot dug right through the middle of it to place its cable, requiring a lot of extra work for the landscaping robot to fix. So Earth sends the robots a software upgrade - let's call it the "don't be a dick" module - that allows them to communicate among themselves, and to take into account the goals of the other robots while prioritizing their own specific goals slightly lower.

Now the cable robot can figure out that its cable-laying goal potentially conflicts with the goals of the landscaper. Once it takes into account the landscaper's goals, it calculates that it can still accomplish its own goal, albeit somewhat less efficiently, without interfering nearly so much with the goals of the landscaping robot. All it needs to do is schedule the cable-laying such that it usually happens in areas that haven't yet been landscaped, and occasionally find alternate routes so as to avoid certain "highly landscaped" areas that would cause the most work for the landscaping robot to re-beautify. For instance, it might spend an extra 2 hours of its own time going around the outside of a Japanese garden, so the landscaper doesn't have to spend two days repairing the damage of it going right through the middle. Importantly, the cable robot doesn't know much about landscaping; it is simply able to receive messages from the landscaper about where and when digging cable ditches would most harm the landscaper's ability to achieve its goals. Likewise, it can receive such messages from all the other robots on the planet.

The software upgrade is, essentially, altruism. A sense of right and wrong. This is not to imply that the robots are sentient, simply that they have goals that are prioritized, and that, in this prioritization, they can take into account the goals of the other robots. It isn't magic, it doesn't require God, consciousness, or really anything special to explain it.
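The "don't be a dick" module described above can be sketched as a weighted-cost planner: each robot scores candidate plans by its own cost plus the cost it would impose on other robots' goals. This is purely illustrative - the class and function names, the hour figures, and the altruism weight are all made up for the Japanese-garden example, not anything from the original post:

```python
# Hypothetical sketch of the "don't be a dick" module: a robot picks
# the plan minimising (own cost + altruism * cost imposed on others).
from dataclasses import dataclass

@dataclass
class Plan:
    name: str
    own_cost: float      # e.g. extra hours of cable-laying work
    imposed_cost: float  # hours of repair work forced on other robots

def choose_plan(plans, altruism=0.9):
    """altruism = 0 is pure egoism (the original firmware);
    altruism near 1 weighs others' goals almost as much as one's own."""
    return min(plans, key=lambda p: p.own_cost + altruism * p.imposed_cost)

# The Japanese-garden example: 2 extra hours of routing around the
# garden versus two days (say 16 hours) of repair for the landscaper.
plans = [
    Plan("through the garden", own_cost=0, imposed_cost=16),
    Plan("around the garden", own_cost=2, imposed_cost=0),
]
print(choose_plan(plans).name)  # -> around the garden
```

With `altruism=0` the same robot would dig straight through the garden, which is the whole point of the upgrade: altruism here is nothing but a nonzero weight on other agents' goals.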

(Note that I am not trying to say why natural selection would put altruism in humans; that is equally easy to answer, but not the topic of this post.)

Here is a relevant article about altruism applying to simple machines: http://news.cnet.com/2100-1033-984694.html


Re: The Brights and Morality

Postby faithlessgod on 2008-11-13T12:50:00

Arepo wrote:You might try to persuade the organisation or its members to change their view on ethics, but given that ethics aren't one of their major concerns, it seems unlikely that the wording of the paragraph in question will drastically change their activities. So it seems excessive to quit over it...
Yes, that was just rhetorical trumpet-blowing on my part, used for emphasis of a plausible danger. Anyway, I did say "tentative". I wonder why I bother :lol:

Re: The Brights and Morality

Postby faithlessgod on 2008-11-13T12:52:00

rob wrote:I see no problem in criticising them for it. Honestly, I don't think they understand where their definition is lacking, because they haven't seen a non-circular definition out there. Why not try to come up with one that makes sense? I don't see that as futile at all.

Good point. I... ahem... think that is being pursued on the naturalistic basis of morality thread 8-)

Re: The Brights and Morality

Postby faithlessgod on 2008-11-13T12:58:00

RyanCarey wrote: I think you might be happier about the situation if you understood 'reality about human morality' to be a project about motivation, not ethics.
That is basically my point, and if that were all they were saying I would have no issues. Still, it then fails to address their objective: theists could argue that god evolved us to be that way (or some equivalent) but still say "so what?" regarding morality. In other words, it then misses the whole point of the exercise! Was I too long-winded in making this point originally, thereby encouraging the confusion (I mistakenly thought the detail would prevent that happening)? Don't answer that :roll:

Re: The Brights and Morality

Postby faithlessgod on 2008-11-13T13:44:00

rob wrote:Ok here is my 20 word or less definition of "morality":

behavior that gives significant priority to the goals of others relative to one's own goals

I pretty much agree with everything you have said here Rob. I think we could later have a fruitful discussion on the details. I have one quibble.

rob wrote:The software upgrade is, essentially, altruism. A sense of right and wrong. This is not to imply that the robots are sentient, simply that they have goals that are prioritized, and that, in this prioritization, they can take into account the goals of the other robots. It isn't magic, it doesn't require God, consciousness, or really anything special to explain it.

I disagree that this is "altruism", but note that I make a three-way, not binary, distinction between egoism, altruism and utilitarianism (see my post on Desire Utilitarianism for an expansion of this). I have a rare point of agreement with Randians here: namely, altruism necessarily implies sacrifice, whereas utilitarianism (to which they are blind - they see only egoism and altruism) does not.

This quibble aside, however, how are moral relativists/subjectivists etc. (assuming they are otherwise naturalists) going to take this? Have you tested it on them?

Re: The Brights and Morality

Postby rob on 2008-11-13T16:17:00

Well, I don't understand why you don't see the robot as being altruistic. There is sacrifice, in the sense that for the cable bot to accommodate the landscape bot, it has to do more work. Do you only see it as altruism if it crosses some greater threshold? Do you disagree with the use of the term altruism in the article on routers?

I avoided the word "happiness" in this post and simply used the word "goal" and related terms. Maybe that is why you had an easier time with most of what I am saying.

Maybe I should, rather than using words such as "altruism" or "happiness", always qualify them as "the logical equivalent of altruism", etc. (Again, I go back to the "submarines swimming" issue. I know you said you can accept that submarines can swim, but I think it highlights an important issue: sometimes we understand a word to be defined in a restricted way, and our brains lock up if we try to extend it beyond that. Here in California this is a big issue over the definition of "marriage"!)

Anyway, I am glad we are able to agree on some things here. I have not tested this on the moral relativists; obviously I am not one. I acknowledge that certain details of morals are subjective/relative, but there are certain universal concepts of morality that are not at all relative, and to say otherwise is just absurd, in my opinion.

Actually, I hadn't been talking about any of this stuff for a long time, until Ryan PM'd me last week via the Dawkins board, which I haven't participated in for two years.


Re: The Brights and Morality

Postby Arepo on 2008-11-13T17:42:00

rob wrote:The software upgrade is, essentially, altruism. A sense of right and wrong. This is not to imply that the robots are sentient, simply that they have goals that are prioritized, and that, in this prioritization, they can take into account the goals of the other robots. It isn't magic, it doesn't require God, consciousness, or really anything special to explain it.


I think it does require consciousness to be an example of what most people mean by 'altruism'. The machines bumbling around on Mars are just a complicated physical process, like the workings of Earth's ecosystem or (cf. the other thread) the tendency of order to become chaos. If you don't claim they have consciousness, then by definition they don't harm each other. They don't even really interfere, per se - they just interact. The only sense in which the cable-layer is doing something undesirable is that there are conscious people on Earth who want a certain outcome, which the cable-layer is impeding.

Take away the associated conscious desires from your scenario (or from the internet), and there's nothing to distinguish its significance from any other physical interaction in the universe. In other words, take away the humans from the scenario and the machine before the upgrade is no more and no less altruistic than it is afterwards.

I think the relevant point is your idea of 'goals'. I claim the universe doesn't contain any discrete events - everything is part of one big entropic process. It also doesn't contain any discrete macroscopic objects. 'Two magnets' repelling each other each comprise billions of indistinguishable particles that have indistinguishable properties. If you moved a few of the particles from one to the other, or put them in a dusty corner somewhere, their 'goals' would become completely different.

It's only consciousness (regardless of whether you think of it as properly emergent - which I don't) that provides a qualitative difference between different classes of (things that we call) events. And then it only seems to divide them into two - goal-seeking and not goal-seeking.
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.
Arepo
 
Posts: 1065
Joined: Sun Oct 05, 2008 10:49 am

Re: The Brights and Morality

Postby rob on 2008-11-13T18:11:00

Arepo wrote:I think it does require consciousness to be an example of what most people mean by 'altruism'

What is consciousness, then, in naturalistic terms?
Arepo wrote:The only sense in which the cable-layer is doing something undesirable is that there are conscious people on earth who want a certain outcome, which the CL is impeding.

No, I am not talking about the goals of those on earth. I am talking about the goals of each individual robot (which may conflict, even if the goals of the designer -- which could be the same person -- do not). The goals of those on earth are as irrelevant as the goals of natural selection are to a person's individual desires. Yes natural selection played a role in putting them there, but concentrating on that is missing the point.

Unfortunately, the only ways we know that sophisticated things come into existence are via natural selection or human design. I wish I could come up with a scenario where it was neither of these, so that I could make an easy-to-understand example (biology is too complicated), and people don't get hung up on the issue of the goals of the designer. But that is pretty much impossible. So work with me here. :)

rob
 
Posts: 20
Joined: Sun Nov 09, 2008 5:29 pm
Location: San Francisco

Re: The Brights and Morality

Postby Arepo on 2008-11-13T18:46:00

rob wrote:What is consciousness, then, in naturalistic terms?


In (small) part, a matter of definition. I'm inclined to suggest that consciousness is best thought of as a synonym for emotion.

But it doesn't really matter whether you accept that suggestion, because I don't know how to explain either consciousness or emotion empirically. Don Alhambra is a neuroscientist, so I'll ask him to look at this post, but I think his answer will basically be 'I don't know either'. So we can either use a sort of god-of-the-gaps argument to suppose that our lack of knowledge of the link means only dualism can account for it, or we can deny that consciousness exists (which seems to me to be just redefining it another way), or we can say as a working hypothesis that consciousness is an integral element of behaviour that closely resembles ours.

But I think this doesn't matter yet - I'm not arguing about whether or not your rover is conscious, I'm taking your word that it isn't.

rob wrote:No, I am not talking about the goals of those on earth. I am talking about the goals of each individual robot (which may conflict, even if the goals of the designer -- which could be the same person -- do not).


But I am denying that there are any goals in your scenario, except those of the conscious people on earth. On Mars, there are just processes. I'm happy to follow you except on that point - if you want to define altruism as self-sacrificing behaviour for the benefit of others, that seems useful enough. But many of these terms rely on consciousness (or at least emotion).

A vase breaking isn't 'sacrifice'; it's a force momentarily increasing enough to outweigh the force holding the vase together, and then various bits of crockery suddenly having much less densely packed neighbours. In fact, as far as the universe is concerned, a vase breaking increases entropy.

And equally, a vase or garden getting made isn't benefiting - it's just the rearrangement of particles.

To go further, a human having a good time isn't benefiting in any transcendental sense - he's just having a good time, something which he considers a benefit. But he can't do any considering or good-time-having without consciousness or emotion respectively (or equivalently).

***

I must make this my last post for a while - need to put my energy into bringing more utilitarians to the forum rather than irritating the ones who're already here :P
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.
Arepo
 
Posts: 1065
Joined: Sun Oct 05, 2008 10:49 am

Re: The Brights and Morality

Postby faithlessgod on 2008-11-13T20:06:00

rob wrote:Well I don't understand why you don't see the robot as being altruistic. There is sacrifice, in the sense that for the cable bot to accommodate the landscape bot, it has to do more work. Do you only see it as altruism if it crosses some greater threshold? Do you disagree with the use of the term altruism in the article on routers?

No, the point I took - which I agree you did not quite make, and indeed emphasized otherwise - was that the "don't be a dick" (DBAD) module enables the robots to increase their net efficiency. The altruism you are asserting is not necessary but optional - indeed, the path you took in your description to emphasize this does not really make sense. Surely, if they were already making each other less efficient without the DBAD module, the addition was intended to make them individually, hence aggregatively, hence net more efficient - otherwise it would not have worked?! :?

rob wrote:I avoided the word happiness in this article, and simply used the word "goal" and related terms. Maybe that is why you have an easier time getting most of what I am saying.

As I understand it, the physical realisation of goals and the issue of efficiency map simply onto desire-fulfilment (or preference satisfaction), but not onto happiness without additional ad hoc and less parsimonious semantic games. There was nothing there that indicated happiness in any plausible sense of that term. I wonder if, and why, you are fixated on the term happiness? Reduced to what you conceive it to be, it is pretty much the same as what I call desire fulfilment. I think we are both ethically reductive naturalists??

rob wrote:Maybe I should, rather than using such words as "altruism" or "happiness", always qualify them as "the logical equivalent to altruism"...etc. (again, I go back to the "submarines swimming" issue. I know you said you can accept that submarines can swim, but I think it highlights an important issue: sometimes we understand a word to be defined in a restricted way, and our brains lock up if we try to extend it beyond that. Here in California this is a big issue over the definition of "marriage"!)

The logical equivalent to altruism does not address the point I made above; it does not add anything to the debate. Is there a logical equivalent to happiness? The way you used it, it instead ends up closer to preference satisfaction or desire fulfilment. These are not the same, even if you define away the distinction, which is my point here and in the other thread.

rob wrote:Anyway I am glad we are able to agree on some things here. I have not tested this on the moral relativists. Obviously I am not one.....I acknowledge that certain details of morals are subjective/relative, but there are certain universal concepts of morality that are not at all relative, and to say otherwise is just absurd, in my opinion.

Absolutely :D
Do not sacrifice truth on the altar of comfort
faithlessgod
 
Posts: 160
Joined: Fri Nov 07, 2008 2:04 am
Location: Brighton, UK

Re: The Brights and Morality

Postby faithlessgod on 2008-11-13T21:12:00

Arepo wrote:
rob wrote:The software upgrade is, essentially, altruism. A sense of right and wrong. This is not to imply that the robots are sentient, simply that they have goals that are prioritized, and that, in this prioritization, they can take into account the goals of the other robots. It isn't magic, it doesn't require God, consciousness, or really anything special to explain it.


I think it does require consciousness to be an example of what most people mean by 'altruism'.

Damn a point of disagreement between us :cry:
Granted that I think altruism is moot IMHO, it is still the case that altruism in the biological and indeed mechanical sense does not require consciousness in any useful sense of the term. Emotion is not the same thing, and can be modelled in BDI agents - as in "emotional reactions" to fulfilment of goals that modify the weights of desires when they are activated next - this is a key part of how BDI agents learn and adapt to circumstances.
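The weight-modifying "emotional reaction" described here can be sketched in a few lines. This is a hypothetical illustration, not taken from any BDI library; all class and variable names are invented for the example:

```python
# Hypothetical sketch of a BDI-style agent: "emotional reactions"
# to goal outcomes adjust desire weights, which is how such an
# agent learns and adapts to circumstances.

class BDIAgent:
    def __init__(self, desires):
        # desires: mapping from desire name -> weight (priority)
        self.desires = dict(desires)

    def act(self):
        # Pursue whichever desire currently has the greatest weight.
        return max(self.desires, key=self.desires.get)

    def react(self, desire, fulfilled, rate=0.1):
        # The "emotional reaction": fulfilment reinforces a desire,
        # frustration dampens it (never below zero).
        delta = rate if fulfilled else -rate
        self.desires[desire] = max(0.0, self.desires[desire] + delta)

agent = BDIAgent({"lay_cable": 1.0, "avoid_garden": 0.5})
agent.react("avoid_garden", fulfilled=True)    # reinforced
agent.react("lay_cable", fulfilled=False)      # dampened
```

Nothing here is conscious; the adaptation is just arithmetic over weights. That is also the thermostat contrast in miniature: a thermostat's setpoint is fixed, while this agent's priorities are malleable.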

Arepo wrote:If you don't claim they have consciousness, by definition they don't harm each other.

hmmm... depends what you mean by harm, what do you mean?

Arepo wrote:The only sense in which the cable-layer is doing something undesirable is that there are conscious people on earth who want a certain outcome, which the CL is impeding.

I see your point; I made a similar, but not the same, one (it does not need consciousness) over means/ends in the other thread. However, in this case I think you are overextending this issue. The robots do have ends - e.g. cable laying, landscape gardening - and they are made more efficient by the internalisation of the otherwise lacking external DBAD module. Now what is the difference between this and a thermostat, you're all asking? A short answer is that the thermostat does not learn and adapt - it is not malleable. I think this is an important point and might run and run ;) let's see below...

Arepo wrote:Take away the associated conscious desires from your scenario (or from the internet), and there's nothing to distinguish its significance from any other physical interaction in the universe. In other words, take away the humans from the scenario and the machine before the upgrade is no more and no less altruistic than it is afterwards.

This is a very interesting avenue to explore. I am a no-free-will(ist?). There are minimum requirements to qualify as a moral agent. First, one must be capable of optionally thwarting the goals (desires/preferences) of others, and capable of selecting alternatives that do not do so or that minimize such thwarting. The thermostat does not have such a capacity, and on the principle of charity I am interpreting Rob's robots as having such capacities.


Arepo wrote:I think the relevant point is your idea of 'goals'. I claim the universe doesn't contain any discrete events - everything is part of one big entropic process. It also doesn't contain any discrete macroscopic objects. 'Two magnets' repelling each other each comprise billions of indistinguishable particles that have indistinguishable properties. If you moved a few of the particles from one to the other, or put them in a dusty corner somewhere, their 'goals' would become completely different.

Yes, goals... interesting. A goal is a type of desire (as are needs, wants, intentions, preferences, interests, fears, aversions etc. - at least that is how I use the term, so now you know), and a desire is an attitude to make or keep something true - or, more technically, a propositional attitude to make or keep a state of affairs true. This has already been done in BDI and other intelligent agents, so ipso facto anything physical that can instantiate such states has part of the necessary conditions (with the addition of the others noted above) to qualify as a moral agent (I have not worked out if these are together sufficient, but necessary, yes).
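The "propositional attitude" reading can be made concrete: a desire pairs a proposition (a predicate over states of affairs) with a weight, and the agent targets the strongest desires whose propositions are not currently true. The names below are illustrative only, not from any actual BDI implementation:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

World = Dict[str, bool]  # a crude "state of affairs"

@dataclass
class Desire:
    # A propositional attitude: a proposition the agent acts
    # to make or keep true, plus a strength (weight).
    proposition: Callable[[World], bool]
    weight: float

def unfulfilled(desires: List[Desire], world: World) -> List[Desire]:
    # The desires whose propositions are not true of the current
    # world, strongest first - candidates for the agent's next act.
    return sorted(
        (d for d in desires if not d.proposition(world)),
        key=lambda d: -d.weight,
    )

world = {"cable_laid": False, "garden_intact": True}
desires = [
    Desire(lambda w: w["cable_laid"], weight=1.0),
    Desire(lambda w: w["garden_intact"], weight=0.5),
]
# Only the cable-laying proposition is currently false, so only
# that desire remains to be acted on.
```

A state of affairs being "made or kept true" thus reduces to evaluating a predicate over a world state, with no appeal to consciousness.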



Arepo wrote:It's only consciousness (regardless of whether you think of it as properly emergent - which I don't) that provides a qualitative difference between different classes of (things that we call) events. And then it only seems to divide them into two - goal-seeking and not goal-seeking.

You can have goal-seeking without consciousness - or are you arguing that all such goal-seeking is derived?
Do not sacrifice truth on the altar of comfort
faithlessgod
 
Posts: 160
Joined: Fri Nov 07, 2008 2:04 am
Location: Brighton, UK

Re: The Brights and Morality

Postby rob on 2008-11-14T03:48:00

Arepo, your argument that altruism only applies to conscious entities (as well as "goals", as well as faithlessgod's stance on other words I use) reminds me of the infamous criticism of Dawkins' Selfish Gene by Mary Midgley, which has been roundly considered an "egregiously bad" review ( http://www.pandasthumb.org/archives/200 ... eps-i.html ), for how badly she misunderstood his use of the term "selfish".

She said:
"he resorts to arguing from speculations about the emotional nature of genes"
and
"Genes cannot be selfish or unselfish, any more than atoms can be jealous, elephants abstract or biscuits teleological. "

As this article points out http://www.butterfliesandwheels.com/art ... .php?num=1
"Whatever she meant, two things are clear: (a) no reputable biologist thinks that genes have an emotional nature; and (b) genes can be selfish in the sense that Dawkins - and other sociobiologists - use the term."


If the term selfish can be applied to genes, the term (and its opposite, altruism) can be applied to robots. You can also use Google to find numerous places that plants ( http://www.scienceagogo.com/news/200705 ... _sys.shtml ), software ( http://news.cnet.com/2100-1033-984694.html ) , and other things that are indisputably "not conscious" are referred to as being selfish or altruistic.

According to this article http://plato.stanford.edu/entries/altruism-biological/
In everyday parlance, an action would only be called 'altruistic' if it was done with the conscious intention of helping another. But in the biological sense there is no such requirement. Indeed, some of the most interesting examples of biological altruism are found among creatures that are (presumably) not capable of conscious thought at all, e.g. insects. For the biologist, it is the consequences of an action for reproductive fitness that determine whether the action counts as altruistic, not the intentions, if any, with which the action is performed.

Now, I chose to talk about something that isn't even biological, but it shouldn't matter. The concept is the same. Whether or not you choose to use the words this way, surely you can recognise that there is a parallel, and understand how I am using them, whether talking about genes, plants, routers or robots.

(note that many of these articles will also refer to "goals" of entities that no one would argue are conscious)

rob
 
Posts: 20
Joined: Sun Nov 09, 2008 5:29 pm
Location: San Francisco

Re: The Brights and Morality

Postby rob on 2008-11-14T06:21:00

faithlessgod wrote:No, the point I took - which I agree you did not quite make, and indeed emphasized otherwise - was that the "don't be a dick" (DBAD) module enables the robots to increase their net efficiency. The altruism you are asserting is not necessary but optional - indeed, the path you took in your description to emphasize this does not really make sense. Surely, if they were already making each other less efficient without the DBAD module, the addition was intended to make them individually, hence aggregatively, hence net more efficient - otherwise it would not have worked?! :?

Hmmm, prior to DBAD, the cable bot gets more done; after, it gets less done, since it took 2 extra hours to "do a favor" for the landscape bot by avoiding trashing the garden. However, the landscape bot got much more done after DBAD (it saved two days' worth of work) because of that 2-hour sacrifice made by the cable bot. There is a net increase in efficiency, yes, but in this case it comes at a cost to the cable bot's productivity... that is why it is altruistic.

Read it again, I'm sure it makes sense. :)

Note that the cable bot may get favors from other bots, whether the landscape bot or any other, thanks to DBAD. But I didn't cover that; I just talked of one altruistic act done by the cable bot that the landscape bot was the beneficiary of.
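The net-efficiency arithmetic above can be made explicit. The two-hour and two-day figures come from the scenario as described; the 24-hour robot workday is an assumption added here purely to convert "two days" into hours:

```python
# Back-of-envelope arithmetic for the DBAD example: the cable
# bot's sacrifice costs it 2 hours but saves the landscape bot
# two days of rework, so the pair gains overall.

HOURS_PER_DAY = 24  # assumption: robots work around the clock

cable_bot_cost = 2                         # extra hours (the sacrifice)
landscape_bot_saving = 2 * HOURS_PER_DAY   # hours of rework avoided

net_gain = landscape_bot_saving - cable_bot_cost
print(net_gain)  # 46: net efficiency rises even though one bot loses
```

The asymmetry is the whole point of the disagreement: aggregate output rises while one individual's output falls, which is what lets rob call the act altruistic rather than merely efficient.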

faithlessgod wrote:As I understand the physical realisation of goals and the issue of efficiency maps simply onto desire-fulfilment (or preference satisfaction) but not to happiness without additional ad hoc and less parsimonious semantic games.

Sleepy....so sleepy....

faithlessgod wrote:There was nothing there that indicated happiness in any plausible sense of that term. I wonder if and why you are fixated on the term happiness? Reduce it to what you conceive it is pretty much the same as what I call desire fulfilment. I think we are both ethically reductive naturalists??

For one, happiness is the generic, common, everyday umbrella term for the "emotion" whose mystery I am trying to reduce by explaining it in naturalistic terms. Second, happiness is usually used in the definition of utilitarianism, so defining it in an objective way seems important.

For another, it is no more anthropomorphic to me than "desire" -- in fact less so. I have never heard someone speak of an inanimate object as having "desires", but I regularly hear the word "happy" used to describe the state of inanimate objects, especially something in a computer program. (I noticed someone use it today at work, in fact)

rob
 
Posts: 20
Joined: Sun Nov 09, 2008 5:29 pm
Location: San Francisco

Re: The Brights and Morality

Postby faithlessgod on 2008-11-14T10:41:00

Hi Rob

I think you are again obfuscating important differences, though, on the other hand, this could be, for here, hair-splitting. I am making a distinction between altruism and utilitarianism. This is not of my invention - see the peer-reviewed article referred to in my post, which I am guessing you did not read, otherwise you would know why I think this is important. See the Internet Encyclopedia of Philosophy on consequentialism.

Your actual model itself can be used as a framework to discuss these differences, and that is all I was doing. If you refuse to see this, that is a pity, as I would have thought it was neutral ground on which to examine such concepts. Instead you want to use it to force a particular view, and it cannot do that.

What you find so sleepy is critical. You choose not to differentiate these, yet in application, AFAICS, we are in total agreement!!! It is your insistence on redefining terms way beyond what is reasonable that creates the fuzziness, vagueness and lack of precision that is the major weakness of your model of utility, with which I both disagree and think is unnecessary. You appear to be fixated more on the semantics of terms than on what they refer to, which is what I am emphasizing. This is the same point as in the "natural basis of ethics" thread: it is what is referred to that is such a basis, not the words used; words are only vehicles, which can be used either to make things transparently clear - as I am trying to do - or hopelessly vague and confusing - as you seem to be trying to do (I know you don't see it that way, but that is the way it is). :cry:

You are not reducing the mystery of happiness; you are doing the opposite, and so, unwittingly, force in unnecessary issues which we both agree are irrelevant, such as consciousness and qualia. I am trying to show how this can be avoided. Now, you say happiness is an emotion, and this narrow sense is quite different from the broad (overkill, as someone else put it) sense you are arguing for. To switch between one and the other is obfuscation and equivocation, and it is a non sequitur (from the broad one you cannot draw any inferences about the narrow one, and vice versa).

We both agree on an objective basis here, and pretty much the same one to boot! However, I am trying to disambiguate otherwise confusing and misleading language, and doing so in a quite standard, orthodox and conventional manner. Your strategy is instead to redefine these terms in an atypical, unorthodox and unconventional manner for which there is no useful benefit or justification, so IMHO it does not achieve that goal - otherwise we would not be having this conversation! If you refuse to accept the fact that there are at least four well-accepted distinct concepts of utility - pain/pleasure, happiness, preference satisfaction and desire fulfilment - then you are only going to get repeatedly into such confusing debates, which is a pity, as I think we are both otherwise working toward the same goal and have come up with similar empirical, physical and material models.

rob wrote:For another, it is no more anthropomorphic to me than "desire" -- in fact less so. I have never heard someone speak of an inanimate object as having "desires", but I regularly hear the word "happy" used to describe the state of inanimate objects, especially something in a computer program. (I noticed someone use it today at work, in fact)

To repeat your "no-one" challenge back to you (as I think you did to me ;) ), I have more often heard said the equivalent of "a thermostat wants to be at 70 degrees" than "it is happy to be at 70 degrees"; the latter sounds far odder to me. Another example: selfish gene theory is full of examples of the former but not of happiness. I think you will be hard put to show that desire and its cognates - needs, wants, interests, goals, preferences, drives, motives, instincts - are less prevalent than happiness and pleasure, ecstasy, bliss etc. when people "anthropomorphize" (really 'metaphorically project onto') inanimate objects. I don't think you have a leg to stand on on this particular point. :o
Do not sacrifice truth on the altar of comfort
faithlessgod
 
Posts: 160
Joined: Fri Nov 07, 2008 2:04 am
Location: Brighton, UK

Re: The Brights and Morality

Postby Arepo on 2008-11-14T21:30:00

rob wrote:Arepo, your argument that altruism only applies to conscious entities (as well as "goals", as well as faithlessgod's stance on other words I use) reminds me of the infamous criticism of Dawkins' Selfish Gene by Mary Midgley, which has been roundly considered an "egregiously bad" review ( http://www.pandasthumb.org/archives/200 ... eps-i.html ), for how badly she misunderstood his use of the term "selfish".


Mary Midgley? Let's have less of that awful language round here, please!

Seriously though, I can't comment on MM's review, since the links to it from Panda's Thumb are dead. But having read other bits and pieces by her, I don't find it hard to imagine it was aggressive drivel.

rob wrote:If the term selfish can be applied to genes, the term (and its opposite, altruism) can be applied to robots. You can also use Google to find numerous places that plants ( http://www.scienceagogo.com/news/200705 ... _sys.shtml ), software ( http://news.cnet.com/2100-1033-984694.html ) , and other things that are indisputably "not conscious" are referred to as being selfish or altruistic.


Fair enough.

rob wrote:(note that many of these articles will also refer to "goals" of entities that no one would argue are conscious)


Yup, point taken. Your use of the terms seems to be standard enough.

But it's important to keep in mind that these are two different uses of the words 'selfishness' and 'altruism' (both your quotes imply this, in fact). So if something is true of colloquial selfishness/altruism (consciously seeking your own benefit/consciously seeking the benefit of other sentient beings), it's not automatically true of biological selfishness/altruism.

Mary Midgley's key mistake in the paragraph quoted in 'Butterflies and Wheels' is in equivocating between the two uses, thereby implying that Richard Dawkins believed that genes were actually sentient. To turn things around, I think you're making, or very nearly making, a similar mistake yourself.

Your idea that morality = 'behavior that gives significant priority to the goals of others relative to one's own goals' makes pretty good sense when you take 'goals' in the colloquial sense (as intentions). But you can't move straight from there to the idea that morality also applies to goals in the scientific sense. I mean, you're free to claim that it does, but it requires adding a proposition. You now have 1) morality applies to sentient things, and 2) morality applies to nonsentient things.

There's no particular reason why you can't claim 2); I believe Luciano Floridi does. But a) I don't think it's a defensible position from any sensible epistemology, b) I don't think many scientists, including Dawkins et al., would support it, and c) if true, it would turn the universe into a chaotic moral mess. Each person-sized 'object' comprises many layers of objects that constitute it or that it constitutes (i.e. a robot comprises bits of metal, microchips etc., which in turn comprise smaller bits of metal or plastic, which comprise... down to fundamental particles - and in the other direction, each robot composes part of the mass of the planet it's on, which composes part of a solar system... up to the entire cosmos) - and each of these layers has different and often conflicting 'goals'.
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.
Arepo
 
Posts: 1065
Joined: Sun Oct 05, 2008 10:49 am

Re: The Brights and Morality

Postby faithlessgod on 2008-11-16T11:09:00

Well, it seems I partly agree and partly disagree with Rob and Arepo in different ways; certainly this demonstrates there are more than just two ways of looking at this ;)

1) I agree with Rob and against Arepo that selfishness and altruism do not need conscious processes. It is quite possible to have a third-person or impersonal behavioural description, and that is what is useful when available.

2) I agree with Rob and against Arepo that non-human/animal entities can be moral agents. Not that any automatically are, as Rob implies, but they can be if certain conditions are met, including having detectable behavioural descriptions.

3) I agree with Arepo and against Rob: as Arepo says, "You[Rob] now have 1) Morality applies to sentient things and 2) morality applies to nonsentient things." That is a condition to meet as to what is a moral agent; it depends on what is capable of being sentient. Presumably Rob's emphasis would be to fail to discriminate sentience from non-sentience by using some implausibly broad conception of it - to define the meaning out of the term - should he feel he needs to, as he has already and repeatedly done with happiness and has implied with selfishness and altruism. I see no other way, and Rob has to date shown no other way, to deal with such distinctions.

4) I therefore disagree with both, based on 2 and 3, as to what the conditions for sentience, and hence moral agency, are.

At least Arepo seeks clarity by making distinctions and argues that they are important. We can debate these, and whilst I disagree that they are all important, I do not deny that such distinctions can be made.

Rob, on the other hand, seems everywhere to be using semantic tricks - redefinitions and equivocations primarily - to avoid making an argument that any of these distinctions are unimportant, since he simply defines them all away. That is, when it is his lack of a distinction that is in dispute, he cannot assume its lack in resolving that dispute. Instead he has either to show its non-existence or to acknowledge it; he has done neither. And so, in spite of Rob's assertions to the contrary, his approach is not naturalism or realism but metaphysics, not clear but confusing, not simple but complex, not science but philosophy. Sorry, but that is the way I see it. :cry: :cry:

Anyway, I will be very busy over the next few days so will hardly be able to read this forum. Will catch up in a week or two.
Do not sacrifice truth on the altar of comfort
faithlessgod
 
Posts: 160
Joined: Fri Nov 07, 2008 2:04 am
Location: Brighton, UK

