Thunder from Down Under

Whether it's pushpin, poetry or neither, you can discuss it here.

Thunder from Down Under

Postby Benjamin Martens on 2013-03-21T11:49:00

Greetings, my name is Benjamin.

I'm an Australian student enrolled in a double degree in Arts and Science. I complete my Arts degree--a double major in Philosophy and Politics--early next year, at which time I begin my study of Science. However, I'm rethinking the wisdom of beginning this second degree: I'm mathematically illiterate and think it unlikely I can contribute meaningfully to the field.

I'm able to generate ideas but probably can do no more than that, so for now, in lieu of the above, I'm overcoming some mental tribulations and educating myself about my interests, one of which is utilitarianism. That's why I'm here. Consequentialist theory appeals to me because, of all the moral theories I have yet encountered, it's the most complex and pragmatic. When applied to actual ethical considerations it admits few or no taboos, which generally means it can explore a wider range of potential scenarios than, say, deontological theories, and by definition will home in on the solution applicable to the largest set of morally relevant beings, which, of course, is optimal and to be desired.

I'm 20, vegan, utilitarian, with very few strong moral intuitions and, though I'm appreciative of anonymity, am frightened of dying without having made a considerable positive impact on the lives of other beings. I'm glad to be here (and here, too: http://www.facebook.com/MartensBenjamin?ref=tn_tnmn) and hope to be of help.

Benjamin Martens
 
Posts: 8
Joined: Wed Dec 19, 2012 11:18 am

Re: Thunder from Down Under

Postby Arepo on 2013-03-21T13:21:00

Hey Ben, welcome along. Do you know David Pearce? I think he suffers from depression, and has done a lot of research into the best treatments currently available for it. He sometimes posts on here, but if he doesn't see this, you can get in touch with him on Facebook (he'll be the David Pearce with whom you have about 500 friends in common).

For career concerns, I strongly recommend getting in touch with 80,000 Hours, who aren't explicitly utilitarian, but a large proportion of their members are, and they specialise in answering that sort of question.

Re complexity, I like utilitarianism because (among other reasons) it's simple :P Alternative views always need a bunch of ad hoc tampering to deal with specific cases the way their proponents want them to (and if they're tweaking their ethics to fit their intuitions rather than the other way round, why are they bothering to look at ethics at all?)
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.
Arepo
 
Posts: 1065
Joined: Sun Oct 05, 2008 10:49 am

Re: Thunder from Down Under

Postby peterhurford on 2013-03-21T18:53:00

Good to have you here!
Felicifia Head Admin | Ruling Felicifia with an iron fist since 2012.

Personal Site: www.peterhurford.com
Utilitarian Blog: Everyday Utilitarian

Direct Influencer Scoreboard: 2 Meatless Monday-ers, 1 Vegetarian, and 2 Giving What We Can 10% pledges.
peterhurford
 
Posts: 410
Joined: Mon Jul 02, 2012 11:19 pm
Location: Denison University


Re: Thunder from Down Under

Postby Benjamin Martens on 2013-03-22T09:35:00

Thanks, men.

I'm in contact with Dave. He says he is or was melancholic, while I'm rather more anhedonic. He'd like me to blog, but frankly I lack the ability and don't feel up to doing it well. In any case, he and his websites are a great help.

On 80,000 Hours: Ruari recommended them to me some months ago. Could anyone confirm, from personal experience, whether their careers advice service is helpful?

On the complexity and simplicity of utilitarianism:

if they're tweaking their ethics to fit their intuitions rather than the other way round, why are they bothering to look at ethics at all?


Let's suppose utilitarians are serious ethicists and are not subject to belief bias (http://en.wikipedia.org/wiki/Belief_bias). This would make utilitarian reasoning more complex because it'd be without direction. Completing the task [defend conclusion x using resources y] would presumably be simpler than completing the task [use resources y to arrive at an unknown optimal conclusion]. This is a crude explanation, but you get the idea: consequentialism is complex precisely because of its lack of priors and known constraints.

Benjamin Martens
 
Posts: 8
Joined: Wed Dec 19, 2012 11:18 am

Re: Thunder from Down Under

Postby Arepo on 2013-03-22T13:05:00

On 80,000 Hours: Ruari recommended them to me some months ago. Could anyone confirm, from personal experience, whether their careers advice service is helpful?


They’re a new group who’re striving for evidence-based improvements in the services they provide, so any such account is likely to be out of date (and hopefully overly negative). From my experience, they don’t have the info to provide anywhere near fully confident answers, but they’re honest about their epistemic limitations, and far more familiar with your motivations for career-seeking than any other career advisory service out there (not to mention a lot cheaper than many!).

The only quibble I would make about what I know of their more recent advice is that I think they overrate alternatives to earning to give (aka professional philanthropy), which in my view is likely to be the best option for almost all (utilitarian) EAs.

Let's suppose utilitarians are serious ethicists and are not subject to belief bias (http://en.wikipedia.org/wiki/Belief_bias). This would make utilitarian reasoning more complex because it'd be without direction. Completing the task [defend conclusion x using resources y] would presumably be simpler than completing the task [use resources y to arrive at an unknown optimal conclusion].


It depends what you mean by ‘simpler’. I normally use the word to mean ‘adhering to the principle of parsimony’, which often differs from ‘following the path of least cognitive resistance’ or similar.
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.
Arepo
 
Posts: 1065
Joined: Sun Oct 05, 2008 10:49 am

Re: Thunder from Down Under

Postby Benjamin Martens on 2013-03-30T02:05:00

It depends what you mean by ‘simpler’. I normally use the word to mean ‘adhering to the principle of parsimony’, which often differs from ‘following the path of least cognitive resistance’ or similar.


The principle of parsimony requires that we accept the simplest proposition as the one most likely to be true. In this case, deontologists, with reference to a number of unchanging dictates, would arrive at a conclusion more quickly than the utilitarian, who includes deontological imperatives in their calculations but is not limited to them. Intuitively, parsimony sides with the deontologist, who can refer to a basic method of parsing the truth by deciding whether something agrees with a definite statement such as "do not kill": either it agrees or it does not. The utilitarian cannot do this.

Benjamin Martens
 
Posts: 8
Joined: Wed Dec 19, 2012 11:18 am

Re: Thunder from Down Under

Postby Arepo on 2013-04-02T14:08:00

The principle of parsimony requires that we accept the simplest proposition as the one most likely to be true.


That’s a circular definition of ‘simplicity’. It’s also not really true – parsimony means invoking the fewest physical and conceptual entities needed to describe the phenomenon. One way of expressing this is as whatever has the lowest Kolmogorov complexity. The simplest proposition on this account isn’t necessarily the easiest one for a human to parse, which is how people often interpret the principle (and I think how you’re interpreting it).
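
Roughly, and glossing over the choice of reference machine: the Kolmogorov complexity of an object x is the length of the shortest program p that makes a fixed universal machine U output x,

$$K_U(x) = \min\{\, |p| : U(p) = x \,\}$$

so on this reading the ‘simplest’ theory is the one with the shortest complete specification, not the one that’s quickest for a human to apply.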

Utilitarians posit positive/negative emotion and (typically, though not necessarily in my view) ‘value’ which supervenes on them.

Deontologists accept the existence and value of positive and negative emotions, but also claim that there are such things as (typically) discrete acts, multiple universal imperatives, agents, discrete causality etc.

If you ask a utilitarian for a reason not to torture someone, they’ll say that torture tends to decrease net welfare (and maybe that high net welfare is good).

If you ask a deontologist for a reason not to torture someone, they’ll say that torture tends to decrease net welfare (and maybe that high net welfare is good), and also that the torturing agent does some kind of intangible wrong by transgressing the right of the torturee to not be hurt (although they’d probably add the wrinkle that hurting someone with their consent doesn’t do so).

Already utilitarianism is more parsimonious.

But then if you ask a utilitarian for a reason not to lie to someone, they’ll say that lying tends to decrease net welfare – invoking exactly the same principle as above (and accepting that where it doesn’t decrease net welfare there’s no reason not to lie to anyone).

And if you then ask a deontologist for a reason not to lie to someone, they’ll say that lying tends to decrease net welfare, and that the lie-ee has an additional right to not be lied to.

Discuss theft, and the utilitarian’s answer will be the same; the deontologist will say that the victim has yet another right to property. Or if they’re a particularly left-leaning deontologist, they’ll say that having property transgresses everyone else’s right to use resources according to their need.

And so on. Deontology starts off far less parsimonious, and the more you clarify it to deal with real-life situations, the less parsimonious it becomes, since there’s no underlying principle that allows you to algorithmically select new sub-principles – instead they all rely on the judgement of the human considering the question. Utilitarianism starts with just one or two principles (depending on how you describe them) and consistently applies them to every new situation.
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.
Arepo
 
Posts: 1065
Joined: Sun Oct 05, 2008 10:49 am

Re: Thunder from Down Under

Postby Benjamin Martens on 2013-06-30T11:26:00

Thank you for correcting the error in my understanding. Now, with definitions settled, on to the argument.

Utilitarians posit positive/negative emotion and (typically, though not necessarily in my view) ‘value’ which supervenes on them. Deontologists accept the existence and value of positive and negative emotions, but also claim that there are such things as (typically) discrete acts, multiple universal imperatives, agents, discrete causality etc.


You're isolating concepts of chief importance to two moral theories. Utilitarians posit emotion and value and utility, without which utilitarianism is mere hedonism. Deontologists posit imperatives and value. For me, parsimony sees deontology as simpler. Also, each additional concept believed necessary for deontology is important to deontologists and to utilitarians without being central to either; that is, agents and discrete acts, for example, are only important for questions of applied ethics: agents carry out imperatives via discrete acts, and agents maximise value through discrete acts. Those concepts are equally relevant to both theories, provided they are applied, not normative.

If you ask a utilitarian for a reason not to torture someone, they’ll say that torture tends to decrease net welfare (and maybe that high net welfare is good). If you ask a deontologist for a reason not to torture someone, they’ll say that torture tends to decrease net welfare (and maybe that high net welfare is good), and also that the torturing agent does some kind of intangible wrong by transgressing the right of the torturee to not be hurt (although they’d probably add the wrinkle that hurting someone with their consent doesn’t do so).


No. A utilitarian will say that torture decreases utility. A deontologist will say that torture violates an imperative. Both representatives make only a single declaration. (To my knowledge, deontologists do not invoke utility. Each explanation appeals to an implied system of value, so there is no need for "and maybe that x is good".)

However, you might reply that in every case the utilitarian refers only to utility, while, in each case, the deontologist refers to a different imperative. Thus the utilitarian is more parsimonious. This is true--but only on the surface. For it's easy to say, when deliberating over practical matters, "increase utility!", but it's difficult to decide how to actually do this. The deontologist doesn't have this problem. There may be 200 imperatives, but they are easy to follow, or it takes little effort to know which imperative applies to each situation, and so on. I know that, when in a riot, I shouldn't kill anyone; but actually maximising utility, now, that's hard to do, at least if you want to go beyond simple adherence to categorical imperatives.

In conclusion: there is a normative/applied distinction in ethics. In normative ethics, deontology is conceptually simpler than utilitarianism, as it invokes one fewer fundamental concept. In applied ethics, deontology is simpler because, in the real world, "increase utility" is not easily understood at all, while, say, "do not murder" is.

Benjamin Martens
 
Posts: 8
Joined: Wed Dec 19, 2012 11:18 am


Re: Thunder from Down Under

Postby Arepo on 2013-07-01T12:55:00

Utilitarians posit emotion and value and utility, without which utilitarianism is mere hedonism.


I don’t see why ‘utility’ is needed – it’s just a label the early utilitarians gave their philosophy. To Bentham it seemed to be only a synonym for happiness/suffering gradients; to other writers it became a synonym for preference satisfaction; to cataloguers of the difference between the first two it became a meta-label to describe either. But it doesn’t play any important role in the conception or expression of the theory.

For me, parsimony sees deontology as simpler.


Suppose I were to try to programme a computer to be a deontologist. For the sake of sanity, let’s allow that the computer speaks English – but is very literal-minded. What’s the smallest number of words with which I could get it to function?

I would give the equivalent utilitarian computer a phrase like ‘Maximise expected positive emotion’.

The tricky bit is that I would then need to give it a working definition of ‘positive emotion’, since I’d somehow have to communicate the notion of feeling, and I’d have to give it some sort of subtraction method for negative emotion.

But any sane conception of deontology also features ‘positive emotion’ as one of its parameters, since deontologists will agree that less suffering is better than more suffering. So while there are certainly unanswered questions for util, deontology shares all of that complexity.

So how many words would you need to use to make a deontologyAI function? Let’s assume that for any common ground between DAI and UAI, like ‘positive emotion’, we can omit it, or assume that the AI has solved the problem for us, or whatever. What unique-to-DAI instructions would you give the programme?

(If you want to be demanding, you could make me spell out what ‘maximising expectation’ means, but it’s a mathematically well-defined concept, so I could probably do so in about 30 or 40 words, and a mathematician could probably do so more parsimoniously still.)
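
To make that concrete, here’s a minimal sketch of what UAI’s whole decision procedure might look like, assuming we hand it a world-model for free – all the action names, probabilities and welfare numbers below are hypothetical illustrations:

```python
# Toy sketch of UAI's one instruction, 'Maximise expected positive emotion'.
# Everything here (names, probabilities, welfare numbers) is hypothetical.

def expected_welfare(action, outcomes):
    """Probability-weighted sum of net welfare over an action's possible outcomes."""
    return sum(p * w for p, w in outcomes[action])

def choose(outcomes):
    """Pick the action with the highest expected net welfare."""
    return max(outcomes, key=lambda a: expected_welfare(a, outcomes))

# Each action maps to (probability, net welfare) pairs, where negative
# emotion has already been subtracted from positive emotion.
outcomes = {
    "donate":     [(0.9, 10), (0.1, -2)],
    "do_nothing": [(1.0, 0)],
}
print(choose(outcomes))  # -> donate (expected welfare 8.8 vs 0)
```

Note that the only normative content is the single max over expected net welfare; everything else is world-modelling that any moral AI, deontological or utilitarian, would need anyway.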

Also, each additional concept believed necessary for deontology is important to deontologists and to utilitarians without being central to either; that is, agents and discrete acts, for example, are only important for questions of applied ethics: agents carry out imperatives via discrete acts, and agents maximise value through discrete acts. Those concepts are equally relevant to both theories, provided they are applied, not normative.


A physical description of the universe will not include ‘agents’ or ‘discrete acts’. An actual utilitarian might think of such things as heuristics that suit the human mind, but they’re not included anywhere in the theory he’s trying to follow. That’s probably what you mean by applied rather than normative, but for deontology they are normative – e.g. Kant’s treatment of people as ends in themselves. For util, they aren’t.

To my knowledge, deontologists do not invoke utility.


Of course they do! The deontologist who’d say ‘no-one is wronged in either situation x or y, and we know that everyone is happy in x and everyone suffers in y, but we don’t think there’s any reason to choose x over y’ doesn’t exist. Everyone likes happiness and dislikes suffering, but the typical criticism levelled against utilitarianism is precisely that it’s too simple in only focusing on such things. I just happen to think that that’s its greatest strength.

it's easy to say, when deliberating over practical matters, "increase utility!", but it's difficult to decide how to actually do this. The deontologist doesn't have this problem.


He does if he wants to ever actually do *anything*. It’s easy enough to write an algorithm that doesn’t give any instruction (you just leave the algorithm blank), but as soon as that algorithm becomes applicable to the real world, it has to contend with real world situations in just the same way as the ‘maximise utility’ algorithm would. If you think that’s simple, then try programming a computer (even our friendly English-speaking computer above) to specifically eschew killing people while it’s going about any other goal. You’d have to tell it what death is (and whether advances in medical technology change that), what people are, what kills them etc.

You’d also have to tell it about stuff like risk and expectation anyway, unless you wanted it to literally follow the deontological imperative of being certain never to kill anyone itself, in which case it would probably never do anything – or maybe it would just crash – since ‘doing nothing’ is obviously doing something. If you gave it a more intelligible instruction like ‘minimise the chance that you kill someone’, it would just huddle in a corner until it died of thirst.

Also, you’d have to give it some independent instructions on how to deal with situations where all options were apparently prohibited – go right to kill Bob, left to kill Victor, or stand as you are to kill Amy. And if it was something resembling a Kantian, you’d have to give it some way of dealing with the contradiction of having multiple imperatives, like ‘don’t lie’ and ‘don’t kill’, since if you just ranked one over the other it would completely ignore the lower-ranked one and put all of its efforts into following the higher.
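
A minimal sketch of those last two problems, with hypothetical action names and risk numbers throughout – rank ‘don’t kill’ lexically above ‘don’t lie’ and the lower imperative never gets a look-in, while ‘minimise the chance that you kill someone’ duly sends the machine to its corner:

```python
# Toy sketch of a DAI with lexically ranked imperatives. All names and risk
# numbers are hypothetical. Tuples compare element-wise, so the lower-ranked
# imperative only ever matters when the higher-ranked ones are exactly tied.

def risk_of_killing(action):
    # Even inaction carries some background risk of killing someone.
    return {"rescue_bob": 0.02, "do_nothing": 0.01, "huddle_in_corner": 0.001}[action]

def risk_of_lying(action):
    return {"rescue_bob": 0.0, "do_nothing": 0.0, "huddle_in_corner": 0.0}[action]

def lexical_score(action):
    # 'Don't kill' strictly outranks 'don't lie'.
    return (risk_of_killing(action), risk_of_lying(action))

actions = ["rescue_bob", "do_nothing", "huddle_in_corner"]
print(min(actions, key=lexical_score))  # -> huddle_in_corner
```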
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.
Arepo
 
Posts: 1065
Joined: Sun Oct 05, 2008 10:49 am


