Utilitarians posit emotion and value and utility, without which utilitarianism is mere hedonism.
I don’t see why ‘utility’ is needed – it’s just a label the early utilitarians gave their philosophy. To Bentham it seemed to be only a synonym for happiness/suffering gradients; to other writers it became a synonym for preference satisfaction; to cataloguers of the difference between the first two it became a meta-label to describe either. But it doesn’t play any important role in the conception or expression of the theory.
For me, parsimony sees deontology as simpler.
Suppose I were to try to programme a computer to be a deontologist. For the sake of sanity, let’s allow that the computer speaks English – but is very literal minded. What’s the smallest number of words in which I could get it to function?
I would give the equivalent utilitarian computer a phrase like ‘Maximise expected positive emotion’.
The tricky bit is that I would then need to give it a working definition of ‘positive emotion’, since I’d somehow have to communicate the notion of feeling, and I’d have to give it some sort of subtraction method for negative emotion.
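For concreteness, here’s a minimal sketch (in Python, since our English-speaking computer is fictional) of what that one-line instruction amounts to. The genuinely hard parts are assumed away, just as in the thought experiment: `net_emotion` and `outcome_distribution` are hypothetical oracles standing in for the unsolved problem of defining and measuring feeling.

```python
# A minimal sketch of 'Maximise expected positive emotion'. The hard parts are
# assumed away: net_emotion and outcome_distribution are hypothetical oracles
# standing in for the unsolved problem of defining and measuring feeling.

def net_emotion(outcome):
    """Hypothetical: positive emotion minus negative emotion in this outcome."""
    raise NotImplementedError  # the tricky bit described above

def outcome_distribution(action):
    """Hypothetical: list of (probability, outcome) pairs for taking this action."""
    raise NotImplementedError

def expected_net_emotion(action):
    return sum(p * net_emotion(o) for p, o in outcome_distribution(action))

def utilitarian_choice(available_actions):
    # The entire normative content of the UAI is this one comparison.
    return max(available_actions, key=expected_net_emotion)
```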
But any sane conception of deontology also features ‘positive emotion’ as one of its parameters, since deontologists will agree that less suffering is better than more suffering. So while there are certainly unanswered questions for util, deontology shares all of that complexity.
So how many words would you need to use to make a deontology AI function? Let’s assume that for any common ground between DAI and UAI, like ‘positive emotion’, we can omit it, or assume that the AI has solved the problem for us, or whatever. What unique-to-DAI instructions would you give the programme?
(If you want to be demanding, you could make me spell out what ‘maximising expectation’ means, but it’s a mathematically well-defined concept, so I could probably do so in about 30 or 40 words, and a mathematician could probably do so more parsimoniously still.)
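As a sketch of what ‘spelling it out’ could look like, assuming outcomes arrive as a discrete list of probability/value pairs:

```python
import math

def expectation(distribution):
    """distribution: list of (probability, value) pairs, probabilities summing to 1."""
    return sum(p * v for p, v in distribution)

# e.g. a gamble with a 0.3 chance of +10 and a 0.7 chance of -2:
assert math.isclose(expectation([(0.3, 10), (0.7, -2)]), 1.6)
```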
Also, each additional concept believed necessary for deontology is important to deontologists and to utilitarians without being central to either; that is, agents and discrete acts, for example, are only important for questions of applied ethics: agents carry out imperatives via discrete acts, and agents maximise value through discrete acts. Those concepts are equally relevant to both theories, provided they are applied, not normative.
A physical description of the universe will not include ‘agents’ or ‘discrete acts’. An actual utilitarian might think of such things as heuristics that suit the human mind, but they’re not included anywhere in the theory he’s trying to follow. That’s probably what you mean by applied rather than normative, but for deontology they are normative – e.g. Kant’s talk of people as ends in themselves. For util, they aren’t.
(To my knowledge, deontologists do not invoke utility.
Of course they do! The deontologist who’d say ‘no-one is wronged in either situation x or y, and we know that everyone is happy in x and everyone suffers in y, but I don’t think there’s any reason to choose x over y’ doesn’t exist. Everyone likes happiness and dislikes suffering, but the typical criticism levelled against utilitarianism is precisely that it’s too simple in only focusing on such things. I just happen to think that that’s its greatest strength.
it's easy to say, when deliberating over practical matters, "increase utility!", but it's difficult to decide how to actually do this. The deontologist doesn't have this problem.
He does if he wants to ever actually do *anything*. It’s easy enough to write an algorithm that doesn’t give any instruction (you just leave the algorithm blank), but as soon as that algorithm becomes applicable to the real world, it has to contend with real-world situations in just the same way as the ‘maximise utility’ algorithm would. If you think that’s simple, then try programming a computer (even our friendly English-speaking computer above) to specifically eschew killing people while it’s going about any other goal. You’d have to tell it what death is (and whether advances in medical technology change that), what people are, what kills them, etc.
You’d also have to tell it about stuff like risk and expectation anyway, unless you wanted it to literally follow the deontological imperative of being certain never to kill anyone itself, in which case it would probably never do anything – or maybe it would just crash – since ‘doing nothing’ is obviously doing something. If you gave it a more intelligible instruction like ‘minimise the chance that you kill someone’, it would just huddle in a corner until it died of thirst.
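A toy sketch of that failure mode, with invented actions and invented kill probabilities:

```python
# Invented actions and invented kill probabilities, purely for illustration.
candidate_actions = {
    "fetch water for the village": 0.001,  # tiny chance of a fatal accident
    "drive someone to hospital": 0.005,
    "huddle in the corner": 0.0,           # never risks killing anyone
}

def minimise_kill_chance(actions):
    # No other consideration enters the comparison at all.
    return min(actions, key=actions.get)

print(minimise_kill_chance(candidate_actions))  # -> huddle in the corner
```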
Also, you’d have to give it some independent instructions on how to deal with situations where all options were apparently prohibited – go right to kill Bob, left to kill Victor, or stand as you are to kill Amy. And if it was something resembling a Kantian, you’d have to give it some way of dealing with the contradiction of having multiple imperatives, like ‘don’t lie’ and ‘don’t kill’, since if you just ranked one over the other it would completely ignore the lower-ranked one and put all of its efforts into following the higher.
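And a toy sketch of that ranking problem, again with invented numbers: if ‘don’t kill’ is strictly ranked above ‘don’t lie’, the comparison is lexicographic, so any reduction in kill-risk outweighs any amount of lying and the lower imperative never gets a say.

```python
# Invented figures: (kill_risk, lies_told) for each hypothetical option.
options = {
    "honest plan": (0.0010, 0),
    "lie constantly, all day": (0.0009, 500),
}

def ranked_choice(opts):
    # Python's tuple comparison is lexicographic: kill_risk decides first,
    # so lies_told only ever acts as a tie-breaker.
    return min(opts, key=opts.get)

print(ranked_choice(options))  # -> lie constantly, all day
```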
Elijah wrote: I'm math-illiterate too, for what it is worth. So is E. O. Wilson.
Don't worry about math literacy too much.
This seems like poor advice. I think it’s fair to say there’s a soft cap on how much mathematical literacy has practical use, but that it’s probably higher than anyone who hasn’t studied maths at undergrad level has actually reached.
I would love to know more about statistical reasoning than I do – it’s only a combination of more immediately practical self-training and akrasia that keeps me from educating myself in it.