Meta-Ethical Uncertainty

Postby Gedusa on 2012-06-04T20:01:00

Inspired by conversations with random moral realists and Will Crouch's latest paper...

Moral uncertainty seems mainly to apply to moral realists and to certain types of subjectivists/non-cognitivists. Such people worry about the consequences of being wrong about their ethical beliefs, and try to take this uncertainty into account.

Some types of non-cognitivists (probably emotivists?) don't need to worry about any of this. On such views it seems pretty implausible that we can be wrong about our values - so if you assign high credence to a meta-ethical view of this sort, then you should reject the argument for moral uncertainty.

But! Assigning absolute confidence to a meta-ethical theory like the one above seems wrong; you should have some uncertainty. And this leaves us with a problem: moral uncertainty suddenly starts to affect you. If you assign some confidence to moral realism (hereafter used as shorthand for any meta-ethical theory vulnerable to moral uncertainty), then you're forced into taking some account of moral uncertainty in your actions.

There are really two versions of this argument though - a strong one and a weak one. The strong version assumes that the only views moral uncertainty doesn't apply to are nihilistic ones - views on which there is no (moral) reason to do anything. In that case, even if you're pretty sure nihilism is true and pretty sure moral realism isn't, you should still act as if moral realism were entirely true: if nihilism is true then nothing you do matters anyway, so all your action-guiding reasons come from the realist possibility. You should therefore take complete account of moral uncertainty.

The weak version allows views which give us some sort of (moral) reason to act, yet are fairly nihilistic/non-realist. For example, I'm pretty sure that emotivism gives us some reason to do things, and that it even makes sense to talk about those reasons in a moral sense. I'm not sure how likely this view is, but it seems like if you could say that your confidence in such a theory was pretty high, and the reasons for action as strong as realist ones - then the effect of moral realism (and so uncertainty) is diluted. The exact dilution depends on the credence assigned to realism and the strength of reason for action; a rough sketch of that calculation is below.
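
To make the dilution concrete, here is a toy calculation (my own illustration - the numbers are invented, and none of this comes from Will's paper). Weight each meta-ethical view by your credence in it times the strength of the reasons for action it supplies, and look at realism's share of the total:

```python
# Toy model of the "dilution" idea above. All numbers are made up for
# illustration; nothing here comes from Will's paper.
# Each meta-ethical view contributes (credence x strength of the moral
# reasons it supplies) to the total weight behind your actions.

views = {
    # name: (credence, strength of reasons for action, on a 0-1 scale)
    "realism":   (0.2, 1.0),  # vulnerable to moral uncertainty
    "emotivism": (0.5, 1.0),  # weak version: reasons as strong as realist ones
    "nihilism":  (0.3, 0.0),  # no reasons to do anything
}

total_weight = sum(credence * strength for credence, strength in views.values())
realism_share = (views["realism"][0] * views["realism"][1]) / total_weight

print(f"Realism's share of action-guiding weight: {realism_share:.0%}")
# Prints 29%: with these numbers, the push toward fully accommodating
# moral uncertainty is diluted to under a third of the total.
```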

I still don't know enough about meta-ethics obviously, and this is just a half-baked idea. I'd be happy for links to relevant literature.

Thoughts anyone?

(Will's quote explaining this)
I’ll assume a meta-ethical view compatible with the existence [of] moral uncertainty. As has been convincingly argued by Andrew Sepielli, this assumption only rules out a very small number of moral views. I’m also going to assume that nihilism is false: so all the probability judgments that I discuss are conditional on there existing positive moral facts.
World domination is such an ugly phrase. I prefer to call it world optimization

Re: Meta-Ethical Uncertainty

Postby Brian Tomasik on 2012-06-05T09:48:00

Thanks, Gedusa. I'm not an expert in metaethics either, so take what I say as a novice's answer.

Eliezer talks about the Pascalian wager for moral realism in a nice essay, and it's actually a very common argument. My response is that moral realism isn't so much a possibility to which I would assign some probability. Rather, moral realism is a confused concept. It would be like assigning a probability to 1+1=3.

Maybe there are systems in which you could assign probabilities less than 1 to logical truths without undercutting the foundation within which you're speaking. I agree that to the extent that probability expresses a feeling, it does feel like logic could be wrong. But I don't know how to express this without talking nonsense.

The brain processes that lead me to believe that moral realism is confused could also be wrong. Indeed, I'm not 100% sure that 53 is a prime number. As I said in an old blog post:
I've done enough math homework problems to know that my probability of making an algebra mistake is not only nonzero but fairly high. And it's not incoherent to reason about errors of this type. For instance, if I do a utility calculation involving a complex algebraic formula, I may be uncertain as to whether I've made a sign error, in which case the answer would be negated. It's perfectly reasonable for me to assign, say, 90% probability to having done the computation correctly and 10% to having made the sign error and then multiply these by their corresponding utility-values-if-correct-computation. There's no mystery here: I'm just assigning probabilities over the conceptually unproblematic hypotheses "Alan got the right answer" vs. "Alan made a sign error."

In practice, of course, it's rarely useful to apply this sort of reasoning, because the number of wrong math answers is, needless to say, infinite. [...] When someone objects to a rationalist's conclusion about such and such on the grounds that "Your cognitive algorithm might be flawed," the rationalist can usually reply, "Well, maybe, sure. But what am I going to do about it? Which element of the huge space of alternatives am I going to pick instead?"
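
As an aside, the sign-error reasoning in the quoted passage is easy to spell out. A minimal sketch - the 90/10 split comes from the quote, while the utility figure is a made-up placeholder:

```python
# Expected utility under the sign-error worry from the quoted passage.
# The 90%/10% split is from the quoted example; the utility value is made up.

p_correct = 0.9       # probability the algebra was done correctly
u_if_correct = 5.0    # hypothetical utility if the computation is right

# A sign error negates the answer, so the error branch contributes -u.
expected_u = p_correct * u_if_correct + (1 - p_correct) * (-u_if_correct)

print(expected_u)  # 0.9 * 5.0 + 0.1 * (-5.0) = 4.0
```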

Perhaps one answer to that question could be "Beliefs that fellow humans, running their own cognitive algorithms, have arrived at." After all, those people are primates trying to make sense of their environment just like you are, and it doesn't seem inconceivable that not only are you wrong but they're actually right. This would seem to suggest some degree of philosophical majoritarianism.

So maybe the Pascalian argument does hold some water -- but only if we're prepared to accept that it also does in the case of libertarian free will, dualism, and theism. :) In any event, what I'll say is that in practice, I don't care much about the Pascalian possibilities for moral realism. This gets at the heart of emotivism in general: I care about what I care about, and if I don't feel emotionally that I'm obligated to follow this (apparently absurd) conclusion, then I won't.

In any event, it's not as though nothing matters if moral realism is false; it's just that nothing objectively matters. Things still matter to me. I don't know how to linearly combine the cases where moral realism is true and where it's false to come up with an expected-value answer. You were getting at this same point when you said, "but it seems like if you could say that your confidence in such a theory was pretty high, and the reasons for action as strong as realist ones - then the effect of moral realism (and so uncertainty) is diluted."
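
The difficulty with the linear combination can be made concrete (my own toy illustration with invented numbers, not anything Brian or Will commits to): the two hypotheses score actions in units with no shared scale, so the combined ranking depends on an arbitrary exchange rate.

```python
# Why the linear combination is underdetermined: the "realism true" and
# "realism false" hypotheses score actions in units with no shared scale,
# so the ranking flips with an arbitrary exchange rate k. All numbers
# and names here are made up for illustration.

p_realism = 0.1

def combined_value(realist_score: float, personal_score: float, k: float) -> float:
    """Naive expected value, converting realist units via exchange rate k."""
    return p_realism * k * realist_score + (1 - p_realism) * personal_score

# Two hypothetical actions: A scores well if realism is true, B if it's false.
a = {"realist": 10.0, "personal": 1.0}
b = {"realist": 0.0, "personal": 2.0}

for k in (0.1, 10.0):
    va = combined_value(a["realist"], a["personal"], k)
    vb = combined_value(b["realist"], b["personal"], k)
    print(f"k={k}: A={va:.2f}, B={vb:.2f} -> prefer {'A' if va > vb else 'B'}")
# k=0.1 favors B (1.00 vs 1.80); k=10.0 favors A (10.90 vs 1.80).
# Nothing in either hypothesis tells us which k to use.
```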

All of that said, I don't think moral uncertainty is totally useless, for the following reason: I (happen to) care (to some degree) about changes in opinion that my future self would undergo if it learned more about how the world works and experienced a wider variety of emotions and life-events. For example, I think it's important to study mechanisms of suffering and whether they exist in insects, and after that study, my judgment about whether insects suffer in a morally relevant way will be a better one than it is now. So moral disagreement does matter in the sense that it offers candidates for what a more-informed version of myself might come to believe upon further research.

But this doesn't lead to full-blown ethical majoritarianism or coherent extrapolated volition. Ultimately, it's still my feelings that I care about, and the feelings of others are only relevant as evidence insofar as they come from minds with a high degree of similarity to my own. The more I think another mind is emotionally different from mine, the less weight I give to its conclusions. I give almost no weight to suffering-maximizer minds (which are not just a lofty thought experiment but must literally exist somewhere in our multiverse).

How much weight I want to give to what I might feel in other circumstances is itself subject to my emotions. It's a tune-able parameter based on how strongly I feel that doing this matters.

---

ETA, 7 April 2013:

So why don't I give high weight to moral uncertainty, even though I've changed my ethical views many times? It's because I happen not to care that much about future changes to my values, except in cases where I think I don't know enough about the situation to form a judgment at all (e.g., with respect to whether insects suffer).

My moral intuitions change based on fluctuations in my neurochemistry, as well as in the longer term based on what kinds of thinkers have inspired me lately. There's not necessarily "progress" happening here: It's just like a leaf in an alleyway getting blown back and forth in various directions by the wind. Why not just go with what I want where I am rather than trying to imagine the average of all possible places I could be blown? What would that average even look like? Should I include the possibility of being brainwashed to think that needless torture was wonderful?

I'm pretty confident that there's not a unique stance to which my brain would converge over a broad set of modifications, influences, and experiences. Where I would end up depends a lot on my neurochemistry and life history, and I realize how path-dependent it is. This doesn't mean I'm inspired to imagine what other places I might have ended up upon different rolls of the dice. I would then have to decide how to pick the weightings for different possible histories, and that choice would itself be arbitrary. If it's all arbitrary, why not just do what I feel is right now? That can include giving weight to what my future self thinks about a given values-based question, but only insofar as I care about doing that.

Re: Meta-Ethical Uncertainty

Postby Arepo on 2012-06-05T10:17:00

Holly talked a bit about this at the Wales weekend. In fairness, she was mainly just trying to raise the idea for those who hadn't considered it and to offer an introduction to it. Nonetheless, I'm thus far quite unconvinced by the arguments from Will and Toby that she offered, for much the same reason as Alan:

Alan Dawrst wrote:Eliezer talks about the Pascalian wager for moral realism in a nice essay, and it's actually a very common argument. My response is that moral realism isn't so much a possibility to which I would assign some probability. Rather, moral realism is a confused concept. It would be like assigning a probability to modus ponens being false.


In which case it might be that the main value of arguments about moral uncertainty is that they function as a reductio ad absurdum for certain kinds of metaethical theory.

Anyway, I'll read Will's paper before I comment any further.
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.

Re: Meta-Ethical Uncertainty

Postby Mestroyer on 2012-06-09T05:19:00

For a Pascal's-wager-type argument to work on me, I would have to want to do the thing you are arguing for if I knew that the unlikely statement was true. But I would be proud to be part of the red team if the blue team was the one that wanted to torture everyone forever and the red team opposed them, and proud to be part of the blue team if it was the red team that wanted to torture everyone forever. In just the same way, I would be proud to be part of the good team only if it was the evil team that wanted to torture everyone forever, and proud to be part of the evil team if it was the good team that wanted it.

As a biological robot, I have no programming in me that says "be good." There is also no programming that says "be red." There is, however, some that says "don't let everyone be tortured forever." I don't pursue the goals that I do because I have a goal of pursuing the goals that have some kind of special metaphysical property (rightness, or goodness), but because they (the ones I do pursue) are the goals I am programmed to pursue.


I definitely want to have uncertainty over the proper means to get what I think is good, for the obvious reason that it helps me discard ineffective or counterproductive methods of achieving those goals.

I also want to have uncertainty about my end goals, because many of them (being formed from intuitions) contradict each other. So each intuition is compelling me to start ignoring certain others. And I have another end goal - to get rid of contradictions in my thinking - which leads me to pit them against each other in arenas of thought experiments.


Re: Meta-Ethical Uncertainty

Postby Brian Tomasik on 2012-06-09T05:51:00

Mestroyer wrote:As a biological robot, I have no programming in me that says "be good." There is also no programming that says "be red." There is, however, some that says "don't let everyone be tortured forever."

Exactly. This is the precise problem with moral realism.

Mestroyer wrote:I also want to have uncertainty about my end goals, because many of them (being formed from intuitions) contradict each other.

Yes. This, IMO, is the main area where ethical uncertainty has a legitimate role to play.

Re: Meta-Ethical Uncertainty

Postby AndrewSepielli on 2012-06-17T20:47:00

Hi everyone. Jake Ross at USC advances what y'all are calling "Pascalian arguments" for accepting moral objectivism in both his Ph.D. dissertation and a 2006 paper in the journal Ethics. His stuff is a must-read on this topic. I particularly recommend the dissertation, just because it develops the arguments in greater detail than the paper is able to.

As for non-cognitivism -- this is really tricky territory. My paper, which Will Crouch wrongly characterizes as containing convincing arguments, is really only an attempt to reply to a particular argument -- Michael Smith's -- that there can't be anything like moral uncertainty if non-cognitivism is true. (Smith, btw, tries to show on this basis that non-cognitivism is false, not that there's no moral uncertainty.) I think any version of non-cognitivism that can surmount the Frege-Geach problem can also surmount Smith's problem.

Now, Gedusa is pushing a different concern about non-cognitivism and moral uncertainty. If I'm understanding the argument, it's that (1) the non-cognitivist doesn't think she could be wrong about morals, but (2) one would only be morally uncertain if one thought one could be wrong about morals, so (3) one would not be morally uncertain if one were a non-cognitivist.

Here are my worries about this argument: First, I don't immediately see why (2) is true. If someone has a brief argument for it, I'd be interested in hearing it. But more importantly, it's far from obvious that (1) is right. Consider a counterfactual statement like "Rape would be wrong even if I thought it was right". This is something that the non-cognitivist will see as internal to normative discourse. The non-cognitivist will disagree with the cognitivist about whether this statement expresses a cognitive or non-cognitive state, but she is just as apt to have/express this state as the cognitivist is. But if it's possible for X to be F while I think X is not F, then it's possible for me to be wrong about whether X is F. At least, that's what it looks like at first glance.

But now we might imagine the non-cognitivist thinking not about rape and the possible worlds where it's wrong, but about the norms governing the making of moral judgments. She might say "Well, beliefs are the kinds of things that can be correct or incorrect. But moral judgments aren't beliefs; they're more like feelings. And feelings can't be correct or incorrect. So it turns out I can't be wrong about morals after all." We can imagine our non-cognitivist giving up her initial within-the-discourse claim based on this claim about the norms governing the making of moral judgments, but we can also imagine her giving up her claim that moral judgments can't be correct or incorrect based on her within-the-discourse claim. Again, it's a tough question.


Re: Meta-Ethical Uncertainty

Postby Brian Tomasik on 2012-06-18T06:26:00

Welcome, Andrew!!

AndrewSepielli wrote:Here are my worries about this argument: First, I don't immediately see why (2) is true.

I don't think (2) is true, because of this statement I made earlier: "I (happen to) care (to some degree) about changes in opinion that my future self would undergo if it learned more about how the world works and experienced a wider variety of emotions and life-events." However, this is the only way in which I can see non-cognitivists caring about moral uncertainty.

AndrewSepielli wrote:The non-cognitivist will disagree with the cognitivist about whether this statement expresses a cognitive or non-cognitive state, but she is just as apt to have/express this state as the cognitivist is.

Why? As above, the only reason she would be in this state is if she didn't know if she really thinks rape was right or not. If she's certain that she thinks rape is right, there's no question anymore.

Re: Meta-Ethical Uncertainty

Postby AndrewSepielli on 2012-06-18T21:15:00

AndrewSepielli wrote:The non-cognitivist will disagree with the cognitivist about whether this statement expresses a cognitive or non-cognitive state, but she is just as apt to have/express this state as the cognitivist is.

Brian Tomasik wrote:Why? As above, the only reason she would be in this state is if she didn't know if she really thinks rape was right or not. If she's certain that she thinks rape is right, there's no question anymore.


Non-cognitivism is not the view that what's right depends on our feelings. It's the view that moral claims express our feelings. But that a claim expresses a mental state does not imply that the truth of that claim depends upon that mental state. For example, I am expressing a belief when I say "Jupiter is more massive than Earth is", but this by no means implies that the truth of this claim depends on my beliefs. So the non-cognitivist is not committed to the view that in all possible worlds where she approves of X, X is right.


Re: Meta-Ethical Uncertainty

Postby Brian Tomasik on 2012-06-20T13:19:00

I apologize for not being proficient with the standard terminology, but your reply has me somewhat confused. :)

AndrewSepielli wrote:Non-cognitivism is not the view that what's right depends on our feelings. It's the view that moral claims express our feelings.

Yes, this was my understanding. And moreover, there isn't any content behind discussion about "what's right." There is just our feelings. No?

AndrewSepielli wrote:But that a claim expresses a mental state does not imply that the truth of that claim depends upon that mental state.

But I thought non-cognitivists didn't believe in moral truth. Morality is as simple as how your emotions play out.

Re: Meta-Ethical Uncertainty

Postby AndrewSepielli on 2012-06-21T03:14:00

Alan Dawrst wrote:I apologize for not being proficient with the standard terminology, but your reply has me somewhat confused. :)

AndrewSepielli wrote:Non-cognitivism is not the view that what's right depends on our feelings. It's the view that moral claims express our feelings.

Yes, this was my understanding. And moreover, there isn't any content behind discussion about "what's right." There is just our feelings. No?


I don't know exactly what you mean by "there is just our feelings". Your characterization is a good one of the versions of non-cognitivism that were popular before WWII -- Ayer's, for example. Proponents of more sophisticated versions of non-cognitivism will reject this characterization. Check out the non-cognitivism entry on SEP for some examples.

Alan Dawrst wrote:
AndrewSepielli wrote:But that a claim expresses a mental state does not imply that the truth of that claim depends upon that mental state.

But I thought non-cognitivists didn't believe in moral truth. Morality is as simple as how your emotions play out.


Some non-cognitivists do. But anyway, I wasn't saying that non-cognitivists think moral statements can be true. I was thinking that was what YOU were thinking, because I was thinking that you were confusing non-cognitivism with subjectivism. Why? Well, you wrote: "the only reason [a non-cognitivist] would be [morally uncertain] is if she didn't know if she really thinks rape was right or not". But it's the subjectivist who would think that moral uncertainty was uncertainty about one's own feelings, not the non-cognitivist.

As for your claims that "morality is as simple as how your emotions play out" -- I don't know exactly what this means. As I explained, the non-cognitivist is in no way obviously committed to statements like "What's right depends on my emotions", any more than the cognitivist is committed to "What's right depends on my beliefs". Now, you might think "Well, look, beliefs can be correct or incorrect, but emotions/feelings/pro-attitudes can't be correct or incorrect. So that's why there can't be any right answers about morality if non-cognitivism is true" (or something to that effect -- I don't want to quibble). And indeed, I admitted in my first post that that's reason for thinking that, even though many non-cognitivists do think that there are objectively right answers about morals, that non-cognitivism commits you to the view that there are no such right answers. But if you wanted to take this route, you'd need an argument that emotions, etc. can't be correct/incorrect. And as I suggested in the first post, there's quick-and-easy argument to the contrary.


Re: Meta-Ethical Uncertainty

Postby Brian Tomasik on 2012-06-21T11:21:00

AndrewSepielli wrote:Your characterization is a good one of the versions of non-cognitivism that were popular before WWII -- Ayer's, for example.

Yes. I often say that emotivism is the best description of my views on metaethics.

AndrewSepielli wrote:Proponents of more sophisticated versions of non-cognitivism will reject this characterization. Check out the non-cognitivism entry on SEP for some examples.

Which did you have in mind?

AndrewSepielli wrote:As for your claims that "morality is as simple as how your emotions play out" -- I don't know exactly what this means. As I explained, the non-cognitivist is in no way obviously committed to statements like "What's right depends on my emotions", any more than the cognitivist is committed to "What's right depends on my beliefs".

I would drop the confusing "What's right" terms and just say, "I have emotions about how I want things to be."

Re: Meta-Ethical Uncertainty

Postby Arepo on 2012-06-21T13:50:00

Still haven’t read Will’s paper - I’m not exactly plowing through my reading list at the moment - so here’s one of my main problems with the expected utility argument:

If I’m uncertain about a factual claim, it means there are two or more exclusive possible states of affairs I can conceive - for example, ‘it is raining’ and ‘it is not raining’.

If I’m uncertain about a moral claim, and we’re drawing this analogy, it should mean there are two or more exclusive possible norms I can conceive - for example, ‘you ought never to kill’ and ‘you ought sometimes to kill’.

But in the first example, subject to some linguistic vagueness about how much water is necessary for ‘rain’, I will continue to be able to cope with the world whether the proposition turns out to be true or false, no matter how convinced I had been of the opposite.

In the second example there’s the same resolvable vagueness about ‘kill’, but unlike the verb ‘is’, the (auxiliary) verb ‘ought’ means nothing to me, so I can conceive of neither the truth nor the falsity of the proposition.

Normativity seems to be a concept that’s a) ineffable, and b) unnecessary for me to understand in order to get through life. Applying Occam’s Razor, the best explanation is that it’s a nonconcept, merely disguised as a concept by familiarity (comparable maybe to ‘a noncontradictory square circle’).

I am open (just) to the possibility that normativity exists/has relevance to the universe, but really only in the sense that I’m open to the possibility that all my probability calculations have been wrong and the universe, right down to the basics of logic, is nothing like I understand it at all. That is to say, I can’t say it’s even probably false (since that would invoke concepts that wouldn’t apply if it were), but conditional on it being so, the universe bears no relation to what I think it is.

This is a slight exaggeration in that, given the widespread belief in norms, it seems slightly more plausible that someone could persuade me that norms are at least a conceivable concept. But I know of no-one who’s been able to come close, and hardly anyone who’s even tried to describe what an actual norm would be without simply referring to another piece of normative language.

So in summary: if someone could merely explain the concepts that views like ‘you ought not to kill’ entail, then I could attach some level of practical probability to them. Given that no-one can do that, it only makes sense to attach to them a probability that is theoretically undefined but practically nil.

Suspect Alan will agree with all of this, though we still seem to manage to fundamentally disagree on what it means to us.
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.
User avatar
Arepo
 
Posts: 1065
Joined: Sun Oct 05, 2008 10:49 am

Re: Meta-Ethical Uncertainty

Postby Brian Tomasik on 2012-06-21T15:37:00

Arepo wrote:Suspect Alan will agree with all of this, though we still seem to manage to fundamentally disagree on what it means to us.

Yes. Your explanation was very similar to what I was getting at in my insanity blog post quoted above.

Re: Meta-Ethical Uncertainty

Postby Arepo on 2012-06-22T13:10:00

Am I right in taking from that that you're willing to say you're probably not fundamentally mistaken about these things you've checked multiple times?

That seems a step too far to me. I guess my problem is that we have one concept (probability) that we apply without modification to two conceptual structures – ontology and epistemology (three if you count ethics/expected value reasoning as separate from epistemology). While it kind of works for the latter, or at least we have no better option than to apply it to decision-making, it seems obviously impossible to apply it to the former.

We might very well be totally wrong about everything. It’s just impossible to function until we condition on the assumption that we’re not.
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.
User avatar
Arepo
 
Posts: 1065
Joined: Sun Oct 05, 2008 10:49 am

Re: Meta-Ethical Uncertainty

Postby Arepo on 2012-06-22T13:12:00

Btw, I've lost the link for Will's paper - can someone point me back to it?
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.
User avatar
Arepo
 
Posts: 1065
Joined: Sun Oct 05, 2008 10:49 am

Re: Meta-Ethical Uncertainty

Postby Brian Tomasik on 2012-06-22T15:16:00

Arepo wrote:Am I right in taking from that that you're willing to say you're probably not fundamentally mistaken about these things you've checked multiple times?

Hmm, I don't think I had that in mind. What led you to infer that?

Arepo wrote:We might very well be totally wrong about everything. It’s just impossible to function until we condition on the assumption that we’re not.

Exactly.

Will's paper is linked from the CEA thread.

Re: Meta-Ethical Uncertainty

Postby Jesper Östman on 2012-07-24T20:59:00

If you don't care intrinsically about morality (or doing the "right" thing, etc.), doesn't meta-ethical uncertainty become a non-issue? I think this has been pointed out earlier in the thread, but since it seems an important point I want to make sure.

Personally, I don't care about morality; I care about people, animals and so forth. This was one reason I originally identified as a utilitarian: other ethical systems seemed to imply that certain rules were important in themselves, regardless of whether following them helped or hurt sentient beings.

Perhaps I have made mistakes when trying to find out which things I do care about. But there seems to be no special reason to prioritize morality over other things I do not care about (for example, gods, rocks, various arrangements of atoms, or prime numbers).


Re: Meta-Ethical Uncertainty

Postby Bruno Coelho on 2012-07-27T08:19:00

Deriving a treatment of moral uncertainty from the way we handle empirical uncertainty does seem to have some promise.

The argument, in simple terms: attribute credences to rival theories in scenarios where there is no consensus - which means almost all of them. Most of the normative ethical debate assumes that this kind of intertheoretic comparison is impossible; there can be only one true theory, as the argument goes.

At the metaethical level things are not so good: the rival theories are compartmentalizations - the "isms". Maybe I'm being a bit unjust, but if you read the bibliography, you will find only disagreement.
