Occam's Razor and 'moral' epistemology


Occam's Razor and 'moral' epistemology

Postby Arepo on 2013-01-16T18:01:00

This is another thread from Facebook - there's basically just one post I want to be able to refer back to, relating to why I want to eliminate even talk of 'value' from ethics (re a conversation about what the phrase 'I value pleasure' means/evokes), so I'll just repost here:

Eliminating value related language would finally let us address the world we actually see in front of us, rather than the one we wish we saw.

Another way of thinking about it is that there are currently two obvious ‘primitive’ epistemological categories which we cannot address the world without – the physical and the mental. At the moment we can’t describe either in terms of the other, though most of us probably imagine that will change after some key breakthrough in neuroscience or perhaps philosophy of mind.

There is nothing stopping us from positing extra primitive categories (the logical is sometimes a useful category, though arguably it can usually/always be subsumed into the other two), but positing them is very much contrary to Occam’s Razor - the principle of parsimony (PoP).

The PoP, IMO, is one of the most important epistemological concepts we have, since it reduces the admissible explanations for *any* given phenomenon in any epistemological category (including the metacategory of epistemological categories) from infinity to a finite number (usually to 1).

While we can say in some cases that we need extra entities to explain a concept, this isn’t contrary to the PoP, so long as we keep those entities to the minimum *necessary* number. When we allow entities that offer no predictive power that a smaller subset of them can’t offer (predictive power is relevant to both/all three of the primitive categories above – uncontroversially so for the physical and probably for predicting mental states, but also for logical processes that we’re less accustomed to thinking of as predictions, such as ‘if you add 1 + 1 you’ll get two’), we are disregarding the PoP.

And that opens a whole new can of worms, since once we relinquish the PoP once, applying it anywhere else becomes basically self-contradictory (a contradiction between ‘posit the minimum necessary number of entities everywhere’ and ‘posit the minimum number of entities everywhere else, except posit this case as a special exception to (ie extra entity than) that rule’). So now you need to construct your world view without reference to it, but now you’ve allowed an infinite number of possible alternatives - since you’ve removed the ‘don’t allow an infinite number of possible alternatives’ clause from your worldview.

In which case, anyone positing ‘value’ as a primitive category either needs to show it has greater predictive power than the other categories would have without it, or come up with an entirely new worldview that admits infinitely many equally reasonable interpretations of any given phenomenon yet is still of some use to someone.

I don’t suppose the latter is possible, so the question is what value adds (no pun intended). I cannot think of any value statement that can’t be equally effectively construed with reference only to the other categories. ‘I value pleasure’, for example, might be something like ‘inasmuch as my logical, emotional and physical limitations allow it, I seek to maximise pleasure’.

But given that ‘I value pleasure’ is a primitive statement if value itself is primitive, it would be more accurate to translate it into the reduced set of primitive categories as merely ‘pleasure exists’.

If you think my version has lost information, then I challenge you to derive any prediction from yours that you can’t from mine.
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.
User avatar
Arepo
 
Posts: 1065
Joined: Sun Oct 05, 2008 10:49 am

Occam's Razor, Solomonoff Induction

Postby RyanCarey on 2013-01-17T06:37:00

Interesting. Ideas in computer science that relate to this, and to the idea of moral uncertainty, are the concepts of minimum message length (the shortest encoding of the data), Kolmogorov complexity (the length of the shortest program that reproduces the data) and Solomonoff induction (a weighted combination of all programs consistent with the data, weighted toward simpler ones) - roughly, how a computer with unlimited processing power would view the world.

I think that there should be such a thing as giving moral uncertainty the "Solomonoff treatment": assigning probabilities to all hypotheses about which criteria to use for decisions. It might mean something like an AIXI but with a complexity prior applied to both the utility function and the world-model. I would be interested to see how and whether this might work.
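As a rough illustration of the complexity-prior idea (a toy sketch of my own, not Solomonoff induction proper - the real thing sums over all programs for a universal Turing machine and is uncomputable): treat repeating binary patterns as stand-in 'programs', weight each pattern consistent with the observed data by 2^-length, and read off a prediction for the next bit.

```python
from fractions import Fraction

def repeating_patterns(max_len):
    """All binary patterns up to max_len, standing in for 'programs'
    that generate a data stream by repetition."""
    for length in range(1, max_len + 1):
        for i in range(2 ** length):
            yield format(i, f"0{length}b")

def predict_next_bit(observed, max_len=8):
    """Return P(next bit == '1') under a 2^-length complexity prior
    over all patterns whose repetition reproduces the observations."""
    weight_one = weight_total = Fraction(0)
    for pattern in repeating_patterns(max_len):
        stream = pattern * (len(observed) // len(pattern) + 2)
        if stream.startswith(observed):          # hypothesis fits the data
            w = Fraction(1, 2 ** len(pattern))   # shorter => heavier weight
            weight_total += w
            if stream[len(observed)] == "1":
                weight_one += w
    return weight_one / weight_total

print(float(predict_next_bit("010101")))
```

With input "010101" the short pattern "01" carries most of the prior weight, so the predicted probability of a '1' coming next is small; longer hypotheses that fit the same data still contribute, but exponentially less.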
You can read my personal blog here: CareyRyan.com
User avatar
RyanCarey
 
Posts: 682
Joined: Sun Oct 05, 2008 1:01 am
Location: Melbourne, Australia

Re: Occam's Razor and 'moral' epistemology

Postby Arepo on 2013-01-17T13:01:00

Ta, I'd been wondering if OR had been formally described.

Re: Occam's Razor and 'moral' epistemology

Postby Arepo on 2013-01-18T15:39:00

I just read the Solomonoff induction piece, which I really liked. Would like to find time to try and educate myself in it more formally, but based on my understanding from this, it seems like it can't really apply here. As a formalisation of OR, it seems to work iff you have initial agreement on what your 'language' is (which seems to be equivalent to what constitutes a datum and/or the minimal description of a fundamental concept).

But since I'm trying to apply OR to the question of what those points are, I think it's insufficient, since an opponent of whatever I claim as a datum could assert that it can be reduced further/has been reduced too far. Possibly SI could answer the challenge, but maybe then it becomes a recursive argument. Still thinking.

Re: Occam's Razor and 'moral' epistemology

Postby twschoon689 on 2013-02-12T14:40:00

I would like to note that something is lost between the translation of "I value x" to "Given my physical and mental limitations, I am seeking to do x." These are not equivalent, since there are plenty of scenarios in which we value things that we are not seeking to do-- and not just because of limitations on us. These situations occur whenever we face moral dilemmas, or when we value something much more than another thing (as when we exhibit self control, thereby curtailing our value of pleasure). Thus, by getting rid of value, we are getting rid of our ability to talk about any of these scenarios.

There is a second reason why we shouldn't throw value away as an epistemological category: we would lose our ability to theorize about human action. Let's assume that we translate the value statement "I value pleasure" into "inasmuch as my logical, emotional, and physical limitations allow it, I seek to maximise pleasure." Now I ask: "Why do you seek to maximize pleasure?" The simplest satisfactory answer to that question is because you value pleasure. And so here's my point: any theory of human action requires more than two epistemic categories (mental and physical)-- it also requires a value category and probably a category of emotions. I'll demonstrate by taking the challenge issued at the end of the post. Predict the following: Under what conditions would a soldier give up his or her life for fellow soldiers? Predict the following: Will Israel bomb Iran's nuclear weapons facilities? Predict the following: Will x cheat on their partner? I contend that any accurate prediction of these and similar scenarios involving human action must rely on categories of value or emotion.

twschoon689
 
Posts: 3
Joined: Tue Feb 12, 2013 2:36 pm

Re: Occam's Razor and 'moral' epistemology

Postby Arepo on 2013-02-13T14:57:00

I would like to note that something is lost between the translation of "I value x" to "Given my physical and mental limitations, I am seeking to do x."


Keep in mind, that’s not my favoured translation. ‘I value X’ could be used by all sorts of people to mean all sorts of different things, so the above is somewhat presumptive – also the translation was specifically of ‘I value pleasure’, which may not have a translation equivalent to just substituting ‘pleasure’ for ‘X’. That said, I don’t suppose you’d revise your claim for the translation ‘pleasure exists’, so I’ll assume it’s that to which you’re objecting.

These are not equivalent, since there are plenty of scenarios in which we value things that we are not seeking to do-- and not just because of limitations on us. These situations occur whenever we face moral dilemmas, or when we value something much more than another thing (as when we exhibit self control, thereby curtailing our value of pleasure). Thus, by getting rid of value, we are getting rid of our ability to talk about any of these scenarios.


I can talk about them fine. If I exhibit self-control, it’s because I expect not doing so to cause me or others greater net harm than doing so, or perhaps I’m doing something irrational and reactive. There are many possible explanations, but I don’t lose any predictive power by excluding intrinsic value from them, so what’s the problem? Or if you think I do lose predictive power, can you actually give me any instance of where I would fail to predict something without appealing to intrinsic value?

Now I ask: "Why do you seek to maximize pleasure?" The simplest, satisfactory answer to that question is because you value pleasure.


Let me ask what you mean by ‘why?’ Here are the typical definitions:

1) ‘To what end?’
2) ‘Due to what cause?’

Now if we use them in your question, 1) makes the question redundant, since it’s the end that you’re questioning.
If we use 2, the answer is naturalistic – due to the confluence of genetic and environmental factors that created me.
For the question to mean what you want it to, it has to presuppose the point you’re trying to establish – that one can usefully seek some extra-physical property of something, and suffer for not finding it. So your argument here is circular.

And so here's my point: any theory of human action requires more than two epistemic categories (mental and physical)-- it also requires a value category and probably a category of emotions. I'll demonstrate by taking the challenge issued at the end of the post. Predict the following:

Under what conditions would a soldier give up his or her life for fellow soldiers?


When genetic and environmental factors had shaped an individual who’d be willing to do so.

Will Israel bomb Iran's nuclear weapons facilities?


I don’t know. They will iff genetic and environmental factors shape a group of people able and willing to do so. How would positing a value epistemic category help me to answer this? (Please include in your response an analogous future-predicting scenario in which we can actually test our prediction and in which said prediction fails without ‘value’.)

Will x cheat on their partner?


Iff genetic and environmental factors shape him/her such that he/she would be able and willing to do so.

Clearly these are not good answers to your questions. They’re rubbish answers, in fact, since questions of human behaviour involve an enormous set of interrelated factors, and I lack almost all of the relevant information. But the point is not whether they’re good answers, but whether they would be improved by introducing the concept of value. Then, for example, we might answer the first one ‘When genetic and environmental factors had shaped an individual who’d be willing to do so because he valued something about the outcome of giving up his life more than his life.’

Has this made the question easier to answer? Now perhaps it has, since it might narrow down a potential research programme’s scope if ‘people thinking as though there were a value category’ turns out to be a good predictor of their behaviour. But perhaps it’s made it harder, since ‘thinking there were a value category’ might complicate things and prevent them from looking at the material facts that are actually most relevant.

But this is addressing a different issue from the original one – whether we should include ‘belief in value’ in our worldview is different from whether we should include ‘value’ in it. People can often believe false things, such as the Creation story, or a gambler believing that a die isn’t loaded. We can perfectly well use someone’s beliefs to help us predict what the person holding them will do without needing to duplicate their mistakes.

Re: Occam's Razor and 'moral' epistemology

Postby twschoon689 on 2013-02-15T15:16:00

Or if you think I do lose predictive power, can you actually give me any instance of where I would fail to predict something without appealing to intrinsic value?


I suggested that in cases where we face moral dilemmas or when we value one thing more than another, we would need a value category in order to accurately describe such situations. Let's consider the case of a pregnant, teenage girl conflicted over whether or not to get an abortion. Surely she experiences the conflict as a result of opposing value commitments-- for example, not to take life, and not to act irresponsibly by having a child she cannot support (obviously there can be others). Any non-valuative description of this scenario would be inaccurate insofar as people experience such scenarios as conflicts in value commitments that they hold.

As far as predictive power goes, I'll critique the answers you gave to the predictive scenarios I proposed in my first post.
When genetic and environmental factors had shaped an individual who’d be willing to do so.

This wouldn't predict anything since being willing to do something and doing it are two separate things. Being disposed toward something and doing it are two separate things.

How would positing a value epistemic category help me to answer this? (Please include in your response an analogous future-predicting scenario in which we can actually test our prediction and in which said prediction fails without ‘value’.)

Knowing that someone is willing and able is not sufficient to predict their actions because they might also be willing and able to do something to the contrary. I am arguing that the deciding factor here is value; that is, the only way that we can know whether or not someone will choose to do what they are willing and able to do, rather than something else that they are also willing and able to do, is by finding out 1) what their value commitments are and 2) how they are reasoning about those commitments.
For example, I have a friend who has been married for 10 years, but has been fantasizing about having an affair with a particular co-worker who is clearly willing. Will (s)he have the affair? Or, if you think that "affair" is already using value-laden language, will (s)he have sex with the coworker? There is no way to accurately answer this question without considering the seriousness and strength of the commitments that (s)he holds about marriage and the moral status of having sex outside of it.
...In which we can actually test our prediction...

You cannot test your prediction since the amount and detail of information that you require is not obtainable. This is a practical benefit of the epistemic value category-- we can have access to a person's values.
1) ‘To what end?’
2) ‘Due to what cause?’

Now if we use them in your question, 1) makes the question redundant, since it’s the end that you’re questioning.
If we use 2, the answer is naturalistic – due to the confluence of genetic and environmental factors that created me.
For the question to mean what you want it to, it has to presuppose the point you’re trying to establish – that one can usefully seek some extra-physical property of something, and suffer for not finding it. So your argument here is circular.

1) is not redundant-- why is it a problem to question the end? I was using the second definition of "why" here. I don't think that my argument is circular, however. I'm trying to make the claim that you can't answer the question without appeal to a value category because something is lost without value categories. How can I show what has been lost if I am not able to presuppose the value categories in order to demonstrate what has been lost? Secondly, I do not think that I even have to "presuppose" the value categories, since they are there already; everyone who thinks that some things are "right" and others "wrong" (a majority of philosophers and non-philosophers) is already using the epistemic value categories. The burden of proof, so to speak, is therefore on you. You must show that nothing is lost by doing away with value categories, and you must propose an alternate vocabulary with which to talk about morality and decision-making.

But perhaps it’s made it harder, since ‘thinking there were a value category’ might complicate things and prevent them from looking at the material facts that are actually most relevant.

First, it's not clear that there are more relevant facts than the ones presented by the value category. Second, there is no reason to think that it would, in the same way that the epistemic category of "mental" doesn't distract from the epistemic category of "physical".


Re: Occam's Razor and 'moral' epistemology

Postby Arepo on 2013-02-15T17:19:00

twschoon689 wrote: Surely she experiences the conflict as a result of opposing value commitments-- for example, not to take life and to not act irresponsibly by having a child she cannot support (obviously there can be others). Any non-valuative description of this scenario would be innaccurate insofar as people experience such scenarios as conflicts in value commitments that they hold.


If someone tells you they had an encounter with God, do you take their word for it? Or do you assume that we can explain their experiences scientifically, even if doing so didn’t change their sensation of the experience?

Assuming the latter, why can the same not apply for sensations of ‘valuing’?

This wouldn't predict anything since being willing to do something and doing it are two separate things. Being disposed toward something and doing it are two separate things.


You’re quibbling over semantics. I don’t care what phrase you use for it – substitute ‘would be willing to do’ with ‘would opt to do’, or whatever you prefer. You might argue that this prediction is so vague as to be basically restating the idea that they would do something. In which case I’d basically agree; again, I only state that your proposed ‘explanation’ fares no better.

You cannot test your prediction since the amount and detail of information that you require is not obtainable.


Given nonzero empirical information I could put odds on it, such that I might be able to make a profit by betting on analogous situations. Obviously I can never know for sure in advance, since the universe never allows us total certainty in any prediction.

This is a practical benefit of the epistemic value category-- we can have access to a person's values.


I still ask you to show this benefit. You’re merely asserting that there is one. To make a case for value you’d need to show (or at least give me a stronger reason to believe than bald assertion) that you’d expect to make more money betting with precisely the same information (and analytical aptitude) if, against the weak version of my claim*, you believed in the primitive epistemological concept of value.

* The weak version – that value is unnecessary as a primitive epistemological category – is what I actually hold. The strong version – that incorporating other people thinking they value something into our predictive models allows for more successful prediction than the best alternative – is a basically unrelated empirical(ish) claim, which I doubt but don’t have a strong view on, so we’re already focusing on it too much.

1) is not redundant-- why is it a problem to question the end?


If you can ask a worthwhile question about it, it’s not. But ‘to what end do you seek your ultimate end?’ is no more worthwhile than ‘what colour is blue?’

I'm trying to make the claim that you can't answer the question without appeal to a value category because something is lost without value categories. How can I show what has been lost if I am not able to presuppose the value categories in order to demonstrate what has been lost?


How can you understand what you’re missing by not believing in God if you’re not able to have faith in His existence?

everyone who thinks that some things are "right" and others "wrong" (a majority of philosophers and non-philosophers) is already using the epistemic value categories. The burden of proof, so to speak, is therefore on you.


It’s remarkable how many ways this conversation mirrors discussions with Christians I used to have. They would often throw around the concept of ‘burden of proof’ as though it had some fundamental weighting, when really if we’re thinking properly we’re just trying to weigh the reasons to believe and not, and use our analysis (to paraphrase XKCD) to become right, rather than to prove we already are.

If you won’t relinquish your belief till I prove its negation, that’s your prerogative. You’re welcome to take as axiomatic any non-scientific proposition, from value as a category to (cf Alvin Plantinga) the existence of a beardy Abrahamic sky guy. Doing so is unlikely to persuade rational people who don’t already accept your belief to adopt it, won’t help you process the world (again, on the weak version of the claim only), and will use slightly more processing power in your brain than a more parsimonious set of axioms would.

Re: Occam's Razor and 'moral' epistemology

Postby twschoon689 on 2013-02-15T22:43:00

If someone tells you they had an encounter with God, do you take their word for it? Or do you assume that we can explain their experiences scientifically, even if doing so didn’t change their sensation of the experience?

Assuming the latter, why can the same not apply for sensations of ‘valuing’?

Because we do not need to assume that God exists in order to explain a person's behaving as if God exists. The epistemic category "mental" would suffice for such an explanation. Also because the existence of God and the "existence" of values are two separate claims with distinct ontological implications. When a person says God exists, they mean that God exists independently of human beings. When I say that values exist, I mean that they exist as dependent on human beings. Furthermore, in order to explain why people do certain things, or even to give an adequate empirical description of a person's experience, we need a value category because (as I have been arguing) without one we cannot produce accurate descriptions and predictions. We cannot simply say that a person "believes that they have a value" because valuing something is not just a belief in or about the value; valuing something implies feeling obligated by it, responsible to it, accountable for it, etc. And a person's values motivate his/her action.

I still ask you to show this benefit. You’re merely asserting that there is one. To make a case for value you’d need to show (or at least give me a stronger reason to believe than bald assertion) that you’d expect make more money betting with precisely the same information (and analytical aptitude) if, against the weak version of my claim*, you believed in the primitive epistemological concept of value.

The first benefit is non-predictive. It is descriptive and it is beneficial for understanding, empathizing, and the like. Refer to my example of the young woman considering abortion; it is not descriptively adequate to say that she is having a belief about the value, since she is in fact having "sensations of the experience", which, I'd like to point out, is a semantic quibble on your part. What is the difference between sensing an experience, experiencing, and valuing? But if you are not willing to concede that there is a descriptive benefit to retaining the value category, then I suppose there is no point pressing the argument further, since there would be no empirical scenario in which you might be able to prove that one description is "better" than another, absent prediction.
So we're back at the predictive scenario. Let's take women in math-related careers. Women score as well as men do in math on standardized tests, yet they are underrepresented in math-related fields. Many studies have suggested that this is in part due to the perception that STEM careers are incompatible with childcare and that mathematicians and scientists lead solitary lives. These perceptions conflict with some of women's values: an obligation to rear children and an ethic of care, which tends toward jobs that involve interpersonal communication and directly helping others. See: Cheryan, Sapna. 2012. “Understanding the Paradox in Math-related Fields: Why Do Some Gender Gaps Remain While Others Do Not?” Sex Roles 66: 184-190.
You will probably not take this as good evidence of the increased predictive power provided by a value category, in which case we are probably stalemated again, as the opportunity to test our predictions under conditions of equal information will probably not present itself.

You’re quibbling over semantics. I don’t care what phrase you use for it – substitute ‘would be willing to do’ with ‘would opt to do’

"Being willing" and "opting to" are not the same thing either. Being willing to do something does not imply that I will do it. So saying that biological and environmental factors make me willing to do something does not constitute a prediction of an action, since no action has been predicted. Only willingness has been predicted. But I see your point.

If you can ask a worthwhile question about it, it’s not. But ‘to what end do you seek your ultimate end?’ is no more worthwhile than ‘what colour is blue?

I didn't ask to what end do you seek your ultimate end. I asked why (to what end) do you seek pleasure? A perfectly acceptable, non-redundant answer (and not the only answer) would be "for the sake of happiness."

How can you understand what you’re missing by not believing in God if you’re not able to have faith in His existence?

I don't think that that is a bad question. Nietzsche knew what we would be missing without belief in God-- absolute purpose, absolute "basis" for morality, and an ultimate meaning to life (where absolute and ultimate just mean "provided by God"-- curiously, though, these adjectives seem to hold great sway over people). Obviously, you and I do not suffer crises of meaning or of morality without belief in God. But many people do/would. The proper way to have them understand the issue, I think, is to suggest that the bases of morality/meaning/identity are not dependent upon God's existence (atheists are prime examples of that).

They would often throw around the concept of ‘burden of proof’ as though it had some fundamental weighting, when really if we’re thinking properly we’re just trying to weigh the reasons to believe and not, and use our analysis (to paraphrase XKCD) to become right, rather than to prove we already are.

The burden of proof is an important concept for science as well as jurisprudence. In the justice system, the burden of proof lies on the prosecution, since the accused are assumed to be innocent. Scientists also operate with an assumption about burden of proof-- it's on the scientist proposing a new idea. Thus, Einstein had to show why his conception of gravity and space was a better one than the extension of Newton's ideas. And still the ideas were not accepted until empirical proof could be provided. So the rule is that the consensus view does not have the burden of proof-- the new idea does.
In the case of the existence of God, the burden of proof is surely on the believer in God even though most people believe in a god, since it is far from empirically obvious that God exists, there are so many conceptions of God, and supernatural explanations of natural phenomena do a piss-poor job of describing and predicting. The case of values, however, is not analogous to the case of belief in God since the act of valuing is empirically verifiable (we all have felt obligation, responsibility, accountability, and affirmation) and there is no disagreement about what values are (just which ones are the "right" ones).
But you're right that the proper stance to take in the case of our argument is one of reasoned inquiry aimed at the discovery of a maximally accurate explanatory schema. Our case is different from scientific theory or jurisprudence, since we are not trying to establish any fact (we both agree that values do not exist independently of people); rather, we are debating the explanatory and predictive power of two conceptual schemes. I therefore concede your point that there is no burden of proof in this case.
But I fear that, unless a situation arises in which we can test the "value added" of a value category, then we will remain at loggerheads.


