A LONGER (pedantic) ANSWER
Ruari wrote: can whats the right way to behave be the same for everyone like something like the laws of physics are the same for everyone?
You are asking whether moral principles are objective. If you review my "short answer" above, you might notice that people can respond differently to each point depending on their disposition. For example, some people might claim that they are "not afraid of" retaliation because they know kung-fu, or they own guns, or other such nonsense, but if they do not recognize the intrinsic value of life, then there is little hope of finding a mutually agreeable answer to ethical questions. As far as I can tell, a prerequisite for reaching a rational agreement on ethical issues is not merely a strong interest in truth, but a supreme interest in truth and all of its implications and antecedents. A person who discards truth or tolerates contradiction can feel satisfied with beliefs that seem ludicrous to rational people. Even given a supreme commitment to truth, I still find it difficult to explain why I hold a belief in the intrinsic value of (all) life and of nature in general (I can construct an argument in defense of that belief and describe the contradictions implied by its denial, but ultimately I can't say why non-believers are compelled to subjugate themselves to the antecedent principles). My subsequent responses to your post are not simple and require some background, because the set of ethical "first principles" held by lay people is (IMHO) erroneous and the basis for further confusion.
My first topic is the illusion of "right and wrong" and what I propose to replace it with: skillful decision-making based on the consequences of actions. Many people experience a negative emotion when they witness things that they believe to be morally wrong. I think a more scientific view is as follows: we do not react to some situations BECAUSE they are morally right or wrong--we react to them through largely unconscious processes and then describe them as if they were morally right or wrong (i.e., we "create" right and wrong to match what we have already done). See Haidt (2001) and the responses to it. There is a fair amount of evidence for a biological (as opposed to philosophical) basis of moral action: thinking of emotionally charged events evokes changes in brain activity that are associated with physical sensations of the internal state of the body (Damasio et al., 2000); a system of neurons in the brain ("mirror neurons") activates when people witness certain types of behavior in others (Molnar-Szakacs, 2010); and the biological basis of empathy is specific enough to identify different brain regions for affective versus cognitive empathy (where cognitive empathy is the conscious recognition of the responses of other people; Fan et al., 2011). The emotions related to empathy affect moral reasoning processes (Berenguer, 2010), and people who lack the typical empathic responses are less inclined than others to perceive certain acts as morally right or wrong (such as people with narcissistic personality disorder; Ritter et al., in press). The evidence supporting a biological basis for perceptions of right and wrong, together with the absence of any evidence that there is a property of the universe that defines what is right or wrong, leads me to believe that people invent concepts of right and wrong to (inaccurately) explain their emotions.
Besides the biological basis of moral thought and behavior, many philosophers have cast doubt on the traditional conception of "right and wrong." In Chapter 2 of Principles of Morals and Legislation, Bentham argued that we do not have a reliable way to obtain knowledge of right and wrong through divine revelation. Others suggest that right and wrong do not even exist. I suggest that the traditional concepts of right and wrong are simply examples of G. E. Moore's naturalistic fallacy (see the opening chapters of Moore's Principia Ethica for an example of this discussion), and that the biological basis of empathy and moral reasoning provides the only scientific understanding of why we believe in the idea of right and wrong (the full case is not presented here).
Those of us who deny that right and wrong exist are now in the difficult position of explaining why we refrain from killing babies and why we strive to "do the right thing." In your case, the question is why we hold certain values or why we adopt a philosophy based on happiness (or something like it). Here is a *description* (not a justification) of how I act: I am a human being. Human behavior is a complex product of genetics, developmental factors (like nutrients during gestation), and many types of social and environmental influences and experiences. These factors contribute to what I will call "my constitution." I seek to continue living and I seek to avoid harm because that is what my constitution has led me to do. I experience negative emotional reactions when I see things like babies being killed or animals being injured, and other parts of my constitution compel me to act in certain ways in response to those emotions, so I avoid such activities and contribute to efforts to reduce them. I have a negative emotional reaction when I see people acting selfishly in a way that harms me or others. Some lions and some humans, for whatever reason, act against my best interests, and I will therefore attempt to thwart those actions in a way that is consistent with the other elements of my constitution (this also means that I will sometimes work against people who are selfish).
Here is a philosophical description of my behavior (which assumes some degree of free will). I sometimes ponder the rational basis for my actions--the ultimate goal of the pondering is to seek truth. I do this as if I had enough free will to respond to wherever my inquiry leads me. The combined explanation and justification for why I seek truth, and how I do it, entails a long list of assertions, each of which can become a point of disagreement... I believe that truth exists, and I have adopted it as my primary epistemological goal (i.e., I strive to direct my acquisition of knowledge so that I can attain an understanding of truth). I believe that self-contradiction is irrational and therefore something to be avoided. I believe that arguing that something is true because it is self-evident is irrational. I believe that the majority of my accomplishments are almost entirely due to the work of other people or to the availability of natural resources that were not destroyed by previous generations (e.g., I didn't invent language, math, logic, or science; I didn't build the roads, cars, electronics, or social infrastructure without which I would be an illiterate ape living in the jungle [or I wouldn't even exist]), and that recognition affects my willingness to sacrifice for the greater good. On each of those points, and hundreds more like them, people hold different beliefs and thereby reach different conclusions about how people should act. To argue that one approach to ethics is right and another is wrong is to hold beliefs about the large collection of subordinate beliefs that affect how we perceive things, how we value things, and how we reason. Consequently, I am not aware of any short, meaningful, truthful argument that will convince people with opposing beliefs to adopt a given ethical orientation.
Given the history of political and religious conflict, I am confident that there is no short, meaningful, truthful, and convincing argument that would compel people to adopt the ethical stance that I have taken (or that you have taken).
Sometimes large groups of people agree on a similar set of philosophical principles (or it seems as if they do). To Mao's communist supporters during the Chinese revolution, their group shared collective insight into the truth of how governments should be constructed. Members of libertarian forums seem to believe that they share insight into how governments should work, but they disagree with Mao's supporters. Members of this board seem to think that they have insight into philosophy, and the "community" of "believers" seems to add legitimacy to the beliefs. In group situations like this, people sometimes mistake group consensus for truth. Psychologists who study morally relevant behavior have found that beliefs are not randomly distributed but cluster together (clusters of values, beliefs, or personality traits); the clustering of beliefs, however, does not indicate that one cluster is better than another. My point is that regardless of what the group says, it is up to you to explore the rationale for your beliefs if you are inclined to seek truth.
Berenguer, J. (2010). The Effect of Empathy in Environmental Moral Reasoning. Environment and Behavior, 42(1), 110–134. doi:10.1177/0013916508325892
Damasio, A. R., Grabowski, T. J., Bechara, A., Damasio, H., Ponto, L. L. B., Parvizi, J., & Hichwa, R. D. (2000). Subcortical and cortical brain activity during the feeling of self-generated emotions. Nature Neuroscience, 3(10), 1049–1056.
Fan, Y., Duncan, N. W., de Greck, M., & Northoff, G. (2011). Is there a core neural network in empathy? An fMRI based quantitative meta-analysis. Neuroscience and Biobehavioral Reviews, 35, 903–911. doi:10.1016/j.neubiorev.2010.10.009
Haidt, J. (2001). The Emotional Dog and Its Rational Tail: A Social Intuitionist Approach to Moral Judgment. Psychological Review, 108(4), 814–834. doi:10.1037/0033-295X.108.4.814
Molnar-Szakacs, I. (2010). From actions to empathy and morality – A neural perspective. Journal of Economic Behavior & Organization. doi:10.1016/j.jebo.2010.02.019
Ritter, K., Dziobek, I., Preißler, S., Rüter, A., Vater, A., Fydrich, T., Lammers, C.-H., et al. (in press). Lack of empathy in patients with narcissistic personality disorder. Psychiatry Research. doi:10.1016/j.psychres.2010.09.013