Why do we care about free will? Sure, whether we have it is an interesting question to ponder, but I think the only practical reason we care about free will is that it implies moral agency. Without free will, morality is meaningless. Given everything that science is telling us, it seems that we are not as free as we have historically believed, or at least, whatever free will we have is not what we have historically believed it to be. I propose that we work on a new morality that would function usefully regardless of the answer to the free will question.
Utilitarianism, or something like it, makes sense to most of us and seems a good place to start, but again, without free will it just doesn't work very well, at least in any framework we would currently recognize. The problem is that Utilitarianism (actually, every single one of our existing moral theories) works only for moral agents. Fortunately, there's a very simple, useful set of principles we could use to guide our lives without moral agency.
Sam Harris' book "The Moral Landscape" received mixed reviews, but Harris made one very important point: soon, if not already, we will actually be able to measure and even quantify suffering. This is a giant leap forward for morality, because finally we have an objective standard. (I want to be very careful about the word objective, because often when people say "objective morality," they use objective to mean binding. I don't mean a binding morality. I mean a way of making ethical decisions that can be based on facts rather than opinions, or at least based more on facts and less on opinions.)
Also as Harris pointed out, the fundamental concern of any morality is whether conscious creatures are flourishing or suffering. If we can find another framework that addresses flourishing/suffering directly, then we will have addressed the issue that led us to invent morality in the first place. Such a framework would be very simple in principle: I choose to consider flourishing/suffering the most important issue. I choose to contribute to everyone's flourishing, or at least to refrain from contributing to anyone's suffering. I choose to refrain from imposing my will on anyone, unless it is absolutely necessary to do so in order to prevent them causing someone else to suffer.
If these principles seem useful, then perhaps more and more people will adopt them, and maybe we could eventually make these choices at the societal level. We as a society would refrain from imposing our collective will on anyone, unless it were absolutely necessary in order to prevent them causing someone else to suffer. Of course, in deciding on necessity and on the form our imposition would take, we'd draw on all the excellent behavioral and social science done over the last century or so, rather than our so-called common sense, which has steered us wrong countless times (think flat earth, Ptolemaic geocentrism, spontaneous generation of complex organisms from inanimate matter--these ideas were once common sense, and they were dead wrong). Punishment as we know it would have to change or perhaps even disappear entirely, because only a moral agent with free will can "deserve" punishment.
If we want better government, we need to use science to figure out ways to improve the process, to attract better candidates, to discourage empty shells and charlatans, to involve everyone in policymaking, to make it a lot harder for power to be concentrated into the hands of the few as it is now. Seems like game theory could be brought to bear here also.
If we want less crime, we need to use science to figure out the causes that underlie crime. We could start with poverty, lack of opportunity, lack of education, undernourishment, lack of coping skills, etc. US Department of Justice figures say that among prisoners in the US, 82% have indications of specific learning disabilities and the average IQ is about 85. An article in the journal Intelligence says that we have known since 1977 that IQ is "a powerful individual-level predictor of criminal behavior." A person with low IQ doesn't "deserve" to be punished any more than he "deserves" the hand that life has dealt him. Seems like we could reduce a lot of suffering in the world if we addressed the underlying causes of crime.
If we want our children to behave "better," we need to use science to figure out how to make it happen. Of course, we also need to consider carefully what kind of behavior we want our children to exhibit, and what kind of motivations we want them to have. It took me many, many adult years to finally understand that bad and good are not the same as I-will-be-punished and I-will-not-be-punished. Do we want our children to use this kind of moral calculus? I hope mine will not. We also need to ask ourselves as parents whether the behavior for which we "discipline" our children is really "wrong," or "undesirable," or just a parental pet peeve. But I digress.
I remain convinced, for the present, that given even a marginally healthy childhood, the vast, vast majority of humans will behave cooperatively without any kind of legal or social coercion, and that those of us who have trouble cooperating need help, not punishment. If we were to focus on suffering/flourishing and make our laws proactive, we might start improving the world. And of course, the best part is that this kind of ethics would work regardless of whether we have free will.
Interested in anyone's thoughts.