The Biological Basis of Morality

Where Do Values Come From?

If the problem is the value of nature, the first question we face is the nature of values. In philosophy, the most compelling answer to this question comes from Hume; in the sciences, from sociobiology. The two are not incompatible; indeed, they are largely complementary, because their central contention is the same: ethics is a question of human nature.

Hume is famous for, among other things, an idea known as the “is/ought problem,” which holds that one can never derive a justified ought merely from the way things are. For example, one cannot argue that disparities between men and women in certain career fields should be supported by policy simply because sex differences in job preference exist. A little thinking reveals this idea to be indisputable, at least so long as we regard moral judgments abstractly.

Although it may seem an innocuous observation, the “is/ought problem” has at least one far-reaching implication: morality can never be wholly empirically and rationally derived. Instead, moral attitudes are in large part due to sentiments and feelings. Regarding things we consider wrong, Hume (1978, p. 469) writes, “You never can find [the vice], till you turn your reflexion into your own breast, and find a sentiment of disapprobation, which arises in you, towards this action.”

Note that this does not mean that our morals stem merely from passion. To identify their source does not necessarily degrade them and certainly doesn’t release us from their hold. But accepting this account does mean we have to dismiss some popular ideas in environmental philosophy. Consider, for instance, a common pursuit within environmental ethics: identifying value as a metaphysical property that a thing literally has and that humans can discover (see, e.g., Holmes Rolston III). This sets up quite a task for the philosopher, who must defend a metaphysical scheme that in most cases conflicts directly with modern scientific understanding. At the very least it requires a good bit of philosophical gymnastics.

In contrast, Hume’s account is simpler and compatible with sociobiological insights into human nature, which argue that moral attitudes are the result of evolution. If humans are fully material creatures, then moral precepts have to originate in biological processes. As a result, these moral precepts can be acted on by natural selection, and we can expect some moral attitudes to be strongly favored in certain environmental conditions. To argue otherwise, one would have to challenge these fairly well-established premises, for example, by claiming that humans have a supernatural component.

Explaining the universal cultural presence of an incest taboo is one example of the theory’s robustness. Haidt (2001) once ran an experiment in which he told his subjects about imaginary siblings named Julie and Mark. In the story, the siblings go on vacation together and decide to have sex with each other. Julie is on the pill, and Mark uses a condom. The brother and sister enjoy the experience but decide not to do it again, and they agree not to tell anyone about it. After telling this story, Haidt asked the subjects whether they thought what Mark and Julie did was okay. Most said it was not, but cited reasons like “the children could be deformed” or “they might have damaged their relationship,” despite the fact that the story had already addressed these concerns. After some questioning, many of the subjects simply said, “I don’t really know why, it’s just wrong.”

Further supporting the notion of a biologically based incest taboo, the “Westermarck hypothesis” states that children raised together will tend not to be sexually attracted to each other, since being raised together triggers the evolved mechanism that guards against incest. This was supported by a study of the Israeli kibbutzim, which found that out of 2,769 marriages among second-generation kibbutz members, none were between two members of the same peer group, nor was any heterosexual activity between two members of the same peer group discovered (Shepher 1971).

Data such as these suggest that at least some of our moral precepts are shaped directly by biology. Other examples support this conclusion, like evolutionary explanations for altruism (Lieberman, Tooby, & Cosmides 2007; Fehr & Fischbacher 2003) or prejudice (Alexander 1985; Greene 2011).

To be clear, this does not mean that every moral precept is “encoded” in human nature. For example, the cultural practice of removing one’s hat to show respect is clearly learned. But if we regard human beings as material, biological creatures, even that practice is rooted in our biology, in the sense that our hormones, neurochemical pathways, and so on have all been shaped by the environment to produce our conformity to it. Furthermore, it is not beyond reason to speculate that respect itself is an unlearned value, something humans are predisposed to because of its social, and therefore survival, utility. The same reasoning applies to other moral convictions: the incest taboo is likely encoded in human nature because of the survival consequences faced by those who disregarded it. That the evidence supports this only strengthens the case that value is conferred by human beings, by bolstering the naturalist account of human nature overall.

It is also unlikely that all natural moral attitudes (i.e., moral attitudes that would arise without any human or technical intervention) are present at the moment of birth. To give a non-moral example, sexual development does not occur until later in life, yet it is still a result of natural selection and among the aspects of human nature most strongly influenced by that process. In other words, we can expect some changes in moral attitudes to be developmental (see Pinker 2003, pp. 90-93 for more on this point). Findings in cognitive psychology tend to support this idea, noting that moral attitudes develop in children along a fairly definable trajectory (Piaget 1997). Furthermore, to the extent that our biology constrains the application of certain values, we can expect individual opinions to change as the brain develops, such as when a teenager’s prefrontal cortex, linked to moral behavior (Koenigs et al. 2007), matures more fully in early adulthood.

Finally, although natural selection has probably tightly constrained moral attitudes in some spheres, moral attitudes in other spheres are likely to permit more variation. For example, a number of studies indicate that personality differences and disparities in IQ could affect political attitudes, which helps explain why these attitudes are heritable (Alford, Funk, & Hibbing 2005; Bouchard 2009; Settle, Dawes, Christakis, & Fowler 2010). However, because political attitudes tend to respond to highly variable evolutionary problems, like social cohesion in certain environmental conditions, rather than evolutionary problems that are more consistent among human beings, like reproductive practices, they are predicted to be more variable than, say, an incest taboo, and more easily modified through artificial intervention (see theories of moral ecology as in Dean 2012).

The Problem of Moral Relativism

This account of the origin of values does not invalidate the idea that, for example, nature “has” intrinsic value. However, as Callicott (1995) put it, we can only say that “nature has intrinsic value when it is valued (verb transitive) for its own sake, as an end itself.” Some may be wary of such an account because they feel as though it is relativistic. Strictly speaking, this is true. But the relativist implications do not extend as far as is often assumed.

For instance, that people may occupy different moral landscapes does not invalidate the hold their values have over them. “If we are committed to our commitments, then we need not relinquish them just because somebody else disagrees with us” (Kaebnick 2008). Furthermore, this account of values adequately describes and explains the way moral reasoning occurs in the real world, by, for instance, making clear that appeals to the value of something are impotent among those who do not accept that value. In truth, even if moral value existed independently of a valuer, nothing about an independent value would cause it to be enforced outside of normal social methods, like persuasion or force.

It is also quite clear that morality is descriptively relative. That is, whether or not we can abstract an objective moral system from our condition, or discover one through empirical investigation, the world as it stands contains individuals and groups who differ widely in their moral attitudes. Indeed, moral rules are expected in a population precisely because of differences in interests (Alexander 1985). They are a natural problem-solving technique. This means they can be expected to solve some problems even between two people or groups with incommensurable values. Practical examples abound, the most illustrative and common being the conflict between people who value objects or relationships intrinsically and those who value them only for their utility, economic or otherwise. In cases such as these, common ground exists if the objects or relationships that are intrinsically valued also happen to have utility. This is apparent in existing policies among historical preservationists and environmental conservationists.

The account just as easily recognizes ways by which individual opinions on moral matters can change through empirical evidence. Consider, for instance, a historical preservationist who argues strongly that a specific document should be preserved because of its historical value. If someone demonstrated that the document did not actually have the historical value the preservationist thought it had, then he would have to abandon his case for that document, even if he did not abandon his core normative commitment to preservation.

If all this is relativistic, then it is no more so than scientific investigation, since naturalism has similar implications for epistemology as for morality. Return to Hume’s contention that moral attitudes are built from a basic moral sense that one either has or does not have, and that moral attitudes cannot be derived through any other means, e.g., through descriptive investigation. In a similar fashion, Hume (1902) argued that no one can fully justify reliance on the senses, nor certain natural modes of human reasoning like induction or belief in causality. Since Hume, philosophers have further demonstrated that even aspects of science more complex than immediate sense experience rely on values, power, and logical leaps (e.g., Lakatos 1978). If we accept all these arguments, our philosophical starting point for epistemology must be “radical skepticism,” the idea that absolute knowledge is impossible, a position Hume held. Yet despite this, Hume did not dismiss induction or sensory evidence; he appealed to common sense, pointing out that we do in fact live in the world and have no choice but to interact with it using the tools we have. This he called “mitigated skepticism.” In the end it makes epistemology a question of human nature, just as morality is.

Through evolutionary theory, we can therefore shed light on why we tend to speak of morality in terms of “opinion” and descriptive investigation in terms of “fact”: the disparity results from a difference in evolutionary restrictions on variability. In other words, some aspects of human nature will be more similar and consistent than others because of similar and consistent selection pressures, so things like sexuality, bodily functioning, basic a priori elements of human reasoning, and sensory experience will lie on the “more similar and consistent” end of the spectrum. On the other hand, most moral attitudes will be much more diverse. This is most obvious in the case of psychopathy, which tends to have a “low but stable” prevalence in a given population, a finding predicted by evolutionary game theory (Colman & Wilson 1997). A singular universal moral code is therefore impossible, and universal norms that aren’t strongly selected will have to be a result of compromise between individual and small group intuitions. In other words, while communication, understanding, and moral argument are all possible, we can expect many moral differences between humans to remain fundamental so long as we do not homogenize the human race biologically.
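The “low but stable” prevalence is what frequency-dependent selection predicts for an exploitative strategy: it pays when rare and fails when common. The following replicator-dynamics sketch illustrates that logic with payoff numbers invented purely for illustration; it is not Colman and Wilson’s actual model.

```python
# Illustrative only: hypothetical payoffs, not Colman & Wilson's model.
# "Cheaters" prosper when rare, because they mostly meet trusting
# cooperators; as they become common, detection and retaliation make
# the strategy pay worse than cooperating.

def fitnesses(p_cheat):
    """Return (cooperator fitness, cheater fitness) at cheater frequency p."""
    w_coop = 2.0                    # steady payoff from mutual cooperation
    w_cheat = 4.0 - 20.0 * p_cheat  # lucrative when rare, costly when common
    return w_coop, w_cheat

def run_replicator(p=0.5, steps=10_000, rate=0.001):
    """Discrete replicator-style dynamics: the cheater frequency grows in
    proportion to its fitness advantage and shrinks otherwise."""
    for _ in range(steps):
        w_c, w_k = fitnesses(p)
        p += rate * p * (1.0 - p) * (w_k - w_c)
    return p

# The mixed equilibrium lies where the two fitnesses are equal:
# 2.0 = 4.0 - 20.0 * p  =>  p = 0.1, a stable 10% prevalence.
p_star = run_replicator()
```

With these hypothetical payoffs, the dynamics settle at a cheater frequency of 10% from any interior starting point: the exploitative strategy persists, but only as a small minority, which is the shape of the prediction cited above.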

Moral Reasoning

Given the above, we have a general means of sorting out our moral attitudes toward nature. First, of course, we must determine what exactly we mean by the term, and we must examine our immediate intuitions in relation to it. For instance, if we mean “nature” in the context of the nature/artifice distinction, then some might say that their immediate feeling toward GMOs, now a popular symbol of the artificial modification of nature, is repulsion, precisely because of their artificiality. This may suggest a moral principle that values nature and denigrates artifice in all cases.

From there it is possible to engage in moral reasoning. For instance, a rival attitude might (and does) argue that human beings have been engaging in a genetic modification of sorts at least since the Neolithic. This unnaturalness is precisely what allowed for population growth, and the so-called “gene revolution” in agriculture is the only apparent means to sustain it through the 21st century. The argument is convincing to a person who values the benefits of agriculture and increased population; but those who do not hold similar values will have to resolve the conflict through other means, such as through force (e.g., protests) or compromise (e.g., labeling policies). Alternatively they might be convinced that although they value nature, they also value things like peace, and since the consequences of not using GMOs could mean an increase in future violence (because of resource issues), they may then accept that peace should be prioritized over naturalness.

Of course, such a strict moral dichotomy seems untenable. Under the nature/artifice distinction, artifice is inherent in the human condition, so short of complete misanthropy, most moral systems will allow some degree of it. Nevertheless, many people also recognize that naturalness has value, and questions regarding that value are a major point of investigation in environmental, bio-, and sports ethics. As the technical powers of humanity increase, these questions will only become more pressing.

Note that evolutionary accounts of epistemology and ethics allow for the possibility of moral investigation to, hypothetically at least, be made into a science. One can imagine that some time in the future, evolutionary theory will be able to predict with a high degree of certainty what moral attitudes are universal among humans, and neurotechnics will be able to identify biological bases of values on an individual level. This, of course, remains to be seen and is not strictly necessary for moral investigation. Furthermore, the desirability of such a project is questionable. Nevertheless, the possibility exists, and some, like Harris (2011), Daleiden (1998), or Greene (2013), argue that such a science may be necessary for our Stone Age brains to cope with Space Age problems.

In the meantime, however, there are already a number of normative sciences that integrate moral components into their work. The two most prominent examples are medicine, which extols the value of health or human well-being, and conservation biology, which extols the value of biodiversity, wildness, or naturalness. In the current age, technical prowess has not developed far enough to identify values in the way the above speculation suggests is possible, so these sciences operate through a combination of empirical findings and philosophical moral investigation. To deal with the latter, for instance, medicine has journals like the Journal of Medical Ethics, and conservation draws on the field of environmental ethics or speaks openly of ethical issues in journals like Conservation Biology.

This approach seems to be one of the more useful for discerning the morality of human relationships to nature. Suited to our present conditions, it is not completely “scientized,” still including a good deal of philosophical investigation, yet it contains enough scientific reasoning to deal with the Stone Age – Space Age divide. Several lines of evidence suggest that the latter is necessary in our present conditions.

For instance, in Thinking, Fast and Slow, Kahneman (2011) pointed out that humans use a number of heuristic devices to analyze the world around them. One example he gives recalls an experiment in which he and the psychologist Amos Tversky told participants about an imaginary character named Linda. Linda, the story went, was single, smart, and outspoken on the issues of discrimination and social justice. After explaining this, the two psychologists asked if it was more probable for Linda to be a bank teller or for Linda to be a bank teller who was active in the feminist movement. Of course, basic lessons in statistical probability would reveal that the first answer is the correct one. Only a subset of all bank tellers are feminist bank tellers, so adding the extra detail will necessarily decrease the probability. But most participants said the second answer was correct.
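The statistical lesson the Linda experiment trades on is the conjunction rule, which follows directly from the axioms of probability:

```latex
% Let A = "Linda is a bank teller" and
%     B = "Linda is active in the feminist movement".
% Since any conditional probability is at most 1,
\[
  P(A \cap B) \;=\; P(A)\,P(B \mid A) \;\le\; P(A).
\]
% A conjunction can never be more probable than either of its conjuncts,
% so "feminist bank teller" cannot outrank "bank teller," no matter how
% well the description of Linda fits the feminist stereotype.
```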

Another phenomenon Kahneman reports is the “availability heuristic”: the more easily something comes to mind, the more probable the human mind judges it to be. For example, Kahneman and Tversky (1973) asked participants in one experiment to judge whether words beginning with the letter k or words with k as their third letter were more common. Because we recall words by their onsets, words beginning with k are easier to call to mind. Thus, the duo predicted, rightly, that participants would judge words beginning with k as more likely, even though the opposite is true. The same pattern holds for other consonants, like l, n, r, and v, that also appear more often in the third position than the first.

While these mental shortcuts work fairly well in Paleolithic conditions, they often cause problems for modern man. For instance, the availability heuristic helps explain why people’s fears are incongruent with statistical probabilities. Death by falling furniture is much more likely than death by murder, but because it is easier to recall instances of murder, perhaps from the news or even novels, people fear murder significantly more. As a result, individuals in nations with extremely low crime rates but oversaturated news media suffer from undue anxiety about crime or terrorism, which in turn influences a population’s openness to going to war or supporting harsh law enforcement policies.

Using these data, Kahneman argued that the human mind has two components: System 1 is intuitive, fast thinking, and it utilizes various shortcuts in order to come to conclusions; in contrast, System 2 is analytical, slow thinking, the part of the mind that humans use to write or do complicated math. We might compare this to a camera that has automatic settings suitable for most use-cases, but which can be switched into manual mode for unique situations. System 2 is not a new phenomenon, and evidence demonstrates that humans have used it ever since they evolved into their modern form (Liebenberg 1999), but it is more necessary for modern man because System 1 is less useful outside of natural conditions.

This demonstrates the importance of moral reasoning in the current age. If we are concerned with acting in line with our values, we cannot rely exclusively or perhaps even mostly on intuition. Sometimes an extensive amount of moral investigation is required. When it is possible, then, we should take advantage of the opportunity. Such investigation will not always be practical—for example, urgent moral responses in the context of war or radical politics will have to rely on the intuitive mind to some extent—but unless one is willing to risk making extremely consequential decisions that one later regrets, moral investigation is necessary, especially in the context of problems as great as climate change, genetic engineering, or nanotechnology.

Bibliography

Alexander, R. (1985). A biological interpretation of moral systems. Zygon 20(1), 3-20.

Alford, J., Funk, C., & Hibbing, J. (2005). Are political orientations genetically transmitted? American Political Science Review 99(2), 153-167.

Bouchard, T. (2009). Authoritarianism, religiousness and conservatism: Is “obedience to authority” the explanation for their clustering, universality and evolution? In Voland, E., & Schiefenhövel, W. (Eds.), The Biological Evolution of Religious Mind and Behaviour. Berlin, Germany: Springer.

Callicott, B. (1995). Intrinsic value in nature: A metaethical analysis. The Electronic Journal of Analytic Philosophy, (3).

Colman, A., & Wilson, J. (1997). Antisocial personality disorder: An evolutionary game theory analysis. Legal and Criminological Psychology 2, 23-34.

Daleiden, J. (1998). The science of morality: The individual, community, and future generations. New York: Prometheus Books.

Dean, T. (2012). Evolution and moral ecology. Morality and the Cognitive Sciences 7, 1-16.

Fehr, E., & Fischbacher, U. (2003). The nature of human altruism. Nature 425(6960), 785-792.

Greene, J. (2013). Moral tribes: Emotion, reason, and the gap between us and them. New York: Penguin Books.

Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review 108(4), 814-834.

Harris, S. (2011). The moral landscape: How science can determine human values. New York: Simon and Schuster.

Hume, D. (1902). Enquiries concerning the human understanding: And concerning the principles of morals. London, England: Clarendon Press.

Hume, D. (1978). A treatise of human nature, 2nd edition. Oxford: Oxford University Press.

Kaebnick, G. (2008). Reason of the heart: Emotion, rationality, and the “wisdom of repugnance.” The Hastings Center Report 38(4), 36-45.

Kaebnick, G. (2014). Humans in nature: The world as we find it and the world as we create it. New York: Oxford University Press.

Kahneman, D. (2011). Thinking, fast and slow. London, England: Macmillan.

Kahneman, D., & Tversky, A. (1973). Availability: A heuristic for judging frequency and probability. Cognitive Psychology 5(2), 207-232.

Koenigs, M., Young, L., Adolphs, R., Tranel, D., Cushman, F., Hauser, M., & Damasio, A. (2007). Damage to the prefrontal cortex increases utilitarian moral judgements. Nature 446(7138), 908-911.

Lakatos, I. (1978). The methodology of scientific research programmes. Cambridge: Cambridge University Press.

Liebenberg, L. (1999). The art of tracking: The origin of science. Cape Town, South Africa: David Philip Publishers.

Lieberman, D., Tooby, J., & Cosmides, L. (2003). Does morality have a biological basis? An empirical test of the factors governing moral sentiments relating to incest. Proceedings of the Royal Society of London B 270(1517), 819-826.

Lieberman, D., Tooby, J., & Cosmides, L. (2007). The architecture of human kin detection. Nature 445(7129), 727-731.

Piaget, J. (1997). The moral judgment of the child. New York: Simon and Schuster.

Pinker, S. (2003). The blank slate: The modern denial of human nature. New York: Penguin Books.

Rolston III, H. (1989). Environmental ethics: Duties to and values in the natural world. Philadelphia: Temple University Press.

Settle, J., Dawes, C., Christakis, N., & Fowler, J. (2010). Friendships moderate an association between a dopamine gene variant and political ideology. The Journal of Politics 72(4), 1189-1198.

Shepher, J. (1971). Mate selection among second generation kibbutz adolescents and adults: Incest avoidance and negative imprinting. Archives of Sexual Behavior 1(4), 293-307.
