A Science of Morality?
Critical Notice of Marc D. Hauser, Moral Minds: How Nature Designed Our Universal Sense of Right and Wrong (2006)
[My] account shifts the burden of evidence from a philosophy of morality to a science of morality. (p. 2)1
The signature of progress in any science is an increasingly rich set of explanatory principles to account for the phenomenon at hand, as well as delivery of new questions that could never have been contemplated in the past. The science of morality is only at the earliest stages of growth, but with exciting new discoveries uncovered every month, and rich prospects on the horizon. If the recent history of linguistics is any guide, and if history provides insights for what is in store, then by raising new questions about our moral faculty, as I have done here, we can anticipate a renaissance in our understanding of the moral domain. Inquiry into our moral nature will no longer be the proprietary province of the humanities and social sciences, but a shared journey with the natural sciences. (p. 425)
A science of morality? Many philosophers will baulk at the idea. But why think that morality is not a fit subject for science? The scientific method is the best method ever devised for discovering the truth about reality. If morality is a real phenomenon, and if there are truths about morality, then science should be able to get a grip on the moral domain. It would be puzzling if morality were a real phenomenon, and yet lay beyond the reach of science.
We can begin by sorting out some different scientific projects. For starters, we need to identify the basic phenomena. Where in the world does morality make its appearance? The basic phenomena are present in human psychology and behaviour: human moral psychology and human moral behaviour. So let’s begin with the thought that a science of morality will be a behavioural science. Niko Tinbergen describes an agenda for the behavioural sciences.2 He distinguishes four questions that can be asked about any pattern of behaviour. Each of these questions motivates a particular science (or branch of inquiry) into behaviour. The questions are about the ontogeny and phylogeny of behaviour, and its causes and functions. A full explanation for some pattern of behaviour will provide answers to each of these four questions.
An explanation of the ontogeny of a specific pattern of behaviour is an explanation of its development and maturation in the individual organism. One branch of the science of morality, then, will be the study of the ontogeny of moral psychology and behaviour, the ontogeny (we might say) of the human moral sense.
Phylogeny is evolutionary history. Where ontogeny traces the appearance of a trait in an individual animal during its individual lifespan, phylogeny traces the evolutionary history of the trait in a lineage. What, then, were the evolutionary precursors of the modern moral sense? What ancestral traits, in early hominids or in other primates, were the building blocks of morality? Consider, for example, prosociality. A variety of nonhuman animals exhibit this trait, to varying degrees. It is plausible that prosociality is one of the evolutionary building blocks of morality. What are the components of prosociality that contributed to the evolution of the moral sense? And what drove the evolution of the moral sense from these earlier, precursor traits? In particular, is the moral sense an evolutionary adaptation, crafted by natural selection? If it is an adaptation, then it was naturally selected because it increased fitness (survival and reproduction). How (in that case) did the moral sense promote fitness? At what level of the biological hierarchy (genes, individuals, groups) did selection act, in the evolution of the moral sense?
Questions about ontogeny and phylogeny are about prior history: the history of development within an individual lifespan, and the longer, deeper history of the evolution of the modern trait from ancestral traits. Questions about cause and function, by contrast, are about the context of particular behavioural events: what are the proximal causes of the behaviour (its motives, say) and what is the function of the behaviour? In the science of morality, questions arise about the proximate causes of moral responses, whether these be moral judgements, moral feelings and intuitions, or patterns of moral behaviour. Finally, questions about function address the evolutionary purposes of morality. If morality has evolved through natural selection, then it has a function, an evolutionary purpose: it contributes to fitness, in some fashion. How does it contribute to fitness? How do the adaptive benefits of morality (whatever they are) outweigh the adaptive costs (whatever they are)?
Tinbergen’s questions presuppose that the pattern of behaviour of interest has been properly and objectively described. Where such descriptions are wanting, or exist but are controversial, we need to supplement Tinbergen’s canonical four questions with a fifth question: what is the proper description of the behaviour of interest? This question has particular bite where psychology is concerned. There is more to the phenomenon of morality than behaviour: there is as well a psychology of morality. So the fifth question, defining a fifth domain of scientific interest, concerns the psychological structures on which moral behaviour rests. Let’s call this the question of structure.
Now we have a broad idea of what a science of morality might be. It will have (at least) five parts: theories of ontogeny, phylogeny, cause, function, and structure. How does Hauser address these issues?
Hauser’s Moral Minds
The main thesis of Hauser’s book is that ‘we evolved a moral instinct, a capacity that naturally grows within each child, designed to generate rapid judgments about what is morally right or wrong based on an unconscious grammar of action’ (p. xvii). This thesis ‘shifts the burden of evidence from a philosophy of morality to a science of morality’ (p. 2).
Let’s do some unpacking. First, the human moral instinct is a product of Darwinian evolution, evolution by natural selection. The products of natural selection are traits that promote fitness.3 The human moral sense, therefore, promotes fitness. But how does morality promote fitness? And who benefits from the exercises of the moral faculty: genes, individuals, groups?
Second, the moral sense naturally grows within each child: it comes on stream in stages, as its different components mature. This kind of stage-like developmental pathway is typical of the development of ‘innate’ traits. So the moral sense, like the language sense, is innate.
Third, the human moral sense is designed to generate rapid judgements about moral right and wrong. One of the novel features of Hauser’s model of the human moral sense is that innate moral responses are construed as judgements. Judgements are cognitively complex. So our instinctive moral responses are not just raw feelings or impulses or brute reflexes.
Fourth, our instinctive judgements about moral right and wrong are based on an unconscious grammar of action. We have innate capacities for the analysis of actions. Moral judgements are outputs of unconscious cognitive processes in which the morally relevant features of agents and actions are identified and evaluated. We are instinctively primed to distinguish between intentional and unintentional actions, we instinctively look to the effects actions have on welfare, we instinctively distinguish between intended effects and foreseen but unintended consequences. The workings of the moral sense are cognitively complex, therefore, but because these workings are hidden from consciousness, we are unaware of their complexity. An instinctive feeling of disgust or moral horror might seem like an immediate response to an appalling act, but this is an illusion. Between the perception of the appalling act and the emotionally wrought response comes cold, dispassionate analysis: the unconscious, cognitive operations of the innate moral faculty. There is an analogy here with our understanding of language: you are ‘immediately’ shocked by what someone says, but between the saying and the shock come the cold, analytic activities of your language faculty, working out structure and meaning, before delivering an interpretation to your consciousness. So it is, analogously, with our intuitive feelings of moral horror and disgust.
The idea of a ‘grammar of action’ is explicitly modelled on Chomsky’s idea of a universal grammar of natural language. Universal grammar (a formal structure that is present in the innate, language acquisition device) is an innate toolkit for building the psychological capacity to speak and understand a language. Human children, unlike the juveniles of other primate species, are able to learn a language, and almost inevitably do learn a language, because they have brains that are biased towards the learning of natural languages in normal environments. Similarly, human children acquire moral competence because they have brains that are biased towards the learning of a specific array of cognitive skills, comprising the core of the moral sense. Learning a language and developing a moral sense are processes that are ‘more like growing a limb than sitting in Sunday school and learning about vices and virtues’ (p. xvii). The innate capacity to learn a language does not specify which natural language a child will learn: the environment determines the specific content of the mature language competence. Similarly, the innate moral capacity does not dictate the specific content of the moral system that the child will acquire: again, this depends on the moral environment inhabited by the child. The existence of an innate, evolved moral sense no more entails universal moral consensus than the existence of universal grammar entails that everyone must speak the same language. Yet the range of possible moral variation is limited: only some possible languages are learnable as first languages by human juveniles, and analogously, only some possible moral systems are learnable (as first moral systems?) by human juveniles. Moral pluralism is possible, but unlimited variation is not. The only moral commandments that are learnable are ones that are consistent with the built-in biases of our innate moral system.
‘Our moral instincts are immune to the explicitly articulated commandments handed down by religions and governments. Sometimes our moral intuitions will converge with those that culture spells out, and sometimes they will diverge.’ (p. xvii)
Like many other cognitive capacities that have been designed to act swiftly, implicitly and economically, the moral sense is subject to error and illusion. In this respect, Hauser has a dual-process view of moral psychology.4 Fast, unconscious, largely automatic processes underlie our moral intuitions, while slow, conscious, deliberate processes of rational thought and inference seek to make sense of those intuitions. Sometimes—sometimes—rational deliberation in a cool hour can help people to recognise errors in their moral intuitions. But like the ability to work out the correct answer in a Wason test of conditional reasoning, this kind of conscious, deliberative moral reasoning is likely to be exceptional in human moral affairs.
Hauser distinguishes three models of the moral capacity, which he calls the Humean, Kantian and Rawlsian models. In the Humean model, the perception of an event triggers an emotion (love, hate, disgust) which in turn is expressed as a moral judgement. In the Kantian model, the perception of an event triggers a conscious process of rational, principled thought, which in turn generates a moral judgement. In the Rawlsian model, however, event perception triggers an unconscious process of action analysis, which in turn generates a moral judgement: this intuitive judgement, in turn, might trigger consequent emotions or (less often) explicit acts of moral reasoning. Which of these three models should we prefer? ‘Only scientific evidence as opposed to philosophical intuition can determine which model is correct’ (p. 47).
Two scientific questions, in particular, dominate Hauser’s analysis of the three models and his defence of the Rawlsian model. The first question concerns ontogeny: how does the moral faculty develop? In order to answer this question we need a model of the ‘finished’, adult state of moral competence: we need a structural model. The structural model will identify the various components of the moral sense and show how they are organised in the normal adult brain. The second of Hauser’s two leading questions is the question of phylogeny: how did the moral faculty evolve? Most philosophers will expect this question to be answered with an adaptationist narrative (‘morality is a Pleistocene adaptation for small-group living’, and so on and so forth). But that is not Hauser’s way. Instead, he proceeds comparatively. Which components of the moral faculty are shared with other animal species (in particular, the other primates) and which are uniquely human? Only comparative psychology can answer that question. A third question is also raised: are the various components of the moral faculty specific to our moral competence, or are some of them shared with other cognitive capacities? ‘We answer the uniquely human question by studying other animals, and we answer the uniquely moral question by studying other systems of knowledge’ (p. 49).
Here is Hauser’s ‘anatomy of the Rawlsian creature’s moral faculty’ (pp. 53-4):
- The moral faculty consists of a set of principles that guide our moral judgments but do not strictly determine how we act. The principles constitute the universal moral grammar, a signature of the species.
- Each principle generates an automatic and rapid judgment concerning whether an act or event is morally permissible, obligatory, or forbidden.
- The principles are inaccessible to conscious awareness.
- The principles operate on experiences that are independent of their sensory origins, including imagined and perceived visual scenes, auditory events, and all forms of language—spoken, signed, and written.
- The principles of the universal moral grammar are innate.
- Acquiring the native moral system is fast and effortless, requiring little to no instruction. Experience with the native morality sets a series of parameters, giving birth to a specific moral system.
- The moral faculty constrains the range of both possible and stable ethical systems.
- Only the principles of our universal moral grammar are uniquely human and unique to the moral faculty.
- To function properly, the moral faculty must interface with other capacities of the mind (e.g., language, vision, memory, attention, emotion, beliefs), some unique to humans and some shared with other species.
- Because the moral faculty relies on specialized brain systems, damage to these systems can lead to selective deficits in moral judgments. Damage to areas involved in supporting the moral faculty (e.g., emotions, memory) can lead to deficits in moral action—of what individuals actually do, as distinct from what they think someone else should or would do.
The main body of Hauser’s book is in three parts. The first part aims to describe the mature state of the human moral competence (the structural and causal questions: features 1-4 in the list above). The second part concerns the development of the moral sense (ontogeny: features 5-7), and the third concerns some evolutionary issues (phylogeny: features 8-10).
In Part One, Hauser gives a survey of recent scientific work on moral psychology, and in particular, of empirical research on our judgements concerning justice and fairness (Chapter 2) and our judgements concerning benefit and harm (Chapter 3).
As a branch of philosophy, a subject known as ‘moral psychology’ has arisen largely on a foundation of informal understandings of moral thought and feeling. You do not need to know about scientific psychology to do philosophical moral psychology, because this branch of philosophy takes its domain of study from folk psychology. Folk psychology is our nonscientific, informal understanding of what makes people tick. Philosophical moral psychology studies the nonscientific, informal, folk conception of what makes people tick morally. But the comfortable notion that philosophy of moral psychology can ignore scientific psychology should be unsettled by exposure to the kind of empirical work that Hauser describes. Intuitions—both normative intuitions and conceptual intuitions—are the primary data of moral philosophy. But as empirical phenomena, those intuitions are psychological events which are equally data for science. What hidden processes give rise to these intuitions? Why do those specific processes exist, rather than different processes that would give rise to different intuitions? Nor should we assume that we know already what ‘our’ intuitions are.5 Hauser describes research into the moral intuitions that people actually have. How do people who have never been exposed to moral philosophy actually respond to trolley problems,6 for example? What intuitions do they actually have about fair shares, about punishment for defectors from cooperative activities, and the like? What do these data tell us about shared, universal features of moral psychology? How much variation is there in human moral intuitions, and how are differences correlated with differences in social class, culture, religion, and so on?
Take, for example, the principle of double effect, endorsed by some traditional moral systems, but rejected by utilitarians on the basis not of intuition but of utilitarian reasoning. The principle asserts a moral difference between intended outcomes of actions and outcomes which, though foreseen, are not specifically intended. For instance, some moralists believe that it is morally wrong to perform an operation which has the intended outcome of killing a foetus, but it is not morally wrong to perform an operation which is intended to save the life of a pregnant woman, even if a foreseeable but unintended side-effect of the operation is the death of the foetus. What is the philosophical import of the fact that intuitive morality normally endorses this distinction? Most people have no explicit knowledge of the ethical principle of double effect. But in their intuitive judgements, they recognise moral differences between these two kinds of cases (pp. 124ff). They implicitly endorse a principle of double effect. When asked to justify the distinctions they intuitively recognise, however, people are usually ‘clueless’: they are largely incoherent when asked to explain or justify their intuitive moral judgements (p. 128).
Intuitive morality endorses the principle of double effect. What follows, normatively? Suppose that this principle is not only part of our actual intuitive morality, but also part of any possible intuitive morality. The utilitarian principle that sees no moral differences between the two classes of effects is not a part of any possible intuitive morality. A utilitarian who rationally endorses the denial of double effect will have to struggle with his own intuitive endorsement of the doctrine. His children will naturally endorse it and he will have his work cut out convincing them, by rational, explicit argument, that they should reject it. Attempts by Benthamite reformers to frame legislation that ignores the distinction will be met by popular resistance. But does it follow from the facts about intuitive morality that the principle is normatively correct? What should give, when our implicit, evolved, intuitive morality conflicts with the conclusions of explicit moral reasoning (by specially trained philosophers)? Hauser does not answer this question. It is in fact a particular instance of a much more general issue: how much normative authority should biological nature have? To put it bluntly, should the interests of our selfish genes dictate our own personal interests? Suppose that rigorous scientific inquiry supports some conclusion about human differences (sexual, racial, whatever) that conflicts with normal, liberal, moral ideals. For example, suppose that it turns out to be true that women on average are psychologically better equipped for child-rearing than are men on average. Should this fact influence our thinking about how child rearing will be handled in the ideal society? More generally, supposing that there is a gap between what is and what ought to be, how wide can that gap become, if morality is to retain its psychological authority over us? And can a morality that loses its psychological authority still have normative authority?
Intuitive morality endorses the principle of double effect. Hauser cites evidence that intuitive morality also endorses the distinctions between intentional and nonintentional actions, and between acts and omissions. Once again, it is not clear what relevance this has for philosophical ethics; but it would be unwise of philosophers to ignore it.
One more example: Carol Gilligan has famously argued that the moral development of girls follows a different course from the moral development of boys. Girls are more concerned with caring whereas boys are more concerned with justice. Gilligan’s views have been very influential in the development of a feminist ethics of care. However, the data (as reported by Hauser: pp. 125-6) do not support this distinction. There is no evidence of gender differences in intuitive morality. ‘Gender differences may play a role in performance, and the justifications that the sexes give. But when it comes to our evolved moral faculty—our moral competence—it looks like we speak in one voice: the voice of our species.’ Once again, what follows normatively? How if at all should this influence our rational evaluations of the ethics of care?
Part Two of Moral Minds is about moral development: ‘current knowledge [is used] to explore how the principles guiding our intuitive judgments are acquired and how they are represented in the brain’ (p. 164). If the human moral faculty is an evolutionary adaptation, then some of its components are innate.7 Strong (‘staunch’) nativism is the view that specific moral principles are hardwired into the brain. Staunchly nativist infants have innate knowledge of specific moral rules concerning helping and harming, sharing and trusting. By contrast, the weakest form of nativism (‘Lockean nativism’, perhaps) supposes only that infants have innate learning abilities—general learning abilities, not abilities cued specifically to moral learning—that enable children to acquire a moral sense through education and experience. Hauser rejects both views, and defends instead a ‘temperate’ nativism (pp. 165, 298-9). Once again, Chomskian linguistic nativism gives the lead. The infant brain is innately equipped with ‘a suite of principles and parameters for building moral systems’. These principles lack specific content: that is, they lack specific moral or normative content. Instead, these innate principles provide tools for the moral analysis of agency, of actions and their consequences, and of relations of social cooperation, but they do not dictate the specific normative values that are to be used in this analysis. Local culture provides the specific normative content that fills out the universal and formal grammar of morality. ‘[W]e are born with abstract rules or principles, with nurture entering the picture to set the parameters and guide us toward the acquisition of particular moral systems’ (p. 165).
Hauser’s discussion of the ontogeny of the moral faculty is built around the three models mentioned earlier: the Humean, Kantian and Rawlsian models. There are some interesting dynamics here. The primary division (in my view) is between Kantian models, on the one side, and Humean and Rawlsian models, on the other. In Kantian models, moral judgements are made on the basis of conscious reasoning from moral principles (p. 14). But these moral principles need not be Kantian in the narrow sense. A utilitarian who reasons consciously from the principle of utility to normative conclusions is a Kantian ‘creature’, in Hauser’s sense. Moral philosophy in general tends to work with a Kantian model of moral psychology. Kantian models are contrasted with what are sometimes called ‘dual-process’ models (e.g. dual-process models of reasoning). In these models, moral-psychological processes are of two different kinds. Conscious and explicit processes of moral reasoning are contrasted with unconscious, implicit processes for producing moral intuitions. Explicit processes of moral reasoning take moral intuitions as ‘raw data’. Implicit processes are responsible for producing those intuitions in the first place. Both Humean and Rawlsian models endorse the two-process view. They differ in the descriptions they give of the processes which produce moral intuitions.
The Humean model is a psychological form of emotivism: agents, actions and events trigger emotional responses, which are subsequently dressed up in moral concepts and delivered to the world in explicit moral judgements. Hauser argues (plausibly) that the Humean model cannot account for the discriminating way in which our moral intuitions map onto facts about agents, actions and events. For instance, our intuitions are sensitive to differences between intended consequences and unintended but foreseen consequences, and to differences between acts and omissions. These differences are not reliably represented in the perceptual stimulus. So moral intuitions which are sensitive to the differences cannot simply be emotional responses to perceptual stimuli (as the Humean tacitly supposes). Instead, an implicit and unconscious process of analysis must precede the triggering of our intuitions, as such cognitive processes must also precede emotional responses like moral horror and disgust (pp. 197-8). In the Rawlsian model, the moral faculty responds to inputs by analysing the components of the situation (agency, action, consequences), and then attaching moral markers to relevant factors, before giving rise to intuitions, emotions, or for that matter to subsequent explicit processes of moral reasoning or rationalization. No matter how heated our intuitive moral responses might be (think of child sexual abuse), those responses are outputs from implicit processes of cold cognition: ‘Rawlsian creatures are appraisers, but their appraisals are unconscious and emotionally cool’ (p. 192).
Moral philosophy is very interested in explicit, deliberate, principled processes of moral reasoning. But dual-process theories give reasons for thinking that these explicit processes are less transparent and less rationally disciplined than we might otherwise think them to be. A nice example is the model developed by Jonathan Haidt (2001). Haidt points out that early research on moral psychology (Piaget, Kohlberg, etc.) was dominated by models in which moral judgements are caused by explicit processes of moral reasoning. In opposition to these models, Haidt provides evidence ‘that moral reasoning typically does not cause moral judgment; rather, moral reasoning is usually a post hoc construction, generated after a judgment has been reached … [M]oral judgment is generally the result of quick, automatic evaluations (intuitions)’ (ibid., p. 814). Moral reasoning, when it occurs at all, is most often a process of rationalization, in which explicit justifications are constructed for conclusions which we have already adopted as a result of implicit processes. ‘The reasoning process is more like a lawyer defending a client than a judge or scientist seeking truth’ (ibid., p. 820). Moral thinking is normally a matter of making up reasons for what we already believe, not a process of rational belief formation but of post hoc rationalization. Moral reasoning, as a psychological process, is a consequence, not a cause, of moral judgement.8
Part Three of Moral Minds is about the evolutionary origins of morality and human moral psychology. The discussion here is organized around the two comparative questions mentioned earlier.
- Which parts of the moral faculty are shared with other animal species (particularly the other primates) and which are uniquely human?
- Which parts of the moral faculty are specific to our moral competence and which of them are shared with other cognitive capacities?
‘We answer the uniquely human question by studying other animals, and we answer the uniquely moral question by studying other systems of knowledge’ (p. 49). Hauser argues (Chapter 6) that some of the core capacities of the human moral system are present in nonhuman animals, and in Chapter 7, he argues that humans appear to be unique in having capacities that enable large-scale cooperation among nonkin, and that these capacities are central to our evolved moral psychology.
From its birth in ancient Greece, moral philosophy has contrasted the demands and constraints of morality with the self-interested desires of the egoistic individual. The challenge that the moral philosopher takes on is to justify morality to the egoist. Selfishness is the default condition of humankind, the condition to which we would revert if moral constraints were to disappear. These assumptions have in recent years been greatly enriched by an infusion of ideas from evolutionary theories of cooperation. From the biological point of view, one of the signatures of the human species is its capacity for forming stable, nonkin groups. Science can contribute a wealth of vital information about the conditions under which cooperation can emerge, what demands (cognitive, emotional, motivational, behavioural) it imposes on individuals, the benefits and costs of enforcement and of punishment for defection, the social structure of primary norms and of secondary norms of enforcement, and so on. Hauser is good on these topics, and shows by example that benefits will indeed flow from cooperation between science and philosophy in advancing our knowledge of morality ‘in the round’.
But what is the function of morality?
Hauser has surprisingly little to say about the function of morality. Yet functional questions are very important. Suppose it is true that we are Rawlsian creatures: human moral psychology has the structure that Hauser describes. A Darwinian science of morality is obligated to raise the functional question: what explains the presence in the modern human mind of these psychological structures? What is the function of our evolved moral psychology? The functional question does not arise (in a Darwinian way) if our moral psychology is not an evolutionary product. But that is not Hauser’s view. If it is an evolutionary product, it does not follow straight off that it has a function: it might not be an adaptation, but a Gouldian ‘exaptation’ or by-product of other evolutionary processes. However, the usual reasons for thinking that our moral psychology is an adaptation apply: it is a complex trait that has a significant impact on fitness. We need to explore adaptationist hypotheses, because we need to know why a moral sense evolved in our lineage and why it has taken the form that it now presents. Could humans be as culturally and technologically advanced as they are and not have a moral sense at all? Could we have come as far as this with a substantially different moral sense? Ernst Mayr called questions of evolutionary function ‘ultimate’ questions, and for good reason: answers to these questions explain the existence of specific structures, their presence in modern phenotypes. Why does morality exist at all? That is a very good question, but Hauser has little to say about it. This is a surprising and disappointing gap in his book.
Humans are not uniquely social animals, but they are unique in the sheer size of the social groups that they inhabit—especially since the advent of agriculture and more recently of states. Human social groups consist of much more than a lot of people living closely together. Group-living is not sufficient for sociality (think wildebeest and other grazing animals that live in large herds but exhibit minimal cooperation). Humans are remarkable in their ability to live in large, cooperative groups composed predominantly of nonkin. Social life in these groups depends on norms of cooperation, and requires of individual participants a suite of psychological resources, what Kim Sterelny usefully calls ‘the cooperation suite’ (Sterelny 2003). The prosocial emotions and desires are an important, perhaps essential part of the cooperation suite. Prosocial emotions and desires include empathy, sympathy, group-directed desires for friendship and ‘belonging’, love and kindness, and so on. There is evidence that nonhuman primates have some of these emotions and desires (de Waal 2006). But prosociality is not yet morality (Joyce 2006). Morality might very roughly be described as prosociality+normativity. Normativity is ‘oughtiness’. Morality is more than wanting to be nice to other people; but when wanting to be nice is accompanied by a sense of obligation (oughtiness), we have morality, more or less. Joyce (op. cit.) has argued persuasively that an evolutionary account of the origin of morality must explain more than prosociality. It must explain the evolutionary origins of oughtiness as well. Joyce himself favours a Humean projectivist view. An internal (perhaps evolved) sense of compulsion is projected upon the world, giving to moral principles an objective phenomenology: the normativity of morality appears to come from outside us, from the objective world.
For most of their evolutionary history, humans have lived in small groups—perhaps of a few tens of individuals, mostly related by kinship. The cooperation suite of psychological resources evolved in that environment. Perhaps these resources evolved in part through processes of group selection (Sober and Wilson 1998; Sterelny 2003). These resources, including perhaps the sense of normativity, evolved as mechanisms for small-group life. Individuals living in such groups did better than those living in less cooperative groups. Cooperative practices expanded into new groups and into new territories at the expense of less cooperative practices. But as Sober and Wilson remind us, there is a dark side to this process of small-group selection: ‘Organisms are frequently adapted to prey upon and compete aggressively with other organisms, so no less can be expected of groups. Group selection does not eliminate conflict so much as elevate it to a new level in the biological hierarchy, where it can operate with even more destructive force than before’ (Sober and Wilson 2000, p. 264). Niceness towards one’s fellows within the group sits comfortably beside hostility and nastiness towards those outside the group. So morality as an adaptation is, plausibly, only an adaptation for the benefit of small groups of humans. That is its evolved function.
These are important issues, and it is disappointing that the multi-talented Hauser does not tackle them head on.
Does the science of morality debunk morality?
Suppose that you are a religious person ‘by default’, as most religious people are. You have been raised in a particular religion, and have never seen sufficient reason to defect. But something that you read persuades you that there is a genetically designed faith system in the brain (we might generously call it ‘man’s need for faith’), and that a child’s culture fills up their faith centre with the local religious memes.9 It would not be altogether surprising if you came to doubt the validity of your locally acquired religious beliefs. Knowledge about the origins of some of your beliefs can dispose you to doubt the truth of those beliefs. This might be described as a debunking effect. It is in the first instance a psychological process, in which you move from belief to disbelief. But it has an epistemic flavour: newly acquired beliefs give you reason to doubt (or perhaps cause you to doubt) that your old beliefs are really justified. Your new naturalistic beliefs explain why it is that you have until now held religious beliefs, and the explanation is of a kind that undermines the assumption that your religious beliefs track the truth. The natural facts indicate that you would have had those same religious beliefs, whether or not they were true. The truth of your religious beliefs does not enter into the best explanation for your holding them; nor do confirming evidence or rationally sound arguments. A naturalistic theory of religion has corroded your religious faith, even if it does not directly address the arguments for the existence of God.
Suppose that you are a normally moral person, and you read Hauser’s Moral Minds. You become convinced that your moral sense can be naturalistically explained. Our moral sense evolved to serve the interests of our genes. That being so, the moral sense evolved to track fitness-affecting facts about the world, not to track objective moral facts.10 Having recognized this, it now seems to you redundant to think that there are objective moral facts or any other set of objective facts on which morality can be grounded. You now accept an evolutionary, ultimate explanation for the existence of your own moral sense. What will you do? Will you decide that morality is a hoax or a delusion? (Richard Dawkins calls his book on religion The God Delusion.) Will you give up morality? This is a matter of empirical fact, which cannot be settled a priori. But it does seem intuitively implausible (at least to me) that you will undergo the sea change needed for you to defect from morality. Most of your moral psychology is implicit and unconscious (if a dual-process theory is true). Your conscious acceptance of a nonobjectivist view might well prove to have no discernible effect on those unconscious processes. (I have met philosophers who subscribe to moral nihilism, and they are all perfectly decent chaps.)
The analogy of morality with religion raises one final matter for comment. People are often avidly religious. Yet the same passion for morality is rare. The lives of many people are consumed by religion; few people are similarly passionate about and consumed by morality. Hauser does not seem to notice this. He describes how the development of a moral sense is a feature of normal ontogeny. But he does not notice how easily that developmental process can be derailed. Just about any modern city houses people whose moral sense failed to develop properly, and most of us prefer not even to think about the changes to the behaviour of ordinary people that follow upon the breakdown of civil order. Language competence develops much more robustly than does moral competence. Why should this be? Perhaps the human moral sense has an evolved function; but perhaps we should include it with other evolved traits that are very imperfect, that perform their functions rather poorly. If so, what does this tell us about morality, and about us?
Dawkins, Richard (2006), The God Delusion. Bantam Press: London.
Dennett, Daniel (2006), Breaking the Spell: Religion as a natural phenomenon. Viking: New York.
de Waal, Frans (2006), Primates and Philosophers: How morality evolved. Edited by Stephen Macedo and Josiah Ober. Princeton University Press: Princeton and Oxford.
Haidt, Jonathan (2001), ‘The emotional dog and its rational tail: a social intuitionist approach to moral judgment’, Psychological Review, Vol. 108, pp. 814-34.
Hauser, Marc D. (2006), Moral Minds: How nature designed our universal sense of right and wrong. HarperCollins: New York.
Joyce, Richard (2001), The Myth of Morality. Cambridge University Press: Cambridge.
Joyce, Richard (2006), The Evolution of Morality. Bradford/MIT Press: Cambridge, MA.
Sober, Elliott and David Sloan Wilson (1998), Unto Others: The evolution and psychology of unselfish behavior. Harvard University Press: Cambridge, MA.
Sober, Elliott and David Sloan Wilson (2000), ‘Morality and Unto Others’ (response to commentary discussion), in Evolutionary Origins of Morality, edited by Leonard D. Katz. Imprint Academic: Thorverton.
Sterelny, Kim (2003), Thought in a Hostile World: The evolution of human cognition. Blackwell: Oxford.
Tinbergen, Niko (1963), ‘On the aims and methods of ethology’, Zeitschrift für Tierpsychologie, Vol. 20, pp. 410-33.
1 Unless otherwise stated, all page references are to Marc Hauser’s Moral Minds. Marc Hauser is the author of The Evolution of Communication (1996), Wild Minds: What animals really think (2000), and numerous journal articles on animal behaviour, developmental psychology, and related fields. He is Professor of Psychology, Organismic and Evolutionary Biology, and Biological Anthropology, at Harvard University, and director of the Cognitive Evolution Laboratory and co-director of the Mind, Brain and Behavior Program.
2 Tinbergen (1963).
3 Strictly, the traits promoted fitness in the environment of selection, whether or not they still do so.
4 Compare Haidt (2001), p. 819.
5 Education in philosophy is famously a process of teaching the next generation of philosophers to have the correct intuitions about Gettier cases, Twin Earth, and so on and so forth.
6 If you don’t know what trolley problems are, much pleasure awaits you (well, perhaps). Hauser’s book is a good introduction.
7 Yes, innateness is a controversial concept; but Hauser puts it to its familiar uses anyway.
8 Of course this is an empirical generalization. There will be exceptions, but the generalization might well be true of 99+% of cases in which moral reasoning occurs.
9 For much more sophisticated and plausible views, see Dennett (2006) and Dawkins (2006).
10 Levy (2006), p. 574, in discussion of an argument in Joyce (2001).