Category Archives: morality

Egalitarianism and parochialism in young children

Fehr, E., Bernhard, H., & Rockenbach, B. (2008). Egalitarianism in young children. Nature, 454(7208), 1079–1083. http://doi.org/10.1038/nature07155

This is a classic and crucial study. The authors use an extremely simple experimental design (inspired by previous work with non-human primates) to test the hypothesis that children’s egalitarianism and parochialism develop in parallel. Children between 3 and 8 years of age are presented with three situations (see the sketch after the list):

  • Prosocial: either take one candy and assign one candy to another child, or take one candy and assign none to another child ((1,1) vs. (1,0)).
  • Envy: either take one candy and assign one candy to another child, or take one candy and assign two candies to another child ((1,1) vs. (1,2)).
  • Costly sharing: either take one candy and assign one candy to another child, or take two candies and assign none to another child ((1,1) vs. (2,0)).
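
To keep the payoff structures straight, here is a minimal sketch in Python (my own toy notation, not the authors’), writing each option as a (self, other) allocation and flagging the egalitarian choice in each game.

    # Toy sketch of the three games in Fehr et al. (2008); allocations are (self, other).
    # The dictionary and the helper function are mine, added only to keep the payoffs straight.
    games = {
        "prosocial":      [(1, 1), (1, 0)],
        "envy":           [(1, 1), (1, 2)],
        "costly_sharing": [(1, 1), (2, 0)],
    }

    def egalitarian_option(options):
        # The egalitarian choice is the allocation with the smallest gap between self and other.
        return min(options, key=lambda alloc: abs(alloc[0] - alloc[1]))

    for name, options in games.items():
        print(name, "->", egalitarian_option(options))   # (1, 1) in every game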

The study shows that children, as they grow, aim at reducing the inequality between themselves and their partner, regardless of whether the inequality is to their advantage or disadvantage.

The authors found that children at age 3–4 show little willingness to share resources (as tested in the costly sharing situation), although a non-negligible percentage of them are willing to make choices that benefit the recipient when doing so is not costly (in the envy and prosocial situations). After this age, other-regarding preferences develop, and they take the form of inequality aversion rather than a preference for increasing the partner’s or the joint payoff.

Thus, across the three situations, egalitarian choices increase with age. “If we pool the children’s choices in all three games, the percentage of children who preferred the egalitarian allocation in all three games increases from 4% at age 3–4 to 30% at age 7–8.” Also, “(…) the share of subjects who maximize the partner’s payoff by choosing both (1,1) in the prosocial game and (1,2) in the envy game decreases sharply from 43% at age 3–4 to 16% at age 7–8.” Egalitarianism rises as generosity declines.

This emphasis on equality (or inequality aversion) seems to be uniquely human; no animal shows a comparable behavioral pattern.

In addition, children (especially boys) show an in-group bias. For example, in the envy game, boys make egalitarian distributions (1,1) rather than generous distributions (1,2) more often with outgroup partners than with ingroup partners. The effects of parochialism are also apparent in the other situations: in the prosocial game, children remove inequality that favors themselves more often if the partner is an ingroup member, and in the sharing game, egalitarian choices slightly decrease with age if the partner is an outgroup member, whereas sharing with ingroup members strongly increases with age.

The conclusion is not only that egalitarianism and parochialism are important forces driving children’s judgments, but also that a utilitarian ethics seems absent from children’s minds. In other words, children do not try to maximize the total sum of benefits for everybody. That is why, in the envy situation, children (at least after 5 or 6 years of age) tend to prefer (1,1) over (1,2): egalitarianism trumps maximization of benefits. Utilitarianism is not a factor in children’s reasoning. Inequality aversion and parochialism grow between 3 and 8 years of age and explain children’s responses.

Comparing this paper with other studies, it is interesting to note that equality is said to appear at 5, 6, 7, or 8 years of age depending on the study, the methodology used, and the way the results are interpreted (e.g., Rochat says that 5-year-olds are already steady defenders of strict equality and that they can even adopt an ethical stance, being willing to sacrifice their own resources to punish an agent who does not observe equality).

In addition, it is relevant to understand young children’s (3- and 4-year-olds’) apparently discrepant or erratic behavior (sometimes they are generous, at other times selfish). In a previous study I claimed that the reason for this is that those children “don’t frame their relationships in terms of strict-reciprocity (tit for tat) contracts. It should be no surprise that their behavior in economic games and fairness experiments is consistent with a culture of associative reciprocity and the gift economy, which predominate in the context of familial institutions and peer relationships at this age. Preschoolers might appear as non-strategic from the point of view of economists who identify rationality with calculating the best means to achieve a desired end-result (individual profit, equality, etc.), but they are actually well adapted to their real social context. (…) The apparently selfish tendencies of 3-year-olds moderate themselves as children mature, so that between five and seven years of age (depending on the specific study) children start demanding fairness and rejecting inequality. In certain cases, they even embrace an ethical stance and engage in costly punishment. This emerging mindset is in harmony with the strict reciprocity embedded in experiences such as bartering with peers or dealing with money and prices, which gain prominence in children’s daily life as they grow up. In the culture of adults, barter and monetary transactions are considered fair when both parties receive an equivalent value. Similarly, fair distributions between partners with the same merit are expected to be 50/50. This kind of institutional context comes to dominate children’s interactions and provides them with a new sense of fairness.”

Tisak and Turiel on moral and prudential rules

Tisak, M. S., & Turiel, E. (1984). Children’s Conceptions of Moral and Prudential Rules. Child Development, 55(3), 1030–1039.

This article examines the relationship between moral and prudential rules in children. Moral and prudential events are similar in that they may involve consequences (for example, harm) to persons, but differ in that morality bears upon social relations and prudence does not. The researchers interviewed children using scripts depicting transgressions of moral (stealing, pushing) and prudential (running in the rain) rules. Participants were between 6 and 10 years of age. The authors conclude that 6-year-old children can already differentiate between moral and prudential rules. Children’s evaluations of moral and prudential rules are very similar in many respects; however, the authors claim that the justifications given for moral rules focus on both consequences (harm) and the regulation of social relations (justice, fairness), while justifications for prudential rules are based only on consequences. Moral rules were also attributed more importance than prudential rules. As is typical in Moral Domain Theory, the interview is purely verbal and children are required to provide explicit justifications for their judgments. As a side note, I have a problem with Turiel’s prose: it’s dry and boring. But that’s my problem, I guess. (I know, this is supposed to be science, not literature).

Elliot Turiel on the Development of Morality

Turiel, E. (2008). The Development of Morality. In W. Damon & R. M. Lerner (Eds.), Child and Adolescent Development: An Advanced Course (pp. 473–514). SAGE Publications.

This is a great summary of trends and theoretical orientations in the study of moral development by Elliot Turiel. I will only comment on a couple of minor points.

First: I like the fact that Turiel considers children as active social agents who face conflicts and meaningful moral experiences in their everyday life. Moral development is not about absorbing information about moral rules or values; it is about actively constructing a moral understanding of the social world and of one’s own life. “…in many current formulations morality is not framed by impositions on children due to conflicts between their needs or interests and the requirements of society or the group. Many now think that children are, in an active and positive sense, integrated into their social relationships with adults and peers and that morality is not solely or even primarily an external or unwanted imposition on them.”

He also emphasizes that morality is not primarily negative (as one might infer from Freudian or behavioristic formulations); in other words, it’s not about the inhibition of aggressive or sexual impulses. Today, we know that children experience empathic feelings towards other people spontaneously; that our species has a natural tendency to do things within groups and to help each other. “The findings that young children show positive moral emotions and actions toward others indicate that the foundations of morality are established in early childhood and do not solely entail the control and inhibition of children’s tendencies toward gratifying needs or drives or acting on impulses. However, that the foundations of positive morality are established in early childhood does not necessarily establish that significant aspects of development do not occur beyond early childhood; that judgments, deliberations, and reflections are unimportant; or that many experiences, in addition to parental practices, do not contribute.”

Against the work of J. Haidt and R. Shweder, Turiel claims that:

“Studies of moral development suggest alternatives to the propositions that emotions are primary in morality, that moral acquisition is mainly due to effects of parental practices on children, or that morality largely reflects the acquisition of societal standards. Dunn et al. (1995) found differences in the two types of situations they assessed (physical harm and cheating) and documented that relationships with siblings influence development.” This suggests that children have a spontaneous capacity to reason according to how they determine the domain they are dealing with (moral, societal, personal) and the kind of moral problem at hand.

Children do not passively receive the moral prescriptions upheld by adults; rather, they show a certain degree of autonomy early on. “By 2 or 3 years of age, children display a fair amount of teasing of mothers, physical aggression, destruction of objects, and an increasing ability to engage in arguments and disputes with mothers (Dunn & Munn, 1987). This increasing variety in young children’s social relationships is consistent with the findings reviewed by Grusec and Goodnow (1994) showing that parental practices are related to type of misdeed (e.g., moral or conventional), children judge the appropriateness of reasons given by parents when communicating with them, and parents may encourage ways of behaving that differ from those they engage in themselves… With acts entailing theft or physical harm to persons, young children (4 to 6 years) give priority to the act itself rather than the status of the person as in a position of authority.”

“Children’s judgments are not based on respect or reverence for adult authority but on an act’s harmful consequences to persons. Children’s judgments about harmful consequences emerge early in life along with emotions of sympathy, empathy, and respect (Piaget, 1932; Turiel, 2006b); at young ages children go well beyond social impulses and the habitual or reflexive, attempting to understand emotions, other persons, the self, and interrelationships (Arsenio, 1988; Arsenio & Lemerise, 2004; Nucci, 1981; Turiel, 1983, 2007). A great deal of research has demonstrated that young children make moral judgments about harm, welfare, justice, and rights, which are different from their judgments about other social domains.”

All this is consistent with my view of children as active, institutional agents.

Turiel notes that the differentiation between the societal, personal, and moral domains appears early in life (at three years of age), but that doesn’t mean it’s innate. Rather, he believes it might originate in the experiences and interactions children engage in during their first years.

Haidt on rationalism, social intuitionism and morality


Haidt, J. (2001). The emotional dog and its rational tail: a social intuitionist approach to moral judgment. Psychological Review, 108(4), 814–834. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/11699120


  1. Rationalism vs. intuitionism

Let me start at the end. This wonderful article closes with a beautiful sentence: “The time may be right, therefore, to take another look at Hume’s perverse thesis: that moral emotions and intuitions drive moral reasoning, just as surely as a dog wags its tail”.

This piece criticizes rationalist approaches in moral psychology and proposes an alternative: social intuitionism.

Rationalist approaches, according to the author, assume that moral knowledge and moral judgment are reached primarily by a process of reasoning and reflection. Intuitionist approaches, by way of contrast, claim that moral intuitions (including moral emotions) come first and directly cause moral judgments. Haidt believes that moral reasoning is usually an ex post facto process (a dog’s tail) used to influence the intuitions (and hence judgments) of other people (other dogs), and not to arrive at new moral truths.

Haidt begins by offering some affectively charged examples, such as incest and other taboo violations. For those cases, he says, an intuitionist model is more plausible than a rationalist model. He then tries to prove that the intuitionist model can handle the entire range of moral judgments.

Haidt relies on a schematic contrast between intuition and reason. Intuition, he says, occurs quickly, effortlessly, and automatically, such that the outcome but not the process is accessible to consciousness, whereas reasoning occurs more slowly, requires some effort, and involves at least some steps that are accessible to consciousness. When one uses intuition, “one sees or hears about a social event and one instantly feels approval or disapproval”.

He then suggests that research on moral development (for example, Kohlberg’s) is trapped in a vicious circle between theory and methods. Rationalist researchers assume that moral judgment results from conscious, verbal reasoning, and therefore they investigate it using oral interviews that highlight rational discourse and obscure intuitive reactions. Standard moral judgment interviews distort our understanding of morality by eliciting an unnaturally reasoned form of moral judgment, leading to the erroneous conclusion that moral judgment results from a reasoning process, and thus reinforcing the mistaken assumptions the researcher had at the very beginning of the study.

Haidt’s model posits that the intuitive process is the default process, handling everyday moral judgments in a rapid, easy, and holistic way. It is only when intuitions conflict, or when the social situation demands a thorough examination of all facets of a scenario, that the reasoning process is called upon.

  2. The social dimension

According to the social intuitionist model, moral intuitions and moral reasoning are partially shaped by culture. Given that people have no access to the processes behind their automatic evaluations, they provide justifications by consulting their a priori moral theories, i.e., culturally supplied norms for evaluating and criticizing the behavior of others. By the way, this point has been made many times in the past. Aristotle, in his Rhetoric, already described how people use cultural commonplaces in persuasive speech to support pre-existing points of view. A priori moral theories provide acceptable reasons for praise and blame (e.g., “unprovoked harm is bad”; “people should strive to live up to God’s commandments”). The term “a priori moral theories” seems to cover roughly the aspect of culture that other scholars call social representations, ideology, background knowledge, topoi, etc.

The social intuitionist model acknowledges that moral reasoning can be effective in influencing other people. Words and ideas can make people see issues in a new way by reframing a problem and triggering new intuitions. Now, this is remarkable: Haidt refers to one of the paradigmatic rationalist philosophers (Plato!) to make the point that moral reasoning naturally occurs in social settings, for example in the context of a dialogue between people who can challenge each other’s arguments and provoke new intuitions. This is an odd allusion because Plato embodies the very origin of the tradition that Haidt seems to be attacking, a tradition holding that moral rules and beliefs ought to be established through rational discourse. When Haidt attacks this tradition, he depicts rationalists as people who think of morality as individual, internal, and cognitive. So how can he refer to the same tradition to make the point that moral reasoning is social and interactive?

Haidt does not seem to acknowledge that rationalists themselves sometimes claim that morality develops through social processes and exchanges. Piaget and Kohlberg, for example, give such social processes as much importance as Plato himself does, something Haidt does not fully recognize when he treats Piaget’s and Kohlberg’s theories of moral development as cognitive.

More on this point. Haidt makes the point that the intuitionist approach treats moral judgment style as an aspect of culture, and that educational interventions should aim at creating a culture that fosters a more balanced, reflective, and fair-minded style of judgment. At this point he says that the “just community” schools that Kohlberg created in the 1970s appear to do just that. Why doesn’t Haidt see this as a contradiction?

Let me clarify. There are two important conceptual tensions to be noticed here. The first is that Kohlberg is presented as the textbook moral rationalist and then as the proponent of a practical intervention that takes into account the social and cultural aspects of morality (the same problem as with Plato). At this point Haidt should make clear what is going on. Either there is an internal contradiction in Kohlberg’s system (so that he sometimes treats morality as an exclusively discursive, rational, and cognitive matter, and at other times understands it as a social and cultural process), or Kohlberg’s view of morality is subtler and more multi-layered than expected (and Haidt’s attacks are therefore aimed at a straw man). In my opinion, the second option is the case (see Donald Reed’s book on Kohlberg, “Liberalism and the Practice of Democratic Community”).

The second conceptual tension comes to the fore in paragraphs such as the following: “By seeking out discourse partners who are respected for their wisdom and open-mindedness, and by talking about the evidence, justifications, and mitigating factors involved in a potential moral violation, people can help trigger a variety of conflicting intuitions in each other. If more conflicting intuitions are triggered, the final judgment is likely to be more nuanced and ultimately more reasonable.”

Here Haidt says that, even though in everyday settings morality is intuitive and automatic, in the long run it is desirable that people talk about evidence and justifications, that is, that they engage in rational argumentation. Most of the time, then, morality is intuitive and automatic, but it ought to be less emotional and intuitive and more rational and discursive. Now, in saying this Haidt comes very close to the tradition he is criticizing: Plato, Piaget, Kohlberg, Rawls, etc. (but not cultural relativists like Shweder!) all say things in the same vein. Haidt seems to be close to the rationalist’s heart at this point.

The way in which Haidt articulates nature and culture, and sees innate cognitions and social modeling as complementary is interesting, and reminds me of other contemporary authors such as Michael Tomasello. I quote from Haidt’s paper:

“Morality, like language, is a major evolutionary adaptation for an intensely social species, built into multiple regions of the brain and body, that is better described as emergent than as learned yet that requires input and shaping from a particular culture.” And: “There is indeed a moral Rubicon that only Homo sapiens appears to have crossed: widespread third-party norm enforcement. Chimpanzee norms generally work at the level of private relationships, where the individual that has been harmed is the one that takes punitive action. Human societies, in contrast, are marked by a constant and vigorous discussion of norms and norm violators and by a willingness to expend individual or community resources to inflict punishment, even by those who were not harmed by the violator.”

We agree with Haidt that cognition in general, and moral judgment in particular, has been seen up to now as an overly intellectual matter. We agree with the turn towards embodied cognition and with the emphasis on the centrality of emotion. Haidt is also right to emphasize the role of practice, repetition, and physical movement in the tuning up of cultural intuitions. “Social skills and judgmental processes that are learned gradually and implicitly then operate unconsciously, projecting their results into consciousness, where they are experienced as intuitions arising from nowhere”. “Moral development is primarily a matter of the maturation and cultural shaping of endogenous intuitions.” Perhaps he is a bit shallow, however, in his view of third-party norm enforcement as the mark of Homo sapiens. Culture is certainly much more than that. Social organizations have developed explicit codes, laws, values and customs, complex representational systems, whole languages that allow humans to be aware of norms and to discuss who is a criminal and who is virtuous; these take rule-following to a completely different level compared even with the most advanced cases of animal social enforcement or punishment of anti-social behavior. But, in general, we agree with his vision of how morality operates in everyday settings.

  3. The physiological analogy

The problem with previous, prevalent views of moral reasoning seems to be that they do not faithfully represent what most people do most of the time. Haidt thus appears to use a naturalist criterion to argue that his theory overcomes the limitations and distortions of previous ones. Psychological science should be concerned with facts; the relevant fact here is how people really think (most people, most of the time). It is true, for example, that most of the time we don’t spell out all the intermediate steps in moral reasoning; that our gut reactions to moral phenomena are quite automatic. This is a naturalist approach: a good theory of digestion, for example, should explain how animals digest their food in normal conditions (most animals, most of the time). Then, if I eat an inedible plant and I suffer from stomach ache and vomiting, those events should be treated as deviating from the natural, expected digestive process, and should be explained by additional, special theories about poisoning. Moral intuition performs the normal digestion of the moral fact; excessive verbal reasoning is a kind of intoxication.

The comparison between intuition (fast, effortless, automatic, unintentional, inaccessible, metaphorical, holistic, etc.) and reasoning (slow, effortful, intentional, controllable, consciously accessible and viewable, analytical, etc.) is based on this kind of physiological, functional view of the human mind.

Now, the physiological analogy, in my opinion, has some limitations. Think about this: we humans also have mathematical intuitions. If I pay with a $100 bill for something that costs $23 and I’m given a five as change, I know immediately that the change is wrong (it should be $77). When someone asks me why, I can offer justifications and produce an explicit calculation, but that doesn’t mean that such an explicit argument was present from the beginning. It’s a justification of my point of view that I produce ex post facto, just as in morals. Thus it might be the case that, in many knowledge domains (math, physics, theory of mind, morality), most people, most of the time, produce automatic responses that are intuitive, effortless, quick, etc. Yet that doesn’t mean that math as a knowledge domain is irrational or purely intuitive, because mathematical rules might have been constructed according to rational criteria in the context of protracted ontogenetic or phylogenetic processes. Still, in everyday settings, we don’t need to spell out all the intermediate steps that take us to a conclusion. We feel immediately that some things don’t make sense or are just wrong.
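
As a toy illustration of that contrast (my own sketch, not Haidt’s), the intuitive verdict amounts to a single comparison, while the justification spells out the arithmetic after the fact:

    # Toy sketch (hypothetical, mine): intuition vs. ex post facto justification.
    price, paid, change_given = 23, 100, 5

    def change_is_wrong(price, paid, change):
        # The fast, holistic verdict: one comparison, no visible intermediate steps.
        return change != paid - price

    def justification(price, paid, change):
        # The explicit, after-the-fact argument that spells out the intermediate arithmetic.
        expected = paid - price  # 100 - 23 = 77
        return f"I paid {paid} for a {price} item, so the change should be {expected}, not {change}."

    print(change_is_wrong(price, paid, change_given))  # True
    print(justification(price, paid, change_given))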

Let me compare this with Piaget’s theory. Although Piaget did use some biological, even physiological metaphors to account for how our mind tries to make sense of phenomena (e.g., assimilation and accommodation), he complemented this view with an epistemological approach that allowed him to characterize the domain of morality (and other domains of cognition) in a richer way. He incorporated logical, philosophical, sociological, and historical considerations into his theories. For example, there is a sociological theory embedded in his differentiation between autonomous and heteronomous moral judgments. This interdisciplinary approach gives him additional criteria to decide what constitutes an interesting, relevant judgment or cognition, beyond the naturalistic criterion of what most people do, most of the time.

When an individual has to deal with a typical moral transgression (a robbery, an act of selfishness, an unnecessary insult against an innocent victim) from within an unquestioned paradigm, then her moral reaction is automatic, intuitive, quick, just as if someone were to ask her “how much is 2 + 2?” But when the situation is new, or when it awakens contradictory moral convictions, then it may trigger a more explicit thinking process, an inner dialogue that in some cases might lead her to new insights. Piagetian theory, by the way, gives a precise account of the distinction between experiences that are easily assimilated to the individual’s current conceptual framework and those that trigger cognitive conflict and, eventually, favor conceptual change. This contrast between paradigm continuity and revolution (to use Kuhn’s terms) is familiar to Piagetian psychologists. Again, Piaget takes into account structural and normative aspects of cognition and goes beyond a purely functional, physiological view that is simply interested in what most people do most of the time. Conceptual change may happen rarely, but it might be interesting and relevant once one adopts a richer view of knowledge processes.

Another metaphor: once the roads are built, yes, it is true that cars tend to travel the same roads over and over again, without thinking about the direction they must go. But sometimes psychologists need to take a step back and think about how new roads are constructed (or abandoned). That’s what constructivism focuses on.

  4. The legal analogy

Haidt says: “The reasoning process is more like a lawyer defending a client than a judge or scientist seeking truth”. He stresses that moral reasoning is not free, that it resembles a lawyer employed only to seek confirmation of preordained conclusions.

Again, it is true that this is how most people reason, most of the time. This is how we think and interact with each other in everyday settings, while pursuing our particular goals. Aristotle, in his Rhetoric, already noticed that people first offer conclusions and then search for supporting arguments (the opposite of the order we use when we try to present arguments according to logical standards).

Yet lawyers are not natural creatures; they are necessary gears within a legal machine. Where there is a lawyer, there is also a more complex legal ecosystem that includes other agents and roles. A lawyer, for example, presents her case to a judge in order to prevail against an opposing party.

Think about it this way: even a scientist acts like a lawyer! When Haidt says “… a judge or scientist seeking truth”, a sociologist of science would disagree with the comparison. A scientist is not an objective judge; she’s an individual human being with particular interests. Yes, she wants to know the truth, but she is also fond of particular hypotheses, intellectual traditions, and lines of research, and tends to be partial, favoring some hypotheses over competing alternatives. Over her career, she has associated herself with such a hypothesis or line of work and does not want to squander her investment. She has a lot at stake. She’s closer to the lawyer than to the judge (ask Bourdieu, Kuhn, and many others…). It is only as a result of a whole adversarial process that a scientific community, in due time, can start recognizing one of the competing theories as closer to the truth, thus playing the role of judge. It takes lawyers, witnesses, and judges to determine the truth within an adversarial system. This is what is called dialectics.

In other words: in thinking of the moral reasoner (or arguer) as a lawyer, Haidt does not distance himself from rationalism. On the contrary, he depicts the moral reasoner as part of a rational, intersubjective process. And he seems to acknowledge this:

“In the social intuitionist view, moral judgment is not just a single act that occurs in a single person’s mind but is an ongoing process, often spread out over time and over multiple people. Reasons and arguments can circulate and affect people, the fact that there are at least a few people among us who can reach such conclusions on their own and then argue for them eloquently (Link 3) means that pure moral reasoning can play a causal role in the moral life of a society.”

Now Haidt has to make a decision here. He can either keep insisting that the interesting, central part of moral judgment is the automatic, intuitive moral reaction that takes place “inside” the individual, and that persuasion is a causal, linear “link” by which one cognitive system impacts another (“causes” it to change a point of view). Or he can accept that there is a rational, intersubjective process of moral reflection that exceeds what an individual does at a particular moment, that is played out on the cultural stage, and that can be seen as rational from a larger perspective: social and historical processes of moral elaboration.


Pinker on moral realism

I’ve recently read an old opinion piece by Steven Pinker (http://www.nytimes.com/2008/01/13/magazine/13Psychology-t.html).

It’s a brilliant article. It summarizes current trends in the scientific study of morality. As I frequently do, I will focus on a tiny aspect of his argument.

In addition to a review of the intellectual landscape in this domain, towards the end Pinker integrates different recent findings and prevalent theories into the theoretical position of “moral realism”. By this expression, he means that morality is not just the result of a number of arbitrary conventions or contingent historical traditions. There are, rather, objective reasons why fundamental moral rules are universally valid. There are moral truths just as there are mathematical truths. Let me quote him:

“This throws us back to wondering where those reasons could come from, if they are more than just figments of our brains. They certainly aren’t in the physical world like wavelength or mass. The only other option is that moral truths exist in some abstract Platonic realm, there for us to discover, perhaps in the same way that mathematical truths (according to most mathematicians) are there for us to discover. On this analogy, we are born with a rudimentary concept of number, but as soon as we build on it with formal mathematical reasoning, the nature of mathematical reality forces us to discover some truths and not others. (No one who understands the concept of two, the concept of four and the concept of addition can come to any conclusion but that 2 + 2 = 4.) Perhaps we are born with a rudimentary moral sense, and as soon as we build on it with moral reasoning, the nature of moral reality forces us to some conclusions but not others.”

So, just as Stan Dehaene talks about a “number sense”, Pinker talks about a “moral sense”. Just as there is a mathematical reality and mathematical facts, there is a moral reality and moral facts.

According to Pinker, moral realism is supported by two arguments:

1) Zero-sum games are games in which one party has to lose in order for the other to win. In nonzero-sum games, by way of contrast, win-win solutions are possible. Now, in many everyday situations, agents are better off when they act in a generous (as opposed to selfish) way. Thus, these everyday situations can be analyzed (in terms of game theory) as “nonzero-sum games.” His words: “You and I are both better off if we share our surpluses, rescue each other’s children in danger and refrain from shooting at each other, compared with hoarding our surpluses while they rot, letting the other’s child drown while we file our nails or feuding like the Hatfields and McCoys.”
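
To make the game-theoretic vocabulary concrete, here is a minimal sketch (toy payoffs of my own, not Pinker’s) contrasting a zero-sum game with a nonzero-sum “sharing” game in which mutual cooperation is a win-win outcome:

    # Toy payoff matrices, written as (row player, column player); the numbers are invented.
    # Zero-sum: every cell sums to zero, so one player's gain is exactly the other's loss.
    zero_sum = {
        ("hoard", "hoard"): (0, 0),
        ("hoard", "share"): (2, -2),
        ("share", "hoard"): (-2, 2),
        ("share", "share"): (0, 0),
    }

    # Nonzero-sum: mutual sharing yields a win-win outcome unavailable in the zero-sum game.
    nonzero_sum = {
        ("hoard", "hoard"): (1, 1),
        ("hoard", "share"): (3, 0),
        ("share", "hoard"): (0, 3),
        ("share", "share"): (2, 2),
    }

    def has_win_win(game):
        # Is there an outcome where BOTH players do better than the all-hoard baseline?
        baseline = game[("hoard", "hoard")]
        return any(a > baseline[0] and b > baseline[1] for a, b in game.values())

    print(has_win_win(zero_sum))     # False
    print(has_win_win(nonzero_sum))  # True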

Pinker does not explain this first argument clearly, but he seems to imply that societies respond to a number of constraints by developing norms and structures (such as reciprocity or mutual respect). A group or social organization that enforces the rules of reciprocity, mutual respect, authority, etc., is probably more stable and is in a position to deliver more good to a greater number of members than a group that does not enforce those standards. This is not a new theory. It is already postulated by Plato (a defender of both mathematical realism and moral realism) in the Republic. It is also advanced, with different nuances, by more recent authors such as Hegel, Piaget, and Quine.

Now, in what sense might concepts like “just” or “moral” be real? Only in the sense of being a kind of “pattern” or “form” that regulates human interaction (they are “ideal realities”, not physical realities). Where might such patterns, such ideal realities, come from? They grow out of natural evolution and cultural history; they develop in human experience, relationships, “praxis” (as a Marxist would say). But if “moral truths” emerge from (are conditional on) natural and cultural history, and history is woven by the actions of free humans, can we still say that there is a universal, binding, “true morality”? Is such a “true” form of justice or morality valid for any possible individual or any possible society? At this point, everything gets blurry and fuzzy. My opinion is that, yes, there is one true universal morality, but that it is true in the context of our specific world history. So, ultimately, moral truths are not absolute (nothing is absolute unless you believe in god), but conditional on human nature, human history and human culture. They are real and universal within this context.

2) The second argument comes from rationality itself. I quote Pinker again: “The other external support for morality is a feature of rationality itself: that it cannot depend on the egocentric vantage point of the reasoner. If I appeal to you to do anything that affects me — to get off my foot, or tell me the time or not run me over with your car — then I can’t do it in a way that privileges my interests over yours (say, retaining my right to run you over with my car) if I want you to take me seriously. Unless I am Galactic Overlord, I have to state my case in a way that would force me to treat you in kind. I can’t act as if my interests are special just because I’m me and you’re not, any more than I can persuade you that the spot I am standing on is a special place in the universe just because I happen to be standing on it.”

“Not coincidentally, the core of this idea — the interchangeability of perspectives — keeps reappearing in history’s best-thought-through moral philosophies, including the Golden Rule (itself discovered many times); Spinoza’s Viewpoint of Eternity; the Social Contract of Hobbes, Rousseau and Locke; Kant’s Categorical Imperative; and Rawls’s Veil of Ignorance.”

“Morality, then, is still something larger than our inherited moral sense.”

This second aspect, which one might call “generalized reciprocity”, simply consists in recognizing that others have the same rights we demand for ourselves. This may have a cost in the short term (I cannot rape your daughter or loot your farm) but it will pay off in the long run (I feel that my land and my family are safer, which is a higher good). In our market-penetrated, contractual society, this reciprocal consideration takes the form of an ability to adopt, in everyday discourse, the point of view of others, overcoming our limited perspective and progressively approaching an inter-subjective or trans-subjective point of view. But, against Pinker, I don’t think that this is a different point from the previous one; it is rather a facet of it. Human societies have developed, throughout history, a more complex, democratic, and in some ways egalitarian structure; at the same time, markets have become central institutions of modern societies. Argument 1 is: societies have evolved internal structures that respond to certain constraints. From there, one can derive argument 2: such societies have tended to make generalized reciprocity both a relational pattern and a moral ideal.

Warneken & Tomasello – Emergence of contingent reciprocity in young children

Paper #7

Warneken, F., & Tomasello, M. (2013). The emergence of contingent reciprocity in young children. Journal of Experimental Child Psychology, 116(2), 338–350.

This is another crucial study by Tomasello and his team. The researchers designed games to be played individually by the toddlers participating in the study. The child and the researcher would play in parallel, side by side. At some point the child would need more resources to continue playing and these would have to be provided by the researcher; later the researcher would lack resources and the child would have the opportunity to either help the researcher or defect. As the authors put it: “we gave 2- and 3-year-old children the opportunity to either help or share with a partner after that partner either had or had not previously helped or shared with the children. Previous helping did not influence children’s helping. In contrast, previous sharing by the partner led to greater sharing in 3-year-olds but not in 2-year-olds.”

These results do not support theories claiming either that reciprocity is fundamental to the origins of children’s prosocial behavior or that it is irrelevant. Instead, they support an account in which children’s prosocial behavior emerges spontaneously but is later mediated by reciprocity.

It is not until 3.5 years of age that children modulate their sharing contingent on the partner’s antecedent behavior. Children first develop prosocial tendencies (already present in babies or young toddlers) and later those tendencies become mediated by reciprocal strategies. Helping and sharing emerge before children begin to worry about direct reciprocity. Later in development, they seem to become more sensitive to reciprocity, adjusting their prosocial behavior accordingly.

On Bloom’s “The Moral Life of Babies”

Very nice piece by Paul Bloom in the popular press (NYTimes), where he summarizes recent cognitivist-nativist research on morality. He claims, for instance, that:

“A growing body of evidence (…) suggests that humans do have a rudimentary moral sense from the very start of life. With the help of well-designed experiments, you can see glimmers of moral thought, moral judgment and moral feeling even in the first year of life. Some sense of good and evil seems to be bred in the bone. Which is not to say that parents are wrong to concern themselves with moral development or that their interactions with their children are a waste of time. Socialization is critically important. But this is not because babies and young children lack a sense of right and wrong; it’s because the sense of right and wrong that they naturally possess diverges in important ways from what we adults would want it to be.”

Throughout the article he tries to present a moderate position that recognizes cultural variation in moral codes and the necessity of social experience for moral development, but claims that there is an innate core of morality, a cognitive starting point shared by all humanity. This innate aspect constitutes a basic moral sense (in a sense similar to that in which Stan Dehaene talks about a “number sense”). So, for instance, he acknowledges the relevance of the convincing studies by Joseph Henrich (among others), yet asserts that those cultural codes are built upon the firm base of our innate capacity for feeling empathy and compassion and for distinguishing aggressive (“evil”) agents from cooperative ones.

Thus, when commenting on Tomasello’s research that seems to imply an innate capacity for cooperation, he argues:

“Is any of the above behavior recognizable as moral conduct? Not obviously so. Moral ideas seem to involve much more than mere compassion. Morality, for instance, is closely related to notions of praise and blame: we want to reward what we see as good and punish what we see as bad. Morality is also closely connected to the ideal of impartiality — if it’s immoral for you to do something to me, then, all else being equal, it is immoral for me to do the same thing to you. In addition, moral principles are different from other types of rules or laws: they cannot, for instance, be overruled solely by virtue of authority. (Even a 4-year-old knows not only that unprovoked hitting is wrong but also that it would continue to be wrong even if a teacher said that it was O.K.) And we tend to associate morality with the possibility of free and rational choice; people choose to do good or evil. To hold someone responsible for an act means that we believe that he could have chosen to act otherwise.”

Presenting morality as a list of features, however, does not help us understand what distinguishes morality from innate cognition: its normative nature. So, when Bloom asserts that “the morality of contemporary humans really does outstrip what evolution could possibly have endowed us with”, I couldn’t agree more (and I am happy to see that a nativist like Bloom has the intellectual courage to make this point); but his theoretical framework doesn’t help him clarify exactly how cultural morality differs from a biological tendency to process information in a certain way.

“The aspect of morality that we truly marvel at — its generality and universality — is the product of culture, not of biology (…) A fully developed morality is the product of cultural development.” Yes, I agree. But: what is culture? How exactly does culture build the normative, universal, deontic discourse that we call morality on top of our innate capacities? That is the question.

Out of reciprocal exchanges, morality emerges

More Rochat:

Children between three and five years develop an understanding that they are potentially liable and that they are building a history of transactions with others. Needless to say, parents and educators foster this development in all cultures, but this fostering is essentially the enforcement of the basic rules of reciprocity, the constitutive elements of human exchanges. Children are channeled to adapt to these rules they depend on to maintain proximity with others. From this, they begin to build a moral space in relation to others, a moral space that is essentially based on the basic rules of reciprocity.

Again: Rochat, P. (2009). Others in Mind: Social Origins of Self-Consciousness. New York: Cambridge University Press, p. 180.