Category Archives: epistemology

Ernest Gellner on science and society

Text #18: Gellner, E. (1984). The scientific status of the social sciences. International Social Science Journal, 36, 567-586.

This is a sharp, well-written article that touches on several connected themes. For example: the reasons for the current prestige of science; the relations between the social and natural sciences; the role of science in modern society; the economic impact of scientific activities; descriptive and normative uses of epistemology; whether the social sciences are scientific, and in what sense; and what would have to happen for the social sciences to achieve an indisputable scientific status. And many others that I will not discuss.

I will only refer to one of the topics Gellner discusses: the social nature of science. Gellner distinguishes between different degrees of “sociologization” of science:

  • Philosophical epistemologies assume that science can be a one-person enterprise. Inductivism and logical positivism fall in this category. “The practitioner of this approach works in terms of some kind of model of discovery or of the acquisition of knowledge, where the elements in that model are items drawn from individual activities, such as having ideas, experiences, setting up experiments, relating the lessons of experience or the results of experiments to generalizations based on the initial ideas, and so forth. An extreme individualistic theory of science would be one that offered a theory and a demarcation of science without ever going beyond the bounds of a model constructed in this way. Such a theory might concede or even stress that, in fact, scientists are very numerous and that they habitually co-operate and communicate with each other. But it would treat this as somehow contingent and inessential. A Robinson Crusoe could, for such a theory, practise science. Given resources, longevity, ingenuity and ability, no achievement of science as we know it would, “in principle”, be beyond his powers. Those who hold theories of this kind are not debarred from admitting that, in fact, criticism, testing and corroboration are, generally speaking, social activities, and that they depend for their effectiveness on a mathematical, technological and institutional infrastructure, which is far beyond the power of any individual to establish; but they are, I suppose, committed to holding that whether or not a social environment makes these preconditions available is, as it were, an external condition of science, but not in any essential way part of it.”
    I think many current cognitivist and developmental psychologists who view knowledge acquisition as an individual skill or activity also fit Gellner’s description (think Gopnik).
  • First-degree sociologization of science: society constitutes an essential precondition for the existence of science, but only society as such, and not necessarily this or that kind of society. Think Émile Durkheim here.
  • A second degree of sociologizing the theory of science involves insisting not merely on the presence of a society, but of a special kind of society. Popper’s theory of science seems to be of this kind: society is not enough; science requires the “critical spirit”. Closed societies cannot engender science, but an “open society” can. An open society is one in which men subject each other’s views to criticism, and which either possesses institutional underpinning for such a practice, or at least lacks the institutional means for inhibiting it. Science is the kind of institution that is not at the mercy of the virtues or vices of persons. Public testing by a diversified and uncontrollable community of scientists ensures the ultimate elimination of faulty ideas, however dogmatic and irrational their individual adherents may be. In this version, science and its advancement clearly do depend on the institutional underpinning of this public and plural testing.
    Thomas Kuhn also sociologizes science to the second degree. For him, the crucial difference between science-capable and science-incapable societies is the absence or presence of a paradigm. Kuhn, however, does not seem to distinguish between scientific and unscientific paradigms. For Popper, the only science-capable society is one endowed with institutional guarantees of the possibility, or even the encouragement, of criticism; for Kuhn, science is made possible only by the presence of social conceptual control tight enough to impose a paradigm on its members at most times. Paradigms are binding only by social pressure, and it is this pressure that makes science possible. Unless the deep questions are arbitrarily prejudged, science cannot proceed.
  • Gellner’s position is that, to define science, one needs to sociologize the philosophy of science to the third degree. This means considering features and activities of society that do not pertain to its cognitive activities alone. (There is something strange in Gellner’s argument here, because Popper’s and Kuhn’s theories, as described by Gellner himself, already include non-cognitive, i.e. institutional, aspects of social life.) In order to clarify his point, Gellner describes three crucial stages of human history:
    1. Societies that practice hunting and food gathering. He doesn’t talk about knowledge in these societies, but we know these are societies that organize their wisdom in myths (folk tales, oral traditions, etc.)
    2. Societies oriented towards food production, mainly agriculture and pastoralism. These societies are literate and are governed by a centralized political class. Recorded knowledge in such societies is used for administrative records, notably those connected with taxation; for communication along a political and religious hierarchy; and as parts of ritual and for the codification of religious doctrine. Conservation of the written truth is the central concern here, rather than its expansion.
    3. Societies based on production, which is linked to growing scientific knowledge. Here he includes all modern and post-modern societies: the continuously growing technology they engender is immeasurably superior to, and qualitatively distinct from, the practical skills of the craftsmen of agrarian society. In this society, the question is no longer “what is truth, wisdom or genuine knowledge?” Rather, science is seen as the key to expand and optimize the productive processes of society. A society endowed with a powerful and continuously growing technology lives by innovation, and its occupational role structure is perpetually in flux. Science in such societies is trans-social, trans-cultural, explicit, formalized and abstract knowledge.

If you have read my other posts in this blog, you already know what my position about this topic is. While I endorse a third-degree sociologization of science, I follow in this respect authors like Hegel, J-P Vernant and J. Samaja, who emphasize the relation between the social structure of society and the production of knowledge. Gellner, by way of contrast, thinks that theories of historical stages in terms of social organization do not work. The way he makes science depend on productive processes (on the economic features of society) seems more traditionally Marxist (which is ironic, given that he’s usually recognized as an anti-Marxist).

Russell’s The Problems of Philosophy

I’ve just finished reading Bertrand Russell’s “The Problems of Philosophy”. I loved the book even though I disagree with almost everything he has to say.

Here are some modest reflections about it:

  1. This book is a perfect hinge between modernity and the twentieth century. On the one hand, Russell sums up the contributions of modern philosophers like Descartes, Spinoza, Berkeley, Hume and Kant to the theory of knowledge; on the other, he sets out the foundations of the new epistemology that developed in the first half of the Twentieth Century and gave birth to Wittgenstein’s Tractatus, the Vienna Circle, logical positivism and Popperian falsificationism.
  2. I was surprised to learn that Russell, for all his empiricism and positivism, was also a Platonist. That is, he thought that universals such as the principle of induction, mathematical entities (numbers, geometrical figures), the law of causality, etc. are all real. Moreover, Russell claims that they exist not as material entities but as perfect, immutable, immaterial forms. Russell is not willing to follow Plato in his mystic moments, and he’s not willing to embrace mystic readings of Plato’s dialogues. In addition, he doesn’t want to trace a very sharp division between doxa and episteme; as a good British empiricist, Russell sees a continuity between common sense and philosophical-scientific knowledge. He trusts our human instincts and claims that sense-data (the “sensible world”) play a very important role in providing the raw material of our knowledge. But, other than that, he’s a full-fledged Platonist.
  3. Concerning the last point: this is perhaps where we feel most alien to Russell now. After Sellars’ denunciation of the “myth of the given” and Quine’s holistic understanding of knowledge processes as always based on ontological commitments, we are reluctant to accept Russell’s linear view of knowledge acquisition, which starts from value-neutral atomic data and uses them to build more complex and theoretical forms of knowledge, such as knowledge by description (or inference). We now believe that sense-data are already penetrated by theory, and that our knowledge processes are circular (though not necessarily viciously circular).

 

Bertrand Russell on the analogy between truth and justice

The following quote belongs to the penultimate paragraph of Bertrand Russell’s “Problems of Philosophy”:

The impartiality which, in contemplation, is the unalloyed desire for truth, is the very same quality of mind which, in action, is justice, and in emotion is that universal love which can be given to all, and not only to those who are judged useful or admirable. Thus contemplation enlarges not only the objects of our thoughts, but also the objects of our actions and our affections: it makes us citizens of the universe, not only of one walled city at war with all the rest. In this citizenship of the universe consists man’s true freedom, and his liberation from the thraldom of narrow hopes and fears.

This is one more beautiful example of the point I’ve made over and over again, and that you can find, expressed in different ways, in authors as varied as Plato, Immanuel Kant, Georg Hegel, Jean Piaget, Charles Peirce, Jean-Pierre Vernant and many others: that there is a fundamental analogy between truth and justice; and that this analogy does not merely consist in a formal similarity between the two concepts, but stems from a common, deeper source: the struggle for justice in the realm of the practical affairs of mankind has evolved into the search for truth in the theoretical realm.

Dialogue of the deaf


I had a stimulating discussion with a neuroscientist the other day. I tried to explain to her that my interest in children’s cognitive development is linked to my interest in epistemology, that is, to what I refer to in this blog as the normativity of thought.

For example, I argue that researchers who try to explain children’s knowledge of math from a nativist point of view can only explain the starting point of cognitive development. The starting point is innate mathematical knowledge, which is mostly implicit and basically consists in an ability to assess the numerosity of collections of objects found in the outside world. In other words: researchers have shown that animals (humans included) have an innate ability to estimate the size of a collection of perceived objects (for example, they can notice that a collection of 15 pebbles is larger than a collection of 10 pebbles). They can also discriminate among exact quantities, but only when dealing with small sets (two, three, and perhaps four objects). Also, some animals and human babies can perform elementary arithmetic operations on small sets (adding two plus one, subtracting one from two, etc.). I am referring here to studies by Dehaene (2011), Izard, Sann, Spelke, & Streri (2009), Spelke (2011), and many others.
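This ratio-dependent discrimination is standardly modeled through Weber’s law (scalar variability): the noise in a perceived numerosity grows with its magnitude. Here is a minimal toy simulation of that idea, a sketch of my own rather than anything taken from the cited studies; the Weber fraction of 0.15 is an illustrative assumption:

```python
import random

def ans_compare(n1, n2, weber_fraction=0.15, trials=1000):
    """Toy model of approximate-number-system comparison.

    Each perceived numerosity is drawn from a Gaussian whose spread
    grows with the true number (scalar variability, Weber's law).
    Returns the proportion of trials in which n1 is judged larger.
    """
    hits = 0
    for _ in range(trials):
        perceived_1 = random.gauss(n1, weber_fraction * n1)
        perceived_2 = random.gauss(n2, weber_fraction * n2)
        if perceived_1 > perceived_2:
            hits += 1
    return hits / trials

# 15 vs. 10 pebbles (ratio 1.5) is discriminated far more reliably
# than 15 vs. 14 (ratio ~1.07), mirroring the ratio effects reported
# in the animal and infant literature.
easy = ans_compare(15, 10)
hard = ans_compare(15, 14)
```

On this model, success depends on the ratio between the two quantities rather than on their absolute difference, which is one way of seeing why exact quantities can only be tracked for very small sets.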

This basic capacity is certainly different from fully-fledged “human math.” The latter involves, at the very least, the symbolic representation of exact numbers larger than three. We (humans) can represent an exact number by saying its name (“nine”), or by using a gesture that stands for the number in question (depending on the culture, this might be done by touching a part of one’s body, showing a number of fingers, etc. – see Saxe (1991) and also http://en.wikipedia.org/wiki/Chinese_number_gestures). And, of course, we can write down a sign that represents the number (for example, the Arabic numeral “9”).

Scholars agree on the fact that advanced math is explicit and symbolic, and that it builds on (and uses similar brain areas to) its precursor, innate math. Once they operate on the symbolic level, humans can do things like: performing operations (addition, subtraction, multiplication, division, and others), demonstrating mathematical propositions, proving that one particular solution to a mathematical problem is the correct one, etc. To sum up: our symbolic capacities allow us to re-describe our intuitive approach to math on a precise, normative, epistemic level.

Now, here’s where it gets tricky. I argue that the application of algorithms on the symbolic level is not merely mechanical. Humans are not computers applying rules from a rule book, one after the other (like Searle in his Chinese room). Rather, as Dehaene (2011) argues, numbers mean something to us. “Nine” means nine of something (anything). “Nine plus one” means performing the action of adding one more unit to a set of nine units. There is a core of meaning in innate math, and this core is expanded and refined in our more advanced, symbolic math.

When executing mathematical operations (either in a purely mental fashion, or supported by objects) one gets a feeling of satisfaction when one arrives at the right (fair, correct, just) result. Notice the normative language we apply here (fair, correct, right, true, just). We actually experience something similar to a sense of justice when both sides of an equation are equal, or when we arrive at a result that is necessarily correct. (Note to myself: talk to Mariano S. We might perhaps do brain fMRIs and study whether the areas of the brain that are activated by the “sense of justice” in legal situations also light up when the “sense of justice” is reached by finding the right answers in math. If a similar region is activated, that might suggest that there is a normative aspect of math that corresponds to the normative aspect of morality.)

For me, then, the million dollar question is: how do humans go from the implicit, non-symbolic, automatic level to the explicit, symbolic, intentional and normative level? What is involved in this transition? What kind of biological processes, social experiences and individual constructions are necessary to achieve the “higher,” explicit level? (These are interesting questions both for the field of math and for the field of morality). And my hypothesis is that this transition necessarily demands the intervention of a particular type of social experience, namely, the experience of the normative world of social exchanges and rules of ownership (I’ve talked a little about such reckless hypotheses in other posts of this blog).

Now, when I try to explain all this to the neuroscientist, I lose her. She doesn’t follow me. For her, human knowledge is the sum of a) innate knowledge and b) learning from the environment. Learning is the process by which our brain acquires new information from the world, information that was not pre-wired, that didn’t come ready to use “out of the box.” Whether such learning involves direct exposure to stimuli that represent contents (a school teacher teaching math to his or her students) or a more indirect process of exposure to social interactions is not an interesting question for her. It doesn’t change her basic view, according to which there are two things, and two things only: innate knowledge and acquired knowledge. What we know is the result of combining the two. And this is the case both for humans and for other animals. Period.

Something similar happens when I talk to her about the difference between “cold processing” and “hot processing.” We were discussing the research I am conducting right now. I interview children about ownership and stealing. In my interview design, children watch a movie in which one character steals a bar of chocolate from another, and eats it. The interviewer then asks the child a series of questions aimed at understanding his or her reasoning about ownership and theft. Now, the movie presents a third-person situation. This means that the child might be interested in the movie, but he or she is not really affected by it. Children reason about what they see in the movie, and sometimes they seem to say what they think is the appropriate thing to say, echoing adults’ discourse. Because, after all, the movie is fiction, not the real world.

I believe that normativity emerges not from absorbing social information that comes from external events (watching movies, attending to teachers’ explanations) but from children’s real immersion in first-person, real-world, conflictive situations. When a child is fighting with another for the possession of a toy, there are cries and sometimes even physical violence. These encounters end in different ways: sometimes children work out a rule for sharing the scarce resource, sometimes they just fight, and sometimes an adult intervenes and adjudicates the conflict. The child’s reactions during these events are dictated not by cold reasoning but by deeper impulses. It is in these situations that we should look for the emergence of our basic normative categories, such as reciprocity (both social and logical, or “reversibility”), ownership (or the relationship between substance and its “properties”), quantity (used to implement equity and equality), etc.

But, again, my neuroscientist friend does not feel that the distinction between the impulsive, intense, hot reactions we experience when involved in real conflicts and the kind of third-person reasoning that is triggered by movies and artificial stimuli is an important one. In both cases, she argues, it is the same cognitive system that is at work. What we think about third-person characters is probably similar to how we reason about ourselves (thanks to our capacity for empathy, our mirror neurons, etc.).

I don’t know who’s right and who’s wrong here.

 

Dehaene, S. (2011). The Number Sense: How the Mind Creates Mathematics (rev. and updated ed.). New York: Oxford University Press.

Izard, V., Sann, C., Spelke, E. S., & Streri, A. (2009). Newborn infants perceive abstract numbers. Proceedings of the National Academy of Sciences, 106(25), 10382–10385.

Saxe, G. B. (1991). Culture and Cognitive Development: Studies in Mathematical Understanding. Hillsdale: Lawrence Erlbaum Associates.

Spelke, E. S. (2011). Quinian bootstrapping or Fodorian combination? Core and constructed knowledge of number. Behavioral and Brain Sciences, 34(3), 149–150.

 

Alison Gopnik and the mirror of nature

Gopnik (1996) argues that scientific knowledge (as well as children’s theories) stems from a device-powered ability. In her candid account, a child (or a scientist) discovers truths by using a truth-discovering device we are all equipped with. Individuals (children and scientists) have direct access to truths; and truths involve a two-way relationship: they are a mirror-like match between the individual’s representations and the world (as opposed to, for example, being the result of a social, normative, constructive process).

Gopnik acknowledges that epistemology has a normative component, but only in the sense that some epistemologists and philosophers of science prescribe the structure of the ideal scientific inquiry. Indeed, when most scholars describe traditional epistemological schools (logical positivism, falsificationism, etc.) as “normative,” they mean exactly that kind of external, prescriptive attitude. Yet there is another way of understanding the normative side of epistemology (one that Piaget, for example, emphasizes frequently): epistemology is normative, in this second sense, because its object of study (science) is inherently normative; that is, because scientists try to conduct their research according to certain binding rules and, moreover, they try to formulate laws, rules and models that explain not just how the world works, but also why the world must work in that way. Scientists use a deontological language when talking about their research; they believe some theories are bad and others are good; they require that scientific statements be justified; they demand that other people be fair in their evaluation of their theories. Epistemologists, in this second version of “the normative,” do not try to impose prescriptions from the outside, but to reveal what is inherently normative in actual science. Gopnik does not take this inherently normative nature of science into account; instead, she reduces normativity to the traditional epistemologist’s recommendation of certain rules of enquiry to the scientist.

Hand in hand with this neglect of the internal normativity of science, Gopnik sees science as stemming from an individual, internal ability to “find the truth,” that is, as something that “people do” (they eat, they sleep, they have sex, they find the truth). She consequently endorses a naïve realism according to which science “gets it right” and succeeds at “uncovering the truth” (Gopnik, 1996, p. 489), and this because “human beings are endowed by evolution with a wide variety of devices that enable us to arrive at a roughly veridical view of the world” (Gopnik, 1996, p. 487). She claims that human cognition is a system that “gets at the truth about the world” because “it is designed by evolution to get at the truth about the world” (Gopnik, 1996, p. 501). I will not delve into the obvious circularity of such assertions (briefly: to assess whether our cognitive device works well and yields true representations, we use that very device). But I believe that this very way of talking about cognition (“we have a device inside our head that operates with rules and representations and is ready-made to find the truth”) makes it impossible from the start to provide an adequate account of a) the normative and b) the social aspects of cognition, since social norms are in this view necessarily reduced to an external source of information, i.e., to the device’s input. In Gopnik’s words: “They [mental representations and rules] may be deeply influenced by information that comes from other people, but they are not merely conventional and they could function outside of any social community” (Gopnik, 1996, p. 488). Furthermore, when Gopnik talks about the institutions of science or the division of labor in science, she sees social organization simply as a way of being more effective at achieving a certain goal (reaching truths). It’s a merely technical, means-end conception.

What concept of “truth” is Gopnik using when she asserts that the human cognitive system produces truths? She seems to rely on a naïve version of truth as correspondence: our cognitive system is like a mirror of the world; it produces representations that match up to the outside world (Gopnik, 1996, p. 502). Needless to say, this correspondence view of truth has been criticized and demolished over and over again by philosophers and epistemologists of all schools; it is untenable for a number of reasons. The three main ones: 1) knowledge processes do not imitate reality but impose certain abstract, mathematical or relational models onto the world; 2) consequently, our mental representations are not copies of the world; rather, they contain abstract concepts (atom, mind, time, gravity, homeostasis) that radically redescribe the object we are trying to know; and 3) we only say that some things are true within a certain form of life or cultural context that provides the rules for evaluating what is true and what is not.

Gopnik treats truth as a natural fact and as a tangible property of representations, which in turn are treated pretty much as tangible things. Yet the concept of “truth” only exists within certain normative systems; and normative systems only exist in culture, not in nature. Truths are not things; we say that certain propositions or theories are “true” always within the context of complex, relational systems such as science. Animals try to solve concrete problems, but they don’t search for the truth. Human interest in truth cannot derive from a natural device implanted in our brain alone; something else needs to be added to the mix.

Most interesting theories about the social origins of scientific knowledge do not focus on “socially transmitted information” or “social input” but on social structure. Yet Gopnik finds it “hard to see how a particular social structure, by itself, could lead to veridicality” (Gopnik, 1996, p. 491).

It is, in my opinion, much easier to see how social structure could lead to veridicality than how a computer-like device could do so. Social structure creates institutions that formalize adversarial scenarios, so that one party is in charge of attacking a position and the opposing party is in charge of defending it. These institutions enforce rules, in many contexts (from editorial boards to legislatures and courts), that specify what counts as a legitimate argument and as valid proof. Moreover, institutions create authorities that stand above the parties in a dispute and are in charge of adjudicating between them, of saying who is right, “who has the truth”. States succeeded in creating the first institutions that were “impersonal” in the sense that they represented abstract principles or the common good (rather than the interest or point of view of a specific individual); once people got used to thinking in terms of impersonal principles (the Greeks called them archai), they applied this form of thought to nature and started discovering principles and laws in the world around them. I’m collapsing into one paragraph thousands of pages written by very diverse authors (Hegel, Durkheim, Vernant) who recognized that social institutions created something absent from the natural world: truth.

If you accept, at least provisionally, that what is particular about science is not only that it gets things right (its efficacy) but also that it produces legal-like knowledge (legitimate, verifiable knowledge that aims at universal validity), you can start to see what it is that social structure adds to the mix.

Says Gopnik: “An important point of the empirical developmental work, and a common observation about science, is that the search for better theories has a kind of internally-driven motivation, quite separate from the more superficial motivations provided by the sociology. From our point of view, we make theories in search of explanation or make love in search of orgasm” (Gopnik, 1996, p. 498). Her idea is that evolution built our internal device in such a way that we feel thrills of pleasure when finding the truth. Yet I believe that the passion of scientists has more to do with a social feeling, namely justice. They strive for truth with the passion with which a rebel fights for justice. As when an equation works out, the pleasant experience results from the recognition that the result is fair, that the right explanation is given its due value.

Summing up, my argument against Gopnik (1996) proceeds in three steps: 1) she doesn’t recognize the normative dimension of scientific knowledge, so she imagines a scientific-knowledge device that is effective, but not one that produces valid, legitimate knowledge; 2) this non-normative conception of truth (conceived as a match between the mind and the world) leads her to embrace a naïve realism; 3) this narrows, or rather destroys, her theory’s power to account for the social aspects of knowledge. The main flaws in Gopnik’s theory, therefore, derive from her understanding of scientific activity as the result of a mere ability to investigate and find truths rather than as a social, normative practice.

Gopnik, A. (1996). The scientist as child. Philosophy of Science, 63, 485–514. Retrieved from http://www.jstor.org/stable/188064

Alison Gopnik as a child

Shamelessly, Gopnik starts her article “The Scientist as Child” (Gopnik, 1996) by claiming that “recently, cognitive and developmental psychologists have invoked the analogy of science itself” (p. 485). Recently! That analogy is at the core of the Piagetian enterprise. Indeed, Piaget founded the field of cognitive development some 80 years ago by appealing to that very analogy, i.e., by claiming that the fields of epistemology (or philosophy of science) and developmental psychology can illuminate each other because there are functional similarities between the processes of knowledge acquisition in children and in scientists. The insight that the scientific investigation of children’s cognitive development sheds light on the history of science, and vice versa, is 100% Piagetian. Yet Gopnik discusses it as if it were a new idea.

Gopnik knows that Piaget already said this. In other writings she is honest enough to admit that she knows about Piaget’s systematic comparison between children and scientists, although she also claims that she means it in a different way; i.e., she affirms that the relationships she establishes between the fields of child psychology and epistemology are not the same as Piaget’s. Yet in this particular paper (Gopnik, 1996) and in many other places (most notably her lectures to undergraduates, of which I will speak some day) she pretends that it is she and her theory-theory colleagues who coined this famous analogy. In this article, Piaget’s name is not even mentioned.

There are many other ideas that are originally Piagetian and for which the Swiss researcher gets no credit at all. For example: that theory change is a process that goes through different stages: disregard or denial of uncomfortable evidence, compromise solutions, generalized crisis and substitution by a new theory. And, of course, the basic contention that children have theories in a sense comparable to scientists. She also claims: “Theory change proceeds more uniformly and quickly in children than in scientists, and so is considerably easier to observe, and we can even experimentally determine what kinds of evidence lead to change. In children, we may actually be able to see “the logic of discovery” in action” (Gopnik, 1996, p. 509). This is Piaget talking! Yet she presents these ideas as if they were completely her own.

This is not my main criticism of Gopnik’s work, of course. The central problem, in my opinion, is the way she understands science (as the result of a mere ability to investigate and “find truths” rather than as a normative practice). I’ll talk about it in a different post.

Gopnik, A. (1996). The scientist as child. Philosophy of Science, 63, 485–514. Retrieved from http://www.jstor.org/stable/188064

The normativity of human knowledge

I am now reading Prof. Castorina’s lectures on Genetic Epistemology. There he makes the case that human knowledge in general, and scientific knowledge in particular, involves a normative dimension that is often overlooked by naturalistic approaches to knowledge.

Let me explain this topic in my own words. Naturalized Epistemology is right in considering human knowledge a fact of the world. Human beings are real, corporeal, natural entities. Human beings have (are) bodies; they have a physical existence. Any explanation of human knowledge must recognize that humans can know their world only insofar as they are equipped with wet computers (aka brains) that receive information from the world, process it, and respond to the world in a certain manner. There is input, information processing and output. If your computer breaks (in a serious car accident, for example), you might lose your ability to know the world.

Although I am already using a highly metaphorical language here (because the brain is different from a digital computer in many significant ways), I can buy the previous description up to this point. Human knowledge is a natural phenomenon and therefore it can be studied by using the methods of the natural sciences (for example, the neurosciences).

Yet when we look at actual human beings engaged in knowledge-related practices (human beings investigating, thinking, theorizing, teaching, learning and discussing different issues) an important aspect of human knowledge comes to light. Not only do people know about certain things, they also know that what they know is true. For instance, they know that the sentence “dogs are mammals” is true, and they can defend the truth of such a claim through arguments. People can (and frequently do) justify most of their knowledge claims. They offer reasons why things are one way and not another. They argue for specific positions. They follow rules and shared criteria for adjudicating between rival hypotheses. They claim that some assertions are true, and they also claim to know why they are true. In certain cases (two plus two equals four) most human beings would argue that the truth of the claim is universal and necessary. That is, they would say that they know not only that things are a certain way, but also why they must be that way and could not possibly be any other way.

To put it differently: people care not only about the efficacy of their knowledge (whether what they know allows them to adapt effectively to the external reality) but also about the legitimacy of their knowledge. Any observation of actual human beings involved in knowledge-related practices makes this point self-evident. Any observation of naturalistic epistemologists giving talks in conferences or workshops or making arguments to convince others makes this point self-evident. They are not just blind mechanisms sputtering output; they try to be rational, sensible, persuasive.

There is a normative dimension to human knowledge. The problem with the naturalistic approach is that it cannot bridge the gap between the mechanistic-naturalistic level of explanation and the normative phenomena. What humans know is not just the result of some material mechanism (involving the interaction between the world and the wet computer); it is also the result of a complex socio-cultural normative process that must be addressed on a different level. The natural sciences by themselves cannot account for this normative component; norms and institutions must be brought into the picture.

Epistemology, therefore (and this is Castorina’s point) should deal with the fundamental problem of how people and societies give themselves norms. Any relevant epistemology must start by recognizing the normativity of human knowledge.