What do “normativity” and “intentionality” mean?
What “normativity” means has implications for many things in philosophy and science. A distinction has been drawn between “semantic normativity” and “conceptual normativity” (Skorupski, 2007). On the semantic version, “any normative predicate is definitionally reducible to a reason predicate,” and on the conceptual version, “the sole normative ingredient in any normative concept is the concept of a reason” (Skorupski, 2007). Skorupski rejects the semantic version and holds to the conceptual version. The conceptual version holds value, so I will operate on this definition in this article. “Intentionality” is the power of mental states to be “about” things. My mental state right now is directed at writing this article on the normativity of psychological traits; I have a desire to perform this action, making it normative.
Regarding the mind-body problem, the meaning of normativity entails that what is normative is not reducible to (physical) dispositions. Human psychology is intentional, and what is intentional is normative. Intentional actions are done “on purpose”, that is, they’re done “for a reason.” If something is done for a reason, then there is a goal the agent desired to bring about by performing their action. When someone performs an action, we ask “Why?”, and the answer is that they performed the action for a reason. “Why did I go to work?” Because I wanted to make money. “Why did I write down my thoughts?” Because I wanted a written record of what I was thinking at a certain moment in time. So here is how the normativity of intentionality comes into play: if agents perform actions for reasons, and reasons are constituted by an agent’s beliefs, goals, and desires to bring about some end, then what explains why an agent performed an action is their reason TO perform the action.
When one “does something for a reason”, they intend to “do something”; that is, they perform an action “on purpose”, meaning they have a desired outcome that the action they carry out will, the agent hopes, bring about in reality. The best example I can think of is murder. Murder is the intentional killing of an individual. Whatever the reason, the agent who committed the act of murder wanted the person they killed dead. Contrast this with manslaughter, which is “the unlawful killing of a human being without malice.” There are two kinds of manslaughter: voluntary manslaughter, which happens in the heat of the moment (think of a passion killing), and involuntary manslaughter, the unintentional killing of a human being. The distinction between murder and manslaughter thus comes down, basically, to what an agent INTENDS TO DO. One is a murderer if they set out one night to kill an individual, that is, if they plan it out (have a goal to murder); and one commits manslaughter if they did not intend to kill the other individual: say, two people have a fight, one punches the other, and the person who is hit strikes their head on the curb and dies.
Now that I have successfully stated what normativity means, and have distinguished between intentional and unintentional action (murder and manslaughter), I must discuss the distinction between intentions and dispositions.
The normativity of psychological states
The problem of action is how to distinguish between what an agent does for reasons, goals, or desires and what merely happens to them (Paul, 2021). I have argued before that reasons, goals, beliefs, and desires mark what an agent does, as distinguished from antecedent conditions that cause an agent’s movements without being consciously done (what happens to them).
We know what intentions are, but what are dispositions? Behavior is dispositional, so Katz’s considerations have value here:
“a disposition [is] a pattern of behavior exhibited frequently … in the absence of coercion … constituting a habit of mind under some conscious and voluntary control … intentional and oriented to broad goals” (1993b, 16).
There is a wealth of philosophical literature arguing that intentions are irreducible to dispositions (e.g., Kripke, 1980; Bilgrami, 2005, 2006; see also Weber, 2008). Intentional states are, then, irreducible to physical or functional explanations. It follows that intentional states can’t be explained or studied by science. And if intentional states can’t be explained or studied by science, then intentional states are special; indeed, they are unique to agents (minded beings).
In the conclusion to Self-Knowledge and Resentment, Akeel Bilgrami describes his pincer argument using a Fregean extension of Moore’s non-naturalism:
Via a discussion of an imaginary subject wholly lacking agency, it was shown how deeply the very notion of thought or intentionality turns on possessing the point of view of agency, of subjectivity, the point of view of the first, rather than third, person. And it was there shown via an argument owing to a Fregean extension of Moore’s anti-naturalism that such a picture of intentionality required ceasing to see intentional subjects in wholly dispositional terms and, indeed, requires seeing intentional states such as beliefs and desires as themselves normative states or commitments. When so viewed, intentional states are very different from how they appear to a range of philosophers who think of them along normative lines, such as [Donald] Davidson. When so viewed, they are not only irreducible to and non-identical with the physical and causal states of the subjects; they cannot even be clearly assessed to be dependent on such states in the specific ways that philosophers like to capture with such terms as ‘supervenience’. (There are of course all sorts of other dependencies that intentional states have on the states of the central nervous system, which do not amount to anything like the relations that go by the name of ‘supervenience’.) This is because when they are so viewed, they are essentially first-person phenomena, phenomena whose claims to supervenient dependence on third person states such as physical or causal properties are neither stateable nor deniable. (Therefore, not assessable.) (Bilgrami, 2006: 291-292)
Intentionality is a sufficient and necessary condition for mentality according to Brentano. And intentionality and normativity are two of the five “marks of the mental” (Pernu, 2017). Intentionality can even be said to be the aboutness of the mind, its directedness toward things other than itself. If I talk about something or state that I have a desire to do something, this is the aboutness of intentional states. So mental states that are directed at things are said to be intentional states. Intentionality requires goals, beliefs, and desires, so this designates the intentional stance as one of action, which is distinguished from mere behavior. Since the mental is normative (Zangwill, 2005), and since normativity is a problem for physicalism, this is yet another reason to reject physicalism and to accept some kind of dualism.
Goal-directedness is another mark of intentionality. When one acts intentionally, they act in order to bring about a goal they have in mind. Take the example of murder I gave above. Since murder is the intentional killing of a human being, the murderer has the goal in mind of ending the life of the other person. They act in accordance with their desire to bring about the goal they have in mind.
Since psychological states are intentional states, and intentional states are normative (Wedgwood, 2007; Kazemi, 2022), psychological states are normative. And since mental states that have content are normative, we cannot reductively explain the mind. Thus, Yoo’s (2004) discussion of the normativity of intentionality holds value:
Thus, the reason why thought and behavior cannot be explained in terms of non-intentional, physical, vocabulary comes down to a certain “normative element” constitutive of our interpretation and attributions of the propositional attitudes. Clearly this normative element plays a pivotal role. But in spite of its significance, it is highly obscure and insufficiently understood. Indeed, there have been no serious attempts to systematically examine what, exactly, the normative element amounts to.
As Davidson points out, the normative element ultimately has its roots in the object of the interpreter’s inquiry, which is another mind. Unlike black holes and quarks, which do not conform to norms, let alone the norms of rationality, a mind, by its very nature, has to conform to the norms of rationality. Otherwise, we are not dealing with a mind, should no or too few norms of rationality apply. Black holes and quarks certainly conform to laws – nomological principles – that support statements like “Light ought to bend in a black hole,” but such uses of “ought” have no normative implications (see Brandom 1994, ch. 1). The mental states that make up a mind, on the other hand, are such that they bear normative relations among each other, since their very contents are individuated by the norms of rationality (which is clearly stated in the third account). And the observer of a person’s mind must discern in the other’s bodily movements and vocal utterances a rational pattern that is itself a pattern to which the observer (attributor, appraiser) must subscribe. Hence, insofar as the norms of rationality are reflexive – they constrain both the mental states of the interpreted mind as well as the process of interpretation engaged by the interpreter herself – this aspect of the normative fully satisfies the third constraint.
Many arguments conclude that the mental cannot be explained in terms of words that refer only to physical properties, and this is one of them. And since the mental is normative, this is yet another reason why there cannot be, and indeed why there never will be, reductive explanations of the mental in terms of the physical.
The irreducibility of intentionality
If physicalism is true, then intentionality would reduce to, or be identical with, something physical, and we should have an explanation of intentionality in physical terms. However, I would say this is not possible. (See Heikinheimo’s Rule-Following and the Irreducibility of Intentional States.) It’s not possible because physical systems can’t intend; that is, they can’t act intentionally.
The argument is a simple one: Only beings with minds can intend, because mind is what allows a being to think. Since the mind isn’t physical, it follows that a physical system can’t intend to do something, since it wouldn’t have the capacity to think. Take an alarm system. The alarm system does not intend to sound alarms when the system is tripped. It’s merely doing what it was designed to do; it’s not intending to bring about the outcome. The alarm system is a physical thing made up of physical parts. We can liken this to, say, A.I. A.I. is made up of physical parts, individual physical parts are mindless, and no collection of mindless things counts as a mind. Thus, a mind isn’t a collection of physical parts, and so A.I. (a computer, a machine) can’t think. Physical systems are ALWAYS complicated systems of parts, but the mind isn’t. So it seems to follow that nothing physical can ever have a mind.
Physical parts of the natural world lack intentionality; that is, they aren’t “about” anything. It is impossible for an arrangement of physical particles to be “about” anything, meaning no arrangement of intentionality-less parts will ever count as having a mind. So a mind can’t be an arrangement of physical particles, since individual particles are mindless. Since mind is necessary for intentionality, it follows that whatever doesn’t have a mind cannot intend to do anything; this includes nonhuman animals. It is human psychology that is normative, and since the sole normative ingredient in any normative concept is the concept of a reason, and only beings with minds can have reasons to act, human psychology is thus irreducible to anything physical. Indeed, physicalism is incompatible with intentionality (Johns, 2020). The problem of intentionality is therefore yet another kill-shot for physicalism. It is therefore impossible for intentional states (i.e., cognition) to be reduced to, or explained by, physicalist theories or physical things.
This is similar to Lynne Rudder Baker’s (1981) argument in Why Computers Can’t Act (note how in her conclusion she talks about language; the same would therefore hold for nonhuman animals):
P1: In order to be an agent, an entity must be able to formulate intentions.
P2: In order to formulate intentions, an entity must have an irreducible first-person perspective.
P3: Machines lack an irreducible first-person perspective.
C: Therefore, machines are not agents.
So machines cannot engage in intentional behavior of any kind. For example, they cannot tell lies, since lying involves the intent to deceive; they cannot try to avoid mistakes, since trying to avoid mistakes entails intending to conform to some normative rule. They cannot be malevolent, since having no intentions at all, they can hardly have wicked intentions. And, most significantly, computers cannot use language to make assertions, ask questions, or make promises, etc., since speech acts are but a species of intentional action. Thus, we may conclude that a computer can never have a will of its own.
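The validity of Baker’s syllogism can be checked mechanically. Here is a minimal sketch in Lean 4; the predicate names (`Agent`, `FormulatesIntentions`, `FirstPerson`, `Machine`) are my own labels for the notions in P1–P3, not anything from Baker’s text:

```lean
-- Entities and the predicates appearing in Baker's premises.
variable (Entity : Type)
variable (Agent FormulatesIntentions FirstPerson Machine : Entity → Prop)

-- P1: agents can formulate intentions.
-- P2: formulating intentions requires an irreducible first-person perspective.
-- P3: machines lack such a perspective.
-- C : machines are not agents.
theorem baker
    (P1 : ∀ e, Agent e → FormulatesIntentions e)
    (P2 : ∀ e, FormulatesIntentions e → FirstPerson e)
    (P3 : ∀ e, Machine e → ¬ FirstPerson e) :
    ∀ e, Machine e → ¬ Agent e := by
  intro e hMachine hAgent
  -- From P1 and P2, an agent has a first-person perspective,
  -- contradicting P3 for a machine.
  exact P3 e hMachine (P2 e (P1 e hAgent))
```

The proof is just a chain of modus ponens ending in a contradiction, which shows the argument is formally valid; whether it is sound, of course, turns on the truth of P1–P3.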
So PP’s “depression” about ChatGPT “scoring” 11 points on his little (non-construct valid) test is irrelevant. It’s a machine and, as successfully argued, machines will NEVER have the capacity to think/act/intend.
What does this mean for a scientific explanation of human psychology?
The arguments made here point to one conclusion: since intentions don’t reduce to the physical and functional states of humans (like neurophysiology; Rose, 2005), it is impossible for science to explain intentions, since what is normative isn’t reducible to, or identical with, physical properties. This is another arrow in the quiver of the anti-physicalist/dualist to show that there is something more than the physical: there is an irreducible SELF or MIND, and we humans are the only minded beings. Science can’t explain the human mind and, along with it, the intentions that arise from a deliberating mind. This is also an argument against Benjamin Libet’s experiments, in which he concludes that the subjects’ brain activity preceded their actions, that is, that the brain initiates action. This view, however, is false, since the (minded) agent is what initiates action. Libet is therefore guilty of the mereological fallacy. Freely-willed processes are therefore not initiated by the brain (Radder and Meynen, 2012).
Elon Musk and Sam Harris have warned of a “robot rebellion” like what occurred in The Terminator. But since what I’ve argued here is true (purely physical things lack minds; they can’t intend or think), such worries should rightly stay in the realm of sci-fi. The implication is clear: since purely physical things cannot intend, and humans can intend, there is an irreducible SELF or MIND which allows us to intend. The claim, then, that the human brain is a computer is clearly false. It follows that humans aren’t purely physical; there is a mental and a physical aspect to humans. That is, two substances make us up, the mental and the physical, and it is clear that M (the mental) is irreducible to P (the physical). Sentient machines are, luckily, a myth. It’s just not possible for scientists to imbue a machine with a mind, since machines are purely physical and minds aren’t. John Searle’s Chinese Room argument, too, is an argument against strong A.I. Machines will never become conscious, since consciousness isn’t physical.
This is yet another argument against the scientific study of the mind/self and, of course, against psychology and hereditarianism. This is then added to the articles that argue against the overall hereditarian program in psychology, and psychology more broadly: Conceptual Arguments Against Hereditarianism; Reductionism, Natural Selection, and Hereditarianism; and Why a Science of the Mind is Impossible. For the main aspect of IQ test-taking is thinking; thinking is cognition; and cognition is intentional and therefore psychological. Since there can be no explanation of intentional states in physical vocabulary, and since cognition, being a psychological trait, is normative, the conclusion is, again, that hereditarianism and psychology fail at their main goal. It is impossible.