“Missing Heritability” and Missing Children: On the Issues of Heritability and Hereditarian Interpretations
3100 words
“Biological systems are complex, non-linear, and non-additive. Heritability estimates are attempts to impose a simplistic and reified dichotomy (nature/nurture) on non-dichotomous processes.” (Rose, 2006)
“Heritability estimates do not help identify particular genes or ascertain their functions in development or physiology, and thus, by this way of thinking, they yield no causal information.” (Panofsky, 2016: 167)
“What is being reported as ‘genetic’, with high heritability, can be explained by difference-making interactions between real people. In other words, parents and children are sensitive, reactive, living beings, not hollow mechanical or statistical units.” (Richardson, 2022: 52)
Introduction
In the world of behavioral genetics, it is claimed that studies of twins, adoptees and families can point us to the interplay between genetic and environmental influences on complex behavioral traits. To study this, they use a concept called “heritability”—taken from animal breeding—which estimates the degree of variation in a phenotypic trait that is due to genetic variation among individuals in the studied population. But with the advent of molecular genetic analysis after the Human Genome Project, something happened that troubled behavioral genetic researchers: the heritability estimates gleaned from twin, family and adoption studies did not match the estimates gleaned from molecular genetic studies. This is what is termed “missing heritability.” It creates a conundrum—why don’t the estimates from one way of gleaning heritability match those from another? I think it’s because these models rest on a simplistic (and false) picture of biological causation (Burt and Simons, 2015; Lala, 2023). And it raises questions that aren’t dissimilar to those raised when a child disappears.
Imagine a missing child. Imagine the fervor a family and the authorities go through in order to find the child and bring them home. The initial fervor, the relentless pursuit, and the agonizing uncertainty constitute a parallel narrative in behavioral genetics, where behavioral geneticists—like the family of a missing child and the authorities—find themselves grappling with unforeseen troubles. In this discussion, I will argue that the additivity assumption is false, that this kind of thinking is a holdover from the neo-Darwinian Modern Synthesis, that hereditarians have been told for decades that heritability just isn’t useful for what they want to do, and finally that “missing heritability” and missing children are in some ways analogous, but with a key difference: the missing children actually existed, while the “missing heritability” never existed at all.
The additivity assumption
Behavioral geneticists pay lip service to “interactions”, but then conceptualize these interactions as reducible to additive heritability (Richardson, 2017a: 48-49). But the fact of the matter is, genetic interactions create phantom heritability (Zuk et al, 2012). So the additive claim of heritability is simply false.
The additive claim is one of the most important for the utility of the concept of heritability for the behavioral geneticist. The claim that heritability estimates for a trait are additive means that the contribution of each gene variant is independent and that they all sum up to explain the overall heritability (Richardson, 2017a: 44 states that “all genes associated with a trait (including intelligence) are like positive or negative charges”). But in reality, gene variants aren’t independent effects; they interact with other genes, the environment, and other developmental resources. In fact, violations of the additivity assumption are large (Daw, Guo, and Harris, 2015).
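As a rough formal sketch (my notation, not drawn from any of the cited sources), the quantitative-genetic model behind these estimates partitions phenotypic variance into components, and the additivity assumption amounts to treating every non-additive term as zero:

```latex
V_P \;=\; V_A + V_D + V_I + V_E
\;+\; \underbrace{V_{G\times E} + 2\,\mathrm{Cov}(G,E)}_{\text{assumed}\;\approx\;0},
\qquad
h^2 \;=\; \frac{V_A}{V_P}
```

When interaction variance ($V_I$), gene-environment interaction ($V_{G\times E}$), and gene-environment covariance are real but the model forces their contributions into $V_A$, the estimated $h^2$ is inflated; this is precisely Zuk et al’s (2012) “phantom heritability.”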
Gene-gene interactions, gene-environment interactions, and environmental factors can lead to overestimates of heritability, and they are non-additive. So after the completion of the Human Genome Project in the 2000s, these researchers realized that the heritability they identified using molecular genetics did not jibe with the heritability they had computed from twin studies from the 1920s through the late 1990s (and even into the 2020s). The expected additive contributions identified with molecular genetic data fell short of the heritability gleaned from twin studies.
Thinking of heritability as a complex jigsaw puzzle may help explain the issue. The traditional view of heritability assumes that each genetic piece fits neatly into the puzzle to complete the overall genetic picture. But in reality, these pieces may not be additive. They can interact in unexpected ways, which then creates gaps in our understanding, like a missing puzzle piece. So the non-additive effects of gene variants, which include interactions and their complexities, can be likened to missing pieces in the heritability puzzle. The unaccounted-for genetic interactions and nuances then contribute to what is called “missing heritability.” Just as one may search and search for missing puzzle pieces, so too do behavioral geneticists search and search for the “missing heritability.”
So heritability assumes no gene-gene or gene-environment interaction and no gene-environment correlation, among other false or questionable assumptions. But the main issue, I think, is the additivity assumption—it is outright false, and a false assumption cannot accurately represent the intricate ways in which genes and other developmental resources interact to form the phenotype.
If heritability estimates assume that genetic influences on a trait are additive and independent, then heritability estimates oversimplify genetic complexity. If heritability estimates oversimplify genetic complexity, then heritability estimates do not adequately account for gene-environment interactions. If heritability does not account for gene-environment interactions, then heritability fails to capture the complexity of trait inheritance. Thus, if heritability assumes that genetic influences on a trait are additive and independent, then heritability fails to capture the complexity of trait inheritance due to its oversimplified treatment of genetic complexity and omission of gene-environment interactions.
One more issue is that of the “heritability fallacy” (Moore and Shenk, 2016). One commits the heritability fallacy when one assumes that heritability is an index of genetic influence on traits and that heritability can tell us anything about the relative contributions of trait inheritance and ontogeny. Moore and Shenk (2016) then draw a valid conclusion from the falsity of the belief that heritability tells us anything about the “genetic strength” of a trait:
In light of this, numerous theorists have concluded that ‘the term “heritability,” which carries a strong conviction or connotation of something “[in]heritable” in the everyday sense, is no longer suitable for use in human genetics, and its use should be discontinued.’31 Reviewing the evidence, we come to the same conclusion. Continued use of the term with respect to human traits spreads the demonstrably false notion that genes have some direct and isolated influence on traits. Instead, scientists need to help the public understand that all complex traits are a consequence of developmental processes.
“Missing heritability”, missing children
Twin studies traditionally estimate heritability at between 50 and 80 percent for numerous traits (eg, Polderman et al, 2015; see Joseph’s critique). But as alluded to earlier, molecular studies have found heritabilities of 10 percent or lower (eg, Sniekers et al, 2017; Savage et al, 2018; Zabaneh et al, 2018). This discrepancy between heritability estimates derived with different tools is what is termed “missing heritability” (Matthews and Turkheimer, 2022). But the issue is, increasing the sample sizes will merely increase the chance of spurious correlations (Calude and Longo, 2018), which is all these studies show (Richardson, 2017b; Richardson and Jones, 2019).
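The size of this gap can be written down directly; using the ranges just cited, a back-of-the-envelope illustration (my notation) looks like this:

```latex
h^2_{\text{missing}} \;=\; h^2_{\text{twin}} - h^2_{\text{SNP}}
\;\approx\; (0.5\text{--}0.8) - (\leq 0.1)
\;\gtrsim\; 0.4
```

That is, on the behavioral geneticists’ own numbers, the large majority of the twin-study heritability is unaccounted for by the molecular estimates.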
This tells me one important thing—behavioral geneticists have so much faith in the heritability estimates gleaned from twin studies that they assume the heritability is merely “missing” in the newer molecular genetic studies. But if something is “missing”, that implies it can be found. They have so much faith that eventually, as samples get larger and larger in GWAS and similar studies, we will find the heritability that is missing and will eventually be able to identify genetic variants responsible for traits of interest such as IQ. However, I think this is confused, and a simple analogy will show why.
When a child goes missing, it is implied that they will be found by the authorities, whether dead or alive. I can liken this to heritability. The term “missing heritability” comes from the disconnect between heritability estimates gleaned from twin studies and heritability estimates gleaned from molecular genetic studies like GWAS. Since twin studies show X percent heritability (high) and molecular genetic studies show Y percent heritability (low) – a huge difference between estimates from different tools – the implication is that there is “missing heritability” that must be explained by rare variants or other factors.
So just like parents and authorities try so hard to find their missing children, so too do behavioral geneticists try so hard to find their “missing heritability.” As families endure anguish trying to find their children, this is mirrored in the efforts of behavioral geneticists to close the gap between two different kinds of tools for gleaning heritability.
But there is an important disanalogy at play here—missing children actually exist, while “missing heritability” doesn’t, and that’s why we haven’t found it. Although some parents, sadly, may never find their missing children, behavioral geneticists will never find their own “children” (their missing heritability) because it simply does not exist.
Spurious correlations
Even increasing the sample sizes won’t do anything, since the larger the sample size, the bigger the chance of spurious correlations, and spurious correlations are all that GWAS studies of IQ show (Richardson and Jones, 2019); indeed, correlations in GWAS are inevitable and meaningless (Richardson, 2017b). Denis Noble (2018) puts this well:
As with the results of GWAS (genome-wide association studies) generally, the associations at the genome sequence level are remarkably weak and, with the exception of certain rare genetic diseases, may even be meaningless (13, 21). The reason is that if you gather a sufficiently large data set, it is a mathematical necessity that you will find correlations, even if the data set was generated randomly so that the correlations must be spurious. The bigger the data set, the more spurious correlations will be found (3). The current rush to gather sequence data from ever larger cohorts therefore runs the risk that it may simply prove a mathematical necessity rather than finding causal correlations. It cannot be emphasized enough that finding correlations does not prove causality. Investigating causation is the role of physiology.
Nor does finding higher overall correlations by summing correlations with larger numbers of genes showing individually tiny correlations solve the problem, even when the correlations are not spurious, since we have no way to find the drugs that can target so many gene products with the correct profile of action.
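Noble’s point that sufficiently large datasets mathematically guarantee spurious hits can be demonstrated with a toy simulation (a sketch of my own, not from any cited study): correlate a purely random “phenotype” against purely random “variants” and count the nominally significant associations.

```python
import numpy as np

def spurious_hits(n_people: int, n_snps: int, seed: int = 0) -> int:
    """Count purely random 'SNPs' whose Pearson correlation with a
    purely random phenotype exceeds the nominal p < .05 threshold."""
    rng = np.random.default_rng(seed)
    y = rng.standard_normal(n_people)
    snps = rng.standard_normal((n_snps, n_people))
    # Approximate two-sided p < .05 critical value for r at this sample size
    r_crit = 1.96 / np.sqrt(n_people)
    # Standardize, then compute all n_snps correlations in one matrix product
    y_z = (y - y.mean()) / y.std()
    s_z = (snps - snps.mean(axis=1, keepdims=True)) / snps.std(axis=1, keepdims=True)
    r = s_z @ y_z / n_people
    return int(np.count_nonzero(np.abs(r) > r_crit))

# Roughly 5% of variants "associate" with pure noise at p < .05, so the
# number of spurious hits scales linearly with the number of variants tested.
for m in (1_000, 5_000, 20_000):
    print(m, spurious_hits(500, m))
```

Every hit here is spurious by construction, yet the count grows with dataset size exactly as Calude and Longo (2018) predict, which is why “more hits at larger n” is not, by itself, evidence of causal genes.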
The Darwinian model
But the claim that there is a straight line from G (genes) to P (phenotype) is a mere holdover from the neo-Darwinian Modern Synthesis. The fact of the matter is, “HBD” and hereditarianism are based on reductionistic models of genes and how they work. But genes don’t work the way hereditarians think they do; reality is much more complex than they assume. Feldman and Ramachandran (2018) ask “Missing compared to what?”, effectively challenging the “missing heritability” claim. As they ask, would Herrnstein and Murray have written The Bell Curve if they believed that the heritability of IQ were 0.30? I don’t think they would have. In any case, the belief that the heritability of IQ lies between 0.4 and 0.8 shows the genetic determinist assumptions inherent in this type of “HBD” thinking.
Amusingly, as Ned Block (1995) noted, Murray said in an interview that “60 percent of the intelligence comes from heredity” and that this is “not 60 percent of the variation. It is 60 percent of the IQ in any given person.” Such a major blunder from one of the “intellectual spearheads” of the “HBD race realist” movement…
Behavioral geneticists claim that the heritability is missing only because sample sizes are low, and that as sample sizes increase, the missing heritability based on associated genes will be found. But this doesn’t follow at all, since increasing sample sizes will just increase the number of spurious hits of genes correlated with the trait in question, which says absolutely nothing about causation. Moreover, only a developmental perspective can provide us with mechanistic knowledge; the so-called heritability of a phenotype cannot give us such information because heritability isn’t a mechanistic variable and doesn’t show causation.
Importantly, a developmental perspective provides mechanistic knowledge that can yield practical treatments for pathologies. In contrast, information about the “heritability” of a phenotype—the kind of information generated by twin studies—can never be as useful as information about the development of a phenotype, because only developmental information produces the kind of thorough understanding of a trait’s emergence that can allow for successful interventions. (Moore 2015: 286)
The Darwinian model and its assumptions are inherent in thinking about heritability and genetic causation as a whole, and they are antithetical to developmental, EES-type thinking. Since hereditarianism and HBD-type thinking are neo-Darwinist, these assumptions are built into their beliefs and arguments.
Conclusion
Assumptions of heritability simply do not hold. Heritability, quite simply, isn’t a characteristic of traits but a characteristic of “relationships in a population observed in a particular setting” (Oyama, 1985/2000). Heritability estimates tell us absolutely nothing about development or its causes. Heritability is a mere breeding statistic and tells us nothing at all about whether genes are “causal” for a trait in question (Robette, Genin, and Clerget-Darpoux, 2022). It is key to understand that heritability, along with so-called “missing heritability”, is based on reductive models of genetics that just do not hold, especially given the newer knowledge that we have from systems biology (eg, Noble, 2012).
The assumption that heritability estimates tell us anything useful about genetics, traits, and causes, along with a reductive belief in genetic causation for the ontogeny of traits, has wasted millions of dollars. We now need to grapple with the fact that heritability doesn’t tell us anything about genetic causes of traits; genes are necessary, not sufficient, causes of traits, because no genes (along with other developmental resources) means no organism. Turkheimer’s (2000) so-called “laws of behavioral genetics” also come from twin, family and adoption studies. Further, the falsity of the EEA (equal environments assumption) is paramount here; since the EEA is false, genetic conclusions from such studies are invalid (Joseph et al, 2015). There is also the fact that heritability is based on a false biological model: it rests on a “conceptual model [that] is unsound and the goal of heritability studies is biologically nonsensical given what we now know about the way genes work” (Burt and Simons, 2015: 107). What Richardson (2022) terms “the agricultural model of heritability” is known to be false. In fact, the heritability of “IQ” is reported to be higher than any heritability found in the animal kingdom (Schonemann, 1997). Why this doesn’t give any researcher pause is beyond me.
Nonetheless, the Darwinian assumptions inherent in behavioral genetic, HBD “race realist” thinking are false. And the fact of the matter is, increasing the sample size of molecular genetic studies will only increase the chances of spurious correlations and of picking up population stratification. So it seems that using heritability to show genetic and environmental causes is a bust, and has been ever since Jensen revived the race and IQ debate in 1969; the responses Jensen’s argument received made the 1970s a decade in which numerous arguments were levied against the concept of heritability (eg, Layzer, 1974).
It has also been pointed out to racial hereditarians for literally decades that heritability is a flawed metric (Layzer, 1974; Taylor, 1980; Bailey, 1997; Schonemann, 1997; Guo, 2000; Moore, 2002; Rose, 2006; Schneider, 2007; Charney, 2012, 2013; Burt and Simons, 2015; Panofsky, 2014; Joseph et al, 2015; Moore and Shenk, 2016; Panofsky, 2016; Richardson, 2017; Lerner, 2018). These issues—among many more—lead Lerner to conclude:
However, the theory and research discussed across this chapter and previous ones afford the conclusion that no psychological attribute is pre-organized in the genes and unavailable to environmental influence. That is, any alleged genetic difference (or “inferiority”) of African Americans based on the high heritability of intelligence would seem to be an attribution built on a misunderstanding of concepts basic to an appropriate conceptualization of the nature–nurture controversy. An appreciation of the coaction of genes and context—of genes↔context relations—within the relational developmental system, and of the meaning, implications, and limitations of the heritability concept, should lead to the conclusion that the genetic-differences hypothesis of racial differences in IQ makes no scientific sense. (Lerner, 2018: 636)
That heritability doesn’t address mechanisms and ignores how genetic factors actually interact, along with being inherently reductionist, means that there is little to no utility for heritability in humans. Heritability estimates attempt to reduce the complex, non-additive, non-linear aspects of biological systems to their component parts (Rose, 2006), making heritability, again, inherently reductionist. We have to attempt to analyze causes, not variances (Lewontin, 1974), which heritability cannot do. So it’s very obvious that the hereditarian programme revived by Jensen (1969)—and based on twin studies first undertaken in the 1920s—is based on a seriously flawed model of genes and how they work. But, of course, hereditarians have an ideological agenda to uphold, which is why they continue to pursue “heritability” in order to “prove” that racial differences in many socio-behavioral traits—IQ included—are “in part” due to genes. This type of argumentation quite clearly fails.
The fact of the matter is, “there are very good reasons to believe gene variations are at best irrelevant to common disorders and at worst a distraction from the social and political roots of major public health problems generally and of their unequal distribution in particular” (Chaufan and Joseph, 2013: 284). (Also see Joseph’s (2015) The Trouble with Twin Studies for more argumentation against the use of heritability and its inflation due to false assumptions, along with arguments against “missing heritability.”) In fact, claims of “missing heritability” rest on “genetic determinist beliefs, a reliance on twin research, the use of heritability estimates, and the failure to seriously consider the possibility that presumed genes do not exist” (Joseph, 2012). Although it has been claimed that so-called rare variants explain the “missing heritability” (Genin, 2020), this is nothing but cope. So the heritability was never missing; it never existed at all.
The Multilingual Encyclopedia: On the Context-Dependency of Human Knowledge and Intelligence
3250 words
Introduction
Language is the road map of a culture. It tells you where its people come from and where they are going. – Rita May Brown
Communication bridges gaps. The words we use, the languages we speak, and the knowledge we share weave together human culture and intelligence. So imagine a multilingual encyclopedia that encompasses the whole of human knowledge, a book of human understanding spanning the sciences, the arts, history and philosophy. This encyclopedia is a testament to the universal nature of human knowledge, but it also shows the interplay between culture, language, knowledge and human intelligence.
In my most recent article, I argued that human intelligence is shaped by interactions in cultural and social contexts. Here I will argue that there are necessary aspects of knowledge; that knowledge is context-dependent; that language, culture and knowledge interact with specific contexts to form intelligence, mind and rationality; and that my multilingual encyclopedia analogy shows that while there is what may be termed “universal core knowledge”, it becomes context-dependent based on the needs of different cultures. I will also use this example to argue, again, against IQ. Finally, I will conclude that the arguments in this article and the previous one show that the mind is socially formed on the basis of the necessary physical substrates, but that socio-cultural contexts are what is necessary for human intelligence, mindedness, and rationality.
Necessary aspects of knowledge
There are two necessary and fundamental aspects of knowledge and thought: the brain and cognition. The brain is a necessary pre-condition for human mindedness, and cognition is influenced by culture, although my framework posits that cognitive processes play a necessary role in thought, just as the brain provides the necessary physical substrate for those processes. While cognition and knowledge are intertwined, they’re not synonymous. To cognize is to actively think about something; it is an action. There is a minimal structure, and it’s accounted for by cognition: pattern recognition, categorization, sequential processing, sensory integration, associative memory and selective attention. These processes are necessary; they are inherent in “cognition” and they set the stage for more complex mental abilities, which is what Vygotsky was getting at with his theory of the social formation of mind.
Individuals do interpret their experiences through a cultural lens, since culture provides the framework for understanding, categorizing, and making sense of experiences. But I also recognize the role of individual experiences and personal interpretations. So while cultural lenses may shape initial perceptions, people can also think critically and reflect on their interpretations over time due to the differing experiences they have.
Fundamental, necessary aspects of knowledge like sensory perception are also pivotal. By “fundamental”, I mean “necessary”—that is, we couldn’t think or cognize without the brain, and it therefore follows that we couldn’t think without cognition. These things are necessary for thinking, language, culture and eventually intelligence, but what is sufficient for mind, thinking, language and rationality are the specific socio-cultural interactions and knowledge formulations that we get by being engrossed in linguistically-mediated cultural environments.
The context-dependence of knowledge
“Context-dependent knowledge” refers to information or understanding that can take on different meanings or interpretations based on the specific context in which it is applied or used. But I also mean something else by this: that an individual’s performance on IQ tests is influenced by their exposure to specific cultural, linguistic, and contextual factors. This means that IQ tests aren’t culture-neutral or universally applicable; rather, they are biased toward people who share similar class-cultural backgrounds and experiences.
There is something about humans that allows us to be receptive to cultural and social contexts in forming mind, language, rationality and intelligence (and I would say that something is the immaterial self). But I wouldn’t call it “innate.” So-called “innate” traits need certain environmental contexts in order to manifest themselves; they are experience-dependent (Blumberg, 2018).
So while humans actively adapt, shape, and create cultural knowledge through cultural processes, knowledge acquisition isn’t solely mediated by culture. Individual experiences matter, as do interactions with the environment along with the accumulation of knowledge from various cultural contexts. So human cognitive capacity isn’t entirely a product of culture, and human cognition allows for critical thinking, creative problem solving, along with the ability to adapt cultural knowledge.
Finally, knowledge acquisition is cumulative—and by this, I mean it is qualitatively cumulative. As individuals acquire knowledge from their cultural contexts, individual experiences, and so on, this knowledge becomes internalized in their cognitive framework. They can then build on this existing knowledge to further adapt and shape culture.
The statement “knowledge is context-dependent” is a description of the nature of knowledge itself. It means that knowledge can take on different meanings or interpretations in different contexts. So when I say “knowledge is context-dependent”, I am acknowledging that this applies in all contexts; I’m discussing the contextual nature of knowledge itself.
Examples of the context-dependence of universal knowledge include how English-speakers use the “+” sign for addition, while the Chinese use “加” (“jiā”). So while the fundamental principle is the same, these two cultures have different symbols and notations to signify the operation. Furthermore, there are differences in thinking between Eastern and Western cultures: thinking is more analytic in Western cultures and more holistic in Eastern cultures (Yates and de Oliveira, 2016; also refer to their paper for more differences between cultures in decision-making processes). There are also differences between cultures in visual attention (Jurkat et al, 2016). While this isn’t “knowledge” per se, it does attest to how cultures differ in their perceptions and cognitive processes, which underscores the broader idea that cognition, including visual attention, is influenced by cultural contexts and social situations. Even the brain’s neural activity (its physiology) is context-dependent—thus even brain function is shaped by cultural context (Northoff, 2013).
But when it comes to culture, how does language affect its meaning, and along with it intelligence and its development?
Language, culture, knowledge, and intelligence
Language plays a pivotal role in shaping the meaning of culture, and by extension, intelligence and its development. Language is not only a way to communicate, but is also a psychological tool that molds how we think, perceive and relate to the world around us. Therefore, it serves as the bridge between individual cognition and shared cultural knowledge, while acting as the interface through which cultural values and norms are conveyed and internalized.
So language allows us to encode and decode cultural information, which is how culture is transmitted across generations. Language provides the framework for expressing complex thoughts, concepts, and emotions, which enables us to discuss and negotiate the cultural norms that define our societies. Different languages offer unique structures for expressing ideas, which can then influence how people perceive and make sense of their cultural surroundings. And important for this understanding is the fact that a human can’t have a thought unless they have language (Davidson, 1982).
Language is also intimately linked with cognitive development. Under Vygotsky’s socio-historical theory of learning and development, language is a necessary cognitive tool for thought and the development of higher mental functions. So language not only reflects our cognitive abilities, it also plays an active role in their formation. Thus, through social interactions and linguistic exchanges, individuals engage in a dynamic process of cultural development, building on the foundation of their native language and culture.
Feral children and deaf linguistic isolates show that there is a critical window in which language can be acquired, and thus the importance of human culture in human development (Vyshedskiy, Mahapatra, and Dunn, 2017). Cases of feral children, then, show us how children would develop without human culture, and they demonstrate the importance of early language hearing and use for normal brain development. This shows that social isolation has negative effects on children, and since human culture is inherently social, it shows the importance of human culture and society in forming and nurturing mind, intelligence, rationality and knowledge.
So the relationship between language, culture and intelligence is intricate and reciprocal. Language allows us to express ourselves and our cultural knowledge while shaping our cognitive processes and influencing how we acquire and express our intelligence. Conversely, intelligence—as shaped by cultural contexts—contributes to the diversification of language and culture. This interplay underscores how language impacts our understanding of intelligence within its cultural framework.
Furthermore, in my framework, intelligence isn’t a static, universally-measurable trait; it is a dynamic, constantly-developing trait shaped by social and cultural interactions along with individuals’ experiences, and so intentionality is inherent in it. Moreover, in the context of acquiring cultural knowledge, Vygotsky’s ZPD (zone of proximal development) concept shows that individuals can learn and internalize things outside of their current toolkit as guided by more knowledgeable others (MKOs). It also shows that learning and development occur mostly in this zone between what someone can do alone and what someone can do with help, which then allows them to expand their cognitive abilities and cultural understanding.
Cultural and social exposure
Cultural and social exposure are critical to my conception of intelligence. As we can see in cases of feral children, there is a clear developmental window of opportunity for gaining language and learning to think and act like a human through the individual’s interaction with human culture. The base cognitive capacities that we are born with and develop through infancy, toddlerhood, childhood and adulthood aren’t just inert, passive things that merely receive information through vision, after which we gain minds and intelligence and become human. Critically, they need to be nurtured through culture and socialization. The infant needs the requisite experiences to learn how to roll over, crawl, and finally walk. They need to be exposed to different things in order to be properly enculturated into the culture they were born into. So while we are born into cultural and linguistically-mediated environments, it’s these environments—along with what individuals do themselves when they finally learn to walk, talk, and gain their mind, intelligence and rationality—that shape individual humans, the knowledge they gain and ultimately their intelligence.
If humans possess foundational cognitive capacities that aren’t entirely culturally determined or influenced, and culture serves as a mediator in shaping how these capacities are expressed and applied, then it follows that culture influences cognitive development while cognitive abilities provide the foundation for being able to learn at all, as well as being able to speak and to internalize the culture and language they are exposed to. So if culture interacts dynamically with cognitive capacities, and crucial periods exist during which cultural learning is particularly influential (cases of feral children), then it follows that early cultural exposure and socialization are critical. So it follows that my framework acknowledges both cognitive capacities and cultural influences in shaping human cognition and intelligence.
In his book Vygotsky and the Social Formation of Mind, Wertsch (1985) noted that Vygotsky didn't discount the role of biology (as in development in the womb), but held that, after a certain point, biology can no longer be viewed as the sole or even primary force of change for the individual, and that the explanation necessarily shifts to a sociocultural one:
However, [Vygotsky] argued that beyond a certain point in development, biological forces can no longer be viewed as the sole, or even the primary, force of change. At this point there is a fundamental reorganization of the forces of development and a need for a corresponding reorganization in the system of explanatory principles. Specifically, in Vygotsky’s view the burden of explanation shifts from biological to social factors. The latter operate within a given biological framework and must be compatible with it, but they cannot be reduced to it. That is, biological factors are still given a role in this new system, but they lose their role as the primary force of change. Vygotsky contrasted embryological and psychological development on this basis:
The embryological development of the child … in no way can be considered on the same level as the postnatal development of the child as a social being. Embryological development is a completely unique type of development subordinated to other laws than is the development of the child’s personality, which begins at birth. Embryological development is studied by an independent science—embryology, which cannot be considered one of the chapters of psychology … Psychology does not study heredity or prenatal development as such, but only the role and influence of heredity and prenatal development of the child in the process of social development. ([Vygotsky] 1972, p. 123)
The multilingual encyclopedia
Imagine a multilingual encyclopedia that encompasses knowledge of multiple disciplines from the sciences to the humanities to religion. This encyclopedia has what I term universal core knowledge. This encyclopedia is maintained by experts from around the world and is available in many languages. So although the information in the encyclopedia is written in different languages and upheld by people from different cultures, fundamental scientific discoveries, historical events and mathematical theorems remain constant across all versions of the encyclopedia. So this knowledge is context-independent because it holds true no matter the language it’s written in or the cultural context it is presented in. But the encyclopedia’s entries are designed to be used in specific contexts. The same scientific principles can be applied in labs across the world, but the specific experiments, equipment and cultural practices could vary. Moreover, historical events could be studied differently in different parts of the world, but the events themselves are context-independent.
So this thought experiment challenges the claim that context-independent knowledge requires an assertion of absolute knowledge. Context-independent knowledge exists in the encyclopedia, but it isn’t absolute. It’s merely a collection of universally-accepted facts, principles and theories that are applied in different contexts taking into account linguistic and cultural differences. Thus the knowledge in the encyclopedia is context-independent in that it remains the same across the world, across languages and cultures, but it is used in specific contexts.
Now, likening this to IQ tests is simple. When I say that "all IQ tests are culture-bound, and this means that they're class-specific," this is a specific claim. What this means, in my view, is that people grow up in different class-cultural environments, and so they are exposed to different knowledge bases and kinds of knowledge. Since they are exposed to different knowledge bases and kinds of knowledge, when test time comes, those who haven't been exposed to the knowledge bases and kinds of knowledge on the test necessarily won't score as high as someone who was immersed in them. Cole's (2002) argument that all tests are culture-bound is true. Thus IQ tests aren't culture-neutral; they are all culture-bound, and culture-neutral tests are an impossibility. This further buttresses my argument that intelligence is shaped by the social and cultural environment, underscoring the idea that the specific knowledge bases and cognitive resources that individuals are exposed to within their unique socio-cultural contexts play a pivotal role in the expression and development of their cognitive abilities.
IQ tests are mere cultural artifacts. So IQ tests, like the entries in the multilingual encyclopedia, are not immune to cultural biases. So although the multilingual encyclopedia has universal core knowledge, the way that the information is presented in the encyclopedia, like explanations and illustrations, would be culturally influenced by the authors/editors of the encyclopedia. Remember—this encyclopedia is an encyclopedia of the whole of human knowledge written in different languages, seen through different cultural lenses. So different cultures could have ways of explaining the universal core knowledge or illustrating the concepts that are derived from them.
So IQ tests, just like the entries in the encyclopedia, are only usable in certain contexts. But while an encyclopedia entry could be usable in more than one context, there is a difference with IQ testing. The tests are created by people from a narrow social class, and so the items on them are class-specific. This then results in cultural biases, because people from different classes and cultures are exposed to varying knowledge bases, so people will be differentially prepared for test-taking on this basis alone. So the knowledge that people are exposed to based on their class membership, or on membership in different cultures within America or in an immigrant culture, would influence test scores. So while there is universal core knowledge, and some of this knowledge may be on IQ tests, the fact is that different classes and cultures are exposed to different knowledge bases, and that's why they score differently—the specific language and numerical skills on IQ tests are class-specific (Brito, 2017). I have noted for years how culturally-dependent IQ tests are, and this interpretation is reinforced when we consider knowledge and its varying interpretations in the multilingual encyclopedia, which highlights the intricate relationship between culture, language, and IQ. This then serves to show that IQ tests are mere knowledge tests—class-specific knowledge tests (Richardson, 2002).
So my thought experiment shows that while there are fundamental scientific discoveries, historical events and mathematical theorems that remain constant throughout the world and across different languages and cultures, the encyclopedia’s entries are designed to be used in specific contexts. So the multilingual encyclopedia thought experiment supports my claim that even when knowledge is context-independent (like that of scientific discoveries, historical facts), it can become context-dependent when it is used and applied within specific cultural and linguistic contexts. This, then, aligns with the part of my argument that knowledge is not entirely divorced from social, cultural and contextual influences.
Conclusion
The limitations of IQ tests become evident when we consider how individuals produce and acquire knowledge and the cultural and linguistic diversity and contexts that define our social worlds. The analogy of the multilingual encyclopedia shows that while certain core principles remain constant, the way that we perceive and apply knowledge is deeply entwined within the cultural and social contexts in which we exist. This dynamic relationship between culture, language, knowledge and intelligence, then, underscores the need to recognize the social formation of mind and intelligence.
Ultimately, human socio-cultural interactions, language, and the knowledge we accumulate together mold our understanding of intelligence and how we acquire it. The understanding that intelligence arises through these multifaceted exchanges and interactions within a social and cultural framework points to a more comprehensive perspective. So by acknowledging the vital role of culture and language in the formation of human intelligence, we not only deconstruct the limitations of IQ tests, but we also lay the foundation for a more encompassing way of thinking about what it truly means to be intelligent, and how it is shaped and nurtured by our social lives in our unique cultural contexts and the experiences that we have.
Thus, to truly grasp the essence of human intelligence, we don't need IQ tests, and we certainly don't need claims that genes cause IQ or psychological traits and thereby make certain people or groups more intelligent than others; we have to embrace the fact that human intelligence thrives within the web of social and cultural influences and interactions, which collectively form what we understand as the social formation of mind.
Intelligence without IQ: Towards a Non-IQist Definition of Intelligence
3000 words
Introduction
In the disciplines of psychology and psychometrics, intelligence has long been an object of study, with researchers attempting to reduce it to the number that a class-biased test spits out when an individual takes it. But what if intelligence resists quantification, and we can't claim that IQ tests put a number to one's intelligence? The view I will present here conceptualizes intelligence as a psychological trait, and since it's a psychological trait, it's resistant to being reduced to anything physical and resistant to quantification. I will draw on Vygotsky's socio-cultural theory of learning and development, with its emphasis on the role of culture, social interactions, and cultural tools in shaping intelligence, and I will explain how Vygotsky's theory supports the notion that intelligence is socially and contextually situated. I will then draw on Ken Richardson's view that intelligence is a socially dynamic, irreducible trait created by sociocultural tools.
All in all, the definition that I will propose here is independent of IQ. Although I conceptualize psychological traits as irreducible, it is obvious that IQ tests are class-specific knowledge tests—that is, they are biased against certain classes, and so it follows that they are biased for certain classes. The view that I will articulate here suggests that intelligence is a complex and multifaceted construct that is deeply influenced by cultural and social factors and that resists quantification because intentionality is inherent in it. And I don't need to posit a specified measured object, object of measurement, and measurement unit for my conception, because I'm not claiming measurability.
Vygotsky’s view
Vygotsky is most well-known for his concepts of private speech, more knowledgeable others, and the zone of proximal development (ZPD). Intelligence involves the internalization of private speech, where individuals engage in a self-directed dialogue to solve problems and guide their actions. This internalized private speech then represents an essential aspect of one’s cognitive development, and reflects an individual’s ability to think and reason independently.
Intelligence is then nurtured through interactions with more knowledgeable others (MKOs). MKOs are individuals who possess a deeper understanding of, or expertise in, specific domains. They provide guidance, support, and scaffolding, helping individuals to reach higher levels of cognitive functioning and problem solving.
Along with MKOs, the ZPD is a crucial aspect in understanding intelligence. It represents the range of tasks that individuals can't perform independently but can achieve with guidance and support—it is the "zone" where learning and cognitive development take place. So intelligence isn't only about what one can do alone, but also what one can achieve with the assistance of an MKO. Thus, in this context, intelligence is seen as a dynamic process of development in which individuals continuously expand their ZPD through sociocultural interactions. So MKOs play a pivotal role in facilitating learning and cognitive development by providing the necessary help to individuals within their ZPD. The ZPD concept underscores the idea that learning is most effective when it takes place in this zone, where the learner is neither too challenged nor too comfortable, but is guided by an MKO to reach higher levels of competence in what they're learning.
So the takeaway from this discussion is this: Intelligence isn't merely a product of individual cognitive abilities; it is deeply influenced by cultural and social interactions. It encompasses the capacity for private speech, which demonstrates an individual's capacity to think and reason independently. It also involves learning and development as facilitated by MKOs, who contribute to an individual's cognitive growth. And the ZPD underscores the importance of sociocultural guidance in shaping and expanding an individual's intelligence, while reflecting the dynamic and collaborative nature of cognitive development within the sociocultural context. So intelligence, as understood here, is inseparable from Vygotsky's concepts of private speech, more knowledgeable others, and the ZPD, and it highlights the dynamic interplay between individual cognitive processes and sociocultural interactions in the development of intelligence.
Davidson (1982) stated that "Neither an infant one week old nor a snail is a rational creature. If the infant survives long enough, he will probably become rational, while this is not true of the snail." And on Vygotsky's theory, the infant becomes rational—that is, intelligent—by interacting with MKOs and internalizing private speech when they learn to talk and think in cultural contexts within their ZPD. Infants quite clearly have the capacity to become rational, and they begin to become rational through interactions with MKOs and caregivers who guide their cognitive growth within their ZPD. This perspective, then, highlights the role of social and cultural influences in the development of infants' intelligence and their becoming rational creatures. Children are born into both cultural and linguistically-mediated environments, which is put well by Vasileva and Balyasnikova (2019):
Based on the conceptualization of cultural tools by Vygotsky (contrary to more traditional socio-cultural schools), it follows that a child can be enculturated from birth. Children are not only born in a human-created environment, but in a linguistically mediated environment that becomes internalized through development.
Richardson’s view
Ken Richardson has been a critic of IQ testing since the 1970s, when he co-edited the volume Race and Intelligence: The Fallacies Behind the Race-IQ Controversy. He has published numerous books critiquing the concept of IQ, most recently Understanding Intelligence (Richardson, 2022). (In fact, Richardson's book was what cured me of my IQ-ist delusions and set me on the path to DST.) Richardson (2017: 273) writes:
Again, these dynamics would not be possible without the co-evolution of interdependencies across levels: between social, cognitive, and affective interactions on the one hand and physiological and epigenetic processes on the other. As already mentioned, the burgeoning research areas of social neuroscience and social epigenetics are revealing ways in which social/cultural experiences ripple through, and recruit, those processes.
For example, different cognitive states can have different physiological, epigenetic, and immune-system consequences, depending on social context. Importantly, a distinction has been made between a eudaimonic sense of well-being, based on social meaning and involvement, and hedonic well-being, based on individual pleasure or pain. These different states are associated with different epigenetic processes, as seen in the recruitment of different transcription factors (and therefore genes) and even immune system responses. All this is part of the human intelligence system.
In that way human evolution became human history. Collaboration among brains and the emergent social cognition provided the conceptual breakout from individual limits. It resulted in the rapid progress seen in human history from original hunter-gatherers to the modern, global, technological society—all on the basis of the same biological system with the same genes.
So intelligence emerges from the specific activities, experiences, and resources that individuals encounter throughout their development. Richardson’s view, too, is a Vygotskian one. And like Vygotsky, he emphasizes the significant cultural and social aspects in shaping human intelligence. He rejects the claim that human intelligence is reducible to a number (on IQ tests), genes, brain physiology etc.
Human intelligence cannot be divorced from the sociocultural context in which it is embedded and operates. So in this view, intelligence is not "fixed," as the genetic-reductionist IQ-ists would have you believe; instead it can evolve and adapt over time in response to learning, the environment, and experiences. Indeed, this is the basis for his argument about the intelligent developmental system. Richardson (2012) even argues that "IQ scores might be more an index of individuals' distance from the cultural tools making up the test than performance on a singular strength variable." And given what we know about the inherent bias in IQ test items (how they're basically middle-class cultural knowledge tests), it seems that Richardson is right here. Richardson (1991; cf. 2001) even showed that when Raven's Progressive Matrices items were couched in familiar contexts, children were able to complete them, even though the rebuilt items followed the exact same rules as the abstract Raven's items. Couching items in a familiar cultural context while holding the rules constant thus shows that cultural context matters for performance on these kinds of items.
Returning to the concept of cultural tools that Richardson brought up in the previous quote (a concept derived from Vygotsky's theory): cultural tools encompass language, knowledge, and problem-solving abilities which are culturally specific and influenced by that culture. These tools are embedded in IQ tests, influencing the problems presented and the types of questions asked. Thus, it follows that if one is exposed to different psychological and cultural tools (basically, to knowledge bases different from those on the test), then one will score lower on the test compared to another person who is exposed to the item content and structure of the test. So individuals who are more familiar with the cultural references, language patterns, and knowledge will score better than those who aren't. Of course, there is still room here for differences in individual experiences, and these differences influence how individuals approach problem solving on the tests. Thus, Richardson's view highlights that IQ scores can be influenced by how closely aligned an individual's experiences are with the cultural tools embedded in the test. He has also argued that non-cognitive, cultural, and affective factors explain why individuals score differently on IQ tests, with IQ not measuring the ability for complex cognition (Richardson, 2002; Richardson and Norgate, 2014, 2015).
So contrary to how IQ-ists want to conceptualize intelligence (as something static, fixed, and genetic), Richardson’s view is more dynamic, and looks to the cultural and social context of the individual.
Culture, class, and intelligence
Since I have conceptualized intelligence as a socially embedded, culturally-influenced, and dynamic trait, class and culture are deeply intertwined in my conception of intelligence. My definition recognizes that intelligence is shaped by cultural contexts. Culture provides different tools (cultural and psychological) which then develop an individual's cognitive abilities. Language is a critical cultural (also psychological) tool which shapes how individuals think and communicate. So intelligence, in my conception and definition, encompasses the ability to effectively use these cultural tools. Furthermore, individuals from different cultures may develop unique problem-solving strategies which are embedded in their cultural experiences.
Social class influences access to educational and cultural resources. Higher social classes often have greater access to quality education, books, and cultural experiences and this can then influence and impact an individual’s cognitive development and intelligence. My definition also highlights the limitations of reductionist approaches like IQ tests. It has been well-documented that IQ tests have class-specific knowledge and skills on them, and they also include knowledge and scenarios which are more familiar to individuals from certain social and cultural backgrounds. This bias, then, leads to disparities in IQ scores due to the nature of IQ tests and how the tests are constructed.
A definition of intelligence
Intelligence: Noun
Intelligence, as a noun, refers to the dynamic cognitive capacity—characterized by intentionality—possessed by individuals. It is defined by its connection to one's social and cultural context. This capacity includes a wide range of cognitive abilities and skills, reflecting the multifaceted nature of human cognition. This, then, shows that only humans are intelligent, since intentionality is a human-specific ability: we humans are minded beings, and minds give rise to and allow intentional action.
A fundamental aspect of intelligence is intentionality, which signifies that cognitive processes are directed toward specific goals, problem solving, or understanding within the individual's social and cultural context. So intelligence is deeply rooted in one's cultural and social context, making it socially embedded. It's influenced by cultural practices, social interactions, and the utilization of cultural tools for learning and problem solving. This dynamic trait evolves over time as individuals engage with their environment and integrate new cultural and social experiences into their cognitive processes.
Intelligence is the dynamic capacity of individuals to engage effectively with their sociocultural environment, utilizing a diverse range of cognitive abilities (psychological tools), cultural tools, and social interactions. Richardson’s perspective emphasizes that intelligence is multifaceted and not reducible to a single numerical score, acknowledging the limits of IQ testing. Vygotsky’s socio-cultural theory underscores that intelligence is deeply shaped by cultural context, social interactions, and the use of cultural tools for problem solving and learning. So a comprehensive definition of intelligence in my view—informed by Richardson and Vygotsky—is that of a socially embedded cognitive capacity—characterized by intentionality—that encompasses diverse abilities and is continually shaped by an individual’s cultural and social interactions.
In essence, within this philosophical framework, intelligence is an intentional, multifaceted cognitive capacity that is intricately connected to one's cultural and social life and surroundings. It reflects the dynamic interplay of intentionality, cognition, and socio-cultural influences. Thus it is closely related to the concept of cognition in philosophy, which is concerned with how individuals process information, make sense of the world, acquire knowledge, and engage in thought processes.
What IQ-ist conceptions of intelligence miss
The two concepts I'll discuss are the two most oft-cited concepts that hereditarian IQ-ists talk about—Gottfredson's "definition" of intelligence and Jensen's attempt at relating g (the so-called general factor of intelligence) to the first principal component (PC1).
Gottfredson’s “definition” is the most-commonly cited one in the psychometric IQ-ist literature:
Intelligence is a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience. It is not merely book learning, a narrow academic skill, or test-taking smarts. Rather, it reflects a broader and deeper capability for comprehending our surroundings—"catching on," "making sense" of things, or "figuring out" what to do.
I have pointed out the nonsense that is her "definition," since she says it's "not merely book learning, a narrow academic skill or test-taking smarts," yet supposedly IQ tests "measure" this, and they are based on… book learning, academic skill, and knowledge of the items on the test. That this "definition" is cited as something related to IQ tests is laughable. A research paper from Microsoft even cited this "definition" in Sparks of Artificial General Intelligence: Early Experiments with GPT-4 (Bubeck et al., 2023), but the reference was seemingly removed. Strange…
Spearman "discovered" g in 1904, but his g theory was refuted mere years later. (Never mind the fact that Spearman saw what he wanted to see in his data; Schlinger, 2003.) In fact, Spearman's g was falsified in 1947 by Thurstone and then again in 1992 by Guttman (Heene, 2008). Then Jensen came along trying to revive the concept, and he likened it to PC1. Here are the steps that show the circularity in Jensen's conception:
(1) If there is a general intelligence factor “g,” then it explains why people perform well on various cognitive tests.
(2) If “g” exists and explains test performance, the absence of “g” would mean that people do not perform well on these tests.
(3) We observe that people do perform well on various cognitive tests (i.e., test performance is generally positive).
(4) Therefore, since “g” would explain this positive test performance, we conclude that “g” exists.
Nonetheless, Jensen's g is an unfalsifiable tautology—it's circular. These are the "best" conceptions of intelligence the IQ-ists have, and they're either self-contradictory nonsense (Gottfredson's), already falsified (Spearman's), or unfalsifiable circular tautology (Jensen's). What makes Spearman's g even more nonsensical is that he posited g as a mental energy (Jensen, 1999), and more recently it has been proposed that this mental energy resides in mitochondria (Geary, 2018, 2019, 2020, 2021). Though I have also shown how this is nonsense.
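The point about circularity can be probed with a small simulation. A first principal component with uniformly positive loadings is mathematically guaranteed for any battery of positively intercorrelated tests, so extracting one cannot by itself confirm that a unitary g exists. The sketch below is a minimal illustration of Godfrey Thomson's (1916) "sampling of bonds" model, using assumed toy parameters: test scores are built from many independent micro-processes with no common factor, yet a "g-like" PC1 still emerges.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_bonds, n_tests, k = 5000, 200, 8, 50

# Thomson-style "bonds" model: each test score is the sum of a random
# subset of many independent micro-processes. By construction there is
# no general factor underlying these scores.
bonds = rng.normal(size=(n_people, n_bonds))
tests = np.column_stack([
    bonds[:, rng.choice(n_bonds, size=k, replace=False)].sum(axis=1)
    for _ in range(n_tests)
])

R = np.corrcoef(tests, rowvar=False)     # an all-positive correlation matrix appears
eigvals, eigvecs = np.linalg.eigh(R)     # eigenvalues in ascending order
pc1_share = eigvals[-1] / eigvals.sum()  # share of variance "explained" by PC1
loadings = eigvecs[:, -1]
same_sign = bool(np.all(loadings > 0) or np.all(loadings < 0))

print(f"PC1 variance share: {pc1_share:.2f}; uniform loadings: {same_sign}")
```

Because overlapping samples of bonds induce positive correlations between every pair of tests, the dominant eigenvector of the correlation matrix necessarily has same-sign loadings (a consequence of the Perron-Frobenius theorem for positive matrices). Since data generated with no general factor still yield a dominant, uniformly loaded PC1, the mere existence of PC1 cannot serve as evidence for g, which is precisely the circularity charged above.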
Conclusion
In this article, I have conceptualized intelligence as a socially embedded and culturally-influenced cognitive capacity characterized by intentionality. It is a dynamic trait which encompasses diverse abilities and is continually shaped by an individual’s cultural and social context and social interactions. I explained Vygotsky’s theory and also explained how his three main concepts relate to the definition I have provided. I then discussed Richardson’s view of intelligence (which is also Vygotskian), and showed how IQ tests are merely an index of one’s distance from the cultural tools that are embedded on the IQ test.
In discussing my conception of intelligence, I then contrasted it with the two "best" and most oft-cited conceptions of "intelligence" in the psychological/psychometric literature (Gottfredson's and Spearman's/Jensen's) and showed how they fail. My conception of intelligence isn't reductionist like the IQ-ists' (they try to reduce intelligence/IQ to genes, physiology, or brain structure); it is inherently holistic in recognizing how intelligence develops over the course of the lifespan, from birth to death. My definition recognizes intelligence as a dynamic, changing trait that's not fixed as the hereditarians claim it is, and in my conception there is no use for IQ tests. At best, IQ tests merely show what kind of knowledge and experiences one was exposed to in one's life, owing to the cultural tools inherent in the test. So my inherently Vygotskian view shows how intelligence can be conceptualized and then developed over the course of the human lifespan.
Intelligence, as I have conceived of it, is a dynamic and constantly-developing trait which evolves through our experiences, our cultural backgrounds, and how we interact with the world. It is a multifaceted, context-sensitive capacity. Note that I am not claiming that this is measurable; it cannot be reduced to a single quantifiable measure. And since intentionality is inherent in the definition, this further underscores how it resists quantification and measurability.
In sum, the discussions here show that the IQ-ist concept is lacking—it's empty. We should instead understand intelligence as an irreducible, socially and culturally-influenced, dynamic and constantly-developing trait, which is completely at odds with the hereditarian conception. Thus, I have argued for intelligence without IQ, since IQ "theory" is empty and doesn't do what its proponents claim it does (Nash, 1990). I have been arguing for years that IQ has massive limitations, and my definition here presents a multidimensional view, highlights cultural and contextual influences, and emphasizes intelligence's dynamic nature. The same cannot be said for reductionist hereditarian conceptions.
Free Will and the Immaterial Self: How Free Will Proves that Humans Aren’t Fully Physical Beings
2200 words
Introduction
That humans have freedom of will demonstrates that there is an immaterial aspect to humans. It implies that there is a nonphysical aspect to humans; thus, humans aren't fully physical beings. I will use the Ross-Feser argument on the immateriality of thought to strengthen that conclusion. But before that, I will argue that we do indeed have free will. The fact that we have free will will then be used to generate the conclusion that we are not fully physical beings. This conclusion is, moreover, supported by arguments for many flavors of dualism. I will then conclude by providing a case against the physicalist, materialist view that seeks to reduce human beings to purely physical entities, since that view is directly contested by the conclusion of my argument.
CID and free will
I recently argued for a view I call cognitive interface dualism (CID). The argument I formulated used action potentials (APs) as the intermediary between the mental and physical realms that Descartes was looking for (he thought that this interaction took place at the pineal gland, but he was wrong). Free will, on my CID, can be seen as a product of mental autonomy, non-deterministic mental causation, and the emergent properties of mind. So CID can accommodate free will and allow for its existence without relying on determinism.
The CID framework also argues that the mental (M) is irreducible to the physical (P), consistent with other forms of dualism. This suggests that the mind has a level of autonomy that isn’t completely determined by physical or material processes. Decision-making, on this view, occurs in the mental realm. CID allows for mental states to causally influence physical states (mental causation), and so free will operates when humans make choices, and these choices can initiate actions which aren’t determined by physical factors. Free will is also compatible with the necessary role of the human brain for minds—it’s an emergent property of the interaction of M and P. The fact of the matter is, minds allow agency: the ability to reason and make choices. That is, humans are unique, special animals, and they are unique and special because they have an immaterial mind which allows the capacity to make decisions and have freedom.
Overall, the CID framework provides a coherent explanation for the existence of free will, alongside the role of the brain in human cognition. It further allows for a nuanced perspective on human agency, while emphasizing the unique qualities of human decision-making and freedom.
Philosopher Peter van Inwagen has an argument using modus ponens which states: If moral responsibility exists, then free will exists. Moral responsibility exists, because individuals are held accountable for their actions in the legal system, ethical discussions, and everyday life. Thus, free will exists. Basically, if you’ve ever said to someone “That’s your fault”, you’re holding them accountable for their actions, assuming that they had the capacity to make choices and decisions independently. So this aligns with the concept of free will, since you’re implying that the person had the ability to act differently and make alternative choices.
The Libet experiments claim that unconscious brain processes are initiated before an action is made, preceding the conscious intention to move. But neither the original Libet experiment nor any similar ones justify the claim that the brain initiates freely-willed processes (Radder and Meynen, 2012)—because the mind is what is initiating these freely-willed actions.
Furthermore, when we introspect and reflect on our conscious experiences, we unmistakably perceive ourselves as making choices and decisions in various situations in our lives. These choices and decisions feel unconstrained and open; we experience a sense of deliberation when making them. But if we had no free will and our choices were entirely determined by external factors, then our experience of making choices would be illusory; our choices would be mere illusions of free will. Thus, the fact that we have a direct and introspective awareness of making choices implies that free will exists; it’s a fundamental aspect of our human experiences. So while this argument doesn’t necessarily prove that free will exists, it highlights the compelling phenomenological aspects of human decision-making, which can be seen as evidence for free will.
Having said all of this, I can now make the following argument: If humans have the ability to reason and make logical decisions, then humans have free will. Humans have the ability to reason and make logical decisions. So humans have free will. I will then take this conclusion that I inferred and use it in a later argument to infer that humans aren’t purely physical beings.
Freedom and the immaterial self
James Ross (1992) argued that all formal thinking is incompossibly determinate, while no physical process or function of physical processes is incompossibly determinate, which allowed him to infer that thought isn’t a physical or functional process. Ed Feser (2013) then argued that Ross’s argument cannot be refuted by any neuroscientific discovery. Feser then added to the argument and correctly inferred that humans aren’t fully physical beings.
A, B, and C are, after all, only the heart of Ross’s position. A little more fully spelled out, his overall argument essentially goes something like this:
A. All formal thinking is determinate.
B. No physical process is determinate.
C. No formal thinking is a physical process. [From A and B]
D. Machines are purely physical.
E. Machines do not engage in formal thinking. [From C and D]
F. We engage in formal thinking.
G. We are not purely physical. [From C and F] (Ed Feser, Can Machines Beg the Question?)
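As a check on the argument’s validity (though not its soundness), the syllogism above can be rendered in Lean, with all predicates left as uninterpreted placeholders over an abstract domain. The proof only shows that G follows from A, B, and F if those premises are granted; it says nothing about whether A and B are in fact true:

```lean
-- Validity check (not a soundness check) of the heart of the Ross–Feser
-- syllogism. The predicates below are uninterpreted placeholders.
theorem ross_feser {α : Type}
    (FormalThinking Determinate Physical We : α → Prop)
    (hA : ∀ x, FormalThinking x → Determinate x)  -- A: all formal thinking is determinate
    (hB : ∀ x, Physical x → ¬ Determinate x)      -- B: no physical process is determinate
    (hF : ∃ x, We x ∧ FormalThinking x) :         -- F: we engage in formal thinking
    ∃ x, We x ∧ ¬ Physical x :=                   -- G: some of what we do is not physical
  match hF with
  | ⟨x, hw, hf⟩ => ⟨x, hw, fun hp => hB x hp (hA x hf)⟩
```

Note that “We are not purely physical” is rendered here as “something we do is not a physical process,” which is the most the formal structure licenses.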
This is a conclusion that I myself have come to: since machines are purely physical and thinking isn’t a physical or functional process (though physical processes are necessary for thinking), machines cannot think.
Only beings with minds can intend, because mind is what allows a being to think. Since the mind isn’t physical, it would follow that a physical system can’t intend to do something—since it wouldn’t have the capacity to think. Take an alarm system. The alarm system does not intend to sound alarms when the system is tripped. It’s merely doing what it was designed to do; it’s not intending to carry out the outcome. The alarm system is a physical thing made up of physical parts. We can then liken this to, say, A.I. A.I. is made up of physical parts, so A.I. (a computer, a machine) can’t think. Moreover, individual physical parts are mindless, and no collection of mindless things counts as a mind. Thus, a mind isn’t a collection of physical parts. Physical systems are ALWAYS complicated systems of parts, but the mind isn’t. So it seems to follow that nothing physical can ever have a mind.
Physical parts of the natural world lack intentionality. That is, they aren’t “about” anything. It is impossible for an arrangement of physical particles to be “about” anything—meaning no arrangement of intentionality-less parts will ever count as having a mind. So a mind can’t be an arrangement of physical particles, since individual particles are mindless. Since mind is necessary for intentionality, it follows that whatever doesn’t have a mind cannot intend to do anything, like nonhuman animals. Human psychology is normative, and since the normative ingredient for any normative concept is the concept of reason, and only beings with minds can have reasons to act, human psychology would thus be irreducible to anything physical. Indeed, physicalism is incompatible with intentionality (Johns, 2020). The problem of intentionality is therefore yet another kill-shot for physicalism. It is therefore impossible for intentional states (i.e., cognition) to be reduced to, or explained by, physicalist theories or physical things. (Why Purely Physical Things Will Never Be Able to Think: The Irreducibility of Intentionality to Physical States)
Now that I have argued for the existence of free will, I will argue that our free will implies that there is an aspect of our selves and our existence that is not purely physical, but immaterial. Effectively, I will be arguing that humans aren’t fully physical beings.
So if humans were purely physical beings, then our actions and choices would be solely determined by physical laws and processes. However, if we have free will, then our actions are not solely determined by physical laws and processes, but are influenced by our capacity to make decisions independently. So humans possess a nonphysical aspect—free will, which is allowed by the immaterial mind and consciousness—which allows us to transcend the purely deterministic nature of purely physical things. Consequently, humans cannot be fully physical beings, since the existence of free will and the immaterial mind and consciousness suggests a nonphysical, immaterial aspect to our existence.
Either humans have free will, or humans do not have free will. If humans have free will, then humans aren’t purely physical. If humans don’t have free will, then it contradicts the premise that we have free will. So humans must have free will. Consequently, humans aren’t fully physical beings.
Humans aren’t fully physical beings, since we have the capacity for free will and thought—where free will is the capacity to make choices that are not determined by external factors alone. If humans have the ability to reason and make logical decisions, then humans have free will. Humans have the ability to reason and make logical decisions. So humans have free will. Reasoning and the ability to make logical decisions are based on thinking. Thinking is an immaterial—non-physical—process. So if thinking is an immaterial process, and what allows thinking are minds, which can’t be physical, then we aren’t purely physical. Put into premise and conclusion form, it goes like this:
(1) If humans have the ability to reason and make logical decisions, then humans have free will.
(2) Humans have the ability to reason and make logical decisions.
(3) Reasoning and the ability to reason and make logical decisions are based on thinking.
(4) Thinking is an immaterial—non-physical—process.
(5) If humans have free will, and what allows free will is the ability to think and make decisions, then humans aren’t purely physical beings.
(C) Therefore, humans aren’t purely physical beings. [From 1, 2, and 5]
This argument suggests that humans possess free will and engage in immaterial thinking processes, which according to the Ross-Feser argument, implies the existence of immaterial aspects of thought. So what allows this is consciousness, and the existence of consciousness implies the existence of a nonphysical entity. This nonphysical entity is the mind.
So in CID, the self (S) is the subject of experience, while the mind (M) encompasses mental states, subjective experiences, thoughts, emotions, and consciousness, and consciousness (C) refers to the awareness of one’s own mental states and experiences. CID also recognizes that the brain is a necessary pre-condition for human mindedness but not a sufficient condition: for there to be a mind at all there needs to be a brain—basically, for there to be mental facts, there must be physical facts. The self is what has the mind, and the mind is the realm in which mental states and experiences occur. So CID posits that the self is the unified experiencer—the entity that experiences and interacts with the contents of the mind through APs.
So the argument that I’ve mounted in this article and in my original article on CID—that humans aren’t fully physical beings—is based on the idea that thinking and conscious experiences are immaterial, nonphysical processes.
Conclusion
So CID offers a novel perspective on the mind-body problem, arguing that APs are the interface between the mental and the physical world. With the arguments I’ve made here, it establishes that humans aren’t purely physical beings. Through the argument that mental states are irreducible to physical states, CID acknowledges that an immaterial self exists and plays a fundamental role in human mental life. This immaterial self—the seat of our conscious experiences, thoughts, decisions and desires—bridges the gap between M and P. This further underscores the argument that the mind is immaterial, and thus so is the self (“I”, the experiencer, the subject of experience), and that the subject isn’t the brain or the nervous system.
CID recognizes that human mental life is characterized by its intrinsic mental autonomy and free will. We are not mere products of deterministic physical processes; rather, we are agents capable of making genuine choices and decisions. The conscious experience of making choices, along with the profound sense of freedom in our decisions, are immediate and undeniable aspects of our reality, which further cements the existence of free will. So the concept of free will reinforces the claim and argument that humans aren’t fully physical beings. These aspects of our mental life defy reduction to physical causation.
Hypertension, Brain Volume, and Race: Hypotheses, Predictions and Actionable Strategies
2300 words
Introduction
Hypertension (HT, also known as high blood pressure, BP) has traditionally been defined as a BP of 140/90. But more recently, the guidelines were changed, defining HT as a BP over 130/80 (Carey et al, 2022; Iqbal and Jamal, 2022). One 2019 study showed that in a sample with an age range of 20-79, 24 percent of men and 23 percent of women could be classified as hypertensive based on the old guidelines (140/90) (Deguire et al, 2019). Having consistently high BP could lead to devastating consequences like (from the patient’s perspective) hot flushes, dizziness, and mood disorders (Goodhart, 2016). One serious problem with HT is that consistently high BP is associated with a decrease in brain volume (BV). This has been seen in multiple systematic reviews and meta-analyses (Alosco et al, 2013; Beauchet et al, 2013; Lane et al, 2019; Alateeq, Walsh and Cherbuin, 2021; Newby et al, 2022), while we know that long-standing hypertension has deleterious effects on brain health (Salerno et al, 1992). However, it’s not only high BP that’s related to this; so is lower BP in conjunction with lower pulse pressure (Muller et al, 2010; Foster-Dingley, 2015). What this says to me is that too much or too little blood flow to the brain is deleterious for brain health. I will state the hypothesis and then the predictions that follow from it. I will then provide three reasons why I think this relationship occurs.
The hypothesis
The hypothesis is simple: high BP (hypertension, HT) is associated with a reduced brain volume. This relationship is dose-dependent, meaning that the extent and duration of HT correlates with the degree of BV changes. So the hypothesis suggests that there is a relationship—an association—between HT and brain volume, where people with HT will be more likely to have decreased BVs than those who lack HT—that is, those with BP in the normal range.
A dose-dependent relationship has been observed (Alateeq, Walsh and Cherbuin, 2021), showing that as HT increases and persists over time, the decreases in BV become more pronounced. This relationship suggests that it’s not a binary, present-or-absent situation, but one that varies across a continuum. So people with shorter-lasting HT will show smaller effects than those with constant, consistently elevated BP, who will show greater decreases in BV. This dose-dependent relationship also suggests that as BP continues to elevate, the decrease in BV will worsen.
This dose-dependent relationship implies a few things. The consequences of HT on BV aren’t binary (either-or), but are related to the severity of HT, how long one has had HT, and at what age one has it—varying on a continuum. For instance, people with mild or short-lasting HT would experience smaller reductions in BV than those with severe or long-standing HT. It also suggests that the longer one has untreated HT, the more severe the reduction in BV will be. So as BP continues to elevate uncontrolled, it may lead to a gradual reduction in BV. The relationship between HT and BV isn’t uniform; it varies based on the intensity and duration of high BP.
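The dose-dependent claim can be made concrete with a toy model. Everything here is a purely illustrative sketch: the function name, threshold, and coefficients are hypothetical placeholders I have chosen, not empirical estimates from any of the cited studies.

```python
def predicted_bv_reduction(systolic_bp, years_hypertensive,
                           threshold=130.0,
                           severity_coeff=0.01,
                           duration_coeff=0.05):
    """Toy, purely illustrative model of the dose-dependent hypothesis:
    predicted brain-volume (BV) reduction (arbitrary units) grows with
    both how far BP sits above a threshold and how long it has been
    elevated. All coefficients are hypothetical placeholders."""
    excess = max(systolic_bp - threshold, 0.0)
    # No predicted reduction when BP is in the normal range.
    if excess == 0.0:
        return 0.0
    # Severity and duration jointly drive the predicted reduction,
    # encoding the continuum (non-binary) claim in the text.
    return severity_coeff * excess * (1.0 + duration_coeff * years_hypertensive)
```

The point of the sketch is only the shape of the relationship: more severe and longer-standing hypertension predicts a larger reduction, and the prediction varies continuously rather than being present-or-absent.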
So the hypothesis suggests that HT isn’t just a risk factor for cardiovascular disease; it’s also a risk factor for decreased BV. This seems intuitive, since the higher one’s BP, the more likely it is that a blockage is beginning somewhere in the intricate system of blood vessels in the body. And since the brain is a vascular organ, decreasing the amount of blood flowing to it would lead to cell death and white matter lesions, which would lead to a smaller BV. One newer study, with a sample of Asians, whites, blacks, and “Latinos,” showed that, compared to those with normal BP, those who were transitioning to higher BP or already had higher BP had lower brain connectivity and decreased cerebral gray matter and frontal cortex volume, and this change was worse for men (George et al, 2023). Shang et al (2021) showed that HT diagnosed in early and middle life, but not late life, was associated with decreased BV and increased risk of dementia. This, of course, is due to the slow, cumulative effects of HT on the brain. Power et al (2016) found that “The pattern of hypertension ~15 years prior and hypotension concurrent with neuroimaging was associated with smaller volumes in regions preferentially affected by Alzheimer’s disease.” But not only is BP relevant here; so is the variability of BP at night (Gutteridge et al, 2022; Yu et al, 2022). Alateeq, Walsh and Cherbuin (2021) conclude that:
Although reviews have been previously published in this area, they only investigated the effects of hypertension on brain volume [86]. To the best of our knowledge, this study’s the first systematic review with meta-analysis providing quantitative evidence on the negative association between continuous BP and global and regional brain volumes. Our results suggest that heightened BP across its whole range is associated with poorer cerebral health which may place individuals at increased risk of premature cognitive decline and dementia. It is therefore important that more prevention efforts be directed at younger populations with a greater focus on achieving optimal BP rather than remaining below clinical or pre-clinical thresholds[5].
One would think that high BP would actually increase blood flow to the brain, but HT causes alterations in the flow of blood to the brain which lead to ischaemia, and it causes the blood-brain barrier to break down (Pires et al, 2013). Essentially, HT has devastating effects on the brain which could lead to dementia and Alzheimer’s (Iadecola and Davisson, 2009).
So the association between HT and decreased BV means that individuals with HT can experience alterations in BV in comparison to those with normal BP. The hypothesis also suggests that there are several mechanisms (detailed below), which may lead to various physiological and anatomic changes in the brain, such as vascular damage, inflammation and tissue atrophy.
The mechanisms
(1) High BP can damage blood vessels in the brain, which leads to reduced blood flow. This is called “cerebral hypoperfusion.” The reduced blood flow can deprive the cells in the brain of oxygen and nutrients, which causes them to shrink or die, leading to decreased brain volume (BV). Over time, high BP can damage the arteries, making them less elastic, which further compromises blood flow to the brain.
(2) Having high BP over a long period of time can cause hypertensive encephalopathy, which is basically brain swelling. A rapid increase in BP could, over the short term, increase BV, but left untreated it could lead to brain damage and atrophy over time.
And (3) chronically high BP can lead to the creation of white matter lesions on the brain; these lesions are areas of damaged brain tissue which result from microvascular changes caused by high BP. Over time, the accumulation of white matter lesions could lead to a decrease in brain volume. The white matter lesions that HT contributes to are associated with cognitive changes and decreased BV, and they increase with BP severity.
So we have (1) cerebral hypoperfusion, (2) hypertensive encephalopathy, and (3) white matter lesions. I need to think/read more on which of these could lead to decreased BV, or if they all actually work together to decrease BV. We know that HT damages blood vessels, and of course there are blood vessels in the brain, so it then follows that HT would decrease BV.
I can also detail a step-by-step mechanism. The process begins with consistently elevated BP, which could be due to various factors like genetics, diet/lifestyle, and underlying medical conditions. High BP then places increased strain on the blood vessels in the body, including those in the brain. This higher pressure could then lead to structural change of the blood vessels over time. Chronic HT can then lead to endothelial dysfunction, which could impair the ability of blood vessels to regulate blood flow and maintain vessel integrity. This dysfunction can result in oxidative stress and inflammation.
Then, as a response to prolonged elevated BP, blood vessels in the brain could undergo vascular remodeling, which involves changes in blood vessel structure and thickness, which can then affect blood flow dynamics. Furthermore, in some cases, this could lead to cerebral small vessel disease, which involves damage to the small blood vessels in the brain, including capillaries and arterioles. This could impair delivery of oxygen and nutrients to brain tissue, leading to cell death and consequently a decrease in BV. Reduced blood flow, along with compromised blood vessel integrity, could lead to cerebral ischaemia—reduced blood supply—and hypoxia—reduced oxygen supply—in certain parts of the brain. This can then result in neural damage and eventually cell death.
HT-related vascular changes and cerebral small vessel disease can then trigger brain inflammation. Prolonged exposure to neural inflammation, hypoxia and ischemia can lead to neuronal atrophy, where neurons shrink and lose their functional integrity. HT can also increase the incidence of white matter lesions—areas of damaged white matter tissue visible in neuroimages. Finally, over time, the cumulative effects of the aforementioned processes—vascular changes, inflammation, neuronal atrophy, and white matter changes—could lead to a decrease in BV. This reduction can manifest as brain atrophy, which is then observed in parts of the brain that are susceptible and vulnerable to the effects of HT.
So the step-by-step mechanism goes like this: elevated BP —> increased vascular strain —> endothelial dysfunction —> vascular remodeling —> cerebral small vessel disease —> ischemia and hypoxia —> inflammation and neuroinflammation —> neuronal atrophy —> white matter changes —> reduction in BV.
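Purely as a bookkeeping aid, the hypothesized chain can be written as an ordered list, with stage labels taken directly from the text; the linear ordering is itself part of the hypothesis, not an established fact:

```python
# The hypothesized causal cascade from sustained hypertension to reduced
# brain volume, as an ordered sequence of stages (labels from the text).
CASCADE = [
    "elevated BP",
    "increased vascular strain",
    "endothelial dysfunction",
    "vascular remodeling",
    "cerebral small vessel disease",
    "ischemia and hypoxia",
    "inflammation and neuroinflammation",
    "neuronal atrophy",
    "white matter changes",
    "reduction in brain volume",
]

def downstream_of(stage):
    """Return the stages hypothesized to follow a given stage in the cascade."""
    return CASCADE[CASCADE.index(stage) + 1:]
```

Writing the chain this way makes the testable structure explicit: interrupting an upstream stage (say, endothelial dysfunction) should, on this hypothesis, blunt every downstream stage.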
Hypotheses and predictions
H1: The severity of HT directly correlates with the extent of BV reduction. One prediction would be that people with more severe HT would exhibit greater BV decreases than those with moderate (less severe) HT, which is where the dose-dependent relationship comes in.
H2: The duration of HT is a critical factor in BV reduction. One prediction would be that people with long-standing HT will show more significant BV changes than those with recent onset HT.
H3: Effective BP management can mitigate BV reduction in people with HT. One prediction would be that people with more controlled HT would show less significant BV reduction than those with uncontrolled HT.
H4: Certain subpopulations may be more susceptible to BV decreases due to HT. One prediction is that susceptibility will track factors like age of onset (HT at a younger age), genetic factors (some may have certain gene variants that make them more susceptible and vulnerable to damage caused by elevated BP), comorbidities (people with diabetes, obesity and heart problems could be at higher risk of decreased BV due to the interaction of these factors), and ethnic/racial factors (some populations—like blacks—could be at higher risk of having HT, and could be more at risk due to experiencing disparities in healthcare and treatment).
The hypotheses and predictions generated from the main proposition that HT is associated with a reduction in BV and that the relationship is dose-dependent can be considered risky, novel predictions. They are risky in the sense that they are testable and falsifiable. Thus, if the predictions don’t hold, then it could falsify the initial hypothesis.
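One way H1 and H2 could be tested is sketched below with entirely synthetic data; the variable names, effect sizes, and noise level are my own placeholders, not values from any cited study. The idea is simply to regress BV change on severity and duration and check the signs of the estimated slopes:

```python
import random

random.seed(0)

# Entirely synthetic cohort, for illustrating the test procedure only.
n = 500
severity = [random.uniform(0, 40) for _ in range(n)]  # mmHg above threshold (H1)
duration = [random.uniform(0, 30) for _ in range(n)]  # years hypertensive (H2)
# Generate BV change with hypothesized negative dose-dependent effects
# (the -0.05 and -0.08 effect sizes are arbitrary placeholders).
bv_change = [-0.05 * s - 0.08 * d + random.gauss(0, 1.0)
             for s, d in zip(severity, duration)]

def slope(x, y):
    """Ordinary least-squares slope of y on x (simple regression)."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

# H1 and H2 predict negative slopes; non-negative slopes in real data
# would count as evidence against the dose-dependence hypothesis.
severity_slope = slope(severity, bv_change)
duration_slope = slope(duration, bv_change)
```

This makes the falsifiability claim concrete: the hypothesis commits to a sign for each coefficient, so a real dataset in which either slope came out non-negative would tell against it.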
Blacks and blood pressure
This is significant for populations like black Americans. About 33 percent of blacks have hypertension (Peters, Arojan, and Flack, 2006), while urban blacks are more likely to have elevated BP than whites (Lindhorst et al, 2007). Though Non, Gravlee, and Mulligan (2012) showed that racial differences in education—not genetic ancestry—explained differences in BP in blacks compared to whites. Further, Victor et al (2018) showed that in black male barbershop attendees who had uncontrolled BP, medication along with outreach led to a decrease in BP. Williams (1992) cited stress, socioecologic stress, social support, coping patterns, health behavior, sodium, calcium, and potassium consumption, alcohol consumption, and obesity as social factors which lead to increased BP.
Moreover, consistent with the hypothesis discussed here (that chronically elevated BP leads to reductions in BV, which lead to a higher chance of dementia and Alzheimer’s), it’s been shown that vulnerability to HT is a major determinant of the risk of acquiring Alzheimer’s (Clark et al, 2020; Akushevic et al, 2022). It has also been shown that “a lifetime of racism makes Alzheimer’s more common in black Americans.” This is consistent with the discussion here: since racism is associated with stress, which is associated with elevated BP, consistent events of racial discrimination would lead to consistently elevated BP, which would then lead to decreased BV and a higher chance of acquiring Alzheimer’s. Further, there is evidence that blood pressure drugs (in this case telmisartan) reduce the incidence of Alzheimer’s in black Americans (Zhang et al, 2022), while a similar result was seen using antihypertensive medications in blacks, which led to a reduction in the incidence of dementia (Murray et al, 2018), lending credence to the discussed hypothesis. Stress and poverty—experiences—and not ancestry could explain higher rates of dementia in black Americans as well. Thus, since blood pressure could explain higher rates of dementia in black populations, this lends further credence to the discussed hypothesis.
Conclusion
The evidence that chronically elevated BP leads to reductions in BV is well-studied, and the mechanisms are well-known. I discussed the hypothesis that chronically elevated BP leads to reduced blood flow to the brain, which decreases BV. I then discussed the mechanisms behind the relationship, and the hypotheses and predictions that follow from them. Lastly, I discussed the well-known fact that blacks have higher rates of high BP, and also higher rates of dementia and Alzheimer’s, and linked their higher rates of high BP to those maladies.
So by catching chronically elevated BP at early ages—since the earlier one has high BP, the more likely one is to have reduced brain volume and the associated maladies—we can begin to fight the associated issues before they coalesce, since we know the mechanisms behind them, along with the fact that antihypertensive medications decrease the incidence of dementia and Alzheimer’s in black Americans.
Cope’s (Deperet’s) Rule, Evolutionary Passiveness, and Alternative Explanations
4450 words
Introduction
Cope’s rule is an evolutionary hypothesis which suggests that, over geological time, species have a tendency to increase in body size. (It has been proposed that Cope’s rule be renamed Deperet’s rule, since Cope didn’t explicitly state the hypothesis while Deperet did; Bokma et al, 2015.) Named after Edward Drinker Cope, it proposes that, on average, through the process of “natural selection,” species have a tendency to get larger, and so it implies a directionality to evolution (Hone and Benton, 2005; Liow and Taylor, 2019). There are a few explanations for the so-called rule: it’s either due to passive or driven evolution (McShea, 1994; Gould, 1996; Raia et al, 2012) or due to methodological artifacts (Sowe and Wang, 2008; Monroe and Bokma, 2010).
However, Cope’s rule has been subject to debate and scrutiny in paleontology and evolutionary biology. The interpretation of Cope’s rule hinges on how “body size” is interpreted (mass or length), along with alternative explanations. I will trace the history of Cope’s rule, discuss studies in which it was proposed that the directionality from the rule was empirically shown, and discuss methodological issues. I will propose alternative explanations that don’t rely on the claim that evolution is “progressive” or “driven,” and show that developmental plasticity throws a wrench into this claim, too. I will then end with a constructive dilemma argument showing that either Cope’s rule is a methodological artifact or it’s due to passive evolution, since it’s not a driven trend as progressionists claim.
How developmental plasticity refutes the concept of “more evolved”
In my last article on this issue, I showed the logical fallacies inherent in the argument PumpkinPerson uses: it affirms the consequent, assuming it’s true leads to a logical contradiction, and of course reading phylogenies the way he does just isn’t valid.
If the claim “more speciation events within a given taxon = more evolution” were valid, then we would consistently observe a direct correlation between the number of speciation events and the extent of evolutionary change in all cases. But we don’t, since evolutionary rates vary and other factors influence evolution, so the claim isn’t universally valid.
Take these specific examples: The horseshoe crab has a lineage going back hundreds of millions of years with few speciation events, but it has undergone evolutionary changes. Conversely, microorganisms could undergo many speciation events with relatively minor genetic change. Cichlid fishes have undergone rapid evolutionary change and speciation, but the genetic and phenotypic diversity between them doesn’t solely depend on speciation events, since factors like ecological niche partitioning and sexual selection also play a role in why they are different even though they are relatively young species (a specific claim that Herculano-Houzel made in her 2016 book The Human Advantage). Lastly, human evolution has had relatively few speciation events, but the extent of evolutionary change in our species is vast. Speciation events are of course crucial to evolution. But if one reads too much into the abstractness of the evolutionary tree, then one will not read it correctly. The position of the terminal nodes is meaningless.
It’s important to realize that evolution isn’t just morphological change which then leads to the creation of a new species (macro-evolution); there is also micro-evolution. Species that underwent evolutionary change without speciation include peppered moths (industrial melanism), bacteria evolving antibiotic resistance, humans evolving lactase persistence, and Darwin’s finches. These are quite clearly evolutionary changes, and they’re due to microevolutionary changes.
Developmental plasticity directly refutes the contention of “more evolved,” since individuals within a species can exhibit significant trait variation without speciation events. This isn’t captured by phylogenies: they’re typically modeled on genetic data, and they don’t capture developmental differences that arise due to environmental factors during development. (See West-Eberhard’s outstanding Developmental Plasticity and Evolution for more on how, in many cases, development precedes genetic change, meaning the inference can be drawn that genes aren’t leaders in evolution; they’re mere followers.)
If “more evolved” is solely determined by the number of speciation events (branches) in a phylogeny, then species that exhibit greater developmental plasticity should be considered “more evolved.” But it is empirically observed that some species exhibit significant developmental plasticity which allows them to rapidly change their traits during development in response to environmental variation without undergoing speciation. So since the species with more developmental plasticity aren’t considered “more evolved” based on the “more evolved” criteria, then the assumption that “more evolved” is determined by speciation events is invalid. So the concept of “more evolved” as determined by speciation events or branches isn’t valid since it isn’t supported when considering the significant role of developmental plasticity in adaptation.
There is anagenesis and cladogenesis. Anagenesis is the transformation of a species into a new one without a branching of the ancestral lineage. Cladogenesis is the formation of a new species by evolutionary divergence from an ancestral form. In anagenesis, evolutionary changes accumulate within a lineage and the changed form replaces the older one. So anagenesis shows that a species can slowly change and become a new species without there being a branching event. Horse, human, elephant, and bird evolution include examples of this.
Moreover, developmental plasticity can lead to anagenesis. Developmental, or phenotypic, plasticity is the ability of an organism to produce different phenotypes from the same genotype based on environmental cues during development. Developmental plasticity can facilitate anagenesis, and since developmental plasticity is ubiquitous in the development of individuals and thus across species as a whole, it is a rule and not an exception.
Directed mutation and evolution
Back in March, I wrote on the existence of directed mutations. Directed mutation directly speaks against the concept of “more evolved.” Here’s the argument:
(1) If directed mutations play a crucial role in helping organisms adapt to changing environments, then the notion of “more evolved” as a linear hierarchy is invalid.
(2) Directed mutations are known to occur and contribute to a species’ survivability in an environment undergoing change during development (the concept of evolvability is apt here).
(C) So the concept of “more evolved” as a linear hierarchy is invalid.
A directed mutation is a mutation that occurs due to environmental instability and helps an organism survive in an environment that changed while the individual was developing. Two mechanisms of DM are transcriptional activation (TA) and supercoiling. TA can cause changes to single-stranded DNA, and can also cause supercoiling (the over- or underwinding of the DNA double helix). TA can be caused by derepression (relief of repression when a repressor molecule is absent) or induction (the activation of an inactive gene which then gets transcribed). So these are examples of how nonrandom (directed) mutation and evolution can occur (Wright, 2000). Such changes are possible through the plasticity of phenotypes during development and ultimately are due to developmental plasticity. These stress-directed mutations can be seen as quasi-Lamarckian (Koonin and Wolf, 2009). It’s quite clear that directed mutations occur.
DMs, along with developmental plasticity and evo-devo as a whole, refute the simplistic thinking of “more evolved.”
Now here is the argument that PP is using, and why it’s false:
(1) More branches on a phylogeny indicate more speciation events.
(2) More speciation events imply a higher level of evolutionary advancement.
(C) Thus, more branches on a phylogeny indicate a higher level of evolutionary advancement.
The false premise is (2) since it suggests that more speciation events imply a higher level of evolutionary advancement. It implies a goal-directed aspect to evolution, where the generation of more species is equated with evolutionary progress. It’s just reducing evolution to linear advancement and progress; it’s a teleological bent on evolution (which isn’t inherently bad if argued for correctly, see Noble and Noble, 2022). But using mere branching events on a phylogeny to assume that more branches = more speciation = more evolved is simplistic thinking that doesn’t make sense.
If evolution encompasses changes in an organism’s phenotype, then changes in an organism’s phenotype, even without changes in its genes, count as examples of evolution. Evolution does encompass phenotypic change, so phenotypic changes without genetic changes count as evolution. There is also nongenetic “soft inheritance” (see Bonduriansky and Day, 2018).
Organisms can exhibit similar traits due to convergent evolution. So it’s not valid to assume a direct and strong correlation between an organism’s position on a phylogeny and its degree of resemblance to a common ancestor.
Dolphins and ichthyosaurs share similar traits, but dolphins are mammals while ichthyosaurs were reptiles that lived millions of years ago. Their convergent morphology demonstrates that common ancestry doesn’t determine resemblance. The Tasmanian wolf (thylacine) and the grey wolf independently evolved similar body plans and ecological roles; despite different genetics and evolutionary histories, they share a physical resemblance due to similar ecological niches. Likewise, the last common ancestor of bats and birds didn’t have wings, yet both lineages have them: the trait emerged twice, independently. These examples show that the degree of resemblance to a common ancestor is not determined by an organism’s position on a phylogeny.
Now, there is a correlation between body size and branches (splits) on a phylogeny (Cope’s rule), and I will explain that later. That there is a correlation doesn’t mean there is a linear progression, and it doesn’t imply one. Back in 2017 I used the example of floresiensis, and that holds here too. And Terrence Deacon’s (1990) work suggests that pseudoprogressive trends in brain size can be explained by bigger whole organisms being selected—this is important because the whole animal is selected, not any one of its individual parts. The correlation isn’t indicative of a linear progression up some evolutionary ladder, either: it’s merely a byproduct of selecting larger animals (whole organisms being the only things that are selected).
I will argue that it is this remarkable parallelism, and not some progressive selection for increasing intelligence, that is responsible for many pseudoprogressive trends in mammalian brain evolution. Larger whole animals were being selected—not just larger brains—but along with the correlated brain enlargement in each lineage a multitude of parallel secondary internal adaptations followed. (Deacon, 1990)
Nonetheless, the claim here is one from DST—the whole organism is selected, so obviously so is its body plan (bauplan). The last two havens for the progressionist are the realms of brain size and body size. Deacon refuted the selection-for brain size claim, so we’re now left with body size.
Does the evolution of body size lend credence to claims of driven, progressive evolution?
The tendency for bodies to grow larger and larger over evolutionary time is something of a truism. Since smaller bacteria eventually evolved into larger, more complex multicellular organisms (see Gould’s modal bacter argument), this must mean that evolution is progressive and driven, at least for body size, right? Wrong. I will argue here, using a constructive dilemma, that either evolution is passive and that is what explains the increases in body size, or the apparent trend is due to methodological flaws in how body size is measured (length or mass).
In Full House, Gould (1996) argued that the evolution of body size isn’t driven but passive, namely that it is evolution away from smaller size. Indeed, it seems that Cope’s (Depéret’s) rule is due to cladogenesis (the emergence of new species), not selection for body size per se (Bokma et al, 2015).
Given these three conditions, we note an increase in size of the largest species only because founding species start at the left wall, and the range of size can therefore expand in only one direction. Size of the most common species (the modal decade) never changes, and descendants show no bias for arising at larger sizes than ancestors. But, during each act, the range of size expands in the only open direction by increase in the total number of species, a few of which (and only a few) become larger (while none can penetrate the left wall and get smaller). We can say only this for Cope’s Rule: in cases with boundary conditions like the three listed above, extreme achievements in body size will move away from initial values near walls. Size increase, in other words, is really random evolution away from small size, not directed evolution toward large size. (Gould, 1996)
Dinosaurs were some of the largest animals to ever live. So we might say that there was a drivenness in their bodies to become larger and larger, right? Wrong. The evolution of body size in dinosaurs is passive, not driven (progressive) (Sookias, Butler, and Benson, 2012). Gould (1996) also showed passive trends in body size in plankton and forams. He also cited Stanley (1973), who argued that groups starting at the left wall of minimal size will increase in mean size as a consequence of randomness, not any driven tendency for larger body size.
In other, more legitimate cases, increases in means or extremes occur, as in our story of planktonic forams, because lineages started near the left wall of a potential range in size and then filled available space as the number of species increased—in other words, a drift of means or extremes away from a small size, rather than directed evolution of lineages toward large size (and remember that such a drift can occur within a regime of random change in size for each individual lineage—the “drunkard’s walk” model).
In 1973, my colleague Steven Stanley of Johns Hopkins University published a marvelous, and now celebrated, paper to advance this important argument. He showed (see Figure 27, taken from his work) that groups beginning at small size, and constrained by a left wall near this starting point, will increase in mean or extreme size under a regime of random evolution within each species. He also advocated that we test his idea by looking for right-skewed distributions of size within entire systems, rather than by tracking mean or extreme values that falsely abstract such systems as single numbers. In a 1985 paper I suggested that we speak of “Stanley’s Rule” when such an increase of means or extremes can best be explained by undirected evolution away from a starting point near a left wall. I would venture to guess (in fact I would wager substantial money on the proposition) that a large majority of lineages showing increase of body size for mean or extreme values (Cope’s Rule in the broad sense) will properly be explained by Stanley’s Rule of random evolution away from small size rather than by the conventional account of directed evolution toward selectively advantageous large size. (Gould, 1996)
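Stanley’s left-wall model can be illustrated with a toy simulation (my own sketch, not Stanley’s or Gould’s data or code; all parameter values are invented): each lineage’s size takes an unbiased random walk, reflected at a left wall of minimal size. No lineage is driven toward larger size, yet the mean and maximum drift upward and the distribution becomes right-skewed, with the mode staying near the wall.

```python
import random
from statistics import mean, median

def simulate_lineages(n_lineages=2000, steps=200, wall=1.0, start=1.5,
                      step_sd=0.2, seed=42):
    """Unbiased random walk in body size for each lineage, reflected at a
    left wall of minimal size (a toy version of Stanley's model)."""
    rng = random.Random(seed)
    sizes = [start] * n_lineages
    for _ in range(steps):
        for i in range(n_lineages):
            sizes[i] += rng.gauss(0.0, step_sd)   # no directional bias
            if sizes[i] < wall:                   # bounce off the left wall
                sizes[i] = 2 * wall - sizes[i]
    return sizes

sizes = simulate_lineages()
# The mean rises above the starting size and exceeds the median (right skew),
# purely from diffusion away from the wall; nothing "drove" size upward.
print(f"mean={mean(sizes):.2f} median={median(sizes):.2f} max={max(sizes):.2f}")
```

The extremes (“the largest species”) expand fastest of all, which is exactly the drunkard’s-walk point: the right tail grows because it is the only direction open.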
Gould (1996) also discusses the results of McShea’s study, writing:
Passive trends (see Figure 33) conform to the unfamiliar model, championed for complexity in this book, of overall results arising as incidental consequences, with no favored direction for individual species. (McShea calls such a trend passive because no driver conducts any species along a preferred pathway. The general trend will arise even when the evolution of each individual species conforms to a “drunkard’s walk” of random motion.) For passive trends in complexity, McShea proposes the same set of constraints that I have advocated throughout this book: ancestral beginnings at a left wall of minimal complexity, with only one direction open to novelty in subsequent evolution.
But Baker et al (2015) claim that body size is an example of driven evolution. However, they did not model cladogenetic factors, which calls their conclusion into question, and I think their claim doesn’t follow in any case. If a taxon possesses a potential size range and the ancestral size approaches the lower limit of this range, there will be a passive inclination for descendants to exceed the size of their ancestors. The taxon in question possesses a potential size range, and the ancestral size is on the lower end of that range. So there will be a passive tendency for descendants of this taxon to be larger than their predecessors.
Here’s an argument that concludes that evolution is passive and not driven. I will then give examples illustrating its premises.
(1) Extant animals that are descended from more nodes on an evolutionary tree tend to be bigger than animals descended from fewer nodes (your initial premise).
(2) There exist cases where extant animals descended from fewer nodes are larger or more complex than those descended from more nodes (counterexamples: whales are descended from fewer nodes while having some of the largest body sizes in the world, while bats are descended from more nodes while having far smaller bodies).
(C1) Thus, either P1 doesn’t consistently hold (not all extant animals descended from more nodes are larger), or it is not a reliable rule (given the counterexamples).
(3) If P1 does not consistently hold true (not all extant animals descended from more nodes are larger), then it is not a reliable rule.
(4) P1 does not consistently hold true.
(C2) P1 is not a reliable rule.
(5) If P1 is not a reliable rule (given the existence of counterexamples), then it is not a valid generalization.
(6) P1 is not a reliable rule.
(C3) So P1 is not a valid generalization.
(7) If P1 isn’t a valid generalization in the context of evolutionary biology, then there must be exceptions to this observed trend.
(8) The existence of passive evolution, as suggested by the inconsistencies in P1, implies that the trends aren’t driven by progressive forces.
(C4) Thus, the presence of passive evolution and exceptions to P1’s trend challenge the notion of a universally progressive model of evolution.
(9) If the presence of passive evolution and exceptions to P1’s trend challenges the notion of a universally progressive model of evolution, then that notion isn’t supported by the evidence.
(10) The presence of passive evolution and exceptions to P1’s trend does challenge the notion of a universally progressive model of evolution.
(C5) Therefore, the notion of a universally progressive model of evolution isn’t supported by the evidence.
(1) Bluefin tuna are known to have a broad potential size range, with some being small and others massive (think of the enormous range of sizes, in both length and mass, of the tuna caught on fishing shows). Imagine a population of bluefin tuna where the ancestral size is close to the lower end of that range. The antecedent is satisfied: bluefin tuna have a potential size range, and the ancestral tuna were relatively small in comparison to the maximum size. So there will be a passive tendency for their descendants to be larger.
(2) African elephants in some parts of Africa are small due to ecological constraints and hunting pressures, and these smaller-sized ancestors are close to the lower limit of the potential size range of African elephants. Thus there will be a passive tendency for descendants of these elephants to be larger than their smaller-sized ancestors over time.
(3) Consider Galápagos tortoises, which are also known for their large variation in size among the different species and populations on the Galápagos islands. Consider tortoises with smaller body sizes due to resource conditions or the conditions of their ecologies. In this case, the ancestral size is close to the lower limit of their potential size range. Therefore, we can expect a passive tendency for descendants of these tortoises to evolve larger body sizes.
Further, in Stanley’s (1973) study of Cope’s rule in fossil rodents, he observed that body size distributions in these rodents became bigger over time while the modal size stayed small. This doesn’t even touch the fact that, because there are more small than large mammals, there would be a passive tendency toward larger body sizes in mammals. Nor does it touch the methodological issues in determining body size for the rule (mass or length?). Nonetheless, Monroe and Bokma’s (2010) study showed that while there is a tendency for species to be larger than their ancestors, the difference was well under 1 percent. So the increase in body size is explained by an increase in variance in body size (passiveness), not drivenness.
Explaining the rule
I think there are two explanations: Either a methodological artifact or passive evolution. I will discuss both, and I will then give a constructive dilemma argument that articulates this position.
Monroe and Bokma (2010) showed that even when Cope’s rule is assumed, the ancestor-descendant increase in body size was a mere 0.4 percent. They further discussed methodological issues with the so-called rule, citing Solow and Wang (2008), who showed that Cope’s rule “appears” or not depending on which assumptions about body size are used. For example, Monroe and Bokma (2010) write:
If Cope’s rule is interpreted as an increase in the mean size of lineages, it is for example possible that body mass suggests Cope’s rule whereas body length does not. If Cope’s rule is instead interpreted as an increase in the median body size of a lineage, its validity may depend on the number of speciation events separating an ancestor-descendant pair.
…
If size increase were a general property of evolutionary lineages – as Cope’s rule suggests – then even if its effect were only moderate, 120 years of research would probably have yielded more convincing and widespread evidence than we have seen so far.
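The measurement-dependence Monroe and Bokma describe can be made concrete with a toy ancestor-descendant pair (the numbers and the mass proxy are invented, purely for illustration): a descendant that is shorter but stockier than its ancestor shows “Cope’s rule” under a mass-like measure while showing the reverse under a length measure.

```python
def mass_index(length, girth):
    """Crude mass proxy: mass scales roughly with length * girth**2
    (treating the body as cylinder-like); an illustrative assumption."""
    return length * girth ** 2

ancestor = {"length": 1.0, "girth": 1.0}
descendant = {"length": 0.9, "girth": 1.2}   # shorter but stockier (invented)

longer = descendant["length"] > ancestor["length"]            # False
heavier = mass_index(**descendant) > mass_index(**ancestor)   # True
print(f"Cope's rule by length: {longer}; by mass: {heavier}")
```

The same pair of fossils, scored two ways, gives two opposite verdicts on the “rule.”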
Gould (1997) suggested that Cope’s rule is a mere psychological artifact. But I think it’s deeper than that. Now that I have ruled out driven, progressive evolution as the explanation for body-size trends, I will provide my constructive dilemma argument.
The form of a constructive dilemma goes: (1) A V B. (2) If A, then C. (3) If B, then D. (C) C V D. P1 is a disjunction: there are two possible options, A and B. P2 and P3 are conditional statements that provide implications for each option. And C states that at least one of the consequents (C or D) must be true.
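That this form is valid can be checked mechanically by enumerating all truth assignments (a quick sketch of my own):

```python
from itertools import product

def constructive_dilemma_valid():
    """Exhaustively check that (A or B), (A -> C), (B -> D) entail (C or D)."""
    for a, b, c, d in product([False, True], repeat=4):
        premises = (a or b) and ((not a) or c) and ((not b) or d)
        if premises and not (c or d):
            return False  # a countermodel would make the form invalid
    return True

print(constructive_dilemma_valid())  # True: no countermodel exists
```

Since no assignment makes the premises true and the conclusion false, any argument of this form stands or falls solely on the truth of its premises.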
Now, Gould’s Full House argument can be formulated using either modus tollens or constructive dilemma:
(1) If evolution were a deterministic, teleological process, there would be a clear overall progression and a predetermined endpoint. (2) There is no predetermined endpoint or progression to evolution. (C) So evolution isn’t a deterministic or teleological process.
(1) Either evolution is a deterministic, teleological process (A) or it’s not (B). (2) If A, then there is a clear overall direction and a predetermined endpoint (C). (3) If B, then there is no overall direction or predetermined endpoint (D). (4) So either there is a clear overall direction (C), or there isn’t (D). (5) Not C. (6) Therefore, D, and hence B.
Or: (1) Life began at a relatively simple state (the left wall of complexity). (2) Evolution is influenced by a combination of chance events, environmental factors, and genetic variation. (3) Organisms may stumble in various directions along the path of evolution. (C) Evolution lacks a clear path or predetermined endpoint.
Now here is the overall argument, combining the methodological issues pointed out by Solow and Wang and the implications of passive evolution with Gould’s Full House argument:
(1) Either Cope’s rule is a methodological artifact (A), or it’s due to passive, not driven evolution (B). (2) If Cope’s rule is a methodological artifact (A), then different ways to measure body size (length or mass) can come to different conclusions. (3) If Cope’s rule is due to passive, not driven evolution (B), then it implies that larger body sizes simply accumulate over time without being actively driven by selective pressures. (4) Either evolution is a deterministic, teleological process (C), or it is not (D). (5) If C, then there would be a clear overall direction and predetermined endpoint in evolution (Gould’s argument). (6) If D, then there is no clear overall direction or predetermined endpoint in evolution (Gould’s argument). (7) Therefore, either there is a clear overall direction (C) or there isn’t (D) (Constructive Dilemma). (8) If there is a clear overall direction (C) in evolution, then it contradicts passive, not driven evolution (B). (9) If there isn’t a clear overall direction (D) in evolution, then it supports passive, not driven evolution (B). (10) Therefore, either Cope’s rule is due to passive evolution or it’s a methodological artifact.
Conclusion
Evolution is quite clearly passive and non-driven (Bonner, 2013). The fact of the matter is, as I’ve shown, evolution isn’t driven (progressive); it is passive, due to the drunken, random walk that organisms take from the minimum left wall of complexity. The discussions of developmental plasticity and directed mutation further show that evolution can’t be progressive or driven. Organismal body plans had nowhere to go but up from the left wall of minimal complexity, and that means the increase in the variance of, say, body size is due to passive trends. Given the discussion here, we can draw one main inference: since evolution isn’t directed or progressive, the so-called Cope’s (Depéret’s) rule is either due to passive trends or a mere methodological artifact. The argument I have mounted for that claim is sound, and so we must accept that evolution is a random, drunken walk, not one of overall drivenness and progress; we must therefore look at the evolution of body size in this way.
Rushton tried to use the concept of evolutionary progress to argue that some races may be “more evolved” than other races, like “Mongoloids” being “more evolved” than “Caucasoids” who are “more evolved” than “Negroids.” But Rushton’s “theory” was merely a racist one, and it fails upon close inspection. Moreover, the claims Rushton made at the end of his book Race, Evolution, and Behavior don’t even work. (See here.) Evolution isn’t progressive, so we can’t logically state that one population group is more “advanced” or “evolved” than another. This is of course merely Rushton being racist with shoddy “explanations” used to justify it. (Like in Rushton’s long-refuted r/K selection theory, or Differential-K theory, where more “K-evolved” races are “more advanced” than others.)
Lastly, this argument I constructed based on the principles of Gould’s argument shows that there is no progress to evolution.
P1 The claim that evolutionary “progress” is real and not illusory is justified only if organisms deemed more “advanced” outnumber “lesser” organisms.
P2 There are more “lesser” organisms (bacteria/insects) on earth than “advanced” organisms (mammals/species of mammals).
C Therefore evolutionary “progress” is illusory.
The Theory of African American Offending versus Hereditarian Explanations of Crime: Exploring the Roots of the Black-White Crime Disparity
3450 words
Why do blacks commit more crime? Biological theories (racial differences in testosterone and the testosterone-aggression link, the AR gene, MAOA) are bunk. So how can we explain it? The Unnever-Gabbidon theory of African American offending (TAAO) (Unnever and Gabbidon, 2011)—on which blacks’ experience of racial discrimination and stereotypes increases criminal offending—has substantial empirical support. To understand black crime, we need to understand the unique black American experience. The theory not only explains African American criminal offending, it also makes predictions which were borne out in independent, empirical research. I will compare the TAAO with hereditarian claims about why blacks commit more crime (higher testosterone and testosterone-driven aggression, the AR gene, and MAOA). I will show that hereditarian theories make no novel predictions while the TAAO does. Then I will discuss recent research which has borne out the predictions that Unnever and Gabbidon made. I will conclude by offering suggestions on how to combat black crime.
The folly of hereditarianism in explaining black American offending
Hereditarians have three main explanations of black crime: (1) higher levels of testosterone and high levels of testosterone leading to aggressive behavior which leads to crime; (2) low activity MAOA—also known in the popular press as “the warrior gene”—could be more prevalent in some populations which would then lead to more aggressive, impulsive behavior; and (3) the AR gene and AR-CAG repeats with lower CAG repeats being associated with higher rates of criminal activity.
When it comes to (1), the evidence is mixed on which race has higher levels of testosterone (due to the low-quality studies that hereditarians cite for their claim). In fact, two recent studies showed that non-Hispanic blacks didn’t have higher levels of testosterone than other races (Rohrmann et al, 2007; Lopez et al, 2013). Contrast this with the classical hereditarian response that blacks indeed do have higher levels of testosterone than whites (Rushton, 1995)—using Ross et al (1986) to make the claim. (See here for my response on why Ross et al is not evidence for the hereditarian position.) Although Nyante et al (2012) showed a small increase in testosterone in blacks compared to whites and Mexican Americans using longitudinal data, the body of evidence shows small to no differences in testosterone between blacks and whites (Richard et al, 2014). So despite claims that “African-American men have repeatedly demonstrated serum total and free testosterone levels that are significantly higher than all other ethnic groups” (Alvarado, 2013: 125), such claims derive from flawed studies, and newer, more representative analyses show small to no differences in testosterone between blacks and whites.
Nevertheless, even if blacks had higher levels of testosterone than other races, this would still not explain racial differences in crime, since heightened aggression explains T increases; high T doesn’t explain heightened aggression. HBDers have cause and effect backwards for this relationship. Injecting individuals with supraphysiological doses of testosterone of 200 to 600 mg per week does not cause heightened anger or aggression (Tricker et al, 1996; O’Connor et al, 2002). If the hereditarian hypothesis on the relationship between testosterone and aggression were true, then we would see the opposite of what Tricker et al and O’Connor et al found. Thus hereditarians are wrong about racial differences in testosterone and wrong about causality in the T-aggression relationship: the actual relationship is aggression causing increases in testosterone. (But see Pope, Kouri, and Hudson, 2000, who show that a 600 mg dose of testosterone caused increased manic symptoms in some men, although in most men there was little to no change; there were 8 “responders” and 42 “non-responders.”)
When it comes to (2), the low-activity variant of MAOA is said to explain why its carriers have higher rates of aggression and violent behavior (Sohrabi, 2015; McSwiggin, 2017). Sohrabi shows that while the low-activity variant of MAOA is related to higher rates of aggression and violent behavior, the relationship is mediated by environmental effects. But MAOA, to quote Heine (2017), can be seen as the “everything but the kitchen sink gene,” since MAOA is correlated with so many different things. At the end of the day, we can’t blame “warrior genes” for violent, criminal behavior. The relationship isn’t so simple, so this doesn’t work for hereditarians either.
Lastly, when it comes to (3), due to the failure of (1), hereditarians tried looking to the AR gene, attempting to relate CAG repeat length to criminal behaviors. For instance, Geniole et al (2019) tried to argue that “Testosterone thus appears to promote human aggression through an AR-related mechanism.” Ah, the last gasps of explaining crime through testosterone. But there is no relationship between the number of CAG repeats and adolescent risk-taking, depression, dominance, or self-esteem (Vermeer, 2010; see also Valenzuela et al, 2022 on CAG repeats in men and women). So this, too, fails. (Also take a look at the just-so story on why African slave descendants are supposedly more sensitive to androgens; Aiken, 2011.)
Now that I have shown that the three main hereditarian explanations for higher black crime are false, I will show why blacks have higher rates of criminal offending than other races; the answer isn’t to be found in biology, but in sociology and criminology.
The Unnever-Gabbidon theory of African American criminal offending and novel predictions
In 2011, criminologists Unnever and Gabbidon published their book A Theory of African American Offending: Race, Racism, and Crime. In the book, they explain why they formulated the theory and why it doesn’t have any explanatory or predictive power for other races. That’s because it centers on the lived experiences of black Americans. In fact, the TAAO “incorporates the finding that African Americans are more likely to offend if they associate with delinquent peers but we argue that their inadequate reinforcement for engaging in conventional behaviors is related to their racial subordination” (Unnever and Gabbidon, 2011: 34). The TAAO focuses on the criminogenic effects of racism.
Our work builds upon the fundamental assumption made by Afrocentrists that an understanding of black offending can only be attained if their behavior is situated within the lived experiences of being African American in a conflicted, racially stratified society. We assert that any criminological theory that aims to explain black offending must place the black experience and their unique worldview at the core of its foundation. Our theory places the history and lived experiences of African American people at its center. We also fully embrace the Afrocentric assumption that African American offending is related to racial subordination. Thus, our work does not attempt to create a “general” theory of crime that applies to every American; instead, our theory explains how the unique experiences and worldview of blacks in America are related to their offending. In short, our theory draws on the strengths of both Afrocentricity and the Eurocentric canon. (Unnever and Gabbidon, 2011: 37)
Two kinds of racial injustices highlighted by the theory—racial discrimination and pejorative stereotyping—have empirical support. Blacks are more likely to express anger, exhibit low self-control and become depressed if they believe the racist stereotype that they’re violent. It’s also been studied whether or not a sense of racial injustice is related to offending when controlling for low self control (see below).
The core predictions of the TAAO and how they follow from it with references for empirical tests are as follows:
(Prediction 1) Black Americans with a stronger sense of racial identity are less likely to engage in criminal behavior than black Americans with a weak sense of racial identity. How does this prediction follow from the theory? TAAO suggests that a strong racial identity can act as a protective factor against criminal involvement. Those with a stronger sense of racial identity may be less likely to engage in criminal behavior as a way to cope with racial discrimination and societal marginalization. (Burt, Simons, and Gibbons, 2013; Burt, Lei, and Simons, 2017; Gaston and Doherty, 2018; Scott and Seal, 2019)
(Prediction 2) Experiencing racial discrimination increases the likelihood of black Americans engaging in criminal actions. How does this follow from the theory? TAAO posits that racial discrimination can lead to feelings of frustration and marginalization, and to cope with these stressors, some individuals may resort to committing criminal acts as a way to exert power or control in response to their experiences of racial discrimination. (Unnever, 2014; Unnever, Cullen, and Barnes, 2016; Herda, 2016, 2018; Scott and Seal, 2019)
(Prediction 3) Black Americans who feel socially marginalized and disadvantaged are more prone to committing crime as a coping mechanism and have weakened school bonds. How does this follow from the theory? TAAO suggests that those who experience social exclusion and disadvantage may turn to crime as a way to address their negative life circumstances and to regain a feeling of agency. (Unnever, 2014; Unnever, Cullen, and Barnes, 2016)
The data show that there is a racialized worldview shared by blacks, and that a majority of blacks believe their fate rests on what generally happens to black people in America. Around 38 percent of blacks report being discriminated against, and most blacks are aware of the stereotype of them as violent. (A more recent Pew report puts the figure higher, with roughly 8 in 10 blacks reporting that they have experienced racial discrimination.) Racial discrimination and belief in the racist stereotype that blacks are more violent are significant predictors of black arrests: the more blacks are discriminated against, and the more they believe the stereotype, the more likely they are to be arrested. Unnever and Gabbidon theorized that these factors are related not just to criminal offending but also to substance and alcohol abuse. They also hypothesized that racial injustices are related to crime because they increase the likelihood of experiencing negative emotions like anger and depression (Simons et al, 2002). It has been demonstrated experimentally that blacks who perceive racial discrimination, and who believe the racist stereotype that blacks are more violent, express less self-control. The negative emotions produced by racial discrimination predict the likelihood of committing crime and similar behavior, and blacks who have less self-control, who are angrier, and who are depressed have a higher likelihood of offending. Further, while controlling for self-control, anger, depression, and other variables, racial discrimination still predicts arrests and substance and alcohol abuse. Lastly, the experience of being black in a racialized society predicts offending even after controlling for other measures. This rules out the possibility that blacks are arrested more and perceive more racial injustice simply because of low self-control. (See Unnever, 2014 for the citations and arguments for these predictions.)
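The logic of “X predicts offending even after controlling for self-control” can be made concrete with a small simulation. The sketch below is purely illustrative (all effect sizes and prevalences are invented, not estimates from the studies cited above): it generates data in which discrimination raises offending risk independently of self-control, then stratifies on the control variable to show that the association survives within each stratum, which is the basic idea behind statistical control.

```python
import random

random.seed(42)

# Hypothetical simulation of "controlling for" a variable by stratification.
# All numbers are made up for the sketch, not estimates from the literature.
def simulate(n=20000):
    rows = []
    for _ in range(n):
        low_sc = random.random() < 0.3   # low self-control (the control variable)
        discr = random.random() < 0.4    # reports experiencing discrimination
        # offending risk: 5% baseline, +10 points for each factor, additively
        p = 0.05 + (0.10 if low_sc else 0.0) + (0.10 if discr else 0.0)
        rows.append((low_sc, discr, random.random() < p))
    return rows

def offend_rate(rows, low_sc, discr):
    sub = [offended for sc, d, offended in rows if sc == low_sc and d == discr]
    return sum(sub) / len(sub)

rows = simulate()
for sc in (False, True):
    print(f"low self-control={sc}: "
          f"rate with discrimination {offend_rate(rows, sc, True):.3f}, "
          f"without {offend_rate(rows, sc, False):.3f}")
```

Within both self-control strata, the discrimination group shows the higher offending rate, which is what “predicts offending after controlling for self-control” means; a real analysis would use regression with covariates rather than simple stratification.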
The TAAO also has more empirical support than racialized general strain theory (RGST) (Isom, 2015).
So the predictions of the theory are: racial discrimination is a contributing factor to offending; a strong racial identity is a protective factor, while a weak racial identity is associated with a higher likelihood of engaging in criminal activity; blacks who feel socially marginalized turn to crime as a response to their disadvantaged social position; poverty, education, and neighborhood conditions play a significant role in black American offending rates, and these factors interact with racial identity and discrimination to influence criminal behavior; and lastly, the criminal justice system’s response to black American offenders may be influenced by their racial identity and social perceptions, potentially leading to disparities in treatment compared to other racial groups.
Ultimately, the unique experiences of black Americans explain why they commit more crime. Given those unique experiences, there needs to be a race-centric theory of crime for black Americans, and this is exactly what the TAAO is. The predictions that Unnever and Gabbidon (2011) derived from the TAAO have independent empirical support, which is far more than hereditarian explanations can claim about why blacks commit more crime.
One way, which follows from the theory, to insulate black youth from discrimination and prejudice is racial socialization, the process by which “thoughts, ideas, beliefs, and attitudes regarding race and racism are communicated across generations” (Burt, Lei, & Simons, 2017; Hughes, Smith, et al., 2006; Lesane-Brown, 2006) (Said and Feldmeyer, 2022).
But also related to the racial socialization hypothesis is the question “Why don’t more blacks offend?” Gaston and Doherty (2018) set out to answer this question, and found that positive racial socialization buffered the effects of weak school bonds on adolescent substance abuse and criminal offending for males, but not for females. This confirms yet another prediction of the theory: that weak school bonds increase criminal offending.
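A “buffering” claim of this kind is a moderation effect: weak school bonds raise offending risk, but positive racial socialization shrinks that effect. The sketch below simulates such a pattern with invented effect sizes (illustrative only, not the estimates from Gaston and Doherty) and checks that the school-bond effect is smaller in the buffered group.

```python
import random

random.seed(7)

# Illustrative sketch of a buffering (moderation) effect. The effect
# sizes are invented for the illustration, not taken from any study.
def simulate(n=20000):
    rows = []
    for _ in range(n):
        weak_bonds = random.random() < 0.5
        pos_soc = random.random() < 0.5   # positive racial socialization
        # weak bonds add 12 points of risk, cut to 3 points when buffered
        bond_effect = (0.03 if pos_soc else 0.12) if weak_bonds else 0.0
        rows.append((weak_bonds, pos_soc, random.random() < 0.05 + bond_effect))
    return rows

def rate(rows, wb, ps):
    sub = [offended for w, p, offended in rows if w == wb and p == ps]
    return sum(sub) / len(sub)

rows = simulate()
gap_unbuffered = rate(rows, True, False) - rate(rows, False, False)
gap_buffered = rate(rows, True, True) - rate(rows, False, True)
print(f"weak-bond effect without positive socialization: {gap_unbuffered:.3f}")
print(f"weak-bond effect with positive socialization:    {gap_buffered:.3f}")
```

In regression terms, this corresponds to a negative interaction term between weak school bonds and positive racial socialization; the buffered gap coming out smaller than the unbuffered gap is what a moderation test looks for.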
Gaston and Doherty (2018) argue that black Americans face racial discrimination that whites in general just do not face:
Empirical studies have pointed to potential explanations of racial disparities in violent crimes, often citing that such disparities reflect Black Americans’ disproportionate exposure to criminogenic risk factors. For example, Black Americans uniquely experience racial discrimination—a robust correlate of offending—that White Americans generally do not experience (Burt, Simons, & Gibbons, 2012; Caldwell, Kohn-Wood, Schmeelk-Cone, Chavous, & Zimmerman, 2004; Simons, Chen, Stewart, & Brody, 2003; Unnever, Cullen, Mathers, McClure, & Allison, 2009). Furthermore, Black Americans are more likely to face factors conducive to crime such as experiencing poor economic conditions and living in neighborhoods characterized by concentrated disadvantage.
They conclude that:
The support we found for ethnic-racial socialization as a crime-reducing factor has important implications for broader criminological theorizing and practice. Our findings show the value of race-specific theories that are grounded in the unique experiences of that group and focus on their unique risk and protective factors. African Americans have unique pathways to offending with racial discrimination being a salient source of offending. While it is beyond the scope of this study to determine whether TAAO predicts African American offending better than general theories of crime, the general support for the ethnic-racial socialization hypothesis suggests the value of theories that account for race-specific correlates of Black offending and resilience.
…
TAAO draws from the developmental psychology literature and contends, however, that positive ethnic-racial socialization offers resilience to the criminogenic effect of weak school bonds and is the main reason more Black Americans do not offend (Unnever & Gabbidon, 2011, p. 113, 145).
Thus, given that blacks face racial discrimination that whites in general do not face, and given that racial discrimination has been shown to increase criminal offending, it follows that to decrease criminal offending we need to decrease racial discrimination. And since racism is borne of ignorance and low education, it follows that education can decrease racist attitudes and, along with them, decrease crime (Hughes et al, 2007; Kuppens et al, 2014; Donovan, 2019, 2022).
Even partial tests of the TAAO have shown that racial discrimination is related to offending, and I would say it is well established that positive ethnic-racial socialization acts as a protective factor for blacks, which also explains why more blacks don’t offend (see Gaston and Doherty, 2018). It is also known that bad (ineffective) parenting increases the risk of lower self-control (Unnever, Cullen, and Agnew, 2006). Black Americans share a racialized worldview, and they view the US as racist due to their personal lived experiences with racism (Unnever, 2014).
The TAAO and situationism
Looking at what the TAAO is and the predictions it makes, we can see that the TAAO is a situationist theory. Situationism is a psychological-philosophical view which emphasizes the influence of the situation on human behavior: it posits that people’s actions and decisions are primarily shaped by the situational context in which they find themselves. On this view, situational cues present in the immediate environment can trigger specific behavioral responses; understanding the situation a person is in is therefore essential to explaining why they act the way they do; and behavior is context-dependent and may vary, sometimes unpredictably, across different situations. Although it may seem that situationism conflicts with action theory, it doesn’t. Action theory explains how people form intentions and make decisions within specific situations, addressing the how and why of behavior. Situationism complements action theory by addressing the where and when of behavior from an external, environmental perspective.
The TAAO suggests that experiencing racial discrimination can contribute to criminal involvement as a response to social marginalization, and situationism provides a framework for exploring how specific environmental stressors, instances of discrimination, or other situational factors can trigger criminal behavior in context. While the TAAO focuses on the historical and structural factors behind why blacks commit more crime, adding situationism shows how the situational context interacts with those historical and structural factors to explain black American criminal behavior.
Thus, combining situationism and the TAAO can lead to novel predictions, such as: predictions of how black Americans, when faced with specific discriminatory situations, may be more or less likely to engage in criminal behavior based on their perception of the situation; predictions about how immediate peer dynamics moderate the relationship between structural factors like discrimination and criminal behavior in the black American community; and predictions about how criminal responses vary with different types of situational cues (like encounters with law enforcement, experiences of discrimination, and economic stress) within the broader context of the TAAO’s historical-structural framework.
Why we should accept the TAAO over hereditarian explanations of crime
Overall, I’ve explained why hereditarian explanations of crime fail. They fail because when looking at the recent literature, the claims they make just do not hold up. Most importantly, as I’ve shown, hereditarian explanations lack empirical support, and the logic they try to use in defense of them is flawed.
We should accept the TAAO over hereditarianism because of its empirical validity: the TAAO is grounded in empirical research, and its predictions and hypotheses have been subjected to empirical tests and found to hold. The TAAO also recognizes that crime is a complex phenomenon influenced by factors like historical and contemporary discrimination, socioeconomic conditions, and the overall situational context. It addresses the broader societal issues behind disparities in crime, which makes it more relevant for policy development and social interventions, acknowledging that to address these disparities we must address the contemporary and historical factors which lead to crime. The TAAO also does not stigmatize or stereotype, while it does emphasize the situational and contextual factors which lead to criminal activity. Hereditarian theories, by contrast, can feed stereotypes and discrimination, and since hereditarian explanations are false, we should reject them (as I’ve explained above). Lastly, the TAAO has the power to generate specific, testable predictions which enjoy clear empirical support. To claim that hereditarian explanations are true while disregarding the empirical power of the TAAO is therefore irrational, since hereditarian explanations do not generate novel predictions while the TAAO does.
Conclusion
I have contrasted the TAAO with hereditarian explanations of crime. I showed that the three main hereditarian explanations (racial differences in testosterone and testosterone-caused aggression, the AR gene, and MAOA) all fail. I have also shown that the TAAO is grounded in empirical research and that it generates specific, testable predictions on how we can address racial differences in crime. On the other hand, hereditarian explanations lack empirical support, specificity, and causality, which makes them ill-suited for generating testable predictions and informing effective policies. The TAAO’s complexity, empirical support, and potential for addressing real-world issues make it a more comprehensive framework for understanding and attempting to ameliorate racial crime disparities, in contrast to the genetic determinism of hereditarianism. In fact, I was unable to find any hereditarian response to the TAAO, which should be telling on its own.
Overall, I have shown that the predictions Unnever and Gabbidon generated from the TAAO enjoy empirical support, and that hereditarian explanations fail, so we should reject hereditarian explanations and accept the TAAO, for the considerations above. I have also shown that the TAAO makes actionable policy recommendations: since racism is borne of ignorance and education can decrease racial bias, to decrease criminal offending we need to educate more.