NotPoliticallyCorrect


Author Archives: RaceRealist

The Multilingual Encyclopedia: On the Context-Dependency of Human Knowledge and Intelligence

3250 words

Introduction

Language is the road map of a culture. It tells you where its people come from and where they are going. – Rita May Brown

Communication bridges gaps. The words we use and the languages we speak along with the knowledge that we share serve as a bridge to weave together human culture and intelligence. So imagine a multilingual encyclopedia that encompasses the whole of human knowledge, a book of human understanding from the sciences, the arts, history and philosophy. This encyclopedia is a testament to the universal nature of human knowledge, but it also shows the interplay between culture, language, knowledge and human intelligence.

In my most recent article, I argued that human intelligence is shaped by cultural and social context—that it is formed through interactions within cultural and social contexts. So here I will argue that: there are necessary aspects of knowledge; knowledge is context-dependent; language, culture, and knowledge interact with specific contexts to form intelligence, mind, and rationality; and my multilingual encyclopedia analogy shows that while there is what I term “universal core knowledge”, it becomes context-dependent based on the needs of different cultures, and I will also use this example to again argue against IQ. Finally, I will conclude that the arguments in this article and the previous one show how the mind is socially formed on the basis of the necessary physical substrates, but that socio-cultural contexts are what is necessary for human intelligence, mindedness, and rationality.

Necessary aspects of knowledge

There are two necessary and fundamental aspects of knowledge and thought—cognition and the brain. The brain is a necessary pre-condition for human mindedness, and cognition is influenced by culture, although my framework posits that basic cognitive processes play a necessary role in human thought, just as the brain serves as the necessary physical substrate for those processes. While cognition and knowledge are intertwined, they’re not synonymous. To cognize is to actively think about something, meaning it is an action. There is a minimal structure to cognition, comprising processes like pattern recognition, categorization, sequential processing, sensory integration, associative memory, and selective attention. These processes are necessary; they are inherent in “cognition” and they set the stage for more complex mental abilities, which is what Vygotsky was getting at with his theory of the social formation of mind.

Individuals do interpret their experiences through a cultural lens, since culture provides the framework for understanding, categorizing, and making sense of experiences. But I also recognize the role of individual experiences and personal interpretations. So while cultural lenses may shape initial perceptions, people can also think critically and reflect on their interpretations over time due to the differing experiences they have.

Fundamental, necessary aspects of knowledge like sensory perception are also pivotal. By “fundamental”, I mean “necessary”—that is, we couldn’t think or cognize without the brain, and it therefore follows that we couldn’t think without cognition. These things are necessary for thinking, language, culture, and eventually intelligence, but what is sufficient for mind, thinking, language, and rationality are the specific socio-cultural interactions and knowledge formulations that we get by being immersed in linguistically-mediated cultural environments.

The context-dependence of knowledge

“Context-dependent knowledge” refers to information or understanding that can take on different meanings or interpretations based on the specific context in which it is applied or used. But I also mean something else by this: I mean that an individual’s performance on IQ tests is influenced by their exposure to specific cultural, linguistic, and contextual factors. This means that IQ tests aren’t culture-neutral or universally applicable; rather, they are biased towards people who share similar class-cultural backgrounds and experiences.

There is something about humans that allows us to be receptive to cultural and social contexts and so to form mind, language, rationality, and intelligence (and I would say that something is the immaterial self). But I wouldn’t call it “innate,” since so-called “innate” traits need certain environmental contexts to be able to manifest themselves. So-called “innate” traits are experience-dependent (Blumberg, 2018).

So while humans actively adapt, shape, and create cultural knowledge through cultural processes, knowledge acquisition isn’t solely mediated by culture. Individual experiences matter, as do interactions with the environment along with the accumulation of knowledge from various cultural contexts. So human cognitive capacity isn’t entirely a product of culture, and human cognition allows for critical thinking, creative problem solving, along with the ability to adapt cultural knowledge.

Finally, knowledge acquisition is cumulative—and by this, I mean it is qualitatively cumulative. As individuals acquire knowledge from their cultural contexts, individual experiences, etc., this knowledge becomes internalized in their cognitive framework. They can then build on this existing knowledge to further adapt and shape culture.

The statement “knowledge is context-dependent” is a description of the nature of knowledge itself. It means that knowledge can take on different meanings or interpretations in different contexts. So when I say “knowledge is context-dependent”, I am acknowledging that this applies in all contexts; I’m discussing the contextual nature of knowledge itself.

Examples of the context-dependence of universal knowledge include how English-speakers use the “+” sign for addition, while the Chinese use “加” (“jiā”). So while the fundamental principle is the same, these two cultures have different symbols and notations to signify the operation. Furthermore, there are differences in thinking between Eastern and Western cultures, where thinking is more analytic in Western cultures and more holistic in Eastern cultures (Yates and de Oliveira, 2016; also refer to their paper for more differences between cultures in decision-making processes). There are also differences between cultures in visual attention (Jurkat et al, 2016). While this isn’t “knowledge” per se, it does attest to how cultures differ in their perceptions and cognitive processes, which underscores the broader idea that cognition, including visual attention, is influenced by cultural contexts and social situations. Even the brain’s neural activity (the brain’s physiology) is context-dependent—thus culture is context-dependent (Northoff, 2013).

But when it comes to culture, how does language shape the meaning of culture, and with it intelligence and its development?

Language, culture, knowledge, and intelligence

Language plays a pivotal role in shaping the meaning of culture, and by extension, intelligence and its development. Language is not only a way to communicate, but it is also a psychological tool that molds how we think, perceive, and relate to the world around us. Therefore, it serves as the bridge between individual cognition and shared cultural knowledge, while acting as the interface through which cultural values and norms are conveyed and internalized.

So language allows us to encode and decode cultural information, which is how, then, culture is generationally transmitted. Language provides the framework for expressing complex thoughts, concepts, and emotions, which enables us to discuss and negotiate the cultural norms that define our societies. Different languages offer unique structures for expressing ideas, which can then influence how people perceive and make sense of their cultural surroundings. And important for this understanding is the fact that a human can’t have a thought unless they have language (Davidson, 1982).

Language is also intimately linked with cognitive development. Under Vygotsky’s socio-historical theory of learning and development, language is a necessary cognitive tool for thought and the development of higher mental functions. So language not only reflects our cognitive abilities, it also plays an active role in their formation. Thus, through social interactions and linguistic exchanges, individuals engage in a dynamic process of cultural development, building on the foundation of their native language and culture.

Feral children and deaf linguistic isolates show this dictum: that there is a critical window in which language can be acquired, and thus the importance of human culture in human development (Vyshedskiy, Mahapatra, and Dunn, 2017). Cases of feral children, then, show us how children would develop without human culture, and show the importance of early language hearing and use for normal brain development. In fact, this shows how social isolation has negative effects on children, and since human culture is inherently social, it shows the importance of human culture and society in forming and nurturing the formation of mind, intelligence, rationality, and knowledge.

So the relationship between language, culture, and intelligence is intricate and reciprocal. Language allows us to express ourselves and our cultural knowledge while shaping our cognitive processes and influencing how we acquire and express our intelligence. On the other hand, intelligence—as shaped by cultural contexts—contributes to the diversification of language and culture. This interplay underscores how language impacts our understanding of intelligence within its cultural framework.

Furthermore, in my framework, intelligence isn’t a static, universally-measurable trait, but a dynamic and constantly-developing trait shaped by social and cultural interactions along with individuals’ experiences, and so intentionality is inherent in it. Moreover, in the context of acquiring cultural knowledge, Vygotsky’s ZPD concept shows that individuals can learn and internalize things outside of their current toolkit as guided by more knowledgeable others (MKOs). It also shows that learning and development occur mostly in this zone between what someone can do alone and what someone can do with help, which then allows them to expand their cognitive abilities and cultural understanding.

Cultural and social exposure

Cultural and social exposure are critical to my conception of intelligence. Because, as we can see in cases of feral children, there is a clear developmental window of opportunity to gain language and to think and act like a human, owing to the interaction of the individual with human culture. The base cognitive capacities that we are born with and develop through infancy, toddlerhood, and childhood into adulthood aren’t just inert, passive things that merely receive information, after which we gain minds and intelligence and become human. Critically, they need to be nurtured through culture and socialization. The infant needs the requisite experiences doing certain things to be able to learn how to roll over, crawl, and finally walk. They need to be exposed to different things in order to be properly enculturated into the culture they were born into. So while we are born into cultural and linguistically-mediated environments, it’s these environments—along with what the individual does themselves when they finally learn to walk, talk, and gain their mind, intelligence, and rationality—that shape individual humans, the knowledge they gain, and ultimately their intelligence.

If humans possess foundational cognitive capacities that aren’t entirely culturally determined or influenced, and culture serves as a mediator in shaping how these capacities are expressed and applied, then it follows that culture influences cognitive development while cognitive abilities provide the foundation for being able to learn at all, as well as being able to speak and to internalize the culture and language they are exposed to. So if culture interacts dynamically with cognitive capacities, and crucial periods exist during which cultural learning is particularly influential (cases of feral children), then it follows that early cultural exposure and socialization are critical. So it follows that my framework acknowledges both cognitive capacities and cultural influences in shaping human cognition and intelligence.

In his book Vygotsky and the Social Formation of Mind, Wertsch (1985) noted that Vygotsky didn’t discount the role of biology (as in development in the womb), but held that beyond a certain point, biology can no longer be viewed as the sole or even primary force of change for the individual, and that the explanation necessarily shifts to a sociocultural one:

However, [Vygotsky] argued that beyond a certain point in development, biological forces can no longer be viewed as the sole, or even the primary, force of change. At this point there is a fundamental reorganization of the forces of development and a need for a corresponding reorganization in the system of explanatory principles. Specifically, in Vygotsky’s view the burden of explanation shifts from biological to social factors. The latter operate within a given biological framework and must be compatible with it, but they cannot be reduced to it. That is, biological factors are still given a role in this new system, but they lose their role as the primary force of change. Vygotsky contrasted embryological and psychological development on this basis:

The embryological development of the child … in no way can be considered on the same level as the postnatal development of the child as a social being. Embryological development is a completely unique type of development subordinated to other laws than is the development of the child’s personality, which begins at birth. Embryological development is studied by an independent science—embryology, which cannot be considered one of the chapters of psychology … Psychology does not study heredity or prenatal development as such, but only the role and influence of heredity and prenatal development of the child in the process of social development. ([Vygotsky] 1972, p. 123)

The multilingual encyclopedia

Imagine a multilingual encyclopedia that encompasses knowledge of multiple disciplines from the sciences to the humanities to religion. This encyclopedia has what I term universal core knowledge. This encyclopedia is maintained by experts from around the world and is available in many languages. So although the information in the encyclopedia is written in different languages and upheld by people from different cultures, fundamental scientific discoveries, historical events and mathematical theorems remain constant across all versions of the encyclopedia. So this knowledge is context-independent because it holds true no matter the language it’s written in or the cultural context it is presented in. But the encyclopedia’s entries are designed to be used in specific contexts. The same scientific principles can be applied in labs across the world, but the specific experiments, equipment and cultural practices could vary. Moreover, historical events could be studied differently in different parts of the world, but the events themselves are context-independent.

So this thought experiment challenges the claim that context-independent knowledge requires an assertion of absolute knowledge. Context-independent knowledge exists in the encyclopedia, but it isn’t absolute. It’s merely a collection of universally-accepted facts, principles and theories that are applied in different contexts taking into account linguistic and cultural differences. Thus the knowledge in the encyclopedia is context-independent in that it remains the same across the world, across languages and cultures, but it is used in specific contexts.

Now, likening this to IQ tests is simple. When I say that “all IQ tests are culture-bound, and this means that they’re class-specific”, this is a specific claim. What this means, in my view, is that people grow up in different class-cultural environments, and so they are exposed to different knowledge bases and kinds of knowledge. Since they are exposed to different knowledge bases and kinds of knowledge, when test time comes, if they haven’t been exposed to the knowledge bases and kinds of knowledge on the test, they necessarily won’t score as high as someone who was immersed in them. Cole’s (2002) argument that all tests are culture-bound is true. Thus IQ tests aren’t culture-neutral; they are all culture-bound, and culture-neutral tests are an impossibility. This further buttresses my argument that intelligence is shaped by the social and cultural environment, underscoring the idea that the specific knowledge bases and cognitive resources that individuals are exposed to within their unique socio-cultural contexts play a pivotal role in the expression and development of their cognitive abilities.

IQ tests are mere cultural artifacts. So IQ tests, like the entries in the multilingual encyclopedia, are not immune to cultural biases. So although the multilingual encyclopedia has universal core knowledge, the way that the information is presented in the encyclopedia, like explanations and illustrations, would be culturally influenced by the authors/editors of the encyclopedia. Remember—this encyclopedia is an encyclopedia of the whole of human knowledge written in different languages, seen through different cultural lenses. So different cultures could have ways of explaining the universal core knowledge or illustrating the concepts that are derived from them.

So IQ tests, just like the entries in the encyclopedia, are only usable in certain contexts. But while the entries in the encyclopedia could be usable in more than one context, there is a difference for IQ testing. The tests are created by people from a narrow social class, and so the items on them are therefore class-specific. This then results in cultural biases, because people from different classes and cultures are exposed to varying knowledge bases, so people will be differentially prepared for test-taking on this basis alone. So the knowledge that people are exposed to based on their class membership, or even on different cultures within America or an immigrant culture, would influence test scores. So while there is universal core knowledge, and some of this knowledge may be on IQ tests, the fact is that different classes and cultures are exposed to different knowledge bases, and that’s why they score differently—the specific language and numerical skills on IQ tests are class-specific (Brito, 2017). I have noted how culturally-dependent IQ tests are for years, and this interpretation is reinforced when we consider knowledge and its varying interpretations found in the multilingual encyclopedia, which then highlights the intricate relationship between culture, language, and IQ. This then serves to show that IQ tests are mere knowledge tests—class-specific knowledge tests (Richardson, 2002).

So my thought experiment shows that while there are fundamental scientific discoveries, historical events and mathematical theorems that remain constant throughout the world and across different languages and cultures, the encyclopedia’s entries are designed to be used in specific contexts. So the multilingual encyclopedia thought experiment supports my claim that even when knowledge is context-independent (like that of scientific discoveries, historical facts), it can become context-dependent when it is used and applied within specific cultural and linguistic contexts. This, then, aligns with the part of my argument that knowledge is not entirely divorced from social, cultural and contextual influences.

Conclusion

The limitations of IQ tests become evident when we consider how individuals produce and acquire knowledge and the cultural and linguistic diversity and contexts that define our social worlds. The analogy of the multilingual encyclopedia shows that while certain core principles remain constant, the way that we perceive and apply knowledge is deeply entwined within the cultural and social contexts in which we exist. This dynamic relationship between culture, language, knowledge and intelligence, then, underscores the need to recognize the social formation of mind and intelligence.

Ultimately, human socio-cultural interactions, language, and the knowledge we accumulate together mold our understanding of intelligence and how we acquire it. The understanding that intelligence arises through these multifaceted exchanges and interactions within a social and cultural framework points to a more comprehensive perspective. So by acknowledging the vital role of culture and language in the formation of human intelligence, we not only deconstruct the limitations of IQ tests, but we also lay the foundation for a more encompassing way of thinking about what it truly means to be intelligent, and how it is shaped and nurtured by our social lives in our unique cultural contexts and the experiences that we have.

Thus, to truly grasp the essence of human intelligence, we don’t need IQ tests, and we certainly don’t need claims that genes cause IQ or psychological traits and thereby make certain people or groups more intelligent than others; we have to embrace the fact that human intelligence thrives within the web of social and cultural influences and interactions, which then collectively form what we understand as the social formation of mind.

Intelligence without IQ: Towards a Non-IQist Definition of Intelligence

3000 words

Introduction

In the disciplines of psychology and psychometrics, intelligence has long been the subject of study, with researchers attempting to reduce it to a number—whatever a class-biased test spits out when an individual takes an IQ test. But what if intelligence resists quantification, such that we can’t state that IQ tests put a number to one’s intelligence? The view I will present here will conceptualize intelligence as a psychological trait, and since it’s a psychological trait, it’s then resistant to being reduced to anything physical and it’s also resistant to quantification. I will draw on Vygotsky’s socio-cultural theory of learning and development and his emphasis on the role of culture, social interactions, and cultural tools in shaping intelligence, and then I will explain that Vygotsky’s theory supports the notion that intelligence is socially and contextually situated. I will then draw on Ken Richardson’s view that intelligence is a socially dynamic trait that’s irreducible, created by sociocultural tools.

All in all, the definition that I will propose here will be irrelevant to IQ. Although I do conceptualize psychological traits as irreducible, it is obvious that IQ tests are class-specific knowledge tests—that is, they are biased against certain classes, and so it follows that they are biased for certain classes. But the view that I will articulate here suggests that intelligence is a complex and multifaceted construct that is deeply influenced by cultural and social factors and that resists quantification because intentionality is inherent in it. And I don’t need to posit a specified measured object, object of measurement, and measurement unit for my conception, because I’m not claiming measurability.

Vygotsky’s view

Vygotsky is most well-known for his concepts of private speech, more knowledgeable others, and the zone of proximal development (ZPD). Intelligence involves the internalization of private speech, where individuals engage in a self-directed dialogue to solve problems and guide their actions. This internalized private speech then represents an essential aspect of one’s cognitive development, and reflects an individual’s ability to think and reason independently.

Intelligence is then nurtured through interactions with more knowledgeable others (MKOs) in a few ways. MKOs are individuals who possess a deeper understanding or expertise in specific domains. MKOs provide guidance, support, and scaffolding, helping individuals to reach higher levels of cognitive functioning and problem solving.

Along with MKOs, the ZPD is a crucial aspect in understanding intelligence. It represents a range of tasks that individuals can’t perform independently, but can achieve with guidance and support—it is the “zone” where learning and cognitive development take place. So intelligence isn’t only about what one can do alone, but also what one can achieve with the assistance of an MKO. Thus, in this context, intelligence is seen as a dynamic process of development where individuals continuously expand their ZPD through sociocultural interactions. So MKOs play a pivotal role in facilitating learning and cognitive development by providing the necessary help to individuals within their ZPD. The ZPD concept underscores the idea that learning is most effective when it occurs in this zone, where the learner is neither too challenged nor too comfortable, but is guided by an MKO to reach higher levels of competence in what they’re learning.

So the takeaway from this discussion is this: Intelligence isn’t merely a product of individual cognitive abilities; it is deeply influenced by cultural and social interactions. It encompasses the capacity for private speech, which demonstrates an individual’s capacity to think and reason independently. It also involves learning and development as facilitated by MKOs, who contribute to an individual’s cognitive growth. And the ZPD underscores the importance of sociocultural guidance in shaping and expanding an individual’s intelligence, while reflecting the dynamic and collaborative nature of cognitive development within the sociocultural context. So intelligence, as understood here, is inseparable from Vygotsky’s concepts of private speech, more knowledgeable others, and the ZPD, and it highlights the dynamic interplay between individual cognitive processes and sociocultural interactions in the development of intelligence.

Davidson (1982) stated that “Neither an infant one week old nor a snail is a rational creature. If the infant survives long enough, he will probably become rational, while this is not true of the snail.” And on Vygotsky’s theory, the infant becomes rational—that is, intelligent—by interacting with MKOs and internalizing private speech when they learn to talk and think in cultural contexts in their ZPD. Infants quite clearly have the capacity to become rational, and they begin to become rational through interactions with MKOs and caregivers who guide their cognitive growth within their ZPD. This perspective, then, highlights the role of social and cultural influences in the development of infants’ intelligence and their becoming rational creatures. Children are born into both cultural and linguistically-mediated environments, which is put well by Vasileva and Balyasnikova (2019):

Based on the conceptualization of cultural tools by Vygotsky (contrary to more traditional socio-cultural schools), it follows that a child can be enculturated from birth. Children are not only born in a human-created environment, but in a linguistically mediated environment that becomes internalized through development.

Richardson’s view

Ken Richardson has been a critic of IQ testing since the 1970s, being one editor of the volume Race and Intelligence: The Fallacies Behind the Race-IQ Controversy. He has published numerous books critiquing the concept of IQ, most recently Understanding Intelligence (Richardson, 2022). (In fact, Richardson’s book was what cured me of my IQ-ist delusions and set me on the path to DST.) Nonetheless, Richardson (2017: 273) writes:

Again, these dynamics would not be possible without the co-evolution of interdependencies across levels: between social, cognitive, and affective interactions on the one hand and physiological and epigenetic processes on the other. As already mentioned, the burgeoning research areas of social neuroscience and social epigenetics are revealing ways in which social/cultural experiences ripple through, and recruit, those processes.

For example, different cognitive states can have different physiological, epigenetic, and immune-system consequences, depending on social context. Importantly, a distinction has been made between a eudaimonic sense of well-being, based on social meaning and involvement, and hedonic well-being, based on individual pleasure or pain. These different states are associated with different epigenetic processes, as seen in the recruitment of different transcription factors (and therefore genes) and even immune system responses. All this is part of the human intelligence system.

In that way human evolution became human history. Collaboration among brains and the emergent social cognition provided the conceptual breakout from individual limits. It resulted in the rapid progress seen in human history from original hunter-gatherers to the modern, global, technological society—all on the basis of the same biological system with the same genes.

So intelligence emerges from the specific activities, experiences, and resources that individuals encounter throughout their development. Richardson’s view, too, is a Vygotskian one. And like Vygotsky, he emphasizes the significant cultural and social aspects in shaping human intelligence. He rejects the claim that human intelligence is reducible to a number (on IQ tests), genes, brain physiology etc.

Human intelligence cannot be divorced from the sociocultural context in which it is embedded and operates. So in this view, intelligence is not “fixed” as the genetic-reductionist IQ-ists would like you to believe; instead it can evolve and adapt over time in response to learning, the environment, and experiences. Indeed, this is the basis for his argument on the intelligent developmental system. Richardson (2012) even argues that “IQ scores might be more an index of individuals’ distance from the cultural tools making up the test than performance on a singular strength variable.” And due to what we know about the inherent bias in the items on IQ tests (how they’re basically middle-class cultural knowledge tests), it seems that Richardson is right here. Richardson (1991; cf 2001) even showed that when Raven’s Progressive Matrices items were couched in familiar contexts, children were able to complete them, even though the re-built items followed the exact same rules as the abstract Raven’s items. So cultural context matters for these kinds of items even when the underlying rules are identical.

Returning to the concept of cultural tools that Richardson brought up in the previous quote (which is derived from Vygotsky’s theory): cultural tools encompass language, knowledge, and problem-solving abilities which are culturally-specific and influenced by that culture. These tools are embedded in IQ tests, influencing the problems presented and the types of questions. Thus, it follows that if one is exposed to different psychological and cultural tools (basically, if one is exposed to knowledge bases different from the test’s), then they will score lower on a test compared to another person who is exposed to the item content and structure of the test. So individuals who are more familiar with the cultural references, language patterns, and knowledge will score better than those who aren’t. Of course, there is still room here for differences in individual experiences, and these differences influence how individuals approach problem solving on the tests. Thus, Richardson’s view highlights that IQ scores can be influenced by how closely aligned an individual’s experiences are with the cultural tools that are embedded in the test. He has also argued that non-cognitive, cultural, and affective factors explain why individuals score differently on IQ tests, with IQ not measuring the ability for complex cognition (Richardson, 2002; Richardson and Norgate, 2014, 2015).

So contrary to how IQ-ists want to conceptualize intelligence (as something static, fixed, and genetic), Richardson’s view is more dynamic, and looks to the cultural and social context of the individual.

Culture, class, and intelligence

Since I have conceptualized intelligence as a socially embedded, culturally-influenced, and dynamic trait, class and culture are deeply intertwined in my conception of intelligence. My definition recognizes that intelligence is shaped by cultural contexts. Culture provides different tools (cultural and psychological) which then develop an individual’s cognitive abilities. Language is a critical cultural (also psychological) tool which shapes how individuals think and communicate. So intelligence, in my conception and definition, encompasses the ability to effectively use these cultural tools. Furthermore, individuals from different cultures may develop unique problem-solving strategies which are embedded in their cultural experiences.

Social class influences access to educational and cultural resources. Higher social classes often have greater access to quality education, books, and cultural experiences, and this can then impact an individual’s cognitive development and intelligence. My definition also highlights the limitations of reductionist approaches like IQ tests. It has been well-documented that IQ tests have class-specific knowledge and skills on them, and they also include knowledge and scenarios which are more familiar to individuals from certain social and cultural backgrounds. This bias, then, leads to disparities in IQ scores due to the nature of IQ tests and how the tests are constructed.

A definition of intelligence

Intelligence: Noun

Intelligence, as a noun, refers to the dynamic cognitive capacity—characterized by intentionality—possessed by individuals. It is characterized by a connection to one’s social and cultural context. This capacity includes a wide range of cognitive abilities and skills, reflecting the multifaceted nature of human cognition. This, then, shows that only humans are intelligent, since intentionality is a human-specific ability—we humans are minded beings, and minds give rise to and allow intentional action.

A fundamental aspect of intelligence is intentionality, which signifies that cognitive processes are directed towards specific goals, problem solving, or understanding within the individual’s social and cultural context. So intelligence is deeply rooted in one’s cultural and social context, making it socially embedded. It’s influenced by cultural practices, social interactions, and the utilization of cultural tools for learning and problem solving. So this dynamic trait evolves over time as individuals engage with their environment and integrate new cultural and social experiences into their cognitive processes.

Intelligence is the dynamic capacity of individuals to engage effectively with their sociocultural environment, utilizing a diverse range of cognitive abilities (psychological tools), cultural tools, and social interactions. Richardson’s perspective emphasizes that intelligence is multifaceted and not reducible to a single numerical score, acknowledging the limits of IQ testing. Vygotsky’s socio-cultural theory underscores that intelligence is deeply shaped by cultural context, social interactions, and the use of cultural tools for problem solving and learning. So a comprehensive definition of intelligence in my view—informed by Richardson and Vygotsky—is that of a socially embedded cognitive capacity—characterized by intentionality—that encompasses diverse abilities and is continually shaped by an individual’s cultural and social interactions.

In essence, within this philosophical framework, intelligence is an intentional, multifaceted cognitive capacity that is intricately connected to one’s cultural and social life and surroundings. It reflects the dynamic interplay of intentionality, cognition, and socio-cultural influences. It is thus closely related to the concept of cognition in philosophy, which is concerned with how individuals process information, make sense of the world, acquire knowledge, and engage in thought processes.

What IQ-ist conceptions of intelligence miss

The two concepts I’ll discuss are the two most oft-cited concepts that hereditarian IQ-ists talk about—Gottfredson’s “definition” of intelligence and Jensen’s attempt at relating g (the so-called general factor of intelligence) to PC1.

Gottfredson’s “definition” is the most-commonly cited one in the psychometric IQ-ist literature:

Intelligence is a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience. It is not merely book learning, a narrow academic skill, or test-taking smarts. Rather, it reflects a broader and deeper capability for comprehending our surroundings—“catching on,” “making sense” of things, or “figuring out” what to do.

I have pointed out the nonsense that is her “definition”, since she says it’s “not merely book learning, a narrow academic skill or test-taking smarts”, yet supposedly IQ tests “measure” this, and it’s based on… book learning, an academic skill, and knowledge of the items on the test. That this “definition” is cited as something that is related to IQ tests is laughable. A research paper from Microsoft even cited this “definition” in their paper “Sparks of Artificial General Intelligence: Early Experiments with GPT-4” (Bubeck et al, 2023), but the reference was seemingly removed. Strange…

Spearman “discovered” g in 1904, but his g theory was refuted mere years later. (Never mind the fact that Spearman saw what he wanted to see in his data; Schlinger, 2003.) In fact, Spearman’s g was falsified in 1947 by Thurstone and then again in 1992 by Guttman (Heene, 2008). Then Jensen came along trying to revive the concept, and he likened it to PC1. Here are the steps that show the circularity in Jensen’s conception:

(1) If there is a general intelligence factor “g,” then it explains why people perform well on various cognitive tests.

(2) If “g” exists and explains test performance, the absence of “g” would mean that people do not perform well on these tests.

(3) We observe that people do perform well on various cognitive tests (i.e., test performance is generally positive).

(4) Therefore, since “g” would explain this positive test performance, we conclude that “g” exists.
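
To make the circularity fully explicit, here is a minimal logical sketch of the inference (my own formalization of the steps above, not Jensen’s notation):

```latex
% A minimal sketch of the inference above (my own formalization).
% Let g = "a general intelligence factor exists"
%     P = "people perform well on various cognitive tests"
\begin{align*}
&(1)\ g \rightarrow P && \text{$g$ is posited to explain test performance}\\
&(3)\ P               && \text{people do perform well on the tests}\\
&(4)\ \therefore\ g   && \text{concluded from (1) and (3)}
\end{align*}
```

Inferring g from “g → P” and “P” is the formal fallacy of affirming the consequent: g is introduced in order to explain the positive test performance, and then that same performance is cited as the evidence for g, so the inference never leaves the circle.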

Jensen’s g, then, is an unfalsifiable tautology—it’s circular. These are the “best” conceptions of intelligence the IQ-ists have, and they’re either self-contradictory nonsense (Gottfredson’s), already falsified (Spearman’s), or unfalsifiable circular tautology (Jensen’s). What makes Spearman’s g even more nonsensical was that he posited g as a mental energy (Jensen, 1999), and more recently it has been proposed that this mental energy can be found in mitochondrial functioning (Geary, 2018, 2019, 2020, 2021). Though I have also shown how this is nonsense.

Conclusion

In this article, I have conceptualized intelligence as a socially embedded and culturally-influenced cognitive capacity characterized by intentionality. It is a dynamic trait which encompasses diverse abilities and is continually shaped by an individual’s cultural and social context and social interactions. I explained Vygotsky’s theory and also explained how his three main concepts relate to the definition I have provided. I then discussed Richardson’s view of intelligence (which is also Vygotskian), and showed how IQ tests are merely an index of one’s distance from the cultural tools that are embedded on the IQ test.

In discussing my conception of intelligence, I then contrasted it with the two “best”, most oft-cited conceptions of “intelligence” in the psychological/psychometric literature (Gottfredson’s and Spearman’s/Jensen’s). I then showed how they fail. My conception of intelligence isn’t reductionist like the IQ-ists’ (they try to reduce intelligence/IQ to genes or physiology or brain structure); it is inherently holistic in recognizing how intelligence develops over the course of the lifespan, from birth to death. My definition recognizes intelligence as a dynamic, changing trait that’s not fixed like the hereditarians claim it is, and in my conception there is no use for IQ tests. At best, IQ tests merely show what kind of knowledge and experiences one was exposed to in one’s life due to the cultural tools inherent in the test. So my inherently Vygotskian view shows how intelligence can be conceptualized and then developed during the course of the human lifespan.

Intelligence, as I have conceived of it, is a dynamic and constantly-developing trait which evolves through our experiences, cultural backgrounds, and how we interact with the world. It is a multifaceted, context-sensitive capacity. Note that I am not claiming that this is measurable; it cannot be reduced to a single quantifiable measure. And since intentionality is inherent in the definition, this further underscores how it resists quantification and measurability.

In sum, the discussions here show that the IQ-ist concept is lacking—it’s empty. We should instead understand intelligence as an irreducible, socially and culturally-influenced, dynamic and constantly-developing trait, which is completely at odds with the hereditarian conception. Thus, I have argued for intelligence without IQ, since IQ “theory” is empty and it doesn’t do what they claim it does (Nash, 1990). I have been arguing for the massive limitations of IQ for years, and my definition here presents a multidimensional view, highlights cultural and contextual influences, and emphasizes its dynamic nature. The same cannot be said for reductionist hereditarian conceptions.

Free Will and the Immaterial Self: How Free Will Proves that Humans Aren’t Fully Physical Beings

2200 words

Introduction

That humans have freedom of will demonstrates that there is an immaterial aspect to humans. It implies that there is a nonphysical aspect to humans; thus, humans aren’t fully physical beings. I will use the Ross-Feser argument on the immateriality of thought to strengthen that conclusion. But before that, I will demonstrate that we do indeed have free will. The conclusion that we have free will will then be used to generate the conclusion that we are not fully physical beings. This conclusion is, moreover, justified by arguments for many flavors of dualism. I will then conclude by providing a compelling case against the physicalist, materialist view that seeks to reduce human beings to purely physical entities—because this claim is directly contested by the conclusion of my argument.

CID and free will

I recently argued for a view I call cognitive interface dualism (CID). The argument I formulated used action potentials (APs) as the intermediary between the mental and physical realms that Descartes was looking for (he thought that this interaction took place at the pineal gland, but he was wrong). So free will, under my CID, can be seen as a product of mental autonomy, non-deterministic mental causation, and the emergent properties of mind. So CID can accommodate free will and allow for its existence without relying on determinism.

The CID framework also holds that M is irreducible to P, consistent with other forms of dualism. This suggests that the mind has a level of autonomy that isn’t completely determined by physical or material processes. Furthermore, decision-making occurs in the mental realm. CID allows for mental states to causally influence physical states (mental causation), and so free will operates when humans make choices, and these choices can initiate actions which aren’t determined by physical factors. Free will is also compatible with the necessary role of the human brain for minds—it’s an emergent property of the interaction of M and P. The fact of the matter is, minds allow agency, the ability to reason and make choices. That is, humans are unique, special animals, and they are unique and special because they have an immaterial mind which allows the capacity to make decisions and have freedom.

Overall, the CID framework provides a coherent explanation for the existence of free will, alongside the role of the brain in human cognition. It further allows for a nuanced perspective on human agency, while emphasizing the unique qualities of human decision-making and freedom.

Philosopher Peter van Inwagen has an argument using modus ponens which states: If moral responsibility exists, then free will exists. Moral responsibility exists, because individuals are held accountable for their actions in the legal system, ethical discussions, and everyday life. Thus, free will exists. Basically, if you’ve ever said to someone “That’s your fault”, you’re holding them accountable for their actions, assuming that they had the capacity to make choices and decisions independently. So this aligns with the concept of free will, since you’re implying that the person had the ability to act differently and make alternative choices.
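
Put as a minimal formal sketch (my own rendering of the argument above, not van Inwagen’s notation):

```latex
% A minimal formalization of the modus ponens above (my own sketch).
% Let M = "moral responsibility exists", F = "free will exists"
\begin{align*}
&(1)\ M \rightarrow F && \text{if moral responsibility exists, then free will exists}\\
&(2)\ M               && \text{moral responsibility exists}\\
&(3)\ \therefore\ F   && \text{free will exists, by modus ponens}
\end{align*}
```

Unlike the circular inference to g discussed in the previous article, this is a formally valid inference; any dispute must therefore target the truth of the premises.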

The Libet experiments are claimed to show that unconscious brain processes are initiated before an action is made, preceding the conscious intention to move. But neither the original Libet experiment nor any similar ones justify the claim that the brain initiates freely-willed processes (Radder and Meynen, 2012)—because the mind is what is initiating these freely-willed actions.

Furthermore, when we introspect and reflect on our conscious experiences, we unmistakably perceive ourselves as making choices and decisions in various situations in our lives. These choices and decisions feel unconstrained and open; we experience a sense of deliberation when making them. But if we had no free will and our choices were entirely determined by external factors, then our experience of making choices would be illusory; our choices would be mere illusions of free will. Thus, the fact that we have a direct and introspective awareness of making choices implies that free will exists; it’s a fundamental aspect of our human experience. So while this argument doesn’t necessarily prove that free will exists, it highlights the compelling phenomenological aspects of human decision-making, which can be seen as evidence for free will.

Having said all of this, I can now make the following argument: If humans have the ability to reason and make logical decisions, then humans have free will. Humans have the ability to reason and make logical decisions. So humans have free will. I will then take this conclusion that I inferred and use it in a later argument to infer that humans aren’t purely physical beings.

Freedom and the immaterial self

James Ross (1992) argued that all formal thinking is incompossibly determinate, while no physical process or function of physical processes is incompossibly determinate, which allowed him to infer that thought isn’t a functional or physical process. Then Ed Feser (2013) argued that Ross’s argument cannot be refuted by any neuroscientific discovery. Feser then added to the argument and correctly inferred that humans aren’t fully physical beings.

A, B, and C are, after all, only the heart of Ross’s position.  A little more fully spelled out, his overall argument essentially goes something like this:

A. All formal thinking is determinate.

B. No physical process is determinate.

C. No formal thinking is a physical process. [From A and B]

D. Machines are purely physical.

E. Machines do not engage in formal thinking. [From C and D]

F. We engage in formal thinking.

G. We are not purely physical. [From C and F] (Ed Feser, Can Machines Beg the Question?)

This is a conclusion that I myself have come to: since machines are purely physical, and thinking isn’t a physical or functional process (though physical processes are necessary for thinking), machines cannot think.

Only beings with minds can intend. This is because mind allows a being to think. Since the mind isn’t physical, it would follow that a physical system can’t intend to do something—since it wouldn’t have the capacity to think. Take an alarm system. The alarm system does not intend to sound alarms when the system is tripped; it’s merely doing what it was designed to do, not intending to carry out the outcome. The alarm system is a physical thing made up of physical parts. So we can then liken this to, say, A.I. A.I. is made up of physical parts. So A.I. (a computer, a machine) can’t think. Moreover, individual physical parts are mindless, and no collection of mindless things counts as a mind. Thus, a mind isn’t a collection of physical parts. Physical systems are ALWAYS a complicated system of parts, but the mind isn’t. So it seems to follow that nothing physical can ever have a mind.

Physical parts of the natural world lack intentionality. That is, they aren’t “about” anything. It is impossible for an arrangement of physical particles to be “about” anything—meaning no arrangement of intentionality-less parts will ever count as having a mind. So a mind can’t be an arrangement of physical particles, since individual particles are mindless. Since mind is necessary for intentionality, it follows that whatever doesn’t have a mind cannot intend to do anything, like nonhuman animals. It is human psychology that is normative, and since the normative ingredient for any normative concept is the concept of reason, and only beings with minds can have reasons to act, human psychology would thus be irreducible to anything physical. Indeed, physicalism is incompatible with intentionality (Johns, 2020). The problem of intentionality is therefore yet another kill-shot for physicalism. It is therefore impossible for intentional states (i.e., cognition) to be reduced to, or explained by, physicalist theories/physical things. (Why Purely Physical Things Will Never Be Able to Think: The Irreducibility of Intentionality to Physical States)

Now that I have argued for the existence of free will, I will argue that our free will implies that there is an aspect of our selves and our existence that is not purely physical, but immaterial. Effectively, I will be arguing that humans aren’t fully physical beings.

So if humans were purely physical beings, then our actions and choices would be solely determined by physical laws and processes. However, if we have free will, then our actions are not solely determined by physical laws and processes, but are influenced by our capacity to make decisions independently. So humans possess a nonphysical aspect—free will, which is allowed by the immaterial mind and consciousness—which allows us to transcend the purely deterministic nature of purely physical things. Consequently, humans cannot be fully physical beings, since the existence of free will and the immaterial mind and consciousness suggests a nonphysical, immaterial aspect to our existence.

Either humans have free will, or humans do not have free will. If humans have free will, then humans aren’t purely physical. And as I argued above, humans do have free will. Consequently, humans aren’t fully physical beings.

Humans aren’t fully physical beings, since we have the capacity for free will and thought—where free will is the capacity to make choices that are not determined by external factors alone. If humans have the ability to reason and make logical decisions, then humans have free will. Humans have the ability to reason and make logical decisions. So humans have free will. Reasoning and the ability to make logical decisions are based on thinking. Thinking is an immaterial—non-physical—process. So if thinking is an immaterial process, and what allows thinking are minds which can’t be physical, then we aren’t purely physical. Put into premise and conclusion form, it goes like this:

(1) If humans have the ability to reason and make logical decisions, then humans have free will.
(2) Humans have the ability to reason and make logical decisions.
(3) Reasoning and the ability to make logical decisions are based on thinking.
(4) Thinking is an immaterial—non-physical—process.
(5) If humans have free will, and what allows free will is the ability to think and make decisions, then humans aren’t purely physical beings.
(6) So humans aren’t purely physical beings. [From (1)–(5)]

This argument suggests that humans possess free will and engage in immaterial thinking processes, which, according to the Ross-Feser argument, implies the existence of immaterial aspects of thought. So what allows this is consciousness, and the existence of consciousness implies the existence of a nonphysical entity. This nonphysical entity is the mind.

So in CID, the self (S) is the subject of experience, while the mind (M) encompasses mental states, subjective experiences, thoughts, emotions, and consciousness, and consciousness (C) refers to the awareness of one’s own mental states and experiences. CID also recognizes that the brain is a necessary pre-condition for human mindedness but not a sufficient condition, so for there to be a mind at all there needs to be a brain—basically, for there to be mental facts, there must be physical facts. The self is what has the mind, and the mind is the realm in which mental states and experiences occur. So CID posits that the self is the unified experiencer—the entity that experiences and interacts with the contents of the mind through APs.

So the argument that I’ve mounted in this article and in my original article on CID is that humans aren’t fully physical beings, since it’s based on the idea that thinking and conscious experiences are immaterial, nonphysical processes.

Conclusion

So CID offers a novel perspective on the mind-body problem, arguing that APs are the interface between the mental and the physical world. With the arguments I’ve made here, it establishes that humans aren’t purely physical beings. Through the argument that mental states are irreducible to physical states, CID acknowledges that the existence of an immaterial self plays a fundamental role in human mental life. This immaterial self—the seat of our conscious experiences, thoughts, decisions, and desires—bridges the gap between M and P. This further underscores the argument that the mind is immaterial, and thus so is the self (“I”, the experiencer, the subject of experience), and that the subject isn’t the brain or the nervous system.

CID recognizes that human mental life is characterized by its intrinsic mental autonomy and free will. We are not mere products of deterministic physical processes; rather, we are agents capable of making genuine choices and decisions. The conscious experience of making choices, along with the profound sense of freedom in our decisions, is an immediate and undeniable aspect of our reality, which then further cements the existence of free will. So the concept of free will reinforces the claim and argument that humans aren’t fully physical beings. These aspects of our mental life defy reduction to physical causation.

Hypertension, Brain Volume, and Race: Hypotheses, Predictions and Actionable Strategies

2300 words

Introduction

Hypertension (HT, also known as high blood pressure, BP) has traditionally been defined as a BP of 140/90. But more recently, the guidelines were changed, defining HT as a BP over 130/80 (Carey et al, 2022; Iqbal and Jamal, 2022). One 2019 study showed that in a sample with an age range of 20-79, 24 percent of men and 23 percent of women could be classified as hypertensive based on the old guidelines (140/90) (Deguire et al, 2019). Having consistently high BP can lead to devastating consequences, like (from the patient’s perspective) hot flushes, dizziness, and mood disorders (Goodhart, 2016). However, one serious problem with HT is that consistently high BP is associated with a decrease in brain volume (BV). This has been seen in several systematic reviews and meta-analyses (Alosco et al, 2013; Beauchet et al, 2013; Lane et al, 2019; Alateeq, Walsh and Cherbuin, 2021; Newby et al, 2022), while we know that long-standing hypertension has deleterious effects on brain health (Salerno et al, 1992). However, it’s not only high BP that’s related to this; it’s also lower BP in conjunction with lower pulse pressure (Muller et al, 2010; Foster-Dingley, 2015). So what this says to me is that too much or too little blood flow to the brain is deleterious for brain health.

I will state the hypothesis and then I will state the predictions that follow from it. I will then provide three reasons why I think this relationship occurs.
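
As a concrete illustration of the two diagnostic cutoffs just discussed, here is a minimal sketch in Python (the function name and structure are my own illustrative choices, not taken from any cited guideline document):

```python
def classify_bp(systolic: float, diastolic: float, revised: bool = True) -> str:
    """Classify a blood pressure reading against the hypertension cutoffs
    discussed above: the older definition used 140/90 mmHg, while the
    revised guidelines lowered the cutoff to 130/80 mmHg."""
    sys_cut, dia_cut = (130, 80) if revised else (140, 90)
    if systolic >= sys_cut or diastolic >= dia_cut:
        return "hypertensive"
    return "normotensive"

# A reading of 135/85 mmHg shows how the two definitions diverge:
print(classify_bp(135, 85, revised=True))   # hypertensive
print(classify_bp(135, 85, revised=False))  # normotensive
```

This makes the change concrete: lowering the cutoff reclassifies a whole band of readings (roughly 130-139/80-89 mmHg) that the older definition counted as normal.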

The hypothesis

The hypothesis is simple: high BP (hypertension, HT) is associated with a reduced brain volume. This relationship is dose-dependent, meaning that the extent and duration of HT correlate with the degree of BV changes. So the hypothesis suggests that there is a relationship—an association—between HT and brain volume, where people with HT will be more likely to have decreased BVs than those who lack HT—that is, those with BP in the normal range.

A dose-dependent relationship has been observed (Alateeq, Walsh and Cherbuin, 2021), which shows that as HT increases and persists over time, the effects on BV become more pronounced. This suggests that it's not a binary, present-or-absent situation, but one that varies across a continuum. So people with shorter-lasting HT will show smaller effects than those with constant, consistently elevated BP, who will show correspondingly greater decreases in BV. This dose-dependent relationship also suggests that as BP continues to elevate, the decrease in BV will worsen.

This dose-dependent relationship implies a few things. The consequences of HT on BV aren't binary (either-or), but are related to the severity of HT, how long one has had HT, and at what age one has it; the effect varies on a continuum. For instance, people with mild or short-lasting HT would experience smaller reductions in BV than those who have severe or long-standing HT. The dose-dependent relationship also suggests that the longer one has HT without treatment, the more severe the reduction in BV will be if it remains uncontrolled. So as BP continues to elevate, it may lead to a gradual reduction in BV. The relationship between HT and BV isn't uniform; it varies based on the intensity and duration of high BP.

So the hypothesis suggests that HT isn't just a risk factor for cardiovascular disease; it's also a risk factor for decreased BV. This seems intuitive, since the higher one's BP, the more likely it is that a blockage is beginning somewhere in the intricate system of blood vessels in the body. And since the brain is a vascular organ, decreasing the amount of blood flowing to it would lead to cell death and white matter lesions, which would lead to a smaller BV. One newer study, with a sample of Asians, whites, blacks, and “Latinos,” showed that, compared to those with normal BP, those who were transitioning to higher BP or already had higher BP had lower brain connectivity and decreased cerebral gray matter and frontal cortex volume, and this change was worse for men (George et al, 2023). Shang et al (2021) showed that HT diagnosed in early and middle life, but not late life, was associated with decreased BV and increased risk of dementia. This, of course, is due to the slow, cumulative effects of HT on the brain. And Power et al (2016) found that “[t]he pattern of hypertension ~15 years prior and hypotension concurrent with neuroimaging was associated with smaller volumes in regions preferentially affected by Alzheimer’s disease.” But not only is BP relevant here, so is the variability of BP at night (Gutteridge et al, 2022; Yu et al, 2022). Alateeq, Walsh and Cherbuin (2021) conclude that:

Although reviews have been previously published in this area, they only investigated the effects of hypertension on brain volume [86]. To the best of our knowledge, this study is the first systematic review with meta-analysis providing quantitative evidence on the negative association between continuous BP and global and regional brain volumes. Our results suggest that heightened BP across its whole range is associated with poorer cerebral health which may place individuals at increased risk of premature cognitive decline and dementia. It is therefore important that more prevention efforts be directed at younger populations with a greater focus on achieving optimal BP rather than remaining below clinical or pre-clinical thresholds [5].

One would think that a high BP would actually increase blood flow to the brain, but HT causes alterations in the flow of blood to the brain which lead to ischaemia, and it causes the blood-brain barrier to break down (Pires et al, 2013). Essentially, HT has devastating effects on the brain which could lead to dementia and Alzheimer’s (Iadecola and Davisson, 2009).

So the association between HT and decreased BV means that individuals with HT can experience alterations in BV in comparison to those with normal BP. The hypothesis also suggests that there are several mechanisms (detailed below), which may lead to various physiological and anatomic changes in the brain, such as vascular damage, inflammation and tissue atrophy.

The mechanisms

(1) High BP can damage blood vessels in the brain, which leads to reduced blood flow. This is called “cerebral hypoperfusion.” The reduced blood flow can deprive the cells in the brain of oxygen and nutrients, causing them to shrink or die, which leads to decreased brain volume (BV). Over time, high BP can also damage the arteries, making them less elastic.

(2) Having high BP over a long period of time can cause hypertensive encephalopathy, which is basically brain swelling. A rapid increase in BP could increase BV over the short term, but left untreated it could lead to brain damage and atrophy over time.

And (3) Chronically high BP can lead to the formation of white matter lesions in the brain; these lesions are areas of damaged brain tissue resulting from the microvascular changes caused by high BP (hypertension, HT). Thus, over time, the accumulation of white matter lesions could lead to a decrease in brain volume. These lesions are associated with cognitive changes and decreased BV, and they increase with BP severity.

So we have (1) cerebral hypoperfusion, (2) hypertensive encephalopathy, and (3) white matter lesions. I need to think/read more on which of these could lead to decreased BV, or if they all actually work together to decrease BV. We know that HT damages blood vessels, and of course there are blood vessels in the brain, so it then follows that HT would decrease BV.

I can also detail a step-by-step mechanism. The process begins with consistently elevated BP, which could be due to various factors like genetics, diet/lifestyle, and underlying medical conditions. High BP then places increased strain on the blood vessels in the body, including those in the brain. This higher pressure could then lead to structural changes in the blood vessels over time. Then, chronic HT can lead to endothelial dysfunction, which could impair the ability of blood vessels to regulate blood flow and maintain vessel integrity. The dysfunction can result in oxidative stress and inflammation.

Then, as a response to prolonged elevated BP, blood vessels in the brain could undergo vascular remodeling, which involves changes in blood vessel structure and thickness, which can then affect blood flow dynamics. Furthermore, in some cases, this could lead to cerebral small vessel disease, which involves damage to the small blood vessels in the brain, including capillaries and arterioles. This could impair the delivery of oxygen and nutrients to brain tissue, which could lead to cell death and consequently a decrease in BV. Then reduced blood flow, along with compromised blood vessel integrity, could lead to cerebral ischaemia—reduced blood supply—and hypoxia—reduced oxygen supply—in certain parts of the brain. This can then result in neural damage and eventually cell death.

Then HT-related vascular changes and cerebral small vessel disease can trigger brain inflammation. Prolonged exposure to neural inflammation, hypoxia and ischaemia can lead to neuronal atrophy, where neurons shrink and lose their functional integrity. HT can also increase the incidence of white matter lesions in the brain, visible on neuroimaging, which are areas of damaged white matter tissue. Finally, over time, the cumulative effects of the aforementioned processes—vascular changes, inflammation, neural atrophy, and white matter changes—could lead to a decrease in BV. This reduction can manifest as brain atrophy in the parts of the brain which are susceptible and vulnerable to the effects of HT.

So the step-by-step mechanism goes like this: elevated BP —> increased vascular strain —> endothelial dysfunction —> vascular remodeling —> cerebral small vessel disease —> ischemia and hypoxia —> inflammation and neuroinflammation —> neuronal atrophy —> white matter changes —> reduction in BV.

Hypotheses and predictions

H1: The severity of HT directly correlates with the extent of BV reduction. One prediction would be that people with more severe HT would exhibit greater BV decreases than those with moderate (less severe) HT, which is where the dose-dependent relationship comes in.

H2: The duration of HT is a critical factor in BV reduction. One prediction would be that people with long-standing HT will show more significant BV changes than those with recent onset HT.

H3: Effective BP management can mitigate BV reduction in people with HT. One prediction would be that people with more controlled HT would show less significant BV reduction than those with uncontrolled HT.

H4: Certain subpopulations may be more susceptible to BV decreases due to HT. One prediction is that certain factors increase susceptibility: age of onset (HT at a younger age), genetic factors (some may have gene variants that make them more vulnerable to damage caused by elevated BP), comorbidities (people with diabetes, obesity and heart problems could be at higher risk of decreased BV due to the interaction of these factors), and ethnic/racial factors (some populations, like blacks, could be more likely to have HT and could be more at risk due to experiencing disparities in healthcare and treatment).

The hypotheses and predictions generated from the main proposition that HT is associated with a reduction in BV and that the relationship is dose-dependent can be considered risky, novel predictions. They are risky in the sense that they are testable and falsifiable. Thus, if the predictions don’t hold, then it could falsify the initial hypothesis.
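To make the testability of these hypotheses concrete, here is a minimal sketch of how H1 and H2 could be examined with a multiple regression of BV on HT severity and duration. Everything below is simulated placeholder data with arbitrary coefficients of my own choosing, not results from any study; it only illustrates the form such a test could take.

```python
# Hedged sketch: regress brain volume on HT severity and duration.
# All data are simulated for illustration; coefficients are placeholders.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 500

severity = rng.uniform(0, 40, n)   # mm Hg above baseline (hypothetical scale)
duration = rng.uniform(0, 30, n)   # years with elevated BP (hypothetical)

# Simulate a dose-dependent effect: BV falls with severity and duration.
bv = 1200 - 0.8 * severity - 1.5 * duration + rng.normal(0, 25, n)

X = sm.add_constant(np.column_stack([severity, duration]))
model = sm.OLS(bv, X).fit()
print(model.summary())  # H1/H2 predict negative, significant coefficients
```

If the hypotheses hold in real cohort data, both coefficients should come out negative and significant; a null or positive coefficient would count against the dose-dependent claim.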

Blacks and blood pressure

This is significant for populations like black Americans. About 33 percent of blacks have hypertension (Peters, Arojan, and Flack, 2006), while urban blacks are more likely to have elevated BP than whites (Lindhorst et al, 2007). Though Non, Gravlee, and Mulligan (2012) showed that racial differences in education—not genetic ancestry—explained differences in BP in blacks compared to whites. Further, Victor et al (2018) showed that, in black male barbershop attendees with uncontrolled BP, health outreach combined with medication led to a decrease in BP. Williams (1992) cited stress, socioecologic stress, social support, coping patterns, health behavior, sodium, calcium, and potassium consumption, alcohol consumption, and obesity as social factors which lead to increased BP.

Moreover, consistent with the hypothesis discussed here (that chronic elevated BP leads to reductions in BV, which lead to a higher chance of dementia and Alzheimer’s), it’s been shown that vulnerability to HT is a major determinant of the risk of acquiring Alzheimer’s (Clark et al, 2020; Akushevic et al, 2022). It has also been shown that “a lifetime of racism makes Alzheimer’s more common in black Americans,” and this is consistent with the discussion here: since racism is associated with stress, which is associated with elevated BP, consistent events of racial discrimination would lead to consistently elevated BP, which would then lead to decreased BV and a higher chance of acquiring Alzheimer’s. But there is evidence that blood pressure drugs (in this case telmisartan) reduce the incidence of Alzheimer’s in black Americans (Zhang et al, 2022), while the same result was also seen using antihypertensive medications in blacks, which led to a reduction in the incidence of dementia (Murray et al, 2018); this lends credence to the discussed hypothesis. Stress and poverty—experiences—and not ancestry could explain higher rates of dementia in black Americans as well. Thus, since blood pressure could explain higher rates of dementia in black populations, this lends further credence to the discussed hypothesis.

Conclusion

The evidence that chronic elevated BP leads to reductions in BV is well-studied and the mechanisms are well-known. I discussed the hypothesis that chronically elevated BP leads to reduced blood flow to the brain which decreases BV. I then discussed the mechanisms behind the relationship, and then hypotheses and predictions that follow from them. Lastly, I discussed the well-known fact that blacks have higher rates of high BP, and also higher rates of dementia and Alzheimer’s, and linked their higher rates of BP to those maladies.

So by catching chronically elevated BP at early ages (since the earlier one has high BP, the more likely they are to have reduced brain volume and the associated maladies), we can begin to fight the associated issues before they coalesce, since we know the mechanisms behind them, and since blood pressure drugs and antihypertensive medications decrease the incidence of dementia and Alzheimer’s in black Americans.

Cope’s (Deperet’s) Rule, Evolutionary Passiveness, and Alternative Explanations

4450 words

Introduction

Cope’s rule is an evolutionary hypothesis which suggests that, over geological time, species have a tendency to increase in body size. (Although it has been proposed that Cope’s rule be named Deperet’s rule, since Cope didn’t explicitly state the hypothesis while Deperet did; Bokma et al, 2015.) Named after Edward Drinker Cope, it proposes that on average, through the process of “natural selection,” species have a tendency to get larger, and so it implies a directionality to evolution (Hone and Benton, 2005; Liow and Taylor, 2019). There are a few explanations for the so-called rule: either it’s due to passive or driven evolution (McShea, 1994; Gould, 1996; Raia et al, 2012) or due to methodological artifacts (Solow and Wang, 2008; Monroe and Bokma, 2010).

However, Cope’s rule has been subject to debate and scrutiny in paleontology and evolutionary biology. The interpretation of Cope’s rule hinges on how “body size” is interpreted (mass or length), along with alternative explanations. I will trace the history of Cope’s rule, discuss studies in which it was proposed that this directionality was empirically shown, and discuss methodological issues. I will then propose alternative explanations that don’t rely on the claim that evolution is “progressive” or “driven,” and show that developmental plasticity throws a wrench in this claim, too. I will end with a constructive dilemma argument showing that either Cope’s rule is a methodological artifact, or it’s due to passive evolution, since it’s not a driven trend as progressionists claim.

How developmental plasticity refutes the concept of “more evolved”

In my last article on this issue, I showed the logical fallacies inherent in the argument PumpkinPerson uses—it affirms the consequent, assuming it’s true leads to a logical contradiction, and of course reading phylogenies in the way he does just isn’t valid.

If the claim “more speciation events within a given taxon = more evolution” were valid, then we would consistently observe a direct correlation between the number of speciation events and the extent of evolutionary change in all cases. But we don’t, since evolutionary rates vary and other factors influence evolution, so the claim isn’t universally valid.

Take these specific examples: The horseshoe crab has a lineage going back hundreds of millions of years with few speciation events, yet it has undergone evolutionary changes. Conversely, microorganisms could undergo many speciation events and show relatively minor genetic change. Consider the genetic and phenotypic diversity of the cichlid fishes (fishes that have undergone rapid evolutionary change and speciation): the diversity between them doesn’t solely depend on speciation events, since factors like ecological niche partitioning and sexual selection also play a role in why they differ even though they are relatively young species (a specific claim that Herculano-Houzel made in her 2016 book The Human Advantage). Lastly, human evolution has relatively few speciation events, but the extent of evolutionary change in our species is vast. Speciation events are of course crucial to evolution. But if one reads too much into the abstractness of the evolutionary tree, then they will not read it correctly. The position of the terminal nodes is meaningless.

It’s important to realize that evolution isn’t just morphological change which then leads to the creation of a new species (this is macro-evolution); there is also micro-evolution. Species that underwent evolutionary change without speciation include peppered moths (industrial melanism), bacteria evolving antibiotic resistance, humans evolving lactase persistence, and Darwin’s finches. These are quite clearly evolutionary changes, and they’re due to microevolutionary processes.

Developmental plasticity directly refutes the contention of “more evolved,” since individuals within a species can exhibit significant trait variation without speciation events. This isn’t captured by phylogenies. They’re typically modeled on genetic data and they don’t capture developmental differences that arise due to environmental factors during development. (See West-Eberhard’s outstanding Developmental Plasticity and Evolution for more on how in many cases development precedes genetic change, meaning that the inference can be drawn that genes aren’t leaders in evolution, they’re mere followers.)

If “more evolved” is solely determined by the number of speciation events (branches) in a phylogeny, then species that exhibit greater developmental plasticity should be considered “more evolved.” But it is empirically observed that some species exhibit significant developmental plasticity which allows them to rapidly change their traits during development in response to environmental variation without undergoing speciation. So since the species with more developmental plasticity aren’t considered “more evolved” based on the “more evolved” criteria, then the assumption that “more evolved” is determined by speciation events is invalid. So the concept of “more evolved” as determined by speciation events or branches isn’t valid since it isn’t supported when considering the significant role of developmental plasticity in adaptation.

There is anagenesis and cladogenesis. Anagenesis is the creation of a species without a branching of the ancestral species. Cladogenesis is the formation of a new species by evolutionary divergence from an ancestral form. In anagenesis, due to evolutionary changes within a lineage, the descendant form replaces the ancestral one. So anagenesis shows that a species can slowly change and become a new species without there being a branching event. Horse, human, elephant, and bird evolution are examples of this.

Nonetheless, developmental plasticity can lead to anagenesis. Developmental, or phenotypic, plasticity is the ability of an organism to produce different phenotypes with the same genotype based on environmental cues that occur during development. Developmental plasticity can facilitate anagenesis, and since developmental plasticity is ubiquitous in development of not only an individual in a species but a species as a whole, then it is a rule and not an exception.

Directed mutation and evolution

Back in March, I wrote on the existence of directed mutations. Directed mutation directly speaks against the concept of “more evolved.” Here’s the argument:

(1) If directed mutations play a crucial role in helping organisms adapt to changing environments, then the notion of “more evolved” as a linear hierarchy is invalid.
(2) Directed mutations are known to occur and contribute to a species survivability in an environment undergoing change during development (the concept of evolvability is apt here).
(C) So the concept of “more evolved” as a linear hierarchy is invalid.

A directed mutation is a mutation that occurs due to environmental instability and helps an organism survive in the environment that changed while the individual was developing. Two mechanisms of DM are transcriptional activation (TA) and supercoiling. TAs can cause changes to single-stranded DNA, and can also cause supercoiling (the over- or under-winding of the DNA double helix). TA can be caused by derepression (a mechanism that occurs due to the absence of some repressor molecule) or induction (the activation of an inactive gene which then gets transcribed). So these are examples of how nonrandom (directed) mutation and evolution can occur (Wright, 2000). Such changes are possible through the plasticity of phenotypes during development and ultimately are due to developmental plasticity. These stress-directed mutations can be seen as quasi-Lamarckian (Koonin and Wolf, 2009). It’s quite clear that directed mutations are real.

DMs, along with developmental plasticity and evo-devo as a whole refute the simplistic thinking of “more evolved.”

Now here is the argument that PP is using, and why it’s false:

(1) More branches on a phylogeny indicate more speciation events.
(2) More speciation events imply a higher level of evolutionary advancement.
(C) Thus, more branches on a phylogeny indicate a higher level of evolutionary advancement.

The false premise is (2) since it suggests that more speciation events imply a higher level of evolutionary advancement. It implies a goal-directed aspect to evolution, where the generation of more species is equated with evolutionary progress. It’s just reducing evolution to linear advancement and progress; it’s a teleological bent on evolution (which isn’t inherently bad if argued for correctly, see Noble and Noble, 2022). But using mere branching events on a phylogeny to assume that more branches = more speciation = more evolved is simplistic thinking that doesn’t make sense.

If evolution encompasses changes in an organism’s phenotype, then changes in an organism’s phenotype, even without changes in its genes, are considered examples of evolution. And evolution does encompass changes in an organism’s phenotype, so such changes count as evolution. There is also nongenetic “soft inheritance” (see Bonduriansky and Day, 2018).

Organisms can exhibit similar traits due to convergent evolution. So it’s not valid to assume a direct and strong correlation between an organism’s position on a phylogeny and its degree of resemblance to a common ancestor.

Dolphins and ichthyosaurs share similar traits, but dolphins are mammals while ichthyosaurs are reptiles that lived millions of years ago. Their convergent morphology demonstrates that common ancestry doesn’t determine resemblance. The Tasmanian wolf (thylacine) and the grey wolf independently evolved similar body plans and ecological roles; despite different genetics and evolutionary histories, they share a physical resemblance due to similar ecological niches. The last common ancestor (LCA) of bats and birds didn’t have wings, yet both have wings, so the trait emerged twice independently. These examples show that the degree of resemblance to a common ancestor is not determined by an organism’s position on a phylogeny.

Now, there is a correlation between body size and branches (splits) on a phylogeny (Cope’s rule), and I will explain that later. That there is a correlation doesn’t mean there is a linear progression. Back in 2017 I used the example of floresiensis, and that holds here too. Terrance Deacon’s (1990) work suggests that pseudoprogressive trends in brain size can be explained by bigger whole organisms being selected—this is important because the whole animal is selected, not any one of its individual parts. The correlation isn’t indicative of a linear progression up some evolutionary ladder, either: it’s merely a byproduct of selecting larger animals (whole organisms being the only things that are selected).

I will argue that it is this remarkable parallelism, and not some progressive selection for increasing intelligence, that is responsible for many pseudoprogressive trends in mammalian brain evolution. Larger whole animals were being selected—not just larger brains—but along with the correlated brain enlargement in each lineage a multitude of parallel secondary internal adaptations followed. (Deacon, 1990)

Nonetheless, the claim here is one from DST—the whole organism is selected, so obviously so is its body plan (bauplan). Nevertheless, the last two havens for the progressionist are in the realm of brain size and body size. Deacon refuted the selection-for brain size claim, so we’re now left with body size.

Does the evolution of body size lend credence to claims of driven, progressive evolution?

The tendency for bodies to grow larger and larger over evolutionary time is something of a truism. Since small bacteria eventually evolved into larger (see Gould’s modal bacter argument), more complex multicellular organisms, this must mean that evolution is progressive and driven, at least for body size, right? Wrong. I will argue here, using a constructive dilemma, that either evolution is passive and that’s what explains the evolution of body size increases, or the pattern is due to methodological flaws in how body size is measured (length or mass).

In Full House, Gould (1996) argued that the evolution of body size isn’t driven, but that it is passive, namely that it is evolution away from smaller size. Nonetheless, it seems that Cope’s (Deperet’s) rule is due to cladogenesis (the emergence of new species), not selection for body size per se (Bokma et al, 2015).

Given these three conditions, we note an increase in size of the largest species only because founding species start at the left wall, and the range of size can therefore expand in only one direction. Size of the most common species (the modal decade) never changes, and descendants show no bias for arising at larger sizes than ancestors. But, during each act, the range of size expands in the only open direction by increase in the total number of species, a few of which (and only a few) become larger (while none can penetrate the left wall and get smaller). We can say only this for Cope’s Rule: in cases with boundary conditions like the three listed above, extreme achievements in body size will move away from initial values near walls. Size increase, in other words, is really random evolution away from small size, not directed evolution toward large size. (Gould, 1996)

Dinosaurs were some of the largest animals to ever live. So we might say that there is a drivenness in their bodies to become larger and larger, right? Wrong. The evolution of body size in dinosaurs is passive, not driven (progressive) (Sookias, Butler, and Benson, 2012). Gould (1996) also showed passive trends in body size in plankton and forams. He also cited Stanley (1973), who argued that groups starting at the left wall of minimal size will increase in mean size as a consequence of randomness, not any driven tendency for larger body size.

In other, more legitimate cases, increases in means or extremes occur, as in our story of planktonic forams, because lineages started near the left wall of a potential range in size and then filled available space as the number of species increased—in other words, a drift of means or extremes away from a small size, rather than directed evolution of lineages toward large size (and remember that such a drift can occur within a regime of random change in size for each individual lineage—the “drunkard’s walk” model).

In 1973, my colleague Steven Stanley of Johns Hopkins University published a marvelous, and now celebrated, paper to advance this important argument. He showed (see Figure 27, taken from his work) that groups beginning at small size, and constrained by a left wall near this starting point, will increase in mean or extreme size under a regime of random evolution within each species. He also advocated that we test his idea by looking for right-skewed distributions of size within entire systems, rather than by tracking mean or extreme values that falsely abstract such systems as single numbers. In a 1985 paper I suggested that we speak of “Stanley’s Rule” when such an increase of means or extremes can best be explained by undirected evolution away from a starting point near a left wall. I would venture to guess (in fact I would wager substantial money on the proposition) that a large majority of lineages showing increase of body size for mean or extreme values (Cope’s Rule in the broad sense) will properly be explained by Stanley’s Rule of random evolution away from small size rather than by the conventional account of directed evolution toward selectively advantageous large size. (Gould, 1996)
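To see how Stanley’s Rule works, here is a minimal simulation of the “drunkard’s walk” model described above: each lineage takes an unbiased random walk in log body size, but a reflecting left wall of minimum viable size means the mean and the extremes drift upward even though no lineage is driven toward large size. All parameters are illustrative choices of mine, not fitted to any fossil data.

```python
# Unbiased random walks in log body size with a reflecting "left wall."
# Despite no bias toward larger size, the mean and maximum increase and
# the distribution becomes right-skewed, as Stanley and Gould describe.
import numpy as np

rng = np.random.default_rng(0)
n_lineages, n_steps = 1000, 500
left_wall = 0.0                    # minimum viable log-size
sizes = np.zeros(n_lineages)       # all founders start at the wall

for _ in range(n_steps):
    sizes += rng.normal(0.0, 0.1, n_lineages)  # unbiased step per lineage
    sizes = np.maximum(sizes, left_wall)       # lineages can't pass the wall

print(f"mean log-size:   {sizes.mean():.2f}")     # increases over time
print(f"median log-size: {np.median(sizes):.2f}")  # lags the mean: right skew
print(f"max log-size:    {sizes.max():.2f}")       # the extreme races ahead
```

No lineage here is biased toward large size; the upward drift of means and extremes comes entirely from the wall blocking movement in one direction, which is exactly the passive trend Gould describes.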

Gould (1996) also discusses the results of McShea’s study, writing:

Passive trends (see Figure 33) conform to the unfamiliar model, championed for complexity in this book, of overall results arising as incidental consequences, with no favored direction for individual species. (McShea calls such a trend passive because no driver conducts any species along a preferred pathway. The general trend will arise even when the evolution of each individual species conforms to a “drunkard’s walk” of random motion.) For passive trends in complexity, McShea proposes the same set of constraints that I have advocated throughout this book: ancestral beginnings at a left wall of minimal complexity, with only one direction open to novelty in subsequent evolution.

But Baker et al (2015) claim that body size is an example of driven evolution. However, the fact that they did not model cladogenetic factors calls their conclusion into question, and I think their claim doesn’t follow. If a taxon possesses a potential size range and the ancestral size approaches the lower limit of this range, there will be a passive inclination for descendants to exceed the size of their ancestors. The taxa in question possess potential size ranges, and the ancestral sizes sit at the lower end of those ranges. So there will be a passive tendency for descendants to be larger than their predecessors.

Here’s an argument that concludes that evolution is passive and not driven. I will then give examples illustrating the passive tendency for descendants to exceed a small ancestral size.

(1) Extant animals that are descended from more nodes on an evolutionary tree tend to be bigger than animals descended from fewer nodes (the initial premise, P1).
(2) There exist cases where extant animals descended from fewer nodes are larger or more complex than those descended from more nodes (counterexamples: whales are descended from fewer nodes while having some of the largest body sizes in the world, while bats are descended from more nodes and have comparatively far smaller bodies).
(C1) Thus, either P1 doesn’t consistently hold (not all extant animals descended from more nodes are larger), or it is not a reliable rule (given the counterexamples).
(3) If P1 does not consistently hold true, then it is not a reliable rule.
(4) P1 does not consistently hold true.
(C2) So P1 is not a reliable rule.
(5) If P1 is not a reliable rule (given the existence of counterexamples), then it is not a valid generalization.
(6) P1 is not a reliable rule.
(C3) So P1 is not a valid generalization.
(7) If P1 isn’t a valid generalization in the context of evolutionary biology, then there must be exceptions to this observed trend.
(8) The existence of passive evolution, as suggested by the inconsistencies in P1, implies that the trends aren’t driven by progressive forces.
(C4) Thus, the presence of passive evolution and exceptions to P1’s trend challenge the notion of a universally progressive model of evolution.
(9) If the presence of passive evolution and exceptions to P1’s trend challenges the notion of a universally progressive model of evolution, then that notion isn’t supported by the evidence.
(10) The presence of passive evolution and exceptions to P1’s trend does challenge the notion of a universally progressive model of evolution.
(C5) Therefore, the notion of a universally progressive model of evolution isn’t supported by the evidence.

(1) Bluefin tuna are known to have a wide potential size range, with some being small and others being massive (think of the TV show Wicked Tuna and the huge tuna those fishermen catch, in both length and mass). So imagine a population of bluefin tuna where the ancestral size is close to the lower end of that range. The passive-tendency conditions are satisfied because bluefin tuna have a potential size range and the ancestral size was relatively small in comparison to the maximum size.

(2) African elephants in some parts of Africa are small, due to ecological constraints and hunting pressures, and these smaller-sized ancestors are close to the lower limit of the potential size range of African elephants. Thus, according to the passive-tendency conditional, there will be a passive tendency for descendants of these elephants to be larger than their smaller-sized ancestors over time.

(3) Consider Galapagos tortoises, which are also known for their large variation in size among the different species and populations on the Galapagos islands. So consider a case of Galapagos tortoises that have smaller body sizes due to resource conditions or the conditions of their ecologies. In this case, the ancestral size for these tortoises is close to the lower limit of their potential size range. Therefore, we can expect a passive tendency for descendants of these tortoises to evolve larger body sizes.

Further, in Stanley’s (1973) study of Cope’s rule in fossil rodents, he observed that body size distributions in these rodents became bigger over time while the modal size stayed small. This doesn’t even touch the fact that, because there are more small than large mammals, there would be a passive tendency toward larger body sizes in mammals. Nor does it touch the methodological issues in determining body size for the rule (mass or length?). Nonetheless, Monroe and Bokma’s (2010) study showed that while there is a tendency for species to be larger than their ancestors, it was a mere 0.4 percent difference. So the increase in body size is explained by an increase in variance in body size (passiveness), not drivenness.

Explaining the rule

I think there are two explanations: Either a methodological artifact or passive evolution. I will discuss both, and I will then give a constructive dilemma argument that articulates this position.

Monroe and Bokma (2010) showed that even when Cope’s rule is assumed, the ancestor-descendant increase in body size was a mere 0.4 percent. They further discussed methodological issues with the so-called rule, citing Solow and Wang (2008), who showed that Cope’s rule “appears” depending on what assumptions about body size are used. For example, Monroe and Bokma (2010) write:

If Cope’s rule is interpreted as an increase in the mean size of lineages, it is for example possible that body mass suggests Cope’s rule whereas body length does not. If Cope’s rule is instead interpreted as an increase in the median body size of a lineage, its validity may depend on the number of speciation events separating an ancestor-descendant pair.

If size increase were a general property of evolutionary lineages – as Cope’s rule suggests – then even if its effect were only moderate, 120 years of research would probably have yielded more convincing and widespread evidence than we have seen so far.

Gould (1997) suggested that Cope’s rule is a mere psychological artifact. But I think it’s deeper than that. Now I will provide my constructive dilemma argument, now that I have ruled out body size being due to progressive, driven evolution.

The form of a constructive dilemma is: (1) A ∨ B. (2) If A, then C. (3) If B, then D. (C) C ∨ D. P1 is a disjunction: there are two possibilities, A and B. P2 and P3 are conditional statements that provide implications for both options. And the conclusion states that at least one of the implied consequents must be true (C or D).
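Since the rest of the argument leans on this schema, here is a minimal formalization of the constructive dilemma as a checked proof. The theorem name and the Lean 4 syntax are my own illustrative choices, not anything from the sources discussed.

```lean
-- Constructive dilemma: from A ∨ B, A → C, and B → D, infer C ∨ D.
theorem constructive_dilemma {A B C D : Prop}
    (h1 : A ∨ B) (h2 : A → C) (h3 : B → D) : C ∨ D :=
  h1.elim (fun a => Or.inl (h2 a)) (fun b => Or.inr (h3 b))
```

The proof just case-splits on the disjunction and applies the matching conditional, which is exactly the informal reasoning the schema licenses.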

Now, Gould’s Full House argument can be formulated using either modus tollens or constructive dilemma:

(1) If evolution were a deterministic, teleological process, there would be a clear overall progression and a predetermined endpoint. (2) There is no predetermined endpoint or progression to evolution. (C) So evolution isn’t a deterministic or teleological process.

(1) Either evolution is a deterministic, teleological process (A) or it’s not (B). (2) If A, then there is a clear overall direction and predetermined endpoint (call this C). (3) If B, then there is no overall direction or predetermined endpoint (call this D). (4) So either C or D (constructive dilemma). (5) Not C, since we observe no clear overall direction or predetermined endpoint. (6) Therefore D, and by modus tollens on (2), not A, so B.

Or: (1) Life began at a relatively simple state (the left wall of complexity). (2) Evolution is influenced by a combination of chance events, environmental factors and genetic variation. (3) Organisms may stumble in various directions along the path of evolution. (4) So evolution lacks a clear path or predetermined endpoint.

Now here is the overall argument combining the methodological issues pointed out by Solow and Wang and the implications of passive evolution, combined with Gould’s Full House argument:

(1) Either Cope’s rule is a methodological artifact (A), or it’s due to passive, not driven evolution (B). (2) If Cope’s rule is a methodological artifact (A), then different ways to measure body size (length or mass) can come to different conclusions. (3) If Cope’s rule is due to passive, not driven evolution (B), then it implies that larger body sizes simply accumulate over time without being actively driven by selective pressures. (4) Either evolution is a deterministic, teleological process (C), or it is not (D). (5) If C, then there would be a clear overall direction and predetermined endpoint in evolution (Gould’s argument). (6) If D, then there is no clear overall direction or predetermined endpoint in evolution (Gould’s argument). (7) Therefore, either there is a clear overall direction (C) or there isn’t (D) (Constructive Dilemma). (8) If there is a clear overall direction (C) in evolution, then it contradicts passive, not driven evolution (B). (9) If there isn’t a clear overall direction (D) in evolution, then it supports passive, not driven evolution (B). (10) Therefore, either Cope’s rule is due to passive evolution or it’s a methodological artifact.

Conclusion

Evolution is quite clearly passive and non-driven (Bonner, 2013). The fact of the matter is, as I’ve shown, evolution isn’t driven (progressive); it is passive, due to the drunken, random walk that organisms take from the minimum left wall of complexity. The discussions of developmental plasticity and directed mutation further show that evolution can’t be progressive or driven. Organism body plans had nowhere to go but up from the left wall of minimal complexity, and that means the increase in variance in, say, body size is due to passive trends. Given the discussion here, we can draw one main inference: since evolution isn’t directed or progressive, the so-called Cope’s (Deperet’s) rule is either due to passive trends or is a mere methodological artifact. The argument I have mounted for that claim is sound, and so it must be accepted that evolution is a random, drunken walk, not one of overall drivenness and progress; we must therefore look at the evolution of body size in this way.

Rushton tried to use the concept of evolutionary progress to argue that some races may be “more evolved” than other races, like “Mongoloids” being “more evolved” than “Caucasoids” who are “more evolved” than “Negroids.” But Rushton’s “theory” was merely a racist one, and it obviously fails upon close inspection. Moreover, even the claims Rushton made at the end of his book Race, Evolution, and Behavior don’t even work. (See here.) Evolution isn’t progressive so we can’t logically state that one population group is more “advanced” or “evolved” than another. This is of course merely Rushton being racist with shoddy “explanations” used to justify it. (Like in Rushton’s long-refuted r/K selection theory or Differential-K theory, where more “K-evolved” races are “more advanced” than others.)

Lastly, this argument I constructed based on the principles of Gould’s argument shows that there is no progress to evolution.

P1 The claim that evolutionary “progress” is real and not illusory is justified only if organisms deemed more “advanced” outnumber “lesser” organisms.
P2 There are more “lesser” organisms (bacteria/insects) on earth than “advanced” organisms (mammals/species of mammals).
C Therefore evolutionary “progress” is illusory.

The Theory of African American Offending versus Hereditarian Explanations of Crime: Exploring the Roots of the Black-White Crime Disparity

3450 words

Why do blacks commit more crime? Biological theories (racial differences in testosterone and the testosterone-aggression link, the AR gene, MAOA) are bunk. So how can we explain it? The Unnever-Gabbidon theory of African American offending (TAAO) (Unnever and Gabbidon, 2011)—where blacks’ experience of racial discrimination and stereotypes increases criminal offending—has substantial empirical support. To understand black crime, we need to understand the unique black American experience. The theory not only explains African American criminal offending, it also makes predictions which were borne out in independent, empirical research. I will compare the TAAO with hereditarian claims about why blacks commit more crime (higher testosterone and testosterone-driven aggression, the AR gene, and MAOA). I will show that hereditarian theories make no novel predictions while the TAAO does, and I will then discuss recent research which has borne out the predictions made by Unnever and Gabbidon. I will conclude by offering suggestions on how to combat black crime.

The folly of hereditarianism in explaining black American offending

Hereditarians have three main explanations of black crime: (1) higher levels of testosterone and high levels of testosterone leading to aggressive behavior which leads to crime; (2) low activity MAOA—also known in the popular press as “the warrior gene”—could be more prevalent in some populations which would then lead to more aggressive, impulsive behavior; and (3) the AR gene and AR-CAG repeats with lower CAG repeats being associated with higher rates of criminal activity.

When it comes to (1), the evidence on which race has higher levels of testosterone is mixed (due to the low-quality studies that hereditarians cite for their claim). In fact, two recent studies showed that non-Hispanic blacks didn’t have higher levels of testosterone than other races (Rohrmann et al, 2007; Lopez et al, 2013). Contrast this with the classical hereditarian claim that blacks do indeed have higher levels of testosterone than whites (Rushton, 1995), which uses Ross et al (1986) to make the case. (See here for my response on why Ross et al is not evidence for the hereditarian position.) Although Nyante et al (2012) showed a small increase in testosterone in blacks compared to whites and Mexican Americans using longitudinal data, the body of evidence shows that there are small to no differences in testosterone between blacks and whites (Richard et al, 2014). So despite claims that “African-American men have repeatedly demonstrated serum total and free testosterone levels that are significantly higher than all other ethnic groups” (Alvarado, 2013: 125), such claims are derived from flawed studies, and newer, more representative analyses show little to no difference in testosterone between blacks and whites.

Nevertheless, even if blacks had higher levels of testosterone than other races, this would still not explain racial differences in crime, since heightened aggression explains T increases; high T doesn’t explain heightened aggression. HBDers seem to have cause and effect backwards for this relationship. Injecting individuals with supraphysiological doses of testosterone as high as 200 and 600 mg per week does not cause heightened anger or aggression (Tricker et al, 1996; O’Connor et al, 2002). If the hereditarian hypothesis on the relationship between testosterone and aggression were true, then we would see the opposite of what Tricker et al and O’Connor et al found. Thus this discussion shows that hereditarians are wrong about racial differences in testosterone and wrong about causality in the T-aggression relationship. (The actual relationship is aggression causing increases in testosterone.) So this argument shows that the hereditarian simplification of the T-aggression relationship is false. (But see Pope, Kouri and Hudson, 2000, where a 600 mg dose of testosterone caused increased manic symptoms in some men, although in most men there was little to no change; there were 8 “responders” and 42 “non-responders.”)

When it comes to (2), MAOA is said to explain why those who carry the low-activity version of the gene have higher rates of aggression and violent behavior (Sohrabi, 2015; McSwiggin, 2017). Sohrabi shows that while the low-activity version of MAOA is related to higher rates of aggression and violent behavior, the relationship is mediated by environmental effects. But MAOA, to quote Heine (2017), can be seen as the “everything but the kitchen sink gene,” since MAOA is correlated with so many different things. At the end of the day, we can’t blame “warrior genes” for violent, criminal behavior. The relationship isn’t so simple, so this doesn’t work for hereditarians either.

Lastly, when it comes to (3): due to the failure of (1), hereditarians tried looking to the AR gene, attempting to relate CAG repeat length to criminal behaviors. For instance, Geniole et al (2019) tried to argue that “Testosterone thus appears to promote human aggression through an AR-related mechanism.” Ah, the last gasps of explaining crime through testosterone. But there is no relationship between CAG repeats and adolescent risk-taking, depression, dominance or self-esteem (Vermeer, 2010), nor between the number of CAG repeats and such outcomes in men and women (Valenzuela et al, 2022). So this, too, fails. (Also take a look at the just-so story on why descendants of African slaves are supposedly more sensitive to androgens; Aiken, 2011.)

Now that I have shown that the three main hereditarian explanations for higher black crime are false, I will show why blacks have higher rates of criminal offending than other races; the answer isn’t to be found in biology, but in sociology and criminology.

The Unnever-Gabbidon theory of African American criminal offending and novel predictions

In 2011, criminologists Unnever and Gabbidon published their book A Theory of African American Offending: Race, Racism, and Crime. In the book, they explain why they formulated the theory and why it doesn’t have any explanatory or predictive power for other races. That’s because it centers on the lived experiences of black Americans. In fact, the TAAO “incorporates the finding that African Americans are more likely to offend if they associate with delinquent peers but we argue that their inadequate reinforcement for engaging in conventional behaviors is related to their racial subordination” (Unnever and Gabbidon, 2011: 34). The TAAO focuses on the criminogenic effects of racism.

Our work builds upon the fundamental assumption made by Afrocentists that an understanding of black offending can only be attained if their behavior is situated within the lived experiences of being African American in a conflicted, racially stratified society. We assert that any criminological theory that aims to explain black offending must place the black experience and their unique worldview at the core of its foundation. Our theory places the history and lived experiences of African American people at its center. We also fully embrace the Afrocentric assumption that African American offending is related to racial subordination. Thus, our work does not attempt to create a “general” theory of crime that applies to every American; instead, our theory explains how the unique experiences and worldview of blacks in America are related to their offending. In short, our theory draws on the strengths of both Afrocentricity and the Eurocentric canon. (Unnever and Gabbidon, 2011: 37)

Two kinds of racial injustices highlighted by the theory—racial discrimination and pejorative stereotyping—have empirical support. Blacks are more likely to express anger, exhibit low self-control and become depressed if they believe the racist stereotype that they’re violent. It’s also been studied whether a sense of racial injustice is related to offending when controlling for low self-control (see below).

The core predictions of the TAAO and how they follow from it with references for empirical tests are as follows:

(Prediction 1) Black Americans with a stronger sense of racial identity are less likely to engage in criminal behavior than black Americans with a weak sense of racial identity. How does this prediction follow from the theory? TAAO suggests that a strong racial identity can act as a protective factor against criminal involvement. Those with a stronger sense of racial identity may be less likely to engage in criminal behavior as a way to cope with racial discrimination and societal marginalization. (Burt, Simons, and Gibbons, 2013; Burt, Lei, and Simons, 2017; Gaston and Doherty, 2018; Scott and Seal, 2019)

(Prediction 2) Experiencing racial discrimination increases the likelihood of black Americans engaging in criminal actions. How does this follow from the theory? TAAO posits that racial discrimination can lead to feelings of frustration and marginalization, and to cope with these stressors, some individuals may resort to committing criminal acts as a way to exert power or control in response to their experiences of racial discrimination. (Unnever, 2014; Unnever, Cullen, and Barnes, 2016; Herda, 2016, 2018; Scott and Seal, 2019)

(Prediction 3) Black Americans who feel socially marginalized and disadvantaged are more prone to committing crime as a coping mechanism and have weakened school bonds. How does this follow from the theory? TAAO suggests that those who experience social exclusion and disadvantage may turn to crime as a way to address their negative life circumstances and regain a sense of agency. (Unnever, 2014; Unnever, Cullen, and Barnes, 2016)

The data show that there is a racialized worldview shared by blacks, and that a majority of blacks believe that their fate rests on what generally happens to black people in America. Around 38 percent of blacks report being discriminated against, and most blacks are aware of the stereotype of them as violent. (Though a new Pew report states that around 8 in 10—about 80 percent—of blacks have experienced racial discrimination.) Racial discrimination and the belief in the racist stereotype that blacks are more violent are significant predictors of black arrests. It’s been shown that the more blacks are discriminated against and the more they believe that blacks are violent, the more likely they are to be arrested. Unnever and Gabbidon also theorized that the aforementioned isn’t just related to criminal offending but also to substance and alcohol abuse. Unnever and Gabbidon also hypothesized that racial injustices are related to crime since they increase the likelihood of experiencing negative emotions like anger and depression (Simons et al, 2002). It’s been experimentally demonstrated that blacks who perceive racial discrimination and who believe the racist stereotype that blacks are more violent express less self-control. The negative emotions from racial discrimination predict the likelihood of committing crime and similar behavior. It’s also been shown that blacks who have less self-control, who are angrier and who are depressed have a higher likelihood of offending. Further, while controlling for self-control, anger, depression and other variables, racial discrimination predicts arrests and substance and alcohol abuse. Lastly, the experience of being black in a racialized society predicts offending, even after controlling for other measures. Thus, it is ruled out that the reason why blacks are arrested more and perceive more racial injustice is low self-control. (See Unnever, 2014 for the citations and arguments for these predictions.) The TAAO also has more empirical support than racialized general strain theory (RGST) (Isom, 2015).
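To make the “controlling for” step concrete, here is a purely methodological sketch of the kind of test described above: does perceived discrimination predict offending once self-control is held constant? All data and variable names below are simulated placeholders of mine, not from Unnever (2014) or any real dataset.

```python
# Hedged sketch: logistic regression of offending on perceived discrimination,
# controlling for self-control. Data are simulated for illustration only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 2000

discrimination = rng.normal(0, 1, n)  # hypothetical discrimination scale
self_control = rng.normal(0, 1, n)    # hypothetical self-control scale

# Simulate offending so both predictors matter, mirroring the claim that
# discrimination predicts offending net of self-control.
logit = -1.0 + 0.6 * discrimination - 0.5 * self_control
offending = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([discrimination, self_control]))
result = sm.Logit(offending, X).fit(disp=0)
print(result.params)  # discrimination's coefficient stays positive here
```

The point of the design is that if low self-control alone explained the association, the discrimination coefficient would shrink toward zero once the control enters the model; the partial tests cited above report that it does not.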

So the predictions of the theory are: Racial discrimination as a contributing factor; a strong racial identity could be a protective factor while a weak racial identity would be associated with a higher likelihood of engaging in criminal activity; blacks who feel socially marginalized would turn to crime as a response to their disadvantaged social position; poverty, education and neighborhood conditions play a significant role in black American offending rates, and that these factors interact with racial identity and discrimination which then influence criminal behavior; and lastly it predicts that the criminal justice system’s response to black American offenders could be influenced by their racial identity and social perceptions which could then potentially lead to disparities in treatment compared to other racial groups.

Ultimately, the unique experiences of black Americans explain why they commit more crime. Thus, given the unique experiences of black Americans, there needs to be a race-centric theory of crime for black Americans, and this is exactly what the TAAO is. The predictions that Unnever and Gabbidon (2011) made from the TAAO have independent empirical support. This is way more than the hereditarian explanations can say on why blacks commit more crime.

One way, which follows from the theory, to insulate black youth from discrimination and prejudice is racial socialization, where racial socialization is how “thoughts, ideas, beliefs, and attitudes regarding race and racism are communicated across generations” (Burt, Lei, & Simons, 2017; Hughes, Smith, et al., 2006; Lesane-Brown, 2006) (Said and Feldmeyer, 2022).

But also related to the racial socialization hypothesis is the question: why don’t more blacks offend? Gaston and Doherty (2018) set out to answer this question, finding that positive racial socialization buffered the effects of weak school bonds on adolescent substance abuse and criminal offending for males but not females. This is yet another prediction from the theory that has been borne out: weak school bonds increase criminal offending.

Gaston and Doherty (2018) argue that black Americans face racial discrimination that whites in general just do not face:

Empirical studies have pointed to potential explanations of racial disparities in violent crimes, often citing that such disparities reflect Black Americans’ disproportionate exposure to criminogenic risk factors. For example, Black Americans uniquely experience racial discrimination—a robust correlate of offending—that White Americans generally do not experience (Burt, Simons, & Gibbons, 2012; Caldwell, Kohn-Wood, Schmeelk-Cone, Chavous, & Zimmerman, 2004; Simons, Chen, Stewart, & Brody, 2003; Unnever, Cullen, Mathers, McClure, & Allison, 2009). Furthermore, Black Americans are more likely to face factors conducive to crime such as experiencing poor economic conditions and living in neighborhoods characterized by concentrated disadvantage.

They conclude that:

The support we found for ethnic-racial socialization as a crime-reducing factor has important implications for broader criminological theorizing and practice. Our findings show the value of race-specific theories that are grounded in the unique experiences of that group and focus on their unique risk and protective factors. African Americans have unique pathways to offending with racial discrimination being a salient source of offending. While it is beyond the scope of this study to determine whether TAAO predicts African American offending better than general theories of crime, the general support for the ethnic-racial socialization hypothesis suggests the value of theories that account for race-specific correlates of Black offending and resilience.

TAAO draws from the developmental psychology literature and contends, however, that positive ethnic-racial socialization offers resilience to the criminogenic effect of weak school bonds and is the main reason more Black Americans do not offend (Unnever & Gabbidon, 2011, p. 113, 145).

Thus, combined with the fact that blacks face racial discrimination that whites in general just do not face, and the fact that racial discrimination has been shown to increase criminal offending, it follows that racial discrimination can lead to criminal offending; therefore, to decrease criminal offending we need to decrease racial discrimination. Since racism is born of ignorance and low education, it follows that education can decrease racist attitudes and, along with it, decrease crime (Hughes et al., 2007; Kuppens et al., 2014; Donovan, 2019, 2022).

Even partial tests of the TAAO have shown that racial discrimination is related to offending, and I would say that it is pretty well established that positive ethnic-racial socialization acts as a protective factor for blacks—this also explains why more blacks don't offend (see Gaston and Doherty, 2018). It is also known that bad (ineffective) parenting increases the risk for lower self-control (Unnever, Cullen, and Agnew, 2006). Black Americans share a racialized worldview and view the US as racist, due to their personal lived experiences with racism (Unnever, 2014).

The TAAO and situationism

Looking at what the TAAO is and the predictions it makes, we can see how the TAAO is a situationist theory. Situationism is a psychological-philosophical theory which emphasizes the influence of the situation on human behavior. It posits that people's actions and decisions are primarily shaped by the situational context they find themselves in. It highlights the role of the situation in explaining behavior; suggests that people may act differently depending on context; holds that situational cues present in the immediate environment can trigger specific behavioral responses; maintains that understanding the situation one is in is important for explaining why people act the way they do; and asserts that behavior is context-dependent and can vary across different situations. Although it seems that situationism conflicts with action theory, it doesn't. Action theory explains how people form intentions and make decisions within specific situations, basically addressing the how and why. Situationism actually complements action theory, since it addresses the where and when of behavior from an external, environmental perspective.

The TAAO suggests that experiencing racial discrimination can contribute to criminal involvement as a response to social marginalization, and situationism can provide a framework for exploring how specific instances of environmental stressors, discrimination, or other situational factors trigger criminal behavior in context. So while the TAAO focuses on the historical and structural factors behind why blacks commit more crime, adding situationism could show how the situational context interacts with those historical and structural factors to explain black American criminal behavior.

Thus, combining situationism and the TAAO can lead to novel predictions, like: predictions about how black Americans, when faced with specific discriminatory situations, may be more or less likely to engage in criminal behavior based on their perception of the situation; predictions about the influence of immediate peer dynamics in moderating the relationship between structural factors like discrimination and criminal behavior in the black American community; and predictions about how criminal responses vary with different types of situational cues—like encounters with law enforcement, experiences of discrimination, and economic stress—within the broader context of the TAAO's historical-structural framework.

Why we should accept the TAAO over hereditarian explanations of crime

Overall, I've explained why hereditarian explanations of crime fail. They fail because, when looking at the recent literature, the claims they make just do not hold up. Most importantly, as I've shown, hereditarian explanations lack empirical support, and the logic used in their defense is flawed.

We should accept the TAAO over hereditarianism because it has empirical validity: the TAAO is grounded in empirical research, and its predictions and hypotheses have been subjected to empirical tests and found to hold. The TAAO also recognizes that crime is a complex phenomenon influenced by factors like historical and contemporary discrimination, socioeconomic conditions, and the overall situational context. It also addresses the broader societal issues related to disparities in crime, which makes it more relevant for policy development and social interventions, acknowledging that to address these disparities we must address the contemporary and historical factors which lead to crime. The TAAO also doesn't stigmatize or stereotype; rather, it emphasizes the situational and contextual factors which lead to criminal activity. Hereditarian theories, on the other hand, can lead to stereotypes and discrimination, and since hereditarian explanations are false, we should also reject them (as I've explained above). Lastly, the TAAO has the power to generate specific, testable predictions which have clear empirical support. Thus, to claim that hereditarian explanations are true while disregarding the empirical power of the TAAO is irrational, since hereditarian explanations don't generate novel predictions while the TAAO does.

Conclusion

I have contrasted the TAAO with hereditarian explanations of crime. I showed that the three main hereditarian explanations—racial differences in testosterone and testosterone-caused aggression, the AR gene, and MAOA—all fail. I have also shown that the TAAO is grounded in empirical research, and that it generates specific, testable predictions on how we can address racial differences in crime. On the other hand, hereditarian explanations lack empirical support, specificity, and causality, which makes them ill-suited for generating testable predictions and informing effective policies. The TAAO's complexity, empirical support, and potential for addressing real-world issues make it a more comprehensive framework for understanding and attempting to ameliorate racial crime disparities, in contrast to the genetic determinism of hereditarianism. In fact, I was unable to find any hereditarian response to the TAAO, and that should be telling on its own.

Overall, I have shown that the predictions Unnever and Gabbidon generated from the TAAO enjoy empirical support, and I have shown that hereditarian explanations fail, so we should reject hereditarian explanations and accept the TAAO, due to the considerations above. I have also shown that the TAAO makes actionable policy recommendations; therefore, to decrease criminal offending we need to educate more, since racism is born of ignorance and education can decrease racial bias.

Action Potentials and their Role in Cognitive Interface Dualism

3000 words

Introduction

Rene Descartes proposed that the pineal gland was the point of contact—the interface—between the immaterial mind and physical body. He thought that the pineal gland in humans was different from and special compared to that of nonhuman animals, and that in humans it was the seat of the soul (Finger, 1995). This view was eventually shown to be false. However, claims that the mental can causally interact with the physical (interactionist dualism) have been met with similar criticism. The objection runs: if the mental does in fact causally interact with the physical, then—given physical laws like the conservation of energy—the mental must be identical with the physical; that is, the mental would be reducible to the physical after all, contradicting the dualist's claim that it is irreducible. This seems to be an issue for the truth of an interactionist dualist theory. But there are solutions. Deny that causal closure of the physical (CCP) is true (the world isn't causally closed), or argue that CCP is compatible with interactionist dualism, or argue that CCP is question-begging (assuming in a premise what it seeks to establish and conclude) in that it assumes without proper justification that all physical events must be due to physical causes, thereby illegitimately excluding the possibility of mental causation.

In this article I will provide some reasons to believe that CCP is question-begging, and I will argue that mental causation is invisible (see Lowe, 2008). I will also argue that action potentials are the interface by which the mental and the physical interact, which is what makes it possible for a conscious decision to issue in movement. I will provide arguments that show that interactionist dualism is consistent with physics, while showing that action potentials are the interface that Descartes was looking for. Ultimately, I will show how the mental interacts with the physical for mental causation to be carried out, and why this isn't an issue for the CCP. I will call the view I argue for here "cognitive interface dualism", since it centers on the influence of mental states on action potentials and thereby on the physical realm, and it conveys the idea that mental processes interface with physical processes through the conduit of action potentials, without implying a reduction of the mental to the physical. This makes it a substance dualist position, since it still treats the mental and the physical as two different substances.

Causal closure of the physical

It is claimed that the world is causally closed—meaning that every physical event or occurrence is due to physical causes. Basically, no non-physical (mental) factors can cause or influence physical events. Here's the argument:

(1) Every event in the world has a cause.
(2) Causes and effects within the physical world are governed by the laws of physics.
(3) Non-physical factors or entities, by definition, don’t belong to the physical realm.
(4) If a nonphysical factor were to influence a physical event, it would violate the laws of physics.
(5) Thus, the world is causally closed, meaning that all causes and effects in it are governed by physical interactions and laws.

But the issue here for the physicalist who wants to use causal closure is the fact that mental events and states are qualitatively different from physical events and states. This is evidenced in Lowe's distinction between intentional (mental) and event (physical) causation. Mental states like thoughts and consciousness possess qualitatively different properties than physical states. The causal closure argument assumes that physical events are the only causes of other physical events. But mental states appear to exert causal influence over physical events—for instance, voluntary action based on conscious decision, like my action right now of writing this article. So if mental states do influence physical events, then there must be interaction between the mental and physical realms. This interaction contradicts the idea of strict causal closure of the physical realm. Since mental causation is necessary to explain aspects of human action and consciousness, it follows that the physical world may not be causally closed.

The problem of interaction for interactionist dualism is premised on the CCP. Interaction supposedly violates the conservation of energy (CoE): if physical energy is needed to do physical work, then a conversion of mental into physical energy would result in an inexplicable increase in energy. I think there are many ways to attack this supposed knock-down argument against interactionist dualism, and I will make the case in an argument below that action potentials are where the brain and the mind interact to carry out intentions. However, there are no strong arguments for causal closure that don't beg the question (e.g., see Bishop, 2005; Dimitrijevic, 2010; Gabbani, 2013; Gibb, 2015), and the inductive arguments commit sampling errors or non sequiturs (Buhler, 2020). So the CCP is either question-begging or unsound (Menzies, 2015). I will discuss this issue before concluding this article, and I will argue that my case for APs as the interface between the mental and the physical, along with the question-beggingness of causal closure, actually strengthens my argument.
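To make the question-begging worry concrete, here is a toy formalization of the closure argument above—my own rendering in Lean, not anything drawn from the causal closure literature. Read strongly, premise (4) just is the closure thesis, so "deriving" conclusion (5) from it is logically immediate, which is exactly what it means for an argument to assume what it sets out to prove:

```lean
-- Toy rendering (my own): events, a "physical" predicate, and a causal relation.
variable {Event : Type} (physical : Event → Prop) (causes : Event → Event → Prop)

-- Premise (4), read strongly: any cause of a physical event is itself physical.
def Closure : Prop :=
  ∀ c e, physical e → causes c e → physical c

-- Conclusion (5) is the very same proposition, so the "derivation" is trivial:
theorem closure_from_premise_four (h4 : Closure physical causes) :
    Closure physical causes :=
  h4
```

Nothing in premises (1) through (3) does any work here; the content that excludes mental causes sits entirely in (4), which is why I say the argument begs the question.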

The argument for action potentials as the interface between the mind and the brain

The view that I will argue for here is, I think, unique, and has never been argued for in the philosophical literature on mental causation. In the argument that follows, I will show how taking action potentials (APs) to be the point of contact—the interface—between the mind and brain violates neither the CCP nor the CoE.

In an article on strength and neuromuscular coordination, I explained the relationship between the mind-muscle connection and action potentials:

The above diagram I drew is the process by which muscle action occurs. In my recent article on fiber typing and metabolic disease, I explained the process by which muscles contract:

But the skeletal muscle will not contract unless the skeletal muscles are stimulated. The nervous system and the muscular system communicate, which is called neural activation—defined as the contraction of muscle generated by neural stimulation. We have what are called "motor neurons"—neurons located in the CNS (central nervous system) which can send impulses to muscles to move them. This is done through a special synapse called the neuromuscular junction. A motor neuron that connects with muscle fibers is called a motor unit, and the point where the muscle fiber and motor unit meet is called the neuromuscular junction. It is a small gap between the nerve and muscle fiber called a synapse. Action potentials (electrical impulses) are sent down the axon of the motor neuron from the CNS, and when the action potential reaches the end of the axon, chemical messengers called neurotransmitters are then released. Neurotransmitters transport the signal from the nerve to the muscle.

So action potentials (APs) culminate at the synapse. Regarding acetylcholine: when it is released, it crosses the synapse (the small space which separates the muscle from the nerve) and binds onto the receptors of the muscle fibers. Now we know that, in order for a muscle to contract, the brain sends the chemical message (acetylcholine) across synapses, which then initiates movement. So, as can be seen from the diagram above, the MMC refers to the chemo-electric connection between the motor cortex, the cortico-spinal column, peripheral nerves, and the neuromuscular junction. A neuromuscular junction is a synapse formed by the contact between a motor neuron and a muscle fiber.

This explanation forms the basis for my argument on how action potentials are the interface—the point of contact—by which the mind and brain meet.

As I have already shown, APs are electrochemical events that transmit signals within the nervous system, and they are generated as the result of neural activity which can be influenced by mental states like thoughts and intentions. While the brain operates in accordance with physical laws and obeys the CoE, the initiation of APs could be (and, I argue, sometimes is) influenced by mental intentions and processes. Mental processes could modulate the threshold or likelihood of AP firing through complex biochemical mechanisms that do not violate the CoE. Of course, the energy that is required for generating APs ultimately derives from metabolic processes within the body, which could be influenced by mental states like attention, intention, and emotional states. This interaction does not violate the CoE, nor does it require a violation of the laws of physics, since it operates within the bounds of biochemical and electrochemical processes that respect the CoE. Therefore, APs serve as the point of controlled interaction between the mental and physical realms, allowing for mental causation without disrupting the overall energy balance in the physical world.

Lowe argued that mental causation is invisible, and therefore not amenable to scientific investigation. This view can be integrated into my argument that APs serve as the interface between the two substances, mental and physical. APs are observable electrochemical events in a neuron which could be influenced by mental states. So, as I argued above, mental processes could influence or modulate the generation of APs. The invisibility of mental causation refers to the idea that mental events like thoughts, intentions, and consciousness are not directly perceptible the way physical objects or events are. In my view, APs hold a dual role. They function as the interface between the mental and the physical, providing the means by which the mental can influence physical events by shaping APs, and they also act as the causal mechanism connecting mental states to physical events.

Thus, given the distinction between physical events (like APs) and the subjective nature of mental states, the view I have argued for above is consistent with the invisibility of mental causation. Mental causation involves the idea that mental states can influence physical events, and that they have causal efficacy on the physical world. So our mental experiences can lead to physical changes in the world based on the actions we carry out. But since mental states aren't observable like physical states are, it's challenging to show how they could lead to effects on the physical world. We infer the influence of mental states on physical events through their effects on observable physical processes. We can't directly observe intention; we infer it on the basis of one's actions. Mental states could influence physical events through complex chains of electrochemical and biochemical processes, which would then make the causative relationship less apparent. So while APs serve as the interface, this doesn't mean that mental states and APs are identical. The mental can't be reduced to physiology (the physical): it encompasses a range of subjective experiences, emotions, thoughts, and intentions that transcend mechanistic explanations of neural activity.

It is quite obviously an empirical fact that the mental can influence the physical. Think of the fight-or-flight response. When one sees something that they are fearful of (like, say, an animal), there is then a concurrent change in certain hormones. This simple example shows how the mental can have an effect on the physical—where the perceptual event of seeing something fearful (which is also a subjective experience) leads to a physical change. The initial mental event of seeing something fearful is a subjective experience which occurs in the realm of consciousness and mental states. The subjective experience of fear then triggers the fight-or-flight response, which leads to the release of stress hormones like cortisol and adrenaline. These physiological changes are part of the body's response to a perceived threat based on the subject's personal subjective experience. So the release of stress hormones is a physical event, and these hormones then have measurable effects on the body—like an increase in heart rate, heightened alertness, and energy mobilization—which prepare the subject to either fight or flee from the situation that caused the fear. This is a solid example of how the mental can influence the physical.

The only way, I think, that my view can be challenged is by arguing that the CCP is true. But if the CCP is question-begging, then my proposition that mental states can influence APs is less contentious. Furthermore, my argument on APs is open to multiple interpretations of causal closure. So instead of strictly adhering to causal closure, my view can accommodate various interpretations that allow mental causation to have an effect in the physical realm. Thus, since I view causal closure as question-begging, this provides a basis for my view that mental states can influence APs and, by extension, the physical world. And if the CCP is false, my view on action potentials is actually strengthened.

The view I have argued for here is a simplified perspective on the relationship between the mental and the physical. But my intention isn't to offer a comprehensive account of all aspects of mental and physical interaction; rather, it is to highlight the role of APs as a point of connection between the mental and physical realms.

Cognitive interface dualism as a form of substance dualism

The view I have argued for here is a substance dualist position. Although it posits an intermediary in APs that facilitates interaction between the mental and physical realms, it still maintains the fundamental duality between mental and physical substances. Mental states are irreducible to physical states, and they interact through APs without collapsing into a single substance. Mental states involve subjective experiences, intentionality, and qualia, which are fundamentally different from the objective and quantifiable nature of the physical realm, as I have argued before. APs serve as the bridge—the interface—between the mental and the physical realms, so my dualistic perspective allows for interaction while still preserving the unique properties of the mental and the physical.

While APs serve as the bridge between the mental and the physical, the interaction between mental states and APs suggests that mental causation operates independently of physical processes. This, then, implies that the self, which originates in mental states, isn't confined to the physical realm, and that it isn't reducible to the physical. The self's subjective experiences, consciousness, and self-awareness cannot be explained by physical or material processes, which indicates an immaterial substance beyond the physical. The unity of consciousness—the integrated sense of self and personal identity over time—is better accounted for by an immaterial self that transcends changes in physical states. Lastly, mental states possess qualitative properties like qualia that defy reduction to physical properties. These qualities, then, point to a distinct and immaterial self.

My view posits a form of non-reductive mental causation, where mental states influence APs, acknowledging the nonphysical influence of the mental on the physical. Interaction doesn't imply reduction; mental states remain irreducible even though they impact physical processes. My view also accommodates consciousness, subjectivity, and intentionality, which can't be accounted for by material or physical processes. And it addresses the explanatory gap between objective physical processes and subjective mental processes, which can't be closed by reduction to physical brain (neural) processes.

Conclusion

The exploration of APs within the context of cognitive interface dualism offers a perspective on the interplay between the mental and physical substances. My view acknowledges APs as the bridge of interaction between the mental and the physical, and it fosters a deeper understanding of the role of mental causation in helping us understand reality.

Central to my view is the recognition that while APs serve as the interface or conduit by which the mental and the physical interact—and by which mental states can influence physical events—this does not entail that the mental is reducible to the physical. My cognitive interface dualism therefore presents a nuanced approach that navigates the interface between the seen and the unseen, the physical and the mental.

While traditional views of causal closure may raise questions about the feasibility of mental causation, the concept's rigidity is challenged by the intermediary role of APs. And while I do hold that the CCP is question-begging, the view I have argued for here explores an alternative avenue which transcends that limitation. So even if the strict view of the CCP were to fall, my view would remain strong.

This view is also inherently anti-reductionist, asserting that personal identity, consciousness, subjectivity, and intentionality cannot be reduced to the physical. Thus, it doesn't succumb to the traditional limitations of physicalism. Cognitive interface dualism also challenges the notion that we are reducible to our physical brains or our mental activity. The self—the bearer of mental states—isn't confined to neural circuitry; although the physical is necessary for our mental lives, it isn't a sufficient condition (Gabriel, 2018).

Lastly, of course, this view means that since the mental is irreducible to the physical, psychometrics isn't a measurement enterprise. Any argument that espouses the view that the mental is irreducible to the physical entails that psychometrics isn't measurement. So by acknowledging that mental states, consciousness, and subjective experiences transcend the confines of physical quantification, cognitive interface dualism dismantles the assumption that the human mind can be measured and encapsulated using numerical metrics. This view holds that the mental resists quantification, since only the physical is quantifiable—only the physical has a specified measured object, object of measurement, and measurement unit.

All in all, the view I call cognitive interface dualism explains how mental causation occurs through action potentials. It still holds that the mental is irreducible to the physical, but maintains that the two interact without the mental being reduced to the physical. This view, I think, is unique; it shows how mental causation occurs, and in doing so it shows how we perform actions.

IQ, Achievement Tests, and Circularity

2150 words

Introduction

In the realm of educational assessment and psychometrics, a distinction between IQ and achievement tests needs to be upheld. It is claimed that IQ is a measure of one's potential learning ability, while achievement tests show what one has actually learned. However, this distinction is not strongly supported in my reading of the literature. IQ and achievement tests are merely different versions of the same evaluative tool. This is what I will argue in this article: that IQ and achievement tests are different versions of the same test, and so any attempt to "validate" IQ tests based on other IQ tests, achievement tests, or job performance is circular. I will also argue that, of course, the goal of psychometrics in measuring the mind is impossible. The hereditarian argument, when it comes to defending their concept and the claim that they are measuring some unitary and hypothetical variable, then, fails. At best, these tests show one's distance from the middle class, since that's where most of the items on the tests derive from. Thus, IQ and achievement tests are different versions of the same test, and so they merely show one's "distance" from a certain kind of class-specific knowledge (Richardson, 2012), due to the cultural and psychological tools one must possess to score well on these tests (Richardson, 2002).

Circular IQ-ist arguments

IQ-ists have been using IQ tests since they were brought to America by Henry Goddard in 1913. But one major issue (one they still haven't solved—and quite honestly never will) was that they didn't have any way to ensure that the tests were construct valid. This is why, in 1923, Boring stated that "intelligence is what intelligence tests test", while Jensen (1972: 76) said "intelligence, by definition, is what intelligence tests measure." Such statements are circular because they define the construct by its own measure rather than providing real evidence or explanation.

Boring's claim that "intelligence is what intelligence tests test" is circular since it defines intelligence based on the outcome of "intelligence tests." So if you ask "What is intelligence?" and I say "It's what intelligence tests measure", I haven't actually provided a meaningful definition of intelligence. The claim merely rests on the assumption that "intelligence tests" measure intelligence, while telling us nothing about what intelligence actually is.

Jensen's claim that "intelligence, by definition, is what intelligence tests measure" is circular for similar reasons to Boring's, since it also defines intelligence by referring to "intelligence tests" while at the same time assuming that intelligence tests accurately measure intelligence. Neither claim provides an independent understanding of what intelligence is; each merely ties the concept of "intelligence" back to its "measurement" (by IQ tests). Jensen's defense of Spearman's hypothesis on the nature of black-white differences has also been criticized as circular (Wilson, 1985). Not only were Jensen (and by extension Spearman) guilty of circular reasoning, so too was Sternberg (Schlinger, 2003). Such a circular claim was also made by van der Maas, Kan, and Borsboom (2014).

But Jensen seemed to have changed his view, since in his 1998 book The g Factor he argues that we should dispense with the term "intelligence", but curiously that we should still study the g factor and assume identity between IQ and g… (Jensen made many more logical errors in his defense of "general intelligence", like saying not to reify intelligence on one page and then reifying it a few pages later.) Circular arguments have been identified not only in Jensen's writings on Spearman's hypothesis, but also in using construct validity to validate a measure (Gordon, Schonemann; Guttman, 1992: 192).

The same circularity can be seen when the correlation between IQ and achievement tests is brought up. "These two tests correlate, so they're measuring the same thing" is an example one may come across. But the error here is assuming that mental measurement is possible and that IQ and achievement tests are independent of each other. However, IQ and achievement tests are different versions of the same test. This is an example of circular validation, which occurs when a test's "validity" is established by the test itself, leading to a self-reinforcing loop.

IQ tests are often validated against older editions of the same test. For example, a newer version of the S-B would be "validated" against the older version it was created to replace (Howe, 1997: 18; Richardson, 2002: 301), which not only leads to circular "validation", but also carries forward the assumptions of the original test constructors (like Terman)—since Terman assumed men and women should be equal in IQ, that assumption is still there today. IQ tests are also often "validated" by comparing IQ test results to outcomes like job performance and academic performance. Richardson and Norgate (2015) have a critical review of the correlation between IQ and job performance, arguing that it's inflated by "corrections", while Sackett et al. (2023) report "a mean observed validity of .16, and a mean corrected for unreliability in the criterion and for range restriction of .23. Using this value drops cognitive ability's rank among the set of predictors examined from 5th to 12th" for the correlation between "general cognitive ability" and job performance.
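For readers unfamiliar with how these "corrections" work, here is a minimal sketch of the two standard formulas—Spearman's correction for criterion unreliability and Thorndike's Case II correction for range restriction. The formulas are the textbook ones, but the specific reliability and restriction values below are my own illustrative assumptions, chosen only to show how an observed .16 can be inflated to roughly .23; they are not the values Sackett et al. used:

```python
import math

def correct_for_criterion_unreliability(r_obs: float, r_yy: float) -> float:
    """Spearman's correction for attenuation (criterion side only):
    divide the observed r by the square root of the criterion reliability."""
    return r_obs / math.sqrt(r_yy)

def correct_for_range_restriction(r: float, u: float) -> float:
    """Thorndike's Case II correction; u is the ratio of the restricted SD
    to the unrestricted SD (u < 1 when the sample is range-restricted)."""
    return (r / u) / math.sqrt(1 + r**2 * (1 / u**2 - 1))

r = 0.16                                          # observed validity (Sackett et al., 2023)
r = correct_for_criterion_unreliability(r, 0.60)  # 0.60 is my assumed reliability
r = correct_for_range_restriction(r, 0.90)        # 0.90 is my assumed SD ratio
print(round(r, 2))                                # ~0.23 after both "corrections"
```

The point of the sketch is that the "corrected" value is a modeling choice: pick a lower assumed reliability or a smaller SD ratio and the same observed .16 inflates even further, which is exactly Richardson and Norgate's complaint.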

Validating against outcomes can also be circular: if a high IQ is used as a predictor of success in school or work, and success in school or work is then used as evidence validating the IQ test, the argument is circular. The test's validity is being supported by the very outcome it's supposed to predict.

Achievement tests are designed to assess what one has learned or achieved regarding a certain kind of subject matter. They are often validated by correlating test scores with grades or other kinds of academic achievement (which is also circular). But if high achievement test scores are used to validate the test and those scores are also used as evidence of academic achievement, then that is circular. Achievement tests are "validated" on their relationships with IQ tests and grades. Heckman and Kautz (2013) note that "achievement tests are often validated using other standardized achievement tests or other measures of cognitive ability—surely a circular practice" and that "Validating one measure of cognitive ability using other measures of cognitive ability is circular." It should also be noted that the correlation between college grades and job performance 6 or more years after college is only .05 (Armstrong, 2011).

Now what about the claim that IQ tests and achievement tests correlate so they measure the same thing? Richardson (2017) addressed this issue:

For example, IQ tests are so constructed as to predict school performance by testing for specific knowledge or text‐like rules—like those learned in school. But then, a circularity of logic makes the case that a correlation between IQ and school performance proves test validity. From the very way in which the tests are assembled, however, this is inevitable. Such circularity is also reflected in correlations between IQ and adult occupational levels, income, wealth, and so on. As education largely determines the entry level to the job market, correlations between IQ and occupation are, again, at least partly, self‐fulfilling

The circularity inherent in likening IQ and achievement tests has also been noted by Nash (1990). There is no distinction between IQ and achievement tests, since there is no theory or definition of intelligence that specifies how answering questions correctly on an IQ test relates to it.

But how, to put first things first, is the term 'cognitive ability' defined? If it is a hypothetical ability required to do well at school then an ability so theorised could be measured by an ordinary scholastic attainment test. IQ measures are the best measures of IQ we have because IQ is defined as 'general cognitive ability'. Actually, as we have seen, IQ theory is compelled to maintain that IQ tests measure 'cognitive ability' by fiat, and it therefore follows that it is tautologous to claim that IQ tests are the best measures of IQ that we have. Unless IQ theory can protect the distinction it makes between IQ/ability tests and attainment/achievement tests its argument is revealed as circular. IQ measures are the best measures of IQ we have because IQ is defined as 'general cognitive ability': IQ tests are the only measures of IQ.

The fact of the matter is, IQ "predicts" (is correlated with) school achievement because they are different versions of the same test (Schwartz, 1975; Beaujean et al., 2018). Since the main purpose of IQ tests in the modern day is to "predict" achievement (Kaufman et al., 2012), if we correctly identify IQ and achievement tests as different versions of the same test, then we can rightly state that the "prediction" is itself a form of circular reasoning. What is the distinction between "intelligence" tests and achievement tests? They have similar items on them, which is why they correlate so highly with each other. This, therefore, makes the comparison of the two in an attempt to "validate" one or the other circular.

I can now argue that the distinction between IQ and achievement tests is nonexistent. If IQ and achievement tests are different versions of the same test, then they will have similar item content and share the same domain of assessing knowledge and skills. They do contain similar item content—both can be considered tests of class-specific knowledge—so we can correctly argue that they are different versions of the same test.
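To illustrate the argument, here is a minimal simulation of my own (not from any of the cited papers): two differently labeled tests whose items all draw on the same pool of class-specific knowledge will correlate highly by construction, with no further "trait" needed to explain the correlation:

```python
import math
import random
import statistics  # statistics.correlation requires Python 3.10+

random.seed(0)

def knowledge_test(exposure: float, n_items: int = 40) -> int:
    """Score on a test whose items all tap the same underlying exposure
    to class-specific knowledge (logistic link from exposure to accuracy)."""
    p_correct = 1 / (1 + math.exp(-exposure))
    return sum(random.random() < p_correct for _ in range(n_items))

# One latent "exposure" value per simulated test-taker.
exposure = [random.gauss(0, 1) for _ in range(2000)]

# Two tests with different labels, built from the same item domain.
iq_style = [knowledge_test(e) for e in exposure]
achievement_style = [knowledge_test(e) for e in exposure]

# A high correlation (roughly .8-.9 on a typical run) falls out of the
# shared item domain alone -- labeling one test "IQ" adds nothing.
print(round(statistics.correlation(iq_style, achievement_style), 2))
```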

Moreover, even constructing tests has been criticized as circular:

Given the consistent use of teachers' opinions as a primary criterion for validity of the Binet and Wechsler tests, it seems odd to claim then that such tests provide "objective alternatives to the subjective judgments of teachers and employers." If the tests' primary claim to predictive validity is that their results have strong correlations with academic success, one wonders how an objective test can predict performance in an acknowledged subjective environment? No one seems willing to acknowledge the circular and tortuous reasoning behind the development of tests that rely on the subjective judgments of secondary teachers in order to develop an assessment device that claims independence of those judgments so as to then be able to claim that it can objectively assess a student's ability to gain the approval of subjective judgments of college professors. (And remember, these tests were used to validate the next generation of tests and those tests validated the following generation and so forth on down to the tests that are being given today.) Anastasi (1985) comes close to admitting that bias is inherent in the tests when he confesses the tests only measure what many anthropologists would call a culturally bound definition of intelligence. (Thorndike and Lohman, 1990)

Conclusion

It seems clear to me that almost the whole field of psychometrics is plagued with the problem of inferring causes from correlation and with circular arguments deployed to justify the claim that IQ tests measure intelligence—flawed arguments that relate IQ to job and academic performance. Moreover, circular arguments aren't restricted to IQ and achievement tests; they appear in twin studies as well (Joseph, 2014; Joseph et al., 2015). IQ and achievement tests merely show what one knows, not one's learning potential, since they are general knowledge tests—tests of class-specific knowledge. So even Gottfredson's "definition" of intelligence fails, since Gottfredson presumes IQ to be a measure of learning ability (never mind the fact that the "definition" is so narrow that I struggle to think of a valid way to operationalize it with culture-bound tests).

The fact that newer versions of tests already in circulation are "validated" against older versions of the same test means that the tests are circularly validated. The original test (say, the S-B) was never itself validated, so test constructors are just "validating" the newer test on the assumption that the older one was valid. The newer test, in being compared to its predecessor, means that the "validation" is occurring against an older test which has similar principles, assumptions, and content. The issue of content overlap, too, is a problem, since some questions or tasks on the newer test could be identical to questions or tasks on the older test. The point is, both IQ and achievement tests are merely knowledge tests, not tests of a mythical general cognitive ability.

Challenging the Myth of Objective Testing with an Absolute Scale in the Face of Non-Cognitive Influences

2200 words

The IQ-ists are at it again. This time, PP is claiming that the little tests he created are on an absolute scale—meaning that they have a true 0 point. This has been the Achilles heel of psychometry for many decades. But abstract concepts don't have true 0 points, and this is why "cognitive measurement" isn't possible. I will conceptually analyze PP's arguments for his "spatial intelligence test" and his "verbal intelligence test" and show that they aren't on absolute scales. I will then use the IQ-ists' favorite measurement—temperature (one they try to claim is like IQ)—and show the folly in his reasoning in claiming that these tests are on an absolute scale. I will then discuss the real reasons for score disparities, relate them to social class and one's life experiences, and argue that the score results merely reflect environmental variables.

Fixed reference points and absolute scales

There are no fixed reference points for "IQ" like there are for temperature. IQ-ists have claimed for decades that temperature is like IQ while thermometers are like IQ tests (Nash, 1990). But I have shown the confused thinking of hereditarians on this issue. An absolute scale requires a fixed reference point or a true 0 point which can be objectively established. Physical quantities like distance, weight, and temperature have natural, objective 0 points which can serve as fixed reference points. But nonphysical or abstract concepts lack inherent or universally agreed-upon 0 points which can serve as consistent reference points. So only physical quantities can truly be measured on an absolute scale, since only they possess natural 0 points which provide a foundation for measurement.

If "spatial intelligence" were a unitary and objectively measurable cognitive trait, then individuals' spatial abilities should consistently align across various tasks. But individuals often exhibit significant variability in their performance across spatial tasks, excelling in one aspect and not others. This variability suggests that "spatial intelligence" isn't a unitary concept. So the notion of a single, unitary, measurable "spatial intelligence" is questionable.

If the test were on an absolute scale for measuring "spatial intelligence", then the scores obtained would directly reflect the inherent "spatial intelligence" of individuals, without being influenced by factors like puzzle complexity, practice, or other variables. But the scores are influenced by factors like puzzle complexity and practice effects (like having done similar things in the past). Since the scores are influenced by such factors, the test isn't on an absolute scale.

If a measurement is on an absolute scale, then it should produce consistent results across different contexts and scenarios, reflecting a stable underlying trait. But cognitive abilities can be influenced by various external factors like stress, fatigue, motivation, and test-taking conditions. These external factors can lead to fluctuations in performance which aren't indicative of the "trait" that's supposedly being measured; they merely reflect the circumstances of the moment the test was taken in. So the concept of an absolute scale for measuring cognitive abilities fails to account for the impact of external variables, which can introduce variability and inaccuracies into the "measurement." This undermines the claim that this—or any—test is on an absolute scale, since motivation, stress, and other socio-cognitive factors influence performance, as Richardson (2002: 287-288) notes:

the basic source of variation in IQ test scores is not entirely (or even mainly) cognitive, and what is cognitive is not general or unitary. It arises from a nexus of sociocognitive-affective factors determining individuals’ relative preparedness for the demands of the IQ test. These factors include (a) the extent to which people of different social classes and cultures have acquired a specific form of intelligence (or forms of knowledge and reasoning); (b) related variation in ‘academic orientation’ and ‘self-efficacy beliefs’; and (c) related variation in test anxiety, self-confidence, and so on, which affect performance in testing situations irrespective of actual ability.

Such factors, which influence test scores, merely show what one was exposed to in one's life, under my DEC framework. Socio-cognitive factors related to social class could introduce bias, since people from different backgrounds are exposed to different information, have unequal access to information and test prep, and differ in familiarity with item content. Thus, we can look at these scores as mere social class surrogates.

If test scores are influenced by stress, anxiety, fatigue, motivation, familiarity, non-cognitive factors, and socio-cognitive factors tied to social class, then the concept of an absolute scale for measuring cognitive abilities does not hold. I have established that test scores can indeed be influenced by myriad external factors. So given that these factors affect test scores and undermine the assumption of an absolute scale, the concept of measuring cognitive ability on such a scale is challenged (don't forget the irreducibility arguments). Further, the argument that "spatial intelligence" is not measurable on an absolute scale due to its nonphysical nature aligns with this perspective, which further supports the idea that the concept of an absolute scale isn't applicable in these contexts. Thus, the implications for testing are profound: score differences are due to social class and one's life experiences, not any kind of "genotypic IQ" (which is an oxymoron).

Regarding vocabulary, this is influenced by the home environment—the types of words one is exposed to growing up (and can therefore also be integrated into the DEC). Kids from lower SES families hear fewer words at home and in their neighborhoods (low SES children hear 30 million fewer words than higher SES children) (Brito, 2017). We know that word usage is the strongest determinant of child vocabulary growth, and that less educated parents use fewer words with less complex syntax (Perkins, Finegood, and Swain, 2013). The quality of the language addressed to children also matters (Golinkoff et al., 2023). We can then liken this to the Vygotskian More Knowledgeable Other (MKO). An MKO has knowledge that their dependent doesn't. But if the MKO in this instance is less educated or low-income, then they will use fewer words, and this feature will then characterize the home. Such tests merely show what one was exposed to in one's life, not any underlying unitary "thing" like the IQ-ists claim.
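As a back-of-the-envelope check on the oft-cited "30 million word" figure, here is the extrapolation behind it, sketched in code. The hourly rates are the commonly cited ones from Hart and Risley's research (roughly 616 vs. 2,153 words heard per hour), and the assumption of 100 waking hours per week with caregivers is part of the standard extrapolation, not my own data:

```python
# Reconstructing the "30 million word gap" extrapolation.
words_per_hour_low = 616     # commonly cited rate for the lowest-SES homes
words_per_hour_high = 2153   # commonly cited rate for the highest-SES homes
hours_per_week = 100         # assumed waking hours of caregiver exposure
weeks = 52 * 4               # the first four years of life

gap = (words_per_hour_high - words_per_hour_low) * hours_per_week * weeks
print(f"{gap:,} words")      # ~32,000,000 -- the "30 million word gap"
```

With the arithmetic on the table, the mechanisms behind these exposure differences are summarized by Brito: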

Increasing both the amount and diversity of language within the home can positively influence language development, regardless of SES. Repeated exposure to words and phrases increases the child’s opportunity to learn and remember (McGregor, Sheng, & Ball, 2007). The complexity of grammar, the responsiveness of language to the child, and the use of questions all aid language development (Bornstein, Tamis-LeMonda, Hahn, & Haynes, 2008; Huttenlocher, Waterfall, Vasilyeva, Vevea, & Hedges, 2010). Besides frequency of language input, how caregivers communicate with children also affects children’s language skills. Children from higher SES families experience more gestures by their care-givers during parent–child interactions; these SES differences predict vocabulary differences at 54 months of age (Rowe & Goldin-Meadow, 2009). Parent–child interactions provide a context for language exposure and mold the child’s language development. Specific characteristics of the caregiver, including affect, responsiveness, and sensitivity predict children’s early and later language skills (Murray & Hornbaker, 1997; Tamis-LeMonda, Bornstein, Baumwell, & Melstein Damast, 1996). Maternal sensitivity partially explains links between SES and both children’s receptive and expressive language skills at age 3 years (Raviv, Kessenich, & Morrison, 2004). These differences also appear across culture (Mistry, Biesanz, Chien, Howes, & Benner, 2008). Maternal supportiveness partially explained the link between SES and language outcomes at 3 years of age, for both immigrant and native families in the United States. (Brito, 2017: 3-4)

The issue of temperature

This can be illustrated using the IQ-ists' favorite (real) measurement—temperature. The Kelvin scale avoids the issues in the arguments above. On the Kelvin scale, temperature is measured in relation to absolute 0 (the point where molecular motion theoretically stops). It doesn't involve factors like variability in measurement techniques, practice effects, or individual differences. The Kelvin scale has a consistent reference point—absolute 0—which provides a fixed baseline for temperature measurement. The values on the Kelvin scale are directly tied to a true 0 point.

There are no external influences on the measurement of temperature (beyond whatever makes the mercury in the thermometer move up or down), like the type of thermometer used or one's familiarity with temperature measurement. External factors like these aren't relevant to the Kelvin scale, unlike puzzle complexity and practice effects on the spatial abilities test.

Finally, temperature values on the Kelvin scale are universally applicable, which means that a specific temperature corresponds to the same level of molecular motion regardless of who performs the measurement or what instrument is used. So the Kelvin temperature scale doesn't have the same issues as PP's little "spatial intelligence" test. It has a clear and consistent measurement framework, where values directly represent the underlying physical phenomenon of molecular motion without being influenced by external factors or individual differences. When you think about actual, established measurements like temperature and then try to relate them to IQ, the folly of "mental measurement" reveals itself.
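A quick sketch makes the contrast concrete. Ratio statements are licensed on the Kelvin scale because it has a true zero; they are licensed neither on the Celsius scale (an interval scale with a conventional zero) nor on an IQ scale (a norm-referenced scale with no zero at all):

```python
def celsius_to_kelvin(c: float) -> float:
    """Convert Celsius to Kelvin; 0 K is a true zero (no thermal energy)."""
    return c + 273.15

# "40 C is twice as hot as 20 C" is false: relative to the true zero of
# the Kelvin scale, the ratio is only ~1.07, not 2.0.
print(celsius_to_kelvin(40.0) / celsius_to_kelvin(20.0))  # ~1.068

# An IQ score has no such zero: IQ 140 vs. IQ 70 is a difference in rank
# on a norm-referenced scale, so "twice as intelligent" is undefined.
```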

Now, having said all of this, I can draw a parallel between the argument against an absolute scale for cognitive abilities and the concept of temperature.

Temperature measurements, while influenced by external factors (since these are what make the mercury travel up or down in the thermometer) like atmospheric pressure and humidity, still have an absolute 0 point on the Kelvin scale, which represents a complete absence of thermal energy. Unlike "spatial intelligence", temperature has a fixed reference point which serves as an objective 0 point, allowing it to be measured on an absolute scale. The external factors influencing temperature measurement are fundamentally different from the factors which influence one's performance on a test, since they don't introduce subjective variations in the same manner. So while temperature is influenced by external factors, its measurement is fundamentally different from that of nonphysical concepts, due to the presence of an objective 0 point and the distinct nature of the influencing factors. This is put wonderfully by Nash (1990: 131):

First, the idea that the temperature scale is an interval scale is a myth and, second, a scale zero can be established for an intelligence scale by the same method of extrapolation used in defining absolute zero temperature. In this manner Eysenck (p. 16) concludes, 'if the measurement of temperature is scientific (and who would doubt that it is?) then so is that of intelligence.' It should hardly be necessary to point out that all of this is special pleading of the most unabashed sort. In order to measure temperature three requirements are necessary: (i) a scale, (ii) some thermometric property of an object and, (iii) fixed points of reference. Zero temperature is defined theoretically and successive interval points are fixed by the physical properties of material objects. As Byerly (p. 379) notes, that 'the length of a column of mercury is a thermometric property presupposes a lawful relationship between the order of length and the temperature order under certain conditions.' It is precisely this lawful relationship which does not exist between the normative IQ scale and any property of intelligence. The most obvious problem with the theory of IQ measurement is that although a scale of items held to test 'intelligence' can be constructed there are no fixed points of reference. If the ice point of water at one atmosphere fixes 273.15 K, what fixes 140 points of IQ? Fellows of the Royal Society? Ordinal scales are perfectly adequate for certain measurements, Moh's scale of scratch hardness consists of ten fixed points, from talc to diamond, and is good enough for certain practical purposes. IQ scales (like attainment test scales) are ordinal scales, but this is not really to the point, for whatever the nature of the scale it could not provide evidence for the property IQ or, therefore, that IQ has been measured.
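Nash's Mohs example can be sketched the same way (my own illustration): an ordinal scale licenses rank comparisons but not differences or ratios, which is exactly the logical situation of an IQ scale:

```python
# Mohs hardness is ordinal: the numbers carry order, not magnitude.
mohs = {"talc": 1, "apatite": 5, "diamond": 10}

# Rank comparisons are meaningful on an ordinal scale:
print(mohs["diamond"] > mohs["apatite"])  # True -- diamond scratches apatite

# But "diamond is twice as hard as apatite" is not licensed by the scale:
# on absolute (e.g., Vickers) hardness, diamond is vastly more than twice
# as hard, because Mohs ranks say nothing about magnitudes. The same holds
# for "an IQ of 140 is twice an IQ of 70."
```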

Conclusion

It's quite obvious that the IQ-ists have no leg to stand on, which is why they need to claim that their tests are on absolute scales even when it leads to absurd conclusions. The fact that test performance is influenced by myriad non-cognitive traits tied to one's social class (Richardson, 2002) shows that these—and all—tests take place in certain cultural contexts, meaning that all tests are culture-bound, as argued by Cole (2004) with his West African Binet argument.

The fact of the matter is, "mental measurement" is impossible, and all these tests do is show one's proximity to a certain kind of class-specific knowledge, not any kind of general cognitive "strength". Taking a Vygotskian perspective on this issue will allow us to see how and why people score differently from each other, and it comes down to their home environment and what they learn in their lives.

Nevertheless, the claims from IQ-ists that they have a specified measured object, object of measurement, and measurement unit for IQ, or that their tests have a true 0 point, are absurd, since these things are properties of physical objects, not non-physical, mental ones. The Vygotskian perspective will allow us to understand score variances between individuals and groups, as I have argued before. We don't need to claim that there is an absolute scale for cognitive assessment, nor do we need to claim that mental measurement is possible, for this to be a truism. So, yet again, PP's argument fails.

Ashkenazi Jews Are White

2700 words

Introduction

Recently, I have been seeing people say that Ashkenazi Jews (AJs) are not white. Some may say that Jews "pretend to be white" so they can accomplish their "group goals" (like pitting whites and blacks against each other in an attempt to sow racial strife, out of an ethnic nepotism said to follow from their genetic similarity). I have also seen people deriding Jews for saying "I'm white" and then finding an instance of them saying "I'm Jewish" (see here for an example), as if that's a contradiction—but it's not. It's the same thing as saying "I'm Italian… I'm white" or "I'm German… I'm white." But since pluralism about race is true, there could be some contexts and places in which Jews aren't white, due to the social construction of racial identities. However, in the American context it is quite clear: in both historical and contemporary American thought, AJs are white.

But a claim like this raises an important question: if AJs are not white, then what race are they? This is a question I will answer in this article, and I will of course show that AJs are indeed white on an American conception of race. Using Quayshawn Spencer's racial identity argument, I will assume that Ashkenazi Jews aren't white, and then I will argue that this leads to a contradiction, so Jews must be white. And while there was discussion about the racial status of Jews after they began emigrating to America through Ellis Island, I will show that Jews arrived in America as whites.

White or not?

The question of whether or not AJs are white is a vexing one. Of course, AJs are a religious and ethnic group. However, this doesn't mean that they have their own specific racial category. It's like if one says they are German, or Italian, or British. Those are mere ethnicities which make up the white racial group. One study found that AJs have "White privilege vis-à-vis persons of color. This privilege, however, is limited to Jews who can 'pass' as White gentiles" (Blumenfeld, 2009). Jews who can "pass as white" are obviously white, and there is no other race for them to be.

This is due to the social nature of race. Since race is a social construct, the way people's racial background is perceived in America is based on how they look (their phenotype). An Ashkenazi Jew saying "I'm Jewish. I'm white" isn't a contradiction, since AJs aren't a race. It's just like saying "I'm Italian. I'm white" or "I'm German. I'm white." It's quite obviously an ethnic group which is a part of the white race. Jews are white, and whites are a socialrace.

This discussion is similar to the one where it is claimed that "Hispanic/Latino/Spanish" people aren't white. But that, too, is a ridiculous claim. In cluster studies, HLSs don't have their own cluster; they cluster with the group from which the majority of their ancestry derives (Risch et al., 2002). Saying that AJs aren't white is similar to this.

But during WWII, Jews were persecuted in Nazi Germany, and eventually some 6 million Jews were killed. Jews, in this instance, were seen as a socialrace in Germany, and so they were racialized. It has been shown that Germans who grew up under the Nazi regime are much more anti-Semitic than Germans born before or after it, and it was Nazi schooling which contributed to this the most (Voigtlander and Voth, 2015). This shows how malleable beliefs are, both an individual's and a whole society's, and how effective propaganda is. The Nuremberg laws of 1935 codified anti-Jewish sentiment in the Nazi racial state, and so the state needed a way to identify Jews. It settled on the religious affiliation of one's four grandparents. When one's origins were in doubt, the Reich Kinship Office was deployed to ascertain one's genealogy. And when even this could not be done, one's physical attributes would be assessed using 120 physical measurements comparing the individual and their parents (Rupnow, 2020: 373-374).

This bears on Whoopi Goldberg's divisive comment from February 2022, in which she stated that the attempted genocide of Jews in Nazi Germany "wasn't about race" but was about "man's inhumanity to man; [it involved] two groups of white people." Goldberg was of course operating under an American conception of race, so I can see why she would say that. However, in Nazi Germany at the time, Jews were Racialized Others, and so they were a socialrace in Germany.

Per Pew, most Jews in America identify as white:

92% of U.S. Jews describe themselves as White and non-Hispanic, while 8% say they belong to another racial or ethnic group. This includes 1% who identify as Black and non-Hispanic; 4% who identify as Hispanic; and 3% who identify with another race or ethnicity – such as Asian, American Indian or Hawaiian/Pacific Islander – or with more than one race.

A supermajority (94%) of American Jews were (and identified as) white and non-"Hispanic" in Pew's 2013 research (Lugo et al, 2013); the 2020 figure (92%) is down slightly from that:

[Figure from Lugo et al, 2013]

AJs were viewed as white as early as 1790, when the Naturalization Act was put into law, restricting naturalized citizenship to free white persons (Tanner, 2021). Srole (1965) stated flatly that "Jews are white." The perception that all Jews are white came after WWII (Levine-Rasky, 2020), and that claim is of course false: all Jews certainly aren't white, but some Jews are. Thus, even historically in America, AJs were seen as white. Yang and Koshy (2016) write:

We found no evidence from U.S. censuses, naturalization legislation, and court cases that the racial categorization of some non-Anglo-Saxon European immigrant groups such as the Irish, Italians, and Jews changed to white. They were legally white and always white, and there was no need for them to switch to white.

White ethnics could be considered ethnically inferior and discriminated against because of their ethnic distinctions, but in terms of race or color, they were all white and had access to resources not available to nonwhites.

It was precisely because of the changing meanings of race that “the Irish race,” “the German race,” “the Dutch race,” “the Jewish race,” “the Italian race,” and so on changed their races and became white. In today’s terminology, it should be read that these European groups changed their ethnicities to become part of whites, or more precisely they were racialized to become white.

Our findings help resolve the controversy over whether certain U.S. non-Anglo-Saxon European immigrant groups became white in historical America. Our analysis suggests that "becoming white" carries different meanings: change in racial classification, and change in majority/minority status. In terms of the former, "becoming white" for non-Anglo-Saxon European immigrant groups is bogus. Hence, the argument of Eric Arnesen (2001), Adolph Reed (2001), Barbara Fields (2001), and Thomas Guglielmo (2003) that the Irish, Italians, and Jews were white on arrival in America is vindicated.

But one article in The Forward argued that "Ashkenazi Jews are not functionally white." The author (Danzig) attempts to draw an analogy between longtime NAACP leader Walter White, who was "white-passing" (both of his parents were born into slavery), and Jews who are "white-passing" "due to years of colonialism, expulsion and exile in European lands." The author then claims that as long as Jews maintain their unique Jewish identity, they are a racial group. This article is a response to another which claims that Ashkenazi Jews are "functionally white" (Burton). Danzig discusses Burton's claim that a "white-passing 'Latinx'" person could be deported if their immigration status were discovered, which of course implies that "Hispanics" are themselves a racial group (they aren't). Danzig also discusses the discrimination that his family went through in the 1920s, stating that they couldn't do certain things because they were Jewish. The argument in Danzig's article, I think, is confused. It's confused because the fact that Jews were discriminated against in the past doesn't mean they weren't white. In fact, Jews, Italians, and the Irish were white on arrival in the United States (Steward, 1964; Yang and Koshy, 2016). But this doesn't mean that they didn't face discrimination. That is, Jews, Italians, and the Irish didn't change to white; they were always legally white in America. (But see Gardaphe, 2002, Bisesi, 2017, Baddorf, 2020, and Rubin, 2021. Italians didn't become white as those authors claim; they were white upon arrival.) So Danzig's claim fails: Jews are functionally white because they are white and they arrived in America as white. Claims to the contrary that AJs (and Italians and the Irish) became white are clearly false.

So despite claims that Jews became white after WWII, Jews are in fact white in America (Pearson and Geronimus, 2011). Of course, in the early 1900s, as immigrants were arriving at Ellis Island, the question of whether Jews ("Hebrews," in the parlance of the time) were white, or even whether they were their own racial group, was the subject of considerable discussion (Goldstein, 2005; Pearlman, 2018). But the fact that there was ethnic strife among new-wave immigrants at Ellis Island doesn't entail that they were racial groups or that those European immigrants weren't white. It's quite clear that Jews, like Italians and the Irish, were considered white upon arrival.

Now that I have established that AJs are indeed white (and arrived in America as white), despite the confused protestations of some authors, I will formalize the argument that AJs are white, since if they aren't white, then they would need to fit into one of the other 4 racial categories.

Many may know that I push Quayshawn Spencer's OMB race theory, and that I am a pluralist about race. In the volume What is Race?: Four Philosophical Views, philosopher of race Quayshawn Spencer (2019: 98) writes:

After all, in OMB race talk, White is not a narrow group limited to Europeans, European Americans, and the like. Rather, White is a broad group that includes Arabs, Persians, Jews, and other ethnic groups originating from the Middle East and North Africa.

Although there is some research on the racial identity of MENA (Middle Eastern/North African) people and how they may not perceive themselves as white or be perceived as white (Maghbouleh, Schachter, and Flores, 2022), the OMB is quite clear that the social group designated "white" doesn't refer only to Europeans (Spencer, 2019).

So, if AJs aren't white, then they must be part of another of the 4 remaining OMB races (black, Native American, East Asian, or Pacific Islander). Part of this racial scheme is K=5: when K is set to 5 in STRUCTURE, 5 clusters are produced, and these map onto the OMB races. But among those 5 clusters, there is no Jewish cluster. Note that I am not denying that there is some kind of genetic structure to AJs; I'm denying that this would entail that they are a racial group. If they were, they would appear as a distinct cluster in these runs. AJs are merely an ethno-religious group within the white socialrace. So let's assume this is true: Ashkenazi Jews are not white.
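To illustrate the K=5 point, here is a minimal sketch of model-based clustering in the spirit of STRUCTURE. STRUCTURE itself is a standalone program with its own admixture model; the scikit-learn mixture model and the simulated genotype matrix below are stand-ins of my own, for illustration only:

```python
# Minimal sketch of model-based clustering at K=5, as a rough stand-in
# for STRUCTURE's admixture model (STRUCTURE itself is a separate
# program). The genotype matrix is simulated; real runs use SNP data.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Hypothetical data: 500 individuals x 1,000 biallelic markers (0/1/2
# allele counts), drawn from 5 populations with distinct frequencies.
K = 5
freqs = rng.uniform(0.05, 0.95, size=(K, 1000))        # per-population allele freqs
labels_true = rng.integers(0, K, size=500)             # population of origin
X = rng.binomial(2, freqs[labels_true]).astype(float)  # genotype counts

# Fit a 5-component mixture; each component plays the role of a cluster.
gm = GaussianMixture(n_components=K, covariance_type="diag", random_state=0)
assignments = gm.fit_predict(X)

# Membership probabilities are analogous to STRUCTURE's admixture
# proportions: each row is one individual's share in each of 5 clusters.
memberships = gm.predict_proba(X)
print(memberships[:3].round(2))
```

The analogy is only this: when 5 clusters are requested, every individual is partitioned across those 5 clusters, and there is no sixth "Jewish" cluster for AJs to fall into, which is the point being made above.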

When we consider the complexities of racial classification, it becomes apparent that societies tend to sort individuals into distinct categories based on physical traits, cultural background, and ancestry. If AJs aren't white in the American context, then they would have to fall into one of the four other racial groups in a Spencerian OMB race theory.

But there is one important aspect to consider here: the phenotype of Ashkenazi Jews. Many Ashkenazi Jews exhibit physical traits typically associated with "white" populations. This simple observation shows that AJs don't fit into the established categories of East Asian, Pacific Islander, black, or Native American; AJs' typical phenotype aligns more closely with that of white populations.

So, examining the racial landscape in America, we can see how social perceptions and classifications significantly shape how individuals are positioned within the broader framework. AJs have historically been classified and perceived as white in the American racial context, as shown above. So within American racetalk, AJs are predominantly classified in the white racial grouping.

So, taking all of this together, I can rightly state that AJs are white. We assumed at the outset that they aren't white, which would mean they belong to some other racial group; but they don't look like any other racial group, and they look like, and are treated as, white people (both in contemporary thought and historically). The assumption therefore leads to a contradiction. Here's the formalized argument:

P1: If AJs aren't white, then they must belong to one of the other 4 OMB racial categories (black, Native American, East Asian, or Pacific Islander).
P2: AJs do not belong to any of those four categories (their typical phenotype, and how they are classified, is that of white people).
P3: In the American racial context, AJs are predominantly classified and perceived as white.
Conclusion: Assume for reductio that AJs aren't white. From P1, they must then belong to one of the other 4 racial groups. But P2 says they belong to none of them, so the assumption is contradicted and must be rejected. Hence AJs are white, which is just what P3 independently attests.
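For readers who want the reductio in symbols, here is a minimal propositional sketch (the notation is mine, not Spencer's):

```latex
% Let W = "AJs are white" and O = "AJs belong to one of the other
% four OMB races (black, Native American, East Asian, Pacific Islander)".
% P1: \neg W \rightarrow O   (if not white, then some other OMB race)
% P2: \neg O                 (they fit none of the other four)
% Assume \neg W for reductio. By P1 and modus ponens, O.
% This contradicts P2, so the assumption fails and W follows.
\[
\neg W \rightarrow O,\quad \neg O \;\vdash\; W \qquad (\text{modus tollens})
\]
```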

So we must reject the assumption that AJs aren't white, and the logical conclusion is that AJs are white in the American context, based on their phenotype (and the fact that they arrived in America as white). Jews didn't "become white," as some claim (e.g., Brodkin, 2004). American Jews even benefit from white privilege (Schraub, 2019). MacDonald-Dennis's (2005, 2006) qualitative research (though small and not generalizable) shows that some Ashkenazi Jews think of themselves as white. AJs are legally and politically white.

Not all Jews are white, but some (indeed most) Jews are white (in America).

Conclusion

Thus, AJs are white. Although many authors have claimed that Jews became white after their arrival in America (or even after WWII), this claim is false; Jews were legally white as far back as 1790. If we assume that AJs aren't white, we are led to a contradiction: they would have to belong to one of the other 4 racial groups, but since they look (and are classified as) white, they cannot be part of those groups.

There are white Jews and there are non-white Jews. But when it comes to AJs, the question "When did they become white?" is nonsense, since they were always perceived and treated as white in America from its founding. Some AJs are white, some aren't; some Mizrahi Jews are white, some aren't. However, in the context of this discussion, it is quite clear that AJs are white, and there is no other race for them to be, based on the OMB race theory. In fact, in the minds of most Americans, Jews aren't a racialized group, though they are perceived as outsiders (Levin, Filindra, and Kopstein, 2022). There were some instances in history where Jews were racialized and others where they weren't (Hochman, 2017). But as I have shown here, in the American context since its inception, AJs are most definitely white. Saying that AJs are white is like saying that Italians or Germans are white; there is no contradiction. Jews are treated as white in the American social context, they look white, and they have been considered white since their arrival in America (like the Irish and Italians).

The evidence and reasoning presented in this article point to one conclusion: AJs are indeed white. This of course doesn't mean that all AJs are white; it means that some (and I would say most) are white. AJs have been historically, legally, and politically white. Mere claims that they aren't white are irrelevant.