NotPoliticallyCorrect

Personality Changes and Organ Transplants

2200 words

Introduction

People who have received organ transplants have reported stark changes in their personalities. Some (truly outrageous) stories claim that organ recipients acquire some of their donor's personality traits. There are a few proposed explanations: cellular memory, along with psychological, physiological, neurological, immunological, and DNA/RNA/epigenetic explanations. I think that the cases of personality change post-transplant are like twin studies, where only remarkable similarities get reported. Nonetheless, I'm skeptical of such claims. And I don't think that, even if they're true, dualism is harmed. I will conclude with a discussion of my cognitive interface dualism and how, even if the proposed mechanisms for observed personality changes in organ transplant recipients held, it wouldn't undermine my theory of dualism.

Proposed explanations for personality change post-transplant

Psychological explanations—The psychological impact of receiving a new organ could lead to a change in behavior. Recipients may feel a sense of gratitude or connection to the donor which could change their behavior. The emotional experience of having a transplant could profoundly affect the patient's personality before and after surgery. If people receive a heart from someone who was outgoing or adventurous and they then become adventurous, this is then attributed to the organ transplant, specifically to a kind of cellular memory (reviewed below). So the chain goes like this: transplant -> connection to donor -> change in personality

Physiological explanation—Medication used to prevent organ rejection could affect personality by affecting brain chemistry. People who undergo a transplant are given immunosuppressive medication to prevent rejection of the transplanted organ. These medications suppress the recipient's immune system, which could have various effects on the body; some can also pass the blood-brain barrier. Certain medications could also influence the production of neurotransmitters like serotonin, norepinephrine, and dopamine. An organ transplant is major surgery, and the body becomes inflamed afterward, so the physiological response to stress could affect organ systems after the transplant. The stress of organ transplantation, along with immunosuppressive medications, could lead to changes in hormone levels and signaling pathways. The trauma of surgery and recovery could also affect a person's mental states. Here's the chain: immunosuppressive medication -> altered brain functioning -> brain chemistry/function changes could alter personality

Neurological explanation—Organ transplants can lead to trauma of surrounding tissue. The transplantation process along with the medications one had to take can then influence neurochemical activity in the brain. Surgical, pharmacological, immunological and psychological factors could interact to cause personality change. Here’s the chain: after transplantation, signals from organ interact with recipient nervous system -> the signals could affect neural networks associated with specific traits/memories -> over time these interactions compound to change personality.

Immunological explanation—Bidirectional communication between the immune system and CNS—known as neuroimmune crosstalk (Tian et al, 2012)—could also be responsible. Organ transplants and immunosuppressive medication could disrupt this crosstalk. Further, inflammation could also affect neural functioning. Here’s the chain: suppressed immune system so organ isn’t rejected -> immune cells could interact with CNS -> immunological interaction could make changes to brain physiology which leads to personality change.

There are quite a few explanations for why personality changes occur that don't rely on cellular memory. Each of the proposed explanations offers potential mechanisms for the observed personality changes. The psychological explanation emphasizes the emotional and psychological aspects of organ transplantation, while the physiological explanation focuses on the broader physiological effects of transplantation on the recipient's body. The neurological explanation concerns the direct effects of transplantation, while the immunological explanation highlights the role of immune-mediated processes in influencing brain physiology.

Cellular memory—This is where organs, cells or tissues retain memories or information from their previous host which then influence the behavior of the new recipient of the organ. Of course this is a very speculative idea and there isn’t really much scientific evidence for the claim. I can see someone trying to say that the neurons in the transplanted organ somehow had an effect on the personality change.

Anecdotal reports and case studies of organ recipients who claimed to have acquired new skills, personalities, or preferences following their transplants capture the imagination. Such reports often involve cases in which the recipient exhibits behaviors or preferences that are seemingly unrelated to their own past experiences but are related to their organ donor. (I will quote some of these people on their experiences below.) These cases have pushed along the claim that cellular memories can be transferred along with transplanted organs.

One hypothesis is neural network transfer. Memories or information stored in the brain of the organ donor could be transferred to the recipient through neural connections which are established through the transplantation process. So neural networks associated with memories or learned behaviors could be preserved within the transplanted organ leading to an influence in the recipient’s brain functioning.

A small number of donor cells could persist in the transplanted organ, which then could involve microchimerism. The donor cells could then interact with the recipient’s tissues and cells and then influence behavioral or physiological characteristics.

Epigenetic modifications which regulate gene expression without a change to the genome could play a role in cellular memory. Changes in gene expression patterns could persist in the recipient which then leads to behavioral changes.

Finally, psychological factors like the placebo effect and expectations could contribute to the perception of cellular memory. Recipients could unconsciously or consciously adopt behaviors of the organ donor due to psychological or social influence.

But the anecdotal reports of cellular memory fall prey to post hoc rationalization, the placebo effect, and selective reporting. Moreover, neural network transfer and microchimerism lack evidentiary support to substantiate their role in the behavioral changes in the recipient. There is no established causal relationship between recipient experiences and donor characteristics. Factors like the recipient's pre-existing beliefs, psychological adjustment to transplantation, and social support networks more than likely play a significant role in shaping the post-transplant experiences of the recipient.

One study found that 3 patients reported changes in their personality post-heart transplant (Bunzel et al, 1992). One online survey of 47 transplant recipients (23 heart and 24 other organs) found that 89 percent of the recipients experienced personality changes (Carter et al, 2024), a rate substantially higher than that of Bunzel et al.

One white man was given a heart from a black kid who was gunned down in a drive-by shooting and who loved classical music. After the transplant, the man began liking classical music after previously hating it. He stated that he knew it wasn't his heart because "a black guy from the 'hood wouldn't be into that…and now [classical music] calms my heart" (Christopher, 2024). The recipient's wife then said that he was socializing more with black coworkers at work and that he began to love classical music post-transplant. She said, "He even whistles classical music songs that he could never know. How does he know them? You'd think he'd like rap music or something because of his black heart."

In another case, a 19 year old woman was killed in a car accident. She was a vegetarian and owned a health food restaurant. As she was dying, she told her mother that she could feel the impact of the car hitting her. The organ recipient was a 29 year old woman who reported two things occurring post-transplant: she said she could feel the impact of the accident on her chest, and she began hating meat after her surgery, saying that "now meat makes me throw up" (Christopher, 2024). Before her transplant she was a lesbian, and after it she was into men.

A 3 year old died in an accident at a family pool. The recipient—an 8 year old—loved the water before his surgery, but after it, according to his mother, he was "now deathly afraid of water" (Christopher, 2024).

A 14 year old girl died in a gymnastics accident, and per her mother she had a "silly little giggle". She was also somewhat anorexic. Her recipient was a 47 year old man. After his surgery, the recipient's brother stated that he was acting "like a teenager" and that he's "like a kid." He also reported that when they went bowling he "yells and jumps like a girl" and that he "had a girls laugh." He was also nauseous all the time, and his doctor had a concern about his weight (Christopher, 2024).

In the last case Christopher (2024) discussed, a cop was murdered by a drug dealer after being shot in the face. Looking at the mug shot, the cop's wife stated that the drug dealer looked like some depictions of Jesus. After the heart transplant, the recipient stated that he would have dreams of seeing a "flash of light right in my face and my face gets real, real hot. It actually burns. Just before that time, I would get a glimpse of Jesus. I've had these dreams and now daydreams ever since: Jesus and then a flash" (Christopher, 2024). Finally, a girl received a transplant from a teenage boy who died in a motorcycle accident. After her surgery, her mother stated that she began liking KFC, "walking like a man", and wanting to drink beer. Come to find out, these were some things the boy who died liked to do. There is also a recent article in Psychology Today discussing cellular memory.

All of these cases could simply be an artifact of selective reporting or coincidence.

Conclusion

While these cases are no doubt interesting and, if true, would mean that we need to propose different mechanisms like cellular, DNA/RNA, epigenetic, and protein memory (Pearsall, Schwartz, and Russek, 2000), I think current evidence points to their being just coincidences or post hoc rationalization. Of course, if these cases were proven to be genuine, then we should revisit them and think about mechanisms like the ones just mentioned.

As can be seen, anecdotal reports and studies suggest the possibility of behavioral changes that mirror, in some cases, those of the donor. But the concept of cellular memory is currently speculative and lacks empirical evidence. We could run controlled studies on animal models to see whether behavioral or physiological traits associated with the donor are transferred to the recipient. We could also analyze gene expression, epigenetic modification, RNA expression, DNA methylation, and protein levels within transplanted tissues or organs from donors to recipients. We could then make comparisons between tissues and organs from donors and recipients to ascertain any differences or similarities which could be indicative of memory transfer. These are but a few empirical tests I can think of that we could begin to carry out to test whether this is more than coincidence or post hoc rationalization.

Lastly, in August of 2023 I formulated a theory of dualism I call cognitive interface dualism, which argues that action potentials are the interface that Descartes was looking for. (I had an A&P professor state that, out of the whole textbook he taught from, muscle movement was some of the only conscious activity that could be done. That dawned on me and I formulated my dualist framework.) Dualism posits that mind and body are two separate substances, with mind being irreducible to body/brain. So even if there is a personality change, that doesn't entail that the mind has changed. In cognitive interface dualism, interactions between the mind and body occur through action potentials (APs). Personality changes could occur through the interface of these interactions, but changes in physical organs like the brain do not alter the fundamental nature of the immaterial mind. (Of course damage to the brain can influence the mind, since the brain is a necessary pre-condition for human mindedness, but that's different.) Even if a person's personality undergoes changes after a transplant, their underlying sense of self, consciousness, and subjective experiences remain intact. Personality change doesn't necessarily imply a direct alteration of the mind.

The other explanations I discussed above are also on different levels of explanation than dualism. Dualism is about ontological explanation, whereas the other explanations operate at the physiological and molecular levels. Cellular mechanisms could influence certain aspects of behavior or experience, but that doesn't undermine the existence of a separate, irreducible mental realm. Dualism and biology can also be complementary: biology would address any possible mechanisms like cellular memory and RNA/DNA/epigenetic expression, while dualism addresses questions of consciousness, the nature of the mind, and subjective experience. Even if cellular memory were shown to be true, this wouldn't undermine my theory, since the core aspects of one's consciousness, self, and subjective experiences remain intact. So these offer complementary perspectives.

In sum, while this is an interesting area to look at, I am a skeptic. I won’t completely discount it being true, but I have proposed some empirical tests to see if it does hold. And if it does, it doesn’t have any implications for dualist theories, including my cognitive interface dualism.

Race and Racial Identity in the US

2300 words

Introduction

The concept of RACE is both a biological and social construct. In the US, there are 5 racial groups, and every 10 years the Census Bureau attempts to get a tally of the breakdown of racial identity in the US. The Census defers to the OMB, which in 1997 updated its racial classification. So race is identified culturally, socially, and historically. But racial identity goes beyond the US Census survey and encompasses one's experiences, beliefs, and perceptions, which shape one's identity and how one understands oneself and the society in which one lives.

In the US we have whites, blacks (or African Americans), Asians, American Indians or Alaskan Natives, and Native Hawaiians or other Pacific Islanders. Each of these racial categories represents not only a demographic group, but also an amalgamation of historical, social, and cultural contexts which then influence how an individual navigates and forms their racial identity. Here, I will discuss which groups fall under which racial categories in the US, why Hispanics/Latinos and Arabs (MENA people) aren't a race, and the relationship between the self and racial identity.

Race in the US

The Census Bureau defers to the Office of Management and Budget (OMB) on matters of race. In 1997, the OMB separated Asians and Pacific Islanders and changed the term "Hispanic" to "Hispanic or Latino" (OMB, 1997). In this discussion, they stated that there are 5 races: white, black, Native American, Asian, and Pacific Islander. The US Census Bureau has to defer to the OMB, and the OMB defines race as a socio-political category. Below are the 5 minimum reporting categories (races) as designated by the OMB.

White – A person having origins in any of the original peoples of Europe, the Middle East, or North Africa.

Black or African American – A person having origins in any of the Black racial groups of Africa.

American Indian or Alaska Native – A person having origins in any of the original peoples of North and South America (including Central America) and who maintains tribal affiliation or community attachment.

Asian – A person having origins in any of the original peoples of the Far East, Southeast Asia, or the Indian subcontinent including, for example, Cambodia, China, India, Japan, Korea, Malaysia, Pakistan, the Philippine Islands, Thailand, and Vietnam.

Native Hawaiian or Other Pacific Islander – A person having origins in any of the original peoples of Hawaii, Guam, Samoa, or other Pacific Islands. (About the Topic of Race)

Race in America is based on self-identification, and the OMB allows one to put that they are of one or more racial groups. They also allow write ins of “Some Other Race”, which I will get to below. For now, I will elaborate on each racial category, and begin with the—controversial to white nationalists—definition of “white” that designates MENA people as white.

White racial designation—I showed the 5 minimum reporting categories (racial groups) above, and there has been discussion of adding a MENA minimum reporting category per the Federal Register. Such a move would be because they don't identify as white, they aren't perceived as white (Maghbouleh, Schachter, and Flores, 2022), and they don't have the same lived experiences as white Europeans. But we know that in OMB racetalk, white isn't a narrow group that refers only to Europeans; it's a broad group that also refers to MENA people (yes, even Ashkenazi Jews). For instance, in the 2000 Census, 80 percent of Arabs self-identified as only white (de la Cruz and Brittingham, 2003). Obviously, Arabs intend to use the white category in the same way that the OMB uses it. Even then, we know that the aftermath of 9/11 didn't change the self-reported race of around 63 percent of Arab Americans (Spencer, 2019). Further, we know that those who feel that the term "Arab American" doesn't describe them are more likely to identify as white, and that some Arab Americans simultaneously report strong ethnic ties, identify as white, and reject the Arab American label (Ajrouch and Jamal, 2007). They aren't afforded minority status in the US even though they account for 2 to 6 percent of the US population, and this is because of their designation as white. This isn't to deny the fact that they do experience discrimination and that they do have health inequalities (see Abboud, Chebli, and Rabelais, 2019); I just don't think that they comprise a racial group, and at best they are an ethnicity within the overall white race. The fact that Arab Americans are discriminated against doesn't justify their being a separate racial category (Jews, the Irish, and Italians were also discriminated against upon arrival to the US but they were always politically and socially white; Yang and Koshy, 2016).
Arab Americans (and all MENA people) are simply like Italians, the Irish, the British, Jews, and Poles in America—there is no need for an Arab/MENA racial category. The fact that they're discriminated against and have differences in health from whites is irrelevant, because you can find both of these things in other ethnic groups labeled as white, yet those groups don't get a special racial status.

Of course, the term white in America also refers to people of European origin like Italians, Germans, Russians, Finns, and others, and this designation has stayed relatively the same. Thus, the white race in American racetalk is designated for European and MENA people. (This would also hold for some "Hispanics/Latinos"; see below.)

Black or African American racial designation—This category refers to black Americans ("African Americans", AfAm, "Foundational Black Americans", FBA, or "American Descendants of Slavery", ADOS). For instance, the overlap between US race terms in the OMB and Blumenbachian racial designations is 1.0 for black or African (Spencer, 2014). Spencer (2019) noted one problem with the OMB's definition of black or African American—that it would designate all people as black or African American, since it says "A person having origins in any of the black racial groups of Africa." But this can be avoided if we say that the way the OMB uses the term race is just its referent—it's a set of categories or population groups (Spencer, 2014). So this racial designation just means any individual who can trace their ancestry back to Africa—which would comprise, say, Cubans/Puerto Ricans/Dominicans and other "Hispanics/Latinos" with African ancestry, black Americans, and immigrants from Africa who have sub-Saharan African ancestry.

American Indian or Alaskan Native racial designation—About 5.2 million people in America identify using this category (Nora, Vines, and Hoeffel, 2012). (This fell to 3.7 million in 2020.) This designation captures not only American Indians, but people who have Native ancestry from Central and South America, like the Maya, Aztec, and Inca (referred to as "Latin American Indian") and others. This also includes Alaskan Natives such as the Yup'ik and Inuit, and other Natives such as the Chippewa and Indians living on reservations. When it comes to American Indians, one must be able to prove tribal affiliation, by showing that they or an ancestor had tribal affiliation, that they have an established "lineal ancestor", or by providing documentation that they have a relationship to such a person using vital records.

Asian racial designation—This encompasses the Far East, the Indian subcontinent, and Southeast Asia. Before 1997, Asians and Pacific Islanders (PIs) were grouped together. For instance, in 1977 the OMB had 4 racial classifications since Asians and PIs were grouped together (and they still noted "Hispanics" as an ethnicity, with the option to identify as Hispanic or non-Hispanic). Thus, if one has ancestry in East Asia, Southeast Asia, or the Indian subcontinent, they are Asian.

Native Hawaiian or other Pacific Islander racial designation—As noted above, this group was split off from a broader "Asian or Pacific Islander" category. This designation refers to Native Hawaiians and Oceanians. We know that the overlap between "Pacific Islander" and "Oceanian" is 1.0 (Spencer, 2014). Australian Aboriginals also fall under this category. Along with Native Hawaiians and Australian Aboriginals, it also refers to people from other Pacific Islands like Samoa, Melanesia, Guam, and Papua New Guinea (OMB, 1997). So the breaking up of the "Asian or Pacific Islander" category is valid.

The question of "Latinos/Hispanics"—Back in August of 2020, I argued that "Latinos/Hispanics" were a group I called "HLS" or "Hispanic/Latino/Spanish" people (the OMB notes that these terms are and can be used interchangeably). This is because, at least where I grew up, people referred to Spanish speakers as one homogeneous group, regardless of their phenotype. So they would group together, say, Puerto Ricans and Salvadorians with Argentineans, Chileans, and Cubans. However, these countries have radically different racial admixtures and cultures based on what occurred there after 1492. But the issue is this—HLS isn't a racial group. To me, it's a socio-linguistic cultural group, since they share a language and some cultural customs. The category "Latin American" is a social designation. But the thing is, the OMB rightly notes that "Hispanics or Latinos" are not a racial group; they are an ethnic group. In 1997 the OMB changed "Hispanic" to "Hispanic or Latino." The OMB stated that the definition should be unchanged, but that the "Latino" qualifier should be added. This category would comprise Cubans, Puerto Ricans, Mexicans, Central and South Americans, and others of Spanish culture or origin REGARDLESS OF RACE. Indeed the Census (which defers to the OMB) is quite clear: "Hispanics and Latinos may be of any race…People who identify their origin as Hispanic, Latino, or Spanish may be of any race." Further, as noted above, the category "American Indian or Alaskan Native" also encompasses Latin American Indians (which some think of when they think of "Latinos or Hispanics").

Furthermore, Spencer (2019: 98) notes that "Conducting a linear regression analysis shows that the average Caucasian ancestry of a Hispanic American national origin group positively and highly correlates (r=+0.864) with the proportion of that group that self-reported 'White' alone on the 2010 US Census questionnaire." Quite clearly, "white Hispanics" exist, and this is because, as noted by the OMB, Hispanics aren't a racial group. Forty percent of Central Americans identified as "some other race", while 85 percent of Cubans, 53 percent of Puerto Ricans, and 35 percent of Dominicans identified as white in 2010; both Puerto Ricans and Dominicans were also more likely to identify as black or report multiple races (Ennis, Rios-Vargas, and Albert, 2011). HLS is clearly not a homogeneous group.

Therefore, phrases like "white Hispanic" and "Afro Latino/a" aren't contradictions in terms.

Throughout this discussion, I have shown that there is a relationship between racial identity and one’s self-identification. We also know—consistent with the TAAO—that moderate racial and ethnic identification for blacks and Asians acts as a buffer for racial discrimination while for whites, American Indians and Latinos it exacerbates it (Woo et al, 2019).

One final consideration brings me to clustering studies. When K is set to 5 in a cluster analysis of worldwide genetic samples, 5 clusters emerge that correspond to major geographic regions (Rosenberg et al, 2002). These are what Spencer calls human continental populations or Blumenbachian partitions. These clusters correspond to whites, blacks, Asians, Native Americans, and Pacific Islanders. But "Hispanics", being a recent amalgamation of admixed groups, clustered in between other clusters and didn't form their own cluster (Risch et al, 2002). Defenses of this study as showing the biological reality of race can be found in Spencer (2014, 2019) and Hardimon (2017).
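For readers unfamiliar with cluster analysis: Rosenberg et al. used the STRUCTURE program, a Bayesian model-based method, not k-means. The toy sketch below (invented 1-D data; the function name is mine) is not their method, but it illustrates the one point at issue here: the analyst chooses K in advance, and the algorithm then returns exactly K clusters, whether or not the data "naturally" have that many.

```python
# Toy 1-D k-means, illustrating only that K is imposed by the analyst,
# not discovered by the algorithm. Data below are invented.

def kmeans_1d(points, k, iters=50):
    """Plain k-means on 1-D data; returns the k cluster centers."""
    centers = sorted(points[:k])                    # naive initialization
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest current center
            i = min(range(k), key=lambda j: abs(p - centers[j]))
            groups[i].append(p)
        # move each center to the mean of its assigned points
        centers = [sum(g) / len(g) if g else c
                   for g, c in zip(groups, centers)]
    return sorted(centers)

# Arguably two "real" blobs here, but asking for k=3 still yields 3 centers.
data = [0.9, 1.0, 1.1, 5.0, 5.1, 9.9, 10.0, 10.1]
print(kmeans_1d(data, 2))   # 2 centers
print(kmeans_1d(data, 3))   # 3 centers: K is chosen, not discovered
```

STRUCTURE-style analyses are typically run at several values of K and the fits compared, which is why "at K=5 there are 5 clusters" is a statement about the chosen parameter, not by itself evidence for five natural kinds.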

Conclusion

I have discussed what race means in the American context (its version of racetalk), its definition as defined by the OMB, and changes to the categories over the years. I don't think that MENA people should be a separate racial category, since many of them identify as white, and although some do identify as Arab American and some are discriminated against, this isn't relevant to their status as a racial category, since Jews, the Irish, and Italians were discriminated against upon their arrival to America and weren't given a separate racial category either; the white category also refers to European descendants. Black or African American refers to people with ancestry in Africa, so this could encompass many people like American blacks, certain Brazilians, Dominicans, and Puerto Ricans.

Native American or Alaskan Native refers not only to North American Indians and people native to Alaska but also to Latin American Indians (Maya, Pima, and others). Asians and Pacific Islanders were split in 1997; before then (in 1977) there were only 4 racial groups per the OMB. The Asian category refers to Southeast Asia, East Asia, and the Indian subcontinent. The Native Hawaiian or other Pacific Islander category refers to people native to Hawaii along with other Pacific Islands like Guam, Samoa, and Papua New Guinea. Lastly, HLS is not a racial designation, and HLS people can be of any race. I showed that while many Caribbean Hispanics identify with different racial groups, this doesn't make them a separate racial group. Hispanics or Latinos can be of any race (for example, the former president of Peru, Alberto Fujimori, had Japanese ancestry but was born in Peru; he'd be Hispanic as well, but his race is Asian).

I then showed that there are defenses of what is termed “cluster realism” (Kaplan and Winther, 2009), and that Hispanics aren’t in these clusters. This is a stark difference from hereditarians like Charles Murray who merely assume that race exists without an argument.

Therefore, since racial pluralism is true, there is a plurality of race concepts that hold across time and place (as with how race is defined in Brazil and South Africa). But in the American context of this discussion, race is a social construct of a biological reality, there are 5 racial groups, and all theories of race are based on the premise that race is a social construct. Spencer's racial identity argument is true.

The Illusion of Separation: A Philosophical Analysis of “Variance Explained”

2050 words

Introduction

"Variance explained" (VE) is a statistical concept used to quantify the proportion of variance in a trait that can be accounted for or attributed to one or more independent variables in a statistical model. VE is represented by R², which ranges from 0 to 100 percent. An R² of 0 percent means that none of the variance in the dependent variable is explained by the independent variable(s), whereas an R² of 100 percent means that all of it is. But VE doesn't imply causation; it merely quantifies the degree of association or predictability between variables.
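Since R² is doing all the work in the phrase "variance explained," it is worth seeing how little machinery it involves. Here is a minimal sketch (toy, invented data; function names are mine, not from any cited study) of R² for a one-predictor least-squares fit:

```python
# R^2 = 1 - SS_res / SS_tot: the share of variance in y that the fitted
# predictions track. It measures association/predictability, not causation.

def r_squared(y, y_pred):
    """Proportion of variance in y accounted for by the predictions."""
    mean_y = sum(y) / len(y)
    ss_tot = sum((yi - mean_y) ** 2 for yi in y)               # total variation
    ss_res = sum((yi - yp) ** 2 for yi, yp in zip(y, y_pred))  # residual (unexplained)
    return 1 - ss_res / ss_tot

def ols_fit(x, y):
    """Ordinary least-squares slope and intercept for one predictor."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

x = [1, 2, 3, 4, 5]
y = [2.1, 3.9, 6.2, 8.1, 9.8]            # nearly linear toy data
slope, intercept = ols_fit(x, y)
y_pred = [slope * xi + intercept for xi in x]
print(round(r_squared(y, y_pred), 3))    # → 0.998
```

Note that a high R² here says only that x predicts y well in this sample; nothing in the computation identifies a mechanism, which is part of the critique below.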

So in the world of genetics, heritability, and GWAS, the VE concept has been employed as a fundamental measure to quantify the extent to which a specific trait's variability can be attributed to genetic factors. It may seem intuitive that G and E factors can be separated and their relative influences disentangled for human traits. But beneath this apparent simplicity lies a philosophically contentious issue, most importantly the claim/assumption that G and E factors can be separated into percentages.

But I think the concept of VE in psychology/psychometrics and GWAS is mistaken, because (1) it implies a causal relationship that may not exist; (2) implies reductionism; (3) upholds the nature-nurture dichotomy; (4) doesn’t account for interaction and epigenetics; and (5) doesn’t account for context-dependency. In this article, I will argue that the concept of VE is confused, since it assumes too much while explaining too little. Overall, I will explain the issues using a conceptual analysis and then give a few arguments on why I think the phrase is confused.

Arguments against the phrase “variance explained”

While VE doesn't necessarily imply causation, in the psychology/psychometrics and GWAS literature it seems to be used as somewhat of a causal phrase. The phrase also reduces the trait in question to a single percentage, which is of course not accurate—basically, it attempts to reduce the trait to a number, a percentage.

But more importantly, the notion of VE is subject to philosophical critique in virtue of the implications of what the phrase inherently means, particularly when it comes to the separation of genetic and environmental factors. The idea of VE most often perpetuates the nature-nurture dichotomy, assuming that G and E can be neatly separated into percentages of causes of a trait. Thus this simplistic division between G and E oversimplifies the intricate interplay between genes, environment and all levels of the developmental system and the irreducible interaction between all developmental resources that lead to the reliable ontogeny of traits (Noble, 2012).

Moreover, VE can be reductionist in nature, since it implies that a certain percentage of a trait's variance can be attributed to genetics, disregarding the dynamic and complex interactions between genes and other resources in the developmental system. This reductionism fails to capture the holistic and emergent nature of human development and behavior. So just like the concept of heritability, the reductionism inherent in the concept of VE focuses on isolating the contributions of G and E, rather than treating them as interacting factors that are not reducible.
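A toy numeric sketch makes the point about interaction concrete (all numbers are invented; this is an illustration, not a model of any real trait). With independent, purely additive contributions the component variances sum to the total, so "X% genetic, Y% environmental" at least adds up; introduce a gene-environment interaction term and the neat partition breaks down:

```python
# Toy demonstration: additive variance partitions assume no interaction.
import random
random.seed(0)

n = 100_000
g = [random.gauss(0, 1) for _ in range(n)]   # stand-in "genetic" values
e = [random.gauss(0, 1) for _ in range(n)]   # stand-in "environmental" values

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

additive = [gi + ei for gi, ei in zip(g, e)]
interactive = [gi + ei + 2 * gi * ei for gi, ei in zip(g, e)]

# Additive case: Var(G) + Var(E) ~ Var(P), so percentage shares make sense.
print(var(g) + var(e), var(additive))        # roughly equal (~2 and ~2)
# Interaction case: the parts no longer sum to the whole (~2 vs ~6).
print(var(g) + var(e), var(interactive))
```

In the interactive case there is no fact of the matter about how much of the extra variance "belongs" to G rather than E, which is the sense in which the percentages presuppose separability.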

Furthermore, we know that epigenetics demonstrates that environmental factors can influence gene expression which then blurs the line between G and E. Therefore, G and E are not separable entities but are intertwined and influence each other in unique ways.

The concept may also inadvertently carry implicit value judgments about which traits or outcomes are deemed desirable or significant. In many circles, a high heritability is seen as evidence for the belief that a trait is strongly influenced by genes—however wrong that may be (Moore and Shenk, 2016). Further, it could also stigmatize environmental influences if a trait is perceived as primarily genetic. This, then, could contribute to a bias that downplays the importance of environmental factors, overlooking their potential impact on individual development and behavior.

This concept, moreover, doesn’t provide clarity on questions like identity and causality. Even if a high percentage of variance is attributed to genetics, it doesn’t necessarily reveal the causal mechanisms or genetic factors responsible, which then leads to philosophical indeterminacy regarding the nature of causation. Human traits are highly complex, and the attempt to quantify them and break them apart into neat percentages of variance explained by G and E vastly oversimplifies the complexity of these traits. This oversimplification then further contributes to philosophical indeterminacy about the nature and true origins (which would be the irreducible interactions between all developmental resources) of these traits.

The act of quantifying variance also inherently involves power dynamics, where certain variables are deemed more significant or influential than others. This introduces a potential bias that may reflect existing societal norms or power structures. “Variance explained” may inadvertently perpetuate and reinforce these power dynamics by quantifying and emphasizing certain factors over others (e.g., the results of Hill et al., 2019, and Barth, Papageorge, and Thom, 2020; see Joseph’s critique of these claims). The claim, basically, is that these differences between people in income and other socially-important traits are due to genetic differences between them. (Even though there is no molecular genetic evidence for the claim made in The Bell Curve that we are becoming more genetically stratified; Conley and Domingue, 2016.)

The concept of VE also implies a kind of predictive precision that may not align with the uncertainty of human behavior. The illusion of certainty created by high r² values can lead to misplaced confidence in predictions. In reality, the complexity of human traits often defies prediction, and overreliance on VE may create a false sense of certainty.
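This "illusion of certainty" point can be illustrated with the first two datasets of Anscombe's well-known quartet (Anscombe, 1973): one is roughly linear, the other strongly curved, yet both yield virtually the same r². The sketch below computes r² directly from the published data:

```python
# Anscombe's quartet, datasets I and II: very different data shapes,
# nearly identical "variance explained" under a linear fit.
import statistics as st

def r_squared(x, y):
    # squared Pearson correlation = r^2 of the least-squares line
    mx, my = st.mean(x), st.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov ** 2 / (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))

x = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
y_linear = [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]
y_curved = [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]

r2_a = r_squared(x, y_linear)   # ≈ 0.67
r2_b = r_squared(x, y_curved)   # ≈ 0.67, despite the curved relationship
```

An identical "variance explained" figure is reported for two datasets with entirely different structures, so the number alone licenses far less confidence than it appears to.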

We also have what I call the “veil of objectivity” argument. This argument challenges the notion that VE provides an entirely objective view. Behind the numerical representation lies a series of subjective decisions: from the initial selection of variables to be studied to the interpretation of the results, researchers exercise subjective judgments which could introduce biases and assumptions. So if “variance explained” is presumed to offer an entirely objective view of human traits, then the numerical representation is taken to be an objective measure of variance attribution. But if, behind this numerical representation, subjective decisions are involved in variable selection and the interpretation of results, then the presumed objectivity implied by VE becomes a veil masking underlying subjectivity. If the veil of objectivity conceals subjective decisions, then there exists a potential for biases and assumptions to influence the quantitative analysis. And if biases and assumptions are inherent in the quantitative analysis due to the veil of objectivity, then the objectivity attributed to VE is compromised, and a more critical examination of subjective elements becomes imperative. This argument of course applies to “IQ” studies, heritability studies of socially-important human traits and the like, along with GWASs. In interpreting associations, GWASs and h2 studies also fall prey to the veil of objectivity argument, since, as seen above, many people would like the hereditarian claim to be true. (When it comes to GWAS and heritability studies, VE refers to the proportion of phenotypic variance attributed to genetic variance.)

So the VE concept assumes a clear separation between genetic and environmental factors, which is often reductionist and unwarranted. It doesn’t account for the dynamic nature of these influences, nor—of course—the influence of unmeasured factors. The concept’s oversimplification can lead to misunderstandings and has ethical implications, especially when dealing with complex human traits and behaviors. Thus, the VE concept is conceptually flawed and should be used cautiously, if at all, in the fields in which it is applied. It does not adequately represent the complex reality of genetic and environmental influences on human traits. So the VE concept is conceptually limited.

If the concept of VE accurately separates genetic and environmental influences, then it should provide a comprehensive and nuanced representation of factors that contribute to a trait. But the concept does not adequately consider the dynamic interactions, correlations, contextual dependencies, and unmeasured variables. So if the concept does not and cannot address these complexities, then it cannot accurately separate genetic and environmental influences. So if a concept can’t accurately separate genetic and environmental influences, then it lacks coherence in the context of genetic and behavioral studies. Thus the concept of VE lacks coherence in the context of genetic and behavioral studies, as it does not and cannot adequately separate genetic and environmental influences.

Conclusion

In exploring the concept of VE and its application in genetic studies, heritability research, and GWAS, a series of nuanced critiques have been uncovered that challenge its conceptual coherence. The phrase quantifies the proportion of variance in a trait that is attributed to certain variables, typically genetic and environmental ones. The reductionist nature of VE is apparent, since it attempts to distill the interplay between G and E into percentages (as heritability studies do). But this oversimplification neglects the complexity and dynamic nature of these influences, which then perpetuates the nature-nurture dichotomy and fails to capture the intricate interactions between all developmental resources in the system. The concept’s inclination to overlook G-E interactions, epigenetic influences, and context-dependent variability further speaks to its limitations. Lastly, the normative assumptions intertwined with the concept introduce ethical considerations, as implicit judgments may stigmatize certain traits or downplay the role and importance of environmental factors. Philosophical indeterminacy, therefore, arises from the inability of the concept of VE to offer clarity on identity, causality, and the complex nature of human traits.

So by considering the reductionist nature, the perpetuation of the false dichotomy between nature and nurture, the oversight of G-E interactions, and the introduction of normative assumptions, I have demonstrated through multiple cases that the phrase “variance explained” falls short in providing a nuanced and coherent understanding of the complexities involved in the study of human traits.

In all reality, this concept is refuted by the fact that the interaction between all developmental resources shows that the separation of the influences/factors is an impossible project, along with the fact that we know that there is no privileged level of causation. Claims of “variance explained”, heritability, and GWAS all push forth the false notion that the relative contributions of genes and environment to the causes of a trait in question can be quantified. However, we now know that this is conceptually confused, since the organism and environment are interdependent. So the inseparability of nature and nurture, genes and environment, means that the ability of GWAS and heritability studies to meet their intended goals will necessarily fall short, especially given the missing heritability problem. The phrase “variance explained by” implies a direct causal link between independent and dependent variables. A priori reasoning suggests that the intricacies of human traits are probabilistic and context-dependent, implicating a vast web of bidirectional influences with feedback loops and dynamic interactions. So if the a priori argument advocates for a contextual, nuanced, and probabilistic view of human traits, then it challenges the conceptual foundations of VE.

At the molecular level, the nurture/nature debate currently revolves around reactive genomes and the environments, internal and external to the body, to which they ceaselessly respond. Body boundaries are permeable, and our genome and microbiome are constantly made and remade over our lifetimes. Certain of these changes can be transmitted from one generation to the next and may, at times, persist into succeeding generations. But these findings will not terminate the nurture/nature debate – ongoing research keeps arguments fueled and forces shifts in orientations to shift. Without doubt, molecular pathways will come to light that better account for the circumstances under which specific genes are expressed or inhibited, and data based on correlations will be replaced gradually by causal findings. Slowly, “links” between nurture and nature will collapse, leaving an indivisible entity. But such research, almost exclusively, will miniaturize the environment for the sake of accuracy – an unavoidable process if findings are to be scientifically replicable and reliable. Even so, increasing recognition of the frequency of stochastic, unpredictable events ensures that we can never achieve certainty. (Locke and Pallson, 2016)

The Multilingual Encyclopedia: On the Context-Dependency of Human Knowledge and Intelligence

3250 words

Introduction

Language is the road map of a culture. It tells you where its people come from and where they are going. – Rita May Brown

Communication bridges gaps. The words we use and the languages we speak along with the knowledge that we share serve as a bridge to weave together human culture and intelligence. So imagine a multilingual encyclopedia that encompasses the whole of human knowledge, a book of human understanding from the sciences, the arts, history and philosophy. This encyclopedia is a testament to the universal nature of human knowledge, but it also shows the interplay between culture, language, knowledge and human intelligence.

In my most recent article, I argued that human intelligence is shaped by interactions within cultural and social contexts. So here I will argue that: there are necessary aspects of knowledge; knowledge is context-dependent; language, culture, and knowledge interact with specific contexts to form intelligence, mind, and rationality; and my multilingual encyclopedia analogy shows that while there is what I term “universal core knowledge”, this knowledge becomes context-dependent based on the needs of different cultures—and I will also use this example to again argue against IQ. Finally, I will conclude that the arguments in this article and the previous one show how the mind is socially formed on the basis of the necessary physical substrates, but that socio-cultural contexts are what is necessary for human intelligence, mindedness, and rationality.

Necessary aspects of knowledge

There are two necessary and fundamental aspects of knowledge and thought—cognition and the brain. The brain is a necessary pre-condition for human mindedness, and cognition is influenced by culture; my framework posits that cognitive processes play a necessary role in human thought, just as the brain serves as the necessary physical substrate for those processes. While cognition and knowledge are intertwined, they’re not synonymous. To cognize is to actively think about something, meaning it is an action. There is a minimal structure, and it’s accounted for by cognition: pattern recognition, categorization, sequential processing, sensory integration, associative memory, and selective attention. These processes are necessary—they are inherent in “cognition”—and they set the stage for more complex mental abilities, which is what Vygotsky was getting at with his theory of the social formation of mind.

Individuals do interpret their experiences through a cultural lens, since culture provides the framework for understanding, categorizing, and making sense of experiences. But I also recognize the role of individual experiences and personal interpretations. So while cultural lenses may shape initial perceptions, people can also think critically and reflect on their interpretations over time due to the differing experiences they have.

Fundamental necessary aspects of knowledge like sensory perception are also pivotal. By “fundamental”, I mean “necessary”—that is, we couldn’t think or cognize without the brain and it therefore follows we couldn’t think without cognition. These things are necessary for thinking, language, culture and eventually intelligence, but what is sufficient for mind, thinking, language and rationality are the specific socio-cultural interactions and knowledge formulations that we get by being engrossed in linguistically-mediated cultural environments.

The context-dependence of knowledge

“Context-dependent knowledge” refers to information or understanding that can take on different meanings or interpretations based on the specific context in which it is applied or used. But I also mean something else by this: I mean that an individual’s performance on IQ tests is influenced by their exposure to specific cultural, linguistic, and contextual factors. Thus, IQ tests aren’t culture-neutral or universally applicable; they are biased towards people who share similar class-cultural backgrounds and experiences.

There is something about humans that allows us to be receptive to cultural and social contexts so as to form mind, language, rationality, and intelligence (and I would say that something is the immaterial self). But I wouldn’t call it “innate.” So-called “innate” traits need certain environmental contexts to be able to manifest themselves; they are experience-dependent (Blumberg, 2018).

So while humans actively adapt, shape, and create cultural knowledge through cultural processes, knowledge acquisition isn’t solely mediated by culture. Individual experiences matter, as do interactions with the environment along with the accumulation of knowledge from various cultural contexts. So human cognitive capacity isn’t entirely a product of culture, and human cognition allows for critical thinking, creative problem solving, along with the ability to adapt cultural knowledge.

Finally, knowledge acquisition is cumulative—and by this, I mean it is qualitatively cumulative. As individuals acquire knowledge from their cultural contexts, individual experiences, etc., this knowledge becomes internalized in their cognitive framework. They can then build on this existing knowledge to further adapt and shape culture.

The statement “knowledge is context-dependent” is a description of the nature of knowledge itself. It means that knowledge can take on different meanings or interpretations in different contexts. So when I say “knowledge is context-dependent”, I am acknowledging that this applies in all contexts—I’m describing the contextual nature of knowledge itself.

Examples of the context-dependence of universal knowledge include how English-speakers use the “+” sign for addition, while the Chinese use “加” or “jiā”. So while the fundamental principle is the same, these two cultures have different symbols and notations to signify the operation. Furthermore, there are differences in thinking between Eastern and Western cultures, where thinking is more analytic in Western cultures and more holistic in Eastern cultures (Yates and de Oliveira, 2016; also refer to their paper for more differences between cultures in decision-making processes). There are also differences between cultures in visual attention (Jurkat et al, 2016). While this isn’t “knowledge” per se, it does attest to how cultures differ in their perceptions and cognitive processes, which underscores the broader idea that cognition, including visual attention, is influenced by cultural contexts and social situations. Even the brain’s neural activity (the brain’s physiology) is context-dependent—thus culture is context-dependent (Northoff, 2013).

But when it comes to culture, how does language affect the meaning of culture and along with it intelligence and how it develops?

Language, culture, knowledge, and intelligence

Language plays a pivotal role in shaping the meaning of culture and, by extension, intelligence and its development. Language is not only a way to communicate; it is also a psychological tool that molds how we think, perceive, and relate to the world around us. Therefore, it serves as the bridge between individual cognition and shared cultural knowledge, while acting as the interface through which cultural values and norms are conveyed and internalized.

So language allows us to encode and decode cultural information, which is how, then, culture is generationally transmitted. Language provides the framework for expressing complex thoughts, concepts, and emotions, which enables us to discuss and negotiate the cultural norms that define our societies. Different languages offer unique structures for expressing ideas, which can then influence how people perceive and make sense of their cultural surroundings. And important for this understanding is the fact that a human can’t have a thought unless they have language (Davidson, 1982).

Language is also intimately linked with cognitive development. Under Vygotsky’s socio-historical theory of learning and development, language is a necessary cognitive tool for thought and the development of higher mental functions. So language not only reflects our cognitive abilities, it also plays an active role in their formation. Thus, through social interactions and linguistic exchanges, individuals engage in a dynamic process of cultural development, building on the foundation of their native language and culture.

Feral children and deaf linguistic isolates illustrate this dictum: there is a critical window in which language can be acquired, and thus the importance of human culture in human development (Vyshedakiy, Mahapatra, and Dunn, 2017). Cases of feral children show us how children would develop without human culture, and they show the importance of early language hearing and use for normal brain development. In fact, this shows how social isolation has negative effects on children, and since human culture is inherently social, it shows the importance of human culture and society in forming and nurturing mind, intelligence, rationality, and knowledge.

So the relationship between language, culture, and intelligence is intricate and reciprocal. Language allows us to express ourselves and our cultural knowledge while shaping our cognitive processes and influencing how we acquire and express our intelligence. On the other hand, intelligence—as shaped by cultural contexts—contributes to the diversification of language and culture. This interplay underscores how language impacts our understanding of intelligence within its cultural framework.

Furthermore, in my framework, intelligence isn’t a static, universally-measurable trait; it is a dynamic and constantly-developing trait shaped by social and cultural interactions along with individuals’ experiences, and so intentionality is inherent in it. Moreover, in the context of acquiring cultural knowledge, Vygotsky’s ZPD concept shows that individuals can learn and internalize things outside of their current toolkit as guided by more knowledgeable others (MKOs). It also shows that learning and development occur mostly in this zone between what someone can do alone and what someone can do with help, which then allows them to expand their cognitive abilities and cultural understanding.

Cultural and social exposure

Cultural and social exposure are critical to my conception of intelligence. Because, as we can see in cases of feral children, there is a clear developmental window of opportunity to gain language and to think and act like a human through the interaction of the individual with human culture. The base cognitive capacities that we are born with and develop through infancy, toddlerhood, childhood, and then adulthood aren’t just inert, passive things that merely receive information through vision, whereupon we gain minds and intelligence and then become human. Critically, they need to be nurtured through culture and socialization. The infant needs the requisite experiences doing certain things to be able to learn how to roll over, crawl, and finally walk. They need to be exposed to different things in order to be correctly enculturated into the culture they were born into. So while we are born into social, cultural, and linguistically-mediated environments, it’s these three types of environment—along with what the individual does themselves when they finally learn to walk, talk, and gain their mind, intelligence, and rationality—that shape individual humans, the knowledge they gain, and ultimately their intelligence.

If humans possess foundational cognitive capacities that aren’t entirely culturally determined or influenced, and culture serves as a mediator in shaping how these capacities are expressed and applied, then it follows that culture influences cognitive development while cognitive abilities provide the foundation for being able to learn at all, as well as being able to speak and to internalize the culture and language they are exposed to. So if culture interacts dynamically with cognitive capacities, and crucial periods exist during which cultural learning is particularly influential (cases of feral children), then it follows that early cultural exposure and socialization are critical. So it follows that my framework acknowledges both cognitive capacities and cultural influences in shaping human cognition and intelligence.

In his book Vygotsky and the Social Formation of Mind, Wertsch (1985) noted that Vygotsky didn’t discount the role of biology (as in development in the womb), but held that after a certain point, biology can no longer be viewed as the sole or even primary force of change for the individual, and that the explanation necessarily shifts to a sociocultural one:

However, [Vygotsky] argued that beyond a certain point in development, biological forces can no longer be viewed as the sole, or even the primary, force of change. At this point there is a fundamental reorganization of the forces of development and a need for a corresponding reorganization in the system of explanatory principles. Specifically, in Vygotsky’s view the burden of explanation shifts from biological to social factors. The latter operate within a given biological framework and must be compatible with it, but they cannot be reduced to it. That is, biological factors are still given a role in this new system, but they lose their role as the primary force of change. Vygotsky contrasted embryological and psychological development on this basis:

The embryological development of the child … in no way can be considered on the same level as the postnatal development of the child as a social being. Embryological development is a completely unique type of development subordinated to other laws than is the development of the child’s personality, which begins at birth. Embryological development is studied by an independent science—embryology, which cannot be considered one of the chapters of psychology … Psychology does not study heredity or prenatal development as such, but only the role and influence of heredity and prenatal development of the child in the process of social development. ([Vygotsky] 1972, p. 123)

The multilingual encyclopedia

Imagine a multilingual encyclopedia that encompasses knowledge of multiple disciplines from the sciences to the humanities to religion. This encyclopedia has what I term universal core knowledge. This encyclopedia is maintained by experts from around the world and is available in many languages. So although the information in the encyclopedia is written in different languages and upheld by people from different cultures, fundamental scientific discoveries, historical events and mathematical theorems remain constant across all versions of the encyclopedia. So this knowledge is context-independent because it holds true no matter the language it’s written in or the cultural context it is presented in. But the encyclopedia’s entries are designed to be used in specific contexts. The same scientific principles can be applied in labs across the world, but the specific experiments, equipment and cultural practices could vary. Moreover, historical events could be studied differently in different parts of the world, but the events themselves are context-independent.

So this thought experiment challenges the claim that context-independent knowledge requires an assertion of absolute knowledge. Context-independent knowledge exists in the encyclopedia, but it isn’t absolute. It’s merely a collection of universally-accepted facts, principles and theories that are applied in different contexts taking into account linguistic and cultural differences. Thus the knowledge in the encyclopedia is context-independent in that it remains the same across the world, across languages and cultures, but it is used in specific contexts.

Now, likening this to IQ tests is simple. When I say that “all IQ tests are culture-bound, and this means that they’re class-specific”, this is a specific claim. What this means, in my view, is that people grow up in different class-cultural environments, and so they are exposed to different knowledge bases and kinds of knowledge. Since they are exposed to different knowledge bases and kinds of knowledge, when test time comes, if they haven’t been exposed to the knowledge bases and kinds of knowledge on the test, they necessarily won’t score as high as someone who was immersed in them. Cole’s (2002) argument that all tests are culture-bound is true. Thus IQ tests aren’t culture-neutral; they are all culture-bound, and culture-neutral tests are an impossibility. This further buttresses my argument that intelligence is shaped by the social and cultural environment, underscoring the idea that the specific knowledge bases and cognitive resources that individuals are exposed to within their unique socio-cultural contexts play a pivotal role in the expression and development of their cognitive abilities.

IQ tests are mere cultural artifacts. So IQ tests, like the entries in the multilingual encyclopedia, are not immune to cultural biases. So although the multilingual encyclopedia has universal core knowledge, the way that the information is presented in the encyclopedia, like explanations and illustrations, would be culturally influenced by the authors/editors of the encyclopedia. Remember—this encyclopedia is an encyclopedia of the whole of human knowledge written in different languages, seen through different cultural lenses. So different cultures could have ways of explaining the universal core knowledge or illustrating the concepts that are derived from them.

So IQ tests, just like the entries in the encyclopedia, are only usable in certain contexts. While the entries in the encyclopedia could be usable in more than one context, there is a difference for IQ testing. The tests are created by people from a narrow social class, and so the items on them are therefore class-specific. This then results in cultural biases, because people from different classes and cultures are exposed to different knowledge bases, so people will be differentially prepared for test-taking on this basis alone. So the knowledge that people are exposed to based on their class membership, or even different cultures within America or an immigrant culture, would influence test scores. So while there is universal core knowledge, and some of this knowledge may be on IQ tests, the fact is that different classes and cultures are exposed to different knowledge bases, and that’s why they score differently—the specific language and numerical skills on IQ tests are class-specific (Brito, 2017). I have noted for years how culturally-dependent IQ tests are, and this interpretation is reinforced when we consider knowledge and its varying interpretations found in the multilingual encyclopedia, which highlights the intricate relationship between culture, language, and IQ. This then serves to show that IQ tests are mere knowledge tests—class-specific knowledge tests (Richardson, 2002).

So my thought experiment shows that while there are fundamental scientific discoveries, historical events and mathematical theorems that remain constant throughout the world and across different languages and cultures, the encyclopedia’s entries are designed to be used in specific contexts. So the multilingual encyclopedia thought experiment supports my claim that even when knowledge is context-independent (like that of scientific discoveries, historical facts), it can become context-dependent when it is used and applied within specific cultural and linguistic contexts. This, then, aligns with the part of my argument that knowledge is not entirely divorced from social, cultural and contextual influences.

Conclusion

The limitations of IQ tests become evident when we consider how individuals produce and acquire knowledge and the cultural and linguistic diversity and contexts that define our social worlds. The analogy of the multilingual encyclopedia shows that while certain core principles remain constant, the way that we perceive and apply knowledge is deeply entwined within the cultural and social contexts in which we exist. This dynamic relationship between culture, language, knowledge and intelligence, then, underscores the need to recognize the social formation of mind and intelligence.

Ultimately, human socio-cultural interactions, language, and the knowledge we accumulate together mold our understanding of intelligence and how we acquire it. The understanding that intelligence arises through these multifaceted exchanges and interactions within a social and cultural framework points to a more comprehensive perspective. So by acknowledging the vital role of culture and language in the formation of human intelligence, we not only deconstruct the limitations of IQ tests, but we also lay the foundation for a more encompassing way of thinking about what it truly means to be intelligent, and how it is shaped and nurtured by our social lives in our unique cultural contexts and the experiences that we have.

Thus, to truly grasp the essence of human intelligence, we don’t need IQ tests, and we certainly don’t need claims that genes cause IQ or psychological traits and thereby make certain people or groups more intelligent than others; we have to embrace the fact that human intelligence thrives within the web of social and cultural influences and interactions, which collectively form what we understand as the social formation of mind.

Free Will and the Immaterial Self: How Free Will Proves that Humans Aren’t Fully Physical Beings

2200 words

Introduction

That humans have freedom of will demonstrates that there is an immaterial aspect to humans. It implies that there is a nonphysical aspect to humans; thus, humans aren’t fully physical beings. I will use the Ross-Feser argument on the immateriality of thought to strengthen that conclusion. But before that, I will demonstrate that we do indeed have free will. The conclusion that we have free will will then be used to generate the further conclusion that we are not fully physical beings. This conclusion is, moreover, supported by arguments for many flavors of dualism. I will conclude by providing a compelling case against the physicalist, materialist view that seeks to reduce human beings to purely physical entities—a view directly contested by the conclusion of my argument.

CID and free will

I recently argued for a view I call cognitive interface dualism (CID). The argument I formulated used action potentials (APs) as the intermediary between the mental and physical realms that Descartes was looking for (he thought that this interaction took place at the pineal gland, but he was wrong). So free will on my CID can be seen as a product of mental autonomy, non-deterministic mental causation, and the emergent properties of mind. CID can thus accommodate free will and allow for its existence without relying on determinism.

The CID framework also argues that M is irreducible to P, consistent with other forms of dualism. This suggests that the mind has a level of autonomy that isn’t completely determined by physical or material processes. Furthermore, decision-making occurs in the mental realm. CID allows for mental states to causally influence physical states (mental causation), and so free will operates when humans make choices, and these choices can initiate actions which aren’t determined by physical factors. Free will is also compatible with the necessary role of the human brain for minds—it’s an emergent property of the interaction of M and P. The fact of the matter is, minds allow agency, the ability to reason and make choices. That is, humans are unique, special animals precisely because they have an immaterial mind which allows the capacity to make decisions and have freedom.

Overall, the CID framework provides a coherent explanation for the existence of free will, alongside the role of the brain in human cognition. It further allows for a nuanced perspective on human agency, while emphasizing the unique qualities of human decision-making and freedom.

Philosopher Peter van Inwagen has an argument using modus ponens which states: If moral responsibility exists, then free will exists. Moral responsibility exists, because individuals are held accountable for their actions in the legal system, ethical discussions, and everyday life. Thus, free will exists. Basically, if you’ve ever said to someone “That’s your fault”, you’re holding them accountable for their actions, assuming that they had the capacity to make choices and decisions independently. So this aligns with the concept of free will, since you’re implying that the person had the ability to act differently and make alternative choices.

Libet-style experiments claim that unconscious brain processes are initiated before an action is made, preceding the conscious intention to move. But neither the original Libet experiment nor any similar ones justify the claim that the brain initiates freely-willed processes (Radder and Meynen, 2012)—because the mind is what is initiating these freely-willed actions.

Furthermore, when we introspect and reflect on our conscious experiences, we unmistakably perceive ourselves as making choices and decisions in various situations in our lives. These choices and decisions feel unconstrained and open; we experience a sense of deliberation when making them. But if we had no free will and our choices were entirely determined by external factors, then our experience of making choices would be illusory; our choices would be mere illusions of free will. Thus, the fact that we have a direct and introspective awareness of making choices implies that free will exists; it’s a fundamental aspect of our human experiences. So while this argument doesn’t necessarily prove that free will exists, it highlights the compelling phenomenological aspects of human decision-making, which can be seen as evidence for free will.

Having said all of this, I can now make the following argument: If humans have the ability to reason and make logical decisions, then humans have free will. Humans have the ability to reason and make logical decisions. So humans have free will. I will then take this conclusion that I inferred and use it in a later argument to infer that humans aren’t purely physical beings.
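The validity of this argument form (modus ponens) can be checked mechanically. Here is a quick truth-table sketch in Python—my own illustration, not part of the original argument—where R stands for “humans can reason and make logical decisions” and W for “humans have free will”:

```python
from itertools import product

# Truth-table check of modus ponens: from (R -> W) and R, infer W.
# The form is valid iff no row makes both premises true while W is false.
rows = product([False, True], repeat=2)
valid = all(W for R, W in rows if (not R or W) and R)
print(valid)  # True: the form is valid
```

Of course, validity of the form is distinct from the truth of premise (1) itself, which is what a critic would have to contest.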

Freedom and the immaterial self

James Ross (1992) argued that all formal thinking is incompossibly determinate, while no physical process or function of physical processes is incompossibly determinate, which allowed him to infer that thought isn’t a functional or physical process. Ed Feser (2013) then argued that Ross’ argument cannot be refuted by any neuroscientific discovery. Feser then added to the argument and correctly inferred that humans aren’t fully physical beings.

A, B, and C are, after all, only the heart of Ross’s position.  A little more fully spelled out, his overall argument essentially goes something like this:

A. All formal thinking is determinate.

B. No physical process is determinate.

C. No formal thinking is a physical process. [From A and B]

D. Machines are purely physical.

E. Machines do not engage in formal thinking. [From C and D]

F. We engage in formal thinking.

G. We are not purely physical. [From C and F] (Ed Feser, Can Machines Beg the Question?)
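The inferential steps here (A, B to C; C, F to G) are formally valid, and for a monadic, syllogistic form like this a small finite-model check suffices. As an illustration (my own sketch, not Feser’s), a brute-force search over every assignment of the three predicates on a three-element domain finds no countermodel to the A, B to C step:

```python
from itertools import product

# F(x): x engages in formal thinking; D(x): x is determinate;
# P(x): x is a (purely) physical process.
domain = range(3)
truth_vectors = list(product([False, True], repeat=len(domain)))

entailed = True
for F, D, P in product(truth_vectors, repeat=3):
    A = all(not F[x] or D[x] for x in domain)      # all formal thinking is determinate
    B = all(not P[x] or not D[x] for x in domain)  # no physical process is determinate
    C = all(not F[x] or not P[x] for x in domain)  # no formal thinking is a physical process
    if A and B and not C:
        entailed = False  # a countermodel would be recorded here

print(entailed)  # True: A and B entail C
```

The same check works for the C, D to E and C, F to G steps; what the physicalist must contest is the truth of the premises, not the logic.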

This is a conclusion that I myself have come to: since machines are purely physical and thinking isn’t a physical process (though physical processes are necessary for thinking), machines cannot think, because they are purely physical while thinking isn’t a physical or functional process.

Only beings with minds can intend. This is because mind allows a being to think. Since the mind isn’t physical, it would follow that a physical system can’t intend to do something—since it wouldn’t have the capacity to think. Take an alarm system. The alarm system does not intend to sound alarms when the system is tripped. It’s merely doing what it was designed to do; it’s not intending to carry out the outcome. The alarm system is a physical thing made up of physical parts. We can then liken this to, say, A.I. A.I. is made up of physical parts. So A.I. (a computer, a machine) can’t think. Moreover, individual physical parts are mindless, and no collection of mindless things counts as a mind. Thus, a mind isn’t a collection of physical parts. Physical systems are ALWAYS a complicated system of parts, but the mind isn’t. So it seems to follow that nothing physical can ever have a mind.

Physical parts of the natural world lack intentionality. That is, they aren’t “about” anything. It is impossible for an arrangement of physical particles to be “about” anything—meaning no arrangement of intentionality-less parts will ever count as having a mind. So a mind can’t be an arrangement of physical particles, since individual particles are mindless. Since mind is necessary for intentionality, it follows that whatever doesn’t have a mind cannot intend to do anything, like nonhuman animals. It is human psychology that is normative, and since the normative ingredient for any normative concept is the concept of reason, and only beings with minds can have reasons to act, human psychology is thus irreducible to anything physical. Indeed, physicalism is incompatible with intentionality (Johns, 2020). The problem of intentionality is therefore yet another kill-shot for physicalism. It is therefore impossible for intentional states (i.e. cognition) to be reduced to, or explained by, physicalist theories/physical things. (Why Purely Physical Things Will Never Be Able to Think: The Irreducibility of Intentionality to Physical States)

Now that I have argued for the existence of free will, I will argue that our free will implies that there is an aspect of our selves and our existence that is not purely physical, but immaterial. Effectively, I will be arguing that humans aren’t fully physical beings.

So if humans were purely physical beings, then our actions and choices would be solely determined by physical laws and processes. However, if we have free will, then our actions are not solely determined by physical laws and processes, but are influenced by our capacity to make decisions independently. So humans possess a nonphysical aspect—free will, which is allowed by the immaterial mind and consciousness—which allows us to transcend the purely deterministic nature of purely physical things. Consequently, humans cannot be fully physical beings, since the existence of free will and the immaterial mind and consciousness suggests a nonphysical, immaterial aspect to our existence.

Either humans have free will, or humans do not have free will. If humans have free will, then humans aren’t purely physical. If humans don’t have free will, then it contradicts the premise that we have free will. So humans must have free will. Consequently, humans aren’t fully physical beings.

Humans aren’t fully physical beings, since we have the capacity for free will and thought—where free will is the capacity to make choices that are not determined by external factors alone. If humans have the ability to reason and make logical decisions, then humans have free will. Humans have the ability to reason and make logical decisions. So humans have free will. Reasoning and the ability to make logical decisions is based on thinking. Thinking is an immaterial—non-physical—process. So if thinking is an immaterial process, and what allows thinking are minds which can’t be physical, then we aren’t purely physical. Put into premise and conclusion form, it goes like this:

(1) If humans have the ability to reason and make logical decisions, then humans have free will.
(2) Humans have the ability to reason and make logical decisions.
(3) Reasoning and the ability to make logical decisions are based on thinking.
(4) Thinking is an immaterial—non-physical—process.
(5) If humans have free will, and what allows free will is the ability to think and make decisions, then humans aren’t purely physical beings.
(C) So humans aren’t purely physical beings.

This argument suggests that humans possess free will and engage in immaterial thinking processes, which according to the Ross-Feser argument, implies the existence of immaterial aspects of thought. So what allows this is consciousness, and the existence of consciousness implies the existence of a nonphysical entity. This nonphysical entity is the mind.

So in CID, the self (S) is the subject of experience, the mind (M) encompasses mental states, subjective experiences, thoughts, emotions, and consciousness, and consciousness (C) refers to the awareness of one’s own mental states and experiences. CID also recognizes that the brain is a necessary precondition for human mindedness but not a sufficient condition, so for there to be a mind at all there needs to be a brain—basically, for there to be mental facts, there must be physical facts. The self is what has the mind, and the mind is the realm in which mental states and experiences occur. So CID posits that the self is the unified experiencer—the entity that experiences and interacts with the contents of the mind through APs.

So the argument I’ve mounted in this article and in my original article on CID is that humans aren’t fully physical beings, since thinking and conscious experiences are immaterial, nonphysical processes.

Conclusion

So CID offers a novel perspective on the mind-body problem, arguing that APs are the interface between the mental and the physical world. Now, with the arguments I’ve made here, it is established that humans aren’t purely physical beings. Through the argument that mental states are irreducible to physical states, CID acknowledges that the existence of an immaterial self plays a fundamental role in human mental life. This immaterial self—the seat of our conscious experiences, thoughts, decisions and desires—bridges the gap between M and P. This further underscores the argument that the mind is immaterial, and thus so is the self (“I”, the experiencer, the subject of experience), and that the subject isn’t the brain or the nervous system.

CID recognizes that human mental life is characterized by its intrinsic mental autonomy and free will. We are not mere products of deterministic physical processes; rather, we are agents capable of making genuine choices and decisions. The conscious experience of making choices, along with the profound sense of freedom in our decisions, are immediate and undeniable aspects of our reality, which further cements the existence of free will. So the concept of free will reinforces the claim and argument that humans aren’t fully physical beings. These aspects of our mental life defy reduction to physical causation.

Cope’s (Deperet’s) Rule, Evolutionary Passiveness, and Alternative Explanations

4450 words

Introduction

Cope’s rule is an evolutionary hypothesis which suggests that, over geological time, species have a tendency to increase in body size. (Although it has been proposed for Cope’s rule to be named Deperet’s rule, since Cope didn’t explicitly state the hypothesis while Deperet did, Bokma et al, 2015.) Named after Edward Drinker Cope, it proposes that on average through the process of “natural selection” species have a tendency to get larger, and so it implies a directionality to evolution (Hone and Benton, 2005; Liow and Taylor, 2019). So there are a few explanations for the so-called rule: Either it’s due to passive or driven evolution (McShea, 1994; Gould, 1996; Raia et al, 2012) or due to methodological artifacts (Sowe and Wang, 2008; Monroe and Bokma, 2010).

However, Cope’s rule has been subject to debate and scrutiny in paleontology and evolutionary biology. The interpretation of Cope’s rule hinges on how “body size” is interpreted (mass or length), along with alternative explanations. I will trace the history of Cope’s rule, discuss studies in which it was proposed that the directionality from the rule was empirically shown, and discuss methodological issues. I propose alternative explanations that don’t rely on the claim that evolution is “progressive” or “driven.” I will also show that developmental plasticity throws a wrench in this claim, too. I will then end with a constructive dilemma argument showing that either Cope’s rule is a methodological artifact, or it’s due to passive evolution, since it’s not a driven trend as progressionists claim.

How developmental plasticity refutes the concept of “more evolved”

In my last article on this issue, I showed the logical fallacies inherent in the argument PumpkinPerson uses—it affirms the consequent, assuming it’s true leads to a logical contradiction, and of course reading phylogenies in the way he does just isn’t valid.

If the claim “more speciation events within a given taxon = more evolution” were valid, then we would consistently observe a direct correlation between the number of speciation events and the extent of evolutionary change in all cases. But we don’t, since evolutionary rates vary and other factors influence evolution, so the claim isn’t universally valid.

Take these specific examples: The horseshoe crab has a lineage going back hundreds of millions of years with few speciation events, yet it has undergone evolutionary changes. Conversely, microorganisms could undergo many speciation events with relatively minor genetic change. Cichlid fishes have undergone rapid evolutionary change and speciation, but the genetic and phenotypic diversity between them doesn’t solely depend on speciation events, since factors like ecological niche partitioning and sexual selection also play a role in why they are different even though they are relatively young species (a specific claim that Herculano-Houzel made in her 2016 book The Human Advantage). Lastly, human evolution has relatively few speciation events, but the extent of evolutionary change in our species is vast. Speciation events are of course crucial to evolution. But if one reads too much into the abstractness of the evolutionary tree, then they will not read it correctly. The position of the terminal nodes is meaningless.

It’s important to realize that evolution isn’t just morphological change which then leads to the creation of a new species (this is macro-evolution); there is also micro-evolution. Species that underwent evolutionary change without speciation include bacteria that evolved antibiotic resistance, humans with lactase persistence, Darwin’s finches, and moths that underwent industrial melanism (the peppered moth). These are quite clearly evolutionary changes, and they’re due to microevolutionary changes.

Developmental plasticity directly refutes the contention of “more evolved,” since individuals within a species can exhibit significant trait variation without speciation events. This isn’t captured by phylogenies: they’re typically modeled on genetic data, and they don’t capture developmental differences that arise due to environmental factors during development. (See West-Eberhard’s outstanding Developmental Plasticity and Evolution for more on how in many cases development precedes genetic change, meaning that the inference can be drawn that genes aren’t leaders in evolution, they’re mere followers.)

If “more evolved” is solely determined by the number of speciation events (branches) in a phylogeny, then species that exhibit greater developmental plasticity should be considered “more evolved.” But it is empirically observed that some species exhibit significant developmental plasticity which allows them to rapidly change their traits during development in response to environmental variation without undergoing speciation. So since species with more developmental plasticity aren’t considered “more evolved” on that criterion, the assumption that “more evolved” is determined by speciation events is invalid. So the concept of “more evolved” as determined by speciation events or branches isn’t valid, since it isn’t supported when considering the significant role of developmental plasticity in adaptation.

There is anagenesis and cladogenesis. Anagenesis is the creation of a species without a branching of the ancestral species. Cladogenesis is the formation of a new species by evolutionary divergence from an ancestral form. So due to evolutionary changes within a lineage, the organism that underwent evolutionary changes replaces the older one. So anagenesis shows that a species can slowly change and become a new species without there being a branching event. Horse, human, elephant, and bird evolution are examples of this.

Nonetheless, developmental plasticity can lead to anagenesis. Developmental, or phenotypic, plasticity is the ability of an organism to produce different phenotypes with the same genotype based on environmental cues that occur during development. Developmental plasticity can facilitate anagenesis, and since developmental plasticity is ubiquitous in development of not only an individual in a species but a species as a whole, then it is a rule and not an exception.

Directed mutation and evolution

Back in March, I wrote on the existence of directed mutations. Directed mutation directly speaks against the concept of “more evolved.” Here’s the argument:

(1) If directed mutations play a crucial role in helping organisms adapt to changing environments, then the notion of “more evolved” as a linear hierarchy is invalid.
(2) Directed mutations are known to occur and contribute to a species survivability in an environment undergoing change during development (the concept of evolvability is apt here).
(C) So the concept of “more evolved” as a linear hierarchy is invalid.

A directed mutation is a mutation that occurs due to environmental instability which helps an organism survive in the environment that changed while the individual was developing. Two mechanisms of DM are transcriptional activation (TA) and supercoiling. TAs can cause changes to single-stranded DNA, and can also cause supercoiling (the over- or under-winding of the DNA helix). TA can be caused by derepression (a mechanism that occurs due to the absence of a repressor molecule) or induction (the activation of an inactive gene which then gets transcribed). So these are examples of how nonrandom (directed) mutation and evolution can occur (Wright, 2000). Such changes are possible through the plasticity of phenotypes during development and ultimately are due to developmental plasticity. These stress-directed mutations can be seen as quasi-Lamarckian (Koonin and Wolf, 2009). It’s quite clear that directed mutations are real and have been empirically demonstrated.

DMs, along with developmental plasticity and evo-devo as a whole refute the simplistic thinking of “more evolved.”

Now here is the argument that PP is using, and why it’s false:

(1) More branches on a phylogeny indicate more speciation events.
(2) More speciation events imply a higher level of evolutionary advancement.
(C) Thus, more branches on a phylogeny indicate a higher level of evolutionary advancement.

The false premise is (2) since it suggests that more speciation events imply a higher level of evolutionary advancement. It implies a goal-directed aspect to evolution, where the generation of more species is equated with evolutionary progress. It’s just reducing evolution to linear advancement and progress; it’s a teleological bent on evolution (which isn’t inherently bad if argued for correctly, see Noble and Noble, 2022). But using mere branching events on a phylogeny to assume that more branches = more speciation = more evolved is simplistic thinking that doesn’t make sense.

If evolution encompasses changes in an organism’s phenotype, then changes in an organism’s phenotype, even without changing its genes, are considered examples of evolution. Evolution encompasses changes in an organism’s phenotype, so changes in an organism’s phenotype even without changes in genes are considered examples of evolution. There is nongenetic “soft inheritance” (see Bonduriansky and Day, 2018).

Organisms can exhibit similar traits due to convergent evolution. So it’s not valid to assume a direct and strong correlation between an organism’s position on a phylogeny and its degree of resemblance to a common ancestor.

Dolphins and ichthyosaurs share similar traits, but dolphins are mammals while ichthyosaurs are reptiles that lived millions of years ago. Their convergent morphology demonstrates that common ancestry doesn’t determine resemblance. The Tasmanian wolf and the grey wolf independently evolved similar body plans and ecological roles; despite different genetics and evolutionary histories, they share a physical resemblance due to similar ecological niches. The last common ancestor of bats and birds didn’t have wings, yet both lineages have wings, showing that the trait emerged twice independently. These examples show that the degree of resemblance to a common ancestor is not determined by an organism’s position on a phylogeny.

Now, there is a correlation between body size and branches (splits) on a phylogeny (Cope’s rule), and I will explain that later. But a correlation doesn’t mean, or imply, a linear progression. Back in 2017 I used the example of floresiensis, and that holds here too. And Terrence Deacon’s (1990) work suggests that pseudoprogressive trends in brain size can be explained by bigger whole organisms being selected—this is important because the whole animal is selected, not any one of its individual parts. The correlation isn’t indicative of a linear progression up some evolutionary ladder, either: It’s merely a byproduct of selecting larger animals (the only things that are selected).

I will argue that it is this remarkable parallelism, and not some progressive selection for increasing intelligence, that is responsible for many pseudoprogressive trends in mammalian brain evolution. Larger whole animals were being selected—not just larger brains—but along with the correlated brain enlargement in each lineage a multitude of parallel secondary internal adaptations followed. (Deacon, 1990)

Nonetheless, the claim here is one from DST—the whole organism is selected, so obviously so is its body plan (bauplan). Nevertheless, the last two havens for the progressionist are in the realm of brain size and body size. Deacon refuted the selection-for brain size claim, so we’re now left with body size.

Does the evolution of body size lend credence to claims of driven, progressive evolution?

The tendency for bodies to grow larger and larger over evolutionary time is something of a truism. Since smaller bacteria eventually evolved into larger (see Gould’s modal bacter argument), more complex multicellular organisms, this must mean that evolution is progressive and driven, at least for body size, right? Wrong. I will argue here, using a constructive dilemma, that either evolution is passive and that’s what explains the increases in body size, or the trend is due to methodological flaws in how body size is measured (length or mass).

In Full House, Gould (1996) argued that the evolution of body size isn’t driven, but that it is passive, namely that it is evolution away from smaller size. Nonetheless, it seems that Cope’s (Deperet’s) rule is due to cladogenesis (the emergence of new species), not selection for body size per se (Bokma et al, 2015).

Given these three conditions, we note an increase in size of the largest species only because founding species start at the left wall, and the range of size can therefore expand in only one direction. Size of the most common species (the modal decade) never changes, and descendants show no bias for arising at larger sizes than ancestors. But, during each act, the range of size expands in the only open direction by increase in the total number of species, a few of which (and only a few) become larger (while none can penetrate the left wall and get smaller). We can say only this for Cope’s Rule: in cases with boundary conditions like the three listed above, extreme achievements in body size will move away from initial values near walls. Size increase, in other words, is really random evolution away from small size, not directed evolution toward large size. (Gould, 1996)

Dinosaurs were some of the largest animals to ever live. So we might say that there is a drivenness in their bodies to become larger and larger, right? Wrong. The evolution of body size in dinosaurs is passive, not driven (progressive) (Sookias, Butler, and Benson, 2012). Gould (1996) also showed passive trends in body size in plankton and forams. He also cited Stanley (1973) who argued that groups starting at the left wall of minimum complexity will increase in mean size as a consequence of randomness, not any driven tendency for larger body size.

In other, more legitimate cases, increases in means or extremes occur, as in our story of planktonic forams, because lineages started near the left wall of a potential range in size and then filled available space as the number of species increased—in other words, a drift of means or extremes away from a small size, rather than directed evolution of lineages toward large size (and remember that such a drift can occur within a regime of random change in size for each individual lineage—the “drunkard’s walk” model).

In 1973, my colleague Steven Stanley of Johns Hopkins University published a marvelous, and now celebrated, paper to advance this important argument. He showed (see Figure 27, taken from his work) that groups beginning at small size, and constrained by a left wall near this starting point, will increase in mean or extreme size under a regime of random evolution within each species. He also advocated that we test his idea by looking for right-skewed distributions of size within entire systems, rather than by tracking mean or extreme values that falsely abstract such systems as single numbers. In a 1985 paper I suggested that we speak of “Stanley’s Rule” when such an increase of means or extremes can best be explained by undirected evolution away from a starting point near a left wall. I would venture to guess (in fact I would wager substantial money on the proposition) that a large majority of lineages showing increase of body size for mean or extreme values (Cope’s Rule in the broad sense) will properly be explained by Stanley’s Rule of random evolution away from small size rather than by the conventional account of directed evolution toward selectively advantageous large size. (Gould, 1996)
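Stanley’s point is easy to see in a toy simulation. The following is my own illustrative sketch (arbitrary units and parameters, not Stanley’s or Gould’s actual model): every lineage takes an unbiased random walk in size, constrained only by a reflecting “left wall” of minimum viable size, and the mean and extremes still drift away from the wall:

```python
import random

random.seed(42)

WALL = 1.0          # minimum viable body size (arbitrary units, the "left wall")
N_LINEAGES = 1000
N_STEPS = 500

# All founding lineages start at the wall.
sizes = [WALL] * N_LINEAGES
for _ in range(N_STEPS):
    for i in range(N_LINEAGES):
        step = random.uniform(-0.5, 0.5)       # unbiased: no drive toward large size
        sizes[i] = max(WALL, sizes[i] + step)  # sizes can't penetrate the left wall

mean_size = sum(sizes) / N_LINEAGES
print(mean_size > WALL)       # the mean drifts away from the wall
print(max(sizes) > 5 * WALL)  # extremes expand in the only open direction
```

No step in the walk favors larger size, yet the mean and maximum both increase—random evolution away from small size, not directed evolution toward large size.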

Gould (1996) also discusses the results of McShea’s study, writing:

Passive trends (see Figure 33) conform to the unfamiliar model, championed for complexity in this book, of overall results arising as incidental consequences, with no favored direction for individual species. (McShea calls such a trend passive because no driver conducts any species along a preferred pathway. The general trend will arise even when the evolution of each individual species conforms to a “drunkard’s walk” of random motion.) For passive trends in complexity, McShea proposes the same set of constraints that I have advocated throughout this book: ancestral beginnings at a left wall of minimal complexity, with only one direction open to novelty in subsequent evolution.

But Baker et al (2015) claim that body size is an example of driven evolution. However, they did not model cladogenetic factors, which calls their conclusion into question; I think their claim doesn’t follow. If a taxon possesses a potential size range and the ancestral size approaches the lower limit of this range, there will be a passive inclination for descendants to exceed the size of their ancestors. The taxon in question possesses a potential size range, and the ancestral size is at the lower end of the range. So there will be a passive tendency for descendants of this taxon to be larger than their predecessors.

Here’s an argument that concludes that evolution is passive and not driven. I will then give examples of P2.

(1) Extant animals that are descended from more nodes on an evolutionary tree tend to be bigger than animals descended from fewer nodes (your initial premise).
(2) There exist cases where extant animals descended from fewer nodes are larger or more complex than those descended from more nodes (counterexamples of bats and whales, whales are descended from fewer nodes while having some of the largest body sizes in the world while bats are descended from more nodes while having a way comparatively smaller body size).
(C1) Thus, either P1 doesn’t consistently hold (not all extant animals descended from more nodes are larger), or it is not a reliable rule (given the counters).
(3) If P1 does not consistently hold true (not all extant animals descended from more nodes are larger), then it is not a reliable rule.
(4) P1 does not consistently hold true.
(C2) P1 is not a reliable rule.
(5) If P1 is not a reliable rule (given the existence of counterexamples), then it is not a valid generalization.
(6) P1 is not a reliable rule.
(C3) So P1 is not a valid generalization.
(7) If P1 isn’t a valid generalization in the context of evolutionary biology, then there must be exceptions to this observed trend.
(8) The existence of passive evolution, as suggested by the inconsistencies in P1, implies that the trends aren’t driven by progressive forces.
(C4) Thus, the presence of passive evolution and exceptions to P1’s trend challenge the notion of a universally progressive model of evolution.
(9) If the presence of passive evolution and exceptions to P1’s trend challenges the notion of a universally progressive model of evolution, then that notion isn’t supported by the evidence.
(10) The presence of passive evolution and exceptions to P1’s trend challenge the notion of a universally progressive model of evolution.
(C5) Therefore, the notion of a universally progressive model of evolution isn’t supported by the evidence.

(1) Bluefin tuna are known to have a wide potential size range, with some individuals being small and others massive (think of the TV show Wicked Tuna and the range of sizes, in both length and mass, that the fishermen catch). So imagine a population of bluefin tuna whose ancestral size is close to the lower end of that range. P2 is then satisfied: the tuna have a potential size range, and the ancestral size is small relative to the maximum. So there will be a passive tendency for descendants to be larger.

(2) African elephants in some parts of Africa are small, due to ecological constraints and hunting pressures, and these smaller-sized ancestors are close to the lower limit of the potential size range of African elephants. Thus, according to P2, there will be a passive tendency for descendants of these elephants to be larger than their smaller-sized ancestors over time.

(3) Consider Galapagos tortoises, which are also known for their large variation in size among the different species and populations of the Galapagos Islands. So consider a population of Galapagos tortoises with smaller body sizes, due either to resource conditions or to the conditions of their ecologies. In this case, the ancestral size is close to the lower limit of the potential size range. Therefore, we can expect a passive tendency for descendants of these tortoises to evolve larger body sizes.

Further, in Stanley’s (1973) study of Cope’s rule in fossil rodents, he observed that body size distributions in these rodents became bigger over time while the modal size stayed small. This doesn’t even touch the fact that, because there are more small than large mammals, there would be a passive tendency toward large body sizes in mammals. Nor does it touch the methodological issues in determining body size for the rule (mass? length?). Nonetheless, Monroe and Bokma’s (2010) study showed that while there is a tendency for species to be larger than their ancestors, the difference was less than half a percent. So the increase in body size is explained by an increase in variance in body size (passiveness), not drivenness.

Explaining the rule

I think there are two explanations: either the rule is a methodological artifact, or it reflects passive evolution. I will discuss both, and then give a constructive dilemma argument that articulates this position.

Monroe and Bokma (2010) showed that even when Cope’s rule is assumed, the ancestor-descendant increase in body size was a mere 0.4 percent. They further discussed methodological issues with the so-called rule, citing Solow and Wang (2008), who showed that whether Cope’s rule “appears” depends on which assumptions about body size are used. For example, Monroe and Bokma (2010) write:

If Cope’s rule is interpreted as an increase in the mean size of lineages, it is for example possible that body mass suggests Cope’s rule whereas body length does not. If Cope’s rule is instead interpreted as an increase in the median body size of a lineage, its validity may depend on the number of speciation events separating an ancestor-descendant pair.

If size increase were a general property of evolutionary lineages – as Cope’s rule suggests – then even if its effect were only moderate, 120 years of research would probably have yielded more convincing and widespread evidence than we have seen so far.

Gould (1997) suggested that Cope’s rule is a mere psychological artifact. But I think it’s deeper than that. Now that I have ruled out body size being due to progressive, driven evolution, I will provide my constructive dilemma argument.

The form of constructive dilemma goes: (1) A V B. (2) If A, then C. (3) If B, then D. (C) Therefore, C V D. P1 is a disjunction: there are two possible options, A and B. P2 and P3 are conditional statements that give the implications of each option. And C states that at least one of the implications (C or D) must be true.
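That this form is valid can be checked mechanically. The sketch below (a generic propositional-logic check of my own, independent of the evolutionary content) enumerates every truth assignment for A, B, C, D and confirms that the premises are never all true while the conclusion is false:

```python
from itertools import product

def implies(p, q):
    """Material conditional: p -> q."""
    return (not p) or q

def constructive_dilemma_valid():
    """Check: from (A v B), (A -> C), (B -> D), infer (C v D)."""
    for a, b, c, d in product([True, False], repeat=4):
        premises = (a or b) and implies(a, c) and implies(b, d)
        conclusion = c or d
        if premises and not conclusion:
            return False  # a counterexample would make the form invalid
    return True  # no counterexample exists across all 16 assignments

print(constructive_dilemma_valid())  # True: the form is valid
```

Since no assignment makes the premises true and the conclusion false, any argument of this form is valid; what remains to defend are its premises.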

Now, Gould’s Full House argument can be formulated either using modus tollens or constructive dilemma:

(1) If evolution were a deterministic, teleological process, there would be a clear overall progression and a predetermined endpoint. (2) There is no predetermined endpoint or progression to evolution. (C) So evolution isn’t a deterministic or teleological process.

(1) Either evolution is a deterministic, teleological process (A) or it’s not (B). (2) If A, then there is a clear overall direction and predetermined endpoint (C). (3) If B, then there is no overall direction or predetermined endpoint (D). (4) So either there is a clear overall direction (C) or there isn’t (D) (constructive dilemma). (5) Not C (there is no clear overall direction). (6) Therefore, D, and so B.

Or: (1) Life began at a relatively simple state (the left wall of complexity). (2) Evolution is influenced by a combination of chance events, environmental factors, and genetic variation. (3) Organisms may stumble in various directions along the path of evolution. (C) Evolution lacks a clear path or predetermined endpoint.

Now here is the overall argument combining the methodological issues pointed out by Solow and Wang and the implications of passive evolution with Gould’s Full House argument:

(1) Either Cope’s rule is a methodological artifact (A), or it’s due to passive, not driven evolution (B). (2) If Cope’s rule is a methodological artifact (A), then different ways to measure body size (length or mass) can come to different conclusions. (3) If Cope’s rule is due to passive, not driven evolution (B), then it implies that larger body sizes simply accumulate over time without being actively driven by selective pressures. (4) Either evolution is a deterministic, teleological process (C), or it is not (D). (5) If C, then there would be a clear overall direction and predetermined endpoint in evolution (Gould’s argument). (6) If D, then there is no clear overall direction or predetermined endpoint in evolution (Gould’s argument). (7) Therefore, either there is a clear overall direction (C) or there isn’t (D) (Constructive Dilemma). (8) If there is a clear overall direction (C) in evolution, then it contradicts passive, not driven evolution (B). (9) If there isn’t a clear overall direction (D) in evolution, then it supports passive, not driven evolution (B). (10) Therefore, either Cope’s rule is due to passive evolution or it’s a methodological artifact.

Conclusion

Evolution is quite clearly passive and non-driven (Bonner, 2013). As I’ve shown, evolution isn’t driven (progressive); it is passive, due to the drunken, random walk that organisms take from the minimum left wall of complexity. The discussions of developmental plasticity and directed mutation further show that evolution can’t be progressive or driven. Organism body plans had nowhere to go but up from the left wall of minimal complexity, which means the increase in the variance of, say, body size is due to passive trends. Given the discussion here, we can draw one main inference: since evolution isn’t directed or progressive, the so-called Cope’s (Depéret’s) rule is either due to passive trends or a mere methodological artifact. The argument I have mounted for that claim is sound, and so it must be accepted that evolution is a random, drunken walk, not one of overall drivenness and progress; we must therefore look at the evolution of body size in this way.

Rushton tried to use the concept of evolutionary progress to argue that some races may be “more evolved” than other races, like “Mongoloids” being “more evolved” than “Caucasoids,” who are “more evolved” than “Negroids.” But Rushton’s “theory” was merely a racist one, and it fails upon close inspection. Moreover, even the claims Rushton made at the end of his book Race, Evolution, and Behavior don’t work. (See here.) Evolution isn’t progressive, so we can’t logically state that one population group is more “advanced” or “evolved” than another. This is merely Rushton being racist, with shoddy “explanations” used to justify it (as in Rushton’s long-refuted r/K selection theory, or Differential-K theory, where more “K-evolved” races are “more advanced” than others).

Lastly, this argument I constructed based on the principles of Gould’s argument shows that there is no progress to evolution.

P1 The claim that evolutionary “progress” is real and not illusory is justified only if organisms deemed more “advanced” outnumber “lesser” organisms.
P2 There are more “lesser” organisms (bacteria/insects) on earth than “advanced” organisms (mammals/species of mammals).
C Therefore evolutionary “progress” is illusory.

Action Potentials and their Role in Cognitive Interface Dualism

3000 words

Introduction

Rene Descartes proposed that the pineal gland was the point of contact—the interface—between the immaterial mind and physical body. He thought that the pineal gland in humans was different from that of nonhuman animals: in humans, the pineal gland was the seat of the soul (Finger, 1995). This view was eventually shown to be false. However, claims that the mental can causally interact with the physical (interactionist dualism) have been met with similar criticism. The worry runs: if the mental is irreducible to the physical and yet causally interacts with it, then physical laws like the conservation of energy seem to be violated, and so the mental must really be identical with the physical, that is, reducible to it. This seems to be a problem for the truth of any interactionist dualist theory. But there are solutions: deny that the causal closure of the physical (CCP) is true (the world isn’t causally closed); argue that CCP is compatible with interactionist dualism; or argue that CCP is question-begging (assuming in a premise what it seeks to establish), in that it assumes without proper justification that all physical events must be due to physical causes, thereby illegitimately excluding the possibility of mental causation.

In this article I will provide some reasons to believe that CCP is question-begging, and I will argue that mental causation is invisible (see Lowe, 2008). I will also argue that action potentials are the interface by which the mental and the physical interact, which is what makes it possible for a conscious decision to issue in movement. I will provide arguments that show that interactionist dualism is consistent with physics, while showing that action potentials are the interface that Descartes was looking for. Ultimately, I will show how the mental interacts with the physical for mental causation to be carried out, and how this isn’t an issue for the CCP. The view I will argue for here I call “cognitive interface dualism,” since it centers on the influence of mental states on action potentials and, through them, on the physical realm. It conveys the idea that mental processes interface with physical processes through the conduit of action potentials, without implying a reduction of the mental to the physical; it is a substance dualist position, since it still treats the mental and the physical as two different substances.

Causal closure of the physical

It is claimed that the world is causally closed—meaning that every physical event has a sufficient physical cause. Basically, no non-physical (mental) factors can cause or influence physical events. Here’s the argument:

(1) Every event in the world has a cause.
(2) Causes and effects within the physical world are governed by the laws of physics.
(3) Non-physical factors or entities, by definition, don’t belong to the physical realm.
(4) If a nonphysical factor were to influence a physical event, it would violate the laws of physics.
(5) Thus, the world is causally closed, meaning that all causes and effects in it are governed by physical interactions and laws.

But the issue here for the physicalist who wants to use causal closure is that mental events and states are qualitatively different from physical events and states. This is evidenced in Lowe’s distinction between intentional (mental) and event (physical) causation. Mental states like thoughts and consciousness possess qualitatively different properties than physical states. The causal closure argument assumes that physical events are the only causes of other physical events. But mental states appear to exert causal influence over physical events; for instance, voluntary action based on conscious decision, like my decision right now to write this article. So if mental states do influence physical events, then there must be interaction between the mental and physical realms, and this interaction contradicts strict causal closure of the physical realm. Since mental causation is necessary to explain aspects of human action and consciousness, it follows that the physical world may not be causally closed.

The problem of interaction for interactionist dualism is premised on the CCP: interaction supposedly violates the conservation of energy (CoE). If physical energy is needed to do physical work, then a conversion of mental into physical energy would result in an inexplicable increase in energy. I think there are many ways to attack this supposed knock-down argument against interactionist dualism, and I will make the case in an argument below, arguing that action potentials are where the brain and the mind interact to carry out intentions. However, there are no strong, non-question-begging arguments for causal closure (e.g., see Bishop, 2005; Dimitrijevic, 2010; Gabbani, 2013; Gibb, 2015), and the inductive arguments commit sampling errors or non sequiturs (Buhler, 2020). So the CCP is either question-begging or unsound (Menzies, 2015). I will discuss this issue before concluding this article, and I will argue that my claim that APs serve as the interface between the mental and the physical, together with the question-beggingness of causal closure, actually strengthens my argument.

The argument for action potentials as the interface between the mind and the brain

The view that I will argue for here, I think, is unique and has never been argued for in the philosophical literature on mental causation. In the argument that follows, I will show how arguing that action potentials (APs) are the point of contact—the interface—between the mind and brain doesn’t violate the CCP nor does it violate CoE.

In an article on strength and neuromuscular coordination, I explained the relationship between the mind-muscle connection and action potentials:

The above diagram I drew is the process by which muscle action occurs. In my recent article on fiber typing and metabolic disease, I explained the process by which muscles contract:

But the skeletal muscle will not contract unless the skeletal muscles are stimulated. The nervous system and the muscular system communicate, which is called neural activation—defined as the contraction of muscle generated by neural stimulation. We have what are called “motor neurons”—neurons located in the CNS (central nervous system) which can send impulses to muscles to move them. This is done through a special synapse called the neuromuscular junction. A motor neuron that connects with muscle fibers is called a motor unit, and the point where the muscle fiber and motor unit meet is called the neuromuscular junction. It is a small gap between the nerve and muscle fiber called a synapse. Action potentials (electrical impulses) are sent down the axon of the motor neuron from the CNS, and when the action potential reaches the end of the axon, hormones called neurotransmitters are then released. Neurotransmitters transport the electrical signal from the nerve to the muscle.

So action potentials (APs) are carried out at the junction between synapses. Regarding acetylcholine: when it is released, it crosses the synapse (the small space which separates the muscle from the nerve) and binds onto the receptors of the muscle fibers. So we know that, in order for a muscle to contract, the brain sends the chemical message (acetylcholine) across synapses, which then initiates movement. So, as can be seen from the diagram above, the MMC refers to the chemo-electric connection between the motor cortex, the cortico-spinal column, peripheral nerves, and the neuromuscular junction. A neuromuscular junction is a synapse formed by the contact between a motor neuron and a muscle fiber.

This explanation will set the basis for my argument on how action potentials are the interface—the point of contact—by which the mind and brain meet.

As I have already shown, APs are electrochemical events that transmit signals within the nervous system and are generated as the result of neural activity, which can be influenced by mental states like thoughts and intentions. The brain operates in accordance with physical laws and obeys the CoE; nevertheless, the initiation of APs could be (and, I hold, often is) influenced by mental intentions and processes. Mental processes could modulate the threshold or likelihood of AP firing through complex biochemical mechanisms that do not violate the CoE. Of course, the energy required for generating APs ultimately derives from metabolic processes within the body, which could be influenced by mental states like attention, intention, and emotional states. This interaction does not violate the CoE, nor does it require a violation of the laws of physics, since it operates within the bounds of biochemical and electrochemical processes that respect the CoE. Therefore, APs serve as the point of controlled interaction between the mental and physical realms, allowing for mental causation without disrupting the overall energy balance in the physical world.
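To make the threshold-modulation idea concrete, here is a toy “leaky integrate-and-fire” sketch (a standard textbook idealization, not a claim about the actual biophysics of intention; every parameter value is hypothetical). It shows only the narrow quantitative point: lowering a firing threshold raises the probability that an AP occurs, while the energy of the spike itself is supplied by the neuron’s own metabolic gradients.

```python
import random

def simulate_firing_rate(threshold_mv, n_trials=2000, seed=1):
    """Toy leaky integrate-and-fire neuron: count how often noisy
    synaptic input drives the membrane potential past a firing
    threshold. All parameter values are illustrative, not
    physiological claims."""
    rng = random.Random(seed)
    fires = 0
    for _ in range(n_trials):
        v = -70.0             # resting potential (mV)
        for _ in range(100):  # 100 time steps of noisy input
            v += -0.1 * (v + 70.0) + rng.gauss(0.8, 2.0)  # leak + input
            if v >= threshold_mv:
                fires += 1
                break
    return fires / n_trials

# Lowering the threshold raises the probability that a spike fires; the
# energy of the spike itself comes from the cell's own ionic gradients,
# not from whatever modulates the threshold.
low_threshold_rate = simulate_firing_rate(threshold_mv=-55.0)
high_threshold_rate = simulate_firing_rate(threshold_mv=-50.0)
print(f"firing prob at -55 mV threshold: {low_threshold_rate:.2f}")
print(f"firing prob at -50 mV threshold: {high_threshold_rate:.2f}")
```

Whether anything mental in fact modulates such a threshold is, of course, exactly what is at issue philosophically; the sketch only shows that threshold modulation per se adds no energy to the system.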

Lowe argued that mental causation is invisible, and since it is invisible, it is not amenable to scientific investigation. This view can be integrated into my argument that APs serve as the interface between the two substances, mental and physical. APs are observable electrochemical events in a neuron which could be influenced by mental states. So, as I argued above, mental processes could influence or modulate the generation of APs. The invisibility of mental causation refers to the idea that mental events like thoughts, intentions, and consciousness are not directly perceptible the way physical objects or events are. In my view, APs hold a dual role: they function as the interface between the mental and the physical, providing the means by which the mental can influence physical events, and they also act as the causal mechanism connecting mental states to physical events.

Thus, given the distinction between physical events (like APs) and the subjective nature of mental states, the view I have argued for above is consistent with the invisibility of mental causation. Mental causation involves the idea that mental states can influence physical events, and that they have causal efficacy on the physical world. So our mental experiences can lead to physical changes in the world based on the actions we carry out. But since mental states aren’t observable like physical states are, it’s challenging to show how they could lead to effects on the physical world. We infer the influence of mental states on physical events through the effects on observable physical processes. We can’t directly observe intention, we infer it on the basis of one’s action. Mental states could influence physical events through complex chains of electrochemical and biochemical processes which would then make the causative relationship less apparent. So while APs serve as the interface, this doesn’t mean that mental states and APs are identical. This is because while the mental can’t be reduced to physiology (the physical), it encompasses a range of subjective experiences, emotions, thoughts, and intentions that transcend the mechanistic explanations of neural activity.

It is quite obviously an empirical fact that the mental can influence the physical. Think of the fight-or-flight response. When one sees something that one fears (say, an animal), there is then a concurrent change in certain hormones. This simple example shows how the mental can have an effect on the physical. The initial event of seeing something fearful is a subjective experience, occurring in the realm of consciousness and mental states. The subjective experience of fear then triggers the fight-or-flight response, which leads to the release of stress hormones like cortisol and adrenaline. These physiological changes are part of the body’s response to a perceived threat, based on the subject’s personal subjective experience. The release of stress hormones is a physical event, and these hormones then have measurable effects on the body (an increase in heart rate, heightened alertness, and energy mobilization) which prepare the subject for action: to either fight or flee from the situation that caused the fear. This is a solid example of how the mental can influence the physical.

The only way, I think, that my view can be challenged is by arguing that the CCP is true. But if it is question-begging, then my proposition that mental states can influence APs is then less contentious. Furthermore, my argument on APs could be open to multiple interpretations of causal closure. So instead of strictly adhering to causal closure, my view could accommodate various interpretations that allow mental causation to have an effect in the physical realm. Thus, since I view causal closure as question begging, it provides a basis for my view that mental states can influence APs and by extension the physical world. And if the CCP is false, my view on action potentials is actually strengthened.

The view I have argued for here is a simplified perspective on the relationship between the mental and the physical. But my intention isn’t to offer a comprehensive account of all aspects of mental and physical interaction, rather, it is to highlight the role of APs as a point of connection between the mental and physical realms.

Cognitive interface dualism as a form of substance dualism

The view I have argued for here is a substance dualist position. Although it posits an intermediary in APs that facilitates interaction between the mental and physical realms, it still maintains the fundamental duality between mental and physical substances. Mental states are irreducible to physical states, and they interact though APs without collapsing into a single substance. Mental states involve subjective experiences, intentionality, and qualia which are fundamentally different from the objective and quantifiable nature of the physical realm, which I have argued before. APs serve as the bridge—the interface—between the mental and the physical realms, so my dualistic perspective allows for interaction while still preserving the unique properties of the mental and the physical.

Although APs serve as the bridge between the mental and the physical, the interaction between mental states and APs suggests that mental causation operates independently of physical processes. This implies that the self, which originates in mental states, isn’t confined to the physical realm and isn’t reducible to the physical. The self’s subjective experiences, consciousness, and self-awareness cannot be explained by physical or material processes, which indicates an immaterial substance beyond the physical. The unity of consciousness, which is the integrated sense of self and personal identity over time, is better accounted for by an immaterial self that transcends changes in physical states. Lastly, mental states possess qualitative properties, like qualia, that defy reduction to physical properties. These qualities, then, point to a distinct and immaterial self.

My view posits a form of non-reductive mental causation, where mental states influence APs, acknowledging the nonphysical influence on the mental to the physical. Interaction doesn’t imply reduction; mental states remain irreducible even though they impact physical processes. My view also accommodates consciousness, subjectivity, and intentionality which can’t be accounted for by material or physical processes. My view also addresses the explanatory gap between objective physical processes and subjective mental processes, which can’t be accounted for by reduction to physical brain (neural) processes.

Conclusion

The exploration of APs within the context of cognitive interface dualism offers a perspective on the interplay between the mental and physical substances. My view acknowledges APs as the bridge of interaction between the mental and the physical, and it fosters a deeper understanding of the role of mental causation in helping us understand reality.

Central to my view is recognizing that while APs do serve as the interface or conduit by which the mental and the physical interact, and how mental states can influence physical events, this does not entail that the mental is reducible to the physical. My cognitive interface dualism therefore presents a nuanced approach that navigates the interface between the seen and the unseen, the physical and the mental.

While traditional views of causal closure may raise questions about the feasibility of mental causation, the concept’s rigidity is challenged by the intermediary role of APs. While I do hold that the CCP is question-begging, the view I have argued for here explores an alternative avenue which transcends that limitation. So whether or not the strict view of the CCP stands, my view remains strong.

This view is also inherently anti-reductionist, asserting that personal identity, consciousness, subjectivity, and intentionality cannot be reduced to the physical. Thus, it doesn’t succumb to the traditional limitations of physicalism. Cognitive interface dualism also challenges the notion that we are reducible to our physical brains or our mental activity. The self—the bearer of mental states—isn’t confined to neural circuitry; although the physical is necessary for our mental lives, it isn’t a sufficient condition (Gabriel, 2018).

Lastly, of course, this view means that since the mental is irreducible to the physical, psychometrics isn’t a measurement enterprise. Any argument that espouses the view that the mental is irreducible to the physical entails that psychometrics isn’t measurement. So by acknowledging that mental states, consciousness, and subjective experiences transcend the confines of physical quantification, cognitive interface dualism dismantles the assumption that the human mind can be measured and encapsulated using numerical metrics. This view holds that the mental resists quantification, since only the physical is quantifiable: only physical quantities have a specified measured object, object of measurement, and measurement unit.

All in all, the view I title cognitive interface dualism explains how mental causation occurs through action potentials. It still holds that the mental is irreducible to the physical, but that the mental and physical interact without the mental being reduced to the physical. This view, I think, is unique, and it shows how mental causation occurs and thus how we perform actions.

IQ, Achievement Tests, and Circularity

2150 words

Introduction

In the realm of educational assessment and psychometrics, a distinction between IQ and achievement tests is said to hold: IQ is claimed to measure one’s potential learning ability, while achievement tests show what one has actually learned. However, this distinction is not strongly supported in my reading of the literature. IQ and achievement tests are merely different versions of the same evaluative tool. That is what I will argue in this article: IQ and achievement tests are different versions of the same test, and so any attempt to “validate” IQ tests on the basis of other IQ tests, achievement tests, or job performance is circular. I will also argue that, of course, the goal of psychometrics in measuring the mind is impossible. The hereditarian argument, when it comes to defending their concept and the claim that they are measuring some unitary and hypothetical variable, then, fails. At best, these tests show one’s distance from the middle class, since that’s where most of the items on the test derive from. Thus, IQ and achievement tests are different versions of the same test, and they merely show one’s “distance” from a certain kind of class-specific knowledge (Richardson, 2012), due to the cultural and psychological tools one must possess to score well on these tests (Richardson, 2002).

Circular IQ-ist arguments

IQ-ists have been using IQ tests since they were brought to America by Henry Goddard in 1913. But one major issue (one they still haven’t solved, and quite honestly never will) was that they didn’t have any way to ensure that the tests were construct valid. This is why, in 1923, Boring stated that “intelligence is what intelligence tests test,” while Jensen (1972: 76) said “intelligence, by definition, is what intelligence tests measure.” However, such statements are circular precisely because they don’t provide real evidence or explanation.

Boring’s claim that “intelligence is what intelligence tests test” is circular since it defines intelligence based on the outcome of “intelligence tests.” So if you ask “What is intelligence“, and I say “It’s what intelligence tests measure“, I haven’t actually provided a meaningful definition of intelligence. The claim merely rests on the assumption that “intelligence tests” measure intelligence, not telling us what it actually is.

Jensen’s (1972) claim that “intelligence, by definition, is what intelligence tests measure” is circular for reasons similar to Boring’s, since it also defines intelligence by referring to “intelligence tests” while at the same time assuming that intelligence tests accurately measure intelligence. Neither claim provides an independent understanding of what intelligence is; each merely ties the concept of “intelligence” back to its “measurement” (by IQ tests). Jensen’s version of Spearman’s hypothesis on the nature of black-white differences has also been criticized as circular (Wilson, 1985). Not only were Jensen (and by extension Spearman) guilty of circular reasoning, so too was Sternberg (Schlinger, 2003). Such a circular claim was also made by van der Maas, Kan, and Borsboom (2014).

But Jensen seemed to have changed his view: in his 1998 book The g Factor, he argues that we should dispense with the term “intelligence,” yet, curiously, that we should still study the g factor and assume identity between IQ and g… (Jensen made many more logical errors in his defense of “general intelligence,” like saying not to reify intelligence on one page and then reifying it a few pages later.) Circular arguments have been identified not only in Jensen’s writings on Spearman’s hypothesis, but also in the use of construct validity to validate a measure (Gordon, Schonemann; Guttman, 1992: 192).

The same circularity can be seen when the correlation between IQ and achievement tests is brought up. “These two tests correlate, so they’re measuring the same thing” is an example one may come across. But the error here is assuming that mental measurement is possible and that IQ and achievement tests are independent of each other. However, IQ and achievement tests are different versions of the same test. This is an example of circular validation, which occurs when a test’s “validity” is established by the test itself, leading to a self-reinforcing loop.

IQ tests are often validated against older editions of the same test. For example, a newer version of the S-B would be “validated” against the older version it was created to replace (Howe, 1997: 18; Richardson, 2002: 301). This not only leads to circular “validation”, but also carries forward the assumptions of the older test constructors (like Terman): since Terman assumed men and women should be equal in IQ, that assumption is still there today. IQ tests are also often “validated” by comparing IQ test results to outcomes like job performance and academic performance. Richardson and Norgate (2015) critically review the correlation between IQ and job performance, arguing that it’s inflated by “corrections”, while Sackett et al. (2023) report “a mean observed validity of .16, and a mean corrected for unreliability in the criterion and for range restriction of .23. Using this value drops cognitive ability’s rank among the set of predictors examined from 5th to 12th” for the correlation between “general cognitive ability” and job performance.
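To make concrete what these “corrections” do, here is a minimal Python sketch applying the two standard adjustments: the classical attenuation correction for criterion unreliability and the Thorndike Case II correction for range restriction. The reliability (.70) and restriction ratio (1.2) below are illustrative assumptions of mine, not Sackett et al.’s actual artifact estimates; they simply show how an observed .16 can be pushed to roughly .23.

```python
import math

def correct_unreliability(r_obs, r_yy):
    # Classical attenuation correction: divide the observed correlation by the
    # square root of the criterion's reliability (e.g. of supervisor ratings).
    return r_obs / math.sqrt(r_yy)

def correct_range_restriction(r, u):
    # Thorndike Case II: u is the ratio of the applicant pool's SD to the
    # range-restricted incumbent sample's SD on the predictor.
    return (r * u) / math.sqrt(1 + r**2 * (u**2 - 1))

r_observed = 0.16   # mean observed validity reported by Sackett et al. (2023)
r_yy = 0.70         # assumed criterion reliability (illustrative only)
u = 1.2             # assumed range-restriction ratio (illustrative only)

r_corrected = correct_range_restriction(
    correct_unreliability(r_observed, r_yy), u
)
print(round(r_corrected, 2))
```

The point for this post is that the corrected figure depends entirely on the assumed artifact values, which is part of why such “corrections” are contested.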

But this could lead to circular validation: if a high IQ is used as a predictor of success in school or work, and success in school or work is then used as evidence validating the IQ test, the argument is circular. The test’s validity is being supported by the very outcome it’s supposed to predict.

Achievement tests are designed to see what one has learned or achieved regarding a certain kind of subject matter. They are often validated by correlating test scores with grades or other kinds of academic achievement (which is also circular). If high achievement test scores are used to validate the test and those scores are also used as evidence of academic achievement, then that is circular. Achievement tests are also “validated” on their relationship with IQ tests and grades. Heckman and Kautz (2013) note that “achievement tests are often validated using other standardized achievement tests or other measures of cognitive ability—surely a circular practice” and “Validating one measure of cognitive ability using other measures of cognitive ability is circular.” It should also be noted that the correlation between college grades and job performance 6 or more years after college is only .05 (Armstrong, 2011).

Now what about the claim that IQ tests and achievement tests correlate so they measure the same thing? Richardson (2017) addressed this issue:

For example, IQ tests are so constructed as to predict school performance by testing for specific knowledge or text‐like rules—like those learned in school. But then, a circularity of logic makes the case that a correlation between IQ and school performance proves test validity. From the very way in which the tests are assembled, however, this is inevitable. Such circularity is also reflected in correlations between IQ and adult occupational levels, income, wealth, and so on. As education largely determines the entry level to the job market, correlations between IQ and occupation are, again, at least partly, self‐fulfilling

The circularity inherent in likening IQ and achievement tests has also been noted by Nash (1990). There is no distinction between IQ and achievement tests, since there is no theory or definition of intelligence that would tie answering questions correctly on an IQ test to intelligence itself. Nash writes:

But how, to put first things first, is the term ‘cognitive ability’ defined? If it is a hypothetical ability required to do well at school then an ability so theorised could be measured by an ordinary scholastic attainment test. IQ measures are the best measures of IQ we have because IQ is defined as ‘general cognitive ability’. Actually, as we have seen, IQ theory is compelled to maintain that IQ tests measure ‘cognitive ability’ by fiat, and it therefore follows that it is tautologous to claim that IQ tests are the best measures of IQ that we have. Unless IQ theory can protect the distinction it makes between IQ/ability tests and attainment/ achievement tests its argument is revealed as circular. IQ measures are the best measures of IQ we have because IQ is defined as ‘general cognitive ability’: IQ tests are the only measures of IQ.

The fact of the matter is, IQ “predicts” (is correlated with) school achievement since they are different versions of the same test (Schwartz, 1975; Beaujean et al., 2018). Since the main purpose of IQ tests in the modern day is to “predict” achievement (Kaufman et al., 2012), then if we correctly identify IQ and achievement tests as different versions of the same test, we can rightly state that the “prediction” is itself a form of circular reasoning. What is the distinction between “intelligence” tests and achievement tests? They both have similar items on them, which is why they correlate so highly with each other. This, therefore, makes the comparison of the two in an attempt to “validate” one or the other circular.

I can now argue that the distinction between IQ and achievement tests is nonexistent. If IQ and achievement tests are different versions of the same test, then they share the same domain of assessing knowledge and skills. IQ and achievement tests contain similar informational content, and so both can be considered knowledge tests—tests of class-specific knowledge. Put simply, if IQ and achievement tests are different versions of the same test, then they will have similar item content; they do have similar item content, so we can correctly argue that they are different versions of the same test.
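The claim that shared item content alone can generate a high correlation is easy to demonstrate with a toy simulation (a sketch under assumptions of my own; all the numbers are arbitrary). Each simulated person knows items from one shared pool in proportion to their exposure, a stand-in for class-specific knowledge, and two “different” tests are assembled by sampling from that same pool. No unitary ability is modeled, yet the two tests correlate strongly:

```python
import random

random.seed(0)

N_PEOPLE, N_ITEMS = 500, 200

# Each person "knows" each item in a shared pool with a probability set by
# their exposure (a stand-in for home environment / schooling), not by any
# unitary latent ability.
exposure = [random.random() for _ in range(N_PEOPLE)]
knows = [[random.random() < e for _ in range(N_ITEMS)] for e in exposure]

# Two "different" tests, each drawing 60 items from the SAME pool.
items_a = random.sample(range(N_ITEMS), 60)
items_b = random.sample(range(N_ITEMS), 60)

score_a = [sum(k[i] for i in items_a) for k in knows]
score_b = [sum(k[i] for i in items_b) for k in knows]

def pearson(x, y):
    # Plain Pearson correlation, no external libraries needed.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

print(round(pearson(score_a, score_b), 2))  # a high correlation, by construction
```

The high correlation here is an artifact of sampling from a common pool of exposure-dependent knowledge, which is exactly the circularity being described.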

Moreover, even constructing tests has been criticized as circular:

Given the consistent use of teachers’ opinions as a primary criterion for validity of the Binet and Wechsler tests, it seems odd to claim then that such tests provide “objective alternatives to the subjective judgments of teachers and employers.” If the tests’ primary claim to predictive validity is that their results have strong correlations with academic success, one wonders how an objective test can predict performance in an acknowledged subjective environment? No one seems willing to acknowledge the circular and tortuous reasoning behind the development of tests that rely on the subjective judgments of secondary teachers in order to develop an assessment device that claims independence of those judgments so as to then be able to claim that it can objectively assess a student’s ability to gain the approval of subjective judgments of college professors. (And remember, these tests were used to validate the next generation of tests and those tests validated the following generation and so forth on down to the tests that are being given today.) Anastasi (1985) comes close to admitting that bias is inherent in the tests when he confesses the tests only measure what many anthropologists would call a culturally bound definition of intelligence. (Thorndike and Lohman, 1990)

Conclusion

It seems clear to me that almost the whole field of psychometrics is plagued by the problem of inferring causes from correlations and by circular arguments: attempts to justify the claim that IQ tests measure intelligence by relating IQ to job and academic performance. This idea is deeply confused. Moreover, circular arguments aren’t restricted to IQ and achievement tests; they appear in twin studies too (Joseph, 2014; Joseph et al., 2015). IQ and achievement tests merely show what one knows, not one’s learning potential, since they are general knowledge tests—tests of class-specific knowledge. So even Gottfredson’s “definition” of intelligence fails, since Gottfredson presumes IQ to be a measure of learning ability (never mind the fact that the “definition” is so narrow that I struggle to think of a valid way to operationalize it with culture-bound tests).

The fact that newer versions of tests already in circulation are “validated” against older versions of the same test means that the tests are circularly validated. The original test (say, the S-B) was never itself validated, so test constructors are just “validating” the newer test on the assumption that the older one was valid. Since the newer test is compared to its predecessor, the “validation” rests on the older test, which has similar principles, assumptions, and content to the newer test. Content overlap, too, is a problem, since some questions or tasks on the newer test could be identical to questions or tasks on the older test. The point is, both IQ and achievement tests are merely knowledge tests, not tests of a mythical general cognitive ability.

Challenging the Myth of Objective Testing with an Absolute Scale in the Face of Non-Cognitive Influences

2200 words

The IQ-ists are at it again. This time, PP is claiming that the little tests he created are on an absolute scale—meaning that they have a true 0 point. This has been the Achilles heel of psychometrics for many decades. But abstract concepts don’t have true 0 points, which is why “cognitive measurement” isn’t possible. I will conceptually analyze PP’s arguments for his “spatial intelligence test” and his “verbal intelligence test” and show that they aren’t on absolute scales. I will then use the IQ-ists’ favorite measurement—temperature (one they try to claim is like IQ)—and show the folly in his reasoning in claiming that these tests are on an absolute scale. I will then discuss the real reasons for score disparities, relate them to social class and one’s life experiences, and argue that the score results merely reflect environmental variables.

Fixed reference points and absolute scales

There are no fixed reference points for “IQ” like there are for temperature. IQ-ists have claimed for decades that temperature is like IQ while thermometers are like IQ tests (Nash, 1990). But I have shown the confused thinking of hereditarians on this issue. An absolute scale requires a fixed reference point or a true 0 point which can be objectively established. Physical quantities like distance, weight, and temperature have natural, objective 0 points which can serve as fixed reference points. But nonphysical or abstract concepts lack inherent or universally agreed-upon 0 points which can serve as consistent reference points. So only physical quantities can truly be measured on an absolute scale, since they possess natural 0 points which provide a foundation for measurement.

If “spatial intelligence” is a unitary and objectively measurable cognitive trait, then all individuals’ spatial abilities should consistently align across various tasks. But individuals often exhibit significant variability in their performance across spatial tasks, excelling in one aspect and not others. This variability suggests that “spatial intelligence” isn’t a unitary concept. So the concept of a single, unitary, measurable “spatial intelligence” is questionable.

If the test is on an absolute scale for measuring “spatial intelligence”, then the scores obtained directly reflect the inherent “spatial intelligence” of individuals, without being influenced by factors like puzzle complexity, practice, or other variables. But the scores are influenced by factors like puzzle complexity and practice effects (like having done similar things in the past). Since the scores are influenced by various factors, the test isn’t on an absolute scale.

If a measurement is on an absolute scale, then it should produce consistent results across different contexts and scenarios, reflecting a stable underlying trait. But cognitive abilities can be influenced by various external factors like stress, fatigue, motivation, and test-taking conditions. These external factors can lead to fluctuations in performance which aren’t indicative of the “trait” that is supposedly being measured; they merely reflect the circumstances of the moment in which one took the test. So the concept of an absolute scale for measuring cognitive abilities fails to account for the impact of external variables, which can introduce variability and inaccuracies into the “measurement.” This undermines the claim that this—or any—test is on an absolute scale, given motivation, stress, and other socio-cognitive factors, as Richardson (2002: 287-288) notes:

the basic source of variation in IQ test scores is not entirely (or even mainly) cognitive, and what is cognitive is not general or unitary. It arises from a nexus of sociocognitive-affective factors determining individuals’ relative preparedness for the demands of the IQ test. These factors include (a) the extent to which people of different social classes and cultures have acquired a specific form of intelligence (or forms of knowledge and reasoning); (b) related variation in ‘academic orientation’ and ‘self-efficacy beliefs’; and (c) related variation in test anxiety, self-confidence, and so on, which affect performance in testing situations irrespective of actual ability.

Such factors, which influence test scores, merely show what one was exposed to in one’s life, under my DEC framework. Socio-cognitive factors related to social class could introduce bias, since people from different backgrounds are exposed to different information, have unequal access to information and test prep, and differ in familiarity with item content. We can thus look at these scores as mere social class surrogates.

If test scores are influenced by stress, anxiety, fatigue, motivation, familiarity, non-cognitive factors, and socio-cognitive factors due to social class, then the concept of an absolute scale for measuring cognitive abilities may not hold true. I have established that test scores can indeed be influenced by myriad external factors. So given that these factors affect test scores and undermine the assumption of an absolute scale, the concept of measuring cognitive ability on such a scale is challenged (don’t forget the irreducibility arguments). Further, the argument that “spatial intelligence” is not measurable on an absolute scale due to its nonphysical nature aligns with this perspective, which further supports the idea that the concept of an absolute scale isn’t applicable in these contexts. Thus, the implications for testing are profound, and score differences are due to social class and one’s life experiences, not any kind of “genotypic IQ” (which is an oxymoron).

Regarding vocabulary, this is influenced by the home environment—the types of words one is exposed to growing up (and can therefore also be integrated into the DEC). Kids from lower SES families hear fewer words at home and in their neighborhoods (low SES children hear 30 million fewer words than higher SES children) (Brito, 2017). We know that word usage is the strongest determinant of child vocabulary growth, and that less educated parents use fewer words with less complex syntax (Perkins, Finegood, and Swain, 2013). The quality of language addressed to children also matters (Golinkoff et al., 2023). We can then liken this to the Vygotskian More Knowledgeable Other (MKO). An MKO has knowledge that their dependent doesn’t. But if the MKO in this instance is less educated or low income, then they will use fewer words, and this feature will then mark the home. Such tests merely show what one was exposed to in their lives, not any underlying unitary “thing” like the IQ-ists claim.

Increasing both the amount and diversity of language within the home can positively influence language development, regardless of SES. Repeated exposure to words and phrases increases the child’s opportunity to learn and remember (McGregor, Sheng, & Ball, 2007). The complexity of grammar, the responsiveness of language to the child, and the use of questions all aid language development (Bornstein, Tamis-LeMonda, Hahn, & Haynes, 2008; Huttenlocher, Waterfall, Vasilyeva, Vevea, & Hedges, 2010). Besides frequency of language input, how caregivers communicate with children also affects children’s language skills. Children from higher SES families experience more gestures by their care-givers during parent–child interactions; these SES differences predict vocabulary differences at 54 months of age (Rowe & Goldin-Meadow, 2009). Parent–child interactions provide a context for language exposure and mold the child’s language development. Specific characteristics of the caregiver, including affect, responsiveness, and sensitivity predict children’s early and later language skills (Murray & Hornbaker, 1997; Tamis-LeMonda, Bornstein, Baumwell, & Melstein Damast, 1996). Maternal sensitivity partially explains links between SES and both children’s receptive and expressive language skills at age 3 years (Raviv, Kessenich, & Morrison, 2004). These differences also appear across culture (Mistry, Biesanz, Chien, Howes, & Benner, 2008). Maternal supportiveness partially explained the link between SES and language outcomes at 3 years of age, for both immigrant and native families in the United States. (Brito, 2017: 3-4)

The issue of temperature

This can be illustrated using the IQ-ists’ favorite (real) measurement—temperature. The Kelvin scale avoids the issues in the first argument. In the Kelvin scale, temperature is measured in relation to absolute 0 (the point where molecular motion theoretically stops). It doesn’t involve factors like variability in measurement techniques, practice effects, or individual differences. The Kelvin scale has a consistent reference point—absolute 0—which provides a consistent and fixed baseline for temperature measurement. The values in the Kelvin scale are directly tied to a true 0 point.

There are no external influences on the measurement of temperature (beyond those which influence the mercury in the thermometer to move up or down), like the type of thermometer used or one’s familiarity with temperature measurement. External factors like these aren’t relevant to the Kelvin scale, unlike puzzle complexity and practice effects on the spatial abilities test.

Finally, temperature values on the Kelvin scale are universally applicable, which means that a specific temperature corresponds to the same level of molecular motion regardless of who performs the measurement, or what measurement instrument is used. So the Kelvin temperature scale doesn’t have the same issues as PP’s little “spatial intelligence” test. It has a clear and consistent measurement framework, where values directly represent the underlying physical phenomenon of molecular motion without being influenced by external factors or individual differences. When you think about actual, established measurements like temperature and then try to relate them to IQ, then the folly of “mental measurement” reveals itself.
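A quick worked example of what the true zero buys you (my own illustration, not an argument PP makes): ratio claims are only meaningful on an absolute scale.

```python
def celsius_to_kelvin(c):
    # The Kelvin scale fixes 0 at the absence of thermal energy; Celsius
    # places its zero at a conventional point (the freezing point of water).
    return c + 273.15

# "20 degrees is twice as hot as 10 degrees" looks true on Celsius...
print(20 / 10)  # 2.0
# ...but the ratio of the actual thermal states is nowhere near 2:1.
print(round(celsius_to_kelvin(20) / celsius_to_kelvin(10), 3))  # 1.035
```

An IQ of 140 versus 70 permits no analogous ratio statement, because there is no true zero to anchor the scale.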

Now, having said all of this, I can draw a parallel between the argument against an absolute scale for cognitive abilities and the concept of temperature.

Temperature measurements, while influenced by external factors (since this is what makes the mercury travel up or down in the thermometer) like atmospheric pressure and humidity, still have an absolute 0 point in the Kelvin scale which represents a complete absence of thermal energy. Unlike “spatial intelligence”, temperature has a fixed reference point which serves as an objective 0 point, which allows it to be measured on an absolute scale. The external factors influencing temperature measurement are fundamentally different from the factors which influence one’s performance on a test, since they don’t introduce subjective variations in the same manner. So while temperature is influenced by external factors, its measurement is fundamentally different from that of nonphysical concepts due to the presence of an objective 0 point and the distinct nature of the influencing factors. This is put wonderfully by Nash (1990: 131):

First, the idea that the temperature scale is an interval scale is a myth and, second, a scale zero can be established for an intelligence scale by the same method of extrapolation used in defining absolute zero temperature. In this manner Eysenck (p. 16) concludes, ‘if the measurement of temperature is scientific (and who would doubt that it is?) then so is that of intelligence.’ It should hardly be necessary to point out that all of this is special pleading of the most unabashed sort. In order to measure temperature three requirements are necessary: (i) a scale, (ii) some thermometric property of an object and, (iii) fixed points of reference. Zero temperature is defined theoretically and successive interval points are fixed by the physical properties of material objects. As Byerly (p. 379) notes, that ‘the length of a column of mercury is a thermometric property presupposes a lawful relationship between the order of length and the temperature order under certain conditions.’ It is precisely this lawful relationship which does not exist between the normative IQ scale and any property of intelligence. The most obvious problem with the theory of IQ measurement is that although a scale of items held to test ‘intelligence’ can be constructed there are no fixed points of reference. If the ice point of water at one atmosphere fixes 273.16 K, what fixes 140 points of IQ? Fellows of the Royal Society? Ordinal scales are perfectly adequate for certain measurements, Moh’s scale of scratch hardness consists of ten fixed points, from talc to diamond, and is good enough for certain practical purposes. IQ scales (like attainment test scales) are ordinal scales, but this is not really to the point, for whatever the nature of the scale it could not provide evidence for the property IQ or, therefore, that IQ has been measured.

Conclusion

It’s quite obvious that IQ-ists have no leg to stand on, which is why they need to claim that their tests are on absolute scales even when it leads to an absurd conclusion. The fact that test performance is influenced by myriad non-cognitive traits due to one’s social class (Richardson, 2002) shows that these—and all tests—take place in certain cultural contexts, meaning that all tests are culture-bound, as argued by Cole (2004) with his West African Binet argument.

The fact of the matter is, “mental measurement” is impossible, and all these tests do is show the proximity to a certain kind of class-specific knowledge, not any kind of general cognitive “strength”. Taking a Vygotskian perspective on this issue will allow us to see how and why people score differently from each other, and it comes down to their home environment and what they learn in their lives.

Nevertheless, the claims from IQ-ists that they have a specified measured object, object of measurement, and measurement unit for IQ, or that their tests have a true 0 point, are absurd, since these things are properties of physical objects, not non-physical, mental ones. The Vygotskian perspective allows us to understand score variance between individuals and groups, as I have argued before. We don’t need to claim that there is an absolute scale for cognitive assessment, nor do we need to claim that mental measurement is possible, for this to be a truism. So, yet again, PP’s argument fails.

Ashkenazi Jews Are White

2700 words

Introduction

Recently, I have been seeing people say that Ashkenazi Jews (AJs) are not white. Some may say that Jews “pretend to be white” so they can accomplish their “group goals” (like pitting whites and blacks against each other in an attempt to sow racial strife, out of an ethnic nepotism rooted in their genetic similarity). I have also seen people deriding Jews for saying “I’m white” and then finding an instance of them saying “I’m Jewish” (see here for an example), as if that’s a contradiction, but it’s not. It’s the same thing as saying “I’m Italian… I’m white” or “I’m German… I’m white.” But since pluralism about race is true, there could be some contexts and places in which Jews aren’t white, due to the social construction of racial identities. However, in the American context it is quite clear: in both historical and contemporary thought in America, AJs are white.

But a claim like this, then, raises an important question: if AJs are not white, then what race are they? This is a question I will answer in this article, and I will of course show that AJs are indeed white on an American conception of race. Using Quayshawn Spencer’s racial identity argument, I will assume that Ashkenazi Jews aren’t white, and then I will argue that this leads to a contradiction, so Jews must be white. And while there was discussion about the racial status of Jews after they began emigrating to America through Ellis Island, I will show that Jews arrived in America as whites.

White or not?

The question of whether or not AJs are white is a vexing one. Of course, AJs are a religious group. However, this doesn’t mean that they have their own specific racial category. It’s like if one says they are German, or Italian, or British. Those are mere ethnicities which make up the white racial group. One study found that AJs have “White privilege vis-à-vis persons of color. This privilege, however, is limited to Jews who can ‘pass’ as White gentiles” (Blumenfeld, 2009). Jews who can “pass as white” are obviously white, and there is no other race for them to be.

This is due to the social nature of race. Since race is a social construct, then the way people’s racial background is perceived in America is based on how they look (their phenotype). An Ashkenazi Jew saying “I’m Jewish. I’m white” isn’t a contradiction, since AJs aren’t a race. It’s just like saying “I’m Italian. I’m white” or “I’m German. I’m white.” It’s quite obviously an ethnic group which is a part of the white race. Jews are white and whites are a socialrace.

This discussion is similar to the one where it is claimed that “Hispanic/Latino/Spanish” people aren’t white. But that, too, is a ridiculous claim. In cluster studies, HLSs don’t have their own cluster; they cluster near the group from which their majority ancestry derives (Risch et al., 2002). Saying that AJs aren’t white is similar to this.
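This clustering point can be illustrated with a deliberately oversimplified, one-dimensional nearest-centroid sketch (nothing like an actual STRUCTURE run, which models admixture proportions across many loci; all values here are made up): an admixed group does not get its own cluster, it simply falls nearest the centroid of its majority ancestral source.

```python
# Toy illustration: assign an individual to the nearest reference centroid.
# The centroid values are made-up one-dimensional summaries, not real data.
centroids = {"cluster_A": 0.2, "cluster_B": 0.8}

def assign(x):
    # Nearest-centroid assignment: pick the cluster minimizing the distance.
    return min(centroids, key=lambda name: abs(centroids[name] - x))

# An individual with 80% "B" ancestry and 20% "A" ancestry sits at a
# weighted average of the two sources...
admixed = 0.8 * centroids["cluster_B"] + 0.2 * centroids["cluster_A"]
print(assign(admixed))  # assigned to cluster_B, the majority-ancestry cluster
```

The admixed individual is absorbed into an existing cluster rather than defining a new one, which is the pattern the cluster studies report for such groups.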

But during WWII, Jews were persecuted in Nazi Germany and eventually some 6 million Jews were killed. Jews, in this instance, were seen as a socialrace in Germany, and so they were themselves racialized. It has been shown that Germans who grew up under the Nazi regime are much more anti-Semitic than Germans who were born before or after it, and Nazi schooling contributed to this the most (Voigtlander and Voth, 2015). This shows how malleable one’s beliefs—and those of a whole society—are, along with how effective propaganda is. The Nuremberg laws of 1935 codified anti-Jewish sentiment in the Nazi racial state, and so the Nazis had to have a way to identify Jews. They settled on the religious affiliation of one’s 4 grandparents as a way to identify Jews. But when one’s origins were in doubt, the Reich Kinship Office was deployed in order to ascertain one’s genealogy. And in the event this could not be done, one’s physical attributes would be assessed and compared to one’s parents’ across 120 physical measures (Rupnow, 2020: 373-374).

This can now be centered on Whoopi Goldberg’s divisive comment from February 2022, where she stated that the attempted genocide of Jews in Nazi Germany “wasn’t about race”, but was about “man’s inhumanity to man; [it involved] two groups of white people.” Of course Goldberg was operating under an American conception of race, so I can see why she would say that. However, at the time in Nazi Germany, Jews were Racialized Others, and so they were a socialrace in Germany.

Per Pew, most Jews in America identify as white:

92% of U.S. Jews describe themselves as White and non-Hispanic, while 8% say they belong to another racial or ethnic group. This includes 1% who identify as Black and non-Hispanic; 4% who identify as Hispanic; and 3% who identify with another race or ethnicity – such as Asian, American Indian or Hawaiian/Pacific Islander – or with more than one race.

A supermajority (94%) of American Jews were (and identified as) white and non-“Hispanic” in Pew’s 2013 research, slightly higher than the 92% figure in the 2020 research (Lugo et al., 2013):

From Lugo et al, 2013

AJs were viewed as white even as early as 1790, when the Naturalization Act was put into law, which stated that only free white persons were allowed to naturalize in America (Tanner, 2021). Even in 1965, Srole stated that “Jews are white.” But the perception that all Jews are white came after WWII (Levine-Rasky, 2020), and this claim is of course false. All Jews certainly aren’t white, but some Jews are white. Thus, even historically in America, AJs were seen as white. Yang and Koshy (2016) write:

We found no evidence from U.S. censuses, naturalization legislation, and court cases that the racial categorization of some non-Anglo-Saxon European immigrant groups such as the Irish, Italians, and Jews changed to white. They were legally white and always white, and there was no need for them to switch to white.

White ethnics could be considered ethnically inferior and discriminated against because of their ethnic distinctions, but in terms of race or color, they were all white and had access to resources not available to nonwhites.

It was precisely because of the changing meanings of race that “the Irish race,” “the German race,” “the Dutch race,” “the Jewish race,” “the Italian race,” and so on changed their races and became white. In today’s terminology, it should be read that these European groups changed their ethnicities to become part of whites, or more precisely they were racialized to become white.

Our findings help resolve the controversy over whether certain U.S. non-Anglo-Saxon European immigrant groups became white in historical America. Our analysis suggests that “becoming white” carries different meanings: change in racial classification, and change in majority/minority status. In terms of the former, “becoming white” for non-Anglo-Saxon European immigrant groups is bogus. Hence, the argument of Eric Arnesen (2001), Adolph Reed (2001), Barbara Fields (2001), and Thomas Guglielmo (2003) that the Irish, Italians, and Jews were white on arrival in America is vindicated.

But one article in The Forward argued that “Ashkenazi Jews are not functionally white.” The author (Danzig) attempts to make an analogy between the NAACP leader Walter White, who was “white-passing” (both of his parents were born into slavery), and Jews who are “white-passing” “due to years of colonialism, expulsion and exile in European lands.” The author then claims that as long as Jews maintain their unique Jewish identity, they are therefore a racial group. This article is a response to another which claims that Ashkenazi Jews are “functionally white” (Burton). Danzig discusses Burton’s claim that a “white-passing ‘Latinx’” person could be deported if their immigration status is discovered. This of course implies that “Hispanics” are themselves a racial group (they aren’t). Danzig discusses the discrimination that his family went through in the 1920s, stating that they couldn’t do certain things because they were Jewish. The argument in Danzig’s article, I think, is confused. It’s confused because the fact that Jews were discriminated against in the past doesn’t mean they weren’t white. In fact, Jews, Italians, and the Irish were white on arrival in the United States (Steward, 1964; Yang and Koshy, 2016). But this doesn’t mean that they didn’t face discrimination. That is, Jews, Italians, and the Irish didn’t change to white; they were always legally white in America. (But see Gardaphe, 2002; Bisesi, 2017; Baddorf, 2020; and Rubin, 2021. Italians didn’t become white as those authors claim; they were white upon arrival.) So Danzig’s claim fails—Jews are functionally white because they are white and they arrived in America as white. Claims to the contrary, that AJs (and Italians and the Irish) became white, are clearly false.

So despite claims that Jews became white after WWII, Jews are in fact white in America (Pearson and Geronimus, 2011). Of course, in the early 1900s, as immigrants were arriving at Ellis Island, the question of whether Jews (“Hebrews” in this instance) were white, or even whether they were their own racial group, received a decent amount of discussion at the time (Goldstein, 2005; Pearlman, 2018). The fact that there was ethnic strife between new-wave immigrants at Ellis Island doesn’t entail that they were racial groups or that those European immigrants weren’t white. It’s quite clear that Jews, like Italians and the Irish, were considered white upon arrival.

Now that I have established that AJs are indeed white (and arrived in America as white) despite the confused protestations of some authors, I will formalize the argument that AJs are white, since if they aren’t white, then they would need to fit into one of the other four racial categories.

Many readers may know that I push Quayshawn Spencer’s OMB race theory, and that I am a pluralist about race. In the volume What is Race?: Four Philosophical Views, philosopher of race Quayshawn Spencer (2019: 98) writes:

After all, in OMB race talk, White is not a narrow group limited to Europeans, European Americans, and the like. Rather, White is a broad group that includes Arabs, Persians, Jews, and other ethnic groups originating from the Middle East and North Africa.

Although there is some research on the racial identity of MENA (Middle Eastern/North African) people and how they may not perceive themselves as white or be perceived as white (Maghbouleh, Schachter, and Flores, 2022), the OMB is quite clear that the social group designated “white” doesn’t refer only to Europeans (Spencer, 2019).

So, if AJs aren’t white, then they must be part of another of the four OMB races (black, Native American, East Asian, or Pacific Islander). Part of this racial scheme is K=5: when K is set to 5 in STRUCTURE, five clusters are produced, and these map onto the OMB races. But among those five clusters, there is no Jewish cluster. Note that I am not denying that there is some kind of genetic structure to AJs; I’m just denying that this would entail that they are a racial group. If they were, then they would appear in these runs. AJs are merely an ethno-religious group within the white socialrace. So, for the sake of argument, let’s assume the contrary is true: Ashkenazi Jews are not white.
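To make the K=5 idea concrete: STRUCTURE is a Bayesian model-based program, but the general procedure it illustrates, partitioning individuals into K clusters from multilocus genotype data, can be sketched with a toy clustering routine. The sketch below is my own minimal illustration in plain NumPy (not STRUCTURE itself), and all of the genotype data are synthetic; the five simulated groups are hypothetical stand-ins for the five continental clusters that map onto the OMB races.

```python
import numpy as np

rng = np.random.default_rng(0)
K, n_per_group, n_loci = 5, 40, 30

# Simulate K groups with distinct allele frequencies at each locus;
# genotypes are 0/1/2 counts of the minor allele (synthetic data).
centers = rng.uniform(0.1, 0.9, size=(K, n_loci))
genotypes = np.vstack([
    rng.binomial(2, centers[k], size=(n_per_group, n_loci)) for k in range(K)
])

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's algorithm: assign each row to its nearest
    centroid, then recompute centroids, repeated a fixed number of times."""
    r = np.random.default_rng(seed)
    cents = X[r.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(iters):
        dists = ((X[:, None, :] - cents[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                cents[j] = X[labels == j].mean(axis=0)
    return labels

# With K set to 5, each individual is assigned to one of (at most) 5 clusters.
labels = kmeans(genotypes, K)
```

The point of the sketch is only structural: the number of clusters K is chosen by the analyst, and the argument in the text turns on the observation that when K=5 is run on worldwide samples, no separate Ashkenazi cluster appears among the five.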

When we consider the complexities of racial classification, it becomes apparent that societies tend to sort individuals into distinct categories based on physical traits, cultural background, and ancestry. If AJs aren’t white in an American context, then they would have to fall into one of the four other racial groups in a Spencerian OMB race theory.

But there is one important aspect to consider here: the phenotype of Ashkenazi Jews. Many Ashkenazi Jews exhibit physical traits typically associated with “white” populations. This simple observation shows that AJs don’t fit into the established categories of East Asian, Pacific Islander, black, or Native American. AJs’ typical phenotype aligns more closely with that of white populations.

So, examining the racial landscape in America, we can see how social perceptions and classifications can significantly impact how individuals are positioned in a broader framework. AJs have historically been classified and perceived as white in the American racial context, as can be seen above. So within American racetalk, AJs are predominantly classified in the white racial grouping.

So, taking all of this together, I can rightly state that Jews are white. We assumed at the outset that if they weren’t white they would belong to some other racial group; but they don’t look like any other racial group, and they look like and are treated as white (both in contemporary thought and historically). So AJs are most definitely seen as white in American racetalk. Here’s the formalized argument:

P1: If AJs aren’t white, then they must belong to one of the other 4 racial categories (black, Native American, East Asian or Pacific Islander).
P2: AJs do not belong to any of the four racial categories mentioned (based on their phenotype typical of white people).
P3: In the American racial context, AJs are predominantly classified and perceived as white.
Conclusion: Assume AJs aren’t white. Then, from P1, they must belong to one of the other 4 racial groups. But from P2 (supported by P3), AJs do not belong to any of those categories. The assumption therefore leads to a contradiction, so it must be rejected: AJs are white.
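The reductio structure of the argument can be sketched in propositional logic. The letter abbreviations here are mine, not part of the OMB scheme: W for “AJs are white,” and B, N, E, P for membership in the black, Native American, East Asian, and Pacific Islander categories respectively.

```latex
% W = "AJs are white"; B, N, E, P = the other four OMB categories
\begin{align*}
&\text{P1: } \neg W \rightarrow (B \lor N \lor E \lor P)\\
&\text{P2: } \neg B \land \neg N \land \neg E \land \neg P\\
&\text{Assume } \neg W. \text{ By P1, } B \lor N \lor E \lor P.\\
&\text{But P2 rules out every disjunct; contradiction.}\\
&\text{Therefore } W. \quad (\textit{reductio ad absurdum})
\end{align*}
```

On this rendering, P3 does logical work by supporting P2: the fact that AJs are perceived and classified as white is evidence that they belong to none of the other four categories.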

So we must reject the assumption that AJs aren’t white, and the logical conclusion is that AJs are considered white in the American context, based on their phenotype (and the fact that they arrived in America as white). Jews didn’t “become white,” as some claim (e.g., Brodkin, 2004). American Jews even benefit from white privilege (Schraub, 2019). MacDonald-Dennis’s (2005, 2006) qualitative research (though small and not generalizable) shows that some Ashkenazi Jews think of themselves as white. AJs are legally and politically white.

Not all Jews are white, but some (indeed most) Jews are white (in America).

Conclusion

Thus, AJs are white. Although many authors have claimed that Jews became white after arrival in America (or even after WWII), this claim is false; it is false even as far back as 1790. If we accept the assumption that AJs aren’t white, then it leads to a contradiction, since they would have to be one of the other four racial groups, but since they look white, they cannot be a part of those racial groups.

There are white Jews and there are non-white Jews. But when it comes to AJs, the question “When did they become white?” is nonsense, since they were always perceived and treated as white in America from its founding. Some AJs are white, some aren’t; some Mizrahi Jews are white, some aren’t. However, in the context of this discussion, it is quite clear that AJs are white, and there is no other race for them to be, based on the OMB race theory. In fact, in the minds of most Americans, Jews aren’t a racialized group, though they are perceived as outsiders (Levin, Filindra, and Kopstein, 2022). And there were instances in history where Jews were racialized, and instances where they weren’t (Hochman, 2017). But what I have shown here is that, in the American context ever since its inception, AJs have most definitely been white. Saying that AJs are white is like saying that Italians or Germans are white; there is no contradiction. Jews are treated as white in the American social context, they look white, and they have been considered white since they arrived in America in the early 1900s (like the Irish and Italians).

The evidence and reasoning presented in this article point to one conclusion: that AJs are indeed white. This of course doesn’t mean that all AJs are white; it merely means that some (and I would say most) are white. AJs have been historically, legally, and politically white. Mere claims that they aren’t white are irrelevant.