NotPoliticallyCorrect


Category Archives: Refutations

The Developmental Systems Argument Against Hereditarianism

2000 words

Genetic determinism can be described as the attribution of the formation of traits to genes, where genes are ascribed more causal power than scientific consensus suggests (Gericke et al., 2017).

Defining hereditarianism and DST

Hereditarianism has many entailments, but a main one is that genes are necessary and sufficient for phenotypes. Hereditarianism can be defined succinctly as the belief that human traits, behaviors, and capabilities are predominantly or solely caused by genetic inheritance, with the environment being negligible. This belief implies that genes are necessary (without the specific genes, the trait wouldn’t appear) and sufficient (the genes in question can alone account for the appearance of the trait without significant environmental influence). So if genes are sufficient for phenotypes, then we could predict one’s phenotype from one’s genotype. (The view is also reductionist and deterministic.) That a form of genetic determinism is taught in schools (Jamieson and Radick, 2017) is one reason why this hereditarian view must be fought.

But if development is understood as the dynamic interaction between genes, environment, and developmental products, where no single factor dominates in the development of an organism (the DST view), then any view that assumes the primacy of one developmental resource (as hereditarianism does with genes) is logically incompatible with it and thus incoherent. Since certain things are true about organismal development, hereditarianism cannot possibly be true. I have made a similar argument before, but I have not formalized it in this way. We know that development is context-dependent, and we know that hereditarianism assumes the context-independence of genes, so we can rightly conclude that hereditarianism is false. Furthermore, since hereditarianism assumes no or negligible developmental plasticity, that is another reason to reject it. Here’s the argument:

(1) Hereditarianism (H) implies genetic determinism (GD).
(2) GD implies negligible developmental plasticity (DP).
(3) But DP isn’t negligible.
(C) Therefore H is false.

H = hereditarianism
GD = genetic determinism
DP = developmental plasticity (environmental influence)

(1) H → GD
(2) GD → ¬DP
(3) DP
(C) ∴ ¬H
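The validity of this argument can be checked mechanically. Below is a minimal Python sketch (the variable and function names are mine, not from any source) that brute-forces the truth table for the three premises H → GD, GD → ¬DP, and DP, and confirms that ¬H holds in every row where all premises are true, i.e., that no counterexample exists:

```python
from itertools import product

def implies(p, q):
    """Material conditional: p -> q is false only when p is true and q is false."""
    return (not p) or q

# Enumerate every truth-value assignment to H, GD, and DP.
valid = True
for H, GD, DP in product([True, False], repeat=3):
    premises = (
        implies(H, GD),       # (1) H -> GD
        implies(GD, not DP),  # (2) GD -> not-DP
        DP,                   # (3) DP
    )
    # A counterexample would make all premises true while H is also true.
    if all(premises) and H:
        valid = False

print(valid)  # True: no counterexample, so not-H follows from the premises
```

Since DP must be true by (3), GD must be false by (2), and so H must be false by (1); the exhaustive check simply confirms this by inspecting all eight rows.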

Under the assumption that hereditarianism is a species of genetic determinism, and DST is a context-dependent account of development: If DST is accurate, then hereditarianism is false. We know that traits aren’t genetically determined, so DST is accurate. Therefore, hereditarianism must be false.

Hereditarians have tried paying lip service to the interactionist/developmental systems view (as I showed here and here), but by definition, hereditarianism discounts interactionism since even their main tool (the heritability estimate) assumes no interaction between genes and environment (whereas the interaction between genes and environment is inherent in the DST philosophy).

We know that genes are not sole determinants of phenotypes, but they are one of many interacting developmental resources, which refutes the often unstated assumption that genes are blueprints or recipes for development. Hereditarianism doesn’t and can’t account for the fact that the environment can enable, contain, and alter genetic expression. Therefore, a holistic—and not reductionist—view of development is one we should accept. The hereditarian view of development is clearly untenable.

Below is an argument I’ve constructed that relies on the argument in Noble (2011) for genes as passive causes:

(1) If genes do not initiate biological processes independently, then they are passive information carriers.
(2) Genes do not initiate their own transcription or replication; they react to triggering signals within a biological system.
(3) Therefore, genes are passive information carriers.
(4) If something is a passive information carrier, then it cannot be considered an active cause of biological processes.
(5) Therefore, genes cannot be considered an active cause of biological processes.

Noble’s biological relativity argument

Hereditarianism assumes a privileged level of causation (genes are the privileged resource of development). But we know—a priori—that there is no privileged level of causation in biological systems (Noble’s 2012 biological relativity argument). So hereditarianism must be false. Here’s the argument:


We know that biological systems are characterized by multiple interacting levels (molecular, cellular, organismal, environmental), where the levels influence one another in a dynamic way. So no single level has causal priority over another. In biological systems, causation is understood as the process by which one event or state leads to another. So for there to be a privileged level of causation in biological systems, one level would need to be inherently more deterministic or controlling than the others, independent of the context in which the developing organism is situated. But each level of biological organization (from genes to the organism’s ecosystem) is interdependent, and changes at one level can only be understood in relation to changes at other levels (genetic expression is influenced by cellular conditions, which are in turn affected by organismal health and environmental factors).

So no level of biological organization operates independently or can dictate outcomes without influence from, or interaction with, other levels. Even what may seem like a so-called “genetic cause” requires the cell to read the context-dependent information in the gene. So there is a feedback loop: influences are not unidirectional but reciprocal. While genes can influence protein synthesis, the need for proteins can regulate gene expression through feedback mechanisms. Therefore, a priori, there is no privileged level of causation in biological systems, since each level is part of an integrated system where causation is distributed and context-dependent, not localized to any one level of biological organization.


See these references for more on how genes are necessary, passive causes but not sufficient causes. They reflect how genes are viewed today in systems biology, rather than through a reductionist lens: Oyama, 2000; Moore, 2001; Shapiro, 2013; Kampourakis, 2017; Richardson, 2017, 2020, 2021, 2022; Baverstock, 2021; McKenna, Gawne, and Nijhout, 2022. But here is the gist:

“Active causation” is when X causes or initiates an event, whereas “passive causation” is when X is caused or forced to do something by another event or situation. Both Baverstock and Noble argue that genes (DNA sequences) are passive causes, meaning they don’t initiate the causation of traits. Baverstock also argued that the phenotype plays an active role in morphogenesis and evolution, causing changes in processes (which is similar to West-Eberhard’s and Lerner’s views conceptualizing genes as followers, not leaders, in the evolutionary process).

Noble also argues that genes aren’t active but passive causes, since they merely react to signals from the developmental system and the environment (which, for the purposes of this argument, is conceptualized differently in different contexts: the uterine environment, the environments created through the interactions of gene products, gene-gene interactions, and gene-environment interactions, all ultimately caused by the physiological system). Drawing on Shapiro’s “read-write genome” argument, he then concludes that the only kind of causation that can be attributed to genes is passive, in the same way that computer programs read and use databases.

On Oyama’s concept of “information”, information is not a property of biological things but a relational, contextual concept: it is constructed by the history of the developmental system and emerges through the irreducible interactions that are ultimately caused by the self-organizing developmental system; she calls this “constructive interactionism.”

Over the 40 years since the publication of Oyama’s developmental systems theory, and the subsequent strengthening of her view, we’ve come to learn that genes (and genotypes) aren’t causes in and of themselves; outside of the living cell, genes are mere inert molecules. So if the cell activates a gene, then the gene transcribes information (remembering how “information” is conceptualized in Oyama’s DST; this premise establishes a causal relationship between the cell and a gene, with the cell as the active cause and the gene as the passive one). If the gene transcribes its information (the ontogeny of which is relational and contextual, emerging through the irreducible interactions of the developmental resources), then a protein is produced. So if the cell activates a gene, then a protein is produced (the cell being the active cause, the gene and the protein being passive causes).

“But genes load the gun and environment pulls the trigger”

This is a phrase I’ve heard quite a bit, and I think it’s wrong for the reasons I’ve outlined above. It’s still deterministic, and it treats genes as active causes. The “genes load the gun” part of the phrase assumes that genes have an active role in initiating biological potentials. But we know that genes are acted on by cellular and environmental context, which dictates genetic expression. The phrase also assumes linear causation: a one-way, cause-and-effect sequence.

The claim that the environment merely “pulls the trigger” assumes that there is already inherent “information” in the genes, which is why it’s a genetic determinist claim. It also reduces the environment to an activator rather than a co-creator of biological outcomes. Using Oyama’s concept of information as something constructed through developmental interactions emphasizes that the environment doesn’t merely activate what’s already there; it participates in the very construction of biological information and the ontogeny of traits. The phrase also presumes that genes store all relevant information, but we know that biological information is dynamically shaped, drawing on (but not limited to) genes as passive causes. Basically, biological information is an emergent property of biological systems, not a preexisting genetic code.

Furthermore, since we know that the phenotype plays an active role in morphogenesis and evolution, we know that the outcome (the phenotype) isn’t just a result of genetic loading followed by environmental activation. The phenotype actively contributes to shaping genetic expression and evolutionary trajectories. So if genes are activated by the cell and the broader physiological system, then the idea of genes loading anything independently falls apart. Genes are read or used by the physiological system to carry out certain processes in a context-dependent manner, not setting the stage but responding to it.

Conclusion

The role of genes in biological systems and causation as discussed by Noble, Richardson, Oyama, Moore, West-Eberhard, Baverstock, Shapiro and others directly refutes the hereditarian/genetic determinist view of what genes do in biological systems. Genes aren’t the primary architects of biological outcomes; instead genes are seen as passive components within a dynamic, interactive system.

By definition, hereditarianism assumes that genes are necessary and sufficient causes of phenotypes (genes are the primary drivers of trait ontogeny and development). By definition, DST holds that development is an emergent property of a system in which genes are just one component among many influencing factors. If development were primarily determined by genetics, that would contradict the foundational tenet of DST: that development results from interdependent influences. So since hereditarianism and DST are mutually exclusive in their core assertions about the role of genetics in development, and since we know a priori that there is no privileged level of causation in biological systems, hereditarianism cannot be true.

So quite clearly hereditarianism fails on conceptual, logical, and empirical grounds. The work done over the past 50 years in biology, both conceptual and empirical, shows that the old way of viewing genes and their role in organismal development just doesn’t work anymore. Biological outcomes are not merely due to genetic blueprints; they are dynamically shaped, constructed through the irreducible interactions of multiple levels and resources, which renders hereditarianism simplistic and outdated in the face of modern biological understanding. Noble’s biological relativity argument is a powerful argument with direct implications for hereditarianism, and its strengthening by Baverstock and by McKenna, Gawne, and Nijhout shows the emptiness of any assumption that genes are active causes of biological processes. Thus, we should ridicule hereditarian views of the gene and what it does in development. It’s simply an untenable view that one cannot logically defend in the face of the conceptual and empirical work on biological systems.

Therefore, to be a hereditarian in 2025 is to show that one does not understand current biological thinking.

Racism Disguised as Science: Why the HBD Movement is Racist

2600 words

Introduction

Over the last 10 years or so, claims from the human biodiversity (HBD) movement have been gaining more and more traction. Proponents of HBD may say something like “we’re not racists, we’re ‘Noticers'” (to use Steve Sailer’s terminology; more on him below). The thing is, the HBD movement is a racist movement for the following reasons: it promotes and justifies racial hierarchies and inequities, it is propped up by pseudoscience, and it has historical connections to the eugenics movement, which sought to use pseudoscientific theories of racial superiority to justify oppression and discrimination.

Ever since 1969, when Arthur Jensen and others began trying to intellectualize such a position, the discussion around racism has moved beyond overt examples of racism to systemic inequities and unconscious biases which perpetuate racial hierarchies. But despite a veneer of scientific objectivity, the underlying motivation appears to be upholding some groups as “better” and others as “worse.” This is like when hereditarians like Rushton argued in the 90s that they couldn’t be racist since they say Asians (a selected population) are better than whites, who are better than blacks, on trait X. Yet we know that views on Asians have changed over the years, as seen, for example, in the former use of the term “Mongoloid idiot.” Nonetheless, it’s obvious that the HBD movement purports a racial hierarchy. Knowing this, I will show how HBD is a racist movement.

Why HBD is racist

I have previously provided 6 definitions of racism. In that article I discussed how racism “gets into the body” and causes negative health outcomes for black women. I have since written more about why racism and stereotypes are bad since they cause the black-white crime gap through the perpetuation of self-fulfilling prophecies and they also cause psychological and physiological harm.

One of the definitions of “racism” I gave came from John Lovchik in his book Racism: Reality Built on a Myth (2018: 12), where he wrote that “racism is a system of ranking human beings for the purpose of gaining and justifying an unequal distribution of political and economic power.” Using this definition, it is clear that the HBD movement is a racist movement, since it attempts this ranking of human beings to justify and gain different kinds of power. Lovchik’s definition encompasses both systemic racism and overt acts of discrimination.

HBD proponents believe we can delineate races not only on physical appearance but also on genetic differences. This is inherent in their system of ranking. But I think the same. Spencer’s (2014, 2021) OMB race theory (which I hold to) states that race is a referent denoting a proper name for population groups. But that’s where the similarities end; OMB race theory is nothing like HBD. The key distinction between the two is in the interpretation of said differences. While both perspectives hold that populations can be sorted into distinct groups, they diverge in their intentions and conclusions regarding the significance of racial categorization.

Spencer’s OMB race theory emphasizes the delineation of races based on physical as well as genetic differences, using K=5 and how the OMB defines race in America: as a proper name for population groups. But Spencer (2014: 1036) explicitly states that his theory has no normative conclusions in it, since the genetic evidence that supports it comes from noncoding DNA sequences: “Thus, if individuals wish to make claims about one race being superior to another in some respect, they will have to look elsewhere for that evidence.” So the theory focuses solely on genetic ancestry without any normative judgments or hierarchical ranking of the races.

Conversely, the HBD movement, despite also genetically delineating races, differs in its application and interpretation of the evidence. Unlike Spencer’s OMB race theory, HBD states that genetic differences between groups contribute to differences in intelligence, social outcomes, and behavior. HBD proponents use genetic analyses like GWAS to argue that a trait has some kind of genetic influence and that, since there is a phenotypic difference in the trait between certain racial groups, it follows that there is a genetic difference between those groups underlying the trait in question.

So this distinction shows the principal ways in which OMB race theory is nothing like HBD theory. While both ideas involve the genetic delineation of races, Spencer’s doesn’t support racist ideologies or hierarchical rankings among the races, while the HBD movement does. Thus, the distinction shows that the relationship between genetic analysis, racial categorization, and racism is nuanced, and that merely believing human races exist doesn’t make one a racist.

Furthermore, the attribution of social outcomes and inequalities to biological/genetic differences is yet another reason why HBD is racist. HBD proponents argue that most differences (read: outcomes/inequalities) between groups come down mostly to genes, while still leaving room for an environmental component. (This is also one of Bailey’s 1997 hereditarian fallacies.) It is this claim, that socially valued differences between groups are genetic in nature, which then leads to systemic discrimination. So by attributing differences in outcomes and resources to biological differences, HBD attempts to perpetuate and legitimate systemic discrimination against certain racial groups: “It’s in their genes, nothing can be done.” Therefore, by ranking humans based on race and attributing differences in outcomes between groups, in part, to biological differences, the HBD movement justifies and perpetuates systemic discrimination against certain races, making HBD a racist movement.

Eugenic thinking arose in the late 1800s and began to be put into action in the 1900s. From the sterilization of people deemed inferior to advocating the “enhancement” of humanity through the selective breeding of certain groups, some of the ideas of the eugenics movement are inherent in HBD-type thinking. The HBD movement emerged as a more “respectable” iteration of the eugenics movement, and it draws on similar themes. Why does this connection matter? It matters because the historical connection between the two shows how such pernicious thinking can penetrate social thought.

Lastly, the HBD movement relies on pseudoscience. Its proponents often distort or misrepresent scientific findings. Most obvious is J. P. Rushton. In his discussion of Gould’s (1978) reanalysis of Morton’s skull collection, Rushton miscited Gould’s results in a way that jibed with Rushton’s racial hierarchies (Cain and Vanderwolf, 1990). Rushton also misrepresented the skull data from Beals et al (1984). Rushton is the perfect example here, since he misrepresented and ignored a great deal of contrary data to make his theory seem more important. His cherry-picking, misrepresentation of data, and ignoring of contrary evidence, while not responding to devastating critiques (Anderson, 1991; Graves 2002a, b), show this perfectly. It is a textbook case of confirmation bias.

They also rely on simplistic and reductionist interpretations of genetic research. In doing so, they perpetuate stereotypes which can have real-world consequences, like people committing horrific mass murder (the Buffalo shooter made reference to such genetic studies, which is why science communication is so important).

In his 2020 book Human Diversity, the infamous Charles Murray made a statement about inferiority and superiority in reference to classes, races, and sexes, writing:

To say that groups of people differ genetically in ways that bear on cognitive repertoires (as this book does) guarantees accusations that I am misusing science in the service of bigotry and oppression. Let me therefore state explicitly that I reject claims that groups of people, be they sexes or races or classes, can be ranked from superior to inferior. I reject claims that differences among groups have any relevance to human worth or dignity.

Seeing as Chuck is most famous for his book The Bell Curve, this passage needs to be taken in context. Although he claims to reject such claims of inferiority and superiority, his previous work has contributed to such notions, and thus they are implicit in his work. Furthermore, the language he used in the passage also implies hierarchical distinctions. When he refers to “groups of people [who] differ genetically in ways that bear on cognitive repertoires“, there is a subtle suggestion that groups may possess inherent advantages or disadvantages in cognitive ability, thus implying a form of hierarchy.

Murray’s work has been used by alt-right and white nationalist groups, and we know that white nationalist groups use such information for their own gain (Panofsky, Dasgupta, and Iturriaga, 2020; Bird, Jackson, and Winston, 2023). Panofsky and his coauthors write that “the claims that genetics defines racial groups and makes them different, that IQ and cultural differences among racial groups are caused by genes, and that racial inequalities within and between nations are the inevitable outcome of long evolutionary processes are neither new nor supported by science (either old or new). They’re the basic, tired evergreens of ancient racist thought.”

Next we have Steve Sailer. He may claim that he is merely observing (or, as he says, “Noticing”) and discussing empirical data. But his focus on racial differences as driven mainly by genetic differences aligns with Lovchik’s definition of racism, since it involves the ranking of races based on perceived genetic differences in both IQ and crime. Therefore, by emphasizing these differences and their purported implications for socially relevant traits and social hierarchies, Sailer’s work can be seen as justifying social inequalities and therefore systemic discrimination.

Lastly, we have Bo Winegard’s Aporia Magazine essay titled What is a racist? In the article he forwards 5 definitions, scoring each on a 10-point scale (I bracket the score he gives each):

Flawed: 

1: Somebody who believes that race is a real, biological phenomenon and that races are different from each other. [1/10]

2: Somebody who believes that some races have higher average socially desirable traits such as intelligence and self-control than others. [3/10]

3: Somebody who treats members of one race differently from members of another race. [5/10]

Plausible: 

4: Somebody who dislikes members of other races. [8/10]

5: Somebody who advocates for differential treatment under the law for different races. [10/10]

Note that the first 2 encompass what, for the purposes of this article, I call racism in the HBD parlance. Nonetheless, I have tried to argue that those 2 do constitute racism, and I think I have shown how. For the first: if it is used to justify and legitimate social hierarchies, it is indeed racist. For the second: if someone holds the belief that races differ on socially valued traits and that the difference is genetically caused, then it could perpetuate racist stereotypes and the continuation of racist ideologies. The third and fourth constitute racial discrimination; these 2 could also be called “hearts and minds” racism, which operates at the level of individual beliefs, attitudes, and behavior. But the fifth definition Bo forwarded is the most interesting one, since it has certain implications.

About the fifth definition, Bo wrote (my emphasis) that “a racist is somebody who advocates for differential treatment under the law for different races, [it] is the most incontrovertible and therefore paradigmatic definition of racist that I can imagine.” This is interesting. If it cannot be denied or disputed, and serves as a paradigm case of the referent of racism, then it has implications for the views of certain hereditarians and the people they ran with.

We know that Jensen ran with actual racists and that he lent his name to their cause (Jackson, 2022; see also Jackson and Winston, 2020 for a discussion). We know that hereditarians, despite their protestations, ignore evolutionary theory (Roseman and Bird, 2023). And we know that there is no support for the hereditarian hypothesis (Bird, 2021). But the issue here is the fifth definition, which Bo said is incontrovertible.

In his 2020 article Research on group differences in intelligence: A defense of free inquiry, philosopher Nathan Cofnas noted that hereditarians call for a kind of “tailored training program“, which John Jackson took to be “a two-tiered education system.” Although Cofnas didn’t say it himself, he cited hereditarians who DID say it; thus, he showed how they ARE racists. Cofnas also states that we can’t know what would happen if race differences in intelligence were found to have a genetic basis. But I have argued before that since the hereditarian hypothesis is false, and since believing it is true could (and has) caused harm, we should ban IQ tests. Nonetheless, Cofnas’ passage can be seen as racist under Lovchik’s definition, since he approvingly cites calls for tailored training programs, which could result in an unequal distribution of resources and further entrench inequities based on purported genetic differences between groups in their so-called intelligence, which hereditarians argue is partly genetic in nature.

Prominent hereditarians Shockley and Cattell said some overtly racist things, with Cattell even creating a religion called “Beyondism” (Tucker, 2009). Shockley called for the voluntary sterilization of black women (Thorp, 2022) and proposed a plan to pay anyone with an IQ below a certain threshold a sum of money to get sterilized. I have also further documented the eugenic thinking of IQists and criminologists. It seems that this field has been a home for racists ever since its inception.

Conclusion

Throughout this discussion, I have argued that the HBD movement is a racist one. Most importantly, a lot of their research was bankrolled by the Nazi Pioneer Fund. This financial support from a racist organization is pivotal here, since these researchers were doing work that would justify the conclusions of the racist Fund (see Tucker 1996, 2002). The Fund had a history of funding research into eugenics and of promoting research which could, implicitly, be seen as justification for racial superiority and inferiority, thereby attempting to justify existing inequities.

Relying on John Lovchik’s definition of racism, I’ve shown how the HBD movement is a racist movement, since it seeks to justify existing inequalities between racial groups and since it is a system of ranking human beings. I’ve also shown that mere belief in the existence of race isn’t enough for one to be rightly called a racist, since a theory of race like Spencer’s (2014) OMB race theory is nothing like HBD theory: it doesn’t rank the races, nor does it argue that genetic differences between races cause the socially important differences that hereditarians discuss. Racism isn’t only about individual attitudes, but also about systemic structures and institutional practices which perpetuate racial hierarchies and inequities.

I showed how, despite his protestations, Murray’s work implies that races, classes, and sexes can be ranked, which is a form of hierarchy. I also showed how Steve “The Noticer” Sailer is a racist. Both of these men’s views are racist. I then discussed Winegard’s definitions, showing how they bear on the term under discussion. I then turned to how Jensen ran with racist Nazis and how Cofnas cited researchers who called for tailored training programs.

That the HBD movement promotes the idea that differences in socially valued traits are genetic in nature through pseudoscientific theories along with the fact that it quite obviously is an attempt at justifying a human hierarchy of socially valued traits means that there is no question about it—the HBD movement is a racist movement.

(P1) If the HBD movement promotes and justifies racial hierarchies and inequities, then it is a racist movement.
(P2) The HBD movement promotes and justifies racial hierarchies and inequities.
(C) So the HBD movement is a racist movement.

A Critical Examination of Responses to Berka’s (1983) and Nash’s (1990) Philosophical Inquiries on Mental Measurement from Brand et al (2003)

2750 words

Introduction

What I term “the Berka-Nash measurement objection” is, I think, one of the most powerful arguments against not only the concept of IQ “measurement” but against psychological “measurement” as a whole; it also complements my irreducibility-of-the-mental arguments. (Although there are of course contemporary authors who argue that IQ, and other psychological traits, are immeasurable, the Berka-Nash measurement objection touches the heart of the matter extremely well.) The argument that Karel Berka (1983) mounted in Measurement: Its Concepts, Theories, and Problems is a masterclass in defining what “measurement” means and the rules needed to designate X as a true measure and Y as a true measurement device. Roy Nash (1990), in Intelligence and Realism: A Materialist Critique of IQ, then brought Berka’s critique of extraphysical (mental) measurement to a broader audience, simplifying some of the concepts Berka discussed and applying them to the IQ debate, arguing that there is no true property that IQ tests measure; therefore IQ tests aren’t a measurement device and IQ isn’t a measure.

I have found only one response to this critique of mental measurement by hereditarians: that of Brand et al (2003). Brand et al think they have shown that Berka’s and Nash’s critique of mental measurement is consistent with IQ, and that IQ can be seen as a form of “quasi-quantification.” But their response misses the mark, for these reasons: (1) they didn’t articulate the specified measured object, object of measurement, and measurement unit for IQ, and they overlooked the challenges Berka discussed about mental measurement; (2) they ignored the lack of objectively reproducible measurement units; (3) they misinterpreted what Berka meant by “quasi-quantification” and then likened it to IQ; and (4) they failed to engage with Berka’s call for precision and reliability.

IQ, therefore, isn’t a measurable construct since there is no property being measured by IQ tests.

Brand et al’s arguments against Berka

Brand et al’s response to Berka’s critique of mental measurement in the context of IQ attempts to raise critical concerns about Berka’s overarching analysis of measurement. But examining their arguments against Berka reveals a few shortcomings which undermine their attempted rebuttal of the central tenets of Berka’s thesis on measurement. From failing to articulate the fundamental components of IQ measurement to overlooking the broader philosophical issues that Berka addressed, Brand et al’s response falls short of providing a comprehensive rebuttal. In actuality—despite the claims from Brand et al—Berka’s argument against mental measurement doesn’t lend credence to IQ measurement; upon a close, careful reading of Berka (and then Nash), it effectively destroys it.

(1) The lack of articulation of a specified measured object, object of measurement and measurement unit for IQ

This is critical for any claim that X is a measure and that Y is a measurement device—one needs to articulate the specified measured object, object of measurement, and measurement unit for what one claims to be measuring. To quote Berka:

“If the necessary preconditions under which the object of measurement can be analyzed on a higher level of qualitative aspects are not satisfied, empirical variables must be related to more concrete equivalence classes of the measured objects. As a rule, we encounter this situation at the very onset of measurement, when it is not yet fully apparent to what sort of objects the property we are searching for refers, when its scope is not precisely delineated, or if we measure it under new conditions which are not entirely clarified operationally and theoretically. This situation is therefore mainly characteristic of the various cases of extra-physical measurement, when it is often not apparent what magnitude is, in fact, measured, or whether that which is measured really corresponds to our projected goals.” (Berka, 1983: 51)

“Both specific postulates of the theory of extraphysical measurement, scaling and testing – the postulates of validity and reliability – are then linked to the thematic area of the meaningfulness of measurement and, to a considerable extent, to the problem area of precision and repeatability. Both these postulates are set forth particularly because the methodologists of extra-physical measurement are very well aware that, unlike in physical measurement, it is here often not at all clear which properties are the actual object of measurement, more precisely, the object of scaling or counting, and what conclusions can be meaningfully derived from the numerical data concerning the assumed subject matter of investigation. Since the formulation, interpretation, and application of these requirements is a subject of very vivid discussion, which so far has not reached any satisfactory and more or less congruent conclusions, in our exposition we shall limit ourselves merely to the most fundamental characteristics of these postulates.” (Berka, 1983: 202-203)

“At any rate, the fact that, in the case of extraphysical measurement, we do not have at our disposal an objectively reproducible and significantly interpretable measurement unit, is the most convincing argument against the conventionalist view of a measurement, as well as against the anti-ontological position of operationalism, instrumentalism, and neopositivism.” (Berka, 1983: 211)

One glaring flaw—and I think it is the biggest—in Brand et al’s response is their failure to articulate the specified measured object, object of measurement, and measurement unit for IQ. Berka’s insistence on precision in measurement requires a detailed conception of what IQ tests aim to measure—we know this is “IQ” or “intelligence” or “g”—but they then of course would have run into how to articulate and define it in a physical way. Berka emphasized that the concept of measurement demands precision in defining what is being measured (the specified measured object), the entity being measured (the object of measurement), and the unit applied for measurement (the measurement unit). Thus, for IQ to be a valid measure and for IQ tests to be a valid measurement device, it is crucial to elucidate exactly what the tests measure, the nature of the mental attribute supposedly under scrutiny, and the standardized unit of measurement.

Berka’s insistence on precision aligns with a fundamental aspect of scientific measurement—the need for a well-defined and standardized procedure to quantify a particular property. This is evident in physical measurement, like the length of an object being measured in meters. But when transitioning to the mental, the challenge lies in attempting to measure something that lacks a unit of measurement. (As Richard Haier (2014) even admits, there is no measurement unit for IQ like inches, liters, or grams.) So without a clear and standardized unit for mental properties, claims of measurement are suspect—indeed, impossible. Moreover, because Brand et al sidestep this crucial aspect of what Berka was getting at, their argument remains vulnerable to Berka’s foundational challenge regarding the essence of what is being measured and how it is quantified.

Furthermore, Brand et al failed to grapple with what Berka wrote on mental measurement. Their response would have been more robust if it had engaged with Berka’s exploration of the inherent intricacies and nuances involved in establishing a clear object of measurement for IQ, or indeed for any mental attribute.

A measurement unit has to be a standardized and universally applicable quantity or physical property that allows for standardized comparisons across different measures. And none exists for IQ, nor for any other psychological trait. So we can safely argue that psychometrics isn’t measurement, even without touching contemporary arguments against mental measurement.

(2) Ignoring the lack of objectively reproducible measurement units

A crucial aspect of Berka’s critique involves the absence of objectively reproducible measurement units in the realm of mental measurement. Berka contended that in the absence of such a standardized unit, the foundations for a robust enterprise of measurement are compromised. This is yet another thing that Brand et al overlooked in their response.

Brand et al’s response lacks any comprehensive examination of how the absence of objectively reproducible measurement units in mental measurement undermines the claim that IQ is a measure. This lack of attention weakens—and I think destroys—their response. They should have explored the ramifications of a so-called measure without a measurement unit. This brings me to their claim that IQ is a form of “quasi-quantification.”

(3) Misinterpretation of “quasi-quantification” and its application to IQ

Brand et al hinge their defense of IQ on Berka’s concept of “quasi-quantification”, which they misinterpret. Berka uses “quasi-quantification” to describe situations where the properties being measured lack the clear objectivity and standardization found in actual physical measurements. But Brand et al seem to interpret “quasi-quantification” as a justification for considering IQ as a valid form of measurement.

Brand et al’s misunderstanding of Berka’s conception of “quasi-quantification” is evident in their attempt to equate it with a validation of IQ as a form of measurement. Berka was not endorsing quasi-quantification as a fully-fledged form of measurement; he was highlighting its limitations and distinctiveness compared to traditional quantification and measurement. Berka distinguishes between quantification, pseudo-quantification, and quasi-quantification, and explicitly states that numbering and scaling—in contrast to counting and measurement—cannot be regarded as kinds of quantification. (Note that “counting” in this framework isn’t a variety of measurement, since measurement is much more than enumeration, and counted elements in a set aren’t magnitudes.) Brand et al fail to grasp this nuanced difference, mischaracterizing quasi-quantification as a blanket acceptance of IQ as a form of measurement.

Berka’s reservations about quasi-quantification are rooted in the challenges and complexities associated with mental properties, acknowledging that they fall short of the clear objectivity found in actual physical measurements. Brand et al’s interpretation overlooks this critical aspect, which leads them to erroneously argue that accepting IQ as quasi-quantification is sufficient to justify its status as measurement.

Brand et al’s arguments against Nash

Nash’s book, on the other hand, is a much more accessible and pointed attack on the concept of IQ and its so-called “measurement.” He takes the reader from the beginnings of IQ testing to the Flynn Effect, through Berka’s argument, and ends with a discussion of test bias. IQ doesn’t have a true “0” point (unlike temperature, to which IQ-ists have tried to liken IQ—and the thermometer to IQ tests—but there is no lawful relation between IQ tests and intelligence like the relation between mercury and temperature in a thermometer, so again the hereditarian claim fails). But most importantly, Nash made the claim that there is actually no property to be measured by IQ tests—what did he mean by this?

Nash of course doesn’t deny that IQ tests rank individuals on their performance. But the claim that IQ is a metric property is already assumed in IQ theory, and the very fact that people are ranked doesn’t justify the claim that people are ranked according to a property revealed by their performance (Nash, 1990: 134). Moreover, if intelligence/“IQ” were truly quantifiable, then the difference between 80 and 90 IQ would represent the same cognitive difference as that between 110 and 120 IQ. But this isn’t the case.
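Nash’s point about ordinality can be sketched computationally. The following toy example (my own illustration with hypothetical scores, not from Nash) shows that any strictly increasing rescaling of scores preserves every ranking, which is the only information an ordinal scale carries, while destroying the equal-interval claim that treating IQ as a quantity requires:

```python
# Toy illustration (hypothetical scores): ordinal information survives
# any strictly increasing transform, but interval claims do not.

def rank_order(scores):
    """Return the ordering of test-takers implied by their scores."""
    return sorted(range(len(scores)), key=lambda i: scores[i])

scores = [80, 90, 110, 120]

# A strictly increasing (rank-preserving) rescaling of the same scores.
transformed = [s ** 2 for s in scores]

# Every "less than / equal to / greater than" comparison is unchanged:
assert rank_order(scores) == rank_order(transformed)

# But the equal-interval claim does not survive the rescaling:
gap_low = scores[1] - scores[0]                # 90 - 80   -> 10
gap_high = scores[3] - scores[2]               # 120 - 110 -> 10
t_gap_low = transformed[1] - transformed[0]    # 8100 - 6400   -> 1700
t_gap_high = transformed[3] - transformed[2]   # 14400 - 12100 -> 2300

assert gap_low == gap_high      # equal gaps on the raw scale...
assert t_gap_low != t_gap_high  # ...unequal after a rank-preserving rescale
```

Nothing in the rank data privileges the raw scale over the transformed one, so the claim that a 10-point gap means the same thing everywhere on the scale is an assumption imposed on the scores, not something the ordinal data can certify.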

Nash is a skeptic of the claim that IQ tests measure some property. (As I am.) So he challenges the idea that there is a distinct and quantifiable property that can be objectively measured by IQ tests (the construct “intelligence”). Nash also questions whether intelligence possesses the characteristics necessary for measurement—like a well-defined object of measurement and measurement unit. Nash successfully argued that intelligence cannot be legitimately expressed in a metric concept, since there is no true measurement property. But Brand et al do nothing to attack the arguments of Berka and Nash and they do not at all articulate the specified measured object, object of measurement and measurement unit for IQ, which was the heart of the critique. Furthermore, a precise articulation of the specified measured object when it comes to the metrication of X (any psychological trait) is necessary for the claim that X is a measure (along with articulating the object of measurement and measurement unit). But Brand et al did not address this in their response to Nash, which I think is very telling.

Brand et al do rightly note Nash’s key points, but they fall far, far from the mark in effectively mounting a sound argument against his view. Nash argues that IQ test results can, at best, only be used for ordinal comparisons of “less than, equal to, greater than” (which is also what Michell, 2022 argues, and he concludes the same as Nash). This is of course true, since people take a test and their performance is based on the type of culture they are exposed to (their cultural and psychological tools). Brand et al failed to acknowledge this and grapple with its full implications—indeed, they did not engage at all with this:

The psychometric literature is full of plaintive appeals that despite all the theoretical difficulties IQ tests must measure something, but we have seen that this is an error. No precise specification of the measured object, no object of measurement, and no measurement unit, means that the necessary conditions for metrication do not exist. (Nash, 1990: 145)

All in all, a fair reading of both Berka and Nash will show that Brand et al slithered away from doing any actual philosophizing on the phenomena that Berka and Nash discussed. And, therefore, that their “response” is anything but.

Conclusion

Berka’s and Nash’s arguments against mental measurement/IQ show the insurmountable challenges that the peddlers of mental measurement have to contend with. Berka emphasized the necessity of clearly defining the measured object, object of measurement, and measurement unit for genuine quantitative measurement—these are the necessary conditions for metrication, and they are nonexistent for IQ. Nash then extended this critique to IQ testing, concluding that the lack of a measurable property undermines the claim that IQ is a true measurement.

Brand et al’s response, on the other hand, was pitiful. They attempted to reconcile Berka’s concept of “quasi-quantification” with IQ measurement. Despite seemingly having some familiarity with both Berka’s and Nash’s arguments, they did not articulate the specified measured object, object of measurement and measurement unit for IQ. If Berka really did agree that IQ is “quasi-quantification”, then why did Brand et al not articulate what needs to be articulated?

When discussing Nash, Brand et al failed to address Nash’s claim that IQ can only allow for ordinal comparisons. Nash emphasized numerous times in his book that the absence of a true measurement property challenges the claim that IQ can be measured. Thus, again, Brand et al’s response did not successfully and effectively engage with Nash’s key points and his overall argument against the possibility of intelligence/IQ measurement (and mental measurement as a whole).

Berka’s and Nash’s critiques highlight the difficulties of treating intelligence (and psychological traits as a whole) as quantifiable properties. Brand et al did not adequately address the issues I brought up above, and they outright tried to weasel their way into having Berka “agree” with them (on quasi-quantification). So they didn’t provide any effective counterargument, nor did they do the simplest thing they could have done—which was articulate the specified measured object, object of measurement, and measurement unit for IQ. The very fact that there is no true “0” point is devastating for claims that IQ is a measure. I’ve been told on more than one occasion that “IQ is a unit-less measure”—but that doesn’t make sense. It’s just an attempt to cover for the fact that there is no measurement unit at all, and consequently no specified measured object or object of measurement.

For these reasons, the Berka-Nash measurement objection remains untouched and the questions it raises remain unanswered. (It’s simple: IQ-ists just need to admit that they can’t answer the challenge and that psychological traits aren’t measurable like physical traits. But then their whole worldview would crumble.) Maybe we’ll wait another 40 and 30 years, respectively, for a response to the Berka-Nash measurement objection, and hopefully it will at least try harder than Brand et al did in their failure to address these conceptual issues raised by Berka and Nash.

“Missing Heritability” and Missing Children: On the Issues of Heritability and Hereditarian Interpretations

3100 words

“Biological systems are complex, non-linear, and non-additive. Heritability estimates are attempts to impose a simplistic and reified dichotomy (nature/nurture) on non-dichotomous processes.” (Rose, 2006)

“Heritability estimates do not help identify particular genes or ascertain their functions in development or physiology, and thus, by this way of thinking, they yield no causal information.” (Panofsky, 2016: 167)

“What is being reported as ‘genetic’, with high heritability, can be explained by difference-making interactions between real people. In other words, parents and children are sensitive, reactive, living beings, not hollow mechanical or statistical units.” (Richardson, 2022: 52)

Introduction

In the world of behavioral genetics, it is claimed that studies of twins, adoptees, and families can point us to the interplay between genetic and environmental influences on complex behavioral traits. To study this, behavioral geneticists use a concept called “heritability”—taken from animal breeding—which estimates the degree of variation in a phenotypic trait that is due to genetic variation among individuals in the studied population. But upon the advent of molecular genetic analysis after the human genome project, something happened that troubled behavioral genetic researchers: The heritability estimates gleaned from twin, family, and adoption studies did not match the estimates gleaned from molecular genetic studies. This discrepancy is what is termed “missing heritability,” and it creates a conundrum—why don’t the estimates from one way of gleaning heritability match those from the other? I think it’s because the underlying biological models represent a simplistic (and false) picture of biological causation (Burt and Simons, 2015; Lala, 2023). This raises questions that aren’t dissimilar to those raised when a child disappears.

Imagine a missing child. Imagine the fervor with which a family and the authorities search in order to find the child and bring them home. The initial fervor, the relentless pursuit, and the agonizing uncertainty constitute a parallel narrative in behavioral genetics, where behavioral geneticists—like the family of a missing child and the authorities—find themselves grappling with unforeseen troubles. In this discussion, I will argue that the additivity assumption is false, that this kind of thinking is a holdover from the neo-Darwinian Modern Synthesis, that hereditarians have been told for decades that heritability just isn’t useful for what they want to do, and finally that “missing heritability” and missing children are in some ways analogous—with one key difference: The missing children actually existed, while the “missing heritability” never existed at all.

The additivity assumption

Behavioral geneticists pay lip service to “interactions”, but then conceptualize these interactions as due to additive heritability (Richardson, 2017a: 48-49). But the fact of the matter is, genetic interactions create phantom heritability (Zuk et al, 2012). When it comes to the additive claim of heritability, that claim is straight up false.

The additive claim is one of the most important things for the utility of the concept of heritability for the behavioral geneticist. The claim that heritability estimates for a trait are additive means that the contribution of each gene variant is independent and that they all sum up to explain the overall heritability (Richardson 2017a: 44 states that “all genes associated with a trait (including intelligence) are like positive or negative charges”). But in reality, gene variants don’t have independent effects; they interact with other genes, the environment, and other developmental resources. In fact, violations of the additivity assumption are large (Daw, Guo, and Harris, 2015).

Gene-gene interactions, gene-environment interactions, and environmental factors can lead to overestimates of heritability, and they are non-additive. So after the completion of the human genome project in the 2000s, researchers realized that the heritability they identified using molecular genetics did not jibe with the heritability computed from twin studies from the 1920s until the late 1990s (and even into the 2020s). The expected additive contributions fell short of explaining, with molecular genetic data, the heritability gleaned from twin studies.

Thinking of heritability as a complex jigsaw puzzle may help to explain the issue. The traditional view of heritability assumes that each genetic piece fits neatly into the puzzle to complete the overall genetic picture. But in reality, these pieces aren’t additive. They can interact in unexpected ways, which then creates gaps in our understanding, like a missing puzzle piece. So the non-additive effects of gene variants—including interactions and their complexities—can be likened to missing pieces in the heritability puzzle. The unaccounted-for genetic interactions and nuances then contribute to what is called “missing heritability.” Just as one may search and search for missing puzzle pieces, so too do behavioral geneticists search and search for the “missing heritability.”

So heritability assumes no gene-gene and gene-environment interaction, no gene-environment correlation, among other false or questionable assumptions. But the main issue, I think, is that of the additivity assumption—it’s outright false and since it’s outright false, then it cannot accurately represent the intricate ways in which genes and other developmental resources interact to form the phenotype.

If heritability estimates assume that genetic influences on a trait are additive and independent, then heritability estimates oversimplify genetic complexity. If heritability estimates oversimplify genetic complexity, then heritability estimates do not adequately account for gene-environment interactions. If heritability does not account for gene-environment interactions, then heritability fails to capture the complexity of trait inheritance. Thus, if heritability assumes that genetic influences on a trait are additive and independent, then heritability fails to capture the complexity of trait inheritance due to its oversimplified treatment of genetic complexity and omission of gene-environment interactions.
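The additivity problem can be sketched with a toy two-locus simulation (entirely my own construction, not from any of the cited papers): when the true genetic architecture is interactive, the purely additive model that heritability estimation assumes recovers only part of the genetic variance and misdescribes its structure.

```python
# Toy two-locus model (my own construction): the true genetic signal is
# an interaction (epistasis), but we fit the additive model anyway.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Two "gene variants" coded 0/1/2 (number of copies of an allele).
g1 = rng.integers(0, 3, n).astype(float)
g2 = rng.integers(0, 3, n).astype(float)

# The phenotype depends on the variants only through their product.
phenotype = g1 * g2 + rng.normal(0.0, 0.5, n)

# Ordinary least squares with the purely additive design matrix.
X = np.column_stack([np.ones(n), g1, g2])
beta, *_ = np.linalg.lstsq(X, phenotype, rcond=None)
additive_fit = X @ beta

# Fraction of phenotypic variance captured additively vs. the full
# genetic contribution actually present in the simulation.
var_additive = np.var(additive_fit) / np.var(phenotype)
var_genetic = np.var(g1 * g2) / np.var(phenotype)

assert var_additive < var_genetic  # the additive sum falls short
```

In this toy setup the additive model still “finds” both variants, but it attributes to independent, summed effects what is in fact a joint, context-dependent effect—exactly the misdescription the additivity assumption builds in.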

One more issue is that of the “heritability fallacy” (Moore and Shenk, 2016). One commits the heritability fallacy when one assumes that heritability is an index of genetic influence on traits and that heritability can tell us anything about the relative contributions of trait inheritance and ontogeny. Moore and Shenk (2016) then draw a valid conclusion about the false belief that heritability tells us anything about the “genetic strength” of a trait:

In light of this, numerous theorists have concluded that ‘the term “heritability,” which carries a strong conviction or connotation of something “[in]heritable” in the everyday sense, is no longer suitable for use in human genetics, and its use should be discontinued.’31 Reviewing the evidence, we come to the same conclusion. Continued use of the term with respect to human traits spreads the demonstrably false notion that genes have some direct and isolated influence on traits. Instead, scientists need to help the public understand that all complex traits are a consequence of developmental processes.

“Missing heritability”, missing children

Twin studies traditionally estimate heritability at between 50 and 80 percent for numerous traits (eg Polderman et al, 2015; see Joseph’s critique). But as alluded to earlier, molecular studies have found heritabilities of 10 percent or lower (eg, Sniekers et al, 2017; Savage et al, 2018; Zabaneh et al, 2018). This discrepancy between heritability estimates from different tools is what is termed “missing heritability” (Matthews and Turkheimer, 2022). But increasing sample sizes will merely increase the chance of spurious correlations (Calude and Longo, 2018), which is all these studies show (Richardson, 2017b; Richardson and Jones, 2019).

This tells me one important thing—behavioral geneticists have so much faith in the heritability estimates gleaned from twin studies that they assume the heritability is merely “missing” in the newer molecular genetic studies. But if something is “missing,” then that implies it can be found. They have so much faith that eventually, as samples in GWAS and similar studies get larger and larger, we will find the heritability that is missing and be able to identify the genetic variants responsible for traits of interest such as IQ. However, I think this is confused, and a simple analogy will show why.

When a child goes missing, it is implied that they will be found by the authorities, whether dead or alive. Now I can liken this to heritability. The term “missing heritability” comes from the disconnect between heritability estimates gleaned from twin studies and those gleaned from molecular genetic studies like GWAS. Since twin studies show X percent heritability (high) and molecular genetic studies show Y percent heritability (low)—a huge difference between tools—the implication is that there is “missing heritability” that must be explained by rare variants or other factors.

So just as parents and authorities try so hard to find their missing children, so too do behavioral geneticists try so hard to find their “missing heritability.” The anguish families endure as they search for their children is mirrored in the efforts of behavioral geneticists to close the gap between the two kinds of tools used to glean heritability.

But there is an important difference at play here—missing children actually exist, while “missing heritability” doesn’t, and that’s why we haven’t found it. Although some parents, sadly, may never find their missing children, the analogy is that behavioral geneticists will never find their own “children” (their missing heritability) because it simply does not exist.
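One way to see how the “missing” heritability may never have existed is a toy simulation (entirely my own construction) of Falconer’s classic twin estimator, h² = 2(rMZ − rDZ). When gene variants interact, MZ twins share the whole interaction while DZ twins share far less than half of it, so the twin-based estimate is inflated well beyond anything an additive, GWAS-style analysis could recover:

```python
# Toy simulation (my own construction) of Falconer's twin estimator
# h2 = 2 * (r_MZ - r_DZ) under a purely interactive genetic architecture.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

def phenotype(a, b, rng):
    # Phenotype depends on two loci only through their interaction.
    return a * b + rng.normal(0.0, 0.5, len(a))

# MZ twins: identical genotypes at both loci.
g1 = rng.integers(0, 3, n).astype(float)
g2 = rng.integers(0, 3, n).astype(float)
mz_a = phenotype(g1, g2, rng)
mz_b = phenotype(g1, g2, rng)

# DZ twins: crude approximation -- each locus shared with probability 1/2.
h1 = np.where(rng.random(n) < 0.5, g1, rng.integers(0, 3, n).astype(float))
h2 = np.where(rng.random(n) < 0.5, g2, rng.integers(0, 3, n).astype(float))
dz_a = phenotype(g1, g2, rng)
dz_b = phenotype(h1, h2, rng)

r_mz = np.corrcoef(mz_a, mz_b)[0, 1]
r_dz = np.corrcoef(dz_a, dz_b)[0, 1]
falconer_h2 = 2.0 * (r_mz - r_dz)

# DZ twins share much less than half of the *interaction*, so the
# Falconer estimate overshoots even the total genetic share of variance.
assert r_dz < 0.5 * r_mz
assert falconer_h2 > r_mz
```

An additive molecular analysis of the same data could at best recover the linear part of the signal, so the twin-based number sits far above anything a GWAS-style study could “find”: on this toy model, the gap is built into the estimator, not hiding in undiscovered variants.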

Spurious correlations

Even increasing the sample sizes won’t do anything, since the larger the sample size, the bigger the chance for spurious correlations—and that’s all GWAS studies for IQ are (Richardson and Jones, 2019), while correlations with GWAS are inevitable and meaningless (Richardson, 2017b). Denis Noble (2018) puts this well:

As with the results of GWAS (genome-wide association studies) generally, the associations at the genome sequence level are remarkably weak and, with the exception of certain rare genetic diseases, may even be meaningless (1321). The reason is that if you gather a sufficiently large data set, it is a mathematical necessity that you will find correlations, even if the data set was generated randomly so that the correlations must be spurious. The bigger the data set, the more spurious correlations will be found (3). The current rush to gather sequence data from ever larger cohorts therefore runs the risk that it may simply prove a mathematical necessity rather than finding causal correlations. It cannot be emphasized enough that finding correlations does not prove causality. Investigating causation is the role of physiology.

Nor does finding higher overall correlations by summing correlations with larger numbers of genes showing individually tiny correlations solve the problem, even when the correlations are not spurious, since we have no way to find the drugs that can target so many gene products with the correct profile of action.
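The point Noble (and Calude and Longo) make about large datasets can be illustrated with a toy simulation (my own construction, using made-up random data): even when a “phenotype” and thousands of “genetic variants” are generated independently at random, sheer multiplicity guarantees that some correlations look substantial.

```python
# Toy demonstration (random made-up data): spurious correlations are a
# mathematical certainty once enough variables are tested.
import numpy as np

rng = np.random.default_rng(42)
n_people, n_variants = 500, 5_000

trait = rng.normal(size=n_people)                   # random "phenotype"
variants = rng.normal(size=(n_people, n_variants))  # random "genotypes"

# Pearson correlation of the trait with every variant; by construction,
# every true correlation is exactly zero.
centered = variants - variants.mean(axis=0)
r = centered.T @ (trait - trait.mean())
r /= n_people * variants.std(axis=0) * trait.std()

strongest = float(np.abs(r).max())

# None of these variants has anything to do with the trait, yet the
# best-looking correlation is far from zero.
assert strongest > 0.1
```

Adding more variants only pushes the strongest spurious correlation higher, which is Noble’s point: bigger datasets manufacture more impressive-looking noise, and no amount of scale turns correlation into causation.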

The Darwinian model

But the claim that there is a straight line running from G (genes) to P (phenotype) is a mere holdover from the neo-Darwinian Modern Synthesis. The fact of the matter is, “HBD” and hereditarianism are based on reductionistic models of genes and how they work. But genes don’t work how hereditarians think they do; reality is much more complex than they assume. Feldman and Ramachandran (2018) ask “Missing compared to what?”, effectively challenging the “missing heritability” claim. As they ask, would Herrnstein and Murray have written The Bell Curve if they believed that the heritability of IQ were 0.30? I don’t think they would have. In any case, the belief that the heritability of IQ is between 0.4 and 0.8 shows the genetic determinist assumptions inherent in this type of “HBD” thinking.

Amusingly, as Ned Block (1995) noted, Murray said in an interview that “60 percent of the intelligence comes from heredity” and that that heritability is “not 60 percent of the variation. It is 60 percent of the IQ in any given person.” Such a major blunder from one of the “intellectual spearheads” of the “HBD race realist” movement…

Behavioral geneticists claim that the heritability is missing only because sample sizes are low, and that as sample sizes increase, the missing heritability based on associated genes will be found. But this doesn’t follow at all, since increasing sample sizes will just increase spurious hits of genes correlated with the trait in question, while saying absolutely nothing about causation. Ultimately, only a developmental perspective can provide us with mechanistic knowledge; the so-called heritability of a phenotype cannot give us such information, because heritability isn’t a mechanistic variable and doesn’t show causation.

Importantly, a developmental perspective provides mechanistic knowledge that can yield practical treatments for pathologies. In contrast, information about the “heritability” of a phenotype—the kind of information generated by twin studies—can never be as useful as information about the development of a phenotype, because only developmental information produces the kind of thorough understanding of a trait’s emergence that can allow for successful interventions. (Moore 2015: 286)

The Darwinian model and its assumptions are inherent in thinking about heritability and genetic causation as a whole, and are antithetical to developmental, EES-type thinking. Since hereditarianism and HBD-type thinking are neo-Darwinist, it follows that such assumptions are inherent in their beliefs and arguments.

Conclusion

Assumptions of heritability simply do not hold. Heritability, quite simply, isn’t a characteristic of traits; it is a characteristic of “relationships in a population observed in a particular setting” (Oyama, 1985/2000). Heritability estimates tell us absolutely nothing about development, nor about the causes of development. Heritability is a mere breeding statistic and tells us nothing at all about whether genes are “causal” for the trait in question (Robette, Genin, and Clerget-Darpoux, 2022). It is key to understand that heritability, along with the so-called “missing heritability,” is based on reductive models of genetics that just do not hold, especially given newer knowledge from systems biology (eg, Noble, 2012).

The assumption that heritability estimates tell us anything useful about genetics, traits, and causes, along with a reductive belief in genetic causation for the ontogeny of traits, has wasted millions of dollars. Now we need to grapple with the fact that heritability just doesn’t tell us anything about genetic causes of traits; rather, genes are necessary, not sufficient, causes of traits, because no genes (along with other developmental resources) means no organism. Also coming from twin, family, and adoption studies are Turkheimer’s (2000) so-called “laws of behavioral genetics.” Further, the falsity of the EEA (equal environments assumption) is paramount here; since the EEA is false, genetic conclusions from such studies are invalid (Joseph et al, 2015). There is also the fact that heritability is based on a false biological model: its “conceptual model is unsound and the goal of heritability studies is biologically nonsensical given what we now know about the way genes work” (Burt and Simons, 2015: 107). What Richardson (2022) terms “the agricultural model of heritability” is known to be false. In fact, the heritability of “IQ” is higher than any heritability found in the animal kingdom (Schonemann, 1997). Why this doesn’t give any researcher pause is beyond me.

Nonetheless, the Darwinian assumptions inherent in behavioral genetic, HBD “race realist” thinking are false. And the fact of the matter is, increasing the sample size of molecular genetic studies will only increase the chances of spurious correlations and of picking up population stratification. So using heritability to show genetic and environmental causes has been a bust ever since Jensen revived the race and IQ debate in 1969; the subsequent responses to Jensen’s argument made the 1970s a decade in which numerous arguments were mounted against the concept of heritability (eg, Layzer, 1974).

It has also been pointed out to racial hereditarians for literally decades that heritability is a flawed metric (Layzer, 1974; Taylor, 1980; Bailey, 1997; Schonemann, 1997; Guo, 2000; Moore, 2002; Rose, 2006; Schneider, 2007; Charney, 2012, 2013; Panofsky, 2014; Burt and Simons, 2015; Joseph et al, 2015; Moore and Shenk, 2016; Panofsky, 2016; Richardson, 2017; Lerner, 2018). These issues—among many more—lead Lerner to conclude:

However, the theory and research discussed across this chapter and previous ones afford the conclusion that no psychological attribute is pre-organized in the genes and unavailable to environmental influence. That is, any alleged genetic difference (or “inferiority”) of African Americans based on the high heritability of intelligence would seem to be an attribution built on a misunderstanding of concepts basic to an appropriate conceptualization of the nature–nurture controversy. An appreciation of the coaction of genes and context—of genes↔context relations—within the relational developmental system, and of the meaning, implications, and limitations of the heritability concept, should lead to the conclusion that the genetic-differences hypothesis of racial differences in IQ makes no scientific sense. (Lerner, 2018: 636)

That heritability doesn’t address mechanisms and obscures how genes actually operate in development, along with being inherently reductionist, means that there is little to no utility of heritability for humans. Heritability analyses attempt to reduce complex, non-additive, non-linear biological systems to their component parts (Rose, 2006), making heritability, again, inherently reductionist. We have to analyze causes, not variances (Lewontin, 1974), which heritability cannot do. So it’s very obvious that the hereditarian programme revived by Jensen (1969)—and based on twin studies first undertaken in the 1920s—rests on a seriously flawed model of genes and how they work. But, of course, hereditarians have an ideological agenda to uphold, which is why they continue to pursue “heritability” in order to “prove” that, “in part,” racial differences in many socio-behavioral traits—IQ included—are due to genes. But this type of argumentation quite clearly fails.

The fact of the matter is, “there are very good reasons to believe gene variations are at best irrelevant to common disorders and at worst a distraction from the social and political roots of major public health problems generally and of their unequal distribution in particular” (Chaufan and Joseph 2013: 284). (Also see Joseph’s (2015) The Trouble with Twin Studies for more argumentation against the use of heritability and its inflation due to false assumptions, along with arguments against “missing heritability.”) In fact, claims of “missing heritability” rest on “genetic determinist beliefs, a reliance on twin research, the use of heritability estimates, and the failure to seriously consider the possibility that presumed genes do not exist” (Joseph, 2012). Although it has been claimed that so-called rare variants explain the “missing heritability” (Genin, 2020), this is nothing but cope. So the heritability was never missing; it never existed at all.

Cope’s (Deperet’s) Rule, Evolutionary Passiveness, and Alternative Explanations

4450 words

Introduction

Cope’s rule is an evolutionary hypothesis which suggests that, over geological time, species have a tendency to increase in body size. (Although it has been proposed that Cope’s rule be renamed Deperet’s rule, since Cope didn’t explicitly state the hypothesis while Deperet did; Bokma et al, 2015.) Named after Edward Drinker Cope, it proposes that, on average, through the process of “natural selection” species have a tendency to get larger, and so it implies a directionality to evolution (Hone and Benton, 2005; Liow and Taylor, 2019). There are a few explanations for the so-called rule: either it’s due to passive or driven evolution (McShea, 1994; Gould, 1996; Raia et al, 2012) or it’s due to methodological artifacts (Solow and Wang, 2008; Monroe and Bokma, 2010).

However, Cope’s rule has been subject to debate and scrutiny in paleontology and evolutionary biology. The interpretation of Cope’s rule hinges on how “body size” is interpreted (mass or length), along with alternative explanations. I will trace the history of Cope’s rule, discuss studies in which this directionality was supposedly shown empirically, and discuss methodological issues. I propose alternative explanations that don’t rely on the claim that evolution is “progressive” or “driven.” I will also show that developmental plasticity throws a wrench in this claim, too. I will then end with a constructive dilemma argument showing that either Cope’s rule is a methodological artifact, or it’s due to passive evolution, since it’s not a driven trend as progressionists claim.

How developmental plasticity refutes the concept of “more evolved”

In my last article on this issue, I showed the logical fallacies inherent in the argument PumpkinPerson uses—it affirms the consequent, assuming it’s true leads to a logical contradiction, and of course reading phylogenies in the way he does just isn’t valid.

If the claim “more speciation events within a given taxon = more evolution” were valid, then we would consistently observe a direct correlation between the number of speciation events and the extent of evolutionary change in all cases. But we don’t, since evolutionary rates vary and other factors influence evolution, so the claim isn’t universally valid.

Take these specific examples: The horseshoe crab has a lineage going back hundreds of millions of years with few speciation events, yet it has undergone evolutionary changes. Conversely, microorganisms can undergo many speciation events with relatively minor genetic change. Consider the genetic and phenotypic diversity of the cichlid fishes (fishes that have undergone rapid evolutionary change and speciation): the diversity between them doesn’t solely depend on speciation events, since factors like ecological niche partitioning and sexual selection also play a role in why they differ even though they are relatively young species (a specific claim that Herculano-Houzel made in her 2016 book The Human Advantage). Lastly, human evolution has relatively few speciation events, but the extent of evolutionary change in our species is vast. Speciation events are of course crucial to evolution. But if one reads too much into the abstractness of the evolutionary tree, then one will not read it correctly. The position of the terminal nodes is meaningless.

It’s important to realize that evolution just isn’t morphological change which then leads to the creation of a new species (this is macro-evolution); there is also micro-evolution. Species that underwent evolutionary change without speciation include peppered moths (industrial melanism), bacteria that acquired antibiotic resistance, humans with lactase persistence, and Darwin’s finches. These are quite clearly evolutionary changes, and they’re due to microevolutionary processes.

Developmental plasticity directly refutes the contention of “more evolved,” since individuals within a species can exhibit significant trait variation without speciation events. This isn’t captured by phylogenies: they’re typically modeled on genetic data and don’t capture developmental differences that arise due to environmental factors during development. (See West-Eberhard’s outstanding Developmental Plasticity and Evolution for more on how, in many cases, development precedes genetic change, meaning the inference can be drawn that genes aren’t leaders in evolution; they’re mere followers.)

If “more evolved” is solely determined by the number of speciation events (branches) in a phylogeny, then species that exhibit greater developmental plasticity should be considered “more evolved.” But it is empirically observed that some species exhibit significant developmental plasticity which allows them to rapidly change their traits during development in response to environmental variation without undergoing speciation. So since the species with more developmental plasticity aren’t considered “more evolved” on that criterion, the assumption that “more evolved” is determined by speciation events is invalid. So the concept of “more evolved” as determined by speciation events or branches isn’t valid, since it isn’t supported when considering the significant role of developmental plasticity in adaptation.

There is anagenesis and cladogenesis. Anagenesis is the creation of a species without a branching of the ancestral species. Cladogenesis is the formation of a new species by evolutionary divergence from an ancestral form. In anagenesis, evolutionary changes within a lineage mean that the changed organism replaces the older form. So anagenesis shows that a species can slowly change and become a new species without there being a branching event. Horse, human, elephant, and bird evolution are examples of this.

Nonetheless, developmental plasticity can lead to anagenesis. Developmental, or phenotypic, plasticity is the ability of an organism to produce different phenotypes from the same genotype based on environmental cues that occur during development. Developmental plasticity can facilitate anagenesis, and since developmental plasticity is ubiquitous in the development of not only individuals but species as a whole, it is the rule and not the exception.

Directed mutation and evolution

Back in March, I wrote on the existence of directed mutations. Directed mutation directly speaks against the concept of “more evolved.” Here’s the argument:

(1) If directed mutations play a crucial role in helping organisms adapt to changing environments, then the notion of “more evolved” as a linear hierarchy is invalid.
(2) Directed mutations are known to occur and contribute to a species’ survivability in an environment undergoing change during development (the concept of evolvability is apt here).
(C) So the concept of “more evolved” as a linear hierarchy is invalid.

A directed mutation is a mutation that occurs due to environmental instability which helps an organism survive in the environment that changed while the individual was developing. Two mechanisms of DM are transcriptional activation (TA) and supercoiling. TAs can cause changes to single-stranded DNA, and can also cause supercoiling (the over- or under-winding of the DNA helix). TA can be caused by derepression (a mechanism that occurs due to the absence of some repressor molecule) or induction (the activation of an inactive gene which then gets transcribed). So these are examples of how nonrandom (directed) mutation and evolution can occur (Wright, 2000). Such changes are possible through the plasticity of phenotypes during development and ultimately are due to developmental plasticity. These stress-directed mutations can be seen as quasi-Lamarckian (Koonin and Wolf, 2009). It’s quite clear that directed mutations are real and have been empirically demonstrated.

DMs, along with developmental plasticity and evo-devo as a whole refute the simplistic thinking of “more evolved.”

Now here is the argument that PP is using, and why it’s false:

(1) More branches on a phylogeny indicate more speciation events.
(2) More speciation events imply a higher level of evolutionary advancement.
(C) Thus, more branches on a phylogeny indicate a higher level of evolutionary advancement.

The false premise is (2) since it suggests that more speciation events imply a higher level of evolutionary advancement. It implies a goal-directed aspect to evolution, where the generation of more species is equated with evolutionary progress. It’s just reducing evolution to linear advancement and progress; it’s a teleological bent on evolution (which isn’t inherently bad if argued for correctly, see Noble and Noble, 2022). But using mere branching events on a phylogeny to assume that more branches = more speciation = more evolved is simplistic thinking that doesn’t make sense.

If evolution encompasses changes in an organism’s phenotype, then changes in an organism’s phenotype, even without changing its genes, are considered examples of evolution. Evolution encompasses changes in an organism’s phenotype, so changes in an organism’s phenotype even without changes in genes are considered examples of evolution. There is nongenetic “soft inheritance” (see Bonduriansky and Day, 2018).

Organisms can exhibit similar traits due to convergent evolution. So it’s not valid to assume a direct and strong correlation between an organism’s position on a phylogeny and its degree of resemblance to a common ancestor.

Dolphins and ichthyosaurs share similar traits, but dolphins are mammals while ichthyosaurs are reptiles that lived millions of years ago. Their convergent morphology demonstrates that common ancestry doesn’t determine resemblance. The Tasmanian wolf (thylacine) and the grey wolf independently evolved similar body plans and ecological roles; despite different genetics and evolutionary histories, they share a physical resemblance due to similar ecological niches. The last common ancestor of bats and birds didn’t have wings, yet both lineages have wings, showing that the trait emerged twice independently. These examples show that the degree of resemblance to a common ancestor is not determined by an organism’s position on a phylogeny.

Now, there is a correlation between body size and branches (splits) on a phylogeny (Cope’s rule), and I will explain that later. That there is a correlation doesn’t mean there is a linear progression, and it doesn’t imply one. Back in 2017 I used the example of floresiensis, and that holds here too. And Terrance Deacon’s (1990) work suggests that pseudoprogressive trends in brain size can be explained by bigger whole organisms being selected—this is important because the whole animal is selected, not any one of its individual parts. The correlation isn’t indicative of a linear progression up some evolutionary ladder, either: it’s merely a byproduct of selection on larger whole animals (the only things that are selected).

I will argue that it is this remarkable parallelism, and not some progressive selection for increasing intelligence, that is responsible for many pseudoprogressive trends in mammalian brain evolution. Larger whole animals were being selected—not just larger brains—but along with the correlated brain enlargement in each lineage a multitude of parallel secondary internal adaptations followed. (Deacon, 1990)

Nonetheless, the claim here is one from DST—the whole organism is selected, and so, obviously, is its body plan (bauplan). The last two havens for the progressionist are the realms of brain size and body size. Deacon refuted the selection-for brain size claim, so we’re now left with body size.

Does the evolution of body size lend credence to claims of driven, progressive evolution?

The tendency for bodies to grow larger and larger over evolutionary time is something of a truism. Since small bacteria eventually evolved into larger (see Gould’s modal bacter argument), more complex multicellular organisms, this must mean that evolution is progressive and driven, at least for body size, right? Wrong. I will argue here, using a constructive dilemma, that either evolution is passive and that’s what explains the evolution of body size increases, or the apparent trend is due to methodological flaws in how body size is measured (length or mass).

In Full House, Gould (1996) argued that the evolution of body size isn’t driven, but that it is passive, namely that it is evolution away from smaller size. Nonetheless, it seems that Cope’s (Deperet’s) rule is due to cladogenesis (the emergence of new species), not selection for body size per se (Bokma et al, 2015).

Given these three conditions, we note an increase in size of the largest species only because founding species start at the left wall, and the range of size can therefore expand in only one direction. Size of the most common species (the modal decade) never changes, and descendants show no bias for arising at larger sizes than ancestors. But, during each act, the range of size expands in the only open direction by increase in the total number of species, a few of which (and only a few) become larger (while none can penetrate the left wall and get smaller). We can say only this for Cope’s Rule: in cases with boundary conditions like the three listed above, extreme achievements in body size will move away from initial values near walls. Size increase, in other words, is really random evolution away from small size, not directed evolution toward large size. (Gould, 1996)

Dinosaurs were some of the largest animals to ever live. So we might say that there is a drivenness in their bodies to become larger and larger, right? Wrong. The evolution of body size in dinosaurs is passive, not driven (progressive) (Sookias, Butler, and Benson, 2012). Gould (1996) also showed passive trends in body size in plankton and forams. He also cited Stanley (1973) who argued that groups starting at the left wall of minimum complexity will increase in mean size as a consequence of randomness, not any driven tendency for larger body size.

In other, more legitimate cases, increases in means or extremes occur, as in our story of planktonic forams, because lineages started near the left wall of a potential range in size and then filled available space as the number of species increased—in other words, a drift of means or extremes away from a small size, rather than directed evolution of lineages toward large size (and remember that such a drift can occur within a regime of random change in size for each individual lineage—the “drunkard’s walk” model).

In 1973, my colleague Steven Stanley of Johns Hopkins University published a marvelous, and now celebrated, paper to advance this important argument. He showed (see Figure 27, taken from his work) that groups beginning at small size, and constrained by a left wall near this starting point, will increase in mean or extreme size under a regime of random evolution within each species. He also advocated that we test his idea by looking for right-skewed distributions of size within entire systems, rather than by tracking mean or extreme values that falsely abstract such systems as single numbers. In a 1985 paper I suggested that we speak of “Stanley’s Rule” when such an increase of means or extremes can best be explained by undirected evolution away from a starting point near a left wall. I would venture to guess (in fact I would wager substantial money on the proposition) that a large majority of lineages showing increase of body size for mean or extreme values (Cope’s Rule in the broad sense) will properly be explained by Stanley’s Rule of random evolution away from small size rather than by the conventional account of directed evolution toward selectively advantageous large size. (Gould, 1996)

Gould (1996) also discusses the results of McShea’s study, writing:

Passive trends (see Figure 33) conform to the unfamiliar model, championed for complexity in this book, of overall results arising as incidental consequences, with no favored direction for individual species. (McShea calls such a trend passive because no driver conducts any species along a preferred pathway. The general trend will arise even when the evolution of each individual species conforms to a “drunkard’s walk” of random motion.) For passive trends in complexity, McShea proposes the same set of constraints that I have advocated throughout this book: ancestral beginnings at a left wall of minimal complexity, with only one direction open to novelty in subsequent evolution.
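The passive, left-wall trend that Gould and McShea describe is easy to simulate. The following is a rough sketch under arbitrary assumptions of my own (step size, a wall at zero, the number of lineages): every lineage takes an unbiased random walk in log body size, clamped at a minimum-size wall, yet the mean and maximum still drift upward while the mode stays near the wall.

```python
import random

random.seed(0)

WALL = 0.0        # minimum viable (log) body size
STEPS = 2_000     # generations
LINEAGES = 500

# Every lineage starts at the wall and takes an unbiased random walk;
# no step favors larger size, but sizes cannot go below the wall.
sizes = [WALL] * LINEAGES
for _ in range(STEPS):
    sizes = [max(WALL, s + random.gauss(0, 0.1)) for s in sizes]

sizes.sort()
mean_size = sum(sizes) / LINEAGES
median_size = sizes[LINEAGES // 2]
max_size = sizes[-1]
near_wall = sum(s < 1.0 for s in sizes) / LINEAGES

print(f"mean={mean_size:.2f}  median={median_size:.2f}  "
      f"max={max_size:.2f}  fraction near wall={near_wall:.2f}")
```

Even though no individual step is biased toward large size, the resulting distribution is right-skewed, with the mean above the median, the maximum far above both, and a substantial fraction of lineages still sitting near the wall—“random evolution away from small size, not directed evolution toward large size.”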

But Baker et al (2015) claim that body size is an example of driven evolution. However, they did not model cladogenetic factors, which calls their conclusion into question, and I don’t think their claim follows. If a taxon possesses a potential size range and the ancestral size approaches the lower limit of this range, there will be a passive inclination for descendants to exceed the size of their ancestors. The taxa in question possess a potential size range, and the ancestral sizes sit at the lower end of that range. So there will be a passive tendency for descendants to be larger than their predecessors.

Here’s an argument that concludes that evolution is passive and not driven. I will then give examples of P2.

(1) Extant animals that are descended from more nodes on an evolutionary tree tend to be bigger than animals descended from fewer nodes (your initial premise).
(2) There exist cases where extant animals descended from fewer nodes are larger or more complex than those descended from more nodes (counterexamples: whales are descended from fewer nodes while having some of the largest body sizes in the world, while bats are descended from more nodes while having comparatively much smaller body sizes).
(C1) Thus, either P1 doesn’t consistently hold (not all extant animals descended from more nodes are larger), or it is not a reliable rule (given the counters).
(3) If P1 does not consistently hold true (not all extant animals descended from more nodes are larger), then it is not a reliable rule.
(4) P1 does not consistently hold true.
(C2) P1 is not a reliable rule.
(5) If P1 is not a reliable rule (given the existence of counterexamples), then it is not a valid generalization.
(6) P1 is not a reliable rule.
(C3) So P1 is not a valid generalization.
(7) If P1 isn’t a valid generalization in the context of evolutionary biology, then there must be exceptions to this observed trend.
(8) The existence of passive evolution, as suggested by the inconsistencies in P1, implies that the trends aren’t driven by progressive forces.
(C4) Thus, the presence of passive evolution and exceptions to P1’s trend challenge the notion of a universally progressive model of evolution.
(9) If the presence of passive evolution and exceptions to P1’s trend challenge the notion of a universally progressive model of evolution, then that notion isn’t supported by the evidence.
(10) The presence of passive evolution and exceptions to P1’s trend challenge the notion of a universally progressive model of evolution.
(C5) So the notion of a universally progressive model of evolution isn’t supported by the evidence.

(1) Bluefin tuna are known to have a potential range of sizes, with some being small and others being massive (think of televised tuna fishing and the enormous range of sizes, in both length and mass, that fishermen catch). So imagine a population of bluefin tuna where the ancestral size is found to be close to the lower end of their size range. P2 is satisfied because bluefin tuna have a potential size range, and the ancestral size was relatively small in comparison to the maximum size of the tuna.

(2) African elephants in some parts of Africa are small due to ecological constraints and hunting pressures, and these smaller-sized ancestors are close to the lower limit of the potential size range of African elephants. Thus there will be a passive tendency for descendants of these elephants to be larger than their smaller-sized ancestors over time.

(3) Consider Galapagos tortoises, which are also known for their large variation in size among the different species and populations on the Galapagos islands. So consider a case of Galapagos tortoises who have smaller body sizes due to resource conditions or the conditions of their ecologies. In this case, the ancestral size of these tortoises is close to the lower limit of their potential size range. Therefore, we can expect a passive tendency for descendants of these tortoises to evolve larger body sizes.

Further, in Stanley’s (1973) study of Cope’s rule from fossil rodents, he observed that body size distributions in these rodents expanded over time while the modal size stayed small. This doesn’t even touch the fact that, because there are more small than large mammals, there would be a passive tendency toward larger body sizes in mammals. Nor does it touch the methodological issue of how body size is determined for the rule—mass or length? Nonetheless, Monroe and Bokma’s (2010) study showed that while there is a tendency for species to be larger than their ancestors, it was a mere 0.5 percent difference. So the increase in body size is explained by an increase in variance in body size (passiveness), not drivenness.

Explaining the rule

I think there are two explanations: Either a methodological artifact or passive evolution. I will discuss both, and I will then give a constructive dilemma argument that articulates this position.

Monroe and Bokma (2010) showed that even when Cope’s rule is assumed, the ancestor-descendant increase in body size showed a mere .4 percent increase. They further discussed methodological issues with the so-called rule, citing Solow and Wang (2008) who showed that Cope’s rule “appears” based on what assumptions of body size are used. For example, Monroe and Bokma (2010) write:

If Cope’s rule is interpreted as an increase in the mean size of lineages, it is for example possible that body mass suggests Cope’s rule whereas body length does not. If Cope’s rule is instead interpreted as an increase in the median body size of a lineage, its validity may depend on the number of speciation events separating an ancestor-descendant pair.

If size increase were a general property of evolutionary lineages – as Cope’s rule suggests – then even if its effect were only moderate, 120 years of research would probably have yielded more convincing and widespread evidence than we have seen so far.

Gould (1997) suggested that Cope’s rule is a mere psychological artifact. But I think it’s deeper than that. Now I will provide my constructive dilemma argument, now that I have ruled out body size being due to progressive, driven evolution.

The form of constructive dilemma goes: (1) A V B. (2) If A, then C. (3) If B, then D. (C) C V D. P1 is a disjunction: there are two possible options, A and B. P2 and P3 are conditional statements that provide the implications of each option. And the conclusion states that at least one of the implications has to be true (C or D).
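The validity of this form can be checked mechanically by enumerating the truth table. Here is a quick sketch in Python (the function name is my own):

```python
from itertools import product

def constructive_dilemma_valid():
    """Check by truth table that {A or B, A -> C, B -> D} entails C or D."""
    for a, b, c, d in product([True, False], repeat=4):
        premises = (a or b) and ((not a) or c) and ((not b) or d)
        if premises and not (c or d):
            return False  # a counterexample row was found
    return True  # no row makes all premises true and the conclusion false

print(constructive_dilemma_valid())
```

Since no assignment makes all three premises true while C V D is false, the form is valid; what’s at issue in any application of it is only the truth of the premises.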

Now, Gould’s Full House argument can be formulated either using modus tollens or constructive dilemma:

(1) If evolution were a deterministic, teleological process, there would be a clear overall progression and a predetermined endpoint. (2) There is no predetermined endpoint or progression to evolution. (C) So evolution isn’t a deterministic or teleological process.

(1) Either evolution is a deterministic, teleological process (A) or it’s not (B). (2) If A, then there would be a clear overall direction and predetermined endpoint. (3) If B, then there is no overall direction or predetermined endpoint. (4) There is no clear overall direction or predetermined endpoint in evolution. (5) So not A. (C) Therefore, B.

Or: (1) Life began at a relatively simple state (the left wall of complexity). (2) Evolution is influenced by a combination of chance events, environmental factors, and genetic variation. (3) Organisms may stumble in various directions along the path of evolution. (C) Evolution lacks a clear path or predetermined endpoint.

Now here is the overall argument combining the methodological issues pointed out by Solow and Wang and the implications of passive evolution, combined with Gould’s Full House argument:

(1) Either Cope’s rule is a methodological artifact (A), or it’s due to passive, not driven evolution (B). (2) If Cope’s rule is a methodological artifact (A), then different ways to measure body size (length or mass) can come to different conclusions. (3) If Cope’s rule is due to passive, not driven evolution (B), then it implies that larger body sizes simply accumulate over time without being actively driven by selective pressures. (4) Either evolution is a deterministic, teleological process (C), or it is not (D). (5) If C, then there would be a clear overall direction and predetermined endpoint in evolution (Gould’s argument). (6) If D, then there is no clear overall direction or predetermined endpoint in evolution (Gould’s argument). (7) Therefore, either there is a clear overall direction (C) or there isn’t (D) (Constructive Dilemma). (8) If there is a clear overall direction (C) in evolution, then it contradicts passive, not driven evolution (B). (9) If there isn’t a clear overall direction (D) in evolution, then it supports passive, not driven evolution (B). (10) Therefore, either Cope’s rule is due to passive evolution or it’s a methodological artifact.

Conclusion

Evolution is quite clearly passive and non-driven (Bonner, 2013). The fact of the matter is, as I’ve shown, evolution isn’t driven (progressive); it is passive, due to the drunken, random walk that organisms take from the minimum left wall of complexity. The discussions of developmental plasticity and directed mutation further show that evolution can’t be progressive or driven. Organismal body plans had nowhere to go but up from the left wall of minimal complexity, and that means the increase in the variance of, say, body size is due to passive trends. Given the discussion here, we can draw one main inference: since evolution isn’t directed or progressive, the so-called Cope’s (Deperet’s) rule is either due to passive trends or it is a mere methodological artifact. The argument I have mounted for that claim is sound, and so it must be accepted that evolution is a random, drunken walk, not one of overall drivenness and progress; we must therefore look at the evolution of body size in this way.

Rushton tried to use the concept of evolutionary progress to argue that some races may be “more evolved” than other races, like “Mongoloids” being “more evolved” than “Caucasoids” who are “more evolved” than “Negroids.” But Rushton’s “theory” was merely a racist one, and it obviously fails upon close inspection. Moreover, even the claims Rushton made at the end of his book Race, Evolution, and Behavior don’t even work. (See here.) Evolution isn’t progressive so we can’t logically state that one population group is more “advanced” or “evolved” than another. This is of course merely Rushton being racist with shoddy “explanations” used to justify it. (Like in Rushton’s long-refuted r/K selection theory or Differential-K theory, where more “K-evolved” races are “more advanced” than others.)

Lastly, this argument I constructed based on the principles of Gould’s argument shows that there is no progress to evolution.

P1 The claim that evolutionary “progress” is real and not illusory can be justified only if organisms deemed more “advanced” outnumber “lesser” organisms.
P2 There are more “lesser” organisms (bacteria/insects) on earth than “advanced” organisms (mammals/species of mammals).
C Therefore evolutionary “progress” is illusory.

The Theory of African American Offending versus Hereditarian Explanations of Crime: Exploring the Roots of the Black-White Crime Disparity

3450 words

Why do blacks commit more crime? Biological theories (racial differences in testosterone and the testosterone-aggression link, the AR gene, MAOA) are bunk. So how can we explain it? The Unnever-Gabbidon theory of African American offending (TAAO) (Unnever and Gabbidon, 2011)—on which blacks' experience of racial discrimination and stereotyping increases criminal offending—has substantial empirical support. To understand black crime, we need to understand the unique black American experience. The theory not only explains African American criminal offending, it also makes predictions which were borne out in independent, empirical research. I will compare the TAAO with hereditarian claims about why blacks commit more crime (higher testosterone and testosterone-driven aggression, the AR gene, and MAOA). I will show that hereditarian theories make no novel predictions while the TAAO does, and I will then discuss recent research which has borne out the predictions Unnever and Gabbidon made. I will conclude by offering suggestions on how to combat black crime.

The folly of hereditarianism in explaining black American offending

Hereditarians have three main explanations of black crime: (1) higher levels of testosterone, with testosterone driving aggressive behavior which then leads to crime; (2) low-activity MAOA—known in the popular press as "the warrior gene"—could be more prevalent in some populations, which would then lead to more aggressive, impulsive behavior; and (3) the AR gene and AR-CAG repeats, with lower CAG repeat numbers being associated with higher rates of criminal activity.

When it comes to (1), the evidence is mixed on which race has higher levels of testosterone (owing to the low-quality studies that hereditarians cite for their claim). In fact, two recent studies showed that non-Hispanic blacks didn't have higher levels of testosterone than other races (Rohrmann et al, 2007; Lopez et al, 2013). Contrast this with the classical hereditarian response that blacks indeed do have higher levels of testosterone than whites (Rushton, 1995)—using Ross et al (1986) to make the claim. (See here for my response on why Ross et al is not evidence for the hereditarian position.) Although Nyante et al (2012) showed a small increase in testosterone in blacks compared to whites and Mexican Americans using longitudinal data, the body of evidence shows that there are small to no differences in testosterone between blacks and whites (Richard et al, 2014). So despite claims that "African-American men have repeatedly demonstrated serum total and free testosterone levels that are significantly higher than all other ethnic groups" (Alvarado, 2013: 125), such claims are derived from flawed studies, and newer, more representative analyses show small to no differences in testosterone between blacks and whites.

Nevertheless, even if blacks had higher levels of testosterone than other races, this would still not explain racial differences in crime, since heightened aggression explains testosterone increases; high testosterone doesn't explain heightened aggression. HBDers have cause and effect backwards in this relationship. Injecting individuals with supraphysiological doses of testosterone as high as 200 and 600 mg per week does not cause heightened anger or aggression (Tricker et al, 1996; O'Connor et al, 2002). If the hereditarian hypothesis about the relationship between testosterone and aggression were true, then we would see the opposite of what Tricker et al and O'Connor et al found. This discussion thus shows that hereditarians are wrong about racial differences in testosterone and wrong about causality in the T-aggression relationship. (The actual relationship is aggression causing increases in testosterone.) So this argument shows that the hereditarian story about the T-aggression relationship is false. (But see Pope, Kouri, and Hudson, 2000, where a 600 mg dose of testosterone caused increased manic symptoms in some men, although in most men there was little to no change; there were 8 "responders" and 42 "non-responders.")

When it comes to (2), low-activity MAOA is said to explain why those who carry this version of the gene have higher rates of aggression and violent behavior (Sohrabi, 2015; McSwiggin, 2017). Sohrabi shows that while the low-activity version of MAOA is related to higher rates of aggression and violent behavior, the relationship is mediated by environmental effects. And MAOA, to quote Heine (2017), can be seen as the "everything but the kitchen sink gene", since MAOA is correlated with so many different things. At the end of the day, we can't blame "warrior genes" for violent, criminal behavior. The relationship isn't so simple, so this doesn't work for hereditarians either.

Lastly, when it comes to (3): due to the failure of (1), hereditarians tried looking to the AR gene, attempting to relate CAG repeat length to criminal behaviors. For instance, Geniole et al (2019) tried to argue that "Testosterone thus appears to promote human aggression through an AR-related mechanism." Ah, the last gasps of explaining crime through testosterone. But CAG repeat number shows no relationship with adolescent risk-taking, depression, dominance, or self-esteem (Vermeer, 2010), and the hypothesized relationships fail for CAG repeat numbers in men and women (Valenzuela et al, 2022). So this, too, fails. (Also take a look at the just-so story on why African slave descendants are supposedly more sensitive to androgens; Aiken, 2011.)

Now that I have shown that the three main hereditarian explanations for higher black crime fail, I will show why blacks have higher rates of criminal offending than other races; the answer isn't to be found in biology, but in sociology and criminology.

The Unnever-Gabbidon theory of African American criminal offending and novel predictions

In 2011, criminologists Unnever and Gabbidon published their book A Theory of African American Offending: Race, Racism, and Crime. In the book, they explain why they formulated the theory and why it doesn’t have any explanatory or predictive power for other races. That’s because it centers on the lived experiences of black Americans. In fact, the TAAO “incorporates the finding that African Americans are more likely to offend if they associate with delinquent peers but we argue that their inadequate reinforcement for engaging in conventional behaviors is related to their racial subordination” (Unnever and Gabbidon, 2011: 34). The TAAO focuses on the criminogenic effects of racism.

Our work builds upon the fundamental assumption made by Afrocentists that an understanding of black offending can only be attained if their behavior is situated within the lived experiences of being African American in a conflicted, racially stratified society. We assert that any criminological theory that aims to explain black offending must place the black experience and their unique worldview at the core of its foundation. Our theory places the history and lived experiences of African American people at its center. We also fully embrace the Afrocentric assumption that African American offending is related to racial subordination. Thus, our work does not attempt to create a “general” theory of crime that applies to every American; instead, our theory explains how the unique experiences and worldview of blacks in America are related to their offending. In short, our theory draws on the strengths of both Afrocentricity and the Eurocentric canon. (Unnever and Gabbidon, 2011: 37)

Two kinds of racial injustices highlighted by the theory—racial discrimination and pejorative stereotyping—have empirical support. Blacks are more likely to express anger, exhibit low self-control, and become depressed if they believe the racist stereotype that they're violent. It's also been studied whether a sense of racial injustice is related to offending when controlling for low self-control (see below).

The core predictions of the TAAO and how they follow from it with references for empirical tests are as follows:

(Prediction 1) Black Americans with a stronger sense of racial identity are less likely to engage in criminal behavior than black Americans with a weak sense of racial identity. How does this prediction follow from the theory? TAAO suggests that a strong racial identity can act as a protective factor against criminal involvement. Those with a stronger sense of racial identity may be less likely to engage in criminal behavior as a way to cope with racial discrimination and societal marginalization. (Burt, Simons, and Gibbons, 2013; Burt, Lei, and Simons, 2017; Gaston and Doherty, 2018; Scott and Seal, 2019)

(Prediction 2) Experiencing racial discrimination increases the likelihood of black Americans engaging in criminal actions. How does this follow from the theory? TAAO posits that racial discrimination can lead to feelings of frustration and marginalization, and to cope with these stressors, some individuals may resort to committing criminal acts as a way to exert power or control in response to their experiences of racial discrimination. (Unnever, 2014; Unnever, Cullen, and Barnes, 2016; Herda, 2016, 2018; Scott and Seal, 2019)

(Prediction 3) Black Americans who feel socially marginalized and disadvantaged are more prone to committing crime as a coping mechanism and have weakened school bonds. How does this follow from the theory? TAAO suggests that those who experience social exclusion and disadvantage may turn to crime as a way to address their negative life circumstances and regain feelings of agency. (Unnever, 2014; Unnever, Cullen, and Barnes, 2016)

The data show that there is a racialized worldview shared by blacks, and that a majority of blacks believe that their fate rests on what generally happens to black people in America. Around 38 percent of blacks report being discriminated against, and most blacks are aware of the stereotype of them as violent. (Though a new Pew report states that around 8 in 10—about 80 percent—of blacks have experienced racial discrimination.) Racial discrimination and belief in the racist stereotype that blacks are more violent are significant predictors of black arrests: the more blacks are discriminated against and the more they believe that blacks are violent, the more likely they are to be arrested. Unnever and Gabbidon also theorized that the aforementioned isn't just related to criminal offending but also to substance and alcohol abuse, and they hypothesized that racial injustices are related to crime since they increase the likelihood of experiencing negative emotions like anger and depression (Simons et al, 2002). It's been experimentally demonstrated that blacks who perceive racial discrimination and who believe the racist stereotype that blacks are more violent express less self-control. The negative emotions from racial discrimination predict the likelihood of committing crime and similar behavior, and blacks who have less self-control, who are angrier, and who are depressed have a higher likelihood of offending. Further, while controlling for self-control, anger, depression, and other variables, racial discrimination predicts arrests and substance and alcohol abuse. Lastly, the experience of being black in a racialized society predicts offending even after controlling for other measures. Thus, it is ruled out that the reason blacks are arrested more and perceive more racial injustice is low self-control. (See Unnever, 2014 for the citations and arguments for these predictions.)
The TAAO also has more empirical support than racialized general strain theory (RGST) (Isom, 2015).

So the predictions of the theory are: racial discrimination is a contributing factor; a strong racial identity is a protective factor while a weak racial identity is associated with a higher likelihood of engaging in criminal activity; blacks who feel socially marginalized turn to crime as a response to their disadvantaged social position; poverty, education, and neighborhood conditions play a significant role in black American offending rates, and these factors interact with racial identity and discrimination to influence criminal behavior; and lastly, the criminal justice system's response to black American offenders could be influenced by their racial identity and social perceptions, potentially leading to disparities in treatment compared to other racial groups.

Ultimately, the unique experiences of black Americans explain why they commit more crime. Thus, given the unique experiences of black Americans, there needs to be a race-centric theory of crime for black Americans, and this is exactly what the TAAO is. The predictions that Unnever and Gabbidon (2011) made from the TAAO have independent empirical support. This is way more than the hereditarian explanations can say on why blacks commit more crime.

One way, which follows from the theory, to insulate black youth from discrimination and prejudice is racial socialization, where racial socialization is "thoughts, ideas, beliefs, and attitudes regarding race and racism are communicated across generations" (Burt, Lei, & Simons, 2017; Hughes, Smith, et al., 2006; Lesane-Brown, 2006) (Said and Feldmeyer, 2022).

But also related to the racial socialization hypothesis is the question "Why don't more blacks offend?" Gaston and Doherty (2018) set out to answer this question, and found that positive racial socialization buffered the effects of weak school bonds on adolescent substance abuse and criminal offending for males but not females. This is yet another prediction from the theory that has come to pass—the fact that weak school bonds increase criminal offending.

Doherty and Gaston (2018) argue that black Americans face racial discrimination that whites in general just do not face:

Empirical studies have pointed to potential explanations of racial disparities in violent crimes, often citing that such disparities reflect Black Americans' disproportionate exposure to criminogenic risk factors. For example, Black Americans uniquely experience racial discrimination—a robust correlate of offending—that White Americans generally do not experience (Burt, Simons, & Gibbons, 2012; Caldwell, Kohn-Wood, Schmeelk-Cone, Chavous, & Zimmerman, 2004; Simons, Chen, Stewart, & Brody, 2003; Unnever, Cullen, Mathers, McClure, & Allison, 2009). Furthermore, Black Americans are more likely to face factors conducive to crime such as experiencing poor economic conditions and living in neighborhoods characterized by concentrated disadvantage.

They conclude that:

The support we found for ethnic-racial socialization as a crime-reducing factor has important implications for broader criminological theorizing and practice. Our findings show the value of race-specific theories that are grounded in the unique experiences of that group and focus on their unique risk and protective factors. African Americans have unique pathways to offending with racial discrimination being a salient source of offending. While it is beyond the scope of this study to determine whether TAAO predicts African American offending better than general theories of crime, the general support for the ethnic-racial socialization hypothesis suggests the value of theories that account for race-specific correlates of Black offending and resilience.

TAAO draws from the developmental psychology literature and contends, however, that positive ethnic-racial socialization offers resilience to the criminogenic effect of weak school bonds and is the main reason more Black Americans do not offend (Unnever & Gabbidon, 2011, p. 113, 145).

Thus, given that blacks face racial discrimination that whites in general just do not face, and given that racial discrimination has been shown to increase criminal offending, it follows that racial discrimination can lead to criminal offending; therefore, to decrease criminal offending we need to decrease racial discrimination. And since racism is due to low education and borne of ignorance, it follows that education can decrease racist attitudes and, along with them, crime (Hughes et al, 2007; Kuppens et al, 2014; Donovan, 2019, 2022).

Even partial tests of the TAAO have shown that racial discrimination is related to offending, and I would say that it is pretty well established that positive ethnic-racial socialization acts as a protective factor for blacks—this also explains why more blacks don't offend (see Gaston and Doherty, 2018). It is also known that bad (ineffective) parenting increases the risk of lower self-control (Unnever, Cullen, and Agnew, 2006). Black Americans share a racialized worldview and view the US as racist, due to their personal lived experiences with racism (Unnever, 2014).

The TAAO and situationism

Looking at what the TAAO is and the predictions it makes, we can see that the TAAO is a situationist theory. Situationism is a psychological-philosophical theory which emphasizes the influence of the situation on human behavior. It posits that people's actions and decisions are primarily shaped by the situational context they find themselves in: situational cues present in the immediate environment can trigger specific behavioral responses, so understanding the situation one is in is important for explaining why people act the way they do, and behavior is context-dependent and may vary across different situations. Although it may seem that situationism conflicts with action theory, it doesn't. Action theory explains how people form intentions and make decisions within specific situations, basically addressing the how and why of behavior. Situationism complements action theory, since it addresses the where and when of behavior from an external, environmental perspective.

The TAAO suggests that experiencing racial discrimination can contribute to criminal involvement as a response to social marginalization. Situationism can provide a framework for exploring how specific environmental stressors, discrimination, or other situational factors can trigger criminal behavior in context. So while the TAAO focuses on the historical and structural factors behind why blacks commit more crime, adding situationism could show how the situational context interacts with those factors to explain black American criminal behavior.

Thus, combining situationism and the TAAO can lead to novel predictions: predictions about how black Americans, when faced with specific discriminatory situations, may be more or less likely to engage in criminal behavior based on their perception of the situation; predictions about the influence of immediate peer dynamics in moderating the relationship between structural factors like discrimination and criminal behavior in the black American community; and predictions about how criminal responses vary with different types of situational cues—like encounters with law enforcement, experiences of discrimination, and economic stress—within the broader context of the TAAO's historical-structural framework.

Why we should accept the TAAO over hereditarian explanations of crime

Overall, I’ve explained why hereditarian explanations of crime fail. They fail because when looking at the recent literature, the claims they make just do not hold up. Most importantly, as I’ve shown, hereditarian explanations lack empirical support, and the logic they try to use in defense of them is flawed.

We should accept the TAAO over hereditarianism because of its empirical validity: the TAAO is grounded in empirical research, and its predictions and hypotheses have been subjected to empirical tests and have been found to hold. The TAAO also recognizes that crime is a complex phenomenon influenced by factors like historical and contemporary discrimination, socioeconomic conditions, and the overall situational context. It also addresses the broader societal issues related to disparities in crime, which makes it more relevant for policy development and social interventions, acknowledging that to address these disparities we must address the contemporary and historical factors which lead to crime. The TAAO also doesn't stigmatize and stereotype, while it does emphasize the situational and contextual factors which lead to criminal activity. Hereditarian theories, on the other hand, can lead to stereotypes and discrimination, and since hereditarian explanations are false, we should reject them (as I've explained above). Lastly, the TAAO has the power to generate specific, testable predictions which enjoy clear empirical support. Thus, to claim that hereditarian explanations are true while disregarding the empirical power of the TAAO is irrational, since hereditarian explanations don't generate novel predictions while the TAAO does.

Conclusion

I have contrasted the TAAO with hereditarian explanations of crime. I showed that the three main hereditarian explanations—racial differences in testosterone and testosterone-caused aggression, the AR gene, and MAOA—all fail. I have also shown that the TAAO is grounded in empirical research, and that it generates specific, testable predictions on how we can address racial differences in crime. On the other hand, hereditarian explanations lack empirical support, specificity, and causality, which makes them ill-suited for generating testable predictions and informing effective policies. The TAAO's complexity, empirical support, and potential for addressing real-world issues make it a more comprehensive framework for understanding and attempting to ameliorate racial crime disparities, in contrast to the genetic determinism of hereditarianism. In fact, I was unable to find any hereditarian response to the TAAO, which should be telling on its own.

Overall, I have shown that the predictions Unnever and Gabbidon generated from the TAAO enjoy empirical support while hereditarian explanations fail, so we should reject hereditarian explanations and accept the TAAO, due to the considerations above. I have also shown that the TAAO makes actionable policy recommendations; therefore, to decrease criminal offending, we need to educate more, since racism is borne of ignorance and education can decrease racial bias.

IQ, Achievement Tests, and Circularity

2150 words

Introduction

In the realm of educational assessment and psychometrics, a distinction between IQ and achievement tests needs to be upheld. It is claimed that IQ is a measure of one's potential learning ability, while achievement tests show what one has actually learned. However, this distinction is not strongly supported in my reading of the literature; IQ and achievement tests are merely different versions of the same evaluative tool. This is what I will argue in this article: that IQ and achievement tests are different versions of the same test, and so any attempt to "validate" IQ tests on the basis of other IQ tests, achievement tests, or job performance is circular. I will also argue that the goal of psychometrics in measuring the mind is impossible. The hereditarian defense of the claim that these tests measure some unitary, hypothetical variable therefore fails. At best, these tests show one's distance from the middle class, since that's where most of the items on the test derive from. Thus, IQ and achievement tests are different versions of the same test, and they merely show one's "distance" from a certain kind of class-specific knowledge (Richardson, 2012), due to the cultural and psychological tools one must possess to score well on these tests (Richardson, 2002).

Circular IQ-ist arguments

IQ-ists have been using IQ tests since Henry Goddard brought them to America in 1913. But one major issue (one they still haven't solved—and quite honestly never will) was that they had no way to ensure that the tests were construct valid. This is why, in 1923, Boring stated that "intelligence is what intelligence tests test", while Jensen (1972: 76) said "intelligence, by definition, is what intelligence tests measure." Such statements are circular because they don't provide real evidence or explanation.

Boring’s claim that “intelligence is what intelligence tests test” is circular since it defines intelligence based on the outcome of “intelligence tests.” So if you ask “What is intelligence“, and I say “It’s what intelligence tests measure“, I haven’t actually provided a meaningful definition of intelligence. The claim merely rests on the assumption that “intelligence tests” measure intelligence, not telling us what it actually is.

Jensen's (1972) claim that "intelligence, by definition, is what intelligence tests measure" is circular for similar reasons to Boring's, since it also defines intelligence by referring to "intelligence tests" while assuming that intelligence tests accurately measure intelligence. Neither claim provides an independent understanding of what intelligence is; each merely ties the concept of "intelligence" back to its "measurement" (by IQ tests). Jensen's (and by extension Spearman's) hypothesis on the nature of black-white differences has also been criticized as circular (Wilson, 1985). Not only were Jensen and Spearman guilty of circular reasoning, so too was Sternberg (Schlinger, 2003). Such a circular claim was also made by van der Maas, Kan, and Borsboom (2014).

But Jensen seemed to have changed his view, since in his 1998 book The g Factor he argues that we should dispense with the term "intelligence", while curiously holding that we should still study the g factor and assume an identity between IQ and g. (Jensen made many more logical errors in his defense of "general intelligence", like warning against reifying intelligence on one page and then reifying it a few pages later.) Circular arguments have been identified not only in Jensen's writings on Spearman's hypothesis, but also in using construct validity to validate a measure (Gordon, Schonemann; Guttman, 1992: 192).

The same circularity can be seen when the correlation between IQ and achievement tests is brought up. "These two tests correlate, so they're measuring the same thing" is a claim one may come across. But the error here is assuming both that mental measurement is possible and that IQ and achievement tests are independent of each other. IQ and achievement tests are different versions of the same test. This is an example of circular validation, which occurs when a test's "validity" is established by the test itself, leading to a self-reinforcing loop.

IQ tests are often validated against older editions of the same test. For example, a newer version of the S-B would be "validated" against the older version it was created to replace (Howe, 1997: 18; Richardson, 2002: 301), which not only leads to circular "validation" but also carries forward the assumptions of the older test constructors (like Terman): since Terman assumed men and women should be equal in IQ, that assumption is still built into the test today. IQ tests are also often "validated" by comparing IQ test results to outcomes like job performance and academic performance. Richardson and Norgate (2015) have a critical review of the correlation between IQ and job performance, arguing that it's inflated by "corrections", while Sackett et al (2023) show "a mean observed validity of .16, and a mean corrected for unreliability in the criterion and for range restriction of .23. Using this value drops cognitive ability's rank among the set of predictors examined from 5th to 12th" for the correlation between "general cognitive ability" and job performance.
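To make concrete how such "corrections" can inflate an observed coefficient, here is a short sketch of the standard two-step procedure (disattenuation for criterion unreliability, then a Thorndike Case II range-restriction correction). The reliability and restriction values below are hypothetical illustrations of the mechanics, not the figures Sackett et al (2023) actually used:

```python
import math

# Illustrative inputs (hypothetical, chosen only to show the mechanics):
r_obs = 0.16   # observed test-criterion correlation
r_yy = 0.60    # assumed reliability of the criterion (e.g., supervisor ratings)
u = 0.80       # assumed ratio of restricted to unrestricted test SD

# Step 1: disattenuate for criterion unreliability
r1 = r_obs / math.sqrt(r_yy)

# Step 2: Thorndike Case II correction for direct range restriction
U = 1 / u
r2 = (U * r1) / math.sqrt(1 + (U**2 - 1) * r1**2)

print(round(r1, 3), round(r2, 3))  # → 0.207 0.255
```

With these assumed inputs, a modest observed correlation of .16 grows to about .26 after the two corrections, which is why the choice of correction assumptions matters so much to the reported validity figures.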

But this could lead to circular validation, in that if a high IQ is used as a predictor of success in school or work, then success in school or work would be used as evidence in validating the IQ test, which would then lead to a circular argument. The test’s validity is being supported by the outcome that it’s supposed to predict.

Achievement tests are designed to see what one has learned or achieved regarding a certain kind of subject matter. They are often validated by correlating test scores with grades or other kinds of academic achievement (which is also circular): if high achievement test scores are used to validate the test, and those scores are also used as evidence of academic achievement, the reasoning loops back on itself. Achievement tests are likewise "validated" on their relationships with IQ tests and grades. Heckman and Kautz (2013) note that "achievement tests are often validated using other standardized achievement tests or other measures of cognitive ability—surely a circular practice" and that "Validating one measure of cognitive ability using other measures of cognitive ability is circular." It should also be noted that the correlation between college grades and job performance six or more years after college is only .05 (Armstrong, 2011).

Now what about the claim that IQ tests and achievement tests correlate so they measure the same thing? Richardson (2017) addressed this issue:

For example, IQ tests are so constructed as to predict school performance by testing for specific knowledge or text‐like rules—like those learned in school. But then, a circularity of logic makes the case that a correlation between IQ and school performance proves test validity. From the very way in which the tests are assembled, however, this is inevitable. Such circularity is also reflected in correlations between IQ and adult occupational levels, income, wealth, and so on. As education largely determines the entry level to the job market, correlations between IQ and occupation are, again, at least partly, self‐fulfilling

The circularity inherent in equating IQ and achievement tests has also been noted by Nash (1990). There is no principled distinction between IQ and achievement tests, since there is no theory or definition of intelligence, nor any account of how such a theory would connect to answering questions correctly on an IQ test.

But how, to put first things first, is the term ‘cognitive ability’ defined? If it is a hypothetical ability required to do well at school then an ability so theorised could be measured by an ordinary scholastic attainment test. IQ measures are the best measures of IQ we have because IQ is defined as ‘general cognitive ability’. Actually, as we have seen, IQ theory is compelled to maintain that IQ tests measure ‘cognitive ability’ by fiat, and it therefore follows that it is tautologous to claim that IQ tests are the best measures of IQ that we have. Unless IQ theory can protect the distinction it makes between IQ/ability tests and attainment/ achievement tests its argument is revealed as circular. IQ measures are the best measures of IQ we have because IQ is defined as ‘general cognitive ability’: IQ tests are the only measures of IQ.

The fact of the matter is, IQ "predicts" (is correlated with) school achievement because they are different versions of the same test (Schwartz, 1975; Beaujean et al, 2018). Since the main purpose of IQ tests today is to "predict" achievement (Kaufman et al, 2012), once we correctly identify IQ and achievement tests as different versions of the same test, we can rightly state that the "prediction" is itself a form of circular reasoning. What is the distinction between "intelligence" tests and achievement tests? They have similar items on them, which is why they correlate so highly with each other. This, therefore, makes comparing the two in an attempt to "validate" one or the other circular.

I can now argue that the distinction between IQ and achievement tests is nonexistent. The two kinds of test contain similar informational content, so both can be considered knowledge tests—tests of class-specific knowledge. Since they assess the same domain of knowledge and skills and draw on similar item content, they are best understood as different versions of the same test. Put simply: tests that assess the same domain with similar item content are versions of the same test; IQ and achievement tests do exactly that; therefore, they are different versions of the same test.
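The point that shared item content guarantees a high correlation can be illustrated with a toy simulation (all numbers hypothetical, chosen only for illustration): if two tests are assembled from overlapping pools of items that tap the same class-specific knowledge, a strong correlation between them is built in by construction rather than discovered.

```python
import numpy as np

rng = np.random.default_rng(0)

n_people = 2000
n_items = 40   # items per test (hypothetical)
n_shared = 25  # items literally appearing on both tests (hypothetical)

# Each person has a single "knowledge" score; every item taps that knowledge
# plus item-specific noise. No general cognitive ability is assumed here,
# only exposure to the same class-specific knowledge.
knowledge = rng.normal(size=n_people)

def item_scores(n_cols, knowledge):
    # each column = one item: knowledge signal + independent item noise
    return knowledge[:, None] + rng.normal(size=(len(knowledge), n_cols))

shared = item_scores(n_shared, knowledge)          # items on both tests
iq_only = item_scores(n_items - n_shared, knowledge)
ach_only = item_scores(n_items - n_shared, knowledge)

iq_test = np.hstack([shared, iq_only]).sum(axis=1)
achievement_test = np.hstack([shared, ach_only]).sum(axis=1)

r = np.corrcoef(iq_test, achievement_test)[0, 1]
print(f"correlation between the two 'different' tests: {r:.2f}")
```

Because both tests sample the same underlying knowledge (and even share items outright), the correlation comes out very high no matter how the remaining items are chosen—which is the circularity at issue when that correlation is then offered as "validation."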

Moreover, even constructing tests has been criticized as circular:

Given the consistent use of teachers’ opinions as a primary criterion for validity of the Binet and Wechsler tests, it seems odd to claim then that such tests provide “objective alternatives to the subjective judgments of teachers and employers.” If the tests’ primary claim to predictive validity is that their results have strong correlations with academic success, one wonders how an objective test can predict performance in an acknowledged subjective environment? No one seems willing to acknowledge the circular and tortuous reasoning behind the development of tests that rely on the subjective judgments of secondary teachers in order to develop an assessment device that claims independence of those judgments so as to then be able to claim that it can objectively assess a student’s ability to gain the approval of subjective judgments of college professors. (And remember, these tests were used to validate the next generation of tests and those tests validated the following generation and so forth on down to the tests that are being given today.) Anastasi (1985) comes close to admitting that bias is inherent in the tests when he confesses the tests only measure what many anthropologists would called a culturally bound definition of intelligence. (Thorndike and Lohman, 1990)

Conclusion

It seems clear to me that almost the whole field of psychometrics is plagued by the problem of inferring causes from correlations, using circular arguments in an attempt to justify the claim that IQ tests measure intelligence by relating IQ to job and academic performance. Moreover, circular arguments aren’t restricted to IQ and achievement tests; they also appear in twin studies (Joseph, 2014; Joseph et al, 2015). IQ and achievement tests merely show what one knows, not one’s learning potential, since they are general knowledge tests—tests of class-specific knowledge. So even Gottfredson’s “definition” of intelligence fails, since Gottfredson presumes IQ to be a measure of learning ability (never mind that the “definition” is so narrow that I struggle to think of a valid way to operationalize it in culture-bound tests).

The fact that newer versions of tests already in circulation are “validated” against older versions of the same test means that the tests are circularly validated. The original test (say, the S-B) was never itself validated, so newer tests are “validated” only on the assumption that the older one was valid. When a newer test is compared against its predecessor, the “validation” is occurring against an older test built on similar principles, assumptions, and content. Content overlap is a problem too, since some questions or tasks on the newer test could be identical to those on the older test. The point is that both IQ and achievement tests are merely knowledge tests, not tests of a mythical general cognitive ability.

Ashkenazi Jews Are White

2700 words

Introduction

Recently, I have been seeing people say that Ashkenazi Jews (AJs) are not white. Some may say that Jews “pretend to be white”, so they can accomplish their “group goals” (like pitting whites and blacks against each other in an attempt to sow racial strife, due to their ethnic nepotism due to their genetic similarity). I have also seen people deriding Jews for saying “I’m white” and then finding an instance of them saying “I’m Jewish” (see here for an example), as if that’s a contradiction, but it’s not. It’s the same thing as saying “I’m Italian… I’m white” or “I’m German… I’m white.” But since pluralism about race is true, there could be some contexts and places that Jews aren’t white, due to the social construction of racial identities. However, in the American context it is quite clear: In both historical and contemporary thought in America, AJs are white.

But a claim like this raises an important question: if AJs are not white, then what race are they? This is a question I will answer in this article, and I will of course show that AJs are indeed white on an American conception of race. Using Quayshawn Spencer’s racial identity argument, I will assume that Ashkenazi Jews aren’t white and then show that this leads to a contradiction, so Jews must be white. And while there was discussion about the racial status of Jews after they began emigrating to America through Ellis Island, I will show that Jews arrived in America as whites.

White or not?

The question of whether or not AJs are white is a vexing one. Of course, AJs are a religious group. However, this doesn’t mean that they have their own specific racial category. It’s like saying one is German, or Italian, or British: those are mere ethnicities which make up the white racial group. One study found that AJs have “White privilege vis-à-vis persons of color. This privilege, however, is limited to Jews who can ‘pass’ as White gentiles” (Blumenfeld, 2009). Jews who can “pass as white” are obviously white, and there is no other race for them to be.

This is due to the social nature of race. Since race is a social construct, the way people’s racial background is perceived in America is based on how they look (their phenotype). An Ashkenazi Jew saying “I’m Jewish. I’m white” isn’t a contradiction, since AJs aren’t a race. It’s just like saying “I’m Italian. I’m white” or “I’m German. I’m white.” AJs are quite obviously an ethnic group which is part of the white race. Jews are white, and whites are a socialrace.

This discussion is similar to the one where it is claimed that “Hispanic/Latino/Spanish” aren’t white. But that, too, is a ridiculous claim. In cluster studies, HLSs don’t have their own cluster, but they cluster near the group where their majority ancestry derives (Risch et al, 2002). Saying that AJs aren’t white is similar to this.

But during WWII, Jews were persecuted in Nazi Germany, and eventually some 6 million Jews were killed. Jews, in this instance, were seen as a socialrace in Germany, and so they were racialized. It has been shown that Germans who grew up under the Nazi regime are much more anti-Semitic than Germans born before or after it, and Nazi schooling contributed to this the most (Voigtländer and Voth, 2015). This shows how malleable one’s beliefs—and those of a whole society—are, and how effective propaganda is. The Nuremberg Laws of 1935 codified anti-Jewish discrimination in the Nazi racial state, which required a way to identify Jews. They settled on the religious affiliation of one’s 4 grandparents. When one’s origins were in doubt, the Reich Kinship Office was deployed to ascertain one’s genealogy. And when even that could not be done, one’s physical attributes would be assessed by comparing 120 physical measures between the individual and their parents (Rupnow, 2020: 373-374).

This can now be centered on Whoopi Goldberg’s divisive comment from February 2022, where she stated that the attempted genocide of Jews in Nazi Germany “wasn’t about race“, but about “man’s inhumanity to man; [it involved] two groups of white people.” Goldberg was operating under an American conception of race, so one can see why she would say that. However, at the time in Nazi Germany, Jews were Racialized Others, and so they were a socialrace in Germany.

Per Pew, most Jews in America identify as white:

92% of U.S. Jews describe themselves as White and non-Hispanic, while 8% say they belong to another racial or ethnic group. This includes 1% who identify as Black and non-Hispanic; 4% who identify as Hispanic; and 3% who identify with another race or ethnicity – such as Asian, American Indian or Hawaiian/Pacific Islander – or with more than one race.

A supermajority (94%) of American Jews were (and identified as) white and non-“Hispanic” in Pew’s 2013 research, slightly higher than in the 2020 research (Lugo et al, 2013):

From Lugo et al, 2013

AJs were viewed as white even as early as 1790, when the Naturalization Act was put into law, limiting naturalization to free white persons (Tanner, 2021). Even in 1965, Srole (1965) stated that “Jews are white.” The perception that all Jews are white arose only after WWII (Levine-Rasky, 2020), and that perception is of course false: all Jews certainly aren’t white, but some Jews are. Thus, even historically, AJs were seen as white in America. Yang and Koshy (2016) write:

We found no evidence from U.S. censuses, naturalization legislation, and court cases that the racial categorization of some non-Anglo-Saxon European immigrant groups such as the Irish, Italians, and Jews changed to white. They were legally white and always white, and there was no need for them to switch to white.

White ethnics could be considered ethnically inferior and discriminated against because of their ethnic distinctions, but in terms of race or color, they were all white and had access to resources not available to nonwhites.

It was precisely because of the changing meanings of race that “the Irish race,” “the German race,” “the Dutch race,” “the Jewish race,” “the Italian race,” and so on changed their races and became white. In today’s terminology, it should be read that these European groups changed their ethnicities to become part of whites, or more precisely they were racialized to become white.

Our findings help resolve the controversy over whether certain U.S. non-Anglo-Saxon European immigrant groups became white in historical America. Our analysis suggests that “becoming white” carries different meanings: change in racial classification, and change in majority/minority status. In terms of the former, “becoming white” for non-Anglo-Saxon European immigrant groups is bogus. Hence, the argument of Eric Arnesen (2001), Aldoph Reed (2001), Barbara Fields (2001), and Thomas Guglielmo (2003) that the Irish, Italians, and Jews were white on arrival in America is vindicated.

But one article in The Forward argued that “Ashkenazi Jews are not functionally white.” The author (Danzig) attempts an analogy between Walter White—the NAACP leader who was “white-passing” (both of his parents were born into slavery)—and Jews who are “white-passing” “due to years of colonialism, expulsion and exile in European lands.” Danzig then claims that as long as Jews maintain their unique Jewish identity, they are a racial group. His article is a response to another which claims that Ashkenazi Jews are “functionally white” (Burton). Danzig discusses Burton’s claim that a “white-passing ‘Latinx’” person could be deported if their immigration status is discovered, which of course implies that “Hispanics” are themselves a racial group (they aren’t). Danzig also discusses the discrimination his family went through in the 1920s, stating that they couldn’t do certain things because they were Jewish. Danzig’s argument, I think, is confused: the fact that Jews were discriminated against in the past doesn’t mean they weren’t white. In fact, Jews, Italians, and the Irish were white on arrival in the United States (Steward, 1964; Yang and Koshy, 2016), and this doesn’t mean they didn’t face discrimination. That is, Jews, Italians, and the Irish didn’t change to white; they were always legally white in America. (But see Gardaphe, 2002, Bisesi, 2017, Baddorf, 2020, and Rubin, 2021. Italians didn’t become white as those authors claim; they were white upon arrival.) So Danzig’s claim fails—Jews are functionally white because they are white and they arrived in America as white. Claims to the contrary, that AJs (and Italians and the Irish) became white, are clearly false.

So despite claims that Jews became white after WWII, Jews are in fact white in America (Pearson and Geronimus, 2011). Of course, in the early 1900s, as immigrants were arriving at Ellis Island, the question of whether Jews (“Hebrews,” in this instance) were white, or even whether they were their own racial group, received a decent amount of discussion (Goldstein, 2005; Pearlman, 2018). The fact that there was ethnic strife between new-wave immigrants at Ellis Island doesn’t entail that they were racial groups or that those European immigrants weren’t white. It’s quite clear that Jews—like Italians and the Irish—were considered white upon arrival.

Now that I have established that AJs are indeed white (and arrived in America as white) despite the confused protestations of some authors, I will formalize the argument that AJs are white, since if they weren’t white, they would need to fit into one of the other 4 racial categories.

Many may know that I push Quayshawn Spencer’s OMB race theory, and that I am a pluralist about race. In the volume What is Race?: Four Philosophical Views, philosopher of race Quayshawn Spencer (2019: 98) writes:

After all, in OMB race talk, White is not a narrow group limited to Europeans, European Americans, and the like. Rather, White is a broad group that includes Arabs, Persians, Jews, and other ethnic groups originating from the Middle East and North Africa.

Although there is some research on the racial identity of MENA (Middle Eastern/North African people) and how they may not perceive themselves as white or be perceived as white (Maghbouleh, Schachter, and Flores, 2022), the OMB is quite clear that the social group designated “white” doesn’t refer only to Europeans (Spencer, 2019).

So, if AJs aren’t white, then they must be part of one of the other 4 OMB races (black, Native American, East Asian, or Pacific Islander). Part of this racial scheme is K=5—when K is set to 5 in STRUCTURE, 5 clusters are produced, and these map onto the OMB races. But among those 5 clusters, there is no Jewish cluster. Note that I am not denying that there is some kind of genetic structure to AJs; I’m denying that this would entail that they are a racial group. If they were one, they would appear in these runs. AJs are merely an ethno-religious group within the white socialrace. So let’s assume, for the sake of argument, that this is true: Ashkenazi Jews are not white.

When we consider the complexities of racial classification, it becomes apparent that societies tend to sort individuals into distinct categories based on physical traits, cultural background, and ancestry. If AJs aren’t white in an American context, then they would have to fall into one of the four other racial groups in a Spencerian OMB race theory.

But there is one important aspect to consider here—the phenotype of Ashkenazi Jews. Many Ashkenazi Jews exhibit physical traits typically associated with “white” populations. This simple observation shows that AJs don’t fit into the categories of East Asian, Pacific Islander, black, or Native American; AJs’ typical phenotype aligns more closely with that of white populations.

Examining the racial landscape in America, we can see how social perceptions and classifications significantly shape how individuals are positioned in the broader framework. AJs have historically been classified and perceived as white in the American racial context, as can be seen above. So within American racetalk, AJs are predominantly classified in the white racial grouping.

Taking all of this together, I can rightly state that Jews are white. We assumed at the outset that if they weren’t white they would belong to some other racial group; but they don’t look like any other racial group, and they look like and are treated as white (both in contemporary thought and historically). So AJs are most definitely seen as white in American racetalk. Here’s the formalized argument:

P1: If AJs aren’t white, then they must belong to one of the other 4 racial categories (black, Native American, East Asian or Pacific Islander).
P2: AJs do not belong to any of the four racial categories mentioned (based on their phenotype typical of white people).
P3: In the American racial context, AJs are predominantly classified and perceived as white.
Conclusion: Assume AJs aren’t white. Then, from P1, they must belong to one of the other 4 racial groups. But from P2 (which P3 supports), AJs do not belong to any of those categories. The assumption therefore leads to a contradiction, since the assumption and the premises cannot all be simultaneously true.
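The reductio structure of this argument can be checked mechanically. Here is a minimal sketch in Lean 4, where the proposition names are my own labels and only the validity of the inference—not the truth of the premises—is being verified:

```lean
-- White     : AJs are white
-- OtherRace : AJs belong to one of the other 4 OMB races
variable (White OtherRace : Prop)

-- P1: if AJs aren't white, they belong to one of the other races.
-- P2 (supported by P3): AJs don't belong to any of those races.
-- Assuming ¬White yields a contradiction, so White follows.
example (p1 : ¬White → OtherRace) (p2 : ¬OtherRace) : White :=
  Classical.byContradiction (fun h => p2 (p1 h))
```

The proof is classical (a reductio needs `Classical.byContradiction`): the assumption ¬White feeds P1 to produce OtherRace, which P2 refutes, discharging the assumption.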

So we must reject the assumption that AJs aren’t white, and the logical conclusion is that AJs are considered white in the American context, based on their phenotype (and the fact that they arrived in America as white). Jews didn’t “become white,” as some claim (eg, Brodkin, 2004). American Jews even benefit from white privilege (Schraub, 2019). MacDonald-Dennis’ (2005, 2006) qualitative research (although small and not generalizable) shows that some Ashkenazi Jews think of themselves as white. AJs are legally and politically white.

All Jews aren’t white, but some (most) Jews are white (in America).

Conclusion

Thus, AJs are white. Although many authors have claimed that Jews became white after arrival to America (or even after WWII), this claim is false. It is false even as far back as 1790. If we accept the assumption that AJs aren’t white, then it leads to a contradiction, since they would have to be one of the other 4 racial groups, but since they look white, they cannot be a part of those racial groups.

There are white Jews and there are non-white Jews. But when it comes to AJs, the question “When did they become white?” is nonsense, since they were always perceived and treated as white in America from its founding. Some AJs are white, some aren’t; some Mizrahi Jews are white, some aren’t. In the context of this discussion, however, it is quite clear that AJs are white, and there is no other race for them to be on the OMB race theory. In fact, in the minds of most Americans, Jews aren’t a racialized group, though they are perceived as outsiders (Levin, Filindra, and Kopstein, 2022). And there were instances in history where Jews were racialized and instances where they weren’t (Hochman, 2017). But as I have decisively shown here, in the American context ever since its inception, AJs are most definitely white. Saying that AJs are white is like saying that Italians or Germans are white. There is no contradiction. Jews are treated as white in the American social context, they look white, and they have been considered white since they arrived in America in the early 1900s (like the Irish and Italians).

The evidence and reasoning presented in this article point to one conclusion: that AJs are indeed white. This of course doesn’t mean that all AJs are white; it means that some (and I would say most) are. AJs have been historically, legally, and politically white. Mere claims that they aren’t white are irrelevant.

From Blank Slates to Dynamic Interactions: Dualistic Experiential Constructivism Challenges Hereditarian Assumptions

4000 words

Introduction

For decades, hereditarians have attempted to partition traits into relative genetic and environmental causes. The assumptions here are, of course, that G and E are separable, independent components, and that we can discover the relative contribution of G and E by performing certain tests and statistical procedures. However, since Oyama’s publication of The Ontogeny of Information in 1985, this view has been called into question. The view Oyama articulated is a philosophical theory based on the irreducible interactions between all developmental resources, called developmental systems theory (DST).

However, we can go further. We can use the concept of dualism and argue that psychology is irreducible to the physical, and so irreducible to genes. We can then use the concepts laid out in DST, like gene-environment interaction and the principle of biological relativity, to argue that the development of organisms is irreducible to any one resource. Then, for the formation of mind and psychological traits in humans, we can say that they arise due to human-specific ecological contexts. I will call this view Dualistic Experiential Constructivism (DEC), and I will argue that it invalidates any and all attempts at partitioning G and E into quantifiable components. Thus, the hereditarian research program is bound to fail, since it rests on a conceptual blunder.

The argument that genes and environment, nature and nurture, can’t be separated is this:

(1) Suppose that there can be no environmental effect without a biological organism to act on. (2) Suppose there can be no organism outside of its context (like the organism-environment system). (3) Suppose the organism cannot exist without the environment. (4) Suppose the environment has certain descriptive properties if and only if it is connected to the organism. Now here is the argument.

P1: If there can be no environmental effect without a biological organism to act on, and if the organism cannot exist without the environment, then the organism and environment are interdependent.
P2: If the organism and environment are interdependent, and if the environment has certain descriptive properties if and only if it is connected to the organism, then nature and nurture are inseparable.
C: Thus, nature and nurture are inseparable.
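As a check on validity, the argument is two applications of modus ponens over conjunctions. Here is a Lean 4 sketch, with my own labels standing in for the suppositions and premises:

```lean
-- EN : there can be no environmental effect without an organism to act on
-- OE : the organism cannot exist without the environment
-- EP : the environment has its descriptive properties iff connected to the organism
-- ID : organism and environment are interdependent
-- NN : nature and nurture are inseparable
variable (EN OE EP ID NN : Prop)

example (s1 : EN) (s3 : OE) (s4 : EP)
    (p1 : EN ∧ OE → ID)   -- P1
    (p2 : ID ∧ EP → NN)   -- P2
    : NN :=
  p2 ⟨p1 ⟨s1, s3⟩, s4⟩
```

The conclusion follows constructively from the suppositions and the two conditionals; any dispute has to target the premises themselves, not the inference.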

Rushton and Jensen’s false dichotomy

Rushton and Jensen (2005) uphold a 50/50 split between genes and environment and call this the “hereditarian” view. On the other side is the “culture-only” model which is 0 percent genes and 100 percent environment regarding black-white IQ differences. Of course note the false dichotomy here: What is missing? Well, an interactive GxE view. Rushton and Jensen merely put that view into their 2-way box and called it a day. They wrote:

It is essential to keep in mind precisely what the two rival positions do and do not say—about a 50% genetic–50% environmental etiology for the hereditarian view versus an effectively 0% genetic–100% environmental etiology for the culture-only theory. The defining difference is whether any significant part of the mean Black–White IQ difference is genetic rather than purely cultural or environmental in origin. Hereditarians use the methods of quantitative genetics, and they can and do seek to identify the environmental components of observed group differences. Culture-only theorists are skeptical that genetic factors play any independently effective role in explaining group differences.

Most of those who have taken a strong position in the scientific debate about race and IQ have done so as either hereditarians or culture-only theorists. Intermediate positions (e.g., gene–environment interaction) can be operationally assigned to one or the other of the two positions depending on whether they predict any significant heritable component to the average group difference in IQ. For example, if gene–environment interactions make it impossible to disentangle causality and apportion variance, for pragmatic purposes that view is indistinguishable from the 100% culture-only program because it denies any potency to the genetic component proposed by hereditarians.

Rushton and Jensen did give an argument here; here it is formalized:

P1: Gene-environment interactions make it impossible to disentangle causality and apportion variance correctly.
P2: If it is impossible to disentangle and apportion variance, then the view denying any potency to the genetic component proposed by hereditarians becomes indistinguishable from a 100% culture-only perspective.
C: Thus, for pragmatic purposes, the view denying any potency to the genetic component is indistinguishable from a 100% culture-only program.

This argument is easy enough to counter. Rushton and Jensen are explicitly forcing the view that refutes their whole research program into their 2 boxes—the 50/50 split between genes and environment, and the 0 percent genes and 100 percent environment. The intermediate view they dismiss is basically a developmental systems theory (DST) view, and DST highlights the interactive and dynamic nature of development. Rushton and Jensen’s own view is clearly gene-centric: I would impute to them—based on their writings—the claim that genes are a sufficient, privileged cause of IQ, and of traits as a whole. But that claim is false (Noble, 2012).

Although I understand where they’re coming from here, they’re outright wrong.

Put simply, they need to put everything into this box in order to legitimize their “research.” Although I would be a “culture-only theorist” to them regarding my views on the cause of IQ gaps (since, on their dichotomy, there is no other way to be), my views on genetic causation are starkly different from theirs.

Most may know that I deny the claim that genes can cause or influence differences in psychological traits between people. (And I deny that genes are outright causes on their own, independent of environment.) I hold this view due to conceptual arguments. The interactive view (which is more complex than Rushton and Jensen describe) is how development is carried out, with no one resource having primacy over another—a view called the causal parity thesis. This fits the principle of biological relativity (Noble, 2012), which asserts that there is no privileged level of causation; and if there is no privileged level of causation, then that holds for all of the developmental resources that interact to make up the phenotype. Thus, hereditarianism is false, since it privileges genes over other developmental resources when no developmental resource is privileged in biological systems.

Rushton and Jensen almost had it—if GxE makes it hard or impossible to disentangle causality and apportion variance, then the hereditarian program cannot and will not work, since it apportions variance into G and E causes and claims that independent genetic effects are possible. Many authors have pressed a conceptual argument against heritability: if G and E (and anything else) interact, then they are not separable, and if they are not separable, they are not quantifiable. For example, Burt and Simon (2015: 107) argue that the “conceptual model is unsound and the goal of heritability studies is biologically nonsensical given what we now know about the way genes work.”

When it comes to “denying potency” to the “genetic component,” Rushton and Jensen seem to be quite specific in what they mean. Of course, a developmentalist (a GxE supporter) would not deny that genes are NECESSARY for the construction of the phenotype, though they would deny the PRIMACY that hereditarians place on genes. Genes are nothing special; they are not privileged resources compared to the other developmental resources.

Of course, hereditarianism is a reductionist discipline. By reductionist, I mean it attempts to break down the whole into the sum of its parts to ascertain the ontogeny of the desired object. Reductionism is false, and so that applies to genetic reduction and neuroreduction: reducing X to genes or to the brain/brain physiology is the wrong way to go about this. Rushton (2003) even explicitly stated his adherence to the reductionist paradigm in a small commentary on Rose’s (1998) Lifelines. He rehearses his “research” into brain size differences between races and argues that, given the .4 correlation between MRI-measured brain size and IQ, the brain size differences between races (see here for critique), and the claim that races have different cognitive abilities, this is a “+” for reductionist science.

Since the behavioral genetic research program is reductive, it is necessarily committed to genetic determinism, even though most practitioners don’t explicitly state this. The way Rushton and Jensen shoehorned the GxE (DST) view into their false dichotomy allowed them to reject it outright without grappling with its implications for organismal development. Unfortunately for Rushton and Jensen’s view, organisms and environments constantly interact with each other. If they constantly interact, then they are not separable. If they are not separable, then the distinction made by Rushton and Jensen fails. And if that distinction fails, then ultimately the quest of behavioral genetics—to apportion variance into genetic and environmental causes—fails.

Another hereditarian who tries to argue against interactionism is Gottfredson (2009), with her “interactionism fallacy.” Heritability estimates, it is claimed, can partition causes of variance into G and E components. Gottfredson—like all other hereditarians, I claim—completely misrepresents the view and (wilfully?) misunderstands what developmental systems theorists are saying. People like Rushton, Jensen, and Gottfredson quite obviously claim that science can solve the nature-nurture debate. The fact that destroys hereditarian assumptions about the separability of nature and nurture is this: the genome is reactive (Fox-Keller, 2014); that is, it reacts to what is occurring in the environment, whether outside or inside the body.

At the molecular level, the nurture/nature debate currently revolves around reactive genomes and the environments, internal and external to the body, to which they ceaselessly respond. Body boundaries are permeable, and our genome and microbiome are constantly made and remade over our lifetimes. Certain of these changes can be transmitted from one generation to the next and may, at times, persist into succeeding generations. But these findings will not terminate the nurture/nature debate – ongoing research keeps arguments fueled and forces shifts in orientations to shift. Without doubt, molecular pathways will come to light that better account for the circumstances under which specific genes are expressed or inhibited, and data based on correlations will be replaced gradually by causal findings. Slowly, “links” between nurture and nature will collapse, leaving an indivisible entity. But such research, almost exclusively, will miniaturize the environment for the sake of accuracy – an unavoidable process if findings are to be scientifically replicable and reliable. Even so, increasing recognition of the frequency of stochastic, unpredictable events ensures that we can never achieve certainty. (Locke and Pallson, 2016)

The implication here is that science cannot resolve this debate, since “nature and nurture are not readily demarcated objects of scientific inquiry” (Locke and Pallson, 2016: 18). If heritability estimates are to be useful for understanding phenotypic variation, then organism and environment must not interact, for when interactions are constant and pervasive, it becomes challenging—and I claim impossible—to accurately quantify the relative contributions of genes and environment. But the organism and environment do constantly interact. Thus, heritability estimates aren’t useful for understanding phenotypic variation. This undermines the interpretability of heritability and invalidates any and all claims about the relative contributions of G and E made by any behavioral geneticist.
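The point about interaction defeating variance partitioning can be illustrated with a deliberately extreme toy model (hypothetical numbers, not an empirical claim about any real trait): if a trait is built entirely out of a G-by-E interaction, an additive decomposition credits almost none of the variance to G or to E separately, even though both are indispensable to the trait.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

G = rng.normal(size=n)  # standardized "genetic" score (hypothetical)
E = rng.normal(size=n)  # standardized "environmental" score (hypothetical)

# Trait built entirely from a gene-by-environment interaction:
# no main effect of G alone or E alone, yet both are indispensable.
P = G * E

def r2(x, y):
    # proportion of variance in y linearly "explained" by x
    return np.corrcoef(x, y)[0, 1] ** 2

print(f"variance 'explained' by G alone: {r2(G, P):.3f}")
print(f"variance 'explained' by E alone: {r2(E, P):.3f}")
print(f"total phenotypic variance:       {P.var():.3f}")
```

In this construction the trait has substantial variance, but the additive shares attributable to G and to E are each essentially zero—the variance lives in the interaction term, which is exactly the component an additive G-plus-E partition cannot assign to either side.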

The interactive view of G and E states that genes are necessary for traits but not sufficient for them. While genetic factors of course lay the foundation for trait development, so does the whole suite of other developmental resources that interact with genes; they, too, are necessary for trait development. I can put the argument like this:

P1: An interactive view acknowledges that genes contribute to the development of traits.
P2: Genes are necessary pre-conditions for the expression of traits.
P3: Genes alone are not sufficient to fully explain the complexity of traits.
C: Thus, an interactive view states that genes are necessary pre-conditions for traits but not sufficient on their own.

Why my view is not blank slatism: On Dualistic Experiential Constructivism

Now I need to defend my view that mind and body are distinct substances, that the mental is therefore irreducible to the physical, and that genes therefore can’t cause psychology. One may say, “Well, that makes you a blank slatist, since you deny that the mind has any innate properties.” Fortunately, my view is more complex than that.

I have been espousing certain points of view for years on this blog: the mental is irreducible, genes can’t cause mental/psychological traits, mind is constructed through interacting with other humans in species-relevant contexts, and so-called innate traits are learned and experience-dependent. How can I reconcile these views? Doesn’t the fact that I deny any and all genetic influence on psychology due to my dualistic commitments mean I am a dreaded “blank slatist”? No, it does not, and I will explain why.

I call my view “Dualistic Experiential Constructivism” (DEC). It’s dualistic since it recognizes that mind and body are separate, distinct substances. It’s experiential since it highlights the role of experiential factors in the forming of mind, the construction of knowledge, and the development of psychological traits. It’s constructivist since individuals actively construct their knowledge and understanding of the world by interacting with other humans. Also in this framework is the concept of gene-environment interaction, where G and E are inseparable, non-independent interactants.

Within the DEC framework, gene-environment interactions are influential in the development of cognition, psychology, and behavior. Genes are necessary for the construction of humans: they must be present for development to begin at conception. The developing system then interacts irreducibly with the other developmental interactants, which together begin to form the phenotype until a human eventually forms. So genes provide a necessary pre-condition for traits, but in this framework they are not sufficient conditions.

In his socio-historical theory of learning and development, Vygotsky argued that individuals acquire psychological traits through interacting with other humans in certain social and environmental contexts, through the use of cultural and psychological tools. Language, social interaction, and culture mediate cognitive development, which then fosters higher-order thinking. Vygotsky’s theory thus highlights the dynamic and interactive nature of human development, emphasizing the social contexts of the actors in how mind is shaped and developed. So it supports the idea I hold that mind is shaped through interactions and experiences within certain socio-historical contexts. Adherence to this theory would also imply critical points in child development: if a child does not get the rich exposure they need to develop an ability, they may never acquire it, indicating a critical window in which these abilities can be acquired (Vyshedskiy, Mahapatra, and Dunn, 2017). Cases of feral children let us see how one develops without social interaction and cultural tools in cognitive development. That these children are so stunted in their psychology and language shows the critical window in which children can learn and understand a language. The absence of social experiences in feral children thus supports Vygotsky’s theory regarding the significance of cultural and social factors in shaping mind and cognitive development. Vygotsky’s theory is very relevant here, since it shows that certain socio-historical and cultural experiences need to occur for higher-order thinking, psychology, and mind to develop in humans.

And since newborns, infants, and young children are surrounded by what Vygotsky called More Knowledgeable Others, they learn from and copy people who already know how to act in certain social and cultural situations, which then develops an individual’s psychology and mind.

There is also another issue here: species-typical behaviors develop in reliable ecological contexts. If we assume this holds for humans—and I see no reason not to—then certain things must be present in the environment to set in motion the construction of mind, in relevant social-historical-ecological contexts. Basically, environments are inherited too.

In an article eschewing the concept of “innateness,” Blumberg (2018) offers a great discussion of how species-typical traits arise. Quite simply, it’s due to the construction of species-specific niches, which then allow the traits to reliably appear over time:

Species-typical behaviors can begin as subtle predispositions in cognitive processing or behavior. They also develop under the guidance of species-typical experiences occurring within reliable ecological contexts. Those experiences and ecological contexts, together comprising what has been called an ontogenetic niche, are inherited along with parental genes. Stated more succinctly, environments are inherited—a notion that shakes the nature-nurture dichotomy to its core. That core is shaken still further by studies demonstrating how even our most ancient and basic appetites, such as that for water, are learned. Our natures are acquired.

Contrasting DEC with hereditarianism shows exactly how different they are and how DEC answers hereditarianism with a different framework. DEC offers an alternative perspective on the construction of psychological traits and mind in humans. It strongly emphasizes the role of individual experiences and environmental factors (like the social) in allowing the mind to form and in shaping psychological traits, while still acknowledging the need for genetic factors—though in a necessary, not sufficient, way. DEC suggests that genes alone aren’t enough to account for psychology. It argues that the mind is irreducible to the physical (genes, brain/brain structure) and that the development of psychological traits (and with them the mind) requires the interactive influences of the individual, experiences, and environmental context.

There is one more line of evidence I need to discuss before I conclude—that of clonal populations living in the same controlled environment, what it does and does not show, and the implications for behavioral genetic hereditarian explanations of behavior. Kate Laskowski’s (2022) team observed how genetically identical fish behaved in controlled environments. Substantial individuality still arises in clonal fishes with the same genes raised in a controlled environment. These studies from Laskowski’s team suggest that behavioral individuality “might be an inevitable and potentially unpredictable outcome of development” (Bierbach, Laskowski, and Wolf, 2017). The argument below captures this fact, and is based on the assumption that if genes did cause psychological traits and behavior, then individuals with an identical genome in an identical environment would have identical psychology and behavior. But these studies show that they do not, so the conclusion follows that mind and psychological traits aren’t determined by genes.

(P1) If the mind is determined by genetic factors, then all individuals with the same genetic makeup would exhibit identical psychological traits.
(P2) Not all individuals with the same genetic makeup exhibit identical psychological traits.
(C) Thus, mind isn’t determined by genetic factors.
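For clarity, this argument is a straightforward modus tollens. Writing $G$ for “the mind is determined by genetic factors” and $S$ for “all individuals with the same genetic makeup exhibit identical psychological traits,” the form is:

```latex
\[
\begin{array}{ll}
\text{(P1)} & G \rightarrow S\\
\text{(P2)} & \neg S\\
\hline
\text{(C)}  & \neg G
\end{array}
\]
```

The form is valid, so any dispute must be over P1 or P2; the clonal-fish studies are offered as the empirical support for P2.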

I think it is a truism that an entailment of the hereditarian view is that identical genes in identical environments would mean identical psychology and behavior. Quite obviously, experimental results have shown that this simply is not the case. If the view espoused by Rushton, Jensen, and other hereditarians were true, then organisms with identical genomes would have the same behavior and psychology. But we don’t find this. Thus, we should reject hereditarianism, since its claim has been tested in clonal populations and has been found wanting.

Now, how is my view not blank slatism? I deny the claim that psychology reduces to anything physical, and I deny that innate traits are a thing, so can there be nuance, or am I doomed to be labeled a blank slatist? Genetic factors are necessary pre-conditions for the mind, but there are no predetermined, hardwired traits in them. While genetic factors lay the groundwork, the importance of learning, experience, and relevant ecological contexts must not be discounted. And while I recognize the interplay between genes, environment, and other resources, I do not hold that any of them is sufficient to explain mind and psychology. I would say that Vygotsky’s theory shows how and why people and groups score differently on so-called psychological tests: there is the interplay between the child, the socio-cultural environment, and the individuals in that environment. Being in these kinds of environments is what allows the formation of mind and psychology (as cases of feral children show by its absence), meaning that hereditarianism is ill-suited to explaining this with its fixation on genes, even though genes can’t explain psychology. If the mental is irreducible to the physical and genes are physical, then genes can’t explain the mental. This destroys the hereditarian argument.

Conclusion

Vygotsky’s theory provides a socio-cultural framework which acknowledges the role of subjective experiences within social contexts. Individuals engage in social interactions and collaborative activities as conscious beings, and in doing so they contribute their subjective experiences to the collective construction of knowledge and understanding. The brand of dualism I push entails that psychology doesn’t reduce to anything physical, which includes genes and the brain. I do of course recognize the interactions between all developmental resources; I just don’t think that any of them alone is explanatory regarding psychology and behavior, as the hereditarian does, and that’s one of the biggest differences between hereditarianism and DEC. My view is similar to that of relational developmental systems theory (Lerner, 2021a, b). Further, this view is similar to Oyama’s (2002) view, where she conceptualizes “nature” as a natural outcome of the organism-environment system (in line with Blumberg, 2018) and nurture as the ongoing developmental process. Thus, Oyama has reconceptualized the nature-nurture debate.

Of course, my claim that psychology isn’t reducible to genes would put me in the “100% culture-only” camp that Rushton and Jensen articulated. However, there is no other way to be about this debate: races are different cultural groups, and different cultural groups are exposed to different cultural and psychological tools, which lead to differences in knowledge and therefore to score differences. So I reject the dichotomy they mounted, and I also reject the claim that the interactive view is effectively a “culture-only” view. But, ultimately, the argument that psychology doesn’t reduce to genes is sound, so hereditarianism is false. Furthermore, the hereditarian claim that genes cause differences in psychology and behavior is called into question by the research on clonal populations, which shows that individuality arises randomly and is not caused by genetic differences, since there were no genetic differences.

The discussion surrounding the IQ debate and the hereditarian explanation necessitates a thorough examination of the intricate interplay between genetics and environment. An environmental explanation seems to be the only plausible rationale for the observed black-white IQ gap, considering that psychological states cannot be ascribed/reduced to genetic factors. In light of this, any attempt to dichotomize nature versus nurture, as exemplified by Rushton and Jensen, fails to capture the essence of the matter at hand. Their reductionist approach, placing a “100% culture-only program” in one of their two boxes (a framing that shows their adherence to the false dichotomy) and then triumphally proclaiming their seemingly preferred “50/50 split between genes and environment” explanation (though they later advocate an 80/20 perspective), can be regarded as nothing more than a fallacious oversimplification.

I have presented a comprehensive framework which challenges hereditarianism and provides an alternative perspective on the nature of human psychology and development. I integrated the principles of mind-body dualism, Vygotsky’s socio-historical theory of learning and development, and gene-environment interaction into what I call Dualistic Experiential Constructivism, which acknowledges the interplay between genes, environment, and other developmental resources. Ultimately, DEC promotes a more holistic and interactive view of the origin of mind through social processes and species-typical, context-dependent events, while acknowledging genes as a necessary template for these things, since the organism is what navigates the environment.

So this is the answer to hereditarianism—a view in which all developmental resources interact and are irreducible, in which first-personal subjective experiences with others of the species, taking place in reliable ecological contexts, set in motion the formation of mind and psychological traits. This is Dualistic Experiential Constructivism, and it draws on a few other frameworks that coalesce into the view against hereditarianism that I hold.

Race, Brain Size, and “Intelligence”: A Critical View

5250 words

“the study of the brains of human races would lose most of its interest and utility” if variation in size counted for nothing (Broca, 1861, p. 141; quoted in Gould, 1996: 115)

The law is: small brain, little achievement; great brain, great achievement (Ridpath, 1891: 571)

I can’t hope to give as good a review as Gould’s in The Mismeasure of Man on the history of skull measuring, but I will try to show that hereditarians are mistaken about the brain size-IQ correlation and about racial differences in brain size as a whole.

The claim that brain size is causal for differences in intelligence is not new. Although there have been back-and-forth arguments on this issue over the last few hundred years, hereditarians generally believe that there are racial differences in brain size and that these differences account for civilizational accomplishments, among other things. Notions from Samuel Morton, seemingly revived by Rushton in the 80s while he was formulating his r/K selection theory, show that the racism incipient in that time period never left us, even after 1964. Rushton and others merely revived the racist thought of the 1800s.

Using MRI scans (Rushton and Ankney, 2009) and measurements of the physical skull, Rushton asserted that differences in brain size and quality between races accounted for differences in IQ. Although Rushton was not alone in this belief, belief in a relationship between brain weight/structure and intelligence goes back centuries. In this article, I will review studies on racial differences in brain size and see whether Rushton et al’s conclusions hold, not only on brain size being causally efficacious for IQ but also on there being racial differences in brain size and on the brain size-IQ correlation.

The Morton debate

Morton’s skull collection has received much attention over the years. Gould (1978) first questioned Morton’s results on the ranking of skulls. He argued that when the data were properly reinterpreted, “all races have approximately equal capacities.” The skulls in Morton’s collection were collected from all over. Morton’s men even robbed graves to procure skulls for him, going as far as to take “bodies in front of grieving relatives and boiled flesh off fresh corpses” (Fabian, 2010: 178). One man even told Morton that grave robbing gave him a “rascally pleasure” (Fabian, 2010: 15). Indeed, grave robbing seems to have been a common way to procure skulls for these kinds of analyses (Monarrez et al, 2022). Nevertheless, since skulls house brains, the thought is that by measuring skulls we can ascertain the size of the brain the skull once housed: a larger skull would imply a larger brain, and larger brains, it was said, belong to more “intelligent” people. This assumption was held by the neurologist Broca, and it justified using brain weight as a measure of intelligence. Though in 1836, the anti-racist Tiedemann (1836) argued that there were no differences in brain size between whites and blacks. (Also see Gould, 1999 for a reanalysis of Tiedemann, where he shows C > M > N in brain size but concludes that the “differences are tiny and probably of no significance in the judgment of intelligence” (p 10).) It is interesting to note that Tiedemann and Morton worked with pretty much the same data, yet came to different conclusions (Gould, 1999; Mitchell, 2018).

In 1981 Gould published his landmark book The Mismeasure of Man (Gould, 1981/1996). In the book, he argued that bias—sometimes unconscious—pervaded science and that Morton’s work on his skull collection was a great example of this type of bias. Gould (1996: 140) listed many reasons why group (race) differences in brain size have never been demonstrated, citing Tobias (1970):

After all, what can be simpler than weighing a brain?—take it out, and put it on the scale. One set of difficulties refers to problems of measurement itself: at what level is the brain severed from the spinal cord; are the meninges removed or not (meninges are the brain’s covering membranes, and the dura mater, or thick outer covering, weighs 50 to 60 grams); how much time elapsed after death; was the brain preserved in any fluid before weighing and, if so, for how long; at what temperature was the brain preserved after death. Most literature does not specify these factors adequately, and studies made by different scientists usually cannot be compared. Even when we can be sure that the same object has been measured in the same way under the same conditions, a second set of biases intervenes—influences upon brain size with no direct tie to the desired properties of intelligence or racial affiliation: sex, body size, age, nutrition, nonnutritional environment, occupation, and cause of death.

Nevertheless, in Mismeasure, Gould argued that Morton had an unconscious bias whereby he packed large African skulls loosely while packing small Caucasian skulls tightly (Gould made this inference from the disconnect between Morton’s lead-shot and seed measurements).

Plausible scenarios are easy to construct. Morton, measuring by seed, picks up a threateningly large black skull, fills it lightly and gives it a few desultory shakes. Next, he takes a distressingly small Caucasian skull, shakes hard, and pushes mightily at the foramen magnum with his thumb. It is easily done, without conscious motivation; expectation is a powerful guide to action. (1996: 97)

Yet through all this juggling, I detect no sign of fraud or conscious manipulation. Morton made no attempt to cover his tracks and I must presume that he was unaware he had left them. He explained all his procedures and published all his raw data. All I can discern is an a priori conviction about racial ranking so powerful that it directed his tabulations along preestablished lines. Yet Morton was widely hailed as the objectivist of his age, the man who would rescue American science from the mire of unsupported speculation. (1996: 101)

But in 2011, a team of researchers argued that Morton did not manipulate data to fit his a priori biases (Lewis et al, 2011). They claimed that “most of Gould’s criticisms are poorly supported or falsified.” They argued that Morton’s measurements were reliable and that Morton really was the scientific objectivist many claimed him to be. Of course, since Gould died in 2002, shortly after publishing his magnum opus The Structure of Evolutionary Theory, he could not defend his arguments against Morton.

However, a few authors have responded to Lewis et al and have defended Gould’s conclusions about Morton (Weisberg, 2014; Kaplan, Pigliucci, and Banta, 2015; Weisberg and Paul, 2016).

Weisberg (2014) was the first to argue against Lewis et al’s conclusions on Gould. He argued that while Gould sometimes overstated his case, most of his arguments were sound, and that, contra what Lewis et al claimed, they did not falsify Gould’s claim that the difference between shot and seed measurements showed Morton’s unconscious racial bias. While Weisberg rightly states that Lewis et al uncovered some errors Gould made, they did not successfully refute two of Gould’s main claims: “that there is evidence that Morton’s seed‐based measurements exhibit racial bias and that there are no significant differences in mean cranial capacities across races in Morton’s collection.”

Weisberg (2014: 177) writes:

There is prima facie evidence of racial bias in Morton’s (or his assistant’s) seed‐based measurements. This argument is based on Gould’s accurate analysis of the difference between the seed‐ and shot‐based measurements of the same crania.

Gould is also correct about two other major issues. First, sexual dimorphism is a very suspicious source of bias in Morton’s reported averages. Since Morton identified most of his sample by sex, this is something that he could have investigated and corrected for. Second, when one takes appropriately weighted grand means of Morton’s data, and excludes obvious sources of bias including sexual dimorphism, then the average cranial capacity of the five racial groups in Morton’s collection is very similar. This was probably the point that Gould cared most about. It has been reinforced by my analysis.

[This is Weisberg’s reanalysis]

So Weisberg successfully defended Gould’s claim that there are no general differences in cranial capacity between the races in Morton’s collection, contrary to what Morton and his contemporaries asserted.
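The weighting point Weisberg makes is simple arithmetic, and a toy computation shows it (the numbers below are invented for illustration and are not Morton’s data):

```python
def unweighted_mean(subgroups):
    """Mean of subgroup means, ignoring how many skulls each subgroup has."""
    return sum(mean for mean, n in subgroups) / len(subgroups)

def weighted_grand_mean(subgroups):
    """Grand mean, weighting each subgroup mean by its sample size."""
    total = sum(n for mean, n in subgroups)
    return sum(mean * n for mean, n in subgroups) / total

# Hypothetical (mean capacity, number of skulls) pairs for one "racial group":
# a tiny low-mean subsample drags the unweighted average down.
group = [(78.0, 3), (85.0, 30)]

print(unweighted_mean(group))                # 81.5
print(round(weighted_grand_mean(group), 1))  # 84.4
```

An unweighted average of subgroup means lets a few atypical skulls swing a group’s reported capacity, which is exactly the kind of bias Gould flagged and which an appropriately weighted grand mean corrects for.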

In 2015, another defense of Gould was mounted (Kaplan, Pigliucci, and Banta, 2015). Like Weisberg before them, they state that Gould got some things right and some things wrong, but that his main arguments weren’t touched by Lewis et al. Kaplan et al stated that while Gould was right to reject Morton’s data, he was wrong to believe that “a more appropriate analysis was available.” They also argue that, due to the “poor dataset,” no legitimate inferences to “natural populations” can be drawn. (See Luchetti, 2022 for a great discussion of Kaplan, Pigliucci, and Banta.)

In 2016, Weisberg and Paul (2016) argued that Gould assumed Morton’s lead-shot method was an objective way to ascertain the cranial capacities of skulls; Gould’s argument rested on the differences between lead shot and seed. Then in 2018, Mitchell (2018) published a paper in which he discovered lost notes of Morton’s and argued that Gould was wrong. He admitted, however, that Gould’s strongest argument—the “measurement issue” (Weisberg and Paul, 2016)—was untouched, deeming it “perceptive.” In any case, Mitchell showed that the case of Morton isn’t one of an objective scientist looking to explain the world sans subjective bias—Morton’s a priori biases were strong and strongly influenced his thinking.

Lastly, and ironically, Rushton used Morton’s data from Gould’s (1978) critique, but didn’t seem to understand why Gould wrote the paper, nor why Morton’s methodology was highly suspect. Rushton took the unweighted average for “Ancient Caucasian” skulls, though the sex/age of those skulls wasn’t known. Amazingly—and totally coincidentally, I’m sure—Rushton miscited Gould’s table and effectively combined Morton’s and Gould’s data, increasing the “Mongoloid” skull size from 85 to 85.5 in³ (Gould’s table had it as 85) (Cain and Vanderwolf, 1990). How honest of Rushton. It’s ironic how people say that Gould lied about Morton’s data and that Gould was a fraud when, in actuality, Rushton was the real fraud: he never recanted his r/K theory, and he miscited and combined Gould’s and Morton’s results and made assumptions without valid justification.

The discussion of bias in science is an interesting one. Since science is a social endeavor, there necessarily will be bias inherent in it, especially when studying humans and discussing the causes of certain peculiarities. I would say that Gould was right about Morton and while Gould did make a few mistakes, his main argument against Morton was untouched.

Skull measuring after Morton

The inferiority of blacks and other non-white races has been asserted ever since the European age of discovery. While there were of course two camps at the time—one which argued that blacks were not inferior in intelligence and another that argued they were—the claim that blacks are inferior in intelligence was, and still is, ubiquitous. Proponents argued that smaller heads meant one was less intelligent, and that groups with smaller heads were therefore less intelligent than groups with larger heads. This was then used to argue that blacks hadn’t achieved any kind of civilizational accomplishment because they were intellectually inferior due to their smaller brains (Davis, 1869; Campbell, 1891; Hoffman, 1896; Ridpath, 1897; Christison, 1899).

Robert Bean (1906), working with cadavers, stated that his white cadavers had larger frontal and anterior lobes than his black cadavers, and concluded that blacks were more objective while whites were more subjective. However, it seems that Bean did not state one conclusion—that the brains of his cadavers showed no real difference. Gould (1996: 112) discusses this issue (see Mall, 1909: 8-10, 13; Reuter, 1927). Mall (1909: 32) concluded, “In this study of several anatomical characters said to vary according to race and sex, the evidence advanced has been tested and found wanting.”

Franz Boas also didn’t agree with Bean’s analysis:

Furthermore, in “The Anthropological Position of the Negro,” which appeared in Van Norden’s Magazine a few months later, Boas attempted to refute Bean by arguing that “the anatomical differences” between blacks and whites “are minute,” and “no scientific proof that will stand honest proof … would prove the inferiority of the negro race.” (Williams, 1996: 20)

In 1912, Boas argued that the skull was plastic, so plastic that changes in skull shape between immigrants and their progeny were seen. His results were disputed (Sparks and Jantz, 2002), though Gravlee, Bernard, and Leonard (2002) argued that Boas was right—the shape of the skull indeed was influenced by environmental factors.

When it comes to sex, brain size, and intelligence, this link was discredited by Alice Lee in her 1900 thesis. Lee devised a way to estimate the cranial capacity of living subjects, applied her method to members of the Anthropological Society, and showed a wide variation, with of course overlapping sizes between men and women.

Lee, though, was a staunch eugenicist and did not apply the same thinking to race:

After dismantling the connection between gender and intellect, a logical route would have been to apply the same analysis to race. And race was indeed the next realm that Lee turned to—but her conclusions were not the same. Instead, she affirmed that through systematic measurement of skull size, scientists could indeed define distinct and separate racial groups, as craniometry contended. (The Statistician Who Debunked Sexist Myths About Skull Size and Intelligence)

Contemporary research on race, brain size, and intelligence

Starting from the mid-1980s when Rushton first tried to apply r/K to human races, there was a lively debate in the literature, with people responding to Rushton and Rushton responding back (Cain and Vanderwolf, 1990; Lynn, 1990; Rushton, 1990; Mouat, 1992). Why did Rushton seemingly revive this area of “research” into racial differences in brain size between human races?

Understanding Rushton’s views on racial differences requires starting in his teenage years. Rushton stated that being surrounded by anti-white and anti-western views led him to seek out right-wing ideas:

JPR recalls how the works of Hans Eysenck were significantly influential to the teenage Rushton, particularly his personality questionnaires mapping political affiliation to personality. During those turbulent years JPR describes himself as growing his hair long, becoming outgoing but utterly selfish. Finding himself surrounded by what he described as anti-white and anti-western views, JPR became interested in right-wing groups. He went about sourcing old, forbidden copies of eugenics articles that argued that evolutionary differences existed between blacks and whites. (Forsythe, 2019) (See also Dutton, 2018.)

Knowing this, it makes sense how Rushton was so well-versed in the old 1800s and 1900s literature on racial differences.

For decades, J. P. Rushton argued that the skulls and brains of blacks were smaller than those of whites. Since intelligence was related to brain size in Rushtonian r/K selection theory, this meant that differences in brain size could account for some of the IQ-score differences between blacks and whites. Since the brain-size differences between races amounted to millions of brain cells, this could then explain race differences in IQ (Rushton and Rushton, 2003). Rushton (2010) went as far as to argue that brain size was an explanation for national IQ differences and longevity.

Rushton’s thing in the 90s was to use MRI to measure endocranial volumes (e.g., Rushton and Ankney, 1996). Of course, these studies attempt to show how smaller brain sizes are found in lower classes, women, and non-white races. Quite obviously, this is scientific racism, sexism, and classism (which Murray, 2020 also wrote a book on). In any case, Rushton and Ankney (2009) tried to argue for “general mental ability” and whole brain size, claiming that the older studies “got it right” regarding not only intelligence and brain size but also race and brain size. (Rushton and Ankney, just like Rushton and Jensen 2005, cited Mall, 1909 in the same sentence as Bean, 1906, trying to argue that the differences in brain size between whites and blacks were noted back then—when Mall was a response specifically to Bean! See Gould 1996 for a solid review of Bean and Mall.) Kamin and Omari (1998) showed that whites had greater head height than blacks, while blacks had greater head length and circumference, and they described many errors that Lynn, Rushton, and Jensen made in their analyses of race and head size. Not only did Rushton ignore Tobias’ conclusions when it comes to measuring brains, he also ignored the fact that American blacks, in comparison to American, French, and English whites, had larger brains in Tobias’ (1970) study (Weizmann et al, 1990).

Rushton and Ankney (2009) review much of the same material they did in their 1996 review. They state:

The sex differences in brain size present a paradox. Women have proportionately smaller average brains than men but apparently have the same intelligence test scores.

This isn’t a paradox at all; it’s very simple to explain. Terman assumed that men and women should be equal in IQ and so constructed his test to fit that assumption. Since Terman’s Stanford-Binet test is still in use today, and since newer versions are “validated” on older versions that held the same assumption, it follows that the assumption is still alive today. This isn’t some “paradox” that needs to be explained away by brain size; we just need to look back into history and see why this is the case. The SAT has been changed many times to strengthen or weaken sex differences (Rosser, 1989). It’s funny how this completely astounds hereditarians. “There are large differences in brain size between men and women but hardly any differences in IQ, yet a 1 SD difference in IQ between whites and blacks is supposedly accounted for in part by brain size.” I wonder why that never struck them as absurd? If Rushton accepted brain weight as an indicator that IQ test scores reflected differences in brain size between the races, then he would also need to accept that this should be true for men and women (Cernovsky, 1990), but Rushton never proposed anything like that. Indeed he couldn’t, since sex differences in IQ are small or nonexistent.

In their review papers, Rushton and Ankney, as did Rushton and Jensen (I should assume that this was Rushton’s contribution to the paper since he also has the same citations and arguments in his book and other papers), consistently return to a few references: Mall, Bean, Vint and Gordon, Ho et al, and Beals et al. Cernovsky (1995) has a masterful response to Rushton in which he dismantles his inferences and conclusions based on other studies. Cernovsky showed that Rushton’s claim that there are consistent differences between races in brain size is false; Rushton misrepresented other studies which showed blacks having heavier brains and larger cranial capacities than whites. He misrepresented Beals et al by claiming that the differences in the skulls they studied were due to race, when race was spurious: climate explained the differences regardless of race. And Rushton even misrepresented Herskovits’ data, which showed no differences regarding stature or crania. So Rushton even misrepresented the brain-body size literature.

Now I need to discuss one citation line that Rushton went back to again and again throughout his career writing about racial differences. In articles like Rushton (2002), Rushton and Jensen (2005), and Rushton and Ankney (2007, 2009), Rushton went back to a similar citation line: citing early 1900s studies which purported to show racial differences. Knowing what we know about Rushton looking for old eugenics articles that claimed evolutionary differences existed between blacks and whites, this can now be placed into context.

Weighing brains at autopsy, Broca (1873) found that Whites averaged heavier brains than Blacks and had more complex convolutions and larger frontal lobes. Subsequent studies have found an average Black–White difference of about 100 g (Bean, 1906; Mall, 1909; Pearl, 1934; Vint, 1934). Some studies have found that the more White admixture (judged independently from skin color), the greater the average brain weight in Blacks (Bean, 1906; Pearl, 1934). In a study of 1,261 American adults, Ho et al. (1980) found that 811 White Americans averaged 1,323 g and 450 Black Americans averaged 1,223 g (Figure 1).

There are, however, some problems with this citation line. For instance, Mall (1909) was actually a response to Bean (1906). Mall reanalyzed the brains blind to race and found no differences in the brain between blacks and whites. Regarding the Ho et al citation, Rushton completely misrepresented their conclusions. Further, brains that are autopsied aren’t representative of the population at large (Cain and Vanderwolf, 1990; see also Lynn, 1989; Fairchild, 1991). Rushton also misrepresented the conclusions of Beals et al (1984) over the years (eg, Rushton and Ankney, 2009). Rushton reported that they found his same racial hierarchy in brain size. Cernovsky and Littman (2019) stated that Beals et al’s conclusion was that cranial size varied with climatic zone and not race, and that the correlation between race and brain size was spurious, with smaller heads found in warmer climates regardless of race. This is yet more evidence that Rushton ignored data that did not fit his a priori conclusions (see Cernovsky, 1997; Lerner, 2019: 694-700). Nevertheless, it seems that Rushton’s categorization of races by brain size cannot be valid (Peters, 1995).

It would seem to me that Rushton was well-aware of these older papers due to what he read in his teenage years. Although at the beginning of his career, Rushton was a social learning theorist (Rushton, 1980), quite obviously Rushton shifted to differential psychology and became a follower—and collaborator—of Jensenism.

But what is interesting here in the renewed ideas of race and brain size are the different conclusions that different investigators came to after they measured skulls. Lieberman (2001) produced a table which shows different rankings of different races over the past few hundred years.

Table 1 from Lieberman, 2001 showing different racial hierarchies in the 19th and 20th century

As can be seen, there is a stark contrast in who was on top of the hierarchy based on the time period the measurements were taken. Why may this be? Obviously, this is due to what the investigator wanted to find—if you’re looking for something, you’re going to find it.

Rushton (2004) sought to revive the scala naturae, proposing that g—the general factor of intelligence—sits atop a matrix of correlated traits, and he tried to argue that the concept of progress should return to evolutionary biology. Rushton’s r/K theory has been addressed in depth, and his claim that evolution is progressive is false. Nevertheless, even Rushton’s claim that brain size was selected for over evolutionary history also seems to be incorrect—it was body size that was, and since larger bodies have larger brains, this explains the relationship. (See Deacon, 1990a, 1990b.)

Salami et al (2017) used brains from fresh cadavers, severing them from the spinal cord at the foramen magnum and completely removing the dura mater. This allowed them to measure the whole brain without any confounds due to parts of the spinal cord which aren’t actually parts of the brain. They found that the mean brain weight for blacks was 1280g, with a range of 1015g to 1590g, while the mean weight of male brains was 1334g. Govender et al (2018) showed a mean brain weight of 1404g for the brains of black males.

Rushton aggregated data from myriad different sources and time periods, claiming that by aggregating even data which may have been questionable in quality, the true differences in brain size would appear when averaged out. Rushton, Brainerd, and Pressley (1983) defended the use of aggregation, stating, “By combining numerous exemplars, such errors of measurement are averaged out, leaving a clearer view of underlying relationships.” However, this method, which Rushton used throughout his career, has been widely criticized (eg, Cernovsky, 1993; Lieberman, 2001).

Rushton was quoted as saying, “Even if you take something like athletic ability or sexuality—not to reinforce stereotypes or some such thing—but, you know, it’s a trade-off: more brain or more penis. You can’t have both.” How strange—because for 30 years Rushton pushed stereotypes as truth and built a whole (invalid) research program around them. The fact of the matter is, when it comes to Rushton’s hierarchy, Asians in America are a selected population. Thus, even there, Rushton’s claim rests on values taken from a population selected for immigration into the country.

While Asians had larger brains and higher IQ scores, they had lower sexual drive and smaller genitals; blacks had smaller brains and lower IQ scores with higher sexual drive and larger genitals; whites were just right, having brains slightly smaller than Asians with slightly lower IQs and lower sexual drive than blacks but higher than Asians along with smaller genitals than blacks but larger than Asians. This is Rushton’s legacy—keeping up racial stereotypes (even then, his claims on racial differences in penis size do not hold.)

The misleading arguments on brain size lend further evidence against Rushton’s overarching program. Thus, this discussion is yet more evidence that Rushton was anything but a “serious scholar”; he trolled shopping malls asking people about their sexual exploits. He was clearly an ideologue with a point to prove about race differences, one which probably manifested in his younger, teenage years. Rushton got a ton wrong, and we can now add brain size to that list, too, due to his fudging of data, misrepresenting of data, and exclusion of data that didn’t fit his a priori biases.

Quite clearly, whites and Asians have all the “good” while blacks and other non-white races have all the “bad.” And thus, what explains social positions not only in America but throughout the world (based on Lynn’s fraudulent national IQs; Sear, 2020) is IQ, which is mediated by brain size. Brain size was but a part of Rushton’s racial rank ordering, known as r/K selection theory or differential K theory. However, his theory didn’t replicate, and it was found that any differences noticed by Rushton could be environmentally-driven (Gorey and Cryns, 1995; Peregrine, Ember, and Ember, 2003).

The fact of the matter is, Rushton has been summarily refuted on many of his incendiary claims about racial differences, so much so that a couple of years ago quite a few of his papers were retracted (three in one swipe). Among them was a theoretical article arguing that melanocortin and skin color may mediate aggression and sexuality in humans (Rushton and Templer, 2012). (This appears to be the last paper that Rushton published before his death in October, 2012. How poetic that it was retracted.) The retraction was due mainly to an outstanding and in-depth look into the arguments and citations made by Rushton and Templer. (See my critique here.)

Conclusion

Quite clearly, Gould got it right about Morton—Gould’s reanalysis showed the unconscious bias that was inherent in Morton’s thoughts on his skull collection. Gould’s—and Weisberg’s—reanalyses show that there are only small differences in the skulls of Morton’s collection. Even then, Gould’s landmark book showed that the study of racial differences—in this case, in brain and skull size—came from a place of racist thought. Writings from Rushton and others carry on this flame, although Rushton’s work was shown to have considerable flaws, along with the fact that he outright ignored data that didn’t fit his a priori convictions.

Although comparative studies of brain size have been widely criticized (Healy and Rowe, 2007), they quite obviously survive today due to the assumptions that hereditarians hold about “IQ” and brain size, along with the assumption that there are racial differences in brain size and that these differences are causal for socially-important things. However, as can be seen, the comparative study of racial brain sizes and the assumption that IQ is causally mediated by brain size are hugely mistaken. Morton’s studies were clouded by his racial bias, as Gould and Weisberg and Kaplan et al showed. When Rushton, Jensen, and Lynn arose, they tried to carry on that flame, correlating head size and IQ while claiming that smaller head sizes and—by identity—smaller brains are related to a suite of negative traits.

The brain is of course an experience-dependent organ and people are exposed to different types of knowledge based on their race and social class. This difference in knowledge exposure based on group membership, then, explains IQ scores. Not any so-called differences in brain size, brain physiology or genes. And while Cairo (2011) concludes that “Everything indicates that experience makes the great difference, and therefore, we contend that the gene-environment interplay is what defines the IQ of an individual“, genes are merely necessary for that, not sufficient. Of course, since IQ is an outcome of experience, this is what explains IQ differences between groups.

Table 1 from Lieberman (2001) is very telling about Gould’s overarching claim about bias in science. As the table shows, the hierarchy in brain size was constantly shifting throughout the years based on a priori biases. Even in the same time period, different authors came to different conclusions on whether or not there are differences in brain size between races. Quite obviously, the race scientists would show that race is the significant variable in whatever they were studying, and so the average differences in brain size would then reflect differences in genes and then intelligence, which would then be reflected in civilizational accomplishments. That’s the line of reasoning that hereditarians like Rushton use when operating under these assumptions.

Science itself isn’t racist, but racist individuals can attempt to use science to import their biases and thoughts on certain groups to the masses and use a scientific veneer to achieve that aim. Rushton, Jensen and others have particular reasons to believe what they do about the structure of society and how and why certain racial groups are in the societal spot they are in. However, these a priori conceptions they had then guided their research programs for the rest of their lives. Thus, Gould’s main claim in Mismeasure about the bias that was inherent in science is well-represented: one only needs to look at contemporary hereditarian writings to see how their biases shape their research and interpretations of data.

In the end, we don’t need just-so stories to explain how and why races differ in IQ scores. We most definitely don’t need any kinds of false claims about how brain size is causal for intelligence. Nor do we need to revive racist thought on the causes and consequences of racial differences in brain size. Quite obviously, Rushton was a dodgy character in his attempt to prove his tri-archic racial theory using r/K selection theory. But it seems that when one surveys the history of accounts of racial differences in brain size and how these values were ascertained, upon critical examination, such differences claimed by the hereditarian all but disappear.