NotPoliticallyCorrect


IQ, Achievement Tests, and Circularity

2150 words

Introduction

In the realm of educational assessment and psychometrics, a distinction between IQ and achievement tests needs to be upheld. It is claimed that IQ is a measure of one's potential learning ability, while achievement tests show what one has actually learned. However, this distinction is not strongly supported in my reading of the literature. IQ and achievement tests are merely different versions of the same evaluative tool. This is what I will argue in this article: that IQ and achievement tests are different versions of the same test, and so any attempt to "validate" IQ tests against other IQ tests, achievement tests, or job performance is circular. I will also argue that, of course, the goal of psychometrics, measuring the mind, is impossible. The hereditarian argument, when it comes to defending the concept and the claim that they are measuring some unitary and hypothetical variable, then fails. At best, these tests show one's distance from the middle class, since that is where most of the items on the test derive from. Thus, IQ and achievement tests are different versions of the same test, and so they merely show one's "distance" from a certain kind of class-specific knowledge (Richardson, 2012), due to the cultural and psychological tools one must possess to score well on these tests (Richardson, 2002).

Circular IQ-ist arguments

IQ-ists have been using IQ tests since they were brought to America by Henry Goddard in 1913. But one major issue (one they still haven't solved, and quite honestly never will) was that they had no way to ensure that the tests were construct valid. This is why, in 1923, Boring stated that "intelligence is what intelligence tests test," while Jensen (1972: 76) said "intelligence, by definition, is what intelligence tests measure." However, such statements are circular, and they are circular because they define the construct by the test and so provide no real evidence or explanation.

Boring’s claim that “intelligence is what intelligence tests test” is circular since it defines intelligence based on the outcome of “intelligence tests.” So if you ask “What is intelligence“, and I say “It’s what intelligence tests measure“, I haven’t actually provided a meaningful definition of intelligence. The claim merely rests on the assumption that “intelligence tests” measure intelligence, not telling us what it actually is.

Jensen's (1972) claim that "intelligence, by definition, is what intelligence tests measure" is circular for reasons similar to Boring's, since it also defines intelligence by referring to "intelligence tests" while at the same time assuming that intelligence tests accurately measure intelligence. Neither claim provides an independent understanding of what intelligence is; each merely ties the concept of "intelligence" back to its "measurement" (by IQ tests). Jensen's defense of Spearman's hypothesis on the nature of black-white differences has also been criticized as circular (Wilson, 1985). Not only was Jensen (and by extension Spearman) guilty of circular reasoning, so too was Sternberg (Schlinger, 2003). Such a circular claim was also made by van der Maas, Kan, and Borsboom (2014).

But Jensen seemed to have changed his view, since in his 1998 book The g Factor he argues that we should dispense with the term "intelligence," yet, curiously, that we should still study the g factor and assume identity between IQ and g. (Jensen made many more logical errors in his defense of "general intelligence," like saying not to reify intelligence on one page and then reifying it a few pages later.) Circular arguments have been identified not only in Jensen's writings on Spearman's hypothesis, but also in the use of construct validity to validate a measure (Gordon, Schonemann; Guttman, 1992: 192).

The same circularity can be seen when the correlation between IQ and achievement tests is brought up. "These two tests correlate, so they're measuring the same thing" is an example one may come across. But the error here is assuming that mental measurement is possible and that IQ and achievement tests are independent of each other. However, IQ and achievement tests are different versions of the same test. This is an example of circular validation, which occurs when a test's "validity" is established by the test itself, leading to a self-reinforcing loop.

IQ tests are often validated against older editions of the test. For example, a newer version of the S-B would be "validated" against the older version of the test that it was created to replace (Howe, 1997: 18; Richardson, 2002: 301), which not only leads to circular "validation" but also carries forward the assumptions of the original test constructors (like Terman), which would still then be alive in the test itself (since Terman assumed men and women should be equal in IQ, this assumption is still there today). IQ tests are also often "validated" by comparing IQ test results to outcomes like job performance and academic performance. Richardson and Norgate (2015) have a critical review of the correlation between IQ and job performance, arguing that it's inflated by "corrections," while Sackett et al. (2023) report "a mean observed validity of .16, and a mean corrected for unreliability in the criterion and for range restriction of .23. Using this value drops cognitive ability's rank among the set of predictors examined from 5th to 12th" for the correlation between "general cognitive ability" and job performance.
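To make the "corrections" point concrete, here is a minimal sketch (in Python) of the standard psychometric corrections that move an observed validity like .16 toward a larger "corrected" figure. The criterion-reliability and range-restriction values below are hypothetical illustrations, since Sackett et al. report only the observed and corrected coefficients:

```python
import math

def correct_for_criterion_unreliability(r_obs, criterion_reliability):
    # Classical correction for attenuation: divide by the square root of the
    # criterion's reliability (e.g., the reliability of supervisor ratings).
    return r_obs / math.sqrt(criterion_reliability)

def correct_for_range_restriction(r_obs, u):
    # Thorndike Case II correction for direct range restriction, where
    # u = SD(applicant pool) / SD(selected employees), assumed > 1.
    return (r_obs * u) / math.sqrt(1 + r_obs**2 * (u**2 - 1))

r_observed = 0.16  # mean observed validity reported by Sackett et al. (2023)
r_step1 = correct_for_criterion_unreliability(r_observed, criterion_reliability=0.60)  # hypothetical value
r_step2 = correct_for_range_restriction(r_step1, u=1.15)                               # hypothetical value
print(round(r_step1, 2), round(r_step2, 2))  # 0.21 0.24 with these assumed inputs
```

The observed coefficient never changes; the larger "corrected" figure depends entirely on the assumed reliability and restriction values, which is why Richardson and Norgate treat such corrected correlations as inflated.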

But this leads to circular validation: if a high IQ is used as a predictor of success in school or work, and success in school or work is then used as evidence validating the IQ test, the argument runs in a circle. The test's validity is being supported by the very outcome it is supposed to predict.

Achievement tests are designed to see what one has learned or achieved regarding a certain kind of subject matter. Achievement tests are often validated by correlating test scores with grades or other kinds of academic achievement (which would also be circular). But if high achievement test scores are used to validate the test and those scores are also used as evidence of academic achievement, then that is circular. Achievement tests are "validated" on their relationship with IQ tests and grades. Heckman and Kautz (2013) note that "achievement tests are often validated using other standardized achievement tests or other measures of cognitive ability—surely a circular practice" and "Validating one measure of cognitive ability using other measures of cognitive ability is circular." But it should also be noted that the correlation between college grades and job performance 6 or more years after college is only .05 (Armstrong, 2011).

Now what about the claim that IQ tests and achievement tests correlate so they measure the same thing? Richardson (2017) addressed this issue:

For example, IQ tests are so constructed as to predict school performance by testing for specific knowledge or text‐like rules—like those learned in school. But then, a circularity of logic makes the case that a correlation between IQ and school performance proves test validity. From the very way in which the tests are assembled, however, this is inevitable. Such circularity is also reflected in correlations between IQ and adult occupational levels, income, wealth, and so on. As education largely determines the entry level to the job market, correlations between IQ and occupation are, again, at least partly, self‐fulfilling

The circularity inherent in likening IQ and achievement tests has also been noted by Nash (1990). There is no principled distinction between IQ and achievement tests, since there is no theory or definition of intelligence specifying how intelligence, so defined, relates to answering questions correctly on an IQ test.

But how, to put first things first, is the term ‘cognitive ability’ defined? If it is a hypothetical ability required to do well at school then an ability so theorised could be measured by an ordinary scholastic attainment test. IQ measures are the best measures of IQ we have because IQ is defined as ‘general cognitive ability’. Actually, as we have seen, IQ theory is compelled to maintain that IQ tests measure ‘cognitive ability’ by fiat, and it therefore follows that it is tautologous to claim that IQ tests are the best measures of IQ that we have. Unless IQ theory can protect the distinction it makes between IQ/ability tests and attainment/ achievement tests its argument is revealed as circular. IQ measures are the best measures of IQ we have because IQ is defined as ‘general cognitive ability’: IQ tests are the only measures of IQ.

The fact of the matter is, IQ “predicts” (is correlated with) school achievement since they are different versions of the same test (Schwartz, 1975; Beaujean et al, 2018). Since the main purpose of IQ tests in the modern day is to “predict” achievement (Kaufman et al, 2012), then if we correctly identify IQ and achievement tests as different versions of the same test, then we can rightly state that the “prediction” is itself a form of circular reasoning. What is the distinction between “intelligence” tests and achievement tests? They both have similar items on them, which is why they correlate so highly with each other. This, therefore, makes the comparison of the two in an attempt to “validate” one or the other circular.

I can now argue that the distinction between IQ and achievement tests is nonexistent. IQ and achievement tests contain similar informational content, and so both can be considered knowledge tests—tests of class-specific knowledge. They share the same domain of assessing knowledge and skills, and tests that sample the same domain with similar item content are, for all practical purposes, different versions of the same test. Therefore, IQ and achievement tests are different versions of the same test.
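As a purely illustrative sketch of the mechanism being claimed here (not evidence for it), the following Python toy simulation assumes, per the argument above, that an "IQ" test and an "achievement" test both sample items from the same pool of class-specific background knowledge. Under that assumption the two scores correlate highly simply because they draw on the same domain; all names and parameters are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_items = 1000, 40

# Assumed setup: each person has some amount of class-specific background knowledge,
# and both tests sample items tapping that same knowledge (plus item-level noise).
knowledge = rng.normal(size=n_people)

def knowledge_test(knowledge, n_items, noise=1.0):
    # An item is answered correctly when the person's knowledge outweighs the item's noise.
    items = knowledge[:, None] + rng.normal(scale=noise, size=(len(knowledge), n_items))
    return (items > 0).sum(axis=1)

iq_like = knowledge_test(knowledge, n_items)
achievement_like = knowledge_test(knowledge, n_items)
print(np.corrcoef(iq_like, achievement_like)[0, 1])  # prints a strong positive correlation
```

In this toy setup the shared variance comes entirely from the common pool of background knowledge the two tests sample, which is how the article interprets the IQ-achievement correlation.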

Moreover, even constructing tests has been criticized as circular:

Given the consistent use of teachers’ opinions as a primary criterion for validity of the Binet and Wechsler tests, it seems odd to claim  then that such tests provide “objective alternatives to the subjective judgments of teachers and employers.”  If the tests’ primary claim to predictive validity is that their results have strong correlations with academic success, one wonders how an objective test can predict performance in an acknowledged subjective environment?  No one seems willing to acknowledge the circular and tortuous reasoning behind the development of tests that rely on the subjective judgments of secondary teachers in order to develop an assessment device that claims independence of those judgments so as to then be able to claim that it can objectively assess a student’s ability to  gain the approval of subjective judgments of college professors.  (And remember, these tests were used to validate the next generation of tests and those tests validated the following generation and so forth on down to the tests that are being given today.) Anastasi (1985) comes close to admitting that bias is inherent in the tests when he confesses the tests only measure what many anthropologists would called a culturally bound definition of intelligence. (Thorndike and Lohman, 1990)

Conclusion

It seems clear to me that almost the whole field of psychometrics is plagued with the problem of inferring causes from correlations and with using circular arguments, relating IQ to job and academic performance, in an attempt to justify and validate the claim that IQ tests measure intelligence. Moreover, circular arguments aren't restricted to IQ and achievement tests; they also appear in twin studies (Joseph, 2014; Joseph et al, 2015). IQ and achievement tests merely show what one knows, not one's learning potential, since they are general knowledge tests—tests of class-specific knowledge. So even Gottfredson's "definition" of intelligence fails, since Gottfredson presumes IQ to be a measure of learning ability (never mind the fact that the "definition" is so narrow that I struggle to think of a valid way to operationalize it with culture-bound tests).

The fact that newer versions of tests already in circulation are "validated" against older versions of the same test means that the tests are circularly validated. The original test (say, the S-B) was never itself validated, so the newer test is being "validated" on the assumption that the older one was valid. When the newer test is compared to its predecessor, the "validation" rests on an older test with similar principles, assumptions, and content. Content overlap is a problem too, since some questions or tasks on the newer test could be identical to questions or tasks on the older test. The point is, both IQ and achievement tests are merely knowledge tests, not tests of a mythical general cognitive ability.

Examining Misguided Notions of Evolutionary “Progress”

2650 words

Introduction

For years, PumpkinPerson (PP) has been pushing an argument which states that "if you're the first branch, and you don't do anymore branching, then you're less evolved than higher branches." This is the concept of "more evolved," or the concept of evolutionary progress. Over the years I have written a few articles on the confused nature of this thinking. PP seems to like the argument since Rushton deployed a version of it for his r/K selection (Differential K) theory, which stated that "Mongoloids" are more "K evolved" than "Caucasoids," who are more "K evolved" than "Negroids," to use Rushton's (1992) language. Rushton posited that this ordering occurred due to the cold winters that the ancestors of "Mongoloids" and "Caucasoids" underwent, and he theorized that this led to evolutionary progress, which would mean that certain populations are more advanced than others (Rushton, 1992; see here for response). It is in this context that PP's statement above needs to be carefully considered and analyzed to determine its implications and relevance to Rushton's argument. It commits the fallacy of affirming the consequent, and assuming the statement is true leads to many logical inconsistencies, like there being a "most evolved" species.

Why this evolutionary progress argument is fallacious

if you’re the first branch, and you don’t do anymore branching, then you’re less evolved than higher branches.

This is one of the most confused statements I have ever read on the subject of phylogenies. The misconception, though, is so widespread that quite a few papers discuss it and how to steer students away from this kind of thinking about evolutionary trees (Crisp and Cook, 2004; Baum, Smith, and Donovan, 2005; Gregory, 2008; Omland, Cook, and Crisp, 2008). The argument is invalid because "evolved" in evolutionary trees doesn't refer to a hierarchical scale where higher branches are "more evolved" and lower branches "less evolved." What evolutionary trees show is historical relationships between different species, that is, common ancestry and divergence over time. Each branch represents a lineage, and all living organisms have been evolving for the same amount of time since the last common ancestor (LCA). Thus, the position of a branch on the tree doesn't determine a species' level of evolution, as the sketch below illustrates.
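Here is a minimal sketch using a hypothetical ultrametric tree (the names and branch lengths below are invented purely for illustration): a lineage that split off early and never branched again has spent exactly as much time evolving since the last common ancestor as the lineages that kept branching:

```python
# Each node: (name, branch_length_from_parent_in_Myr, children).
# Hypothetical clade: lineage A diverged 10 Myr ago and never branched again;
# the B/C lineage branched again 4 Myr ago.
tree = ("LCA", 0.0, [
    ("A", 10.0, []),
    ("BC-ancestor", 6.0, [
        ("B", 4.0, []),
        ("C", 4.0, []),
    ]),
])

def time_since_lca(node, elapsed=0.0):
    """Yield (tip_name, total_time_since_the_root) for every extant tip."""
    name, length, children = node
    elapsed += length
    if not children:
        yield name, elapsed
    for child in children:
        yield from time_since_lca(child, elapsed)

print(dict(time_since_lca(tree)))  # {'A': 10.0, 'B': 10.0, 'C': 10.0}
```

The amount of branching along a lineage changes nothing about the elapsed evolutionary time; branching points only record divergence events.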

The argument is invalid since it incorrectly assumes that the position of the branch on a phylogeny determines the evolution or the “evolutionary advancement” of a species. Here’s how I formulate this argument:

(P1) If you’re the first branch on the evolutionary tree and you don’t do any more branching, then you’re less evolved than higher branches.
(P2) (Assumption) Evolutionary advancement is solely determined by the position on the tree and the number of branches.
(C) So species represented by higher branches on the evolutionary tree are more evolved than species represented by lower branches.

But P2 is false since, as I explained above, each branch represents a new lineage and every species on the tree is equally evolved. PP's assumption seems to be that newer branches have different traits than the species that preceded them, implying that an advancement is occurring. Nevertheless, I can use a reductio to refute the argument.

Let’s consider a hypothetical scenario in which this statement is true: “If you’re the first branch and you don’t do any more branching, then you’re less evolved than higher branches.” This suggests that the position of a species on a phylogeny determines its level of evolution. So according to this concept, if a species occupies a higher branch, it should be “more evolved” than a species on a lower branch. So following this line of reasoning, a species that has undergone extensive branching and diversification should be classified as “more evolved” compared to a species that has fewer branching points.

Now imagine that in this hypothetical scenario, we have species A and species B in a phylogeny. Suppose that species A is the first branch and that it hasn’t undergone any branching. Conversely, species B, which is represented on a higher branch, has experienced extensive branching and diversification, which adheres to the criteria for a species to be considered “more evolved.” But there are logical implications for the concept concerning the positions of species A and species B on the phylogeny.

So according to the concept of linear progression which is implied in the original statement, if species B is "more evolved" than species A due to its higher branch position, it logically follows that species B should continue to further evolve and diversify. This progression should lead to new branching points, as each subsequent stage would be considered "more evolved" than the last. Thus, applying the line of reasoning in the original statement, there should always be a species represented on an even higher branch than species B, and this should continue ad infinitum, with no endpoint.

The logical consequence of the statement is an infinite progression of increasingly evolved species, each represented by a higher branch than the one before, without any final or ultimate endpoint for a "most evolved" species. This result leads to an absurdity, since it contradicts our understanding of evolution as an ongoing and continuous process. The idea of a linear and hierarchical progression of a species in an evolutionary tree culminating in a "most evolved" species isn't supported by our scientific understanding, and it leads to an absurd outcome.

Thus, the logical implications of the statement "If you're the first branch and you don't do any more branching, then you're less evolved than higher branches" lead to an absurd and contradictory result, and so it must be false. The idea that the position of a species on an evolutionary tree determines its level of evolution isn't supported by scientific evidence and understanding. Phylogenies represent historical relationships and divergence events over time.

(1) Assume the original claim is true: If you’re the first branch and you don’t do any more branching, then you’re less evolved than higher branches.

(2) Suppose species A is the first branch and undergoes no further branching.

(3) Now take species B which is in a higher branch which has undergone extensive diversification and branching, making it “more evolved”, according to the statement in (1).

(4) But based on the concept of linear progression implied in (1), species B should continue to evolve and diversify even further, leading to new branches and increased evolution.

(5) Following the logic in (1), there should always be a species represented on an even higher branch than species B, which is even more evolved.

(6) This process should continue ad infinitum, with species continually branching and becoming "more evolved" without an endpoint.

(7) This leads to an absurd result, since it suggests that no species could ever be considered "most evolved" or reach a final stage of evolution, contradicting our understanding of evolution as a continuous, ongoing process with no ultimate endpoint.

(8) So since the assumption in (1) leads to an absurd result, then it must be false.

So the original statement is false, and a species’ position on a phylogeny doesn’t determine the level of evolution and the superiority of a species. The concept of a linear and hierarchical progression of advancement in a phylogeny is not supported by scientific evidence and assuming the statement in (1) is true leads to a logically absurd outcome. Each species evolves in its unique ecological context, without reaching a final state of evolution or hierarchical scale of superiority. This reductio ad absurdum argument therefore reveals the fallacy in the original statement.

Also, think about the claim that there are species that are “more evolved” than other species. This implies that there are “less evolved” species. Thus, a logical consequence of the claim is that there could be a “most evolved” species.

So if a species is "most evolved," it would mean that that species has surpassed all others in evolutionary advancement and there are no other species more advanced than it. Following this line of reasoning, there should be no further branching or diversification of this species since it has already achieved the highest level of evolution. But evolution is an ongoing process. Organisms continuously adapt to and change their surroundings (the organism-environment system), and change in response to them. But if the "most evolved" species is static, this contradicts what we know about evolution, mainly that it is continuous, ongoing change—it is dynamic. Further, as the environment changes, the "most evolved" species could become less suited to the environment's conditions over time, leading to a decline in its numbers or even its extinction. This would then imply that there would have been other species that are "more evolved." (It merely shows the response of the organism to its environment and how it develops differently.) Finally, the idea of a "most evolved" species implies an endpoint of evolution, which contradicts our knowledge of evolution and the diversification of life on earth. Therefore, the assumption that there is a "most evolved" species leads to a logical contradiction and an absurdity based on what we know about evolution and life on earth.

The statement embodies scala naturae thinking, also known as the great chain of being. This is something Rushton (2004) sought to bring back to evolutionary biology. However, the assumptions that need to hold for this to be true—that is, the assumptions that need to hold for this kind of tree reading to even be within the realm of possibility—are false. This is wonderfully noted by Gregory (2008), who states that "the order of terminal nodes is meaningless." Crisp and Cook (2004) also note how such tree-reading is intuitive and how this intuition is false:

Intuitive interpretation of ancestry from trees is likely to lead to errors, especially the common fallacy that a species-poor lineage is more ‘ancestral’ or ‘diverges earlier’ than does its species-rich sister group. Errors occur when trees are read in a one-sided way, which is more commonly done when trees branch asymmetrically.

There are several logical implications of that statement. I've already covered the claim that there is a kind of progression and advancement in evolution—a linear and hierarchical ranking—and the fixed endpoint ("most evolved"). Further, in my view, this leads to value judgments, that some species are "better" than or "superior" to others. It also ignores that branching signifies not which species has undergone more evolution, but the evolutionary relationships between species. Finally, since evolution occurs independently in each lineage, based on its specific history and the interactions between developmental resources, it isn't valid to rank species as "more evolved" than others from their relationships on evolutionary trees; any such ranking rests on an arbitrary comparison between species.

Finally, I can refute this using Gould’s full house argument.

P1: If evolution is a ladder of progress, with “more evolved” species on higher rungs, then the fossil record should demonstrate a steady increase in complexity over time.
P2: The fossil record does not show a steady increase in complexity over time.
C: Therefore, evolution is not a ladder of progress and species cannot be ranked as “more evolved” based on complexity.


P1: If the concept of "more evolved" is valid, then there would be a linear and hierarchical progression in the advancement of evolution, with certain species considered superior to others based on their perceived level of evolutionary change.
P2: If there is a linear and hierarchical progression of advancement in evolution, then the fossil record should demonstrate a steady increase in complexity over time, with species progressively becoming more complex and "better" in a hierarchical sense.
P3: The fossil record does not show a steady increase in complexity over time; it instead shows a diverse and branching pattern of evolution.
C1: So the concept of "more evolved" isn't valid, since the absence of a steady increase in complexity in the fossil record refutes the notion of a linear and hierarchical progression of advancement in evolution.
P4: If the concept of "more evolved" is not valid, then there is no objective hierarchy of superiority among species based on their positions on an evolutionary tree.
C2: Thus, there is no objective hierarchy of superiority among species based on their positions on an evolutionary tree.

There is one final fallacy contained in that statement: it affirms the consequent. This fallacy takes the form: if P then Q; Q is true; therefore P is true. Even if the concept of "more evolved" were valid, the mere fact that a species does no further branching wouldn't show that it is less evolved. The statement can be read as: if you're the first branch and you don't do any more branching (P and Q), then you're less evolved than higher branches (R). The reasoning then runs backwards from the branching pattern: this lineage did no more branching (Q), so it must be less evolved than the higher branches (R), treating the lack of branching as if it were sufficient to establish lower evolutionary status. But a species that does no further branching is not thereby less evolved than another species; there could be numerous reasons why branching didn't occur, and branching doesn't directly determine evolutionary status. Inferring "less evolved" from "less branching" affirms the consequent, and so the argument is invalid.
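For completeness, a small brute-force truth-table check (a sketch in Python) confirms that affirming the consequent, inferring P from "if P then Q" plus Q, is not a valid form: there is an assignment that makes the premises true and the conclusion false:

```python
from itertools import product

def implies(a, b):
    # Material conditional: "if a then b" is false only when a is true and b is false.
    return (not a) or b

# Look for truth assignments where the premises of affirming the consequent
# ("if P then Q" and "Q") are true while the conclusion ("P") is false.
counterexamples = [(p, q) for p, q in product([True, False], repeat=2)
                   if implies(p, q) and q and not p]
print(counterexamples)  # [(False, True)] -- so the inference form is invalid
```

A single counterexample is enough to show that the form is invalid, whatever P and Q stand for.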

Conclusion

Reading phylogenies in such a manner—in a way that would make one infer the conclusion that evolution is progressive and that there are "more evolved" species—is intuitive but false. Misconceptions like this, along with many others about reading evolutionary trees, are so persistent that much thought has been put into educating the public on right and wrong ways to read evolutionary trees.

As I showed in my reductio ad absurdum arguments, where I accepted the claim as true, it leads to logical inconsistencies and goes against everything we know about evolution. Evolution is not progressive; it's merely local change. That a species changes over time from another species doesn't imply anything about "how evolved" ("more" or "less") it is in comparison to the other. Excising this thinking is tough, but it is doable by understanding how evolutionary trees are constructed and how to read them correctly. The statement further affirms the consequent, leading to an invalid conclusion.

All living species have spent the same amount of time evolving. Branching merely signifies a divergence, not a linear scale of advancement. Of course, one might think that if one species evolves into another, and this relationship is shown on a tree, then the newer species must be "better" in some way than the species it derived from. But the tree merely indicates that the species faced different challenges which influenced its evolution; each species adapted and survived in its own unique evolutionary ecology, leading to diversification and the formation of newer branches on the tree. Evolution does not follow a linear path of progress, and there is no inherent hierarchy of superiority among species based on their position on the evolutionary tree. While the tree visually represents relationships between species, it doesn't imply judgments like "better" or "worse," "more evolved" or "less evolved." It merely highlights the complexity and diversity of all life on earth.

Evolution is quite obviously not progressive, and even if it were, we couldn't read progression off evolutionary trees. The same evolutionary relationships can be drawn ladderized or not, with many different branching arrangements that may not be intuitive to those who read trees as showing "more evolved" species; whatever their layout, they depict the same valid evolutionary relationships.

When Will Bo Winegard Understand What Social Constructivists Say and Don’t Say About Race?: Correcting Bo’s Misconceptions About Social Constructivism

3200 words

For years, it has seemed that Bo Winegard—former professor (his contract was not renewed)—does not understand the difference between social constructivism about race (racial constructivism refers to the same thing as social constructivism about race, so I will use the two terms interchangeably) and biological realism about race. He seems to assume that racial constructivists say that race isn't real. But if that were true, how would it make sense to call race a social concept if it doesn't exist? It also seems to me that he is saying that social constructs aren't real. Money and language are social constructs too, so does that mean they aren't real? It doesn't make sense to say that.

Quite obviously, if X is a social convention, then X is real, albeit a social reality. So if race is a social convention, then race is real. However, Bo's ignorance of this debate is seen perfectly in this quote from a recent article he wrote:

Some conservative social constructionists and culture-only theorists (i.e., non race realists) have pushed back against the excesses of racial progressivism

In this article, Bo (rightly) claims that hereditarianism is a subset of race realism (I make a distinction between psychological and racial hereditarianism). But where Bo goes wrong is in not making the distinction between biological and social racial realism, as, for example, Kaplan and Winther (2014) did. Kaplan and Winther's paper is actually the best look into how the concepts of bio-genomic cluster/racial realism, socialrace, and biological racial realism are realist positions about race. The AAPA even stated a few years back that "race has become a social reality that structures societies and how we experience the world. In this regard, race is real, as is racism, and both have real biological consequences." Membership in socially-defined (racial) groups can have real-life impacts even if there are no biological races in the human species (Graves, 2015). That is: the social can and does become biological (meaning that the social can manifest itself in biology).

Bo’s ignorance can be further seen with this quote:

there is an alternative position: race realism, which argues that people use the concept of race for the same reason that they use the concept of species or sex

And he also wrote a tweet while explaining his “moderate manifesto”, again pushing the same misconception that social constructivists about race aren’t realists about race:

(2) Modern elite discourse contends that race is illusory, a kind of reified figment of our social imagination. BUT, it also contends that we need to promote race-conscious policies to rectify past wrongs. Race is unreal. But some “races” deserve benefits.

… I should note that race, like many other categories, is partially a social construct. But that does not mean that it’s not real.

Bo makes claims like "race is real," as if social constructivists don't think that race is real. But holding that race is real, as a social kind, is exactly what social constructivists about race do. I think Bo is talking about racial anti-realists like Joshua Glasgow, who claim that race is neither socially nor biologically real. But social constructivists and biological racial realists (a term that gets thorny, since it could refer to a "realist" like Rushton or Jensen, who had no solid grounding in their belief in race, or to philosophers like Michael Hardimon, with his minimalist/populationist, racialist, and socialrace concepts, and Quayshawn Spencer, with his Blumenbachian partitions) are both realists about race, but in different ways. Kaplan and Winther's distinctions are good for setting these two camps apart and distinguishing between them while still accepting that both are realists.

I don’t think Bo realizes the meandering he has been doing for years discussing this concept. Race is partially a social construct, but race is real (which hereditarians believe), yet social constructivists about race are talking about something. They are talking about a referent, where a referent for race here is a property name for a set of human population groups (Spencer, 2019). The concept of race refers to a social something, thus it is real since society makes it real. Nothing in that entails that race isn’t real. If Bo made the distinction between biological and social races, then he would be able to say that social constructivists about race believe that race is real (they are realists about race as a social convention), but they are anti-realists about biological races.

Social constructivists are anti-realists about biological races, but they hold that the category RACE is a social and not biological construct. I addressed this issue in my article on white privilege:

If race doesn’t exist, then why does white privilege matter?

Lastly, those who argue against the concept of white privilege may say that those who are against the concept of white privilege would then at the same time say that race—and therefore whites—do not exist so, in effect, what are they talking about if ‘whites’ don’t exist because race does not exist? This is of course a ridiculous statement. One can indeed reject claims from biological racial realists and believe that race exists and is a socially constructed reality. Thus, one can reject the claim that there is a ‘biological’ European race, and they can accept the claim that there is an ever-changing ‘white’ race, in which groups get added or subtracted based on current social thought (e.g., the Irish, Italians, Jews), changing with how society views certain groups.

Though, it is perfectly possible for race to exist socially and not biologically. So the social creation of races affords the arbitrarily-created racial groups to be in certain areas on the hierarchy of races. Roberts (2011: 15) states that “Race is not a biological category that is politically charged. It is a political category that has been disguised as a biological one.” She argues that we are not biologically separated into races, we are politically separated into them, signifying race as a political construct. Most people believe that the claim “Race is a social construct” means that “Race does not exist.” However, that would be ridiculous. The social constructivist just believes that society divides people into races based on how we look (i.e., how we are born) and then society divides us into races on the basis of how we look. So society takes the phenotype and creates races out of differences which then correlate with certain continents.

So, there is no contradiction in the claim that “Race does not exist” and the claim that “Whites have certain unearned privileges over other groups.” Being an antirealist about biological race does not mean that one is an antirealist about socialraces. Thus, one can believe that whites have certain privileges over other groups, all the while being antirealists about biological races (saying that “Races don’t exist biologically”).

Going off Kaplan and Winther's distinction, there are three kinds of racial realism: bio-genomic/cluster realism, biological racial realism, and social constructivism about race (socialraces). These three racial frameworks have one thing in common: they accept the reality of race, but they disagree as to the origins of racial groups. Using these distinctions set forth by Kaplan and Winther, we can see how best to view different racial concepts and how to apply them in real life. Kaplan and Winther (2014: 1042) "are conventionalists about bio-genomic cluster/race, antirealists about biological race, and realists about social race." And they can state this due to the distinction they've made between different kinds of racial realism. (Since I personally am a pluralist about race, all of these could hold under certain contexts, but I do hold that race is a social construct of a biological reality, pushing Spencer's argument.)

Biological race isn’t socialrace

These 2 race concepts are distinct, where one talks about how race is viewed in the social realm while the other talks about how race is viewed in the biological realm. There is of course the view that race is a social construct of a biological reality (a view which I hold myself).

Biological race refers to the categorization of humans based on genetic traits and ancestry (a definition that captures bio-genomic/cluster realism as well). So if biological race and socialrace were equivalent concepts, the genetic differences that define racial groups would map onto similar social consequences. That is, if two people from different racial groups are biologically different, then their social experiences and opportunities in society should also differ accordingly. The claim here is that if two concepts are identical, then they should produce the same outcomes. But socialrace isn't merely a reflection of biological race (as race realists like Murray, Jensen, Lynn, and Rushton hold). Socialrace has been influenced by cultural, social, and political factors, and so quite obviously it is socially constructed (constructed by society: the majority believes that race is real, and that makes it socially real).

There are of course social inequalities related to different racial groups. People of different socialraces find themselves treated differently and experiencing different things, and this results in disparities in opportunities, privileges, and disadvantages. These disparities can be noted in healthcare, criminal justice, education, employment, and housing—anywhere individuals from a group can face systemic barriers and discrimination. People from certain racial groups may experience lower access to quality education, reduced job opportunities, an increased chance of coming into contact with the law, differential treatment in healthcare, and so on. Since these disparities persist even after controlling for SES, this shows how race is salient in everyday life.

The existence of social disparities and inequalities between racial groups shows that socialrace cannot be determined by biological factors. It is instead influenced by social constructs, historical context, and power dynamics in a society. So differences in societal consequences indicate the distinction between the two concepts. Socialrace isn't a mere extension of biological race. Biological differences can and do exist between groups. But it is the social construction and attribution of meaning to these differences that have shaped the lived experiences and outcomes of individuals in society. So by recognizing that race isn't solely determined by biology, and by recognizing that socialrace isn't biological race and biological race isn't socialrace (i.e., they are different concepts), we can have a better, more nuanced take on how we socially construct differences between groups ("Hispanics/Latinos" being one example). Having said all that, here's the argument:

(P1) If biological and socialrace are the same, then they would have identical consequences in society regarding opportunities, privileges, and disadvantages
(P2) But there are disparities in opportunities, privileges, and disadvantages based on socialrace in various societies.
(C) So biological race and socialrace aren’t the same.

The claim that X is a social construct doesn’t mean that X is imaginary, fake, or unreal. Social constructs have real, tangible impacts on society and individuals’ lives which influences how they are perceived and treated. Further, historical injustices and systemic racism are more evidence that race is a social construct.

Going back to the distinction between three types of racial realism from Kaplan and Winther, here is how they describe biological racial realism (Kaplan and Winther, 2014: 1040-1041):

Biological racial realism affirms that a stable mapping exists between the social groups identified as races and groups characterized genomically or, at least, phenomically.2 That the groups are biological populations explains why the particular social groups, and not others, are so identified. Furthermore, for some, but by no means all, biological racial realists, the existence of biological populations (and of the biologically grounded properties of their constituent individuals) explains and justifies at least some social inequalities (e.g., the “hereditarians”; Jensen 1969; Herrnstein and Murray 1995; Rushton 1995; Lynn and Vanhanen 2002). [This is like Hardimon’s racialist race/socialrace distinction.]

Quite obviously, distinguishing between the kinds of racial realism here points out that biological racial realists are of the hereditarian camp, and so for them race is an explanation for certain social inequalities (IQ, job market outcomes, crime). Social constructivists, however, have argued that what explains these differences are social, historical, and political factors (see, e.g., The Color of Mind).

(P1) If race is a social construct, then racial categories are not fixed and universally agreed-upon.
(P2) If racial categories are not fixed and universally agreed-upon, then different societies can define race differently.
(C1) So if race is a social construct, then different societies can define race differently.
(P3) If different societies can define race differently, then race lacks an inherent and biological basis.
(P4) Different societies do define race differently (observation of diverse racial classifications worldwide).
(C2) Thus, race lacks an inherent and biological basis.
(P5) If race lacks an inherent and biological basis, then race is a social construct.
(P6) Race lacks an inherent and biological basis.
(C3) Therefore, race is a social construct.

Premise 1: The concept of race varies across place and time. For example, the US once had the one-drop rule, which stated that any amount of "black blood" makes one black regardless of appearance or background. Brazil, by contrast, has a more fluid approach to racial classification, with categories like pardo and mulatto. This shows that racial categories aren't fixed and universally agreed upon, since race concepts in the US and Brazil are different. It also shows that racial categories can change on the basis of social and cultural context and, in the case of Brazil, the number of slaves that were transported there.

Premise 2: Racial categories were strictly enforced in apartheid South Africa, and people were placed into groups based on arbitrary criteria. This classification system differs from the caste system in India, where caste distinctions are based on a social hierarchy rather than racial characteristics, which shows how different societies have different concepts of identity and social distinction (how and in what way to structure their societies). So Conclusion 1 then follows: the variability in racial categorization across societies shows that the concept of race is not fixed, but is shaped by societal norms and beliefs.

Premise 3: Jim Crow laws and the one-drop rule show how racial categorization can shift depending on the times and what is currently going on in the society in question. The example of Jim Crow laws shows that historical context and social norms dictated racial classification and the boundaries between races. Again, the example of Brazil is informative here. The Brazilian racial system encompasses a larger range of racial groups, which were influenced by slavery and colonization and the interactions between European, African, and indigenous peoples. This shows how racial identity can be and has been shaped by historical happenstance along with the intermixing of racial and ethnic groups.

Premise 4: As I already explained, Brazil and South Africa recognize a broader range of racial categories due to their historical circumstances and diverse social histories and dynamics. So Conclusion 2 follows, since these examples show that race doesn’t have a fixed, inherent and objective biological basis; it shows that race is shaped by social, historical, and cultural contexts.

Premise 5: This premise follows from the variability in racial categorization historically and today, and from the changing of racial boundaries in the past. For instance, Irish and Italian Americans were seen as different races in the early 1900s, but over time, as they assimilated into American society, racial categories began to blur and they became part of the white race. Racial categories in Brazil are based on how the person is perceived, which leads to multiple different racial groups. Apartheid South Africa had four classifications: White, Black, Colored (mixed race), and Indian. These examples highlight how, based on changing social conventions and thought, race can and does change with the times depending on what is currently going on in the society in question. This highlights the fluid nature of racial categories. The argument up to this point has provided evidence for Premise 6, so Conclusion 3 follows: race is a social construct. Varying racial categories across time and place, the absence of an objective biological basis for race, and the influence of historical, cultural, and social factors all point to the conclusion that race is a social construct.

Conclusion

Quite obviously, there is a distinction between biological and social race. The distinction is important if we want to reject a concept of biological race while still stating that race is real. But hereditarians like Bo, it seems, don't understand the distinction at hand. Bo, at least, isn't alone in being confused about race concepts and their entailments. Murray (2020) stated:

Advocates of “race is a social construct” have raised a host of methodological and philosophical issues with the cluster analyses. None of the critical articles has published a cluster analysis that does not show the kind of results I’ve shown.

But social constructivists need to do no such thing. Why would they need to produce a cluster analysis that shows opposite results to what Murray claims? This shows that Murray doesn’t understand what social constructivists believe. Of course, equating race and biology is the MO of hereditarians, since they argue that some social inequalities between races are due to genetic differences between races.

Social constructivists are talking about something real—a dynamic and evolving concept which has profound consequences for society. Rejecting the concept of biological races doesn't entail that the social constructivist denies that human groups differ genetically. It does entail that the genetic diversity found in human groups doesn't necessitate the establishment of rigid and fixed racial categories. So in rejecting biological racial realism, social constructivists embrace the view that racial classifications are shaped by social, cultural, and historical contingencies. The examples I've given in this article show how racial categorization has changed based on time and place, and this is because race is a social construct.

I should note that I am a pluralist about race, which is "the view that there's a plurality of natures and realities for race in the relevant linguistic context" (Spencer, 2019: 27), so "there is no such thing as a global meaning of 'race'" (2019: 43). The fact that there is no global meaning of race entails that different societies across time and place will define racial groups differently, which we have seen, and so race is therefore a social construct (and I claim it is a social construct of a biological reality). Hereditarians like Bo lack this nuance, due to their apparent insistence that social constructivists are a kind of anti-realist about race as a whole. This claim, as I've exhaustively shown, is false. Different concepts of race can exist alongside each other based on context, leading to complex and multifaceted understandings of race and its place in society.

The second argument I formalized quite obviously gets at the distinction between biological and social racial realism—I have defended the premises and given examples of societies in different times and places that have held different views of which racial groups exist. This, then, shows that race is a social construct and that social constructivists about race are realists about race, since they are talking about something that has a referent.

Mind, Culture, and Test Scores: Dualistic Experiential Constructivism’s Insights into Social Disparities

2450 words

Introduction

Last week I articulated a framework I call Dualistic Experiential Constructivism (DEC). DEC is a theoretical framework which draws on mind-body dualism, experiential learning, and constructivism to explain human development, knowledge acquisition, and the formation of psychological traits and mind. In the DEC framework, knowledge construction and acquisition are seen as due to a dynamic interplay between individual experiences and the socio-cultural contexts in which they occur. It places a strong emphasis on the significance of personal experiences and interactions with others in shaping cognitive processes, social understanding, and the social construction of knowledge, drawing on Vygotsky's socio-historical theory of learning and development, which emphasizes the importance of cultural tools and the social nature of learning. It recognizes that genes are necessary but not sufficient for psychological traits. It emphasizes that the manifestation of psychological traits and mind is shaped by experiences and interactions within the socio-cultural-environmental context.

My framework is similar to some other frameworks, like constructivism, experiential learning theory (Kolb) (Wijnen-Meyer et al, 2022), social constructivism, socio-cultural theory (Vygotsky), relational developmental systems theory (Lerner and Lerner, 2019) and ecological systems theory (Bronfenbrenner, 1994).

DEC shares a key point with constructivism—that of rejecting passive learning and highlighting the importance of the learner's active engagement in the construction of knowledge. Kolb's experiential learning theory proposes that people learn best through direct experiences and reflection on those experiences, while DEC emphasizes the significance of experiential learning in shaping one's cognitive processes and understanding of knowledge. DEC also relies heavily on Vygotsky's socio-historical theory of learning and development; both DEC and Vygotsky's theory emphasize the role of socio-cultural factors in shaping human development along with the construction of knowledge. Vygotsky's theory also highlights the importance of social interaction, cultural and psychological tools, and historical contexts, which DEC draws from. Cognitive development and knowledge arise from dynamic interactions between individuals and their environment, with reciprocal influences between the individual and their social context. (This is how DEC can also be said to be a social constructivist position.) DEC is also similar to Urie Bronfenbrenner's ecological systems theory, which emphasizes the influence of multiple environmental systems on human development; with DEC's focus on how individuals interact with their cultural contexts, it is therefore similar to ecological systems theory. Finally, DEC shares similarities with Lerner's relational developmental systems theory in focusing on interactions, treating genes as necessary but not sufficient causes for the developing system, rejecting reductionism, and acknowledging environmental and cultural contexts in shaping human development. They differ in their treatment of mind-body dualism and in the emphasis on cultural tools in shaping cognitive development and knowledge acquisition.

Ultimately, DEC posits that individuals actively construct knowledge through their engagement with the world, drawing upon their prior experiences, interactions with others, and cultural resources. So the socio-cultural context in which the individual finds themselves plays a vital role in shaping the nature of learning experiences along with the construction of meaning and knowledge. Knowing this, how would race, gender, and class be integrated into DEC, and how would this then explain test score disparities between different classes, between men and women, and between races?

Social identities and test score differences: The impact of DEC on gender, race and class discrepancies

Race, class, and gender can be said to be social identities. Since they are social identities, they aren't inherent or fixed characteristics of individuals; they are social categories which influence an individual's experiences, opportunities, and interactions within society. These social identities are shaped by cultural, historical, and societal factors which intersect in complex ways, leading to different experiences.

When it comes to gender, it has long been known that boys and girls have different interests and so have different knowledge bases. This has been evident since Terman specifically constructed his Stanford-Binet to eliminate differences between men and women, and it is also evidenced by the ETS changing the SAT to reflect these differences between men and women (Rosser, 1989; Mensh and Mensh, 1991). So when it comes to the construction of knowledge and engagement with the world, an individual's gender influences the way they perceive the world, interpret social dynamics, and act in social situations. There is also gendered test content, as Rosser (1989) shows for the SAT. Thus, the concept of gender in society influences test scores, since men and women are exposed to different kinds of knowledge and since there are "gendered test items" (items that reflect or perpetuate gender biases, stereotypes, or assumptions in their presentation).

But men and women have negligible differences in full-scale IQ, so how can DEC work here? It's simple: men are better spatially and women are better verbally. Thus, by choosing which items they want on the test, test constructors can build the conclusions they want into the test. DEC emphasizes socio-cultural influences on knowledge exposure, stating that unique socio-cultural and historical experiences and contexts influence one's knowledge acquisition. Cultural and social norms and gendered socialization can also shape one's interests and experiences, which would then influence knowledge production. Further, test content can carry gender bias (as Rosser, 1989 pointed out), and subjects that either sex is more likely to have an interest in could skew answer outcomes (as Rosser showed). Stereotype threat could also play a role here, with one study conceptualizing stereotype threat as being responsible for gender differences in advanced math (Spencer, Steele, and Quinn, 1999). Although stereotype threat affects different groups in different ways, one analysis showed empirical support "for mediators such as anxiety, negative thinking, and mind-wandering, which are suggested to co-opt working memory resources under stereotype threat" (Pennington et al, 2016). Lastly, intersectionality is inherent in DEC. Of course the experiences of a woman from a marginalized group would differ from the experiences of a woman from a privileged group, and these differences could influence how gender intersects with other identities when it comes to knowledge production.

When it comes to racial differences in test scores, DEC would emphasize the significance of understanding test score variations as reflecting multifaceted variables resulting from the interaction of cultural influences, experiential learning, societal contexts, and historical legacies. DEC rejects the biological essentialism and reductionism of hereditarianism and its claims of innate, genetic differences in IQ; instead, it contextualizes test score differences. It views test scores as dynamic outcomes influenced by social contexts, cultural influences, and experiential learning. It also highlights cultural tools as mediators of knowledge production which would then influence test scores. Language, communication styles, educational values, and other cultural resources influence how people engage with test content and respond to test items. Social interactions, of course, play a large part in the acquisition of knowledge in different racial groups, since cultural tools are shared and transmitted through social interactions within racial communities. Historical legacies and social structures can affect access to the cultural tools and educational opportunities that are useful for scoring well on the test, which would then affect test performance. Blacks and whites are different cultural groups, so they're exposed to different kinds of knowledge, which then influences their test scores.

Lastly, we come to social class. People from families in higher social strata benefit from greater access to educational resources, along with enriching experiences like attending quality pre-schools and having access to educational materials similar to those that appear in test items. These early learning experiences then set the foundation for performing well on standardized tests. Lower-class people may have limited access to these kinds of opportunities, which would impact their readiness and therefore their performance on standardized tests. Cultural tools and language also play a pivotal role in shaping class differences in test scores. Higher-class parents may use language and communication patterns that contribute to higher test scores, while lower-class children may lack exposure to the specific language conventions used in test items, which would then influence their performance. Social interactions also influence knowledge production. Higher social classes foster discussions and educational discourses which support academic achievement, and their peer groups can also provide additional academic support and encouragement, which lends itself to higher test scores. On the other hand, lower-class groups have more limited academic support along with fewer opportunities for social interactions conducive to learning the types of items and the structure of the test. It has also been shown that there are SES disparities in language acquisition due to the home learning environment, and that this contributes to the achievement gap and to school readiness (Brito, 2017). Thus, class shapes whether one is or is not ready for school due to their exposure to language in the home learning environment. Therefore, in effect, IQ tests are middle-class knowledge tests (Richardson, 2001, 2022). So one who is not exposed to the specific, cultural knowledge on the test won't score as well as someone who is. Richardson (1999; cf. Richardson, 2002) puts this well:

So relative acquisition of relevant background knowledge (which will be closely associated with social class) is one source of the elusive common factor in psychometric tests. But there are other, non-cognitive, sources. Jensen seems to have little appreciation of the stressful effects of negative social evaluation and systematic prejudice which many children experience every day (in which even superficial factors like language dialect, facial appearance, and self-presentation all play a major part). These have powerful effects on self concepts and self-evaluations. Bandura et al (1996) have shown how poor cognitive self-efficacy beliefs acquired by parents become (socially) inherited by their children, resulting in significant depressions of self-expectations in most intellectual tasks. Here, g is not a general ability variable, but one of ‘self-belief’.

Reduced exposure to middle-class cultural tools and poor cognitive self-efficacy beliefs will inevitably result in reduced self-confidence and anxiety in testing situations.

In sum, the ‘common factor’ which emerges in test performances stems from a combination of (a) the (hidden) cultural content of tests; (b) cognitive self-efficacy beliefs; and (c) the self-confidence/freedom-from-anxiety associated with such beliefs. In other words, g is just a mystificational numerical surrogate for social class membership. This is what is being distilled when g is statistically ‘extracted’ from performances. Perhaps the best evidence for this is the ‘Flynn effect’ (Flynn 1999), which simply corresponds with the swelling of the middle classes and greater exposure to middle-class cultural tools. It is also supported by the fact that the Flynn effect is more prominent with non-verbal than with verbal test items – i.e. with the (covertly) more enculturated forms.

I can also make this argument:

(1) If children of different class levels have experiences of different kinds with different material, and (2) if IQ tests draw a disproportionate amount of test items from the higher classes, then (3) higher class children should have higher scores than lower-class children.

The point that ties together this analysis is that different groups are exposed to different knowledge bases, which are shaped by their unique cultural tools, experiential learning activities, and social interactions. Ultimately, these divergent knowledge bases are influenced by social class, race, and gender, and they play a significant role in how people approach educational tests which therefore impacts their test scores and academic performance.

Conclusion

DEC offers a framework for explaining how and why groups score differently on academic tests. It recognizes the intricate interplay between experiential learning, societal contexts, socio-historical contexts, and cultural tools in shaping human cognition and knowledge production. The irreducibility of the mental plays a pivotal part in refuting hereditarian dogma: since the mental is irreducible, neither genes nor brain structure/physiology can explain test scores and differences in mental abilities. In my framework, the irreducibility of the mental is used to emphasize the importance of considering subjective experiences, emotions, conscious awareness, and the unique perspectives of individuals in understanding human learning.

Using DEC, we can better understand how and why races, social classes, and men and women score differently from each other. It allows us to understand experiential learning and how groups have access to different cultural and psychological tools in shaping cognitive development, which provides a more nuanced perspective on test score differences between social groups. DEC moves beyond the rigid gene-environment false dichotomy and explains how and why groups score differently through a constructivist lens, while rejecting hereditarianism. Since all human cognizing takes place in cultural contexts, it follows that groups not exposed to the cultural contexts emphasized in standardized testing may perform differently due to variations in experiential learning and cultural tools.

In rejecting the claim that genes cause or influence mental abilities/psychological traits and differences in them, I am free to reason that social groups score differently not due to inherent genetic differences, but as a result of varying exposure to knowledge and cultural tools. With my DEC framework, I can explore how diverse cultural contexts and learning experiences shape psychological tools. This allows a deeper understanding of the dynamic interactions between the individual and their environment, emphasizing the role of experiential learning and socio-cultural factors in knowledge production. Gene-environment interactions and the irreducibility of the mental allow me to steer clear of genetic determinist explanations of test score differences and to correctly identify such differences as due to what one is exposed to in life. In recognizing G-E interactions, DEC acknowledges that genetic factors are necessary pre-conditions for the mind, but genes alone cannot explain how mind arises, due to the irreducibility principle. So by considering the interplay between genes and experiential learning in different social contexts, DEC offers a more comprehensive understanding of how individuals construct knowledge and how psychological traits and mind emerge, steering away from genetically reductionistic approaches to human behavior, action, and psychological traits.

I have also argued that mind-body dualism and developmental systems theory refute hereditarianism, so the framework I've created is a further exposition which challenges traditional assumptions in psychology, providing a more holistic and nuanced understanding of human cognition and development. By incorporating mind-body dualism, it rejects the hereditarian move of reducing psychology and mind to genes and biology; hereditarianism is thereby discredited for its narrow focus on genetic determinism and reductionism. It also integrates developmental systems theory, in which development is a dynamic process driven by multiple irreducible interactions between the parts that make up the system, along with how the human interacts with their environment to acquire knowledge. Thus, by addressing the limitations (and impossibility) of hereditarian genetic reductionism, my DEC framework provides a richer account of how mind arises and how people acquire different psychological and cultural tools, which then influence their outcomes and performance on standardized tests.

From Blank Slates to Dynamic Interactions: Dualistic Experiential Constructivism Challenges Hereditarian Assumptions

4000 words

Introduction

For decades, hereditarians have attempted to partition traits into relative genetic and environmental causes. The assumption here is of course that G and E are separable, independent components; a further assumption is that we can discover the relative contribution of G and E by performing certain tests and statistical procedures. However, since Oyama's publication of The Ontogeny of Information in 1985, this view has been called into question. The view that Oyama articulated is a philosophical theory based on the irreducible interactions between all developmental resources, called developmental systems theory (DST).

However, we can go further. We can use the concept of dualism and argue that psychology is irreducible to the physical and so it’s irreducible to genes. We can then use the concepts laid forth in DST like that of gene-environment and the principle of biological relativity and argue that the development of organisms is irreducible to any one resource. Then, for the formation of mind and psychological traits in humans, we can say that they arise due to human-specific ecological contexts. I will call this view Dualistic Experiential Constructivism (DEC), and I will argue that it invalidates any and all attempts at partitioning G and E into quantifiable components. Thus, the hereditarian research program is bound to fail since it rests on a conceptual blunder.

The argument that genes and environment, nature and nurture, cannot be separated runs as follows:

(1) Suppose that there can be no environmental effect without a biological organism to act on. (2) Suppose there can be no organism outside of its context (like the organism-environment system). (3) Suppose the organism cannot exist without the environment. (4) Suppose the environment has certain descriptive properties if and only if it is connected to the organism. Now here is the argument.

P1: If there can be no environmental effect without a biological organism to act on, and if the organism cannot exist without the environment, then the organism and environment are interdependent.
P2: If the organism and environment are interdependent, and if the environment has certain descriptive properties if and only if it is connected to the organism, then nature and nurture are inseparable.
C: Thus, nature and nurture are inseparable.

Rushton and Jensen’s false dichotomy

Rushton and Jensen (2005) uphold a 50/50 split between genes and environment and call this the “hereditarian” view. On the other side is the “culture-only” model which is 0 percent genes and 100 percent environment regarding black-white IQ differences. Of course note the false dichotomy here: What is missing? Well, an interactive GxE view. Rushton and Jensen merely put that view into their 2-way box and called it a day. They wrote:

It is essential to keep in mind precisely what the two rival positions do and do not say—about a 50% genetic–50% environmental etiology for the hereditarian view versus an effectively 0% genetic–100% environmental etiology for the culture-only theory. The defining difference is whether any significant part of the mean Black–White IQ difference is genetic rather than purely cultural or environmental in origin. Hereditarians use the methods of quantitative genetics, and they can and do seek to identify the environmental components of observed group differences. Culture-only theorists are skeptical that genetic factors play any independently effective role in explaining group differences.

Most of those who have taken a strong position in the scientific debate about race and IQ have done so as either hereditarians or culture-only theorists. Intermediate positions (e.g., gene–environment interaction) can be operationally assigned to one or the other of the two positions depending on whether they predict any significant heritable component to the average group difference in IQ. For example, if gene–environment interactions make it impossible to disentangle causality and apportion variance, for pragmatic purposes that view is indistinguishable from the 100% culture-only program because it denies any potency to the genetic component proposed by hereditarians.

Rushton and Jensen did give an argument here; here it is formalized:

P1: Gene-environment interactions make it impossible to disentangle causality and apportion variance correctly.
P2: If it is impossible to disentangle and apportion variance, then the view denying any potency to the genetic component proposed by hereditarians becomes indistinguishable from a 100% culture-only perspective.
C: Thus, for pragmatic purposes, the view denying any potency to the genetic component is indistinguishable from a 100% culture-only program.

This argument is easy enough to counter. Rushton and Jensen are explicitly forcing the view that refutes their whole research program into their two boxes: the 50/50 split between genes and environment, and the 0 percent genes and 100 percent environment. The intermediate view they describe is basically a developmental systems theory (DST) view, and DST highlights the interactive and dynamic nature of development. Rushton and Jensen's own view is clearly gene-centric, in the sense that genes are treated as the privileged cause. I would impute to them, based on their writings, the claim that genes are a sufficient, privileged cause for IQ, and for traits as a whole. But that claim is false (Noble, 2012).

Although I understand where they’re coming from here, they’re outright wrong.

Put simply, they need to put everything into this box in order to legitimize their “research.” Although I would be a “culture-only theorist” to them regarding my views on the cause of IQ gaps (since there is no other way to be), my views on genetic causation are starkly different from theirs.

Most readers may know that I deny the claim that genes can cause or influence differences in psychological traits between people (and that genes are outright causes on their own, independent of environment). I hold this view due to conceptual arguments. The interactive view (which is more complex than Rushton and Jensen describe) is how development is carried out, with no one resource having primacy over another, a view called the causal parity thesis. This is the principle of biological relativity (Noble, 2012): there is no privileged level of causation, and if there is no privileged level of causation, then that holds for all of the developmental resources that interact to make up the phenotype. Thus, hereditarianism is false, since it privileges genes over other developmental resources when no developmental resource is privileged in biological systems.

Rushton and Jensen almost had it: if GxE makes it hard or impossible to disentangle causality and apportion variance, then the hereditarian program cannot and will not work, since it apportions variance into G and E causes and claims that independent genetic effects are possible. Many authors have made a conceptual argument against heritability along these lines: if G and E (and anything else) interact, then they are not separable, and if they are not separable, their contributions are not separately quantifiable. For example, Burt and Simon (2015: 107) argue that the “conceptual model is unsound and the goal of heritability studies is biologically nonsensical given what we now know about the way genes work.”

When it comes to “denying potency” to the “genetic component”, Rushton and Jensen seem to be quite specific in what they mean by this. Of course, a developmentalist (a GxE supporter) would not deny that genes are NECESSARY for the construction of the phenotype, though they would deny the PRIMACY that hereditarians place on genes. Genes are nothing special; they are not privileged resources compared to other developmental resources.

Of course, hereditarianism is a reductionist discipline, and by reductionist I mean that it attempts to break the whole down into the sum of its parts to ascertain the ontogeny of the object of interest. Reductionism is false, and that applies to both genetic reduction and neuroreduction: reducing X to genes or to the brain and its physiology is the wrong way to go about this. Rushton (2003) even explicitly stated his adherence to the reductionist paradigm in a short commentary on Rose’s (1998) Lifelines. He repeats his “research” into brain size differences between races and argues that, given the .4 correlation between MRI-measured brain volume and IQ, the claimed differences in brain size between races (see here for critique), and the claim that races differ in cognitive ability, this is a “+” for reductionist science.

Since the behavioral genetic research program is reductive, it is necessarily committed to genetic determinism, even though most of its practitioners don’t explicitly state this. The way Rushton and Jensen articulated the GxE (DST) view forced it into their false dichotomy so they could reject it outright without grappling with its implications for organismal development. Unfortunately for the view put forth by Rushton and Jensen, organisms and environment are constantly interacting with each other. If they constantly interact, then they are not separable. If they are not separable, then the distinction made by Rushton and Jensen fails. If the distinction made by Rushton and Jensen fails, then ultimately, the quest of behavioral genetics, to apportion variance into genetic and environmental causes, fails.

Another hereditarian who tries to argue against interactionism is Gottfredson (2009) with her “interactionism fallacy.” Heritability estimates, it is claimed, can partition causes of variance between G and E components. Gottfredson, like all other hereditarians I claim, completely misrepresents the view and (wilfully?) misunderstands what developmental systems theorists are saying. People like Rushton, Jensen, and Gottfredson quite obviously claim that science can solve the nature-nurture debate. The fact that destroys hereditarian assumptions and claims about the separability of nature and nurture is this: the genome is reactive (Fox-Keller, 2014); that is, it reacts to what is occurring in the environment, whether that environment is outside or inside the body.

At the molecular level, the nurture/nature debate currently revolves around reactive genomes and the environments, internal and external to the body, to which they ceaselessly respond. Body boundaries are permeable, and our genome and microbiome are constantly made and remade over our lifetimes. Certain of these changes can be transmitted from one generation to the next and may, at times, persist into succeeding generations. But these findings will not terminate the nurture/nature debate – ongoing research keeps arguments fueled and forces shifts in orientations to shift. Without doubt, molecular pathways will come to light that better account for the circumstances under which specific genes are expressed or inhibited, and data based on correlations will be replaced gradually by causal findings. Slowly, “links” between nurture and nature will collapse, leaving an indivisible entity. But such research, almost exclusively, will miniaturize the environment for the sake of accuracy – an unavoidable process if findings are to be scientifically replicable and reliable. Even so, increasing recognition of the frequency of stochastic, unpredictable events ensures that we can never achieve certainty. (Locke and Pallson, 2016)

The implication here is that science cannot resolve this debate, since “nature and nurture are not readily demarcated objects of scientific inquiry” (Locke and Pallson, 2016: 18). So if heritability estimates were useful for understanding phenotypic variation, then the organism and environment must not interact; if the interactions are constant and pervasive, then it becomes challenging, and I claim impossible, to accurately quantify the relative contributions of genes and environment. But the organism and environment constantly interact. Thus, heritability estimates aren’t useful for understanding phenotypic variation. This undermines the interpretability of heritability and invalidates any and all claims as to the relative contribution of G and E made by any behavioral geneticist.
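To make concrete what “apportioning variance” presupposes, here is the textbook decomposition that heritability analyses rely on (a standard formulation offered for illustration; it is not spelled out this way by Rushton and Jensen or Gottfredson):

$$V_P \;=\; V_G + V_E + 2\,\mathrm{Cov}(G,E) + V_{G\times E}, \qquad H^2 = \frac{V_G}{V_P}$$

The broad-sense heritability $H^2$ is only readable as “the genetic contribution” when the covariance and interaction terms are zero or negligible. When those terms are substantial, there is no non-arbitrary way to assign them to either the G side or the E side of the ledger, which is exactly the separability problem at issue here.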

The interactive view of G and E states that genes are necessary for traits but not sufficient for them. While genetic factors of course lay part of the foundation for trait development, so does the whole suite of other resources that interact with the genes and are equally necessary for trait development. I can put the argument like this:

P1: An interactive view acknowledges that genes contribute to the development of traits.
P2: Genes are necessary pre-conditions for the expression of traits.
P3: Genes alone are not sufficient to fully explain the complexity of traits.
C: Thus, an interactive view states that genes are necessary pre-conditions for traits but not sufficient on their own.

Why my view is not blank slatism: On Dualistic Experiential Constructivism

Now I need to defend my view that the mind and body are distinct substances, and hence that the mental is irreducible to the physical and genes can’t cause psychology. One may say, “Well, that makes you a blank slatist, since you deny that the mind has any innate properties.” Fortunately, my view is more complex than that.

I have been espousing certain points of view for years on this blog: the irreducibility of the mental; that genes can’t cause mental/psychological traits; that mind is constructed through interacting with other humans in species-relevant contexts; and that so-called innate traits are learned and experience-dependent. How can I reconcile these views? Doesn’t the fact that I deny any and all genetic influence on psychology due to my dualistic commitments mean I am a dreaded “blank slatist”? No, it does not, and I will explain why.

I call my view “Dualistic Experiential Constructivism” (DEC). It’s dualistic since it recognizes that the mind and body are separate, distinct substances. It’s experiential since it highlights the role of experiential factors in the forming of mind, the construction of knowledge, and the development of psychological traits. It is constructivist since individuals actively construct their knowledge and understanding of the world by interacting with other humans. Also in this framework is the concept of gene-environment interaction, where G and E are inseparable, non-independent interactants.

Within the DEC framework, gene-environment interactions are influential in the development of cognition, psychology, and behavior. Genes are necessary for the construction of humans; they need to be there for development to begin at conception. From there, the system interacts irreducibly with the other developmental interactants, which together form the phenotype until, eventually, a human forms. So genes provide a necessary pre-condition for traits, but in this framework they are not sufficient conditions.

In his socio-historical theory of learning and development, Vygotsky argued that individuals acquire psychological traits through interacting with other humans in certain social and environmental contexts through the use of cultural and psychological tools. Language, social interactions, and culture mediate cognitive development, which then fosters higher-order thinking. Vygotsky’s theory thus highlights the dynamic and interactive nature of human development and emphasizes the social contexts of the actors in how mind is shaped and developed. So it supports the idea I hold that mind is shaped through interactions and experiences within certain socio-historical contexts. Adherence to this theory would also seem to mean that there are critical points in child development: if the child does not get the rich exposure they need in order to develop their abilities, they may never acquire those abilities, indicating a critical window in which the abilities can be acquired (Vyshedakiy, Mahapatra, and Dunn, 2017). Cases of feral children allow us to see how one develops without social interaction and cultural tools in cognitive development. That these children are so stunted in their psychology and language shows the critical window in which children can learn and understand a language. The absence of social experiences in feral children thus supports Vygotsky’s theory regarding the significance of cultural and social factors in shaping the mind and cognitive development. Vygotsky’s theory is very relevant here, since it shows that the necessary socio-historical and cultural experiences need to occur for higher-order thinking, psychology, and mind to develop in humans. And since newborns, infants, and young children are surrounded by what Vygotsky called More Knowledgeable Others, they learn from and copy people who already know how to act in certain social and cultural situations, which then develops the individual’s psychology and mind.

There is also another issue here: the fact that species-typical behaviors develop in reliable ecological contexts. If we assume this holds for humans, and I see no reason not to, then there need to be certain things in the environment that set the construction of mind in motion, and this occurs in relevant social-historical-ecological contexts; basically, environments are inherited too.

In an article eschewing the concept of “innateness”, Blumberg (2018) has a great discussion on how species-typical traits arise. Quite simply, it’s due to the construction of species-specific niches which then allow the traits to reliably appear over time:

Species-typical behaviors can begin as subtle predispositions in cognitive processing or behavior. They also develop under the guidance of species-typical experiences occurring within reliable ecological contexts. Those experiences and ecological contexts, together comprising what has been called an ontogenetic niche, are inherited along with parental genes. Stated more succinctly, environments are inherited—a notion that shakes the nature-nurture dichotomy to its core. That core is shaken still further by studies demonstrating how even our most ancient and basic appetites, such as that for water, are learned. Our natures are acquired.

Contrasting DEC with hereditarianism shows exactly how different they are and how DEC answers hereditarianism with a different framework. DEC offers an alternative perspective on the construction of psychological traits and mind in humans. It strongly emphasizes the role of individual experiences and environmental factors (like the social) in allowing the mind to form and in shaping psychological traits, while still recognizing the need for genetic factors, though in a necessary, not sufficient, way. DEC holds that genes alone aren’t enough to account for psychology: the mind is irreducible to the physical (genes, brain/brain structure), and the development of psychological traits (and with them the mind) requires the interactive influences of the individual, their experiences, and their environmental context.

There is one more line of evidence I need to discuss before I conclude: clonal populations living in the same controlled environment, what such studies do and do not show, and the implications for behavioral genetic, hereditarian explanations of behavior. Kate Laskowski’s (2022) team observed how genetically identical fish behaved in controlled environments. Substantial individuality still arises in clonal fishes with the same genes reared in a controlled environment. These studies from Laskowski’s team suggest that behavioral individuality “might be an inevitable and potentially unpredictable outcome of development” (Bierbach, Laskowski, and Wolf, 2017). The argument below captures this fact, and it is based on the assumption that if genes did cause psychological traits and behavior, then individuals with an identical genome would have identical psychology and behavior. But these studies show that they do not, so the conclusion follows that mind and psychological traits aren’t determined by genes.

(P1) If the mind is determined by genetic factors, then all individuals with the same genetic makeup would exhibit identical psychological traits.
(P2) Not all individuals with the same genetic makeup exhibit identical psychological traits.
(C) Thus, mind isn’t determined by genetic factors.

I think it is a truism that the hereditarian view entails that identical genes would mean identical psychology and behavior. Quite obviously, experimental results have shown that this is simply not the case. If the view espoused by Rushton and Jensen and other hereditarians were true, then organisms with identical genomes would have the same behavior and psychology. But we don’t find this. Thus, we should reject hereditarianism, since its claim has been tested in clonal populations and has been found wanting.

Now, how is my view not blank slatism? I deny the claim that psychology reduces to anything physical, and I deny that innate traits are a thing, so can there be nuance, or am I doomed to be labeled a blank slatist? Genetic factors are necessary pre-conditions for the mind, but there are no predetermined, hardwired traits in them. While genetic factors lay part of the groundwork, the importance of learning, experience, and relevant ecological contexts must not be discounted. While I recognize the interplay between genes, environment, and other resources, I do not hold that any of them is sufficient to explain mind and psychology. I would say that Vygotsky’s theory shows how and why people and groups score differently on so-called psychological tests: there is the interplay between the child, the socio-cultural environment, and the individuals in that environment. Being in these kinds of environments is what allows the formation of mind and psychology (the absence of which is shown in cases of feral children), meaning that hereditarianism is ill-suited to explaining this, with its fixation on genes, when genes can’t explain psychology. If the mental is irreducible to the physical and genes are physical, then genes can’t explain the mental. This destroys the hereditarian argument.

Conclusion

Vygotsky’s theory provides a socio-cultural framework which acknowledges the role of subjective experiences within social contexts. Individuals engage in social interactions and collaborative activities as conscious beings, and in doing so they contribute their subjective experiences to the collective construction of knowledge and understanding. The brand of dualism I push entails that psychology doesn’t reduce to anything physical, which includes genes and the brain. I do of course recognize the interactions between all developmental resources; I just don’t think that any of them alone is explanatory regarding psychology and behavior, as the hereditarian does, and that’s one of the biggest differences between hereditarianism and DEC. My view is similar to that of relational developmental systems theory (Lerner, 2021a, b). Further, this view is similar to Oyama’s (2002) view, where she conceptualizes “nature” as a natural outcome of the organism-environment system (in line with Blumberg, 2018) and nurture as the ongoing developmental process. Thus, Oyama has reconceptualized the nature-nurture debate.

Of course, my claim that psychology isn’t reducible to genes would put me in the “100 percent culture-only” camp that Rushton and Jensen articulated. However, there is no other way to be in this debate, since races are different cultural groups, and different cultural groups are exposed to different cultural and psychological tools, which lead to differences in knowledge and therefore to score differences. So I reject the dichotomy they mounted, and I also reject the claim that the interactive view is effectively a “culture-only” view. Ultimately, the argument that psychology doesn’t reduce to genes is sound, so hereditarianism is false. Furthermore, the hereditarian claim that genes cause differences in psychology and behavior is called into question by the research on clonal populations, which shows that individuality arises stochastically and is not caused by genetic differences, since there were no genetic differences.

The IQ debate over the hereditarian explanation requires a close examination of the interplay between genetics and environment. An environmental explanation is the only plausible account of the observed black-white IQ gap, since psychological states cannot be ascribed or reduced to genetic factors. In light of this, any attempt to dichotomize nature versus nurture, as exemplified by Rushton and Jensen, fails to capture the matter at hand. Their reductionist approach, which stuffs a “100% culture-only program” into one of their two boxes and then proclaims their preferred “50/50 split between genes and environment” (although they later advocate an 80/20 split), is nothing more than a fallacious oversimplification.

I have presented a comprehensive framework which challenges hereditarianism and provides an alternative perspective on the nature of human psychology and development. I integrated the principles of mind-body dualism, Vygotsky’s socio-historical theory of learning and development, and gene-environment interaction, calling the result Dualistic Experiential Constructivism, which acknowledges the interplay between genes, environment, and other developmental resources. Ultimately, DEC promotes a more holistic and interactive view of the origin of mind through social processes and species-typical, context-dependent events, while acknowledging genes as a necessary template for these things, since it is the organism that navigates the environment.

So this is the answer to hereditarianism: a view in which all developmental resources interact and are irreducible, and in which first-personal subjective experiences with others of the species, taking place in reliable ecological contexts, set the formation of mind and psychological traits in motion. This is Dualistic Experiential Constructivism, and it draws together a few other frameworks that coalesce into the view against hereditarianism that I hold.

Race, Brain Size, and “Intelligence”: A Critical View

5250 words

“the study of the brains of human races would lose most of its interest and utility” if variation in size counted for nothing ([Broca] 1861, p. 141; quoted in Gould, 1996: 115)

The law is: small brain, little achievement; great brain, great achievement (Ridpath, 1891: 571)

I can’t hope to give as good a review as Gould’s in The Mismeasure of Man on the history of skull measuring, but I will try to show that hereditarians are mistaken about the brain size-IQ correlation and about racial differences in brain size as a whole.

The claim that brain size is causal for differences in intelligence is not new. Although over the last few hundred years there have been back-and-forth arguments on this issue, it has generally been believed that there are racial differences in brain size and that these differences account for civilizational accomplishments, among other things. Notions from Samuel Morton, which seem to have been revived by Rushton in the 80s while he was formulating his r/K selection theory, show that the racism incipient in that period never left us, even after 1964. Rushton and others merely revived the racist thought of the 1800s.

Using MRI scans (Rushton and Ankney, 2009) and measurements of the physical skull, Rushton asserted that differences in brain size and quality between races accounted for differences in IQ. Rushton was not alone in this belief; the claimed relationship between brain weight/structure and intelligence goes back centuries. In this article, I will review studies on racial differences in brain size and see whether Rushton et al’s conclusions hold, not only regarding brain size being causally efficacious for IQ, but also regarding racial differences in brain size and the brain size-IQ correlation.

The Morton debate

Morton’s skull collection has received much attention over the years. Gould (1978) first questioned Morton’s results on the ranking of skulls, arguing that when the data were properly reinterpreted, “all races have approximately equal capacities.” The skulls in Morton’s collection were collected from all over. Morton’s men even robbed graves to procure skulls for him, going as far as to take “bodies in front of grieving relatives and boiled flesh off fresh corpses” (Fabian, 2010: 178). One man even told Morton that grave robbing gave him a “rascally pleasure” (Fabian, 2010: 15). Indeed, grave robbing seems to have been a common way to procure skulls for these kinds of analyses (Monarrez et al, 2022). Nevertheless, since skulls house brains, the thought is that by measuring skulls we can estimate the brain of the individual the skull belonged to: a larger skull would imply a larger brain, and larger brains, it was said, belong to more “intelligent” people. This assumption was held by the neurologist Broca, and it justified using brain weight as a measure of intelligence. Though in 1836 the anti-racist Tiedemann (1836) argued that there were no differences in brain size between whites and blacks. (Also see Gould, 1999 for a reanalysis of Tiedemann where he shows C > M > N in brain size, but concludes that the “differences are tiny and probably of no significance in the judgment of intelligence” (p 10).) It is interesting to note that Tiedemann and Morton worked with pretty much the same data, but they came to different conclusions (Gould, 1999; Mitchell, 2018).

In 1981 Gould published his landmark book The Mismeasure of Man (Gould, 1981/1996). In the book, he argued that bias—sometimes unconscious—pervaded science and that Morton’s work on his skull collection was a great example of this type of bias. Gould (1996: 140) listed many reasons why group (race) differences in brain size have never been demonstrated, citing Tobias (1970):

After all, what can be simpler than weighing a brain?—take it out, and put it on the scale. One set of difficulties refers to problems of measurement itself: at what level is the brain severed from the spinal cord; are the meninges removed or not (meninges are the brain’s covering membranes, and the dura mater, or thick outer covering, weighs 50 to 60 grams); how much time elapsed after death; was the brain preserved in any fluid before weighing and, if so, for how long; at what temperature was the brain preserved after death. Most literature does not specify these factors adequately, and studies made by different scientists usually cannot be compared. Even when we can be sure that the same object has been measured in the same way under the same conditions, a second set of biases intervenes—influences upon brain size with no direct tie to the desired properties of intelligence or racial affiliation: sex, body size, age, nutrition, nonnutritional environment, occupation, and cause of death.

Nevertheless, in Mismeasure, Gould argued that Morton had an unconscious bias: he packed larger African skulls more loosely while packing smaller Caucasian skulls more tightly (Gould made this inference from the disconnect between Morton’s lead shot and seed measurements).

Plausible scenarios are easy to construct. Morton, measuring by seed, picks up a threateningly large black skull, fills it lightly and gives it a few desultory shakes. Next, he takes a distressingly small Caucasian skull, shakes hard, and pushes mightily at the foramen magnum with his thumb. It is easily done, without conscious motivation; expectation is a powerful guide to action. (1996: 97)

Yet through all this juggling, I detect no sign of fraud or conscious manipulation. Morton made no attempt to cover his tracks and I must presume that he was unaware he had left them. He explained all his procedures and published all his raw data. All I can discern is an a priori conviction about racial ranking so powerful that it directed his tabulations along preestablished lines. Yet Morton was widely hailed as the objectivist of his age, the man who would rescue American science from the mire of unsupported speculation. (1996: 101)

But in 2011, a team of researchers argued that Morton did not manipulate data to fit his a priori biases (Lewis et al, 2011). They claimed that “most of Gould’s criticisms are poorly supported or falsified.” They argued that Morton’s measurements were reliable and that Morton really was the scientific objectivist many claimed him to be. Of course, since Gould died in 2002, shortly after publishing his magnum opus The Structure of Evolutionary Theory, he could not defend his arguments about Morton.

However, a few authors have responded to Lewis et al and have defended Gould’s conclusions about Morton (Weisberg, 2014; Kaplan, Pigliucci and Banta, 2015; Weisberg and Paul, 2016).

Weisberg (2014) was the first to argue against Lewis et al’s conclusions on Gould. Weisberg argued that while Gould sometimes overstated his case, most of his arguments were sound. Weisberg argued that, contra what Lewis et al claimed, they did not falsify Gould’s claim, which was that the difference between shot and seed measurements showed Morton’s unconscious racial bias. While Weisberg rightly states that Lewis et al uncovered some errors that Gould made, they did not successfully refute two of Gould’s main claims: “that there is evidence that Morton’s seed‐based measurements exhibit racial bias and that there are no significant differences in mean cranial capacities across races in Morton’s collection.”

Weisberg (2014: 177) writes:

There is prima facie evidence of racial bias in Morton’s (or his assistant’s) seed‐based measurements. This argument is based on Gould’s accurate analysis of the difference between the seed‐ and shot‐based measurements of the same crania.

Gould is also correct about two other major issues. First, sexual dimorphism is a very suspicious source of bias in Morton’s reported averages. Since Morton identified most of his sample by sex, this is something that he could have investigated and corrected for. Second, when one takes appropriately weighted grand means of Morton’s data, and excludes obvious sources of bias including sexual dimorphism, then the average cranial capacity of the five racial groups in Morton’s collection is very similar. This was probably the point that Gould cared most about. It has been reinforced by my analysis.

[This is Weisberg’s reanalysis]

So Weisberg successfully defended Gould’s claim that there are no general differences in cranial capacity between the races as ascribed by Morton and his contemporaries.

In 2015, another defense of Gould was mounted (Kaplan, Pigliucci and Banta, 2015). Like Weisberg before them, they state that Gould got some things right and some things wrong, but that his main arguments weren’t touched by Lewis et al. Kaplan et al stated that while Gould was right to reject Morton’s data, he was wrong to believe that “a more appropriate analysis was available.” They also argue that, given the poor dataset, no legitimate inferences to “natural populations” can be drawn. (See Luchetti, 2022 for a great discussion of Kaplan, Pigliucci and Banta.)

In 2016, Weisberg and Paul (2016) argued that Gould assumed Morton’s lead shot method was an objective way to ascertain the cranial capacities of skulls; Gould’s argument rested on the differences between the lead shot and seed measurements. Then in 2018, Mitchell (2018) published a paper based on lost notes of Morton’s that he had discovered, and he argued that Gould was wrong. He admitted, however, that Gould’s strongest argument, the “measurement issue” (Weisberg and Paul, 2016), was untouched, deeming it “perceptive.” In any case, Mitchell showed that the case of Morton isn’t one of an objective scientist looking to explain the world free of subjective bias: Morton’s a priori biases were strong and strongly influenced his thinking.

Lastly, and ironically, Rushton used Morton’s data from Gould’s (1978) critique, but didn’t seem to understand why Gould wrote the paper, nor why Morton’s methodology was highly suspect. Rushton took the unweighted average for “Ancient Caucasian” skulls, for which the sex and age of the skulls weren’t known. He also, coincidentally I’m sure, increased the “Mongoloid” skull size from 85 to 85.5 cubic inches (Gould’s table had it as 85). Amazingly, and totally coincidentally I’m sure, Rushton miscited Gould’s table and basically combined Morton’s and Gould’s data, increased the “Mongoloid” skull size slightly, and used the unweighted average of “Ancient Caucasian” skulls (Cain and Vanderwolf, 1990). How honest of Rushton. It’s ironic how people say that Gould lied about Morton’s data and that Gould was a fraud, when in actuality Rushton was the real fraud: he never recanted his r/K theory, and we can now see that he miscited and combined Gould’s and Morton’s results and made assumptions without valid justification.
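To see why unweighted versus “appropriately weighted” averaging matters here, consider a minimal sketch in Python. The subgroup sizes and means below are invented purely for illustration; they are not Morton’s, Gould’s, or Rushton’s figures:

# Hypothetical subgroups: (label, number of skulls, mean cranial capacity in cubic inches).
# All numbers are made up for illustration only.
subgroups = [
    ("subgroup A", 3, 90.0),
    ("subgroup B", 40, 82.0),
]

# Unweighted mean of the subgroup means treats a 3-skull sample as if it
# carried the same weight as a 40-skull sample.
unweighted = sum(mean for _, _, mean in subgroups) / len(subgroups)

# Weighted grand mean pools every skull equally.
total_n = sum(n for _, n, _ in subgroups)
weighted = sum(n * mean for _, n, mean in subgroups) / total_n

print(f"unweighted mean of means: {unweighted:.1f}")  # prints 86.0
print(f"weighted grand mean:      {weighted:.1f}")    # prints 82.6

A small, atypical subsample can drag an unweighted average well away from the weighted grand mean, which is the kind of distortion both Weisberg’s weighted reanalysis and Cain and Vanderwolf’s critique of Rushton’s use of the “Ancient Caucasian” figures point to.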

The discussion of bias in science is an interesting one. Since science is a social endeavor, there necessarily will be bias inherent in it, especially when studying humans and discussing the causes of certain peculiarities. I would say that Gould was right about Morton and while Gould did make a few mistakes, his main argument against Morton was untouched.

Skull measuring after Morton

The inferiority of blacks and other non-white races has been asserted ever since the European age of discovery. While there were of course two camps at the time, one which argued that blacks were not inferior in intelligence and another that argued they were, the claim that blacks are inferior in intelligence was, and still is, ubiquitous. The latter camp argued that smaller heads meant one was less intelligent, and that if groups had smaller heads, then they too were less intelligent than groups with larger heads. This was then used to argue that blacks hadn’t achieved any kind of civilizational accomplishments because they were intellectually inferior due to their smaller brains (Davis, 1869; Campbell, 1891; Hoffman, 1896; Ridpath, 1897; Christison, 1899).

Robert Bean (1906), working with cadavers, stated that his white cadavers had larger frontal and anterior lobes than his black cadavers, and he concluded that blacks were more objective while whites were more subjective. However, it seems that Bean did not report one result: the brains of his cadavers seemed to show no overall difference. Gould (1996: 112) discusses this issue (see Mall, 1909: 8-10, 13; Reuter, 1927). Mall (1909: 32) concluded, “In this study of several anatomical characters said to vary according to race and sex, the evidence advanced has been tested and found wanting.”

Franz Boas also didn’t agree with Bean’s analysis:

Furthermore, in “The Anthropological Position of the Negro,” which appeared in Van Norden’s Magazine a few months later, Boas attempted to refute Bean by arguing that “the anatomical differences” between blacks and whites “are minute,” and “no scientific proof that will stand honest proof … would prove the inferiority of the negro race.” (Williams, 1996: 20)

In 1912, Boas argued that the skull was plastic, so plastic that changes in skull shape between immigrants and their progeny were seen. His results were disputed (Sparks and Jantz, 2002), though Gravlee, Bernard, and Leonard (2002) argued that Boas was right—the shape of the skull indeed was influenced by environmental factors.

When it comes to sex, brain size, and intelligence, this was discredited by Alice Lee in her 1900 thesis. Lee devised a way to estimate the cranial capacity of living subjects, applied her method to members of the Anthropological Society, and showed wide variation, with of course overlapping sizes between men and women.

Lee, though, was a staunch eugenicist and did not apply the same thinking to race:

After dismantling the connection between gender and intellect, a logical route would have been to apply the same analysis to race. And race was indeed the next realm that Lee turned to—but her conclusions were not the same. Instead, she affirmed that through systematic measurement of skull size, scientists could indeed define distinct and separate racial groups, as craniometry contended. (The Statistician Who Debunked Sexist Myths About Skull Size and Intelligence)

Contemporary research on race, brain size, and intelligence

Starting in the mid-1980s, when Rushton first tried to apply r/K selection to human races, there was a lively debate in the literature, with people responding to Rushton and Rushton responding back (Cain and Vanderwolf, 1990; Lynn, 1990; Rushton, 1990; Mouat, 1992). Why did Rushton seemingly revive this area of “research” into racial differences in brain size?

Contextualizing Rushton’s views on racial differences needs to start in his teenage years. Rushton stated that being surrounded by anti-white and anti-western views led him to seek out right-wing ideas:

JPR recalls how the works of Hans Eysenck were significantly influential to the teenage Rushton, particularly his personality questionnaires mapping political affiliation to personality. During those turbulent years JPR describes himself as growing his hair long, becoming outgoing but utterly selfish. Finding himself surrounded by what he described as anti-white and anti-western views, JPR became interested in right-wing groups. He went about sourcing old, forbidden copies of eugenics articles that argued that evolutionary differences existed between blacks and whites. (Forsythe, 2019) (See also Dutton, 2018.)

Knowing this, it makes sense that Rushton was so well-versed in the old 1800s and 1900s literature on racial differences.

For decades, J. P. Rushton argued that the skulls and brains of blacks were smaller than those of whites. Since intelligence was related to brain size in Rushtonian r/K selection theory, this meant that some of the intelligence differences between blacks and whites, as indexed by IQ scores, could be accounted for by differences in brain size between them. Since the brain size differences between races would amount to millions of brain cells, this could then explain race differences in IQ (Rushton and Rushton, 2003). Rushton (2010) went as far as to argue that brain size was an explanation for national IQ differences and longevity.

Rushton’s preferred approach in the 90s was to use MRI to measure endocranial volumes (e.g., Rushton and Ankney, 1996). Of course, they attempt to show that smaller brain sizes are found in lower classes, women, and non-white races. Quite obviously, this is scientific racism, sexism, and classism (which Murray 2020 also wrote a book on). In any case, Rushton and Ankney (2009) tried arguing for “general mental ability” and whole brain size, claiming that the older studies “got it right” regarding not only intelligence and brain size but also race and brain size. (Rushton and Ankney, just like Rushton and Jensen 2005, cited Mall, 1909 in the same sentence as Bean, 1906, trying to argue that the differences in brain size between whites and blacks were noted then, when Mall was a response specifically to Bean! See Gould 1996 for a solid review of Bean and Mall.) Kamin and Omari (1998) showed that whites had greater head height than blacks while blacks had greater head length and circumference, and they described many errors that Lynn, Rushton, and Jensen made in their analyses of race and head size. Not only did Rushton ignore Tobias’ conclusions about measuring brains, he also ignored the fact that American Blacks, in comparison to American, French, and English whites, had larger brains in Tobias’ (1970) study (Weizmann et al, 1990).

Rushton and Ankney (2009) review much of the same material they did in their 1996 review. They state:

The sex differences in brain size present a paradox. Women have proportionately smaller average brains than men but apparently have the same intelligence test scores.

This isn’t a paradox at all; it’s very simple to explain. Terman assumed that men and women should be equal in IQ and so constructed his test to fit that assumption. Since Terman’s Stanford-Binet is still in use today, and since newer versions are “validated” on older versions that held the same assumption, it follows that the assumption is still built in today. This isn’t some “paradox” that needs to be explained away by brain size; we just need to look back into history and see why this is the case. The SAT has likewise been changed many times to strengthen or weaken sex differences (Rosser, 1989). It’s funny how this completely astounds hereditarians: “There are large differences in brain size between men and women but hardly any differences in IQ, yet a 1 SD difference in IQ between whites and blacks is accounted for in part by brain size.” I wonder why that never struck them as absurd. If Rushton accepted brain weight as an indicator that IQ test scores reflect differences in brain size between the races, then he would also need to accept that this should be true for men and women (Cernovsky, 1990), but Rushton never proposed anything like that. Indeed, he couldn’t, since sex differences in IQ are small or nonexistent.

In their review papers, Rushton and Ankney, like Rushton and Jensen (I assume this was Rushton’s contribution to that paper, since the same citations and arguments appear in his book and other papers), consistently return to a few references: Mall, Bean, Vint and Gordon, Ho et al, and Beals et al. Cernovsky (1995) has a masterful response to Rushton in which he dismantles Rushton’s inferences and conclusions based on other studies. Cernovsky showed that Rushton’s claim that there are consistent differences between races in brain size is false; Rushton misrepresented other studies which showed blacks having heavier brains and larger cranial capacities than whites. He misrepresented Beals et al by claiming that the differences in the skulls they studied were due to race, when the racial correlation was spurious; climate explained the differences regardless of race. Rushton even misrepresented Herskovits’ data, which showed no difference regarding stature or crania. So Rushton misrepresented the brain-body size literature as well.

Now I need to discuss one citation line that Rushton went back to again and again throughout his career writing about racial differences. In articles like Rushton (2002), Rushton and Jensen (2005), and Rushton and Ankney (2007, 2009), Rushton went back to a similar citation line: citing early-1900s studies which purported to show racial differences. Knowing what we know about Rushton looking for old eugenics articles that showed that evolutionary differences existed between blacks and whites, this can now be placed into context.

Weighing brains at autopsy, Broca (1873) found that Whites averaged heavier brains than Blacks and had more complex convolutions and larger frontal lobes. Subsequent studies have found an average Black–White difference of about 100 g (Bean, 1906; Mall, 1909; Pearl, 1934; Vint, 1934). Some studies have found that the more White admixture (judged independently from skin color), the greater the average brain weight in Blacks (Bean, 1906; Pearl, 1934). In a study of 1,261 American adults, Ho et al. (1980) found that 811 White Americans averaged 1,323 g and 450 Black Americans averaged 1,223 g (Figure 1).

There are, however, some problems with this citation line. For instance, Mall (1909) was actually a response to Bean (1906). Mall was race-blind as to where the brains came from and, after reanalysis, found no differences in the brain between blacks and whites. Regarding the Ho et al citation, Rushton completely misrepresented their conclusions. Further, brains that are autopsied aren't representative of the population at large (Cain and Vanderwolf, 1990; see also Lynn, 1989; Fairchild, 1991). Rushton also misrepresented the conclusions in Beals et al (1984) over the years (eg, Rushton and Ankney, 2009). Rushton reported that they found his same racial hierarchy in brain size. Cernovsky and Littman (2019) stated that Beals et al's conclusion was that cranial size varied with climatic zone and not race, and that the correlation between race and brain size was spurious, with smaller heads found in warmer climates, regardless of race. This is yet more evidence that Rushton ignored data that did not fit his a priori conclusions (see Cernovsky, 1997; Lerner, 2019: 694-700). Nevertheless, it seems that Rushton's categorization of races by brain size cannot be valid (Peters, 1995).

It would seem to me that Rushton was well-aware of these older papers due to what he read in his teenage years. Although at the beginning of his career, Rushton was a social learning theorist (Rushton, 1980), quite obviously Rushton shifted to differential psychology and became a follower—and collaborator—of Jensenism.

But what is interesting here in the renewed ideas of race and brain size are the different conclusions that different investigators came to after they measured skulls. Lieberman (2001) produced a table which shows different rankings of different races over the past few hundred years.

Table 1 from Lieberman, 2001 showing different racial hierarchies in the 19th and 20th century

As can be seen, there is a stark contrast in who was on top of the hierarchy based on the time period the measurements were taken. Why may this be? Obviously, this is due to what the investigator wanted to find—if you’re looking for something, you’re going to find it.

Rushton (2004) sought to revive the scala naturae, proposing that g, the general factor of intelligence, sits atop a matrix of correlated traits, and he tried to argue that the concept of progress should return to evolutionary biology. Rushton's r/K theory has been addressed in depth, and his claim that evolution is progressive is false. Nevertheless, even Rushton's claim that brain size was selected for over evolutionary history also seems to be incorrect—it was body size that was, and since larger bodies have larger brains this explains the relationship. (See Deacon, 1990a, 1990b.)

Salami et al (2017) used brains from fresh cadavers, severing them from the spinal cord at the foramen magnum, and they completely removed the dura mater. This then allowed them to measure the whole brain without any confounds due to parts of the spinal cord which aren't actually parts of the brain. They found that the mean brain weight for blacks was 1280g, ranging from 1015g to 1590g, while the mean weight of male brains was 1334g. Govender et al (2018) showed a 1404g mean brain weight for the brains of black males.

Rushton aggregated data from myriad different sources and time periods, claiming that by aggregating even data which may have been questionable in quality, the true differences in brain size would appear when averaged out. Rushton, Brainerd, and Pressley (1983) defended the use of aggregation, stating: "By combining numerous exemplars, such errors of measurement are averaged out, leaving a clearer view of underlying relationships." However, this method that Rushton used throughout his career has been widely criticized (eg, Cernovsky, 1993; Lieberman, 2001).
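The quoted rationale only works for random error. A minimal simulation sketch (my own illustration with made-up numbers, not Rushton's procedure or any of his critics' analyses) shows the gap in the reasoning: averaging many noisy estimates cancels random measurement error, but a systematic bias shared by the sources (such as unrepresentative autopsy samples) survives the averaging untouched.

```python
# A minimal sketch (my own illustration, not Rushton's method): averaging many
# noisy estimates removes random error but leaves any shared systematic bias intact.
import random

random.seed(0)
TRUE_VALUE = 1350.0   # hypothetical "true" mean brain weight in grams
SHARED_BIAS = -60.0   # hypothetical systematic bias (e.g., unrepresentative samples)

def one_estimate(bias=0.0, noise_sd=40.0):
    """A single study's estimate: true value + shared bias + random error."""
    return TRUE_VALUE + bias + random.gauss(0.0, noise_sd)

unbiased_studies = [one_estimate() for _ in range(1000)]
biased_studies = [one_estimate(bias=SHARED_BIAS) for _ in range(1000)]

print(sum(unbiased_studies) / len(unbiased_studies))  # ~1350: random error averages out
print(sum(biased_studies) / len(biased_studies))      # ~1290: the shared bias does not
```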

Rushton was quoted as saying "Even if you take something like athletic ability or sexuality—not to reinforce stereotypes or some such thing—but, you know, it's a trade-off: more brain or more penis. You can't have both." How strange—because for 30 years Rushton pushed stereotypes as truth and built a whole (invalid) research program around them. The fact of the matter is, regarding Rushton's hierarchy when it comes to Asians, they are a selected population in America. Thus, even there, Rushton's claim rests on values taken from a population selected for immigration into the country.

In Rushton's scheme, Asians had larger brains and higher IQ scores but lower sexual drive and smaller genitals; blacks had smaller brains and lower IQ scores with higher sexual drive and larger genitals; and whites were "just right," with brains slightly smaller than Asians', slightly lower IQs, sexual drive lower than blacks' but higher than Asians', and genitals smaller than blacks' but larger than Asians'. This is Rushton's legacy: keeping up racial stereotypes (and even then, his claims on racial differences in penis size do not hold).

The misleading arguments on brain size lend further evidence against Rushton's overarching program. Thus, this discussion is yet more evidence that Rushton was anything but a "serious scholar"; he was a man who trolled shopping malls asking people about their sexual exploits. He was clearly an ideologue with a point to prove about race differences, one which probably manifested in his younger, teenage years. Rushton got a ton wrong, and we can now add brain size to that list, too, due to his fudging of data, misrepresentation of data, and exclusion of data that didn't fit his a priori biases.

Quite clearly, in this scheme, whites and Asians have all the "good" while blacks and other non-white races have all the "bad." And thus, on this view, what explains social positions not only in America but throughout the world (based on Lynn's fraudulent national IQs; Sear, 2020) is IQ, which is mediated by brain size. Brain size was but a part of Rushton's racial rank ordering, known as r/K selection theory or differential K theory. However, his theory didn't replicate, and it was found that any differences noticed by Rushton could be environmentally driven (Gorey and Cryns, 1995; Peregrine, Ember, and Ember, 2003).

The fact of the matter is, Rushton has been summarily refuted on many of his incendiary claims about racial differences, so much so that a couple of years ago quite a few of his papers were retracted (three in one swipe). Among them was a theoretical article arguing for the possibility that melanocortin and skin color mediate aggression and sexuality in humans (Rushton and Templer, 2012). (This appears to be the last paper that Rushton published before his death in October 2012. How poetic that it was retracted.) The retraction was due mainly to an outstanding and in-depth look into the arguments and citations made by Rushton and Templer. (See my critique here.)

Conclusion

Quite clearly, Gould got it right about Morton—Gould's reanalysis showed the unconscious bias that was inherent in Morton's thoughts on his skull collection. Gould's—and Weisberg's—reanalyses show that there are small differences in the skulls of Morton's collection. Even then, Gould's landmark book showed that the study of racial differences—in this case, in brain and skull size—came from a place of racist thought. Writings from Rushton and others carry on this flame, although Rushton's work was shown to have considerable flaws, along with the fact that he outright ignored data that didn't fit his a priori convictions.

Although comparative studies of brain size have been widely criticized (Healy and Rowe, 2007), they quite obviously survive today due to the assumptions that hereditarians hold about "IQ" and brain size, along with the assumption that there are racial differences in brain size and that these differences are causal for socially-important things. However, as can be seen, the comparative study of racial brain sizes and the assumption that IQ is causally mediated by it are hugely mistaken. Morton's studies were clouded by his racial bias, as Gould and Weisberg and Kaplan et al showed. When Rushton, Jensen, and Lynn arose, they tried to carry on that flame, correlating head size and IQ while claiming that smaller head sizes and—by identity—smaller brains are related to a suite of negative traits.

The brain is of course an experience-dependent organ and people are exposed to different types of knowledge based on their race and social class. This difference in knowledge exposure based on group membership, then, explains IQ scores. Not any so-called differences in brain size, brain physiology or genes. And while Cairo (2011) concludes that “Everything indicates that experience makes the great difference, and therefore, we contend that the gene-environment interplay is what defines the IQ of an individual“, genes are merely necessary for that, not sufficient. Of course, since IQ is an outcome of experience, this is what explains IQ differences between groups.

Table 1 from Lieberman (2001) is very telling about Gould's overarching claim about bias in science. As the table shows, the hierarchy in brain size was constantly shifting throughout the years based on a priori biases. Even within the same time period, different authors came to different conclusions on whether or not there are differences in brain size between races. Quite obviously, the race scientists would show that race is the significant variable in whatever they were studying, and so the average differences in brain size would then reflect differences in genes and then intelligence, which would then be reflected in civilizational accomplishments. That's the line of reasoning that hereditarians like Rushton use when operating under these assumptions.

Science itself isn’t racist, but racist individuals can attempt to use science to import their biases and thoughts on certain groups to the masses and use a scientific veneer to achieve that aim. Rushton, Jensen and others have particular reasons to believe what they do about the structure of society and how and why certain racial groups are in the societal spot they are in. However, these a priori conceptions they had then guided their research programs for the rest of their lives. Thus, Gould’s main claim in Mismeasure about the bias that was inherent in science is well-represented: one only needs to look at contemporary hereditarian writings to see how their biases shape their research and interpretations of data.

In the end, we don’t need just-so stories to explain how and why races differ in IQ scores. We most definitely don’t need any kinds of false claims about how brain size is causal for intelligence. Nor do we need to revive racist thought on the causes and consequences of racial differences in brain size. Quite obviously, Rushton was a dodgy character in his attempt to prove his tri-archic racial theory using r/K selection theory. But it seems that when one surveys the history of accounts of racial differences in brain size and how these values were ascertained, upon critical examination, such differences claimed by the hereditarian all but dissappear.

Eugenics and Brain Reductionism in Colonial Kenya

4250 words

Reducing “intelligence” to the brain is nothing new. This has been the path hereditarians have taken in the new millennium to try to show that the hereditarian hypothesis is true. This is basically mind-brain identity as I have argued before. Why are African countries so different from other more developed countries? The hereditarian assumes that biology must be a factor, and it is there where they try to find the answer. This was what British Eugenicists in Kenya tried to show—that the brain of the Kenyan explained how and why East Africa is so different in comparison to Europe regarding civilizational accomplishments.

In this article, I will discuss eugenic attitudes toward Kenyans and the attempted reduction of intelligence to the brain, how these attitudes and beliefs, which grew out of Galtonian ideas, traveled with the British settlers, and how such beliefs never died out.

Eugenics in Kenya

Eugenic ideas on race and intelligence appeared in Kenya in the 1930s since eugenics promised biological solutions to social problems (Campbell, 2007, 2012). Of course these ideas grew from the heartland of eugenics where it began, with Francis Galton. So it's no surprise that the Britons who went to Kenya held those ideals. Moreover, the attitudes that the British settlers in Kenya had regarding the law and Africans seem reminiscent of Jim Crow America:

The law must be a tool used on behalf of whites to bend Africans to their will. It must be personal and racially biased, the punishment swift and sharp. (Shadle, 2010)

This story begins with F. Vint (1934) and Henry Gordon (1934) (who was in Kenya beginning in 1925). (See Mahone, 2007.) Gordon met Vint while he was a visiting doctor at the Mathari Mental Hospital in Nairobi (Tilley, 2005: 235). Both of these men attempted to show that Africans were inferior to Europeans in intelligence, and used physical brain measures to attempt to show this.

Vint used two measures—brain weight and brain structure. He also argued that the pyramidal cell layer of the Kenyan brain was only 84 percent of that of the European brain. Vint used others' comparisons of Europeans' brains for these studies, never studying them himself. So he concluded that the average Kenyan reached only the mental development of a 7- or 8-year-old European. While Vint (1934) argued that the brain of the Kenyan was 152 grams lighter than the average brain of the European, he didn't explicitly claim in this paper that this would then lead to differences in intelligence. We can infer that this was an implication of the argument based on his other papers. Campbell (2007: 75) quotes Vint in his article A Preliminary Note on the Cell Content of the Prefrontal Cortex of the East African Native on the subject of brain weight and intelligence:

Thus from both the average weight of the African brain and measurements of its prefrontal cortex I have arrived, in this preliminary investigation, at the conclusion that the stage of mental development reached by the average native is that of the average European boy of between 7 and 8 years of age.

Note the similarity between this and Lynn's claim that Bushman IQ is 54, which supposedly corresponds to that of European 8-year-olds. (See this article for a refutation of that claim.) So Vint believed that he had found the reason for racial "backwardness", and this of course came through reduction to biology. Campbell (2007: 60) also tells us how the eugenic movement in Kenya grew out of British eugenic ideas along with the brain reductionism they espoused:

Eugenics in Kenya grew out of the theories disseminated from Britain; the application of current ideas about the transmission of innate characteristics, in particular intelligence, shaped a new and extreme eugenic interpretation of racial difference. The Kenyan eugenicists did not, however, use the most obvious methods, such as pedigrees, statistics and intelligence testing, which were applied by British eugenicists when assessing the intelligence of large social groups. When examining race, an area in which British eugenics had not prescribed a methodology, the Kenyan doctors most radically made histological counts of brain cells and physical measurements of brain capacity. This led to the adoption of a particularly pathologising theory about biological inferiority in the East African brain.

Gordon (1934) found an average cranial capacity of 1,316 cc in Kenyans in comparison to an average cranial capacity of 1,481 cc in Europeans. This led to the conclusion that the Kenyan brain was both quantitatively and qualitatively inferior in comparison to the European brain. To them, this meant that the brain was where to look, since it would show differences in intelligence between groups of people that could actually be measured. Gordon (1934: 231-232) describes some of Vint's research on the brain, stating that physical and environmental causes must not be discounted:

Dr. Vint’s report on bis naked-eye and microscopic examintion of one hundred brains of normal male adults is to be published shortly in the Journal of Anatomy; but in order that we may have a little more light on the question of whether the East African cerebrum is, on the average, on a lower biological level than the European cerebrum, I may mention these facts:

In the areas of the cortex examined, Dr. Vint found a total inferiority in quantity, as compared with the European, of 14.8 percent. His naked-eye examination revealed a significant simplicity of convolutional pattern and many features generally called primitive; e.g. the lunate sulcus, described by Professor Elliot Smith, was present in seventy of the one hundred brains. The microscopic examination showed the important supragranular layer of the cortex to be deficient in all the six areas that Von Economo examined, and the cells of these areas to be deficient very markedly in size, arrangement and in differentiation.

These, I think, are enough of Dr. Vint’s new facts to make us feel that the deficiencies found in examination of the living are indeed associated with suggestive deficiency in the native cerebrum; that we are in fact confronted in the East African with a brain on a lower biological level. This, I submit, is a matter requiring investigation by the highest expert skill into the question of heredity or environment or both.

However, going back to what Campbell stated about Kenyan eugenicists not using tests, Gordon (1934) states that the Binet was "quite unsuitable", while the Porteus maze test was "both suitable and to native liking." Gordon stated that although the sample was too small to draw a definitive conclusion, the results trended in line with Vint's measures of the brain at puberty as described by Gordon. Gordon, it seemed, had a negative view on cross-cultural comparisons between whites and blacks:

I find, on coming out of the darkness and confusion of Africa into the clear and tranquil air of European psychological thought and practice, that mental tests and mental ages by themselves are largely depended upon for the diagnosis of amentia. I venture to say only this: In my experience of many thousands of natives, intelligence in its ordinary connotation is present amongst them often to an enviable degree; nevertheless, I believe we may do the native injustice and even injury if we are content to estimate his "intelligence" only in terms of his apparent ability to cope with the exactions of European scholastic education. Moreover, in the present state of psychological knowledge it seems to me that any use of mental tests as a means of comparison between European and African—races of widely different physical and social heritage and environment—carries the risk of misleading African education and legislative policy. The field for research by the trained psychologist of broad outlook is enormous in East Africa; his presence would be welcome. (Gordon, 1934)

Nevertheless, despite Gordon’s surprisingly negative view on the cross-cultural validity of tests, he did still believe that to ameliorate amentia in the native population that eugenic measures must be undertaken.

We can see now how Vint and Gordon attempted to infer mentality from the brain—and of course inferior mentality in the brain of the East African, in this case Kenyans (or rather, the tribes that were studied). So, on the basis of Vint's studies, this 1933 commentary in Nature, titled European Civilisation and African Brains, proclaimed that due to brain differences, "Europeanisation" for the Kenyan just wasn't possible. It was Gordon's intention to use the study of racial differences to enact eugenic policies in Kenya. For if Kenyan "backwardness" is due to their intelligence, which is due to their deficient brain, then this would have implications for their education and health. Regarding "backwardness", Gordon (1945: 140) had this to say:

A few of the important questions ancillary to this leading qualitative question are:
(1) Mental deficiency, ignored by the laws of Kenya including the immigration law;
(2) Unprevented preventable diseases;
(3) Miscegenation, present and future;
(4) The introduction of contraceptives to Asiatics and Africans and no appearance of organized family planning.

The second momentous qualitative issue is the accepted “backwardness” of our African group and the question: what is backwardness? This condition, long discussed, has never been investigated; its causes and nature are wholly unknown; the correct treatment for it is wholly unknown. There are some who think they know these things and have unwittingly intensified a situation containing a deep appeal for truth. This situation must inevitably be encountered by a population inquiry.

I have often pointed out that scientific light upon "backwardness" is required for commonsense thought and action in regard to difficult questions in trusteeship for our Africans, of which I name only the following:
(1) Scholastic education and vocational training;
(2) Mental deficiency and mental disorder;
(3) Alcoholism and drug addiction;
(4) Adult and juvenile crime;
(5) The ayah question;
(6) The urbanization of a backward rural people;
(7) The capacity of the East African Native to acquire British culture.

Such questions cannot be lightly brushed aside or lightly answered by a nation anxious to help up a weaker people; nor is the responsibility of taking charge of that people and its future without scientific answers to such questions one to be lightly continued. It should be more widely known that the differences between the white and the black man are far from being confined to colour, and that to proceed as if the resemblances were all that matters may be a grievous error.

Gordon stated that the most important "resource" for study was the population, which other scientists ignored. Gordon dubbed this the "population problem." Due to these kinds of eugenic ideas, there were blood banks in Kenya that were racially segregated (Dantzler, 2017). What Gordon, Vint and other Kenyan eugenicists were worried about was amentia, which is intellectual disability or severe mental illness. Although Gordon (1934) did discuss some environmental influences on the brain development of the East African in his talk to the African Circle, Gordon argued for eugenic proposals due to what he claimed to be a high level of amentia in the population which led to decreased intelligence. In this same talk, he discusses the previous research of Vint's, showing data that the brain growth of the East African was about half as much as that of the European. He also stated that they were inferior to Europeans not only in brain measures, but also in "certain physical and psychophysical attributes, but also in reaction to the mental tests used by the enquiry, although it is not pretended that mental tests suitable to the East African have yet been arrived at" (Gordon, 1934: 235). He then stated that only eugenic proposals could fix the inborn attributes of the so-called "aments." Thus, if there are differences in the brain between Europeans and East Africans, then "efforts to educate the African to the standard of the European could prove to be either futile or disastrous" (Mahone, 2007).

So, without a good understanding of eugenics and how it works, it didn't make sense to try to develop African civilizations, since their supposedly inferior mentality, rooted in their brains, made it a foregone conclusion that they wouldn't be able to maintain what they would need to be educated and in good health. Thus, to Kenyan eugenicists like Gordon and Vint, Kenyans were biologically inferior due to their brains.

It is worth noting that Gordon didn't believe that human races were the same species, and he believed that the Kenya colony was in danger of degrading due to the emigration of "mentally unstable" Europeans from the upper classes. He did, though, believe that some of them could be cured and become useful in the colony, while also believing that such "mental unstables" should not have been sent to the colony in the first place (Campbell, 2007). Gordon also claimed that high grade "aments" could flourish in a low level society undetected, only being detected once introduced to European civilization.

After Gordon and Vint came J. C. Carothers who, despite lacking psychiatric training, was sent to Kenya as a specialist psychologist (Prince, 1996: 235). He became the director of the Mathari mental hospital in Nairobi in 1938 and held the position until 1950 (Carson, 1997) while studying the "insane" at the hospital (Carothers, 1947). Although he seemed to be influenced by Gordon and Vint, and seemed to share the same brain reductionism as them, he approached it from a more environmental tilt, although he did not discount heredity as a factor in racial differences. Carothers claimed that mental illness and cognitive/mental deficiency are "normal physical state[s]" in the African:

In searching for a plausible theory of African psychology, Carothers attempted to explain a perceived difference between Africans and Europeans. He notes gross variation in physical characteristics, such as skin color, which he then correlates with supposed differences in cognitive capability. He quotes Sequeira, the renowned dermatologist, in support:

“both the cerebral cortex and the epidermis are derived from the same elementary embryonic layer–the epiblast….It should therefore not be surprising on embryological grounds to find differences in the characters of the cerebral cortex in different races (2).”

Carothers also investigated the general shape, fissuration and cortical histology of the African brain as compared to the European brain. While he notes that “no sweeping conclusions in regard to African mentality can be arrived at on the basis of these data,” his general conclusion was that Africans exhibit a “cortical sluggishness” due to under-use of the frontal lobes, which inhibited their ability to synthesize information (3).

With the frontal lobe hypothesis, Carothers claimed that cognitive or mental inferiority was an inherent state in the African. “With the Negro,” he writes, “emotional, momentary and explosive thinking predominates… dependence on excitement, on external influences and stimuli, is a characteristic sign of primitive mentality.” According to Carothers, the African’s “mental development is defined by the time he reaches adolescence, and little new remains to be said” (3). In this supposed child-like permanence, “above all, the importance of physical needs (nutrition, sexuality)” prevail (2). This belief was used as proof that Africans could not appreciate the Victorian moral values of hard work and education, the desire for which was said to have come in part through denial of the sexual drive. By extension, the African was denied the possibility of reaching a civilized state.

Carothers also claimed that the African exhibits an “impulsivity [that is] violent but unsustained, … an ‘immaturity’ which prevents complexity and integration in the emotional life” (2). Using this discourse of violence, he medicalized “mental illness” as a normal physical state in the African. When the British administration in Kenya called upon Carothers to assess the Mau Mau rebellion (1945-1952), ethnopsychiatry was “commandeered to clothe the political interests of the colonists in the pseudo-scientific language of psychiatry to legitimize European suzerainty” (4). After due investigation, Carothers reported to the British government that “the onus for the rebellion rests with the deficiencies characteristic of the native Kenyans and not with the policies of the British colonial desire” (3). (Carson, 1997)

In 1951, Carothers (1951: 47) argued for a cultural view to explain the “frontal idleness” of the African, while not discounting “the possibility of anatomical differences” in explaining it:

This frontal idleness in turn can be accounted for on cultural grounds alone, but the possibility of anatomical differences, is not thereby excluded.

Finally, a plea is voiced for expert anatomical study of the African brain and, in view of his resemblance to a certain type of European psychopath, of the brains of the latter also.

Carothers published a WHO report in 1953 where he stated that he would relate cultural factors, malnutrition, and disease to mental development (Carothers, 1953). Carothers (1953: 106) stated that "The psychology of the African is essentially the psychology of the African child." This claim, of course, gels well with the Gordon-Vint claim that the brain growth of the East African subsides much earlier than that of the European brain. Carothers also reinterpreted Vint's findings on the thinner cerebral cortex.

[Carothers] introduced an interpretation which permitted education to play a role in post-natal cerebral development. Noting the remarkable enhancement in interest and alertness "that comes to African boys and girls as a result of only a very little education… often comprising little more than some familiarity with written symbols in reading, writing and arithmetic," he raised the question whether "it is not possible that the maturation of those cortical cells in Europe is also dependent on the acquisition of that skill" (Carothers, 1962, p. 134). (Prince, 1996: 237)

Though regarding the so-called thinner cortex of the African, Tobias (1979) stated:

Published interracial comparisons of thickness of the cerebral cortex and, particularly, of its supragranular layer, are technically invalid: there is no acceptable proof that the cortex of Negroes is thinner in whole, or in any layer, than that of Europeans. It is concluded that vast claims have been based on insubstantial evidence.

However, Cryns (1962: 237) stated that while there are differences in brain morphology between whites and blacks, there was no evidence that this accounted for the alleged inferiority in intelligence in Africans:

With regard to brain fissuration and the histological structure of the cortex, both Carothers (14, p. 80) and Verhaegen (49, p. 54) state that there is no scientific evidence sufficient to assume that mental capacity is in some degree related to the surface or structure of the cerebral cortex.

The general conclusion, then, to be drawn from the above anatomical and physiological brain studies is that there is sufficient empirical evidence indicating the existence of morphological differences between White and Negro brains, but that there is no sufficient evidence to indicate that the morphological peculiarities found in the African brain are of functional significance, i.e., account for an alleged intellectual inferiority.

Gordon and Vint’s works and conclusions in the modern day

Reading the works of these two men, we can see that what they were saying is echoed today, since contemporary hereditarians argue for very similar conclusions. Rushton was one of the main hereditarians who argued that biological reductionism was true, and he authored many papers with Ankney on the correlation between general mental ability (GMA) and the brain (Rushton and Ankney, 2007; 2009).

Rushton, however, aggregated numerous different measurements from different time periods, even from authors who did not subscribe to the racial hierarchies that Rushton proposed—in fact, this "hierarchy" changed numerous times throughout the ages (Lieberman, 2001). The current hierarchy came about due to East Asia's economic rise starting after WW2, and the "shrinking skulls" of Europeans began in the 1980s with Rushton (Lieberman, 2001). Although Lynn (1977, 1982) did speak of higher Japanese IQs, it is of course in the context of "Japan's dazzling commercial success." (See here for a refutation of Lynn's genetic hypothesis regarding Asians.)

Gordon’s and Vint’s works were cited favorably by Rushton and Jensen (2005: 255) and Rushton and Jensen (2010) while Rushton referenced Vint many times (Rushton, 1997; Rushton and Ankney, 2009). These works were cited as being in agreement with Morton’s studies on cranial capacity (see Gould, 1996; Weisberg, 2014; Kaplan, Pigliucci and Banta, 2015; Weisberg and Paul, 2019). Although in a recent paper, Salami et al (2017) showed that the average brain weight of Africans has been underestimated and came to a value of 1280g with between 1015g and 1590 g (a mean of 1334g was found for the brain’s of males) while no statistical difference between groups was found. This was also replicated by Govender et al (2018) in South Africa.

It is quite obvious, looking at how contemporary hereditarian research is trending, that the biological reductionism of Gordon and Vint is still alive today in fMRI and MRI studies. Contemporary hereditarians have also implicated the frontal lobe as being part of the reason why blacks are "less intelligent" than whites, and as we have seen, this is a decades-old claim. These beliefs were held due to outdated and outright racist views on the "quality" of the greatest "resource", according to Gordon: the population.

Conclusion

Eugenics in Kenya—as it was in America—wasn’t a scientific movement; it was a social and political one. Eugenic ideas were practiced all over the world from the time of antiquity all the way to the modern day. The biological reductionism espoused by Kenyan eugenicists is still with us today, and instead of using post-mortem brains and crude skull measures, we are using more sophisticated technologies to try to show this reductionism is true. However, since mind doesn’t reduce to brain, this is bound to fail.

As we can see, this kind of gross biological reductionism hasn't left us; it has only strengthened. The mental and physical reductionism inherent in these theories has never died—it just quieted down for a bit after WW2.

What is inherent in such claims is that there are not only racial brains, but racial minds. What Gordon, Vint and Carothers tried to argue was that the capacity for rebellion was inherent in the Kenyan native, not a response to British rule and the society the British attempted to create in Kenya. This seems to me to be like the "drapetomania" craze during slavery in America: pathologizing a normal response—like wanting to escape slavery—and creating a new psychological diagnosis to explain why a group acts a certain way. The views espoused by the scientific racists in Kenya were not new, since earlier in the 19th century the supposed inferiority of the "black brain" was well-noted and discussed. I have, though, found one dissenting view from Tiedemann (1836: 504), who claims that his studies led him to the belief, "by measuring the cavity of the skull of Negroes and men of the Caucasian, Mongolian, American, and Malayan races, that the brain of the Negro is as large as that of the European and other nations."

Campbell (2007: 219-220) explains that although most probably still held their eugenic beliefs, the changing intellectual climate in Britain was a main reason why the eugenics movement in Kenya was not sustained.

By the late 1930s, although there had been no radical change in settler attitudes to race and no upheaval in the policy or personnel of the colonial administration the Kenyan eugenics movement petered out. We must assume that individuals retained their eugenic beliefs, but its potency in Kenya’s lore of human biology was lost. The causes of the demise of Kenyan racial eugenics lay in the financial retrenchment of the 1930s and responses in the metropole at a time when scientific racism was being increasingly undermined on both political and intellectual grounds. Without metropolitan support, Kenyan eugenics could not be sustained as a social movement. The size and composition of the Kenyan European community was such that there were not enough individuals with the intellectual and scientific interests and authority to establish an independent, self-sufficient organisation. Kenyan eugenics was forced to look to the metropole for financial, intellectual and institutional legitimacy. The demise of Kenyan eugenics is therefore intimately linked with a changing intellectual climate in Britain.

The views espoused by Gordon, Vint and Carothers have not left us. After Arthur Jensen revived the race and IQ debate in 1969, searches for the cause of why blacks supposedly are less intelligent than whites came back into the mainstream. Rushton and Jensen relied on such works to argue for their conclusion that lower intelligence, and hence lower civilizational attainment and academic performance, was due to genes and brain structure. Such antiquated views, it seems, just will not die. Lieberman (2001) showed how the racial hierarchy in brain size has changed throughout the ages based on current social thought, and of course, this has affected hereditarian thinking in the modern day.

Although some authors in the 1800s and 1900s proclaimed that brain weight had no bearing on one's mental faculties, quite obviously the Kenyan eugenicists never got that memo. Nevertheless, there are a few studies that contradict Rushton's racial hierarchy in brain size, showing that the brains of blacks are in the range of those of whites.

Discussions on the "quality" of brains of different groups of course have not gone away; only the language has changed. It seems to me that, like with most hereditarian claims, it's just racists citing racists as "consensus" for their claims. Gordon (1934) asked why the brain of the Kenyan does not develop in the same way as the European's. Since the reductionism they held to is false, such a question isn't really relevant.

How Mind-Body Dualism and Developmental Systems Theory Refute Hereditarianism

2500 words

The concept of hereditarianism has been a topic of intense debate for decades. From Francis Galton's inquiries into what makes "genius" to the advent of twin studies in the 1920s, hereditarian ideas have been espoused in the literature as having explanatory power. Hereditarianism is the theory that genes cause and influence psychological traits and differences in them between people and even groups.

The main claim is that genetics is the main influence on and cause of psychological traits like IQ/intelligence. Hereditarians claim that intelligence is greater than 0 percent genetically caused (Warne, 2021) or that a "substantial proportion (20% or more) of differences in psychological traits within and among human populations is caused by genes" (Winegard, Winegard, and Anomaly, 2020). So hereditarianism is true if intelligence is greater than 0 percent genetically caused or if 20 percent or more of the differences in psychological traits are genetically caused. However, the concepts of mind-body dualism (MBD) and developmental systems theory (DST) offer a very powerful challenge to this kind of genetic reductionism/determinism.

MBD is the philosophical theory that the mind and the body are distinct entities. Basically, the mental is irreducible to the physical. If the mental is irreducible to the physical, then the mental can’t be explained in physical terms. Facts about the mind can’t be stated using a physical vocabulary and the mind can’t be described in material terms using words that only refer to material properties. This refutes psychological genetic reductionism; it is impossible for human psychology to be genetically caused/influenced and so this holds for differences between groups and individuals as well.

Developmental systems theory (DST) further establishes that since human development is dynamic and interactive, genes, environment, behavior and other developmental resources all interact to form the phenotype and shape development. Thus, DST refutes the view, too, that genes cause the development of traits and of the organism as a whole. The hereditarian programme is inherently reductionist, and it attempts to reduce human life and its particularities to genes and biology.

The possibility that hereditarianism could reinforce social inequalities is high. From Jensen to Murray and Herrnstein, it has been stated for decades that we need to do something about the lower classes and their having children. Hereditarian policy would then amount to removing people deemed undesirable from society, based on the false premise that genes have anything to do with their psychology or the undesirable social traits they have.

Hereditarians claim that their research is objective, that they are merely interested in the search for truth. But modern hereditarian thinking can be traced back to Francis Galton. The presupposition that human psychology can be quantitative has its origins with Galton and is directly derived from his eugenic ideas (Michell, 2021). So hereditarian ideas and eugenics are inherently linked. It is also the case that genetic determinist ideas like hereditarianism deflect away from actionable positions that could reduce disease far more than eugenic proposals would (Holtzman, 1998).

Hereditarianism could be used as justification to accept currently existing inequities and inequalities. For if these differences between people are inborn and the result of their genes, then there would be some harsh realities that we as a society would need to accept. People are of course genetically different, and these genetic differences are then said to somehow cause group (class) and individual differences. However, contra Murray (2020), social class differences do not lie in the genes, and genetics can't be used as justification to maintain a ruling class, limit a group's ability to have children, or minimize social safety nets (Holtzman, 2002).

Why is hereditarianism alluring?

I think it’s simple—it gives us quite simplistic answers on the nature of group, individual, and societal differences. If differences within and between these things reduce to genes, then we can say that the causes are due to genes and they thusly have certain consequences attached to them. This, again, shows how eugenic and hereditarian ideas are married to each other.

It is alluring because it is simplistic, reductionist, and deterministic. It posits that differences within and between individuals, groups, and societies come down to genes. Of course individuals, groups, and societies have different gene frequencies—that is the correlation. But the folly is to assume that the genetic differences between them drive the trait (used loosely) differences between them. That is something that has yet to be explained—there is no mechanism of action.

The genetic determinism that is steeped in society also plays a role. If genes largely determine one's intelligence, then it provides predictability and stability. It suggests a fixed level of ability that simply isn't malleable, given how genes are thought to work by the hereditarian. This then offers a level of understanding to the hereditarian—the causes of ability and of differences in it between people, groups, and societies are genetic differences between them, even if we don't know exactly how these differences manifest themselves genetically. This is why they have to use twin, family, and adoption studies along with GWASs and PGSs. This lends them the deterministic tilt they need in order to show that society is stratified due to the genetic differences between groups and individuals.

This assumption, though, is quite clearly false: societies are genetically stratified (the fact that needs to be explained, which the hereditarian tries to argue is due to genetic differences), social stratification maintains this genetic stratification, social stratification causes cognitive stratification, and tests reflect prior cognitive stratification. Thus, the structure of society bakes in these stratifications, giving the illusion that genetic differences are the causes of differences between people (Richardson, 2017, 2021).

Genetic determinism and reductionism then lead to a kind of “gene worship.” For if differences are mainly due to genes, then the gene is powerful, powerful enough to be causal in the sense that genes dictate certain outcomes that would then manifest in social life and then dictate the course of a society or group of people.

How do MBD and DST combine to refute hereditarian ideas?

MBD and DST combine to refute hereditarianism quite easily. Hereditarianism has two main assumptions:

A1: Genes are the main determinant of differences in traits and of psychological differences.

A2: Genes and environment can be teased apart using certain methods which show the proportion of influence each has on a trait.

Assumption 1: This assumption is easily dispatched due to the irreducibility of the mental. Accepting the irreducibility of the mental undermines the hereditarian assumption that genes can account for most of the variation in IQ and other psychological traits. Hereditarianism is a physicalist theory and so relies on the assumption that the mental can be reduced to the physical, whether it be genes, brain physiology, or the brain itself. But if the mental is irreducible (and it is), then the hereditarian programme becomes highly questionable and indeed outright false, since no hereditarian has articulated a specified measured object, object of measurement, and measurement unit for any psychological trait, IQ included. Since hereditarianism seeks to reduce psychology to genes, the irreducibility of the mental challenges that assumption and ensures that a hereditarian psychology just isn't possible. So if the mental is irreducible, then it implies that the hereditarian hypothesis is false, since psychology can't be explained by the physical, being immaterial. Attempting to explain and measure psychological traits based on genetic assumptions is thus bound to fail. And there is also the measurement and quantification issue—the irreducibility of the mental challenges the claim that psychology can be measured and quantitative, since it isn't physical.

Assumption 2: Ever since Susan Oyama published The Ontogeny of Information in 1985, simplistic and reductive accounts of genetics and the nature of traits have been called into question based on an interactive view of developmental resources. Hereditarians privilege genetic factors above other developmental resources, as if they are special resources. But unlike hereditarian theories, DST proponents argue against any a priori privileging of any developmental resource. So this suggests that genetic factors lack superiority—either inherent or predetermined—over other developmental resources. Genes are on par with other developmental resources (the causal parity thesis, CPT), and so this hereditarian assumption is also false.

Thus MBD and DST combine to refute the simplistic assumptions of the hereditarian. Together they challenge the reductive and deterministic assumptions of hereditarianism. They do this by calling into question the measurability of psychological traits while advocating for a holistic, non-reductionist perspective which acknowledges the irreducible interplay between all developmental resources.

The arguments against hereditarianism from MBD and DST

Now that I have described hereditarianism and what it sets out to do, along with how MBD and DST refute hereditarianism, I will provide two arguments. The first will conclude that genes are neither special nor privileged developmental resources. The second will then combine the arguments from MBD and DST to show that the hereditarian dream is a logical impossibility.

P1: If genes are special or privileged developmental resources, then they possess a unique or superior causal role in shaping development compared to other factors.
P2: If causal parity exists, then no developmental resource possesses a unique or superior causal role in shaping development.
P3: If genes do not possess a unique or superior causal role in shaping development, then they are not special or privileged developmental resources.
P4: Causal parity exists.
C: Thus, genes are not special or privileged developmental resources.

Premise 1: This premise asserts that if genes are special, then they must have a distinct role—compared to other resources—in explaining and shaping development. Genes would need to show a unique influence in shaping developmental outcomes. This is a main assumption of hereditarianism and perhaps the most important one, because if the assumption is false then hereditarianism cannot possibly be true.

Premise 2: However, since DNA sequences (genes) do nothing on their own until activated by and for the physiological system, we can safely state that no single resource is over and above another in doing any explaining. Development is interactive rather than individual; these resources work together rather than in isolation.

Premise 3: This premise builds on the idea that if genes lack a superior, or unique causal role in shaping development, then they cannot be privileged or special resources. The absence of exclusive causal influence diminishes—and outright refutes—the claim that genes are special or unique developmental resources with a privileged role in development.

Premise 4: This premise is derived from DST literature, where development is understood as a complex and multifaceted event, influenced by many interactive and irreducible factors. It highlights a need for a holistic, rather than reductionist approach to understanding development.

Conclusion: This conclusion is derived from the claim that if causal parity exists (P4), then no developmental resource possesses a unique or superior causal role, so genes can't be considered special or privileged when it comes to development. P2 emphasizes the equal importance of the interaction of developmental resources, which challenges the claim that any of those resources can be isolated as a causal, privileged factor. P3 challenges the assumption that genes alone can determine how traits develop, which then reinforces the interactivity between the resources. P4 then asserts that causal parity exists, and so no developmental resource, including genes, should be privileged. This directly refutes a sometimes unstated assumption of hereditarianism.


P1: If hereditarianism is true, then mental abilities can be explained by genetic factors and can be accurately measured. (Assumption of hereditarianism)
P2: If mental abilities are irreducible to the physical, then they cannot be explained by genetic factors. (From MBD)
P3: If no developmental resource is privileged in biological systems, then genetic factors alone cannot determine any trait, including psychological traits. (From DST)
C1: If mental abilities are irreducible to the physical, then hereditarianism is false. (Modus tollens, P1 and P2)
C2: If no developmental resource is privileged in biological systems, then hereditarianism is false. (Hypothetical syllogism, C1, P3)

Premise 1: This is an accepted and accurate depiction of hereditarianism and is how hereditarianism is understood in the literature.

Premise 2: This draws on MBD and the irreducibility of the mental. I have been using dualistic arguments for years to argue against the concept of hereditarianism. Mental abilities cannot be reduced to anything physical, and therefore refutes the main assumption of hereditarianism, that genes can determine psychological traits and differences in them between people, groups and societies.

Premise 3: This is derived from DST. Any kind of development is due to the interactive and irreducible nature of development. It asserts that there is no privileged level of causation between resources, which then refutes the claim that genes should be looked at to explain any differences—any that we deem “good and bad”—between people.

Conclusion 1: This conclusion follows using modus tollens on P1. P2 tells us that if mental abilities are irreducible to the physical, then they cannot be explained by genetic factors. But if the consequent of P1 is false (mental abilities cannot be explained by genetic factors and accurately measured), then the antecedent (that hereditarianism is true) must also be false. Therefore, if mental abilities are irreducible to the physical, then hereditarianism is false.

Conclusion 2: If mental abilities are irreducible to the physical (C1), and no developmental resource is privileged in biological systems (P3), then it follows that hereditarianism is false. This conclusion stems from the entailment of hereditarianism which relies on privileging genetic factors over and above other factors. But if no developmental resource holds privilege, then hereditarianism is false, since it quite clearly assumes the superiority of genes in trait determination. Thus the conclusion challenges hereditarianism based on the premise that no developmental resource is privileged, and since hereditarians privilege genes, then hereditarianism is false.
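To lay out the deductive skeleton of this combined argument symbolically, here is a minimal sketch in propositional notation. The letters are my own labels, and the bridging premise B, which the prose above leaves implicit (that hereditarianism privileges genes as determiners of psychological traits), is stated explicitly:

```latex
% H = hereditarianism is true;  G = mental abilities can be explained by genetic
% factors and accurately measured;  I = the mental is irreducible to the physical;
% N = no developmental resource is privileged;  D = genetic factors alone can
% determine psychological traits.
\begin{align*}
\text{P1: } & H \rightarrow G \\
\text{P2: } & I \rightarrow \neg G \\
\text{P3: } & N \rightarrow \neg D \\
\text{B: }  & H \rightarrow D \quad \text{(bridging premise, implicit in the text)} \\
\text{C1: } & I \rightarrow \neg H \quad \text{(from P1 and P2 by modus tollens)} \\
\text{C2: } & N \rightarrow \neg H \quad \text{(from B and P3 by the same step)}
\end{align*}
```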

Conclusion

The two main assumptions of hereditarianism quite clearly do not hold when inspected through a MBD and DST lens. Thus, since hereditarianism is false, believing it to be true would be socially destructive. And such socially destructive policies were an outcome of IQ tests when they were brought to America, using the assumption that genes were the primary cause of differences in IQ scores. Here's the argument:

P1: If hereditarianism is false, then it does not accurately represent the complex nature of human traits and abilities.
P2: If we believe in a false representation of human traits and abilities, then it can lead to discriminatory practices and unjust societal outcomes.
P3: Hereditarianism is false.
C: Thus, if we believe hereditarianism to be true when it is false, then it can lead to socially destructive outcomes.

This is why I have argued that IQ tests should be banned. Nevertheless, hereditarianism, and along with it IQ-ism, is proven false using conceptual arguments. The dissimilarity between psychological traits and physical objects shows that psychology can't be measured, so there can't be a science of the mind. For these reasons, hereditarian ideas should be directly discounted and ignored, since their assumptions are clearly false.

The Concept of Genotypic IQ is False and Socially Destructive

2050 words

Introduction

The concept of "genotypic IQ" (GIQ) refers to a theoretical genetic potential for IQ. Basically, GIQ is one's IQ without any corresponding environmental insults, and of course it is said to be due to the interaction of many genes each with small effect (which is the justification for GWAS). This, though, is like the concept of true score. "A true score is the hypothetical average of a thousand parallel testings of someone's intellectual abilities." Nevertheless, this concept of GIQ is used by hereditarians to proclaim that "genotypic intelligence is deteriorating" (Lynn, 1998), and that this is due to "dysgenic fertility", which is "a negative correlation between intelligence and the number of children" (Lynn and Harvey, 2008: 112). "Genotypic intelligence is the genetic component of intelligence and it is this that has been declining" (Lynn and Harvey, 2008: 113); it is, in other words, the IQ people would supposedly have if they had access to optimal environments. I will argue in this article why the concept of GIQ is nonsense.
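For reference, the "true score" being analogized here is the classical test theory construct, which can be written in a couple of lines; the gloss mapping it onto "genotypic IQ" is my reading of the comparison being drawn, not a formula from Lynn:

```latex
% Classical test theory: an observed score X decomposes into a hypothetical true
% score T plus error E, with T defined as the expectation of X over hypothetical
% parallel testings.
\begin{align*}
X &= T + E, \qquad \mathrm{E}[E] = 0 \\
T &:= \mathrm{E}[X] \quad \text{(the average over hypothetical parallel testings)}
\end{align*}
% "Genotypic IQ" is used analogously: a hypothetical score with "environmental
% insults" stripped out. It is hypothetical in exactly the same way, which is
% the point of the comparison above.
```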

What is GIQ?

So GIQ is the so-called genetic component of intelligence. This, of course, is based on the assumption that genes are causative for IQ, which in turn rests on the assumption that heritability can tell us anything, however weakly, about genetic causation (it can't).

Lynn (2015) talks about the GIQs of Africans, pygmies, and aborigines. He also claims that the IQ of African Americans is "solely genetically determined", since it hasn't changed in some 80 years. This claim, though, is false (Dickens and Flynn, 2006). Nevertheless, the claim of GIQ arises from the assumption—which hasn't been tested, nor can it be—that IQ and other psychological traits are caused/influenced by genes. I have argued at length that this claim is false.

It seems that the only people discussing this concept are the usual suspects (Lynn, 2015, 2018; Woodley of Menie, 2015; Madison, Woodley of Menie, and Sanger, 2016; Kirkegaard, Lasker, and Kura, 2019; Piffer, 2023). The claimed decline in so-called genotypic IQ is used as a cudgel to argue that the “dysgenic effect” of low-IQ women having more children is driving that decline. Weiss (2021: 35) puts it like this:

If women with a low IQ give birth to their children earlier than women with a high IQ, the mean genotypic IQ of the population will also decrease (Comings 1996), even if the number of children in both population strata should be the same. If the number of children across the IQ distribution is not equal (Blake 1989), the next generation will have a different IQ distribution.
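To make the arithmetic Weiss is gesturing at explicit, here is a minimal sketch of the hereditarian model’s own bookkeeping (which I do not endorse). The stratum means, population shares, and fertility figures are illustrative assumptions of mine, not Weiss’s, and the generation-length effect of earlier births is ignored:

```python
import numpy as np

# Toy version of the "dysgenic fertility" bookkeeping, on the hereditarian
# model's own (contested) assumption that a transmissible "genotypic IQ" exists.
# All numbers below are illustrative assumptions, not estimates.
strata_giq = np.array([92.5, 107.5])        # assumed mean "genotypic IQ" of two strata
strata_share = np.array([0.5, 0.5])         # parental population shares
children_per_parent = np.array([2.2, 1.8])  # assumed differential fertility

parent_mean = np.sum(strata_share * strata_giq)   # 100.0 under these numbers

weights = strata_share * children_per_parent      # offspring are weighted by fertility
weights = weights / weights.sum()
offspring_mean = np.sum(weights * strata_giq)     # ~99.25 under these numbers

print(f"parental mean: {parent_mean:.2f}, offspring mean: {offspring_mean:.2f}")
```

The point of the sketch is only that the entire “decline” is an artifact of the model’s premises: grant a fertility differential and a transmissible “genotypic IQ” and the weighted mean must fall; reject the premise, as I do, and the calculation establishes nothing.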

Quite obviously, the hereditarian claim of GIQ is that some individuals—and of course groups—are genetically more intelligent than others. Nevertheless, a woman “with a low IQ” doesn’t have a low IQ due to genetics; if we think about the nature of IQ and the types of items on the test, we come to the conclusion that these tests aren’t a test of one’s genetic potential for learning ability (as many have claimed), but merely reflect what one has been exposed to and learned.

The concept of GIQ has also been used to attempt to show that the genes claimed to be associated with IQ have been in decline. Cretan (2016)—in a paper titled “Was Cro-Magnon the Most Intelligent Modern Human?“—tries to argue that GIQ has decreased since Neolithic times, and that the decrease in height and brain size since then is expected, since the two are moderately correlated. However, the claimed brain-size trend seems to be an artifact (Deacon, 1990a, 1990b). Cretan (2016: 158-159) writes:

“Genotypic” intelligence changes across millennia because the genetic variants, or alleles, that enable people to develop higher intelligence change their frequencies due to mutation and selection. Evolution by mutation and selection implies that at a certain selection pressure favoring higher intelligence, the genotypic intelligence of a population remains constant. At selection pressures below this break-even point, intelligence will decrease; at higher selection pressure, intelligence will increase. In the complete absence of selection, genotypic IQ will not remain constant.

As we can see, this concept of GIQ and the claimed decrease in it has been sounding hereditarian alarm bells for decades. People like Lynn and Jensen pushed eugenic ideals on the basis of low-intelligence people having more children, advocating a negative eugenic practice of preventing people with low IQs from having children. Jensen, in his infamous 1969 paper, was pretty much explicit about these aims, and then in 1970 he stated that heritability can tell us one’s genetic standing when it comes to intelligence. Richard Lynn, in his review of Cattell’s Beyondism, called for “realistically phasing out” certain populations while insisting that this was not genocide:

“Is there a danger that current welfare policies, unaided by eugenic foresight, could lead to the genetic enslavement of a substantial segment of our population?” – Jensen, 1969: 95, How Much Can We Boost IQ and Scholastic Achievement?

“What the evidence on heritability tells us is that we can, in fact, estimate a person’s genetic standing on intelligence from his score on an IQ test.” – Jensen, 1970, Can We and Should We Study Race Difference?

“What is called for here is not genocide, the killing off of the populations of incompetent cultures. But we do need to think realistically in terms of “phasing out” of such peoples.” [Lynn]

This is an example of negative eugenics—preventing those who were thought to have undesirable traits from breeding. William Shockley—who was Arthur Jensen’s inspiration—talked about paying people to undergo sterilization. This was called the voluntary sterilization bonus plan:

Shockley is proposing varying bonuses to anyone with an IQ under 100 who agrees to be sterilized upon reaching child-bearing age. He would pay volunteers $1,000 for every IQ point below 100, with “$30,000 put into a trust fund for a 70-IQ moron, potentially capable of producing 20 children.”

Under the plan, bonuses would also go to potential parents based on the “best scientific estimates” of their having such “genetically carried disabilities as hemophilia, sickle cell anemia, epilepsy, Huntington’s chorea and so on,” with taxpayers getting no money to participate.
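The $30,000 figure in the quote is just the plan’s own per-point rate applied to its own example; the rate comes from the quote, the rest is multiplication:

$$ \$1{,}000 \times (100 - 70) = \$30{,}000 $$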

This is another example of negative eugenics, but there is of course also positive eugenics—encouraging those with desired traits to have more children. In his article Bright New World, Moen (2016) discusses this kind of positive eugenics, while endorsing the claim of GIQ. Moen proposed that women should be paid modest sums of cash to have children with high IQ sperm donors, not their husbands:

Here I would like to suggest an alternative way to raise global IQ: giving prospective mothers modest monetary incentives to have children that genetically belong not to their husbands (or to ordinary sperm donors) but to high-IQ sperm donors.

These are the kinds of views and ultimate consequences that derive from thinking there is such a thing as GIQ. Since we know that IQ can’t be genetic, there can be no GIQ. And if there is no GIQ, then proposals like the negative and positive eugenic ideas I just cited would merely be getting rid of people who are not deemed socially desirable—mainly the lower class, along with blacks, since they are more likely to be lower class and to have lower IQs (due to knowledge exposure and differential access to cultural and psychological tools). This concept of GIQ has, since the advent of IQ tests in America, been used to sterilize people in the name of eugenics. The moral wrongness of eugenics is reason enough to reject this concept, never mind the irreducibility arguments. Eugenic policies discriminate against people based on arbitrary criteria and violate their reproductive rights.

Arguments against GIQ

Now that I have described what GIQ is and how it has been used in the past in the name of eugenics, here are a few arguments to invalidate the concept.

P1: If IQ is solely determined by one’s genetic makeup, then IQ scores should remain stable throughout one’s lifetime.
P2: IQ scores do not remain stable throughout one’s lifetime.
C: Thus, IQ is not solely determined by one’s genetic makeup.


P1: If IQ is solely determined by genetics, then individuals with high IQ parents should also have high IQ scores.
P2: If individuals with high IQ parents also have high IQ scores, then adoption should not affect their IQ scores.
P3: Adoption does affect the IQ scores of individuals with high IQ parents.
C: Thus, IQ is not solely determined by genetics.

This argument contradicts the main claim of GIQ, since adoption has been shown to raise IQ (see Capron and Duyme, 1989; Locurto, 1990; Flynn, 1993; Duyme, Dumaret, and Tomkiewicz, 1999; Kendler et al, 2015; see Nisbett et al, 2012 for a review).


P1: If the concept of GIQ were true, then one’s IQ would be determined by their genetics.
P2: Genes don’t determine traits, never mind psychological ones.
C: Therefore, the concept of GIQ is false.


P1: If psychological traits are reducible to genetics, then environment plays no role in shaping IQ and the concept of GIQ is true.
P2: The environment plays a significant role in shaping IQ, as adoption studies show.
C: Therefore psychological traits are not reducible to genetics and the concept of GIQ is false.

And

P1: If psychological traits are irreducible, then the concept of GIQ is false.
P2: Psychological traits are irreducible.
C: Therefore, the concept of GIQ is false.

Both of these arguments draw on the irreducibility-of-the-mental arguments I’ve been making for years. If the mental is irreducible to the physical, then the concept of GIQ can’t possibly be true.


P1: Either the concept of GIQ is true and implies that IQ is determined by genes alone, or the concept of GIQ is false and other factors other than genes contribute to IQ.
P2: If the concept of GIQ is true and implies genetic determinism, then it ignores the significant impact that environmental factors have on IQ and may perpetuate discrimination against those with low IQ.
P3: If the concept of GIQ is false and other factors other than genes contribute to IQ, then efforts should be focused on addressing these other factors rather than assuming that genes are the sole determinant of IQ.
C: Thus, either the concept of GIQ perpetuates discriminatory attitudes if true, or it distracts from addressing the true determinants of IQ if false.

P1 is logically true, while P2 and P3 are supported by scientific evidence, so the argument is plausible.


The concept of GIQ assumes that IQ is largely determined by genetics, and that individuals have different genetic potentials for IQ. But there is no clear, consistent definition of intelligence, and the factors that contribute to IQ are complex and multifaceted. So any attempt to reduce one’s IQ to their genes, or to make predictions about one’s IQ from their genes alone, is inherently flawed and oversimplified. Thus, the concept of GIQ is not a valid or useful way of understanding intelligence, and attempts to use it to make policy or social decisions would be misguided. This argument challenges the concept of GIQ on the ground that there is no accepted definition of intelligence, and that is more than enough to discount the concept entirely.

Conclusion

I have described the concept of GIQ that many hereditarians in the literature have espoused. It is described as one’s genetic potential for IQ sans environmental insults, and the usual suspects are arguing for it. As can be seen historically, however, this concept has led to destructive consequences for groups and individuals who are deemed less intelligent. It has been argued that those with low IQs should not have children—whether by paying people to be sterilized and forgo children, or by paying prospective mothers to have children not with their husbands but with high-IQ sperm donors. Eugenics is morally wrong, so we should not do that, never mind the fact that genes don’t work how hereditarians need them to. Nevertheless, I have given a few arguments that the concept of GIQ is misleading at best and socially destructive at worst. This is yet another reason why we should ban IQ tests.

Thus, the concept of GIQ is merely false eugenic nonsense.

Prenatal Testing to Screen for Diseases is Eugenic: The Eugenic Nature of Prenatal Testing

2350 words

Introduction

The concept of eugenics has a long history. Back in 2018, I surveyed the history of eugenics from antiquity to the modern day in different countries. It seems that the Greeks were the first to employ the concept: both Aristotle and Plato wanted the state to be in charge of the birthing process, which is a classical definition of eugenics. People have even been sterilized in recent history, as recently as 20 years ago in California.

After the defeat of the Nazis in WW2, though, such eugenic ideas never left; they merely changed form. We are in the new millennium, and we have new technologies that may allow us to screen for certain diseases and terminate them early on in the process. In this article, I will argue that using such technologies to prevent the births of such people is eugenic. I will give a few arguments and then connect them.

The “new eugenics”, same as the old eugenics

“New eugenics” refers to the use of advanced genetic technologies to improve or enhance the genetic traits of humans, or to selectively breed humans with desired traits while discouraging or preventing the reproduction of those with undesired traits. This tracks with “classical eugenics”, a socio-political movement of the late 19th and early 20th centuries which aimed at improving the human gene pool by encouraging the selective breeding of those with desirable traits while discouraging or preventing the reproduction of those with undesired traits, through coercive means such as the forced sterilization and euthanasia of individuals with undesired traits like mental illness, physical disabilities, or criminal tendencies. So, as can be seen, the old and new eugenics both involve the same basic practice of selectively breeding humans based on their genetic traits. Thus, both forms of eugenics are reductive in nature.

Both kinds of eugenics are morally wrong. By “morally wrong” I mean that it is not in accordance with accepted ethical principles and values. So calling eugenics “morally wrong” indicates that it is ethically unacceptable to most people, since it goes against the fundamental principles of human dignity, social justice, and human autonomy.

It’s a violation of human dignity and autonomy (Zaluski, 2010), since it makes decisions about a person’s life and reproductive choices based on their genetic makeup rather than their own desires and preferences. It can also stigmatize certain groups and perpetuate existing socio-economic inequalities by reinforcing the dominance of some groups while marginalizing others. This can result in further stigmatization of and discrimination against certain groups based on their perceived genetic traits, which would then lead to a loss of social cohesion and a decrease in societal well-being. Selective breeding can also lead to a loss of genetic diversity in humans, which could have further negative effects on our species’ capacity for long-term survival and adaptation. And there are additional concerns with the new eugenics, like gene editing and PGD: there could be unintended, unforeseen consequences and side effects, and new forms of inequality and discrimination could emerge.

So here is the argument that eugenics is morally wrong.

P1: If a practice involves the selective breeding of humans based on their genetic traits, it is permissible only if it respects the autonomy and dignity of all individuals involved.
P2: Eugenics involves the selective breeding of humans based on their genetic traits.
P3: Eugenics does not respect the autonomy and dignity of all individuals involved.
C: Therefore, eugenics is morally wrong.

Premise 1 can be defended by the idea that every human has inherent value and deserves to be treated with respect and dignity regardless of their genetic makeup. Premise 2 is an accepted feature of both the old and the new eugenics. Premise 3 can be supported on the basis that eugenic practices involve the imposition of genetic traits on individuals without their consent, and could also lead to the stigmatization and marginalization of those with so-called undesired genetic traits, which would violate the fundamental ethical principles of human dignity and autonomy. So from (1), (2), and (3), the conclusion follows that eugenics is morally wrong, since it involves the selective breeding of humans based on their genetic traits while failing to respect the autonomy and dignity of all individuals involved.

Eugenics won’t work because genetic reductionism is false

Genetic reductionism is the view that genes are the primary determinants of human traits—that complex traits and behaviors can be reduced to and explained by genetic and biological factors, while non-genetic and environmental factors are insignificant. In the eugenic view—and in the view of most people—traits are primarily genetically caused, and by using genetic engineering and similar new-age tools, we can guide our evolution and prune out the genes that lead to undesired traits and, in effect, the people who carry them. However, genetic reductionism is false. It is false because no developmental resource, genes included, has a privileged causal role in development (Noble, 2012). It then follows that eugenics can’t work, since eugenics is genetically reductionistic and genetic reductionism is false; the practice of eugenics will not deliver what it promises and may lead to unintended consequences. Here’s the formalized argument:

P1: If eugenics is based on the assumption that genetic traits are the primary determinants of human traits, then eugenics is genetically reductionistic.
P2: Eugenics is based on the assumption that genetic traits are the primary determinants of human traits.
P3: Genetic reductionism is false.
P4: If a practice is genetically reductionistic and genetic reductionism is false, then that practice cannot work as intended.
C: Therefore, eugenics cannot work.

Just as eugenics is genetically reductionistic, so is hereditarianism, and that’s also why hereditarianism cannot work. Many hereditarians, like Lynn, Jensen, Shockley, and Cattell, held eugenic views (as did Murray and Herrnstein, though they were much more careful with their language, even if the underlying ideas are the same), and they are, of course, genetic reductionists. It is, after all, with the advent of IQ tests that eugenics had its start in America, and that’s one of the reasons why IQ tests should be banned, since they can lead—and have led—to morally wrong policies.

New genetic technologies are eugenic

I gave a pro- and an anti-argument for the use of preimplantation genetic diagnosis (PGD) back in 2018. PGD is a procedure which allows parents to screen embryos for genetic abnormalities before implantation during IVF. This process is often based on the desire to avoid certain traits or to select for certain desirable traits. As I argued above, the new boss is the same as the old boss—the new eugenics has similar end-goals to the old eugenics. PGD doesn’t involve coercion or forced sterilization like the old eugenics, yet its intended goals are similar: creating “genetically better” people by selecting for certain genes while avoiding others, under the assumption of genetic causation of socially desired and undesired traits. This can lead to the homogenization of our species, since people with certain traits could become more common while those without them become rarer. It can also lead to discrimination against those who do not have the desired traits. Thus, PGD is a form of new eugenics, and it is eugenic because it has the same end-goals as the old eugenics.

P1: If PGD isn’t a form of new eugenics, then it does not involve a selective breeding process based on genetic traits that can lead to a homogenization of the human population and discrimination against those who do not possess the desired traits.
P2: PGD does involve a selective breeding process based on genetic traits that can lead to a homogenization of the human population and discrimination against those who do not possess the desired traits.
C: Therefore, PGD is a form of new eugenics.

I have already provided an argument which establishes that eugenics is morally wrong. Now here are a few more arguments which establish prenatal testing, like PGD, as a eugenic practice.

P1: If prenatal testing is used to screen for diseases to abort babies, then it is selectively terminating those with undesirable genetic traits.
P2: If selective termination of those with undesirable genetic traits is practiced, then it is a eugenic practice.
C: Thus, if prenatal testing is used to screen for diseases to abort babies, then it is a eugenic practice.


P1: If prenatal testing is not a eugenic practice, then it is not selectively terminating those with undesirable genetic traits.
P2: Prenatal testing is selectively terminating those with undesirable genetic traits.
C: Therefore prenatal testing is a eugenic practice.


P1: If a practice is eugenic, then it involves the selective breeding or termination of individuals with undesirable genetic traits.
P2: Prenatal testing involves the selective termination of individuals with undesirable genetic traits.
C: Therefore, prenatal testing is a eugenic practice.

As can be seen, the new eugenics is essentially the same as the old eugenics, and the goals they share are very similar. The only distinction between old and new eugenics is that with the new eugenics there is no state coercion behind the use of the new genetic technologies to screen for undesired traits like diseases. In this regard, the technology is used negatively, though there is the chance that it will be used positively. By “negative” and “positive” I’m referring to negative and positive eugenics.

Now, I can connect the arguments I’ve made and argue that eugenics is morally wrong and that it rests on the false premise of genetic reductionism.

P1: If prenatal testing is used to screen for diseases to abort babies, then it is a eugenic practice.
P2: If selective breeding or termination of individuals with undesirable genetic traits is a eugenic practice, then eugenics is based on the false premise of genetic reductionism.
P3: Eugenics that is based on the false premise of genetic reductionism ignores the complex interplay between genetics, environmental factors and other developmental resources and fails to fully appreciate the inherent worth and value of every human being.
C: Therefore, using prenatal testing to screen for diseases to abort babies is a form of eugenics that is based on the false premise of genetic reductionism and is morally wrong.

IQ, embryo selection and PGS

While we have already begun to implement such tools and methods in the public, a recent study concluded that testing embryos for complex traits like height and IQ is “premature”, with the expected gain from selecting the top-scoring embryo by PGS being approximately 2.5 cm in height and 2.5 IQ points (Karavani et al, 2019). But these values were derived from PGS, which were derived from GWAS, so they are just based on correlation. Most authors of course assume that “intelligence” is “highly polygenic”, but they need not only correlation, they need a mechanism (Munday and Savulescu, 2021). Unfortunately, the eugenic dream of IQ-ists to increase IQ through these methods won’t work. Since one’s IQ is a function of the kinds of psychological and cultural tools one is exposed to from birth, and the items on the test are biased towards a certain social class, there are known ways to increase IQ that have nothing to do with the genetically reductionist GWAS/PGS/PGD pipe dream. The argument can be made like this:

P1: If the expected gain from embryo screening for a trait is not significant, then there is no significant case for using preimplantation genetic diagnosis to select embryos for that trait.
P2: The gain due to embryo screening for height and cognitive ability is small, with an average gain of only ≈2.5 cm for height and ≈2.5 IQ points for cognitive ability.
C: Therefore, there is no significant case for using preimplantation genetic diagnosis to select embryos for implantation based on height or cognitive ability.

Of course, even if the so-called gains were significant and the PGS were causal, it still wouldn’t follow that we should use PGD to select for those traits. A simulation sketch of how small the expected gains are, even granting the hereditarians’ own assumptions, follows below.
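Here is a minimal simulation sketch of top-scoring-embryo selection. The number of embryos, the variance explained by the score, and the within-family split are my assumptions (chosen to be roughly in the range Karavani et al discuss), not values from their paper, and the whole exercise grants the additive-genetic model only for the sake of argument:

```python
import numpy as np

rng = np.random.default_rng(0)

N_FAMILIES = 100_000  # simulated IVF cycles
N_EMBRYOS = 5         # assumed viable embryos per cycle
R2_PGS = 0.05         # assumed phenotypic variance explained by the IQ score
SD_IQ = 15            # IQ points per phenotypic standard deviation

# Grant the additive model for the sake of argument: phenotype = score + residual,
# with roughly half of the score variance falling within families (between embryos).
score = (rng.normal(0.0, np.sqrt(R2_PGS / 2), (N_FAMILIES, 1))             # shared (between-family) part
         + rng.normal(0.0, np.sqrt(R2_PGS / 2), (N_FAMILIES, N_EMBRYOS)))  # embryo-specific part
phenotype = score + rng.normal(0.0, np.sqrt(1 - R2_PGS), (N_FAMILIES, N_EMBRYOS))

top_pick = phenotype[np.arange(N_FAMILIES), score.argmax(axis=1)]  # implant the top-scoring embryo
random_pick = phenotype[:, 0]                                      # versus picking at random

gain = (top_pick - random_pick).mean() * SD_IQ
print(f"expected gain from selecting on the score: {gain:.1f} IQ points")
```

Under these assumptions the expected gain comes out at roughly 2 to 3 IQ points, which is the order of magnitude Karavani et al report—and that is before questioning whether the score captures anything causal at all.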

Conclusion

It has been said that common arguments against genetic reductionism rest on a strong version of genetic reductionism/determinism, and that the arguments “are therefore unsound” (Resnik and Vorhaus, 2006). But the kinds of arguments, assumptions, and considerations in this discussion of genetic modification and PGD also presuppose some form of genetic determinism about traits.

At the end of the day, methods like PGD can lead to the destruction of embryos on the basis of their genetic constitution. Eugenic selection could also have unintended consequences in the future, since genetic variance could be reduced, which would impinge on people’s ability to choose a partner and so limit the pool of partners available to future people. Irrespective of the moral arguments made here, I think the open-future argument makes the best case against genetic modification of humans. This is yet again an argument from human autonomy. Not only would we be impinging on individual autonomy, we don’t even know what kinds of traits could be desirable from a survival point of view in the future. So that’s another reason not to genetically modify embryos or to select certain embryos over others.

P1: Future people have a moral right to choose (or not) the characteristics of their own genome.
P2: Genetic modification of an embryo involves making choices about the characteristics of the future person’s genome.
C: Therefore genetic modification of an embryo is morally impermissible since it violates the moral right of the future person to choose (or not choose) the characteristics of their own genome.

While genetic reductionism is a form of biological determinism, there is also what is called epigenetic determinism. Any attempt to reduce traits to deterministic proclivities of this kind is false. Nevertheless, I have distinguished between the old and the new eugenics, and shown that the only difference between them is that in the new eugenics there is no state-sponsored coercion or forced sterilization occurring. (Although that, sadly, still happens today.) Since genetic reductionism is false, any attempt to “defend eugenics” (Anomaly, 2018; Wilson, 2019; Veit et al, 2021) is doomed to fail. And genetic engineering “is objectionable because it represents a bid for mastery and dominion that fails to appreciate the gifted character of human powers and achievements” (Sandel, 2007).