
Hypertension, Brain Volume, and Race: Hypotheses, Predictions and Actionable Strategies

2300 words

Introduction

Hypertension (HT, also known as high blood pressure, BP) was long defined as a BP of 140/90 mmHg or higher. More recently, the guidelines were changed so that HT is defined as a BP over 130/80 (Carey et al, 2022; Iqbal and Jamal, 2022). One 2019 study showed that in a sample with an age range of 20-79, 24 percent of men and 23 percent of women could be classified as hypertensive based on the old guidelines (140/90) (Deguire et al, 2019). Having consistently high BP can lead to devastating consequences like (from the patient's perspective) hot flushes, dizziness, and mood disorders (Goodhart, 2016). However, one serious problem with HT is that consistently high BP is associated with a decrease in brain volume (BV). This has been seen in several systematic reviews and meta-analyses (Alosco et al, 2013; Beauchet et al, 2013; Lane et al, 2019; Alateeq, Walsh and Cherbuin, 2021; Newby et al, 2022), and we know that long-standing hypertension has deleterious effects on brain health (Salerno et al, 1992). However, it's not only high BP that's related to this; lower BP in conjunction with lower pulse pressure is as well (Muller et al, 2010; Foster-Dingley, 2015). So what this says to me is that too much or too little blood flow to the brain is deleterious for brain health. I will state the hypothesis and then the predictions that follow from it. I will then provide three reasons why I think this relationship occurs.
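To make the thresholds above concrete, here is a minimal, purely illustrative Python sketch (not taken from any of the cited papers) that checks a single reading against the old and newer cut-offs; the function name and the example reading are hypothetical:

```python
# Illustrative only: classify a blood pressure reading against the old (140/90)
# and newer (130/80) hypertension cut-offs discussed above. The thresholds are
# the only "data" here; the helper and the example reading are hypothetical.

def is_hypertensive(systolic: int, diastolic: int, guideline: str = "new") -> bool:
    """Return True if the reading meets the hypertension cut-off for the chosen guideline."""
    if guideline == "old":       # older definition: 140/90 or higher
        return systolic >= 140 or diastolic >= 90
    if guideline == "new":       # newer, lower definition: 130/80 or higher
        return systolic >= 130 or diastolic >= 80
    raise ValueError("guideline must be 'old' or 'new'")

if __name__ == "__main__":
    reading = (135, 85)  # a reading that is hypertensive only under the newer cut-off
    print("old guideline:", is_hypertensive(*reading, guideline="old"))  # False
    print("new guideline:", is_hypertensive(*reading, guideline="new"))  # True
```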

The hypothesis

The hypothesis is simple: high BP (hypertension, HT) is associated with a reduced brain volume. This relationship is dose-dependent, meaning that the extent and duration of HT correlates with the degree of BV changes. So the hypothesis suggests that there is a relationship—an association—between HT and brain volume, where people with HT will be more likely to have decreased BVs than those who lack HT—that is, those with BP in the normal range.

A dose-dependent relationship has been observed (Alateeq, Walsh and Cherbuin, 2021), and this shows that as HT increases and persists over time, the effects on BV become more pronounced. This relationship suggests that it's not a binary, present-or-absent situation, but that it varies across a continuum. So people with shorter-lasting HT will show smaller effects than those with constant, consistently elevated BP, who will in turn show greater decreases in BV. This dose-dependent relationship also suggests that as BP continues to elevate, the decrease in BV will worsen.

This dose-dependent relationship implies a few things. The consequences of HT for BV aren't binary (either-or), but are related to the severity of HT, how long one has had HT, and at what age one developed HT, and they vary on a continuum. For instance, people with mild or short-lasting HT would experience smaller reductions in BV than those who have severe or long-standing HT. The dose-dependent relationship also suggests that the longer one has HT without treatment, the worse the reduction in BV will be. So as BP continues to elevate uncontrolled, it may lead to a gradual reduction in BV. The relationship between HT and BV isn't uniform; it varies based on the intensity and duration of high BP.

So the hypothesis suggests that HT isn't just a risk factor for cardiovascular disease; it's also a risk factor for decreased BV. This seems intuitive, since the higher one's BP, the more likely it is that there are the beginnings of a blockage somewhere in the intricate system of blood vessels in the body. And since the brain is a vascular organ, decreasing the amount of blood flowing to it would lead to cell death and white matter lesions, which would lead to a smaller BV. One newer study, with a sample of Asians, whites, blacks, and "Latinos", showed that, compared to those with normal BP, those who were transitioning to higher BP or already had higher BP had lower brain connectivity and decreased cerebral gray matter and frontal cortex volume, and this change was worse for men (George et al, 2023). Shang et al (2021) showed that HT diagnosed in early and middle life, but not late life, was associated with decreased BV and increased risk of dementia. This, of course, is due to the slow, cumulative effects of HT on the brain. And Power et al (2016) found that "[t]he pattern of hypertension ~15 years prior and hypotension concurrent with neuroimaging was associated with smaller volumes in regions preferentially affected by Alzheimer's disease." But not only is BP relevant here, so is the variability of BP at night (Gutteridge et al, 2022; Yu et al, 2022). Alateeq, Walsh and Cherbuin (2021) conclude that:

Although reviews have been previously published in this area, they only investigated the effects of hypertension on brain volume [86]. To the best of our knowledge, this study’s the first systematic review with meta-analysis providing quantitative evidence on the negative association between continuous BP and global and regional brain volumes. Our results suggest that heightened BP across its whole range is associated with poorer cerebral health which may place individuals at increased risk of premature cognitive decline and dementia. It is therefore important that more prevention efforts be directed at younger populations with a greater focus on achieving optimal BP rather than remaining below clinical or pre-clinical thresholds[5].

One would think that a high BP would actually increase blood flow to the brain, but HT actually causes alterations in the flow of blood to the brain, which leads to ischaemia and causes the blood-brain barrier to break down (Pires et al, 2013). Essentially, HT has devastating effects on the brain which could lead to dementia and Alzheimer's (Iadecola and Davisson, 2009).

So the association between HT and decreased BV means that individuals with HT can experience alterations in BV in comparison to those with normal BP. The hypothesis also suggests that there are several mechanisms (detailed below), which may lead to various physiological and anatomic changes in the brain, such as vascular damage, inflammation and tissue atrophy.

The mechanisms

(1) High BP can damage blood vessels in the brain, which leads to reduced blood flow. This is called "cerebral hypoperfusion." The reduced blood flow can deprive the cells in the brain of oxygen and nutrients, which causes them to shrink or die, which leads to decreased brain volume (BV). Over time, high BP can also damage the arteries, making them less elastic.

(2) Having high BP over a long period of time can cause hypertensive encephalopathy, which is basically brain swelling. A rapid increase in BP could increase BV over the short term, but left untreated it could lead to brain damage and atrophy over time.

And (3) chronically high BP can lead to the creation of white matter lesions in the brain. These lesions are areas of damaged brain tissue which result from the microvascular changes caused by high BP, and they increase with BP severity. Thus, over time, the accumulation of white matter lesions, which are associated with cognitive changes, could lead to a decrease in BV.

So we have (1) cerebral hypoperfusion, (2) hypertensive encephalopathy, and (3) white matter lesions. I need to think/read more on which of these could lead to decreased BV, or if they all actually work together to decrease BV. We know that HT damages blood vessels, and of course there are blood vessels in the brain, so it then follows that HT would decrease BV.

I can also detail a step-by-step mechanism. The process begins with consistently elevated BP, which could be due to various factors like genetics, diet/lifestyle, and underlying medical conditions. High BP then places increased strain on the blood vessels in the body, including those in the brain. This higher pressure could then lead to structural changes in the blood vessels over time. Then, chronic HT can lead to endothelial dysfunction, which could impair the ability of blood vessels to regulate blood flow and maintain vessel integrity. The dysfunction can result in oxidative stress and inflammation.

Then, as a response to prolonged elevated BP, blood vessels in the brain could undergo vascular remodeling, which involves changes in blood vessel structure and thickness, which can then affect blood flow dynamics. Furthermore, in some cases, this could lead to something called cerebral small vessel disease, which involves damage to the small blood vessels in the brain, including capillaries and arterioles. This could impair delivery of oxygen and nutrients to brain tissue, which could lead to cell death and consequently a decrease in BV. Then reduced blood flow, along with compromised blood vessel integrity, could lead to cerebral ischaemia—reduced blood supply—and hypoxia—reduced oxygen supply—in certain parts of the brain. This can then result in neural damage and eventually cell death.

Then HT-related vascular changes and cerebral small vessel disease can trigger brain inflammation. Prolonged exposure to neural inflammation, hypoxia and ischemia can lead to neuronal atrophy, where neurons shrink and lose their functional integrity. HT can also increase the incidence of white matter lesions, which can be seen on neuroimaging and involve areas of damaged white matter tissue. Finally, over time, the cumulative effects of the aforementioned processes—vascular changes, inflammation, neural atrophy, and white matter changes—could lead to a decrease in BV. This reduction can manifest as brain atrophy, which is then observed in parts of the brain which are susceptible and vulnerable to the effects of HT.

So the step-by-step mechanism goes like this: elevated BP —> increased vascular strain —> endothelial dysfunction —> vascular remodeling —> cerebral small vessel disease —> ischemia and hypoxia —> inflammation and neuroinflammation —> neuronal atrophy —> white matter changes —> reduction in BV.

Hypotheses and predictions

H1: The severity of HT directly correlates with the extent of BV reduction. One prediction would be that people with more severe HT would exhibit greater BV decreases than those with moderate (less severe) HT, which is where the dose-dependent relationship comes in.

H2: The duration of HT is a critical factor in BV reduction. One prediction would be that people with long-standing HT will show more significant BV changes than those with recent onset HT.

H3: Effective BP management can mitigate BV reduction in people with HT. One prediction would be that people with more controlled HT would show less significant BV reduction than those with uncontrolled HT.

H4: Certain subpopulations may be more susceptible to BV decreases due to HT. One prediction is that the degree of BV reduction would be modulated by factors like age of onset (HT at a younger age), genetic factors (some may have certain gene variants that make them more susceptible and vulnerable to damage caused by elevated BP), comorbidities (people with diabetes, obesity and heart problems could be at higher risk of decreased BV due to the interaction of these factors), and ethnic/racial factors (some populations—like blacks—could be at higher risk of having HT and could be more at risk due to experiencing disparities in healthcare and treatment).

The hypotheses and predictions generated from the main proposition that HT is associated with a reduction in BV and that the relationship is dose-dependent can be considered risky, novel predictions. They are risky in the sense that they are testable and falsifiable. Thus, if the predictions don’t hold, then it could falsify the initial hypothesis.
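As a rough illustration of how H1 and H2 could in principle be tested, here is a minimal Python sketch using entirely synthetic data and invented effect sizes (no real study values): it regresses brain volume on BP severity and HT duration and checks whether both slopes come out negative, as the dose-dependent hypothesis predicts.

```python
# Purely illustrative: regress brain volume on BP severity and HT duration and
# check the signs of the slopes. The data are synthetic and the effect sizes
# are invented for demonstration only.
import numpy as np

rng = np.random.default_rng(0)
n = 500
severity = rng.uniform(0, 40, n)   # hypothetical mmHg above a normal systolic BP
duration = rng.uniform(0, 30, n)   # hypothetical years of uncontrolled HT

# Hypothetical "true" model: higher severity and longer duration both reduce
# brain volume (arbitrary units), plus noise.
brain_volume = 1200 - 0.8 * severity - 1.5 * duration + rng.normal(0, 20, n)

# Ordinary least squares fit.
X = np.column_stack([np.ones(n), severity, duration])
coef, *_ = np.linalg.lstsq(X, brain_volume, rcond=None)
intercept, b_severity, b_duration = coef

# H1 predicts b_severity < 0; H2 predicts b_duration < 0.
print(f"severity slope: {b_severity:.2f} (H1 predicts negative)")
print(f"duration slope: {b_duration:.2f} (H2 predicts negative)")
```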

Blacks and blood pressure

For populations like black Americans, this is significant. About 33 percent of blacks have hypertension (Peters, Arojan, and Flack, 2006), while urban blacks are more likely to have elevated BP than whites (Lindhorst et al, 2007). Though Non, Gravlee, and Mulligan (2012) showed that racial differences in education—not genetic ancestry—explained differences in BP in blacks compared to whites. Further, Victor et al (2018) showed that in black male barbershop attendees who had uncontrolled BP, health outreach along with medication led to a decrease in BP. Williams (1992) cited stress, socioecologic stress, social support, coping patterns, health behavior, sodium, calcium, and potassium consumption, alcohol consumption, and obesity as social factors which lead to increased BP.

Moreover, consistent with the hypothesis discussed here (that chronic elevated BP leads to reductions in BV which lead to a higher chance of dementia and Alzheimer's), it's been shown that vulnerability to HT is a major determinant of the risk of acquiring Alzheimer's (Clark et al, 2020; Akushevic et al, 2022). It has also been shown that "a lifetime of racism makes Alzheimer's more common in black Americans", and, consistent with the discussion here, since racism is associated with stress which is associated with elevated BP, consistent events of racial discrimination would lead to consistently elevated BP, which would then lead to decreased BV and then a higher chance of acquiring Alzheimer's. But there is evidence that blood pressure drugs (in this case telmisartan) reduce the incidence of Alzheimer's in black Americans (Zhang et al, 2022), while a similar result was also seen using antihypertensive medications in blacks, which led to a reduction in the incidence of dementia (Murray et al, 2018); this lends credence to the discussed hypothesis. Stress and poverty—experiences—and not ancestry could explain higher rates of dementia in black Americans as well. Thus, since blood pressure could explain higher rates of dementia in black populations, this lends further credence to the discussed hypothesis.

Conclusion

The evidence that chronic elevated BP leads to reductions in BV is well-established and the mechanisms are well-known. I discussed the hypothesis that chronically elevated BP leads to reduced blood flow to the brain, which decreases BV. I then discussed the mechanisms behind the relationship, and then the hypotheses and predictions that follow from them. Lastly, I discussed the well-known fact that blacks have higher rates of high BP, and also higher rates of dementia and Alzheimer's, and linked their higher rates of high BP to those maladies.

So by catching chronically elevated BP at early ages (since the earlier one has high BP, the more likely one is to have reduced brain volume and the associated maladies), we can begin to fight the associated issues before they coalesce, since we know the mechanisms behind them, along with the fact that blood pressure drugs and antihypertensive medications decrease the incidence of dementia and Alzheimer's in black Americans.

Race, Brain Size, and “Intelligence”: A Critical View

5250 words

"the study of the brains of human races would lose most of its interest and utility" if variation in size counted for nothing ([Broca] 1861, p. 141). (Quoted in Gould, 1996: 115)

The law is: small brain, little achievement; great brain, great achievement (Ridpath, 1891: 571)

I can’t hope to give as good a review as Gould’s review in Mismeasure of Man on the history of skull measuring, but I will try to show that hereditarians are mistaken in their brain size-IQ correlations and racial differences in brain size as a whole.

The claim that brain size is causal for differences in intelligence is not new. Although over the last few hundred years there have been back-and-forth arguments on this issue, it is generally believed that there are racial differences in brain size and that this racial difference in brain size accounted for civilizational accomplishments, among other things. Notions from Samuel Morton, which seem to have been revived by Rushton in the 80s while he was formulating his r/K selection theory, show that the racism that was incipient in that time period never left us, even after 1964. Rushton and others merely revived the racist thought of those from the 1800s.

Using MRI scans (Rushton and Ankney, 2009) and measurements of the physical skull, Rushton asserted that differences in brain size and quality between races accounted for differences in IQ. Although Rushton was not alone in this belief, this belief about the relationship between brain weight/structure and intelligence goes back centuries. In this article, I will review studies on racial differences in brain size and see whether Rushton et al's conclusions hold, not only on brain size being causally efficacious for IQ, but also on there being racial differences in brain size at all and on the brain size-IQ correlation.

The Morton debate

Morton's skull collection has received much attention over the years. Gould (1978) first questioned Morton's results on the ranking of skulls. He argued that when the data were properly reinterpreted, "all races have approximately equal capacities." The skulls in Morton's collection were collected from all over. Morton's men even robbed graves to procure skulls for Morton, even going as far as to take "bodies in front of grieving relatives and boiled flesh off fresh corpses" (Fabian, 2010: 178). One man even told Morton that grave robbing gave him a "rascally pleasure" (Fabian, 2010: 15). Indeed, grave robbing seems to have been a common way to procure skulls for these kinds of analyses (Monarrez et al, 2022). Nevertheless, since skulls house brains, the thought is that by measuring skulls we can ascertain the brain of the individual that the skull belonged to. A larger skull would imply a larger brain. And larger brains, it was said, belong to more "intelligent" people. This assumption was one held by the neurologist Broca, and it then justified using brain weight as a measure of intelligence. Though in 1836, the anti-racist Tiedemann (1836) argued that there were no differences in brain size between whites and blacks. (Also see Gould, 1999 for a reanalysis of Tiedemann where he shows C > M > N in brain size, but concludes that the "differences are tiny and probably of no significance in the judgment of intelligence" (p 10).) It is interesting to note that Tiedemann and Morton worked with pretty much the same data, but they came to different conclusions (Gould, 1999; Mitchell, 2018).

In 1981 Gould published his landmark book The Mismeasure of Man (Gould, 1981/1996). In the book, he argued that bias—sometimes unconscious—pervaded science and that Morton’s work on his skull collection was a great example of this type of bias. Gould (1996: 140) listed many reasons why group (race) differences in brain size have never been demonstrated, citing Tobias (1970):

After all, what can be simpler than weighing a brain?—take it out, and put it on the scale. One set of difficulties refers to problems of measurement itself: at what level is the brain severed from the spinal cord; are the meninges removed or not (meninges are the brain’s covering membranes, and the dura mater, or thick outer covering, weighs 50 to 60 grams); how much time elapsed after death; was the brain preserved in any fluid before weighing and, if so, for how long; at what temperature was the brain preserved after death. Most literature does not specify these factors adequately, and studies made by different scientists usually cannot be compared. Even when we can be sure that the same object has been measured in the same way under the same conditions, a second set of biases intervenes—influences upon brain size with no direct tie to the desired properties of intelligence or racial affiliation: sex, body size, age, nutrition, nonnutritional environment, occupation, and cause of death.

Nevertheless, in Mismeasure, Gould argued that Morton had unconscious bias, whereby he packed threateningly large African skulls more loosely while packing distressingly small Caucasian skulls more tightly (Gould made this inference due to the disconnect between Morton's lead shot and seed measurements).

Plausible scenarios are easy to construct. Morton, measuring by seed, picks up a threateningly large black skull, fills it lightly and gives it a few desultory shakes. Next, he takes a distressingly small Caucasian skull, shakes hard, and pushes mightily at the foramen magnum with his thumb. It is easily done, without conscious motivation; expectation is a powerful guide to action. (1996: 97)

Yet through all this juggling, I detect no sign of fraud or conscious manipulation. Morton made no attempt to cover his tracks and I must presume that he was unaware he had left them. He explained all his procedures and published all his raw data. All I can discern is an a priori conviction about racial ranking so powerful that it directed his tabulations along preestablished lines. Yet Morton was widely hailed as the objectivist of his age, the man who would rescue American science from the mire of unsupported speculation. (1996: 101)

But in 2011, a team of researchers tried to argue that Morton did not manipulate data to fit his a priori biases (Lewis et al, 2011). They claimed that "most of Gould's criticisms are poorly supported or falsified." They argued that Morton's measurements were reliable and that Morton really was the scientific objectivist many claimed him to be. Of course, since Gould died in 2002 shortly after publishing his magnum opus The Structure of Evolutionary Theory, he could not defend his arguments against Morton.

However, a few authors have responded to Lewis et al and have defended Gould's conclusions about Morton (Weisberg, 2014; Kaplan, Pigliucci and Banta, 2015; Weisberg and Paul, 2016).

Weisberg (2014) was the first to argue against Lewis et al’s conclusions on Gould. Weisberg argued that while Gould sometimes overstated his case, most of his arguments were sound. Weisberg argued that, contra what Lewis et al claimed, they did not falsify Gould’s claim, which was that the difference between shot and seed measurements showed Morton’s unconscious racial bias. While Weisberg rightly states that Lewis et al uncovered some errors that Gould made, they did not successfully refute two of Gould’s main claims: “that there is evidence that Morton’s seed‐based measurements exhibit racial bias and that there are no significant differences in mean cranial capacities across races in Morton’s collection.”

Weisberg (2014: 177) writes:

There is prima facie evidence of racial bias in Morton's (or his assistant's) seed‐based measurements. This argument is based on Gould's accurate analysis of the difference between the seed‐ and shot‐based measurements of the same crania.

Gould is also correct about two other major issues. First, sexual dimorphism is a very suspicious source of bias in Morton’s reported averages. Since Morton identified most of his sample by sex, this is something that he could have investigated and corrected for. Second, when one takes appropriately weighted grand means of Morton’s data, and excludes obvious sources of bias including sexual dimorphism, then the average cranial capacity of the five racial groups in Morton’s collection is very similar. This was probably the point that Gould cared most about. It has been reinforced by my analysis.

[This is Weisberg’s reanalysis]

So Weisberg successfully defended Gould’s claim that there are no general differences in the races as ascribed by Morton and his contemporaries.
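To illustrate the statistical point at issue here, without using any of Morton's actual numbers, here is a small Python sketch showing how an unweighted average of subgroup means (the kind of figure at issue in Morton's reported averages and, later, in Rushton's use of them) can differ from a grand mean that weights each subgroup by its sample size; the values are invented purely for demonstration:

```python
# Illustrative only: why unweighted averages of subgroup means can mislead.
# The numbers below are invented; they are not Morton's measurements.

# (subgroup mean cranial capacity, number of skulls in the subgroup)
subgroups = [(88.0, 3), (82.0, 30), (85.0, 10)]

# Unweighted average of subgroup means: a tiny subgroup counts as much as a large one.
unweighted = sum(mean for mean, _ in subgroups) / len(subgroups)

# Weighted grand mean: each skull counts once, so small subgroups can't dominate.
total_n = sum(n for _, n in subgroups)
weighted = sum(mean * n for mean, n in subgroups) / total_n

print(f"unweighted average of subgroup means: {unweighted:.2f}")
print(f"sample-size-weighted grand mean:      {weighted:.2f}")
```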

In 2015, another defense of Gould was mounted (Kaplan, Pigliucci and Banta, 2015). Like Weisberg before them, they also state that Gould got some things right and some things wrong, but his main arguments weren't touched by Lewis et al. Kaplan et al stated that while Gould was right to reject Morton's data, he was wrong to believe that "a more appropriate analysis was available." They also argue that, due to the "poor dataset", no legitimate inferences to "natural populations" can be drawn. (See Luchetti, 2022 for a great discussion of Kaplan, Pigliucci and Banta.)

In 2016, Weisberg and Paul argued that Gould assumed that Morton's lead shot method was an objective way to ascertain the cranial capacities of skulls; Gould's argument rested on the differences between lead shot and seed. Then in 2018, Mitchell published a paper in which he discovered lost notes of Morton's and argued that Gould was wrong. He, however, admitted that Gould's strongest argument was untouched—the "measurement issue" (Weisberg and Paul, 2016) was Gould's strongest argument, deemed "perceptive" by Mitchell. In any case, Mitchell showed that the case of Morton isn't one of an objective scientist looking to explain the world sans subjective bias—Morton's a priori biases were strong and strongly influenced his thinking.

Lastly, and ironically, Rushton used Morton's data from Gould's (1978) critique, but didn't seem to understand why Gould wrote the paper, nor why Morton's methodology was highly suspect. Rushton basically took the unweighted average for "Ancient Caucasian" skulls, even though the sex/age of the skulls weren't known. He also—coincidentally I'm sure—increased the "Mongoloid" skull size from 85 to 85.5 cubic inches (Gould's table had it as 85). Amazingly—and totally coincidentally, I'm sure—Rushton miscited Gould's table and basically combined Morton's and Gould's data, increased the "Mongoloid" skull size slightly, and used the unweighted average of "Ancient Caucasian" skulls (Cain and Vanderwolf, 1990). How honest of Rushton. It's ironic how people say that Gould lied about Morton's data and that Gould was a fraud, when in actuality Rushton was the real fraud: he never recanted his r/K theory, and now we can see that he miscited and combined Gould's and Morton's results and made assumptions without valid justification.

The discussion of bias in science is an interesting one. Since science is a social endeavor, there necessarily will be bias inherent in it, especially when studying humans and discussing the causes of certain peculiarities. I would say that Gould was right about Morton and while Gould did make a few mistakes, his main argument against Morton was untouched.

Skull measuring after Morton

The inferiority of blacks and other non-white races has been asserted ever since the European age of discovery. While there were of course two camps at the time—one which argued that blacks were not inferior in intelligence and another that argued they were—the claim that blacks are inferior in intelligence was, and still is, ubiquitous. They argued that smaller heads meant that one was less intelligent, and that if groups had smaller heads then they too were less intelligent than groups that had larger heads. This then was used to argue that blacks hadn't achieved any kind of civilizational accomplishments since they were intellectually inferior due to their smaller brains (Davis, 1869; Campbell, 1891; Hoffman, 1896; Ridpath, 1897; Christison, 1899).

Robert Bean (1906) stated, using cadavers, that his white cadavers had larger frontal and anterior lobes than his black cadavers, and he concluded that blacks were more objective than whites, who were more subjective. However, it seems that Bean did not report one conclusion—that the overall brain sizes of his cadavers showed no difference. Gould (1996: 112) discusses this issue (see Mall, 1909: 8-10, 13; Reuter, 1927). Mall (1909: 32) concluded, "In this study of several anatomical characters said to vary according to race and sex, the evidence advanced has been tested and found wanting."

Franz Boas also didn’t agree with Bean’s analysis:

Furthermore, in "The Anthropological Position of the Negro," which appeared in Van Norden's Magazine a few months later, Boas attempted to refute Bean by arguing that "the anatomical differences" between blacks and whites "are minute," and "no scientific proof that will stand honest proof … would prove the inferiority of the negro race." (Williams, 1996: 20)

In 1912, Boas argued that the skull was plastic, so plastic that changes in skull shape between immigrants and their progeny were seen. His results were disputed (Sparks and Jantz, 2002), though Gravlee, Bernard, and Leonard (2002) argued that Boas was right—the shape of the skull indeed was influenced by environmental factors.

When it comes to sex, brain size, and intelligence, this was discredited by Alice Lee in her 1900 thesis. Lee created a way to estimate the brain size of living subjects, used her method on members of the Anthropological Society, and showed a wide variation, with, of course, overlapping sizes between men and women.

Lee, though, was a staunch eugenicist and did not apply the same thinking to race:

After dismantling the connection between gender and intellect, a logical route would have been to apply the same analysis to race. And race was indeed the next realm that Lee turned to—but her conclusions were not the same. Instead, she affirmed that through systematic measurement of skull size, scientists could indeed define distinct and separate racial groups, as craniometry contended. (The Statistician Who Debunked Sexist Myths About Skull Size and Intelligence)

Contemporary research on race, brain size, and intelligence

Starting from the mid-1980s when Rushton first tried to apply r/K to human races, there was a lively debate in the literature, with people responding to Rushton and Rushton responding back (Cain and Vanderwolf, 1990; Lynn, 1990; Rushton, 1990; Mouat, 1992). Why did Rushton seemingly revive this area of “research” into racial differences in brain size between human races?

Contextualizing Rushton's views on racial differences needs to start with his teenage years. Rushton stated that being surrounded by anti-white and anti-western views led to him seeking out right-wing ideas:

JPR recalls how the works of Hans Eysenck were significantly influential to the teenage Rushton, particularly his personality questionnaires mapping political affiliation to personality. During those turbulent years JPR describes himself as growing his hair long, becoming outgoing but utterly selfish. Finding himself surrounded by what he described as anti-white and anti-western views, JPR became interested in right-wing groups. He went about sourcing old, forbidden copies of eugenics articles that argued that evolutionary differences existed between blacks and whites. (Forsythe, 2019) (See also Dutton, 2018.)

Knowing this, it makes sense how Rushton was so well-versed in the old 1800s and 1900s literature on racial differences.

For decades, J. P. Rushton argued that the skulls and brains of blacks were smaller than those of whites. Since intelligence was related to brain size in Rushtonian r/K selection theory, this meant that some of the intelligence differences between blacks and whites, as indexed by IQ scores, could be accounted for by differences in brain size between them. Since the brain size differences between races accounted for millions of brain cells, this could then explain race differences in IQ (Rushton and Rushton, 2003). Rushton (2010) went as far as to argue that brain size was an explanation for national IQ differences and longevity.

Rushton's thing in the 90s was to use MRI to measure endocranial volumes (eg Rushton and Ankney, 1996). Of course, they attempted to show how smaller brain sizes are found in lower classes, women, and non-white races. Quite obviously, this is scientific racism, sexism, and classism (which Murray, 2020 also wrote a book on). In any case, Rushton and Ankney (2009) tried arguing for "general mental ability" and whole brain size, trying to argue that the older studies "got it right" in regard to not only intelligence and brain size but also race and brain size. (Rushton and Ankney, just like Rushton and Jensen 2005, cited Mall, 1909 in the same sentence as Bean, 1906, trying to argue that the differences in brain size between whites and blacks were noted then, when Mall was a response specifically to Bean! See Gould 1996 for a solid review of Bean and Mall.) Kamin and Omari (1998) showed that whites had greater head height than blacks while blacks had greater head length and circumference. They described many errors that Lynn, Rushton and Jensen made in their analyses of race and head size. Not only did Rushton ignore Tobias' conclusions when it comes to measuring brains, he also ignored the fact that American Blacks, in comparison to American, French and English whites, had larger brains in Tobias' (1970) study (Weizmann et al, 1990).

Rushton and Ankney (2009) review much of the same material they did in their 1996 review. They state:

The sex differences in brain size present a paradox. Women have proportionately smaller average brains than men but apparently have the same intelligence test scores.

This isn't a paradox at all; it's very simple to explain. Terman assumed that men and women should be equal in IQ and so constructed his test to fit that assumption. Since Terman's Stanford-Binet test is still in use today, and since newer versions are "validated" on older versions that held the same assumption, it follows that the assumption is still alive today. This isn't some "paradox" that needs to be explained away by brain size; we just need to look back into history and see why this is a thing. The SAT, too, has been changed many times to strengthen or weaken sex differences (Rosser, 1989). It's funny how this completely astounds hereditarians. "There are large differences in brain size between men and women but hardly any difference in IQ, yet a 1 SD difference in IQ between whites and blacks which is accounted for in part by brain size." I wonder why that never struck them as absurd? If Rushton accepted brain weight as an indicator that IQ test scores reflected differences in brain size between the races, then he would also need to accept that this should be true for men and women (Cernovsky, 1990), but Rushton never proposed anything like that. Indeed he couldn't, since sex differences in IQ are small or nonexistent.
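To illustrate the test-construction point, here is a toy Python simulation (this is not Terman's actual procedure, and all numbers are invented): if items showing large group gaps are dropped during item selection, the assembled test shows a much smaller group difference essentially by construction.

```python
# Toy simulation of the item-selection argument above. All numbers are invented;
# this is not a reconstruction of any real test's development.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-item "gaps": the expected score advantage of one group over
# another on each candidate item; the pool is drawn so it favors one group on average.
pool_gaps = rng.normal(0.3, 0.5, 200)

# If every item in the pool were used, the test-level gap would be the mean per-item gap.
print(f"mean gap, all {pool_gaps.size} items: {pool_gaps.mean():.3f}")

# Item-selection step: keep only items with small gaps in either direction,
# mimicking the idea that items showing large group differences get dropped.
kept = pool_gaps[np.abs(pool_gaps) < 0.2]
print(f"mean gap, {kept.size} kept items:    {kept.mean():.3f}")
```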

In their review papers, Rushton and Ankney, as did Rushton and Jensen (I assume this was Rushton's contribution to the paper, since he has the same citations and arguments in his book and other papers), consistently return to a few references: Mall, Bean, Vint and Gordon, Ho et al and Beals et al. Cernovsky (1995) has a masterful response to Rushton where he dismantles his inferences and conclusions based on other studies. Cernovsky showed that Rushton's claim that there are consistent differences between races in brain size is false; Rushton misrepresented other studies which showed blacks having heavier brains and larger cranial capacities than whites. He misrepresented Beals et al by claiming that the differences in the skulls they studied were due to race, when the racial correlation was spurious: climate explained the differences regardless of race. And Rushton even misrepresented Herskovits' data, which showed no difference regarding stature or crania. So Rushton even misrepresented the brain-body size literature.

Now I need to discuss one citation line that Rushton went back to again and again throughout his career writing about racial differences. In articles like Rushton (2002), Rushton and Jensen (2005), and Rushton and Ankney (2007, 2009), Rushton went back to a similar citation line: citing early 1900s studies which purport to show racial differences. Knowing what we know about Rushton looking for old eugenics articles that argued that evolutionary differences existed between blacks and whites, this can now be placed into context.

Weighing brains at autopsy, Broca (1873) found that Whites averaged heavier brains than Blacks and had more complex convolutions and larger frontal lobes. Subsequent studies have found an average Black–White difference of about 100 g (Bean, 1906; Mall, 1909; Pearl, 1934; Vint, 1934). Some studies have found that the more White admixture (judged independently from skin color), the greater the average brain weight in Blacks (Bean, 1906; Pearl, 1934). In a study of 1,261 American adults, Ho et al. (1980) found that 811 White Americans averaged 1,323 g and 450 Black Americans averaged 1,223 g (Figure 1).

There are, however, some problems with this citation line. For instance, Mall (1909) was actually a response to Bean (1906). Mall was blind to the race of the brains in his reanalysis and found no differences in the brain between blacks and whites. Regarding the Ho et al citation, Rushton completely misrepresented their conclusions. Further, brains that are autopsied aren't representative of the population at large (Cain and Vanderwolf, 1990; see also Lynn, 1989; Fairchild, 1991). Rushton also misrepresented the conclusions in Beals et al (1984) over the years (eg, Rushton and Ankney, 2009), reporting that they found his same racial hierarchy in brain size. Cernovsky and Littman (2019) stated that Beals et al's conclusion was that cranial size varied with climatic zone and not race, and that the correlation between race and brain size was spurious, with smaller heads found in warmer climates regardless of race. This is yet more evidence that Rushton ignored data that did not fit his a priori conclusions (see Cernovsky, 1997; Lerner, 2019: 694-700). Nevertheless, it seems that Rushton's categorization of races by brain size cannot be valid (Peters, 1995).

It would seem to me that Rushton was well-aware of these older papers due to what he read in his teenage years. Although at the beginning of his career, Rushton was a social learning theorist (Rushton, 1980), quite obviously Rushton shifted to differential psychology and became a follower—and collaborator—of Jensenism.

But what is interesting here in the renewed ideas of race and brain size are the different conclusions that different investigators came to after they measured skulls. Lieberman (2001) produced a table which shows different rankings of different races over the past few hundred years.

Table 1 from Lieberman, 2001 showing different racial hierarchies in the 19th and 20th century

As can be seen, there is a stark contrast in who was on top of the hierarchy based on the time period the measurements were taken. Why may this be? Obviously, this is due to what the investigator wanted to find—if you’re looking for something, you’re going to find it.

Rushton (2004) sought to revive the scala naturae, proposing that g (the general factor of intelligence) sits atop a matrix of correlated traits, and he tried to argue that the concept of progress should return to evolutionary biology. Rushton's r/K theory has been addressed in depth, and his claim that evolution is progressive is false. Nevertheless, even Rushton's claim that brain size was selected for over evolutionary history seems to be incorrect—it was body size that was, and since larger bodies have larger brains, this explains the relationship. (See Deacon, 1990a, 1990b.)

Salami et al (2017) used brains from fresh cadavers, severing them from the spinal cord at the foramen magnum and completely removing the dura mater. This allowed them to measure the whole brain without any confounds due to parts of the spinal cord which aren't actually parts of the brain. They found that the mean brain weight for blacks was 1280g, with a range of 1015g to 1590g, while the mean weight of male brains was 1334g. Govender et al (2018) showed a 1404g mean brain weight for the brains of black males.

Rushton aggregated data from myriad different sources and time periods, claiming that by aggregating even data which may have been questionable in quality, the true differences in brain size would appear when averaged out. Rushton, Brainerd, and Pressley (1983) defended the use of aggregation, stating, "By combining numerous exemplars, such errors of measurement are averaged out, leaving a clearer view of underlying relationships." However, this method that Rushton used throughout his career has been widely criticized (eg, Cernovsky, 1993; Lieberman, 2001).

Rushton was quoted as saying, "Even if you take something like athletic ability or sexuality—not to reinforce stereotypes or some such thing—but, you know, it's a trade-off: more brain or more penis. You can't have both." How strange—because for 30 years Rushton pushed stereotypes as truth and built a whole (invalid) research program around them. The fact of the matter is, for Rushton's hierarchy when it comes to Asians, they are a selected population in America. Thus, even there, Rushton's claim rests on values taken from a population selected into the country.

While Asians had larger brains and higher IQ scores, they had lower sexual drive and smaller genitals; blacks had smaller brains and lower IQ scores with higher sexual drive and larger genitals; whites were just right, having brains slightly smaller than Asians with slightly lower IQs and lower sexual drive than blacks but higher than Asians along with smaller genitals than blacks but larger than Asians. This is Rushton’s legacy—keeping up racial stereotypes (even then, his claims on racial differences in penis size do not hold.)

The misleading arguments on brain size lend further evidence against Rushton's overarching program. Thus, this discussion is yet more evidence that Rushton was anything but a "serious scholar"; he was a man who trolled shopping malls asking people about their sexual exploits, clearly an ideologue with a point to prove about race differences which probably took shape in his teenage years. Rushton got a ton wrong, and we can now add brain size to that list too, due to his fudging of data, misrepresentation of data, and exclusion of data that didn't fit his a priori biases.

Quite clearly, whites and Asians have all the “good” while blacks and other non-white races have all the “bad.” And thus, what explains social positions not only in America but throughout the world (based on Lynn’s fraudulent national IQs; Sear, 2020) is IQ which is mediated by brain size. Brain size was but a part of Rushton’s racial rank ordering, known as r-K selection theory or differential K theory. However, his theory didn’t replicate and it was found that any differences noticed by Rushton could be environmentally-driven (Gorey and Cryns, 1995; Peregrine, Ember and Ember, 2003).

The fact of the matter is, Rushton has been summarily refuted on many of his incendiary claims about racial differences, so much so that a couple of years ago quite a few of his papers were retracted (three in one swipe). Among them was a theoretical article arguing for the possibility that melanocortin and skin color mediate aggression and sexuality in humans (Rushton and Templer, 2012). (This appears to be the last paper that Rushton published before his death in October 2012. How poetic that it was retracted.) This was due mainly to an outstanding and in-depth look into the arguments and citations made by Rushton and Templer. (See my critique here.)

Conclusion

Quite clearly, Gould got it right about Morton—Gould's reanalysis showed the unconscious bias that was inherent in Morton's thoughts on his skull collection. Gould's—and Weisberg's—reanalyses show that there are only small differences in the skulls of Morton's collection. Even then, Gould's landmark book showed that the study of racial differences—in this case, in brain and skull size—came from a place of racist thought. Writings from Rushton and others carry on this flame, although Rushton's work was shown to have considerable flaws, along with the fact that he outright ignored data that didn't fit his a priori convictions.

Although comparative studies of brain size have been widely criticized (Healy and Rowe, 2007), they quite obviously survive today due to the assumptions that hereditarians hold about "IQ" and brain size, along with the assumption that there are racial differences in brain size and that these differences are causal for socially important things. However, as can be seen, the comparative study of racial brain sizes and the assumption that IQ is causally mediated by brain size are hugely mistaken. Morton's studies were clouded by his racial bias, as Gould, Weisberg, and Kaplan et al showed. When Rushton, Jensen, and Lynn arose, they tried to carry on that flame, correlating head size and IQ while claiming that smaller head sizes and—by identity—smaller brains are related to a suite of negative traits.

The brain is of course an experience-dependent organ and people are exposed to different types of knowledge based on their race and social class. This difference in knowledge exposure based on group membership, then, explains IQ scores. Not any so-called differences in brain size, brain physiology or genes. And while Cairo (2011) concludes that “Everything indicates that experience makes the great difference, and therefore, we contend that the gene-environment interplay is what defines the IQ of an individual“, genes are merely necessary for that, not sufficient. Of course, since IQ is an outcome of experience, this is what explains IQ differences between groups.

Table 1 from Lieberman (2001) is very telling about Gould's overarching claim about bias in science. As the table shows, the hierarchy in brain size was constantly shifting throughout the years based on a priori biases. Even within the same time period, different authors came to different conclusions on whether or not there are differences in brain size between races. Quite obviously, the race scientists would show that race is the significant variable in whatever they were studying, so the average differences in brain size would then reflect differences in genes and then intelligence, which would then be reflected in civilizational accomplishments. That's the line of reasoning that hereditarians like Rushton use when operating under these assumptions.

Science itself isn’t racist, but racist individuals can attempt to use science to import their biases and thoughts on certain groups to the masses and use a scientific veneer to achieve that aim. Rushton, Jensen and others have particular reasons to believe what they do about the structure of society and how and why certain racial groups are in the societal spot they are in. However, these a priori conceptions they had then guided their research programs for the rest of their lives. Thus, Gould’s main claim in Mismeasure about the bias that was inherent in science is well-represented: one only needs to look at contemporary hereditarian writings to see how their biases shape their research and interpretations of data.

In the end, we don't need just-so stories to explain how and why races differ in IQ scores. We most definitely don't need any kind of false claims about how brain size is causal for intelligence. Nor do we need to revive racist thought on the causes and consequences of racial differences in brain size. Quite obviously, Rushton was a dodgy character in his attempt to prove his tri-archic racial theory using r/K selection theory. But it seems that when one surveys the history of accounts of racial differences in brain size and how these values were ascertained, upon critical examination, such differences claimed by the hereditarian all but disappear.

Eugenics and Brain Reductionism in Colonial Kenya

4250 words

Reducing “intelligence” to the brain is nothing new. This has been the path hereditarians have taken in the new millennium to try to show that the hereditarian hypothesis is true. This is basically mind-brain identity as I have argued before. Why are African countries so different from other more developed countries? The hereditarian assumes that biology must be a factor, and it is there where they try to find the answer. This was what British Eugenicists in Kenya tried to show—that the brain of the Kenyan explained how and why East Africa is so different in comparison to Europe regarding civilizational accomplishments.

In this article, I will discuss eugenic attitudes toward Kenyans and the attempted reduction of intelligence to the brain, how these attitudes and beliefs, which grew out of Galtonian ideas, traveled with the British to Kenya, and how such beliefs never died out.

Eugenics in Kenya

Eugenic ideas on race and intelligence appeared in Kenya in the 1930s since eugenics promised biological solutions to social problems (Campbell, 2007, 2012). Of course these ideas grew from the heartland of eugenics, where it began with Francis Galton. So it's no surprise that Britons who went to Kenya held those ideals. Moreover, the attitudes that the British settlers in Kenya had toward the law in regard to Africans seem reminiscent of Jim Crow America:

The law must be a tool used on behalf of whites to bend Africans to their will. It must be personal and racially biased, the punishment swift and sharp. (Shadle, 2010)

This story begins with F. Vint (1934) and Henry Gordon (1934) (who was in Kenya beginning in 1925). (See Mahone, 2007.) Gordon met Vint while he was a visiting doctor at the Mathari Mental Hospital in Nairobi (Tilley, 2005: 235). Both of these men attempted to show that Africans were inferior to Europeans in intelligence, and used physical brain measures to attempt to show this.

Vint used two measures—brain weight and brain structure. He also argued that the pyramidal cell layer of the Kenyan brain was only 84 percent the size of that of the European brain. Vint used others' comparisons of Europeans' brains for these studies, never studying them himself. So he concluded that the average Kenyan reached only the development of a 7 or 8 year old European. While Vint (1934) argued that the brain of the Kenyan was 152 grams lighter than the average brain of the European, he didn't explicitly claim in this paper that this would then lead to differences in intelligence. We can infer that this was an implication of the argument based on his other papers. Campbell (2007: 75) quotes Vint in his article A Preliminary Note on the Cell Content of the Prefrontal Cortex of the East African Native on the subject of brain weight and intelligence:

Thus from both the average weight of the African brain and measurements of its prefrontal cortex I have arrived, in this preliminary investigation, at the conclusion that the stage of mental development reached by the average native is that of the average European boy of between 7 and 8 years of age.

Note the similarity between this and Lynn’s claim that Bushman IQ is 54 which corresponds to that of European 8 year olds. (See this article for a refutation of that claim.) So Vint believed that he had found the reason for racial backwardness, and this is of course through reduction to biology. Campbell (2007: 60) also tells us how the eugenic movement in Kenya grew out of British eugenic ideas along with the brain reductionism they espoused:

Eugenics in Kenya grew out of the theories disseminated from Britain; the application of current ideas about the transmission of innate characteristics, in particular intelligence, shaped a new and extreme eugenic interpretation of racial difference. The Kenyan eugenicists did not, however, use the most obvious methods, such as pedigrees, statistics and intelligence testing, which were applied by British eugenicists when assessing the intelligence of large social groups. When examining race, an area in which British eugenics had not prescribed a methodology, the Kenyan doctors most radically made histological counts of brain cells and physical measurements of brain capacity. This led to the adoption of a particularly pathologising theory about biological inferiority in the East African brain.

Gordon (1934) found an average cranial capacity of 1,316 cc in Kenyans in comparison to an average cranial capacity of 1,481 cc in Europeans. This led to the conclusion that the Kenyan brain was both quantitatively and qualitatively inferior in comparison to the European brain. This of course meant that the brain was what we needed to look at, as this would show differences in intelligence between groups of people that we could actually measure. Gordon (1934: 231-232) describes some of Vint's research on the brain, stating that physical and environmental causes must not be discounted:

Dr. Vint's report on his naked-eye and microscopic examination of one hundred brains of normal male adults is to be published shortly in the Journal of Anatomy; but in order that we may have a little more light on the question of whether the East African cerebrum is, on the average, on a lower biological level than the European cerebrum, I may mention these facts:

In the areas of the cortex examined, Dr. Vint found a total inferiority in quantity, as compared with the European, of 14.8 percent. His naked-eye examination revealed a significant simplicity of convolutional pattern and many features generally called primitive; e.g. the lunate sulcus, described by Professor Elliot Smith, was present in seventy of the one hundred brains. The microscopic examination showed the important supragranular layer of the cortex to be deficient in all the six areas that Von Economo examined, and the cells of these areas to be deficient very markedly in size, arrangement and in differentiation.

These, I think, are enough of Dr. Vint’s new facts to make us feel that the deficiencies found in examination of the living are indeed associated with suggestive deficiency in the native cerebrum; that we are in fact confronted in the East African with a brain on a lower biological level. This, I submit, is a matter requiring investigation by the highest expert skill into the question of heredity or environment or both.

However, going back to what Campbell stated above about Kenyan eugenicists not using tests, Gordon (1934) states that the Binet was "quite unsuitable", while the Porteus maze test was "both suitable and to native liking." Gordon stated that although the sample was too small to draw a definitive conclusion, the results trended in line with Vint's measures of the brain at puberty as described by Gordon. Gordon, it seemed, had a negative view of cross-cultural comparisons between whites and blacks:

I find, on coming out of the darkness and confusion of Africa into the clear and tranquil air of European psychological thought and practice, that mental tests and mental ages by themselves are largely depended upon for the diagnosis of amentia. I venture to say only this: In my experience of many thousands of natives, intelligence in its ordinary connotation is present amongst them often to an enviable degree; nevertheless, I believe we may do the native injustice and even injury if we are content to estimate his "intelligence" only in terms of his apparent ability to cope with the exactions of European scholastic education. Moreover, in the present state of psychological knowledge it seems to me that any use of mental tests as a means of comparison between European and African—races of widely different physical and social heritage and environment—carries the risk of misleading African education and legislative policy. The field for research by the trained psychologist of broad outlook is enormous in East Africa; his presence would be welcome. (Gordon, 1934)

Nevertheless, despite Gordon's surprisingly negative view of the cross-cultural validity of tests, he did still believe that, to ameliorate amentia in the native population, eugenic measures must be undertaken.

We can see now how Vint and Gordon attempted to infer mentality from the brain—and of course inferior mentality in the brain of the East African, in this case Kenyans (of course, the tribes that were studied). So, on the basis of Vint's studies, it was proclaimed in a 1933 commentary in Nature titled European Civilisation and African Brains that, due to brain differences, "Europeanisation" for the Kenyan just wasn't possible. It was Gordon's intention to use the study of racial differences to enact eugenic policies in Kenya. For if Kenyan "backwardness" is due to their intelligence, which is due to their deficient brain, then this would have implications for their education and health. Regarding "backwardness", Gordon (1945: 140) had this to say:

A few of the important questions ancillary to this leading qualitative question are:
(1) Mental deficiency, ignored by the laws of Kenya including the immigration law;
(2) Unprevented preventable diseases;
(3) Miscegenation, present and future;
(4) The introduction of contraceptives to Asiatics and Africans and no appearance of organized family planning.

The second momentous qualitative issue is the accepted “backwardness” of our African group and the question: what is backwardness? This condition, long discussed, has never been investigated; its causes and nature are wholly unknown; the correct treatment for it is wholly unknown. There are some who think they know these things and have unwittingly intensified a situation containing a deep appeal for truth. This situation must inevitably be encountered by a population inquiry.

I have often pointed out that scientific light upon “backwardness” is required for commonsense thought and action in regard to difficult questions in trusteeship for our Africans, of which I name only the following:
(1) Scholastic education and vocational training;
(2) Mental deficiency and mental disorder;
(3) Alcoholism and drug addiction;
(4) Adult and juvenile crime;
(5) The ayah question;
(6) The urbanization of a backward rural people;
(7) The capacity of the East African Native to acquire British culture.

Such questions cannot be lightly brushed aside or lightly answered by a nation anxious to help up a weaker people; nor is the responsibility of taking charge of that people and its future without scientific answers to such questions one to be lightly continued. It should be more widely known that the differences between the white and the black man are far from being confined to colour, and that to proceed as if the resemblances were all that matters may be a grievous error.

Gordon stated that the most important “resource” for study was the population, which other scientists ignored. Gordon dubbed this the “population problem.” Due to these kinds of eugenic ideas, there were blood banks in Kenya that were racially segregated (Dantzler, 2017). What Gordon, Vint and other Kenyan eugenicists were worried about was amentia, which is intellectual disability or severe mental illness. Although Gordon (1934) did discuss some environmental influences on the brain development of the East African in his talk to the African Circle, he argued for eugenic proposals due to what he claimed to be a high level of amentia in the population, which led to decreased intelligence. In this same talk, he discussed Vint’s previous research, presenting data claiming that the brain growth of the East African was about half that of the European. He also stated that they were inferior to Europeans not only in brain measures, but also in “certain physical and psychophysical attributes, but also in reaction to the mental tests used by the enquiry, although it is not pretended that mental tests suitable to the East African have yet been arrived at” (Gordon, 1934: 235). He then stated that only eugenic proposals could fix the inborn attributes of the so-called “aments.” Thus, if there are differences in the brain between Europeans and East Africans, then “efforts to educate the African to the standard of the European could prove to be either futile or disastrous” (Mahone, 2007).

So, on this view, without a good understanding of eugenics and how it works, it made no sense to try to develop African civilizations, since the Kenyans’ supposedly inferior mentality, rooted in their brains, made it a foregone conclusion that they would not be able to keep up what they would need to in order to be educated and in good health. Thus, to Kenyan eugenicists like Gordon and Vint, Kenyans were biologically inferior due to their brains.

It is worth noting that Gordon did not believe that the human races were the same species, and he held that the Kenya colony was in danger of degrading due to the emigration of “mentally unstable” Europeans from the upper classes. He did, though, believe that some of them could be cured and become useful in the colony, even while maintaining that such “mental unstables” should not have been sent to the colony in the first place (Campbell, 2007). Gordon also claimed that high-grade “aments” could flourish undetected in a low-level society, only being detected once introduced to European civilization.

After Gordon and Vint came J. C. Carothers who, despite lacking psychiatric training, was sent to Kenya as a specialist psychologist (Prince, 1996: 235). He became the director of the Mathari mental hospital in Nairobi in 1938 and held the position until 1950 (Carson, 1997), studying the “insane” at the hospital during that time (Carothers, 1947). Although he seemed to be influenced by Gordon and Vint, and seemed to share their brain reductionism, he approached it from an environmental tilt, though he did not discount heredity as a factor in racial differences. Carothers claimed that mental illness and cognitive/mental deficiency are “normal physical state[s]” in the African:

In searching for a plausible theory of African psychology, Carothers attempted to explain a perceived difference between Africans and Europeans. He notes gross variation in physical characteristics, such as skin color, which he then correlates with supposed differences in cognitive capability. He quotes Sequeira, the renowned dermatologist, in support:

“both the cerebral cortex and the epidermis are derived from the same elementary embryonic layer–the epiblast….It should therefore not be surprising on embryological grounds to find differences in the characters of the cerebral cortex in different races (2).”

Carothers also investigated the general shape, fissuration and cortical histology of the African brain as compared to the European brain. While he notes that “no sweeping conclusions in regard to African mentality can be arrived at on the basis of these data,” his general conclusion was that Africans exhibit a “cortical sluggishness” due to under-use of the frontal lobes, which inhibited their ability to synthesize information (3).

With the frontal lobe hypothesis, Carothers claimed that cognitive or mental inferiority was an inherent state in the African. “With the Negro,” he writes, “emotional, momentary and explosive thinking predominates… dependence on excitement, on external influences and stimuli, is a characteristic sign of primitive mentality.” According to Carothers, the African’s “mental development is defined by the time he reaches adolescence, and little new remains to be said” (3). In this supposed child-like permanence, “above all, the importance of physical needs (nutrition, sexuality)” prevail (2). This belief was used as proof that Africans could not appreciate the Victorian moral values of hard work and education, the desire for which was said to have come in part through denial of the sexual drive. By extension, the African was denied the possibility of reaching a civilized state.

Carothers also claimed that the African exhibits an “impulsivity [that is] violent but unsustained, … an ‘immaturity’ which prevents complexity and integration in the emotional life” (2). Using this discourse of violence, he medicalized “mental illness” as a normal physical state in the African. When the British administration in Kenya called upon Carothers to assess the Mau Mau rebellion (1945-1952), ethnopsychiatry was “commandeered to clothe the political interests of the colonists in the pseudo-scientific language of psychiatry to legitimize European suzerainty” (4). After due investigation, Carothers reported to the British government that “the onus for the rebellion rests with the deficiencies characteristic of the native Kenyans and not with the policies of the British colonial desire” (3). (Carson, 1997)

In 1951, Carothers (1951: 47) argued for a cultural view to explain the “frontal idleness” of the African, while not discounting “the possibility of anatomical differences” in explaining it:

This frontal idleness in turn can be accounted for on cultural grounds alone, but the possibility of anatomical differences, is not thereby excluded.

Finally, a plea is voiced for expert anatomical study of the African brain and, in view of his resemblance to a certain type of European psychopath, of the brains of the latter also.

Carothers published a WHO report in 1953 where he stated that he would relate cultural factors, malnutrition and disease to mental development (Carothers, 1953). Carothers (1953: 106) stated that “The psychology of the African is essentially the psychology of the African child.” This claim, of course, gels well with the Gordon-Vint claim that the brain growth of the East African subsides far earlier than that of the European. Carothers also reinterpreted Vint’s findings on the thinner cerebral cortex.

[Carothers] introduced an interpretation which permitted education to play a role in post-natal cerebral development. Noting the remarkable enhancement in interest and alertness “that comes to African boys and girls as a result of only a very little education… often comprising little more than some familiarity with written symbols in reading, writing and arithmetic,” he raised the question whether “it is not possible that the maturation of those cortical cells in Europe is also dependant on the acquisition of that skill” (Carothers, 1962, p. 134). (Prince, 1996: 237)

Though regarding the so-called thinner cortex of the African, Tobias (1979) stated:

Published interracial comparisons of thickness of the cerebral cortex and, particularly, of its supragranular layer, are technically invalid: there is no acceptable proof that the cortex of Negroes is thinner in whole, or in any layer, than that of Europeans. It is concluded that vast claims have been based on insubstantial evidence.

However, Cryns (1962: 237) stated that while there are differences in brain morphology between whites and blacks, there was no evidence that this accounted for the alleged inferiority in intelligence in Africans:

With regard to brain fissuration and the histological structure of the cortex, both Carothers (14, p. 80) and Verhaegen (49, p. 54) state that there is no scientific evidence sufficient to assume that mental capacity is in some degree related to the surface or structure of the cerebral cortex.

The general conclusion, then, to be drawn from the above anatomical and physiological brain studies is that there is sufficient empirical evidence indicating the existence of morphological differences between White and Negro brains, but that there is no sufficient evidence to indicate that the morphological peculiarities found in the African brain are of functional significance, i.e., account for an alleged intellectual inferiority.

Gordon and Vint’s works and conclusions in the modern day

Reading the works of these two men, we can see that what they were saying is nothing new—contemporary hereditarians argue for broadly similar conclusions. Rushton was one of the main hereditarians who argued that biological reductionism was true, and he authored many studies with Ankney on the correlation between general mental ability (GMA) and the brain (Rushton and Ankney, 2007; 2009).

Rushton, however, aggregated numerous different measurements from different time periods, even from authors who did not subscribe to the racial hierarchies that Rushton proposed—in fact, this “hierarchy” changed numerous times throughout the ages (Lieberman, 2001). The current hierarchy came about due to East Asia’s economic rise after WW2, and the “shrinking skulls” of Europeans began in the 1980s with Rushton (Lieberman, 2001). Although Lynn (1977, 1982) did speak of higher Japanese IQs, it was of course in the context of “Japan’s dazzling commercial success.” (See here for a refutation of Lynn’s genetic hypothesis regarding Asians.)

Gordon’s and Vint’s works were cited favorably by Rushton and Jensen (2005: 255) and Rushton and Jensen (2010), while Rushton referenced Vint many times (Rushton, 1997; Rushton and Ankney, 2009). These works were cited as being in agreement with Morton’s studies on cranial capacity (see Gould, 1996; Weisberg, 2014; Kaplan, Pigliucci and Banta, 2015; Weisberg and Paul, 2019). In a recent paper, however, Salami et al (2017) showed that the average brain weight of Africans has been underestimated, finding a mean of 1280 g with a range between 1015 g and 1590 g (and a mean of 1334 g for the brains of males), while no statistical difference between groups was found. This was also replicated by Govender et al (2018) in South Africa.

It is quite obvious, looking at how contemporary hereditarian research is trending, that the biological reductionism of Gordon and Vint is still alive today in fMRI and MRI studies. Contemporary hereditarians have also implicated the frontal lobe as part of the reason why blacks are supposedly “less intelligent” than whites, and as we have seen, this is a decades-old claim. These beliefs were held due to outdated and outright racist views on the “quality” of what was, according to Gordon, the greatest “resource”: the population.

Conclusion

Eugenics in Kenya—as it was in America—wasn’t a scientific movement; it was a social and political one. Eugenic ideas were practiced all over the world from the time of antiquity all the way to the modern day. The biological reductionism espoused by Kenyan eugenicists is still with us today, and instead of using post-mortem brains and crude skull measures, we are using more sophisticated technologies to try to show this reductionism is true. However, since mind doesn’t reduce to brain, this is bound to fail.

As we can see, this kind of gross biological reductionism hasn’t left us; it has only strengthened. The mental and physical reductionism inherent in these theories has never died—it just quieted down for a bit after WW2.

What is inherent in such claims is that there are not only racial brains, but racial minds. What Gordon, Vint and Carothers tried to argue was that the capacity for rebellion was inherent in the Kenyan native, not a product of British rule and the society the British attempted to create in Kenya. This seems to me to be like the “drapetomania” craze during slavery in America: pathologizing a normal response—like wanting to escape slavery—and creating a new psychological diagnosis to explain why people act a certain way. The views espoused by the scientific racists in Kenya were not new, since earlier in the 19th century the inferiority of the “black brain” was well-noted and discussed. I have, though, found one dissenting view, from Tiedemann (1836: 504), who claims that his studies led him to the belief, “by measuring the cavity of the skull of Negroes and men of the Caucasian, Mongolian, American, and Malayan races, that the brain of the Negro is as large as that of the European and other nations.”

Campbell (2007: 219-220) explains that although most probably still held their eugenic beliefs, the changing intellectual climate in Britain was a main reason why the eugenics movement in Kenya was not sustained.

By the late 1930s, although there had been no radical change in settler attitudes to race and no upheaval in the policy or personnel of the colonial administration the Kenyan eugenics movement petered out. We must assume that individuals retained their eugenic beliefs, but its potency in Kenya’s lore of human biology was lost. The causes of the demise of Kenyan racial eugenics lay in the financial retrenchment of the 1930s and responses in the metropole at a time when scientific racism was being increasingly undermined on both political and intellectual grounds. Without metropolitan support, Kenyan eugenics could not be sustained as a social movement. The size and composition of the Kenyan European community was such that there were not enough individuals with the intellectual and scientific interests and authority to establish an independent, self-sufficient organisation. Kenyan eugenics was forced to look to the metropole for financial, intellectual and institutional legitimacy. The demise of Kenyan eugenics is therefore intimately linked with a changing intellectual climate in Britain.

The views espoused by Gordon, Vint and Carothers have not left us. After Arthur Jensen revived the race and IQ debate in 1969, searches for the cause of blacks’ supposedly lower intelligence began coming back into the mainstream. Rushton and Jensen relied on such works to argue for their conclusion that lower intelligence, and hence lower civilizational attainment and academic performance, was due to genes and brain structure. Such antiquated views, it seems, just will not die. Lieberman (2001) showed how the racial hierarchy in brain size has changed throughout the ages based on current social thought, and of course, this has affected hereditarian thinking in the modern day.

Although some authors in the 1800s and 1900s proclaimed that brain weight had no bearing on one’s mental faculties, quite obviously the Kenyan eugenicists never got that memo. Nevertheless, there are a few studies that contradict Rushton’s racial hierarchy in brain size, showing that the brains of blacks are in the same range as those of whites.

Discussions of the “quality” of the brains of different groups have of course not gone away; the language has just changed. It seems to me that, as with most hereditarian claims, it’s just racists citing racists as “consensus” for their claims. Gordon (1934) asked why the brain of the Kenyan does not develop in the same way as the European’s. Since the reductionism they held to is false, such a question isn’t really relevant.

There is No Such Thing as a “Male” and “Female” Brain

3150 words

Introduction

Almost seven years ago I argued that there is such a thing as a “male” and “female” brain. Now I’m not so sure of that belief, because a claim like that reduces to the claim that there are two different KINDS of brain—male and female. This, though, is basically a mereological fallacy. Brains aren’t gendered/sexed, people are. Brains don’t have genders, people have genders. This doesn’t mean that there are no sex differences in the brain; that claim would be ridiculous. But the actual claim—a claim that I think is perfectly defensible—is that there ARE NOT two different kinds of brain. This is the conclusion that I will argue for in this article.

The brain mosaic

Questions like “Is the brain gendered?” are the wrong kinds of questions to ask. Not only does such a question imply that there is more than one kind of brain, it also implies that the brain is itself gendered. The claim that the brain is gendered is patently false; brains don’t have genders, people have genders, and people aren’t—nor do they reduce to—their brains. Therefore brains aren’t gendered.

When does a feature of a brain count as typical of a male brain, and vice versa for women? How many of these differences would there need to be in one brain to designate that brain as male or female? Of course there are average differences, which I don’t think anyone would deny, but these average differences between brains wouldn’t license the claim that there are two different kinds of brain, just as the fact that there are average differences in hearts between men and women doesn’t license the claim that there are two different kinds of heart. The only clear-cut average difference between the brains of men and women is that of size—women’s brains are about 11 percent smaller than men’s on average (Eliot et al, 2021). But mere size differences, also, do not license the claim that there are two different kinds of brain. For there to be male and female brains—two types of brain—there needs to be a property or set of properties exclusive to each of the two brains, but there are no such properties. Again, no one denies average sex differences; what is denied is that there are two different kinds of brain.

In recent years, talk in the neurosciences has shifted away from such a binary claim to that of mosaicism (Joel, 2011, 2012, 2021; Joel et al, 2015). Fine, Joel, and Rippon even have an explainer about sex, gender, brains and behavior. Joel et al (2015) analyzed four datasets of more than 1,400 individuals, examining the size and characteristics of the brain regions that show the largest sex differences. They found substantial overlap between the sexes on these features and, at the two ends of each distribution, more males and more females, respectively. However, they had a novel finding: many of the brains analyzed had components of each “kind” of brain—they contained a mosaic of features from both ends of the distributions (the “male” and “female” ends). Thus, the claim that brains are a mosaic, or intersex, is true. So sex doesn’t determine brain type and, even though there are average differences between men and women, these average differences don’t add up to the claim that there are two different kinds of brain. Sex is dimorphic, but brains aren’t—brains are monomorphic.
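To make the mosaic point concrete, here is a minimal toy simulation of my own (it is not Joel et al’s actual pipeline, and the sample size, number of features, effect size, and zone cutoffs are all assumptions chosen purely for illustration). It generates synthetic “brain features” with small male-female mean differences and large overlap, then asks how many simulated brains are internally consistent (all features at the “male end” or all at the “female end”) versus mosaic:

```python
# Toy illustration of the mosaic idea: small mean differences per feature,
# large overlap, and then a count of "consistent" versus "mosaic" brains.
# All parameters are illustrative assumptions, not values from Joel et al.
import numpy as np

rng = np.random.default_rng(0)
n_per_sex, n_features = 700, 10
effect_size = 0.5                      # assumed mean difference (in SD units) per feature

males = rng.normal(loc=effect_size, scale=1.0, size=(n_per_sex, n_features))
females = rng.normal(loc=0.0, scale=1.0, size=(n_per_sex, n_features))
scores = np.vstack([males, females])   # 1,400 simulated "brains"

# Rough analogue of Joel et al's zones: the top and bottom thirds of each
# feature's pooled distribution count as the "male end" and "female end".
lo = np.percentile(scores, 33.3, axis=0)
hi = np.percentile(scores, 66.6, axis=0)
male_end = scores > hi
female_end = scores < lo

consistent = np.all(male_end, axis=1) | np.all(female_end, axis=1)
mosaic = np.any(male_end, axis=1) & np.any(female_end, axis=1)

print(f"internally consistent brains: {consistent.mean():.1%}")
print(f"mosaic brains:                {mosaic.mean():.1%}")
```

Run with these assumed numbers, almost no simulated brain is internally consistent and a large share are mosaics, which is the qualitative shape of Joel et al’s finding; the point of the sketch is only that overlapping distributions with small mean shifts produce mosaics, not that these particular parameters describe real brains.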

Monomorphic not dimorphic

Sexual dimorphism is where the sexes of a species differ in characteristics that aren’t solely due to (that is, not related to) their sexual characteristics. Monomorphic species, though, are similar in everything but their sexual characteristics: there is only one form, with all individuals in that species having the same physical characters with little to no variation in them. So the claim that brains are dimorphic means that there are two kinds of brain—male and female. These terms (monomorphic and dimorphic) refer to variation in traits, with monomorphic referring to little or no variation and dimorphic referring to a situation in which there is noticeable variation between two distinct forms. Certain bird species have different physical characteristics between the sexes, such as sex-specific markings, size differences and color differences, which would mean they are dimorphic. Other kinds of bird species, on the other hand, have the same physical characteristics in both sexes, meaning they are monomorphic.

If there is only one form of trait in a population, then the population is monomorphic. If there are two distinct forms of a trait in a population, then that population is dimorphic. Thus if there is little to no variation in the expression of a trait within a population then that population is monomorphic; if there is noticeable variation in the expression of a trait in a population then that population is dimorphic.

Eliot et al (2021) showed that brains aren’t dimorphic, they are monomorphic. The only reliable difference between the two is that of brain size, with women having an 11% smaller brain than men, a difference smaller than the corresponding sex differences in the heart, lungs, and kidneys. Therefore, once brain size is accounted for, there is little to no variation between brains. (Eliot et al state that the few reliable differences between brains are byproducts of brain size, and that brain differences between sexes/genders “explain” about 1 percent of the total variance, which means that brain differences attributable to sex and gender are minuscule compared to individual variation.)

But for all the surplus of brain-level data on male-female difference, surprisingly few clear findings have emerged, and even less to justify labeling the human brain as “sexually-dimorphic.” Nor does anything in this massive data collection actually explain male/female differences in psychology or mental health (De Vries and Södersten, 2009; Hirnstein et al., 2019) in spite of decades of such promise. To the contrary, the data show that male and female brains are overwhelmingly similar, or monomorphic, and suggest that finding such neural correlates will more fruitfully be achieved through study at the individual, as opposed to s/g group level.

Rather, a picture is emerging not of two brain types nor even a continuous gradient from masculine to feminine, but of a multidimensional “mosaic” of countless brain attributes that differ in unique patterns across all individuals (Joel et al., 2015). Although such differences may, in a particular sample, sum up to discriminate male from female brains, the precise discriminators do not translate across populations (Table 7; see also Joel et al., 2018; Sanchis-Segura et al., 2020) so are not diagnostic of two species-wide types. In this sense, the brains of male and females are not dimorphic (like the gonads) but monomorphic, like the kidneys, heart and lungs, which can be transplanted between women and men with great success. (Eliot et al, 2021)
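As a quick sanity check on the “1 percent of the total variance” figure mentioned above, there is a standard conversion between a standardized mean difference (Cohen’s d) and the proportion of variance a binary group variable explains: roughly d²/(d² + 4) for equal-sized groups. The d values below are assumptions I am plugging in for illustration, not numbers taken from Eliot et al:

```python
# Convert a standardized mean difference (Cohen's d) into the approximate
# proportion of variance explained by a two-group split (equal group sizes).
def variance_explained(d: float) -> float:
    return d ** 2 / (d ** 2 + 4)

for d in (0.2, 0.5, 0.8):               # assumed effect sizes for illustration
    print(f"d = {d:.1f}  ->  ~{variance_explained(d):.1%} of variance explained")
```

If the typical sex/gender difference in a brain measure were around d ≈ 0.2 (an assumption here), it would explain roughly one percent of the variance, minuscule next to individual variation, which is the scale of effect being discussed above.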

McCarthy and Arnold (2011) explain that the belief in sex-specific circuits arises from the repeated investigation of a small number of dimorphisms in the brain:

The repeated investigation of a relatively small number of sexual dimorphisms may have contributed to the false impression that a few discrete male or female circuits sit in an otherwise sexually monomorphic brain. The notion that for specific behaviors there is a discrete male neural circuit versus a discrete female neural circuit remains widely held despite a lack of empirical evidence of the existence of either.

The argument against the “two kinds of brain” argument

In this section, I will synthesize the preceding sections into an argument for the conclusion that there aren’t two kinds of brain, male and female.

P1: If there are two kinds of brain (male and female) then there should be clear and distinct differences in brain structure and function between men and women.
P2: Studies have shown that there is a wide range of variation in brain structure and function among individuals of the same sex and also between men and women.
C1: Therefore, the available evidence doesn’t support the claim that there are clear and distinct differences in brain structure and function between men and women.
P3: The claim that there are two kinds of brain (male and female) is based on the assumption that there are clear and distinct differences in brain structure and function between men and women.
C2: Therefore, the claim that there are two kinds of brain (male and female) is not supported by available evidence.

Premise 1: This premise is based on the assumption that male and female brains are fundamentally distinct from each other, to such an extent that they can be sorted into two separate categories. There is, though, much overlap in structure and function between brains belonging to men and brains belonging to women. For example, the Joel et al (2015) study cited above concluded that there is no such thing as a “male” and “female” brain, but rather a continuum of brain characteristics which are influenced by genetic and environmental (G and E) factors. Rippon et al (2014) don’t argue that there are no differences in brain structure and function between the sexes, but they do argue that such differences don’t license the claim that there are two forms—kinds—of brain. It is, again, important to note that none of these researchers argue that there are no sex differences; the claim is that these sex differences don’t add up to make “male” or “female” brains; they don’t belong to two different categories. Joel and Fausto-Sterling (2016) write:

We argue that the existence of differences between the brains of males and females does not unravel the relations between sex and the brain nor is it sufficient to characterize a population of brains. … Studies of humans further suggest that human brains are better described as belonging to a single heterogeneous population rather than two distinct populations.

Premise 2: The references on the brain mosaic back up P2. The differences that do exist are small (as noted by Eliot et al, 2021), and these differences do not support the claim that human brains are dimorphic. There is much overlap between the brains of men and women, and even significant variation in brain function and structure between individuals of the same sex.

Conclusion 1: Based on the two previous premises, the claim that human brains are dimorphic is clearly false. Differences are not clear-cut (and what differences do exist are small), and there is no property or set of properties between brains that would designate one “male” and another “female.”

Premise 3: P3 is based on the history of this kind of research, in which it was assumed that there are two different kinds of brain—male and female. Jordan-Young and Rumiati (2012) argue that much of the research on sex differences in the brain is based on the binary assumption—since sex is binary, the brains inside the heads of individuals must be sexed too. Researchers assume that such differences exist in the brain and then go looking for them. Of course, more often than not, if you’re looking for something you’re going to find it. At the end of the day, sex and gender (s/g; Eliot et al, 2021) are so tightly interwoven (though still distinct) that even if there are biological differences, untangling them will be next to impossible, just as with heritability and the nature-nurture debate.

Conclusion 2: This conclusion logically follows from P3, since the claim that there are male and female brains is based on an outdated and oversimplified understanding of the relationship between biology (the brain) and sex. Any differences that do exist are small, influenced by numerous factors, and fall along a continuum, not a dimorphic binary.

So it thusly follows that there are not two different kinds of brain; the dimorphic assumption is false and brains, like other internal organs, are monomorphic.

Gender isn’t natural

Here I have two arguments: one that establishes that gender and sex aren’t the same, and another that establishes that gender is not natural (it is social).

P1: If gender and sex are the same, then the characters and roles associated with being male and female are biologically determined.
P2: The characters and roles associated with being male and female are not purely biologically determined.
C: Thus, gender and sex are not the same.

P1 is based on the assumption that if gender and sex are the same, then all characters associated with male and female are biologically determined. But gender is a social construct which changes with the times and is different across cultures and time periods, which indicates that such differences are not solely biologically determined. P2 states that gender roles are context- and time-sensitive, so the roles and expectations of men and women are not solely biologically determined. The conclusion then logically follows: if the differences between men and women aren’t purely biologically determined, then gender doesn’t reduce to biology. So sex and gender are different because the characters and roles of men and women aren’t purely biologically determined, which means that gender isn’t reducible to biology.

Now here is my argument that gender is not natural, meaning it is social:

P1: All things that are “natural” are socially unmediated and inevitable (all A are B).
P2: Gender is socially-mediated and not inevitable (C is not B).
C: Therefore, gender is not natural (C is not A).

I think P1 is the only premise that one would reject. But to best defend P1, I only need to appeal to the definition of “natural.” “Natural” refers to anything that exists in the world independent of human society, culture, or intervention. Natural phenomena aren’t socially mediated, meaning that they aren’t shaped by human norms, values or practices, and they are inevitable due to certain physical laws. By “socially unmediated” I mean a phenomenon which isn’t dependent on human values, norms, or practices, which occurs independent of human intervention, and which is not subject to variation or change based on social context or historical period. By “inevitable” I mean phenomena which are subject to natural laws that are universal and unchanging. I can also defend P1 by appealing to the distinction between facts and values: natural phenomena are facts that exist beyond human values, and anything that is subject to human values or norms would be socially mediated, which would include gender.

Now that I have defended P1, an easy example to defend P2 is that of color. It has been argued that men and women prefer different colors due to our hunter-gatherer ancestry (Hurlbert and Ling, 2007). Yet pink used to be seen as a color for boys while blue used to be seen as a color for girls. (See here and here.) The conclusion then follows, since the premises are true and the argument is valid.

Men and women and IQ

Lastly, I will discuss the preceding arguments in the context of IQ. Lynn (1994), for example, argues that there is a 4-point difference between men and women in IQ and relates it to selection pressures. Kanazawa (2009) argues that men have higher IQs than women because men are taller than women, and that when height is controlled for, women have higher IQs. Irwing and Lynn (2006) and Lynn and Kanazawa (2011) also note a small difference between the sexes. But Halpern and Wai (2019) rightly note the historical reasons why there is such a small—almost nonexistent—difference in IQ between men and women:

Massive amounts of data show that although there are some on average differences in specific cognitive abilities, there is considerable overlap in the male and female distributions. There are no sex differences in general intelligence – standardized IQ tests were written to show no differences, and separate assessments that were not written with this criterion show no differences in general intelligence.

When creating his Stanford-Binet test, Terman thought that men and women should be equal in IQ, and so he adjusted his test to reflect this (a priori) assumption. Ackerman (2018) describes this well:

There is an important historical reason why there are negligible gender differences in omnibus IQ assessments. … Terman … decided that there was adequate justification for equality of IQ scores across the sexes, and so he constructed his IQ test to be specifically balanced.

We don’t need differences in height, stories about evolutionary selection pressures, or differences in brain size to explain the small difference between men and women on IQ tests. We only need to look at how the tests are constructed, as the considerations from Terman and also Rosser (1989) show. Thus, we don’t need to look to biology and brains to explain the small difference; it is due to how the tests are constructed.

Conclusion

Taken together, the three sections here point to one conclusion: the nonexistence of male and female brains means that gender doesn’t reduce to biology (the brain), nor do brain differences cause IQ differences between men and women. While hereditarians do argue that the brain size differences between men and women “explain” the slight 4-point or so difference in IQ between men and women, and while women do have about an 11 percent smaller brain than men on average, this does not (1) license the claim that brain size is causal for the small IQ differences or (2) justify the claim that there are two distinct kinds of brain (male and female). So claims from people like Murray (2020) that there are two distinct kinds of brain fail. When does a feature count as “typical” of the so-called male or female brain, and how many of these features would a brain need to have to be designated as male or female? Brains aren’t gendered or sexed, people are, and people aren’t their brains.

P1: If male and female brains don’t exist, then any observed differences in cognitive ability between men and women are likely to be explained by cultural and social factors along with how the tests are constructed.
P2: Male and female brains do not exist.
C: Thus, any observed differences in cognitive ability between men and women are likely to be explained by cultural and social factors along with how the tests are constructed.

Brains are not sexed or gendered; humans (and their selves) are sexed or gendered. While gender identity does exist, it’s irreducible to biology and is a form of personal identity. As I stated at the outset, the claim that there are male or female brains, or that brains are sexed, is a mereological fallacy, since those are properties of the whole (the human) rather than of its parts (the brain). These arguments also have implications for claims that transgender people have brains of “the other sex”: if two types of brain do not exist, then those claims are false. “Brain sex”, therefore, is a nonsensical, incoherent term. Human brains are monomorphic, not dimorphic.

Just-so Stories: The Brain Size Increase

1600 words

The increase in brain size in our species over the last 3 million years has been the subject of numerous articles and books. Over that time period, brain size increased from our ancestor Lucy all the way to today. Many stories are proposed to explain how and why exactly it happened. The explanation is the same ol’ one: those with bigger heads, and therefore bigger brains, had more children and passed on their “brain genes” to the next generation until all that was left were the bigger-brained individuals of that species. But there is a problem here, just like with all just-so stories: how do we know that selection ‘acted’ on brain size and thusly “selected-for” the ‘smarter’ individual?

Christopher Badcock, an evolutionary psychologist, wrote an introduction to EP published in 2001, in which he has a very balanced take on the field—noting its pitfalls and where, in his opinion, EP is useful. (Most may know my views on this already, see here.) In any case, Badcock cites R.D. Martin (1996: 155) who writes:

… when the effects of confounding variables such as body size and socio-economic status are excluded, no correlation is found between IQ and brain size among modern humans.

Badcock (2001: 48) also quotes George Williams, author of Adaptation and Natural Selection (1966; the precursor to Dawkins’ The Selfish Gene), who writes:

Despite the arguments that have been advanced, I cannot readily accept the idea that advanced mental capabilities have ever been directly favored by selection. There is no reason for believing that a genius has ever been likely to leave more children than a man of somewhat below average intelligence. It has been suggested that a tribe that produces an occasional genius for its leadership is more likely to prevail in competition with tribes that lack this intellectual resource. This may well be true in the sense that a group with highly intelligent leaders is likely to gain political domination over less gifted groups, but political domination need not result in genetic domination, as indicated by the failure of many a ruling class to maintain its members.

In Adaptation and Natural Selection, Williams was much more cautious than adaptationists today, stating that adaptationism should be used only in very special cases. Too bad that adaptationists today did not get the memo. But what gives? Doesn’t it make sense that the “more intelligent” human 2 mya would be more successful when it comes to fitness than the “less intelligent” (whatever these words mean in this context) individual? Would a pre-historic Bill Gates have the most children due to his “high IQ” as PumpkinPerson has claimed in the past? I doubt it.

In any case, the increase in brain size—and therefore increase in intellectual ability in humans—has been the last stand for evolutionary progressionists. “Look at the increase in brain size”, the progressionist says “over the past 3mya. Doesn’t it look like there is a trend toward bigger, higher-quality brains in humans as our skills increased?” While it may look like that on its face, in fact, the real story is much more complicated.

Deacon (1990a) notes many fallacies that those who invoke the brain size increase across evolutionary history make, including: the evolutionary progression fallacy; the bigger-is-smarter fallacy; and the numerology fallacy. The evolutionary progression fallacy is simple enough. Deacon (1990a: 194) writes:

In theories of brain evolution, the concept of evolutionary progress finds implicit expression in the analysis of brain-size differences and presumed grade shifts in allometric brain/body size trends, in theories of comparative intelligence, in claims about the relative proportions of presumed advanced vs. primitive brain areas, in estimates of neural complexity, including the multiplication and differentiation of brain areas, and in the assessment of other species with respect to humans, as the presumed most advanced exemplar. Most of these accounts in some way or other are tied to problems of interpreting the correlates of brain size. The task that follows is to dispose of fallacious progressivist notions hidden in these analyses without ignoring the questions otherwise begged by the many enigmatic correlations of brain size in vertebrate evolution.

Of course, when it comes to the bigger-is-smarter fallacy, it’s quite obviously not true that bigger IS always better when it comes to brain size, as elephants and whales have larger brains than humans (also see Skoyles, 1999). But what they do not have more of than humans is cortical neurons (see Herculano-Houzel, 2009). Deacon (1990a: 201) describes the numerology fallacy:

Numerology fallacies are apparent correlations that turn out to be artifacts of numerical oversimplification. Numerology fallacies in science, like their mystical counterparts, are likely to be committed when meaning is ascribed to some statistic merely by virtue of its numeric similarity to some other statistic, without supportive evidence from the empirical system that is being described.

While Deacon (1990a: 232) concludes that:

The idea, that there have been progressive trends of brain evolution, that include changes in the relative proportions of different structures (i.e., enlarging more “advanced” areas with respect to more primitive areas) and increased differentiation, interconnection, and overall complexity of neural circuits, is largely an artifact of misunderstanding the complex internal correlates of brain size. … Numerous statistical problems, arising from the difficulty of analyzing a system with so many interdependent scaling relationships, have served to reinforce these misconceptions, and have fostered the tacit assumption that intelligence, brain complexity, and brain size bear a simple relationship to one another.

Deacon (1990b: 255) notes that brains weren’t directly selected for; rather, bigger bodies were (and bigger bodies mean bigger brains). This does not come near the fallacy of treating natural selection as selecting-for particular traits, since on this view it is the whole organism that is selected, not one of its traits:

I will argue that it is this remarkable parallelism, and not some progressive selection for increasing intelligence, that is responsible for many pseudoprogressive trends in mammalian brain evolution. Larger whole animals were being selected—not just larger brains—but along with the correlated brain enlargement in each lineage a multitude of parallel secondary internal adaptations followed.

Deacon (1990b: 697-698) notes that the large brain-to-body size ratio in humans compared to other primates is an illusion, “a surface manifestation of a complex allometric reorganization within the brain”, and that the brain itself is unlikely to be the object of selection. The correlated reorganization of the human brain, to Deacon, is what makes humans unique, not our “oversized” brains for our body. Deacon (1990c), meanwhile, states that “To a great extent the apparent “progress” of mammalian brain evolution vanishes when the effects of brain size and functional specialization are taken into account.” (See also Deacon, 1997: chapter 5.)

So is there really progress in brain evolution, which would, in effect, lend credence to the idea that evolution is progressive? No: there is no progress in brain evolution, and the so-called size increases throughout human history are an artifact once we take brain size and functional specialization into account (functional specialization is the claim that different areas in the brain are specialized to carry out different functions; see Mahon and Cantlon, 2014). Our brains only seem like they’ve increased in this progressive way; when we get down to the functional details, we can see that it’s just an artifact.

Skoyles and Sagan (2002: 240) note that erectus, for example, could have survived with a much smaller brain, and that the brain of erectus did not arise out of the needs of survival:

So how well equipped was Homo erectus? To throw some figures at you (calculations shown in the notes), easily well enough. Of Nariokotome boy’s 673 cc of cortex, 164 cc would have been prefrontal cortex, roughly the same as half-brained people. Nariokotome boy did not need the mental competence required by contemporary hunter-gatherers. … Compared to that of our distant ancestors, Upper Paleolithic technology is high tech. And the organizational skills used in hunts greatly improved 400,000 years ago to 20,000 years ago. These skills, in terms of our species, are recent, occurring by some estimates in less than the last 1 percent of our 2.5 million year existence as people. Before then, hunting skills would have required less brain power, as they were less mentally demanding. If you do not make detailed forward plans, then you do not need as much mental planning abilities as those who do. This suggests that the brains of Homo erectus did not arise for reasons of survival. For what they did, they could have gotten away with much smaller, Daniel Lyon-sized brains.

In any case—irrespective of the problems that Deacon shows for arguments for increasing brain size—how would we be able to use the theory of natural selection to show what was selected-for, brain size or another correlated trait? The progressionist may say that it doesn’t matter which is selected-for, the brain size is still increasing even if the correlated trait—the free-rider—is being selected-for.

But, too bad for the progressionist: if it is the correlated trait that is being selected-for, and not brain size directly, then the progressionist cannot logically state that brain size—and along with it intelligence (as the implication always is)—is being directly selected-for. Deacon throws a wrench into such theories of evolutionary progress in regard to human brain size. And, looking at erectus, it’s not clear that he really “needed” such a big brain for survival—it seems like he could have gotten away with a much smaller brain. And there is no reason, as George Williams notes, to attempt to argue that “high intelligence” was selected-for in our evolutionary history.

And so, Gould’s Full House argument still stands—there is no progress in evolution; bacteria occupy life’s mode; and humans are insignificant compared to the number of bacteria on the planet, “big brains” or not.

Just-so Stories: MCPH1 and ASPM

1350 words

“Microcephalin, a gene regulating brain size, continues to evolve adaptively in humans” (Evans et al, 2005) and “Adaptive evolution of ASPM, a major determinant of cerebral cortical size in humans” (Evans et al, 2004) are two papers from the same research team which purport to show that both MCPH1 and ASPM are “adaptive” and therefore were “selected-for” (see Fodor, 2008; Fodor and Piattelli-Palmarini, 2010 for discussion). The claim is that there was “Darwinian selection” which “operated on” the ASPM gene (Evans et al, 2004), and that identifying selection on the gene, along with its functional effect, is evidence that it was “selected-for.” But the combination of a functional effect and signs of (supposedly) positive selection does not license the claim that the gene was “selected-for.”

One of the investigators who participated in these studies was one Bruce Lahn, who stated in an interview that MCPH1 “is clearly favored by natural selection.” Evans et al (2005) show specifically that the variant supposedly under selection (MCPH1) showed lower frequencies in Africans and the highest in Europeans.

But, unfortunately for IQ-ists, neither of these two alleles is associated with IQ. Mekel-Bobrov et al (2007: 601) write that their “overall findings suggest that intelligence, as measured by these IQ tests, was not detectably associated with the D-allele of either ASPM or Microcephalin.” Timpson et al (2007: 1036A) found “no meaningful associations with brain size and various cognitive measures, which indicates that contrary to previous speculations, ASPM and MCPH1 have not been selected for brain-related effects” in 9,000 genotyped children. Rushton, Vernon, and Bons (2007) write that “No evidence was found of a relation between the two candidate genes ASPM and MCPH1 and individual differences in head circumference, GMA or social intelligence.” And Bates et al’s (2008) analysis shows no relationship between IQ and MCPH1-derived genes.

But, to bring up Fodor’s critique: if MCPH1 is coextensive with another gene and both are correlated with fitness, then how can there be direct selection on the gene in question? There is no way for selection to distinguish between the two linked genes. Take Mekel-Bobrov et al (2005: 1722), who write:

The recent selective history of ASPM in humans thus continues the trend of positive selection that has operated at this locus for millions of years in the hominid lineage. Although the age of haplogroup D and its geographic distribution across Eurasia roughly coincide with two important events in the cultural evolution of Eurasia—namely, the emergence and spread of domestication from the Middle East ~10,000 years ago and the rapid increase in population associated with the development of cities and written language 5000 to 6000 years ago around the Middle East—the significance of this correlation is not clear.

Surely both of these genetic variants had a hand in the dawn of these civilizations and the behaviors of our ancestors; they are correlated, right? Those who reported on these studies only drew that kind of conclusion from the papers themselves—these types of wild speculation are right there in the papers referenced above. Lahn and his colleagues, then, were engaging in very wild speculation—if these variants are even under positive selection, that is.

So it seems that this research and the conclusions drawn from it are ripe for a just-so story. We need to do a just-so story check. Now let’s consult Smith’s (2016: 277-278) seven just-so story triggers:

1) proposing a theory-driven rather than a problem-driven explanation, 2) presenting an explanation for a change without providing a contrast for that change, 3) overlooking the limitations of evidence for distinguishing between alternative explanations (underdetermination), 4) assuming that current utility is the same as historical role, 5) misusing reverse engineering, 6) repurposing just-so stories as hypotheses rather than explanations, and 7) attempting to explain unique events that lack comparative data.

For example, take (1): a theory-driven explanation leads to a just-so story. As Shapiro (2002: 603) notes, “The theory-driven scholar commits to a sufficient account of a phenomenon, developing a “just so” story that might seem convincing to partisans of her theoretical priors. Others will see no more reason to believe it than a host of other “just so” stories that might have been developed, vindicating different theoretical priors.” The claim that these two genes were “selected-for” is, for Evans et al, a theory-driven explanation, and it therefore falls prey to the just-so story criticism.

Rasmus Nielsen (2009) has a paper on the thirty years of adaptationism after Gould and Lewontin’s (1979) Spandrels paper. In it, he critiques purported examples of genes being “selected-for”: a lactase gene, and MCPH1 and ASPM. Nielsen (2009) writes of MCPH1 and ASPM:

Deleterious mutations in ASPM and microcephalin may lead to reduced brain size, presumably because these genes are cell‐cycle regulators and very fast cell division is required for normal development of the fetal brain. Mutations in many different genes might cause microcephaly, but changes in these genes may not have been the underlying molecular cause for the increased brain size occurring during the evolution of man.

In any case, Currat et al (2006: 176a) show that “the high haplotype frequency, high levels of homozygosity, and spatial patterns observed by Mekel-Bobrov et al. (1) and Evans et al. (2) can be generated by demographic models of human history involving a founder effect out-of-Africa and a subsequent demographic or spatial population expansion, a very plausible scenario (5). Thus, there is insufficient evidence for ongoing selection acting on ASPM and microcephalin within humans.” McGowen et al (2011) show that there is “no evidence to support an association between MCPH1 evolution and the evolution of brain size in highly encephalized mammalian species. Our finding of significant positive selection in MCPH1 may be linked to other functions of the gene.”
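The point from Currat et al, that demography alone can mimic a selection signal, can be illustrated with a crude toy simulation (my own sketch of the general idea, not their spatial-expansion model; every population size, generation count, and starting frequency below is an assumption chosen for clarity rather than realism). A neutral allele passed through a severe founder event quite often ends up at high frequency with no selection at all:

```python
# Toy neutral Wright-Fisher sketch: a founder effect followed by expansion can
# push a neutral allele to high frequency without any selection. A crude,
# single-population illustration; all parameters are assumptions for clarity.
import numpy as np

rng = np.random.default_rng(42)

def simulate(p0=0.2, n_bottleneck=10, bottleneck_gens=50,
             n_large=10_000, post_gens=500):
    p = p0
    # Severe founder phase: drift is very strong in a tiny population.
    for _ in range(bottleneck_gens):
        p = rng.binomial(2 * n_bottleneck, p) / (2 * n_bottleneck)
    # Expansion phase: drift is weak once the population is large again.
    for _ in range(post_gens):
        p = rng.binomial(2 * n_large, p) / (2 * n_large)
    return p

final = np.array([simulate() for _ in range(2_000)])
print(f"mean final frequency of the neutral allele: {final.mean():.2f}")
print(f"runs ending above 70% frequency, with no selection: {(final > 0.7).mean():.1%}")
```

With these assumed numbers the allele usually either drifts out or drifts up; in a sizable fraction of runs it ends far above its starting frequency, which is enough to show why a high-frequency haplotype is not, by itself, evidence of selection.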

Lastly, Richardson (2011: 429) writes that:

The force of acceptance of a theoretical framework for approaching the genetics of human intellectual differences may be assessed by the ease with which it is accepted despite the lack of original empirical studies – and ample contradictory evidence. In fact, there was no evidence of an association between the alleles and either IQ or brain size. Based on what was known about the actual role of the microcephaly gene loci in brain development in 2005, it was not appropriate to describe ASPM and microcephalin as genes controlling human brain size, or even as ‘brain genes’. The genes are not localized in expression or function to the brain, nor specifically to brain development, but are ubiquitous throughout the body. Their principal known function is in mitosis (cell division). The hypothesized reason that problems with the ASPM and microcephalin genes may lead to small brains is that early brain growth is contingent on rapid cell division of the neural stem cells; if this process is disrupted or asymmetric in some way, the brain will never grow to full size (Kouprina et al, 2004, p. 659; Ponting and Jackson, 2005, p. 246)

Now that we have a better picture of both of these alleles and what they are proposed to do, let’s turn to Lahn’s comments on his studies. Lahn, of course, pointed to “lactase” and “skin color” genes in defense of his assertion that genes like ASPM and MCPH1 are linked to “intelligence” and thusly were selected-for that purpose. However, as Nielsen (2009) shows, the fact that a gene has a functional effect and shows signs of selection does not license the claim that the gene in question was selected-for. Therefore, Lahn and colleagues engaged in fallacious reasoning: they did not show that such genes were “selected-for,” and even studies done by some prominent hereditarians did not show that such genes were associated with IQ.

Just as we now know that there is no evidence for recent positive or balancing selection on the FOXP2 gene (Atkinson et al, 2018), we can say the same for other evolutionary just-so stories that try to give an adaptive tinge to a trait. We cannot take selection plus function as evidence for adaptation. Such just-so stories, like the one described above along with others on this blog, can be told about any trait or gene to explain why it was selected and stabilized in the organism in question. But historical narratives may be unfalsifiable. As Sterelny and Griffiths write in their book Sex and Death:

Whenever a particular adaptive story is discredited, the adaptationist makes up a new story, or just promises to look for one. The possibility that the trait is not an adaptation is never considered.

The Human and Cetacean Neocortex and the Number of Neurons in it

2100 words

For the past 15 years, neuroscientist Suzana Herculano-Houzel has been revolutionizing the way we look at the human brain. In 2005, Herculano-Houzel and Lent (2005) pioneered a new way to ascertain the neuronal make-up of brains: dissolving brains into a “soup” and counting the neurons in it. Herculano-Houzel (2016: 33-34) describes it so:

Because we [Herculano-Houzel and Lent] were turning heterogeneous tissue into a homogeneous—or “isotropic”—suspension of nuclei, he proposed we call it the “isotropic fractionator.” The name stuck for lack of any better alternative. It has been pointed out to me by none other than Karl Herrup himself that it’s a terribly awkward name, and I agree. Whenever I can (which is not often because journal editors don’t appreciate informality), I prefer to call our method of counting cells what it is: “brain soup.”

So, using this method, we soon came to know that humans have 86 billion neurons. This flew in the face of the accepted wisdom—humans have 100 billion neurons in the brain. However, when Herculano-Houzel searched for the original reference for this claim, she came up empty-handed. The claim that we have 100 billion neurons “had become such an established “fact” that neuroscientists were allowed to start their review papers with generic phrases to that effect without citing references. It was the neuroscientist’s equivalent to stating that genes were made of DNA: it had become a universally known “fact” (Herculano-Houzel, 2016: 27). Herculano-Houzel (2016: 27) further states that “Digging through the literature for the original studies on how many cells brains are made of, the more I read, the more I realized that what I was looking for simply didn’t exist.”

So this “fact” that the human brain was made up of 100 billion neurons was so entrenched in the literature that it became something like common knowledge—for instance, that the sun is 93 million miles away from earth—that did not need a reference in the scientific literature. Herculano-Houzel asked the co-author of her 2005 paper (Roberto Lent), who authored a textbook called 100 Billion Neurons, if he knew where the number came from, but of course he didn’t know. Subsequent editions added a question mark, making the title of the text 100 Billion Neurons? (Herculano-Houzel, 2016: 28).

So using this method, we now know that the cellular composition of the human brain is what is expected for a brain our size (Herculano-Houzel, 2009). According to the encephalization quotient (EQ) first used by Harry Jerison, humans have an EQ of between 7 and 8—the largest for any mammal. And so, since humans are the most intelligent species on earth, this must account for Man’s exceptional abilities. But does it?

Herculano-Houzel et al (2007) showed that it wasn’t humans, as popularly believed, that had a larger brain than expected, but rather that great apes, more specifically orangutans and gorillas, had bodies too big for their brains. So the human brain is nothing but a linearly scaled-up primate brain—humans have the number of neurons expected for a primate brain of its size (Herculano-Houzel, 2012).

So Herculano-Houzel (2009) writes that “If cognitive abilities among non-human primates scale with absolute brain size (Deaner et al., 2007) and brain size scales linearly across primates with its number of neurons (Herculano-Houzel et al., 2007), it is tempting to infer that the cognitive abilities of a primate, and of other mammals for that matter, are directly related to the number of neurons in its brain.” Deaner et al (2007) showed that cognitive ability in non-human primates “is not strongly correlated with neuroanatomical measures that statistically control for a possible effect of body size, such as encephalization quotient or brain size residuals. Instead, absolute brain size measures were the best predictors of primate cognitive ability.” Meanwhile, Herculano-Houzel et al (2007) showed that brain size scales linearly across primates with the number of neurons—so as brain size increases, so does the neuronal count of that primate brain.

This can be seen in Fonseca-Azevedo and Herculano-Houzel’s (2012) study on the metabolic constraints that distinguish humans from gorillas. Humans cook food while great apes eat uncooked plant foods, and larger animals, as a rule, have larger brains. Gorillas, however, have larger bodies than we do but smaller brains than expected, while humans have a smaller body and a bigger brain. This comes down to diet: gorillas spend about 8-10 hours per day feeding, while if humans had the same number of neurons they have today but ate a raw, plant-based diet, they would need to feed for about 9 hours a day to sustain a brain with that many neurons. This constraint was overcome by Homo erectus and his ability to cook food. Since he could cook food, he could afford a large brain with more neurons. Fonseca-Azevedo and Herculano-Houzel (2012) write that:

Given the difficulties that the largest great apes have to feed for more than 8 h/d (as detailed later), it is unlikely, therefore, that Homo species beginning with H. erectus could have afforded their combinations of MBD and number of brain neurons on a raw diet.
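To see the shape of this constraint, here is a toy calculation in the spirit of Fonseca-Azevedo and Herculano-Houzel’s argument. The caloric-cost and foraging-yield parameters below are placeholders I made up so that the outputs roughly echo the hours quoted above; they are not the values from their published model.

```python
# Toy metabolic-constraint calculation (all parameter values are invented).

def raw_diet_feeding_hours(body_kcal_per_day, neurons_billions,
                           kcal_per_billion_neurons_per_day=6.0,  # assumed neuronal cost
                           kcal_per_hour_raw_foraging=200.0):     # assumed raw-food yield
    """Hours of feeding needed per day to cover body plus neuronal energy costs."""
    brain_kcal = neurons_billions * kcal_per_billion_neurons_per_day
    return (body_kcal_per_day + brain_kcal) / kcal_per_hour_raw_foraging

# A gorilla-like budget versus a human-sized body trying to run 86 billion
# neurons on raw plant foods (both scenarios hypothetical).
print(f"gorilla-like body: {raw_diet_feeding_hours(1600, 33):.1f} h/day")
print(f"raw-fed human:     {raw_diet_feeding_hours(1300, 86):.1f} h/day")
```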

That cooking food unlocks a greater amount of energy can be seen in Richard Wrangham’s studies. Since the process of cooking gelatinizes and denatures the protein in meat, it makes the meat easier to chew and therefore digest. This same denaturation of proteins occurs in cooked vegetables, too. So the claim that cooked food (cooking being a form of processing, along with using tools to mash food) has no more available calories (kcal) than raw food is false. It was the cooking of food (meat) that led to the expansion of the human brain—and, of course, allowed our linearly scaled-up primate brain to be able to afford so many neurons. Large brains with a high neuronal count are extraordinarily expensive, as shown by Fonseca-Azevedo and Herculano-Houzel (2012).

Erectus had smaller teeth, reduced bite force, reduced chewing muscles, and a relatively smaller gut compared to other species of Homo. Zink and Lieberman (2016) show that slicing and mashing meat and underground storage organs (USOs) would decrease the number of chews per year by about 2 million (13 percent), while the total masticatory force would be reduced by about 15 percent. Further, by slicing and pounding foodstuffs into 41 percent smaller particles, the number of chews would be reduced by 5 percent and the masticatory force by 12 percent. So it was not only cooking that led to the changes we see in erectus compared to other hominins; it was also the beginning of food processing (slicing and mashing are forms of processing). (See also Catching Fire: How Cooking Made Us Human by Wrangham, 2013, for the evidence that cooking catapulted our brains and neuronal capacity to the size they are now, along with Wrangham, 2017.)

So, if the neuronal count of a brain is directly related to the cognitive ability that brain is capable of, and Herculano-Houzel and Kaas (2011) showed that the modern range of neurons was already found in heidelbergensis and neanderthalensis, then those species had similar cognitive potential to us. This would then mean that “Compared to their societies, our outstanding accomplishments as individuals, as groups, and as a species, in this scenario, would be witnesses of the beneficial effects of cultural accumulation and transmission over the ages” (Herculano-Houzel and Kaas, 2011).

The diets of Neanderthals and humans, while broadly similar (differing mainly with the availability of foods), are nevertheless a large part of the reason why both have such large brains with a large number of neurons. It must be said, though, that there is no progress in hominin brain evolution (contra the evolutionary progressionists), as brain size is predicated on available food and nutritional quality (Montgomery et al, 2010).

But there is a problem for Herculano-Houzel’s thesis that cognitive ability scales up with the absolute number of neurons in the cerebral cortex. Mortensen et al (2014) used the optical fractionator (not to be confused with the isotropic fractionator) and came to the conclusion that “the long-finned pilot whale neocortex has approximately 37.2 × 10⁹ neurons, which is almost twice as many as humans, and 127 × 10⁹ glial cells. Thus, the absolute number of neurons in the human neocortex is not correlated with the superior cognitive abilities of humans (at least compared to cetaceans) as has previously been hypothesized.” This throws a wrench in Herculano-Houzel’s thesis—or does it?

There are a couple of glaring problems here, most importantly that it is not clear how many slices of the cortex Mortensen et al (2014) actually sampled. They also refer to the flawed stereological estimate of Eriksen and Pakkenberg (2007), which put the Minke whale at an estimated 13 billion cortical neurons, while Walloe et al (2010) estimated that the harbor porpoise had 15 billion cortical neurons with an even smaller cortex. These three studies are all from the same research team using the same stereological methods, so Herculano-Houzel’s (2016: 104-106) comments apply:

However, both these studies suffered from the same unfortunately common problem in stereology: undersampling, in one case drawing estimates from only 12 sections out of over 3,000 sections of the Minke whale’s cerebral cortex, sampling a total of only around 200 cells from the entire cortex, when it is recommended that around 700-1000 cells be counted per individual brain structure. With such extreme undersampling, it is easy to make invalid extrapolations—like trying to predict the outcome of a national election by consulting just a small handful of people.

It is thus very likely, given the undersampling of these studies and the neuronal scaling rules that apply to cetartiodactyls, that even the cerebral cortex of the largest whales is a fraction of the average 16 billion neurons that we find in the human cerebral cortex.

[…]

It seems fitting that great apes, elephants, and probably cetaceans have similar numbers of neurons in the cerebral cortex, in the range of 3 to 9 billion: fewer than humans have, but more than all other mammals do.
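Herculano-Houzel’s undersampling worry is easy to demonstrate with a small simulation. The sketch below is only an illustration of the statistical point (a dozen sections from a heterogeneous cortex give wildly unstable extrapolations); the density values are invented, and this is not a re-analysis of any of the cited whale studies.

```python
import random

# Toy demonstration of the undersampling problem in stereology.
random.seed(1)
n_sections = 3000  # imagine ~3,000 sections through a whale's cerebral cortex

# Invented, heterogeneous "true" neuron densities per section (arbitrary units).
true_densities = [random.lognormvariate(0, 0.8) for _ in range(n_sections)]
true_total = sum(true_densities)

def extrapolated_total(sections_sampled):
    """Estimate the whole-cortex total from a random subset of sections."""
    sample = random.sample(true_densities, sections_sampled)
    return (sum(sample) / sections_sampled) * n_sections

for k in (12, 700):  # 12 sections (as criticized) versus a recommended-scale sample
    estimates = [extrapolated_total(k) for _ in range(1000)]
    lo, hi = min(estimates) / true_total, max(estimates) / true_total
    print(f"{k:4d} sections: estimates span {lo:.2f}x to {hi:.2f}x the true total")
```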

Kazu et al (2014) state that “If the neuronal scaling rules for artiodactyls extend to all cetartiodactyls, we predict that the large cerebral cortex of cetaceans will still have fewer neurons than the human cerebral cortex.” Artiodactyls are cousins of cetaceans—the order is called Cetartiodactyla since it is thought that whales evolved from artiodactyls. So if they did evolve from artiodactyls, then the artiodactyl neuronal scaling rules would apply to them (just as humans evolved from other primates and the primate neuronal scaling rules apply to us). Hence the predicted “cerebral cortex of Phocoena phocoena, Tursiops truncatus, Grampus griseus, and Globicephala macrorhyncha, at 340, 815, 1,127, and 2,045 cm³, to be composed of 1.04, 1.75, 2.11, and 3.01 billion neurons, respectively” (Kazu et al, 2014). So the predicted number of cortical neurons in the pilot whale is around 3 billion—nowhere near the staggering 16 billion that humans have.
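For what it’s worth, the four predicted values quoted from Kazu et al (2014) are themselves consistent with a simple power-law relation between cortical volume and neuron number. The fit below is just my own back-of-the-envelope description of those four published predictions; the recovered exponent (roughly 0.6) is my summary of the numbers, not a scaling rule stated by the authors.

```python
import math

# Cortical volumes (cm^3) and predicted neuron counts (billions) quoted above
# from Kazu et al (2014).
volumes = [340, 815, 1127, 2045]
neurons = [1.04, 1.75, 2.11, 3.01]

# Least-squares fit of log(neurons) = a + b * log(volume), i.e. neurons ~ volume^b.
xs = [math.log(v) for v in volumes]
ys = [math.log(n) for n in neurons]
xbar, ybar = sum(xs) / len(xs), sum(ys) / len(ys)
b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum((x - xbar) ** 2 for x in xs)
a = ybar - b * xbar

print(f"implied exponent: {b:.2f}")  # about 0.59 for these four points
for v, quoted in zip(volumes, neurons):
    print(f"{v:5d} cm^3 -> {math.exp(a) * v ** b:.2f} billion (quoted: {quoted})")
```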

Humans have the most cerebral cortical neurons of any animal on the planet—and this, according to Herculano-Houzel and her colleagues, accounts for the human advantage. Studies purporting to show that certain species of cetaceans have similar—or more—cortical neurons than humans rest on methodological flaws. The neuronal scaling rules that Herculano-Houzel and colleagues have for cetaceans predict far, far fewer cortical neurons in these species. It is for this reason that studies reporting similar—or more—cortical neurons in other species that do not use the isotropic fractionator must be looked at with extreme caution.

However, if, when Herculano-Houzel and colleagues finally do use the isotropic fractionator on pilot whales, their prediction does not come to pass and the count falls in line with that of Mortensen et al (2014), this does not, in my opinion, cast doubt on her thesis. One must remember that cetaceans have completely different body plans from humans—most glaringly, we have hands with which to manipulate the world. That said, Fox, Muthukrishna, and Shultz (2017) show that whales and dolphins have human-like cultures and societies, use tools, and pass that information down to future generations—just like humans do.

In any case, I believe that the prediction borne out of Kazu et al (2014) will show substantially fewer cortical neurons than in humans. There is no good reason to accept the cortical neuronal estimates from the aforementioned studies, since they undersampled the cortex. Herculano-Houzel’s thesis still stands—what sets humans apart from other animals is the number of neurons packed into the cerebral cortex. The human brain is not that special.

The human advantage, I would say, lies in having the largest number of neurons in the cerebral cortex than any other animal species has managed—and it starts by having a cortex that is built in the image of other primate cortices: remarkable in its number of neurons, but not an exception to the rules that govern how it is put together. Because it is a primate brain—and not because it is special—the human brain manages to gather a number of neurons in a still comparatively small cerebral cortex that no other mammal with a viable brain, that is, smaller than 10 kilograms, would be able to muster. (Herculano-Houzel, 2016: 105-106)

(The Lack of) IQ Construct Validity and Neuroreductionism

2400 words

Construct validity for IQ is lacking. Some people may point to Haier’s brain imaging data as evidence of construct validity for IQ, even though there are numerous problems with brain imaging and neuroreductionist explanations for cognition are “probably not” possible (Uttal, 2014; also see Uttal, 2012). Construct validity refers to how well a test measures what it purports to measure—and this is non-existent for IQ (see Richardson and Norgate, 2014). If the tests did test what they purport to (intelligence), then they would be construct valid. I will give an example of a measure that was validated and shown to be reliable without circular reliance on the instrument itself; I will show that the measures people use in attempts to prove that IQ has construct validity fail; and finally I will provide an argument that the claim “IQ tests test intelligence” is false, since the tests are not construct valid.

Jung and Haier (2007) formulated the P-FIT hypothesis—the Parieto-Frontal Integration Theory. The theory purports to show how individual differences in test scores are linked to variations in brain structure and function. There are, however, a few problems with the theory (as Richardson and Norgate, 2007 point out in the same issue, pg 162-163). IQ scores and brain region volumes are experience-dependent (eg, Shonkoff et al, 2014; Betancourt et al, 2015; Lipina, 2016; Kim et al, 2019). Since they are experience-dependent, different experiences will produce different brains and different test scores. Richardson and Norgate (2007) argue that bigger brain areas are not the cause of higher IQ; rather, both reflect experience: exposure to middle-class knowledge and skills provides a better knowledge base for test-taking (Richardson, 2002), while access to better nutrition is concentrated in the middle and upper classes and, as Richardson and Norgate (2007) note, lower-quality, more energy-dense foods are more common in the lower classes. Thus, Haier et al did not “find” what they purported to, based as they were on simplistic correlations.

Now let me provide the argument about IQ test experience-dependency:

Premise 1: IQ tests are experience-dependent.
Premise 2: IQ tests are experience-dependent because some classes are more exposed to the knowledge and structure of the test by way of being born into a certain social class.
Premise 3: If IQ tests are experience-dependent because some social classes are more exposed to the knowledge and structure of the test along with whatever else comes with the membership of that social class then the tests test distance from the middle class and its knowledge structure.
Conclusion 1: IQ tests test distance from the middle class and its knowledge structure (P1, P2, P3).
Premise 4: If IQ tests test distance from the middle class and its knowledge structure, then how an individual scores on a test is a function of that individual’s cultural/social distance from the middle class.
Conclusion 2: How an individual scores on a test is a function of that individual’s cultural/social distance from the middle class, since the items on the test are more likely to be found in the middle class (i.e., they are experience-dependent), and so one who is of a lower class will necessarily score lower due to not being exposed to the items on the test (C1, P4).
Conclusion 3: IQ tests test distance from the middle class and its knowledge structure, thus, IQ scores are middle-class scores (C1, C2).

Still further regarding neuroimaging, we need to take a look at William Uttal’s work.

Uttal (2014) shows that “The problem is that both of these approaches are deeply flawed for methodological, conceptual, and empirical reasons. One reason is that simple models composed of a few neurons may simulate behavior but actually be based on completely different neuronal interactions. Therefore, the current best answer to the question asked in the title of this contribution [Are neuroreductionist explanations of cognition possible?] is–probably not.”

Uttal even has a book on meta-analyses and brain imaging—which, of course, has implications for Jung and Haier’s P-FIT theory. In his book Reliability in Cognitive Neuroscience: A Meta-meta Analysis, Uttal (2012: 2) writes:

There is a real possibility, therefore, that we are ascribing much too much meaning to what are possibly random, quasi-random, or irrelevant response patterns. That is, given the many factors that can influence a brain image, it may be that cognitive states and brain image activations are, in actuality, only weakly associated. Other cryptic, uncontrolled intervening factors may account for much, if not all, of the observed findings. Furthermore, differences in the localization patterns observed from one experiment to the next nowadays seems to reflect the inescapable fact that most of the brain is involved in virtually any cognitive process.

Uttal (2012: 86) also warns about individual variability throughout the day, writing:

However, based on these findings, McGonigle and his colleagues emphasized the lack of reliability even within this highly constrained single-subject experimental design. They warned that: “If researchers had access to only a single session from a single subject, erroneous conclusions are a possibility, in that responses to this single session may be claimed to be typical responses for this subject” (p. 708).

The point, of course, is that if individual subjects are different from day to day, what chance will we have of answering the “where” question by pooling the results of a number of subjects?

That such neural activations gleaned from neuroimaging studies vary from individual to individual, and even with time of day within an individual, means that these differences are not accounted for in group analyses (meta-analyses). “… the pooling process could lead to grossly distorted interpretations that deviate greatly from the actual biological function of an individual brain. If this conclusion is generally confirmed, the goal of using pooled data to produce some kind of mythical average response to predict the location of activation sites on an individual brain would become less and less achievable” (Uttal, 2012: 88).

Clearly, individual differences in brain imaging are not stable and they change day to day, hour to hour. Since this is the case, how does it make sense to pool (meta-analyze) such data and then point to a few brain images as important for X if there is such large variation in individuals day to day? Neuroimaging data is extremely variable, which I hope no one would deny. So when such studies are meta-analyzed, inter- and intrasubject variation is obscured.
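Uttal’s pooling worry can be illustrated with a toy simulation. Everything below is invented for the sake of the point: each simulated subject has its own stable activation peak plus day-to-day jitter, and the pooled “average” location ends up describing only a minority of the individual subjects.

```python
import random

# Toy illustration of how pooling across subjects manufactures an "average"
# activation site that fits few individuals (all values invented).
random.seed(7)
n_subjects, sessions_each = 20, 5

# Each subject's characteristic peak location (mm along one axis), spread
# widely across subjects; each scanning session adds within-subject jitter.
subject_peaks = [random.gauss(0, 15) for _ in range(n_subjects)]
observations = [peak + random.gauss(0, 4)
                for peak in subject_peaks
                for _ in range(sessions_each)]

pooled_average = sum(observations) / len(observations)
near_average = sum(abs(p - pooled_average) < 5 for p in subject_peaks)

print(f"pooled 'average' peak: {pooled_average:.1f} mm")
print(f"subjects whose own peak is within 5 mm of it: {near_average}/{n_subjects}")
```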

The idea of an average or typical “activation region” is probably nonsensical in light of the neurophysiological and neuroanatomical differences among subjects. Researchers must acknowledge that pooling data obscures what may be meaningful differences among people and their brain mechanisms. However, there is an even more negative outcome. That is, by reifying some kinds of “average,” we may be abetting and preserving some false ideas concerning the localization of modular cognitive function (Uttal, 2012: 91).

So when we are dealing with the raw neuroimaging data (i.e., the unprocessed locations of activation peaks), the graphical plots provided of the peaks do not lead to convergence onto a small number of brain areas for that cognitive process.

… inconsistencies abound at all levels of data pooling when one uses brain imaging techniques to search for macroscopic regional correlates of cognitive processes. Individual subjects exhibit a high degree of day-to-day variability. Intersubject comparisons between subjects produce an even greater degree of variability.

[…]

The overall pattern of inconsistency and unreliability that is evident in the literature to be reviewed here again suggests that intrinsic variability observed at the subject and experimental level propagates upward into the meta-analysis level and is not relieved by subsequent pooling of additional data or averaging. It does not encourage us to believe that the individual meta-analyses will provide a better answer to the localization of cognitive processes question than does any individual study. Indeed, it now seems plausible that carrying out a meta-analysis actually increases variability of the empirical findings (Uttal, 2012: 132).

So since reliability is low at all levels of neuroimaging analysis, it is very likely that the relations between particular brain regions and specific cognitive processes have not been established and may not even exist. The numerous reports purporting to find such relations report random and quasi-random fluctuations in extremely complex systems.

Construct validity (CV) is “the degree to which a test measures what it claims, or purports, to be measuring.” A “construct” here is a theoretical psychological construct, so CV in this instance refers to whether IQ tests test intelligence. We accept external, observable measures as indices of unseen functions when the two can be mechanistically related (e.g., breath alcohol and level of consumption, or the height of a mercury column and blood pressure). These measures are valid because they rely on well-established theoretical models of the functions in question. There is no such theory for individual intelligence differences (Richardson, 2012), so IQ tests can’t be construct valid.

The accuracy of thermometers was established without circular reliance on the instrument itself. Thermometers measure temperature; IQ tests (supposedly) measure intelligence. The crucial difference between the two is that the reliability of thermometers as measures of temperature was established independently of the thermometer itself (see Chang, 2007), whereas nothing comparable has been done for IQ.

In regard to IQ tests, it is proposed that the tests are valid since they predict school performance and adult occupation levels, income, and wealth. This, though, is circular reasoning and doesn’t establish the claim that IQ tests are valid measures (Richardson, 2017). IQ tests rely on other tests in the attempt to prove they are valid. Whereas thermometers were validated without circular reliance on the instrument itself, IQ tests are said to be valid because they predict other test scores and life success. But IQ tests and these other tests are different versions of the same kind of test, so they cannot be validated against one another; new tests are “validated” by their agreement with previous IQ tests, for example, the Stanford-Binet. This is because “Most other tests have followed the Stanford–Binet in this regard (and, indeed are usually ‘validated’ by their level of agreement with it; Anastasi, 1990)” (Richardson, 2002: 301). Agreement with other tests that are not themselves construct valid does not, of course, establish the validity of IQ tests.

IQ tests are constructed by adding and removing items according to how they discriminate between presumed better and worse test-takers, meaning that the bell curve is not natural but built into the test (see Simon, 1997). Humans make the bell curve; it is not a natural phenomenon with respect to IQ tests, since the first tests produced odd-looking distributions. (Also see Richardson, 2017a, Chapter 2 for more arguments against the bell curve distribution.)

Finally, Richardson and Norgate (2014) write:

In scientific method, generally, we accept external, observable, differences as a valid measure of an unseen function when we can mechanistically relate differences in one to differences in the other (e.g., height of a column of mercury and blood pressure; white cell count and internal infection; erythrocyte sedimentation rate (ESR) and internal levels of inflammation; breath alcohol and level of consumption). Such measures are valid because they rely on detailed, and widely accepted, theoretical models of the functions in question. There is no such theory for cognitive ability nor, therefore, of the true nature of individual differences in cognitive functions.

That “There is no such theory for cognitive ability” is even admitted by lead IQ-ist Ian Deary in his 2001 book Intelligence: A Very Short Introduction, in which he writes, “There is no such thing as a theory of human intelligence differences—not in the way that grown-up sciences like physics or chemistry have theories” (Richardson, 2012). This is yet another barrier to IQ’s claimed validity: there is no theory of human intelligence differences for the tests to be grounded in.

Conclusion

In sum, neuroimaging meta-analyses (like Jung and Haier, 2007; see also Richardson and Norgate, 2007 in the same issue, pg 162-163) do not show what they purport to show, for numerous reasons. (1) There are, of course, consequences of malnutrition for brain development, and lower classes are more likely to not have their nutritional needs met (Ruxton and Kirk, 1996); (2) lower classes are more likely to be exposed to substance abuse (Karriker-Jaffe, 2013), which may well impact brain regions; (3) “Stress arising from the poor sense of control over circumstances, including financial and workplace insecurity, affects children and leaves “an indelible impression on brain structure and function” (Teicher 2002, p. 68; cf. Austin et al. 2005)” (Richardson and Norgate, 2007: 163); and (4) working-class attitudes are related to poor self-efficacy beliefs, which also affect test performance (Richardson, 2002). So, Jung and Haier’s (2007) theory “merely redescribes the class structure and social history of society and its unfortunate consequences” (Richardson and Norgate, 2007: 163).

In regard to neuroimaging, pooling together (meta-analyzing) numerous studies is fraught with conceptual and methodological problems, since a high degree of individual variability exists. Attempting to find “average” brain differences across individuals therefore fails, and the meta-analytic technique used (eg, by Jung and Haier, 2007) does not find what it is after: average brain areas where, supposedly, cognition occurs across individuals. Meta-analyzing such disparate studies does not reveal an “average” site where cognitive processes occur that could then cause differences in IQ test-taking. Reductionist neuroimaging studies do not, as is popularly believed, pinpoint where cognitive processes take place in the brain; such localizations have not been established and may not even exist.

Neuroreductionism does not work; attempting to reduce cognitive processes to different regions of the brain, even using the meta-analytic techniques discussed here, fails. There “probably cannot” be neuroreductionist explanations for cognition (Uttal, 2014), and so using these studies to attempt to pinpoint where in the brain cognition supposedly occurs for such ancillary things as IQ test-taking fails. (Neuro)reductionism fails.

Since there is no theory of individual differences in IQ, IQ tests cannot be construct valid. Even if there were such a theory, IQ tests would still not be construct valid, since it would need to be established that there is a mechanistic relation between IQ test scores and the construct in question. Attempts at validating IQ tests rely on correlations with other tests and older IQ tests—but that is exactly what is under contention, so correlating with older tests does not give IQ tests the requisite validity to make the claim “IQ tests test intelligence” true. IQ does not even measure ability for complex cognition; real-life tasks are more complex than the most complex items on any IQ test (Richardson and Norgate, 2014b).

Now, having said all that, the argument can be formulated very simply:

Premise 1: If the claim “IQ tests test intelligence” is true, then IQ tests must be construct valid.
Premise 2: IQ tests are not construct valid.
Conclusion: Therefore, the claim “IQ tests test intelligence” is false. (modus tollens, P1, P2)
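Since the argument is a bare modus tollens, its propositional skeleton can be checked mechanically. The Lean snippet below is only a sketch: the two premises are simply assumed as hypotheses, so it verifies the form of the inference, not the truth of the premises.

```lean
-- Propositional skeleton of the argument above (modus tollens).
-- Premises are assumed as hypotheses; only the inference is checked.
example (TestsIntelligence ConstructValid : Prop)
    (P1 : TestsIntelligence → ConstructValid)  -- if IQ tests test intelligence, they are construct valid
    (P2 : ¬ConstructValid) :                   -- IQ tests are not construct valid
    ¬TestsIntelligence :=                      -- so "IQ tests test intelligence" is false
  fun h => P2 (P1 h)
```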

Vegans/Vegetarians vs. Carnivores and the Neanderthal Diet

2050 words

The vegan/vegetarian-versus-carnivore debate rests on a false dichotomy; the middle ground, of course, is eating both plants and animals. I personally eat more meat than plants (as I eat a high-protein diet), but plants are good for a palate switch-up and for getting other nutrients into my diet. In any case, on Twitter I see an ongoing debate between “carnivores” and “vegans/vegetarians” over which diet is healthier. I think the “carnivore” diet is healthier, though there is no evolutionary basis for the claims its proponents espouse (because we did evolve from plant-eaters). In this article, I will discuss the best argument for ethical vegetarianism and the evolutionary basis for meat-eating.

Veganism/Vegetarianism

The ethical vegetarian argument is simple: Humans and non-human animals deserve the same moral consideration. Since they deserve the same moral consideration and we would not house humans for food, it then follows that we should not house non-human animals for food. The best argument for ethical vegetarianism comes from Peter Singer from Unsanctifying Animal Life. Singer’s argument also can be extended to using non-human animals for entertainment, research, and companionship.

Any being that can suffer has an interest in avoiding suffering. The equal consideration of interests principle (Guidi, 2008) holds that like interests merit like moral consideration, whether the interest-bearer is a human or a non-human animal.

Here is Singer’s argument, from Just the Arguments: 100 of the Most Important Arguments in Western Philosophy (pg. 277-278):

P1. If a being can suffer, then that being’s interests merit moral consideration.

P2. If a being cannot suffer, then that being’s interests do not merit moral consideration.

C1. If a being’s interests merit moral consideration, then that being can suffer (transposition, P2).

C2. A being’s interests merit moral consideration if and only if that being can suffer (material equivalence, P1, C1).

P3. The same interests merit the same moral consideration, regardless of what kind of being is the interest-bearer (equal consideration of interests principle).

P4. If one causes a being to suffer without adequate justification, then one violates that being’s interests.

P5. If one violates a being’s interests, then one does what is morally wrong.

C3. If one causes a being to suffer without adequate justification, then one does what is morally wrong (hypothetical syllogism, P4, P5).

P6. If P3, then if one kills, confines, or causes nonhuman animals to experience pain in order to use them as food, then one causes them to suffer without adequate justification.

P7. If one eats meat, then one participates in killing, confining, and causing nonhuman animals to experience pain in order to use them as food.

C4. If one eats meat, then one causes nonhuman animals to suffer without adequate justification (hypothetical syllogism, P6, P7).

C5. If one eats meat, then one does what is morally wrong (hypothetical syllogism, C3, C4).
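Since the argument is laid out as a chain of conditionals, its main line (P3-P7 through to C5) can also be checked mechanically. The Lean snippet below is only a sketch of that propositional skeleton: the premises are assumed as hypotheses, so what is verified is the validity of the chain, not the truth or moral content of the premises.

```lean
-- Propositional skeleton of the main chain of Singer's argument (P3-P7 to C5).
-- Premises are assumed; only the validity of the chained inferences is checked.
example (Equal UsesAsFood CausesUnjustifiedSuffering ViolatesInterests Wrong EatsMeat : Prop)
    (P3 : Equal)                                                   -- equal consideration of interests
    (P4 : CausesUnjustifiedSuffering → ViolatesInterests)
    (P5 : ViolatesInterests → Wrong)
    (P6 : Equal → (UsesAsFood → CausesUnjustifiedSuffering))
    (P7 : EatsMeat → UsesAsFood) :
    EatsMeat → Wrong :=                                            -- C5
  fun h => P5 (P4 (P6 P3 (P7 h)))
```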

This argument is pretty strong; indeed, I think it is sound. However, I personally will never eat a vegetarian/vegan diet because I love eating meat too much (steak, turkey, chicken). I will do what is morally wrong because I love the taste of meat.

In an evolutionary context, the animals we evolved from were plant-eaters. The amount of meat in our diets grew as we diverged from our non-human ancestors; we added meat through the ages as our tool-kit became more complex. Since the animals we evolved from were plant-eaters and we added meat as time went on, then, clearly, we were not “one or the other” in regard to diet—our diet constantly changed as we migrated into new biomes.

So although Singer’s argument is sound, I will never become a vegan/vegetarian. Fatty meat tastes too good.

Nathan Cofnas (2018) argues that “we cannot say decisively that vegetarianism or veganism is safe for children.” This is because, even if the vitamins and minerals not obtained through the diet are supplemented, the bioavailability of the consumed nutrients is lower (Pressman, Clement, and Hayes, 2017). Furthermore, pregnant women should not eat a vegan/vegetarian diet, since vegetarian diets can lead to B12 and iron deficiency along with low birth weight, and vegan diets can lead to DHA, zinc, and iron deficiencies along with a higher risk of pre-eclampsia and inadequate fetal brain development (Danielewicz et al, 2017). (See also Tan, Zhao, and Wang, 2019.)

Carnivory

Meat was important to our evolution; this cannot be denied. However, prominent “carnivores” take this fact and push it further than it goes. Yes, there is evidence that meat-eating allowed our brains to grow bigger, trading off with body size. Fonseca-Azevedo and Herculano-Houzel (2012) showed that metabolic limitations resulting from hours of feeding and the low caloric yield of raw foods explain the body and brain size of great apes. Plant foods are low in kcal; great apes have large bodies and so need to eat a lot of plants, spending about 10 to 11 hours per day feeding. Our brains, on the other hand, started increasing in size with the appearance of erectus.

If erectus had eaten nothing but raw plant foods, he would have had to feed for more than 8 hours per day to sustain a brain with a neuron count approaching ours (about 86 billion; Herculano-Houzel, 2009). Given the extreme difficulty of attaining the number of kcal needed to power a brain with that many neurons, it is very unlikely that erectus could have survived on plant foods alone while feeding 8+ hours per day. Indeed, given the archaeological evidence we have about erectus, it is patently ridiculous to claim that he ate for that long. Great apes, by contrast, mostly graze all day. Since the caloric availability of raw foods is lower than that of cooked foods (even cooked plant foods have a higher bioavailability of nutrients), great apes need to do little else but eat all day to afford their large bodies.

It makes no sense for erectus—and our immediate Homo sapiens ancestors—to eat nothing but raw plant foods for what amounts to more than a work day in the modern world. If this were the case, where would they have found the time to do everything else that we have learned about them in the archaeological record?

There is genetic evidence for human adaptation to a cooked diet (Carmody et al, 2016). Cooking food denatures the protein in it, making it easier to digest; denaturation is the alteration of the shape of the proteins in whatever is being cooked. Take the same kind of food: it will have different nutrient bioavailability depending on whether or not it is cooked. This difference, Herculano-Houzel (2016) and Wrangham (2009) argue, is what drove the evolution of our genus and our big brains.

Just because meat-eating and cooking were what drove the evolution of our big brains—or even only allowed our brains to grow bigger past a certain point—does not mean that we are “carnivores”, though it does throw a wrench into the idea that we, as in our species Homo sapiens, were strictly plant-eaters. Our ancestors ate a wide range of foods depending on the biome they migrated to.

Our brain takes up around 20 percent of our TDEE while representing only 2 percent of our overall body mass, the reason being our 86 billion neurons (Herculano-Houzel, 2011). So, clearly, as our brains grew bigger and acquired more neurons, there had to have been a way for our ancestors to acquire the energy needed to power their brains and neurons, and, as Fonseca-Azevedo and Herculano-Houzel (2012) show, it was not possible on a raw plant diet alone. Eating and cooking meat was the impetus for brain growth and for maintaining the size of our brains.
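As a quick back-of-the-envelope check on those proportions: the total daily energy expenditure used below is an assumed round number of my own, since the paragraph only gives percentages.

```python
# Back-of-the-envelope check on the proportions above (assumed TDEE, not a cited value).
tdee_kcal = 2000          # assumed typical daily energy expenditure
brain_share = 0.20        # ~20 percent of TDEE, as stated above
neurons_billions = 86     # Herculano-Houzel's neuron count

brain_kcal = tdee_kcal * brain_share
print(f"brain budget: ~{brain_kcal:.0f} kcal/day")                            # ~400 kcal/day
print(f"per billion neurons: ~{brain_kcal / neurons_billions:.1f} kcal/day")  # ~4.7 kcal/day
```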

Take this thought experiment. An asteroid smashes into the earth and a huge dust cloud blocks out the sun, sharply lowering food production. Suppose this halt in the production of high-quality foods persisted for hundreds of years. What would happen to our bodies and brains? They would, of course, shrink, depending on how much and what we could eat. Food scarcity and availability do influence the brain and body size of primates (Montgomery et al, 2010), and humans would be no different. So in this scenario, with high-quality foods gone or extremely hard to come by, we would shrink in both brain and body size. This further buttresses the hypothesis that a shift to higher-quality energy is how and why our large brains evolved.

Neanderthal Diet

A new analysis of the tooth of a Neanderthal apparently establishes that they were mostly carnivorous, living largely on horse and reindeer meat (Jaouen et al, 2019). Neanderthals did indeed have a high-meat diet in northerly latitudes during the cold season. Neanderthals in Southern Europe—especially during the warmer seasons—however, ate a mixture of plants and animals (Fiorenza et al, 2008). Further, there was a considerable plant component to the diet of Neanderthals (Perez-Perez et al, 2003), with plant-rich Neanderthal diets being seen mostly in the Near East (Henry, Brooks, and Piperno, 2011), while the diets of both Neanderthals and Homo sapiens varied with climatic fluctuations (El Zaatari et al, 2016). From what we know about modern human biochemistry and digestion, we can further make the claim that Neanderthals ate a good amount of plants.

Ulijaszek, Mann, and Elton (2013: 96) write:

‘Absence of evidence’ does not equate to ‘evidence of absence,’ and the meat-eating signals from numerous types of data probably swamp the plant-eating signals for Neanderthals. Their dietary variability across space and time is consistent with the pattern observed in the hominin clade as a whole, and illustrates hominin dietary adaptability. It also mirrors trends observed in modern foragers, whereby those populations that live in less productive environments have a greater (albeit generally not exclusive) dependence on meat. Differences in Neanderthal and modern human diet may have resulted from exploitation of different environments: within Europe and Asia, it has been argued that modern humans exploited marginal areas, such as steppe environments, whereas Neanderthals may have preferred more mosaic, Mediterranean-type habitats.

Quite clearly, one cannot point to any one study to support an (ideologically driven) belief that our genus or Neanderthals were “strictly carnivore”, as there was great variability in the Neanderthal diet, as I have shown.

Conclusion

Singer’s argument for ethical vegetarianism is sound; I personally can find no fault in it (if anyone can, leave a comment and we can discuss it, I will take Singer’s side). Although I can find no fault in the argument, I would never become a vegan/vegetarian as I love meat too much. There is evidence that vegan/vegetarian diets are not good for growing children and pregnant mothers, and although the same can be said for any type of diet that leads to nutrient deficiencies, the risk is much higher in these types of plant-based diets.

The evidence that we were meat-eaters in our evolutionary history is there, but we evolved as eclectic feeders. There was great variability in the Neanderthal diet depending on where they lived, and so the claim that they were “full-on carnivore” is false. The literature attests to great dietary flexibility and variability in both Homo sapiens and Neanderthals, so the claim that they ate meat and only meat is false.

My conclusion in my look into our diet over evolutionary time was:

It is clear that both claims from vegans/vegetarians and carnivores are false: there is no one “human diet” that we “should” be eating. Individual variation in different physiologic processes implies that there is no one “human diet”, no matter what type of food is being pushed as “what we should be” eating. Humans are eclectic feeders; we will eat anything since “Humans show remarkable dietary flexibility and adaptability“. Furthermore, we also “have a relatively unspecialized gut, with a colon that is shorter relative to overall size than in other apes; this is often attributed to the greater reliance on faunivory in humans (Chivers and Langer 1994)” (Ulijaszek, Mann, and Elton, 2013: 58). Our dietary eclectism can be traced back to our Australopithecine ancestors. The claim that we are either “vegetarian/vegan or carnivore” throughout our evolution is false.

There is no evidence for either of these extreme claims; humans are eclectic feeders. We are omnivorous, neither strictly vegan/vegetarian nor carnivore. Although we evolved from plant-eating primates and then added meat to our diets over time, there is no evidence for the claim that we ever ate only meat. Our dietary flexibility attests to that.

I Am Not A Phrenologist

1500 words

People seem to be confused about the definition of the term ‘phrenology’. Many think that merely measuring skulls can be called ‘phrenology’. This is a very confused view to hold.

Phrenology is the study of the shape and size of the skull, in which conclusions about one’s character and psychology are drawn from bumps on the skull (Simpson, 2005) and from the relative sizes of different areas of the brain. Franz Gall—the father of phrenology—believed that by measuring one’s skull and the bumps on it, he could make accurate predictions about a person’s character and mental make-up. Gall also proposed a theory of mind and brain (Eling, Finger, and Whitaker, 2017). The usefulness of phrenology aside, Gall, a neuroanatomist and physiologist, contributed significantly to our understanding of the brain.

Gall’s views on the brain can be seen here (read this letter where he espouses his views here):

1. The brain is the organ of the mind.
2. The mind is composed of multiple, distinct, innate faculties.
3. Because they are distinct, each faculty must have a separate seat or “organ” in the brain.
4. The size of an organ, other things being equal, is a measure of its power.
5. The shape of the brain is determined by the development of the various organs.
6. As the skull takes its shape from the brain, the surface of the skull can be read as an accurate index of psychological aptitudes and tendencies.

Gall’s work, though, was important to our understanding of the brain, and he was a pioneer in studying its inner workings. Phrenologists ‘phrenologized’ by running the tips of their fingers or their hands along the top of one’s head (Gall liked using his palms). Here is an account of one individual reminiscing on the practice (around 1870):

The fellow proceeded to measure my head from the forehead to the back, and from one ear to the other, and then he pressed his hands upon the protuberances carefully and called them by name. He felt my pulse, looked carefully at my complexion and defined it, and then retired to make his calculations in order to reveal my destiny. I awaited his return with some anxiety, for I really attached some importance to what his statement would be; for I had been told that he had great success in that sort of work and that his conclusion would be valuable to me. Directly he returned with a piece of paper in his hand, and his statement was short. It was to the effect that my head was of the tenth magnitude with phyloprogenitiveness morbidly developed; that the essential faculties of mentality were singularly deficient; that my contour antagonized all the established rules of phrenology, and that upon the whole I was better adapted to the quietude of rural life rather than to the habit of letters. Then the boys clapped their hands and laughed lustily, but there was nothing of laughter in it for me. In fact, I took seriously what Rutherford had said and thought the fellow meant it all. He showed me a phrenological bust, with the faculties all located and labeled, representing a perfect human head, and mine did not look like that one. I had never dreamed that the size or shape of the head had anything to do with a boy’s endowments or his ability to accomplish results, to say nothing of his quality and texture of brain matter. I went to my shack rather dejected. I took a small hand- mirror and looked carefully at my head, ran my hands over it and realized that it did not resemble, in any sense, the bust that I had observed. The more I thought of the affair the worse I felt. If my head was defective there was no remedy, and what could I do? The next day I quietly went to the library and carefully looked at the heads of pictures of Webster, Clay, Calhoun, Napoleon, Alexander Stephens and various other great men. Their pictures were all there in histories.

This—what I would call skull/brain-size fetishizing—is still evident today, with people thinking that raw size matters for cognitive ability (Rushton and Ankney, 2007; Rushton and Ankney, 2009), though I have compiled numerous data showing that people can have smaller brains and still have IQs in the normal range, implying that large brains are not needed for high IQs (Skoyles, 1999). It is also one of Deacon’s (1990) fallacies, the “bigger-is-smarter” fallacy. Just because you observe skull sizes, brain size differences, structural brain differences, and so on does not mean you’re a phrenologist: you’re making straightforward and verifiable claims, unlike some of the outrageous claims made by phrenologists.

What did they get right? Well, phrenologists stated that the most-used parts of the brain would become bigger, which was, in a sense, vindicated by modern research—most famously in London cab drivers (Maguire, Frackowiak, and Frith, 1997; Woollett and Maguire, 2011).

It seems that phrenologists got a few things right, but their theories were largely wrong. Still, those who bash the ‘science’ of phrenology should realize that it was one of the first brain ‘sciences’, and so I believe it should at least get some respect, since it furthered our understanding of the brain and some phrenologists were partly right.

People see the avatar I use (three skulls: one Mongoloid, one Negroid, and one Caucasoid) and automatically leap to the conclusion that I’m a phrenologist based on that picture alone. Even stating that races/individuals/ethnies have different skull and brain sizes leads these people to call what I’m saying phrenology. No, it isn’t; words have definitions. Observing size differences between the brains of, say, individuals or ethnies does not mean you’re making any value judgment about the character or mental aptitude of an individual based on the size of their skull or brain. Likewise, noting structural differences between brains, for example saying “the PFC is larger in this brain but the OFC is larger in that one,” makes no such claim either; and if that is what you take from the mere statement that individuals and groups have different-sized skulls, brains, and brain regions, then I don’t know what to tell you. Stating that one brain weighs more than another, say 1200 g versus 1400 g, is not phrenology. Stating that one brain is 1450 cc while another is 1000 cc is not phrenology. For it to be phrenology, I would have to outright state that differences in the size of certain areas of the brain, or of brains as a whole, cause differences in character or mental faculties. I am not saying that.

A team of neuroscientists recently (January 2018) tested, in the “most exhaustive way possible“, the claim from phrenological ‘research’ “that measuring the contour of the head provides a reliable method for inferring mental capacities” and concluded that there was “no evidence for this claim” (Jones, Alfaro-Almagro, and Jbabdi, 2018). That settles it: the ‘science’ is dead.

It’s so simple: you can notice physical differences in brain size between two corpses, observing that one’s PFC was bigger than its OFC while, in the other, the OFC was bigger than the PFC. That’s it. By the logic of my accusers, neuroanatomists would be considered phrenologists today, since they note size differences between individual parts of brains. Merely noting these differences makes no judgment about the potential of individuals with different overall brain sizes, regional sizes, or skull bumps.

It is ridiculous to accuse someone of being a ‘phrenologist’ in 2018. And while the study of skull and brain sizes in the late 18th and 19th centuries did pave the way for modern neuroscience, and while the phrenologists did get a few things right, they were largely wrong. No, you cannot read someone’s character from the bumps on their skull. I understand the logic and, back then, it would have made a lot of sense. But to claim that someone is a phrenologist, or is pushing phrenology, just because they note physical differences that are empirically verifiable is wrong.

In sum, studying physical differences is interesting and tells us a lot about our past and perhaps even our future. Saying that someone is a phrenologist because they observe and accept physical differences in the size of the brain, skull, and neuroanatomic regions is like saying that physical anthropologists and forensic scientists are phrenologists because they measure people’s skulls to ascertain things about them. Chastising someone who points out that one person has a different brain size than another by calling them outdated names in an attempt to discredit them doesn’t make sense. It seems that some people cannot accept physical differences that are measurable again and again, because doing so may go against some long-held belief.