Category Archives: Brain size
Just-so Stories: The Brain Size Increase
The increase in brain size in our species over the last 3 million years has been the subject of numerous articles and books. Over that time period, brain size increased from our ancestor Lucy all the way to today. Many stories have been proposed to explain how and why exactly it happened. The explanation is the same ol’ one: those with bigger heads, and therefore bigger brains, had more children and passed on their “brain genes” to the next generation until all that was left were the bigger-brained individuals of the species. But there is a problem here, just like with all just-so stories: how do we know that selection ‘acted’ on brain size and thereby “selected-for” the ‘smarter’ individuals?
Christopher Badcock, an evolutionary psychologist, wrote an introduction to EP, published in 2001, in which he takes a very balanced view of the field—noting its pitfalls and where, in his opinion, EP is useful. (Most readers may know my views on this already; see here.) In any case, Badcock cites R.D. Martin (1996: 155), who writes:
… when the effects of confounding variables such as body size and socio-economic status are excluded, no correlation is found between IQ and brain size among modern humans.
Badcock (2001: 48) also quotes George Williams—author of Adaptation and Natural Selection (1966; the precursor to Dawkins’ The Selfish Gene) where he writes:
Despite the arguments that have been advanced, I cannot readily accept the idea that advanced mental capabilities have ever been directly favored by selection. There is no reason for believing that a genius has ever been likely to leave more children than a man of somewhat below average intelligence. It has been suggested that a tribe that produces an occasional genius for its leadership is more likely to prevail in competition with tribes that lack this intellectual resource. This may well be true in the sense that a group with highly intelligent leaders is likely to gain political domination over less gifted groups, but political domination need not result in genetic domination, as indicated by the failure of many a ruling class to maintain its members.
In Adaptation and Natural Selection, Williams was much more cautious than adaptationists today, stating that adaptationist reasoning should be used only in very special cases. Too bad that today’s adaptationists did not get the memo. But what gives? Doesn’t it make sense that the “more intelligent” human 2 mya would have had greater fitness than the “less intelligent” individual (whatever those words mean in this context)? Would a prehistoric Bill Gates have had the most children due to his “high IQ”, as PumpkinPerson has claimed in the past? I doubt it.
In any case, the increase in brain size—and therefore, supposedly, the increase in intellectual ability in humans—has been the last stand for evolutionary progressionists. “Look at the increase in brain size”, the progressionist says, “over the past 3 mya. Doesn’t it look like there is a trend toward bigger, higher-quality brains in humans as our skills increased?” While it may look like that on its face, the real story is much more complicated.
Deacon (1990a) notes many fallacies that those who invoke the brain size increase across evolutionary history make, including: the evolutionary progression fallacy; the bigger-is-smarter fallacy; and the numerology fallacy. The evolutionary progression fallacy is simple enough. Deacon (1990a: 194) writes:
In theories of brain evolution, the concept of evolutionary progress finds implicit expression in the analysis of brain-size differences and presumed grade shifts in allometric brain/body size trends, in theories of comparative intelligence, in claims about the relative proportions of presumed advanced vs. primitive brain areas, in estimates of neural complexity, including the multiplication and differentiation of brain areas, and in the assessment of other species with respect to humans, as the presumed most advanced exemplar. Most of these accounts in some way or other are tied to problems of interpreting the correlates of brain size. The task that follows is to dispose of fallacious progressivist notions hidden in these analyses without ignoring the questions otherwise begged by the many enigmatic correlations of brain size in vertebrate evolution.
Of course, when it comes to the bigger-is-smarter fallacy, it’s quite obviously not true that bigger is always better when it comes to brain size, as elephants and whales have larger brains than humans (also see Skoyles, 1999). But what they do not have more of than humans is cortical neurons (see Herculano-Houzel, 2009). Deacon (1990a: 201) describes the numerology fallacy:
Numerology fallacies are apparent correlations that turn out to be artifacts of numerical oversimplification. Numerology fallacies in science, like their mystical counterparts, are likely to be committed when meaning is ascribed to some statistic merely by virtue of its numeric similarity to some other statistic, without supportive evidence from the empirical system that is being described.
Deacon (1990a: 232) concludes that:
The idea, that there have been progressive trends of brain evolution, that include changes in the relative proportions of different structures (i.e., enlarging more “advanced” areas with respect to more primitive areas) and increased differentiation, interconnection, and overall complexity of neural circuits, is largely an artifact of misunderstanding the complex internal correlates of brain size. … Numerous statistical problems, arising from the difficulty of analyzing a system with so many interdependent scaling relationships, have served to reinforce these misconceptions, and have fostered the tacit assumption that intelligence, brain complexity, and brain size bear a simple relationship to one another.
Deacon (1990b: 255) notes that brains were not directly selected-for; bigger bodies were (and bigger bodies mean bigger brains). This does not fall into the selection-for trap, since on this view the object of selection is the organism, not one of its traits:
I will argue that it is this remarkable parallelism, and not some progressive selection for increasing intelligence, that is responsible for many pseudoprogressive trends in mammalian brain evolution. Larger whole animals were being selected—not just larger brains—but along with the correlated brain enlargement in each lineage a multitude of parallel secondary internal adaptations followed.
Deacon (1990b: 697-698) notes that the large brain-to-body size ratio in humans compared to other primates is an illusion—“a surface manifestation of a complex allometric reorganization within the brain”—and that the brain itself is unlikely to be the object of selection. The correlated reorganization of the human brain, to Deacon, is what makes humans unique, not our “oversized” brains for our bodies. And Deacon (1990c) states that “To a great extent the apparent “progress” of mammalian brain evolution vanishes when the effects of brain size and functional specialization are taken into account.” (See also Deacon, 1997: chapter 5.)
So is there really progress in brain evolution, which would, in effect, lend credence to the idea that evolution is progressive? No. So-called size increases throughout our history are an artifact that disappears once brain size and functional specialization are taken into account (functional specialization is the claim that different areas of the brain are specialized to carry out different functions; see Mahon and Cantlon, 2014). Our brains only seem to have “progressed”; when we get down to the functional details, the trend is an artifact.
Skoyles and Sagan (2002: 240) note that erectus, for example, could have survived with much smaller brains and that the brain of erectus did not arise for the need for survival:
So how well equipped was Homo erectus? To throw some figures at you (calculations shown in the notes), easily well enough. Of Nariokotome boy’s 673 cc of cortex, 164 cc would have been prefrontal cortex, roughly the same as half-brained people. Nariokotome boy did not need the mental competence required by contemporary hunter-gatherers. … Compared to that of our distant ancestors, Upper Paleolithic technology is high tech. And the organizational skills used in hunts greatly improved 400,000 years ago to 20,000 years ago. These skills, in terms of our species, are recent, occurring by some estimates in less than the last 1 percent of our 2.5 million year existence as people. Before then, hunting skills would have required less brain power, as they were less mentally demanding. If you do not make detailed forward plans, then you do not need as much mental planning abilities as those who do. This suggests that the brains of Homo erectus did not arise for reasons of survival. For what they did, they could have gotten away with much smaller, Daniel Lyon-sized brains.
In any case—irrespective of the problems that Deacon shows for arguments for increasing brain size—how would we be able to use the theory of natural selection to show what was selected-for, brain size or another correlated trait? The progressionist may say that it doesn’t matter which is selected-for, the brain size is still increasing even if the correlated trait—the free-rider—is being selected-for.
But, too bad for the progressionist: If the correlated non-fitness-enhancing trait is being selected-for and not brain size directly, then the progressionist cannot logically state that brain size—and along with it intelligence (as the implication always is)—is being directly selected-for. Deacon throws a wrench into such theories of evolutionary progress in regard to human brain size. Though, looking at erectus, it’s not clear that he really “needed” such a big brain for survival—it seems like he could have gotten away with a much smaller brain. And there is no reason, as George Williams notes, to attempt to argue that “high intelligence” was selected-for in our evolutionary history.
And so, Gould’s Full House argument still stands—there is no progress in evolution; bacteria occupy life’s mode; humans are insignificant next to the bacteria on the planet, “big brains” or not.
Just-so Stories: MCPH1 and ASPM
“Microcephalin, a gene regulating brain size, continues to evolve adaptively in humans” (Evans et al, 2005) and “Adaptive evolution of ASPM, a major determinant of cerebral cortical size in humans” (Evans et al, 2004) are two papers from the same research team which purport to show that both MCPH1 and ASPM are “adaptive” and were therefore “selected-for” (see Fodor, 2008; Fodor and Piattelli-Palmarini, 2010 for discussion). That there was “Darwinian selection” which “operated on” the ASPM gene (Evans et al, 2004), and that we identified both the selection and the gene’s functional effect, is taken as evidence that it was “selected-for.” But the combination of a functional effect with signs of (supposedly) positive selection does not license the claim that the gene was “selected-for.”
One of the investigators who participated in these studies was one Bruce Lahn, who stated in an interview that MCPH1 “is clearly favored by natural selection.” Evans et al (2005) show specifically that the variant supposedly under selection (MCPH1) showed lower frequencies in Africans and the highest in Europeans.
But, unfortunately for IQ-ists, neither of these two alleles is associated with IQ. Mekel-Bobrov et al (2007: 601) write that their “overall findings suggest that intelligence, as measured by these IQ tests, was not detectably associated with the D-allele of either ASPM or Microcephalin.” Timpson et al (2007: 1036A) found “no meaningful associations with brain size and various cognitive measures, which indicates that contrary to previous speculations, ASPM and MCPH1 have not been selected for brain-related effects” in 9,000 genotyped children. Rushton, Vernon, and Bons (2007) write that “No evidence was found of a relation between the two candidate genes ASPM and MCPH1 and individual differences in head circumference, GMA or social intelligence.” Bates et al’s (2008) analysis likewise shows no relationship between IQ and MCPH1-derived genes.
But, to bring up Fodor’s critique, if MCPH1 is coextensive with another gene, and both enhance fitness, then how can there be direct selection on the gene in question? There is no way for selection to distinguish between the two linked genes. Take Mekel-Bobrov et al (2005: 1722) who write:
The recent selective history of ASPM in humans thus continues the trend of positive selection that has operated at this locus for millions of years in the hominid lineage. Although the age of haplogroup D and its geographic distribution across Eurasia roughly coincide with two important events in the cultural evolution of Eurasia—namely, the emergence and spread of domestication from the Middle East ~10,000 years ago and the rapid increase in population associated with the development of cities and written language 5000 to 6000 years ago around the Middle East—the significance of this correlation is not clear.
Surely both of these genetic variants had a hand in the dawn of these civilizations and the behaviors of our ancestors; they are correlated, right? Note that this sort of speculation is not a gloss I am adding—it appears in the papers referenced above. Lahn and his colleagues are engaging in very wild speculation indeed—even granting that these variants are under positive selection.
So it seems that this research, and the conclusions drawn from it, are ripe for a just-so story—we need to do a just-so story check. Let’s consult Smith’s (2016: 277-278) seven just-so story triggers:
1) proposing a theory-driven rather than a problem-driven explanation, 2) presenting an explanation for a change without providing a contrast for that change, 3) overlooking the limitations of evidence for distinguishing between alternative explanations (underdetermination), 4) assuming that current utility is the same as historical role, 5) misusing reverse engineering, 6) repurposing just-so stories as hypotheses rather than explanations, and 7) attempting to explain unique events that lack comparative data.
For example, take (1): a theory-driven explanation leads to a just-so story, as Shapiro (2002: 603) notes, “The theory-driven scholar commits to a sufficient account of a phenomenon, developing a “just so” story that might seem convincing to partisans of her theoretical priors. Others will see no more reason to believe it than a host of other “just so” stories that might have been developed, vindicating different theoretical priors.” That these two genes were “selected-for” means that, for Evans et al, it is a theory-driven explanation and therefore falls prey to the just-so story criticism.
Rasmus Nielsen (2009) has a paper on the thirty years of adaptationism after Gould and Lewontin’s (1979) Spandrels paper. In it, he critiques so-called examples of genes being supposedly selected-for: a lactase gene, and MCPH1 and ASPM. Nielsen (2009) writes of MCPH1 and ASPM:
Deleterious mutations in ASPM and microcephalin may lead to reduced brain size, presumably because these genes are cell‐cycle regulators and very fast cell division is required for normal development of the fetal brain. Mutations in many different genes might cause microcephaly, but changes in these genes may not have been the underlying molecular cause for the increased brain size occurring during the evolution of man.
In any case, Currat et al (2006: 176a) show that “the high haplotype frequency, high levels of homozygosity, and spatial patterns observed by Mekel-Bobrov et al. (1) and Evans et al. (2) can be generated by demographic models of human history involving a founder effect out-of-Africa and a subsequent demographic or spatial population expansion, a very plausible scenario (5). Thus, there is insufficient evidence for ongoing selection acting on ASPM and microcephalin within humans.” McGowen et al (2011) show that there is “no evidence to support an association between MCPH1 evolution and the evolution of brain size in highly encephalized mammalian species. Our finding of significant positive selection in MCPH1 may be linked to other functions of the gene.”
Lastly, Richardson (2011: 429) writes that:
The force of acceptance of a theoretical framework for approaching the genetics of human intellectual differences may be assessed by the ease with which it is accepted despite the lack of original empirical studies – and ample contradictory evidence. In fact, there was no evidence of an association between the alleles and either IQ or brain size. Based on what was known about the actual role of the microcephaly gene loci in brain development in 2005, it was not appropriate to describe ASPM and microcephalin as genes controlling human brain size, or even as ‘brain genes’. The genes are not localized in expression or function to the brain, nor specifically to brain development, but are ubiquitous throughout the body. Their principal known function is in mitosis (cell division). The hypothesized reason that problems with the ASPM and microcephalin genes may lead to small brains is that early brain growth is contingent on rapid cell division of the neural stem cells; if this process is disrupted or asymmetric in some way, the brain will never grow to full size (Kouprina et al, 2004, p. 659; Ponting and Jackson, 2005, p. 246)
Now that we have a better picture of both of these alleles and what they are proposed to do, let’s now turn to Lahn’s comments on his studies. Lahn, of course, commented on “lactase” and “skin color” genes in defense of his assertion that such genes like ASPM and MCPH1 are linked to “intelligence” and thusly were selected-for just that purpose. However, as Nielsen (2009) shows, that a gene has a functional effect and shows signs of selection does not license the claim that the gene in question was selected-for. Therefore, Lahn and colleagues engaged in fallacious reasoning; they did not show that such genes were “selected-for”, while even studies done by some prominent hereditarians did not show that such genes were associated with IQ.
Like what we now know about the FOXP2 gene and how there is no evidence for recent positive or balancing selection (Atkinson et al, 2018), we can now say the same for such other evolutionary just-so stories that try to give an adaptive tinge to a trait. We cannot confuse selection and function as evidence for adaptation. Such just-so stories, like the one described above along with others on this blog, can be told about any trait or gene and explain why it was selected and stabilized in the organism in question. But historical narratives may be unfalsifiable. As Sterelny and Griffiths write in their book Sex and Death:
Whenever a particular adaptive story is discredited, the adaptationist makes up a new story, or just promises to look for one. The possibility that the trait is not an adaptation is never considered.
The Human and Cetacean Neocortex and the Number of Neurons in it
For the past 15 years, neuroscientist Suzana Herculano-Houzel has been revolutionizing the way we look at the human brain. Herculano-Houzel and Lent (2005) pioneered a new way to ascertain the neuronal make-up of brains: dissolving them into a soup and counting the neurons in it. Herculano-Houzel (2016: 33-34) describes it so:
Because we [Herculano-Houzel and Lent] were turning heterogeneous tissue into a homogeneous—or “isotropic”—suspension of nuclei, he proposed we call it the “isotropic fractionator.” The name stuck for lack of any better alternative. It has been pointed out to me by none other than Karl Herrup himself that it’s a terribly awkward name, and I agree. Whenever I can (which is not often because journal editors don’t appreciate informality), I prefer to call our method of counting cells what it is: “brain soup.”
So, using this method, we soon came to know that humans have 86 billion neurons. This flew in the face of the accepted wisdom that humans have 100 billion neurons in the brain. However, when Herculano-Houzel searched for the original reference for this claim, she came up empty-handed. The claim that we have 100 billion neurons “had become such an established “fact” that neuroscientists were allowed to start their review papers with generic phrases to that effect without citing references. It was the neuroscientist’s equivalent to stating that genes were made of DNA: it had become a universally known “fact”” (Herculano-Houzel, 2016: 27). Herculano-Houzel (2016: 27) further states that “Digging through the literature for the original studies on how many cells brains are made of, the more I read, the more I realized that what I was looking for simply didn’t exist.”
So this “fact” that the human brain was made up of 100 billion neurons was so entrenched in the literature that it became something like common knowledge—like the fact that the sun is 93 million miles from earth—that did not need a reference in the scientific literature. Herculano-Houzel asked the co-author of her 2005 paper (Roberto Lent), who authored a textbook called 100 Billion Neurons, if he knew where the number came from, but of course he didn’t. Subsequent editions added a question mark, making the title of the text 100 Billion Neurons? (Herculano-Houzel, 2016: 28).
So using this method, we now know that the cellular composition of the human brain is what is expected for a brain our size (Herculano-Houzel, 2009). According to the encephalization quotient (EQ) first used by Harry Jerison, humans have an EQ of between 7 and 8—the largest of any mammal. And so, since humans are the most intelligent species on earth, this must account for Man’s exceptional abilities. But does it?
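The EQ arithmetic is simple enough to sketch. Below is a minimal Python illustration using Jerison’s classic expected-brain-mass formula for mammals, E = 0.12 × P^(2/3) (masses in grams). The species masses are round illustrative figures, not measurements from any particular dataset, so the resulting EQ values are only ballpark:

```python
def expected_brain_mass_g(body_mass_g: float) -> float:
    """Jerison's expected brain mass for a mammal: E = 0.12 * P^(2/3), in grams."""
    return 0.12 * body_mass_g ** (2 / 3)

def eq(brain_mass_g: float, body_mass_g: float) -> float:
    """Encephalization quotient: observed brain mass / expected brain mass."""
    return brain_mass_g / expected_brain_mass_g(body_mass_g)

# Illustrative round-number masses (grams) — not data from a specific study.
species = {
    "human":      (1_350, 70_000),
    "chimpanzee": (  400, 45_000),
    "elephant":   (4_800, 5_000_000),
}

for name, (brain, body) in species.items():
    print(f"{name:>10}: EQ = {eq(brain, body):.1f}")
```

On these round figures the human EQ comes out in the 6-7 range; Jerison’s 7-8 depends on the reference masses used, but the ordering—human far above other mammals—is the point.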
Herculano-Houzel et al (2007) showed that it isn’t humans, as popularly believed, that have a larger brain than expected; rather, great apes—specifically orangutans and gorillas—have bodies too big for their brains. The human brain is nothing but a linearly scaled-up primate brain: humans have the number of neurons expected for a primate brain of its size (Herculano-Houzel, 2012).
So Herculano-Houzel (2009) writes that “If cognitive abilities among non-human primates scale with absolute brain size (Deaner et al., 2007) and brain size scales linearly across primates with its number of neurons (Herculano-Houzel et al., 2007), it is tempting to infer that the cognitive abilities of a primate, and of other mammals for that matter, are directly related to the number of neurons in its brain.” Deaner et al (2007) showed that cognitive ability in non-human primates “is not strongly correlated with neuroanatomical measures that statistically control for a possible effect of body size, such as encephalization quotient or brain size residuals. Instead, absolute brain size measures were the best predictors of primate cognitive ability.” And Herculano-Houzel et al (2007) showed that brain size scales linearly across primates with the number of neurons—as brain size increases, so does the neuronal count of that primate brain.
This can be seen in Fonseca-Azevedo and Herculano-Houzel’s (2012) study on the metabolic constraints on humans and gorillas. Humans cook food, while great apes eat uncooked plant foods. Larger animals, as a rule, have larger brains. Gorillas, however, have larger bodies than we do but smaller brains than expected, while humans have a smaller body and a bigger brain. This comes down to diet: gorillas spend about 8-10 hours per day feeding, while humans with our number of neurons, eating a raw, plant-based diet, would need to feed for about 9 hours a day to sustain a brain with that many neurons. This constraint was overcome by Homo erectus and his ability to cook food: since he could cook, he could afford a large brain with more neurons. Fonseca-Azevedo and Herculano-Houzel (2012) write that:
Given the difficulties that the largest great apes have to feed for more than 8 h/d (as detailed later), it is unlikely, therefore, that Homo species beginning with H. erectus could have afforded their combinations of MBD and number of brain neurons on a raw diet.
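The logic behind that claim can be sketched in a few lines. The parameter values below are illustrative stand-ins, not the fitted values from Fonseca-Azevedo and Herculano-Houzel’s model: a Kleiber-style maintenance cost of ~70 kcal/day × M^0.75, a neuronal cost of ~6 kcal/day per billion neurons (a figure Herculano-Houzel cites elsewhere), and an assumed raw-diet intake rate of 250 kcal per hour of feeding:

```python
def feeding_hours(body_kg: float, billions_neurons: float,
                  kcal_per_hour: float = 250.0) -> float:
    """Hours/day of feeding needed to cover body + neuronal energy costs."""
    body_cost = 70.0 * body_kg ** 0.75    # Kleiber-style maintenance, kcal/day
    neuron_cost = 6.0 * billions_neurons  # ~6 kcal/day per billion neurons
    return (body_cost + neuron_cost) / kcal_per_hour

# A 70 kg human with 86 billion neurons on a raw diet (~250 kcal/h):
raw = feeding_hours(70, 86)
# Cooking raises the net energy extracted per feeding hour; assume it doubles:
cooked = feeding_hours(70, 86, kcal_per_hour=500.0)
print(f"raw: {raw:.1f} h/day, cooked: {cooked:.1f} h/day")
```

On these made-up rates, the raw-diet human needs roughly 9 hours of feeding per day—in the neighborhood of Fonseca-Azevedo and Herculano-Houzel’s figure—while cooking cuts that dramatically, which is the whole point about erectus.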
That cooking food unlocks a greater amount of energy can be seen in Richard Wrangham’s studies. Since cooking gelatinizes the protein in meat, it makes meat easier to chew and therefore to digest; the same denaturation of proteins occurs in vegetables, too. So the claim that cooked food (cooking being a form of processing, along with using tools to mash food) yields no more usable calories (kcal) than raw food is false. It was the cooking of food (meat) that led to the expansion of the human brain—and, of course, allowed our linearly scaled-up primate brain to afford so many neurons. Large brains with high neuronal counts are extraordinarily expensive, as shown by Fonseca-Azevedo and Herculano-Houzel (2012).
Erectus had smaller teeth, reduced bite force, reduced chewing muscles, and a relatively smaller gut compared to other species of Homo. Zink and Lieberman (2016) show that slicing and mashing meat and underground storage organs (USOs) would decrease the number of chews per year by 2 million (13 percent), while total masticatory force would be reduced by about 15 percent. Further, by slicing and pounding foodstuffs into particles 41 percent smaller, the number of chews would be reduced by 5 percent and the masticatory force by 12 percent. So it was not only cooking that led to the changes we see in erectus; it was also the beginning of food processing (slicing and mashing are forms of processing). (See also Catching Fire: How Cooking Made Us Human by Wrangham, 2013, for the evidence that cooking catapulted our brains and neuronal capacity to their current size, along with Wrangham, 2017.)
Since the neuronal count of a brain is directly related to the cognitive ability that brain is capable of, and since Herculano-Houzel and Kaas (2011) showed that the modern human range of neuron numbers was already found in heidelbergensis and neanderthalensis, those species would have had cognitive potential similar to ours. This would then mean that “Compared to their societies, our outstanding accomplishments as individuals, as groups, and as a species, in this scenario, would be witnesses of the beneficial effects of cultural accumulation and transmission over the ages” (Herculano-Houzel and Kaas, 2011).
The diets of Neanderthals and humans—similar, though differing with the availability of foods—are a large part of why both have such large brains with so many neurons. Though, it must be said, there is no progress in hominin brain evolution (contra the evolutionary progressionists), as brain size is predicated on available food and nutritional quality (Montgomery et al, 2010).
But there is a problem for Herculano-Houzel’s thesis that cognitive ability scales with the absolute number of neurons in the cerebral cortex. Mortensen et al (2014) used the optical fractionator (not to be confused with the isotropic fractionator) and came to the conclusion that “the long-finned pilot whale neocortex has approximately 37.2 × 10^9 neurons, which is almost twice as many as humans, and 127 × 10^9 glial cells. Thus, the absolute number of neurons in the human neocortex is not correlated with the superior cognitive abilities of humans (at least compared to cetaceans) as has previously been hypothesized.” This throws a wrench into Herculano-Houzel’s thesis—or does it?
There are a couple of glaring problems here—most importantly, I cannot see how many slices of cortex Mortensen et al (2014) studied. They refer to the flawed stereological estimate of Eriksen and Pakkenberg (2007), which showed that the Minke whale has an estimated 13 billion cortical neurons, while Walloe et al (2010) showed that the harbor porpoise has 15 billion cortical neurons in an even smaller cortex. These three studies are all from the same research team, using the same stereological methods, so Herculano-Houzel’s (2016: 104-106) comments apply:
However, both these studies suffered from the same unfortunately common problem in stereology: undersampling, in one case drawing estimates from only 12 sections out of over 3,000 sections of the Minke whale’s cerebral cortex, sampling a total of only around 200 cells from the entire cortex, when it is recommended that around 700-1,000 cells be counted per individual brain structure. With such extreme undersampling, it is easy to make invalid extrapolations—like trying to predict the outcome of a national election by consulting just a small handful of people.
It is thus very likely, given the undersampling of these studies and the neuronal scaling rules that apply to cetartiodactyls, that even the cerebral cortex of the largest whales is a fraction of the average 16 billion neurons that we find in the human cerebral cortex.
It seems fitting that great apes, elephants, and probably cetaceans have similar numbers of neurons in the cerebral cortex, in the range of 3 to 9 billion: fewer than humans have, but more than all other mammals do.
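The undersampling problem is easy to demonstrate with a toy Monte Carlo simulation. In the sketch below, a hypothetical cortex of 3,000 sections with section-to-section variation in neuron counts is “estimated” by extrapolating from either 12 or 300 randomly chosen sections. Only the 12-out-of-3,000 sampling ratio comes from Herculano-Houzel’s criticism—the distribution and its parameters are invented for illustration:

```python
import random

random.seed(42)

N_SECTIONS = 3_000
# Hypothetical cortex: per-section neuron counts varying widely across regions.
cortex = [random.lognormvariate(13.0, 0.8) for _ in range(N_SECTIONS)]
true_total = sum(cortex)

def extrapolate(sample_size: int) -> float:
    """Estimate the total by scaling up the mean of a random sample of sections."""
    sample = random.sample(cortex, sample_size)
    return sum(sample) / sample_size * N_SECTIONS

def spread(sample_size: int, trials: int = 1_000) -> float:
    """Half-width of the middle 90% of estimates, as a fraction of the truth."""
    ests = sorted(extrapolate(sample_size) for _ in range(trials))
    lo, hi = ests[int(0.05 * trials)], ests[int(0.95 * trials)]
    return (hi - lo) / (2 * true_total)

print(f"12 sections : ±{spread(12):.0%}")
print(f"300 sections: ±{spread(300):.0%}")
```

With only 12 sections, the middle 90% of extrapolated totals spans several times the range of the 300-section estimates—the “national election from a handful of people” problem in miniature.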
Kazu et al (2014) state that “If the neuronal scaling rules for artiodactyls extend to all cetartiodactyls, we predict that the large cerebral cortex of cetaceans will still have fewer neurons than the human cerebral cortex.” Artiodactyls are cousins of cetaceans—the order is called Cetartiodactyla since whales are thought to have evolved from artiodactyls. So if they did evolve from artiodactyls, the neuronal scaling rules would apply to them (just as humans evolved from other primates and the primate scaling rules apply to us). Kazu et al (2014) thus predict “the cerebral cortex of Phocoena phocoena, Tursiops truncatus, Grampus griseus, and Globicephala macrorhyncha, at 340, 815, 1,127, and 2,045 cm3, to be composed of 1.04, 1.75, 2.11, and 3.01 billion neurons, respectively.” So the predicted number of cortical neurons in the pilot whale is around 3 billion—nowhere near the staggering 16 billion that humans have.
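Those four volume/neuron pairs fall on a single power law, which can be recovered from the quoted numbers with an ordinary log-log least-squares fit—a sketch of the scaling-rule logic, not Kazu et al’s actual fitting procedure:

```python
import math

# Cortical volumes (cm^3) and predicted cortical neurons (billions)
# for the four cetacean species quoted from Kazu et al (2014).
volumes = [340.0, 815.0, 1127.0, 2045.0]
neurons = [1.04, 1.75, 2.11, 3.01]

# Fit N = a * V^b by least squares in log-log space.
xs = [math.log(v) for v in volumes]
ys = [math.log(n) for n in neurons]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = math.exp(my - b * mx)

print(f"N ≈ {a:.3f} * V^{b:.2f}  (billions of neurons, V in cm^3)")
```

The fitted exponent comes out well below 1 (around 0.6): the scaling is strongly sublinear, which is exactly why even an enormous whale cortex is predicted to hold only a few billion neurons, far short of the human 16 billion.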
Humans have the most cortical neurons of any animal on the planet—and this, according to Herculano-Houzel and her colleagues, accounts for the human advantage. Studies that purport to show that certain species of cetaceans have similar—or more—cortical neurons than humans rest on methodological flaws. The neuronal scaling rules that Herculano-Houzel and colleagues derive for cetaceans predict far, far fewer cortical neurons in these species. It is for this reason that studies reporting similar—or more—cortical neurons in other species without using the isotropic fractionator must be viewed with extreme caution.
However, if, when Herculano-Houzel and colleagues finally use the isotropic fractionator on pilot whales, their prediction does not come to pass and the count instead falls in line with Mortensen et al (2014), this does not, in my opinion, cast doubt on her thesis. One must remember that cetaceans have completely different body plans from humans—most glaringly, we have hands with which to manipulate the world. And Fox, Muthukrishna, and Shultz (2017) show that whales and dolphins have human-like cultures and societies, use tools, and pass that information down to future generations—just as humans do.
In any case, I believe the prediction borne out of Kazu et al (2014) will show substantially fewer cortical neurons than humans have. There is no logical reason to accept the cortical neuron estimates from the aforementioned studies, since they undersampled the cortex. Herculano-Houzel’s thesis still stands—what sets humans apart from other animals is the number of neurons tightly packed into the cerebral cortex. The human brain is not that special:
The human advantage, I would say, lies in having the largest number of neurons in the cerebral cortex than any other animal species has managed—and it starts by having a cortex that is built in the image of other primate cortices: remarkable in its number of neurons, but not an exception to the rules that govern how it is put together. Because it is a primate brain—and not because it is special—the human brain manages to gather a number of neurons in a still comparatively small cerebral cortex that no other mammal with a viable brain, that is, smaller than 10 kilograms, would be able to muster. (Herculano-Houzel, 2016: 105-106)
(The Lack of) IQ Construct Validity and Neuroreductionism
Construct validity for IQ is nonexistent. Some people may point to Haier’s brain imaging data as evidence of construct validity for IQ, even though there are numerous problems with brain imaging and neuroreductionist explanations for cognition are “probably not” possible (Uttal, 2014; also see Uttal, 2012). Construct validity refers to how well a test measures what it purports to measure—and this is nonexistent for IQ (see Richardson and Norgate, 2014). If the tests did test what they purport to test (intelligence), then they would be construct valid. I will give an example of a measure that was validated and shown to be reliable without circular reliance on the instrument itself; I will show that the measures people use in attempts to prove that IQ has construct validity fail; and finally I will provide an argument that the claim “IQ tests test intelligence” is false, since the tests are not construct valid.
Jung and Haier (2007) formulated the P-FIT hypothesis—the Parieto-Frontal Integration Theory. The theory purports to show how individual differences in test scores are linked to variations in brain structure and function. There are, however, a few problems with the theory (as Richardson and Norgate, 2007 point out in the same issue; pg 162-163). IQ scores and brain region volumes are experience-dependent (e.g., Shonkoff et al, 2014; Betancourt et al, 2015; Lipina, 2016; Kim et al, 2019). Since they are experience-dependent, different experiences will produce different brains and different test scores. Richardson and Norgate (2007) argue that bigger brain areas are not the cause of IQ; rather, both reflect experience-dependency: exposure to middle-class knowledge and skills provides a better knowledge base for test-taking (Richardson, 2002), and access to better nutrition is concentrated in the middle and upper classes, while lower-quality, more energy-dense foods are more likely to be found in the lower classes (Richardson and Norgate, 2007). Thus Haier et al did not “find” what they purported to find; they relied on simplistic correlations.
Now let me provide the argument about IQ test experience-dependency:
Premise 1: IQ tests are experience-dependent.
Premise 2: IQ tests are experience-dependent because some classes are more exposed to the knowledge and structure of the test by way of being born into a certain social class.
Premise 3: If IQ tests are experience-dependent because some social classes are more exposed to the knowledge and structure of the test, along with whatever else comes with membership of that social class, then the tests test distance from the middle class and its knowledge structure.
Conclusion 1: IQ tests test distance from the middle class and its knowledge structure (P1, P2, P3).
Premise 4: If IQ tests test distance from the middle class and its knowledge structure, then how an individual scores on a test is a function of that individual’s cultural/social distance from the middle class.
Conclusion 2: How an individual scores on a test is a function of that individual’s cultural/social distance from the middle class, since the items on the test are more likely to be found in the middle class (i.e., they are experience-dependent), and so one who is of a lower class will necessarily score lower due to not being exposed to the items on the test (C1, P4).
Conclusion 3: IQ tests test distance from the middle class and its knowledge structure, thus, IQ scores are middle-class scores (C1, C2).
Still further regarding neuroimaging, we need to take a look at William Uttal’s work.
Uttal (2014) shows that “The problem is that both of these approaches are deeply flawed for methodological, conceptual, and empirical reasons. One reason is that simple models composed of a few neurons may simulate behavior but actually be based on completely different neuronal interactions. Therefore, the current best answer to the question asked in the title of this contribution [Are neuroreductionist explanations of cognition possible?] is–probably not.”
Uttal even has a book on meta-analyses and brain imaging—which, of course, has implications for Jung and Haier’s P-FIT theory. In his book Reliability in Cognitive Neuroscience: A Meta-Meta-Analysis, Uttal (2012: 2) writes:
There is a real possibility, therefore, that we are ascribing much too much meaning to what are possibly random, quasi-random, or irrelevant response patterns. That is, given the many factors that can influence a brain image, it may be that cognitive states and brain image activations are, in actuality, only weakly associated. Other cryptic, uncontrolled intervening factors may account for much, if not all, of the observed findings. Furthermore, differences in the localization patterns observed from one experiment to the next nowadays seems to reflect the inescapable fact that most of the brain is involved in virtually any cognitive process.
Uttal (2012: 86) also warns about individual variability throughout the day, writing:
However, based on these findings, McGonigle and his colleagues emphasized the lack of reliability even within this highly constrained single-subject experimental design. They warned that: “If researchers had access to only a single session from a single subject, erroneous conclusions are a possibility, in that responses to this single session may be claimed to be typical responses for this subject” (p. 708).
The point, of course, is that if individual subjects are different from day to day, what chance will we have of answering the “where” question by pooling the results of a number of subjects?
That neural activations gleaned from neuroimaging studies vary from individual to individual, and even with time of day within an individual, means that these differences are not accounted for in group analyses (meta-analyses). “… the pooling process could lead to grossly distorted interpretations that deviate greatly from the actual biological function of an individual brain. If this conclusion is generally confirmed, the goal of using pooled data to produce some kind of mythical average response to predict the location of activation sites on an individual brain would become less and less achievable” (Uttal, 2012: 88).
Clearly, individual differences in brain imaging are not stable and they change day to day, hour to hour. Since this is the case, how does it make sense to pool (meta-analyze) such data and then point to a few brain images as important for X if there is such large variation in individuals day to day? Neuroimaging data is extremely variable, which I hope no one would deny. So when such studies are meta-analyzed, inter- and intrasubject variation is obscured.
The idea of an average or typical “activation region” is probably nonsensical in light of the neurophysiological and neuroanatomical differences among subjects. Researchers must acknowledge that pooling data obscures what may be meaningful differences among people and their brain mechanisms. However, there is an even more negative outcome. That is, by reifying some kinds of “average,” we may be abetting and preserving some false ideas concerning the localization of modular cognitive function (Uttal, 2012: 91).
So when we are dealing with the raw neuroimaging data (i.e., the unprocessed locations of activation peaks), the graphical plots provided of the peaks do not lead to convergence onto a small number of brain areas for that cognitive process.
… inconsistencies abound at all levels of data pooling when one uses brain imaging techniques to search for macroscopic regional correlates of cognitive processes. Individual subjects exhibit a high degree of day-to-day variability. Intersubject comparisons between subjects produce an even greater degree of variability.
The overall pattern of inconsistency and unreliability that is evident in the literature to be reviewed here again suggests that intrinsic variability observed at the subject and experimental level propagates upward into the meta-analysis level and is not relieved by subsequent pooling of additional data or averaging. It does not encourage us to believe that the individual meta-analyses will provide a better answer to the localization of cognitive processes question than does any individual study. Indeed, it now seems plausible that carrying out a meta-analysis actually increases variability of the empirical findings (Uttal, 2012: 132).
So since reliability is low at all levels of neuroimaging analysis, it is very likely that the relations between particular brain regions and specific cognitive processes have not been established and may not even exist. The numerous reports purporting to find such relations instead reflect random and quasi-random fluctuations in extremely complex systems.
Construct validity (CV) is “the degree to which a test measures what it claims, or purports, to be measuring.” A “construct” here is a theoretical psychological entity—in this case, intelligence. So CV in this instance refers to whether IQ tests test intelligence. We accept external measures of unseen functions when differences in one are mechanistically related to differences in the other: e.g., blood alcohol and level of consumption, or the height of the mercury column and blood pressure. These measures are valid because they rely on well-established theoretical models. There is no such theory of individual intelligence differences (Richardson, 2012), so IQ tests cannot be construct valid.
Thermometers measure temperature; IQ tests supposedly measure intelligence. The crucial difference is that the accuracy of thermometers was established without circular reliance on the instrument itself (see Chang, 2007).
In regard to IQ tests, it is proposed that the tests are valid since they predict school performance, adult occupation levels, income, and wealth. This, though, is circular reasoning and does not establish that IQ tests are valid measures (Richardson, 2017). IQ tests rely on other tests in the attempt to prove their validity: new tests are ‘validated’ by their agreement with older versions of the same kind of test, most notably the Stanford-Binet. As Richardson (2002: 301) notes, “Most other tests have followed the Stanford–Binet in this regard (and, indeed are usually ‘validated’ by their level of agreement with it; Anastasi, 1990)”. But unlike the thermometer, which was validated without circular reliance on the instrument itself, validating a new test by its agreement with other non-construct-valid tests proves nothing about the validity of either.
IQ tests are constructed by trialing items and keeping or excising them according to how they discriminate between better and worse test-takers, meaning, of course, that the bell curve is not natural, but forced (see Simon, 1997). Humans make the bell curve; it is not a natural phenomenon of IQ, since the first tests produced odd-looking distributions. (Also see Richardson, 2017a, Chapter 2 for more arguments against the bell curve distribution.)
Finally, Richardson and Norgate (2014) write:
In scientific method, generally, we accept external, observable, differences as a valid measure of an unseen function when we can mechanistically relate differences in one to differences in the other (e.g., height of a column of mercury and blood pressure; white cell count and internal infection; erythrocyte sedimentation rate (ESR) and internal levels of inflammation; breath alcohol and level of consumption). Such measures are valid because they rely on detailed, and widely accepted, theoretical models of the functions in question. There is no such theory for cognitive ability nor, therefore, of the true nature of individual differences in cognitive functions.
That “There is no such theory for cognitive ability” is even admitted by lead IQ-ist Ian Deary in his 2001 book Intelligence: A Very Short Introduction, in which he writes, “There is no such thing as a theory of human intelligence differences—not in the way that grown-up sciences like physics or chemistry have theories” (quoted in Richardson, 2012). This is yet another barrier to IQ’s claimed validity: there is no theory of human intelligence differences to validate the tests against.
In sum, neuroimaging meta-analyses (like Jung and Haier, 2007; see also Richardson and Norgate, 2007 in the same issue, pg 162-163) do not show what they purport to show, for numerous reasons. (1) Malnutrition has consequences for brain development, and lower classes are less likely to have their nutritional needs met (Ruxton and Kirk, 1996); (2) lower classes are more likely to be exposed to substance abuse (Karriker-Jaffe, 2013), which may well impact brain regions; (3) “Stress arising from the poor sense of control over circumstances, including financial and workplace insecurity, affects children and leaves ‘an indelible impression on brain structure and function’ (Teicher 2002, p. 68; cf. Austin et al. 2005)” (Richardson and Norgate, 2007: 163); and (4) working-class attitudes are related to poor self-efficacy beliefs, which also affect test performance (Richardson, 2002). So Jung and Haier’s (2007) theory “merely redescribes the class structure and social history of society and its unfortunate consequences” (Richardson and Norgate, 2007: 163).
In regard to neuroimaging, pooling together (meta-analyzing) numerous studies is fraught with conceptual and methodological problems, since a high degree of individual variability exists. Attempting to find “average” brain differences across individuals therefore fails, and the meta-analytic technique used (e.g., by Jung and Haier, 2007) fails to find what it seeks: average brain areas where, supposedly, cognition occurs. Meta-analyzing such disparate studies does not reveal an “average” location where cognitive processes occur that could then cause differences in IQ test-taking. Reductionist neuroimaging studies do not, as is popularly believed, pinpoint where cognitive processes take place in the brain; such localizations have not been established and may not even exist.
Neuroreductionism does not work; attempts to reduce cognitive processes to different regions of the brain, even using the meta-analytic techniques discussed here, fail. Neuroreductionist explanations for cognition are “probably not” possible (Uttal, 2014), and so using these studies to attempt to pinpoint where in the brain cognition supposedly occurs for things such as IQ test-taking fails. (Neuro)reductionism fails.
Since there is no theory of individual differences in intelligence, IQ tests cannot be construct valid. Even if there were such a theory, IQ tests would still need an established mechanistic relation between test scores and the construct they purport to measure. Attempts at validating IQ tests rely on correlations with other tests and older IQ tests—but that is precisely what is under contention, so correlating with older tests cannot confer the requisite validity to make the claim “IQ tests test intelligence” true. IQ does not even measure ability for complex cognition; real-life tasks are more complex than the most complex items on any IQ test (Richardson and Norgate, 2014b).
Now, having said all that, the argument can be formulated very simply:
Premise 1: If the claim “IQ tests test intelligence” is true, then IQ tests must be construct valid.
Premise 2: IQ tests are not construct valid.
Conclusion: Therefore, the claim “IQ tests test intelligence” is false. (modus tollens, P1, P2)
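The modus tollens form this argument takes can be checked exhaustively by truth table; a minimal sketch in Python:

```python
from itertools import product

# Exhaustive truth-table check of modus tollens:
# from (P -> Q) and (not Q), infer (not P).
def modus_tollens_valid():
    for p, q in product([False, True], repeat=2):
        premise1 = (not p) or q   # P -> Q
        premise2 = not q          # not Q
        if premise1 and premise2 and p:  # conclusion "not P" would fail
            return False
    return True

print(modus_tollens_valid())  # True: the form is valid in every case
```

Validity of the form is one thing; whether the premises are true (in particular Premise 2) is what the preceding sections argue.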
Vegans/Vegetarians vs. Carnivores and the Neanderthal Diet
The vegan/vegetarian-carnivore debate rests on a false dichotomy: the obvious middle ground is eating both plants and animals. I personally eat more meat than plants (as I eat a high-protein diet), but plants are good for a palate switch-up and for getting other nutrients into my diet. In any case, on Twitter I see a debate between “carnivores” and “vegans/vegetarians” over which diet is healthier. I think the “carnivore” diet is healthier, though there is no evolutionary basis for the claims its proponents espouse (because we did evolve from plant-eaters). In this article, I will discuss the best argument for ethical vegetarianism and the evolutionary basis for meat-eating.
The ethical vegetarian argument is simple: humans and non-human animals deserve the same moral consideration. Since they deserve the same moral consideration, and we would not farm humans for food, it follows that we should not farm non-human animals for food. The best argument for ethical vegetarianism comes from Peter Singer in Unsanctifying Animal Life; the argument also extends to using non-human animals for entertainment, research, and companionship.
Any being that can suffer has an interest in avoiding suffering, and the equal consideration of interests principle (Guidi, 2008) holds that this interest carries the same moral weight whether its bearer is human or non-human.
Here is Singer’s argument, from Just the Arguments: 100 of the Most Important Arguments in Western Philosophy (pg. 277-278):
P1. If a being can suffer, then that being’s interests merit moral consideration.
P2. If a being cannot suffer, then that being’s interests do not merit moral consideration.
C1. If a being’s interests merit moral consideration, then that being can suffer (transposition, P2).
C2. A being’s interests merit moral consideration if and only if that being can suffer (material equivalence, P1, C1).
P3. The same interests merit the same moral consideration, regardless of what kind of being is the interest-bearer (equal consideration of interests principle).
P4. If one causes a being to suffer without adequate justification, then one violates that being’s interests.
P5. If one violates a being’s interests, then one does what is morally wrong.
C3. If one causes a being to suffer without adequate justification, then one does what is morally wrong (hypothetical syllogism, P4, P5).
P6. If P3, then if one kills, confines, or causes nonhuman animals to experience pain in order to use them as food, then one causes them to suffer without adequate justification.
P7. If one eats meat, then one participates in killing, confining, and causing nonhuman animals to experience pain in order to use them as food.
C4. If one eats meat, then one causes nonhuman animals to suffer without adequate justification (hypothetical syllogism, P6, P7).
C5. If one eats meat, then one does what is morally wrong (hypothetical syllogism, C3, C4).
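Since validity of a propositional form can be checked by brute force over truth assignments, here is a sketch that formalizes the premises above and confirms that C5 follows. The quantified principles are collapsed to atoms for a fixed being and agent, which is a simplification of Singer's predicate-logic argument:

```python
from itertools import product

# Atoms (for a fixed being and agent):
# s = can suffer, m = interests merit consideration,
# j = caused suffering without adequate justification,
# v = interests violated, w = act is morally wrong,
# e = eats meat, k = kills/confines/pains animals for food,
# p = equal-consideration-of-interests principle holds (P3).
def implies(a, b):
    return (not a) or b

def argument_valid():
    for s, m, j, v, w, e, k, p in product([False, True], repeat=8):
        premises = [
            implies(s, m),              # P1
            implies(not s, not m),      # P2
            p,                          # P3 (asserted principle)
            implies(j, v),              # P4
            implies(v, w),              # P5
            implies(p, implies(k, j)),  # P6
            implies(e, k),              # P7
        ]
        if all(premises) and not implies(e, w):  # C5: e -> w
            return False                         # counterexample found
    return True

print(argument_valid())  # True: no valuation satisfies the premises but falsifies C5
```

Brute force over all 256 truth assignments finds no counterexample, which is what it means for the propositionalized argument to be valid; soundness additionally requires the premises to be true.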
This argument is pretty strong; indeed, I think it is sound. However, I personally will never adopt a vegetarian/vegan diet because I love eating meat too much (steak, turkey, chicken). I will do what is morally wrong because I love the taste of meat.
In an evolutionary context, the animals we evolved from were plant-eaters. The amount of meat in our diets grew as we diverged from our non-human ancestors; we added meat through the ages as our tool-kit became more complex. Since the animals we evolved from were plant-eaters and we added meat as time went on, then, clearly, we were not “one or the other” in regard to diet—our diet constantly changed as we migrated into new biomes.
So although Singer’s argument is sound, I will never become a vegan/vegetarian. Fatty meat tastes too good.
Nathan Cofnas (2018) argues that “we cannot say decisively that vegetarianism or veganism is safe for children.” This is because, even if the vitamins and minerals not gotten through the diet are supplemented, the bioavailability of the consumed nutrients is lower (Pressman, Clement, and Hayes, 2017). Furthermore, pregnant women should not eat a vegan/vegetarian diet, since vegetarian diets can lead to B12 and iron deficiencies along with low birth weight, while vegan diets can lead to DHA, zinc, and iron deficiencies along with a higher risk of pre-eclampsia and inadequate fetal brain development (Danielewicz et al, 2017). (See also Tan, Zhao, and Wang, 2019.)
Meat was important to our evolution; this cannot be denied. However, prominent “carnivores” take this fact and push it further than it goes. Yes, there is evidence that meat-eating allowed our brains to grow bigger, trading off with body size. Fonseca-Azevedo and Herculano-Houzel (2012) showed that metabolic limitations resulting from hours of feeding and the low caloric yield of raw foods explain the body/brain size trade-off in great apes. Plant foods are low in kcal, and great apes have large bodies, so they need to eat a lot of plants: they spend about 10 to 11 hours per day feeding. Our brains, on the other hand, started increasing in size with the appearance of erectus.
If erectus had eaten nothing but raw foods, he would have had to feed for more than 8 hours per day to sustain a brain with a neuron count near our level (about 86 billion; Herculano-Houzel, 2009). Given the extreme difficulty of acquiring the kcal needed to power a brain with that many neurons, it is very unlikely that erectus could have survived on plant foods alone while feeding 8+ hours per day. Indeed, with the archaeological evidence we have about erectus, it is patently ridiculous to claim that he ate for that long. Great apes, by contrast, graze more or less all day—indeed, they need to, as the caloric availability of raw foods is lower than that of cooked foods (even cooked plant foods have a higher bioavailability of nutrients)—so to afford their large bodies they must do little else but eat.
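The feeding-time argument can be put as back-of-the-envelope arithmetic. Only the ~6 kcal/day per billion neurons figure is Herculano-Houzel's estimate; the body mass, the Kleiber-style basal rate, and the raw-food intake rate below are illustrative assumptions of mine, not numbers taken from Fonseca-Azevedo and Herculano-Houzel's model:

```python
# Rough feeding-time arithmetic in the spirit of Fonseca-Azevedo and
# Herculano-Houzel (2012). All parameters except the per-neuron energy
# cost are illustrative assumptions.
def raw_feeding_hours(body_kg, neurons_billion, kcal_per_hour_raw=250):
    body_kcal = 70 * body_kg ** 0.75   # Kleiber-style basal rate (assumed)
    brain_kcal = 6 * neurons_billion   # ~6 kcal/day per billion neurons
    return (body_kcal + brain_kcal) / kcal_per_hour_raw

hours = raw_feeding_hours(body_kg=60, neurons_billion=86)
print(f"≈{hours:.1f} hours of raw feeding per day")
```

Under these assumed parameters the answer comes out at roughly 8 hours per day, in line with the "more than 8 hours" figure cited above; the qualitative point, that raw-food feeding time grows prohibitive as neuron count rises, does not depend on the exact values chosen.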
It makes no sense for erectus—and our immediate Homo sapiens ancestors—to eat nothing but raw plant foods for what amounts to more than a work day in the modern world. If this were the case, where would they have found the time to do everything else that we have learned about them in the archaeological record?
There is genetic evidence for human adaptation to a cooked diet (Carmody et al, 2016). Cooking denatures the protein in food—alters its shape—making it easier to digest. The same food will thus have different nutrient bioavailability depending on whether it is cooked. This difference, Herculano-Houzel (2016) and Wrangham (2009) argue, is what drove the evolution of our genus and our big brains.
Just because meat-eating and cooking were what drove the evolution of our big brains—or even only allowed our brains to grow bigger past a certain point—does not mean that we are “carnivores”; though it does throw a wrench into the idea that we—as in our species, Homo sapiens—were strictly plant-eaters. Our ancestors ate a wide range of foods depending on the biome they migrated to.
Our brain accounts for around 20 percent of our TDEE while representing only 2 percent of our overall body mass, the reason being our 86 billion neurons (Herculano-Houzel, 2011). Clearly, as our brains grew bigger and acquired more neurons, our ancestors had to have some way of acquiring the energy needed to power those neurons, and, as Fonseca-Azevedo and Herculano-Houzel (2012) show, this was not possible on a plant-only diet. Eating and cooking meat was the impetus for brain growth, and for maintaining that brain size thereafter.
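That 20-percent figure can be roughly recovered from the neuron count alone. In this sketch the ~6 kcal/day per billion neurons cost is Herculano-Houzel's estimate, while the 2,000-2,500 kcal/day TDEE range is an assumed typical adult intake, not a figure from the text:

```python
# Brain's share of daily energy, estimated from neuron count alone.
# ~6 kcal/day per billion neurons is Herculano-Houzel's estimate;
# the TDEE range is an assumption for illustration.
brain_kcal = 86 * 6  # 86 billion neurons -> ~516 kcal/day

for tdee in (2000, 2500):
    print(f"TDEE {tdee} kcal/day -> brain share ≈ {brain_kcal / tdee:.0%}")
```

The result lands at roughly a fifth to a quarter of daily energy, consistent with the ~20 percent figure despite the brain's 2 percent share of body mass.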
Take this thought experiment: an asteroid smashes into the earth, and a huge dust cloud blocks out the sun, halting the production of high-quality foods for hundreds of years. What would happen to our bodies and brains? They would shrink, depending on how much and what we could eat. Food scarcity and availability do influence the brain and body size of primates (Montgomery et al, 2010), and humans would be no different. In such a scenario, with high-quality foods gone or extremely hard to come by, we would shrink in both brain and body size—which further buttresses the hypothesis that a shift to higher-quality energy is how and why our large brains evolved.
A new analysis of a Neanderthal tooth apparently establishes that they were mostly carnivorous, living largely on horse and reindeer meat (Jaouen et al, 2019). Neanderthals did indeed have a high-meat diet in northerly latitudes during the cold season. Neanderthals in Southern Europe, however—especially during the warmer seasons—ate a mixture of plants and animals (Fiorenza et al, 2008). Further, there was a considerable plant component to the diet of Neanderthals (Perez-Perez et al, 2003) (with plant-rich diets for Neanderthals being seen mostly in the Near East; Henry, Brooks, and Piperno, 2011), while the diets of both Neanderthals and Homo sapiens varied with climatic fluctuations (El Zaatari et al, 2016). From what we know about modern human biochemistry and digestion, we can further make the claim that Neanderthals ate a good amount of plants.
Ulijaszek, Mann, and Elton (2013: 96) write:
‘Absence of evidence’ does not equate to ‘evidence of absence,’ and the meat-eating signals from numerous types of data probably swamp the plant-eating signals for Neanderthals. Their dietary variability across space and time is consistent with the pattern observed in the hominin clade as a whole, and illustrates hominin dietary adaptability. It also mirrors trends observed in modern foragers, whereby those populations that live in less productive environments have a greater (albeit generally not exclusive) dependence on meat. Differences in Neanderthal and modern human diet may have resulted from exploitation of different environments: within Europe and Asia, it has been argued that modern humans exploited marginal areas, such as steppe environments, whereas Neanderthals may have preferred more mosaic, Mediterranean-type habitats.
Quite clearly, one cannot point to any one study to support an (ideologically driven) belief that our genus or Neanderthals were “strictly carnivore”, as there was great variability in the Neanderthal diet, as I have shown.
Singer’s argument for ethical vegetarianism is sound; I personally can find no fault in it (if anyone can, leave a comment and we can discuss it; I will take Singer’s side). Although I can find no fault in the argument, I would never become a vegan/vegetarian, as I love meat too much. There is evidence that vegan/vegetarian diets are not good for growing children and pregnant mothers; although the same can be said for any diet that leads to nutrient deficiencies, the risk is much higher in these plant-based diets.
The evidence that we were meat-eaters in our evolutionary history is there, but we evolved as eclectic feeders. There was great variability in the Neanderthal diet depending on where they lived, so the claim that they were “full-on carnivores”, eating meat and only meat, is false; the literature attests to great dietary flexibility and variability in both Homo sapiens and Neanderthals.
My conclusion in my look into our diet over evolutionary time was:
It is clear that both claims from vegans/vegetarians and carnivores are false: there is no one “human diet” that we “should” be eating. Individual variation in different physiologic processes implies that there is no one “human diet”, no matter what type of food is being pushed as “what we should be” eating. Humans are eclectic feeders; we will eat anything since “Humans show remarkable dietary flexibility and adaptability“. Furthermore, we also “have a relatively unspecialized gut, with a colon that is shorter relative to overall size than in other apes; this is often attributed to the greater reliance on faunivory in humans (Chivers and Langer 1994)” (Ulijaszek, Mann, and Elton, 2013: 58). Our dietary eclectism can be traced back to our Australopithecine ancestors. The claim that we are either “vegetarian/vegan or carnivore” throughout our evolution is false.
There is no evidence for either of these extreme claims; humans are eclectic feeders. We are omnivorous, neither vegan/vegetarian nor carnivore. Although we did evolve from plant-eating primates and then added meat to our diets over time, there is no evidence for the claim that we ever ate only meat. Our dietary flexibility attests to that.
I Am Not A Phrenologist
People seem to be confused about the definition of the term ‘phrenology’. Many think that merely measuring skulls can be called ‘phrenology’. This is a very confused view to hold.
Phrenology is the practice of inferring one’s character and psychology from the shape and size of the skull: from bumps on the skull (Simpson, 2005) to areas of the brain that are larger or smaller than others. Franz Gall—the father of phrenology—believed that by measuring one’s skull, its bumps, and so on, he could make accurate predictions about character and mental psychology. Gall also proposed a theory of mind and brain (Eling, Finger, and Whitaker, 2017). The usefulness of phrenology aside, Gall—a neuroanatomist and physiologist—contributed significantly to our understanding of the brain.
Gall’s views on the brain, which he espoused in a letter, can be summarized as follows:
1. The brain is the organ of the mind.
2. The mind is composed of multiple, distinct, innate faculties.
3. Because they are distinct, each faculty must have a separate seat or “organ” in the brain.
4. The size of an organ, other things being equal, is a measure of its power.
5. The shape of the brain is determined by the development of the various organs.
6. As the skull takes its shape from the brain, the surface of the skull can be read as an accurate index of psychological aptitudes and tendencies.
Gall’s work, though, was important to our understanding of the brain, and he was a pioneer in studying its inner workings. Phrenologists ‘phrenologized’ by running the tips of their fingers or their hands along the top of one’s head (Gall liked using his palms). Here is an account of one individual reminiscing on this (around 1870):
The fellow proceeded to measure my head from the forehead to the back, and from one ear to the other, and then he pressed his hands upon the protuberances carefully and called them by name. He felt my pulse, looked carefully at my complexion and defined it, and then retired to make his calculations in order to reveal my destiny. I awaited his return with some anxiety, for I really attached some importance to what his statement would be; for I had been told that he had great success in that sort of work and that his conclusion would be valuable to me. Directly he returned with a piece of paper in his hand, and his statement was short. It was to the effect that my head was of the tenth magnitude with phyloprogenitiveness morbidly developed; that the essential faculties of mentality were singularly deficient; that my contour antagonized all the established rules of phrenology, and that upon the whole I was better adapted to the quietude of rural life rather than to the habit of letters. Then the boys clapped their hands and laughed lustily, but there was nothing of laughter in it for me. In fact, I took seriously what Rutherford had said and thought the fellow meant it all. He showed me a phrenological bust, with the faculties all located and labeled, representing a perfect human head, and mine did not look like that one. I had never dreamed that the size or shape of the head had anything to do with a boy’s endowments or his ability to accomplish results, to say nothing of his quality and texture of brain matter. I went to my shack rather dejected. I took a small hand-mirror and looked carefully at my head, ran my hands over it and realized that it did not resemble, in any sense, the bust that I had observed. The more I thought of the affair the worse I felt. If my head was defective there was no remedy, and what could I do?
The next day I quietly went to the library and carefully looked at the heads of pictures of Webster, Clay, Calhoun, Napoleon, Alexander Stephens and various other great men. Their pictures were all there in histories.
This—what I would call skull/brain-size fetishizing—is still evident today, with people thinking that raw size matters for cognitive ability (Rushton and Ankney, 2007; Rushton and Ankney, 2009), though I have compiled numerous data showing that people can have smaller brains and IQs in the normal range, implying that large brains are not needed for high IQs (Skoyles, 1999). It is also one of Deacon’s (1990) fallacies, the “bigger-is-smarter” fallacy. Just because you observe skull sizes, brain-size differences, structural brain differences, etc., does not mean you’re a phrenologist. You’re making simple and verifiable claims, not the outrageous claims made by phrenologists.
What did they get right? Well, phrenologists stated that the most-used parts of the brain would become bigger, which, of course, was vindicated by modern research—specifically in London cab drivers (Maguire, Frackowiak, and Frith, 1997; Woollett and Maguire, 2011).
It seems that phrenologists got a few things right, but their theories were largely wrong. Still, those who bash the ‘science’ of phrenology should realize that it was one of the first brain ‘sciences’, and so I believe it deserves at least some respect for furthering our understanding of the brain.
People see the avatar I use (three skulls: one Mongoloid, one Negroid, one Caucasoid) and automatically leap to the conclusion that I’m a phrenologist based on that picture alone. To these people, even stating that races/individuals/ethnies have different skull and brain sizes is ‘phrenology’. No, it isn’t. Words have definitions. Observing size differences between the brains of, say, individuals or ethnies does not mean you’re making any value judgments on the character or mental aptitude of an individual based on the size of their skull/brain. Likewise, noting structural differences between brains, saying, for instance, “the PFC is larger in this brain but the OFC is larger in that one”, carries no value judgment, and if that is what you take from the mere statement that individuals and groups have different-sized skulls, brains, and brain regions, then I don’t know what to tell you. Stating that one brain weighs 1200 g and another 1400 g is not phrenology. Stating that one brain is 1450 cc while another is 1000 cc is not phrenology. For it to be phrenology, I would have to outright state that differences in the size of certain brain areas, or of brains as a whole, cause differences in character or mental faculties. I am not saying that.
A team of neuroscientists recently (as in last month, January 2018) tested, in the “most exhaustive way possible”, the claim from phrenological ‘research’ “that measuring the contour of the head provides a reliable method for inferring mental capacities” and concluded that there was “no evidence for this claim” (Jones, Alfaro-Almagro, and Jbabdi, 2018). That settles it. The ‘science’ is dead.
It’s simple: you notice physical differences between the brains of two corpses. One’s PFC is bigger than his OFC; the other’s OFC is bigger than his PFC. That’s it. By this logic, neuroanatomists would be considered phrenologists today, since they note size differences between individual parts of brains. Merely noting these differences makes no judgment about the potential of individuals with brains of different sizes, shapes, or bumps.
It is ridiculous to accuse someone of being a ‘phrenologist’ in 2018. While the study of skull/brain sizes back in the 18th and 19th centuries did pave the way for modern neuroscience, and while phrenologists did get a few things right, they were largely wrong. No, you cannot read one’s character by feeling the bumps on their skull. I understand the logic, and back then it would have made a lot of sense. But noticing physical differences that are empirically verifiable does not make someone a phrenologist.
In sum, studying physical differences is interesting and tells us a lot about our past and maybe even our future. Saying that someone is a phrenologist because they observe and accept physical differences in the size of the brain, skull, and neuroanatomic regions is like saying that physical anthropologists and forensic scientists are phrenologists because they measure people’s skulls to ascertain certain things about their medical history. Chastising someone who tells you that one person has a different brain size than another by calling them outdated names in an attempt to discredit them doesn’t make sense. It seems that some people cannot accept physical differences that are measurable again and again because doing so may go against some long-held belief.
You Don’t Need Genes to Delineate Race
Most race deniers say that race isn’t real because, as Lewontin (1972) and Rosenberg et al (2002) state, the within-group variation is larger than the between-group variation. You can circumvent this claim, though, without even looking at genes/allele frequencies between races: you can show that race is real by looking at morphology, phenotype, and geographic ancestry. This is one of Michael Hardimon’s race concepts, the minimalist concept of race. This concept does not entail anything that we cannot physically ‘see’ with our eyes (e.g., mental and psychological traits are off the table). Using these concepts laid out by Hardimon can and does prove that race is real and useful without arguing about any potential mental and psychological differences between human races.
Morphology is one of the simplest tells for racial classification. Just by looking at average morphology between the races, we can use this data point as a premise in the argument that races exist.
East Asians are shorter, with shorter limbs, and have an endomorphic somatotype. This is due to evolving in a cold climate: a stockier body with shorter limbs has less surface area relative to its volume, so it retains heat better. This is a good example of Allen’s rule: that animals in colder climates have shorter limbs and appendages than animals in warmer climates. Average morphology, then, can show how and where the population in question evolved.
Europeans have an endomorphic somatotype as well, again owing to where they evolved; East Asians and Europeans have similar morphologies because they evolved in similar climates. Like East Asians, Europeans have a wider pelvis than Africans, so this is yet another morphological variable we can use to show that race exists. Morphology can tell us a lot about the evolution of a species.
Finally, there are ‘Africans’, who have the largest phenotypic and genetic diversity on earth. Generally, they are tall with long limbs and a short torso, which is due to evolving in the tropics. Furthermore, and perhaps most important, Africans have narrower pelves than East Asians and Europeans. This character is one of the most important regarding the reality of race because it is one of the most noticeable, and we do notice it when it comes to sports competition, because that type of morphology is conducive to athletic success. (Also read my recent article on strength and race and my article on somatotype and race for more information on morphological racial differences.)
Morphology is part of the phenotype too, obviously, but there is a reason the two are separated here. As with morphology, different characters evolved through cultural evolution (e.g., whether a population adopted farming early) or through natural selection, drift, and mutation. Mutations favorable in a certain environment will, of course, be passed on and eventually become characteristic of the population in question.
East Asians have the epicanthic fold, which probably evolved to protect the eye from the elements and UV rays on the Mongolian steppes. They also have softer features than Europeans and Africans, but this is not due to lower testosterone, as is popularly stated. (Amusingly, there is even a paper claiming that East Asians have Down-syndrome-like qualities, citing their epicanthic folds as one reason.) Even then, what the races find attractive can show how and why certain facial phenotypes evolved. To quote Gau et al (2018):
Compared with White women, East Asian women prefer a small, delicate and less robust face, lower position of double eyelid, more obtuse nasofrontal angle, rounder nose tip, smaller tip projection and slightly more protruded mandibular profile.
And they conclude:
The average faces are different from the attractive faces, while attractive faces differ according to race. In other words, the average facial and aesthetic criteria are different. We should use the attractive faces of a race to study that races aesthetic criteria.
We can use studies such as this to discern different facial phenotypes, which, again, proves that race exists.
The climate one’s ancestors evolved in also shapes the nose. In cold or dry climates, a larger mucosal contact area is needed to warm and moisten inspired (inhaled) air, which is why a narrower nose is favored there, while wider nares are found where the air is hot and humid.
Zaidi et al (2017) write:
We find that width of the nares is correlated with temperature and absolute humidity, but not with relative humidity. We conclude that some aspects of nose shape may have indeed been driven by local adaptation to climate.
Though climate, of course, isn’t the only reason for differences in nose shape; sexual selection plays a part too, as seen in the above citation on facial preferences in East Asian and European women.
There are also differences in hirsutism between the races. Racial differences exist in upper-lip hair, along with within-race differences (Javorsky et al, 2014). Self-reported race (African American, East Asian, Asian Indian, and ‘Hispanic’) predicted facial hair differences in women, whereas skin lightness did not. The women were from Los Angeles, USA; Rome, Italy; Akita, Japan; and London, England. Indian women had more upper-lip hair than any other group, while European women had the least; regarding within-race variation, Italian women had more upper-lip hair than American and British women. (Also read my article The Evolution of Human Skin Variation for more information on racial differences in skin color.)
In 2012, an interesting study of hair greying was carried out on a sample of a large number of the world’s ethnies, titled Greying of the human hair: a worldwide survey, revisiting the ‘50’ rule of thumb. The objective was to test the ’50-50-50′ rule: that at age 50, 50 percent of the population has at least 50 percent grey hair. Africans and Asians showed fewer grey hairs than whites, who showed the most. The results imply that hair greying varies by ethnicity/geographic origin, which fits the argument laid out in this article. The global proportion of people over 50 with 50 percent or more grey hair was between 6 and 23 percent, far lower than originally hypothesized (Panhard, Lozano, and Loussouarn, 2012). They write on page 870:
With regard to the intensity of hair greying, the lowest values were found among African and Asian groups, especially Thai and Chinese, whereas the highest values were in subjects with the blondest hair (Polish, Scottish, Russian, Danish, Caucasian Australian and French).
Altogether, these analyses clearly illustrate that the lowest incidences and intensities of grey hair are found in populations of the darkest hair whereas the highest intensities are found in populations with the lightest hair tones.
Hair-color diversity, however, is much more concentrated in Europeans (Frost, 2005). (See Peter Frost’s article Why Do Europeans Have So Many Hair and Eye Colors?) It is largely due to sexual selection, with a few climatic factors thrown in. Dark hair, on the other hand, is a dominant trait found all over the world.
Zhuang et al (2010) found significant differences in facial morphology between the races, writing:
African-Americans have statistically shorter, wider, and shallower noses than Caucasians. Hispanic workers have 14 facial features that are significantly larger than Caucasians, while their nose protrusion, height, and head length are significantly shorter. The other ethnic group was composed primarily of Asian subjects and has statistically different dimensions from Caucasians for 16 anthropometric values.
Statistically significant differences in facial anthropometric dimensions (P < 0.05) were noted between males and females, all racial/ethnic groups, and the subjects who were at least 45 years old when compared to workers between 18 and 29 years of age.
Blacks had statistically significant differences in lip and face length when compared to whites (whites had shorter lips than blacks).
Brain size and cranial morphology, too, differ by geographic ancestry, which is directly related to the climate in which a population evolved (Beals, Smith, and Dodd, 1984). Almost every trait humans have—on average, of course—differs by geographic location, and the cause is evolution in those locations as geographically isolated breeding populations.
The final piece of this argument is where one’s recent ancestors came from. There are five major populations from five geographic locales: Oceania, the Americas (‘Native Americans’), Europe, Africa, and East Asia. The peoples of these locales evolved there under different selective pressures, and their bodies changed to better suit their environments; racial differences in morphology and phenotype thus arose so that each people could better survive in its location. No one part of this argument is more important than any other, though geographic ancestry is the final piece of the puzzle that brings everything together: because race is correlated with morphology and phenotype, geographic ancestry dictates what these characteristics look like.
Thus, this is the basic argument:
P1: Differing populations have differing phenotypes, including (but not limited to) facial structure, hair type/color, lip structure, skull size, brain size etc.
P2: Differing populations have differing morphology which, along with this population’s phenotype, evolved in response to climatic demands along with sexual selection.
P3: This population must originate from a distinct geographic location.
C: If all three of the above premises are true, then race—in the minimalist sense—exists and is biologically real.
This argument is extremely simple, and along with the papers cited above in support of the three premises, it will be extremely hard for race deniers to counter. P1 holds because geographically isolated populations differ in the above-mentioned criteria. P2 holds because differing populations have differing morphology (as I have discussed numerous times, which leads to racial differences in sporting competition), such as differing trunk lengths, leg lengths, arm lengths, and heights, largely due to evolution in differing climates. P3 holds because the populations that satisfy P1 and P2 do come from geographically distinct locations; that is, each has a peculiar ancestry that only it shares.
This minimalist concept of race is, in Michael Hardimon’s words, the racialist concept of race “stripped down to its barest bones” (Hardimon, 2017: 3). The minimalist concept of race, then, does not discuss any differences between populations that cannot be directly discerned with the naked eye. (Note: You can also use the above arguments/data for the populationist concept of race, which, according to Hardimon (2017: 3), is “A nonracialist (nonessentialist, nonhierarchical) candidate scientific concept that characterizes races as groups of populations belonging to biological lines of descent, distinguished by patterns of phenotypic differences, that trace back to geographically separated and extrinsically reproductively isolated founder populations.”)
Minimalist race is biologically sound and grounded in biology (and, though it can be grounded in genetics, I have argued here that you don’t need genetics to define race). Minimalist race is defined by characteristics of the group, not of the individual. Minimalist races are biologically real: as shown with the data presented in this article, phenotypic and morphological traits are unevenly distributed throughout the world and correlate with geographic ancestry. It cannot get any simpler than that: race exists because differences in phenotype and morphology exist, and they correspond with geographic ancestry.
From Hardimon (2017: 177):
No sane or logical person would deny the existence of race based on the criteria laid out in this article. We can also infer that, since minimalist races exist and are biologically real, geographic ancestry should be a guide when dealing with medicine in different minimalist races.
It is clear that race exists in the minimalist sense; you do not need genes to show that race is real, nor to show that race has utility in a medical context. This is important for race deniers to understand: genes are irrelevant when talking about the reality of race; you need only use your eyes, and you’ll see that certain morphologies and phenotypes are distributed across geographic locations. It is also very easy to get someone to admit that races exist in this minimalist-biological sense: no one denies the existence of Africans, Europeans, ‘Native’ Americans, East Asians, and Pacific Islanders. These populations differ in morphology and other physical characters that are unevenly distributed by geographic ancestry; therefore, minimalist races exist and are a biological reality.
Microcephaly and Normal IQ
In my last article on brain size and IQ, I showed how people with half of their brains removed and people with microcephaly can have IQs in the normal/above average range. There is a pretty large amount of data out there on microcephalics and normal intelligence—even a family showing normal intelligence in two generations despite having dominantly inherited microcephaly.
Microcephaly is a condition in which an individual has a head circumference more than 2 SD below the mean. It is normally associated with mental retardation, but the assumption that small head size dooms all microcephalics to low IQs is a medical myth (Skoyles and Sagan, 2002: 239): about 15 percent of microcephalics have IQs in the normal range, and there are numerous documented cases of microcephalics with normal IQs (Dorman, 1991). Numerous studies also show that it’s possible for normal people to have small brains. Giedd et al (1996) found wide variation in brain size: of the 104 individuals scanned, cerebral volume ranged from 735 cc in a 10-year-old boy to 1470 cc in a 14-year-old boy (Skoyles, 1999: 4, para 12). Though Giedd et al (1996) did not report total brain volumes in their subjects, brain volume can be inferred. Skoyles (1999: 4, para 12) writes:
The cerebral cortex makes up only 86.4% of brain volume when measured by MRI (Filipek, Richelme, Kennedy & Caviness, 1994), so the total brain volume of the 10-year-old would be larger at 850.7 cc. Brains at 10 years are about 4.4% smaller than adult size (Dekaban & Sadowsky, 1978), suggesting that that brain would grow to an adult size of 888 cc. Even using the lower figure of 80% cerebrum to brain ratio derived from anatomical studies suggests a figure of only 960 cc.
Whether you take 888 cc or 960 cc, depending on which cerebrum-to-brain ratio you use, this still shows that people can have brains some 300-450 cc smaller than average and still be ‘normal’.
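Skoyles’s inference above is simple arithmetic; here is a minimal sketch of it (the percentages are those quoted above; the mapping into code is mine):

```python
# Infer adult total brain volume from a child's measured cerebral volume,
# following the arithmetic quoted from Skoyles (1999).

def adult_brain_volume(cerebral_cc, cerebrum_ratio, child_deficit=0.044):
    """Divide cerebral volume by the cerebrum-to-brain ratio to get total
    brain volume, then scale up by the ~4.4% growth remaining at age 10."""
    total_child = cerebral_cc / cerebrum_ratio
    return total_child * (1 + child_deficit)

# MRI-based ratio (86.4%) gives ~888 cc; the anatomical ratio (80%) gives ~960 cc.
mri_estimate = adult_brain_volume(735, 0.864)
anatomical_estimate = adult_brain_volume(735, 0.80)
print(round(mri_estimate), round(anatomical_estimate))
```

Either estimate sits several hundred cc below the adult average, which is the whole point of the passage.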
Researchers began noticing many cases of both individuals and families exhibiting features of microcephaly—but they had normal intelligence (Simila, 1970; Seemanova et al, 1985; Rossi et al, 1987; Teebi et al, 1987; Sherrif and Hegab, 1988; Desch et al, 1990; Opitz and Holt, 1990; Evans, 1991; Heney et al, 1991; Green et al, 1995; Rizzo and Pavone, 1995; Teebi and Kurash, 1996; Innis et al, 1997; Kawame, Pagon, and Hudgens, 1997; Abdel-Salam et al, 1999; Digweed, Reis, and Sperling, 1999; Woods, Bond, and Enard, 2005; Ghaoufari-Fard et al, 2015). This is a pretty huge blow to the brain size/IQ correlation, for if people with such small heads can have normal IQs, why do we have such large brains that leave us with such large problems (Skoyles and Sagan, 2002: 240-244)?
If we can have smaller heads—which would make childbirth easier and allow smaller pelves conducive to endurance running, since we are the running ape—why did brains get so much larger than erectus’s when modern people can have normal IQs with erectus-sized brains? In any event, these anomalies need an explanation, and Skoyles (1999) hypothesizes that people with smaller heads but normal IQs may have a lower capacity for expertise. This is something I will look into in the future, as it may explain these anomalies, along with the true reason why our brains began increasing around 3 mya.
Sells (1977)—using the criterion of 2 SD below mean head size—showed that the 1.9 percent of the children he tested (n=1009) who met it had IQs indistinguishable from their normocephalic peers. Watemberg et al (2002) studied 1,393 patients. They found that almost half of their patients with microcephaly (15.4% of the patients studied had microcephaly) had IQs within normal limits, while among those with sub-normal intelligence, 30 percent had borderline IQs or were mildly mentally retarded. (It’s worth noting that l-glutamate can raise IQ scores by 5-20 points in cases of mild to moderate mental deficiency; Vogel, Braverman, and Draguns, 1966, review numerous lines of evidence that glutamate raises IQ in mentally deficient individuals.) Sassaman and Zartler (1982) showed that 31.9 percent of microcephalics had normal intelligence; 6.9 percent of them had average intelligence.
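As a sanity check on such prevalence figures, the 2-SD cutoff itself predicts how common microcephaly should be in a general population; a minimal sketch, assuming head circumference is normally distributed (the cutoff is from the definition above; the code is mine):

```python
import math

def fraction_below(z):
    """Fraction of a normal distribution lying below z standard deviations
    (the standard normal CDF, written with math.erf)."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# The definitional cutoff for microcephaly is 2 SD below the mean, so under
# a normal distribution roughly 2.3% of people qualify, close to the 1.9%
# Sells found in his sample of 1,009 children.
print(f"{fraction_below(-2) * 100:.1f}%")  # → 2.3%
```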
Head circumference does not directly correlate with IQ in microcephalic patients (Baxter et al, 2009). Dorman (1991: 268) writes: “Decreased head size may or may not be associated with lowered intelligence, indicating that small head size by itself does not affect intelligence. The presence of subgroups of microcephalic persons who typically have normal intelligence is sufficient to rule out a causal relationship between head size and intellect. … It can be added that reduction in brain size without such structural pathology, as may occur in some genetic conditions or even as a result of normal variation, does not affect intelligence.”
Tenconi et al (1981) write: “We were able to examine five other members of this family (I-3; II-1; II-4; II-5; II-8) and found no abnormalities: they were of normal intelligence, head circumference, and ophthalmic evaluation. Members of the grandmother’s family who refused to be examined appeared to be of normal intelligence and head appearance and did not have any serious eye problems.”
Stoler-Poria et al (2010) write: “There was a K-ABC cognitive score < 85 (signifying developmental delay) in two (10%) children from the study group and in one (5%) child from the control group: one of the children in the study group (the one with HC below − 3 SD) scored significantly below the normal range (IQ = 70), while the other scored in the borderline range (IQ = 83); the child from the control group also scored in the borderline range (IQ = 84).” Thelander and Pryor (1968), meanwhile, showed that individuals with head circumferences 2-2.6 SD below the mean had average IQs, though the smaller the HC, the lower the IQ. Ashwal et al (2009: 891) write: “The students with microcephaly had a similar mean IQ to the normocephalic group (99.5 vs 105) but had lower mean academic achievement scores (49 vs 70).” So it seems that microcephalics can have normal IQs but lower academic achievement scores.
Primary microcephalics have higher IQs than secondary microcephalics (Cowie, 1987). Primary microcephaly is microcephaly that one is born with, whereas secondary microcephaly is acquired.
There is one case study of a girl with microcephaly where Tan et al (2014) write: “Most recent measures of general intelligence (performed at 6½ years of age) reveal a below average full scale IQ of 75 with greatest impairment in processing speed. On the Wechsler Preschool and Primary Scale of Intelligence III Revised (for children 2 years 6 m – 7 years 3 m), she obtained a Verbal IQ of 83, Nonverbal IQ of 75, and Processing speed 71. On the Wechsler Individual Achievement Testing (WIAT) she showed significant struggles in secondary language on tasks of early reading (SS 60), word reading (SS 70), reading comprehension (SS 69) and struggles in math on the task of numerical operations (SS 61) (WPPSI – R and WIAT mean = 100 and SD = 15). Parents report subjectively that differences in development relative to her sisters are becoming more apparent with time.”
It is not a foregone conclusion that an individual with microcephaly will have a low IQ and be mentally retarded; as reviewed above, there are numerous cases of individuals with microcephaly and normal IQs, even running in families across generations. Numerous people with Nijmegen breakage syndrome (a disorder featuring microcephaly) can have normal IQs. Rossi et al (1987) reported on 6 Italian families (n=21 microcephalics) with autosomally inherited microcephaly; of those administered psychometric tests (n=12), all but one had normal IQs, with a range of 99 to 112 and a mean of 99.3.
In conclusion, microcephalics can have normal IQs and live normal lives despite having heads more than 2 SD below the mean. These anomalies (and there are many, many more) need explaining. This is strong evidence that a larger brain does not always mean a higher IQ, and yet more evidence that Homo erectus could have had an IQ in the modern range, which means we may not need brains of our current size for our intellect and achievements. To conclude, I will provide a quote from Dorman (1991):
The normal intelligence found by SELLS in school children with small head size also militates against any straightforward relationship between diminished head size and lowered intelligence.
A brain size/IQ correlation of .4 (Gignac and Bates, 2017) does not rule out the ‘outliers’ reviewed in this article. These cases deserve an explanation, for if large brains caused high IQs, why do these people with significantly smaller heads have IQs in the normal range? (See Skoyles, 1999: 8, para 31 for an explanation of the brain size/IQ correlation.)
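To make that concrete, here is a minimal sketch of what an r of .4 implies under a simple bivariate-normal model (an illustrative assumption of mine, not a claim about Gignac and Bates’s data): even at 2 SD below mean brain size, the regression-predicted IQ is only modestly below 100, with a wide residual spread.

```python
import math

def predicted_iq(brain_z, r=0.4, iq_mean=100, iq_sd=15):
    """Regression prediction of IQ from a brain-size z-score, plus the SD of
    the residuals left unexplained by the correlation."""
    expected = iq_mean + r * brain_z * iq_sd
    residual_sd = iq_sd * math.sqrt(1 - r**2)
    return expected, residual_sd

expected, spread = predicted_iq(-2.0)
# Expected IQ ≈ 88 with a residual SD ≈ 13.7, so under this model a
# microcephalic-sized brain is entirely compatible with an IQ of 100 or more.
print(round(expected), round(spread, 1))
```

A correlation of .4 leaves 84% of IQ variance unexplained (R² = 0.16), which is why the outliers above are unsurprising on purely statistical grounds.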
My Response to Jared Taylor’s Article “Breakthroughs in Intelligence”
Here is my reply to Jared Taylor’s new article over at AmRen Breakthroughs in Intelligence:
“The human mind is not a blank slate; intelligence is biological”
The mind is not a ‘blank slate’, though there is no ‘biological’ basis for intelligence (at least in the way that hereditarians believe). They’re just correlations. (Whatever ‘intelligence’ is.)
“there is no known environmental intervention—including breast feeding”
There is a causal effect of breast feeding on IQ:
While reported associations of breastfeeding with child BP and BMI are likely to reflect residual confounding, breastfeeding may have causal effects on IQ. Comparing associations between populations with differing confounding structures can be used to improve causal inference in observational studies.
Brion, M. A., Lawlor, D. A., Matijasevich, A., Horta, B., Anselmi, L., Araújo, C. L., . . . Smith, G. D. (2011). What are the causal effects of breastfeeding on IQ, obesity and blood pressure? Evidence from comparing high-income with middle-income cohorts. International Journal of Epidemiology, 40(3), 670-680. doi:10.1093/ije/dyr020
Breastfeeding is related to improved performance in intelligence tests. A positive effect of breastfeeding on cognition was also observed in a randomised trial. This suggests that the association is causal.
Horta, B. L., Mola, C. L., & Victora, C. G. (2015). Breastfeeding and intelligence: a systematic review and meta-analysis. Acta Paediatrica, 104, 14-19. doi:10.1111/apa.13139
“before long we should be able to change genes and the brain itself in order to raise intelligence.“
Which genes? 84 percent of genes are expressed in the brain. Good luck ‘finding’ them…
These results corroborate with the results from previous studies, which have shown 84% of genes to be expressed in the adult human brain …
Negi, S. K., & Guda, C. (2017). Global gene expression profiling of healthy human brain and its application in studying neurological disorders. Scientific Reports, 7(1). doi:10.1038/s41598-017-00952-9
“Normal people can have extraordinary abilities. Prof. Haier writes about a non-savant who used memory techniques to memorize 67,890 digits of π! He also notes that chess grandmasters have an average IQ of 100; they seem to have a highly specialized ability that is different from normal intelligence. Prof. Haier asks whether we will eventually understand the brain well enough to endow anyone with special abilities of that kind.”
Evidence that intelligence is not related to expertise.
“It is only after a weight of evidence has been established that we should have any degree of confidence in a finding, and Prof. Haier issues another warning: “If the weight of evidence changes for any of the topics covered, I will change my mind, and so should you.” It is refreshing when scientists do science rather than sociology.”
Even with the “weight of evidence”, most people will not change their views on this matter.
“Once it became possible to take static and then real-time pictures of what is going on in the brain, a number of findings emerged. One is that intelligence appears to be related to both brain efficiency and structure”
Patterns of activation in response to various fluid reasoning tasks are diverse, and brain regions activated in response to ostensibly similar types of reasoning (inductive, deductive) appear to be closely associated with task content and context. The evidence is not consistent with the view that there is a unitary reasoning neural substrate. (p. 145)
Nisbett, R. E., Aronson, J., Blair, C., Dickens, W., Flynn, J., Halpern, D. F., & Turkheimer, E. (2012). Intelligence: New findings and theoretical developments. American Psychologist, 67, 130–159. doi:10.1037/a0026699
“Early findings suggested that smart people’s brains require less glucose—the main fuel for brain activity—than those of dullards.”
Cause and correlation haven’t been untangled here; the smart subjects could, for instance, be answering questions in a familiar format, and that could be why their brains show less glucose consumption.
“It now appears that grey matter is where “thinking” takes place, and white matter provides connections between different areas of grey matter. Some brains seem to be organized with shorter white-matter connections, which appear to allow more efficient communication, and there seem to be sex differences in the ways the part of the brain are connected. One of the effects of aging is deterioration of the white-matter connections, which reduces intelligence.”
Read this commentary (pg. 162): Norgate, S., & Richardson, K. (2007). On images from correlations. Behavioral and Brain Sciences, 30(02), 162. doi:10.1017/s0140525x07001379
“Brain damage never makes people smarter”
This is wrong:
You would think that cutting out one-half of people’s brains would kill them, or at least leave them vegetables needing care for the rest of their lives. But it does not. Consider this striking story. A boy starts having seizures at 10 years of age when his right cerebral hemisphere atrophies. By the time he is 12, the left side of his body is paralyzed. When he is 19, surgeons decide to operate and remove the right side of his brain, as it is causing fits in his intact left one. You might think this would lower his IQ or leave him severely retarded, but no. His IQ shoots up 14 points, to 142! The mystery is not so great when you realize that the operation has gotten rid of the source of his fits, which had previously hampered his intelligence. When doctors saw him 15 years later, they described him as “having obtained a university diploma . . . [and now holding] a responsible administrative position with a local authority.”
Skoyles, J. R., & Sagan, D. (2002). Up from dragons: the evolution of human intelligence. New York: McGraw-Hill (pg. 282)
“Prof. Haier wants a concerted effort: “What if a country ignored space exploration and announced its major scientific goal was to achieve the capability to increase every citizen’s g-factor [general intelligence] by a standard deviation?””
Don’t make me laugh. You need to prove that ‘g’ exists first. Glad to see some commentary on epigenetics that isn’t bashing it (it is a real phenomenon, though the scope of it in regards to health, disease and evolution remains to be discovered).
As most readers may know, I’m skeptical here and a huge contrarian. I do not believe that g is physiological, and if it were, then researchers had better start defining it and talking about it differently, because I’ve shown that if it were physiological, it would not mimic any known physiological process in the body. I eagerly await neuroscience studies on IQ that are robust, have large ns, and establish the arrow of causality, rather than making large sweeping claims because the authors want the finding to be true and are emotionally invested in their work. That’s my opinion about a lot of intelligence research: like everyone, researchers are invested in their own theories and will do whatever it takes to save face no matter the results. The recent Amy Cuddy fiasco is the perfect example of someone not giving up when it’s clear they’re incorrect.
I wish that Mr. Taylor would actually read some of the literature on TBI and IQ, along with the reports of people with chunks of their brains missing who nonetheless have IQs in the normal range, which is evidence that a lot of our brain mass is redundant. How can someone survive with a brain that weighs 1.5 pounds (680 g) and not need care for the rest of his life? That, in my opinion, shows what an incredible and plastic organ the human brain is, especially at a young age. People like this with IQs in the normal range need to be studied by neuroscientists, because anomalies need explaining.
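The figure in parentheses is just the pound-to-gram conversion; a one-line check (using the standard 453.592 g per pound):

```python
# Convert Daniel Lyon's reported brain weight from pounds to grams.
GRAMS_PER_POUND = 453.592  # standard avoirdupois conversion factor

brain_weight_g = 1.5 * GRAMS_PER_POUND
print(round(brain_weight_g))  # 680
```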
If large brains are needed for high IQs, then how do these people function in day-to-day life? Shouldn’t they be ‘as dumb as an erectus’, since they have erectus-sized brains while living in the modern world? Well, the human body and brain are two amazing products of evolution, so even sudden brain damage or removal of up to half the brain produces no deleterious effects in a lot of people. This is a clue, a clue that most of our brain mass after erectus is useless for our ‘intelligence’ and that our brains must have expanded for another reason: family structure, sociality, expertise, etc. I will cover this at length in the future.
Small Brain, Normal IQ
Emil Kirkegaard left a short commentary on John Skoyles’ 1999 paper Human Evolution Expanded Brains to Increase Expertise Capacity, not IQ. In his article Evolution and imperfect mediators, Emil writes:
If we condense the argument, it becomes a little clearer:
John Skoyles (1999) [Condensed argument from Emil; paragraph 2] Brain expansion causes problems. Thus, whatever selected for increased brain size must have offered compensating benefits. People can have below average size brains yet exhibit normal intelligence. Thus, the compensating benefit offered by large brains is unlikely to be intelligence. Why should evolution have increased brain size with its associated problems for something smaller sized brains could have without expansion?
I merely edited out the unnecessary parts. Now try substituting some other trait, say fighting ability and some mediator of it.
Muscle size increases cause problems. Thus, whatever selected for increased muscle size must have offered compensating benefits. People can have below average size muscles yet exhibit normal fighting ability. Thus, the compensating benefit offered by large muscles is unlikely to be fighting ability. Why should evolution have increased muscle size with its associated problems for something smaller sized muscles could have without increase?
See the issue? This argument works for any imperfect physical underpinning of a trait, which is to say, basically all of them. Longer legs didn’t evolve for running well for some people with short legs run well. Bigger/stronger hearts didn’t evolve for better cardio, because some people [with] smaller/weaker hearts have good cardio. Longer arms didn’t evolve for fighting because some short armed people fight well. Darker skin didn’t evolve as a protection against sun exposure for some relative[ly] light skinned people don’t get skin cancer or sunburns. Larger eyes didn’t evolve for seeing better for some people with smaller eyes see well. Bigger ears… Bigger noses… Stronger hands… …
I don’t agree. Our brains sap about 20 percent of our daily energy needs while making up only 2 percent of our overall body mass, whereas other primates’ brains cost about 9 percent of their daily energy needs (Fonseca-Azevedo and Herculano-Houzel, 2012).
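To put those percentages in concrete terms, here is a rough back-of-the-envelope sketch; the 2,000 kcal/day intake is an assumed round number for illustration, not a figure from the paper:

```python
# Rough illustration of the human brain's energy budget, using the
# approximate shares from the text (20% of daily energy for ~2% of
# body mass) and an assumed 2,000 kcal/day intake.
DAILY_INTAKE_KCAL = 2000      # assumed adult intake (round number)
BRAIN_ENERGY_SHARE = 0.20     # human brain: ~20% of daily energy
PRIMATE_ENERGY_SHARE = 0.09   # other primate brains: ~9%
BRAIN_MASS_SHARE = 0.02       # human brain: ~2% of body mass

human_brain_kcal = DAILY_INTAKE_KCAL * BRAIN_ENERGY_SHARE      # ~400 kcal/day
primate_brain_kcal = DAILY_INTAKE_KCAL * PRIMATE_ENERGY_SHARE  # ~180 kcal/day

# The brain consumes energy at roughly 10x the rate its mass share
# would predict, which is why expansion was metabolically expensive.
disproportion = BRAIN_ENERGY_SHARE / BRAIN_MASS_SHARE
print(round(disproportion))  # 10
```

The point of the sketch is only that the 20-versus-2 mismatch makes brain tissue disproportionately costly, which is the premise of Skoyles’ argument that the expansion had to buy something.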
In regards to Emil’s counterarguments, I’ll address them one by one:
Long legs: People with longer legs were better runners and could escape from predators and chase prey. People with shorter legs were killed.
Bigger/stronger hearts: Those with a larger heart (sans cardiomegaly) could run for longer distance (remember, we are distance runners; Carrier, 1984; Skoyles and Sagan, 2002; Bramble and Lieberman, 2004; Mattson, 2012) and so long legs and bigger/stronger hearts tie in with each other.
Long arms: This, again, goes back to our morphology in Africa. Long limbs are more conducive to heat dissipation (Lieberman, 2015). So those who had the right body plan for distance running could survive better during our evolutionary history.
Dark skin: A light-skinned person who spends enough time without protection in a tropical climate will develop skin cancer. (It is hypothesized that skin cancer is what caused the evolution of dark skin; Greaves, 2014, though this was contested by Jablonski and Chaplin, 2014.)
Large eyes: Bigger eyes don’t mean better eyesight in comparison to smaller ones.
All in all, the brain size argument is 100 percent different from these arguments: large brains come with large problems. Further, there is evidence (which will be reviewed below) that people can live long, normal lives with half of their brain missing.
The brain-size/IQ puzzle
The oft-repeated wisdom is that our brains evolved to such a large size so we could become more intelligent. And looking at when our brains began to increase (starting with erectus, which had to do with the advent of cooking/fire use), we can see that that’s when our modern body plan appeared. We can ascertain this by looking at Nariokotome boy, an erectus that lived about 1.6 mya.
Further, in regards to brain size, there was a man named Daniel Lyon. What was so extraordinary about this man is that, at the time of his death, he had a brain that weighed 1.5 pounds (see Wilder, 1911)! Skoyles and Sagan (2002: 239) write:
Upon examination, anatomists could find no difference between it [Lyon’s brain] and other human brains apart from its size with one exception: The part of his brain attached to the brainstem, the cerebellum, was near normal size. Thus, the total size of Lyon’s cerebral hemisphere was smaller than would be suggested by a total brain weight of 1.5 lb. We do not know how bright he was—being a watchman is not particularly intellectually demanding—but he clearly was not retarded. A pound and a half brain may not be enough to manage a career as an attorney, a professor of theology, or a composer, but it was sufficient to let Lyon survive for 20 years in New York City.
Skoyles and Sagan (2002) review numerous lines of evidence of individuals with small brains or severe TBI living full lives, even having IQs in the average/above-average range. They write (pg. 238):
You would think that cutting out one-half of people’s brains would kill them, or at least leave them vegetables needing care for the rest of their lives. But it does not. Consider this striking story. A boy starts having seizures at 10 years of age when his right cerebral hemisphere atrophies. By the time he is 12, the left side of his body is paralyzed. When he is 19, surgeons decide to operate and remove the right side of his brain, as it is causing fits in his intact left one. You might think this would lower his IQ or leave him severely retarded, but no. His IQ shoots up 14 points, to 142! The mystery is not so great when you realize that the operation has gotten rid of the source of his fits, which had previously hampered his intelligence. When doctors saw him 15 years later, they described him as “having obtained a university diploma . . . [and now holding] a responsible administrative position with a local authority.” (18)
They also write about the story of an Argentinian boy who had a right hemispherectomy when he was 3-years-old who was notable for “the richness of his vocabulary and syntax” and also “attends English classes at school, in which he attains a high level of success” (20; quote from Skoyles and Sagan, 2002: 238).
It is also a “medical myth that microcephaly (having a head smaller than two standard deviations (SD) below average circumference) is invariably linked to retardation.” (Skoyles and Sagan, 2002: 239).
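The cutoff in that definition is a simple threshold test; here is a minimal sketch of it, where the mean and SD are hypothetical placeholder values, not clinical norms:

```python
# Illustrative check of the microcephaly cutoff described above:
# a head circumference more than 2 SD below the mean.
# NOTE: these values are hypothetical placeholders for illustration,
# not real clinical reference data.
MEAN_CIRCUMFERENCE_CM = 35.0  # assumed mean (placeholder)
SD_CM = 1.5                   # assumed standard deviation (placeholder)

def is_microcephalic(circumference_cm: float) -> bool:
    """True if circumference falls more than 2 SD below the mean."""
    return circumference_cm < MEAN_CIRCUMFERENCE_CM - 2 * SD_CM

print(is_microcephalic(31.0))  # True  (more than 2 SD below the mean)
print(is_microcephalic(34.0))  # False (within 2 SD of the mean)
```

The myth Skoyles and Sagan attack is the leap from falling below this purely statistical threshold to a claim about retardation; the threshold itself says nothing about function.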
There are some important things to be noted in regards to the study of Nariokotome boy’s skeleton and skull size. Skoyles and Sagan (2002: 240) write (emphasis mine):
So how well equipped was Homo erectus? To throw some figures at you (calculations shown in the notes), easily well enough. Of Nariokotome boy’s 673 cc of cortex, 164 cc would have been prefrontal cortex, roughly the same as half-brained people. Nariokotome boy did not need the mental competence required by contemporary hunter-gatherers. … Compared to that of our distant ancestors, Upper Paleolithic technology is high tech. And the organizational skills used in hunts greatly improved 400,000 years ago to 20,000 years ago. These skills, in terms of our species, are recent, occurring by some estimates in less than the last 1 percent of our 2.5 million year existence as people. Before then, hunting skills would have required less brain power, as they were less mentally demanding. If you do not make detailed forward plans, then you do not need as much mental planning abilities as those who do. This suggests that the brains of Homo erectus did not arise for reasons of survival. For what they did, they could have gotten away with much smaller, Daniel Lyon-sized brains.
Next, I will touch on the fact that, since we are running apes, we need a narrow pelvis. As I stated above, our modern body plan came to be around 1.6 mya with the advent of erectus, which can be inferred from footprints (Steudel-Numbers, 2006; Bennett et al, 2009). Now the picture is becoming clearer: if people with brains the size of erectus’ can have intelligence in the modern range, and if our modern body plan evolved 1.6 mya (which is when our brains began to really increase in size, since erectus’ cooking unlocked previous metabolic constraints), then it is perfectly possible for modern Homo sapiens to have brains the size of erectus’ while still having an IQ in the normal range.
Lastly, Skoyles and Sagan (2002: 245) write (emphasis mine):
Kanzi seems to do remarkably well with a chimp-sized brain. And while we tend to link retardation with small brains, we have seen that people can live completely normal lives while missing pieces of their brains. Brain size may enhance intelligence, but it seems we can get away without 3 pounders. Kanzi shows there is much potential in even 13 oz.
So Skoyles and Sagan do concede that brain size may enhance intelligence; however, as they have argued (and as Skoyles does in his 1999 paper), it is perfectly possible to live a normal life with half a brain, and to have an average or above-average IQ (as reviewed in Skoyles, 1999). So if people with erectus-sized brains can have IQs in the normal range and live normal lives, then brains must have increased in size for another reason, which Skoyles has argued is expertise capacity.
Large brains are, clearly, not needed for high IQs.
(Also search for this paper: Reiss, A. L., Abrams, M. T., Singer, H. S., Ross, J. L., & Denckla, M. B. (1996). Brain development, gender and IQ in children: A volumetric imaging study. Brain, 119, 1763-1774, where they show a plateau, and a decrease in IQ, in the largest brains; see table 2. I also reviewed some studies on TBI and IQ showing that even those with severe TBI can have IQs in the normal range (Bigler, 1995; Wood and Rutterford, 2006; Crowe et al, 2012). Yet more evidence that people with half of their brains missing can lead normal lives and have IQs in the normal range.)