Category Archives: IQ
What is the relationship between traumatic brain injury (TBI) and IQ? Does IQ decrease? Stay the same? Increase? A few studies have looked at the relationship between TBI and IQ, and the results may be quite surprising to some. Tonight I will look through a few studies and see what the relationship is between TBI and IQ—does IQ decrease substantially or is there only a small decrease? Does it decrease for all subtests or only some?
TBI and IQ
A sample of 72 people with significant traumatic brain injuries had an average IQ of 90 (study 1; Bigler, 1995). Bigler also says that whatever correlation exists between brain size and IQ “does not persist post injury” (pg 387). This finding has large implications: can the hit to IQ be minimal depending on age, severity of injury, brain size, and education level?
As will be seen when I review another study on IQ and brain injury, every individual in the Bigler (1995) cohort was tested 42 days after brain injury. This matters, as I will get into below.
Table 1 in study 1 shows that whatever positive relationship exists between IQ and brain size before injury does not persist after injury (Bigler, 1995: 387). Study 1 showed that, even with mild-to-severe brain damage, there was little change in measured IQ. This is largely because the correlation between brain size and IQ is at most .51 (the high-end estimate, which I will use here; the true correlation is somewhere between .24 [Pietschnig et al, 2015] and .4 [Rushton and Ankney, 2009]). Even if the correlation were that high, brain size would explain only about a quarter of the variation in IQ (Skoyles, 1999). That leaves a lot of room for other causes of differences in brain size and IQ between individuals and groups.
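To make the arithmetic explicit: the share of IQ variance explained by brain size is the square of the correlation, so even the high-end estimate leaves roughly three quarters of the variance unexplained. A quick sketch, using the correlations cited above:

```python
# Variance explained (r squared) for the brain size-IQ correlations cited above.
for label, r in [("Pietschnig et al, 2015", 0.24),
                 ("Rushton and Ankney, 2009", 0.40),
                 ("high-end estimate", 0.51)]:
    print(f"r = {r:.2f} ({label}): explains {r**2:.0%} of IQ variance")
```

Even at r = .51, r² is only about .26, which is the "roughly a quarter" figure Skoyles works with.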
In study 2 (Bigler, 1995: 389-391), he looked at whether there were IQ differences between people with high and low brain volume (95 men); results are summarized in table 3 (pg 390). Those with low brain volume (1185 cm³), aged 28, had an IQ of 82.61, while those with high brain volume (1584 cm³), aged 34, had an IQ of 92 (both cohorts had similar education). Since Bigler showed in study 1 that IQ was maintained post injury, we can take these as their preinjury IQs.
In table 2, Bigler (1995) compares the IQs and brain volumes of individuals with mild-to-moderate and moderate-to-severe TBI. Mean brain volume was 1289.2 in the moderate-to-severe group and 1332.9 in the mild-to-moderate group. Amazingly, both groups had IQ scores in the normal range (90.0 for moderate-to-severe TBI and 90.7 for mild-to-moderate TBI). In study 3, Bigler (1995) shows that trauma-induced atrophic changes in the brain aren’t related to IQ postinjury, nor to the amount of focal lesion volume.
Nevertheless, Bigler (1995) shows that those with bigger brains took less of a cognitive hit after TBI than those with smaller brains. PumpkinPerson pointed me to a study showing that TBI stretches far back into our evolutionary history, with TBI seen in australopithecine fossils along with erectus fossils found throughout the world (Shively et al, 2012). This implies that TBI was a driver of brain size: if the brain is bigger, then if/when TBI is acquired, the cognitive hit will be lessened (Stern, 2002). This is a great theory for explaining why we have large brains despite the negatives that come with them: if we acquired TBI in our evolutionary past, the hit to our cognition would not be too great, and so we could still pass our genes to the next generation.
The fact that changes in IQ are minimal when brain damage is acquired shows that brain size isn’t as important as some brain-size fetishists would like you to believe. Preinjury (PI) IQ was not tested in Bigler (1995), though; I have one study where it was.
Wood and Rutterford (2006) showed results similar to Bigler (1995): minimal change to IQ after TBI. The whole cohort had a preinjury (PI) IQ of 99.79. T1 (early measure) IQ for the cohort was 90.96, while T2 (late measure) IQ was 92.37. For people with more than an 11th-grade education (n=30), IQ went from 106.57 PI to 95.19 at T1 to 100.17 at T2. For people with less than an 11th-grade education (n=44), PI IQ was 95.16, decreasing to 86.99 at T1 and rising to 87.96 at T2. Males (n=51) and females (n=23) were similar: male PI IQ was 99.04 to women’s 101.44, with T1 IQs of 90.13 for men and 90.72 for women, and T2 IQs of 92.94 for men and 92.83 for women. So this cohort shows the same trends as Bigler (1995).
The most marked postinjury difference in subtests was in Vocabulary (see table 3), with Similarities staying the same and Digit Symbol and Block Design increasing between T1 and T2. Neither group differed between T1 and T2. The only significant association with performance change over time was years of education: less educated people were at greater risk for cognitive decline (see table 2).
The difference between PI IQ and T2 IQ was 7.2 points for less educated people and 6.4 points for more educated people, though more educated people gained back more IQ points between T1 and T2 (4.98 points) than less educated people did (0.97 points). And: “The participants in our study represent a subgroup of patients with severe head injury reported in a larger study assessing long‐term psychosocial outcome.”
Bigler (1995) didn’t have PI IQ, but Wood and Rutterford (2006) did. From T1 to T2 (Bigler 1995 tested at what would be equivalent to T1 in Wood and Rutterford 2006), IQ hardly increased for those with lower education (0.97 points) but substantially increased for those with higher education (4.98 points), with a similar difference between PI IQ and T2 IQ for both groups.
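The point differences discussed above follow directly from the group means in Wood and Rutterford (2006); a minimal check of the arithmetic:

```python
# Recovery arithmetic from the Wood and Rutterford (2006) group means quoted above.
groups = {
    "more educated (>11th grade)": {"PI": 106.57, "T1": 95.19, "T2": 100.17},
    "less educated (<11th grade)": {"PI": 95.16,  "T1": 86.99, "T2": 87.96},
}
for name, iq in groups.items():
    gain = iq["T2"] - iq["T1"]     # points regained between testings
    deficit = iq["PI"] - iq["T2"]  # remaining loss relative to preinjury
    print(f"{name}: regained {gain:.2f} points, still {deficit:.2f} below PI")
```

The more educated group regains 4.98 points and ends 6.4 below PI; the less educated group regains 0.97 and ends 7.2 below PI, matching the figures in the text.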
Brain-derived neurotrophic factor (BDNF) also promotes survival and synaptic plasticity in the human brain (Barbey et al, 2014). They genotyped 156 Vietnam War soldiers with frontal lobe lesions and “focal penetrating head injuries” for the BDNF polymorphism. They did find differences between the groups with and without the BDNF polymorphism, writing that there were “substantial average differences between these groups in general intelligence (≈ half a standard deviation or 8 IQ points), verbal comprehension (6 IQ points), perceptual organization (6 IQ points), working memory (8 IQ points), and processing speed (8 IQ points) after TBI” (Barbey et al, 2014). This supports the hypothesis that BDNF is protective against TBI. And since BDNF, which is secreted by the brain during endurance running, was important in our evolutionary history (Raichlen and Polk, 2012), it could also have been a protective factor against hits to cognition acquired, say, during hunts or fights.
Nevertheless, in a sample of 181 children, Crowe et al (2012) found that children with mild-to-moderate TBI had IQ scores in the average range, whereas children with severe TBI had IQ scores in the low average range (80 to 90; table 3).
Infants with mild TBI had IQ scores of 99.9 (n=20), whereas infants with moderate TBI had IQs of 98.0 (n=23) and infants with severe TBI had IQs of 90.7 (n=7). Preschoolers with mild TBI had IQ scores of 103.8 (n=11), whereas preschoolers with moderate TBI had IQ scores of 100.1 (n=19) and preschoolers with severe TBI had IQ scores of 85.8 (n=13). Middle schoolers with mild TBI had IQ scores of 93.9 (n=10), whereas middle schoolers with moderate TBI had IQ scores of 93.5 (n=21) and middle schoolers with severe TBI had IQ scores of 86.1 (n=14). Finally, children with mild TBI in late childhood had a mean FSIQ of 107.3 (n=17), while children with moderate TBI in late childhood had IQs of 99.5 (n=15) and children with severe TBI in late childhood had FSIQs of 94.7 (Crowe et al, 2012; table 3). This shows that age of acquisition and severity influence IQ scores (along with their subtests), and that brain maturity matters for maintaining average intelligence post-TBI. Königs et al (2016) show the same trend: the outlook is better for children with mild TBI, while children with severe TBI fared far worse relative to adults (also seen in Crowe et al, 2012).
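The Crowe et al (2012) means quoted above are easier to scan laid out by age at injury and severity; a small sketch (FSIQ means copied from table 3 as reported above, sample sizes omitted):

```python
# Mean FSIQ by age at injury and TBI severity (Crowe et al, 2012, table 3).
fsiq = {
    "infancy":          {"mild": 99.9,  "moderate": 98.0,  "severe": 90.7},
    "preschool":        {"mild": 103.8, "moderate": 100.1, "severe": 85.8},
    "middle childhood": {"mild": 93.9,  "moderate": 93.5,  "severe": 86.1},
    "late childhood":   {"mild": 107.3, "moderate": 99.5,  "severe": 94.7},
}
for age, by_sev in fsiq.items():
    gap = by_sev["mild"] - by_sev["severe"]  # extra hit associated with severity
    print(f"{age}: mild {by_sev['mild']} vs severe {by_sev['severe']} "
          f"({gap:.1f}-point gap)")
```

Every age group shows a severity gradient, and only the severe groups dip below the average range, which is the point being made in the text.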
People who got into motor vehicle accidents suffered a loss of 14 IQ points (n=33) when tested 20 months postinjury (Parker and Rosenblum, 1996). The WAIS-IV Technical and Interpretive Manual also shows a similar loss of 16 points (pg 111-112); however, those 22 subjects were tested within 6 to 18 months of acquiring their TBI, with no indication of whether a follow-up was done. IQ will recover postinjury, but education, brain size, age, and severity all contribute to how many IQ points will be regained. Still, adults who suffer mild, moderate, and severe TBIs have IQs in the normal range. TBI severity also had a stronger effect on children aged 2 to 7 years at injury, with white matter volume and results on the Glasgow Coma Scale (which is used to assess consciousness after a TBI) being related to the severity of the injury (Levin, 2012).
TBI can occur with a minimal hit to IQ (Bigler, 1995; Wood and Rutterford, 2006; Crowe et al, 2012). IQs can still be in the average range across a wide range of ages and severities; however, the older one is at the time of TBI, the more likely one is to incur little to no loss in IQ (depending on severity, and even then scores remain in the average range). It is interesting to note that TBI may have been a selective factor in our brain evolution over the past 3 million years, from australopithecines to erectus to Neanderthals to us. However, the fact that people with severe TBI can have IQ scores in the normal range shows that the brain size/IQ correlation isn’t all it’s cracked up to be.
Barbey, A. K., Colom, R., Paul, E., Forbes, C., Krueger, F., Goldman, D., et al. (2014). Preservation of General Intelligence following Traumatic Brain Injury: Contributions of the Met66 Brain-Derived Neurotrophic Factor. PLoS ONE, 9(2), e88733. doi:10.1371/journal.pone.0088733
Bigler, E. D. (1995). Brain morphology and intelligence. Developmental Neuropsychology, 11(4), 377-403. doi:10.1080/87565649509540628
Crowe, L. M., Catroppa, C., Babl, F. E., Rosenfeld, J. V., & Anderson, V. (2012). Timing of Traumatic Brain Injury in Childhood and Intellectual Outcome. Journal of Pediatric Psychology, 37(7), 745-754. doi:10.1093/jpepsy/jss070
Green, R. E., Melo, B., Christensen, B., Ngo, L., Monette, G., & Bradbury, C. (2008). Measuring premorbid IQ in traumatic brain injury: An examination of the validity of the Wechsler Test of Adult Reading (WTAR). Journal of Clinical and Experimental Neuropsychology, 30(2), 163-172. doi:10.1080/13803390701300524
Königs, M., Engenhorst, P. J., & Oosterlaan, J. (2016). Intelligence after traumatic brain injury: meta-analysis of outcomes and prognosis. European Journal of Neurology, 23(1), 21-29. doi:10.1111/ene.12719
Levin, H. S. (2012). Long-term Intellectual Outcome of Traumatic Brain Injury in Children: Limits to Neuroplasticity of the Young Brain? Pediatrics, 129(2), e494-e495. doi:10.1542/peds.2011-3403
Parker, R. S., & Rosenblum, A. (1996). IQ loss and emotional dysfunctions after mild head injury incurred in a motor vehicle accident. Journal of Clinical Psychology, 52(1), 32-43. doi:10.1002/(sici)1097-4679(199601)52:1<32::aid-jclp5>3.3.co;2-1
Pietschnig, J., Penke, L., Wicherts, J. M., Zeiler, M., & Voracek, M. (2015). Meta-analysis of associations between human brain volume and intelligence differences: How strong are they and what do they mean? Neuroscience & Biobehavioral Reviews, 57, 411-432.
Raichlen, D. A., & Polk, J. D. (2012). Linking brains and brawn: exercise and the evolution of human neurobiology. Proceedings of the Royal Society B: Biological Sciences, 280(1750), 20122250. doi:10.1098/rspb.2012.2250
Rushton, J. P., & Ankney, C. D. (2009). Whole Brain Size and General Mental Ability: A Review. The International Journal of Neuroscience, 119(5), 692-732. doi:10.1080/00207450802325843
Shively, S., Scher, A. I., Perl, D. P., & Diaz-Arrastia, R. (2012). Dementia Resulting From Traumatic Brain Injury: What Is the Pathology? Archives of Neurology, 69(10), 1245-1251. doi:10.1001/archneurol.2011.3747
Skoyles, J. R. (1999). Human evolution expanded brains to increase expertise capacity, not IQ. Psycoloquy, 10(002).
Stern, Y. (2002). What is cognitive reserve? Theory and research application of the reserve concept. Journal of the International Neuropsychological Society, 8(3), 448-460. doi:10.1017/s1355617702813248
Wood, R. L., & Rutterford, N. A. (2006). Long-term effect of head trauma on intellectual abilities: a 16-year outcome study. Journal of Neurology, Neurosurgery, and Psychiatry, 77(10), 1180-1184. doi:10.1136/jnnp.2006.091553
by Scott Jameson
RaceRealist and I have been ruminating on a lot of stuff lately. Here’s a fun one: what economic system works best relative to what we know about human health? In my mind there are two approaches: the libertarian approach, and quasi-fascism.
In the libertarian approach, there’s no regulation of sugar placed in our food. That’s already the case. But here’s an improvement: you don’t have to pay for anyone’s gastric bypass after they overeat that sugar.
In the fascist approach, there is regulation of sugar, because a fascist state does not allow people to poison each other for profit. You still have to pay for others’ medical expenses, but those expenses will be lower.
Here’s an advantage to the libertarian approach: in that society, the people who stuff their faces and refuse to get off the couch (who are dumber and lazier on average, probably) will have a higher mortality rate. Eugenics need not cost a dime.
But you run into a snag, sand in the gears of your hands-off system, when Big Food kicks out a whole bunch of crappy dietary advice, at which point a minority of reasonably intelligent people will be led astray, perhaps to the grave. How could a libertarian society stop that from taking place? Would it even bother? Could the system broadly work in spite of this snag?
A libertarian society doesn’t pay for idiots to have children. That’s good, but half of your population (women) is unlikely to ever support it. Women don’t do libertarianism; observe Rand Paul’s demographic Achilles’ heel on page 25. When women asked men what to do about so-and-so’s eighth unpaid-for child, we’d have to look them in the eyes and give a deadpan “let’s hope private charity can handle it.” There was a time, before FDR, when women would’ve accepted that answer. They were still in the kitchen back then, and I don’t know how to put them back there.
A fascist society has more hands-on eugenics, possibly genome editing or embryo selection. Also good. Expensive, but obviously worth it.
We welcome your input on these issues.
As an aside, White men are well-known as the most conservative, small government, nationalist group out there in our current political atmosphere. I always hear people spewing the schmaltziest nonsense about the values of the Founding Fathers. They were, relative to our political compass, nationalist libertarians. Accordingly, modern nationalists and libertarians do best with the exact same demographics that used to vote on candidates back then: property-owning White men. The sole reason that Ron and Rand Paul couldn’t get elected is that they are too similar to the Founding Fathers. Any other candidate who blathers on about the Founding values is simply a liar, and their obvious lies show a disrespect of your intelligence.
If you’re a libertarian, but not an ethno-nationalistic and patriarchal thinker, then you simply haven’t gotten the memo: women and minorities do not want to create the same world that you do, nor will they ever. Evolution gave us women who want social safety nets and other races which are better off if they parasitize off of your tax dollars. All of the most libertarian societies that ever existed (early US, ancient Athens, Roman Republic) were entirely run by White men, and adding women to the electorate gave us the welfare state. Aristophanes was right.
We’re also ruminating on the difference between IQ and expertise. I know of no mentally complicated task of which one can be a master without being intelligent. Take the IQs of chess grandmasters and you will find no morons.
Contrast that with purely physical activities. I bet you there are some really stupid people out there who are great at dancing for example. A prodigiously capable cerebellum may not predict an equally capable frontal lobe.
Discounting tasks which exclusively require things like simple physical coordination, muscle memory, etc., I should think that IQ is the biggest component of expertise.
by Scott Jameson
For my first post on this blog, I thought I’d talk about something relevant to the mission of the blog: Political Correctness. I’m very grateful to RaceRealist for inviting me to hop on board here (although I should put out the categorical disclaimer that me posting here is not in and of itself an endorsement of any given thing he’s said over the years).
This is going to be something of an opinion essay about why denying reality is silly: because you still have to live in it. Most of my content is going to be more empirically driven, as you’re used to on this blog. Bear with me.
The SAT’s name change story is a classic case of “Political Correctness,” and is mirrored by KFC’s story of adapting to new nutritional standards. For those out of the loop: after the public realized how unhealthy fried foods are, Kentucky Fried Chicken changed its name to KFC. The point was to make the unhealthy nature of the food one conceptual extrapolation away from the name itself, in hopes that the public would not bother to recall what the “F” stood for.
SAT originally stood for “Scholastic Aptitude Test.” It was (and is) a test to determine how apt you are for scholarly endeavors. Put bluntly, it’s a somewhat sloppy IQ test oriented towards scholarly settings in particular. Of course, that name was too accurate, so it fell out of favor. The public does not want to live in a world wherein poor students are less apt than rich students and Black students are less apt than White students, and so the Scholastic Aptitude Test became the Scholastic Assessment Test. In order to be offended by that, you have to remember that what’s being assessed is aptitude and that nothing has changed. Like “KFC,” it was one conceptual extrapolation away from the reality at hand. Most people were probably too harebrained to see through that.
For some reason, they kept rolling with it. It became an alleged Reasoning Test, and then simply a series of letters that used to be an abbreviation: “the SAT,” no doubt an homage to The Colonel and the chicken he hawks. They’re both just a series of letters now – the unpleasant realities contained therein have been conceptually sterilized. Like the SAT, the nutritional content of the chicken hasn’t changed as much as the name has.
You may suspect that I’m simply flinging excrement in the general direction of The College Board, but there’s a point to what I’m saying here. What we call “Political Correctness” is a pervasive scrubbing of reality out of the consciousness of the public at large, especially the young. There was a time when people were allowed to say things like “I do not enjoy living around Blacks/Whites/Hispanics/whomever.” “Political Correctness” entered from stage left, and then Boomers had to say “bad schools” and “bad neighborhood” instead. Odds are, the Boomers understood the connotative meanings, at least at first. But if you asked millennials what those terms are, I’d bet on most of them actually being ignorant enough to think that the schools are themselves the problem. Nobody ever pointed out to these kids that almost all of the “bad schools” – the schools with low average test scores – are simply full of Hispanics (Mestizos) and African Americans who have low average test scores regardless of what school they’re in, and that the supermajority of all of the “good schools” aren’t. Anyone who doesn’t know this has been deliberately rendered ignorant of a reality that is important to their lives.
What we call “Political Correctness” is in fact the successful, systematic obfuscation of reality, and having reality perpetually hidden from you is dangerous. That is why we at this blog are NotPoliticallyCorrect. As long as I’m here, I can promise you my best attempt at discovering and conveying the truth in the NotPoliticallyCorrect fashion exemplified thus far by RaceRealist: bringing you interesting truths, obscure truths, and of course, controversial truths.
I’m not the first to make the SAT-KFC comparison, by the way. After I wrote this article, I looked around for sources only to dredge this up.
PumpkinPerson’s most recent article, Are muscular guys genetically inferior?, is a joke. He makes huge assumptions and attempts to use this ‘social experiment’ as evidence that women find ‘nerds’ more attractive. The logic here is that since East Asians are the ‘most evolved’ race and (in his world) have the least testosterone along with the highest intelligence, they represent some kind of apex of human evolution. However, the conclusions he draws from this one video are very erroneous, and I will explain why.
They are simply genetically inferior because the muscular body type branched off the evolutionary tree pre-maturely.
…No idea what he’s talking about. There is no source for the claim that the ‘muscular body type branched off the evolutionary tree prematurely.’ This is just an assumption because Africans supposedly have higher testosterone than both Europeans and East Asians, except East Asians have the highest testosterone of all three traditional races, not Africans.
After watching this video I feel like starving my muscles off (not that I recommend that).
Good luck with that.
I realize not everyone agrees with the progressive model of evolution, but real scientists do. For example, check out this phys.org article:
This article has nothing to do with progressive evolution at all. In fact, this article is basically a summary of Full House (Gould, 1996) in which Gould argues that since life began at the left wall of complexity—where no organism can get simpler—that a right-tail distribution of complexity was inevitable. I have covered this here. This is not evidence for progressive evolution. It is, in fact, the opposite. He’s never read Gould’s books so he wouldn’t know that.
Now, PP’s contention that women find nerds more attractive has no basis. When I think of a ‘nerd’, I think of a scrawny, pencil-necked person with buck teeth and thick-rimmed black glasses. The claim that women prefer this type obviously isn’t true. If it were, then why do East Asians—Japan specifically—have the lowest birthrates? Of course, social factors have a lot to do with it—birthrates decline in developed countries (Nargund, 2009; Sinding, 2009)—as do genetic ones (Harris and Nielsen, 2016). So, clearly, the more intelligent, more developed countries don’t have more children, which implies that either higher-IQ people are less desirable from a reproductive point of view (plausible) or they forgo having children until around 28 years of age (Lange, Rinderu, and Bushman, 2016). Whatever the case may be, those with higher IQs do not conceive as many children as those with lower IQs, which says something about their evolutionary fitness.
Further, women, evolutionarily speaking, sexually selected men for high levels of testosterone, which leads to bigger muscles, more defined facial features, higher levels of aggression (good for protecting genetic interests), and so on. The notion that nerds have better prospects than non-nerds, evolutionarily speaking, has no basis in reality, and for one to believe as much, the belief has to be driven by ideology.
Dixson et al (2010) showed that women rank male body types from most to least attractive as follows: the mesomorphic (muscular) and ‘average’ somatotypes, then ectomorphs (a skinnier build), and finally endomorphs (a heavier build). This study shows that, at least when it comes to European females, women prefer mesomorphic somatotypes, which, more often than not, a man over 6 feet tall will have. Does that seem like a ‘nerd’ to you? I don’t think so. Someone with the potential ability to control a room with his presence doesn’t seem like a nerd to me. These are the same people who become CEOs.
Journalist Malcolm Gladwell showed that CEOs average just under 6 feet tall. Since the average American man is 5 feet 9, the average CEO has a three-inch height advantage over the average man in America. The gap is starker at the high end: about 3.9 percent of American men are 6’2″ or taller, while in Gladwell’s sample 30 percent of CEOs were. So, Gladwell states, the lack of minorities and women in high positions has a plausible explanation: height. Men are, on average, taller than women. Tall men earn more money than their shorter counterparts. Taller children also perform better on cognitive tests, taller men earn more money in Mexico, and taller children do better on learning tests in India (Lawson and Spears, 2016).
Women want taller men more than men want shorter women (Stulp, Buunk, and Pollet, 2013). Tall men are also more likely to have a mesomorphic somatotype, and those somatotypes are seen as the most attractive. Does that seem like a nerd’s somatotype to you, or an athlete’s? On the other hand, women aren’t attracted to short men (Nettle, 2002). East Asians—the so-called ‘most evolved race’—are the shortest race. That doesn’t look too good for them.
Furthermore, while East Asian men see themselves as attractive and dateable, they don’t believe society sees them that way. Forty-six percent of the sample said they could recall at least one instance of hearing someone state that they do not date Asian men, while eleven percent of Asian men had heard it at least six times. In OkCupid’s 2009 race/dating data, 18 percent of Asian women (3,381 yes responses) would date someone of their own background/skin color while 82 percent (17,227) wouldn’t! So much for the ‘most evolved’ race having dating prospects within their own race. East Asian men said yes to the question at a rate of 24 percent (7,965 yes) and no 76 percent of the time (25,358).
To put this further into perspective, white women said yes to the question at a rate of 54 percent (154,595) and no at a rate of 46 percent (132,497), while white men said yes at a 40/60 yes/no rate (183,360/277,827, respectively). In total, 45 percent of whites would prefer to date someone of their skin color/ethnicity while 55 percent wouldn’t (337,955/410,324), whereas non-whites said yes to the question 20 percent of the time and no 80 percent of the time (56,080/222,484).
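The aggregate rates in the last two paragraphs can be recovered from the raw yes/no counts; a quick sanity check of the arithmetic:

```python
# Yes rates recomputed from the OkCupid yes/no counts quoted above.
counts = {
    "white women":        (154_595, 132_497),
    "white men":          (183_360, 277_827),
    "whites overall":     (337_955, 410_324),
    "non-whites overall": (56_080, 222_484),
}
for group, (yes, no) in counts.items():
    print(f"{group}: {yes / (yes + no):.0%} yes")
```

The recomputed rates (54%, 40%, 45%, and 20%) match the rounded figures quoted above.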
A 2014 follow-up found the same thing, though with Asian women showing somewhat more positive ratings toward Asian males (while men of all races didn’t find black women particularly attractive). However, Asian men were seen as the least attractive in the whole sample. Asian males are also seen as less attractive than males of other races (Fisman et al, 2008). In their sample, even after running regressions, Asian women found white, black, and ‘Hispanic’ men more attractive than Asian men. They also show that even Asian men find white, black, and ‘Hispanic’ females more attractive than Asian females.
In sum, PP’s contentions and reaches in his article are wrong. ‘Nerds’ (in the way I’m defining the word) are not more successful than the alpha CEOs who are over 6’2”. PP seems to have an aversion to testosterone (he believes it is the cause of racial differences in prostate cancer, but vitamin D deficiencies are a more likely culprit). East Asian men, the so-called ‘most evolved’ men of the ‘most evolved’ race, do not fare well in terms of physical attractiveness, and this may be one reason why the Japanese birthrate is declining, with the average Japanese woman having only one child during her lifetime (Nomura and Koizumi, 2016). PP’s theory makes no sense, because women favor mesomorphic somatotypes. Mesomorphs are more likely to be CEOs, to be more cognitively adept, and to make more money than their shorter, skinnier counterparts. Building evolutionary theories off of one (obviously fake) ‘social experiment’ is ridiculous. East Asian men, the so-called ‘most evolved’ men, fall short in the dating game, being seen as less attractive.
Dixson, B. J., Dixson, A. F., Bishop, P. J., & Parish, A. (2010). Human Physique and Sexual Attractiveness in Men and Women: A New Zealand-U.S. Comparative Study. Archives of Sexual Behavior, 39(3), 798-806. doi:10.1007/s10508-008-9441-y
Fisman, R., Iyengar, S. S., Kamenica, E., & Simonson, I. (2008). Racial Preferences in Dating: Evidence from a Speed Dating Experiment. SSRN Electronic Journal. doi:10.2139/ssrn.610589
Gould, S. J. (1996). Full House: The Spread of Excellence from Plato to Darwin. New York: Harmony Books.
Harris, K., & Nielsen, R. (2016). The Genetic Cost of Neanderthal Introgression. Genetics. doi:10.1101/030387
Lange, P. A., Rinderu, M. I., & Bushman, B. J. (2016). Aggression and Violence Around the World: A Model of Climate, Aggression, and Self-control in Humans (CLASH). Behavioral and Brain Sciences, 1-63. doi:10.1017/s0140525x16000406
Nargund, G. (2009). Declining birth rate in developed countries: A radical policy re-think is required. Facts, Views & Vision in ObGyn, 1(3), 191-193.
Nettle, D. (2002). Women’s height, reproductive success and the evolution of sexual dimorphism in modern humans. Proceedings of the Royal Society B: Biological Sciences, 269(1503), 1919-1923. doi:10.1098/rspb.2002.2111
Nomura, K., & Koizumi, A. (2016). Strategy against aging society with declining birthrate in Japan. Industrial Health, 54(6), 477-479. doi:10.2486/indhealth.54-477
Sinding, S. (2009). Population, poverty and economic development. Philosophical Transactions of the Royal Society B, 364.
Stulp, G., Buunk, A. P., & Pollet, T. V. (2013). Women want taller men more than men want shorter women. Personality and Individual Differences, 54(8), 877-883. doi:10.1016/j.paid.2012.12.019
I just came across a video on YouTube published yesterday called “White people are not 100% human (Race differences) (I.Q debunked)”, with, of course, outrageous claims (the usual from Afrocentrists). I already left a comment proving his nonsense incorrect, but I thought I’d further expound on it here.
His first piece of ‘evidence’ that whites aren’t 100 percent human is that some individuals are born with tails. Outliers are meaningless, of course. The human tail results from the unsuccessful inhibition of the Wnt-3a gene: when this gene fails to signal the cell death of the tail in early embryonic development, a person is born with a small vestigial tail. This doesn’t prove anything.
His next assertion is that since “94 percent of whites test positive for Rh blood type” and that “as a result, they are born with a tail”, then whites must have interbred with rhesus monkeys in the past. This is ridiculous. This blood type was named in error. The book Blood Groups and Red Cell Antigens sums it up nicely:
The Rh blood group is one of the most complex blood groups known in humans. From its discovery 60 years ago where it was named (in error) after the Rhesus monkey, it has become second in importance only to the ABO blood group in the field of transfusion medicine. It has remained of primary importance in obstetrics, being the main cause of hemolytic disease of the newborn (HDN).
It was wrongly thought that the agglutinating antibodies produced in the mother’s serum in response to her husbands RBCs were the same specificity as antibodies produced in various animals’ serum in response to RBCs from the Rhesus monkey. In error, the paternal antigen was named the Rhesus factor. By the time it was discovered that the mother’s antibodies were produced against a different antigen, the rhesus blood group terminology was being widely used. Therefore, instead of changing the name, it was abbreviated to the Rh blood group.
As you can see, this is another ridiculous and easily debunked claim. One only needs to do a bit of non-biased reading into something to get the truth, which some people are not capable of.
What he says next, I don’t really have a problem with. He just shows articles stating that Neanderthals had big brains to control their bodies and that they had a larger, elongated visual cortex. However, there is archeological evidence that our cognitive superiority over Neanderthals is a myth (Villa and Roebroeks, 2014). What he shows in this section is the truest thing he’ll say, though.
Then he shows how African immigrants to America have higher educational achievement than whites and immigrant East Asians. However, it’s clear he’s not heard of super-selection. The people with the means to leave will do so, and, most likely, those with the means are the more intelligent ones in the group. We also can’t forget about ‘preferential treatment’, AKA Affirmative Action.
The concept of ‘multiple intelligences’ is then brought up. The originator of the theory, Howard Gardner, rejects general intelligence, dismisses factor analysis, and doesn’t defend his theory with quantitative data, drawing instead on findings from fields ranging from anthropology to zoology, completely devoid of any psychometric evidence (Herrnstein and Murray, 1994: 18). The Alternative Hypothesis also has a thorough debunking of this claim.
He then makes the claim that hereditarians assume that environment/experience play no role in performance on IQ tests or in life success. In reality, hereditarians hold that the heritability of IQ within populations is roughly 80/20 genetics to environment, with the same split posited for the black-white gap (Rushton and Jensen, 2005: 279). Another easily refuted claim.
The term ‘inferior’ is brought up due to whites’ supposed ‘inferiority’, though we know that terms such as those have no basis in evolutionary biology.
He claims that a black man named Jesse Russell invented the cell phone, when in reality a white man named Martin Cooper did. He claims that Lewis Latimer invented the filament lightbulb, when Joseph Swan had demonstrated a working carbon-filament lamp in the UK as early as 1860. Of course, individual outliers are meaningless to group success, as they don’t reflect the group average as a whole, so these discussions are meaningless.
He finally claims that the “black Moors civilized Europe”. Europeans didn’t need to “be civilized”; people don’t seem to understand that empires/kingdoms rise and fall and go through highs and lows. That doesn’t stop people from pushing a narrative, though. Further, the Moors were not black. People love attempting to create their own fantasy history in which their biases are a reality.
I don’t know why people have to make these idiotic and easily refuted videos, full of lies that push people further from the truth of racial differences, genetics, and history as a whole. Biases such as these cloud people’s minds to the truth, and when the truth is shown to them, refuting their biases and their twisting of history, genetics, and IQ, they look at it as an attack on what they deem to be true, despite all of the conflicting, non-biased evidence shown to them. Afrocentric loons need to be refuted, lest people believe their lies, misconceptions, and twistings of history.
The denial of human nature is extremely prevalent, most noticeably in our institutions of higher learning. To most academics, the possibility that population differences could be genetic in nature is troubling. However, denying genetic/biological causes for racial differences is 1) intellectually dishonest; 2) likely to lead to negative health outcomes, due to the assumption that all human populations are the same; and 3) a ‘lie of equality’ that will not allow all human populations to reach their ‘potential’, again because of the implicit assumption that all human populations are the same. Anti-hereditarians fully deny any and all genetic explanations for human differences, believing that human brain evolution somehow halted around 50-100 kya. Numerous studies show that race is a biological reality; it doesn’t matter what we call the clusters, as the names are the social constructs. The contention is that ‘all brains are the same color’ (Nisbett, 2007; for comment see my article Refuting Richard Nisbett), and that evolution in differing parts of the world over the past 50,000 years was not enough to produce any meaningful differences between populations. But to accept that, you must accept that the brain is the only organ immune to natural selection. Does that make any sense? I will show that these differences do exist and should be studied, as free of bias as possible, with every possible hypothesis being looked at and not discarded.
Evolution is true. It’s not ‘only a theory’ (as some anti-evolutionists contend). Anti-evolutionists do not understand the definition of the word ‘theory’. Richard Dawkins (2009) wrote that a theory is a scheme or system of ideas or statements held as an explanation or account of a group of facts or phenomena. This is in stark contrast to the layperson’s definition of the word theory, which means ‘just a guess’. Evolution is a fact. What biologists argue with each other about is the mechanisms behind evolution, for any quote-mining Creationists out there.
We know that evolution is a fact and that it is the only game in town (Dawkins, 2009) to explain the wide diversity and variation we see on our planet. However, numerous scholars deny the effect of evolution on human behavior (most reside in the social sciences, but prominent biologists, too, have denied the effect of human evolution on behavior and cognition, or implied there were no differences between us and our ancestors; Gould 1981, 1996; for a review of Gould 1996, see my articles Complexity, Walls, 0.400 Hitting and Evolutionary “Progress” and Stephen Jay Gould and Anti-Hereditarianism; Mayr 1963; see Cochran and Harpending 2009). A prominent neuroscientist I have written about here, Herculano-Houzel, implied that Neanderthals and Antecessor may have been just as intelligent as we are, due to a neuronal count in a similar range to ours (Herculano-Houzel 2013). This raises an interesting question (which I have tackled here and will return to in the future): did our recent hominin ancestors at least have the capacity for intellect similar to ours (Villa and Roebroeks, 2014; Herculano-Houzel and Kaas, 2011)? It is interesting that neuronal scaling rules hold for our extinct ancestors, and this question is most definitely worth looking into.
Whatever the case may be in regards to recent human evolution and our extinct hominin ancestors, the pace of human evolution has accelerated over the past 10,000 years (Cochran and Harpending, 2009; Wade, 2014). This began with the dispersal of Anatomically Modern Humans (AMH) out of Africa around 70 kya; with that geographic isolation, populations began to diverge, with little interbreeding between them. This is noticed most in ‘Native’ Americans, who show no gene flow with other populations due to being genetically isolated (Villena et al, 2000). Who’s to say that evolution stops at the neck and no further evolution occurs in the brain? Is the brain itself exempt from the laws of natural selection? We know that there was little to no gene flow between populations before the advent of modern-day technology and vehicles; we know that humans differ on morphological and anatomical traits; why are genetic differences out of the question, especially when they may explain, in part, some of the variation between populations?
We know that evolution is true beyond a reasonable doubt. So why do some researchers contend that the human brain is exempt from such selective pressures?
A theoretical article by Winegard, Winegard, and Boutwell (2017) was released on January 17th. In it, they argue that social scientists should integrate HBD into their models. Social scientists do not integrate genetics into their models, and the longer one studies the social sciences, the more likely one is to deny human nature, regardless of political leaning (Perry and Mace, 2010). This poses a problem. By completely ignoring a huge variable (possible genetic differences), social scientists risk harming people’s health, as race is a very informative marker when discussing disease acquisition, as well as whether certain drugs will work on two individuals of different races (Risch et al, 2002; Tang et al, 2005; Wade, 2014). People who deny the usefulness of race, even in a medical context, endanger the lives of individuals from different races/ethnies, since they assume that all humans are the same inside despite ‘superficial differences’ between populations.
The notion that all human populations are the same—genetic isolation and evolution in differing ecosystems/climates/geographic locales be damned—is preposterous to anyone with a true understanding of evolution. Why should man’s brain be the only organ on earth exempt from the forces of natural selection? Why do egalitarians assume that all humans have the same psychological faculties, despite the rapid evolution that has occurred within our species over the last 10,000 years?
For some of the most obvious examples of natural selection acting on human populations, one should look to the Inuit (Fumagalli, 2015; Daanen and Lichtenbelt, 2016; NIH, 2015; Cardona et al, 2014; Tishkoff, 2015; Ford, McDowell, and Pierce, 2015; Galloway, Young, and Bjerregaard, 2012; Harper, 2015). Global warming is troubling to some researchers, with many suggesting that it will have negative effects on the health and food security of the Inuit (Ford et al, 2014, 2016; Ford, 2012, 2009; Wesche, 2010; Furgal and Seguin, 2006; McClymont and Myers, 2012; Petrasek et al, 2015; Rosol, Powell-Hellyer, and Chan, 2016; Petrasek, 2014; WHO, 2003). I could go on and on citing journal articles for both claims, but you get the point already. The main point is this: we know the Inuit have evolved for their climate, and a (possible) climate change would then have a negative effect on their quality of life due to their adaptations to the cold-weather climate. However, egalitarians still contend, against these examples and numerous others I could cite, that any and all differences within and between human populations can be explained by socio-cultural factors and not any genetic ones.
Some of the best examples of natural selection come from genetic isolation in geographic locales that are the complete opposite of the environment of evolutionary adaptedness (EEA; Kanazawa, 2004), the African savanna in which we evolved. I did entertain the Savanna hypothesis, and while I do believe it could explain a lot of the variance in IQ between countries (Kanazawa, 2007), Kanazawa’s hypothesis doesn’t square with what we know about human evolution over the past 10,000 years.
The most obvious differences we can see between populations are differences in skin color. Skin color does not signify race, per se, but it is a good indicator. Skin color is an adaptation to UV radiation (Jablonski and Chaplin, 2010, 2000; Juzenienne et al, 2009; Jeong and Rienzo, 2015; Hancock, et al, 2010; Kita and Fraser, 2016; Scheinfeldt and Tishkoff, 2013), and is therefore an adaptation to climate. Dark skin protects against skin cancer (Brenner and Hearing, 2008; D’Orazio et al, 2010; Bradford, 2009), and skin cancer is a possible selective force behind black pigmentation of the skin in early hominin evolution (Greaves, 2014). With these adaptations in skin color between genetically and geographically isolated populations, are changes in the brain, however small, really out of the question?
A better population to bring up in regards to geographic isolation having an effect on human evolution is the Tibetans. For instance, Tibetans have higher total lung capacities than the Han Chinese (Droma et al, 1991). There are even differences in lung capacity between Tibetans and Han Chinese who live at the same altitude (Yangzong et al, 2013), and the same is seen in peoples living in the Andes (Beall, 2007). Tibetans evolved at higher elevations than the Han Chinese, who lived closer to sea level, so it makes sense that they would be selected for the ability to take deeper breaths. They also have a larger chest circumference and greater lung capacity than Han Chinese who live at lower altitudes (Gilbert-Kawai et al, 2014).
Admittedly, the usefulness of race in regards to human differences is a touchy subject: so much so that social scientists do not take genetics into account in their models. However, researchers in the relevant fields accept the usefulness of race (Risch et al, 2002; Tang et al, 2005; Wade, 2014; Sesardic, 2010), so the social scientists’ refusal can be set aside. Race is a social construct, yes. But no matter what we call these clusters, clines, demes, races, ethnies—whatever name you want to use to describe them—this does not change the fact that race is a useful category in biomedical research. Race matters when talking about bone marrow transplants, for instance; by treating all populations as the same, with no variation between them, people are effectively saying that differences between people in a biomedical context do not exist. Ignoring heritable human variation will lead to disparate health outcomes for human populations under the assumption that all humans are the same. Is that what we want? Is that what race-deniers want?
So there are anatomical and physiological differences between human populations (Wagner and Hayward, 2000), with black Americans having a different morphology and lower fat-free body mass on average in comparison to white Americans. This, then, is one of the variables that dictates racial differences in sports, along with muscle fiber type explaining a large portion of the variance, in my opinion. No one denies that blacks and whites differ at elite levels in baseball, football, swimming and jumping, and bodybuilding and strength sports. But if one accepts that these morphological and anatomical differences between the races come down to evolution, one must then accept that races/ethnies also differ in the brain, thusly destroying the egalitarian fantasy that all genetically isolated human populations are the same in the brain. Wade (2014) writes on page 106:
“… brain genes do not lie in some special category exempt from natural selection. They are as much under evolutionary pressure as any other category of gene”
This is a hard pill to swallow for race-deniers, especially those who emphatically deny any type of selection pressure on the human brain within the past 10,000 to 100,000 years.
Winegard, Winegard, and Boutwell (2017) write:
Consider an analogy that might make this clear while simultaneously illuminating the explanatory importance of population differences. Most cars are designed from the same basic blueprint and consist of similar parts—an internal combustion engine, a gas tank, a chassis, tires, bearings, spark plugs, et cetera. Cars as distinct as a Honda Civic and a Subaru Outback are built from the same basic blueprint and comprised of the same parts; so, in this sense, there is a “universal car nature” (Newton 1999). However, precise, correlated changes in these parts can dramatically change the characteristics of a car.
Humans, like cars, are built from the same basic body plan. They all have livers, lungs, kidneys, brains, arms, and legs. And these structures are built from the same basic building blocks, tissues, which are built of proteins, which are built of amino acids, et cetera. However, small changes in the structures of these building blocks can lead to important and scientifically meaningful differences in function.
Put in this context, yes, there is a ‘universal human nature’, but the application of that human nature will differ depending on what a population had to do to survive in its climate/ecosystem. And, over time, populations will diverge from each other, both physically and mentally. The authors also argue that societal differences between Eurasians (Europeans and East Asians) can be explained partly by genetic differences. Indeed, the races do differ on the Big Five personality traits, with heritable components explaining 40 to 60 percent of the variation (Power and Pluess, 2015). So some of the cultural differences between Europeans and East Asians must come down to biological variation.
One of the easiest ways to see the effects of cultural/environmental selective pressures in humans is to look at Ashkenazi Jews (Cochran et al, 2006). Due to Ashkenazi Jews being barred from numerous occupations, they were confined to a few cognitively demanding occupations. Over time, only the Jews that could handle these occupations would prosper, further selecting for higher intelligence due to the cognitive demands of the jobs they were able to acquire. Thus, Ashkenazi Jews who could handle the few occupations they were allowed to do would breed more and pass on variants for higher intelligence to their offspring, whereas those Jews who couldn’t handle the cognitive demands of the occupation were selected out of the gene pool. This is one situation in which natural selection worked swiftly, and is why Ashkenazi Jews are so overrepresented in the fields of academia today—along with nepotism.
Winegard, Winegard, and Boutwell (2017) lay out six basic principles for a new Darwinian paradigm, as follows:
- Variation is the grist for the mill of natural selection and is ubiquitous within and among human populations.
- Evolution by natural selection has not stopped acting on human traits and has significantly shaped at least some human traits in the past 50,000 years.
- Current hunter-gatherer groups might be slightly different from other modern human populations because of culture and evolution by natural selection acting to influence the relative presence, or absence, of trait-relevant alleles in those groups. Therefore, using extant hunter-gatherers as a template for a panhuman nature is problematic.
- It is probably more accurate to say that, while much of human nature is universal, there may have been selective tuning on various aspects of human nature as our species left Africa and settled various regions of the planet (Frost 2011).
- The human brain is subject to selective forces in the same way that other organ systems are. Natural selection does not discriminate between genes for the body and genes for the brain (Wade 2014).
- The concept of a Pleistocene-based environment of evolutionary adaptedness (EEA) is likely unhelpful (Zuk 2013). Individual traits should be explored phylogenetically and historically. Some human traits were sculpted in the Pleistocene (or before) and have remained substantially unaltered; some, however, have been further shaped in the past 10,000 years, and some probably quite recently (Clark 2007). It remains imperative to describe what selection pressures might have been actively shaping human nature moving forward from the Pleistocene epoch, and how those ecological pressures might have differed for different human populations.
No stone should be left unturned when attempting to explain population differences between geographically isolated peoples, and these six principles are a great start, which all social scientists should introduce into their models.
As I brought up earlier, Kanazawa’s (2004b) hypothesis doesn’t make sense in regards to what we know about the evolution of human psychology; thus, any proposed evolutionary mismatch in regards to our societies does not make much sense either. However, one mismatch that does need to be looked into is the negative mismatch between our biology and modern-day Western diets. Agriculture was both a gift and a curse in human history. Yes, without the advent of agriculture 10,000 years ago we would not have the societies we have today; on the other hand, we have higher rates of disease compared to our hunter-gatherer ancestors. This is one evolutionary mismatch that cannot and should not go ignored, as it has devastating effects on populations that consume a Western diet—which we did not evolve to eat.
Winegard, Winegard, and Boutwell (2017) then discuss how their new Darwinian paradigm could be used by researchers: 1) look for differences among human populations; 2) once population differences are found, approach causal analyses neutrally; 3) examine a broad range of data to determine whether the trait or traits in question are heritable; and 4) test the posited biological cause more in depth. Without understanding—and using—biological differences between human populations, the quality of life for some populations will be diminished, all for the false notion of ‘equality’ between human races.
There are huge barriers in place to studying human differences, however. Hayden (2013) documents differing taboos in genetics, with intelligence having a high taboo rating. Of course, we HBDers know that intelligence is a highly heritable trait, largely genetic in nature, and so studying these differences between human populations may lead to some uncomfortable truths for some people. On the 150th anniversary of Darwin’s On the Origin of Species, Ceci and Williams (2009) said that “the scientific truth must be pursued” and that researchers must study race and IQ, much to the chagrin of anti-hereditarians (Horgan, 2013). Horgan, though, writes something very troubling in regards to this research, and to free speech in our country as a whole:
Some readers may wonder what I mean by “ban,” so let me spell it out. I envision a federal prohibition against speech or publications supporting racial theories of intelligence. All papers, books and other documents advocating such theories will be burned, deleted or otherwise destroyed. Those who continue espousing such theories either publicly or privately (as determined by monitoring of email, phone calls or other communications) will be detained indefinitely in Guantanamo until or unless a secret tribunal overseen by me says they have expressed sufficient remorse and can be released.
Whether he’s joking or not is beside the point. The point is that these topics are extremely sensitive to the lay public, and with articles like these printed in popular publications, the reader gets an extremely biased look at the debate and has his mind made up for him. This is the definition of intellectual dishonesty: attempting to sway lay readers’ opinions on a subject they are ignorant of with an appeal to emotion. Shouldn’t all things be studied scientifically, without ideological biases?
Speaking about the ethics of putting this information out to the general public, Winegard, Winegard, and Boutwell (2017) write:
If researchers do not responsibly study and discuss population differences, then they leave an abyss that is likely to be filled by the most extreme and hateful writings on population differences. So, although it is understandable to have concerns about the dangers of speaking and writing frankly about potential population differences, it is also important to understand the likely dangers of not doing so. It is not possible to hide the reality of human variation from the world, not possible to propagate a noble lie about human equality, and the attempt to do so leaves a vacancy for extremists to fill.
This is my favorite quote in the whole paper. It is NOT possible to hide the reality of HBD from the world; anyone with eyes can see that humans do differ. Attempting to continue the feel-good liberal lie of human equality will lead to devastating effects in all countries/populations due to the implicit assumption that all human groups are the same in their cognitive and mental faculties.
The denial of genetic human differences could, as brought up earlier in this article, lead to negative health outcomes between populations. Black Americans have higher rates of hypertension than white Americans (Fuchs, 2011; Ferdinand, 2007; Ortega, Sedki, and Nayer, 2015; Nesbitt, 2009; Wright et al, 2005). Overlooking possible genetic differences as a causal factor in racial differences will mean the deaths of many, since it encourages the belief that all people are the same and that all differences come down to the environment. This is not true, and believing so is extremely dangerous to the health of all populations in the world.
Epigenetic signatures of ethnicity may be biomarkers for shared cultural experiences. Seventy-six percent of the epigenetic variation between Mexicans and Puerto Ricans in one study was attributable to DNA methylation—an epigenetic mechanism cells use to control gene expression—leaving 24 percent of the effect due to unknown factors, probably environmental, social, and cultural differences between the two ethnies (Galanter et al, 2017). This is but one of many effects that culture can have on the genome, leading to differences between two populations, and is good evidence for the contention that the different races/ethnies evolved different psychological mechanisms due to genetic isolation in different environments.
We must now ask the question: what if the hereditarian hypothesis is true (Gottfredson, 2005)? If the hereditarian hypothesis is true, Gottfredson argues, special consideration should be given to those found to have a lower IQ, with better training and schooling that specifically targets those individuals at risk of being less able due to their lower intelligence. This is one way the hereditarian hypothesis can help race relations in the country: people will (hopefully) accept intrinsic differences between the races. What Gottfredson argues in her paper will hopefully pacify anti-hereditarians, as less able people of all races/ethnicities will still get the extra help they need in finding work and getting schooling/training/jobs that accommodate their intelligence.
People accept genetic causes for racial differences in sports, yet emphatically deny that human races/ethnies differ in the brain. The denial of human nature—racially and ethnically—is the next hurdle for us to jump over. Once we accept that these population differences can, in part, be explained by genetic factors, we can then look to other avenues to see how and why these differences between populations occur and whether anything can be done to ameliorate them. Ironically, anti-hereditarians do not realize that their policies and philosophy actively hinder their own goals, and that refusing to accept biological causes—if only to see them researched and weighed against other explanations—will lead to further inequality, while they scratch their heads without realizing that the cause is the one variable they have discarded: genetics. Still, I suspect this won’t happen, and the same non-answers will be given in response to findings on how the human races differ psychologically (Gottfredson, 2012). The races do differ in biologically meaningful ways, and denying or disregarding the truth will not make these differences disappear. Social scientists must take these differences into account in their models, and seriously entertain them like any other hypothesis, or else they will never fully understand human nature.
Is the human brain ‘special’? Not according to Herculano-Houzel; our brains are just linearly scaled-up primate brains. We have the number of neurons predicted for a primate of our body size. But what does this have to do with general intelligence? Evolutionary psychologists also contend that the human brain is not ‘special’; that it is an evolved organ just like the rest of our body. Satoshi Kanazawa (2003) proposed the ‘Savanna Hypothesis‘, which states that more intelligent people are better able to deal with ‘evolutionarily novel’ situations (situations that we didn’t have to deal with in our ancestral African environment, for example), whereas he purports that general intelligence does not affect an individual’s ability to deal with evolutionarily familiar entities and situations. I don’t really have a stance on it yet, though I do find it extremely interesting, as it makes (intuitive) sense.
Kanazawa (2010) suggests that general intelligence may both be an evolved adaptation and an ‘individual-difference variable’. Evolutionary psychologists contend that evolved psychological adaptations are for the ancestral environment which was evolved in, not in any modern-day environment. Kanazawa (2010) writes:
The human brain has difficulty comprehending and dealing with entities and situations that did not exist in the ancestral environment. Burnham and Johnson (2005, pp. 130–131) referred to the same observation as the evolutionary legacy hypothesis, whereas Hagen and Hammerstein (2006, pp. 341–343) called it the mismatch hypothesis.
From an evolutionary perspective, this does make sense. A perfect example is Eurasian societies vs. African ones. You can see the evolutionary novelty in Eurasian civilizations, while African societies are much closer (though obviously not fully so) to our ancestral environment. Thusly, since the situations found in Africa are not evolutionarily novel, they do not take high levels of g to survive in, while Eurasian societies (which are evolutionarily novel) take much higher levels of g to live and survive in.
Kanazawa rightly states that most evolutionary psychologists and biologists contend that there have been no changes to the human brain in the last 10,000 years, in line with his Savanna Hypothesis. However, as I’m sure all readers of my blog know, there were sweeping changes to the human genome in the last 10,000 years due to the advent of agriculture, and new alleles have obviously appeared in our genome; however, “it is not clear whether these new alleles have led to the emergence of new evolved psychological mechanisms in the last 10,000 years.”
General intelligence poses a problem for evolutionary psychology, since evolutionary psychologists contend that “the human brain consists of domain-specific evolved psychological mechanisms” which evolved specifically to solve adaptive problems of survival and fitness. Thusly, Kanazawa proposes, in contrast to other evolutionary psychologists, that general intelligence evolved as a domain-specific adaptation to deal with evolutionarily novel problems. So, Kanazawa says, our ancestors didn’t really need to think in order to solve recurring problems. He gives three examples of evolutionarily novel situations that would have required reasoning and higher intelligence to solve:
1. Lightning has struck a tree near the camp and set it on fire. The fire is now spreading to the dry underbrush. What should I do? How can I stop the spread of the fire? How can I and my family escape it? (Since lightning never strikes the same place twice, this is guaranteed to be a nonrecurrent problem.)
2. We are in the middle of the severest drought in a hundred years. Nuts and berries at our normal places of gathering, which are usually plentiful, are not growing at all, and animals are scarce as well. We are running out of food because none of our normal sources of food are working. What else can we eat? What else is safe to eat? How else can we procure food?
3. A flash flood has caused the river to swell to several times its normal width, and I am trapped on one side of it while my entire band is on the other side. It is imperative that I rejoin them soon. How can I cross the rapid river? Should I walk across it? Or should I construct some sort of buoyant vehicle to use to get across it? If so, what kind of material should I use? Wood? Stones?
These are great examples of ‘novel’ situations that may have arisen, in which our ancestors needed to ‘think outside of the box’ in order to survive. Situations like these may be why general intelligence evolved as a domain-specific adaptation for ‘evolutionarily novel’ situations. Clearly, when such unfamiliar events occurred, the ancestors who could reason better would survive and pass on their genes, while those who could not would die and be selected out of the gene pool. So general intelligence may have evolved to solve the new and unfamiliar problems that plagued our ancestors. What this suggests is that intelligent people are better than less intelligent people at solving problems only if those problems are evolutionarily novel. On the other hand, situations that are evolutionarily familiar to us do not take higher levels of g to solve.
For example, more intelligent individuals are no better than less intelligent individuals in finding and keeping mates, but they may be better at using computer dating services. Three recent studies, employing widely varied methods, have all shown that the average intelligence of a population appears to be a strong function of the evolutionary novelty of its environment (Ash & Gallup, 2007; D. H. Bailey & Geary, 2009; Kanazawa, 2008).
Who is more successful, on average, in modern society? I don’t even need to say it: the more intelligent person. However, faced with an evolutionarily familiar problem, there would be no difference between them in figuring out a solution, because evolution has already ‘outfitted’ us with a way to deal with such problems, no logical reasoning required.
Kanazawa then talks about evolutionary adaptations such as bipedalism (we all walk, but some of us are better runners than others); vision (we can all see, but some have better vision than others); and language (we all speak, but some people are more proficient in their language and learn it earlier than others). These are all adaptations, yet there is extensive individual variation in each of them. Furthermore, the first evolved psychological mechanism to be discovered was cheater detection: the ability to know whether you have been cheated in a ‘social contract’ with another individual. Another evolved adaptation is theory of mind. People with Asperger’s syndrome, for instance, differ in the capacity of their theory of mind. Kanazawa asks:
If so, can such individual differences in the evolved psychological mechanism of theory of mind be heritable, since we already know that autism and Asperger’s syndrome may be heritable (A. Bailey et al., 1995; Folstein & Rutter, 1988)?
A very interesting question. Of course, since it’s 2017, we have made great strides in these fields, and we now know these two conditions to be highly heritable. Can the same be said for theory of mind? That is a question I will return to in the future.
Kanazawa’s hypothesis does make a lot of sense, and there is empirical evidence to back his assertions. It proposes that evolutionarily familiar situations do not take higher levels of general intelligence to solve, whereas novel situations do. Think about that. Society is the ultimate evolutionary novelty. Who succeeds the most, on average, in society? The more intelligent.
Go outside. Look around you. Can you tell me which things were in our ancestral environment? Trees? Grass? Not really; they aren’t the same kinds we knew from the savanna. The only constants are men, women, boys, and girls.
This can, however, be put another way. Our current environment is an evolutionary mismatch. We are evolved for our past environments, and, as we all know, evolution is non-teleological: it has no direction. We were not selected for possible future environments, since there is no way to know what the future holds, given the contingencies of ‘just history’. Anything can happen in the future; we have no knowledge of future occurrences. These can be called mismatches, or novelties, and those who are more intelligent reason more logically through them because they are more adept at surviving evolutionarily novel situations. Kanazawa’s theory provides a wealth of information and evidence to back his assertion that general intelligence is domain-specific.
This is yet another piece of evidence that our brain is not special. Why keep believing that our brain is special when the evidence against that view keeps mounting? Our brains evolved and were selected for just like any other organ in our bodies, as happened for every single organism on earth. Race-realists like to ask, “How can egalitarians believe that we stopped evolving at the neck 50,000 years ago?” Well, to those race-realists who contend that our brains are ‘special’, I say: “How can our brain be ‘special’ when it’s an evolved organ like any other in our body and was subject to the same (or similar) evolutionary selective pressures?”
In sum, the brain has problems dealing with things that were not in its ancestral environment. However, those who are more intelligent will have an easier time dealing with evolutionarily novel situations than people with lower intelligence. Look at places in Africa where development is still low: they clearly don’t need high levels of g to survive, as their environment is fairly close to the ancestral one. Conversely, Eurasian societies are much more complex and thus evolutionarily novel. This may be one reason for societal differences between these populations. It is an interesting question to consider, which I will return to in the future.