Minimalist Races Exist and are Biologically Real
3050 words
People look different depending on where their ancestors came from; this is not a controversial statement, and any reasonable person would agree with it. What most do not realize, though, is that even one who asserts that biological races do not exist, while allowing for patterns of distinct visible physical features between human populations that correspond with geographic ancestry, has thereby granted that race exists as a biological reality: what underlies the physical characters is biological in nature, and the geographic ancestry corresponds to physical differences between continental groups. These populations can then be shown to be real in genetic analyses, and to correspond to traditional racial groups. So we can say that Caucasians, East Asians, Oceanians, black Africans, and Amerindians are continental-level minimalist races, since they meet all of the criteria needed to be called minimalist races: (1) distinct facial characters; (2) distinct morphologic differences; and (3) a unique geographic location of origin. Therefore minimalist races exist and are a biological reality. (Note: There is more variation within races than between them (Lewontin, 1972; Rosenberg et al, 2002; Witherspoon et al, 2007; Hunley, Cabana, and Long, 2016), but this does not mean that the minimalist biological concept of race has no grounding in biology.)
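The within/between apportionment in that note can be sketched with a toy calculation, assuming made-up allele frequencies rather than real genotype data: when between-group frequency differences are modest, most of the variance falls within groups, as Lewontin (1972) reported.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical allele frequencies at 1000 biallelic loci for two populations.
# The groups differ only modestly (made-up numbers, not real data).
n_loci = 1000
base = rng.uniform(0.2, 0.8, n_loci)    # shared ancestral frequency
shift = rng.normal(0.0, 0.05, n_loci)   # small between-group divergence
p1 = np.clip(base + shift, 0.01, 0.99)
p2 = np.clip(base - shift, 0.01, 0.99)

# For a biallelic locus, the within-population variance of an allele count
# is 2*p*(1-p); the between-population variance is the variance of the two
# group means (2*p1 and 2*p2) around the grand mean.
within = np.mean(2 * p1 * (1 - p1) + 2 * p2 * (1 - p2)) / 2
between = np.mean((2 * p1 - 2 * p2) ** 2) / 4

total = within + between
print(f"within-group share:  {within / total:.2f}")
print(f"between-group share: {between / total:.2f}")
```

With these made-up numbers the within-group share comes out far above the between-group share, which is the pattern the cited papers describe; the point of the note is that this apportionment is compatible with minimalist races being real.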
Minimalist race exists
The concept of minimalist race is simple: a group of people share a peculiar geographic ancestry unique to them, a peculiar physiognomy (facial features such as lips, facial structure, eyes, and nose), other physical traits (such as hair and hair color), and a peculiar morphology. Minimalist races exist and are biologically real, since the minimalist concept survives the findings of population genetics. Hardimon (2017: 62) asks, “Is the minimalist concept of race a social concept?” He writes that a concept is socially constructed in a pernicious sense if and only if it “(i) fails to represent any fact of the matter and (ii) supports and legitimizes domination.” Of course, populations who derive from Africa, Europe, and East Asia have peculiar facial morphology and overall morphology unique to those once-isolated populations. Therefore we can say that minimalist race does not satisfy criterion (i). Hardimon (2017: 63) then writes:
Because it lacks the nasty features that make the racialist concept of race well suited to support and legitimize domination, the minimalist race concept fails to satisfy condition (ii). The racialist concept, on the other hand, is socially constructed in the pernicious sense. Since there are no racialist races, there are no facts of the matter it represents. So it satisfies (i). To elaborate, the racialist race concept legitimizes racial domination by representing the social hierarchy of race as “natural” (in a value-conferring sense): as the “natural” (socially unmediated and inevitable) expression of the talent and efforts of the individuals who stand on its rungs. It supports racial domination by conveying the idea that no alternative arrangement of social institutions could possibly result in racial equality and hence that attempts to engage in collective action in the hopes of ending the social hierarchy of race are futile. For these reasons the racialist race concept is also ideological in the pejorative sense.
Knowing what we know about minimalist races (they have distinct physiognomy, distinct morphology and geographic ancestry unique to that population), we can say that this is a biological phenomenon, since what makes minimalist races distinct from one another (skin color, hair color etc) are based on biological factors. We can say that brown skin, kinky hair and full lips, with sub-Saharan African ancestry, is African, while pale/light skin, straight/wavy/curly hair with thin lips, a narrow nose, and European ancestry makes the individual European.
These physical features between the races correspond to differences in geographic ancestry, and since they differ between the races on average, they are biological in nature and therefore it can be said that race is a biological phenomenon. Skin color, nose shape, hair type, morphology etc are all biological. So knowing that there is a biological basis to these physical differences between populations, we can say that minimalist races are biological, therefore we can use the term minimalist biological phenomenon of race, and it exists because there are differences in the patterns of visible physical features between human populations that correspond to geographic ancestry.
Hardimon then notes that eliminativist philosophers and others do not deny the premises underlying the minimalist biological phenomenon of race; they allow that these patterns exist. Hardimon (2017: 68-69) quotes a few prominent people who grant that there are, of course, differences in physical features between human populations:
… Lewontin … who denies that biological races exist, freely grants that “peoples who have occupied major geographic areas for much of the recent past look different from one another. Sub-Saharan Africans have dark skin and people who have lived in East Asia tend to have a light tan skin and an eye color and eye shape that is different from Europeans.” Similarly, population geneticist Marcus W. Feldman (final author of Rosenberg et al., “Genetic Structure of Human Populations” [2002]), who also denies the existence of biological races, acknowledges that “it has been known for centuries that certain physical features of humans are concentrated within families: hair, eye, and skin color, height, inability to digest milk, curliness of hair, and so on. These phenotypes also show obvious variation among people from different continents. Indeed, skin color, facial shape, and hair are examples of phenotypes whose variation among populations from different regions is noticeable.” In the same vein, eliminativist anthropologist C. Loring Brace concedes, “It is perfectly true that long term residents of various parts of the world have patterns of features that we can identify as characteristic of the area from which they come.”
So even these people who claim not to believe in “biological races” do believe in them in this sense: what they are describing is biological in nature, and they do not deny that people look different because their ancestors came from different places. We can then use the minimalist biological phenomenon of race to get to the existence of minimalist races.
Hardimon (2017: 69) writes:
Step 1. Recognize that there are differences in patterns of visible physical features of human beings that correspond to their differences in geographic ancestry.
Step 2. Observe that these patterns are exhibited by groups (that is, real existing groups).
Step 3. Note that the groups that exhibit these patterns of visible physical features that correspond to differences in geographical ancestry satisfy the conditions of the minimalist concept of race.
Step 4. Infer that minimalist race exists.
Those individuals mentioned previously who deny biological races, but allow that people whose ancestors came from differing geographic locales look different, do not disagree with step 1, nor does anyone really disagree with step 2. Step 4’s inference flows immediately from the premise in step 3. “Groups that exhibit patterns of visible physical features that correspond to differences in geographical ancestry satisfy the conditions of the minimalist concept of race. Call (1)-(4) the argument from the minimalist biological phenomenon of race” (Hardimon, 2017: 70). The argument does not identify which populations may be called races (see further below); it just shows that race is a biological reality. If minimalist races exist, then races exist, because minimalist races are races. Minimalist races exist; therefore biological races exist. Of course, no one doubts that people come from Europe, sub-Saharan Africa, East Asia, the Americas, and the Pacific Islands, even though the boundaries between them are ‘blurry’. These groups exhibit patterns of visible physical characters that correspond to their differing geographic ancestry; they are minimalist races, therefore minimalist races exist.
Pretty much, the minimalist concept of race is just laying out what everyone knows and arguing for its existence. Minimalist races exist, but are they biologically real?
Minimalist races are biologically real
Of course, some who assert that minimalist races do not exist would say that there are no ‘genes’ exclusive to one certain population (call them ‘race genes’). Such genes indeed do not exist. But whether an individual belongs to a given race does not rest on his physical characters alone; it is determined by who his parents are, because one of the three premises of the minimalist race argument is a peculiar geographic ancestry. So it is not that members of races share sets of genes that other races lack; it is that they share a distinctive set of visible physical features that corresponds with geographic ancestry. The minimalist concept of race can therefore be a biological concept without requiring ‘genes for’ race.
Of course, there is a biological significance to the existence of minimalist biological races. Consider that one of the physical characters that differ between populations is skin color. Skin color is controlled by a relatively small number of genes (about half a dozen within, and a dozen between, populations). For individuals with dark skin, lack of UV rays (and thus of vitamin D) can contribute to diseases like prostate cancer, while darker skin protects against UV damage to human skin (Brenner and Hearing, 2008; Jablonski and Chaplin, 2010). Since minimalist races are partly defined by differences in skin color between populations, skin color gives minimalist race both medical and ecological significance.
(1) Consider light skin. People with light skin are more susceptible to skin cancer, since they evolved in locations with low UVR (D’Orazio et al, 2013). The body needs vitamin D to absorb and use calcium for proper cell functioning. People who evolved near the equator do not have to worry about this, because the doses of UVB they absorb are sufficient to produce enough previtamin D. East Asians and Europeans, on the other hand, became adapted to low-sunlight locations and over time evolved lighter skin. This loss of pigmentation allowed for better UVB absorption in their new environments. (Also read my article on the evolution of human skin variation, and on how skin color is not a ‘tell’ of aggression in humans.)
(2) Darker-skinned people, meanwhile, have a lower rate of skin cancer, “primarily a result of photo-protection provided by increased epidermal melanin, which filters twice as much ultraviolet (UV) radiation as does that in the epidermis of Caucasians” (Bradford, 2009). Dark skin is thought to have evolved to protect against skin cancer (Greaves, 2014a), though this has been contested (Jablonski and Chaplin, 2014) and defended (Greaves, 2014b). So, from (1) and (2), skin color has evolutionary significance.
So as humans became physically adapted to the new niches they found themselves in, they developed features distinct from those of the populations they came from, to better cope with their new environments. For instance, “Northern Europeans tend to have light skin because they belong to a morphologically marked ancestral group—a minimalist race—that was subject to one set of environmental conditions (low UVR) in Europe” (Hardimon, 2017: 81). Explaining how human beings survived in new locations falls into the realm of biology, and minimalist races can help explain why this happened.
Minimalist races clearly exist since minimalist races constitute complex biological patterns between populations. Hardimon (2017: 83) writes:
It [minimalist race] also enjoys intrinsic scientific interest because it represents a distinctive salient systematic dimension of human biological diversity. To clarify: Minimalist race counts as (i) salient because human differences of color and shape are striking. Racial differences in color and shape are (ii) systematic in that they correspond to differences in geographic ancestry. They are not random. Racial differences are (iii) distinctive in that they are different from the sort of biological differences associated with the other two salient systematic dimensions of human diversity: sex and age.
[…]
An additional consideration: Like sex and age, minimalist race constitutes one member of what might be called “the triumvirate of human biodiversity.” An account of human biodiversity that failed to include any one of these three elements would be obviously incomplete. Minimalist race’s claim to be biologically real is as good as the claim of the other members of the triumvirate. Sex is biologically real. Age is biologically real. Minimalist race is biologically real.
Real does not mean deep. Compared to the biological differences associated with sex (sex as contrasted with gender), the biological differences associated with minimalist race are superficial.
Of course, the five ‘clusters’ identified by Rosenberg et al’s (2002) K=5 graph (the run in which structure was told to produce 5 genetic clusters) correspond to Africa, Eurasia, East Asia, Oceania, and the Americas, and they are great candidates for minimalist biological races, since they correspond to geographic locations and even corroborate what Johann Friedrich Blumenbach said about human races back in the 18th century. Hardimon (2017: 85-86) further writes:
If the five populations corresponding to the major areas are continental-level minimalist races, the clusters represent continental-level minimalist races: The cluster in the mostly orange segment represents the sub-Saharan African continental-level minimalist race. The cluster in the mostly blue segment represents the Eurasian continental-level minimalist race. The cluster in the mostly pink segment represents the East Asian continental-level minimalist race. The cluster in the mostly green segment represents the Pacific Islander continental-level minimalist race. And the cluster in the mostly purple segment represents the American continental-level minimalist race.
[…]
The assumption that the five populations are continental-level minimalist races entitles us to interpret structure as having the capacity to assign individuals to continental-level minimalist races on the basis of markers that track ancestry. In constructing clusters corresponding to the five continental-level minimalist races on the basis of objective, race-neutral genetic markers, structure essentially “reconstructs” those races on the basis of a race-blind procedure. Modulo our assumption, the article shows that it is possible to assign individuals to continental-level races without knowing anything about the race or ancestry of the individuals from whose genotypes the microsatellites are drawn. The populations studied were “defined by geography, language, and culture,” not skin color or “race.”

Of course, as critics note, the researchers predetermine how many populations structure demarcates; K=5, for instance, indicates that the researchers told the program to delineate 5 clusters. These objections do not matter, though, for the 5 populations that come out at K=5 “are genetically structured … which is to say, meaningfully demarcated solely on the basis of genetic markers” (Hardimon, 2017: 88). K=6 brings out one more population, the Kalash, a group from northern Pakistan who speak an Indo-European language. But “The fact that structure represents a population as genetically distinct does not entail that the population is a race. Nor is the idea that populations corresponding to the five major geographic areas are minimalist races undercut by the fact that structure picks out the Kalash as a genetically distinct group. Like the K=5 graph, the K=6 graph shows that modulo our assumption, continental-level races are genetically structured” (Hardimon, 2017: 88).

There are naysayers, of course. David Serre and Svante Paabo, Hardimon writes, state that when individuals are sampled from homogeneous populations around the world, the gradients of allele frequencies found are distributed randomly across the world rather than clustering discretely. Rosenberg et al responded by verifying that the clusters they found are not artifacts of sampling, as Serre and Paabo imply, but reflect features of underlying human variation, though they agree with Serre and Paabo that human genetic diversity consists of clines in allele frequencies (Hardimon, 2017: 89). Other naysayers state that all Rosenberg et al show is what we can “see with our eyes”. But the computer does not partition individuals into populations based on anything that can be seen with the eyes; it is based on an algorithm.
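The structure program (Pritchard et al, 2000) is a Bayesian admixture model, but the “race-blind” character of such a procedure can be illustrated with a much cruder sketch on synthetic data: give an unsupervised algorithm only genotypes, never population labels, and it recovers the hypothetical source populations anyway. This is a loose analogy with made-up allele frequencies, using PCA plus a split as a stand-in for structure’s actual model, not Rosenberg et al’s method or data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic genotypes: two hypothetical populations, 200 loci, with allele
# frequencies that differ between the groups (made-up numbers, not real data).
n_loci, n_per_pop = 200, 50
freq_a = rng.uniform(0.2, 0.8, n_loci)
freq_b = np.clip(freq_a + rng.normal(0, 0.25, n_loci), 0.05, 0.95)
X = np.vstack([
    rng.binomial(2, freq_a, (n_per_pop, n_loci)),
    rng.binomial(2, freq_b, (n_per_pop, n_loci)),
]).astype(float)
labels_true = np.array([0] * n_per_pop + [1] * n_per_pop)

# "Race-blind" procedure: the algorithm sees only genotypes, never labels.
# PCA finds the main axis of genetic variation; splitting along it mimics
# asking for K=2 groups.
Xc = X - X.mean(axis=0)
_, _, vt = np.linalg.svd(Xc, full_matrices=False)
pc1 = Xc @ vt[0]
assign = (pc1 > np.median(pc1)).astype(int)

# Agreement with the hidden population labels, up to label swapping.
acc = max(np.mean(assign == labels_true), np.mean(assign != labels_true))
print(f"agreement with true populations: {acc:.2f}")
```

Only the number of groups is supplied by the analyst; the population labels play no role in the partition, which is the sense in which the clustering is “race-neutral”.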
Hardimon also accepts that black Africans, Caucasians, East Asians, American Indians and Oceanians can be said to be races in the basic sense because “they constitute a partition of the human species“, and that they are distinguishable “at the level of the gene” (Hardimon, 2017: 93). And of course, K=5 shows that the 5 races are genetically distinguishable.
Hardimon finally discusses some medical significance of minimalist races. If you are Caucasian, it is more likely that you carry a polymorphism that protects against HIV than a member of another race. East Asians, meanwhile, are more likely to carry alleles that make them more susceptible to Stevens-Johnson syndrome and the related condition in which the skin blisters and peels off. The instances where this matters in a biomedical context are rare, but they should still be in the back of everyone’s mind (as I have argued): even though medically relevant differences between minimalist races are rare, there are times where one’s race can be medically significant.
Hardimon finally states that this type of “metaphysics of biological race” can be called “deflationary realism.” Deflationary because it “consists in the repudiation of the ideas that racialist races exist and that race enjoys the kind of biological reality that racialist race was supposed to have” and realism which “consists in its acknowledgement of the existence of minimalist races and the genetically grounded, relatively superficial, but still significant biological reality of minimalist race” (Hardimon, 2017: 95-96).
Conclusion
Minimalist races exist. Minimalist races are a biological reality, because distinct patterns of visible physical features differ between geographically separated populations. This is enough for the five classic races to be called races, to be biologically real, and to have a medical significance, however small, because certain biological and physical traits are tied to different geographic populations, that is, to minimalist races.
Hardimon (2017: 97) shows an alternative to racialism:
Deflationary realism provides a worked-out alternative to racialism—it is a theory that represents race as a genetically grounded, relatively superficial biological reality that is not normatively important in itself. Deflationary realism makes it possible to rethink race. It offers the promise of freeing ourselves, if only imperfectly, from the racialist background conception of race.
It is clear that minimalist races exist and are biologically real. You do not need to speak about supposed mental traits of these minimalist races; they are irrelevant to the existence of minimalist biological races. As Hardimon (2017: 67) writes: “No reference is made to normatively important features such as intelligence, sexuality, or morality. No reference is made to essences. The idea of sharp boundaries between patterns of visible physical features or corresponding geographical regions is not invoked. Nor again is reference made to the idea of significant genetic differences. No reference is made to groups that exhibit patterns of visible physical features that correspond to geographic ancestry.”
The minimalist biological concept of race stands up to numerous lines of argumentation, therefore we can say without a shadow of a doubt that minimalist biological race exists and is real.
Do pigmentation and the melanocortin system modulate aggression and sexuality in humans as they do in other animals? A Response to Rushton and Templer (2012)
2100 words
Rushton et al have kept me pretty busy over the last year or so. I’ve debunked many of their claims that rest on biology—such as testosterone causing crime and aggression. The last paper Rushton published before he died in October of 2012 was an article with Donald Templer—another psychologist—titled Do pigmentation and the melanocortin system modulate aggression and sexuality in humans as they do in other animals? (Rushton and Templer, 2012), and in it they make a surfeit of bold claims that do not follow. They review animal studies on skin and fur pigmentation which show that the darker an animal’s skin or fur, the more aggressive and violent it is likely to be. They then conclude, of course (it wouldn’t be a Rushton article without it), that the long-debunked r/K ‘continuum’ explains the co-variation between human populations in birth rate, longevity, violent crime, infant mortality, and rate of acquisition of HIV/AIDS.
In one of the very first articles I wrote on this site, I cited Rushton and Templer (2012) favorably (back when I had far less knowledge of biology and hormones). I was captured by my biases and did not know much about what was being discussed. As I learned more about biology and hormones over the years, I came to see that the claims in the paper are wrong and that the authors draw huge, sweeping conclusions from a few correlations. Either way, I have seen the error of my ways, and the biases that led me to those beliefs, and I now see how ridiculous some of the papers I cited in the past truly were.
Rushton and Templer (2012) start the paper by discussing Ducrest et al (2008), who state that within each species studied, darker-pigmented individuals exhibited higher rates of aggression, sexuality, and social dominance (which is caused by testosterone) than lighter-pigmented individuals of the same species. They attribute this to pleiotropy—when a single gene has two or more phenotypic effects. They then refer to Rushton and Jensen (2005) for the claim that low IQ is correlated with skin color (skin color doesn’t cause IQ, obviously).
They then state that across 40 vertebrate species, the darker-pigmented members had higher levels of aggression and sexual activity, along with larger body size, better stress resistance, and more physical activity while grooming (Ducrest, Keller, and Roulin, 2008). Rushton and Templer (2012) state that this relationship was ‘robust’ across numerous species: 36 species of birds, 4 species of fish, 3 species of mammals, and 4 species of reptiles.
Rushton and Templer (2012) then claim that “Validation of the pigmentation system as causal to the naturalistic observations was demonstrated by experimentally manipulating pharmacological dosages and by studies of cross-fostering”, citing Ducrest, Keller, and Roulin (2008). They even state that “Placing darker versus lighter pigmented individuals with adoptive parents of the opposite pigmentation did not modify offspring behavior.” Seems legit. It must mean that their pigmentation caused these differences. They then state something patently ridiculous: “The genes that control that balance occupy a high level in the hierarchical system of the genome.” Unfortunately for their hypothesis, there is no privileged level of causation (Noble, 2016; also see Noble, 2008), so this is a nonsense claim. Genes are not ‘blueprints’ or ‘recipes’ (Oyama, 1985; Schneider, 2007).
They then refer to Ducrest, Keller and Roulin (2008: 507) who write:
In this respect, it is important to note that variation in melanin-based coloration between human populations is primarily due to mutations at, for example, MC1R, TYR, MATP and SLC24A5 [29,30] and that human populations are therefore not expected to consistently exhibit the associations between melanin-based coloration and the physiological and behavioural traits reported in our study.
This quote, however, seems to be ignored by Rushton and Templer (2012) throughout the rest of their article: even though they briefly mention the paper’s caution about interpreting its data, they then brush it under the rug so as not to have to contend with it. Rushton and Templer (2012) next cite the famous silver fox study, in which foxes bred for tameness lost their dark fur, became lighter and, apparently, were less aggressive than their darker-pigmented kin. These animal studies are, in my opinion, useless for implicating skin color and the melanocortin system in the modulation of aggressive behavior in humans, so let’s see what they write about human studies.
It’s funny, because Rushton and Templer (2012) cite Ducrest, Keller, and Roulin (2008: 507) to show that caution is needed when assessing any so-called differences in the melanocortin system between human races. They then disregard that caution by writing: “A first examination of whether melanin based pigmentation plays a role in human aggression and sexuality (as seen in non-human animals), is to compare people of African descent with those of European descent and observe whether darker skinned individuals average higher levels of aggression and sexuality (with violent crime the main indicator of aggression).” This is a dumb comparison. Yes, African nations have more crime than European nations, but does this mean that skin color (or whatever modulates skin color and the melanocortin system) is the cause? No. Not at all.
There really isn’t much to discuss here, because they just run through how various African nations have higher levels of crime than European and East Asian nations, and how blacks report having more sex and feeling less guilty about it. Rushton and Templer (2012) state that one study “asked married couples how often they had sex each week. Pacific Islanders and Native Americans said from 1 to 4 times, US Whites answered 2–4 times, while Africans said 3 to over 10 times.” They then switch over to their ‘replication’ of this finding, using data from Alfred Kinsey (Rushton and Bogaert, 1988). Unfortunately for Rushton and Bogaert, there are massive problems with these data.
The Kinsey data can hardly be seen as representative (Zuckerman and Brody, 1988), and they are also outdated, non-representative, and non-random (Lynn, 1989). Rushton and Templer (2012) also discuss so-called differences in penis size between races. But I have written two response articles on the matter and shown that Rushton used shoddy sources, like the anonymous ‘French Army Surgeon’ who contradicts himself: “Similarly, while the French Army surgeon announces on p. 56 that he once discovered a 12-inch penis, an organ of that size becomes “far from rare” on p. 243. As one might presume from such a work, there is no indication of the statistical procedures used to compute averages, what terms such as “often” mean, how subjects were selected, how measurements were made, what the sample sizes were, etc” (Weizmann et al, 1990: 8).
Rushton and Templer (2012) invoke, of course, Rushton’s (1985; 1995) r/K selection theory as applied to human races. I have written numerous articles on r/K selection and attempts at reviving it, but it is long dead, especially as a way to describe human populations (Anderson, 1991; Graves, 2002). The theory was refuted in the late 70s (Graves, 2002), and replaced with age-specific mortality (Reznick et al, 2002). Some of his larger claims I will cover in the future (like how r/K relates to criminal activity), but he just goes through all of the same old motions he’s been going through for years, bringing nothing new to the table. In all honesty, testosterone is one of the pillars of Rushton’s r/K selection theory (e.g., Lynn, 1990; Rushton, 1997; Rushton, 1999; Hart, 2007; Ellis, 2017; extensive arguments against Ellis, 2017 can be found here). If testosterone doesn’t do what he believes it does and the levels of testosterone between the races are not as high as believed/non-existent (Gapstur et al, 2002; read my discussion of Gapstur et al 2002; Rohrmann et al, 2007; Richard et al, 2014. Though see Mazur, 2016 and read my interpretation of the paper) then we can safely disregard their claims.
Rushton and Templer (2012: 6) write:
Another is that Blacks have the most testosterone (Ellis &amp; Nyborg, 1992), which helps to explain their higher levels of athletic ability (Entine, 2000).
As I have said many times in the past, Ellis and Nyborg (1992) found a 3 percent difference in testosterone levels between white and black ex-military men. This is irrelevant. Rushton then cites John Entine’s (2000) book Taboo: Why Black Athletes Dominate Sports and Why We’re Afraid to Talk About It, but this adds nothing: Entine literally cites Rushton, who cites Ellis and Nyborg (1992) and Ross et al (1986) (stating that blacks have 3-19 percent higher levels of testosterone than whites, citing Ross et al’s 1986 uncorrected numbers). I have specifically pointed out numerous flaws in that analysis, so Ross et al (1986) cannot seriously be used as evidence for large testosterone differences between the races. And I have cited Fish (2013), who wrote of Ellis and Nyborg (1992):
“These uncorrected figures are, of course, not consistent with their racial r- and K-continuum.”
Rushton and Templer (2012) then state that testosterone acts like a ‘master switch’ (Rushton, 1999), implicating testosterone as a cause of aggression. But I’ve shown that this is not true: aggression causes testosterone production; testosterone does not cause aggression. Testosterone does control muscle mass, of course. Rushton also claimed that blacks have deeper voices due to higher levels of testosterone, but this claim does not hold up in newer studies.
Rushton and Templer (2012) then shift gears to discuss Templer and Arikawa’s (2006) study on the correlation between skin color and ‘IQ’. However, there is something important to note here from Razib:
we know the genetic architecture of pigmentation. that is, we know all the genes (~10, usually less than 6 in pairwise between population comparisons). skin color varies via a small number of large effect trait loci. in contrast, I.Q. varies by a huge number of small effect loci. so logically the correlation is obviously just a correlation. to give you an example, SLC45A2 explains 25-40% of the variance between africans and europeans.
long story short: it’s stupid to keep repeating the correlation between skin color and I.Q. as if it’s a novel genetic story. it’s not. i hope don’t have to keep repeating this for too many years.
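Razib’s point about genetic architecture can be illustrated with a toy simulation, assuming made-up allele frequencies and equal effect sizes (none of these numbers come from real GWAS data): when a trait is built from a handful of loci, the top locus explains a large share of the variance, but spread comparable genetic variance over a thousand loci and each locus explains almost nothing.

```python
import numpy as np

rng = np.random.default_rng(7)

def top_locus_r2(n_loci, rng, n_people=2000):
    """Simulate an additive trait from n_loci biallelic loci with equal
    effect sizes, and return the variance explained (R^2) by the single
    most predictive locus."""
    freqs = rng.uniform(0.1, 0.9, n_loci)
    geno = rng.binomial(2, freqs, (n_people, n_loci)).astype(float)
    effects = np.full(n_loci, 1.0 / np.sqrt(n_loci))  # equal, scaled effects
    trait = geno @ effects
    return max(np.corrcoef(geno[:, j], trait)[0, 1] ** 2 for j in range(n_loci))

# A pigmentation-like architecture: a handful of large-effect loci.
print(f"top-locus R^2, 5 loci:    {top_locus_r2(5, rng):.2f}")
# A polygenic, IQ-like architecture: the same variance over 1000 loci.
print(f"top-locus R^2, 1000 loci: {top_locus_r2(1000, rng):.4f}")
```

The contrast between the two printed values is the whole point: a between-population difference at one pigmentation locus tells you a lot about pigmentation and essentially nothing about a highly polygenic trait, so the skin-color correlation is, as Razib says, just a correlation.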
Rushton and Templer (2012: 7) conclude:
The melanocortin system is a physiological coordinator of pigmentation and life history traits. Skin color provides an important marker placing hormonal mediators such as testosterone in broader perspective.
I don’t have a problem with the claim that the melanocortin system is a physiological coordinator of pigmentation, because it is true, and we have a good understanding of the physiology of the melanocortin system (see Cone, 2006 for a review). EvolutionistX also has a great article reviewing some studies (mouse studies and others) suggesting that increasing melatonin appears to decrease melanin.
Rushton and Templer (2012) make huge assumptions not warranted by any data. For instance, Rushton states in his VDare article on the subject, J. Phillipe Rushton Says Color May Be More Than Skin Deep, “But what about humans? Despite all the evidence on color, aggression, and sexuality in animals, there has been little or no discussion of the relationship in people. Ducrest & Co. even warned that genetic mutations may make human populations not exhibit coloration effects as consistently as other species. But they provided no evidence.” All Rushton and Templer (2012) do in their article is restate known relationships between crime and race and then attempt to implicate the melanocortin system as a factor driving those relationships, on the basis of a slew of animal studies. Even then, the claim that Ducrest, Keller, and Roulin (2008: 507) provide no evidence for their warning is incorrect, because before stating it they wrote “In this respect, it is important to note that variation in melanin-based coloration between human populations is primarily due to mutations at, for example, MC1R, TYR, MATP and SLC24A5 [29,30]. . .” Melanin does not cause aggression, and it does not cause crime. Rushton and Templer simply assume too much on the basis of no human evidence, while their whole hypothesis is structured around a bunch of animal studies.
In conclusion, Rushton and Templer seem to know little about the physiology of the melanocortin system if they believe that pigmentation and the melanocortin system modulate aggression and sexual behavior in humans. I know of no evidence for these assertions—actual studies, not Rushton and Templer’s (2012) move of restating relationships with crime and then asserting that, because these relationships are seen in animals, the melanocortin system must modulate them in humans too. The fact that they think that restating relationships between crime and race, country of origin and race, and supposed correlations between testosterone and crime (with blacks supposedly having higher testosterone than whites), among other things, shows that the proposed relationships are caused by the melanocortin system and Life History Theory only shows their ignorance of human physiology.
Height, Longevity, and Aging
1700 words
Humans reach their maximum height at around their mid-20s. It is commonly thought that taller people have better life outcomes and are in general healthier, but this stems from misconceptions about the human body. In reality, shorter people live longer than taller people. (Manlets of the world should be rejoicing; in case anyone is wondering, I am 5’10”.) This flies in the face of what people think, and may be counter-intuitive to some, but the logic—and data—is sound. I will touch on mortality differences between tall and short people and, at the end, talk a bit about shrinking with age (and a study claiming there is little or no decrease in height—a study that is flawed because it relies on self-reports).
One reason the misconception persists that taller people live longer, healthier lives than shorter people is the correlation between height and IQ—people assume the two traits are ‘similar’ in that they become ‘stable’ at adulthood—but one way to explain that relationship is that IQ is correlated with height because higher-SES people can afford better food and are thus better nourished. Either way, it is a myth that taller people have lower rates of all-cause mortality.
The truth of the matter is this: smaller bodies live longer lives, and this is seen both in the animal kingdom and in humans—smaller body size independently reduces mortality (Samaras and Elrick, 2002). They discuss numerous lines of evidence—from human to animal studies—showing that smaller bodies have a lower chance of all-cause mortality. One of the proposed reasons is that larger bodies have more cells, which are then more exposed to carcinogens, leading to higher rates of cancer and thus higher mortality. Samaras (2012) also has another paper reviewing the implications, in which other causes are proposed for this observation: reduced cell damage, lower DNA damage, and lower cancer incidence, with hormonal differences between tall and short people explaining more of the variation between them.
One study found a positive linear correlation between height and cancer mortality. Lee et al (2009) write:
A positive linear association was observed between height and cancer mortality. For each standard deviation greater height, the risk of cancer was increased by 5% (2–8%) and 9% (5–14%) in men and women, respectively.
One study suggests that “variations in adult height (and, by implication, the genetic and other determinants of height) have pleiotropic effects on several major adult-onset diseases” (The Emerging Risk Factors Collaboration, 2012). Taller people are also at greater risk for heart attack (Samaras, 2013); shorter people, Samaras writes, benefit from factors “including reduced telomere shortening, lower atrial fibrillation, higher heart pumping efficiency, lower DNA damage, lower risk of blood clots, lower left ventricular hypertrophy and superior blood parameters.” Height, though, may be inversely associated with long-term incidence of fatal stroke (Goldbourt and Tanne, 2002). Schmidt et al (2014) conclude: “In conclusion, short stature was a risk factor for ischemic heart disease and premature death, but a protective factor for atrial fibrillation. Stature was not substantially associated with stroke or venous thromboembolism.” Cancer incidence also increases with height (Green et al, 2011). Samaras, Elrick, and Storms (2003) suggest that women live longer than men partly because of the height difference between the sexes, men being about 8 percent taller than women but having a 7.9 percent lower life expectancy at birth.
Height at mid-life, too, is a predictor of mortality, with shorter people living longer lives (He et al, 2014). Numerous lines of evidence show that shorter people—and people of shorter ethnies, too—live longer lives. One study on patients undergoing maintenance hemodialysis stated that “height was directly associated with all-cause mortality and with mortality due to cardiovascular events, cancer, and infection” (Daugirdas, 2015; Shapiro et al, 2015). Even childhood height is associated with prostate cancer acquisition (Aarestrup et al, 2015), and men who are both tall and carry more adipose tissue (body fat) are more likely to die younger, with greater height associated with a higher risk of acquiring prostate cancer (Perez-Cornago et al, 2017). On the other hand, short height is a risk factor for death in hemodialysis patients (Takenaka et al, 2010). Though there are conflicting papers regarding short height and CHD, many reviews show that shorter people have better health outcomes than taller people.
Sohn (2016) writes:
An additional inch increase in height is related to a hazard ratio of death from all causes that is 2.2% higher for men and 2.5% higher for women. The findings are robust to changing survival distributions, and further analyses indicate that the figures are lower bounds. This relationship is mainly driven by the positive relationship between height and development of cancer. An additional inch increase in height is related to a hazard ratio of death from malignant neoplasms that is 7.1% higher for men and 5.7% higher for women.
[…]
It has been widely observed that tall individuals live longer or die later than short ones even when age and other socioeconomic conditions are controlled for. Some researchers challenged this position, but their evidence was largely based on selective samples.
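Sohn’s per-inch figures compound over larger height differences. A minimal sketch of that arithmetic (my own toy example, assuming the per-inch hazard ratio compounds multiplicatively, as in a proportional-hazards model with a linear height term):

```python
# Toy illustration (my own arithmetic, not a figure from Sohn 2016):
# under a proportional-hazards model, a per-inch hazard ratio compounds
# multiplicatively across inches of height difference.

def cumulative_hazard_ratio(per_inch_hr: float, inches: float) -> float:
    """Hazard ratio implied by `inches` of extra height."""
    return per_inch_hr ** inches

# Sohn's reported per-inch all-cause figures: +2.2% (men), +2.5% (women).
men_5in = cumulative_hazard_ratio(1.022, 5)
women_5in = cumulative_hazard_ratio(1.025, 5)
print(f"5 extra inches, men:   HR ~ {men_5in:.3f}")
print(f"5 extra inches, women: HR ~ {women_5in:.3f}")
```

So five extra inches would imply an all-cause hazard ratio of roughly 1.11–1.13 under this simple compounding assumption.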
Four additional inches of height in post-menopausal women coincided with an increase in all types of cancer risk by 13 percent (Kabat et al, 2013), while taller people also have less efficient lungs (Leon et al, 1995; Smith et al, 2000). Samaras and Storms (1992) write “Men of height 175.3 cm or less lived an average of 4.95 years longer than those of height over 175.3 cm, while men of height 170.2 cm or less lived 7.46 years longer than those of at least 182.9 cm.”
Lastly, regarding height and mortality, Turchin et al (2012) write “We show that frequencies of alleles associated with increased height, both at known loci and genome wide, are systematically elevated in Northern Europeans compared with Southern Europeans.” This makes sense, because Southern European populations live longer (and have fewer maladies) than Northern European populations:
Compared with northern Europeans, shorter southern Europeans had substantially lower death rates from CHD and all causes.2 Greeks and Italians in Australia live about 4 years longer than the taller host population … (Samaras and Elrick, 2002)
So we have some data that doesn’t follow the trend of taller people living shorter lives due to the maladies their height brings, but most of the data points in the direction that taller people live shorter lives, with higher rates of cancer, lower heart-pumping efficiency (the heart needs to pump more blood through a bigger body), etc. It makes logical sense that a shorter body would have fewer maladies: higher heart-pumping efficiency, lower atrial fibrillation, lower DNA damage, and lower risk of blood clotting (duh) compared to taller people. So if you’re a normal American man and you want to live a good, long life, then you’d want to be shorter rather than taller.
Lastly, do we truly shrink as we age? Steve Hsu has an article on this matter, citing Birrell et al (2005), a longitudinal study in Newcastle, England which began in 1947. The children were measured when full height was expected to be achieved, at about 22 years of age, and were then followed up at age 50. Birrell et al (2005) write:
Height loss was reported by 57 study members (15%, median height loss: 2.5 cm), with nine reporting height loss of >3.5 cm. However, of the 24 subjects reporting height loss for whom true height loss from age 22 could be calculated, assuming equivalence of heights within 0.5 cm, 7 had gained height, 9 were unchanged and only 8 had lost height. There was a poor correlation between self-reported and true height loss (r=0.28) (Fig. 1).
In this population, self-reported height was off the mark, and it seems like Hsu takes this conclusion further than he should, writing “Apparently people don’t shrink quite as much with age as they think they do.” No no no. This study is not good. We begin shrinking at around age 30:
Men gradually lose an inch between the ages of 30 to 70, and women can lose about two inches. After the age of 80, it’s possible for both men and women to lose another inch.
The conclusion from Hsu on that study is not warranted. To see this, we can look at Sorkin, Muller, and Andres (1999) who write:
For both sexes, height loss began at about age 30 years and accelerated with increasing age. Cumulative height loss from age 30 to 70 years averaged about 3 cm for men and 5 cm for women; by age 80 years, it increased to 5 cm for men and 8 cm for women. This degree of height loss would account for an “artifactual” increase in body mass index of approximately 0.7 kg/m2 for men and 1.6 kg/m2 for women by age 70 years that increases to 1.4 and 2.6 kg/m2, respectively, by age 80 years.
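The “artifactual” BMI increase follows directly from the BMI formula: if weight stays constant while measured height shrinks, BMI (kg/m²) rises with no change in body composition. A minimal check with my own example numbers (an assumed 75 kg man of 1.75 m, not figures from Sorkin et al’s cohort):

```python
# Minimal sketch of the "artifactual BMI increase" idea. The weight and
# height below are assumed example values, not data from Sorkin et al.

def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: kg divided by height in metres squared."""
    return weight_kg / height_m ** 2

weight = 75.0        # assumed constant weight, kg
height_at_30 = 1.75  # assumed height at age 30, m
loss_by_70 = 0.03    # ~3 cm cumulative loss for men by 70 (Sorkin et al)

before = bmi(weight, height_at_30)
after = bmi(weight, height_at_30 - loss_by_70)
print(f"BMI at 30: {before:.2f}")
print(f"BMI at 70: {after:.2f}  (artifactual rise: {after - before:.2f})")
```

For these assumed numbers the rise is about 0.86 kg/m², the same ballpark as the ~0.7 kg/m² Sorkin, Muller, and Andres report for men at 70 (the exact figure depends on baseline height and weight).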
So it seems that Hsu’s conclusion is wrong. We do shrink with age, for myriad reasons: the discs between the vertebrae decompress and dehydrate, the aging spine becomes more curved due to loss of bone density, and loss of torso muscle can contribute to changed posture. Some of this is preventable, but some height decrease will be notable for most people. Either way, Hsu doesn’t know what he’s talking about here.
In conclusion, while there is some conflicting data on whether tall or short people have lower all-cause mortality, the data seem to point to shorter people living longer, since they have lower atrial fibrillation, higher heart-pumping efficiency, lower DNA damage, and lower risk of blood clots (since the blood doesn’t have to travel as far in shorter people), along with superior blood parameters, etc. With the exception of a few diseases, shorter people have a higher quality of life and higher lung efficiency. We do get shorter as we age—though with the right diet we can ameliorate some of those effects (for instance, keeping calcium intake high). There are many reasons why we shrink with age, and the study Hsu cited isn’t good compared to the other data in the literature on this phenomenon. All in all, shorter people live longer for myriad reasons, and we do shrink as we age, contrary to Steve Hsu’s claims.
Is There Really More Variation Within Races Than Between Them?
1500 words
In 1972 Richard Lewontin, studying the blood groups of different races, came to the conclusion that “Human racial classification is of no social value and is positively destructive of social and human relations. Since such racial classification is now seen to be of virtually no genetic or taxonomic significance either, no justification can be offered for its continuance” (pg 397). He also found that “the difference between populations within a race account for an additional 8.3 percent, so that only 6.3 percent is accounted for by racial classification.” This has led numerous people to conclude, along with Lewontin, that race is ‘of virtually no genetic or taxonomic significance’ and that, because of this, race does not exist.
Lewontin’s main reasoning was that since there is more variation within races than between them (85 percent of differences were within populations while 15 percent were between them), and since the lion’s share of human diversity is distributed within races, not between them, race is of no genetic or taxonomic use. Lewontin is correct that there is more variation within races than between them, but he is incorrect that this means racial classification ‘is of no social value’, since knowing and understanding the reality of race (even our perceptions of it, whether true or not) influences things such as medical outcomes.
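The within/between apportionment Lewontin computed can be illustrated at a single biallelic locus using expected heterozygosity, in the style of Wright’s F-statistics. This is a toy sketch with assumed allele frequencies, not Lewontin’s blood-group data:

```python
# Toy apportionment of diversity within vs. between two populations at
# one biallelic locus (assumed frequencies, not Lewontin's data):
#   H_S = mean within-population expected heterozygosity
#   H_T = expected heterozygosity of the pooled population
#   F_ST = (H_T - H_S) / H_T  -> the between-population share

def heterozygosity(p: float) -> float:
    """Expected heterozygosity at allele frequency p."""
    return 2 * p * (1 - p)

def apportion(freqs: list[float]) -> tuple[float, float]:
    """Return (within share, between share) of total diversity."""
    h_s = sum(heterozygosity(p) for p in freqs) / len(freqs)
    h_t = heterozygosity(sum(freqs) / len(freqs))
    f_st = (h_t - h_s) / h_t
    return 1 - f_st, f_st

# Two populations whose allele frequencies differ moderately:
within, between = apportion([0.2, 0.6])
print(f"within: {within:.1%}, between: {between:.1%}")
```

For these assumed frequencies (0.2 vs 0.6) the split comes out to roughly 83% within and 17% between, close to Lewontin’s 85/15; smaller frequency differences push the between share down toward the 3–5 percent Rosenberg et al report.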
Like Lewontin, people have cited this paper as evidence against the existence of human races: if there is far more genetic variation within races than between them, then race cannot be of any significance for things such as medical outcomes, since most genetic variation is within races, not between them.
Rosenberg et al (2002) also confirmed and replicated Lewontin’s analysis, showing that within-population genetic variation accounts for 93-95 percent of human genetic variation, while 3 to 5 percent of human genetic variation lies between groups. Philosopher Michael Hardimon (2017) uses these arguments to buttress his point that ‘racialist races’ (as he calls them) do not exist. His criteria being:
(a) The fraction of human genetic diversity between populations must exceed the fraction of diversity within them.
(b) The fraction of human genetic diversity within populations must be small.
(c) The fraction of diversity between populations must be large.
(d) Most genes must be highly differentiated by race.
(e) The variation in genes that underlie obvious physical differences must be typical of the genome in general.
(f) There must be several important genetic differences between races apart from the genetic differences that underlie obvious physical differences.
Note: (b) says that racialist races are genetically racially homogeneous groups; (c)-(f) say that racialist races are distinguished by major biological differences.
Call (a)-(f) the racialist concept of race’s genetic profile. (Hardimon, 2017: 21)
He clearly strawmans the racialist position, but I’ll get into that another day. Hardimon writes about how both of these studies lend credence to his above argument on racialist races (pg 24):
Rosenberg and colleagues also confirm Lewontin’s findings that most genes are not highly differentiated by race and that the variation in genes that underlie obvious physical differences is not typical of the variation of the genome in general. They also suggest that it is not the case that there are many important genetic differences between races apart from the genetic differences that underlie the obvious physical differences. These considerations further buttress the case against the existence of racialist races.
[…]
The results of Lewontin’s 1972 study and Rosenberg and colleagues’ 2002 study strongly suggest that it is extremely unlikely that there are many important genetic differences between races apart from the genetic differences that underlie the obvious physical differences.
(Hardimon also writes on page 124 that Rosenberg et al’s 2002 study could also be used as evidence for his populationist concept of race, which I will return to in the future.)
Though, my reasoning for writing this article is to show that the findings by Lewontin and Rosenberg et al regarding more variation within races than between them are indeed true, despite claims to the contrary. There is one article, though, that people cite as evidence against the conclusions by Lewontin and Rosenberg et al, though it’s clear that they only read the abstract and not the full paper.
Witherspoon et al (2007) write that “sufficient genetic data can permit accurate classification of individuals into populations“, which is what the individuals who cite this study as evidence for their contention mean, though they conclude (emphasis mine):
The fact that, given enough genetic data, individuals can be correctly assigned to their populations of origin is compatible with the observation that most human genetic variation is found within populations, not between them. It is also compatible with our finding that, even when the most distinct populations are considered and hundreds of loci are used, individuals are frequently more similar to members of other populations than to members of their own population. Thus, caution should be used when using geographic or genetic ancestry to make inferences about individual phenotypes.
Witherspoon et al (2007) analyzed the three classical races (Europeans, Africans, and East Asians) over thousands of loci and came to the conclusion that, when genetic similarity is measured over thousands of loci, the answer to the question “How often is a pair of individuals from one population genetically more dissimilar than two individuals chosen from two different populations?” is “never“.
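Witherspoon et al’s point can be reproduced in miniature. The toy simulation below is my own (assumed population sizes, per-locus allele-frequency difference of 0.1, and a simple count distance, not their dataset or exact ω implementation): it estimates how often a between-population pair is more similar than a within-population pair, and shows the fraction shrinking as loci are added.

```python
# Toy version of Witherspoon et al's "dissimilarity fraction" (omega):
# with few loci, cross-population pairs are often more similar than
# within-population pairs; with many loci, this becomes rare.
# All parameters here are assumptions for illustration.
import random

random.seed(0)

def simulate(n_loci, n_per_pop=30, diff=0.1):
    """Diploid genotype counts for two populations whose per-locus
    allele frequencies differ by `diff`."""
    p1 = [random.uniform(0.2, 0.8) for _ in range(n_loci)]
    p2 = [p + random.choice([-diff, diff]) for p in p1]
    def person(ps):
        return [sum(random.random() < p for _ in range(2)) for p in ps]
    return ([person(p1) for _ in range(n_per_pop)],
            [person(p2) for _ in range(n_per_pop)])

def dist(a, b):
    """Manhattan distance between two genotype vectors."""
    return sum(abs(x - y) for x, y in zip(a, b))

def omega(pop_a, pop_b):
    """Fraction of comparisons in which a between-population pair is
    more similar than a within-population pair."""
    within = [dist(x, y) for pop in (pop_a, pop_b)
              for i, x in enumerate(pop) for y in pop[i + 1:]]
    between = [dist(x, y) for x in pop_a for y in pop_b]
    hits = sum(b < w for w in within for b in between)
    return hits / (len(within) * len(between))

for n_loci in (10, 100, 1000):
    a, b = simulate(n_loci)
    print(n_loci, round(omega(a, b), 3))
```

With a handful of loci the fraction sits near one half (individuals are frequently closer to the other population), and it falls steadily as loci accumulate, which is exactly how accurate classification coexists with mostly-within-population variation.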
Hunley, Cabana, and Long (2016: 7) also confirm Lewontin’s analysis, writing “In sum, we concur with Lewontin’s conclusion that Western-based racial classifications have no taxonomic significance, and we hope that this research, which takes into account our current understanding of the structure of human diversity, places his seminal finding on firmer evolutionary footing.” But the claim that “racial classifications have no taxonomic significance” is FALSE.
This is a point that Edwards (2003) rebutted in depth. While he agreed with Lewontin’s (1972) analysis that there was more variation within races than between them (which was confirmed through subsequent analysis), he strongly disagreed with Lewontin’s conclusion that race is of no taxonomic significance. Richard Dawkins, too, disagreed with Lewontin, writing in his book The Ancestor’s Tale: “Most of the variation among humans can be found within races as well as between them. Only a small admixture of extra variation distinguishes races from each other. That is all correct. What is not correct is the inference that race is therefore a meaningless concept.” The fact that there is more variation within races than between them is irrelevant to taxonomic classification, and classifying races by phenotypic differences (morphology and facial features) along with geographic ancestry shows, just by looking at average phenotypes, that race exists, though these concepts make no value-based judgments about anything you can’t ‘see’, such as mental and personality differences between populations.
Though while some agree with Edwards’ analysis of Lewontin’s argument about race’s taxonomic significance, they don’t believe that he successfully refuted Lewontin. For instance, Hardimon (2017: 22-23) writes that Lewontin’s argument against—what Hardimon (2017) calls ‘racialist race’ (his strawman quoted above)—the existence of race because the within-race component of genetic variation is greater than the genetic variation between races “is untouched by Edwards’ objections.”
Though Sesardic (2010: 152) argues that “Therefore, contra Lewontin, the racial classification that is based on a number of genetic differences between populations may well be extremely reliable and robust, despite the fact that any single of those genetic between-population differences remains, in itself, a very poor predictor of racial membership.” He also states that the 7 to 10 percent difference between populations “actually refers to the inter-racial portion of variation that is averaged over the separate contributions of a number of individual genetic indicators that were sampled in different studies” (pg 150).
I personally avoid all of this talk about genes/allele frequencies between populations and jump straight to using Hardimon’s minimalist race concept—a concept that, according to Hardimon, is “stripped down to its barest bones” since it captures enough of the racialist concept of race to be considered a race concept.
In sum, variation within races is greater than variation between races, but this does not mean anything for the reality of race since race can still be delineated based on peculiar physical features and peculiar geographic ancestry to that group. Using a few indicators (morphology, facial features such as nose, lips, cheekbones, facial structure, and hair along with geographic ancestry), we can group races based on these criteria and we can show that race does indeed exist in a physical—not social—sense and that these categories are meaningful in a medical context (Hardimon, 2013, 2017). So even though genetic variation is greater within races than between them, this does not mean that there is no taxonomic significance to race, as other authors have argued. Hardimon (2017: 23) agrees, writing (emphasis his) “… Lewontin’s data do not preclude the possibility that racial classification might have taxonomic significance, but they do preclude the possibility that racialist races exist.”
Hardimon’s strawman of the racialist concept notwithstanding (which I will cover in the future), his other three race concepts (minimalist, populationist and socialrace concepts) are logically sound and stand up to a lot of criticism. Either way, race does exist, and it does not matter if the apportionment of human genetic diversity is greatest within races than between them.
Don’t Fall for Facial ‘Reconstructions’
1400 words
Back in April of last year, I wrote an article on the problems with facial ‘reconstructions’ and why, for instance, Mitochondrial Eve probably didn’t look like that. Recently, ‘reconstructions’ of Nariokotome boy and of Neanderthals have appeared. The ‘reconstructors’, of course, have no idea what the soft tissue of said individual looked like, so they must infer and use guesswork for parts of the phenotype when they do these ‘reconstructions’.
My reason for writing this is the ‘reconstruction’ of Nefertiti. I have seen alt-righters proclaim ‘The Ancient Egyptians were white!’ while blacks stated ‘Why are they whitewashing our history!’ Both of these claims are dumb, and they’re also wrong. Then you have articles—purely driven by ideology—that proclaim ‘Facial Reconstruction Reveals Queen Nefertiti Was White!‘
This article is garbage. It first claims that King Tut’s DNA came back as similar to that of 70 percent of Western European men. There are big problems with this claim: 1) the company IGENEA inferred his Y chromosome from a TV special; the data were not available for analysis; 2) haplogroup does not equal race. This is very simple.
Now that the White race has decisively reclaimed the Ancient Egyptians
The white race has never ‘claimed’ the Ancient Egyptians; this is just like the Arthur Kemp fantasy that the Ancient Egyptians were Nordic and that any and all civilizations throughout history were started and maintained by whites, and that the causes of the falls of these civilizations were due to racial mixing etc etc. These fantasies have no basis in reality, and, now, we will have to deal with people pushing these facial ‘reconstructions’ that are largely just ‘art’, and don’t actually show us what the individual in question used to look like (more on this below).
Stephan (2003) goes through the four primary fallacies of facial reconstruction. Fallacy 1) that we can predict soft tissue from the skull and thereby create recognizable faces. This is highly flawed. Soft-tissue fossilization is rare—rare enough to be irrelevant, especially when discussing what ancient humans used to look like. For these purposes—and perhaps this is the most important criticism of ‘reconstructions’—any and all soft-tissue features you see on these ‘reconstructions’ are largely guesswork and artistic flair from the ‘reconstructor’. Facial ‘reconstructions’ are mostly art: the ‘reconstructor’ has to make a ton of leaps and assumptions while creating his sculpture, because he does not have the relevant information to ensure it is truly accurate, which is a large blow to facial ‘reconstructions’.
And, perhaps most importantly for people who push ‘reconstructions’ of ancient hominin: “The decomposition of the soft tissue parts of paleoanthropological beings makes it impossible for the detail of their actual soft tissue face morphology and variability to be known, as well as the variability of the relationship between the hard and the soft tissue.” and “Hence any facial “reconstructions” of earlier hominids are likely to be misleading [4].”
As an example for the inaccuracy of these ‘reconstructions’, see this image from Wikipedia:

The left is the ‘reconstruction’ while the right is how the woman looked. She had distinct lips which could not be recreated because, again, soft tissue is missing.
2) That faces are ‘reconstructed’ from skulls. This fallacy follows directly from fallacy 1—that ‘reconstructors’ can accurately predict what the former soft tissue looked like. Faces are not ‘reconstructed’ from skulls; it’s largely guesswork. Stephan notes that people who see and hear about facial ‘reconstructions’ say things like “wow, you have to be pretty smart/knowledgeable to be able to do such a complex task”, and he suggests that facial ‘approximation’ may be a better term, since it doesn’t imply that the face was ‘reconstructed’ from the skull.
3) That this discipline is ‘credible’ because it is ‘partly science’. Stephan argues that calling it a science is ‘misleading’, writing (pg 196): “The fact that several of the commonly used subjective guidelines when scientifically evaluated have been found to be inaccurate, … strongly emphasizes the point that traditional facial approximation methods are not scientific, for if they were scientific and their error known previously surely these methods would have been abandoned or improved upon.”
And finally, 4) that we know ‘reconstructions’ work because they have been successful in forensic investigations. This is not a strong claim, because other factors could influence the discovery, such as media coverage, chance, or ‘contextual information’. So these forensics cases cannot be pointed to when arguing for the utility of facial ‘reconstructions’. There also seems to be a lot of publication bias in this literature, with many practitioners not publishing cases in which, for instance, the ‘face’ produced did not match the individual in question. It is largely guesswork. “The inconsistency in reports combined with confounding factors influencing casework success suggest that much caution should be employed when gauging facial approximation success based on reported practitioner success and the success of individual forensic cases” (Stephan, 2003: 196).
So: 1) the main point here is that soft-tissue work is ‘just a guess’, and the prediction methods used to guess the soft tissue have not been tested; 2) faces are not ‘reconstructed’ from skulls; 3) it is hardly ‘science’, and more a form of art, given the guesses and large assumptions poured into the ‘technique’; 4) ‘reconstructions’ don’t ‘work’ because they help us ‘find’ people—there is a lot more going on in those cases, and such identifications were probably due to chance. Hayes (2015) also writes: “Their actual ability to meaningfully represent either an individual or a museum collection is questionable, as facial reconstructions created for display and published within academic journals show an enduring preference for applying invalidated methods.”
Stephan and Henneberg (2001) write: “It is concluded that it is rare for facial approximations to be sufficiently accurate to allow identification of a target individual above chance. Since 403 incorrect identifications were made out of 592 identification scenarios, facial approximation should be considered to be a highly inaccurate and unreliable forensic technique. These results suggest that facial approximations are not very useful in excluding individuals to whom skeletal remains may not belong.”
Wilkinson (2010) largely agrees, but states that ‘artistic interpretation’ should be used only when “particularly for the morphology of the ears and mouth, and with the skin for an ageing adult” but that “The greatest accuracy is possible when information is available from preserved soft tissue, from a portrait, or from a pathological condition or healed injury.” But she also writes: “… the laboratory studies of the Manchester method suggest that facial reconstruction can reproduce a sufficient likeness to allow recognition by a close friend or family member.”
So to sum up: 1) There is insufficient data on tissue thickness. This becomes guesswork and, of course, is up to artistic ‘interpretation’, and is thus subjective to whichever artist does the ‘reconstruction’. Cartilage, skin, and fat do not fossilize (except in very rare cases, and I am not aware of any human cases). 2) There is a lack of methodological standardization. There is no single method for ‘guesstimating’ things like tissue thickness and other soft tissue that does not fossilize. 3) They are very subjective! If the artist has any preconceived idea of what the individual ‘may have’ looked like, his presuppositions may go from his head into his ‘reconstruction’, thus biasing it toward a look he believes is true. I think this is the case for Mitochondrial Eve; just because she lived in Africa doesn’t mean she looked similar to any modern Africans alive today.
I would make the claim that these ‘reconstructions’ are not science, they’re just the artwork of people who have assumptions of what people used to look like (for instance, with Nefertiti) and they take their assumptions and make them part of their artwork, their ‘reconstruction’. So if you are going to view the special that will be on tomorrow night, keep in the back of your mind that the ‘reconstruction’ has tons of unvalidated assumptions thrown into it. So, no, Nefertiti wasn’t ‘white’ and Nefertiti wasn’t ‘white washed’; since these ‘methods’ are highly flawed and highly subjective, we should not state that “This is what Nefertiti used to look like”, because it probably is very, very far from the truth. Do not fall for facial ‘reconstructions’.
I Am Not A Phrenologist
1500 words
People seem to be confused about the definition of the term 'phrenology'. Many people think that merely measuring skulls counts as 'phrenology'. This is a very confused view to hold.
Phrenology is the study of the shape and size of the skull, drawing conclusions about one's character from bumps on the skull (Simpson, 2005) and from differently sized areas of the brain relative to others. Franz Gall—the father of phrenology—believed that by measuring one's skull and the bumps on it, he could make accurate predictions about a person's character and psychology. Gall also proposed a theory of mind and brain (Eling, Finger, and Whitaker, 2017). The usefulness of phrenology aside, Gall contributed significantly to our understanding of the brain, being a neuroanatomist and physiologist.
Gall’s views on the brain can be seen here (read this letter where he espouses his views here):
1. The brain is the organ of the mind.
2. The mind is composed of multiple, distinct, innate faculties.
3. Because they are distinct, each faculty must have a separate seat or “organ” in the brain.
4. The size of an organ, other things being equal, is a measure of its power.
5. The shape of the brain is determined by the development of the various organs.
6. As the skull takes its shape from the brain, the surface of the skull can be read as an accurate index of psychological aptitudes and tendencies.
Gall's work, though, was integral to our understanding of the brain, and he was a pioneer in studying its inner workings. Phrenologists 'phrenologized' by running the tips of their fingers or their hands along the top of one's head (Gall liked using his palms). Here is an account of one individual reminiscing on this (around 1870):
The fellow proceeded to measure my head from the forehead to the back, and from one ear to the other, and then he pressed his hands upon the protuberances carefully and called them by name. He felt my pulse, looked carefully at my complexion and defined it, and then retired to make his calculations in order to reveal my destiny. I awaited his return with some anxiety, for I really attached some importance to what his statement would be; for I had been told that he had great success in that sort of work and that his conclusion would be valuable to me. Directly he returned with a piece of paper in his hand, and his statement was short. It was to the effect that my head was of the tenth magnitude with phyloprogenitiveness morbidly developed; that the essential faculties of mentality were singularly deficient; that my contour antagonized all the established rules of phrenology, and that upon the whole I was better adapted to the quietude of rural life rather than to the habit of letters. Then the boys clapped their hands and laughed lustily, but there was nothing of laughter in it for me. In fact, I took seriously what Rutherford had said and thought the fellow meant it all. He showed me a phrenological bust, with the faculties all located and labeled, representing a perfect human head, and mine did not look like that one. I had never dreamed that the size or shape of the head had anything to do with a boy’s endowments or his ability to accomplish results, to say nothing of his quality and texture of brain matter. I went to my shack rather dejected. I took a small hand- mirror and looked carefully at my head, ran my hands over it and realized that it did not resemble, in any sense, the bust that I had observed. The more I thought of the affair the worse I felt. If my head was defective there was no remedy, and what could I do? 
The next day I quietly went to the library and carefully looked at the heads of pictures of Webster, Clay, Calhoun, Napoleon, Alexander Stephens and various other great men. Their pictures were all there in histories.
This—what I would call skull/brain-size fetishizing—is still evident today, with people thinking that raw size matters (Rushton and Ankney, 2007; Rushton and Ankney, 2009) for cognitive ability, though I have compiled numerous data showing that we can have smaller brains and IQs in the normal range, implying that large brains are not needed for high IQs (Skoyles, 1999). It is also one of Deacon's (1990) fallacies, the "bigger-is-smarter" fallacy. Just because you observe skull sizes, brain size differences, structural brain differences, etc., does not mean you're a phrenologist. You're making simple, empirically verifiable claims, unlike some of the outrageous claims made by phrenologists.
What did they get right? Well, phrenologists stated that the most-used parts of the brain would become bigger, which was vindicated by modern research—specifically in London cab drivers (Maguire, Frackowiak, and Frith, 1997; Woollett and Maguire, 2011).
It seems that phrenologists got a few things right, but their theories were largely wrong. Still, those who bash the 'science' of phrenology should realize that it was one of the first brain 'sciences', so I believe phrenology deserves at least some respect, since it furthered our understanding of the brain and some phrenologists were kind of right.
People see the avatar I use, which is three skulls, one Mongoloid, one Negroid, and one Caucasoid, and then automatically make the leap that I'm a phrenologist based just on that picture. To these people, even stating that races/individuals/ethnies have different skull and brain sizes counts as phrenology. No, it isn't. Words have definitions. Just because you observe size differences between the brains of, say, individuals or ethnies doesn't mean that you're making any value judgments on the character or mental aptitude of those individuals based on the size of their skull/brain. Likewise, noting structural differences between brains, like saying "the PFC is larger here but the OFC is larger in this brain than in that brain", makes no claim about character, and if that's what you take from the mere statement that individuals and groups have different-sized skulls, brains, and parts of the brain, then I don't know what to tell you. Stating that one brain weighs more than another, say one is 1200 g and another is 1400 g, is not phrenology. Stating that one brain is 1450 cc while another is 1000 cc is not phrenology. For it to be phrenology I would have to outright state that differences in the size of certain areas of the brain, or of brains as a whole, cause differences in character or mental faculties. I am not saying that.
A team of neuroscientists recently (as in last month, January 2018) tested, in the "most exhaustive way possible", the claim from phrenological 'research' "that measuring the contour of the head provides a reliable method for inferring mental capacities" and concluded that there was "no evidence for this claim" (Jones, Alfaro-Almagro, and Jbabdi, 2018). That settles it. The 'science' is dead.
It's so simple: you notice physical differences in brain size between two corpses; one's PFC was bigger than his OFC, while the other's OFC was bigger than his PFC. That's it. By this logic, neuroanatomists would be considered phrenologists today, since they note size differences between individual parts of brains. Just noting these differences makes no judgment about potential between brains of different sizes, overall size, bumps, etc.
It is ridiculous to accuse someone of being a 'phrenologist' in 2018. And while the study of skull/brain sizes back in the 19th century did pave the way for modern neuroscience, and while phrenologists did get a few things right, they were largely wrong. No, you cannot read one's character from feeling the bumps on their skull. I understand the logic, and back then it would have made a lot of sense. But noticing empirically verifiable physical differences does not make one a phrenologist or a pusher of phrenology.
In sum, studying physical differences is interesting and tells us a lot about our past and maybe even our future. Stating that one is a phrenologist because they observe and accept physical differences in the size of the brain, skull, and neuroanatomic regions is like saying that physical anthropologists and forensic scientists are phrenologists because they measure people's skulls to ascertain facts about them, such as details of their medical history. Chastising someone who tells you that one brain is a different size than another, by calling them outdated names in an attempt to discredit them, doesn't make sense. It seems that some people cannot accept physical differences that are measurable again and again, because they may go against some long-held belief.
Responding to Jared Taylor on the Raven Progressive Matrices Test
2950 words
I was on Warski Live the other night and had an extremely short back-and-forth with Jared Taylor. I'm happy I got the chance to briefly discuss things with him, but I got kicked out about 20 minutes after arriving. Taylor made all of the same old claims, and since everyone kept talking I couldn't really get a word in.
A Conversation with Jared Taylor
I first stated that Jared got me into race realism and that I respected him. He said that once you see the reality of race then history etc becomes clearer.
To cut through everything, I first stated that I don't believe there is any utility to IQ tests, and that a lot of people believe people have surfeits of 'good genes' and 'bad genes' that carry 'positive' and 'negative' charges. IQ tests are useless, and people 'fetishize' them. He then responded that IQ is one of, if not the, most studied traits in psychology, to which JF asked me if I contested that statement, and I responded 'no' (behavioral geneticists need work too, ya know!). He then talked about how IQ 'predicts' success in life, e.g., success in college.
Then, a bit after I stated that, they painted me as a leftist because of my views on IQ. Well, I'm far right (not that my politics matter to my views on scientific questions), and they made it seem like I meant that Jared fetishized IQ, when I said 'most people'.
Then Jared gave a quick rundown of the same old and tired talking points about how IQ is related to crime, success, etc. I then asked him if there was a definition of intelligence and whether or not there was a consensus in the psychological community on the matter.
I quoted this excerpt from Ken Richardson’s 2002 paper What IQ Tests Test where he writes:
Of the 25 attributes of intelligence mentioned, only 3 were mentioned by 25 per cent or more of respondents (half of the respondents mentioned 'higher level components'; 25 per cent mentioned 'executive processes'; and 29 per cent mentioned 'that which is valued by culture'). Over a third of the attributes were mentioned by less than 10 per cent of respondents (only 8 per cent of the 1986 respondents mentioned 'ability to learn').
Jared then stated:
“Well, there certainly are differing ideas as to what are the differing components of intelligence. The word “intelligence” on the other hand exists in every known language. It describes something that human beings intuitively understand. I think if you were to try to describe sex appeal—what is it that makes a woman appealing sexually—not everyone would agree. But most men would agree that there is such a thing as sex appeal. And likewise in the case of intelligence, to me intelligence is an ability to look at the facts in a situation and draw the right conclusions. That to me is one of the key concepts of intelligence. It’s not necessarily “the capacity to learn”—people can memorize without being particularly intelligent. It’s not necessarily creativity. There could be creative people who are not necessarily high in IQ.
I would certainly agree that there is no universally accepted definition for intelligence, and yet, we all instinctively understand that some people are better able to see to the essence of a problem, to find correct solutions to problems. We all understand this and we all experience this in our daily lives. When we were in class in school, there were children who were smarter than other children. None of this is particularly difficult to understand at an intuitive level, and I believe that by somehow saying because it’s impossible to come up with a definition that everyone will accept, there is no such thing as intelligence, that’s like saying “Because there may be no agreement on the number of races, that there is no such thing as race.” This is an attempt to completely sidetrack a question—that I believe—comes from dishonest motives.”
("… comes from dishonest motives": an appeal to motive. One can make that claim about anyone, for any reason; no matter the reason, it's fallacious. On 'ability to learn', see below.)
Now here is the fun part: I asked him “How do IQ tests test intelligence?” He then began talking about the Raven (as expected):
“There are now culture-free tests, the best-known of which is Raven’s Progressive Matrices, and this involves recognizing patterns and trying to figure out what is the next step in a pattern. This is a test that doesn’t require any language at all. You can show an initial simple example, the first square you have one dot, the next square you have two dots, what would be in the third square? You’d have a choice between 3 dots, 5 dots, 20 dots, well the next step is going to be 3 dots. You can explain what the initial patterns are to someone who doesn’t even speak English, and then ask them to go ahead and complete the succeeding problems that are more difficult. No language involved at all, and this is something that correlates very, very tightly with more traditional, verbally based, IQ tests. Again, this is an attempt to measure capacity that we all inherently recognize as existing, even though we may not be able to define it to everyone’s mutual satisfaction, but one that is definitely there.
Ultimately, we will be able to measure intelligence through direct assessment of the brain, that it will be possible to do through genetic analysis. We are beginning to discover the gene patterns associated with high intelligence. Already there have been patent applications for IQ tests based on genetic analysis. We really aren’t at the point where spitting in a cup and analyzing the DNA you can tell that this guy has a 140 IQ, this guy’s 105 IQ. But we will eventually get there. At the same time there are aspects of the brain that can be analyzed, repeatedly, with which the signals are transmitted from one part of the brain to the other, the density of grey matter, the efficiency with which white matter communicates between the different grey matter areas of the brain.
I’m quite confident that there will come a time where you can just strap on a set of electrodes and have someone think about something—or even not think about anything at all—and we will be able to assess the power of the brain directly through physical assessment. People are welcome to imagine that this is impossible, or be skeptical about that, but I think we’re definitely moving in that direction. And when the day comes—when we really have discovered a large number of the genetic patterns that are associated with high intelligence, and there will be many of them because the brain is the most complicated organ in the human body, and a very substantial part of the human genome goes into constructing the brain. When we have gotten to the bottom of this mystery, I would bet the next dozen mortgage payments that those patterns—alleles as they’re called, genetic patterns—that are associated with high intelligence will not be found to be equally distributed between people of all races.”
Then immediately after that, the conversation changed. I will respond in points:
1) First off, as I'm sure most long-time readers know, I'm not a leftist, and the implication (as I took it) that I am one because I contest the utility of IQ is kind of insulting. I'm not a leftist, nor have I ever been one.
2) On his points about definitions of 'intelligence': The point is to come to a scientific consensus on how to define the word and the right way to study it, and then to think about the implications of the trait in question after empirically verifying its reality. That's one reason to bring up the lack of consensus in the psychological community: ask 50 psychologists what intelligence is, and you get numerous different answers.
3) IQ and success/college: Funny that this gets brought up. IQ tests are constructed to 'predict' success since they're already similar to achievement tests in school (read arguments here, here, and here). Even then, you would expect college grades to be highly correlated with job performance 6 years after graduation, right? Wrong. Armstrong (2011: 4) writes: "Grades at universities have a low relationship to long-term job performance (r = .05 for 6 or more years after graduation) despite the fact that cognitive skills are highly related to job performance (Roth, et al. 1996). In addition, they found that this relationship between grades and job performance has been lower for the more recent studies." Though the claim that "cognitive skills are highly related to job performance" lies on shaky ground (Richardson and Norgate, 2015).
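To put that r = .05 in perspective: squaring a correlation gives the share of variance in one variable statistically 'explained' by the other. A quick sketch (plain Python, illustrative values only):

```python
# r^2 (coefficient of determination) is the share of variance explained.
# An r of .05 between grades and later job performance explains almost
# nothing; compare it with a moderate r of .5.
for r in (0.05, 0.5):
    print(f"r = {r}: variance explained = {r ** 2:.2%}")
# r = 0.05: variance explained = 0.25%
# r = 0.5: variance explained = 25.00%
```

So the reported grade/performance relationship accounts for about a quarter of one percent of the variance.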
4) My criticisms of IQ do not mean that I deny that 'intelligence exists' (a common strawman); my criticisms concern construction and validity, not the whole "intelligence doesn't exist" canard. I, of course, don't discard the hypothesis that individuals and populations can differ in 'intelligence' or 'intelligence genes'; the critiques provided are against the "IQ-tests-predict-X-in-life" claims and the "IQ-tests-test-intelligence" claims. IQ tests test cultural distance from the middle class. Most IQ tests have general knowledge questions on them, which contribute a considerable amount to the final score. Since IQ tests test learned knowledge present in some cultures and not in others (which is even true for 'culture-fair' tests, see point 5), learning is intimately linked with Jared's definition of 'intelligence'. Thus, IQ tests test learned knowledge that is more present in some classes than others, making them proxies for social class, not 'intelligence' (Richardson, 2002; 2017b).
5) Now for my favorite part: the Raven. This is the test that everyone (or most people) believes is culture-free and culture-fair: since nothing on it is verbal, it supposedly bypasses any suggestion of cultural bias due to differences in general knowledge. However, this assumption is extremely simplistic and hugely flawed.
For one, the Raven is perhaps the test that most reflects knowledge structures present in some cultures more than others, even more so than verbal tests (Richardson, 2002). One may look at the items on the Raven and proclaim, 'Wow, anyone who gets these right must be intelligent', but the most 'complicated' Raven's items are no more complicated than everyday life (Carpenter, Just, and Shell, 1990; Richardson, 2002; Richardson and Norgate, 2014). Furthermore, there is no cognitive theory by which items are selected for analysis and subsequent entry onto a particular Raven's test. Drawing on John Raven's personal notes, Carpenter, Just, and Shell (1990: 408) show that John Raven—the creator of the Raven's Progressive Matrices test—used his "intuition and clinical experience" to rank order items "without regard to any underlying processing theory."
Now to address the claim that the Raven is 'culture-free': take two groups from one genetically similar population, one of foraging hunter-gatherers, the other living in villages with schools. The foraging children, tested at age 11, score 31 percent, while the ones living in more modern areas with amenities get 72 percent right ('average' individuals get 78 percent right while 'intellectually defective' individuals get 47 percent right; Heine, 2017: 188). The people I am talking about are the Tsimane, a foraging, hunter-gatherer population in Bolivia. Davis (2014) studied the Tsimane and administered the Raven to the two groups described above. Now, if the test truly were 'culture-free' as claimed, then they should score similarly, right?
Wrong. She found that reading was the best predictor of performance on the Raven. Children who attend school (presumably) learn how to read (with obviously a better chance to learn if you don't live in a hunter-gatherer environment). So the Tsimane who lived a more modern lifestyle scored more than twice as high on the Raven as those who lived a hunter-gatherer lifestyle. We have two genetically similar groups, one exposed to more schooling than the other, and schooling is what most strongly predicts performance on the Raven. Therefore, this study is definitive proof that the Raven is not culture-fair, since "by its very nature, IQ testing is culture bound" (Cole, 1999: 646, quoted by Richardson, 2002: 293).
6) I doubt that we will be able to genotype people and get their 'IQ' results. Heine (2017) states that you would need all of the SNPs on a gene chip, numbering more than 500,000, to predict half of the variation between individuals in IQ (Davies et al, 2011; Chabris et al, 2012). Furthermore, most genes may turn out to be like 'height genes', that is, variants with tiny individual effects (Goldstein, 2009). This leads Heine (2017: 175) to conclude that "… it seems highly doubtful, contra Robert Plomin, that we'll ever be able to estimate someone's intelligence with much precision merely by looking at his or her genome."
I've also critiqued GWAS/IQ studies by making an analogous argument regarding testosterone, the GWAS studies for testosterone, and how testosterone is produced in the body (it's indirectly controlled by DNA, while what powers the cell is ATP, adenosine triphosphate; Khakh and Burnstock, 2009).
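For what it's worth, the SNP-based 'prediction' being discussed works by summing a person's allele counts weighted by GWAS effect estimates, i.e., a polygenic score. A minimal sketch, with entirely hypothetical SNP IDs and effect sizes:

```python
# Minimal polygenic-score sketch. The SNP IDs and betas below are made
# up for illustration; real GWAS hits for IQ have tiny effects, which
# is part of the point being argued above.

def polygenic_score(genotype, betas):
    """genotype: SNP ID -> risk-allele count (0, 1, or 2).
    betas: SNP ID -> per-allele effect estimate from a GWAS."""
    return sum(betas[snp] * count
               for snp, count in genotype.items() if snp in betas)

betas = {"rs0001": 0.02, "rs0002": -0.01, "rs0003": 0.005}
person = {"rs0001": 2, "rs0002": 1, "rs0003": 0}
print(polygenic_score(person, betas))  # ~0.03
```

With hundreds of thousands of SNPs each contributing this little, the score captures only a small slice of trait variation, which is why the genotype-to-IQ prediction Heine doubts remains out of reach.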
7) Regarding claims about grey and white matter: he's citing Haier et al's work on neural efficiency and the white and grey matter correlates of IQ, and on how different networks of the brain "talk" to each other, as in the P-FIT hypothesis of Jung and Haier (2007; numerous critiques/praises). Though I won't go in depth on this point here, I will only say that correlations from images, correlations from correlations, etc. aren't good enough (the neural network they discuss may also be related to other, noncognitive factors). Lastly, MRI readings are known to be confounded by noise, visual artifacts, and inadequate sampling; even getting emotional in the machine may cause noise in the readings (Okon-Singer et al, 2015), and since movements like speech and even eye movements affect readings, one must use caution when describing normal variation (Richardson, 2017a).
8) There are no genes for intelligence (I'd also ask "what is a gene?") in the fluid genome (Ho, 2013), so I think that 'identifying' 'genes for' IQ will be a bit hard… Also touching on this point, Jared is correct that many genes (most, as a matter of fact) are expressed in the brain: eighty-four percent, to be exact (Negi and Guda, 2017). So I think there will be a bit of a problem there… Further complicating matters is social class. Genetic population structures have emerged due to social class formation and migration. This would, predictably, cause genetic differences between classes, but these genetic differences are irrelevant to education and cognitive ability (Richardson, 2017b). This, then, would account for the extremely small GWAS correlations observed.
9) For the last point, I want to touch briefly on the concept of heritability (because I have a larger theme planned for the concept). Heritability 'estimates' have group and individual flaws, environmental flaws, and genetic flaws (Moore and Shenk, 2017), which arise from the use of the highly flawed CTM (classical twin method) (Joseph, 2002; Richardson and Norgate, 2005; Charney, 2013; Fosse, Joseph, and Richardson, 2015). The flawed CTM inflates heritabilities because environments are not equalized, as they are in animal breeding research; this is why the sky-high heritabilities we get for IQ and other human traits are substantially higher than the estimates observed in controlled breeding experiments, "surpass[ing] almost anything found in the animal kingdom" (Schonemann, 1997: 104).
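For readers unfamiliar with how the CTM produces these estimates: it typically leans on Falconer's formula, which turns the gap between identical (MZ) and fraternal (DZ) twin correlations into a heritability figure, under the very equal-environments assumption criticized above. A sketch with hypothetical, IQ-like twin correlations:

```python
# Falconer's formula from the classical twin method (CTM).
# The twin correlations below are hypothetical. If MZ twins experience
# more similar environments than DZ twins (i.e., the equal-environments
# assumption fails), h2 absorbs environmental variance and is inflated.

def falconer(r_mz, r_dz):
    h2 = 2 * (r_mz - r_dz)  # additive genetic component
    c2 = 2 * r_dz - r_mz    # shared (common) environment
    e2 = 1 - r_mz           # nonshared environment + measurement error
    return h2, c2, e2

h2, c2, e2 = falconer(0.86, 0.60)
print(h2, c2, e2)  # roughly 0.52, 0.34, 0.14
```

Note that the three components sum to 1 by construction; the formula partitions variance, it says nothing about which genes or pathways are involved.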
Lastly, there are numerous hereditarian scientific fallacies which include: 1) trait heritability does not predict what would occur when environments/genes change; 2) they’re inaccurate since they don’t account for gene-environment covariation or interaction while also ignoring nonadditive effects on behavior and cognitive ability; 3) molecular genetics does not show evidence that we can partition environment from genetic factors; 4) it wouldn’t tell us which traits are ‘genetic’ or not; and 5) proposed evolutionary models of human divergence are not supported by these studies (since heritability in the present doesn’t speak to what traits were like thousands of years ago) (Bailey, 1997). We, then, have a problem. Heritability estimates are useful for botanists and farmers because they can control the environment (Schonemann, 1997; Moore and Shenk, 2017). Regarding twin studies, the environment cannot be fully controlled and so they should be taken with a grain of salt. It is for these reasons that some researchers call to end the use of the term ‘heritability’ in science (Guo, 2000). For all of these reasons (and more), heritability estimates are useless for humans (Bailey, 1997; Moore and Shenk, 2017).
Still, other authors state that the use of heritability estimates "attempts to impose a simplistic and reified dichotomy (nature/nurture) on non-dichotomous processes" (Rose, 2006), while Lewontin (2006) argues that heritability is a "useless quantity" and that, to better understand biology, evolution, and development, we should analyze causes, not variances. (I too believe that heritability estimates are useless, especially due to the huge problems with twin studies and the fact that the correct protocols cannot be carried out due to ethical concerns.) Either way, heritability tells us nothing about which genes cause the trait in question, nor which pathways cause trait variation (Richardson, 2012).
In sum, I was glad to appear and discuss (however briefly) with Jared. I listened to it a few times and I realize (and have known before) that I'm a pretty bad public speaker. Either way, I'm glad to get a few of my points and some smaller parts of the overarching arguments out there, and I hope I have a chance to return to that show in the future (preferably to debate JF on IQ). I will, of course, be better prepared for that. (When I saw that Jared would appear, I decided to go on to discuss.) Jared is clearly wrong that the Raven is 'culture-free', and most of his retorts were pretty basic.
(Note: I will expand on all 9 of these points in separate articles.)
Race, Testosterone, Aggression, and Prostate Cancer
4050 words
Race, aggression, and prostate cancer are all said to be linked, with some believing that race causes higher testosterone, which then causes aggression and higher rates of crime, along with maladies such as prostate cancer. These claims have long been put to bed by a wide range of large analyses.
The testosterone debate regarding prostate cancer has raged for decades, and we have made good strides in understanding the etiology of prostate cancer and how it manifests. The same holds true for aggression. But does testosterone hold the key to understanding aggression and prostate cancer, and does race dictate group levels of the hormone, which would then explain some of the disparities between groups and between individuals of certain groups?
Prostate cancer
For decades it was believed that heightened levels of testosterone caused prostate cancer. Most theories to this day still hold that large amounts of androgens, like testosterone and its metabolic byproduct dihydrotestosterone, are the two main factors that drive the proliferation of cells; therefore, if a male is exposed to higher levels of testosterone throughout his life, then he is at higher risk of prostate cancer compared to a man with low testosterone levels, or so the story goes.
In 1986 Ronald Ross set out to test a hypothesis: that black males were exposed to more testosterone in the womb, and that this then drove their higher rates of prostate cancer later in life. He reportedly discovered that blacks, after controlling for confounds, had 15 percent higher testosterone than whites, which may be the cause of the differential prostate cancer mortality between the two races (Ross et al, 1986). This story is told in a 1997 editorial by Hugh McIntosh, which first brings up the claim that black males were supposedly exposed to more testosterone in the womb. I am aware of one paper discussing higher levels of testosterone in black women compared to white women (Perry et al, 1996). Though I've shown that black women don't have high levels of testosterone, not higher than white women, anyway (see Mazur, 2016 for discussion). (Yes, I changed my view on black women and testosterone; stop saying that they have high levels of testosterone, it's just not true. I see people still link to that article despite the long disclaimer at the top.)
Alvarado (2013) discusses Ross et al (1986) and Ellis and Nyborg (1992) (which I also discussed here, along with Ross et al) and other papers claiming higher testosterone in blacks than in whites, and attempts to use a life history framework to explain the higher incidence of prostate cancer in black males. He first notes that nutritional status influences testosterone production, which should be no surprise to anyone. He brings up some points I agree with and some I do not. For instance, he states that differences in nutrition could explain differences in testosterone between Western and non-Western people (I agree), but that this has no effect within Western countries (which is incorrect, as I'll get to later).
He also states that ancestry isn’t related to prostate cancer, writing “In summation, ancestry does not adequately explain variation among ethnic groups with higher or lower testosterone levels, nor does it appear to explain variation among ethnic groups with high or low prostate cancer rates. This calls into question the efficacy of a disease model that is unable to predict either deleterious or protective effects.”
He then states that SES is negatively correlated with prostate cancer rates, and that numerous papers show that people with low SES have higher rates of prostate cancer mortality. This makes sense, since people in a lower economic class would have less access to good medical care, including prostate biopsies and checkups, to identify conditions such as prostate cancer.
He finally discusses the challenge hypothesis and prostate cancer risk. He cites studies by Mazur and Booth (whom I've cited in numerous past articles) as evidence that, as most know, black-majority areas have more crime, which would then cause higher levels of testosterone production. He cites Mazur's old papers showing that low-class men, whether white or black, had heightened levels of testosterone while college-educated men did not, which implies that the social environment can and does elevate testosterone levels and can keep them heightened. Alvarado concludes this section writing: "Among Westernized men who have energetic resources to support the metabolic costs associated with elevated testosterone, there is evidence that being exposed to a higher frequency of aggressive challenges can result in chronically elevated testosterone levels. If living in an aggressive social environment contributes to prostate cancer disparities, this has important implications for prevention and risk stratification." He's not entirely wrong; where he is wrong I will discuss later in this section. It's false that testosterone causes prostate cancer, so part of this thesis is incorrect.
I rebutted Ross et al (1986) in December of last year. The study was hugely flawed and yet still gets cited to this day, including by Alvarado (2013) as the main point of his thesis. Perhaps most importantly, the assays were done 'when it was convenient' for the students, between 10 am and 3 pm. To avoid skewed readings one must assay the individuals as close to 8:30 am as possible. Furthermore, they did not control for waist circumference, which is another huge confound. Lastly, the sample was extremely small (50 blacks and 50 whites) and nonrepresentative (college students). I don't think anyone can honestly cite this paper as evidence for blacks having higher levels of testosterone or for testosterone causing prostate cancer, because it shows neither. (Read Race, Testosterone and Prostate Cancer for more information.)
What may explain prostate cancer rates if not differences in testosterone, as has been hypothesized for decades? Well, as I have argued, diet explains a lot of the variation between races. The etiology of prostate cancer is not known (ACA, 2016), but we know that it's not testosterone and that diet plays a large role in its acquisition. Due to their dark skin, blacks need more sunlight than whites to synthesize the same amount of vitamin D, and low levels of vitamin D in blacks are strongly related to prostate cancer (Harris, 2006). Murphy et al (2014) even showed, through biopsies, that black American men had higher rates of prostate cancer if they had lower levels of vitamin D. Lower concentrations of vitamin D in blacks compared to whites are due to dark pigmentation, which causes reduced vitamin D photoproduction and may also account for "much of the unexplained survival disparity after consideration of such factors as SES, state at diagnosis and treatment" (Grant and Peiris, 2012).
Testosterone
As mentioned above, testosterone is assumed to be higher in certain races compared to others (based on flawed studies) which then supposedly exacerbates prostate cancer. However, as can be seen above, a lot of assumptions go into the testosterone-prostate cancer hypothesis which is just false. So if the assumptions are false about testosterone, mainly regarding racial differences in the hormone and then what the hormone actually does, then most of their claims can be disregarded.
Perhaps the biggest problem is that Ross et al is a 32-year-old paper (which still gets cited favorably despite its huge flaws), while our understanding of the hormone and its physiology has made considerable progress in that time frame. So it's in fact not so weird to see papers that say "Prostate cancer appears to be unrelated to endogenous testosterone levels" (Boyle et al, 2016). Other papers show the same thing: testosterone is not related to prostate cancer (Stattin et al, 2004; Michaud, Billups, and Partin, 2015). This kills a lot of theories and hypotheses, especially regarding racial differences in prostate cancer acquisition and mortality. What this shows is that even if blacks did have 15 percent higher serum testosterone than whites, as Ross et al, Rushton, Lynn, Templer, et al believed, it wouldn't cause higher levels of prostate cancer (nor aggression, which I'll get into later).
How high is testosterone in black males compared to white males? People may attempt to cite papers like the 32-year-old paper by Ross et al, though as I’ve discussed numerous times the paper is highly flawed and should therefore not be cited. Either way, levels are not as high as people believe and meta-analyses and actual nationally representative samples (not convenience college samples) show low to no difference, and even the low difference wouldn’t explain any health disparities.
One of the best papers on this matter of racial differences in testosterone is Richard et al (2014). They meta-analyzed 15 studies and concluded that the "racial differences [range] from 2.5 to 4.9 percent" but that "this modest difference is unlikely to explain racial differences in disease risk." This shows that testosterone isn't as high in blacks as is popularly misconceived and, as I will show below, it wouldn't even cause higher rates of aggression and therefore criminal behavior. (Rohrmann et al, 2007 show no difference in testosterone between black and white males in a nationally representative sample after controlling for lifestyle and anthropometric variables. Mazur, 2009, on the other hand, attributes blacks' higher levels to low marriage rates and lower adiposity, and he found a .39 ng/ml difference between blacks and whites aged 20 to 60. Is that supposed to explain crime, aggression, and prostate cancer?)
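To get a feel for how small these gaps are, it helps to convert the relative difference into absolute terms. A minimal sketch; the 5.0 ng/ml reference mean for total testosterone is an assumption chosen purely for illustration, not a figure from any of the cited papers:

```python
# Convert the meta-analytic 2.5-4.9 percent relative difference
# (Richard et al, 2014) into absolute terms. The reference mean of
# 5.0 ng/ml total testosterone is an assumed illustrative value.
reference_mean_ng_ml = 5.0  # assumed, for illustration only

low = reference_mean_ng_ml * 2.5 / 100   # lower bound of the range
high = reference_mean_ng_ml * 4.9 / 100  # upper bound of the range
print(f"2.5% -> {low:.3f} ng/ml; 4.9% -> {high:.3f} ng/ml")

# Mazur's (2009) reported gap, as a share of the same reference mean:
mazur_gap = 0.39  # ng/ml
print(f"0.39 ng/ml is {mazur_gap / reference_mean_ng_ml:.1%} of the reference mean")
```

Under that assumption, the meta-analytic range works out to roughly an eighth to a quarter of a nanogram per milliliter, which makes plain why Richard et al call the difference "modest."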
However, as I noted last year (and as Alvarado, 2013 did as well), young black males with low education have higher levels of testosterone, which is not seen in black males of the same age group with more education (Mazur, 2016). Since more highly educated blacks of the same age group have lower levels of testosterone, this is a clue that education and the social environment drive testosterone and aggressive/violent behavior, not the other way around.
Mazur (2016) also replicated Assari, Caldwell, and Zimmerman's (2014) finding that "Our model in the male sample suggests that males with higher levels of education has lower aggressive behaviors. Among males, testosterone was not associated with aggressive behaviors." I know it is hard for many to accept that testosterone doesn't lead to aggressive behavior in men, but I'll cover that in the final section.
So it's clear that the myths that Rushton, Lynn, Templer, Kanazawa, et al pushed regarding hormonal differences between the races are false. It's also worth noting, as I did in my response to Rushton on r/K selection theory, that the r/K model is literally predicated on 1) testosterone differences between races being real and in the direction that Rushton and Lynn want (they cite the highly flawed Ross et al, 1986) and 2) testosterone causing higher levels of aggression, which would then lead to higher rates of crime and incarceration. As I'll show below, it does not.
A blogger who goes by the name of ethnicmuse did an analysis of numerous testosterone papers and found that, if anything, Asians average higher levels than whites. This, of course, goes against a ton of HBD theory, that is, if testosterone did what HBDers believe it does (it doesn't). This is what it comes down to: blacks don't have higher levels of testosterone than whites, and testosterone doesn't cause aggression or prostate cancer, so even if the relationship were in the direction that Rushton et al assert, it still wouldn't carry the explanatory weight they give it.
Last year Lee Ellis published a paper outlining his ENA theory (Ellis, 2017). I responded to the paper and pointed out what he got right and wrong. He discussed strength (blacks aren't stronger than whites due to body type and physiology, but excel in other areas); circulating testosterone; umbilical cord testosterone exposure; bone density and crime; penis size, race, and crime (Rushton's 1997 claims on penis size don't 'size up' to the literature, as I've shown twice); prostate-specific antigens, race, and prostate cancer; CAG repeats; education and 'intelligence'; and prenatal androgen exposure. His theory has large holes and doesn't line up in some places, as he himself admits in his paper. He, as expected, cites Ross et al (1986) favorably in his analysis.
Testosterone can’t explain all of these differences, no matter if it’s prenatal androgen exposure or not, and a difference of 2.5 to 4.9 percent between blacks and whites regarding testosterone (Richard et al, 2014) won’t explain differences in crime, aggression, nor prostate cancer.
Other authors have also attempted to implicate testosterone as a major player in a wide range of evolutionary theories (Lynn, 1990; Rushton, 1997; Rushton, 1999; Hart, 2007; Rushton and Templer, 2012; Ellis, 2017). However, as can be seen by digging into this literature, these claims are not true, and therefore we can discard the conclusions reached by the aforementioned authors since they're based on false premises (that testosterone causes aggression, crime, and prostate cancer, and that r/K selection means anything for human races; it doesn't).
Finally, to conclude this section, does testosterone explain racial differences in crime? No, racial differences in testosterone, however small, cannot be responsible for the crime gap between blacks and whites.
Testosterone and aggression
Testosterone and aggression, are they linked? Can testosterone tell us anything about individual differences in aggressive behavior? Surprisingly for most, the answer seems to be a resounding no. One example is the castration of males. Does it completely take away the urge to act aggressively? No, it does not. What is shown when sex offenders are castrated is that their levels of aggression decrease but, importantly, they do not decrease to zero. Robert Sapolsky writes on page 96 of his book Behave: The Biology of Humans at Our Best and Worst (2017):
… the more experience a male has being aggressive prior to castration, the more aggression continues afterward. In other words, the less his being aggressive in the future requires testosterone and the more it’s a function of social learning.
He also writes (pg 96-97):
On to the next issue that lessens the primacy of testosterone: What do individual levels of testosterone have to do with aggression? If one person has higher testosterone levels than another, or higher levels this week than last, are they more likely to be aggressive?
Initially the answer seemed to be yes, as studies showed correlation between individual differences in testosterone levels and levels of aggression. In a typical study, higher testosterone levels would be observed in those male prisoners with higher rates of aggression. But being aggressive stimulates testosterone secretion; no wonder more aggressive individuals had higher levels. Such studies couldn’t disentangle chickens and eggs.
Thus, a better question is whether differences in testosterone levels among individuals predict who will be aggressive. And among birds, fish, mammals, and especially other primates, the answer is generally no. This has been studied extensively in humans, examining a variety of measures of aggression. And the answer is clear. To quote British endocrinologist John Archer in a definitive 2006 review, “There is a weak and inconsistent association between testosterone levels and aggression in [human] adults, and . . . administration of testosterone to volunteers typically does not increase aggression.” The brain doesn’t pay attention to testosterone levels within the normal range.
[…]
Thus, aggression is typically more about social learning than about testosterone, and differing levels of testosterone generally can't explain why some individuals are more aggressive than others.
Sapolsky also has a 1997 book of essays on human biology titled The Trouble With Testosterone: And Other Essays On The Biology Of The Human Predicament and he has a really good essay on testosterone titled Will Boys Just Be Boys? where he writes (pg 113 to 114):
Okay, suppose you note a correlation between levels of aggression and levels of testosterone among these normal males. This could be because (a) testosterone elevates aggression; (b) aggression elevates testosterone secretion; (c) neither causes the other. There’s a huge bias to assume option a while b is the answer. Study after study has shown that when you examine testosterone when males are first placed together in the social group, testosterone levels predict nothing about who is going to be aggressive. The subsequent behavioral differences drive the hormonal changes, not the other way around.
Because of a strong bias among certain scientists, it has taken forever to convince them of this point.
[…]
As I said, it takes a lot of work to cure people of that physics envy, and to see that interindividual differences in testosterone levels don't predict subsequent differences in aggressive behavior among individuals. Similarly, fluctuations in testosterone within one individual over time do not predict subsequent changes in the levels of aggression in that one individual—get a hiccup in testosterone secretion one afternoon and that's not when the guy goes postal.
And on page 115 he writes:
You need some testosterone around for normal levels of aggressive behavior—zero levels after castration and down it usually goes; quadruple it (the sort of range generated in weight lifters abusing anabolic steroids), and aggression typically increases. But anywhere from roughly 20 percent of normal to twice normal and it’s all the same; the brain can’t distinguish among this wide range of basically normal values.
Weird…almost as if there is a wide range of ‘normal’ that is ‘built in’ to our homeodynamic physiology…
So here's the point: differences in testosterone between individuals tell us nothing about individual differences in aggressive behavior. Castration and replacement seem to show that, however broadly, testosterone is related to aggression. "But that turns out to not be true either, and the implications of this are lost on most people the first thirty times you tell them about it. Which is why you'd better tell them about it thirty-one times, because it's the most important part of this piece" (Sapolsky, 1997: 115).
Later in the essay, Sapolsky discusses five monkeys that were given time to form a hierarchy, ranked 1 through 5. Number 3 can 'throw his weight around' with 4 and 5 but treads carefully around 1 and 2. He then says to take the third-ranking monkey and inject him with a ton of testosterone; when you check the behavioral data, he'd be participating in more aggressive actions than before, which would imply that the exogenous testosterone causes participation in more aggressive behavior. But it's way more nuanced than that.
So one might conclude that even though small fluctuations in the levels of the hormone don't seem to matter much, testosterone still causes aggression. But that would be wrong. Check out number 3 more closely. Is he now raining aggression and terror on any and all in the group, frothing in an androgenic glaze of indiscriminate violence? Not at all. He's still judiciously kowtowing to numbers 1 and 2 but has simply become a total bastard to numbers 4 and 5. This is critical: testosterone isn't causing aggression, it's exaggerating the aggression that's already there.
The correlation between testosterone and aggression is between .08 and .14 (Book, Starzyk, and Quinsey, 2001; Archer, Graham-Kevan, and Davies, 2005; Book and Quinsey, 2005). Therefore, along with all of the other evidence provided in this article, it seems that testosterone and aggression have only a weak positive correlation, which buttresses the point that aggression drives concurrent increases in testosterone, not the reverse.
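A correlation in that range carries little explanatory weight, since the share of variance one variable statistically accounts for in the other is the square of the correlation. A quick check of the range cited above:

```python
# Share of variance in aggression statistically "explained" by
# testosterone at the reported correlations (r = .08 to .14),
# computed as r squared and shown as a percentage.
for r in (0.08, 0.14):
    r_squared = r ** 2
    print(f"r = {r:.2f} -> r^2 = {r_squared:.4f} ({r_squared:.2%})")
```

Even at the top of the reported range, testosterone levels would account for under 2 percent of the variance in aggressive behavior.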
Sapolsky then goes on to discuss the amygdala's role in fear processing. The amygdala has its influence on aggressive behavior through the stria terminalis, a bundle of neuronal connections. How the amygdala influences aggression is simple: bursts of electrical excitation called action potentials surge down the stria terminalis and change the hypothalamus. Inject testosterone right into the brain: will it cause the same action potentials to surge down the stria terminalis? No, it does not turn on the pathway at all. Testosterone has an effect only if the amygdala is already sending aggression-provoking action potentials down the stria terminalis; it increases the rate of those action potentials by shortening the rest time between them. So it doesn't turn on this pathway, it exaggerates the preexisting pattern, which is to say it exaggerates the response to whatever environmental triggers got the amygdala excited in the first place.
He ends this essay writing (pg 119):
Testosterone is never going to tell us much about the suburban teenager who, in his after-school chess club, has developed a particularly aggressive style with his bishops. And it certainly isn’t going to tell us much about the teenager in some inner-city hellhole who has taken to mugging people. “Testosterone equals aggression” is inadequate for those who would offer a simple solution to the violent male—just decrease levels of those pesky steroids. And “testosterone equals aggression” is certainly inadequate for those who would offer a simple excuse: Boys will be boys and certain things in nature are inevitable. Violence is more complex than a single hormone. This is endocrinology for the bleeding heart liberal—our behavioral biology is usually meaningless outside of the context of social factors and the environment in which it occurs.
Injecting individuals with supraphysiological doses of testosterone of 200 to 600 mg per week does not cause heightened anger or aggression (Tricker et al, 1996; O'Connor et al, 2002). This, too, is a large blow to the testosterone-induces-aggression hypothesis. Aggressive behavior heightens testosterone; testosterone doesn't heighten aggressive behavior. (This is the direction of causality that has been looked for, and here it is; it does not run the other way.) This tells us that we need to be put into situations for our aggression to rise and, along with it, testosterone. I don't see how people could think testosterone causes aggression: the environmental trigger needs to be there first in order for the body's physiology to begin testosterone production to prepare for the stimulus. Once the trigger occurs, testosterone can and does stay heightened, especially in areas where dominance contests are more likely to occur, namely low-income areas (Mazur, 2006, 2016).
(Also read my response to Batrinos, 2012, my musings on testosterone and race, and my responses to Robert Lindsay and Sean Last.)
Lastly, one thing that gets on my nerves is that people point to the myth of "roid rage" to try to show that testosterone and its derivatives cause violence, aggression, etc.: the idea that when an individual injects himself with testosterone, anabolic steroids, or another banned substance, he becomes more aggressive as a result of more free-flowing testosterone in his bloodstream.
The problem here is that people believe what they hear in the media about steroids and testosterone, and much of it is not true. One large analysis examined the effects of steroid and other illicit drug use on behavior, and found that "it was not lifetime steroid use per se, but rather co-occurring polysubstance abuse that most parsimoniously explains the relatively strong association of steroid use and interpersonal violence" (Lundholm et al, 2015). That is, once other polysubstance abuse was controlled for, steroid use itself did not predict conviction for interpersonal violence, implying that the polysubstance abuse, not the steroids, drives the association.
Conclusion
Numerous myths about testosterone have been propagated over the decades, and they are still believed in the new millennium despite numerous studies and arguments to the contrary. As can be seen, the myths that people believe about testosterone are easily debunked. Numerous papers (with better methodology than Ross et al) attest to the fact that racial differences in testosterone are not as large as was believed decades ago. Diet can explain a lot of the variation, especially vitamin D intake. Injecting men with supraphysiological doses of testosterone does not heighten anger or aggression. It does not even heighten prostate cancer severity.
Racial differences in testosterone are also not as large as people would like to believe; if anything the relationship runs the opposite way, with Asians having higher levels and whites lower (which wouldn't, on average, imply femininity). So, as can be seen, the attempted r/K explanations from Rushton et al don't work out here. They're just outright wrong on testosterone, as I've been arguing for a long while on this blog.
Testosterone doesn't cause aggression; aggression causes heightened testosterone. Studies of castrated men show that the more aggressive behavior they engaged in before castration, the more they engage in after, which implies a large effect of social learning on violent behavior. Either way, the alarmist attitudes people hold regarding testosterone, as I have argued, are not needed, because they rest largely on myths.
Responding to Criticisms on IQ
2250 words
My articles get posted on the Reddit board /r/hbd and, of course, people don't like what I write about IQ. I get accused of relying on 'Richardson n=34 studies' even though that was literally one citation in a 32-page paper and it does not affect his overall argument. (I will be responding to Kirkegaard and UnsilencedSci in separate articles.) I'll use this time to respond to criticisms from the Reddit board.
One commenter claims I'm peddling BS, quoting this line of mine:
“But as Burt and his associates have clearly demonstrated, teachers’ subjective assessments afford even more reliable predictors.”
Well, no, teachers are in fact remarkably poor at predicting student’s success in life. Simple formulas based on school grades predict LIFE success better than teachers, notwithstanding the IQ tests.
You're incorrect. As I stated in my response to The Alternative Hypothesis, the correlation between teachers' judgments and student achievement is .66: "The median correlation, 0.66, suggests a moderate to strong correspondence between teacher judgements and student achievement" (Hoge and Coladarci, 1989: 303). This is a higher correlation than what was found in the 'validation studies' from Hunter and Schmidt.
He cherry-picks a few bad studies and ignores entire bodies of evidence with sweeping statements like this:
“This, of course, goes back to our good friend test construction. ”
Test construction is WHOLLY IRRELEVANT. It’s like saying: “well, you know, the ether might be real because Michelson-Morley experiment has been constructed this way”. Well no, it does not matter how MM experiment has been constructed as long as it tests for correct principles. Both IQ and MM have predictive power and it has nothing to do with “marvelling”, it has to do whether the test, regardless of its construction, can effectively predict outcomes or not.
This is a horrible example. You're comparing the presuppositions of test constructors, who have in mind who is or is not intelligent and then construct the test to confirm those preconceived notions, to an experiment that was designed to detect the presence and properties of the aether? Surely you can think of a better analogy, because this is not it.
More BS: “Though a lot of IQ test questions are general knowledge questions, so how is that testing anything innate if you’ve first got to learn the material, and if you have not you’ll score lower?”
Of course the IQ tests do NOT test much of general knowledge. Out of 12 tests in WAIS only 2 deal with general knowledge.
The subtest breakdown referenced here is from Nisbett (2012: 14) (though it's the WISC, not the WAIS, they're similar; all IQ tests go through item analysis, tossing items that don't conform to the test constructors' presuppositions).
Either way, our friend test construction makes an appearance here, too. This is how these tests are made: they are made to conform to the constructors' presuppositions. The WISC and WAIS have similar subtests, either way. Test anxiety, furthermore, leads to lessened performance on the block design and picture arrangement subtests (Hopko et al, 2005), and moderate to severe stress is related to social class and IQ test performance. Stress affects the growth of the hippocampus and PFC (prefrontal cortex) (Davidson and McEwen, 2012), so does this seem like an 'intellectual' thing? Furthermore, all tests and batteries are tried out on a sample of children, with items not contributing to a normal distribution being tossed out; this 'item analysis' forces what we 'see' regarding IQ tests.
Even the great Jensen said in his 1980 book Bias in Mental Testing (pg 71):
It is claimed that the psychometrist can make up a test that will yield any type of score distribution he pleases. This is roughly true, but some types of distributions are easier to obtain than others.
This holds for the WAIS, the WISC, the Raven, any type of IQ test, and it shows how arbitrary 'item selection' is. No matter what type of 'IQ test' you attempt to use to say 'It does test "intelligence" (whatever that is)!!', the reality of test construction, of constructing tests to fit presuppositions and distributions, cannot be escaped.
The other popular test, Raven’s Progressive Matrices does not test for general knowledge at all.
This is a huge misconception. People think that just because there are no ‘general knowledge questions’ or anything verbal regarding the Matrices then it must test an innate power, thus mysterious ‘g’. However, this is wrong and he clearly doesn’t keep up with recent data:
Reading was the greatest predictor of performance on Raven's, despite controlling for age and sex. Attendance was also strongly related with Raven's performance [school attendance was used as a proxy for motivation]. These findings suggest that reading, or pattern recognition, could be fundamentally affecting the way an individual problem solves or learns to learn, and is somehow tapping into 'g'. Presumably the only way to learn to read is through schooling. It is, therefore, essential that children are exposed to formal education, have the motivation to go/stay in school, and are exposed to consistent, quality training in order to develop the skills associated with better performance. (pg 83, Variable Education Exposure and Cognitive Task Performance Among the Tsimane, Forager-Horticulturalists)
Furthermore, according to Richardson (2002): “Performance on the Raven’s test, in other words, is a question not of inducing ‘rules’ from meaningless symbols, in a totally abstract fashion, but of recruiting ones that are already rooted in the activities of some cultures rather than others.”
The assumption that the Raven is ‘culture free’ because it’s ‘just shapes and rote memory’ is clearly incorrect. James Thompson even said to me that Linda Gottfredson said that people only think the Raven is a ‘test of pure g’ because Jensen said it, which is not true.
This is completely wrong in so many ways. No understanding of normalization. Suggestion that missing heritability is discovering environmentally. I think a distorted view of the Flynn Effect. I’ll just stick to some main points.
I didn’t imply a thing about missing heritability. I only cited the article by Evan Charney to show how populations become stratified.
RR: There is no construct validity to IQ tests
First, let’s go through the basics. All IQ tests measure general intelligence (g), the positive manifold underlying every single measure of cognitive ability. This was first observed over a century ago and has been replicated across hundreds of studies since. Non-g intelligences do not exist, so for all intents and purposes it is what we define as intelligence. It is not ‘mysterious’
Thanks for the history lesson. 1) We don't know what 'g' is. (I've argued that it's not physiological.) So 'intelligence' is defined as 'g', yet we don't know what 'g' is. His statement here is pretty much literally 'intelligence is what IQ tests test'.
It would be correct to say that the exact biological mechanisms aren’t known. But as with Gould’s “reification” argument, this does not actually invalidate the phenomenon. As Jensen put it, “what Gould has mistaken for “reification” is neither more nor less than the common practice in every science of hypothesizing explanatory models or theories to account for the observed relationships within a given domain.” Poor analogies to white blood cells and breathalyzer won’t change this.
It’s not a ‘poor analogy’ at all. I’ve since expanded on the construct validity argument with more examples of other construct valid tests like showing how the breathalyzer is construct valid and how white blood cell count is a proxy for disease. They have construct validity, IQ tests do not.
RR: I said that I recall Linda Gottfredson saying that people say that Ravens is culture-fair only because Jensen said it
This has always been said in the context of native, English speaking Americans. For example it was statement #5 within Mainstream Science on Intelligence. Jensen’s research has demonstrated this. The usage of Kuwait and hunter gatherers is subsequently irrelevant.
Point 5 on the Mainstream Science on Intelligence memo is “Intelligence tests are not culturally biased against American blacks or other native-born, English-speaking peoples in the U.S. Rather, IQ scores predict equally accurately for all such Americans, regardless of race and social class. Individuals who do not understand English well can be given either a nonverbal test or one in their native language.”
This is very vague. Richardson (2002) has noted how different social classes are differentially prepared for IQ test items:
I shall argue that the basic source of variation in IQ test scores is not entirely (or even mainly) cognitive, and what is cognitive is not general or unitary. It arises from a nexus of sociocognitive-affective factors determining individuals' relative preparedness for the demands of the IQ test.
The fact of the matter is, all social classes aren’t prepared in the same way to take the IQ test and if you read the paper you’d see that.
RR: IQ test validity
I’ll keep this short. There exist no predictors stronger than g across any meaningful measures of success. Not education, grades, upbringing, you name it.
Yes, there are: teacher assessment, for one, which has a higher correlation with achievement (.66) than 'IQ' has with job performance.
RR: Another problem with IQ test construction is the assumption that it increases with age and levels off after puberty.
The very first and most heavily researched behavioral trait’s heritability has been intelligence. Only through sheer ignorance could the term “assumption” describe findings from over a century of inquiry.
Yes, the term 'assumption' is correct. You do realize that the increase in the heritability of IQ with age is, again, due to test construction? You can build that into the test by including more advanced questions, say, high-school-level questions for a 12-year-old, and heritability would seem to increase due purely to how the test was constructed.
Finally, IanTichszy says:
That article is thoroughly silly.
First, the IQ tests predict real world-performance just fine: http://thealternativehypothesis.org/index.php/2016/04/15/the-validity-of-iq/
I just responded to this article this week. IQ tests only 'predict real-world performance just fine' because they're constructed to, and even then, high-achieving children rarely become high-achieving adults, while some low-achieving children go on to become successful adults. There are numerous problems with TAH's article, which I've already covered.
That is the important thing, not just correlation with blood pressure or something biological. Had g not predicted real-world performance from educational achievement to job performance with very high reliability, it would be useless, but it does predict those.
Test construction. You can't get past that by saying 'it does predict', because it only predicts because it's constructed to (I'd call it 'post-dict').
Second, on Raven’s Progressive Matrices test: the argument “well Jensen just said so” is plain silly. If RPM is culturally loaded, a question: just what culture is represented on those charts? You can’t reasonably say that. Orangutans are able to solve simplified versions of RPM, apparently they do not have a problem with cultural loading. Just look at the tests yourself.
Of course it’s silly to accept that the Raven is culture free and tests ‘g’ the best just ‘because Jensen said so’. The culture loading of the Raven is known; there is a ‘hidden structure’ in the items. Even the constructors of the Raven have noted this: they state that they transposed the items to read from left to right, not right to left, which is a tacit admission of cultural loading. “The reason that some people fail such problems is exactly the same reason some people fail IQ test items like the Raven Matrices tests… It simply is not the way the human cognitive system is used to being engaged” (Richardson, 2017: 280).
Furthermore, when items are familiar to all groups, even young children are capable of complex analogical reasoning. IQ tests “test for the learned factual knowledge and cognitive habits more prominent in some social classes than in others. That is, IQ scores are measures of specific learning, as well as self-confidence and so on, not general intelligence” (Richardson, 2017: 192).
Another piece of misinformation: claiming that IQs are not normally distributed. Well, we do not really know the underlying distribution, that’s the problem, only the rank order of questions by difficulty, because we do not have absolute measure of intelligence. Still, the claim that SOME human mental traits, other than IQ, do not have normal distribution, in no way impacts the validity of IQ distribution as tests found it and projected onto mean 100 and standard dev 15 since it reflects real world performance well.
Physiological traits important for survival are not normally distributed. (And of course it is assumed that IQ both indexes innate physiological differences and is important for survival; if it were physiological, it would not be normally distributed either, since traits important for survival have low heritabilities.) It ‘predicts’ real-world performance well because it is constructed to; see above and my other articles on this matter.
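The point about projection is easy to make concrete. Below is a minimal sketch (a hypothetical illustration, not any test publisher’s actual norming procedure) of how raw scores with an unknown underlying distribution get mapped onto a bell curve with mean 100 and SD 15. Note that only the rank order of the raw scores survives the transformation; the ‘normality’ of IQ is imposed by construction, not discovered.

```python
# Sketch: norming raw test scores onto an 'IQ' scale of mean 100, SD 15.
# Hypothetical procedure for illustration only.
from statistics import NormalDist

def norm_to_iq(raw_scores, mean=100.0, sd=15.0):
    """Map raw scores to 'IQ' via percentile rank -> inverse normal CDF.
    Only rank order matters: any monotone transformation of the raw
    scores yields exactly the same IQ scores."""
    n = len(raw_scores)
    order = sorted(range(n), key=lambda i: raw_scores[i])
    iq = [0.0] * n
    for rank, i in enumerate(order):
        pct = (rank + 0.5) / n          # mid-rank percentile in (0, 1)
        z = NormalDist().inv_cdf(pct)   # z-score that percentile implies
        iq[i] = mean + sd * z
    return iq

# A skewed batch of raw scores still comes out looking 'normal':
raw = [3, 5, 6, 7, 8, 9, 10, 11, 12, 40]   # one extreme outlier
print([round(x) for x in norm_to_iq(raw)])
```

Notice that the outlier raw score of 40 gets the same IQ it would have gotten had it been 13: the projection discards everything about the distribution except who outranks whom.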
If you know even the basic facts about IQ, it’s clear that this article has been written in bad faith, just for sake of being contrarian regardless of the truth content or for self-promotion.
No, people don’t know the basic facts of IQ (or its construction). My article isn’t written in bad faith, nor is it contrarian regardless of truth content or written for self-promotion. I can, clearly, address criticisms of my writing.
In the future, if anyone has any problems with what I write, then please leave a comment here on the blog at the relevant article. Commenting on Reddit when the article gets posted there is no good because I probably won’t see it.
The Native American Genome and Dubious Interpretations
1100 words
A recent paper on the origins of Native Americans was published, titled Terminal Pleistocene Alaskan genome reveals first founding population of Native Americans (Moreno-Mayar et al, 2018). An infant genome was studied, and it was found that the group of people the infant belonged to was similar to modern Native Americans but not a direct ancestor of them. The infant’s group and modern Native Americans share the same common ancestors, however. This, of course, supports the hypothesis that Native Americans are descended from Asian migrants.
The infant is also related to both North and South Natives, which implies they’re descended from a single migration. (Though I am aware of a hypothesis that states that there were three waves of migration into the Americas from Beringia, along with back migrations from South America back into Asia.)
Moreno-Mayar et al (2018) write in the abstract: “Our findings further suggest that the far-northern North American presence of northern Native Americans is from a back migration that replaced or absorbed the initial founding population of Ancient Beringians.” And they conclude (pg 5):
The USR1 results provide direct genomic evidence that all Native Americans can be traced back to the same source population from a single Late Pleistocene founding event. Descendants of that population were present in eastern Beringia until at least 11.5 ka. By that time, however, a separate branch of Native Americans had already established itself in unglaciated North America, and diverged into the two basal groups that ultimately became the ancestors of most of the indigenous populations of the Americas.
This is a highly interesting paper which shows, as we’ve known for decades, that the ancestors of the Native Americans crossed the Bering Land Bridge during the Late Pleistocene. Though my reason for writing this article is not this very interesting paper itself, but the ‘conclusions’ that people are drawing from it.
Dubious ‘interpretations’
Of course, whenever a study like this gets published you get a whole slew of people who read the popular articles on the matter and don’t read the actual journal article. The problem here is that some people took the chance to attempt to say that this paper showed that the origins of Man were in Europe, not Africa as can be seen in the tweet below.
Black Pigeon Speaks, YouTuber, purportedly shows a quotation from the Nature article which said:
“…represent a growing body of evidence being discovered across the world that suggests the origins of the human race may have been Europe and not Africa as once believed.”
So I read the paper, read it again, and even Ctrl-F’d it, and didn’t see the phrase. So where did the phrase come from?
I did some digging and I found the source for the quote, which, of course, was not in the Nature article. The quote in question comes from an article titled Scientists discover DNA proving original Native Americans were White. Oh, wow. Isn’t that interesting? Maybe he read a different paper than I did.
The author stated that the infant was “more closely related to modern white Europeans“, though of course this too wasn’t stated anywhere in the article. He also quoted an evolutionary biologist who stated “This is a new population of Native Americans — the white Native American.” Wow, this is interesting. Now let’s look at what else this author writes:
Working with scientists at the University of Alaska and elsewhere, Willerslev compared the genetic makeup of the baby, named Xach’itee’aanenh t’eede gaay or “sunrise child-girl” by the local community, with genomes from other ancient and modern people. They found that nearly half of the girls DNA came from the ancient North Europeans who lived in what is no Scandinavia. The rest of her genetic makeup was a roughly even mixed of DNA now carried by northern and southern Native Americans. Using evolutionary models, the researchers showed the ancestors of the first Native Americans started to emerge as a distinct population about 35,000 years ago.
Isn’t that weird? This is nowhere in the original article. So I did some digging and what do I find? The author of this article literally plagiarized, almost word for word, another article from The Guardian!
Working with scientists at the University of Alaska and elsewhere, Willerslev compared the genetic makeup of the baby, named Xach’itee’aanenh t’eede gaay or “sunrise child-girl” by the local community, with genomes from other ancient and modern people. They found that nearly half of the girls DNA came from the ancient north Eurasians who lived in what is now Siberia. The rest of her genetic makeup was a roughly even mixed of DNA now carried by northern and southern Native Americans.
Using evolutionary models, the researchers showed the ancestors of the first Native Americans started to emerge as a distinct population about 35,000 years ago.
This is not only an example of straight-up plagiarism; the author of the other article literally just switched “Siberia” to “Scandinavia” and “ancient north Eurasians” to “ancient North Europeans”. Ancient north Eurasians are NOT WHITE! Where does he gather this from?! There is NO INDICATION that they were ‘ancient North Europeans’!
In sum, if you ever see articles like this that purport to show that the first Native Americans were white Europeans and that this supposedly calls the OoA model into question, always, ALWAYS check the claims and don’t fall for plagiarist bullshit. It is truly incredible that the author not only copied and pasted a full article, but also swapped a few words to fit the narrative he was pushing! I will be notifying the author of the Guardian article of this plagiarism. You can check it out yourself: read the first article cited above, then read the Guardian article. Do people really think they can get away with plagiarizing an article like that word for word?
This article is on a whole other level compared to the claims that modern Man began in Europe and that a few teeth upend the OoA model. This guy didn’t even read the paper; it seems he read the Guardian article, copied and pasted it, and changed a few words for his own ‘gain’ to ‘show’ that the first Native Americans were white. There is no way one can interpret the paper in this manner if one has truly read and understood it. Always, always read the original journal articles, and if you must read popular science articles, then read them from reputable websites, not kooky websites with an agenda to push that literally plagiarize other people’s work. You can tell who’s gullible and who’s not just by what they say about new papers that can possibly be misinterpreted.
