Eat less and move more and you will lose weight. That’s the mantra repeated around the world for decades. “The First Law of Thermodynamics states that energy can neither be created nor destroyed in an isolated system.” This Law is invoked in support of the CICO (calories in, calories out) paradigm. But this kind of thinking does not make sense. The First Law only tells us that energy is conserved. That’s it. It says absolutely nothing about weight loss. Doesn’t anyone find it strange that we are given weight-loss advice drawn from physics (thermodynamics) rather than from physiology? This fallacy, which I term the CICO fallacy, then leads to a second fallacy: that a calorie is a calorie. The implication is that the body does not discriminate between the types of macronutrient you choose to ingest; it only registers the amount of energy consumed. But, as I will show, this type of thinking does not work either.
The First Law of Thermodynamics
The First Law states that energy can neither be created nor destroyed. A positive caloric balance must be associated with weight gain, but the wrong conclusion comes in when people assume that the positive caloric balance is driving the weight gain. If the First Law is interpreted correctly, then both conclusions—getting fat makes one consume more energy and consuming more energy makes one fat—are valid hypotheses. The evidence and observations suggest the former: getting fat makes one consume more energy. (Jason Fung (2016: 33) writes: “Having studied a full year of thermodynamics in university, I can assure you that neither calories nor weight loss were mentioned even a single time.”)
Obesity researcher Jules Hirsch said to the New York Times:
There is an inflexible law of physics—energy taken in must exactly equal the number of calories leaving the system when fat storage is unchanged. Calories leave the system when food is used to fuel the body. To lower fat content—reduce obesity—one must reduce calories taken in, or increase activity, or both. This is true whether calories come from pumpkins or peanuts or pate de foie gras.
It’s this type of information that has allowed the CICO paradigm to continue unabated. However, there are dissenting voices: Dr. Jason Fung, Gary Taubes, Zoe Harcombe, Nina Teicholz, and Tim Noakes all go against the conventional wisdom regarding obesity and the cause of weight gain.
Another thing that is not taken into account is what occurs in the body when calories are reduced. The energy we consume and the energy we expend are not independent variables—they are dependent. If we lower what we consume, what we expend lowers as well; change one and the other changes too. Exercise more in an attempt to lose weight and you will eat more to compensate. Eat less to lose weight and your body’s metabolism will drop to match the intake. This is exactly what was seen in the Biggest Loser study—shockingly lowered RMRs in the contestants (see Fothergill et al, 2016). Biological systems are far too complex to be reduced to “eat less and move more = weight loss,” and that is easily shown.
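The difference between the naive deficit arithmetic and an adaptive system can be sketched with a toy calculation. The adaptation fraction below is a hypothetical illustration, not a measured physiological constant; only the common 3500-kcal-per-pound rule of thumb is standard.

```python
# Toy comparison: naive CICO projection vs. one where expenditure
# adapts to intake. Numbers are illustrative, not physiological facts.

KCAL_PER_LB = 3500.0  # common rule-of-thumb conversion

def naive_deficit_loss(deficit_per_day, days):
    """Naive CICO: a fixed daily deficit converts linearly into fat loss."""
    return deficit_per_day * days / KCAL_PER_LB

def adaptive_deficit_loss(deficit_per_day, days, adaptation=0.5):
    """Same deficit, but expenditure drops to offset a fraction of it
    (metabolic adaptation), shrinking the effective deficit."""
    effective_deficit = deficit_per_day * (1.0 - adaptation)
    return effective_deficit * days / KCAL_PER_LB

print(naive_deficit_loss(500, 365))     # ~52 lb predicted by the naive math
print(adaptive_deficit_loss(500, 365))  # ~26 lb once expenditure adapts
```

The point is not the particular numbers but the structure: once expenditure is a function of intake, the linear "deficit × days" projection systematically overstates weight loss.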
Fliers and Maratos-Flier (2007: 74) write in Scientific American:
An animal whose food is suddenly restricted tends to reduce its energy expenditure both by being less active and by slowing energy use in cells, thereby limiting weight loss. It also experiences increased hunger so that once the restriction ends, it will eat more than its prior norm until the earlier weight is attained.
Take this example: caloric excess in children is positively correlated with increases in height. But the caloric excess is not driving the height increase; children eat more because they are growing.
The point that most people miss is the third compartment—fat storage. Energy does not simply flow in and out of the body; it is also stored, and insulin dictates fat storage. In the absence of insulin, the body cannot gain weight: insulin shuttles fat into the adipocyte, which is why insulin is fattening. This is where CICO fails—it ignores hormonal fluctuations. The most fattening hormone is insulin, and the foods that elicit the highest insulin response are processed carbohydrates. Therefore, those are the most fattening foods. People who assume CICO state that a calorie is a calorie; that’s wrong.
Imagine a crowded room. The room is getting more crowded, and you ask me why the room is getting more crowded. I say ‘the room is more crowded because more people are entering it than leaving it.’ You say ‘duh, of course that’s true, but why is the room more crowded?’ Saying a room gets crowded because more people are entering than leaving it is redundant; saying that one gets fat because more calories are consumed than burned is redundant, it only says the same thing in two different ways so it is meaningless. Rooms that have more people enter them than leave them will become more crowded since there is no getting around the First Law, right?
Now take that same logic with obesity. Thermodynamics states that if we get fatter then more energy is entering our body than leaving it. Overeating means we’ve consumed more calories than we have expended. It’s tautological.
CICO ‘could work,’ but that is irrelevant, since what the CICOers assume is that a calorie is a calorie—that once ingested, all calories go through the same metabolic pathways. This is false. The First Law says nothing about why we get fat; it is irrelevant to human physiology.
Taubes (2007: 293) writes:
Change in energy stores = Energy intake − Energy expenditure
The first law of thermodynamics dictates that weight gain—the increase in energy stored as fat and lean-tissue mass—will be accompanied by or associated with positive energy balance, but it does not say that it is caused by a positive energy balance—by “a plethora of calories,” as Russell Cecil and Robert Loeb’s 1951 Textbook of Medicine put it. There is no arrow of causality in the equation. It is equally possible, without violating this fundamental truth, for a change in energy stores, the left side of the above equation, to be the driving force in the cause and effect; some regulatory phenomenon could drive us to gain weight, which would in turn cause a positive energy balance—and thus overeating or sedentary behavior. Either way, the calories in will equal the calories out, as they must, but what is cause in one case is effect in the other.
And on pg 294:
The alternative hypothesis reverses the causality: we are driven to get fat by “primary metabolic or enzymatic effects,” as Hilde Bruch phrased it, and this fattening process induces the compensatory responses of overeating and/or physical inactivity. We eat more, move less, and have less energy to expend because we are metabolically or hormonally driven to get fat.
All the first law of thermodynamics tells us is that people cannot become more massive without taking in more energy than they expend, since heavier people contain more energy than lighter people; a person must consume more energy to accommodate increasing mass, and cannot become lighter without expending more energy than they take in. That is all the First Law says: energy is conserved. If something becomes more massive, then more energy must come in than leave. Nothing is said about cause and effect; the Law only tells us what must happen if the thing does happen. That is not causal information.
People only assume that the First Law has any relevance to obesity because of the ‘energy cannot be created nor destroyed’ part. But this shows no understanding of the Law. If you carefully read and understand it, you will see that it gives you absolutely no causal information. You can then reverse the commonly-held mantra—that eating more leads to obesity—to becoming obese leads one to eat more. It’s perfectly logical to reverse it and no Law is broken. People erroneously assume that the Laws of physics dictate weight gain and loss, but in complex metabolic systems, what is ingested is more important than how much is ingested (because we have hormones that let us know when to stop eating—which don’t get released while one eats carbohydrates).
The Second Law of Thermodynamics
The second weight-loss fallacy is ‘a calorie is a calorie’: for weight loss, it supposedly does not matter whether the majority of one’s calories comes from fat, carbs, or protein; the body will register the calories consumed and regulate fat stores as dictated by the First Law. The fallacy of invoking the First Law of thermodynamics ties directly into a fallacy concerning the Second Law of Thermodynamics: the Second Law implies that variation in the efficiency of metabolic pathways is to be expected, and therefore the mantra “a calorie is a calorie” violates the Second Law as a principle (Feinman and Fine, 2004, 2007).
Feinman and Fine (2004) worked an example: a 2000 kcal diet split 55:30:15 between CHO, fat, and protein yielded 1848 kcal once the thermic effect of food was accounted for. Thermodynamics does not support the dictum that, all else being equal (i.e., two diets with the same number of calories but differing macro splits—one high-fat, low-carb, the other high-carb, low-fat), the two diets are metabolically equivalent.
However, Zoe Harcombe recalculated the figure from Feinman and Fine (2004) and found it to be wrong: the correct number was 1825 kcal, not 1848 kcal, which only strengthened Feinman and Fine’s (2004) point (Harcombe, 2004). She also writes:
I then repeated the calculations for a 10:30:60 high protein diet (keeping fat the same and swapping carbs out and protein in), and the calories available to the body dropped to 1,641. This is incredible. This means that two people can both eat 2000 calories a day and the high carbohydrate person is effectively getting nearly 200 calories more than the high protein person. Anyone still wonder why low-carbohydrate diets have a built in advantage?
So it is ridiculous to ignore the thermic effect of food, seeing as it is roughly 20 percent for protein and 5 percent for CHO.
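Harcombe’s style of calculation can be sketched as follows, using the thermic-effect figures quoted above (20 percent for protein, 5 percent for CHO) plus an assumed placeholder value for fat. Because the fat figure is assumed, the totals illustrate the direction of the effect rather than reproducing her exact numbers.

```python
# Net (post-thermic-effect) calories for two isocaloric diets with
# different macro splits. Protein and CHO thermic-effect values are the
# ones quoted above; the fat value is an assumed placeholder.

TEF = {"cho": 0.05, "protein": 0.20, "fat": 0.02}  # fat TEF is assumed

def net_kcal(total_kcal, split):
    """split maps macro -> fraction of total calories; fractions sum to 1."""
    assert abs(sum(split.values()) - 1.0) < 1e-9
    return sum(total_kcal * frac * (1.0 - TEF[macro])
               for macro, frac in split.items())

high_carb = {"cho": 0.55, "fat": 0.30, "protein": 0.15}
high_protein = {"cho": 0.10, "fat": 0.30, "protein": 0.60}

print(round(net_kcal(2000, high_carb)))     # → 1873 with these assumed values
print(round(net_kcal(2000, high_protein)))  # → 1738: a ~135 kcal/day gap
```

Whatever the exact thermic-effect figures used, the high-carb eater ends up with more net calories than the high-protein eater at the same 2000 kcal intake.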
To put this into perspective: two people eating similar diets (but with differing macro splits) need only out-eat one another by 20 calories per day for one to gain more weight than the other. Taubes (2011: 58) writes:
How many calories do we have to consume, but not expend, stashing them away in our fat tissue, to transform ourselves, as many of us do, from lean twenty-five-year-olds to obese fifty-year-olds?
Twenty calories a day.
Twenty calories a day times the 365 days in a year comes to a little more than seven thousand calories stored as fat every year—two pounds of excess fat.
Multiply that by ten and that’s twenty pounds gained in ten years—all from counting kcal wrong (Aamodt, 2016: 111-112). With Harcombe’s (2004) example, the damage in ten years would be much worse. This is all based on the assumption that ‘a calorie is a calorie,’ which is false, as I have shown.
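Taubes’s back-of-the-envelope arithmetic is easy to verify, again using the common 3500-kcal-per-pound approximation:

```python
# A tiny daily surplus, stored as fat, compounds into large weight
# gain over years (Taubes's 20-calorie example).

KCAL_PER_LB = 3500.0  # common rule-of-thumb conversion

def pounds_gained(surplus_per_day, years):
    """Fat gained if a fixed daily surplus is stored, year after year."""
    return surplus_per_day * 365 * years / KCAL_PER_LB

print(pounds_gained(20, 1))   # ~2 lb in a year (7300 kcal stored)
print(pounds_gained(20, 10))  # ~20 lb in a decade
```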
The CICO paradigm is wrong. Consumption and expenditure are not independent variables; they are dependent, so if you decrease one, the other decreases as well. That is the fatal flaw in the paradigm. The First Law always holds, yes, but it tells us absolutely nothing about obesity or human physiology and is therefore irrelevant. The Second Law, meanwhile, states that variation in metabolic-pathway efficiency is to be expected; stating that ‘a calorie is a calorie’ therefore violates the Second Law. This has further implications. Using Taubes’s example of twenty calories per day: two people eating exactly the same number of calories can show different weight gains if the skew of carbs to fat is higher in one diet than the other—contrary to what the CICO mantra predicts. Couple that with what insulin does in the body and the problem is exacerbated.
Stating that thermodynamics has anything to do with weight loss is clearly fallacious.
People appeal to moderate-to-high heritability estimates as evidence that a trait is controlled by genes, assuming that a high heritability must show something about causation. It does not. Heritability estimates assume a false dichotomy of nature vs nurture: that we can neatly partition genetic from environmental effects, and that the higher a trait’s heritability, the more genes control that trait. Both assumptions are false. One of the main ways heritability is estimated is the classic twin method (CTM). This method, though, has a ton of assumptions poured into it—most importantly, the assumption that identical (MZ) and fraternal (DZ) twins experience roughly equal environments: the equal environments assumption (EEA). Heritability studies are useless for humans; twin studies bias estimates upwards with a whole host of assumptions.
I will show that i) heritability estimates are highly flawed (due to erroneous assumptions); ii) nature vs nurture cannot be separated (like behavior geneticists claim) and so their main tool (the heritability estimate) should be discontinued; iii) genetic reductionism is not a tenable model due to what we now know about how genes work. All three of these reasons are enough to discontinue heritability estimates. If the nature vs nurture debate rests on a fallacy, and this fallacy is used as a vehicle for heritability estimates, then they should be discontinued for humans and only be used for breeding animals where they can control the environment fully (Schonemann, 1997; Moore and Shenk, 2016).
Heritability, twin studies, and equal environments
Back in 2014-2015, there was a debate in the criminological literature that had implications for heritability studies as a whole. Burt and Simons (2014) stated that it was time to get rid of heritability studies. Barnes et al (2015) responded that this was “a de facto form of censorship” (pg 2). Joseph et al (2015) respond to these accusations, writing, “It was good science and not “censorship” when earlier scientists called for ending studies based on craniometry, phrenology, and physiognomy, and any contemporary criminologist calling for the use of astrological charts to predict whether certain people will commit violent crimes would be justifiably ridiculed.” The main thing here, in my opinion, is that heritability estimates are based on an oversimplified (and wrong) model of the gene. Partitioning variance assumes that you can partition how much a trait is influenced by “nature” or “nurture” which is a false dichotomy (Moore, 2002; Schneider, 2007; Moore and Shenk, 2016).
More importantly, no “genes have been found” (I know, that’s everyone’s favorite thing to hear) for traits that supposedly have high heritabilities. On page 179 of his book (nook version) Misbehaving Science: Controversy and the Development of Behavior Genetics, Panofsky (2014) writes:
Molecular genetics has been a major disappointment, if not an outright failure, in behavior genetics. Scientists have made many bold claims about genes for behavioral traits or mental disorders only to later retract them or to have them not be replicated by other scientists. Further, the findings that have been confirmed, or not yet falsified, have been few, far between, and small in magnitude.
There seems to be a huge disconnect between heritability estimates gleaned from twin studies and what the actual molecular genetic evidence says. This is because the EEA—that identical (MZ) twins experience roughly similar environments compared with fraternal (DZ) twins—is false: MZ twins end up experiencing more similar environments than DZ twins do. Most researchers attempt to save face by stating that MZ twins “seek out” and “elicit” their own environments, which then makes them more similar compared to DZ twins. However, this is circular logic: the conclusion (that MZ twins experience more similar environments) is in the premise, and the argument is therefore invalid. (It should also be noted that identical twins’ genes are not identical.)
Heritability studies assume an outdated model of the gene. The flaw regarding heritability estimates is simple: they imply a false dichotomy of nature vs nurture, assume that genes and environment are independent, and assume that each one’s contribution to complex behaviors can be precisely quantified (Charney, 2013). This is one of the most critical parts of the heritability debate. Because the prenatal environment of MZ twins “can be significantly more stressful than that of DZ twins, and hence a cause of greater stress-related phenotypic concordance, the equal environment assumption will not hold in relation to behavioral phenotypes potentially associated with prenatal stress” (Charney, 2012: 20). This is also cause for concern regarding studies of twins reared apart: rearing twins apart eliminates shared rearing-environment confounds, but it cannot eliminate perhaps the most important confound of all—the prenatal environment (Moore and Shenk, 2016).
One of the most-cited studies of twins reared apart is Bouchard (1990), but there is a whole slew of problems with it.
1) You have the huge confound of similar environments before birth.
2) Full details of the MISTRA have never been published, so we don’t know how ‘separated’ the twins really were. Bouchard et al do report that the twins were separated at between 0 and 48.7 months of age (table 1), so some pairs spent about four years together. Some of the twins even had reunions and spent a lot of time together.
3) They’re not representative and twins who do sign up for this research are self-selecting. Ken Richardson says in his book (2017, pg 55): “Twins generally tend to be self-selecting in any twin study. They may have responded to advertisements placed by investigators or have been prompted to do so by friends or family, on the grounds that they are alike. Remember, at least some of them knew each other prior to the study. Jay Joseph has suggested that the twins who elected to participate in all twin studies are likely to be more similar to one another than twins who chose not to participate. This makes it difficult to claim that the results would apply to the general population.”
4) And the results aren’t fully reported. Richardson also states that (2017, pg 55) “… of two IQ tests administered in the MISTRA, results have been published for one but not the other. No explanation was given for that omission. Could it be they produced different results?” He even states that attempts to get the data, by researchers like Jay Joseph, have been denied. Why would you refuse to publish, or give to another researcher, your data when asked?
We don’t know the relevant environments; the children’s average age at testing is closer to that of the biological mother than the adoptive mother; the biological mother and child will both have reduced self-esteem and be more vulnerable in difficult situations, and in this sense they share environments; and conscious or unconscious bias makes adopted children different from other family members. Adoption agencies also attempt to place children into homes similar to the biological mother’s.
Charney (2012: 25) brings up an important point: “For phenotypes of any degree of complexity, DNA does not contain a determinate genetic program (analogous to the digital code of a computer) from which we can predict phenotype. If DNA were the sole carrier of information relevant to phenotype formation, and contained a genetic program sufficiently determinate that solely by reading it we could predict phenotype, then humans (and all other organisms) would be largely lacking in phenotypic plasticity.” Moore and Shenk (2016) also state that “we inherit developmental resources, not traits.”
The argument can be put simply: 1) for twin studies to be valid, MZ and DZ twins would have to experience roughly equal environments; 2) MZ twins experience much more similar environments than DZ twins; 3) therefore the EEA is false and no genetic interpretations can be drawn from the data.
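To see how a false EEA inflates estimates, consider Falconer’s classic twin-study estimator, h² = 2(rMZ − rDZ). The correlations below are hypothetical illustrations, not data from any actual study.

```python
# Falconer's estimator attributes the whole MZ-DZ correlation gap to
# genes. If part of the MZ correlation instead reflects MZ twins' more
# similar environments (a violated EEA), h2 is inflated.

def falconer_h2(r_mz, r_dz):
    """Falconer's formula: h2 = 2 * (r_MZ - r_DZ)."""
    return 2.0 * (r_mz - r_dz)

# Naive reading: the whole gap is genetic (correlations are hypothetical).
print(round(falconer_h2(0.80, 0.50), 2))  # → 0.6

# If, say, 0.10 of the MZ correlation came from more-similar environments
# rather than genes, the same formula overstates heritability:
print(round(falconer_h2(0.80 - 0.10, 0.50), 2))  # → 0.4
```

The formula itself cannot distinguish the two scenarios; it simply assumes the environmental contribution to the gap is zero, which is exactly what the EEA critique denies.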
Heritability estimates cannot disentangle genes and environment, and therefore they should be discontinued or reinterpreted (Joseph et al, 2015). Burt and Simons (2014: 110) also conclude: “Rejecting heritability studies and the false nature–nurture dichotomy and gene-centric model on which they are grounded is a necessary step forward that will pave the way for a reconceptualization of the link between the biological and the social in shaping criminal propensities in ways that are consistent with postgenomic knowledge”. I disagree with Barnes et al (2015) when they call ending heritability estimates “a de facto form of censorship”: if nature vs nurture is a false dichotomy and the gene-centric model that heritability estimates rely on is wrong, then we need to either discontinue or reinterpret the estimates, rather than keep saying ‘this is how much nature contributes to X and this is how much nurture contributes to Y.’ (See also Richardson and Norgate, 2005 for more arguments regarding the EEA.)
Sapolsky (2017: 219) writes:
Oh, that’s right, humans. Of all species, heritability scores in humans plummet the most when shifting from a controlled experimental setting to considering the species’ full range of habitats. Just consider how much the heritability score for wearing earrings, with its gender split, has declined since 1958.
High heritability estimates have been used as evidence for causation—that genes control a large part of the trait in question. This reasoning, however, is highly flawed. People confuse “heritable” with “inheritable” (Moore and Shenk, 2016). Heritability does not inform us what causes a trait, how much environment contributes to a trait, nor does it tell us the relative influence of genes on a trait. Moore and Shenk (2016) agree with Joseph et al (2015) and Burt and Simons (2014) that heritability studies need to end, but Moore and Shenk’s reasoning slightly differs: they say we should end estimates because people confuse “heritable” with “inheritable”. Likewise, Guo (2000: 299) concurs, writing “it can be argued that the term ‘heritability’, which carries a strong conviction or connotation of something ‘heritable’ in everyday sense, is no longer suitable for use in human genetics and its use should be discontinued.”
Some may say that if a trait turns out to be mildly heritable then we can say that genes have some effect, but we know that genes affect all traits so it seems kind of redundant to have a useless measure that assumes a false dichotomy and relies on an outdated, additive model of the gene.
Rose (2006), too, agrees that heritability estimates imply a false dichotomy of nature vs nurture onto biological systems:
Biological systems are complex, non-linear, and non-additive. Heritability estimates are attempts to impose a simplistic and reified dichotomy (nature/nurture) on non-dichotomous processes.
Likewise, Lewontin (2006) argues we should be analyzing and studying causes, not variance.
There are numerous hereditarian scientific fallacies which include: 1) trait heritability does not predict what would occur when environments/genes change; 2) they’re inaccurate since they don’t account for gene-environment covariation or interaction while also ignoring nonadditive effects on behavior and cognitive ability; 3) molecular genetics does not show evidence that we can partition environment from genetic factors; 4) it wouldn’t tell us which traits are ‘genetic’ or not; and 5) proposed evolutionary models of human divergence are not supported by these studies (since heritability in the present doesn’t speak to what traits were like thousands of years ago) (Bailey, 1997).
Bailey (1997) brings up important arguments against the use of heritability, and even discusses fallacious writing from Rushton on the matter:
Rushton (1995), for example, thinks that if observed differences among the
racial groups that he defines are higher for traits that have high heritability within the groups, the hypothesis of genetically caused differences among the groups is supported.
Bailey (1997) then goes on to discuss three lakes: Otter Lake, Welcome Lake, and Bark Lake. Otter Lake has very high primary production, Bark Lake has very little, and Welcome Lake is somewhere in between (‘Otter’, ‘Bark’, and ‘Welcome’ are analogies for ‘Orientals’, ‘Blacks’, and ‘Whites’ in Rushton’s terms). But there is variation within the lakes: there are high-production pockets of water in Bark Lake and low-production pockets in Otter Lake. All three lakes are visited and measurements are taken, and Bailey (1997) states that his conclusion would be that the lakes differ in how much light each receives. Bailey (1997: 131) writes:
If I substitute three groups of people for my lakes, IQ for primary production, and genes for light levels, the fallacy of the slippery scale, as applied to human behaviour genetics, becomes clear. Even if we are sure that there is a difference among groups of people in IQ, and we are sure that IQ has high heritability within each of the groups (i.e. variation in IQ is largely caused by genetic variation), we can make no inference about the cause of differences in IQ among the groups. The differences might be caused by genetic differences or they might not, but the heritability studies within the groups can’t help us make that judgment.
(Genes don’t cause IQ scores—or behavior—but that’s for another day.)
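Bailey’s lakes analogy can be sketched as a small simulation: within each group, trait variance is dominated by a “genetic” term, yet the entire between-group gap is produced by an environmental offset (the “light”). All parameters here are hypothetical.

```python
# Two groups with identical genetic variance; the only difference
# between them is an environmental offset. Within-group variance is
# mostly "genetic", but the group gap is 100% environmental here.
import random

random.seed(0)

def simulate_group(n, env_offset, genetic_sd=10.0, noise_sd=3.0):
    """Trait = shared environmental offset + individual genetic value + noise."""
    return [env_offset + random.gauss(0, genetic_sd) + random.gauss(0, noise_sd)
            for _ in range(n)]

group_a = simulate_group(10_000, env_offset=0.0)
group_b = simulate_group(10_000, env_offset=15.0)

mean_a = sum(group_a) / len(group_a)
mean_b = sum(group_b) / len(group_b)

# Within each group, the "genetic" share of variance is
# 10^2 / (10^2 + 3^2) ≈ 0.92 -- high "heritability" in both groups --
# yet the ~15-point gap between them was built in environmentally.
print(round(mean_b - mean_a, 1))
```

Within-group heritability, however high, tells us nothing about the cause of the between-group difference; in this sketch it is entirely environmental by construction.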
Heritability estimates for, say, IQ are higher than those for almost any trait in the animal kingdom. The heritability of bodyweight in farm animals is about 30 percent, as it is for egg and milk production; body fat in pigs and wool in sheep have heritabilities of about 50 percent. These pale in comparison to the estimates for IQ, which have run as high as 80 percent (Schonemann, 1997 puts the figure at 60 percent, though estimates of 80-90 percent circulate today); this heritability estimate for IQ “surpasses almost anything found in the animal kingdom” (Schonemann, 1997: 104).
This high heritability estimate for IQ, of course, comes to us from the highly flawed twin studies discussed above. The reason farmers and botanists use heritability estimates is that they can tightly control the environment, and therefore get accurate—or close enough—estimates to guide their breeding efforts. For humans, by contrast, environments cannot be controlled, and it is of course unethical to rear twins, MZ and DZ, in a controlled environment. Proponents of the twin method may say, “It doesn’t matter if it’s flawed, it still shows there is a genetic component to trait X!” But as Moore and Shenk (2016) discuss, that is irrelevant, because genetic factors influence all of our characteristics.
Heritability and causation
In this final section, I will briefly discuss how people fallaciously assume that high heritability estimates imply that a trait is strongly influenced by genetic factors.
In his essay in the book Postgenomics: Perspectives on Biology After the Genome, sociologist Aaron Panofsky (2016: 167; nook version) writes:
Heritability estimates do not help identify particular genes or ascertain their functions in development or physiology, and thus, by this way of thinking, they yield no causal information.
This is important to note: to those who truly believe that heritability estimates tell us anything about causation, how could they, logically, give us causal information if genes that lead to trait variation are not identified (Richardson, 2012)?
Panofsky (2014: 102-103) writes:
Experimental evidence from plants and animals suggest that shapes of the curves cannot be inferred in advance and rarely follow the smooth, nonintersecting pattern like in figure 3.2 [I will provide the figure after this quote]. Thus true causal interpretations of heritability are hopeless and must be abandoned. Behavior geneticists did not claim direct experimental evidence, but they thought these various indirect lines of evidence provided a reasonable set of assumptions that would enable them to interpret heritability scores causally—provided they offer appropriate, reasonable qualifications.
Graph from Panofsky (2014: 103)
Heritability estimates imply nothing about causation. Heritability is about associations with variance, not the identification of causes (Richardson, 2017: 69). A heritability of 0 does not mean that genes play no role in the development of form, function, and phenotypic variation; it just means that, for whatever reason, there is little correlation between genetic variation and phenotypic variation in that population.
Schneider (2007) writes (emphasis mine):
Heritability estimates apply only to groups, and are inherently inapplicable to individuals in any sense. And they do not imply causation. As Moore notes, all of these important limitations have been frequently ignored or minimized.
Heritability estimates imply nothing about causation. Behavior geneticists and others assume that heritability estimates will lead to ‘finding the genes’ that ’cause’ or are ‘associated with’ behavior. Their models are also, of course, extremely reductionist. It is then important to note that genes do not determine behavior. To quote Lerner and Overton (2017: 114):
Data presented in a 2016 special section of the journal Child Development indicate that “some behaviors may be affected by only slight changes in DNA methylation, while others may require a larger percent change in methylation; of course, the effects are also likely bidirectional, with behavior impacting changes in methylation” [Lester et al., 2016, p. 31]. This point is key. It underscores the absurdity of genetic reductionist models: Genes do not determine behavior.
Methylation impacts behavior; behavior impacts methylation. It is the relations between methylation and behavior, not the genes acting as the “command center”, the “executive” of human behavior and development, that constitute the basic role of biology across the developmental course. This is the fatal flaw of reductionist models. Lastly, Lerner and Overton (2017: 145) write (emphasis mine):
That is, with the recent advances in understanding the role of epigenetics and recent research findings supporting this role, it should no longer be possible for any scientist to undertake the procedure of splitting of nature and nurture and, through reductionist procedures, come to conclusions that the one or the other plays a more important role in behavior and development.
[Richardson (2017: 129) also writes: “Note that this environmental source of [epigenetic] variation will appear in the behavioral geneticists twin-study as genetic variation: quite probably another way in which heritability estimates are distorted.”]
Reductionism in biology is fatally flawed. Reductionism has, of course, greatly increased our understanding of biology, but it is time to move past the false dichotomy of nature vs nurture, and with it past heritability estimates, since they prop up that dichotomy. There is no way to separate the two—they are intertwined—yet behavior geneticists would have you believe that studying twins reared apart can tell you how ‘genetic’ or ‘environmental’ variation in a trait is in a population. Since heritability estimates are gleaned from these highly flawed studies, with a whole host of assumptions poured in, the estimates are highly inflated, making genes appear to influence traits more than they do.
Twin studies, and with them heritability estimates, are useless for figuring out and describing trait variation in humans. The developmental system is more complex than the genetic reductionists (behavior geneticists) would like one to believe, and the reductionist model has been heavily attacked in recent years (Van Regenmortel, 2004; Noble, 2008, 2012, 2015, 2016; Joyner, 2011; Joyner and Pedersen, 2011).
Since the genetic reductionist model is wrong, and heritability estimates along with it (because of the nature/nurture fallacy), both should be discontinued, together with one of their main vehicles: twin studies. The fatal flaws of the behavior geneticists' paradigm should be enough to abandon these techniques in the study of human development and behavior. Heritability estimates give no causal information and rest on an outdated model of the gene; twin studies assume too much to be a viable model for discovering how traits manifest (most importantly, they keep the nature/nurture fallacy alive and should be discontinued on that ground alone, in my opinion); and genetic reductionist models have been shown to be fatally flawed in recent years. We now have a better understanding of what a gene is (Portin and Wilkins, 2017), and because of this, we should discontinue whatever rests on the fallacy of nature vs nurture, since it is a false dichotomy. That, alone, should be enough to discontinue twin studies and heritability estimates.
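For context, the heritability estimates criticized above are typically derived from twin correlations via Falconer's formula, h² = 2(r_MZ − r_DZ). A minimal sketch of the arithmetic (the correlation values below are hypothetical, chosen only for illustration, not taken from any study):

```python
# Falconer's formula estimates heritability from twin correlations:
#   h^2 = 2 * (r_MZ - r_DZ)
# It assumes MZ twins share ~100% of segregating genes, DZ twins ~50%,
# and that both twin types share environments equally -- the very
# assumptions the text argues are violated.

def falconer_h2(r_mz, r_dz):
    """Heritability estimate from MZ and DZ twin correlations."""
    return 2.0 * (r_mz - r_dz)

def shared_environment_c2(r_mz, r_dz):
    """Shared-environment estimate under the same ACE-style assumptions."""
    return 2.0 * r_dz - r_mz

# Hypothetical twin correlations for some trait:
r_mz, r_dz = 0.80, 0.50
print(round(falconer_h2(r_mz, r_dz), 2))           # 0.6
print(round(shared_environment_c2(r_mz, r_dz), 2))  # 0.2
```

Note how mechanical the estimate is: any environmentally driven excess similarity of MZ twins over DZ twins (the equal-environments problem raised above) is silently booked as "genetic" variance.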
Race Differences in Penis Size Revisited: Is Rushton’s r/K Theory of Race Differences in Penis Length Confirmed?
In 1985 JP Rushton, psychology professor at the University of Western Ontario, published a paper arguing that r/K selection theory (which he termed Differential K theory) explained and predicted outcomes for what he termed the three main races of humanity: Mongoloids, Negroids, and Caucasoids (Rushton, 1985; 1997). Since Rushton's three races differed on a whole suite of traits, he reasoned that the more K-selected races (Caucasoids and Mongoloids) had slower reproduction times, lower time preference, higher IQ, etc., in comparison to the more r-selected Negroids, who had faster reproduction times, higher time preference, lower IQ, etc. (see Rushton, 1997 for a review; also see Van Lange, Rinderu, and Bushman, 2017 for a replication of Rushton's data, though not his theory). Were Rushton's assertions on race and penis size verified, and do they lend credence to his Differential-K claims regarding human races?
Rushton’s so-called r/K continuum has a whole suite of traits on it. Ranging from brain size to speed of maturation to reaction time and IQ, these data points supposedly lend credence to Rushton’s Differential-K theory of human differences. Penis size is, of course, important for Rushton’s theory due to what he’s said about it in interviews.
Rushton's main reasoning for penis size differences between races was "You can't have both": if you have a larger brain then you must have a smaller penis, and if you have a smaller penis you must have a larger brain. He believed there was a "tradeoff" between brain size and penis size. In the book Darwin's Athletes: How Sport Has Damaged Black America and Preserved the Myth of Race, Hoberman (1997: 312) quotes Rushton: "Even if you take something like athletic ability or sexuality—not to reinforce stereotypes or some such thing—but, you know, it's a trade-off: more brain or more penis. You can't have both." This, though, is false. There is no evidence that this so-called 'trade-off' exists. In my readings of Rushton's work over the years, that's always something I've wondered: was Rushton implying that large penises take more energy to maintain, and that the trade-off exists because of this supposed relationship?
Andrew Joyce of the Occidental Observer published an article the other day in defense of Richard Lynn. Near the end of his article he writes:
Another tactic is to belittle an entire area of research by picking out a particularly counter-intuitive example that the public can be depended on to regard as ridiculous. A good example is J. Philippe Rushton's claim, based on data he compiled for his classic Race, Evolution and Behavior, that average penis size varied between races in accord with the predictions of r/K theory. This claim was held up to ridicule by the likes of Richard Lewontin and other crusaders against race realism, and it is regularly presented in articles hostile to the race realist perspective. Richard Lynn's response, as always, was to gather more data—from 113 populations. And unsurprisingly for those who keep up with this area of research, he found that indeed the data confirmed Rushton's original claim.
The claim was ridiculed because it was ridiculous. This paper by Lynn (2013) titled Rushton’s r-K life history theory of race differences in penis length and circumference examined in 113 populations is the paper that supposedly verifies Rushton’s theory regarding race differences in penis size, along with one of its correlates in Rushton’s theory (testosterone). Lynn (2013) proclaims that East Asians are the most K-evolved, then come Europeans, while Africans are the least K-evolved. This, then, is the cause of the supposed racial differences in penis size.
Lynn (2013) begins by briefly discussing Rushton’s ‘findings’ on racial differences in penis size while also giving an overview of Rushton’s debunked r/K selection theory. He then discusses some of Rushton’s studies (which I will describe briefly below) along with stories from antiquity of the supposed larger penis size of African males.
Our old friend testosterone also makes an appearance in this paper. Lynn (2013: 262) writes:
Testosterone is a determinant of aggression (Book, Starzyk, & Quinsey, 2001; Brooks & Reddon, 1996; Dabbs, 2000). Hence, a reduction of aggression and sexual competitiveness between men in the colder climates would have been achieved by a reduction of testosterone, entailing the race differences in testosterone (Negroids > Caucasoids > Mongoloids) that are given in Lynn (1990). The reduction of testosterone had the effect of reducing penis length, for which evidence is given by Widodsky and Greene (1940).
Phew, there's a lot to unpack here. (I discuss Lynn 1990 in this article.) Testosterone does not determine aggression; see my most recent article on testosterone (aggression increases testosterone; testosterone does not increase aggression). Book, Starzyk, and Quinsey (2001) show a .14 correlation between testosterone and aggression, whereas Archer, Graham-Kevan, and Davies (2005) show the correlation is .08. Either way, this is just a correlation. Sapolsky (1997: 113) writes:
Okay, suppose you note a correlation between levels of aggression and levels of testosterone among these normal males. This could be because (a) testosterone elevates aggression; (b) aggression elevates testosterone secretion; (c) neither causes the other. There’s a huge bias to assume option a while b is the answer. Study after study has shown that when you examine testosterone when males are first placed together in the social group, testosterone levels predict nothing about who is going to be aggressive. The subsequent behavioral differences drive the hormonal changes, not the other way around.
Brooks and Reddon (1996) also only show relationships between testosterone and aggressive acts; they show no causation. The same relationship was noted by Dabbs (2000; another Lynn 2013 citation) in prisoners. More violent prisoners were seen to have higher testosterone, but there is a caveat here too: being aggressive stimulates testosterone production, so of course they had higher levels of testosterone; this is not evidence for testosterone causing aggression.
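To put the cited correlations in perspective, squaring a correlation gives the proportion of variance the two variables share even on the most generous causal reading. A quick sketch using the two figures mentioned above:

```python
# Squaring a correlation coefficient gives the proportion of variance
# shared between the two variables (r^2). The correlations below are
# the figures cited in the text (.14 and .08).

def variance_explained(r):
    """Proportion of variance accounted for by a correlation r."""
    return r ** 2

for label, r in [("Book, Starzyk, and Quinsey (2001)", 0.14),
                 ("Archer, Graham-Kevan, and Davies (2005)", 0.08)]:
    pct = variance_explained(r) * 100
    print(f"{label}: r = {r:.2f}, variance explained = {pct:.1f}%")
# r = .14 -> about 2% of the variance; r = .08 -> under 1%
```

So even before the causal-direction problem Sapolsky raises, these correlations leave roughly 98 percent or more of the variance in aggression unaccounted for by testosterone.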
Another problem with that paragraph quoted from Lynn (2013) is that it's a just-so story, an ad hoc explanation. You notice something in the data you have today and then imagine a nice-sounding story to explain that data in an evolutionary context. Nice-sounding stories are cool and I'm sure everyone loves a nicely told story, but when it comes to evolutionary theory I'd like theories that can be verified independently of the data they're trying to explain.
My last problem with that paragraph from Lynn (2013) is his final citation: he cites it as evidence that the reduction of testosterone affects penis length, but his citation (Widodsky and Greene, 1940) is a study on rats. While such studies can give us a wealth of information regarding our physiological systems (at least showing us which avenues to pursue; see my previous article on myostatin), they don't really mean anything for humans, especially this study, on the application of testosterone to the penis of a rat. The fatal flaw in these assertions is this: would a, say, 5 percent difference in testosterone lead to a larger penis, as if there were a dose-response relationship between testosterone and penis length? It doesn't make any sense.
Lynn (2013), though, says that Rushton's theory doesn't propose a direct causal relationship between "intelligence" and penis length, only that they co-evolved: when Homo sapiens migrated north out of Africa they needed to cooperate more, so selection for lower levels of testosterone occurred, which then shrunk the penises of Rushton's Caucasoid and Mongoloid races.
Lynn (2013) then discusses two “new datasets”, one of which is apparently in Donald Templer’s book Is Size Important (which is on my to-read list, so many books, so little time). Table 1 below is from Lynn reproducing Templer’s ‘work’ in his book.
The second “dataset” is extremely dubious. Lynn (2013) attempts to dress it up, writing that “The information in this website has been collated from data obtained by research centres and reports worldwide.” Ethnicmuse has a good article on the pitfalls of Lynn’s (2013) article. (Also read Scott McGreal’s rebuttal.)
Rushton attempted to link race and penis size for 30 years. In a paper with Bogaert (Rushton and Bogaert, 1987), they attempt to show that blacks had larger penises than whites, who had longer penises than Asians, which then supposedly verified one dimension of Rushton's theory. Rushton (1988) also discusses race differences in penis size, citing a previous paper by Rushton and Bogaert that uses data from Alfred Kinsey, but this data is nonrepresentative and nonrandom (see Zuckerman and Brody, 1988 and Weizmann et al, 1990: 8).
Still others may attempt to use supposed differences in IGF-1 (insulin-like growth factor 1) as evidence that there is, at least, physiological evidence for the claim that black men have larger penises than white men, though I discussed that back in December of 2016 and found it strongly lacking.
Rushton (1997: 182) shows a table of racial differences in penis size which was supposedly collected by the WHO (World Health Organization). Though a closer look shows this is not true. Ethnicmuse writes:
ANALYSIS: The WHO did not study penis sizes. It relied on three separate studies, two of which were not peer-reviewed and the data was included as “Appendix III” (which should have alerted Rushton that this was not an original study). The first study references Africans in the US (not Africa!) and Europeans in the US (not Europe!), the second Europeans in Australia (not Europe!) and the third, Thais.
So it seems to be bullshit all the way down.
Ajmani et al (1985) showed that 385 healthy Nigerians had an average penile length of 3.21 inches (flaccid). Orakwe and Ebuh (2007) show that while Nigerians had longer penises than other ethnies tested, the only statistical difference was between them and Koreans. Though Veale et al (2014: 983) write that “There are no indications of differences in racial variability in our present study, e.g. the study from Nigeria was not a positive outlier.”
Lynn and Dutton have attempted to use androgen differentials between the races as evidence for racial differences in penis size (another attempt at a physiological argument). Edward Dutton attempted to revive the debate during a 2015 presentation where he, again, claimed that Negroids have higher levels of testosterone than Caucasoids, who have higher levels of androgens than Mongoloids. These claims, though, have been rebutted by Scott McGreal, who showed that population differences in androgen levels are meaningless and fail to validate Rushton and Lynn's claims on racial differences in penis size.
Finally, it was reported the other day that condoms from China were too small in Zimbabwe, per Zimbabwe’s health minister. This led Kevin MacDonald to proclaim that this was “More corroboration of race differences in penis size which was part of the data Philippe Rushton used in his theory of r/K selection (along with brain size, maturation rates, IQ, etc.)” This isn’t “more corroboration” for Rushton’s long-dead theory; nor is this evidence that blacks have longer penises. I don’t understand why people make broad and sweeping generalizations. It’s one country in Africa that complained about smaller condoms from a country in East Asia, therefore this is more corroboration for Rushton’s r/K selection theory? The logic doesn’t follow.
Asians have small condoms. Those condoms go to Africa. They complain condoms from China are too small. Therefore Rushton’s r/K selection theory is corroborated. Flawed logic.
In sum, Lynn (2013) didn't verify Rushton's theory regarding racial differences in penis size, and I find it even funnier that Lynn ends his article talking about "falsification," stating that this aspect of Rushton's theory has survived two attempts at falsification and can therefore be regarded as a "progressive research program." Obviously, with the highly flawed "data" that was used, one cannot rationally make that statement. Supposed hormonal differences between the races do not cause penis size differences; even if blacks had levels of testosterone significantly higher than whites (the 19 percent claimed by Lynn and Rushton on the basis of one highly flawed study, Ross et al, 1986), they still would not have longer penises.
The study of physical differences between populations is important, but sometimes stereotypes do not tell you anything, and this is one of those cases. The claim that blacks have the longest penises rests on shaky ground, and from the evidence we do have, we cannot logically make the inference (especially not from Lynn's (2013) flimsy data). Richard Lynn did not "confirm" anything with this paper; the only thing he "confirmed" is his own preconceived notions. He did not prove what he set out to.
I've been reading bodybuilding magazines for almost ten years. Good science articles on training and diet, but there was always one ad I kept seeing: the leg and calf of a neonate and then that same neonate at 7 months. The kid was brolic. Defined calves with absolutely no training. What was the cause? He had a deletion in the myostatin gene, also called growth/differentiation factor 8 (GDF-8). The ads in the magazines would try to get you to buy some shitty supplement that did not work, but the kid is real, and he had a deletion in the gene that codes for the protein myostatin. Myostatin normally restrains muscle growth, ensuring that muscles don't grow too large ('myo' means muscle, while 'statin' means halt). This could be a huge breakthrough for treating muscular dystrophy (Smith and Lin, 2013). Myostatin seems to have two roles: 1) regulating the number of muscle fibers formed in development and 2) regulating the growth of muscle fibers postnatally.
When it is deleted in cattle, it causes "double-muscled" cattle: cattle that have about 20 percent more muscle mass than cattle without the deletion (Grobet et al, 1997; Amthor et al, 2007). The cause is skeletal-muscle hyperplasia, an increase in the number of muscle fibers rather than only an increase in fiber diameter. This is what produces these crazy-looking animals. Myostatin is coded for by the MSTN gene, and mutations in that gene are what cause double-muscled cattle. It should also be noted that while mice lacking myostatin are more muscular than average, they have impaired force generation (Amthor et al, 2007).
From Grobet et al, 1997; a double-muscle Belgian Blue homozygous for a deletion in the myostatin gene
The same thing is seen in mice: mice with a myostatin deletion are stronger and more muscular than mice without it; in adult mice, myostatin is expressed in all muscle tissue but most strongly in fast-twitch muscle fibers (Whittemore et al, 2002). Se-Jin Lee discovered the myostatin gene, and for his work he was elected to the National Academy of Sciences in 2012 (Glass and Spiegelman, 2012). Mice that lack myostatin have, on average, double the muscle of mice that have it. Lee (2007) then went further, studying mice that lack myostatin and also overproduce follistatin (which is capable of blocking myostatin activity in muscle cells). Lee (2007) writes:
Moreover, the rank order of magnitude of these increases correlated with the rank order of expression levels of the transgene; in the highest-expressing line, Z116A [Z116a is one of Lee’s four transgenic mouse lines], muscle weights were increased by 57–81% in females and 87–116% in males compared to wild type mice. Hence, FLRG is capable of increasing muscle growth in a dose-dependent manner when expressed as a transgene in skeletal muscle.
So Lee (2007) discovered that the effect of FLRG is dose-dependent. He then attempted to determine whether the FLRG transgene was truly increasing muscle growth by blocking myostatin activity, so he examined the effect of combining the FLRG transgene with a knocked-out myostatin gene. He was not able to examine this in Z116A mice positive for the FLRG transgene and homozygous for the myostatin deletion, but he did discover that Z116A females heterozygous for the myostatin deletion had further increases in muscle weights compared with mice carrying 'normal' myostatin.
Most importantly, in two of the muscles that were examined (quadriceps and gastrocnemius) the observed increases were also greater than those seen in Mstn−/− mice lacking the transgene. Based on this finding, it appears that myostatin cannot be the sole target for FLRG in the transgenic mice and, therefore, that additional ligands must be capable of suppressing muscle growth in vivo.
Then Lee examined the effects of follistatin in MSTN-null mice. He found that the presence of the F66 transgene in MSTN-null mice caused another doubling in muscle: Lee had bred mice with quadruple muscle. Like FLRG, follistatin exerts its effects on other ligands along with myostatin, so blocking these additional ligands produces effects beyond those of the myostatin loss-of-function alone.
So there are two important takeaways from this landmark study: 1) the loss-of-function mutation in the Mstn gene exerts a maternal effect; muscle mass in the fetus is influenced by the number of functional maternal Mstn alleles (offspring had higher muscle weights if the mother had fewer functioning Mstn alleles, even when the offspring had the same genotype); and 2) Lee showed that other ligands work with myostatin to control muscle growth. Both FLRG and follistatin can promote muscle growth when expressed as transgenes in skeletal muscle. So when he combined the follistatin transgene with the myostatin null mutation, he had bred mice with quadruple muscle.
These mice are huge, and it's due simply to a loss-of-function mutation combined with overexpression of the myostatin-binding protein follistatin. Myostatin regulates muscle growth; so if myostatin regulates muscle growth, then a deletion in the gene that codes for myostatin should cause increases in size and strength in animals carrying the null mutation.
In his 2014 book, David Epstein writes about how Lee attempted to find subjects for human testing, so he put an ad in muscle magazines such as Muscle and Fitness and Muscular Development. Over 150 people answered his ad, but he found no myostatin mutants.
This changed in 2003, when he got a phone call about a baby in Germany who was born with bulging muscles. The boy (called "Superbaby") had mutations in both of his myostatin genes, so he had no myostatin in his blood. His mother had one normal myostatin gene and one mutant, so she had more myostatin than her son but less than the general population. She is the only known adult with a myostatin deletion, and she just so happens to be a professional sprinter.
Before I discuss Superbaby, I need to discuss myostatin and its role in development. Myostatin plays the same role in birds, cattle, mice, humans, etc. Muscle is energetically costly, and if one is too muscular one may not be able to find enough food to sustain that higher-than-average muscle mass, so myostatin is kind of like the body's fail-safe to prevent muscles from becoming too big. Larger muscles require more calories (and of course protein for muscle-building), so it wouldn't have made sense for our ancestors, who ate intermittently, to have huge bulging muscles. Myostatin helps us stay smaller than we would be if we had the null mutation.
One of the incredible things about Superbaby is that he had no heart problems, although doctors worried that he would (e.g., that his heart would grow out of control); neither he nor his mother has reported any problems. Epstein (2014: 105) writes:
But the facts that the one boy with two of the rare myostatin gene variants has exceptional strength, and that his mother has exceptional speed, are no coincidence. Superbaby and his mother fall precisely in line with whippets.
Epstein describes how two whippets, each with one copy of the myostatin mutation, would pass the mutation to their four puppies (kind of like a Punnett square):
If two sprinter whippets—dogs that each have one copy of the myostatin mutation—have four puppies, this is the likely scenario: one puppy will have zero copies of the mutation and be normal; two puppies will have one copy of the mutation, like Superbaby’s mother, and be sprinters; the fourth puppy will have two copies of the mutation, like Superbaby, which make for a double-muscled “bully” whippet.
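The cross Epstein describes can be checked by enumerating the Punnett square directly. A minimal sketch, with '+' standing for the normal allele and 'm' for the myostatin mutation (both labels are mine):

```python
# Two heterozygous "sprinter" whippets (+/m) crossed: enumerating all
# allele combinations recovers the expected 1:2:1 genotype ratio.
from itertools import product
from collections import Counter

parent1 = ("+", "m")  # sprinter: one normal allele, one mutant allele
parent2 = ("+", "m")  # sprinter: one normal allele, one mutant allele

# Sort each two-allele combination so '+m' and 'm+' count as one genotype.
genotypes = Counter("".join(sorted(a + b))
                    for a, b in product(parent1, parent2))

print(genotypes["++"], "normal;",
      genotypes["+m"], "sprinters;",
      genotypes["mm"], "double-muscled 'bully'")
# -> 1 normal; 2 sprinters; 1 double-muscled 'bully'
```

The same enumeration explains Superbaby's family: his mother is the +/m case, and he is the m/m case.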
Schuelke et al's (2004) case report is one of the first known cases of a myostatin mutation in humans. The pregnancy was normal, and when he was born, Superbaby had protruding calf muscles (see Fig. 1: a; left as a 6-day-old neonate, right at 7 months), along with protruding upper arms. The ultrasonograms of his muscles also differed from controls, as did the morphometric analysis (Fig. 1 b and c, respectively). All around, Superbaby was normal; by age 3 he still had increased muscle mass and strength and could even hold 3 kg dumbbells in suspension, horizontally, with his arms extended. He had some strong family members: one was a construction worker who was able to unload curbstones by hand, while the mother "appeared muscular," though not as muscular as her son (see Fig. 1 d).
Myostatin is also expressed in the heart, and since Superbaby had a loss-of-function mutation on his myostatin gene, he was monitored for cardiomyopathy but he may have been too young to detect any defects. So Schuelke et al’s (2004) obvious conclusion was that a loss-of-function on the myostatin gene could increase muscle bulk and strength and be good therapy for people with a muscle-wasting disease.
A deletion in the MSTN gene can cause myostatin-related muscle hypertrophy. The mutation disrupts the gene's instructions for making the myostatin protein, so muscle cells make little to no functional myostatin. When the protein is lost, muscle cells overgrow, with no other apparent medical problems (as seen in Superbaby and his mother).
If that were my son, I'd be a proud father. My baby coming out of the womb already jacked and strong? He'd be in the gym as soon as he was able, and I would attempt to mold him into a champion bodybuilder/powerlifter; I'm not sure if the mutation Superbaby has would truly matter at the elite level in the IFBB, but it would matter at the amateur level. Superbaby, and all the other loss-of-function myostatin animal mutants, have paved the way for new forms of gene therapy for humans with muscle-wasting diseases. Another American boy shows a similar phenotype, though the cause is different: his body produces a normal level of myostatin, but a defect in his myostatin receptors is thought to prevent his muscle cells from responding to it, and since he's bigger and stronger than children his age, this is a sensible hypothesis.
With these loss-of-function mutants and other transgenes, we can understand how and why muscles atrophy and grow, and we can help people with serious disease. Superbaby is now 14 years old, and while I am unable to find any new information on him (I will write more if and when I do), it's clear that a loss-of-function mutation in the myostatin gene causes higher muscle mass and strength compared to people without the mutation. I'd personally line up to turn off my myostatin gene so I could get double-muscled, and if there were any gene therapy for follistatin, I'd get that too, in order to become quadruple-muscled.
Racial differences in sporting success are undeniable. The races are somewhat stratified in different sports, and we can trace the cause to differences in genes and where one's ancestors were born. Certain populations have traits their ancestors also had, traits that correlate with geographic ancestry, and this lets us explain how and why certain populations dominate (or would have the capacity to, based on body type and physiology) certain sporting events. Critiques of Taboo: Why Black Athletes Dominate Sports and Why We're Afraid to Talk About It are few and far between, and the few I am aware of are alright, but the one I will discuss today is not particularly good, because the author makes a lot of claims he could easily have verified himself.
In 2010, Ian Kerr published The Myth of Racial Superiority in Sports, which states that there is a "dark side" to sports and specifically sets its sights on Jon Entine's (2000) book Taboo. In the article, Kerr (2010) makes a lot of, in my opinion, sweeping claims that would require substantial evidence and argument to establish. I will discuss Kerr's views on race, biology, the "environment," "genetic determinism," and racial dominance in sports (with a focus on sprinting/distance running).
Since establishing the reality and validity of the concept of race is central to Entine's (2000) argument on racial differences in sports, I must first defend the reality of race (and rebut what Kerr 2010 writes about it). Kerr (2010: 20) writes:
First, it is important to note that Entine is not working in a vacuum; his assertions about race and sports are part of a larger ongoing argument about folk notions of race. Folk notions of race [are] founded on the idea that deep, mutually exclusive biological categories dividing groups of people have scientific and cultural merit. This type of thinking is rooted in the notion that there are underlying, essential differences among people and that those observable physical differences among people are rooted in biology, in genetics (Ossorio, Duster, 2005: 2).
Dividing groups of people does have scientific, cultural, and philosophical merit, even though the concept of "essences" has long been discarded by philosophers. There are differences in both anatomy and physiology between people who differ by geographic ancestry, and these differences, at the extreme end, would be enough to cause the differences seen in elite sporting competition.
Either way, the argument for the existence of race is simple: 1) populations differ in physical attributes (facial, morphological) which then 2) correlate with geographic ancestry. Therefore, race has a biological basis since the physical differences between these populations are biological in nature. Now that we have established that race exists using only physical features, it should be extremely simple to show how Kerr (2010) is in error with his strong claims regarding race and the so-called “mythology” of racial superiority in sports. Race is biological; the biological argument for race is sound (read here and here, and also see Hardimon, 2017).
True genetic determinism, as it is commonly conceived, has no sound, logical basis (Resnik and Vorhaus, 2006). So Kerr's (2010) claims in this section need to be dissected. This next quote is pretty much imperative to the soundness and validity of his whole article, and let's just say that it's easy to rebut, which invalidates his entire argument:
Vinay Harpalani is one of the most outspoken critics of using genetic determinism to validate notions of inferiority or the superiority of certain groups (in this case Black athletes). He argues that in order for any of Entine’s claims to be valid he must prove that: 1) there is a systematic way to define Black and White populations; 2) consistent and plausible genetic differences between the populations can be demonstrated; 3) a link between those genetic differences and athletic performance can be clearly shown (2004).
This is too easy to prove.
1) While I do agree that the terms 'White' and 'Black' are extremely broad, as can be seen by looking at Rosenberg et al (2002), population clusters corresponding to what we call 'White' and 'Black' exist (and are part of continental-level minimalist races). So is there a systematic way to define 'Black' and 'White' populations? Yes: genetic testing will show where one's ancestors recently came from, thereby satisfying point 1.
2) Consistent and plausible genetic differences between populations can be demonstrated. Sure, there is more variation within races than between them (Lewontin, 1972; Rosenberg et al, 2002; Witherspoon et al, 2007; Hunley, Cabana, and Long, 2016), but even these small between-group differences can have huge effects at the tail ends of a distribution.
3) I have compiled numerous data on genetic differences between African and European ethnies and how these genetic differences then cause differences in elite athletic performance. I have shown that Jamaicans, West Africans, Kenyans, and Ethiopians (certain subgroups of the latter two countries) have genetic/somatotypic differences that lead to differences in these sporting competitions. So we can say that race can predict traits important for certain athletic competitions.
To summarize: 1) the terms 'White' and 'Black' are broad, but we can still classify individuals along these lines; 2) consistent and plausible genetic differences between races and ethnies do exist; and 3) a link between these genetic differences and athletic differences between groups can be found. Therefore Entine's (2000) arguments, and the validity thereof, are sound.
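Point 2 leans on the within- vs between-group apportionment of variation, which is just a one-way decomposition of sums of squares. A toy sketch with made-up numbers (purely illustrative, not genetic data) showing how the two shares are computed:

```python
# One-way decomposition of total variance into between-group and
# within-group sums of squares. The data are invented solely to show
# the arithmetic behind "more variation within groups than between".
from statistics import fmean

groups = {
    "A": [10.0, 12.0, 14.0, 16.0],
    "B": [11.0, 13.0, 15.0, 17.0],
}

grand_mean = fmean(x for xs in groups.values() for x in xs)

# Between-group SS: group sizes times squared deviation of group means.
ss_between = sum(len(xs) * (fmean(xs) - grand_mean) ** 2
                 for xs in groups.values())
# Within-group SS: squared deviations of each value from its group mean.
ss_within = sum((x - fmean(xs)) ** 2
                for xs in groups.values() for x in xs)
ss_total = ss_between + ss_within

print(f"between-group share: {ss_between / ss_total:.1%}")
print(f"within-group share:  {ss_within / ss_total:.1%}")
```

In this toy case nearly all the variance sits within groups, yet the group means still differ, which is exactly the situation the apportionment statistics describe.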
Kerr (2010) then makes a few comments on the West’s “obsession with superficial physical features such as skin color”, but using Hardimon’s minimalist race concept, skin color is a part of the argument to prove the existence and biological reality of race, therefore skin color is not ‘superficial’, since it is also a tell of where one’s ancestors evolved in the recent past. Kerr (2010: 21) then writes:
Marks writes that Entine is saying one of three things: that the very best Black athletes have an inherent genetic advantage over the very best White athletes; that the average Black athlete has a genetic advantage over the average White athlete; that all Blacks have the genetic potential to be better athletes than all Whites. Clearly these three propositions are both unknowable and scientifically untenable. Marks writes that “the first statement is trivial, the second statistically intractable, and the third ridiculous for its racial essentialism” (Marks, 2000: 1077).
The first two, in my opinion (that the very best black athletes have an inherent genetic advantage over the very best white athletes, and that the average black athlete has a genetic advantage over the average white athlete), are true, and I don’t know how you can deny this, especially if you’re talking about AVERAGES. The third statement is ridiculous, because it doesn’t work like that. Kerr (2010), of course, states that race is not a biological reality, but I’ve shown that it is, so that statement is a non-factor.
Kerr (2010) then states that the claim of “demonstrating across the board genetic variations between populations — has in recent years been roundly debunked”, and also says “Differences in height, skin color, and hair texture are simply the result of climate-related variation.” This is one of the craziest things I’ve read all year! Differences in height would cause differences in elite sporting competition; differences in skin color can be conceptualized as one’s ancestors’ multi-generational adaptation to the climate they evolved in, as can hair texture. If only Kerr (2010) knew that this statement was the beginning of the end of his shitty argument against Entine’s book. Race is a social construct of a biological reality, and there are genetic differences between races—however small (Risch et al, 2002; Tang et al, 2005)—but these small differences can mean big differences at the elite level.
The “environment” and biological variability
Kerr (2010) then shifts his focus from genetic differences to biological differences. He specifically discusses the Kenyans—the Kalenjin—stating that “height or weight, which play an instrumental role in helping define an individual’s athletic prowess, have not been proven to be exclusively rooted in biology or genetics.” While twin-study heritability estimates for BMI and height are high (both around .8), I think we can discount those numbers, since they came from flawed twin studies and molecular genetic evidence shows lower heritabilities. Either way, height is surely strongly influenced by ‘genes’. Another important caveat is that Kenya has one of the lowest BMIs in the world (20.7 for Kenyan men), which is also part of why certain African ethnies dominate running competitions.
I don’t disagree with Kerr (2010) here too much; many papers show that SES/cultural/social factors are very important to Kenyan runners (Onywera et al, 2006; Wilbur and Pistiladis, 2012; Tucker, Onywera, and Santos-Concejero, 2015). You can have all of the ‘physical gifts’ in the world, but if they’re not combined with the will to do your best, along with the right cultural and social factors, you won’t succeed. Conversely, an advantageous genotype and physique are useless without a strong mind (Lippi, Favaloro, and Guidi, 2008):
An advantageous physical genotype is not enough to build a top-class athlete, a champion capable of breaking Olympic records, if endurance elite performances (maximal rate of oxygen uptake, economy of movement, lactate/ventilatory threshold and, potentially, oxygen uptake kinetics) (Williams & Folland, 2008) are not supported by a strong mental background.
Dissecting this, though, is tougher. Being born at certain altitudes confers certain advantageous traits, such as a larger lung capacity (which is an advantage when competing at lower altitudes), but certain subpopulations live in these high-altitude areas, so what is it? Genetic? Cultural? Environmental? Nature vs. nurture is a false dichotomy; it is a mixture of all three.
How does one explain, then, the athlete who trains countless hours a day fine-tuning a jump shot, like LeBron James, or shaving seconds off sub-four-minute miles, like Robert Kipkoech Cheruiyot, a four-time Boston Marathon winner?
Literally no one denies that elite athletes put in insane amounts of practice; but even if everyone put in the same amount of practice, they would not end up with similar abilities.
He also briefly brings up muscle fibers, stating:
These include studies on African fast twitch muscle fibers and development of motor skills. Entine includes these studies to demonstrate irrevocable proof of embedded genetic differences between populations but refuses to accept the fact that any differences may be due to environmental factors or training.
This, again, shows ignorance of the literature. An individual’s muscle fibers are formed during development from the fusion of several myoblasts, with differentiation completed before birth. Muscle fiber typing is also set by age 6: no difference in skeletal muscle tissue was found when comparing 6-year-olds and adults (Bell et al, 1980). You can, of course, train type II fibers to have aerobic capacity similar to type I fibers, but they’ll never be fully similar. This is something Kerr (2010) is obviously ignorant of because he’s not well-read on the literature, which causes him to make dumb statements like “any differences [in muscle fiber typing] may be due to environmental factors or training”.
Black domination in sports
Finally, Kerr (2010) discusses the fact that whites dominated certain running competitions in the Olympics and that, before the 1960s, a majority of distance-running gold medals went to white athletes. He then states that the 2008 Boston Marathon winner was Kenyan, but the next 4 behind him were not. Now let’s check the 2017 Boston Marathon: Kenya, the USA, and Japan took the top 3 spots, with Kenyans/Ethiopians taking 5 of the top 15; the same is true of the women—a Kenyan winner, with Kenyans/Ethiopians taking 5 of the top 15 spots. The fact that whites used to do well in running sports is a non-factor; Jesse Owens blew away the competition at the Games in Germany, which foreshadowed how blacks would come to dominate in the US decades later.
Kerr (2010) then ends the article with a ton of wild claims, the wildest one, in my opinion, being that “Kenyans are no more genetically different from any other African or European population on average”. Does anyone believe this? Because I have data to the contrary. Kenyans have a higher VO2 max, which of course is trainable but has a ‘genetic’ component (Larsen, 2003); other authors argue that genetic differences between populations account for differences in success in running competitions between populations (Vancini et al, 2014); and male and female Kenyan and Ethiopian runners are the fastest in the half and full marathon (Knechtle et al, 2016). There is a large amount of data out there on Kenyan/Ethiopian and others’ dominance in running; it seems Kerr (2010) just ignored it. I agree with Kerr that Kenyans show that humans can adapt to their environment, but his conclusion here:
The fact that runners coming from Kenya do so well in running events attests to the fact the combination of intense high altitude training, consumption of a low-fat, high protein diet, and a social and cultural expectation to succeed have created in recent decades an environment which is highly conducive to producing excellent long-distance runners.
is strongly worded, and while I don’t disagree with anything in it, he’s disregarding how somatype and genes differ between Kenyans and the other populations that compete in these sports, which then leads to differences in elite sporting competition.
Elite sporting performance is influenced by myriad factors, including psychology, the ‘environment’, and genetic factors. Something Kerr (2010) doesn’t understand—because he’s not well-read on this literature—is that many genetic factors that influence sporting performance are known. The ability to become elite depends on one’s capacity for endurance, muscle performance, the ability of the tendons and ligaments to withstand stress and injury, and the drive to train and push beyond what normal people can do (Lippi, Longo, and Maffulli, 2010). We can then extend this to human races: some are better equipped to excel in running competitions than others.
On its face, Kerr’s (2010) claim that there are no inherent differences between races is wrong. Races differ in somatype, which is due to evolution in different geographic locations over tens of thousands of years. The human body is well adapted for long-distance running (Murray and Costa, 2012), and since our capabilities for endurance running evolved in Africa, and Africans theoretically have a musculoskeletal structure similar to the Homo sapiens that left Africa around 70 kya, it’s only logical to state that Africans, on average, have an inherent ability in running competitions (West and East Africans especially, while North Africans fare very well in middle-distance running, which, again, comes down to living at higher altitudes, like the Kenyans and Ethiopians).
Wagner and Heyward (2000) reviewed many studies on the physiological differences between blacks and whites. Blacks skew towards mesomorphy; black youths had smaller biiliac and bitrochanteric widths (the widest measure of the pelvis at the outer edges and the distance between the flat processes on the femurs, respectively), and black infants had longer extremities than white infants (Wagner and Heyward, 2000). So we have anatomic evidence that blacks are superior runners (in an American context). Mesomorphic athletes are more likely to be sprinters (Sands et al, 2005; this is also seen in prepubescent children: Marta et al, 2013), while Kenyans are ecto-dominant (Vernillo et al, 2013), which helps explain their success at long-distance running. So just by looking at the phenotype (a marker for race along with geographic ancestry, proving the biological existence of race), we can confidently state, on average, how an individual or a population will fare in certain competitions.
Kerr’s (2010) arguments leave a ton to be desired. Race exists and is a biological reality. I don’t know why this paper got published since it was so full of errors; his arguments were not sound and much of the literature contradicts his claims. What he states at the end about Kenyans is not wrong at all, but to not even bring up genetic/biologic differences as a factor influencing their performance is dishonest.
Of course, a whole slew of factors, be they biological, cultural, psychological, genetic, socioeconomic, anatomic, physiologic etc influence sporting performance, but certain traits are more likely to be found in certain populations, and in the year 2018 we have a good idea of what influences elite sporting performance and what does not. It just so happens that these traits are unevenly distributed between populations, and the cause is evolution in differing climates in differing geographic locations.
Race exists and is a biological reality. Biological anatomic/physiological differences between these races then manifest themselves in elite sporting competition. The races differ, on average, in traits important for success in certain competitions. Therefore, race explains some of the variance in elite sporting competition.
President Trump was quoted the other day saying “We have to look at the Internet because a lot of bad things are happening to young kids and young minds and their minds are being formed,” Trump said, according to a pool report, “and we have to do something about maybe what they’re seeing and how they’re seeing it. And also video games. I’m hearing more and more people say the level of violence on video games is really shaping young people’s thoughts.” But outside of broad assertions like this—that playing violent video games causes violent behavior—does the claim stack up against the scientific literature? In short, no, it does not. (A lot of publication bias exists in this debate, too.) Why do people think that violent video games cause violent behavior? Mostly due to the APA and its broad claims backed by little evidence.
Just doing a cursory Google search of ‘violence in video games pubmed’ brings up 9 journal articles, so let’s take a look at a few of them.
The first article is titled The Effect of Online Violent Video Games on Levels of Aggression, by Hollingdale and Greitemeyer (2014). They took 101 participants and randomized them to one of four experimental conditions: neutral offline, neutral online (Little Big Planet 2), violent offline, and violent online (Call of Duty: Modern Warfare). After the participants played, they answered a questionnaire, and aggressive behavior was measured using the hot sauce paradigm (Lieberman et al, 1999). Hollingdale and Greitemeyer (2014) conclude that “this study has identified that increases in aggression are not more pronounced when playing a violent video game online in comparison to playing a neutral video game online.”
Staude-Muller (2011) finds that “it was not the consumption of violent video games but rather an uncontrolled pattern of video game use that was associated with increasing aggressive tendencies.” Przybylski, Ryan, and Rigby (2009) found that enjoyment, value, and desire to play in the future were strongly related to competence in the game. Players high in trait aggression, though, were more likely to prefer violent games, even though the violence didn’t add to their enjoyment, and violent content accounted for little of the variance in the satisfactions previously cited.
Tear and Nielsen (2013) failed to find evidence that violent video game playing leads to a decrease in pro-social behavior (Szycik et al, 2017 also show that video games do not affect empathy). Gentile et al (2014) show that “habitual violent VGP increases long-term AB [aggressive behavior] by producing general changes in ACs [aggressive cognitions], and this occurs regardless of sex, age, initial aggressiveness, and parental involvement. These robust effects support the long-term predictions of social-cognitive theories of aggression and confirm that these effects generalize across culture.” The APA (2015) even states that “scientific research has demonstrated an association between violent video game use and both increases in aggressive behavior, aggressive affect, aggressive cognitions and decreases in prosocial behavior, empathy, and moral engagement.” How true is all of this, though? Does playing violent video games truly increase aggression/aggressive behavior? Does it have an effect on violence in America and shootings overall?
Whitney (2015) states that the video-games-cause-violence paradigm has “weak support” (pg 11) and that we should be cautious before taking this “weak support” as conclusive. He concludes that there is not enough evidence to establish a truly causal connection between violent video game playing and violent/aggressive behavior. Cunningham, Engelstatter, and Ward (2016) tracked the sale of violent video games and criminal offenses after those games were sold, and found that violent crime actually decreased in the weeks following the release of a violent game. Of course, this does not rule out longer-term effects of violent game playing, but in the short term this is good evidence against the case that violent games cause violence. (Also see the Psychology Today article on the matter.)
We have a few problems here, though. How are we to untangle the effects of movies and other forms of violent media that children consume? You can’t. So the researchers must assume that video games, and only video games, cause this type of aggression. I don’t see how one can logically state that, of all types of media, violent video games—and not violent movies, cartoons, TV shows etc—cause aggression/violent behavior.
Back in 2011, the Supreme Court case Brown v. Entertainment Merchants Association concluded that the effects of violent games on violent/aggressive behavior were so small that they couldn’t be untangled from the purported effects of other violent media. Ferguson (2015) found that violent video game playing had little effect on children’s mood, aggression levels, pro-social behavior or grades. He also found publication bias in this literature (Ferguson, 2017). Contrary to claims that video games cause violence/aggressive behavior, video game playing was associated with a decrease in youth crime (Ferguson, 2014; Markey, Markey, and French, 2015, which is in line with Cunningham, Engelstatter, and Ward, 2016). You can read more about this in Ferguson’s article for The Conversation, along with his and others’ responses to the APA’s claim that violent video games cause violent behavior (with them stating that the APA is biased). (Also read a letter from 230 researchers on the bias in the APA’s Task Force on Violent Media.)
How would one actually untangle the effects of, say, violent video game playing from the effects of other ‘problematic’ forms of media that also depict aggressive acts, and actually pinpoint violent video games as the culprit? That’s right: one can’t. How would you realistically control for the fact that a child grows up around—and consumes—so much ‘violent’ media, sees others become violent around him, etc.? How can you logically state that the video games are the cause? Some may think it logical that someone who plays a game like, say, Call of Duty for hours on end each day would be more likely to be violent/aggressive or to commit atrocities like school shootings. But none of these studies has ever come to the conclusion that violent video games may/will cause someone to kill or go on a shooting spree. It just doesn’t make sense. I can, of course, see the logic in believing that it could lead to aggressive behavior or a lack of pro-social behavior (say the kid played a lot of games and had little outside contact with people his age), but the literature on this subject should be enough to put claims like this to bed.
It’s just about impossible to untangle the so-called small effects of video games on violent/aggressive behavior from those of other media such as violent cartoons and violent movies. Who’s to say it’s the violent video games, and not the violent movies and violent cartoons too, that ‘cause’ this type of behavior? It’s logically impossible to distinguish, so the small relationship between video games and violent behavior can safely be ignored. The media seems to be getting this right, which is a surprise (though I bet that if Trump said the opposite—that violent video games don’t cause violent behavior/shootings—these same people would be saying that they do), but a broken clock is right twice a day.
So Trump’s claim (even if he didn’t outright state it) is wrong, along with anyone else who would want to jump in and attempt to say that video games cause violence. In fact, the literature shows a decrease in violence after games are released (Ferguson, 2014; Markey, Markey, and French, 2015; Cunningham, Engelstatter, and Ward, 2016). The amount of publication bias (also see Copenhaver and Ferguson, 2015 where they show how the APA ignores bias and methodological problems regarding these studies) in this field (Ferguson, 2017) should lead one to question the body of data we currently have, since studies that find an effect are more likely to get published than studies that find no effect.
Video games do not cause violent/aggressive behavior/school shootings. There is literally no evidence that they are linked to the deaths of individuals, and with the small effects noted on violent/aggressive behavior due to violent video game playing, we can disregard those claims. (One thing video games are good for, though, is improving reaction time (Benoit et al, 2017). The literature is strong here; playing these so-called “violent video games” such as Call of Duty improved children’s reaction time, so wouldn’t you say that these ‘violent video games’ have some utility?)
Lead has many known neurological effects on the developing brain and nervous system that lead to deleterious health outcomes and negative outcomes in general, including (but not limited to) lower IQ, higher rates of crime, higher blood pressure and higher rates of kidney damage, and these effects are permanent and persistent (Stewart et al, 2007). Chronic lead exposure can “also lead to decreased fertility, cataracts, nerve disorders, muscle and joint pain, and memory or concentration problems” (Sanders et al, 2009). Lead exposure in utero, in infancy, and in childhood can also lead to “neuronal death” (Lidsky and Schneider, 2003), while epigenetic inheritance also plays a part (Sen et al, 2015). How do blacks and whites differ in exposure to lead? How large is the difference between the two races in America, and how much does it contribute to crime? On the other hand, China has high rates of lead exposure but lower rates of crime, so how does that square with the lead-crime relationship overall? Are the Chinese an outlier, or is there something else going on?
The effects of lead on the brain are well known, and considerable effort has been put into lowering lead levels in America (Gould, 2009). Higher lead exposure is also found more often in poorer, lower-class communities (Hood, 2005). So if higher lead exposure is concentrated in lower-class communities, blacks should have higher blood-lead levels than whites. This is what we find.
Blacks had a 27 percent higher concentration of lead in their tibia, while having significantly higher levels of blood lead, “likely because of sustained higher ongoing lead exposure over the decades” (Theppeang et al, 2008). Other data—coming out of Detroit—shows the same relationships (Haar et al, 1979; Talbot, Murphy, and Kuller, 1982; Lead poisoning in children under 6 jumped 28% in Detroit in 2016; also see Maqsood, Stanbury, and Miller, 2017) while lead levels in the water contribute to high levels of blood-lead in Flint, Michigan (Hanna-Attisha et al, 2016; Laidlaw et al, 2016). Cassidy-Bushrow et al (2017) also show that “The disproportionate burden of lead exposure is vertically transmitted (i.e., mother-to-child) to African-American children before they are born and persists into early childhood.”
Children exposed to lead have lower brain volumes as children, specifically in the ventrolateral prefrontal cortex, which is the same region of the brain that is impaired in antisocial and psychotic persons (Cecil et al, 2008). The community that was tested was well within the ‘safe’ range set by the CDC (Raine, 2014: 224), though the CDC says that there is no safe level of lead exposure. There is a large body of studies which show that there is no safe level of lead exposure (Needleman and Landrigan, 2004; Canfield, Jusko, and Kordas, 2005; Barret, 2008; Rossi, 2008; Abelsohn and Sanborn, 2010; Betts, 2012; Flora, Gupta, and Tiwari, 2012; Gidlow, 2015; Lanphear, 2015; Wani, Ara, and Usmani, 2015; Council on Environmental Health, 2016; Hanna-Attisha et al, 2016; Vorvolakos, Aresniou, and Samakouri, 2016; Lanphear, 2017). So the data is clear that there is absolutely no safe level of lead exposure, and even small effects can lead to deleterious outcomes.
Further, one brain study of 532 men who worked in a lead plant showed that those who had higher levels of lead in their bones had smaller brains, even after controlling for confounds like age and education (Stewart et al, 2008). Raine (2014: 224) writes:
The fact that the frontal cortex was particularly reduced is very interesting, given that this brain region is involved in violence. This lead effect was equivalent to five years of premature aging of the brain.
So we have good data showing that the brain regions implicated in violent tendencies are reduced in people exposed to more lead, indicating a relationship. But what about antisocial disorders? Are people with higher levels of lead in their blood more likely to be antisocial?
Needleman et al (1996) show that boys with higher blood-lead levels received higher teacher ratings of aggressive and delinquent behavior, along with higher self-reported ratings of aggressive behavior. Even blood-lead levels later in childhood are related to crime: one study in Yugoslavia showed that blood-lead levels at age three had a stronger relationship with destructive behavior than prenatal blood-lead levels did (Wasserman et al, 2008), and the same pattern is seen in America, with high blood-lead levels correlating with antisocial and aggressive behavior at age 7 but not at age 2 (Chen et al, 2007).
Nevin (2007) showed a strong relationship between preschool lead exposure and subsequent increases in criminal cases in America, Canada, Britain, France, Australia, Finland, West Germany, and New Zealand. Reyes (2007) also shows that crime fell faster in states that saw larger decreases in lead levels, while variations in lead levels within cities correlate with variations in crime rates (Mielke and Zahran, 2012). Nevin (2000) showed a strong relationship between environmental lead levels from 1941 to 1986 and corresponding changes in violent crime twenty-three years later in the United States. Raine (2014: 226) writes (emphasis mine):
So, young children who are most vulnerable to lead absorption go on twenty-three years later to perpetrate adult violence. As lead levels rose throughout the 1950s, 1960s, and 1970s, so too did violence correspondingly rise in the 1970s, 1980s and 1990s. When lead levels fell in the late 1970s and early 1980s, so too did violence fall in the 1990s and the first decade of the twenty-first century. Changes in lead levels explained a full 91 percent of the variance in violent offending—an extremely strong relationship.
From international to national to state to city levels, the lead levels and violence curves match up almost exactly.
But does lead have a causal effect on crime? Given the deleterious effects it has on the developing brain and nervous system, we should expect a relationship, and the relationship should grow stronger with higher doses. I am aware of one analysis, on a sample that was 90 percent black, which showed that every 5 microgram increase in prenatal blood-lead levels carried a 40 percent higher risk of arrest (Wright et al, 2008). This makes sense given lead’s deleterious developmental effects; people with high blood-lead levels show brain scans/brain volumes in certain regions similar to those of antisocial/violent people. So this is yet more suggestive evidence for a causal relationship.
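To put the Wright et al (2008) figure in arithmetic terms: each 5 µg/dL increase multiplies arrest risk by 1.4. If one assumes (my simplifying assumption, not something the study establishes) that this multiplicative effect is constant across the exposure range, the risk ratio compounds:

```python
def arrest_risk_ratio(delta_ug_dl: float, rr_per_5: float = 1.4) -> float:
    """Relative risk of arrest for a given increase in prenatal blood lead,
    assuming a constant multiplicative effect of 1.4 per 5 ug/dL
    (an illustrative assumption, not an established dose-response curve)."""
    return rr_per_5 ** (delta_ug_dl / 5)

print(arrest_risk_ratio(5))   # 1.4, i.e. the 40 percent higher risk reported
print(arrest_risk_ratio(10))  # ~1.96, i.e. nearly double the risk
```

So even a 1 µg/dL gap between groups would, under this assumption, translate to roughly a 7 percent higher arrest risk, which is why small average differences in exposure are not negligible.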
Jennifer Doleac discusses three studies showing that blood-lead levels in America need to be addressed, since they are strongly related to negative outcomes. Aizer and Currie (2017) show that “A one-unit increase in lead increased the probability of suspension from school by 6.4-9.3 percent and the probability of detention by 27-74 percent, though the latter applies only to boys.” They also show that children who live nearer to roads have higher blood-lead levels, since the soil near highways was contaminated decades ago by leaded gasoline. Feigenbaum and Muller (2016) show that cities’ use of lead pipes increased murder rates between the years 1921 and 1936. Finally, Billings and Schnepel (2017: 4) show that their “results suggest that the effects of high levels of [lead] exposure on antisocial behavior can largely be reversed by intervention—children who test twice over the alert threshold exhibit similar outcomes as children with lower levels of [lead] exposure (BLL<5μg/dL).”
Wright et al (2008) found a relationship between lead exposure in utero and arrests in adulthood in a sample that was 90 percent black, with numerous controls: prenatal and postnatal blood-lead exposure was associated with higher arrest rates, including higher arrest rates for violent acts. Again, for every 5 microgram increase in prenatal blood-lead levels, there was a 40 percent greater risk of arrest. This is some of the strongest evidence for the lead-causes-crime hypothesis.
One study showed that in post-Katrina New Orleans, decreasing lead levels in the soil caused a subsequent decrease in blood-lead levels in children (Mielke, Gonzales, and Powell, 2017). Sean Last argues that, while he believes lead does contribute to crime, the racial gaps have closed in recent decades, and therefore blood-lead levels cannot be a source of some of the variance in crime between blacks and whites; he even cites the CDC ‘lowering its “safe” values’ for lead, even though there is no such thing as a safe level of lead exposure (references cited above). White, Bonilha, and Ellis Jr. (2015) also show that minorities—blacks in particular—have higher levels of lead in their blood. Either way, Last seems to downplay large differences in lead exposure between whites and blacks at young ages, even though that is when critical development of the mind/brain and other important functioning occurs. There is no safe level of lead exposure—pre- or postnatal—nor are there safe levels in adulthood. Even a small difference in blood-lead levels can have pretty large effects on criminal behavior.
Sean Last also writes that “Black children had a mean BLL which was 1 ug/dl higher than White children and that this BLL gap shrank to 0.9 ug/dl in samples taken between 2003 and 2006, and to 0.5 ug/dl in samples taken between 2007 and 2010.” Though, still, there are problems here too: “After adjustment, a 1 microgram per deciliter increase in average childhood blood lead level significantly predicts 0.06 (95% confidence interval [CI] = 0.01, 0.12) and 0.09 (95% CI = 0.03, 0.16) SD increases and a 0.37 (95% CI = 0.11, 0.64) point increase in adolescent impulsivity, anxiety or depression, and body mass index, respectively, following ordinary least squares regression. Results following matching and instrumental variable strategies are very similar” (Winter and Sampson, 2017).
Naysayers may point to China, which has blood-lead levels around two times higher than America’s but some of the lowest crime rates in the world. The Hunan province has considerably lowered blood-lead levels in recent years, but they are still higher than in developed countries (Qiu et al, 2015). One study even shows ridiculously high levels of lead in Chinese children: “Results showed that mean blood lead level was 88.3 micro g/L for 3 – 5 year old children living in the cities in China and mean blood lead level of boys (91.1 micro g/L) was higher than that of girls (87.3 micro g/L). Twenty-nine point nine one per cent of the children’s blood lead level exceeded 100 micro g/L” (Qi et al, 2002), while Li et al (2014) found similar levels. Shanghai also has higher blood-lead levels than the rest of the developed world (Cao et al, 2014), and blood-lead levels are higher in Taizhou, China than in other parts of the country—and the world (Gao et al, 2017). Blood-lead levels are decreasing with time, but remain higher than in other developed countries (He, Wang, and Zhang, 2009).
Furthermore, Chinese women had blood-lead levels two times higher than American women’s (Wang et al, 2015). With transgenerational epigenetic inheritance playing a part in the passing of DNA methylation from mother to daughter and then to grandchildren (Sen et al, 2015), this is a public health threat to Chinese women and their children. So just going off this data, the claim that China is a safe country should be called into question.
Reality seems to tell a different story. It seems that the true crime rate in China is covered up, especially the murder rate:
In Guangzhou, Dr Bakken’s research team found that 97.5 per cent of crime was not reported in the official statistics.
Of 2.5 million cases of crime, in 2015 the police commissioner reported 59,985 — exactly 15 less than his ‘target’ of 60,000, down from 90,000 at the start of his tenure in 2012.
The murder rate in China is around 10,000 per year according to official statistics, 25 per cent less than the rate in Australia per capita.
“I have the internal numbers from the beginning of the millennium, and in 2002 there were 52,500 murders in China,” he said.
Instead of 25 per cent less murder than Australia, Dr Bakken said the real figure was closer to 400 per cent more.
Guangzhou, for instance, doesn’t keep data for crime committed by migrants, who commit 80 percent of the crime in the city. Out of 2.5 million crimes committed in Guangzhou, only 59,985 crimes were reported in the official statistics—15 crimes shy of the commissioner’s target of 60,000. Weird… Either way, China doesn’t have a murder rate similar to Switzerland’s:
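The arithmetic behind these figures is easy to check (numbers taken from the quoted passage):

```python
# Guangzhou under-reporting: share of the estimated 2.5 million crimes
# that never made it into the official statistics.
total_crimes = 2_500_000
reported = 59_985
unreported_share = 1 - reported / total_crimes
print(f"unreported: {unreported_share:.1%}")      # matches Bakken's 97.5 per cent

# Murder: official figure (~10,000/yr) vs. Bakken's internal 2002 figure.
official, internal = 10_000, 52_500
print(f"internal figure is {internal / official:.2f}x the official one")
```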
The murder rate in China does not equal that of Switzerland, as the Global Times claimed in 2015. It’s higher than anywhere in Europe and similar to that of the US.
China also ranks highly on the corruption index—higher than the US—which is more evidence indicative of a covered-up crime rate. So this is good evidence that, contrary to the claims of people who would attempt to downplay the lead-crime relationship, these effects are real and do matter in regard to crime and murder.
So it’s clear that we can’t trust the official Chinese crime stats, since much of their crime is not reported. Why should we trust crime stats from a corrupt government? The evidence is clear that China has a higher crime—and murder—rate than is seen on the Chinese books.
Lastly, the effects of epigenetics can and do have a lasting effect even on the grandchildren of mothers exposed to lead while pregnant (Senut et al, 2012; Sen et al, 2015). Sen et al (2015) showed that lead exposure during pregnancy affected the DNA methylation status of the fetal germ cells, which then led to altered DNA methylation on dried blood spots in the grandchildren of the mother exposed to lead while pregnant—though this is indirect evidence. If this holds in larger samples, it could be big for criminological theory and could be a cause of higher rates of black crime (note: I am not claiming that lead exposure accounts for all, or even most, of the racial crime disparity. It does account for some, as can be seen in the data compiled here).
In conclusion, the relationship between lead exposure and crime is robust and replicated across many countries and cultures. No safe level of blood lead exists; even so-called trace amounts can have horrible developmental and life outcomes, including higher rates of criminal activity. There is a clear relationship between increases and decreases of lead in populations—even within cities—which then predict crime rates. Some may point to the Chinese as evidence against a strong relationship, but there is strong evidence that the Chinese do not report anywhere near all of their crime data. Epigenetic inheritance, too, can play a role here, mostly regarding blacks, since they are more likely to be exposed to high levels of lead in the womb, in infancy, and in childhood, which could further exacerbate crime rates. The evidence is clear that lead exposure leads to increased criminal activity, and that there is a strong relationship between blood lead levels and crime.
People look different depending on where their ancestors came from; this is not a controversial statement, and any reasonable person would agree with it. What most don’t realize, though, is that even if you assert that biological races do not exist but allow for patterns of distinct visible physical features between human populations that correspond with geographic ancestry, then race—as a biological reality—exists, because what denotes the physical characters is biological in nature, and the geographic ancestry corresponds to physical differences between continental groups. These populations can then be shown to be real in genetic analyses, and they correspond to traditional racial groups. So we can say that sub-Saharan African, Eurasian, East Asian, Oceanian, and Amerindian populations are continental-level minimalist races, since they hold all of the criteria needed to be called minimalist races: (1) distinct facial characters; (2) distinct morphologic differences; and (3) a peculiar geographic location of origin. Therefore minimalist races exist and are a biological reality. (Note: There is more variation within races than between them (Lewontin, 1972; Rosenberg et al, 2002; Witherspoon et al, 2007; Hunley, Cabana, and Long, 2016), but this does not mean that the minimalist biological concept of race has no grounding in biology.)
Minimalist race exists
The concept of minimalist race is simple: a group of people share a peculiar geographic ancestry unique to them, a peculiar physiognomy (facial features like lips, facial structure, eyes, nose, etc.), other physical traits (hair type/hair color), and a peculiar morphology. Minimalist races exist and are biologically real, since the minimalist concept can survive the findings of population genetics. Hardimon (2017: 62) asks, “Is the minimalist concept of race a social concept?” He writes that a concept is socially constructed in a pernicious sense if and only if it “(i) fails to represent any fact of the matter and (ii) supports and legitimizes domination.” Of course, populations who derive from Africa, Europe, and East Asia have peculiar facial features/morphology unique to those once-isolated populations. Therefore we can say that the minimalist race concept does not satisfy criterion (i). Hardimon (2017: 63) then writes:
Because it lacks the nasty features that make the racialist concept of race well suited to support and legitimize domination, the minimalist race concept fails to satisfy condition (ii). The racialist concept, on the other hand, is socially constructed in the pernicious sense. Since there are no racialist races, there are no facts of the matter it represents. So it satisfies (i). To elaborate, the racialist race concept legitimizes racial domination by representing the social hierarchy of race as “natural” (in a value-conferring sense): as the “natural” (socially unmediated and inevitable) expression of the talent and efforts of the individuals who stand on its rungs. It supports racial domination by conveying the idea that no alternative arrangement of social institutions could possibly result in racial equality and hence that attempts to engage in collective action in the hopes of ending the social hierarchy of race are futile. For these reasons the racialist race concept is also ideological in the pejorative sense.
Knowing what we know about minimalist races (they have distinct physiognomy, distinct morphology and geographic ancestry unique to that population), we can say that this is a biological phenomenon, since what makes minimalist races distinct from one another (skin color, hair color etc) are based on biological factors. We can say that brown skin, kinky hair and full lips, with sub-Saharan African ancestry, is African, while pale/light skin, straight/wavy/curly hair with thin lips, a narrow nose, and European ancestry makes the individual European.
These physical features between the races correspond to differences in geographic ancestry, and since they differ between the races on average, they are biological in nature and therefore it can be said that race is a biological phenomenon. Skin color, nose shape, hair type, morphology etc are all biological. So knowing that there is a biological basis to these physical differences between populations, we can say that minimalist races are biological, therefore we can use the term minimalist biological phenomenon of race, and it exists because there are differences in the patterns of visible physical features between human populations that correspond to geographic ancestry.
Hardimon then discusses how eliminativist philosophers and others don’t deny the premises underlying the minimalist biological phenomenon of race; they allow that these patterns of physical difference exist. Hardimon (2017: 68-69) then quotes a few prominent people who profess that there are, of course, differences in physical features between human populations:
… Lewontin … who denies that biological races exist, freely grants that “peoples who have occupied major geographic areas for much of the recent past look different from one another. Sub-Saharan Africans have dark skin and people who have lived in East Asia tend to have a light tan skin and an eye color and eye shape that is different from Europeans.” Similarly, population geneticist Marcus W. Feldman (final author of Rosenberg et al., “Genetic Structure of Human Populations”), who also denies the existence of biological races, acknowledges that “it has been known for centuries that certain physical features of humans are concentrated within families: hair, eye, and skin color, height, inability to digest milk, curliness of hair, and so on. These phenotypes also show obvious variation among people from different continents. Indeed, skin color, facial shape, and hair are examples of phenotypes whose variation among populations from different regions is noticeable.” In the same vein, eliminative anthropologist C. Loring Brace concedes, “It is perfectly true that long term residents of various parts of the world have patterns of features that we can identify as characteristic of the area from which they come.”
So even these people who claim not to believe in “biological races” do indeed believe in them, because what they are describing is biological in nature: they do not deny that people look different and that their ancestors came from different places. We can then use the minimalist biological phenomenon of race to get to the existence of minimalist races.
Hardimon (2017: 69) writes:
Step 1. Recognize that there are differences in patterns of visible physical features of human beings that correspond to their differences in geographic ancestry.
Step 2. Observe that these patterns are exhibited by groups (that is, real existing groups).
Step 3. Note that the groups that exhibit these patterns of visible physical features that correspond to differences in geographical ancestry satisfy the conditions of the minimalist concept of race.
Step 4. Infer that minimalist race exists.
Those individuals mentioned previously, who deny biological races but allow that people with ancestors from differing geographic locales look different, do not disagree with step 1, and no one really disagrees with step 2. Step 4’s inference flows immediately from the premise in step 3: “Groups that exhibit patterns or visible physical features that correspond to differences in geographical ancestry satisfy the conditions of the minimalist concept of race. Call (1)-(4) the argument from the minimalist biological phenomenon of race” (Hardimon, 2017: 70). The argument does not identify which populations may be called races (see further below); it just shows that race is a biological reality. If minimalist races exist, then races exist, because minimalist races are races. Minimalist races exist; therefore biological races exist. Of course, no one doubts that people come from Europe, sub-Saharan Africa, East Asia, the Americas, and the Pacific Islands, even though the boundaries between them are ‘blurry’. These groups exhibit patterns of visible physical characters that correspond to their differing geographic ancestry; they are minimalist races, therefore minimalist races exist.
Pretty much, the minimalist concept of race is just laying out what everyone knows and arguing for its existence. Minimalist races exist, but are they biologically real?
Minimalist races are biologically real
Of course, some who assert that minimalist races do not exist would say that there are no ‘genes’ exclusive to one certain population—call them ‘race genes’. Such genes indeed do not exist. But whether an individual is a member of a given race does not rest on his physical characters alone; it is determined by who his parents are, because one of the three premises of the minimalist race argument is that members ‘must have a peculiar geographic ancestry’. So it’s not that members of races share sets of genes that other races lack; it’s that they share a distinctive set of visible physical features that correspond with geographic ancestry. The minimalist concept of race can therefore be a biological concept without requiring ‘genes for’ race.
Of course, there is a biological significance to the existence of minimalist biological races. Consider that one of the physical characters that differs between populations is skin color. Skin color is controlled by genes (about half a dozen within and a dozen between populations). Lack of UV rays for individuals with dark skin can lead to diseases like prostate cancer, while darker skin is a protectant against UV damage to human skin (Brenner and Hearing, 2008; Jablonski and Chaplin, 2010). Since minimalist races are partly defined by differences in skin color between populations, and skin color has both medical and ecological significance, minimalist race is biologically significant.
(1) Consider light skin. People with light skin are more susceptible to skin cancer since they evolved in locations with low UV radiation (D’Orazio et al, 2013). The body needs vitamin D to absorb and use calcium for maintaining proper cell functioning. People who evolved near the equator don’t have to worry about this because the doses of UVB they absorb are sufficient for the production of enough previtamin D. East Asians and Europeans, on the other hand, became adapted to low-sunlight locations and therefore evolved lighter skin over time. This loss of pigmentation allowed for better UVB absorption in their new environments. (Also read my article on the evolution of human skin variation and also how skin color is not a ‘tell’ of aggression in humans.)
(2) Darker-skinned people, meanwhile, have a lower rate of skin cancer, “primarily a result of photo-protection provided by increased epidermal melanin, which filters twice as much ultraviolet (UV) radiation as does that in the epidermis of Caucasians” (Bradford, 2009). Dark skin is thought to have evolved to protect against skin cancer (Greaves, 2014a), but this has been contested (Jablonski and Chaplin, 2014) and defended (Greaves, 2014b). Therefore, from (1) and (2), skin color has evolutionary significance.
So as humans became physically adapted to the new niches they found themselves in, they developed new features, distinct from those of the populations they came from, to better cope with their new environments. For instance, “Northern Europeans tend to have light skin because they belong to a morphologically marked ancestral group—a minimalist race—that was subject to one set of environmental conditions (low UVR) in Europe” (Hardimon, 2017: 81). Explaining how human beings survived in new locations falls into the realm of biology, and minimalist races can explain why this happened.
Minimalist races clearly exist since minimalist races constitute complex biological patterns between populations. Hardimon (2017: 83) writes:
It [minimalist race] also enjoys intrinsic scientific interest because it represents a distinctive salient systematic dimension of human biological diversity. To clarify: Minimalist race counts as (i) salient because human differences of color and shape are striking. Racial differences in color and shape are (ii) systematic in that they correspond to differences in geographic ancestry. They are not random. Racial differences are (iii) distinctive in that they are different from the sort of biological differences associated with the other two salient systematic dimensions of human diversity: sex and age.
An additional consideration: Like sex and age, minimalist race constitutes one member of what might be called “the triumvirate of human biodiversity.” An account of human biodiversity that failed to include any one of these three elements would be obviously incomplete. Minimalist race’s claim to be biologically real is as good as the claim of the other members of the triumvirate. Sex is biologically real. Age is biologically real. Minimalist race is biologically real.
Real does not mean deep. Compared to the biological differences associated with sex (sex as contrasted with gender), the biological differences associated with minimalist race are superficial.
Of course, the five ‘clusters’ identified by Rosenberg et al’s (2002) K=5 graph (the researchers told structure to produce 5 genetic clusters), corresponding to Africa, Eurasia, East Asia, Oceania, and the Americas, are great candidates for minimalist biological races since they correspond to geographic locations—and they even corroborate what Johann Friedrich Blumenbach said about human races back in the 18th century. Hardimon further writes (2017: 85-86):
If the five populations corresponding to the major areas are continental-level minimalist races, the clusters represent continental-level minimalist races: The cluster in the mostly orange segment represents the sub-Saharan African continental-level minimalist race. The cluster in the mostly blue segment represents the Eurasian continental-level minimal race. The cluster in the mostly pink segment represents the East Asian continental-level minimalist race. The cluster in the mostly green segment represents the Pacific Islander continental-level minimalist race. And the cluster in the mostly purple segment represents the American continental-level minimalist race.
The assumption that the five populations are continental-level minimalist races entitles us to interpret structure as having the capacity to assign individuals to continental-level minimalist races on the basis of markers that track ancestry. In constructing clusters corresponding to the five continental-level minimalist races on the basis of objective, race-neutral genetic markers, structure essentially “reconstructs” those races on the basis of a race-blind procedure. Modulo our assumption, the article shows that it is possible to assign individuals to continental-level races without knowing anything about the race or ancestry of the individuals from whose genotypes the microsatellites are drawn. The populations studied were “defined by geography, language, and culture,” not skin color or “race.”
Of course, as critics note, the researchers predetermine how many clusters structure demarcates; for instance, K=5 indicates that the researchers told the program to delineate 5 clusters. These objections do not matter, though, for the 5 populations that come out at K=5 “are genetically structured … which is to say, meaningfully demarcated solely on the basis of genetic markers” (Hardimon, 2017: 88). K=6 brings out one more population, the Kalash, a group from northern Pakistan who speak an Indo-European language. But “The fact that structure represents a population as genetically distinct does not entail that the population is a race. Nor is the idea that populations corresponding to the five major geographic areas are minimalist races undercut by the fact that structure picks out the Kalash as a genetically distinct group. Like the K=5 graph, the K=6 graph shows that modulo our assumption, continental-level races are genetically structured” (Hardimon, 2017: 88).
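The “race-blind procedure” point can be illustrated with a toy sketch. This is not the STRUCTURE model itself (STRUCTURE is a Bayesian admixture model); it is a crude numpy stand-in on invented data: two simulated populations differ only in allele frequencies at many loci, and a label-blind projection onto the leading principal component recovers the groups without ever being told who belongs to which:

```python
# Toy sketch: label-blind clustering of simulated genotypes. All frequencies,
# locus counts, and sample sizes are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_loci, n_per_pop = 100, 50

# Two populations with different allele frequencies at each locus
freqs_a = rng.uniform(0.1, 0.4, n_loci)
freqs_b = rng.uniform(0.6, 0.9, n_loci)
pop_a = rng.binomial(2, freqs_a, (n_per_pop, n_loci))   # genotypes coded 0/1/2
pop_b = rng.binomial(2, freqs_b, (n_per_pop, n_loci))
X = np.vstack([pop_a, pop_b]).astype(float)
labels_true = np.array([0] * n_per_pop + [1] * n_per_pop)

# "Blind" step: split on the sign of the first principal component,
# computed from the genetic markers alone (no labels used)
Xc = X - X.mean(axis=0)
vt = np.linalg.svd(Xc, full_matrices=False)[2]
assign = (Xc @ vt[0] > 0).astype(int)

# Agreement with the simulated populations (up to an arbitrary label swap)
acc = max((assign == labels_true).mean(), (assign != labels_true).mean())
print(f"cluster/population agreement: {acc:.0%}")
```

The separation here is exaggerated for clarity; the point is only that the partition falls out of the markers themselves, not out of anything the algorithm was told about group membership.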
Though of course there are naysayers. David Serre and Svante Paabo, Hardimon writes, state that when individuals are sampled from homogeneous populations around the world, the gradients of allele frequencies that are found are distributed randomly across the world rather than clustering discretely. Rosenberg et al responded by verifying that the clusters they found are not artifacts of sampling, as Serre and Paabo imply, but reflect features of underlying human variation—though Rosenberg et al do agree with Serre and Paabo that human genetic diversity consists of clines in allele frequencies (Hardimon, 2017: 89). Other naysayers state that all Rosenberg et al show is what we can “see with our eyes.” But the computer does not partition individuals into populations based on anything that can be seen with the eyes; it does so with an algorithm applied to genetic markers alone.
Hardimon also accepts that black Africans, Caucasians, East Asians, American Indians, and Oceanians can be said to be races in the basic sense because “they constitute a partition of the human species” and because they are distinguishable “at the level of the gene” (Hardimon, 2017: 93). And of course, K=5 shows that the 5 races are genetically distinguishable.
Hardimon finally discusses some medical significance of minimalist races. He states that if you are Caucasian, it is more likely that you carry a polymorphism that protects against HIV than a member of another race. Meanwhile, East Asians are more likely to carry alleles that make them more susceptible to Stevens-Johnson syndrome, or to a related condition in which the skin detaches (toxic epidermal necrolysis). Of course, the instances where this would matter in a biomedical context are rare, but they should still be at the back of everyone’s mind (as I have argued): there are times when one’s race can be medically significant.
Hardimon finally states that this type of “metaphysics of biological race” can be called “deflationary realism.” Deflationary because it “consists in the repudiation of the ideas that racialist races exist and that race enjoys the kind of biological reality that racialist race was supposed to have” and realism which “consists in its acknowledgement of the existence of minimalist races and the genetically grounded, relatively superficial, but still significant biological reality of minimalist race” (Hardimon, 2017: 95-96).
Minimalist races exist. Minimalist races are a biological reality because distinct visible patterns show differences between geographically isolated populations. This is enough for the classification of the five classic races we know of to be called race, be biologically real, and have a medical significance—however small—because certain biological/physical traits are tied to different geographic populations—minimalist races.
Hardimon (2017: 97) shows an alternative to racialism:
Deflationary realism provides a worked-out alternative to racialism—it is a theory that represents race as a genetically grounded, relatively superficial biological reality that is not normatively important in itself. Deflationary realism makes it possible to rethink race. It offers the promise of freeing ourselves, if only imperfectly, from the racialist background conception of race.
It is clear that minimalist races exist and are biologically real. You do not need to speak about supposed mental traits between these minimalist races; they are irrelevant to the existence of these minimalist biological races. As Hardimon (2017: 67) writes: “No reference is made to normatively important features such as intelligence, sexuality, or morality. No reference is made to essences. The idea of sharp boundaries between patterns of visible physical features or corresponding geographical regions is not invoked. Nor again is reference made to the idea of significant genetic differences. Reference is made only to groups that exhibit patterns of visible physical features that correspond to geographic ancestry.”
The minimalist biological concept of race stands up to numerous lines of argumentation, therefore we can say without a shadow of a doubt that minimalist biological race exists and is real.
Do pigmentation and the melanocortin system modulate aggression and sexuality in humans as they do in other animals? A Response to Rushton and Templer (2012)
Rushton et al have kept me pretty busy over the last year or so. I’ve debunked many of their claims that rest on biology—such as testosterone causing crime and aggression. The last paper Rushton published before he died in October of 2012 was an article with Donald Templer—another psychologist—titled Do pigmentation and the melanocortin system modulate aggression and sexuality in humans as they do in other animals? (Rushton and Templer, 2012), and in it they make a surfeit of bold claims that do not follow. They review animal studies on skin and fur pigmentation which show that the darker an animal’s skin or fur, the more likely it is to be aggressive and violent. They then conclude, of course (it wouldn’t be a Rushton article without it), that the long-debunked r/K ‘continuum’ explains the covariation between human populations in birth rate, longevity, violent crime, infant mortality, and rate of acquisition of HIV/AIDS.
In one of the very first articles I wrote on this site, I cited Rushton and Templer (2012) favorably (back when I had far less knowledge of biology and hormones). I was blinded by my biases and didn’t know much about what was being discussed. After learning more about biology and hormones over the years, I came to find out that the claims in the paper are wrong and that they make huge, sweeping conclusions based on a few correlations. Either way, I have seen the error of my ways and the biases that led me to the beliefs I held, and when I learned more about hormones and biology I saw how ridiculous some of the papers I cited in the past truly were.
Rushton and Templer (2012) start off the paper by discussing Ducrest et al (2008), who state that within each species studied, darker-pigmented individuals exhibited higher rates of aggression, sexuality, and social dominance (which is caused by testosterone) than lighter-pigmented individuals of the same species. They state that this is due to pleiotropy—when a single gene has two or more phenotypic effects. They then refer to Rushton and Jensen (2005) to reference the claim that low IQ is correlated with skin color (skin color doesn’t cause IQ, obviously).
They then state that, in each of 40 vertebrate species, the darker-pigmented members had higher levels of aggression and sexual activity along with a larger body size, better stress resistance, and more physical activity while grooming (Ducrest, Keller, and Roulin, 2008). Rushton and Templer (2012) then state that this relationship was ‘robust’ across numerous species: 36 species of birds, 4 species of fish, 3 species of mammals, and 4 species of reptiles.
Rushton and Templer (2012) then discuss the “Validation of the pigmentation system as causal to the naturalistic observations was demonstrated by experimentally manipulating pharmacological dosages and by studies of cross-fostering”, citing Ducrest, Keller, and Roulin (2008). They even state that “Placing darker versus lighter pigmented individuals with adoptive parents of the opposite pigmentation did not modify offspring behavior.” Seems legit. Must mean that their pigmentation caused these differences. They then state something patently ridiculous: “The genes that control that balance occupy a high level in the hierarchical system of the genome.” Unfortunately for their hypothesis, there is no privileged level of causation (Noble, 2016; also see Noble, 2008), so this is a nonsense claim. Genes are not ‘blueprints’ or ‘recipes’ (Oyama, 1985; Schneider, 2007).
They then refer to Ducrest, Keller and Roulin (2008: 507) who write:
In this respect, it is important to note that variation in melanin-based coloration between human populations is primarily due to mutations at, for example, MC1R, TYR, MATP and SLC24A5 [29,30] and that human populations are therefore not expected to consistently exhibit the associations between melanin-based coloration and the physiological and behavioural traits reported in our study.
This quote, however, seems to be ignored by Rushton and Templer (2012) throughout the rest of their article; even though they briefly mention the paper and how one should be ‘cautious’ in interpreting its data, they brush it under the rug so as not to have to contend with it. Rushton and Templer (2012) then cite the famous silver fox study, in which foxes were bred for tameness: the foxes lost their dark fur, became lighter and, apparently, were less aggressive than their darker-pigmented kin. These animal studies are, in my view, useless for correlating skin color and the melanocortin system with aggressive behavior in humans, so let’s see what they write about human studies.
It’s funny, because Rushton and Templer (2012) cite Ducrest, Keller, and Roulin (2008: 507) to show that caution should be made when assessing any so-called differences in the melanocortin system between human races. They then disregard that by writing “A first examination of whether melanin based pigmentation plays a role in human aggression and sexuality (as seen in non-human animals), is to compare people of African descent with those of European descent and observe whether darker skinned individuals average higher levels of aggression and sexuality (with violent crime the main indicator of aggression).” This is a dumb comparison. Yes, African nations commit more crime than European nations, but does this mean that the skin color (or whatever modulates skin color/melanocortin system) is the cause for this? No. Not at all.
There really isn’t much to discuss here, though, because they just run through how different African nations have higher levels of crime than European and East Asian nations, and how blacks report having more sex and feeling less guilty about it. Rushton and Templer (2012) then state that one study “asked married couples how often they had sex each week. Pacific Islanders and Native Americans said from 1 to 4 times, US Whites answered 2–4 times, while Africans said 3 to over 10 times.” They then switch over to their ‘replication’ of this finding, using the data from Alfred Kinsey (Rushton and Bogaert, 1988). Unfortunately for Rushton and Bogaert, there are massive problems with this data.
The Kinsey data, though, can hardly be seen as representative (Zuckerman and Brody, 1988), and it is also based on outdated, non-representative, non-random samples (Lynn, 1989). Rushton and Templer (2012) also discuss so-called differences in penis size between races. But I have written two response articles on the matter and shown that Rushton used shoddy sources, like the anonymous ‘French Army Surgeon’ who contradicts himself: “Similarly, while the French Army surgeon announces on p. 56 that he once discovered a 12-inch penis, an organ of that size becomes “far from rare” on p. 243. As one might presume from such a work, there is no indication of the statistical procedures used to compute averages, what terms such as “often” mean, how subjects were selected, how measurements were made, what the sample sizes were, etc” (Weizmann et al, 1990: 8).
Rushton and Templer (2012) invoke, of course, Rushton’s (1985; 1995) r/K selection theory as applied to human races. I have written numerous articles on r/K selection and attempts at reviving it, but it is long dead, especially as a way to describe human populations (Anderson, 1991; Graves, 2002). The theory was refuted in the late 70s (Graves, 2002), and replaced with age-specific mortality (Reznick et al, 2002). Some of his larger claims I will cover in the future (like how r/K relates to criminal activity), but he just goes through all of the same old motions he’s been going through for years, bringing nothing new to the table. In all honesty, testosterone is one of the pillars of Rushton’s r/K selection theory (e.g., Lynn, 1990; Rushton, 1997; Rushton, 1999; Hart, 2007; Ellis, 2017; extensive arguments against Ellis, 2017 can be found here). If testosterone doesn’t do what he believes it does and the levels of testosterone between the races are not as high as believed/non-existent (Gapstur et al, 2002; read my discussion of Gapstur et al 2002; Rohrmann et al, 2007; Richard et al, 2014. Though see Mazur, 2016 and read my interpretation of the paper) then we can safely disregard their claims.
Another is that Blacks have the most testosterone (Ellis & Nyborg, 1992), which helps to explain their higher levels of athletic ability (Entine, 2000).
As I have said many times in the past, Ellis and Nyborg (1992) found a 3 percent difference in testosterone levels between white and black ex-military men. This is irrelevant. Rushton and Templer then cite John Entine's (2000) book Taboo: Why Black Athletes Dominate Sports and Why We're Afraid to Talk About It, but this doesn't make sense: Entine literally cites Rushton, who cites Ellis and Nyborg (1992) and Ross et al (1986) (stating that blacks have 3-19 percent higher levels of testosterone than whites, citing Ross et al's 1986 uncorrected numbers). I have specifically pointed out numerous flaws in that analysis, so Ross et al (1986) cannot seriously be used as evidence for large testosterone differences between the races. I have also cited Fish (2013), who wrote about Ellis and Nyborg (1992):
“These uncorrected figures are, of course, not consistent with their racial r- and K-continuum.”
Rushton and Templer (2012) then state that testosterone acts like a 'master switch' (Rushton, 1999), implicating testosterone as a cause of aggression. But I have shown that this is not true: aggression raises testosterone production; testosterone does not cause aggression. Testosterone does control muscle mass, of course. Rushton also claimed that blacks have deeper voices due to higher levels of testosterone, but this claim does not hold up in newer studies.
Rushton and Templer (2012) then shift gears to discuss Templer and Arikawa’s (2006) study on the correlation between skin color and ‘IQ’. However, there is something important to note here from Razib:
we know the genetic architecture of pigmentation. that is, we know all the genes (~10, usually less than 6 in pairwise between population comparisons). skin color varies via a small number of large effect trait loci. in contrast, I.Q. varies by a huge number of small effect loci. so logically the correlation is obviously just a correlation. to give you an example, SLC45A2 explains 25-40% of the variance between africans and europeans.
long story short: it’s stupid to keep repeating the correlation between skin color and I.Q. as if it’s a novel genetic story. it’s not. i hope don’t have to keep repeating this for too many years.
Rushton and Templer (2012: 7) conclude:
The melanocortin system is a physiological coordinator of pigmentation and life history traits. Skin color provides an important marker placing hormonal mediators such as testosterone in broader perspective.
I don’t have a problem with the claim that the melanocortin system is a physiological coordinator of pigmentation, because it's true and we have a good understanding of the physiology of the melanocortin system (see Cone, 2006 for a review). EvolutionistX also has a great article reviewing some studies (mouse studies and some others) showing that increasing melatonin appears to decrease melanin.
Rushton and Templer (2012) make huge assumptions not warranted by any data. For instance, Rushton states in his VDare article on the subject, J. Phillipe Rushton Says Color May Be More Than Skin Deep: "But what about humans? Despite all the evidence on color, aggression, and sexuality in animals, there has been little or no discussion of the relationship in people. Ducrest & Co. even warned that genetic mutations may make human populations not exhibit coloration effects as consistently as other species. But they provided no evidence." All Rushton and Templer (2012) do in their article is restate known relationships between crime and race and then attempt to implicate the melanocortin system as a factor driving those relationships, on the basis of a slew of animal studies. Even then, the claim that Ducrest, Keller, and Roulin (2008: 507) provide no evidence for their warning is incorrect: immediately before that warning, they wrote "In this respect, it is important to note that variation in melanin-based coloration between human populations is primarily due to mutations at, for example, MC1R, TYR, MATP and SLC24A5 [29,30]. . ." Melanin does not cause aggression, and it does not cause crime. Rushton and Templer assume too much on the basis of no human evidence, while their whole hypothesis is structured around a pile of animal studies.
In conclusion, Rushton and Templer seem not to understand the physiology of the melanocortin system if they believe that pigmentation and the melanocortin system modulate aggression and sexual behavior in humans. I know of no evidence for these assertions (actual studies, that is, not Rushton and Templer's 2012 method of restating relationships with crime and then asserting that, because such relationships are seen in animals, the melanocortin system must modulate them in humans too). Restating relationships between crime and race, country of origin and race, and supposed correlations between testosterone and crime (with blacks supposedly having higher testosterone than whites) does nothing to show that any of this is caused by the melanocortin system and pigmentation; that claim has no basis in reality.
Humans reach their maximum height at around their mid-20s. It is commonly thought that taller people have better life outcomes and are in general healthier, but this misconception stems from misunderstandings about the human body. In reality, shorter people live longer than taller people. (Manlets of the world should be rejoicing; in case anyone is wondering, I am 5’10”.) This flies in the face of what people think, and it may be counter-intuitive to some, but the logic, and the data, are sound. I will touch on mortality differences between tall and short people and, at the end, talk a bit about shrinking with age (and a study claiming there is little or no decrease in height, which is flawed because it relies on self-reports).
One reason the misconception that taller people live longer, healthier lives than shorter people persists is the correlation between height and IQ: people assume the two traits are ‘similar’ in that they become ‘stable’ at adulthood. But one way to explain that relationship is that IQ correlates with height because higher-SES people can afford better food and are thus better nourished. Either way, it is a myth that taller people have lower rates of all-cause mortality.
The truth of the matter is this: smaller bodies live longer lives, and this is seen in both the animal kingdom and humans; smaller body size independently reduces mortality (Samaras and Elrick, 2002). They discuss numerous lines of evidence, from human to animal studies, showing that smaller bodies have a lower chance of all-cause mortality. One of the reasons is that larger bodies have more cells, which are in turn more exposed to carcinogens; larger bodies therefore have higher rates of cancer, which raises mortality rates. Samaras (2012) has another paper reviewing the implications and proposing other causes for this observation: reduced cell damage, lower DNA damage, and lower cancer incidence, with other, hormonal differences between tall and short people explaining more of the variation between them.
One study found a positive linear correlation between height and cancer mortality. Lee et al (2009) write:
A positive linear association was observed between height and cancer mortality. For each standard deviation greater height, the risk of cancer was increased by 5% (2–8%) and 9% (5–14%) in men and women, respectively.
One study suggests that “variations in adult height (and, by implication, the genetic and other determinants of height) have pleiotropic effects on several major adult-onset diseases” (The Emerging Risk Factors Collaboration, 2012). Taller people are also at greater risk for heart attack (Samaras, 2013). Samaras attributes shorter people’s advantage to factors “including reduced telomere shortening, lower atrial fibrillation, higher heart pumping efficiency, lower DNA damage, lower risk of blood clots, lower left ventricular hypertrophy and superior blood parameters.” Height, though, may be inversely associated with long-term incidence of fatal stroke (Goldbourt and Tanne, 2002). Schmidt et al (2014) conclude: “In conclusion, short stature was a risk factor for ischemic heart disease and premature death, but a protective factor for atrial fibrillation. Stature was not substantially associated with stroke or venous thromboembolism.” Cancer incidence also increases with height (Green et al, 2011). Samaras, Elrick, and Storms (2003) suggest that women live longer than men due to the height difference between them: men are about 8 percent taller than women but have a 7.9 percent lower life expectancy at birth.
Height at mid-life, too, is a predictor of mortality, with shorter people living longer lives (He et al, 2014). There are numerous lines of evidence that shorter people, and shorter ethnies, live longer lives. One study on patients undergoing maintenance hemodialysis stated that “height was directly associated with all-cause mortality and with mortality due to cardiovascular events, cancer, and infection” (Daugirdas, 2015; Shapiro et al, 2015). Even childhood height is associated with prostate cancer acquisition (Aarestrup et al, 2015), and greater adult height is associated with a higher risk of prostate cancer, while men who are both tall and carry more adipose tissue (body fat) are more likely to die younger (Perez-Cornago et al, 2017). On the other hand, short height is a risk factor for death in hemodialysis patients (Takenaka et al, 2010). Though there are conflicting papers regarding short height and CHD, many reviews show that shorter people have better health outcomes than taller people.
Sohn (2016) writes:
An additional inch increase in height is related to a hazard ratio of death from all causes that is 2.2% higher for men and 2.5% higher for women. The findings are robust to changing survival distributions, and further analyses indicate that the figures are lower bounds. This relationship is mainly driven by the positive relationship between height and development of cancer. An additional inch increase in height is related to a hazard ratio of death from malignant neoplasms that is 7.1% higher for men and 5.7% higher for women.
It has been widely observed that tall individuals live longer or die later than short ones even when age and other socioeconomic conditions are controlled for. Some researchers challenged this position, but their evidence was largely based on selective samples.
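Sohn’s per-inch hazard ratios compound multiplicatively, so small per-inch figures add up over realistic height differences. A minimal sketch, assuming the per-inch ratio applies the same way at every height (the compounding illustration is mine, not the paper’s):

```python
def compound_hr(per_inch_hr: float, inches: float) -> float:
    """Hazard ratio for a height difference of `inches`,
    assuming the per-inch hazard ratio compounds multiplicatively."""
    return per_inch_hr ** inches

# All-cause mortality for men: +2.2% per inch (Sohn, 2016).
# Four extra inches then implies roughly a 9% higher hazard.
print(round(compound_hr(1.022, 4), 3))  # -> 1.091
```

The same arithmetic applied to the cancer-specific figure (7.1 percent per inch for men) grows much faster, which is consistent with Sohn’s point that the height-mortality link is mainly driven by cancer.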
Four additional inches of height in post-menopausal women coincided with a 13 percent increase in all-cancer risk (Kabat et al, 2013), while taller people also have less efficient lungs (Leon et al, 1995; Smith et al, 2000). Samaras and Storms (1992) write: “Men of height 175.3 cm or less lived an average of 4.95 years longer than those of height over 175.3 cm, while men of height 170.2 cm or less lived 7.46 years longer than those of at least 182.9 cm.”
Lastly, regarding height and mortality, Turchin et al (2012) write “We show that frequencies of alleles associated with increased height, both at known loci and genome wide, are systematically elevated in Northern Europeans compared with Southern Europeans.” This makes sense, because Southern European populations live longer (and have fewer maladies) than Northern European populations:
Compared with northern Europeans, shorter southern Europeans had substantially lower death rates from CHD and all causes. Greeks and Italians in Australia live about 4 years longer than the taller host population … (Samaras and Elrick, 2002)
So we have some data that doesn’t follow the trend, but most of the data points in the direction that taller people live shorter lives: higher rates of cancer, lower heart-pumping efficiency (the heart needs to pump more blood through a bigger body), and so on. It makes logical sense that a shorter body would have fewer maladies: higher heart-pumping efficiency, lower atrial fibrillation, lower DNA damage, and lower risk of blood clotting (duh) compared to taller people. So if you’re a normal American man and you want to live a good, long life, you’d want to be shorter, rather than taller.
Finally, do we truly shrink as we age? Steve Hsu has an article on this matter, citing Birrell et al (2005), a longitudinal study in Newcastle, England that began in 1947. The children were measured when full height was expected to be achieved, at about 22 years of age, and were then followed up at age 50. Birrell et al (2005) write:
Height loss was reported by 57 study members (15%, median height loss: 2.5 cm), with nine reporting height loss of >3.5 cm. However, of the 24 subjects reporting height loss for whom true height loss from age 22 could be calculated, assuming equivalence of heights within 0.5 cm, 7 had gained height, 9 were unchanged and only 8 had lost height. There was a poor correlation between self-reported and true height loss (r=0.28) (Fig. 1).
In this population, self-reported height loss was off the mark, and Hsu takes this conclusion further than he should, writing: “Apparently people don’t shrink quite as much with age as they think they do.” No no no. This study is not good evidence for that. We begin shrinking at around age 30:
Men gradually lose an inch between the ages of 30 to 70, and women can lose about two inches. After the age of 80, it’s possible for both men and women to lose another inch.
The conclusion from Hsu on that study is not warranted. To see this, we can look at Sorkin, Muller, and Andres (1999) who write:
For both sexes, height loss began at about age 30 years and accelerated with increasing age. Cumulative height loss from age 30 to 70 years averaged about 3 cm for men and 5 cm for women; by age 80 years, it increased to 5 cm for men and 8 cm for women. This degree of height loss would account for an “artifactual” increase in body mass index of approximately 0.7 kg/m2 for men and 1.6 kg/m2 for women by age 70 years that increases to 1.4 and 2.6 kg/m2, respectively, by age 80 years.
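The “artifactual” BMI increase Sorkin, Muller, and Andres describe is just the BMI formula at work: weight stays the same while the height in the denominator shrinks. A minimal sketch, using assumed illustrative figures (a 70 kg man losing the roughly 3 cm they report for men by age 70; the specific weight and starting height are my assumptions, not the paper’s):

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight in kg divided by height in metres, squared."""
    return weight_kg / height_m ** 2

# Assumed illustrative figures: a 70 kg man whose measured height
# falls from 1.75 m at age 30 to 1.72 m at age 70, weight unchanged.
increase = bmi(70, 1.72) - bmi(70, 1.75)
print(round(increase, 2))  # -> 0.8, the same order as the paper's ~0.7 kg/m^2
```

The exact figure depends on the starting height and weight chosen, which is why the paper reports an approximate average rather than a single value.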
So Hsu’s conclusion is wrong. We do shrink with age, for myriad reasons: the discs between the vertebrae decompress and dehydrate, the aging spine becomes more curved due to loss of bone density, and loss of torso muscle can change posture. Some of this is preventable, but some height decrease will be notable for most people. Either way, Hsu doesn’t know what he’s talking about here.
In conclusion, while there is some conflicting data on whether tall or short people have lower all-cause mortality, the data seem to point to the fact that shorter people live longer: they have lower atrial fibrillation, higher heart-pumping efficiency, lower DNA damage, lower risk of blood clots (since the blood doesn’t have to travel as far in shorter people), and superior blood parameters. With the exception of a few diseases, shorter people have a higher quality of life and higher lung efficiency. We do get shorter as we age, though with the right diet we can ameliorate some of those effects (for instance, keeping calcium intake high). There are many reasons why we shrink with age, and the study that Hsu cited isn’t good compared to the other data in the literature on this phenomenon. All in all, shorter people live longer for myriad reasons, and we do shrink as we age, contrary to Steve Hsu’s claims.