
A Simple Argument for the Existence of Race

1550 words

Race deniers say that the genetic distance between races is too small to call the so-called races “races”. They latch on to Lewontin’s 1974 analysis, trumpeting that the genetic distance is too small for there to be true “races”. There is, however, a simple way to bypass the useless discussion that ensues when one cites genetic evidence for the existence of race: just use this simple argument:

P1) There are differences in patterns of visible physical features which correspond to geographic ancestry
P2) These patterns are exhibited between real, existing groups (i.e., individuals who share common ancestry)
P3) These real, existing groups that exhibit these physical patterns by geographic ancestry satisfy conditions of minimalist race
C) Therefore race exists and is a biological reality

This argument is simple; anyone who denies the conclusion needs to provide a good counter-argument, and I am not aware of any that succeed.

P1 states that there are patterns of visible physical features which correspond to geographic ancestry. This is due to the climates each race evolved in over evolutionary history. Since these phenotypes are not randomly distributed across the globe but show distinct patterning based on geographic ancestry, we can say that P1 is true: different populations show patterns of different physical features which are not randomly distributed across the globe. Further, since P1 establishes that races are populations that look different from each other, it guarantees that groups like the Amish, social classes, etc. are not counted as races. P1 also allows a member of a given race not to show the physical characters typical of that race, and it allows for the possibility that individuals from two different races may not differ in their physical characters. The visible physical characters that differ between the populations we then call races also need to be heritable to be biological. “Because the visible physical features of race are heritable, the skin color, hair type, and eye shape of children of Rs tend to resemble the skin color, hair type, and eye shape of their parents” (Hardimon, 2017: 35). P1 is true.

P2 states that these patterns of visible physical features are exhibited between real, existing groups. That is, the groups that exhibit these patterns exist in reality; no one denies this either. Differences in physical features that these real, existing groups exhibit can then be used as proxies for the factors in P1. As for which populations figure into this concept, the minimalist race concept doesn’t say—it only establishes the biological existence of races. “In recent years the concept of the continent has come under fire for not being well defined. It is of interest that the formation of the concepts CONTINENT and RACE are roughly coeval. One wonders if the geneses of the two ideas are mutually entwined. Could it be that our idea of continent derives in part from the idea of the habitat of a racial group? Could it be that the idea of a racial group gets part of its content from the idea of a group whose aboriginal home is a distinctive continent? Perhaps the concepts should be thought of as having formed in tandem, each helping to fix the other’s reference” (Hardimon, 2017: 51). Since these real, existing populations that were geographically separated for thousands of years show these visible physical patterns, P2 is true.

P3 follows from the specification of the concept of minimalist race. If these populations exhibit distinct visible characters that are non-randomly distributed across the globe, then they satisfy the conditions of the minimalist race concept. The specification of the minimalist concept of race states that groups satisfy it by being distinguishable by patterns of visible physical features (P1) and by existing as real groups of individuals who share a common ancestry peculiar to them, deriving from a distinct geographic location (P2). Since P1 and P2 are true, P3 follows, which then leads us to the conclusion.

C is then the logical conclusion of the three premises: race exists and is a biological reality, since the patterns of visible physical features are non-randomly distributed across the globe and are exhibited by real, existing groups. Since the conclusion follows from the premises, the argument is valid; since the premises are true, the argument is also sound. No one can—logically—deny the existence of race when presented with this proof.

Notice that the argument doesn’t identify which populations are designated as “races” (that’s for another article); it just establishes that race exists and that it exists as a biological reality. Notice also how this conception of race is somewhat like the “racialist concept”, but stripped down to its barest bones—taking out the normatively important characters and retaining only the superficial biological physical features (these features establish minimalist races as biologically existing).

Notice, too, that I did not appeal to any genetic differences between the races, indeed, in my opinion, they are not needed when discussing race. All that is needed when discussing race and whether or not it is a biological reality is asking three simple questions:

1) Are there differences in patterns of visible physical features that correspond to geographic ancestry? Yes.

2) Are these patterns exhibited by real, existing groups? Yes.

3) Do these real, existing groups satisfy conditions of minimalist race? Yes.

Therefore race exists.

These three simple questions (just take the premises and ask them as questions) will have one—knowingly or not—admit to the biological reality and existence of race.

Do note, though, that nothing in this argument brings up anything about what we “can’t see”, meaning things like “intelligence” or the mores of these races. This concept is distinct from the racialist concept in that it does not mention normatively important characters; it does not posit a relationship between visible physical characters and normatively important characters; and it does not “rank” populations on some type of scale. “Also, the conjoint fact that a group is characterized by a distinctive pattern of visible physical characteristics and consists of members who are linked by a common ancestry and originates from a distinctive geographic location is of no intrinsic normative significance. The status of being a minimalist race has no intrinsic normative significance” (Hardimon, 2017: 32).

Clearly, one does not need to invoke genetic differences to show that race exists as a biological reality. That races differ in patterns of visible physical features which are inherited from parents and are heritable establishes the biological reality of minimalist race. I see no way one could logically deny the existence of race given the argument provided in this article. Race exists and is a biological reality, and the argument for the existence of minimalist race establishes this fact. Races differ in physiognomy and morphology; these physical differences are non-randomly distributed by geographic ancestry/at the continental level. The populations that show these physical differences share a peculiar ancestry. Knowing these facts, we can safely infer the existence of race. It is the only logical conclusion to come to. Note that the minimalist concept is deflationist—racialist races do not exist, and the minimalist concept enjoys the reality that the racialist concept was supposed to have; it is deflationary in that it strips the normatively loaded differences out of the racialist concept. It is realist since it acknowledges minimalist race as a genetically grounded, relatively superficial, but still significant biological reality.

Races can exist as minimalist races and socialraces—no contradiction exists. Minimalist race and its “scientific” companion, populationist race (which I will cover in the future), show that there is a well-formulated argument for the existence of race (the minimalist race concept), while the other concept shows how race is grounded in science and partitions populations into races (the populationist concept; both are deflationary). (Read the descriptions of racialist race, minimalist race, populationist race, and socialrace.) You don’t need genes to delineate race; you only need a sound, valid argument based on biological principles. Minimalist races exist.

Race exists and is a biological reality even if it is ‘socially constructed’ (what isn’t?); our social constructs still correspond to distinct breeding populations that share peculiar ancestry and show patterns of visible physical features, which establishes the existence of race.

[Flow chart from Hardimon (2017: 177)]

(I also came across a book review by philosopher Joshua Glasgow (“Book Review: Rethinking Race: The Case for Deflationary Realism, by Michael O. Hardimon. Harvard University Press, 2017. Pp. 240.”), author of A Theory of Race (2009), who has some pretty good critiques of Hardimon’s theses in his book, though not good enough. I am going to cover a bit more about these concepts and then discuss his article. I will also cover “Latinos” and mixed-race people as regards these concepts as well.)

4/19/2018 edit: Two more simple arguments:

(Where P is a population, C is a continent, R is a race, and T is a trait or set of traits.)

A population P that evolved on continent C has physical traits T which correspond to C.

Ps correspond to Cs because each P evolved in its C. Ps that evolved in different Cs have differing physical features, non-randomly distributed between Cs. Ps are Rs, and Rs are races. Therefore race exists and is biological.

 

Snakes, Spiders, and Just-so stories

1550 words

Evolutionary psychology (EP) purports to explain how and why humans act the way they do today. It is a framework that assumes that certain mental/psychological traits were useful in the EEA (environment of evolutionary adaptedness) and thus were selected for over time. It assumes that traits are adaptations and then “works backward” by reverse engineering—the process of inferring the design of a mechanism from its function. (Many problems exist there which will be covered in the future; see also Evolutionary Psychology: The Burdens of Proof by Lloyd, 1999.) But let’s discuss snakes and other animals that we fear today: is there an evolutionary basis for this behavior, and can we really know if there is?

Fear of snakes and spiders

Ohman (2009: 543) writes that “Snakes … have a history measured in many millions of years of shaping mammalian and primate evolution in important respects” and that “snakes … are promising tools for probing the emotional ramifications of deep evolutionary heritages and their interaction with the current environment.” Are they promising tools, though? Were there enough snakes in our EEA to make it possible for us to ‘evolve’ these types of ‘fear modules’ (Ohman and Mineka, 2001)? No: it is impossible for our responses to snakes—along with some other animals—to be an evolved response to what occurred in our EEA, because the number of snakes venomous and dangerous to humans and our ancestors was, in reality, not all that high.

Ohman and Mineka (2003: 5-6) also write that “the human dislike of snakes and the common appearances of reptiles as the embodiment of evil in myths and art might reflect an evolutionary heritage” and “fear and respect for reptiles is a likely core mammalian heritage. From this perspective, snakes and other reptiles may continue to have a special psychological significance even for humans, and considerable evidence suggests this is indeed true. Furthermore, the pattern of findings appears consistent with the evolutionary premise.”

Even the APA says that an evolutionary predisposition to fear snakes—but not spiders—exists in primates (citing research from Kawai and Koda, 2016). Conclusions such as this—and there are many others—arise from the ‘fact’ that, in our EEA, these animals were harmful to us and, over time, we evolved to fear snakes (and spiders), but there are some pretty big problems with this view.

Jankowitsch (2009) writes that “Fear of snakes and spiders, which are both considered to be common threats to survival in early human history, are thought to be innate characteristics in human and nonhuman primates, not learned.” For this to be the case, however, there would need to have been many spiders and snakes in our EEA.

Philosopher of science Robert C. Richardson, in his book Evolutionary Psychology as Maladapted Psychology (Richardson, 2007), concludes that EP explanations are speculation disguised as results. He says that the stories claiming we evolved to fear snakes and spiders lack evidence. Most spiders aren’t venomous and pose no risk to humans. In the case of snakes, only a minority of species (about one quarter in Uganda, for example) pose a threat to humans, so we’d have to expect this ‘module’ to have evolved on the basis of that minority of dangerous snakes:

On this view, at least some human fears (but not all) are given explanations in evolutionary terms. So a fear of snakes or spiders, like our fear of strangers and heights, serves to protect us from dangers. Having observed that snakes and spiders are always scary, and not only to humans, but other primates, Steven Pinker (1997: 386) says “The common thread is obvious. These are the situations that put our evolutionary ancestors in danger. Spiders and snakes are often venomous, especially in Africa…. Fear is the emotion that motivated our ancestors to cope with the dangers they were likely to face” (cf. Nesse 1990). This is a curious view, actually. Spiders offer very little risk to humans, aside from annoyance. Most are not even venomous. There are perhaps eight species of black widow, one of the Sydney funnel web, six cases of brown recluses in North and South America, and one of the red banana spider in Latin America. These do present varying amounts of risk to humans. They are not ancestrally in Africa, our continent of origin. Given that there are over 37,000 known species of spiders, that’s a small percentage. The risk from spiders is exaggerated. The “fact” that they are “always scary” and the explanation of this fact in terms of the threat they posed to our ancestors is nonetheless one piece of lore of evolutionary psychology. Likewise, snakes have a reputation among evolutionary psychologists that is hardly deserved. In Africa, some are truly dangerous, but by no means most. About one quarter of species in Uganda pose a threat to humans, though there is geographic variability. It’s only in Australia—hardly our point of origin—that the majority of snakes are venomous. Any case for an evolved fear of snakes would need to be based on the threat from a minority. In this case too, the threat seems exaggerated. There is a good deal of mythology in the anecdotes we are offered. It is not altogether clear how the mythology gets established, but it is often repeated, with scant evidence. (pg. 28)

The important point to note here, of course, is the assumption that we have an evolved response to fear snakes (and spiders) based on a minority of species actually dangerous to humans.

Just-so stories

The EP enterprise is built on what Gould (1978) termed “just-so stories”, borrowed from Rudyard Kipling’s (1902) book of stories called Just So Stories (which he told to his daughter), in which he imagined ways that certain animals came to look the way they do today. These stories needed to be told “just so” or she would complain.

And the Camel said ‘Humph!’ again; but no sooner had he said it than he saw his back, that he was so proud of, puffing up and puffing up into a great big lolloping humph.

‘Do you see that?’ said the Djinn. ‘That’s your very own humph that you’ve brought upon your very own self by not working. To-day is Thursday, and you’ve done no work since Monday, when the work began. Now you are going to work.’

‘How can I,’ said the Camel, ‘with this humph on my back?’

‘That’s made a-purpose,’ said the Djinn, ‘all because you missed those three days. You will be able to work now for three days without eating, because you can live on your humph; and don’t you ever say I never did anything for you. Come out of the Desert and go to the Three, and behave. Humph yourself!’ (How the Camel got His Hump)

These stories “sound good”, but is there any way to verify them? One can ask the same of EP hypotheses: can they be independently verified? The problem with functional verification is that we cannot possibly know the EEA of humans—or of other animals—and thus any explanation for the functionality of a certain trait is nothing but a just-so story.

Kaplan (2002: S302) argues that:

Evolutionary psychology has not yet developed the tools necessary to uncover our “shared human nature” (if such there is—see Dupre 1998) any more than physical anthropology has been able to uncover the specifics even of such clear human adaptations as our bipedalism. It is obvious that our brains were subject to selective pressures during our evolutionary history; it is not at all obvious what those pressures were.

I don’t deny that we are (partly) products of evolution; natural selection isn’t the only mode of evolution. I do deny, though, that these fantasy stories can tell us anything about how and why we evolved. I don’t see how EP can develop the tools to uncover our “shared human nature”—or any other “nature” for that matter—unless time machines are developed and we can directly observe the evolution of whatever trait X is being discussed.

A simple argument to show that EP hypotheses are just-so stories:

P1) A just-so story is an ad-hoc hypothesis

P2) A hypothesis is ad-hoc if it’s not independently verified (verified independently of the data the hypothesis purports to explain)

P3) EP hypotheses cannot be independently verified

C) Therefore EP hypotheses are just-so stories

This simple argument shows that all EP hypotheses are just-so stories, since they cannot be verified independently of the data they attempt to explain. Stories can “sound good”, they can “sound logical”, they can even be “parsimonious” and the “inference to the best explanation”, but just because a story is “parsimonious”, “sounds logical”, or is the “inference to the best explanation” doesn’t make it true. The above argument holds for one of HBD’s pet theories, too: the cold winter theory (CWT). It cannot be independently verified either, and it was formulated after national IQ differences were known; therefore CWT is a just-so story.

(I will cover this more in the future.)

Conclusion

Stories about snakes and spiders in our evolutionary history are likely wrong—especially if they derive from what supposedly occurred in our EEA, an environment we know almost nothing about. The fact of the matter is, regarding snakes and spiders, there is no evidence that our fear of them is an adaptive response to what occurred in our EEA. That is a just-so story. Just-so stories are ad-hoc hypotheses that cannot be independently verified; therefore EP hypotheses are just-so stories.

Afrocentric Melanist Theorists

2650 words

Extremists exist in every ideology (there are too many to name, but take books from March of the Titans (Kemp, 1999) to Testosterone Rex (Fine, 2017) as examples), and some are (in my opinion) more extreme (and funnier and more delusional) than others (even if they’re almost neck and neck). Afrocentrists veer toward the extreme side (as Nordicists veer toward the extreme on the opposite end). But certain beliefs these ideologies hold may “sound right” to the uneducated ear, especially when they begin to weave fantastical stories with physiological terminology in order to woo the listener. You see things like “Melanin affects the idea of white supremacy”, but what does this really mean (you will see near the end of this article)? The one saying it may believe it, though none of it makes sense. The most interesting of these extremist views are the Afrocentric ones, especially the melanin theory that gets thrown around in Afrocentric circles.

Melanin production

Melanin is produced by melanocytes. It is synthesized from L-tyrosine with the help of tyrosinase, one of the main enzymes of melanin production (Solano, 2014; D’Mello et al, 2016). (See Cone, 2006 for a review of the melanocortin system.) Melanin absorbs energy from UV rays, which is then dissipated in the body as heat (de Montellano, 1993). There are three types of melanin: eumelanin (of which there are two types, brown and black), pheomelanin, and neuromelanin. Eumelanin and pheomelanin are present in the human epidermis (Thody et al, 1991; Solano, 2014) and are found in the hair and skin.

Races that live closer to the equator have higher concentrations of melanin in their skin (not neuromelanin, which will be discussed later), which causes dark skin pigmentation. But everyone on earth has around the same number of melanocytes; skin pigmentation differences come down to differences in UV exposure (against which melanin is protective, and in response to which it is produced; Brenner and Hearing, 2008), disease (albinism and vitiligo), the size of melanosomes (see below), and genetic makeup.

Europeans and Chinese have about half as much melanin as African and Indian skin types, while Africans have the largest melanosomes, followed by Indians, Mexicans, Chinese, and Europeans; variation in melanosome size may therefore also account for skin variation between races. It’s also interesting to note that people born in high-UV areas—regardless of ethnicity—have twice as much epidermal melanin as people born in low-UV areas (Alaluf et al, 2002).

Melanin and pseudoscience

Rushton and Templer (2012) wrongly hypothesized that the melanocortin system modulates sexuality and aggression in humans as it does in animals. The claims made may “sound good” to one who isn’t well-versed in the physiology of aggression and sexuality; to such people, Rushton and Templer’s hypothesis “sounds good enough”, and so they believe it without question. On the opposite side, you have black “academics” who believe that melanin gives blacks some type of “greatness” and is the reason for their “natural moves”. In the book Darwin’s Athletes: How Sport Has Damaged Black America and Preserved the Myth of Race, Hoberman (1997: 89) briefly discusses Afrocentric melanin theorists:

Finally, there are the melanin theorists, a motley collection of pseudo-scientific cranks and better-known members of the black academic demimonde who attended the Fourth Annual World Melanin Conference in Dallas in April 1989—Leonard Jeffries, John Henrik Clarke, Ivan Van Sertima, and others. For these racial biologists, the pigment that makes skin dark is “the Chemical Key to Black Greatness” and accounts for an entire range of superior black aptitudes: “The reason why Black athletes do so well and have these ‘natural moves’ is these melanic tracks in the brainstem tie into the cerebellum . . . a part of us that controls motor movement” (Dr. Richard King). The real significance of the melanin theory is that it is the reductio ad absurdum of black racial separatism, putting its adherents in a de facto alliance with white racists, who have their own reasons to establish separate racial physiologies. Afrocentric science curricula that promote melanin theory have been introduced in a number of urban school districts in the United States, thereby doing educational damage to those children who can least afford it.

Note the similarities to Rushton and Templer’s (2012) hypothesis about the melanocortin system in darker-pigmented races (mainly blacks, since that’s the race they theorized about). But what I find funniest about melanin theory is that some Afrocentrists use higher levels of melanin as “physiologic” proof that blacks are “superior athletes” (which can be explained without appealing to melanin). Do note, though, how the Afrocentric view of melanin and Rushton and Templer’s (2012) view of the melanocortin system and melanin are stark opposites of each other—and both are horribly wrong.

Now let’s look at some quotes from some Afrocentric websites.

These quotes are from Suzar’s (1999) book Blacked Out Through Whitewash: Exposing the Quantum Deception/Rediscovering and Recovering Suppressed Melanated (the author cites another book, Melanin: The Chemical Key to Black Greatness by Carol Barnes (1988)):

“…your mental processes (brain power) are controlled by the same chemical that gives Black humans their superior physical (athletics, rhythmic dancing) abilities. This chemical…is Melanin!”

Then writing:

The abundance of melanin in Black humans produces a superior organism both mentally and physically. Black infants sit, stand, crawl and walk sooner than whites, and demonstrate more advanced cognitive skills than their white counterparts because of their abundance of melanin. Melanin is the neuro-chemical basis for what is called “SOUL” in Black people. Melanin refines the nervous system in such a way that messages from the brain reach other areas of the body more rapidly in Black people than in the other. In the same way Blacks excel in athletics, Blacks can excel in all other areas as well (like they did in the past!) once the road blocks are removed.

[Chart reproduced from the book, pg. 56]

Notice how this uses Rushton-like data, similar to his ‘life history/r/K’ theory of human racial differences. People can have any kind of data they want, but when they start discussing the data they are leaving the realm of science and entering the realm of philosophy. They then interpret the data wrongly, as evidence for ‘superiority’ in certain traits, and those who are less informed will buy it without question. Do note the similarities to Rushton and Templer’s (2012) hypothesis on the causes of sexual behavior and aggression differences between human races: melanin and the melanocortin system as a partial cause of these racial disparities. You only need a ‘good story’ (a just-so story) that seems like a plausible explanation in order to lure someone unsuspecting into pseudo-science. (I would liken melanin to ‘g’ here. Both melanin and ‘g’ are given ‘powers’ they do not have; but ‘g’ doesn’t exist at all, so at least the Afrocentrists are discussing an actual pigment, though they horribly misrepresent what the data on melanin say.)

The most in-depth take-down of Afrocentric melanist theories is from de Montellano (1993). Afrocentric theory states that because black people—and the Egyptians, because they were supposedly black too (they weren’t)—have higher levels of melanin in their skin, this gives them physical and mental superiority over those with less melanin. Afrocentrists misinterpret (willingly or not) many papers in order to push their pseudo-scientific theories on the ignorant masses (this is already occurring in inner-city schools, widening the already-wide science gaps; see de Montellano, 1992). What Afrocentrists do not understand is that all humans have similar amounts of neuromelanin (which they wrongly conflate with skin melanin), and that neuromelanin levels in the brain are independent of melanin levels in the skin. So the fantastic claims that melanin causes physical and mental ‘superiority’ (whatever that is) in darker-skinned individuals are unfounded.

Afrocentrists further claim that since blacks have more skin melanin, they also have more melatonin and more beta-melanocyte-stimulating hormone. But melatonin in humans has no physiological relationship to skin color (de Montellano, 1993). Lastly, Afrocentric melanists also state, as I have covered before, that Europeans are African albino mutants.

In fact, the claim that whites are just African albino mutants is ridiculous. Whites can produce eumelanin, while albinos can’t. Albinos are also homozygous recessive—since albinism is a Mendelian disorder, one must be homozygous recessive for “the albinism gene” (de Montellano, 1993: 42). Two albinos can mate forever and they will never produce offspring with the ability to synthesize melanin. Therefore it is impossible for whites to have descended from African albinos.

De Montellano (1993) concludes that the theory is not to be taken seriously (of course), but states that “The idea that there are distinct races and that one is superior to the others is as racist and erroneous when it refers to high melanin levels as it was when it described low melanin levels (the Aryan ‘master race’)” (pg 54). Of course, no ‘master race’ exists, and the concept of ‘superiority’ has no basis in evolutionary biology, but race exists and is a biological reality. That doesn’t mean any of the Afrocentric claims covered here have any basis—they conflate neuromelanin with melanin in the skin, and even if they didn’t conflate the two, they still would not be correct.

The fatal flaw in this type of Afrocentric “reasoning” is that neuromelanin differs in structure, location, and biosynthesis from skin melanin. Afrocentrists assert that neuromelanin and skin melanin are correlated. What falsifies this assertion is that albinos have the same amount of neuromelanin in their brains as non-albinos. So all of the purported ‘mental and physical superiority’ said to be ‘caused by melanin’ makes no sense, because neuromelanin and skin melanin were conflated. Neuromelanin does not even have the physiologic effects that most Afrocentrists believe it does.

Most Afrocentric melanists also cite individuals who cite … rat studies and then extrapolate those results to humans. This is dumb. Yes, I know the tired old “Humans are animals too!”, but just because we’re animals doesn’t mean that physiology works the same way in all species; it’s just a bland appeal.

Perhaps one of the most amusing parts of de Montellano (1993) is where he quotes a few prominent Afrocentrists who ‘argue’ that white men are afraid of black men because “Africans have very dominant genes”:

The conspiracy to destroy black youth. . . . It has to do with the fact that in terms of genetics and genes that because Africans have dominant genes that it is very possible for Africans to annihilate the European population. And the best way to prevent the annihilation is to get to the root of the perpetrator who could do that.
And that, of course, would be African men. Because it is men, specifically African men, that start the reproductive process off. For example, in looking at the four possibilities of sexual relationships. Of looking at those four there is only one possibility to produce a European child. If you have an African man with an African woman you will produce a child of color. If you have an African man with a European woman you will also produce a child of color. If you have a European man with an African woman that will also produce a child of color. European men can only produce a child that looks like them when they connect with a European woman. As the result of that, then, European men are very much afraid of African men and the conspiracy is directly centered at them. . . . And that’s that conspiracy is synonymous with the word genocide, and genocide not only is gradual, it is collective (Kunjufu, 1989).

[…]

The reason that the Black male . . . is and always has been central to the issue of white supremacy is clarified by the definition of racism as white genetic survival. In the collective white psyche, Black males represent the greatest threat to white genetic survival because only males (of any color) can impose sexual intercourse, and Black males have the greatest genetic potential (of all non-white males) to cause white genetic annihilation. Thus, Black males must be attacked and destroyed in a power system designed to assure white genetic survival. . . . The prevention of white genetic annihilation is pursued through all means, including chemical and biological warfare. Today, the white genetic survival imperative, instead of using chemicals in gas chambers, is using chemicals in the streets-crack, cocaine, ecstasy, PCP, heroin and methadon [sic] (all “designer chemicals”). [Welsing, 1991a: 4]

Other, more outlandish ideas are quoted by de Montellano (1993) too, and all of the claims about the physiology of melanin and about neuromelanin conferring supernatural, physical, and mental powers are horribly flawed. These people have no understanding of the physiology of the pigment, nor of what they’re really speaking about. These physiological theories attempting to show racial ‘superiority’ make absolutely no sense to anyone with a basic understanding of the physiological systems involved.

Claims made by Afrocentrists regarding melanin and neuromelanin range from blacks having more melanin in their muscle cells, which is supposed to be the cause of black athleticism; to darker-eyed people having quicker reaction times, thought to be caused by melanin; to melanin centers in the brain being important for control and coordination of the body and for brain power; to melanin being critical for the control of memory, motivation, mental maturation, etc.; to melanin causing altered states of consciousness which then cause black people who attend church to speak in tongues; to melanin helping in the processing of memory; to melanin and the pineal gland being at their highest functionality in humans. Throughout, they conflate skin melanin with neuromelanin, which are two different pigments (references for these claims can be found in de Montellano, 1993).

Conclusion

Pseudo-science about melanin is rampant, no matter which side one is on. Both sides make ridiculous assertions and leaps of logic regarding melanin, and I find it very amusing that each group is talking about the same thing while arguing the polar opposite of the other. These misconceptions range from no understanding of physiology, to ideological biases, to delusions of ‘superiority’, to just plain ignorance. Afrocentrist fairy tales are most probably widening an already-wide science gap between blacks and whites. Of course, race doesn’t really have any bearing on whether or not you’ll believe something, though black kids are more susceptible to believing the fantastical stories and misunderstandings of physiology that come from their inner-city teachers, who will then indoctrinate them into their ideology.

Correlations between skin pigmentation and neuromelanin are nonexistent. Further, there is no known physiological relationship between melatonin and skin color in humans. Therefore, the assertion that blacks have more melatonin due to their skin color, and then have physical and mental superiority due to melanin, has absolutely no scientific basis (those who push these theories have no understanding of the physiology of the pigment they are discussing). Racial pride ‘stories’ are harmful to science education; I don’t see March of the Titans being taught in schools (if I am in error, let me know), but Afrocentric melanist theories are?

The most important thing to note here is the similarity between Rushton and Templer (2012) and the melanists. They mirror each other well: they are talking about the same pigment, yet the two groups reach wildly different conclusions. Rushton and Templer (2012) were driven by the (wrong) hypothesis that testosterone causes aggression and crime, and reasoned that since a whole slew of darkly pigmented animals are aggressive, the same should apply to humans because “evolution doesn’t stop at the neck”, as people say. On the other side, we have melanists making wild, almost sci-fi-like claims about the power and magic of this one pigment in black bodies and only black bodies. To believe something like that you’d have to be either ignorant or highly biased.

Melanin theory is clearly untenable, and Afrocentrists who push it should take a few physiology classes and learn what this pigment actually does in the human body, because they are woefully misinformed, reading books of pseudo-science.

The Weight Loss and Thermodynamics Fallacy

2000 words

Eat less and move more and you will lose weight. That’s the common mantra around the world, because it is what has been repeated for decades. “The First Law of Thermodynamics states that energy can neither be created nor destroyed in an isolated system.” This Law is used in support of the CICO (calories in, calories out) paradigm. But this kind of thinking does not make sense. The First Law only tells us that energy is conserved. That’s it. It says absolutely nothing about weight loss. Does anyone else find it weird that we’re given weight-loss advice based on physics (thermodynamics) rather than on our physiology? This fallacy, which I term the CICO fallacy, then leads to a second fallacy: that a calorie is a calorie. The implication is this: the body does not distinguish between the types of macronutrients you ingest; it is only concerned with the amount of energy consumed. But, as I will show, this type of thinking does not work either.

The First Law of Thermodynamics

The First Law states that energy can neither be created nor destroyed. A positive caloric balance must be associated with weight gain, but the wrong conclusion comes in when people assume that the positive caloric balance is driving the weight gain. If the First Law is interpreted correctly, then both conclusions—getting fat makes one consume more energy, and consuming more energy makes one fat—are valid hypotheses. The evidence and observations suggest that getting fat makes one consume more energy. (Jason Fung (2016: 33) writes: “Having studied a full year of thermodynamics in university, I can assure you that neither calories nor weight loss were mentioned even a single time.“)

Obesity researcher Jules Hirsch said to the New York Times:

There is an inflexible law of physics—energy taken in must exactly equal the number of calories leaving the system when fat storage is unchanged. Calories leave the system when food is used to fuel the body. To lower fat content—reduce obesity—one must reduce calories taken in, or increase activity, or both. This is true whether calories come from pumpkins or peanuts or pate de foie gras.

It’s this type of information that has caused the CICO paradigm to continue unabated. However, there are dissenting voices. People like Dr. Jason Fung, Gary Taubes, Zoe Harcombe, Nina Teicholz, and Tim Noakes all go against the conventional wisdom regarding obesity and the cause of weight gain.

Another thing that is not taken into account is what occurs in the body when calories are reduced. The energy we consume and the energy we expend are not independent variables—they are dependent. If we lower what we consume, what we expend will drop as well; change one and the other changes too. For example, if you exercise more in an attempt to lose weight, you will eat more to compensate. If you eat less to lose weight, your body’s metabolism will drop to match the intake. This is exactly what was seen in the Biggest Loser study—shockingly lowered RMRs (resting metabolic rates) in the contestants (see Fothergill et al, 2016). Biological systems are far too complex to be reduced to “eat less and move more = weight loss”, and that is easily shown.
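
To make that concrete, here is a minimal toy sketch (my own illustration, not taken from any of the researchers cited): it compares the naive prediction, in which expenditure stays fixed, with one in which expenditure drifts toward intake. The adaptation rate and the usual ~3,500 kcal-per-pound figure are assumptions for illustration only.

```python
# Toy model (illustrative only): a fixed 500 kcal/day cut under two assumptions,
# (a) expenditure never changes (the naive CICO picture) and (b) expenditure
# adapts toward intake each month. All numbers here are assumptions.
KCAL_PER_LB = 3500.0   # common rule-of-thumb kcal per pound of fat (an assumption)
DAYS = 365

def pounds_lost(intake, baseline_expenditure, adaptation=0.0):
    """Pounds lost over a year; `adaptation` is the fraction of the
    intake-expenditure gap that metabolism closes each month (assumed)."""
    expenditure = baseline_expenditure
    balance = 0.0
    for day in range(DAYS):
        balance += intake - expenditure      # daily energy balance (negative = deficit)
        if day % 30 == 29:                   # monthly metabolic adjustment
            expenditure += adaptation * (intake - expenditure)
    return -balance / KCAL_PER_LB

print(round(pounds_lost(2000, 2500, adaptation=0.0), 1))  # ~52.1 lb: the naive prediction
print(round(pounds_lost(2000, 2500, adaptation=0.5), 1))  # ~8.6 lb once expenditure adapts
```

Under the static assumption a 500 kcal/day cut “should” produce roughly 52 lb of loss in a year; once expenditure is allowed to adapt, the same cut predicts only a small fraction of that, which is in the spirit of the metabolic adaptation Fothergill et al (2016) reported.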

Fliers and Maratos-Flier (2007: 74) write in Scientific American:

An animal whose food is suddenly restricted tends to reduce its energy expenditure both by being less active and by slowing energy use in cells, thereby limiting weight loss. It also experiences increased hunger so that once the restriction ends, it will eat more than its prior norm until the earlier weight is attained.

Take this example: caloric excess in children is positively correlated with increases in height. But the caloric excess is not driving the height increase; children eat more because they are growing.

The point that most people miss is the third variable—fat storage. Besides calories in and calories out, there is the fat store itself, and insulin dictates fat storage; in the absence of insulin, the body cannot gain weight. Insulin shuttles fat into the adipocyte, which is why insulin is fattening. That is where CICO breaks down: fat storage is governed by hormonal fluctuations. The most fattening hormone is insulin. The types of foods that elicit the highest insulin response are processed carbohydrates; therefore, those are the most fattening foods. People who assume CICO state that a calorie is a calorie; that’s wrong.

Imagine a crowded room. The room is getting more crowded, and you ask me why. I say ‘the room is getting more crowded because more people are entering it than leaving it.’ You say ‘duh, of course that’s true, but why is the room getting more crowded?’ Saying a room gets crowded because more people are entering than leaving is redundant; likewise, saying that one gets fat because more calories are consumed than burned is redundant. It only says the same thing in two different ways, so it is meaningless. Rooms that have more people enter them than leave them will become more crowded, since there is no getting around the First Law, right?

Now take that same logic with obesity. Thermodynamics states that if we get fatter then more energy is entering our body than leaving it. Overeating means we’ve consumed more calories than we have expended. It’s tautological.

CICO ‘could work’, but that is irrelevant, since what the CICOers assume is that a calorie is a calorie: that once ingested, all calories go through the same metabolic pathways. This is false. The First Law says nothing about why we get fat. It is irrelevant to human physiology.

Taubes (2007: 293) writes:

Change in energy stores = Energy intake — Energy expenditure

[…]

The first law of thermodynamics dictates that weight gain—the increase in energy stored as fat and lean-tissue mass—will be accompanied by or associated with positive energy balance, but it does not say that it is caused by a positive energy balance—by “a plethora of calories,” as Russel Cecil and Robert Loeb’s 1951 Textbook of Medicine put it. There is no arrow of causality in the equation. It is equally possible, without violating this fundamental truth, for a change in energy stores, the left side of the above equation, to be the driving force in the cause and effect; some regulatory phenomenon could drive us to gain weight, which would in turn cause a positive energy balance—and thus overeating or sedentary behavior. Either way, the calories in will equal the calories out, as they must, but what is cause in one case is effect in the other.

And on pg 294:

The alternative hypothesis reverses the causality: we are driven to get fat by “primary metabolic or enzymatic effects,” as Hilde Bruch phrased it, and this fattening process induces the compensatory responses of overeating and/or physical inactivity. We eat more, move less, and have less energy to expend because we are metabolically or hormonally driven to get fat.

All the first law of thermodynamics tells us is that people can’t become more massive without taking in more energy than they expend, since people who are heavier contain more energy than people who are lighter. A person has to consume more energy to accommodate increasing mass, and cannot become lighter without expending more energy than they take in. That’s all the First Law tells us: energy is conserved. It says nothing about causation. The First Law literally only says that if something becomes more massive, then more energy must have come in than left. Nothing is said about cause and effect; it only tells us what has to happen if said thing does happen. That’s not causal information.

People only assume that the First Law has any relevance to obesity because of the ‘energy can neither be created nor destroyed’ part. But this shows no understanding of the Law. If you carefully read and understand it, you will see that it gives you absolutely no causal information. You can then reverse the commonly held mantra—that eating more leads to obesity—to: becoming obese leads one to eat more. It’s perfectly logical to reverse it, and no Law is broken. People erroneously assume that the laws of physics dictate weight gain and loss, but in complex metabolic systems, what is ingested is more important than how much is ingested (because we have hormones that let us know when to stop eating—hormones which are not released while one eats carbohydrates).

The Second Law of Thermodynamics

The second weight-loss fallacy is ‘a calorie is a calorie’: the idea that, for weight loss, it doesn’t matter whether the majority of my calories comes from fat, carbs, or protein, because the body will register the calories consumed and regulate fat stores as (supposedly) dictated by the First Law. The fallacy of invoking the First Law ties directly into a fallacy regarding the Second Law of Thermodynamics: because some energy is always lost as heat, the Second Law implies that metabolic pathways differ in efficiency, so variation in the energy available from different macronutrients is to be expected; the mantra “a calorie is a calorie” therefore violates the Second Law as a principle (Feinman and Fine, 2004, 2007).

Feinman and Fine calculated that a 2,000 kcal diet with a 55:30:15 split of carbohydrate (CHO), fat, and protein yields about 1,848 kcal actually available to the body. Thermodynamics, in other words, does not support the dictum that, all else being equal (i.e., two diets with the same number of calories but differing macro splits, one high-fat/low-carb and the other high-carb/low-fat), the two diets are metabolically equivalent.

However, Zoe Harcombe recalculated the figure from Feinman and Fine (2004) and found it to be wrong. The correct number turned out to be 1,825 kcal, not 1,848 kcal, which only strengthened Feinman and Fine’s (2004) point (Harcombe, 2004). She also writes:

I then repeated the calculations for a 10:30:60 high protein diet (keeping fat the same and swapping carbs out and protein in), and the calories available to the body dropped to 1,641. This is incredible. This means that two people can both eat 2000 calories a day and the high carbohydrate person is effectively getting nearly 200 calories more than the high protein person. Anyone still wonder why low-carbohydrate diets have a built in advantage?

So we can see that it’s ridiculous to ignore the thermic effect of food, seeing as it’s 20 percent for protein and 5 percent for CHO.
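
As a rough sketch of how such “calories available” figures are produced (this is not Harcombe’s or Feinman and Fine’s exact calculation; the TEF value assumed here for fat, and hence the exact totals, are my own illustrative assumptions):

```python
# Rough sketch: net calories available after the thermic effect of food (TEF).
# TEF of ~5% for carbohydrate and ~20% for protein, as stated in the text;
# the ~2% figure for fat is an assumption added purely for illustration.
TEF = {"carb": 0.05, "fat": 0.02, "protein": 0.20}

def net_kcal(total_kcal, split):
    """`split` maps macronutrient -> fraction of total_kcal (fractions sum to 1)."""
    return sum(total_kcal * frac * (1 - TEF[macro]) for macro, frac in split.items())

high_carb    = {"carb": 0.55, "fat": 0.30, "protein": 0.15}   # 55:30:15
high_protein = {"carb": 0.10, "fat": 0.30, "protein": 0.60}   # 10:30:60

print(round(net_kcal(2000, high_carb)))     # ~1873 kcal with these assumed TEFs
print(round(net_kcal(2000, high_protein)))  # ~1738 kcal with these assumed TEFs
```

The published figures (1,848, 1,825, and 1,641 kcal) come out differently because they rest on different TEF percentages; the only point of the sketch is that two diets with identical gross calories leave different amounts of energy available once the macro split is accounted for.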

To put this into perspective: given two people eating similar diets (but with differing macro splits), one need only effectively out-eat the other by about 20 calories per day for that to be enough, over the years, to gain substantially more weight than the other person. Taubes (2011: 58) writes:

How many calories do we have to consume, but not expend, stashing them away in our fat tissue, to transform ourselves, as many of us do, from lean twenty-five-year-olds to obese fifty-year-olds?

Twenty calories a day.

Twenty calories a day times the 365 days in a year comes to a little more than seven thousand calories stored as fat every year—two pounds of excess fat.

Multiply that by 10 and that’s twenty pounds gained in ten years—all from getting the calorie count only slightly wrong (Aamodt, 2016: 111-112). With Harcombe’s (2004) example, the damage after 10 years would be much worse. This is all based on the assumption that ‘a calorie is a calorie’, which is false, as I have shown.
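
Taubes’ arithmetic is easy to check. A quick sketch (the ~3,500 kcal-per-pound-of-fat figure is the usual rule of thumb, not something stated in the passage quoted):

```python
# Checking the "twenty calories a day" arithmetic.
KCAL_PER_LB_FAT = 3500                       # rule-of-thumb conversion (an assumption)

daily_surplus = 20                           # kcal stored as fat per day
yearly_surplus = daily_surplus * 365         # kcal stored per year
lbs_per_year = yearly_surplus / KCAL_PER_LB_FAT

print(yearly_surplus)                        # 7300 -- "a little more than seven thousand"
print(round(lbs_per_year, 1))                # ~2.1 lb of fat per year
print(round(lbs_per_year * 10, 1))           # ~20.9 lb over ten years
```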

Conclusion

The CICO paradigm is wrong. Consumption and expenditure are not independent variables; they are dependent, so if you decrease one of them, the other will decrease as well. This is the fatal flaw in the CICO paradigm. The First Law always holds, yes, but it tells us absolutely nothing about obesity or human physiology and is therefore irrelevant. The claim that ‘a calorie is a calorie’ violates the Second Law and is demonstrably false: the Second Law implies that variation in the efficiency of metabolic pathways is to be expected. This has further implications. Using Taubes’ example of 20 calories per day, even people eating the exact same number of calories will show different weight gains if one diet skews more toward carbohydrate than the other. Couple that with what insulin does in the body and the problem is exacerbated.

Stating that thermodynamics has anything to do with weight loss is clearly fallacious.

Behavior Genetics and the Fallacy of Nature vs Nurture

3250 words

People appeal to moderate-to-high heritability estimates as evidence that a trait is controlled by genes. They then assume that because something has a high heritability, that heritability must show something about causation. The fact of the matter is, it does not. Heritability estimates assume a false dichotomy of nature vs nurture; they assume that we can neatly partition genetic from environmental effects, and that the higher a trait’s heritability, the more genes control said trait. These assumptions are all false. One of the main ways heritability is estimated is the classic twin method (CTM). This method, though, has a ton of assumptions poured into it—most importantly, the assumption that MZ (identical) and DZ (fraternal) twins experience roughly equal environments: the equal environments assumption (EEA). Heritability studies are useless for humans; twin studies bias estimates upwards with a whole host of assumptions.

I will show that i) heritability estimates are highly flawed (due to erroneous assumptions); ii) nature and nurture cannot be separated (as behavior geneticists claim they can), and so their main tool (the heritability estimate) should be discontinued; and iii) genetic reductionism is not a tenable model given what we now know about how genes work. All three of these reasons are enough to discontinue heritability estimates. If the nature vs nurture debate rests on a fallacy, and this fallacy is the vehicle for heritability estimates, then such estimates should be discontinued for humans and used only for breeding animals, where the environment can be fully controlled (Schonemann, 1997; Moore and Shenk, 2016).

Heritability, twin studies, and equal environments

Back in 2014-2015, there was a debate in the criminological literature with implications for heritability studies as a whole. Burt and Simons (2014) argued that it was time to get rid of heritability studies. Barnes et al (2015) responded that this was “a de facto form of censorship” (pg 2). Joseph et al (2015) responded to these accusations, writing, “It was good science and not “censorship” when earlier scientists called for ending studies based on craniometry, phrenology, and physiognomy, and any contemporary criminologist calling for the use of astrological charts to predict whether certain people will commit violent crimes would be justifiably ridiculed.” The main point here, in my opinion, is that heritability estimates are based on an oversimplified (and wrong) model of the gene. Partitioning variance assumes that you can partition how much a trait is influenced by “nature” and how much by “nurture”, which is a false dichotomy (Moore, 2002; Schneider, 2007; Moore and Shenk, 2016).

More importantly, no “genes have been found” (I know that’s everyone’s favorite thing to hear) for traits that supposedly have high heritabilities. On page 179 of his book (Nook version), Misbehaving Science: Controversy and the Development of Behavior Genetics, Panofsky (2014) writes:

Molecular genetics has been a major disappointment, if not an outright failure, in behavior genetics. Scientists have made many bold claims about genes for behavioral traits or mental disorders only to later retract them or to have them not be replicated by other scientists. Further, the findings that have been confirmed, or not yet falsified, have been few, far between, and small in magnitude.

There seems to be a huge disconnect between heritability estimates gleaned from twin studies and what the actual molecular genetic evidence says. This is because the EEA—the assumption that identical (MZ) twins experience environments no more similar than those of fraternal (DZ) twins—is false: MZ twins end up experiencing more similar environments than DZ twins do. Most researchers attempt to save face by stating that MZ twins “seek out” and “elicit” their own environments, which then makes them more similar than DZ twins. However, this is circular logic: the conclusion (that MZ twins end up in more similar environments because of their genes) is assumed in the premise, so the argument begs the question. (It should also be noted that identical twins’ genes are not identical.)
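
To see where the EEA enters the arithmetic, here is a minimal sketch of Falconer’s classic formula for estimating heritability from twin correlations (the correlation values below are made up purely for illustration): any extra MZ similarity that actually comes from more-similar environments gets counted as “genetic” by the formula.

```python
# Falconer's formula estimates heritability from twin correlations:
#   h^2 = 2 * (r_MZ - r_DZ)
# The whole MZ-DZ gap is attributed to genes, which is justified only if MZ and
# DZ pairs experience equally similar environments (the equal environments assumption).
def falconer_h2(r_mz, r_dz):
    return 2 * (r_mz - r_dz)

# Hypothetical twin correlations, made up for illustration.
r_mz, r_dz = 0.75, 0.50
print(round(falconer_h2(r_mz, r_dz), 2))          # 0.5 "heritability" under the EEA

# If, say, 0.10 of the MZ correlation actually reflects MZ pairs being treated
# more alike (an EEA violation), the genuinely genetic share is smaller, yet the
# standard calculation still reports the inflated figure above.
print(round(falconer_h2(r_mz - 0.10, r_dz), 2))   # 0.3 once that share is removed
```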

Heritability studies assume an outdated model of the gene. The flaw in heritability estimates is simple: they imply a false dichotomy of nature vs nurture, while also assuming that genes and environment are independent and that their contributions to complex behaviors can be precisely quantified (Charney, 2013). This is one of the most critical parts of the heritability debate. To the extent that the prenatal environment of MZ twins “can be significantly more stressful than that of DZ twins, and hence a cause of greater stress-related phenotypic concordance, the equal environment assumption will not hold in relation to behavioral phenotypes potentially associated with prenatal stress” (Charney, 2012: 20). This is also cause for concern regarding studies of twins reared apart. While twins are reared apart to eliminate shared environmental confounds, rearing apart cannot eliminate perhaps the most important confound of all—the prenatal environment (Moore and Shenk, 2016).

One of the most-cited studies of twins reared apart is Bouchard et al (1990), which reports the Minnesota Study of Twins Reared Apart (MISTRA). There are, though, a whole slew of problems with this study.

1) You have the huge confound of similar environments before birth.

2) Full details of the MISTRA have never been published, so we don’t know how ‘separated’ the twins really were. Bouchard et al do say that age at separation ranged from 0 to 48.7 months (table 1), so some pairs spent up to about four years together. Some of the twins even had reunions and spent a lot of time together.

3) They’re not representative, and twins who sign up for this research are self-selecting. Ken Richardson says in his book (2017, pg 55): “Twins generally tend to be self-selecting in any twin study. They may have responded to advertisements placed by investigators or have been prompted to do so by friends or family, on the grounds that they are alike. Remember, at least some of them knew each other prior to the study. Jay Joseph has suggested that the twins who elected to participate in all twin studies are likely to be more similar to one another than twins who chose not to participate. This makes it difficult to claim that the results would apply to the general population.”

4) And the results aren’t fully reported. Richardson also states that (2017, pg 55) “… of two IQ tests administered in the MISTRA, results have been published for one but not the other. No explanation was given for that omission. Could it be they produced different results?” He even states that attempts to get the data, by researchers like Jay Joseph, have been denied. Why would you refuse to publish, or give to another researcher, your data when asked?

We don’t know the relevant environments; the children’s average age at testing is closer to the biological mother than the adoptive mother; the biological mother and child will have reduced self-esteem and be more vulnerable to difficult situations, and in this sense they share environments; and conscious or unconscious bias makes adopted children different from other family members. Adoption agencies also attempt to place children into homes similar to the biological mother’s.

Charney (2012: 25) brings up an important point: “For phenotypes of any degree of complexity, DNA does not contain a determinate genetic program (analogous to the digital code of a computer) from which we can predict phenotype. If DNA were the sole carrier of information relevant to phenotype formation, and contained a genetic program sufficiently determinate that solely by reading it we could predict phenotype, then humans (and all other organisms) would be largely lacking in phenotypic plasticity.” Moore and Shenk (2016) also state that “we inherit developmental resources, not traits.”

1) For twin studies to be valid, MZ (identical) twins and DZ (fraternal) twins would have to experience roughly equal environments (the equal environments assumption, EEA). 2) MZ twins experience much more similar environments than DZ twins. 3) Therefore the EEA is false and no genetic interpretations can be drawn from the data.
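Heritability in twin studies is typically derived from Falconer's formula, h² = 2(rMZ − rDZ). A minimal sketch in Python (the correlation values are invented purely for illustration, not taken from any study) shows how a violated EEA inflates the estimate:

```python
# Minimal sketch of Falconer's formula, h^2 = 2 * (r_MZ - r_DZ).
# The correlations below are invented purely for illustration.

def falconer_h2(r_mz, r_dz):
    """Heritability estimate from MZ and DZ twin correlations."""
    return 2 * (r_mz - r_dz)

# Suppose MZ and DZ pairs correlate at 0.50 and 0.35 on some trait.
print(round(falconer_h2(0.50, 0.35), 2))  # 0.30

# If MZ pairs additionally resemble each other because they are treated
# more alike (an EEA violation), their correlation is pushed up, and that
# extra environmental similarity gets counted as 'genetic':
print(round(falconer_h2(0.62, 0.35), 2))  # 0.54, an inflated estimate
```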

Heritability estimates cannot disentangle genes and environment, and therefore they should be discontinued or reinterpreted (Joseph et al, 2015). Burt and Simons (2014: 110) also conclude: "Rejecting heritability studies and the false nature–nurture dichotomy and gene-centric model on which they are grounded is a necessary step forward that will pave the way for a reconceptualization of the link between the biological and the social in shaping criminal propensities in ways that are consistent with postgenomic knowledge". I disagree with Barnes et al (2015) when they say that ending heritability estimates is "a de facto form of censorship", because if nature vs nurture is a false dichotomy and the gene-centric model that heritability estimates rely on is wrong, then we need to either discontinue or reinterpret the estimates, rather than saying that 'this is how much nature contributes to X and this is how much nurture contributes to Y'. (See also Richardson and Norgate, 2005 for more arguments regarding the EEA.)

Sapolsky (2017: 219) writes:

Oh, that’s right, humans. Of all species, heritability scores in humans plummet the most when shifting from a controlled experimental setting to considering the species’ full range of habitats. Just consider how much the heritability score for wearing earrings, with its gender split, has declined since 1958.

Heritability flaws

High heritability estimates have been used as evidence for causation—that genes control a large part of the trait in question. This reasoning, however, is highly flawed. People confuse "heritable" with "inheritable" (Moore and Shenk, 2016). Heritability does not tell us what causes a trait, how much the environment contributes to a trait, or the relative influence of genes on a trait. Moore and Shenk (2016) agree with Joseph et al (2015) and Burt and Simons (2014) that heritability studies need to end, but Moore and Shenk's reasoning slightly differs: they say we should end the estimates because people confuse "heritable" with "inheritable". Likewise, Guo (2000: 299) concurs, writing "it can be argued that the term 'heritability', which carries a strong conviction or connotation of something 'heritable' in everyday sense, is no longer suitable for use in human genetics and its use should be discontinued."

Some may say that if a trait turns out to be mildly heritable then we can say that genes have some effect, but we know that genes affect all traits so it seems kind of redundant to have a useless measure that assumes a false dichotomy and relies on an outdated, additive model of the gene.

Rose (2006), too, agrees that heritability estimates imply a false dichotomy of nature vs nurture onto biological systems:

Biological systems are complex, non-linear, and non-additive. Heritability estimates are attempts to impose a simplistic and reified dichotomy (nature/nurture) on non-dichotomous processes.

Likewise, Lewontin (2006) argues we should be analyzing and studying causes, not variance.

There are numerous hereditarian scientific fallacies which include: 1) trait heritability does not predict what would occur when environments/genes change; 2) they’re inaccurate since they  don’t account for gene-environment covariation or interaction while also ignoring nonadditive effects on behavior and cognitive ability; 3) molecular genetics does not show evidence that we can partition environment from genetic factors; 4) it wouldn’t tell us which traits are ‘genetic’ or not; and 5) proposed evolutionary models of human divergence are not supported by these studies (since heritability in the present doesn’t speak to what traits were like thousands of years ago) (Bailey, 1997).

Bailey (1997) brings up important arguments against the use of heritability, and even discusses fallacious writing from Rushton on the matter:

Rushton (1995), for example, thinks that if observed differences among the racial groups that he defines are higher for traits that have high heritability within the groups, the hypothesis of genetically caused differences among the groups is supported.

Bailey (1997) then goes on to discuss three lakes: Otter lake, Welcome lake, and Bark lake. Otter lake has very high primary production, while Bark lake has very little and Welcome lake is somewhere in between (you can see that ‘Otter’, ‘Bark’ and ‘Welcome’ lakes are analogies for ‘Orientals’, ‘Blacks’, and ‘Whites’ as said by Rushton). But there is variation within the lakes, there are high production pockets of water in Bark lake while there are low production pockets of water in Otter lake. All three lakes are visited and measurements are taken. Bailey (1997) states that his conclusion would be that they differ in how much light each receives. Bailey (1997: 131) writes:

If I substitute three groups of people for my lakes, IQ for primary production, and genes for light levels, the fallacy of the slippery scale, as applied to human behaviour genetics, becomes clear. Even if we are sure that there is a difference among groups of people in IQ, and we are sure that IQ has high heritability within each of the groups (i.e. variation in IQ is largely caused by genetic variation), we can make no inference about the cause of differences in IQ among the groups. The differences might be caused by genetic differences or they might not, but the heritability studies within the groups can't help us make that judgment.


(Genes don’t cause IQ scores—or behavior—but that’s for another day.)
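To make Bailey's point concrete, here is a hypothetical simulation (the numbers are invented and are not Bailey's): within each 'lake' all of the variation is generated by a 'genetic' value, yet the differences between the lake means are produced entirely by an environmental offset, the 'light level'.

```python
import random
from statistics import mean

# Hypothetical sketch of Bailey's (1997) lakes analogy (all numbers invented):
# variation *within* each group is entirely 'genetic' here, yet the
# *between*-group differences are caused entirely by an environmental offset.

random.seed(0)

def make_group(light_offset, n=1000):
    # phenotype = genetic value (within-group variation) + environmental offset
    return [random.gauss(0, 1) + light_offset for _ in range(n)]

otter = make_group(light_offset=10.0)    # high 'light'
welcome = make_group(light_offset=5.0)   # intermediate
bark = make_group(light_offset=0.0)      # low 'light'

print(round(mean(otter), 2), round(mean(welcome), 2), round(mean(bark), 2))
# Within each lake the heritability of the phenotype is ~1.0 by construction,
# but the differences between lake means are 100 percent environmental.
```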

Heritability estimates for IQ are higher than those for almost any trait in the animal kingdom; heritability estimates for animal traits are comparatively low. For example, the heritability of bodyweight in farm animals is about 30 percent, which is about the same for egg and milk production. Body fat in pigs and wool on sheep have heritabilities of about 50 percent. But these estimates pale in comparison to the heritability estimates for IQ, which have been as high as 80 percent (Schonemann, 1997 puts it at 60 percent, though estimates of 80-90 percent are reported today); this heritability estimate for IQ "surpasses almost anything found in the animal kingdom" (Schonemann, 1997: 104).

This high heritability estimate for IQ, of course, comes to us from the highly flawed twin studies discussed above. The reason why farmers and botanists use heritability estimates is that they can perfectly control the environment, and therefore get accurate—or close enough to it—estimates that will help them in their breeding efforts. Conversely, for humans, environments cannot be perfectly controlled and it is, of course, unethical to rear twins, MZ and DZ, in a controlled environment. Proponents of the twin method may say “It doesn’t matter if it’s flawed, it still shows there is a genetic component to trait X!”. But as discussed by Moore and Shenk (2016), that’s irrelevant because genetic factors influence all of our characteristics.

Heritability and causation

In this section, I will briefly discuss how people fallaciously assume that high heritability estimates imply that a trait is strongly influenced by genetic factors.

In his essay in the book Postgenomics: Perspectives on Biology After the Genome, sociologist Aaron Panofsky (2016: 167; nook version) writes:

Heritability estimates do not help identify particular genes or ascertain their functions in development or physiology, and thus, by this way of thinking, they yield no causal information.

This is important to note: to those who truly believe that heritability estimates tell us anything about causation, how could they, logically, give us causal information if genes that lead to trait variation are not identified (Richardson, 2012)?

Panofsky (2014: 102-103) writes:

Experimental evidence from plants and animals suggest that shapes of the curves cannot be inferred in advance and rarely follow the smooth, nonintersecting pattern like in figure 3.2 [I will provide the figure after this quote]. Thus true causal interpretations of heritability are hopeless and must be abandoned. Behavior geneticists did not claim direct experimental evidence, but they thought these various indirect lines of evidence provided a reasonable set of assumptions that would enable them to interpret heritability scores causally—provided they offer appropriate, reasonable qualifications.


Graph from Panofsky (2014: 103)
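To illustrate what Panofsky means by curves that cross, here is a hypothetical sketch of two genotypes' norms of reaction (the genotypes, environments, and numbers are all invented): which genotype scores higher depends on the environment, so no single heritability figure summarizes 'the' genetic effect.

```python
# Hypothetical norms of reaction for two genotypes across an environmental
# gradient; values are invented for illustration. Where the lines cross,
# the rank order of the genotypes reverses, so no single heritability
# figure describes the genetic effect across environments.

environments = [0, 1, 2, 3, 4]

def genotype_a(env):
    return 10 + 1 * env        # does better in richer environments

def genotype_b(env):
    return 14 - 1 * env        # does better in poorer environments

for env in environments:
    a, b = genotype_a(env), genotype_b(env)
    leader = "A" if a > b else ("B" if b > a else "tie")
    print(f"env={env}: A={a}, B={b}, higher={leader}")
# A and B swap rank at env=2, the crossing point.
```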

Heritability estimates imply nothing about causation. They are about associations with variance, not identities and causes (Richardson, 2017: 69). A heritability of 0 does not mean that genes play no role in the development of form, function, and phenotypic variation; it just means that, for whatever reason, genetic variation shows little association with phenotypic variation in that population.

Schneider (2007) writes (emphasis mine):

Heritability estimates apply only to groups, and are inherently inapplicable to individuals in any sense. And they do not imply causation. As Moore notes, all of these important limitations have been frequently ignored or minimized.

Reductionism

Heritability estimates imply nothing about causation. Behavior geneticists and others assume that heritability estimates will lead to ‘finding the genes’ that ’cause’ or are ‘associated with’ behavior. Their models are also, of course, extremely reductionist. It is then important to note that genes do not determine behavior. To quote Lerner and Overton (2017: 114):

Data presented in a 2016 special section of the journal Child Development indicate that "some behaviors may be affected by only slight changes in DNA methylation, while others may require a larger percent change in methylation; of course, the effects are also likely bidirectional, with behavior impacting changes in methylation" [Lester et al., 2016, p. 31]. This point is key. It underscores the absurdity of genetic reductionist models: Genes do not determine behavior.

Methylation impacts behavior; behavior impacts methylation. It is the relations between methylation and behavior, not the genes acting as the “command center”, the “executive” of human behavior and development, that constitute the basic role of biology across the developmental course. This is the fatal flaw of reductionist models. Lastly, Lerner and Overton (2017: 145) write (emphasis mine):

That is, with the recent advances in understanding the role of epigenetics and recent research findings supporting this role, it should no longer be possible for any scientist to undertake the procedure of splitting of nature and nurture and, through reductionist procedures, come to conclusions that the one or the other plays a more important role in behavior and development.

[Richardson (2017: 129) also writes: “Note that this environmental source of  [epigenetic] variation will appear in the behavioral geneticists twin-study as genetic variation: quite probably another way in which heritability estimates are distorted.”]

Reductionism in biology is fatally flawed. Reductionism has, of course, greatly increased our understanding of biology, but it is time to move past the false dichotomy of nature vs nurture and, with that, move past heritability estimates, since they prop up the nature vs nurture fallacy. There is no way to separate the two since they are intertwined, yet behavior geneticists would like you to believe that studying twins reared apart will tell you how 'genetic' or 'environmental' variation in a trait is in a population. Since heritability estimates are gleaned from the highly flawed studies of twins reared apart, a whole host of assumptions is poured in and the estimates end up highly inflated, making it seem that genes influence a trait more than they actually do.

Twin studies, and along with them heritability estimates, are useless for figuring out, and describing, trait variation in humans. The developmental system is more complex than the genetic reductionists (behavior geneticists) would like one to believe. The reductionist model has been heavily attacked in recent years (Regenmortal, 2004; Noble, 2008, 2012, 2015, 2016; Joyner, 2011a, b; Joyner and Pederson, 2011).

Nature vs nurture has also been shown to be a false dichotomy because the system develops in whichever environment it finds itself in (Oyama, 1985, 1999, 2000; Moore, 2002; Schneider, 2007).

Conclusion

Since the genetic reductionist model is wrong, along with heritability estimates (because of the nature/nurture fallacy), both should be discontinued. One of the main vehicles of these two models—twin studies—should also be discontinued. These fatal flaws of the behavior geneticists' paradigm should be enough to discontinue these techniques in the study of human development and behavior. Heritability estimates give no causal information and they also rely on an outdated model of the gene; twin studies assume too many things for them to be a viable model for discovering how traits manifest (most importantly, twin studies keep the nature/nurture fallacy alive and should be discontinued on that note alone, in my opinion); and genetic reductionist models have been shown to be fatally flawed in recent years. We now have a better understanding of what a gene is today (Portin and Wilkins, 2017), and because of this, we should discontinue whatever implies the fallacy of nature vs nurture, because it is irrelevant and a false dichotomy. That, alone, should be enough to discontinue twin studies and heritability estimates.

 

Race Differences in Penis Size Revisited: Is Rushton’s r/K Theory of Race Differences in Penis Length Confirmed?

2050 words

In 1985 JP Rushton, psychology professor at the University of Western Ontario, published a paper arguing that r/K selection theory (which he termed Differential K theory) explained and predicted outcomes of what he termed the three main races of humanity—Mongoloids, Negroids and Caucasoids (Rushton, 1985; 1997). Since Rushton's three races differed on a whole suite of traits, he reasoned that the more K-selected races (Caucasoids and Mongoloids) had slower reproduction times, lower time preference, higher IQ etc in comparison to the more r-selected Negroids, who had faster reproduction times, higher time preference, lower IQ etc (see Rushton, 1997 for a review; also see Van Lange, Rinderu, and Bushman, 2017 for a replication of Rushton's data, not his theory). Were Rushton's assertions on race and penis size verified, and do they lend credence to his Differential-K claims regarding human races?

Rushton’s so-called r/K continuum has a whole suite of traits on it. Ranging from brain size to speed of maturation to reaction time and IQ, these data points supposedly lend credence to Rushton’s Differential-K theory of human differences. Penis size is, of course, important for Rushton’s theory due to what he’s said about it in interviews.

Rushton's main reasoning for penis size differences between races is "You can't have both": if you have a larger brain then you must have a smaller penis, and if you have a larger penis you must have a smaller brain. He believed there was a "tradeoff" between brain size and penis size. In the book Darwin's Athletes: How Sport Has Damaged Black America and Preserved the Myth of Race, Hoberman (1997: 312) quotes Rushton: "Even if you take something like athletic ability or sexuality—not to reinforce stereotypes or some such thing—but, you know, it's a trade-off: more brain or more penis. You can't have both." This, though, is false. There is no evidence that this so-called 'trade-off' exists. In my readings of Rushton's work over the years, that's always something I've wondered: was Rushton implying that large penises take more energy to maintain, and that the trade-off exists due to this supposed relationship?

Andrew Joyce of the Occidental Observer published an article the other day in defense of Richard Lynn. Near the end of his article he writes:

Another tactic is to belittle an entire area of research by picking out a particularly counter-intuitive example that the public can be depended on to regard as ridiculous. A good example is J. Philippe Rushton's claim, based on data he compiled for his classic Race, Evolution and Behavior, that average penis size varied between races in accord with the predictions of r/K theory. This claim was held up to ridicule by the likes of Richard Lewontin and other crusaders against race realism, and it is regularly presented in articles hostile to the race realist perspective. Richard Lynn's response, as always, was to gather more data—from 113 populations. And unsurprisingly for those who keep up with this area of research, he found that indeed the data confirmed Rushton's original claim.

The claim was ridiculed because it was ridiculous. This paper by Lynn (2013) titled Rushton’s r-K life history theory of race differences in penis length and circumference examined in 113 populations is the paper that supposedly verifies Rushton’s theory regarding race differences in penis size, along with one of its correlates in Rushton’s theory (testosterone). Lynn (2013) proclaims that East Asians are the most K-evolved, then come Europeans, while Africans are the least K-evolved. This, then, is the cause of the supposed racial differences in penis size.

Lynn (2013) begins by briefly discussing Rushton’s ‘findings’ on racial differences in penis size while also giving an overview of Rushton’s debunked r/K selection theory. He then discusses some of Rushton’s studies (which I will describe briefly below) along with stories from antiquity of the supposed larger penis size of African males.

Our old friend testosterone also makes an appearance in this paper. Lynn (2013: 262) writes:

Testosterone is a determinant of aggression (Book, Starzyk, & Quinsey, 2001; Brooks & Reddon, 1996; Dabbs, 2000). Hence, a reduction of aggression and sexual competitiveness between men in the colder climates would have been achieved by a reduction of testosterone, entailing the race differences in testosterone (Negroids > Caucasoids > Mongoloids) that are given in Lynn (1990). The reduction of testosterone had the effect of reducing penis length, for which evidence is given by Widodsky and Greene (1940).

Phew, there’s a lot to unpack here. (I discuss Lynn 1990 in this article.) Testosterone does not determine aggression; see my most recent article on testosterone (aggression increases testosterone; testosterone does not increase aggression. Book, Starzyk and Quinsey, 2001 show a .14 correlation between testosterone and aggression, whereas Archer, Graham-Kevan, and Davies 2005 show the correlation is .08). This is just a correlation. Sapolsky (1997: 113) writes:

Okay, suppose you note a correlation between levels of aggression and levels of testosterone among these normal males. This could be because (a)  testosterone elevates aggression; (b) aggression elevates testosterone secretion; (c) neither causes the other. There’s a huge bias to assume option a while b is the answer. Study after study has shown that when you examine testosterone when males are first placed together in the social group, testosterone levels predict nothing about who is going to be aggressive. The subsequent behavioral differences drive the hormonal changes, not the other way around.

Brooks and Reddon (1996) also only show relationships with testosterone and aggressive acts; they show no causation. This same relationship was noted by Dabbs (2000; another Lynn 2013 citation) in prisoners. More violent prisoners were seen to have higher testosterone, but there is a caveat here too: being aggressive stimulates testosterone production so of course they had higher levels of testosterone; this is not evidence for testosterone causing aggression.
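For scale, squaring the correlations reported above gives the proportion of variance in aggression that testosterone would account for even on the most generous causal reading; a quick sketch:

```python
# Share of variance implied by the reported testosterone-aggression
# correlations (r squared), assuming nothing about causal direction.
for source, r in [("Book, Starzyk, & Quinsey (2001)", 0.14),
                  ("Archer, Graham-Kevan, & Davies (2005)", 0.08)]:
    print(f"{source}: r={r}, r^2={r**2:.3f} ({r**2:.1%} of variance)")
```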

Another problem with that paragraph quoted from Lynn (2013) is that it's a just-so story, an ad-hoc explanation. You notice something in the data you have today and then imagine a nice-sounding story to explain that data in an evolutionary context. Nice-sounding stories are cool and all, and I'm sure everyone loves a nicely told story, but when it comes to evolutionary theory I'd like theories that can be verified independently of the data they're trying to explain.

My last problem with that paragraph from Lynn (2013) is his final citation: he cites it as evidence that the reduction of testosterone affects penis length, but his citation (Widodsky and Green, 1940) is a study on rats. While such studies can give us a wealth of information regarding our physiologic systems (at least showing us which avenues to pursue; see my previous article on myostatin), they don't really mean anything for humans, especially this study on the application of testosterone to the penis of a rat. The fatal flaw in these assertions is this: would a, say, 5 percent difference in testosterone lead to a larger penis, as if there were a dose-response relationship between testosterone and penis length? It doesn't make any sense.

Lynn (2013), though, says that Rushton's theory doesn't propose a direct causal relationship between "intelligence" and penis length, but just that they co-evolved: when Homo sapiens migrated north out of Africa they needed to cooperate more, so selection for lower levels of testosterone occurred, which then shrank the penises of Rushton's Caucasoid and Mongoloid races.

Lynn (2013) then discusses two “new datasets”, one of which is apparently in Donald Templer’s book Is Size Important (which is on my to-read list, so many books, so little time). Table 1 below is from Lynn reproducing Templer’s ‘work’ in his book.

[Table 1 from Lynn (2013), reproducing Templer's data]

The second "dataset" is extremely dubious. Lynn (2013) attempts to dress it up, writing that "The information in this website has been collated from data obtained by research centres and reports worldwide." Ethnicmuse has a good article on the pitfalls of Lynn's (2013) article. (Also read Scott McGreal's rebuttal.)

Rushton attempted to link race and penis size for 30 years. In a paper with Bogaert (Rushton and Bogaert, 1987), they attempt to show that blacks had larger penises than whites, who had longer penises than Asians, which then supposedly verified one dimension of Rushton's theory. Rushton (1988) also discusses race differences in penis size, citing a previous paper by Rushton and Bogaert, where they use data from Alfred Kinsey, but this data is nonrepresentative and nonrandom (see Zuckermann and Brody, 1988 and Weizmann et al, 1990: 8).

Still others may attempt to use supposed differences in IGF-1 (insulin-like growth factor 1) as evidence that there is, at least, physiological evidence for the claim that black men have larger penises than white men, though I discussed that back in December of 2016 and found it strongly lacking.

Rushton (1997: 182) shows a table of racial differences in penis size which was supposedly collected by the WHO (World Health Organization). Though a closer look shows this is not true. Ethnicmuse writes:

ANALYSIS: The WHO did not study penis sizes. It relied on three separate studies, two of which were not peer-reviewed and the data was included as “Appendix III” (which should have alerted Rushton that this was not an original study). The first study references Africans in the US (not Africa!) and Europeans in the US (not Europe!), the second Europeans in Australia (not Europe!) and the third, Thais.

So it seems to be bullshit all the way down.

Ajmani et al (1985) showed that 385 healthy Nigerians had an average penile length of 3.21 inches (flaccid). Orakwe and Ebuh (2007) show that while Nigerians had longer penises than other ethnies tested, the only statistical difference was between them and Koreans. Though Veale et al (2014: 983) write that “There are no indications of differences in racial variability in our present study, e.g. the study from Nigeria was not a positive outlier.”

Lynn and Dutton have attempted to use androgen differentials between the races as evidence for racial differences in penis size (another attempt at a physiological argument for the existence of racial differences in penis size). Edward Dutton attempted to revive the debate on racial differences in penis size during a 2015 presentation where he, again, claimed that Negroids have higher levels of testosterone than Caucasoids, who have higher levels of androgens than Mongoloids. These claims, though, have been rebutted by Scott McGreal, who showed that population differences in androgen levels are meaningless and that they fail to validate Rushton and Lynn's claims on racial differences in penis size.

Finally, it was reported the other day that condoms from China were too small in Zimbabwe, per Zimbabwe’s health minister. This led Kevin MacDonald to proclaim that this was “More corroboration of race differences in penis size which was part of the data Philippe Rushton used in his theory of r/K selection (along with brain size, maturation rates, IQ, etc.)” This isn’t “more corroboration” for Rushton’s long-dead theory; nor is this evidence that blacks have longer penises. I don’t understand why people make broad and sweeping generalizations. It’s one country in Africa that complained about smaller condoms from a country in East Asia, therefore this is more corroboration for Rushton’s r/K selection theory? The logic doesn’t follow.

Asians have small condoms. Those condoms go to Africa. They complain condoms from China are too small. Therefore Rushton’s r/K selection theory is corroborated. Flawed logic.

In sum, Lynn (2013) didn't verify Rushton's theory regarding racial differences in penis size, and I find it even funnier that Lynn ends his article talking about "falsification", stating that this aspect of Rushton's theory has survived two attempts at falsification and can therefore be regarded as a "progressive research program", though obviously, with the highly flawed "data" that was used, one cannot rationally make that statement. Supposed hormonal differences between the races do not cause penis size differences; even if blacks had levels of testosterone significantly higher than whites (the 19 percent claimed by Lynn and Rushton off of one highly flawed study, Ross et al, 1986), they still would not have longer penises.

The study of physical differences between populations is important, but sometimes stereotypes do not tell you anything, and this is one of those cases. The claim that blacks have the longest penises lies on shaky ground, and with the evidence we do have for the claim, we cannot logically make the inference (especially not from Lynn's (2013) flimsy data). Richard Lynn did not "confirm" anything with this paper; the only thing he "confirmed" are his own preconceived notions; he did not 'prove' what he set out to.

‘Double-Muscled’ Humans?

1800 words

I've been reading bodybuilding magazines for almost ten years. Good science articles on training and diet, but there was always one ad in the magazines that I always saw: the leg and calf of a neonate and then that same neonate at 7 months. The kid was brolic. Defined calves with absolutely no training. What was the cause? Well, he had a deletion on the myostatin gene—also called growth-differentiating factor 8, GDF-8. The ads in the magazines would try to get you to buy some shitty supplement that did not work, but the kid? The kid is real and he had a deletion on the gene that codes for the protein myostatin. Myostatin normally restrains muscle growth, which ensures that muscles don't grow too large. Myo means muscle, while statin means to halt. This could be a huge breakthrough regarding muscular dystrophy (Smith and Lin, 2013). Myostatin seems to have two roles: 1) regulating the number of muscle fibers formed in development and 2) regulating the growth of muscle fibers postnatally.

When it is deleted in cattle, it causes "double-muscled" cattle—cattle that have about 20 percent more muscle mass than cattle without the deletion (Grobet et al, 1997; Amthor et al, 2007). The cause is skeletal-muscle hyperplasia, an increase in the number of muscle fibers, not only an increase in their diameter. This is what causes these crazy-looking animals. Myostatin is coded for by the MSTN gene, so mutations in this gene were discovered to cause double-muscled cattle. It should also be noted that while mice that lack myostatin are more muscular than average, they have impaired force generation (Amthor et al, 2007).


From Grobet et al, 1997; a double-muscle Belgian Blue homozygous for a deletion in the myostatin gene

The same thing is seen in mice—mice with a myostatin deletion are stronger and more muscular than mice without the deletion; in adult mice, myostatin is expressed in all muscle tissue but more specifically in fast twitch muscle fibers (Whittemore et al, 2002). Se-Jin Lee discovered the myostatin gene, and for his work he was elected to the National Academy of Sciences in 2012 (Glass and Spiegelman, 2012). Mice that lack myostatin have, on average, double the muscle compared to mice that have myostatin. However, Lee (2007) showed that mice that lack myostatin and also overproduce follistatin (which is capable of blocking myostatin activity in muscle cells) gain even more muscle. Lee (2007) writes:

Moreover, the rank order of magnitude of these increases correlated with the rank order of expression levels of the transgene; in the highest-expressing line, Z116A [Z116a is one of Lee’s four transgenic mouse lines], muscle weights were increased by 57–81% in females and 87–116% in males compared to wild type mice. Hence, FLRG is capable of increasing muscle growth in a dose-dependent manner when expressed as a transgene in skeletal muscle.

So Lee (2007) found that the effect of FLRG is dose-dependent. He then attempted to determine whether the FLRG transgene was truly causing increased muscle growth by blocking myostatin activity, so he examined the effect of combining the FLRG transgene with a knocked-out myostatin gene. He was not able to find this relationship in Z116A mice (i.e., mice positive for the FLRG transgene and homozygous for the myostatin deletion), but he did find that Z116A females heterozygous for the myostatin deletion had further increases in muscle weights compared to wild-type mice with 'normal' myostatin.

Most importantly, in two of the muscles that were examined (quadriceps and gastrocnemius) the observed increases were also greater than those seen in Mstn−/− mice lacking the transgene. Based on this finding, it appears that myostatin cannot be the sole target for FLRG in the transgenic mice and, therefore, that additional ligands must be capable of suppressing muscle growth in vivo.

Then Lee examined the effects of follistatin in MSTN null mice. He found that the presence of the F66 transgene in MSTN null mice caused another doubling in muscle. Lee had bred mice with quadruple muscle. Like FLRG, follistatin exerts its effects on other ligands along with myostatin, so blocking these other ligands has effects comparable to the loss of function of myostatin itself.

So there are two important take-aways from this landmark study: 1) the loss-of-function mutation on the Mstn gene exerts a maternal effect; muscle mass in the fetus is influenced by the number of functional Mstn alleles in the mother (the offspring had higher muscle weights if the mother had fewer functioning Mstn alleles, even when the offspring had the same genotype); and 2) Lee showed that other ligands work with myostatin to control muscle growth. Both FLRG and follistatin can promote muscle growth when they are expressed as transgenes in skeletal muscle. So when he combined the follistatin transgene with the myostatin null mutation, he had bred mice with quadruple muscle.

[Figure 3 from Lee (2007)]

These mice are huge. And it's due simply to a loss-of-function myostatin mutation combined with overexpression of the myostatin-binding protein follistatin that these mice have quadruple muscle. Myostatin regulates muscle growth. So if myostatin regulates muscle growth, then a deletion in the gene that codes for myostatin should cause increases in size and strength in animals with the null mutation.

In his 2014 book, David Epstein writes about how Lee attempted to find subjects for human testing, so he put an ad in muscle magazines such as Muscle and Fitness and Muscular Development. Over 150 people answered the ad, but he found no myostatin mutants.

That was until 2003, when he got a phone call about a baby in Germany who was born with bulging muscles. The boy (dubbed "Superbaby") had mutations on both of his myostatin genes, so he had no myostatin in his blood. His mother had one normal myostatin gene and one mutant copy, so she had more myostatin than her son but less than the general population. She is the only known adult with a myostatin deletion, and she just so happens to be a professional sprinter.

Before I discuss Superbaby further, I need to discuss myostatin and its role in development. Myostatin plays the same role in birds, cattle, mice, humans, etc. Muscle is costly, energetically speaking, and if one is too muscular one may not be able to find enough food to sustain a higher-than-average muscle mass, so myostatin is kind of like the body's 'fail-safe' to prevent muscles from becoming too big. Of course larger muscles require more calories—and protein for muscle-building—and so it wouldn't make sense, for instance, for our ancestors to have huge bulging muscles since they ate intermittently. So myostatin helps us stay smaller than we would be if we had the null mutation.

One of the incredible things about Superbaby is that he had no heart problems, although doctors were worried that he would (like his heart growing out of control); neither he nor his mother has reported any problems. Epstein (2014: 105) writes:

But the facts that the one boy with two of the rare myostatin gene variants has exceptional strength, and that his mother has exceptional  speed, are no coincidence. Superbaby and his mother fall precisely in line with whippets.

Epstein describes how two whippets, each with one copy of the myostatin mutation, would pass the mutation to their four puppies (kind of like a Punnett square; a quick sketch of the cross follows the quote):

If two sprinter whippets—dogs that each have one copy of the myostatin mutation—have four puppies, this is the likely scenario: one puppy will have zero copies of the mutation and be normal; two puppies will have one copy of the mutation, like Superbaby's mother, and be sprinters; the fourth puppy will have two copies of the mutation, like Superbaby, which make for a double-muscled "bully" whippet.
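Here is a minimal sketch of that cross treated as a simple Punnett square (the allele labels, M for the normal copy and m for the mutation, are mine and used only for illustration):

```python
from itertools import product
from collections import Counter

# Punnett-square sketch of Epstein's whippet example: two carrier parents,
# each with one normal allele (M) and one myostatin mutation (m).
parent1 = ["M", "m"]
parent2 = ["M", "m"]

offspring = Counter("".join(sorted(pair)) for pair in product(parent1, parent2))
for genotype, count in sorted(offspring.items()):
    print(genotype, count / 4)
# MM 0.25 -> no copies of the mutation (normal)
# Mm 0.5  -> carriers ('sprinter' whippets, like Superbaby's mother)
# mm 0.25 -> two copies ('bully' whippet, like Superbaby)
```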

Schuelke et al's (2004) case report is one of the first known cases of the myostatin mutation in humans. The pregnancy was normal, and when he was born, Superbaby had protruding calf muscles (see Fig. 1a below; the left is as a 6-day-old neonate and the right is at 7 months), along with protruding upper arm muscles. The ultrasonograms of his muscles were also different from controls, as was the morphometric analysis (Fig. 1b and c respectively). All around, Superbaby was normal, but by age 3 he still had increased muscle mass and strength and could even hold 3 kg dumbbells in suspension, horizontally, with his arms extended. He had some strong family members: one was a construction worker who was able to unload curbstones by hand, while the mother "appeared muscular" but not as muscular as her son (see Fig. 1d).

[Figure 1 from Schuelke et al (2004)]

Myostatin is also expressed in the heart, and since Superbaby had a loss-of-function mutation in his myostatin gene, he was monitored for cardiomyopathy, though he may have been too young for any defects to be detected. So Schuelke et al's (2004) obvious conclusion was that a loss of function in the myostatin gene could increase muscle bulk and strength and could be good therapy for people with a muscle-wasting disease.

One deletion in the MSTN gene can cause myostatin-related muscle hypertrophy. The mutation disrupts the gene's ability to code for functional myostatin protein, so muscle cells make little to no functional myostatin. When the protein is lost, muscle cells overgrow, with no other apparent medical problems (which is also seen in Superbaby and his mother).

If that were my son, I'd be a proud father. My baby coming out of the womb already jacked and strong? He'd be in the gym as soon as he was able and I would attempt to mold him into a champion bodybuilder/powerlifter; I'm not sure if the mutation that Superbaby has would truly matter at the elite level in the IFBB, but it would matter at the amateur level. Superbaby—and all of the other loss-of-function myostatin animal mutants—have paved the way for new forms of gene therapy for humans who have a muscle-wasting disease. Another American boy has a similar condition to Superbaby's. (Though the cause is different in this child: his body produces a normal level of myostatin, but a defect in his myostatin receptors is thought to prevent his muscle cells from responding to it, and since he's bigger and stronger than children his age, this is a sensible hypothesis.)

With these loss-of-function mutants along with other transgenes, we can understand how and why muscles atrophy and grow, and we can help people with serious disease. Superbaby is now about 14 years old, and while I am unable to find any new information on him (I will write something else on this if and when I do), it's clear that a loss of function in the myostatin gene causes higher amounts of muscle mass and strength in people with the mutation compared to people without it. I'd personally line up to turn off my myostatin gene so I could get double-muscled, and if there were any gene therapy for follistatin, I'd get that too, in order to become quadruple-muscled.

Is Racial Superiority in Sports a Myth? A Response to Kerr (2010)

2750 words

Racial differences in sporting success are undeniable. The races are somewhat stratified in different sports, and we can trace the cause of this to differences in genes and where one's ancestors were born. Populations have certain traits which their ancestors also had, and these traits correlate with geographic ancestry, so we can explain how and why certain populations dominate (or would have the capacity to, based on body type and physiology) certain sporting events. Critiques of Taboo: Why Black Athletes Dominate Sports and Why We're Afraid to Talk About It are few and far between, and the few that I am aware of are alright, but the one I will discuss today is not particularly good, because the author makes a lot of claims he could have easily verified himself.

In 2010, Ian Kerr published The Myth of Racial Superiority in Sports, in which he states that there is a "dark side" to sports, and specifically sets his sights on Jon Entine's (2000) book Taboo. In this article, Kerr (2010) makes a lot of, in my opinion, giant claims which would require a lot of evidence and argument to show their validity. I will discuss Kerr's views on race, biology, the "environment", "genetic determinism", and racial dominance in sports (with a focus on sprinting/distance running in this article).

Race

Since establishing the reality and validity of the concept of race is central to Entine's (2000) argument on racial differences in sports, I must first prove the reality of race (and rebut what Kerr, 2010 writes about race). Kerr (2010: 20) writes:

First, it is important to note that Entine is not working in a vacuum; his assertions about race and sports are part of a larger ongoing argument about folk notions of race. Folk notions of race founded on the idea that deep, mutually exclusive biological categories dividing groups of people have scientific and cultural merit. This type of thinking is rooted in the notion that there are underlying, essential differences among people and that those observable physical differences among people are rooted in biology, in genetics (Ossorio, Duster, 2005: 2).

Dividing groups of people does have scientific, cultural and philosophical merit. The concept of "essences" has long been discarded by philosophers. There are, though, differences in both anatomy and physiology between people whose ancestors come from different geographic locations, and these differences, at the extreme end of the distribution, would be enough to cause the differences that are seen in elite sporting competition.

Either way, the argument for the existence of race is simple: 1) populations differ in physical attributes (facial, morphological) which then 2) correlate with geographic ancestry. Therefore, race has a biological basis since the physical differences between these populations are biological in nature. Now that we have established that race exists using only physical features, it should be extremely simple to show how Kerr (2010) is in error with his strong claims regarding race and the so-called “mythology” of racial superiority in sports. Race is biological; the biological argument for race is sound (read here and here, and also see Hardimon, 2017).

Genetic determinism

True genetic determinism—as commonly conceived—does not have any sound, logical basis (Resnick and Vorhaus, 2006). So Kerr's (2010) claims in this section need to be dissected here. This next quote, though, is pretty much imperative to the soundness and validity of his whole article, and let's just say that it's easy to rebut, which invalidates his entire argument:

Vinay Harpalani is one of the most outspoken critics of using genetic determinism to validate notions of inferiority or the superiority of certain groups (in this case Black athletes). He argues that in order for any of Entine’s claims to be valid he must prove that: 1) there is a systematic way to define Black and White populations; 2) consistent and plausible genetic differences between the populations can be demonstrated; 3) a link between those genetic differences and athletic performance can be clearly shown (2004).

This is too easy to prove.

1) While I do agree that the terminology of 'white' and 'black' is extremely broad, as can be seen by looking at Rosenberg et al (2002), genetic clusters corresponding to what we call 'white' and 'black' exist (and are a part of continental-level minimalist races). So is there a systematic way to define 'Black' and 'White' populations? Yes, there is; genetic testing will show where one's ancestors came from recently, thereby proving point 1.

2) Consistent and plausible genetic differences between populations can be demonstrated. Sure, there is more variation within races than between them (Lewontin, 1972; Rosenberg et al, 2002; Witherspoon et al, 2007; Hunley, Cabana, and Long, 2016), but even these small between-continent/group differences would have huge effects at the tail end of the distribution (a toy sketch of this tail effect follows this list).

3) I have compiled numerous data on genetic differences between African ethnies and European ethnies and how these genetic differences then cause differences in elite athletic performance. I have shown that Jamaicans, West Africans, Kenyans and Ethiopians (certain subgroups of the two latter countries) have genetic/somatotypic differences that then lead to differences in these sporting competitions. So we can say that race can predict traits important for certain athletic competitions.

1) The terminology of 'White' and 'Black' is broad, but we can still classify individuals along these lines; 2) consistent and plausible genetic differences between races and ethnies do exist; 3) a link between these genetic differences and athletic differences between groups can be found. Therefore Entine's (2000) arguments—and the validity thereof—are sound.
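As a toy illustration of the tail-end point in (2) above (the distributions and numbers are invented and are not estimates of any real trait): even when two groups overlap almost completely, a small difference in means produces large ratios far out in the tails.

```python
from statistics import NormalDist

# Toy illustration (invented numbers): two overlapping normal distributions
# whose means differ by 0.5 standard deviations. Most individuals from the
# two groups are indistinguishable, but the ratio of individuals beyond a
# high cutoff grows quickly as the cutoff moves further into the tail.
group_a = NormalDist(mu=0.0, sigma=1.0)
group_b = NormalDist(mu=0.5, sigma=1.0)

for cutoff in [1, 2, 3, 4]:
    tail_a = 1 - group_a.cdf(cutoff)
    tail_b = 1 - group_b.cdf(cutoff)
    print(f"cutoff={cutoff} SD: ratio B/A beyond cutoff = {tail_b / tail_a:.1f}")
```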

Kerr (2010) then makes a few comments on the West’s “obsession with superficial physical features such as skin color”, but using Hardimon’s minimalist race concept, skin color is a part of the argument to prove the existence and biological reality of race, therefore skin color is not ‘superficial’, since it is also a tell of where one’s ancestors evolved in the recent past. Kerr (2010: 21) then writes:

Marks writes that Entine is saying one of three things: that the very best Black athletes have an inherent genetic advantage over the very best White athletes; that the average Black athlete has a genetic advantage over the average White athlete; that all Blacks have the genetic potential to be better athletes than all Whites. Clearly these three propositions are both unknowable and scientifically untenable. Marks writes that "the first statement is trivial, the second statistically intractable, and the third ridiculous for its racial essentialism" (Marks, 2000: 1077).

The first two, in my opinion (that the very best black athletes have an inherent genetic advantage over the very best white athletes, and that the average black athlete has a genetic advantage over the average white athlete), are true, and I don't know how you can deny this, especially if you're talking about AVERAGES. The third statement is ridiculous, because it doesn't work like that. Kerr (2010), of course, states that race is not a biological reality, but I've proven that it is, so that statement is a non-factor.

Kerr (2010) then states that "demonstrating across the board genetic variations between populations — has in recent years been roundly debunked", and also says "Differences in height, skin color, and hair texture are simply the result of climate-related variation." This is one of the craziest things I've read all year! Differences in height would cause differences in elite sporting competition; differences in skin color can be conceptualized as one's ancestors' multi-generational adaptation to the climate they evolved in, as can hair texture. If only Kerr (2010) knew that this statement was the beginning of the end of his shitty argument against Entine's book. Race is a social construct of a biological reality, and there are genetic differences between races, however small (Risch et al, 2002; Tang et al, 2005), but these small differences can mean big differences at the elite level.

The “environment” and biological variability

Kerr (2010) then shifts his focus over to, not genetic differences, but biological differences. He specifically discusses the Kenyans—the Kalenjin—stating that "height or weight, which play an instrumental role in helping define an individual's athletic prowess, have not been proven to be exclusively rooted in biology or genetics." While twin-based heritability estimates of BMI and height are high (both around .8), I think we can disregard those numbers since they come from highly flawed twin studies and molecular genetic evidence shows lower heritabilities. Either way, height is surely strongly influenced by 'genes'. Another important point is that Kenya has one of the lowest average BMIs in the world, 20.7 for Kenyan men, which is also part of the cause of why certain African ethnies dominate running competitions.

I don't disagree with Kerr (2010) here too much; many papers show that SES/cultural/social factors are very important to Kenyan runners (Onywera et al, 2006; Wilbur and Pitsiladis, 2012; Tucker, Onywera, and Santos-Concejero, 2015). You can have all of the 'physical gifts' in the world, but if they're not combined with the will to do your best, along with cultural and social factors, you won't succeed. And having an advantageous genotype and physique is useless without a strong mind (Lippi, Favaloro, and Guidi, 2008):

An advantageous physical genotype is not enough to build a top-class athlete, a champion capable of breaking Olympic records, if endurance elite performances (maximal rate of oxygen uptake, economy of movement, lactate/ventilatory threshold and, potentially, oxygen uptake kinetics) (Williams & Folland, 2008) are not supported by a strong mental background.

Dissecting this, though, is tougher, because being born at certain altitudes will cause certain advantageous traits, such as a larger lung capacity (which confers an advantage when competing at lower altitudes), but only certain subpopulations live in these high-altitude areas. So what is it? Genetic? Cultural? Environmental? All three? Nature vs nurture is a false dichotomy, so it is a mixture of the three.

How does one explain, then, the athlete who trains countless hours a day fine-tuning a jump shot, like LeBron James or shaving seconds off sub-four minute miles like Robert Kipkoech Cheruiyot, a four time Boston Marathon winner?

Literally no one denies that elite athletes put in insane amounts of practice, but if everyone had the same amount of practice they still would not have similar abilities.

He also briefly brings up muscle fibers, stating:

These include studies on African fast twitch muscle fibers and development of motor skills. Entine includes these studies to demonstrate irrevocable proof of embedded genetic differences between populations but refuses to accept the fact that any differences may be due to environmental factors or training.

This, again, shows ignorance of the literature. An individual's muscle fibers are formed during development from the fusion of several myoblasts, with differentiation being completed before birth. Muscle fiber typing is also set by age 6; no difference in skeletal muscle tissue was found when comparing 6-year-olds and adults, therefore we can state that muscle fiber typing is set by age 6 (Bell et al, 1980). You can, of course, train type II fibers to have similar aerobic capacity to type I fibers, but they'll never be fully similar. This is something that Kerr (2010) is obviously ignorant of because he's not well-read on the literature, which causes him to make dumb statements like "any differences [in muscle fiber typing] may be due to environmental factors or training".

Black domination in sports

Finally, Kerr (2010) discusses the fact that whites dominated certain running competitions in the Olympics and that before the 1960s a majority of distance-running gold medals went to white athletes. He then states that the 2008 Boston Marathon winner was Kenyan, but that the next 4 behind him were not. Now, let's check out the 2017 Boston Marathon winners: Kenya, USA, and Japan took the top 3 spots for the men, with Kenyans/Ethiopians taking 5 of the top 15; the same is true of the women, with a Kenyan winner and Kenyans/Ethiopians taking 5 of the top 15 spots. The fact that whites used to do well in running sports is a non-factor; Jesse Owens blew away the competition in the Games in Germany, which showed how blacks would begin to dominate in the US decades later.

Kerr (2010) then ends the article with a ton of wild claims; the wildest one, in my opinion, being that "Kenyans are no more genetically different from any other African or European population on average". Does anyone believe this? Because I have data to the contrary. They have a higher VO2 max, which of course is trainable but has a 'genetic' component (Larsen, 2003); other authors argue that genetic differences between populations account for differences in success in running competition between populations (Vancini et al, 2014); and male and female Kenyan and Ethiopian runners are the fastest in the half and full marathon (Knechtle et al, 2016). There is a large amount of data out there on Kenyan/Ethiopian and others' dominance in running; it seems Kerr (2010) just ignored it. I agree with Kerr that Kenyans show that humans can adapt to their environment, but his conclusion here:

The fact that runners coming from Kenya do so well in running events attests to the fact the combination of intense high altitude training, consumption of a low-fat, high protein diet, and a social and cultural expectation to succeed have created in recent decades an environment which is highly conducive to producing excellent long-distance runners.

is very strong, and while I don't disagree with anything here, he's disregarding how somatotype and genes differ between Kenyans and other populations that compete in these sports, which then lead to differences in elite sporting competition.

Elite sporting performance is influenced by myriad factors, including psychology, ‘environment’, and genetic factors. Something that Kerr (2010) doesn’t understand—because he’s not well-read on this literature—is that many genetic factors that influence sporting performance are known. The ability to become elite depends on one’s capacity for endurance, muscle performance, the ability of the tendons and ligaments to withstand stress and injury, and the attitude to train and push above and beyond what normal people can do (Lippi, Longo, and Maffulli, 2010). We can then extend this to human races; some are better-equipped to excel in running competitions than others.

On its face, Kerr's (2010) claim that there are no inherent differences between races is wrong. Races differ in somatotype, which is due to evolution in different geographic locations for tens of thousands of years. The human body is perfectly adapted for long distance running (Murray and Costa, 2012), and since our capabilities for endurance running evolved in Africa and modern Africans, theoretically, have a musculoskeletal structure similar to the Homo sapiens that left Africa around 70 kya, it's only logical to state that Africans, on average, have an inherent ability in running competitions (West and East Africans in particular, while North Africans fare very well in middle distance running, which, again, comes down to living at higher altitudes, like the Kenyans and Ethiopians).

Wagner and Heyward (2000) reviewed many studies on the physiological differences between blacks and whites. Blacks skew towards mesomorphy; black youths had smaller biiliac and bitrochanteric widths (the widest measure of the pelvis at the outer edges and the distance between the flat processes on the femurs, respectively), and black infants had longer extremities than white infants (Wagner and Heyward, 2000). We have anatomic evidence that blacks are superior runners (in an American context). Mesomorphic athletes are more likely to be sprinters (Sands et al, 2005; which is also seen in prepubescent children: Marta et al, 2013), while Kenyans are ecto-dominant (Vernillo et al, 2013), which helps to explain their success at long-distance running. So just by looking at the phenotype (a marker of race that corresponds with geographic ancestry, evidence of the biological existence of race) we can confidently state, on average, how an individual or a population will fare in certain competitions.

Conclusion

Kerr’s (2010) arguments leave a ton to be desired. Race exists and is a biological reality. I don’t know why this paper got published since it was so full of errors; his arguments were not sound and much of the literature contradicts his claims. What he states at the end about Kenyans is not wrong at all, but to not even bring up genetic/biologic differences as a factor influencing their performance is dishonest.

Of course, a whole slew of factors, be they biological, cultural, psychological, genetic, socioeconomic, anatomic, physiologic etc influence sporting performance, but certain traits are more likely to be found in certain populations, and in the year 2018 we have a good idea of what influences elite sporting performance and what does not. It just so happens that these traits are unevenly distributed between populations, and the cause is evolution in differing climates in differing geographic locations.

Race exists and is a biological reality. Biological anatomic/physiological differences between these races then manifest themselves in elite sporting competition. The races differ, on average, in traits important for success in certain competitions. Therefore, race explains some of the variance in elite sporting competition.

Does Playing Violent Video Games Lead to Violent Behavior?

1400 words

President Trump was quoted the other day saying: "We have to look at the Internet because a lot of bad things are happening to young kids and young minds and their minds are being formed," Trump said, according to a pool report, "and we have to do something about maybe what they're seeing and how they're seeing it. And also video games. I'm hearing more and more people say the level of violence on video games is really shaping young people's thoughts." But outside of broad assertions like this—that playing violent video games causes violent behavior—does the claim stack up to what the scientific literature says? In short, no, it does not. (A lot of publication bias exists in this debate, too.) Why do people think that violent video games cause violent behavior? Mostly due to the APA and their broad claims with little evidence.

Just doing a cursory Google search of ‘violence in video games pubmed’ brings up 9 journal articles, so let’s take a look at a few of those.

The first article is titled The Effect of Online Violent Video Games on Levels of Aggression, by Hollingdale and Greitemeyer (2014). They took 101 participants and randomized them to one of four experimental conditions: neutral offline, neutral online (Little Big Planet 2), violent offline, and violent online (Call of Duty: Modern Warfare). After playing the assigned game, participants answered a questionnaire, and aggressive behavior was then measured using the hot sauce paradigm (Lieberman et al, 1999). Hollingdale and Greitemeyer (2014) conclude that “this study has identified that increases in aggression are not more pronounced when playing a violent video game online in comparison to playing a neutral video game online.”

Staude-Muller (2011) finds that “it was not the consumption of violent video games but rather an uncontrolled pattern of video game use that was associated with increasing aggressive tendencies.” Przybylski, Ryan, and Rigby (2009) found that enjoyment, value, and desire to play in the future were strongly related to competence in the game. Players who were high in trait aggression, though, were more likely to prefer violent games, even though the violence didn’t add to their enjoyment of the game, and violent content accounted for little overall variance in the satisfactions previously cited.

Tear and Nielsen (2013) failed to find evidence that violent video game playing leads to a decrease in pro-social behavior (Szycik et al, 2017 also show that video games do not affect empathy). Gentile et al (2014) show that “habitual violent VGP increases long-term AB [aggressive behavior] by producing general changes in ACs [aggressive cognitions], and this occurs regardless of sex, age, initial aggressiveness, and parental involvement. These robust effects support the long-term predictions of social-cognitive theories of aggression and confirm that these effects generalize across culture.” The APA (2015) even states that “scientific research has demonstrated an association between violent video game use and both increases in aggressive behavior, aggressive affect, aggressive cognitions and decreases in prosocial behavior, empathy, and moral engagement.” How true is all of this, though? Does playing violent video games truly increase aggression/aggressive behavior? Does it have an effect on violence in America and shootings overall?

No.

Whitney (2015) states that the video-games-cause-violence paradigm has “weak support” (pg 11) and that, pretty much, we should be cautious before taking this “weak support” as conclusive. He concludes that there is not enough evidence to establish a truly causal connection between violent video game playing and violent and aggressive behavior. Cunningham, Engelstatter, and Ward (2016) tracked the sales of violent video games and criminal offenses after those games were released. They found that violent crime actually decreased in the weeks following the release of a violent game. Of course, this does not rule out any longer-term effects of violent game playing, but in the short term, this is good evidence against the case that violent games cause violence. (Also see the Psychology Today article on the matter.)

We seem to have a few problems here, though. How are we to untangle the effects of movies and other forms of violent media that children consume? We can’t. So the researcher(s) must assume that video games, and only video games, cause this type of aggression. I don’t even see how one can logically state that, out of all types of media, violent video games—and not violent movies, cartoons, TV shows, etc.—cause aggression/violent behavior.

Back in 2011, in Brown v. Entertainment Merchants Association, the Supreme Court noted that the studies showed effects on violent/aggressive behavior that were small and could not be untangled from the effects of other kinds of violent media. Ferguson (2015) found that violent video game playing had little effect on children’s mood, aggression levels, pro-social behavior, or grades. He also found publication bias in this literature (Ferguson, 2017). Contrary to what those who say video games cause violence/aggressive behavior would predict, video game playing was associated with a decrease in youth crime (Ferguson, 2014; Markey, Markey, and French, 2015, which is in line with Cunningham, Engelstatter, and Ward, 2016). You can read more about this in Ferguson’s article for The Conversation, along with his and others’ responses to the APA, which states that violent video games cause violent behavior (they argue that the APA is biased). (Also read a letter from 230 researchers on the bias in the APA’s Task Force on Violent Media.)

How would one actually untangle the effects of, say, violent video game playing from the effects of other ‘problematic’ forms of media that also depict aggression/aggressive acts towards others, and actually pinpoint violent video games as the culprit? That’s right: one can’t. How would you realistically control for the fact that the child grows up around—and consumes—so much ‘violent’ media, sees others become violent around him, etc.? How can you logically state that the video games are the cause? Some may think it logical that someone who plays a game like, say, Call of Duty for hours on end each day would be more likely to be violent/aggressive or more likely to commit atrocities like school shootings. But none of these studies has ever come to the conclusion that violent video games may/will cause someone to kill or go on a shooting spree. It just doesn’t make sense. I can, of course, see the logic in believing that it would lead to aggressive behavior/a lack of pro-social behavior (say the kid played a lot of games and had little outside contact with people his age), but the literature on this subject should be enough to put claims like this to bed.

It’s just about impossible to untangle the so-called small effects of video games on violent/aggressive behavior from the effects of other types of media, such as violent cartoons and violent movies. Who’s to say it’s the violent video games, and not the violent movies and violent cartoons too, that ’cause’ this type of behavior? It’s logically impossible to distinguish, so the small relationship between video games and violent behavior can safely be ignored. The media seem to be getting this right, which is a surprise (though I bet that if Trump had said the opposite—that violent video games don’t cause violent behavior/shootings—these same people would be saying that they do), but even a broken clock is right twice a day.
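To see why the untangling problem is so hard in principle, here is a toy simulation (the numbers are entirely made up for illustration, not taken from any of the studies above) in which aggression is driven only by violent movies, yet video game exposure still correlates with aggression simply because the two exposures travel together. It needs Python 3.10+ for `statistics.correlation`:

```python
import random
import statistics

random.seed(42)

games, movies, aggression = [], [], []
for _ in range(10_000):
    media = random.gauss(0, 1)              # latent "violent media consumption" factor
    game = media + random.gauss(0, 0.5)     # violent video game exposure
    movie = media + random.gauss(0, 0.5)    # violent movie/TV exposure
    games.append(game)
    movies.append(movie)
    aggression.append(0.3 * movie + random.gauss(0, 1))  # aggression depends ONLY on movies here

print(statistics.correlation(games, aggression))   # ~0.25: a "game effect" with no causal link behind it
print(statistics.correlation(movies, aggression))  # ~0.32: the actual driver in this toy world
print(statistics.correlation(games, movies))       # ~0.80: the two exposures are entangled
```

In a toy world like this, any observational design that only measures game exposure will find a positive association, even though games do nothing.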

So Trump’s claim (even if he didn’t outright state it) is wrong, along with anyone else who would want to jump in and attempt to say that video games cause violence. In fact, the literature shows a decrease in violence after games are released (Ferguson, 2014; Markey, Markey, and French, 2015; Cunningham, Engelstatter, and Ward, 2016). The amount of publication bias in this field (Ferguson, 2017; also see Copenhaver and Ferguson, 2015, who show how the APA ignores bias and methodological problems in these studies) should lead one to question the body of data we currently have, since studies that find an effect are more likely to get published than studies that find no effect.
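As a rough illustration of that last point, suppose (purely hypothetically) that the true effect of violent games on aggression is zero and that only studies reaching p < .05 get published; the published studies would still report a sizeable average effect:

```python
import random
import statistics

random.seed(1)

TRUE_EFFECT = 0.0   # assume no real effect of games on aggression
SE = 0.15           # sampling error of a typical small study

estimates = [random.gauss(TRUE_EFFECT, SE) for _ in range(1_000)]
published = [d for d in estimates if abs(d) > 1.96 * SE]   # only "significant" results get published

print(len(published))                                  # roughly 50 of 1,000 studies clear the bar
print(statistics.mean(abs(d) for d in published))      # ~0.35: the inflated published effect size
print(statistics.mean(abs(d) for d in estimates))      # ~0.12: what the full literature would show
```

This is only a sketch of the selection mechanism, but it shows why a literature filtered by significance can look far more impressive than the underlying data.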

Video games do not cause violent/aggressive behavior/school shootings. There is literally no evidence that they are linked to the deaths of individuals, and with the small effects noted on violent/aggressive behavior due to violent video game playing, we can disregard those claims. (One thing video games are good for, though, is improving reaction time (Benoit et al, 2017). The literature is strong here; playing these so-called “violent video games” such as Call of Duty improved children’s reaction time, so wouldn’t you say that these ‘violent video games’ have some utility?)

Lead, Race, and Crime

2500 words

Lead has many known neurological effects on the developing brain and nervous system that lead to many deleterious health and life outcomes, including (but not limited to) lower IQ, higher rates of crime, higher blood pressure, and higher rates of kidney damage, which have permanent, persistent effects (Stewart et al, 2007). Chronic lead exposure can “also lead to decreased fertility, cataracts, nerve disorders, muscle and joint pain, and memory or concentration problems” (Sanders et al, 2009). Lead exposure in utero, in infancy, and in childhood can also lead to “neuronal death” (Lidsky and Schneider, 2003), while epigenetic inheritance also plays a part (Sen et al, 2015). How do blacks and whites differ in exposure to lead? How large is the difference between the two races in America, and how much does it contribute to crime? On the other hand, China has high rates of lead exposure but lower rates of crime, so how does China fit the lead-crime relationship overall? Are the Chinese an outlier, or is there something else going on?

The effects of lead on the brain are well known, and considerable effort has been put into lowering lead levels in America (Gould, 2009). Higher exposure to lead is also found in poorer, lower-class communities (Hood, 2005). Since higher levels of lead exposure are found more often in lower-class communities, blacks should have higher blood-lead levels than whites, and this is what we find.

Blacks had a 27 percent higher concentration of lead in their tibiae, along with significantly higher levels of blood lead, “likely because of sustained higher ongoing lead exposure over the decades” (Theppeang et al, 2008). Other data—coming out of Detroit—show the same relationships (Haar et al, 1979; Talbot, Murphy, and Kuller, 1982; Lead poisoning in children under 6 jumped 28% in Detroit in 2016; also see Maqsood, Stanbury, and Miller, 2017), while lead levels in the water contribute to high blood-lead levels in Flint, Michigan (Hanna-Attisha et al, 2016; Laidlaw et al, 2016). Cassidy-Bushrow et al (2017) also show that “The disproportionate burden of lead exposure is vertically transmitted (i.e., mother-to-child) to African-American children before they are born and persists into early childhood.”

Children exposed to lead have lower brain volumes, specifically in the ventrolateral prefrontal cortex, the same region of the brain that is impaired in antisocial and psychotic persons (Cecil et al, 2008). The community that was tested was well within the ‘safe’ range set by the CDC (Raine, 2014: 224), though the CDC says that there is no safe level of lead exposure. There is a large body of studies showing that there is no safe level of lead exposure (Needleman and Landrigan, 2004; Canfield, Jusko, and Kordas, 2005; Barret, 2008; Rossi, 2008; Abelsohn and Sanborn, 2010; Betts, 2012; Flora, Gupta, and Tiwari, 2012; Gidlow, 2015; Lanphear, 2015; Wani, Ara, and Usmani, 2015; Council on Environmental Health, 2016; Hanna-Attisha et al, 2016; Vorvolakos, Arseniou, and Samakouri, 2016; Lanphear, 2017). So the data are clear that there is no safe level of lead exposure, and even small exposures can lead to deleterious outcomes.

Further, one brain study of 532 men who worked in a lead plant showed that those who had higher levels of lead in their bones had smaller brains, even after controlling for confounds like age and education (Stewart et al, 2008). Raine (2014: 224) writes:

The fact that the frontal cortex was particularly reduced is very interesting, given that this brain region is involved in violence. This lead effect was equivalent to five years of premature aging of the brain.

So we have good data showing that the parts of the brain implicated in violent tendencies are reduced in people exposed to more lead, indicating a relationship. But what about antisocial disorders? Are people with higher levels of lead in their blood more likely to be antisocial?

Needleman et al (1996) show that boys who had higher levels of lead in their blood had higher teacher ratings of aggressive and delinquent behavior, along with higher self-reported ratings of aggressive behavior. Even high blood-lead levels later in childhood are related to crime. One study in Yugoslavia showed that blood lead levels at age three had a stronger relationship with destructive behavior than did prenatal blood lead levels (Wasserman et al, 2008); the same pattern was seen in America, with high blood lead levels correlating with antisocial and aggressive behavior at age 7 but not at age 2 (Chen et al, 2007).

Nevin (2007) showed a strong relationship between preschool lead exposure and subsequent trends in criminal cases in America, Canada, Britain, France, Australia, Finland, West Germany, and New Zealand. Reyes (2007) also shows that violent crime fell more quickly in states that had seen larger earlier decreases in lead levels, while variations in lead levels within cities correlate with variations in crime rates (Mielke and Zahran, 2012). Nevin (2000) showed a strong relationship between environmental lead levels from 1941 to 1986 and corresponding changes in violent crime twenty-three years later in the United States. Raine (2014: 226) writes (emphasis mine):

So, young children who are most vulnerable to lead absorption go on twenty-three years later to perpetrate adult violence. As lead levels rose throughout the 1950s, 1960s, and 1970s, so too did violence correspondingly rise in the 1970s, 1980s and 1990s. When lead levels fell in the late 1970s and early 1980s, so too did violence fall in the 1990s and the first decade of the twenty-first century. Changes in lead levels explained a full 91 percent of the variance in violent offending—an extremely strong relationship.

[…]

From international to national to state to city levels, the lead levels and violence curves match up almost exactly.
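If Raine’s “91 percent of the variance” figure is read as an R² from a simple two-variable relationship between lead levels and later violent offending (my reading of how the number is being summarized, not something the passage spells out), it implies a correlation of roughly 0.95:

```python
import math

r_squared = 0.91           # "91 percent of the variance" from the passage above
r = math.sqrt(r_squared)   # back out the implied bivariate correlation
print(round(r, 3))         # 0.954
```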

But does lead have a causal effect on crime? Due to the deleterious effects it has on the developing brain and nervous system, we should expect to find a relationship, and this relationship should become stronger with higher doses of lead. Fortunately, I am aware of one analysis, on a sample that’s 90 percent black, which shows that with every 5 microgram per deciliter (μg/dL) increase in prenatal blood-lead levels there was a 40 percent higher risk of arrest (Wright et al, 2008). This makes sense given the deleterious developmental effects of lead; we know how and why people with high levels of lead in their blood show brain scans/brain volumes in certain regions similar to those of antisocial/violent people. So this is yet more suggestive evidence for a causal relationship.

Jennifer Doleac discusses three studies which show that blood-lead levels in America need to be addressed, since they are strongly related to negative health outcomes. Aizer and Currie (2017) show that “A one-unit increase in lead increased the probability of suspension from school by 6.4-9.3 percent and the probability of detention by 27-74 percent, though the latter applies only to boys.” They also show that children who live nearer to roads have higher blood-lead levels, since the soil near highways was contaminated decades ago by leaded gasoline. Feigenbaum and Muller (2016) show that cities’ use of lead pipes increased murder rates between 1921 and 1936. Finally, Billings and Schnepel (2017: 4) show that their “results suggest that the effects of high levels of [lead] exposure on antisocial behavior can largely be reversed by intervention—children who test twice over the alert threshold exhibit similar outcomes as children with lower levels of [lead] exposure (BLL<5μg/dL).”

Wright et al (2008) found a relationship between lead exposure in utero and arrests in adulthood. The sample was 90 percent black, with numerous controls. They found that prenatal and postnatal blood-lead exposure was associated with higher arrest rates, including higher arrest rates for violent offenses; again, for every 5 μg/dL increase in prenatal blood-lead levels, there was a 40 percent greater risk of arrest. This is direct causal evidence for the lead-causes-crime hypothesis.
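To make the Wright et al (2008) figure concrete, here is a minimal sketch that treats the 40 percent increase as a multiplicative risk ratio per 5 μg/dL and compounds it across other exposure gaps; the extrapolation is my own simplification, not something the paper reports:

```python
RR_PER_5_UG_DL = 1.40  # Wright et al (2008): ~40% higher arrest risk per 5 ug/dL of prenatal lead

def relative_arrest_risk(gap_ug_dl: float) -> float:
    """Risk ratio implied by a given prenatal blood-lead gap (ug/dL),
    assuming the 40% figure compounds multiplicatively in 5 ug/dL steps."""
    return RR_PER_5_UG_DL ** (gap_ug_dl / 5)

for gap in (0.5, 1.0, 5.0, 10.0):
    print(f"{gap:>4} ug/dL higher prenatal lead -> {relative_arrest_risk(gap):.2f}x arrest risk")
```

The smaller gaps in the loop are roughly the size of the black-white blood-lead differences discussed below; the larger ones correspond to the heavier exposures seen in badly contaminated communities.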

One study showed that in post-Katrina New Orleans, decreasing lead levels in the soil caused a subsequent decrease in blood lead levels in children (Mielke, Gonzales, and Powell, 2017). Sean Last argues that, while lead does contribute to crime, the racial gaps have closed in recent decades, and therefore blood-lead levels cannot be a source of some of the variance in crime between blacks and whites; he even cites the CDC ‘lowering its “safe” values’ for lead, even though there is no such thing as a safe level of lead exposure (references cited above). White, Bonilha, and Ellis Jr. (2015) also show that minorities—blacks in particular—have higher levels of lead in their blood. Either way, Last seems to downplay large differences in lead exposure between whites and blacks at young ages, even though that is when critical development of the mind/brain and other important functions occurs. There is no safe level of lead exposure—pre- or post-natal—nor are there safe levels in adulthood. Even a small difference in blood lead levels can have pretty large effects on criminal behavior.

Sean Last also writes that “Black children had a mean BLL which was 1 ug/dl higher than White children and that this BLL gap shrank to 0.9 ug/dl in samples taken between 2003 and 2006, and to 0.5 ug/dl in samples taken between 2007 and 2010.” Though, still, there are problems here too: “After adjustment, a 1 microgram per deciliter increase in average childhood blood lead level significantly predicts 0.06 (95% confidence interval [CI] = 0.01, 0.12) and 0.09 (95% CI = 0.03, 0.16) SD increases and a 0.37 (95% CI = 0.11, 0.64) point increase in adolescent impulsivity, anxiety or depression, and body mass index, respectively, following ordinary least squares regression. Results following matching and instrumental variable strategies are very similar” (Winter and Sampson, 2017).
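To unpack the Winter and Sampson (2017) coefficients against Last’s gap figures, here is a rough sketch; it naively applies the per-μg/dL regression coefficients to group-level gaps, which is my simplification rather than anything the authors compute:

```python
# Winter and Sampson (2017): per 1 ug/dL of average childhood blood lead,
# +0.06 SD impulsivity, +0.09 SD anxiety/depression, +0.37 BMI points.
PER_UG_DL = {"impulsivity (SD)": 0.06, "anxiety/depression (SD)": 0.09, "BMI (points)": 0.37}

for gap in (0.5, 0.9, 1.0):  # black-white blood-lead gaps (ug/dL) cited by Last
    implied = {outcome: round(coef * gap, 3) for outcome, coef in PER_UG_DL.items()}
    print(f"gap of {gap} ug/dL -> {implied}")
```

Read this way, even the narrowed gaps Last cites still map onto nonzero differences in the outcomes Winter and Sampson measure.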

Naysayers may point to China, which has higher blood-lead levels than America (two times higher) but lower rates of crime, some of the lowest in the world. The Hunan province in China has considerably lowered blood-lead levels in recent years, but they are still higher than in developed countries (Qiu et al, 2015). One study even shows ridiculously high levels of lead in Chinese children: “Results showed that mean blood lead level was 88.3 micro g/L for 3 – 5 year old children living in the cities in China and mean blood lead level of boys (91.1 micro g/L) was higher than that of girls (87.3 micro g/L). Twenty-nine point nine one per cent of the children’s blood lead level exceeded 100 micro g/L” (Qi et al, 2002), while Li et al (2014) found similar levels. Shanghai also has higher blood lead levels than the rest of the developed world (Cao et al, 2014). Blood lead levels are also higher in Taizhou, China, compared to other parts of the country—and the world (Gao et al, 2017). Blood lead levels are decreasing with time, but are still higher than in other developed countries (He, Wang, and Zhang, 2009).

Furthermore, Chinese women had blood lead levels two times higher than American women (Wang et al, 2015). With transgenerational epigenetic inheritance playing a part in the transmission of DNA methylation patterns from mother to daughter and then to grandchildren (Sen et al, 2015), this is a public health threat to Chinese women and their children. So just going off this data, the claim that China is a safe country should be called into question.

Reality seems to tell a different story. It seems that the true crime rate in China is covered up, especially the murder rate:

In Guangzhou, Dr Bakken’s research team found that 97.5 per cent of crime was not reported in the official statistics.

Of 2.5 million cases of crime, in 2015 the police commissioner reported 59,985 — exactly 15 less than his ‘target’ of 60,000, down from 90,000 at the start of his tenure in 2012.

The murder rate in China is around 10,000 per year according to official statistics, 25 per cent less than the rate in Australia per capita.

"I have the internal numbers from the beginning of the millennium, and in 2002 there were 52,500 murders in China," he said.

Instead of 25 per cent less murder than Australia, Dr Bakken said the real figure was closer to 400 per cent more.
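To sanity-check the quoted comparisons, here is the per-capita arithmetic using rough, assumed population and homicide figures (China ≈ 1.4 billion people; Australia ≈ 24 million people and roughly 240 homicides a year; only the Chinese murder counts come from the quote above):

```python
CHINA_POP = 1_400_000_000     # rough, assumed figure
AUSTRALIA_POP = 24_000_000    # rough, assumed figure
AUSTRALIA_MURDERS = 240       # rough, assumed figure (~1 per 100,000)

def per_100k(murders: int, population: int) -> float:
    return murders / population * 100_000

print(per_100k(10_000, CHINA_POP))                 # ~0.71 per 100,000: the official figure
print(per_100k(52_500, CHINA_POP))                 # ~3.75 per 100,000: Bakken's internal figure
print(per_100k(AUSTRALIA_MURDERS, AUSTRALIA_POP))  # ~1.0 per 100,000: Australia, roughly
```

The exact multiples depend on which year’s populations and counts are used, but the direction matches the quoted contrast: the official count sits below Australia’s per-capita rate, while the internal count sits several times above it.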

Guangzhou, for instance, doesn’t keep data for crime committed by migrants, who commit 80 percent of the crime in the city. Out of 2.5 million crimes committed in Guangzhou, only 59,985 crimes were reported in the official statistics, which was 15 crimes away from the target of 60,000. Weird… Either way, China doesn’t have a murder rate similar to Switzerland’s:

The murder rate in China does not equal that of Switzerland, as the Global Times claimed in 2015. It’s higher than anywhere in Europe and similar to that of the US.

China also ranks as more corrupt than the US on corruption indices, which is further evidence suggestive of a covered-up crime rate. So this is good evidence that, contrary to the claims of people who would attempt to downplay the lead-crime relationship, these effects are real and they do matter in regard to crime and murder.

So it’s clear that we can’t trust the official Chinese crime stats, since much of their crime goes unreported. Why should we trust crime stats from a corrupt government? The evidence is clear that China has a higher crime—and murder—rate than is seen on the Chinese books.

Lastly, epigenetic effects can and do have a lasting impact even on the grandchildren of mothers exposed to lead while pregnant (Senut et al, 2012; Sen et al, 2015). Sen et al (2015) showed that lead exposure during pregnancy affected the DNA methylation status of the fetal germ cells, which then led to altered DNA methylation in dried blood spots from the grandchildren of the mother exposed to lead while pregnant—though this is indirect evidence. If this is true and holds in larger samples, then this could be big for criminological theory and could be a cause of higher rates of black crime (note: I am not claiming that lead exposure accounts for all, or even most, of the racial crime disparity. It does account for some, as can be seen from the data compiled here).

In conclusion, the relationship between lead exposure and crime is robust and replicated across many countries and cultures. No safe level of blood lead exists; even so-called trace amounts can lead to horrible developmental and life outcomes, which include higher rates of criminal activity. There is a clear relationship between lead increases/decreases in populations—even within cities—and subsequent crime rates. Some may point to the Chinese as evidence against a strong relationship, but there is strong evidence that the Chinese do not report anywhere near all of their crime data. Epigenetic inheritance, too, can play a role here, mostly regarding blacks, since they’re more likely to be exposed to high levels of lead in the womb, in infancy, and in childhood; this could exacerbate crime rates further. The evidence is clear that lead exposure leads to increased criminal activity and that there is a strong relationship between blood lead levels and crime.
