By Afrosapiens, 555 words.
The Harmattan season is a little-known feature of the climate of the West African subcontinent. Like a temperate-climate winter, it occurs between November and March, five months during which the region experiences dry, hazy, and colder conditions due to Saharan dust particles carried by the Harmattan trade wind.
Although the Harmattan season is sometimes described as a West African winter, with temperatures commonly dropping to a low of 7°C (45°F) at night and in the morning, typical tropical temperatures of 25°C to 35°C (77°F to 95°F) are still reached in the afternoon. Humidity falls below 15% and the region receives no rainfall during the season. More than the drought, the colder temperatures, or the occasional dust storms and wildfires, it is the dusty Harmattan haze that makes the West African winter challenging, significantly reducing visibility and contributing to health problems such as asthma, meningitis, and skin and eye conditions. From an evolutionary standpoint, it is possible that the Harmattan season has driven various anatomical adaptations affecting brain characteristics.
Despite the common hereditarian claim that Sub-Saharan Africans average smaller cranial capacities than Eurasians because of the warmer climates of tropical Africa, the few studies I have come across on West Africa paint a significantly different picture. In a 2011 sample of North-Eastern Nigerian adults, likely of Kanuri ethnicity, the reported average cranial capacity was 1424cc for males and 1331cc for females, a combined average of 1378cc. In a 2013 sample of 527 Igbos aged 14-20 from Anambra State (Southeastern Nigeria), the reported cranial capacities were 1411cc for males and 1443cc for females, a combined average of 1427cc. In another study of Southeastern Nigerians (2011), the reported values were closer to those usually claimed, averaging 1310cc among Edos, 1273cc among Igbos, and 1256cc among Urhobos.
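For transparency, the combined figures quoted above are just the unweighted means of the male and female averages, which assumes roughly equal numbers of each sex in the sample; the arithmetic can be checked in a few lines:

```python
# Unweighted combined cranial capacities from the Nigerian samples cited above.
# Assumption: equal numbers of males and females per sample, which is what the
# combined figures quoted in the studies appear to use.

def combined_mean(male_cc, female_cc):
    """Simple (unweighted) mean of the male and female averages, in cc."""
    return (male_cc + female_cc) / 2

kanuri_2011 = combined_mean(1424, 1331)  # North-Eastern Nigerian sample
igbo_2013 = combined_mean(1411, 1443)    # Anambra State sample

print(round(kanuri_2011))  # 1378, matching the quoted total average
print(round(igbo_2013))    # 1427, matching the quoted combined average
```

If the sexes were not equally represented, a weighted mean would shift these totals slightly, but the studies report only the combined values above.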
Although these are only a few studies of West African cranial characteristics, they at least have the merit of being recent (less than 10 years old) and of being drawn from actual measurements of living persons, unlike Beals et al.'s 1984 reference study, in which the West African values are inferred from simplistic climatic variables in the absence of actual skulls from the region. I have often cited these high-cranial-capacity West African samples as a refutation of the cold winter theory of brain size differences. And whereas hereditarian debaters have commonly dismissed them as meaningless exceptions to the rule, there is no scientific rule with unexplainable exceptions.
In fact, judging from the competing and more generally accepted theory that brains and eyeballs grow in adaptation to low-light environments, the high cranial capacity, especially in the Northern Nigerian sample, makes perfect sense if seen as an adaptation to the low visibility caused by the Harmattan haze from November to March. Unfortunately, I was unable to find any data on West African eye characteristics; I can only anecdotally mention a higher frequency of epicanthic folds among West Africans relative to other non-Mongoloid populations. I acknowledge that this post is somewhat speculative and based on sparse data. Nevertheless, I write it as a warning to all the bloggers and “scholars” making up theories and inferences based on a poor understanding of the complexity of the world’s current and past climatic conditions.
When I first got into HBD back in 2012, one of the first things I came across—along with the research on racial IQs from Rushton, Lynn, Jensen et al.—was the claim that the races differ in the frequency of the low-activity MAOA variant (MAOA-L): .1 percent in Caucasians (Beaver et al, 2013), 54 percent in Chinese people (Lu et al, 2013; along with 77 percent for the 3-repeat MAOA allele; Lea and Chambers, 2007), and 56 percent in Maoris (Lea and Chambers, 2007), while about 60-65 percent of Japanese people carry the low-expressing version of the gene (Way and Lieberman, 2007: 207, table 2).
So if these ethnies have higher rates of this polymorphism and it is true that this gene causes crime, then the Chinese and Japanese should have the highest rates of crime in the world—especially since the association between MAOA and violent and antisocial behavior is apparently seen even without child abuse (Ficks and Waldman, 2014). Except East Asian countries have lower rates of crime (Rushton, 1995; Rushton and Whitney, 2002). Though Japan’s low crime rate is relatively recent, and on certain measures “Japan fares the same or worse when compared to other nations” (Barberet 2009, 198). This goes against a lot of HBD theory, and I will save that for another day. (Japan has a 99 percent conviction rate, which could be due to low prosecutorial budgets; Ramseyer and Rasmusen, 2001. I will cover this in the future.)
The media fervor—as usual—gave MAOA the nickname “the warrior gene,” an extremely simplistic label, as I will show (I will have much more to say on ‘genes for’ any trait towards the end of the article).
The relevant MAOA variant was first discovered in 1993 in a Dutch family with a history of extreme violence going back as far as the 1890s. Since its discovery, it has been invoked as an ultimate cause of crime. However, as some hereditarians do note, MAOA only ’causes’ violence if one has a specific MAOA genotype and was abused as a child (Caspi et al, 2002; Cohen et al, 2006; Beaver et al, 2009; Ferguson et al, 2011; Cicchetti, Rogosch, and Thibodeau, 2012). People have invoked these gene variants as ultimate causes of crime—that is, people who carry the low-expressing MAOA variants are said to be more likely to commit crime—but the relationship is not so simple.
Maoris are more than four times as likely as Europeans to carry the low-expressing gene variant, and the same holds for African Americans relative to Europeans (Lea and Chambers, 2007).
There is, however, a protective effect against antisocial behavior/violent attitudes for whites (and not for non-whites in certain cases) if one has a certain genotype (Widom and Brzustowicz, 2006), though the authors write on page 688: “For non-whites, the effect of child abuse and neglect on the juvenile VASB was not significant (beta = .08, SE = .11, t = 1.19, ns), whereas the effect of child maltreatment on the lifetime VASB composite approached significance (beta = .13, SE = .12, t = 1.86, p = .06). For non-whites (see Figure 2), neither gene (MAOA) × environment (child abuse and neglect) interaction was significant: juvenile VASB (beta = .06, SE = .28, t = .67, ns) and lifetime VASB (beta = .01, SE = .29, t = .14, ns).” So, as you can see, the results are mixed. Whites seem to be protected against the effects of abuse on antisocial behavior and violence, but only if they have a certain genotype (which implies that those with the other genotype, if abused, will show violent and antisocial behavior). So the relationship between MAOA and criminal behavior is not as simple as some would make it out to be.
MAOA, like other genetic variants, has of course been linked to numerous other traits. Steven J. Heine, author of the book DNA is Not Destiny: The Remarkable and Completely Misunderstood Relationship Between You and Your Genes, writes:
However, any labels like “the warrior gene” are highly problematic because they suggest that this gene is specifically associated with violence. It’s not, just as alleles from other genes do not only have one outcome. Pleiotropy is the term for how a single genetic variant can influence multiple different phenotypes. MAOA is highly pleiotropic: the traits and conditions potentially connected to the MAOA gene include Alzheimer’s, anorexia, autism, body mass index, bone mineral density, chronic fatigue syndrome, depression, extraversion, hypertension, individualism, insomnia, intelligence, memory, neuroticism, obesity, openness to experience, persistence, restless leg syndrome, schizophrenia, social phobia, sudden infant death syndrome, time perception and voting behavior. (59) Perhaps it would be more fitting to call MAOA “the everything but the kitchen sink gene.” (Heine, 2017: 195)
Something I have not seen brought up in discussions of race, crime, and MAOA is that Japanese people have the highest chance—higher even than blacks, Maoris, and whites—of carrying the low-repeat MAOA variant (Way and Lieberman, 2006: 205, Fig. 2), yet they have lower rates of crime. So MAOA cannot possibly be a ‘main cause’ of crime. It is far more complex than that. “However intuitively satisfying it may be to explain cultural differences in violence in terms of genes,” Heine writes, “as of yet there is no direct evidence for this” (Heine, 2017: 196).
Numerous people have used ‘their genes’ in an attempt to escape responsibility for criminal acts they have committed. A judge even knocked one year off a murderer’s sentence because he found the evidence for the MAOA gene’s link to violence “particularly compelling.” I find it “particularly ridiculous” that this man got less time in jail than someone who ‘had a choice’ in his decision to murder. Doesn’t it seem ridiculous that one person gets less time in jail than another, all because he may have the ‘crime/warrior gene’?
Aspinwall, Brown, and Tabery (2012) showed that when evidence of a ‘biomechanic’ cause of violence/psychopathy was presented to judges (n=191), they reduced their sentences by about one year when reading a story in which the accused was found to have the low-repeat MAOA allele (13.93 years to 12.83 years). So, as you can see, such evidence can sway judges into handing down lighter sentences, since they come to believe the accused ‘cannot control themselves’—‘it’s in their genes’.
Further, people are more lenient about sentences for criminals found to have these ‘criminal genes’ than for those found not to have them (Cheung and Heine, 2015). Monterosso, Royzman, and Schwartz (2010) also write: “Physiologically explained behavior was more likely to be characterized as “automatic,” and willpower and character were less likely to be cited as relevant to the behavior. Physiological explanations of undesirable behavior may mitigate blame by inviting nonteleological causal attributions.” So, clearly, most college students would give a lighter sentence if the individual in question were found to have ‘criminal genes’. But if these genes really did ’cause’ crime, shouldn’t carriers be given heavier sentences, to keep them on the inside longer so that those with the ‘non-criminal genes’ don’t have to suffer from the ‘genetically induced’ crime?
Heine (2017: 198-199) also writes:
But is someone really any less responsible for their actions if his or her genes are implicated? A problem with this argument is that we would be hard-pressed to find any actions that we engage in where our genes are not involved—our behaviors do not occur in any gene-free zones. Or, consider this: there actually is a particular genetic variant that, if you possess it, makes you about 40 times more likely to engage in same-sex homicides than those who possess a different variant. (66) It’s known as the Y chromosome—that is, people who possess it are biologically male. Given this, should we infer that Y chromosomes cause murders, and thus give a reduced sentence to anyone who is the carrier of such a chromosome because he is really not responsible for his actions? The philosopher Stephen Morse calls the tendency to excuse a crime because of a biological basis the “fundamental psycholegal error.” (67) The problem with this tendency is that it involves separating your genes from yourself. Saying “my genes made me do it” doesn’t make sense because there is no “I” that is independent of your genetic makeup. But curiously, once genes are implicated, people seem to feel that the accused is no longer fully in control of his or her actions.
Further, in the case of the child pornographer Gary Cossey, the court said:
The court predicted that some fifty years from now Cossey’s offense conduct would likely be discovered to be caused by “a gene you were born with. And it’s not a gene you can get rid of.” The court expressed its belief that although Cossey was in therapy, it “can only lead, in my view, to a sincere effort on your part to control, but you can’t get rid of it. You are what you’re born with. And that’s the only explanation for what I see here.”
However, this judge punished Cossey more severely on the ‘possibility’ that scientists may, 50 years from now, find ‘genes for’ child pornography use. Cossey was then given another, unbiased judge and received a more lenient sentence than the genetic-determinist judge had handed down.
Sean Last over at The Alternative Hypothesis is also a big believer in this supposed MAOA race difference as an explanation of racial differences in crime. However, as reviewed above, MAOA can be called the “everything but the kitchen sink gene” (Heine, 2017: 195), and, as I will touch on briefly below, attributing ’causes’ to genes is not the right way to look at them. It is not so easy to say that because one ‘has the warrior gene’ one will automatically be violent. Last cites a study reporting that even carriers of the MAOA allele who were not abused showed higher rates of violent behavior (Ficks and Waldman, 2014). They write (pg. 429):
The frequency of the “risk” allele in nonclinical samples of European ancestry ranges from 0.3 to 0.4, although the frequency of this allele in individuals of Asian and African ancestry appears to be substantially higher (~0.6 in both groups; Sabol et al. 1998).
So why don’t Asians—along with blacks—have higher rates of crime, if MAOA on its own causes violent and antisocial behavior? Next, I know someone will claim, “AHA! TESTOSTERONE ALSO MEDIATES THIS RELATIONSHIP!!” However, as I’ve pointed out countless times (until I’m blue in the face), blacks do not have higher levels of testosterone than whites—their levels are the same or lower (Richards et al, 1992; Gapstur et al, 2002; Rohrmann et al, 2007; Mazur, 2009; Lopez et al, 2013; Hu et al, 2014; Richard et al, 2014). Young black males do have higher levels of testosterone, but due to the environment (honor culture) (Mazur, 2016). So that canard cannot be trotted out.
All in all, these simplistic and reductionist approaches to ‘figuring out’ the ’causes’ of crime do not make any sense. Pointing at one gene and saying it is ‘the cause’ of a behavior does not make sense.
One last point on ‘genes as causes’ of behavior. This is something that deserves a piece of its own, but I will just provide a quote from Eva Jablonka and Marion Lamb’s book Evolution in Four Dimensions: Genetic, Epigenetic, Behavioral, and Symbolic Variation in the History of Life (Jablonka and Lamb, 2014: 17; read chapter one of the book here; I have the Nook version so the page number may differ):
Although many psychiatrists, biochemists, and other scientists who are not geneticists (yet express themselves with remarkable facility on genetic issues) still use the language of genes as simple causal agents, and promise their audience rapid solutions to all sorts of problems, they are no more than propagandists whose knowledge or motives must be suspect. The geneticists themselves now think and talk (most of the time) in terms of genetic networks composed of tens or hundreds of genes and gene products, which interact with each other and together affect the development of a particular trait. They recognize that whether or not a trait (a sexual preference, for example) develops does not depend, in the majority of cases, on a difference in a single gene. It involves interactions among many genes, many proteins and other types of molecule, and the environment in which an individual develops.
So to say that those who have low-functioning MAOA variants have an ‘excuse’ for committing crime is incorrect. I know most people know this, but reading some people’s writing on subjects like this, it is as if they think these singular genes/polymorphisms cause these things on their own. In actuality, you need to look at how the whole system interacts, and not reduce complex physiological systems to the sum of their parts. This is why implicating singular genes/polymorphisms as explanations for racial differences in crime does not make sense (as can be seen with the Japanese example).
Reducing behaviors to gene X without looking at the whole system does not make any sense. There are no ‘genes for’ anything, except a few Mendelian diseases (Ropers, 2010). Stating that certain genes ’cause’ X does not, as I have shown, make sense, and it wrongly, in my opinion, earns criminals lighter sentences because judges find such evidence ‘very compelling’. If that’s the case, why convict any murderer? ‘Their genes made them do it’, right? But things are not so simple that one gene can be implicated as a cause of crime or any other complex behavior; in this sense—as for most things to do with the human body—holism makes far more sense than reductionism. We need to look at how the genes ‘implicated’ in criminal behavior interact with the whole system. Only then can we understand the causes of criminal behavior. Fixating on singular genes impedes us from figuring out the true underlying reasons why people commit crime.
Remember: we can’t blame “warrior genes” for violent crime. And if someone really did have a ‘genetic predisposition to crime’ from the MAOA gene, wouldn’t it make more sense to give them more time? But, as I have covered, the relationship is not so simple. So to close: there is no ‘simple relationship’ between race, crime, and MAOA—not in the way hereditarians would have you believe. Because if the relationship were that simple, then East Asians (Chinese, Japanese) would have the highest rates of crime, and they do not.
The notion that there is any ‘progress’ to evolution is something I have rebutted countless times on this blog, most recently in Marching Up the ‘Evolutionary Tree’?, a response to Pumpkin Person’s article Marching up the evolutionary tree. Of course, people almost never change their views in a discussion (I have seen it, albeit rarely), due mainly, in my opinion, to ideology. People have so much invested in their pet theories that they cannot fathom being wrong, or being led astray by shoddy hypotheses/theories that confirm their pre-existing beliefs. I will quote a few comments from Pumpkin Person’s blog where he spews his ‘correlations’ between brain size and ‘splits’ on the ‘evolutionary tree’ that supposedly ‘prove that evolution is progressive’, then I will touch on two papers (both of which I will cover in greater depth in the future) that directly rebut his idiotic notion that so-called brain size increases across our evolutionary history (and even before we became human) are due to ‘progress in evolution’.
I think you mistyped that, but i see your point. Problem, however, most of your used phylogenies were unbalanced.
Based on the definition you provided, but not based on any meaningful definition. To me, an unbalanced tree is . . .
This is literally meaningless. Keep showing that you’ve never taken a biology class in your life, it really shows.
All it is is ignorance of basic biological thinking, along with an ideology driving his ridiculous Rushtonian notion that ‘brain size increases prove that evolution is progressive’.
You have yet to present ANY scientific logic, and my argument about taxonomic specificity is clearly beyond you.
Scientific logic?! Scientific logic?! Please. Berkeley has a whole page on misconceptions about evolution that directly rebuts his idiotic, uneducated views. It doesn’t help that his evolution education most likely comes from psychologists. Nevertheless, PP’s ‘argument’ is straight garbage. ‘Taxonomic specificity’ is meaningless when you don’t have an understanding of basic biological concepts and evolution. (I will have much more to say on his ‘taxonomic specificity’ below.)
Was every tree perfect? No, but most were pretty close, and keep in mind that any flawed trees would have the effect of REDUCING the correlation between brain size/encephalization and branching, because random error is a source of statistical noise which obscures any underlying relationship. So the fact that I repeatedly found such robust correlation in spite of alleged problems with my trees, makes my conclusions stronger, not weaker.
The fact that you ‘repeatedly’ found ‘correlations’ in spite of the ‘problems’ with your trees makes your ‘conclusions’ weaker. Comparing organisms over evolutionary time and you notice a ‘trend’ in brain size. Must mean that evolution is progressive and brain size is its calling card!!
I’m right and all the skeptics you cite are wrong.
Said like a true ideologue.
It’s not how many splits they have that I’ve been measuring, it’s how many splits occur on the tree before they branch off. Here’s a source from 2017:
Eukaryotes represent a domain of life, but within this domain there are multiple kingdoms. The most common classification creates four kingdoms in this domain: Protista, Fungi, Plantae, and Animalia.
So you needed ‘a source from 2017’ to tell you something that is literally taught on the first day of biology 101? Keep showing how uneducated you are here.
Nothing fallacious about a correlation between number of splits and brain size/encephalization.
Post hoc, ergo propter hoc is a Latin phrase for “after this, therefore, because of this.” The term refers to a logical fallacy that because two events occurred in succession, the former event caused the latter event.
Magical thinking is a form of post hoc, ergo propter hoc fallacy, in which superstitions are formed based on seeing patterns in a series of coincidences. For example, “these are my lucky trousers. Sometimes good things happen to me when I wear them.”
P1: X happened before Y.
P2: (unstated) Y was caused by something (that happened before Y).
C1: Therefore, X caused Y.
Here is PP’s (fallacious) logic:
P1: splits (X) happened before Y (brain size increase)
P2: (unstated) brain size increase was caused by something (that happened before brain size increases [splits on the tree])
C1: therefore, splits caused brain size increase
Now, I know that PP will argue that ‘splits on the evolutionary tree’ denote speciation which, in turn, denotes environmental change. This is meaningless. You’re still stating that Y was caused by something (that happened before Y) and therefore inferring that X caused Y. That is the fallacy (which a lot of HBD theories rest on).
You don’t get it. Even statistically insignificant correlations become significant when you get them FIVE TIMES IN A ROW. If you want to believe it was all a coincidence, then fine.
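Whether a run of individually non-significant correlations really adds up to a significant result is an empirical question, not a rhetorical one: the standard tool for combining independent p-values is Fisher's method. A sketch with hypothetical p-values (PP never reports his actual ones, so these numbers are purely illustrative):

```python
import math

def fisher_combined_p(p_values):
    """Fisher's method for combining independent p-values.
    The statistic X = -2 * sum(ln p_i) follows a chi-squared distribution
    with 2k degrees of freedom; for even df the survival function has a
    closed form, so no SciPy is needed."""
    x = -2 * sum(math.log(p) for p in p_values)
    k = len(p_values)  # df = 2k, so df/2 = k terms in the sum below
    half = x / 2
    # P(chi2 with 2k df > x) = exp(-x/2) * sum_{i=0}^{k-1} (x/2)^i / i!
    return math.exp(-half) * sum(half**i / math.factorial(i) for i in range(k))

# Five identical, individually non-significant results:
print(fisher_combined_p([0.2] * 5))  # ~0.097 — still not significant at 0.05
print(fisher_combined_p([0.1] * 5))  # ~0.011 — this run would be significant
```

The point: whether "five in a row" becomes significant depends entirely on the individual p-values (and on the tests being independent), which is exactly the information the claim omits.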
Phylogenies are constructed from shared derived characters. Berkeley is the go-to authority on this matter. (No, that’s not an appeal to authority.) Biologists collect information about a given animal and then infer its evolutionary relationships. Furthermore, PP’s logic is, again, fallacious. Berkeley also has tips for tree reading, where they write:
Trees depict evolutionary relationships, not evolutionary progress. It’s easy to think that taxa that appear near one side of a phylogenetic tree are more advanced than other organisms on the tree, but this is simply not the case. First, the idea of evolutionary “advancement” is not a particularly scientific idea. There is no unbiased, universal scale for “advancement.” Second, taxa with extreme versions of traits (which might be perceived as more “advanced”) may occur on any terminal branch. The position of a terminal taxon is not an indication of how adaptive, specialized, or extreme its traits are.
He may emphatically argue (as I know he will) that he’s not doing this. But, as can be seen from his article, X is ‘less advanced’ than Y, therefore splits, brain size, correlation=progress. This is dumb.
For anyone who wants to know how (and how not) to read phylogenies, read Gregory (2008). These idiotic notions that PP espouses are what freshmen in college believe due to ‘intuitions’ about evolution. The misconceptions are so rampant that biologists have written numerous papers on the matter. But some guy with a blog and no science background (and an ideology to hammer) must know more than people who educate others on phylogenies for a living.
On Phil’s response to see the Deacon paper that I will discuss below, PP writes:
That’s not a rebuttal.
Yes it is, as I will show shortly.
The first paper I will discuss is Deacon’s (1990) Fallacies of Progression in Theories of Brain-Size Evolution. It is a meaty paper with a ton of great ideas about phylogenies, along with numerous fallacies people fall into when reading trees (my favorite being the Numerology fallacy, which PP commits; see below).
Deacon argues that because people fail to analyze allometry, anatomists have mistaken artifacts for evolutionary trends. He also argues that many structural ‘brain size increases’ from ‘primitive to advanced forms’ (take note here, because this is what PP did, and this is what discredits his idiotic ideology) are the result of allometric processes.
Source: Mashour and Alkire (2013), Evolution of consciousness: Phylogeny, ontogeny, and emergence from general anesthesia.
This paper (and its figure) say it all. The notion of a scala naturae—which Rushton (2004) attempted to revive with r/K selection theory, and which I have rebutted—was first proposed by Aristotle. We now know how brain structure evolved, so the old, simple scala naturae is, obviously, out of date in the study of brain evolution.
This paper is pretty long and I don’t have time to discuss all of it so I will just provide one quote that disproves PP’s ‘study’:
Whenever a method is discovered for simplifying the representation of a complex or apparently nonsystematic numerical relationship, the method of simplification itself provides new insight into the phenomenon under study. But reduction of a complex relationship to a simple statistic makes it far easier to find spurious relationships with other simple statistics. Numerology fallacies are apparent correlations that turn out to be artifacts of numerical oversimplification. Numerology fallacies in science, like their mystical counterparts, are likely to be committed when meaning is ascribed to some statistic merely by virtue of its numeric similarity to some other statistic, without supportive evidence from the empirical system that is being described.
Deacon also writes in another 1990 article, a commentary on Ilya I. Glezer, Myron S. Jacobs, and Peter J. Morgane’s (1988) Implications of the “initial brain” concept for brain evolution in Cetacea:
The study of brain evolution is one of the last refuges for theories of progressive evolution in biology, but in this field its influence is still pervasive. To a great extent the apparent “progress” of mammalian brain evolution vanishes when the effects of brain size and functional specialization are taken into account.
(It’s worth noting that in the authors’ response to Deacon, they raised no defense of ‘progressive’ brain-size increase.)
As regards PP’s final ‘correlation’ between human races and brain size, this quote from McShea (1994: 1761) is perfect:
If such a trend [increase in brain size leading to ‘intelligence’] in primates exists and it is driven, that is, if the trend is a direct result of concerted forces acting on most lineages across the intelligence spectrum, then the inference is justified. But if it is passive, that is, forces act only on lineages at the low-intelligence end, then most lineages will have no increasing tendency. In that case, most primate species—especially those out on the right tail of the distribution like ours—would be just as likely to lose intelligence as to gain it in subsequent evolution (if they change at all).
The ‘trend’ is passive, and Homo floresiensis is the best example. We are just as likely to lose our ‘intellect’ and our ‘big brains’ as we are to become more intelligent and bigger-brained. The fact of the matter is this: the environment dictates brain size, like any other trait an organism has. Imagine a future environment that is a barren wasteland. Kilocalories are scarce; do you think humans would keep their big brains—two percent of body weight, yet accounting for a whopping 25 percent of total daily energy needs—without enough high-quality energy? Brain size supposedly began to increase in our lineage when erectus learned to control fire and cook meat (Hlubik et al, 2017).
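The energy-budget point can be made concrete with back-of-the-envelope numbers (the 2,000 kcal/day intake is an assumed round figure for illustration, not taken from any study cited here):

```python
# Rough brain energy budget, using the figures from the paragraph above:
# the brain is ~2% of body mass but consumes ~25% of daily energy intake.
DAILY_INTAKE_KCAL = 2000  # assumed round figure, for illustration only
BRAIN_SHARE = 0.25        # fraction of daily energy used by the brain

brain_cost = DAILY_INTAKE_KCAL * BRAIN_SHARE
print(f"Brain cost: {brain_cost:.0f} kcal/day")  # 500 kcal/day

# In a kilocalorie-scarce environment, that fixed overhead is exactly the
# kind of expensive trait selection could act against.
```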
All in all, there is no ‘progress’ to evolution and, as Deacon argues, so-called brain-size increases across evolutionary time disappear once adjustments for body size and functional specialization are taken into account. However, for the ideologue who grasps at anything to push his ideology/worldview, things like this are never enough. “No, that wasn’t a rebuttal! YOU’RE WRONG!!” is not a scientific argument. If one believes in ‘evolutionary progress’, and that brain-size increases are the proof that evolution is ‘progressive’ (i.e., has a ‘direction’), then one must rebut Deacon’s arguments on allometry and the fallacies catalogued in his 1990 paper. Stop equating evolution with ‘progress’. I can’t fault laymen for believing in it; I can, however, fault someone who supposedly enjoys the study of evolution. You’re wrong. The people you cite (who are outside their field of expertise) are wrong.
Evolution is an amazing process. Equating it with ‘progress’ does not allow one to appreciate its beauty. The term does carry baggage, and if I weren’t so used to it I would use descent with modification, which is the phrase Darwin used. Nevertheless, progressionists will hide out in whatever safehold they can to push an ideology that is not based on science.
(Also read Rethinking Mammalian Brain Evolution by Terrence Deacon. I will go more in depth on these three articles in the future.)
I came across a video on YouTube last night by the geneticist and science writer Steve Jones, Emeritus Professor of Genetics at University College London. That pedigree makes the views on testosterone he expresses in the video all the more troubling, coming from a man of his caliber and knowledge.
At the very beginning of the video, titled Testosterone and Crime: What Can Genes Tell Us About Behavior?, Jones says: “But in fact, there are genes—there is a gene—for crime, which causes nearly all the crime, and is widely used and we understand a great deal about it. It’s a chemical gene it produces a particular chemical, which we understand in detail is the chemical testosterone. Testosterone—we all have it but some of us have rather more than others—testosterone is of course a gene that is made—switched on by the Y chromosome and makes males male. Women have a small amount but only a small amount and as they get older … Now testosterone is a dangerous, dangerous thing to have. I don’t recommend it, those of you who have it, don’t get it. And if you’ve got some, don’t get any more.” What bullshit! The man holds a genetics Ph.D.—proof that knowledge and educational attainment do not stop you from saying dumb, untrue things.
“I don’t know that this character does it, but certainly plenty of bodybuilders inject steroids—testosterone—into themselves. They damage themselves severely. Their life expectancy goes down strikingly. They die for all those male reasons. They die from violence, they die from suicide, they die from car accidents, they die from heart disease, all those things are true of males. … But even if you look at males and females in general, there is kind of a depressing picture for half of the room, I’m not sure which half.” Jones then talks about how men die at a much higher rate than women for a slew of reasons. His logic: men have far more testosterone than women; testosterone is said to cause violence, aggression, heart disease, risk-taking, etc.; therefore testosterone is the reason men die more than women and commit more violence than women. This is horrible logic—from a geneticist, no less!
“Men actually—less expectedly perhaps—are much less good at dealing with parasites and infectious disease than women are. And that’s because testosterone—the male hormone—suppresses the immune system. Now the immune system fights off the parasites and we don’t do nearly as well.” There is actually some empirical data for his argument here. Back in 2013, it was shown that testosterone, gene expression, and the immune system were linked: men with higher levels of testosterone showed higher expression of a set of genes called Module 52 and weaker immune responses to vaccination. Testosterone also does exert immune-suppressing effects, “increasing the severity of malaria, leishmaniasis, amebiasis, and tuberculosis, while at the same time supporting the clearance of toxoplasmosis (Bernin & Lotter, 2014; Nhamoyebonde & Leslie, 2014)” (Giefing-Kroll et al, 2015). Trigunaite, Dimo, and Jorgensen (2015) review the suppressive effects of testosterone on the immune system and how it down-regulates “the systemic immune response by cell type specific effects in the context of immunological disorders.”
The effects of testosterone replacement therapy (TRT) on the immune system have not been looked into, though TRT has a positive effect on elderly men (Osterberg, Bernie, and Ramasamy, 2014). However, Braude, Tang-Martinez, and Taylor (1999) challenge the conventional wisdom that testosterone is an immuno-depressor. This is Jones’ only claim that is not outright wrong; there is data out there for both positions (though I think that Braude, Tang-Martinez, and Taylor, 1999 make a solid argument against the testosterone-causes-immuno-suppression hypothesis).
Then Jones says one of the dumbest things I’ve ever heard: “And men, of course, are murdered much more than women. And who murders them—of course—other men. … Men murder at a much higher rate than women. … And that effect is striking—that effect is true worldwide—all over the world men, testosterone, murder at 10 times the rate of women. … So it’s a universal, it’s a biological universal, it’s clearly due to testosterone. There’s no question. The evidence is absolutely clear. So it’s a genetic phenomenon, it’s a gene for crime.” Should I be nice here and assume that whatever ‘gene’ he’s proposing that ‘causes’ testosterone production actually causes the crime? Or should I take what he said at face value—that testosterone is a literal gene that causes crime? I think I’ll go with the second one.
“It’s certainly genetic, it’s also environmental. And you can’t disentangle it. You can change part of it—the environment—you can’t change the other part—the genes. And I always find it kind of odd that the public is so interested in the bit you can’t change—the genes—and is so uninterested in the bit you can—the environment.” This is wrong. Not all of it, but most of it. I don’t think that people are more interested in genes and toss aside environment—especially for testosterone. Because, as I documented yesterday, hereditarians assume that since testosterone has a heritability of around .6 then it must be mostly genetic in nature. This is wrong. As Jones said, the environment affects testosterone production too (though he didn’t go into the mechanisms).
The Left goes to the environment side—change the environment, change hormone production (this is true)—whereas the Right goes to the genes side—can’t change genes and environment is a product of genes so nothing can be done. (Oversimplified, don’t crucify me.) Both are wrong. Strong genetic determinism (gene G almost always leads to the development of trait T; that is, G increases the probability of T, and the probability of T given G is 95 percent or greater) doesn’t make sense, because a large majority of traits are moderately or weakly determined by genetics (Resnik and Vorhaus, 2006).
In sum, Jones is clueless about testosterone. He only really said one thing that is not outright wrong (and even that is questionable). Testosterone doesn’t cause crime, and it doesn’t cause men to murder more. The press has gotten all of these views into people’s heads because they want to demonize men—and the hormone that is largely responsible for male-ness. It’s incredible that this guy is a geneticist, science writer, and professor of genetics and still calls testosterone a ‘gene’ that is responsible for ‘most of the crime’ committed. Anyone who has been reading this blog for the past year or so, since I began revising many of my main views, knows how wrong this is. People really need to get a clue on testosterone and stop spreading bullshit. I know that I’ll have to keep correcting misconceptions on testosterone for a good long time (like with r/K theory), but I enjoy writing about both things so it’s not too big a deal. I just wish people would educate themselves on basic physiology so that train wrecks like Jones’ video don’t get made.
No, Black Women Do Not Have Higher Testosterone than White Women (And More On Hereditarian Claims on Racial Testosterone Differences)
It has been over a year since I wrote the article Black Women and Testosterone, and I really regret it. Yes, I did believe that black women had higher levels of testosterone than white women due to one flimsy study and another article on pregnant black women. I then wised up to the truth about testosterone and aggression/crime/race/sex and revised the articles (like I have done with r/K selection theory). However, after I revised my views on the supposed differences in testosterone between black men/white men and black women/white women, people still cite the article, disregarding the disclaimer at the top of the article. I quoted Mazur (2016), who writes (emphasis mine):
The pattern [high testosterone] is not seen among teenage boys or among females.
Honor cultures are cast as male affairs, but with T data in hand for both sexes, it is worth exploring whether or not a similar pattern exists among women. Mean T was calculated as a function of age for the four combinations of race and education used in Table 1 but now for women. All plots show T declining with age, from about 35 ng/dL in the 20–29 age group to about 20 ng/dL among women 60 years and older. The four plots essentially overlap without discernible differences among them. Given the high skew of T among adult females, both raw and ln-transformed values were analyzed with similar results. There is no indication of inordinately high T among young black women with low education.
In the present study, at least, the sexes differ because the very high T seen among young black men with low education does not occur among young black women with low education.
This is very clear… Mazur (2016) analyzed the NHANES 2011-2012 data, and this is what he found. I understand that most HBD bloggers believe otherwise; well, like a lot of their strong assertions (which I have rebutted myself), they’re wrong. They don’t get it. They do not understand the hormone.
The reason why I’m finally writing this (which is long overdue) is that I saw a referral from this website today: https://www.minds.com/RedPillTV who writes about the aforementioned black women and testosterone article:
It is known that blacks have the highest levels of testosterone out of the major races of humanity. However, what’s not known is that black women have higher rates than white women. The same evolutionary factors that make it possible for black men to have high testosterone make it possible for women as well.
…..No. It seems that people just scroll past the bolded and italicized disclaimer at the top, go straight to the (now defunct) article, and attempt to prove their assertion that black women have higher testosterone than white women with an article I have stated I no longer believe, having provided the rationale/data for my current position. This shows that people have their own biases: no matter what an author writes about how his views have changed due to good arguments/data, they will still attempt to use the old article to prove their assertion.
I’ve written at length that testosterone does not cause 1) aggression, 2) crime, or 3) prostate cancer. People are scared of testosterone mostly due to the media fervor over any story that may have a hint of ‘toxic masculinity’. They (most alt-righters) are scared of it because of Lynn/Rushton/Templer/Kanazawa bullshit on the hormone. Richard Lynn doesn’t know what he’s talking about on testosterone. No, Europeans did not need lower levels of aggression in the cold; Africans didn’t need higher levels of aggression (relative to Europeans) to survive in the tropics. Lynn theorized that supposed testosterone differences between the races are “the physiological basis in males of the racial differences in sexual drive which form the core of the different r/K reproduction strategies documented by J.P. Rushton” (Lynn, 1990: 1203). The races, on average, do not differ in testosterone, as I have extensively documented. So hereditarians like Lynn and others need to look for other reasons to explain blacks’ higher rate of sexual activity.
Rushton’s views on testosterone and the supposed r/K continuum have been summarily rebutted by me. These psychologists’ views on the hormone (whose production they don’t understand, nor the true reality of the differences between the races) are why people are afraid of testosterone. No, testosterone is not some ‘master switch’ as Rushton (1999) asserts. Rushton asserts that racial differences in temperament are mediated by the hormone testosterone. He further dives into this assertion, stating: “Testosterone level correlates with temperament, self-concept, aggression, altruism, crime, and sexuality, in women as well as in men (Harris, Rushton, Hampson, & Jackson, 1996).” It may ‘correlate’ with aggression and crime, but as I have documented, it causes neither.
The aggression/testosterone correlation is only .08 (Archer, Graham-Kevan, and Davies, 2005). Furthermore, the diurnal variation in crime does not line up with the diurnal variation in testosterone: testosterone levels are highest at around 8 am and drop thereafter, while crime peaks at 10 pm for adults and 3 pm for kids, with rises at 8 am and 12 pm (not surprisingly, kids go in to school around 8 am, go to recess at 12, and leave at 3).
If you’ve read as much Rushton as I have, you’ll notice that he begins to sound like a broken record when talking about certain things. One of the most telling is Rushton’s repeated assertion that blacks average 3-19 percent higher testosterone than whites. The 3 percent number comes from Ellis and Nyborg (1992) and the 19 percent number comes from Ross et al (1986) (which, as Rushton should know, dropped to 13 percent after adjustments for confounding). These are the only studies that hereditarians ever cite for the claim that blacks average higher testosterone than whites. That seems a bit fishy to me: citing a 30-year-old study along with a 25-year-old study (with such huge variation: 3-19 percent!!) as ‘proof’ that blacks average such higher levels of testosterone in comparison to whites.
Ross et al (1986) is one of the most important studies to rebut for this hereditarian claim that testosterone causes all of these maladies in black American populations. Ross et al (1986) propose that higher levels of the hormone lead to the higher rates of prostate cancer in black American populations. However, meta-analyses do not show this (Zagars et al, 1998; Sridhar et al, 2010).
Rushton et al’s assertions rest largely on this supposed testosterone difference between the races and how it supposedly leads to higher rates of crime, prostate cancer, aggression, and violence. However, the truth of the matter is, this is all just hereditarian bullshit. Larger analyses—as I have extensively documented—do not show this trend. And even accepting the claim that blacks have, say, 19 percent higher levels of testosterone than whites, it still would not explain the supposed prostate cancer rates between the races (Stattin et al, 2003; Michaud, Billups, and Partin, 2015). Even if blacks had 19 percent higher testosterone than whites, it would not explain higher levels of crime nor aggression, given the hilariously low correlation of .08 (Archer, Graham-Kevan, and Davies, 2005).
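To put that .08 correlation in perspective: the share of variance in aggression that testosterone could statistically account for is the square of the correlation. A quick sketch (my own illustration, not from any of the cited papers):

```python
# The proportion of variance one variable statistically "explains" in
# another is the square of their correlation coefficient (r^2).
def variance_explained(r: float) -> float:
    return r * r

# Testosterone-aggression correlation reported by Archer et al. (2005):
print(f"{variance_explained(0.08):.2%}")  # 0.64%
```

In other words, a correlation of .08 means testosterone could account for well under 1 percent of the variance in aggression, which is why hanging racial crime gaps on it is absurd.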
Finally, I have a few words for Michael Hart and his (albeit sparse) claims on testosterone in his 2007 book Understanding Human History.
Hart (2007) writes:
(Many of these differences in sexual behavior may be a consequence of the fact that blacks, on average, have higher levels of testosterone than whites.7) (pg. 127)
And….. footnote number 7 is…. surprisingly (not): 7) Ross, R., et al. (1986). Not going to waste my time on this one, again. I’ve pointed out numerous flaws in the study. (I will eventually review the whole thing.)
It seems unlikely, though, that the higher testosterone level in blacks — which is largely genetic in origin — has no effect on their sexual behavior (pg. 128; emphasis mine)
This is bullshit. People see the moderately high heritability of testosterone (.60; Harris, Vernon, and Boomsma, 1998) and jump right to the “It’s genetics!!!” canard without even understanding its production in the body (it is a cholesterol-based hormone which is indirectly controlled by DNA; there are no ‘genes for’ testosterone). Here are the steps: 1) DNA codes for mRNA; 2) mRNA codes for the synthesis of an enzyme in the cytoplasm; 3) when testosterone is needed, luteinizing hormone stimulates the production of a second messenger in the cell; 4) this second messenger activates the enzyme; 5) the enzyme then converts cholesterol to testosterone.
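The steps above can be sketched as a toy pipeline (my own illustration; the function names are invented for clarity and are not from any biology library). The point it makes is the same one in the text: the DNA only specifies the enzyme, and no testosterone is produced unless the hormonal signal arrives.

```python
# Toy model of the regulatory cascade: DNA does not encode testosterone
# directly; it encodes an enzyme, and a signal (luteinizing hormone, LH)
# decides when that enzyme converts cholesterol into testosterone.

def transcribe(dna: str) -> str:
    # Step 1: DNA codes for mRNA
    return f"mRNA({dna})"

def translate(mrna: str) -> str:
    # Step 2: mRNA codes for the synthesis of an enzyme in the cytoplasm
    return f"enzyme({mrna})"

def cell_response(dna: str, luteinizing_hormone: bool, cholesterol: bool = True):
    enzyme = translate(transcribe(dna))
    # Steps 3-4: LH triggers a second messenger that activates the enzyme
    enzyme_active = luteinizing_hormone
    # Step 5: only the activated enzyme converts cholesterol to testosterone
    return "testosterone" if (enzyme_active and cholesterol) else None

# Without the LH signal the enzyme sits idle and no testosterone is made,
# no matter what the DNA says:
print(cell_response("steroidogenic gene", luteinizing_hormone=False))  # None
print(cell_response("steroidogenic gene", luteinizing_hormone=True))   # testosterone
```

A crude cartoon, obviously, but it captures why a .60 heritability for testosterone does not mean the hormone’s level is fixed by genes: the output depends on a signal the environment can modulate.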
I have documented numerous lines of evidence showing that testosterone is extremely sensitive to environmental factors (Mazur and Booth, 1998; Mazur, 2016), and due to the homeodynamic physiology we have acquired due to ever-changing environments (Richardson, 2017), this allows our hormones to up- or down-regulate depending on what occurs in the environment. The quote from Hart is bullshit; he doesn’t know what he’s talking about.
For females in Siberia, the disadvantages of failing to find a man who would provide for her and her children during their childhood were much greater than they were in tropical climates, and females who were not careful to do so were much less likely to pass on their genes. Furthermore, because females in harsh climates were so demanding on this point, males who seemed unlikely to provide the needed assistance found it hard to find mates. In other words, there was a marked sexual selection against such males. Such selection could result, for example, in the peoples living in northerly climates gradually evolving lower levels of testosterone than the peoples living in subSaharan Africa. (pg. 131)
This is a bullshit just-so story. Africans in Africa have lower levels of testosterone than Western men (Campbell, O’Rourke, and Lipson, 2003; Lukas, Campbell, and Ellison, 2004; Campbell, Gray, and Ellison, 2006).
Note also that a difference in testosterone level frequently affects not only the sexual behavior of a young male, but also his aggressiveness.
No, it does not (Archer, Graham-Kevan, and Davies, 2005).
Thankfully, that’s all he wrote about testosterone. There is so much bullshit out there. Though people who seek out the truth will learn that there are no racial differences in testosterone, that it does not cause crime/aggression/prostate cancer, and that these claims are just hereditarian bullshit.
The evidence I have amassed and the arguments I have given point to a few things: 1) the races do not differ in testosterone, or the difference is small and negligible; 2) testosterone does not cause crime; 3) testosterone does not cause aggression; 4) black women do not have higher levels of testosterone than white women; 5) high levels of testosterone do not cause prostate cancer; and 6) even allowing a 19 percent black/white difference would not make hereditarian claims hold true.
So for anyone who comes across my old articles on testosterone and sex/race, do a bit more reading of my newer material here to see my new viewpoints/arguments. DO NOT cite those articles as proof for your claims of higher testosterone levels in black men/women. DO cite the old articles ALONG WITH the new ones to show how and why my views changed, along with the studies that changed them. (Actually understanding the production of testosterone in the body was a huge factor too, which I talk about in Why Testosterone Does Not Cause Crime.)
Job performance is supposedly one measure that validates the construct of IQ tests, since the two are said to correlate so highly (Schmidt et al, 1986). However, there are problems with the methods used to get the high correlations (corrections sometimes double the correlations, and there are questions about the robustness of the studies meta-analyzed); the corrections have to make a number of assumptions; the interpretation of what the supposed IQ and job performance correlations mean is uncertain; and other, non-cognitive factors may also explain differences in job performance. Most surprisingly, intelligence test scores did not predict promotion to senior doctor, and intelligence does not predict careers.
Job performance and IQ
Does IQ really correlate around .5 with job performance, as is so commonly stated? There are a number of problems with citing the commonly used meta-analyses as evidence that IQ does indeed predict job performance.
Richardson and Norgate (2015) show that one should use caution when interpreting the results of IQ and job performance on the basis of numerous criteria. It is important to note that job performance is rated by supervisors, which is, of course, a problem since supervisors tend to be subjective in their ratings. Further, supervisor ratings have low correlations with work performance, while work knowledge has a correlation of around .3 (Richardson and Norgate 2015; Richardson, 2002). So, one of the main things that the correlation hinges upon is strongly subjective.
However, one of the most important things to note here is that the validation of IQ tests relies on correlations with other tests. Compare this with blood alcohol, which is a valid construct: the higher your blood alcohol is, the more alcohol you consumed. There is no such validity for the construct of IQ—except correlations with other tests—which is a huge problem. This goes back to the fact that there is no individual theory of intelligence differences (Deary, 2001: 14) and no neurophysiological theory of g (Jensen, 1998: 257).
So IQ tests don’t have the same construct validity that other models that describe biologic/physiologic functions do; hundreds of studies before the 70s showed low correlations between IQ and job performance; corrections for error make a lot of assumptions; the common claim that the IQ/job performance correlation increases with more complex jobs is not observed in more recent studies; and there is great uncertainty in the interpretation of the IQ and job performance correlation, due to the fact that there is no construct validity to IQ tests. This goes back to the question: What is it that IQ tests test (Richardson, 2002)? Is it the ever-elusive general factor of intelligence? I’m skeptical there.
Richardson (2017) writes:
The committee described the differences as “puzzling and somewhat worrisome.” But they noted how the quality of the data might explain it. For example, the 264 newer studies have much greater numbers of participants, on average (146 versus 75). It was shown how the larger samples produced much lower sampling error and less range restriction, also requiring less correction (with much less possibility of a false boost to observed correlations). And there was no need to devise estimates to cover for missing data. So, even by 1989, these more recent results are indicative of the unreliability of those usually cited. But it is the earlier test results that are still being cited by IQ testers. (pg. 89)
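The committee’s point about sample size can be illustrated with a standard approximation: after Fisher’s z-transformation, the sampling error of a correlation is roughly 1/√(n − 3). A rough sketch, assuming that approximation and the average sample sizes quoted above:

```python
import math

def fisher_se(n: int) -> float:
    # Approximate standard error of a correlation after Fisher's
    # z-transformation: larger samples -> smaller sampling error.
    return 1 / math.sqrt(n - 3)

# Average sample sizes cited by Richardson (2017): 75 participants in
# the older studies vs. 146 in the 264 newer ones.
for n in (75, 146):
    print(n, round(fisher_se(n), 3))
```

Roughly doubling the average sample size shrinks the sampling error by about a third, which is why the newer, larger studies needed far less "correction" and produced the lower, less flattering correlations.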
IQ and job performance correlations are also substantially weaker in other parts of the world, such as the Middle East and China, where motivation and effort explain school and work performance and not cognitive ability (Byington and Felps, 2010). So, again, caution is to be taken when interpreting any IQ and job performance correlation, as well as—most importantly—asserting that higher IQ means better job performance.
In his 2015 book Intelligence in the Flesh, Guy Claxton wrote:
We saw earlier that Google is not impressed by people’s track records of success, but is equally sceptical of high IQs. Laszlo Bock, the senior vice-president in charge of ‘people operations’ – the head of HR – says: ‘For every job the No. 1 thing we look for is general cognitive ability, and it’s not I.Q. It’s learning agility. It’s the ability to process on the fly.‘ Behind the ability to learn quickly lies what Bock calls ‘intellectual humility.’ You have to be able to give up the knowledge and expertise you thought would see you through, and look with fresh eyes. People with a high IQ often have a hard time doing that. They are certainly no better than average at tolerating uncertainty or being able to adopt fresh perspectives.
Now that we know to take caution when speaking about the IQ and job performance correlation, what do IQ tests say about success as a doctor?
Doctors and IQ
Since becoming a doctor is so demanding and takes a lot of time and motivation to complete a doctoral degree, most rightly assume that it takes a higher than average intelligence to acquire these accolades and become a medical doctor. However, reality is more nuanced.
McManus et al (2003) put forth three hypotheses: 1) the achievement argument: A-levels ensure maximum competence in the sciences which are basic to medicine (biology and chemistry); 2) the ability argument: academic success depends mainly on cognitive ability; and 3) the motivation argument: using A-levels is effective because success in university education reflects not only intelligence but motivation and good, consistent study skills.
There is evidence that IQ is irrelevant to becoming a doctor and that it did not predict dropping out of the program, career outcome, number of research publications, or stress, burnout, and satisfaction with taking a career in medicine (McManus et al, 2003). Diplomas, higher academic degrees, and research publications were significantly correlated with personality.
McManus et al (2003) write:
Intelligence did not independently predict dropping off the register, career outcome, or other measures.
Intelligence does not predict careers, thus rejecting the ability argument. A levels predict because they assess achievement, and the structural model shows how past achievements predict future achievement.
And on the causes for dropping out:
All 511 students registered with the General Medical Council, but only 464 were on the 2001 Medical Register. The 47 doctors who left the register (a mean of 11.1 years after qualifying; SD 5.9; range 2-23) had lower A level grades but not lower AH5 scores (table A, bmj.com); see http://www.bmj.com for ROC analysis. Two doctors subsequently returned to the register. Of the remainder, three had died, contact details were available for 35, and no information was available for seven.
So lower intelligence scores were not the cause for dropping out.
McManus et al (2003), however, could not distinguish between the motivation and achievement argument, but falsified the intelligence argument (Hypothesis 2 was falsified, but not 1 and 3).
This was also replicated by McManus et al (2013), where they showed that IQ scores did not predict promotion to senior doctor. A-level scores, yet again, better predicted success as a doctor.
The relationship between IQ and job performance is not as clear-cut as most would like to believe. One of the most important factors there, in my opinion, is the subjectivity of supervisors’ ratings of their workers’ performance. Numerous factors could influence a supervisor’s view of an individual, biasing the rating. Furthermore, the corrected correlations are a problem: more recent analyses show a correlation of .25 (Richardson, 2017: 89).
Perhaps more importantly, two studies show that there is no predictive effect on job performance when it comes to IQ for doctors (McManus et al, 2003; McManus et al, 2013). They show that A-level scores predict success better, with personality variables mediating other relationships—not IQ scores.
The fact of the matter is, the IQ-job performance correlation is on shaky ground, since IQ tests lack construct validity and job performance ratings are based on highly subjective supervisor ratings. Analyses in other parts of the world show that IQ does not predict job performance, while motivation and effort do. IQ does not predict a doctor’s job performance; job performance tests do not prove the validity of IQ tests.
[Edit: I have come across more data on doctors’ IQ. Some studies show that complaints by patients about their doctors are related to infractions. Perry and Crean (2005) show that the average IQ for a doctor is 125. They also state that neurocognitive impairment may be responsible for 63% of all physician-related adverse events. This same observation is also noted in other studies (Pitkanen, Hurn, and Kopelman, 2008; Lauri et al, 2009; Kataria et al, 2014). Also of note is that these papers—to the best of my knowledge—do not explore the role of stress in cognitive decline, though Pitkanen, Hurn, and Kopelman (2008) note that depression, PTSD, amnesia, transient global amnesia, alcoholic brain damage, frontotemporal dementia, dementia, Alzheimer’s disease, vascular dementia, and post-traumatic amnesia (PTA) influence cognitive decline in doctors.
Veena et al (2015) show that 88 percent of medical students had near-average intelligence and put in 6 hours a day of studying, while 10 percent of students had above-average IQ, spent less time studying, but were sincere in their classes.
Veena et al (2015) conclude:
Students with near average IQ work hard in their studies and their academic performance was similar to students with higher IQ. So IQ can’t be made the basis for medical entrance; instead giving weight-age to secondary school results and limiting the number of attempts may shorten the time duration for entry and completion of MBBS degree.
So students with average intelligence work just as hard as (if not harder than) people with above-average IQ and have similar educational achievement. This shows that IQ can’t be the basis for medical school entry.
This is a really interesting matter and I will cover it more in the future. I’ve been wondering for years whether there is data on physician/doctoral malpractice and race, but I have yet to come across any papers on the matter. If anyone knows of any, please leave some citations.]