Humans reach their maximum height around their mid-20s. It is commonly thought that taller people have better life outcomes and are in general healthier, but this belief stems from misconceptions about the human body. In reality, shorter people live longer than taller people. (Manlets of the world should be rejoicing; in case anyone is wondering, I am 5’10”.) This flies in the face of what people think and may be counter-intuitive to some, but the logic—and data—is sound. I will touch on mortality differences between tall and short people and, at the end, talk a bit about shrinking with age (and a study claiming there is little or no decrease in height, which relies on self-reports and is flawed).
One reason the misconception persists that taller people live longer, healthier lives than shorter people is the correlation between height and IQ: people assume the two traits are ‘similar’ in that both become ‘stable’ at adulthood. But one way to explain that relationship is that IQ correlates with height because higher-SES people can afford better food and are thus better nourished. Either way, it is a myth that taller people have lower rates of all-cause mortality.
The truth of the matter is this: smaller bodies live longer lives, and this is seen in both the animal kingdom and humans; larger body size independently increases mortality (Samaras and Elrick, 2002). They discuss numerous lines of evidence, from human to animal studies, showing that smaller bodies have a lower chance of all-cause mortality. One of the proposed reasons is that larger bodies have more cells, which are in turn more exposed to carcinogens, producing higher rates of cancer and thus higher mortality. Samaras (2012) reviews the implications of this in another paper and proposes further causes for the observation: reduced cell damage, lower DNA damage, and lower cancer incidence, with hormonal differences between tall and short people explaining more of the variation between them.
One study found a positive linear correlation between height and cancer mortality. Lee et al (2009) write:
A positive linear association was observed between height and cancer mortality. For each standard deviation greater height, the risk of cancer was increased by 5% (2–8%) and 9% (5–14%) in men and women, respectively.
One study suggests that “variations in adult height (and, by implication, the genetic and other determinants of height) have pleiotropic effects on several major adult-onset diseases” (The Emerging Risk Factors Collaboration, 2012). Taller people are also at greater risk for heart attack (Samaras, 2013). Samaras attributes shorter people’s advantage to factors “including reduced telomere shortening, lower atrial fibrillation, higher heart pumping efficiency, lower DNA damage, lower risk of blood clots, lower left ventricular hypertrophy and superior blood parameters.” Height, though, may be inversely associated with long-term incidence of fatal stroke (Goldbourt and Tanne, 2002). Schmidt et al (2014) conclude: “In conclusion, short stature was a risk factor for ischemic heart disease and premature death, but a protective factor for atrial fibrillation. Stature was not substantially associated with stroke or venous thromboembolism.” Cancer incidence also increases with height (Green et al, 2011). Samaras, Elrick, and Storms (2003) suggest that women live longer than men due to the height difference between them: men are about 8 percent taller than women but have a 7.9 percent lower life expectancy at birth.
Height at mid-life, too, is a predictor of mortality, with shorter people living longer lives (He et al, 2014). There are numerous lines of evidence that shorter people, and shorter ethnic groups, live longer lives. One study on patients undergoing maintenance hemodialysis stated that “height was directly associated with all-cause mortality and with mortality due to cardiovascular events, cancer, and infection” (Daugirdas, 2015; Shapiro et al, 2015). Even childhood height is associated with later prostate cancer (Aarestrup et al, 2015), and men who are both tall and carry more adipose tissue (body fat) are more likely to die younger, with greater height associated with a higher risk of prostate cancer (Perez-Cornago et al, 2017). Short height, on the other hand, is a risk factor for death in hemodialysis patients (Takenaka et al, 2010). So although there are conflicting papers regarding short height and CHD, many reviews show that shorter people have better health outcomes than taller people.
Sohn (2016) writes:
An additional inch increase in height is related to a hazard ratio of death from all causes that is 2.2% higher for men and 2.5% higher for women. The findings are robust to changing survival distributions, and further analyses indicate that the figures are lower bounds. This relationship is mainly driven by the positive relationship between height and development of cancer. An additional inch increase in height is related to a hazard ratio of death from malignant neoplasms that is 7.1% higher for men and 5.7% higher for women.
It has been widely observed that tall individuals live longer or die later than short ones even when age and other socioeconomic conditions are controlled for. Some researchers challenged this position, but their evidence was largely based on selective samples.
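Hazard ratios per inch compound multiplicatively under the proportional-hazards models such studies use, so the per-inch figures imply a noticeably larger gap across a realistic height range. A quick illustrative calculation (the 6-inch gap is my own example, not Sohn’s):

```python
# Sohn's per-inch hazard ratio compounds multiplicatively, so a man
# 6 inches above the reference height has roughly:
hr_per_inch_men = 1.022  # 2.2% higher all-cause hazard per inch (men)
print(round(hr_per_inch_men ** 6, 3))  # ~1.14, i.e. about 14% higher hazard
```

The same compounding applies to the cancer-specific figures, which is why small per-inch percentages add up to substantial differences between the shortest and tallest men.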
Four additional inches of height in post-menopausal women coincided with an increase in all types of cancer risk by 13 percent (Kabat et al, 2013), while taller people also have less efficient lungs (Leon et al, 1995; Smith et al, 2000). Samaras and Storms (1992) write “Men of height 175.3 cm or less lived an average of 4.95 years longer than those of height over 175.3 cm, while men of height 170.2 cm or less lived 7.46 years longer than those of at least 182.9 cm.”
Lastly, regarding height and mortality, Turchin et al (2012) write “We show that frequencies of alleles associated with increased height, both at known loci and genome wide, are systematically elevated in Northern Europeans compared with Southern Europeans.” This makes sense, because Southern European populations live longer (and have fewer maladies) than Northern European populations:
Compared with northern Europeans, shorter southern Europeans had substantially lower death rates from CHD and all causes. Greeks and Italians in Australia live about 4 years longer than the taller host population … (Samaras and Elrick, 2002)
So we have some data that doesn’t follow the trend of taller people living shorter lives due to the maladies their height brings, but most of the data points in the direction that taller people live shorter lives: they have higher rates of cancer, lower heart-pumping efficiency (the heart needs to pump more blood through a bigger body), and so on. It makes logical sense that a shorter body would have fewer maladies: higher heart-pumping efficiency, lower atrial fibrillation, lower DNA damage, and lower risk of blood clotting compared to taller people. So if you’re a normal American man and want to live a good, long life, you’d want to be shorter, rather than taller.
Lastly, do we truly shrink as we age? Steve Hsu has an article on this matter, citing Birrell et al (2005), a longitudinal study in Newcastle, England which began in 1947. The children were measured when full height was expected to be achieved, which is about 22 years of age. They were then followed up at age 50. Birrell et al (2005) write:
Height loss was reported by 57 study members (15%, median height loss: 2.5 cm), with nine reporting height loss of >3.5 cm. However, of the 24 subjects reporting height loss for whom true height loss from age 22 could be calculated, assuming equivalence of heights within 0.5 cm, 7 had gained height, 9 were unchanged and only 8 had lost height. There was a poor correlation between self-reported and true height loss (r=0.28) (Fig. 1).
In this population, self-reported height was off the mark, and it seems like Hsu takes this conclusion further than he should, writing “Apparently people don’t shrink quite as much with age as they think they do.” No no no. This study is not good. We begin shrinking at around age 30:
Men gradually lose an inch between the ages of 30 to 70, and women can lose about two inches. After the age of 80, it’s possible for both men and women to lose another inch.
The conclusion from Hsu on that study is not warranted. To see this, we can look at Sorkin, Muller, and Andres (1999) who write:
For both sexes, height loss began at about age 30 years and accelerated with increasing age. Cumulative height loss from age 30 to 70 years averaged about 3 cm for men and 5 cm for women; by age 80 years, it increased to 5 cm for men and 8 cm for women. This degree of height loss would account for an “artifactual” increase in body mass index of approximately 0.7 kg/m2 for men and 1.6 kg/m2 for women by age 70 years that increases to 1.4 and 2.6 kg/m2, respectively, by age 80 years.
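The “artifactual” BMI increase follows directly from the definition BMI = weight / height²: if weight stays constant while height shrinks, BMI rises with no change in body composition. A quick sketch with a hypothetical 80 kg man (my numbers, chosen only for illustration):

```python
def bmi(weight_kg, height_m):
    # Body mass index: weight divided by height squared
    return weight_kg / height_m ** 2

# Hypothetical 80 kg man: 1.78 m at age 30, ~3 cm shorter by age 70
increase = bmi(80, 1.75) - bmi(80, 1.78)
print(round(increase, 2))  # ~0.87 kg/m^2, the same order as Sorkin et al's figure
```

The exact artifact depends on starting weight and height, which is why Sorkin, Muller, and Andres report cohort averages rather than a single number.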
So, it seems that Hsu’s conclusion is wrong. We do shrink with age, for myriad reasons: the discs between the vertebrae decompress and dehydrate, the aging spine becomes more curved due to loss of bone density, and loss of torso muscle can contribute to a changed posture. Some of this is preventable, but some height decrease will be noticeable for most people. Either way, Hsu doesn’t know what he’s talking about here.
In conclusion, while there is some conflicting data on whether tall or short people have lower all-cause mortality, the data seem to point to shorter people living longer: they have lower atrial fibrillation, higher heart-pumping efficiency, lower DNA damage, lower risk of blood clots (since the blood doesn’t have to travel as far), and superior blood parameters. With the exception of a few diseases, shorter people have a higher quality of life and higher lung efficiency. We do get shorter as we age, though with the right diet we can ameliorate some of those effects (for instance, keeping calcium intake high). There are many reasons why we shrink with age, and the study that Hsu cited isn’t good compared to the other data in the literature on this phenomenon. All in all, shorter people live longer for myriad reasons, and we do shrink as we age, contrary to Steve Hsu’s claims.
On the first Darwin Day for which I wrote an article, I defended Darwin’s words, showing how both Creationists and evolutionists who are themselves evolutionary progressionists twist those words for their own gain. Darwin never wrote in The Descent of Man that the ‘higher races’ would take out the ‘lower races’, but that doesn’t stop Creationists and evolutionists—who I presume have not read one sentence of Darwin’s books—from taking what Darwin meant out of context and attributing to him beliefs he did not hold. This year, though, I am going in a different direction. The Modern Synthesis (MS) has causation in biology wrong. The MS gives the ‘gene’ one of the highest seats in evolutionary biology, with a sort of ‘power’ to direct. Though, as I will show, genes do nothing unless transcribed by the system. Since the MS has causation in biology wrong, we either need to extend or replace it.
To begin, Darwin, without knowledge of genes or other hypothesized units of inheritance, had a theory of inheritance in which things called ‘gemmules’ (what Darwin called heritable molecules) were transmitted to offspring (Choi and Mango, 2014). It’s ironic, because Darwin’s theory of inheritance was one of the more Lamarckian theories of inheritance in his day, and Darwin himself sympathized with the Lamarckian view of evolution—he most definitely did not discard it like modern-day Darwinists do. Darwin suggested that these gemmules circulated in the body and that some were used for the regeneration of some bodily tissues, but most aggregated in the reproductive organs (Jablonka and Lamb, 2015: 23). Further, according to Darwin, gemmules were not always immediately used but could reappear later in life or even be used in future generations. Darwin even said that “inheritance must be looked at as a form of growth” (Darwin, 1883, vol 2, p. 398; quoted by Jablonka and Lamb, 2015: 24).
The crux of the MS is the selfish gene theory of Dawkins (1976, 2006), who writes: “They are in you and me; they created us, body and mind; and their preservation is the ultimate rationale for our existence.” “They”, of course, being genes. The gene has been given a sort of power that it does not have, placed on it by overzealous people quick to jump to conclusions while we have yet to understand what ‘genes’ do. The MS, with the selfish gene theory, is at the forefront of the neo-Darwinist revolution: evolution is gene-centered, with genes playing the starring role in the evolutionary story.
However, numerous researchers reject such simplistic, reductionist viewpoints of evolution, mainly the gene-centered view pushed by the MS. There is no privileged level of causation in biology (though, as I will state later in this article, I think ATP comes close) (Noble, 2016).
Neo-Darwinists, like Richard Dawkins, overstate natural selection’s importance regarding evolution and elevate the gene’s overall importance. In the quote above, where he stated that “they” (genes) “created us, body and mind”, Dawkins is implying that genes are a sort of ‘blueprint’, a ‘plan’ or ‘recipe’ for the form of the organism. But this was addressed by Susan Oyama in her 1985 book The Ontogeny of Information, where she writes on page 77:
“Though a plan implies action, it does not itself act, so if the genes are a blueprint, something else is the constructor-construction worker. Though blueprints are usually contrasted with building materials, the genes are quite easily conceptualized as templates for building tools and materials; once so utilized, of course, they enter the developmental process and influence its course. The point of the blueprint analogy, though, does not seem to be to illuminate developmental processes, but rather to assume them and, in celebrating their regularity, to impute cognitive functions to genes. How these functions are exercised is left unclear in this type of metaphor, except that the genetic plan is seen in some peculiar way to carry itself out, generating all the necessary steps in the necessary sequence. No light is shed on multiple developmental possibilities, species-typical or atypical.”
The genes-as-blueprints canard is one that is heavily used by proponents of the MS. Oyama also writes on page 53: “Just as traditional thought placed biological forms in the mind of God, so modern thought finds many ways of endowing the genes with ultimate formative power, a power bestowed by Nature over countless millennia.” This same sentiment is echoed by developmental systems theorist and psychologist David Moore in his book The Dependent Gene: The Fallacy of “Nature vs. Nurture”, where he writes:
Such contextual dependence renders untenable the simplistic belief that there are coherent, long-lived entities called “genes” that dictate instructions to cellular machinery that merely constructs the body accordingly. The common belief that genes contain context-independent “information”—and so are analogous to “blueprints” or “recipes”—is simply false. (p. 81) (Quoted from Schneider, 2007)
Environmental factors are imperative in determining which protein-coding exons get read from a cistron, when, and how often. So the very concept of a gene depends on the environment and environmental inputs, and thus gene ABC does not code for trait T on its own.
When it comes to epigenetics (defined here as inherited changes in gene expression with no genetic change to the genome), this completely changes how we view evolution.
The underlying nucleotide sequence stays the same, but differences are inherited due to environmental stressors. I’ve stated in the past that these marks are inherited through histone modification and DNA methylation, which alter the chromatin structure of the DNA. Further, such effects would show up in heritability estimates as ‘genetic’ when the cause was environmental in nature (which is yet another reason that heritability estimates are inflated).
DNA methylation, histone modification, and noncoding RNA can all affect the structure of chromatin. As of now, the mechanisms of mitotic epigenetic inheritance aren’t well understood, but advances in the field are coming.
If you want to talk about the P and F1 generations regarding transgenerational epigenetics, you must realize that these changes do not occur in the genome’s sequence; the genome remains the same, but certain genes are expressed differently (as I’m sure you know). miRNA signals, though, can change the DNA methylation patterns in F2 sperm, which are then replicated through meiotic and mitotic cycles (Trerotola et al, 2015).
As for how DNA methylation persists: the (semiconservative) replication of DNA methylation occurs on both strands of the DNA, which become hemimethylated and can then be fully methylated by a maintenance methylase. Chromatin structure thus affects the genetic expression of the eukaryotic genome, which becomes the basis for epigenetic effects. Xist RNA also mediates X-chromosome inactivation. This doesn’t even get into how the microbiome, which has been called ‘the second genome’ (Zhu, Wang, and Li, 2010) and even an ‘organ’ (Clarke et al, 2014; Brown and Hazen, 2015), can also affect gene expression and heritable variation that becomes the target of selection (Maurice, Haiser, and Turnbaugh, 2014; Byrd and Segre, 2015). This shows that gene expression in the F2 and F3 generations is not so simple: factors such as our gut microbiota can affect gene expression, and stressors experienced by parents and grandparents can be passed to future generations, with a chance of becoming part of the heritable variation that natural selection acts on (Jablonka and Lamb, 2015).
The point of the debate with neo-Darwinists is over causation: do genes hold this ‘ultimate formative power’, as people like Dawkins contend? Or are genes nothing but ‘slaves’: passive, not active, causes, as Denis Noble writes in his 2016 book Dance to the Tune of Life? (Noble, 2008 discusses genes and causation as well, again showing that there is no privileged causation; though, getting technical, ATP is up there in the ‘chain’ if you want to get literal. The point is that genes do not have the ‘power’ that the neo-Darwinists think they do; they’re just slaves for the intelligent physiological system.)
When discovering the structure of DNA, Francis Crick famously announced to his drinking companions in a Cambridge tavern that he had discovered ‘the secret of life’. The director of his Institute, Max Perutz, was rather more careful than Crick when he said that DNA was the ‘score of life’. That is more correct: a musical score does nothing until it is played, and DNA does nothing until the system activates it.
Recent experimental work in biological science has deconstructed the idea of a gene, and an important message of this book is that it has thereby dethroned the gene as a uniquely privileged level of causation. As we will see, genes, defined as DNA sequences, are indeed essential, but not in the way in which they are often portrayed. (Noble, 2016: 53)
In a 2017 paper titled Was the Watchmaker Blind? Or Was She One-Eyed?, Noble and Noble (2017) write that organisms and their interacting populations have evolved mechanisms to harness blind stochasticity, thereby generating functional changes to the phenotype so as to better respond to environmental challenges. They put forth a good argument, and it really makes me think, because I’ve been such a staunch critic of the idea that evolution has a ‘direction’ and of the ‘teleological view’ of evolution: “If organisms have agency and, within obvious limits, can choose their lifestyles, and if these lifestyles result in inheritable epigenetic changes, then it follows that organisms can at least partially make choices that can have long-term evolutionary impact.”
Noble and Noble (2017) argue (using Dawkins’ analogy of the Blind Watchmaker) that humans are the only Watchmakers we know of; humans evolved from other organisms; so the ability to become a Watchmaker has itself evolved, and it should be no surprise if other organisms also have directed agency that shapes their evolution. There are several processes, they conclude, that could account for directed evolutionary change: “targeted mutation, gene transposition, epigenetics, cultural change, niche construction and adaptation” (Noble and Noble, 2017). Niche construction, for instance, is heavily pushed by Kevin Laland, author of Darwin’s Unfinished Symphony: How Culture Made the Human Mind, who has written a few papers on it and features it heavily in that book. Either way, these ways in which organisms can, in a sense, direct their own evolution are not covered by the MS.
Though I couldn’t end this article without, of course, discussing Jerry Coyne, who goes absolutely crazy at people pushing to either extend or replace the MS. His most recent article is about Kevin Laland and how he is “at it again” touting “a radically different view of evolution”. It seems as though Coyne has made up his mind that the MS is all there is: he believes it is no problem for our current understanding of evolutionary theory to absorb things such as niche construction, epigenetic inheritance, stochasticity, and even (far more controversially) directed mutations. Coyne has also criticized Noble’s attacks on the MS, though Noble responded to Coyne during a video presentation.
Lastly, Portin and Wilkins (2017) review the history of the gene and go through the different definitions it has been given over the decades. They conclude that they “will propose a definition that we believe comes closer to doing justice to the idea of the “gene,” in light of current knowledge. It makes no reference to “the unit of heredity”—the long-standing sense of the term—because we feel that it is now clear that no such generic universal unit exists.” They write on pages 1361-1362:
A gene is a DNA sequence (whose component segments do not necessarily need to be physically contiguous) that specifies one or more sequence-related RNAs/proteins that are both evoked by GRNs and participate as elements in GRNs, often with indirect effects, or as outputs of GRNs, the latter yielding more direct phenotypic effects. [GRNs are genetic regulatory networks]
This is similar to what Jablonka and Lamb (2015: 17) write:
Although many psychiatrists, biochemists, and other scientists who are not geneticists (yet express themselves with remarkable facility on genetic issues) still use the language of genes as simple causal agents, and promise their audience rapid solutions to all sorts of problems, they are no more than propagandists whose knowledge or motives must be suspect. The geneticists themselves now think and talk (most of the time) in terms of genetic networks composed of tens or hundreds of genes and gene products, which interact with each other and together affect the development of a particular trait. They recognize that whether or not a trait (a sexual preference, for example) develops does not depend, in the majority of cases, on a difference in a single gene. It involves interactions among many genes, many proteins and other types of molecule, and the environment in which an individual develops.
The gene as an active causal agent has been definitively refuted. Genes on their own do nothing at all until they are transcribed by the intelligent physiological system. Noble likens genes to slaves used by the system to carry out processes by and for the system: genes are caused to give their information by and to the system that activates them (Noble, 2011). Noble’s slave metaphor makes much more sense than Dawkins’ selfish metaphor; it shows how genes are a passive, not active, cause, completely upending how the MS views causation in biology. Indeed, Jablonka and Lamb state that one of their problems with Dawkins is that “Dawkins assumes that the gene is the only biological (noncultural) hereditary unit. This simply is not true. There are additional biological inheritance systems, which he does not consider, and these have properties different from those we see in the genetic system. In these systems his distinction between replicator and vehicle is not valid.”
So, both Gould and Dawkins overlooked the inheritance of acquired characters, as Jablonka and Lamb write in their book. They argue that inherited epigenetic variation has had a large effect on the evolution of species, though they admit that evidence for the view is scant. They write on page 145: “If you accept that heritable epigenetic variation is possible, self-evidently some of the variants will have an advantage relative to other variants. Even if all epigenetic variations were blind, this would happen, and it’s very much more likely if we accept that a lot of them are induced and directed.” Not everything that is inherited is genetic.
DNA is found in the cell, and what powers the cell? ATP (adenosine triphosphate). Cells use and store ATP to carry out their functions (Khakh and Burnstock, 2016). Cells produce ATP from ADP and Pi, using exergonic reactions to provide the energy needed for the synthesis, while the hydrolysis of ATP provides the energy needed to drive endergonic reactions. So cells continuously produce more ATP from ADP and Pi to carry out diverse functions across the body. In a way, you can argue that ATP is one of the ultimate causes, since it has to power the cell; but then you can look at all of the reactions that occur before ATP is created and privilege that part of the chain instead, and so on. There will never be some ultimate causation since, as Noble argues in Dance to the Tune of Life, there is no privileged level of causation in biology.
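The exergonic/endergonic coupling described above can be made concrete with standard textbook values (the glucose phosphorylation example is my own illustration, not from the sources cited):

```latex
\begin{align*}
\text{ATP} + \text{H}_2\text{O} &\longrightarrow \text{ADP} + \text{P}_i,
  & \Delta G^{\circ\prime} &\approx -30.5\ \text{kJ/mol} \\
\text{glucose} + \text{P}_i &\longrightarrow \text{glucose-6-phosphate} + \text{H}_2\text{O},
  & \Delta G^{\circ\prime} &\approx +13.8\ \text{kJ/mol} \\
\text{glucose} + \text{ATP} &\longrightarrow \text{glucose-6-phosphate} + \text{ADP},
  & \Delta G^{\circ\prime} &\approx -16.7\ \text{kJ/mol}
\end{align*}
```

Because the summed free-energy change is negative, the otherwise endergonic phosphorylation proceeds; this is the sense in which ATP ‘powers’ the cell.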
In conclusion, evolution, development, and life in general are extremely complex. Paradigms like the selfish gene, a largely reductionist paradigm, do not account for numerous other factors that drive the evolution of species, such as targeted mutation and niche construction. An extended evolutionary synthesis that integrates these phenomena will be better able to describe what drives the evolution of species, and if the directed mutation idea has any weight, it will be interesting to see how and why certain organisms evolved this ability. It’s ironic that the MS is being defended as if it were infallible, as if it can do no wrong and does not need to be extended or replaced by something that incorporates the phenomena brought up in this article.
Either way, a revolution in modern biology is coming, and Darwin would have it no other way. The Modern Synthesis has causation in biology wrong: the gene is not an active agent in evolution; it only does what it is told by the intelligent physiological system. We must therefore take a holistic view of the whole organism rather than reducing it down to ‘the genes’, because there is no privileged level of causation in biology (Noble, 2016).
In 1972 Richard Lewontin, studying the blood groups of different races, came to the conclusion that “Human racial classification is of no social value and is positively destructive of social and human relations. Since such racial classification is now seen to be of virtually no genetic or taxonomic significance either, no justification can be offered for its continuance” (pg 397). He also found that “the difference between populations within a race account for an additional 8.3 percent, so that only 6.3 percent is accounted for by racial classification.” This has led numerous people to conclude, along with Lewontin, that race is ‘of virtually no genetic or taxonomic significance’ and that, because of this, race does not exist.
Lewontin’s main reasoning was that since there is more variation within races than between them (85 percent of differences were within populations while 15 percent was between them), and since the lion’s share of human diversity is distributed within races rather than between them, race is of no genetic or taxonomic use. Lewontin is correct that there is more variation within races than between them, but he is incorrect that this means racial classification ‘is of no social value’, since knowing and understanding the reality of race (even our perceptions of races, whether accurate or not) influences things such as medical outcomes.
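Lewontin-style apportionment can be illustrated at a single biallelic locus: even a sizeable allele-frequency difference between two populations leaves most heterozygosity within them. A minimal sketch (the 0.4/0.6 frequencies are my own illustrative values, not Lewontin’s data):

```python
def fst(p1, p2):
    # Apportion expected heterozygosity at one biallelic locus
    # across two equal-size populations (a Lewontin-style partition)
    p_bar = (p1 + p2) / 2
    h_total = 2 * p_bar * (1 - p_bar)                  # pooled heterozygosity
    h_within = (2*p1*(1-p1) + 2*p2*(1-p2)) / 2         # mean within-population
    return (h_total - h_within) / h_total              # between-group share

# Even a sizeable allele-frequency gap leaves most variation within groups:
print(round(fst(0.4, 0.6), 3))  # 0.04 -> 96% of the variation is within
```

Averaging this kind of partition over many loci is, in essence, how the 85/15-style figures are obtained.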
Like Lewontin, people have cited this paper as evidence against the existence of human races: if most human genetic variation lies within races rather than between them, then (the argument goes) race cannot be of any significance for things such as medical outcomes.
Rosenberg et al (2002) also confirmed and replicated Lewontin’s analysis, showing that within-population genetic variation accounts for 93-95 percent of human genetic variation, while 3 to 5 percent of human genetic variation lies between groups. Philosopher Michael Hardimon (2017) uses these arguments to buttress his point that ‘racialist races’ (as he calls them) do not exist. His criteria being:
(a) The fraction of human genetic diversity between populations must exceed the fraction of diversity within them.
(b) The fraction of human genetic diversity within populations must be small.
(c) The fraction of diversity between populations must be large.
(d) Most genes must be highly differentiated by race.
(e) The variation in genes that underlie obvious physical differences must be typical of the genome in general.
(f) There must be several important genetic differences between races apart from the genetic differences that underlie obvious physical differences.
Note: (b) says that racialist races are genetically racially homogeneous groups; (c)-(f) say that racialist races are distinguished by major biological differences.
Call (a)-(f) the racialist concept of race’s genetic profile. (Hardimon, 2017: 21)
He clearly strawmans the racialist position, but I’ll get into that another day. Hardimon writes about how both of these studies lend credence to his above argument on racialist races (pg 24):
Rosenberg and colleagues also confirm Lewontin’s findings that most genes are not highly differentiated by race and that the variation in genes that underlie obvious physical differences is not typical of the variation of the genome in general. They also suggest that it is not the case that there are many important genetic differences between races apart from the genetic differences that underlie the obvious physical differences. These considerations further buttress the case against the existence of racialist races.
The results of Lewontin’s 1972 study and Rosenberg and colleagues’ 2002 study strongly suggest that it is extremely unlikely that there are many important genetic differences between races apart from the genetic differences that underlie the obvious physical differences.
(Hardimon also writes on page 124 that Rosenberg et al’s 2002 study could also be used as evidence for his populationist concept of race, which I will return to in the future.)
My reason for writing this article, though, is to show that the findings by Lewontin and Rosenberg et al regarding more variation within races than between them are indeed true, despite claims to the contrary. There is one article that people cite as evidence against the conclusions of Lewontin and Rosenberg et al, but it is clear that those citing it have only read the abstract, not the full paper.
Witherspoon et al (2007) write that "sufficient genetic data can permit accurate classification of individuals into populations," which is the claim that those who cite this study rest their contention on. Yet the authors conclude (emphasis mine):
The fact that, given enough genetic data, individuals can be correctly assigned to their populations of origin is compatible with the observation that most human genetic variation is found within populations, not between them. It is also compatible with our finding that, even when the most distinct populations are considered and hundreds of loci are used, individuals are frequently more similar to members of other populations than to members of their own population. Thus, caution should be used when using geographic or genetic ancestry to make inferences about individual phenotypes.
Witherspoon et al (2007) analyzed the three classical races (Europeans, Africans, and East Asians) over thousands of loci and came to the conclusion that, when genetic similarity is measured over thousands of loci, the answer to the question "How often is a pair of individuals from one population genetically more dissimilar than two individuals chosen from two different populations?" is "never."
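That pattern can be illustrated with a small simulation (my own sketch, not theirs: the population sizes, the per-locus frequency difference, and the simple allele-count distance are all hypothetical choices): with few loci, a within-population pair is often more dissimilar than a between-population pair, but as the number of loci grows, that frequency approaches zero.

```python
import random

def genotype(freqs, rng):
    # diploid genotype: count of allele "1" (0, 1, or 2) at each locus
    return [sum(rng.random() < f for _ in range(2)) for f in freqs]

def distance(a, b):
    # simple allele-count mismatch summed over loci
    return sum(abs(x - y) for x, y in zip(a, b))

def omega(n_loci, n=25, delta=0.2, trials=2000, seed=0):
    """Fraction of comparisons where a within-population pair is MORE
    dissimilar than a between-population pair (Witherspoon et al's omega)."""
    rng = random.Random(seed)
    f1 = [0.5 - delta / 2] * n_loci   # population 1 allele frequencies
    f2 = [0.5 + delta / 2] * n_loci   # population 2 differs by delta per locus
    pop1 = [genotype(f1, rng) for _ in range(n)]
    pop2 = [genotype(f2, rng) for _ in range(n)]
    hits = 0
    for _ in range(trials):
        a, b = rng.sample(pop1, 2)                  # within-population pair
        c, d = rng.choice(pop1), rng.choice(pop2)   # between-population pair
        hits += distance(a, b) > distance(c, d)
    return hits / trials

print(omega(10))     # few loci: within-pop pairs are often the more dissimilar ones
print(omega(2000))   # thousands of loci: the answer approaches "never"
```

This is the same logic as the Witherspoon quote above: accurate classification with many loci is fully compatible with most variation lying within populations.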
Hunley, Cabana, and Long (2016: 7) also confirm Lewontin’s analysis, writing “In sum, we concur with Lewontin’s conclusion that Western-based racial classifications have no taxonomic significance, and we hope that this research, which takes into account our current understanding of the structure of human diversity, places his seminal finding on firmer evolutionary footing.” But the claim that “racial classifications have no taxonomic significance” is FALSE.
This is a point that Edwards (2003) rebutted in depth. While he agreed with Lewontin's (1972) analysis that there was more variation within races than between them (which subsequent analyses confirmed), he strongly disagreed with Lewontin's conclusion that race is of no taxonomic significance. Richard Dawkins, too, disagreed with Lewontin, writing in his book The Ancestor's Tale: "Most of the variation among humans can be found within races as well as between them. Only a small admixture of extra variation distinguishes races from each other. That is all correct. What is not correct is the inference that race is therefore a meaningless concept." The fact that there is more variation within races than between them is irrelevant to taxonomic classification. Classifying races by phenotypic differences (morphology and facial features) along with geographic ancestry shows, just by looking at the average phenotype, that race exists, though these concepts make no value-based judgments about anything you can't 'see', such as mental and personality differences between populations.
While some agree with Edwards' analysis of Lewontin's argument about race's taxonomic significance, they don't believe that he successfully refuted Lewontin. For instance, Hardimon (2017: 22-23) writes that Lewontin's argument against the existence of what Hardimon calls 'racialist race' (his strawman quoted above), the argument that the within-race component of genetic variation is greater than the between-race component, "is untouched by Edwards' objections."
Sesardic (2010: 152), though, argues that, "contra Lewontin, the racial classification that is based on a number of genetic differences between populations may well be extremely reliable and robust, despite the fact that any single of those genetic between-population differences remains, in itself, a very poor predictor of racial membership." He also states that the 7 to 10 percent difference between populations "actually refers to the inter-racial portion of variation that is averaged over the separate contributions of a number of individual genetic indicators that were sampled in different studies" (pg 150).
I personally avoid all of this talk about genes/allele frequencies between populations and jump straight to using Hardimon's minimalist race concept, a concept that, according to Hardimon, is "stripped down to its barest bones" while still capturing enough of the racialist concept of race to be considered a race concept.
In sum, variation within races is greater than variation between races, but this says nothing about the reality of race, since races can still be delineated on the basis of the physical features and geographic ancestry peculiar to each group. Using a few indicators (morphology; facial features such as nose, lips, cheekbones, facial structure, and hair; and geographic ancestry), we can group races by these criteria and show that race does indeed exist in a physical, not social, sense, and that these categories are meaningful in a medical context (Hardimon, 2013, 2017). So even though genetic variation is greater within races than between them, this does not mean that there is no taxonomic significance to race, as other authors have argued. Hardimon (2017: 23) agrees, writing (emphasis his) "… Lewontin's data do not preclude the possibility that racial classification might have taxonomic significance, but they do preclude the possibility that racialist races exist."
Hardimon's strawman of the racialist concept notwithstanding (which I will cover in the future), his other three race concepts (minimalist, populationist, and socialrace) are logically sound and stand up to a lot of criticism. Either way, race does exist, and it does not matter that the apportionment of human genetic diversity is greater within races than between them.
Tests of delayed gratification, such as the Marshmallow Experiment, show that those who can better delay their gratification have better life outcomes than those who cannot. The children who succumbed to eating the treat while the researcher was out of the room had worse life outcomes than the children who could wait. The originator of the test chalked this up to cognitive processes, and individual differences in those processes were used to explain differences between children on the task. However, it doesn't seem to be that simple. I did write an article back in December of 2015 on the Marshmallow Experiment and how it was a powerful predictor, but after extensive reading on the subject, my mind has changed. New research shows that social trust has a causal effect on whether one will wait for the reward: if the individual trusted the researcher, he or she was more likely to wait for the second reward; if not, they were more likely to take what was offered in the first place.
The famous Marshmallow Experiment showed that children who could wait with a marshmallow or other treat in front of them while the researcher was out of the room would get an extra treat. The children who could not wait and ate the treat while the researcher was out of the room had worse life outcomes than the children who could wait. This led researchers to the conclusion that the ability to delay gratification depends on 'hot' and 'cool' cognitive processes. According to Walter Mischel, the originator of the study method, the 'cool' system is the thinking one, the cognitive system, which reminds you that you get a reward if you wait, while the 'hot' system is the impulsive system, the one that makes you want the treat now rather than wait for the second one (Metcalfe and Mischel, 1999).
Some of these participants were followed up on decades later, and those who could better delay their gratification had lower BMIs (Schlam et al, 2014), scored better on the SAT (Shoda, Mischel, and Peake, 1990) and on other measures of educational attainment (Ayduk et al, 2000), along with other positive life outcomes. So it seems that placing a single treat in front of a child, whether a marshmallow or another sweet, would predict that child's success, BMI, educational attainment, and future prospects, and that underlying cognitive processes differ between individuals and lead to differences between them. But it's not that simple.
After Mischel's studies in the 50s, 60s, and 70s on delayed gratification and positive and negative life outcomes (e.g., Mischel, 1958; Mischel, 1961; Mischel, Ebbesen, and Zeiss, 1972), it was pretty much an accepted fact that delaying gratification was somehow related to these positive life outcomes, while the negative outcomes were partly a result of the inability to delay gratification. Then a study was published showing that the ability to delay gratification depends on social trust (Michaelson et al, 2013).
Using Amazon's Mechanical Turk, participants (n = 78; 34 male, 39 female, and 5 who preferred not to state their gender) completed online surveys and read three vignettes in order—trustworthy, untrustworthy, and neutral—rating each individual on a scale of 1 to 7 for likeability, trustworthiness, and likelihood of sharing. Michaelson et al (2013) write:
Next, participants completed intertemporal choice questions (as in Kirby and Maraković, 1996), which varied in immediate reward values ($15–83), delayed reward values ($30–85), and length of delays (10–75 days). Each question was modified to mention an individual from one of the vignettes [e.g., “If (trustworthy individual) offered you $40 now or $65 in 70 days, which would you choose?”]. Participants completed 63 questions in total, with 21 different questions that occurred once with each vignette, interleaved in a single fixed but random order for all participants. The 21 choices were classified into 7 ranks (using the classification system from Kirby and Maraković, 1996), where higher ranks should yield higher likelihood of delaying, allowing a rough estimation of a subject’s willingness to delay using a small number of trials. Rewards were hypothetical, given that hypothetical and real rewards elicit equivalent behaviors (Madden et al., 2003) and brain activity (Bickel et al., 2009), and were preceded by instructions asking participants to consider each choice as if they would actually receive the option selected. Participants took as much time as they needed to complete the procedures.
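The intertemporal choice questions above follow the standard delay-discounting setup: a reward A delayed by D days is valued hyperbolically as V = A/(1 + kD), and each question has an "indifference k" above which a person takes the immediate reward. A minimal sketch of that logic (the $40/$65/70-day numbers mirror the example question in the quote; the particular k values are hypothetical):

```python
# Hyperbolic discounting (the model behind Kirby and Marakovic's procedure):
# a delayed reward A at delay D days has present value V = A / (1 + k*D),
# where k is the individual's discount rate (higher k = more impulsive).

def indifference_k(immediate, delayed, delay_days):
    """The k at which the two options are equally attractive:
    immediate = delayed / (1 + k * delay), solved for k."""
    return (delayed / immediate - 1) / delay_days

def prefers_delayed(immediate, delayed, delay_days, k):
    """Does a person with discount rate k wait for the delayed reward?"""
    return delayed / (1 + k * delay_days) > immediate

# "$40 now or $65 in 70 days?" -- the example question from the quote
k_star = indifference_k(40, 65, 70)
print(round(k_star, 4))                      # the indifference point
print(prefers_delayed(40, 65, 70, 0.005))    # low k (patient): waits -> True
print(prefers_delayed(40, 65, 70, 0.05))     # high k (impulsive): takes $40 now -> False
```

This is why questions can be ranked: the higher a question's indifference k (bigger relative delayed reward, shorter delay), the more subjects will choose to wait, so the rank at which a subject switches to the immediate option roughly locates their discount rate.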
When trust was manipulated within subjects in the absence of a reward, it influenced their ability to delay gratification, as did how trustworthy the individual in the vignette was perceived to be. This suggests that, in the absence of rewards, reducing social trust lessens the ability to delay gratification. Due to issues with the trust manipulation arising from the fixed order in which the vignettes were read, they ran a second experiment using the same model with 172 participants (65 males, 63 females, and 13 who chose not to state their gender). In this experiment, computer-generated trustworthy, untrustworthy, and neutral faces were presented to the participants. They were paid only $0.25, though it has been shown that compensation affects only turnout, not data quality (Buhrmester, Kwang, and Gosling, 2011).
In the second experiment, each participant read a vignette with a particular face (trustworthy, untrustworthy, or neutral) attached to it, faces which had been used in previous studies on this matter. They found that when trust was manipulated between subjects in the absence of a reward, it influenced the participants' willingness to delay gratification, as did the perceived trustworthiness of the face.
Michaelson et al (2013) conclude that the ability to delay gratification is predicated on social trust, and present an alternative hypothesis for all of these positive and negative life outcomes:
Social factors suggest intriguing alternative interpretations of prior findings on delay of gratification, and suggest new directions for intervention. For example, the struggles of certain populations, such as addicts, criminals, and youth, might reflect their reduced ability to trust that rewards will be delivered as promised. Such variations in trust might reflect experience (e.g., children have little control over whether parents will provide a promised toy) and predisposition (e.g., with genetic variations predicting trust; Krueger et al., 2012). Children show little change in their ability to delay gratification across the 2–5 years age range (Beck et al., 2011), despite dramatic improvements in self-control, indicating that other factors must be at work. The fact that delay of gratification at 4-years predicts successful outcomes years or decades later (Casey et al., 2011; Shoda et al., 1990) might reflect the importance of delaying gratification in other processes, or the importance of individual differences in trust from an early age (e.g., Kidd et al., 2012).
Another paper (small n, n = 28) showed that children's perception of the researcher's reliability predicted delay of gratification (Kidd, Palmeri, and Aslin, 2012). They suggest that "children's wait-times reflected reasoned beliefs about whether waiting would ultimately pay off." So these tasks "may not only reflect differences in self-control abilities, but also beliefs about the stability of the world." Children who had reliable interactions with the researcher waited about 4 times as long—12 minutes compared to 3 minutes. Sean Last over at the Alternative Hypothesis uses these types of tasks (and other correlates) to show that blacks have lower self-control than whites, citing studies showing correlations between IQ and delay of gratification. Though, as can be seen, alternative explanations for these phenomena make just as much sense, and the new experimental evidence on social trust and delaying gratification adds a new wrinkle to this debate. (He also briefly discusses 'reasons' why blacks have lower self-control, implicating the MAOA alleles. However, I have already discussed this, and blaming 'genes for' violence/self-control doesn't make sense.)
Michaelson and Munakata (2016) show more evidence for the relationship between social trust and delaying gratification. When children (age 4 years, 5 months; n = 34) observed an adult behaving trustworthily, they were able to wait for the reward; when they observed the adult behaving untrustworthily, they ate the treat, reasoning that they were unlikely to get the second marshmallow even if they waited for the adult to return. Ma et al (2018) replicated these findings in a sample of 150 Chinese children aged 3 to 5 years old, concluding that "there is more to delay of gratification than cognitive capacity" and suggesting "that there are individual differences in whether children consider sacrificing for a future outcome to be worth the risk." Those who had higher levels of generalized trust waited longer, even when age and level of executive functioning were controlled for.
Romer et al (2010) show that people who are more willing to take risks may be more likely to engage in risky behavior, which gives them first-hand insight into why delaying gratification and having patience leads to longer-term rewards. This is a case of social learning. However, people who are more willing to take risks have higher IQs than people who are not. Since SES was not controlled for, it is possible that the ability to delay gratification in this study came down to SES, with lower-class people taking the money while higher-class people deferred. Raine et al (2002) showed a relationship between sensation seeking in 3-year-old children from Mauritius and their 'cognitive scores' at age 11. As usual, parental occupation was used as the measure of 'social class', and since SES does not capture all aspects of social class, controlling for the variable does not seem to be too useful. A confound here could be that children from higher classes have more chances to sensation-seek, which may cause higher IQ scores through cognitive enrichment. Either way, you can't say that IQ 'causes' delayed gratification, since there are more robust predictors, such as social trust.
Though the relationship is there, what to make of it? Since exploring more leads to, theoretically, more chances to get things wrong and take risks by being impulsive, those who are more open to experience will have had more chances to learn from their impulsivity, and so learn to delay gratification through social learning and being more open. ‘IQ’ correlating with it, in my opinion, doesn’t matter too much; it just shows that there is a social learning component to delaying gratification.
In conclusion, there are alternative ways to look at the results from Marshmallow Experiments, such as social trust and social learning (being impulsive and seeing what occurs when an impulsive act is carried out may teach one, in the future, to wait for something). Though these experiments are new and the research is young, it's very promising that there are other explanations for delayed gratification that don't have to do with differences in 'cognitive ability' but depend on social trust, the trust between the child and the researcher. If the child sees that the researcher is trustworthy, then the child will wait for the reward; if they see that the researcher is not trustworthy, they will take the marshmallow or whatnot, believing the researcher won't stick to their word. (I am also currently reading Mischel's 2014 book The Marshmallow Test: Mastering Self-Control and will have more thoughts on this in the future.)
Steroids get a bad reputation. It largely comes from movies, people's anecdotal experiences, and stories repeated from the media and other forms of entertainment, usually stating that there is a phenomenon called 'roid rage' that makes steroid users violent. Is this true? Are any myths about steroids true, such as a shrunken penis? Are there ways to offset the side effects? Steroids and their derivatives are off-topic for this blog, but it needs to be stressed that there are a few myths that get pushed about steroids and what they do to behavior, their supposed effects on aggression, and so forth.
With about 3 million AAS (anabolic-androgenic steroid) (ab)users in America (El Osta et al, 2016), knowing the actual effects of steroids and of similar drugs such as Winny (a cutting agent) would be valuable, since, of course, athletes are their main users.
Testicular atrophy
This is, perhaps, the most popular myth. The actual myth is that AAS use causes the penis to shrink (which is not true). In reality, AAS use causes the testicles to shrink: the Leydig cells decrease natural testosterone production, which reduces the firmness and shape of the testicles, resulting in a loss of size.
In one study, a questionnaire was given to 772 gay men at 6 gyms between January and February (consider the bias here: 'Resolutioners' are more likely to be at the gym during those months). 15.2 percent of the men had used AAS, with 11.7 percent of them injecting within the past 12 months. HIV-positive men were more likely to have used than HIV-negative men (probably due to prescriptions). Fifty-one percent of users reported testicular atrophy, and they were more likely to report suicidal thoughts (Bolding, Sherr, and Elford, 2002). They conclude:
One in seven gay men surveyed in central London gyms in 2000 said they had used steroids in the previous 12 months. HIV positive men were more likely to have used steroids than other men, some therapeutically. Side effects were reported widely and steroid use was associated with having had suicidal thoughts and feeling depressed, although cause and effect could not be established. Our findings suggest that steroid use among gay men may have serious consequences for both physical and mental health.
Of course, those who (ab)use substances have more psychological problems than those who do not. Another study of 203 bodybuilders found that 8 percent (n = 17) reported testicular atrophy (for what it's worth, it was an internet survey of drug utilization) (Perry et al, 2005). In another study, 88 percent of individuals who abused the drug complained of side effects of AAS use, with about 40 percent describing testicular atrophy (Evans, 1997), while testicular atrophy was noted in about 50 percent of cases in a small sample (n = 24) (Darke et al, 2016).
One study of steroid users found that only 17 percent of them had normal sperm levels (Torres-Calleja et al, 2001); this is because exogenous testosterone results in the atrophy of germinal cells, which causes a decrease in spermatogenesis. Continued AAS (ab)use may thus lead to infertility later in life. Knuth et al (1989) studied 41 bodybuilders with an average age of 26.7, who reported a huge laundry list of different steroids taken over their lives. Nineteen of the men were still using steroids at the time of the investigation (group I), 12 of them (group II) had stopped taking steroids 3 months prior, and 10 of them (group III) had stopped 4 to 24 months prior.
They found that only 5 of them had sperm counts below the average of 20 million sperm per ml, while 24 of the bodybuilders showed these symptoms. No difference between groups I and II was noticed, and group III (the group that had abstained from use for 4 to 24 months) largely had sperm levels in the normal range. So the data suggest that, even in cases of severely decreased sensitivity to androgens due to AAS (ab)use, spermatogenesis may still continue normally in some men, even when high levels of androgens are administered exogenously, and that even after prolonged use it is possible for sperm levels to return to the normal range (Knuth et al, 1989).
Aggression and crime
Now it's time for the fun part and my reason for writing this article. Does (ab)using steroids cause someone to go into an uncontrollable rage, a la the Incredible Hulk, when they inject themselves with testosterone? The media has latched onto the minds of many, with films and TV shows depicting the insanely aggressive man who has been (ab)using AAS. But how true is this? A few papers have suggested that this phenomenon is real (Conacher and Workman, 1989; Pope and Katz, 1994), but how true is it on its own, since AAS (ab)users are known to use multiple substances?
Conacher and Workman (1989) is a case study of one man with no criminal history who began taking AAS three months before he murdered his wife; they conclude that AAS can be said to be a 'personality changer'. Piacentino et al (2015) conclude in their review of steroid use and psychopathology in athletes that "AAS use in athletes is associated with mood and anxiety disturbances, as well as reckless behavior, in some predisposed individuals, who are likely to develop various types of psychopathology after long-term exposure to these substances. There is a lack of studies investigating whether the preexistence of psychopathology is likely to induce AAS consumption, but the bulk of available data, combined with animal data, point to the development of specific psycho-pathology, increased aggressiveness, mood destabilization, eating behavior abnormalities, and psychosis after AAS abuse/dependence." I would add that, since most steroid abusers are polysubstance abusers (they use multiple illicit drugs on top of AAS), it may not be the steroids per se that cause crime or aggressive behavior, but the other drugs the steroid (ab)user is also taking. And there is evidence for this assertion.
Lundholm et al (2015) showed just that: that AAS (ab)use was confounded with other substances used while the individual in question was also taking AAS. They write:
“We found a strong association between self-reported lifetime AAS use and violent offending in a population-based sample of more than 10,000 men aged 20-47 years. However, the association decreased substantially and lost statistical significance after adjusting for other substance abuse. This supports the notion that AAS use in the general population occurs as a component of polysubstance abuse, but argues against its purported role as a primary risk factor for interpersonal violence. Further, adjusting for potential individual-level confounders initially attenuated the association, but did not contribute to any substantial change after controlling for polysubstance abuse.“
Even the National Institutes of Health (NIH) writes: "In summary, the extent to which steroid abuse contributes to violence and behavioral disorders is unknown. As with the health complications of steroid abuse, the prevalence of extreme cases of violence and behavioral disorders seems to be low, but it may be underreported or underrecognized." We don't know whether steroids cause aggression or whether more aggressive athletes are more likely to use the substance (Freberg, 2009: 424). Clearly, the claims of steroids causing aggressive behavior and crime are overblown, and there has yet to be a scientific consensus on the matter. A great documentary on the subject is Bigger, Stronger, Faster, which goes through the myths of testosterone while chronicling the use of illicit drugs in bodybuilding and powerlifting.
This was even seen in one study where men were administered supraphysiologic doses of testosterone to test its effects on muscle size and strength, since that had never been done; no changes in mood or behavior occurred (Bhasin et al, 1996). Furthermore, injecting individuals with supraphysiological doses of testosterone of 200 to 600 mg per week does not cause heightened anger or aggression (Tricker et al, 1996; O'Connor et al, 2002). Testosterone is one of the most abused AAS around, and if heightened levels of T don't cause crime, and testosterone being higher this week compared to last is not a trigger for crime, we can safely disregard claims of 'roid rage', since such episodes coincide with other drug use (polysubstance abuse). So since we know that supraphysiologic doses of testosterone cause neither crime nor aggression, we can say that AAS use on its own (and even with other drugs) does not cause crime or heightened aggression; aggression elevates testosterone secretion, testosterone doesn't elevate aggression.
One review also suggests that medical issues associated with AAS (ab)use are exaggerated to deter their use by athletes (Hoffman and Ratamess, 2006). They conclude that “Existing data suggest that in certain circumstances the medical risk associated with anabolic steroid use may have been somewhat exaggerated, possibly to dissuade use in athletes.”
Racial differences in steroid use
Irving et al (2002) found that 2.1 percent of whites had used steroids within the past 12 months to gain muscle, compared with 7.6 percent of blacks, 6.1 percent of 'Hispanics', a whopping 14.1 percent of Hmong, 7.9 percent of 'other Asians', 3.1 percent of 'Native Americans', and 11.3 percent of mixed-race people. Middle schoolers were more likely to use than high schoolers, and people from lower SES brackets were more likely to use than people from higher SES brackets.
Stilger and Yesalis (1999: 134) write (emphasis mine):
Of the 873 high school football players participating in the study, 54 (6.3%) reported having used or currently using AAS. Caucasians represented 85% of all subjects in the survey. Nine percent were African-American while the remainder (6%) consisted of Hispanics, Asian, and other. Of the AAS users, 74% were Caucasian, 13% African American, 7% Hispanic, and 3% Asian, χ2(4, N = 854) = 4.203, p = .38. The study also indicated that minorities are twice as likely to use AAS as opposed to Caucasians. Cross tabulated results indicate that 11.2% of all minorities use/used AAS as opposed to 6.5% of all Caucasians (data not displayed).
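The chi-square statistic reported in that passage comes from a standard test of independence on a users-by-race contingency table. A sketch of the computation with purely hypothetical counts (the paper does not display its raw table, so these numbers are invented only to show the mechanics):

```python
# Pearson chi-square test of independence for a contingency table.
# The counts below are hypothetical, chosen only to illustrate the math;
# Stilger and Yesalis report chi2(4, N = 854) = 4.203, p = .38.

def chi_square(table):
    """Pearson chi-square statistic; table is a list of rows of counts."""
    row_tot = [sum(r) for r in table]
    col_tot = [sum(c) for c in zip(*table)]
    n = sum(row_tot)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = row_tot[i] * col_tot[j] / n
            stat += (obs - expected) ** 2 / expected
    return stat

# rows: AAS user / non-user; columns: racial groups (hypothetical counts)
table = [[40, 7, 4, 3],
         [686, 70, 30, 14]]
stat = chi_square(table)
# df = (2-1)*(4-1) = 3; the 5% critical value for df=3 is 7.815, so a
# statistic below that would mean no significant association by race
print(round(stat, 3))
```

The p = .38 in the quote means exactly this: the observed differences in use rates across groups were small enough, given the sample, to be consistent with no association.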
One study even had whites and blacks reporting the same abuse of steroids in their sample (n = 10,850 ‘Caucasians’ and n = 1,883 black Americans), with blacks reporting, too, lower levels of other drug abuse (Green et al, 2001). Studies indeed find higher rates of drug use for white Americans than other ethnies, in college (McCabe et al, 2007). Black Americans also frequently underreport and lie about their drug use (Ledgerwood et al, 2008; Lu et al, 2001). Blacks are also more likely to go to the ER after abusing drugs than whites (Drug Abuse Warning Network, 2011). Bauman and Ennett (1994) also found that blacks underreport drug use whereas whites overreport.
So can we really believe the black athletes who state that they do not (ab)use AAS? No, we cannot. Blacks lie about any and all drug use, so believing that they are being truthful about AAS (ab)use in this specific instance is not called for.
Like with all things you use and abuse, there are always side effects. Though the media furor one hears regarding AAS and testosterone (ab)use is largely blown out of proportion. The risks associated with AAS (ab)use are 'transient' and will subside after one discontinues using the drugs. Blacks seem to take more AAS than whites, even if they lie about any and all drug use. (And other races, too, seem to use at higher rates than whites.) Steroid use does not seem to be 'bad' if one knows what one is doing and is under a doctor's supervision, but even then, if you want to know the truth about AAS, you need to watch the documentary Bigger, Stronger, Faster. I chalk this up to the media demonizing testosterone itself, along with 'toxic masculinity' and the 'toxic jock effect' (Miller, 2009; Miller, 2011). If you dig into the literature yourself, you'll see there is scant evidence for AAS and testosterone (ab)use causing crime, but that doesn't stop papers like those two by Miller from talking about the effects of 'toxic jocks', in effect deriding masculine men and, with them, the hormone that makes men men: testosterone. If taken safely, there is nothing wrong with AAS/testosterone use.
(Note: Doctor’s supervision only, etc)
Back in April of last year, I wrote an article on the problems with facial 'reconstructions' and why, for instance, Mitochondrial Eve probably didn't look like that. Recently, 'reconstructions' of Nariokotome boy and Neanderthals have appeared. The 'reconstructors', of course, have no idea what the soft tissue of the individual looked like, so they must infer and use guesswork to fill in parts of the phenotype when they do these 'reconstructions'.
My reason for writing this is the recent 'reconstruction' of Nefertiti. I have seen alt-righters proclaim 'The Ancient Egyptians were white!' while blacks state 'Why are they whitewashing our history!' Both of these claims are dumb, and they're also wrong. Then you have articles, purely driven by ideology, that proclaim 'Facial Reconstruction Reveals Queen Nefertiti Was White!'
This article is garbage. It first makes the claim that King Tut's DNA came back as similar to that of 70 percent of Western European men. There are a lot of problems with this claim: 1) the company IGENEA inferred his Y chromosome from a TV special; the data were not available for analysis; and 2) a haplogroup does not equal a race. This is very simple.
Now that the White race has decisively reclaimed the Ancient Egyptians
The white race has never ‘claimed’ the Ancient Egyptians; this is just like the Arthur Kemp fantasy that the Ancient Egyptians were Nordic, that any and all civilizations throughout history were started and maintained by whites, and that these civilizations fell because of racial mixing, etc. These fantasies have no basis in reality, and now we will have to deal with people pushing facial ‘reconstructions’ that are largely just art and don’t actually show us what the individual in question looked like (more on this below).
Stephan (2003) goes through the four primary fallacies of facial reconstruction. Fallacy 1) that we can predict soft tissue from the skull and thereby create recognizable faces. This is highly flawed. Soft tissue fossilization is rare—rare enough to be irrelevant, especially when discussing what ancient humans looked like. So, and perhaps this is the most important criticism of ‘reconstructions’, any and all soft tissue features you see on these ‘reconstructions’ are largely guesswork and artistic flair from the ‘reconstructor’; facial ‘reconstructions’ are mostly art. The ‘reconstructor’ has to make a ton of leaps and assumptions while creating his sculpture because he does not have the relevant information to ensure it is truly accurate, which is a large blow to facial ‘reconstructions’.
And, perhaps most importantly for people who push ‘reconstructions’ of ancient hominins: “The decomposition of the soft tissue parts of paleoanthropological beings makes it impossible for the detail of their actual soft tissue face morphology and variability to be known, as well as the variability of the relationship between the hard and the soft tissue” and “Hence any facial “reconstructions” of earlier hominids are likely to be misleading.”
As an example for the inaccuracy of these ‘reconstructions’, see this image from Wikipedia:
The left is the ‘reconstruction’ while the right is how the woman looked. She had distinct lips which could not be recreated because, again, soft tissue is missing.
2) That faces are ‘reconstructed’ from skulls. This fallacy follows directly from fallacy 1—the assumption that ‘reconstructors’ can accurately predict what the former soft tissue looked like. Faces are not ‘reconstructed’ from skulls; it’s largely guesswork. Stephan notes that individuals who see and hear about facial ‘reconstructions’ say things like “wow, you have to be pretty smart/knowledgeable to be able to do such a complex task”, and he suggests that facial ‘approximation’ may be a better term, since it doesn’t imply that the face was ‘reconstructed’ from the skull.
3) That the discipline is ‘credible’ because it is ‘partly science’. Stephan argues that calling it a science is ‘misleading’, writing (pg 196): “The fact that several of the commonly used subjective guidelines when scientifically evaluated have been found to be inaccurate, … strongly emphasizes the point that traditional facial approximation methods are not scientific, for if they were scientific and their error known previously surely these methods would have been abandoned or improved upon.”
And finally, 4) that we know ‘reconstructions’ work because they have been successful in forensic investigations. This is not a strong claim, because other factors could influence the discovery, such as media coverage, chance, or ‘contextual information’. So these forensic cases cannot be pointed to when one attempts to argue for the utility of facial ‘reconstructions’. There also seems to be a lot of publication bias in this literature, with many scientists not publishing data that, for instance, did not show the ‘face’ of the individual in question. It is largely guesswork. “The inconsistency in reports combined with confounding factors influencing casework success suggest that much caution should be employed when gauging facial approximation success based on reported practitioner success and the success of individual forensic cases” (Stephan, 2003: 196).
So: 1) the main point here is that the soft tissue work is ‘just a guess’, and the prediction methods employed to guess the soft tissue have not been tested; 2) faces are not ‘reconstructed’ from skulls; 3) it’s hardly ‘science’, and more a form of art, due to the guesses and large assumptions poured into the ‘technique’; and 4) ‘reconstructions’ don’t ‘work’ because they help us ‘find’ people—there is a lot more going on in those cases, and an identification made from a ‘reconstruction’ was probably due to chance. Hayes (2015) also writes: “Their actual ability to meaningfully represent either an individual or a museum collection is questionable, as facial reconstructions created for display and published within academic journals show an enduring preference for applying invalidated methods.”
Stephan and Henneberg (2001) write: “It is concluded that it is rare for facial approximations to be sufficiently accurate to allow identification of a target individual above chance. Since 403 incorrect identifications were made out of 592 identification scenarios, facial approximation should be considered to be a highly inaccurate and unreliable forensic technique. These results suggest that facial approximations are not very useful in excluding individuals to whom skeletal remains may not belong.”
Wilkinson (2010) largely agrees, but states that artistic interpretation is required “particularly for the morphology of the ears and mouth, and with the skin for an ageing adult”, and that “The greatest accuracy is possible when information is available from preserved soft tissue, from a portrait, or from a pathological condition or healed injury.” But she also writes: “… the laboratory studies of the Manchester method suggest that facial reconstruction can reproduce a sufficient likeness to allow recognition by a close friend or family member.”
So to sum up: 1) There are insufficient data on tissue thickness. This becomes guesswork, is up to artistic ‘interpretation’, and is therefore subjective to whichever artist does the ‘reconstruction’. Cartilage, skin, and fat do not fossilize (only in very rare cases, and I am not aware of any human cases). 2) There is a lack of methodological standardization. There is no single method for ‘guesstimating’ things like tissue thickness and other soft tissue that does not fossilize. 3) They are very subjective! For instance, if the artist has any preconceived idea of what the individual ‘may have’ looked like, those presuppositions can go from his head into his ‘reconstruction’, thus biasing it toward a look he believes is true. I think this is the case for Mitochondrial Eve; just because she lived in Africa doesn’t mean she looked like any modern Africans alive today.
I would claim that these ‘reconstructions’ are not science; they’re just the artwork of people who have assumptions about what people used to look like (for instance, with Nefertiti) and who make those assumptions part of their artwork, their ‘reconstruction’. So if you are going to view the special that will air tomorrow night, keep in the back of your mind that the ‘reconstruction’ has tons of unvalidated assumptions thrown into it. So, no, Nefertiti wasn’t ‘white’ and Nefertiti wasn’t ‘whitewashed’; since these ‘methods’ are highly flawed and highly subjective, we should not state “this is what Nefertiti used to look like”, because it is probably very, very far from the truth. Do not fall for facial ‘reconstructions’.
We’re only one month into the new year and I may have already come across the most ridiculous paper I’ll read all year. The paper is titled Knowledge of resting heart rate mediates the relationship between intelligence and the heartbeat counting task. The authors state that ‘intelligence’ is related to the heartbeat counting task (HCT), and that the HCT is employed as a measure of interoception—the ‘sense’ of the body’s internal state and its physiological changes (Craig, 2003; Garfinkel et al, 2015).
Though, the use of the HCT as a measure of interoception is controversial (Phillips et al, 1999; Brener and Ring, 2016), mostly because it is influenced by prior knowledge of one’s resting heart rate. The concept of interoception has been around since 1906, with the term first appearing in scientific journals in 1942 (Ceunen, Vlaeyen, and Van Diest, 2016). It’s also interesting to note that interoceptive accuracy is altered in schizophrenics (who had an average IQ of 101.83; Ardizzi et al, 2016).
Murphy et al (2018) undertook two studies: study 1 demonstrated an association between ‘intelligence’ and HCT performance, whereas study 2 demonstrated that this relationship is mediated by one’s knowledge of one’s resting heart rate. I will briefly describe the two studies and then discuss the flaws (and how silly the idea is that ‘intelligence’ is partly responsible for this relationship).
In both studies, they measured IQ using the Wechsler intelligence scales, specifically the matrix reasoning and vocabulary subtests. Study 1 had 94 participants (60 female, 33 male, and one ‘non-binary’; gotta always be that guy, eh?). In this study, there was a small but positive correlation between HCT performance and IQ (r = .261).
In study 2, they sought to again replicate the relationship between HCT and IQ, determine how specific the relationship is, and determine whether higher IQ results in more accurate knowledge of one’s heart rate which would then improve their scores. They had 134 participants for this task and to minimize false readings they were asked to forgo caffeine consumption about six hours prior to the test.
As a control task, participants were asked to complete a timing accuracy test (TAT) in which they counted seconds instead of heartbeats. The correlation between HCT performance and IQ was, again, small but positive (r = .211), with IQ also being negatively correlated with the inaccuracy of resting heart rate estimates (r = -.363), while timing accuracy was not associated with the inaccuracy of heart rate estimates, IQ, or HCT. In the end, knowledge of average resting heart rate completely mediated the relationship between IQ and HCT.
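The logic of full mediation can be illustrated with a toy simulation (entirely hypothetical numbers, not the study’s data): if HCT scores are driven only by knowledge of one’s heart rate, and that knowledge happens to correlate with IQ, then IQ will show a raw correlation with HCT that vanishes once knowledge is controlled for. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical model: IQ -> knowledge of resting heart rate -> HCT score.
iq = rng.standard_normal(n)
knowledge = 0.4 * iq + rng.standard_normal(n)   # knowledge correlates with IQ
hct = 0.6 * knowledge + rng.standard_normal(n)  # HCT driven only by knowledge

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

def residualize(y, x):
    # Residuals of y after regressing out x (with an intercept).
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

raw = corr(iq, hct)                       # nonzero: IQ "predicts" HCT
partial = corr(residualize(iq, knowledge),
               residualize(hct, knowledge))  # ~0: fully mediated

print(f"raw r(IQ, HCT) = {raw:.3f}")
print(f"partial r, controlling for knowledge = {partial:.3f}")
```

The simulated raw correlation is nonzero even though IQ has no direct effect on HCT in the model, which is exactly the pattern a full-mediation result describes.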
This study replicated another by Mash et al (2017), who showed that their “results suggest that cognitive ability moderates the effect of age on IA differently in autism and typical development.” The new paper extends this analysis, showing that the relationship is fully mediated by prior knowledge of average resting heart rate, and this is key to know.
This is simple: if one has prior knowledge of their average resting heart rate and their fitness did not change from the time they were aware of their average resting heart rate then when they engage in the HCT they will then have a better chance of counting the number of beats in that time frame. This is very simple! There are also other, easier, ways to estimate your heart rate without doing all of that counting.
Heart rate (HR) is a strong predictor of cardiorespiratory fitness. So it would follow that those who have prior knowledge of their HR are more fitness-savvy (the authors don’t say much about the subjects; if more data appear when the paper is published in a journal, I will revisit this). So Murphy et al (2018) showed that knowledge of resting heart rate (RHR) was correlated—however weakly—with IQ, while IQ was negatively correlated with the inaccuracy of RHR estimates. The second study thus replicated the first and showed that the relationship was specific (HCT correlated with IQ, not any other measure).
The main thing to keep in mind here is that those who had prior knowledge of their RHR scored better on the task; I’d bet that even those with low IQs would score higher on this test if they, too, had prior knowledge of their HR. That’s really what this comes down to: if you have prior knowledge of your RHR, and your physiological state stays largely the same (body fat, muscle mass, fitness, etc.), then when asked to estimate your heart rate—say, using the radial pulse method (placing two fingers on the inside of the wrist, just below the thumb)—you will, since you have prior knowledge, guess your RHR more accurately, regardless of whether your IQ is low or high.
I also question the use of the HCT as a measure of interoception, in line with Brener and Ring (2016: 2), who write that “participants with knowledge about heart rate may generate accurate counting scores without detecting any heartbeat sensations.” But let’s say the HCT is a good measure of interoception; it still remains to be seen whether manipulating subjects’ HRs would change the accuracy of the analyses. Other studies have shown that when tested after exercise, people underestimate their HR (Brener and Ring, 2016: 2). This, too, is simple. To estimate your max HR, subtract your age from 220. So if you’re 20 years old, your max HR would be about 200, and after exercise, if you know your body and how much energy you have expended, then you will be able to estimate better with this knowledge.
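The arithmetic behind these estimates is trivial, which is the point: no interoceptive ‘sense’ is required, only prior knowledge. A minimal sketch of the two rules of thumb mentioned above (the 15-second pulse count and the 220-minus-age formula; both are rough population-level approximations, not exact physiology):

```python
def resting_hr_from_pulse(beats_in_15s: int) -> int:
    """Estimate resting heart rate (BPM) from a 15-second radial pulse count."""
    return beats_in_15s * 4

def max_hr_estimate(age: int) -> int:
    """The common '220 minus age' rule of thumb for maximum heart rate."""
    return 220 - age

print(resting_hr_from_pulse(18))  # 18 beats in 15 s -> 72 BPM
print(max_hr_estimate(20))        # a 20-year-old -> ~200 BPM
```

Anyone who knows these formulas can produce a plausible heartbeat count without sensing a single beat, which is exactly the objection to the HCT.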
Though, you would of course need prior knowledge of these effects and of these simple formulas. So, in my opinion, this study only shows that people with a higher ‘IQ’ (more access to the cultural tools needed to score higher on IQ tests; Richardson, 2002) are also more likely to go to the doctor for checkups and more likely to exercise, and are thus more likely to have prior knowledge of their HR and to score better than those with lower IQs, who have less access to the facilities and health assessments that would give them that prior knowledge (higher-IQ people being more likely to be middle class and to have more access to these types of facilities).
I personally don’t think the HCT is a good measure of interoception, due to the criticisms brought up above. The average HR for a healthy person is between 50-75 BPM, depending on age, sex, and activity, along with other physiological components (Davidovic et al, 2013). So, for example, my average HR is 74 BPM (I checked mine last week, averaging three morning readings: one morning was 73, another was 75, and the third was 74). If I had this prior knowledge before undergoing the so-called HCT interoception task, I would be better equipped to score well than someone who does not have the same prior knowledge of his own heart rate.
In conclusion, in line with Brener and Ring (2016), I don’t think the HCT is a good measure of interoception, and even if it were, the fact that prior knowledge fully mediates the relationship means that, in my opinion, other measures of interoception need to be found and studied. Someone with prior knowledge of their HR can and would skew things—no matter their ‘IQ’—since they know that, say, their HR is in the average range (50-75 BPM). I find this study kind of ridiculous, and it’s in the running for the most ridiculous thing I will read all year. Prior knowledge of these variables (both RHR and PEHR, post-exercise heart rate) will have you score better. And since IQ is largely a measure of social class, testing skills found in a narrow social class, it’s no wonder that HCT and IQ correlate—however weakly—as Murphy et al (2018) found; the reason the relationship exists is obvious, especially if you have some prior knowledge of this field.
I say that if you are over-weight and wish to lose weight, then you should eat less. You should keep eating less until you achieve your desired weight, and then stick to that level of calorific intake.
Why only talk about calories and assume that they do the same things once ingested into the body? See Feinman and Fine (2004) to see how and why that is fallacious. This was actually studied. Contestants on the show The Biggest Loser were followed after they lost a considerable amount of weight. They followed the same old mantra: eat less, and move more. Because if you decrease what is coming in, and expend more energy then you will lose weight. Thermodynamics, energy in and out, right? That should put one into a negative energy balance and they should lose weight if they persist with the diet. And they did. However, what is going on with the metabolism of the people who lost all of this weight, and is this effect more noticeable for people who lost more weight in comparison to others?
Fothergill et al (2016) found that a persistent metabolic slowdown occurred after weight loss, the average being about 600 kcal per day. This is what the conventional dieting advice gets you: a slowed metabolism, leaving you having to eat fewer kcal than someone who was never obese. This is why the ‘eat less, move more’—the ‘CI/CO’—advice is horribly flawed and does not work!
He seems to understand that exercise alone does not induce weight loss, but it’s this supposed combo that’s supposed to be effective, a kind of one-two punch: you only need to eat less and move more if you want to lose weight! This is horribly flawed. He then shows a few tables from a paper he authored with another researcher back in 1974 (Bhanji and Thompson, 1974).
Say you take 30 people who weigh the same, have the same amount of body fat, and are the same height; they eat the exact same macronutrient composition, with the exact same foods, at the same caloric surplus; and, at the end of, say, 3 months, you will get a different array of weight gained/stalled/lost. Wow. Something like this would certainly disprove the CI/CO myth. Aamodt (2016: 138-139) describes a study by Bouchard and Tremblay (1997; warning: twin study), writing:
When identical twins, men in their early 20s, were fed a thousand extra calories per day for about three months, each pair showed similar weight gains. In contrast, the gains varied across twin pairs, ranging from nine to twenty-nine pounds, even though the calorie imbalance was the same for everyone. An individual’s genes also influence weight loss. When another group of identical twins burned a thousand more calories per day through exercise while maintaining a stable food intake in an inpatient facility, their losses ranged from two to eighteen pounds and were even more similar within twin pairs than weight gain.
Take a moment to think about that. Some people’s bodies resist weight loss so well that burning an extra thousand calories a day for three months, without eating more, leads them to lose only two pounds. The “weight loss is just math” crowd we met in the last chapter needs to look at what happens when their math is applied to living people. (We know what usually happens: they accuse the poor dieter of cheating, whether or not it’s true.) If cutting 3,500 calories equals one pound of weight loss, then everyone on the twins’ exercise protocol should have lost twenty-four pounds, but not a single participant lost that much. The average weight loss was only eleven pounds, and the individual variation was huge. Such differences can result from genetic influences on resting metabolism, which varies 10 to 15 percent between people, or from differences in the gut. Because the thousand-calorie energy imbalance was the same in both the gain and loss experiments, this twin research also illustrates that it’s easier to gain weight than to lose it.
That’s weird. If a calorie were truly a calorie then, at least in the way CI/COers word things, everyone should have had the same or similar weight loss, not an average weight loss less than half of what the kcal math predicts. That is a shot against the CI/CO theory. Yet more evidence against it comes from the Vermont Prison Experiment (see Salans et al, 1971). In this experiment, subjects were given up to 10,000 kcal per day and, like in the study described previously, they all gained differing amounts of weight. Wow, almost as if individuals are different and the simplistic caloric math of the CI/COers doesn’t hold up against real-life situations.
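The naive ‘3,500 kcal per pound’ bookkeeping from the quoted passage is easy to write down, and that is exactly why its failure is informative. A minimal sketch (assuming ‘about three months’ means roughly 84 days, which is what makes the passage’s twenty-four-pound figure come out):

```python
KCAL_PER_POUND = 3500  # the conventional (and, as argued above, flawed) constant

def naive_weight_loss(kcal_deficit_per_day: float, days: int) -> float:
    """Predicted weight loss in pounds under simple CI/CO bookkeeping."""
    return kcal_deficit_per_day * days / KCAL_PER_POUND

# Twin exercise protocol from the Aamodt passage:
# ~1,000 kcal/day deficit for ~3 months (taken here as 84 days).
predicted = naive_weight_loss(1000, 84)
print(f"CI/CO prediction: {predicted:.0f} lb")  # 24 lb for every participant
# Observed: average ~11 lb, range 2-18 lb -- nobody hit the prediction.
```

The model predicts the same twenty-four pounds for everyone; the observed average was eleven, with huge individual variation, which is the mismatch the whole argument turns on.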
The First Law of Thermodynamics always holds; it’s just irrelevant to human physiology. (Watch Gary Taubes take down this mythconception too; not a typo.) Think about an individual who decreases total caloric intake from 1500 kcal per day to 1200 kcal per day over a certain period of time. The body is then forced to drop its metabolism to match the caloric intake—the metabolic system decreases expenditure when it senses lower intake—and for this reason the First Law is not violated here; it’s irrelevant. The same thing occurred with the Biggest Loser contestants, because they followed the CI/CO paradigm of ‘eat less and move more’.
Processed food is not bad in itself, but it is hard to monitor what is in it, and it is probably best avoided if you wish to lose weight, that is, it should not be a large part of your habitual intake.
If you’re trying to lose weight you should most definitely avoid processed foods and carbohydrates.
In general, all foods are good for you, in moderation. There are circumstances when you may have to eat what is available, even if it is not the best basis for a permanent sustained diet.
I only contest the ‘all foods are good for you’ part. Moderation, yes. But in the hedonistic world we live in today, with a constant bombardment of advertisements, there is no such thing as ‘moderation’. Finally, again, willpower is irrelevant to obesity.
I’d like to know the individual weight gains in Thompson’s study; I bet they’d follow both what occurred in the study described by Aamodt and the study by Sims et al. The point is, human physiological systems are too complicated to break weight loss down to only the number of calories you eat, without thinking about what and how you eat. What is lost in all of this is WHEN to eat. People continuously talk about what to eat, where to eat, how to eat, and who to eat with, but no one ever seriously discusses WHEN to eat. What I mean is that people are constantly stuffing their faces all day, constantly spiking their insulin, which then causes obesity.
The fatal blow for the CI/CO theory is that people do not gain or lose weight at the same rate (I’d add matched for height, overall weight, muscle mass and body fat, too) as seen above in the papers cited. Why people still think that the human body and its physiology is so simple is beyond me.
Hedonism, along with an overconsumption of calories (from processed carbohydrates), is why we’re so fat right now in the Western world, and the only way to reverse the trend is to tell the truth about human weight loss and how and why we get fat. CI/CO clearly does not work and is based on false premises, no matter how much people attempt to save it. It’s highly flawed and assumes that the human body is so ‘simple’ as to not ‘care’ about the quality of a macronutrient or where it came from.
People seem to be confused about the definition of the term ‘phrenology’. Many people think that the mere measuring of skulls can be called ‘phrenology’. This is a very confused view to hold.
Phrenology is the study of the shape and size of the skull—from bumps on the skull (Simpson, 2005) to differently sized areas of the brain—and the drawing of conclusions about one’s character and psychology from these measures. Franz Gall—the father of phrenology—believed that by measuring one’s skull, its bumps and so on, he could make accurate predictions about one’s character and mental psychology. Gall also proposed a theory of mind and brain (Eling, Finger, and Whitaker, 2017). The usefulness of phrenology aside, Gall—a neuroanatomist and physiologist—contributed significantly to our understanding of the brain. His doctrine rested on six principles:
1. The brain is the organ of the mind.
2. The mind is composed of multiple, distinct, innate faculties.
3. Because they are distinct, each faculty must have a separate seat or “organ” in the brain.
4. The size of an organ, other things being equal, is a measure of its power.
5. The shape of the brain is determined by the development of the various organs.
6. As the skull takes its shape from the brain, the surface of the skull can be read as an accurate index of psychological aptitudes and tendencies.
Gall’s work, though, was foundational to our understanding of the brain; he was a pioneer in studying its inner workings. Phrenologists ‘phrenologized’ by running the tips of their fingers or their hands along the top of one’s head (Gall liked using his palms). Here is an account of one individual reminiscing on this (around 1870):
The fellow proceeded to measure my head from the forehead to the back, and from one ear to the other, and then he pressed his hands upon the protuberances carefully and called them by name. He felt my pulse, looked carefully at my complexion and defined it, and then retired to make his calculations in order to reveal my destiny. I awaited his return with some anxiety, for I really attached some importance to what his statement would be; for I had been told that he had great success in that sort of work and that his conclusion would be valuable to me. Directly he returned with a piece of paper in his hand, and his statement was short. It was to the effect that my head was of the tenth magnitude with phyloprogenitiveness morbidly developed; that the essential faculties of mentality were singularly deficient; that my contour antagonized all the established rules of phrenology, and that upon the whole I was better adapted to the quietude of rural life rather than to the habit of letters. Then the boys clapped their hands and laughed lustily, but there was nothing of laughter in it for me. In fact, I took seriously what Rutherford had said and thought the fellow meant it all. He showed me a phrenological bust, with the faculties all located and labeled, representing a perfect human head, and mine did not look like that one. I had never dreamed that the size or shape of the head had anything to do with a boy’s endowments or his ability to accomplish results, to say nothing of his quality and texture of brain matter. I went to my shack rather dejected. I took a small hand- mirror and looked carefully at my head, ran my hands over it and realized that it did not resemble, in any sense, the bust that I had observed. The more I thought of the affair the worse I felt. If my head was defective there was no remedy, and what could I do? 
The next day I quietly went to the library and carefully looked at the heads of pictures of Webster, Clay, Calhoun, Napoleon, Alexander Stephens and various other great men. Their pictures were all there in histories.
This—what I would call skull/brain-size fetishizing—is still evident today, with people thinking that raw size matters for cognitive ability (Rushton and Ankney, 2007; Rushton and Ankney, 2009), though I have compiled numerous data showing that people can have smaller brains and IQs in the normal range, implying that large brains are not needed for high IQs (Skoyles, 1999). It is also one of Deacon’s (1990) fallacies, the “bigger-is-smarter” fallacy. Just because you observe skull sizes, brain size differences, structural brain differences, etc., does not mean you’re a phrenologist. You’re making simple and verifiable claims, not outrageous ones like those made by phrenologists.
What did they get right? Well, phrenologists stated that the most-used parts of the brain would become bigger, which, of course, was vindicated by modern research—specifically in London cab drivers (Maguire, Frackowiak, and Frith, 1997; Woollett and Maguire, 2011).
It seems that phrenologists got a few things right, but their theories were largely wrong. Still, those who bash the ‘science’ of phrenology should realize that it was one of the first brain ‘sciences’; I believe it deserves at least some respect, since it furthered our understanding of the brain and some phrenologists were partly right.
People see the avatar I use—three skulls: one Mongoloid, one Negroid, and one Caucasoid—and automatically make the leap that I’m a phrenologist based on that picture alone. To these people, even stating that races/individuals/ethnies have different skull and brain sizes is phrenology. No, it isn’t. Words have definitions. Observing size differences between the brains of, say, individuals or ethnies doesn’t mean you’re making any value judgment about the character or mental aptitude of an individual based on the size of their skull/brain. The same goes for noting structural differences between brains, like saying “the PFC is larger in this brain but the OFC is larger in that one”; no value judgment is being made there either, and if that’s what you take from the mere statement that individuals and groups have differently sized skulls, brains, and parts of the brain, then I don’t know what to tell you. Stating that one brain weighs 1200 g and another 1400 g is not phrenology. Stating that one brain is 1450 cc while another is 1000 cc is not phrenology. For it to be phrenology, I would have to outright state that differences in the size of certain areas of the brain, or of brains as a whole, cause differences in character or mental faculties. I am not saying that.
A team of neuroscientists recently (as in last month, January 2018) tested, in the “most exhaustive way possible”, the claim from phrenological ‘research’ “that measuring the contour of the head provides a reliable method for inferring mental capacities” and concluded that there was “no evidence for this claim” (Jones, Alfaro-Almagro, and Jbabdi, 2018). That settles it. The ‘science’ is dead.
It’s so simple: you notice physical differences in brain size between two corpses—one’s PFC was bigger than his OFC, and with the other, his OFC was bigger than his PFC. That’s it. By the accusers’ logic, neuroanatomists would be considered phrenologists today, since they note size differences between individual parts of brains. Merely noting these differences makes no judgment about the potential of brains of different sizes, overall volumes, bumps, etc.
It is ridiculous to accuse someone of being a ‘phrenologist’ in 2018. And while the study of skull/brain sizes in the 19th century did pave the way for modern neuroscience, and while phrenologists did get a few things right, they were largely wrong. No, you cannot read one’s character from feeling the bumps on their skull. I understand the logic, and, back then, it would have made a lot of sense. But noticing physical differences that are empirically verifiable does not make one a phrenologist or a pusher of phrenology.
In sum, studying physical differences is interesting and tells us a lot about our past and maybe even our future. Saying that someone is a phrenologist because they observe and accept physical differences in the size of the brain, skull, and neuroanatomic regions is like saying that physical anthropologists and forensic scientists are phrenologists because they measure people’s skulls to ascertain things about their medical history. Chastising someone who tells you that one person has a different brain size than another, by calling them outdated names in an attempt to discredit them, doesn’t make sense. It seems that some people cannot accept physical differences that are measurable again and again, because doing so may go against some long-held belief.
I was on Warski Live the other night and had an extremely short back-and-forth with Jared Taylor. I’m happy I got the chance to briefly discuss things with him, but I got kicked out after about 20 minutes. Taylor made all of the same old claims, and since everyone continued to speak I couldn’t really get a word in.
I first stated that Jared got me into race realism and that I respected him. He said that once you see the reality of race, then history, etc., becomes clearer.
To cut through everything, I first stated that I don’t believe there is any utility to IQ tests, and that a lot of people believe people have surfeits of ‘good genes’ or ‘bad genes’ that carry ‘positive’ or ‘negative’ charges. IQ tests are useless, and people ‘fetishize’ them. He responded that IQ is one of, if not the, most studied traits in psychology, to which JF asked me whether I contested that statement; I responded ‘no’ (behavioral geneticists need work too, ya know!). He then talked about how IQ ‘predicts’ success in life, e.g., success in college.
Then, a bit after I stated that, they seemed to paint me as a leftist because of my views on IQ. Well, I’m far right (not that my politics matter to my views on scientific matters), and they made it seem like I meant that Jared fetishized IQ, when I had said ‘most people’.
Then Jared gave a quick rundown of the same old and tired talking points about how IQ is related to crime, success, etc. I then asked him whether there was a definition of intelligence and whether there was consensus in the psychological community on the matter.
I quoted this excerpt from Ken Richardson’s 2002 paper What IQ Tests Test where he writes:
Of the 25 attributes of intelligence mentioned, only 3 were mentioned by 25 per cent or more of respondents (half of the respondents mentioned ‘higher level components’; 25 per cent mentioned ‘executive processes’; and 29 per cent mentioned ‘that which is valued by culture’). Over a third of the attributes were mentioned by less than 10 per cent of respondents (only 8 per cent of the 1986 respondents mentioned ‘ability to learn’).
Jared then stated:
“Well, there certainly are differing ideas as to what are the differing components of intelligence. The word “intelligence” on the other hand exists in every known language. It describes something that human beings intuitively understand. I think if you were to try to describe sex appeal—what is it that makes a woman appealing sexually—not everyone would agree. But most men would agree that there is such a thing as sex appeal. And likewise in the case of intelligence, to me intelligence is an ability to look at the facts in a situation and draw the right conclusions. That to me is one of the key concepts of intelligence. It’s not necessarily “the capacity to learn”—people can memorize without being particularly intelligent. It’s not necessarily creativity. There could be creative people who are not necessarily high in IQ.
I would certainly agree that there is no universally accepted definition for intelligence, and yet, we all instinctively understand that some people are better able to see to the essence of a problem, to find correct solutions to problems. We all understand this and we all experience this in our daily lives. When we were in class in school, there were children who were smarter than other children. None of this is particularly difficult to understand at an intuitive level, and I believe that by somehow saying because it’s impossible to come up with a definition that everyone will accept, there is no such thing as intelligence, that’s like saying “Because there may be no agreement on the number of races, that there is no such thing as race.” This is an attempt to completely sidetrack a question—that I believe—comes from dishonest motives.”
(“… comes from dishonest motives” is an appeal to motive. One can make that claim about anyone, for any reason; no matter the reason, it’s fallacious. On ‘ability to learn’, see below.)
Now here is the fun part: I asked him “How do IQ tests test intelligence?” He then began talking about the Raven (as expected):
“There are now culture-free tests, the best-known of which is Raven’s Progressive Matrices, and this involves recognizing patterns and trying to figure out what is the next step in a pattern. This is a test that doesn’t require any language at all. You can show an initial simple example, the first square you have one dot, the next square you have two dots, what would be in the third square? You’d have a choice between 3 dots, 5 dots, 20 dots, well the next step is going to be 3 dots. You can explain what the initial patterns are to someone who doesn’t even speak English, and then ask them to go ahead and complete the succeeding problems that are more difficult. No language involved at all, and this is something that correlates very, very tightly with more traditional, verbally based, IQ tests. Again, this is an attempt to measure capacity that we all inherently recognize as existing, even though we may not be able to define it to everyone’s mutual satisfaction, but one that is definitely there.
Ultimately, we will be able to measure intelligence through direct assessment of the brain, and it will be possible to do through genetic analysis. We are beginning to discover the gene patterns associated with high intelligence. Already there have been patent applications for IQ tests based on genetic analysis. We really aren’t at the point where, by spitting in a cup and analyzing the DNA, you can tell that this guy has a 140 IQ and this guy a 105 IQ. But we will eventually get there. At the same time, there are aspects of the brain that can be analyzed: the rapidity with which signals are transmitted from one part of the brain to the other, the density of grey matter, the efficiency with which white matter communicates between the different grey matter areas of the brain.
I’m quite confident that there will come a time where you can just strap on a set of electrodes and have someone think about something—or even not think about anything at all—and we will be able to assess the power of the brain directly through physical assessment. People are welcome to imagine that this is impossible, or be skeptical about that, but I think we’re definitely moving in that direction. And when the day comes—when we really have discovered a large number of the genetic patterns that are associated with high intelligence, and there will be many of them because the brain is the most complicated organ in the human body, and a very substantial part of the human genome goes into constructing the brain. When we have gotten to the bottom of this mystery, I would bet the next dozen mortgage payments that those patterns—alleles as they’re called, genetic patterns—that are associated with high intelligence will not be found to be equally distributed between people of all races.”
Then immediately after that, the conversation changed. I will respond in points:
1) First off, as I’m sure most long-time readers know, I’m not a leftist, and the fact that (in my opinion) it was implied that I am one because I contest the utility of IQ is kind of insulting. I’m not a leftist, nor have I ever been one.
2) On his points on definitions of ‘intelligence’: The point is to come to a complete scientific consensus on how to define the word and the right way to study it, and only then to think about the implications of the trait in question, after its reality has been empirically verified. That’s one reason to bring up the lack of consensus in the psychological community—ask 50 psychologists what intelligence is and you’ll get numerous different answers.
3) IQ and success/college: Funny that this gets brought up. IQ tests are constructed to ‘predict’ success since they are already similar to achievement tests used in school (read arguments here, here, and here). Even then, you would expect college grades to be highly correlated with job performance 6 or more years after graduation, right? Wrong. Armstrong (2011: 4) writes: “Grades at universities have a low relationship to long-term job performance (r = .05 for 6 or more years after graduation) despite the fact that cognitive skills are highly related to job performance (Roth, et al. 1996). In addition, they found that this relationship between grades and job performance has been lower for the more recent studies.” Though even the claim that “cognitive skills are highly related to job performance” lies on shaky ground (Richardson and Norgate, 2015).
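To put that figure in perspective: the share of variance one variable explains in another is the square of their correlation. A minimal sketch in plain Python, assuming nothing beyond the r = .05 figure quoted above:

```python
# Variance explained by a correlation is r squared (the coefficient of
# determination). With r = .05, grades account for a vanishingly small
# share of long-term job performance.
def variance_explained(r):
    """Return r^2, the proportion of variance explained by correlation r."""
    return r ** 2

print(round(variance_explained(0.05), 4))  # 0.0025, i.e., 0.25% of the variance
```

That is, even taking the reported correlation at face value, grades would account for a quarter of one percent of the variance in long-term job performance.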
4) My criticisms of IQ do not mean that I deny that ‘intelligence’ exists (a common strawman); my criticisms concern test construction and validity, not the whole “intelligence doesn’t exist” canard. I, of course, don’t discard the hypothesis that individuals and populations can differ in ‘intelligence’ or ‘intelligence genes’; the critiques provided are against the “IQ-tests-predict-X-in-life” and “IQ-tests-test-intelligence” claims. IQ tests test cultural distance from the middle class. Most IQ tests have general-knowledge questions on them, which contribute a considerable amount to the final score. Since IQ tests test learned knowledge present in some cultures and not in others (which is true even for ‘culture-fair’ tests; see point 5), learning is intimately linked with Jared’s definition of ‘intelligence’. So I would state that they test learned knowledge, and learned knowledge that is more present in some classes than in others, therefore making IQ tests proxies for social class, not ‘intelligence’ (Richardson, 2002; 2017b).
5) Now for my favorite part: the Raven. This is the test that everyone (or most people) believes is culture-free and culture-fair: since there is nothing verbal, it supposedly bypasses any cultural bias due to differences in general knowledge. However, this assumption is extremely simplistic and hugely flawed.
For one, the Raven is perhaps the test that most reflects, even more so than verbal tests, knowledge structures present in some cultures more than others (Richardson, 2002). One may look at the items on the Raven and proclaim ‘Wow, anyone who gets these right must be intelligent’, but the most ‘complicated’ Raven’s items are no more complicated than everyday life (Carpenter, Just, and Shell, 1990; Richardson, 2002; Richardson and Norgate, 2014). Furthermore, there is no cognitive theory by which items are selected for analysis and subsequent entry onto a particular Raven’s test. Drawing on John Raven’s personal notes, Carpenter, Just, and Shell (1990: 408) show that John Raven—the creator of the Raven’s Progressive Matrices test—used his “intuition and clinical experience” to rank-order items “without regard to any underlying processing theory.”
Now to address the claim that the Raven is ‘culture-free’: take two genetically similar groups from one population, one of which forages as hunter-gatherers while the other lives in villages with schools. The foragers, tested at age 11, score 31 percent, while those living in the more modern areas with amenities get 72 percent right (‘average’ individuals get 78 percent right while ‘intellectually defective’ individuals get 47 percent right; Heine, 2017: 188). The people I am talking about are the Tsimane, a foraging, hunter-gatherer population in Bolivia. Davis (2014) studied the Tsimane and administered the Raven to the two groups described above. Now, if the test truly were ‘culture-free’ as is claimed, they should score similarly, right?
Wrong. She found that reading was the best predictor of performance on the Raven. Children who attend school (presumably) learn how to read (and obviously have a better chance of learning to read if they don’t live in a hunter-gatherer environment). So the Tsimane who lived a more modern lifestyle scored more than twice as high on the Raven as those who lived a hunter-gatherer lifestyle. We have two genetically similar groups, one exposed to more schooling than the other, and schooling is what is most strongly related to performance on the Raven. Therefore, this study is strong evidence that the Raven is not culture-fair, since “by its very nature, IQ testing is culture bound” (Cole, 1999: 646, quoted in Richardson, 2002: 293).
6) I doubt that we will ever be able to genotype people and get their ‘IQ’ results. Heine (2017) states that you would need all of the SNPs on a gene chip, numbering more than 500,000, to predict even half of the variation between individuals in IQ (Davies et al, 2011; Chabris et al, 2012). Furthermore, by the same reasoning, most genes may be ‘height genes’ (Goldstein, 2009). This leads Heine (2017: 175) to conclude that “… it seems highly doubtful, contra Robert Plomin, that we’ll ever be able to estimate someone’s intelligence with much precision merely by looking at his or her genome.”
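For what it’s worth, the ‘spit in a cup’ prediction Jared envisions would be a polygenic score: a weighted sum of allele counts across many SNPs. A minimal sketch with made-up SNP counts and effect sizes (real GWAS effect sizes for cognitive measures are tiny, which is why hundreds of thousands of SNPs are needed to explain even modest variance):

```python
import random

random.seed(42)

# Hypothetical illustration only: the effect sizes below are random
# numbers, not estimates from any real GWAS.
N_SNPS = 1_000  # real gene chips carry 500,000+ SNPs

effect_sizes = [random.gauss(0.0, 0.01) for _ in range(N_SNPS)]

def polygenic_score(genotype, effects):
    """Weighted sum of allele counts (0, 1, or 2 copies per SNP)."""
    return sum(g * b for g, b in zip(genotype, effects))

person = [random.choice([0, 1, 2]) for _ in range(N_SNPS)]
print(polygenic_score(person, effect_sizes))
```

Even with such a score in hand, translating it into an individual ‘IQ’ is exactly the step Heine doubts can ever be done with much precision.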
I’ve also critiqued GWAS/IQ studies by making an analogous argument about testosterone, the GWAS studies for testosterone, and how testosterone is produced in the body (it’s indirectly controlled by DNA, while what powers the cell is ATP, adenosine triphosphate; Khakh and Burnstock, 2009).
7) Regarding the claims on grey and white matter: he’s citing Haier et al’s work on neural efficiency and the white- and grey-matter correlates of IQ, and on how different networks of the brain “talk” to each other, as in the P-FIT hypothesis of Jung and Haier (2007; which has received numerous critiques and praises). Though I won’t go in depth on this point here, I will only say that correlations from images, and correlations from correlations, etc., aren’t good enough (the neural network they discuss may also be related to other, noncognitive factors). Lastly, MRI readings are known to be confounded by noise, visual artifacts, and inadequate sampling; even getting emotional in the machine may cause noise in the readings (Okon-Singer et al, 2015), and since movements like speech and even eye movements affect readings, one must use caution when describing normal variation (Richardson, 2017a).
8) There are no genes for intelligence (I’d also ask “what is a gene?“) in the fluid genome (Ho, 2013), so I think that ‘identifying’ ‘genes for’ IQ will be a bit hard… Also touching on this point, Jared is correct that many genes—most, as a matter of fact—are expressed in the brain. Eighty-four percent, to be exact (Negi and Guda, 2017), so I think there will be a bit of a problem there… Further complicating matters is social class. Genetic population structures have emerged due to social class formation and migration. This would, predictably, cause genetic differences between classes, but these genetic differences are irrelevant to education and cognitive ability (Richardson, 2017b). This, then, would account for the extremely small GWAS correlations observed.
9) For the last point, I want to touch briefly on the concept of heritability (because I have a larger theme planned for the concept). Heritability ‘estimates’ have group and individual flaws, environmental flaws, and genetic flaws (Moore and Shenk, 2017), which arise from the use of the highly flawed CTM (classical twin method) (Joseph, 2002; Richardson and Norgate, 2005; Charney, 2013; Fosse, Joseph, and Richardson, 2015). The CTM inflates heritabilities since environments are not equalized, as they are in animal breeding research; this is why estimates from controlled breeding experiments are substantially lower than the sky-high heritabilities we get for IQ and other traits, each of which “surpasses almost anything found in the animal kingdom” (Schonemann, 1997: 104).
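The CTM estimate being criticized here is, at bottom, Falconer’s formula: heritability is taken to be twice the difference between MZ and DZ twin correlations. A minimal sketch with illustrative (not real) correlations, showing how the number rests entirely on the contested equal-environments assumption:

```python
def falconer_h2(r_mz, r_dz):
    """Classical twin method estimate: h^2 = 2 * (r_MZ - r_DZ).

    Valid only if MZ and DZ pairs experience equally similar
    environments; critics argue this assumption is violated,
    which inflates the estimate."""
    return 2 * (r_mz - r_dz)

def shared_env_c2(r_mz, r_dz):
    """Shared-environment estimate under the same assumptions: c^2 = 2*r_DZ - r_MZ."""
    return 2 * r_dz - r_mz

# Illustrative twin correlations (placeholders, not data from any study):
r_mz, r_dz = 0.85, 0.60
print(round(falconer_h2(r_mz, r_dz), 2))    # 0.5
print(round(shared_env_c2(r_mz, r_dz), 2))  # 0.35
```

If MZ twins’ environments are in fact more similar than DZ twins’ (as critics of the CTM argue), part of the MZ-DZ gap is environmental and the h² figure is overstated; controlled breeding designs avoid this by equalizing environments directly.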
Lastly, there are numerous hereditarian scientific fallacies, which include: 1) trait heritability does not predict what would occur when environments or genes change; 2) heritability estimates are inaccurate since they don’t account for gene-environment covariation or interaction, while also ignoring nonadditive effects on behavior and cognitive ability; 3) molecular genetics does not show evidence that we can partition environmental from genetic factors; 4) heritability wouldn’t tell us which traits are ‘genetic’ or not; and 5) proposed evolutionary models of human divergence are not supported by these studies (since heritability in the present doesn’t speak to what traits were like thousands of years ago) (Bailey, 1997). We, then, have a problem. Heritability estimates are useful for botanists and farmers because they can control the environment (Schonemann, 1997; Moore and Shenk, 2017). In twin studies the environment cannot be fully controlled, so the estimates should be taken with a grain of salt. It is for these reasons that some researchers have called for an end to the use of the term ‘heritability’ in science (Guo, 2000). For all of these reasons (and more), heritability estimates are useless for humans (Bailey, 1997; Moore and Shenk, 2017).
Still other authors state that the use of heritability estimates “attempts to impose a simplistic and reified dichotomy (nature/nurture) on non-dichotomous processes” (Rose, 2006), while Lewontin (2006) argues that heritability is a “useless quantity” and that to better understand biology, evolution, and development we should analyze causes, not variances. (I too believe that heritability estimates are useless—especially given the huge problems with twin studies and the fact that the correct protocols cannot be carried out due to ethical concerns.) Either way, heritability tells us nothing about which genes cause the trait in question, nor which pathways cause trait variation (Richardson, 2012).
In sum, I was glad to appear and discuss (however briefly) with Jared. I listened to the exchange a few times and I realize (and have known before) that I’m a pretty bad public speaker. Either way, I’m glad I got a few points and some smaller parts of the overarching arguments out there, and I hope I have a chance to return to that show in the future (preferably to debate JF on IQ). I will, of course, be better prepared for that. (When I saw that Jared would appear, I decided to go on to discuss.) Jared is clearly wrong that the Raven is ‘culture-free’, and most of his retorts were pretty basic.
(Note: I will expand on all 9 of these points in separate articles.)