
The Malleability of IQ

1700 words

1843 Magazine published an article back in July titled The Curse of Genius, stating that “Within a few points either way, IQ is fixed throughout your life …” How true is this claim? How much is “a few points”? Would it account for any substantial increase or decrease? A number of studies have tracked IQ scores in the same sample longitudinally, and they let us test the claim. If IQ scores can change substantially, then IQ is not “like height”, as most hereditarians claim—height being “stable” at adulthood and decreased only by certain events, as IQ is said to be. As we will see, these claims fail.

IQ is, supposedly, a stable trait—that is, like height, after a certain age it does not change (barring significant life events, such as a bad back injury that causes one to slouch, decreasing height, or a traumatic brain injury—though the latter does not always decrease IQ scores). IQ tests supposedly measure a stable biological trait—“g”, or general intelligence (which is built into the test; see Richardson, 2002, and see Schönemann’s papers for refutations of Jensen’s and Spearman’s “g”).

IQ levels are expected to stick to people like their blood group or their height. But imagine a measure of a real, stable bodily function of an individual that is different at different times. You’d probably think what a strange kind of measure. IQ is just such a measure. (Richardson, 2017: 102)

Neuroscientist Allyson Mackey’s team, for example, found that “after just eight weeks of playing these games the kids showed a pretty big IQ change – an improvement of about 30% or about 10 points in IQ.” Looking at a sample of 7-9-year-olds, Mackey et al (2011) recruited children from low-SES backgrounds to participate in cognitive training programs for an hour a day, two days a week. They predicted that children from lower-SES backgrounds would benefit more from such cognitive/environmental enrichment (indeed, think of the differences between lower- and middle-SES people).

Mackey et al (2011) tested the children on their processing speed (PS), working memory (WM), and fluid reasoning (FR). To assess FR, they used a matrix reasoning task with two versions (for the retest after the 8-week training). For PS, they used a cross-out test in which “one must rapidly identify and put a line through each instance of a specific symbol in a row of similar symbols” (Mackey et al, 2011: 584), along with the coding test, part of the WISC-IV, which “is a timed test in which one must rapidly translate digits into symbols by identifying the corresponding symbol for a digit provided in a legend” (ibid.). Working memory was assessed through digit and spatial span tests from the Wechsler Memory Scale.

The games they used were both computerized and non-computerized (some played on a Nintendo DS). Mackey et al (2011: 585) write:

Both programs incorporated a mix of commercially available computerized and non-computerized games, as well as a mix of games that were played individually or in small groups. Games selected for reasoning training demanded the joint consideration of several task rules, relations, or steps required to solve a problem. Games selected for speed training involved rapid visual processing and rapid motor responding based on simple task rules.

So at the end of the 8-week program, cognitive abilities had increased in both groups. Children in the reasoning training solved an average of 4.5 more matrices than on their previous attempt. Mackey et al (585-586) write:

Before training, children in the reasoning group had an average score of 96.3 points on the TONI, which is normed with a mean of 100 and a standard deviation of 15. After training, they had an average score of 106.2 points. This gain of 9.9 points brought the reasoning ability of the group from below average to above average for their age. [But such gains were not significant on the test of nonverbal intelligence, showing an increase of 3.5 points.]

One of the biggest surprises was that 4 out of the 20 children in the reasoning training showed an increase of over 20 points. This, of course, refutes the claim that such “ability” is “fixed”, as hereditarians have claimed. Mackey et al (2011: 587) write that “the very existence and widespread use of IQ tests rests on the assumption that tests of FR measure an individual’s innate capacity to learn.” This, quite obviously, is a false assumption. (It comes from Cattell, no less.) This buttresses the claim that IQ test performance is, of course, experience-dependent.

This study shows that IQ is malleable and that exposure to certain cultural tools leads to increases in test scores, as hypothesized (Richardson, 2002, 2017).

Salthouse (2013) writes that:

results from different types of approaches are converging on a conclusion that practice or retest contributions to change in several cognitive abilities appear to be nearly the same magnitude in healthy adults between about 20 and 80 years of age. These findings imply that age comparisons of longitudinal change are not confounded with differences in the influences of retest and maturational components of change, and that measures of longitudinal change may be underestimates of the maturational component of change at all ages.

Moreno et al (2011) show that after 20 days of computerized training, children in the music group showed enhanced scores on a measure of verbal ability—90 percent of the sample showed the improvement. They further write that “the fact that only one of the groups showed a positive correlation between brain plasticity (P2) and verbal IQ changes suggests a link between the specific training and the verbal IQ outcome, rather than improvement due to repeated testing.”

Schellenberg (2004) describes an advertisement looking for 6-year-olds to enroll in art lessons. 112 children were enrolled into four groups: two groups received music lessons for a year (either standard keyboard lessons or Kodaly voice training), while the other two groups received either drama training or no training at all. Schellenberg (2004: 3) writes that “Children in the control groups had average increases in IQ of 4.3 points (SD = 7.3), whereas the music groups had increases of 7.0 points (SD = 8.6).” So, compared to drama training or no training at all, the children in the music training gained 2.7 IQ points more.


(Figure 1 from Schellenberg, 2004)

Ramsden et al (2011: 3-4) write:

The wide range of abilities in our sample was confirmed as follows: FSIQ ranged from 77 to 135 at time 1 and from 87 to 143 at time 2, with averages of 112 and 113 at times 1 and 2, respectively, and a tight correlation across testing points (r = 0.79; P < 0.001). Our interest was in the considerable variation observed between testing points at the individual level, which ranged from −20 to +23 for VIQ, −18 to +17 for PIQ and −18 to +21 for FSIQ. Even if the extreme values of the published 90% confidence intervals are used on both occasions, 39% of the sample showed a clear change in VIQ, 21% in PIQ and 33% in FSIQ. In terms of the overall distribution, 21% of our sample showed a shift of at least one population standard deviation (15) in the VIQ measure, and 18% in the PIQ measure. [Also see The Guardian article on this paper.]

Richardson (2017: 102) writes “Carol Sigelman and Elizabeth Rider reported the IQs of one group of children tested at regular intervals between the ages of two years and seventeen years. The average difference between a child’s highest and lowest scores was 28.5 points, with almost one-third showing changes of more than 30 points (mean IQ is 100). This is sufficient to move an individual from the bottom to the top 10 percent or vice versa.” [See also the page in Sigelman and Rider, 2011.]
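The decile cutoffs behind Richardson’s remark can be checked directly. Assuming only the conventional IQ scaling (mean 100, as stated in the passage, and the standard SD of 15), Python’s standard library gives the bottom and top 10 percent cutoffs:

```python
from statistics import NormalDist

# IQ is conventionally normed to a mean of 100 and an SD of 15.
iq = NormalDist(mu=100, sigma=15)

bottom_decile = iq.inv_cdf(0.10)  # score below which the bottom 10% fall
top_decile = iq.inv_cdf(0.90)     # score above which the top 10% fall

print(f"bottom 10% cutoff: {bottom_decile:.1f}")  # ~80.8
print(f"top 10% cutoff:    {top_decile:.1f}")     # ~119.2
print(f"gap:               {top_decile - bottom_decile:.1f} points")  # ~38.4
```

The gap between the two cutoffs is about 38 points, so the changes of more than 30 points that almost a third of the children showed are of the same order as the whole bottom-decile-to-top-decile distance.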

Mortensen et al (2003) show that IQ remains stable from young adulthood to midlife in low-birthweight samples. Schwartz et al (1975: 693), however, write that “Individual variations in patterns of IQ changes (including no changes over time) appeared to be related to overall level of adjustment and integration and, as such, represent a sensitive barometer of coping responses. Thus, it is difficult to accept the notion of IQ as a stable, constant characteristic of the individual that, once measured, determines cognitive functioning for any age level for any test.”

There is even instability in IQ seen in high-SES Guatemalans born between 1941 and 1953 (Mansukoski et al, 2019). Mansukoski et al’s (2019) analysis “highlight[s] the complicated nature of measuring and interpreting IQ at different ages, and the many factors that can introduce variation in the results. Large variation in the pre-adult test scores seems to be more of a norm than a one-off event.” Possible reasons for the change could be “adverse life events, larger than expected deviations of individual developmental level at the time of the testing and differences between the testing instruments” (Mansukoski et al, 2019). They also found that “IQ scores did not significantly correlate with age, implying there is no straightforward developmental cause behind the findings”; how weird…

Summarizing such studies that show an increase in IQ scores in children and teenagers, Richardson (2017: 103) writes:

Such results suggest that we have no right to pin such individual differences on biology without the obvious, but impossible, experiment. That would entail swapping the circumstances of upper- and lower-class newborns—parents’ inherited wealth, personalities, stresses of poverty, social self-perception, and so on—and following them up, not just over years or decades, but also over generations (remembering the effects of maternal stress on children, mentioned above). And it would require unrigged tests based on proper cognitive theory.

In sum, the claim that IQ is stable at a certain age like other physical traits is clearly false. Numerous interventions and events can increase or decrease one’s IQ score. The results discussed in this article show that familiarity with certain types of cultural tools increases one’s score (as in the low-SES group tested in Mackey et al, 2011). Although the n is low (which I know is one of the first things I will hear), I’m not worried about that: what matters is the individual change in IQ at certain ages, and these studies show exactly that. So the results here support Richardson’s thesis that “IQ scores might be more an index of individuals’ distance from the cultural tools making up the test than performance on a singular strength variable” (Richardson, 2002, 2012).

IQ is not stable; IQ is malleable, whether through exposure to certain cultural/class tools or through exposures that are more common in certain classes than in others. Indeed, this lends credence to Castles’ (2013) claim that “Intelligence is in fact a cultural construct, specific to a certain time and place.”


Chopsticks Genes and Population Stratification

1200 words

Why do some groups of people use chopsticks and others do not? Years back, Dean Hamer created a thought experiment. Imagine a researcher who finds a few hundred students from a university and gathers DNA samples from their cheeks, which are then mapped for candidate genes associated with chopstick use. Come to find out, one of the genetic markers is associated with chopstick use—accounting for 50 percent of the variation in the trait (Hamer and Sirota, 2000). The effect even replicates many times and is highly significant: but it is biologically meaningless.

One may look at East Asians and ask “Why do they use chopsticks?” or “Why are they so good at using them while Americans aren’t?” and end up with ridiculous studies like the one described above. They may even find an association between the trait/behavior and a genetic marker. They may even find that it replicates and is a significant hit. But it can all be for naught, since population stratification has reared its head. Population stratification “refers to differences in allele frequencies between cases and controls due to systematic differences in ancestry rather than association of genes with disease” (Freedman et al, 2004). It “is a potential cause of false associations in genetic association studies” (Oetjens et al, 2016).

Such population stratification in the chopsticks-gene study described above should have been anticipated, since two different populations were being compared. Kaplan (2000: 67-68) describes this well:

A similar argument, by the way, holds true for molecular studies. Basically, it is easy to mistake mere statistical associations for a causal connection if one is not careful to properly partition one’s samples. Hamer and Copeland develop an amusing example of some hypothetical, badly misguided researchers searching for the “successful use of selected hand instruments” (SUSHI) gene (hypothesized to be associated with chopstick usage) between residents in Tokyo and Indianapolis. Hamer and Copeland note that while you would be almost certain to find a gene “associated with chopstick usage” if you did this, the design of such a hypothetical study would be badly flawed. What would be likely to happen here is that a genetic marker associated with the heterogeneity of the group involved (Japanese versus Caucasian) would be found, and the heterogeneity of the group involved would independently account for the differences in the trait; in this case, there is a cultural tendency for more people who grow up in Japan than people who grow up in Indianapolis to learn how to use chopsticks. That is, growing up in Japan is the causally important factor in using chopsticks; having a certain genetic marker is only associated with chopstick use in a statistical way, and only because those people who grow up in Japan are also more likely to have the marker than those who grew up in Indianapolis. The genetic marker is in no way causally related to chopstick use! That the marker ends up associated with chopstick use is therefore just an accident of design (Hamer and Copeland, 1998, 43; Bailey 1997 develops a similar example).
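Kaplan’s point can be sketched in a few lines of Python. The frequencies and sample sizes below are invented purely for illustration; the marker and the trait are generated independently within each population, yet pooling the two populations manufactures a strong association:

```python
import random

random.seed(42)

# Two populations that differ in BOTH marker frequency and chopstick use,
# for entirely independent (ancestral vs. cultural) reasons.
populations = [
    # (marker frequency, probability of chopstick use, sample size)
    (0.8, 0.9, 500),  # "Tokyo" sample (illustrative numbers)
    (0.2, 0.1, 500),  # "Indianapolis" sample (illustrative numbers)
]

marker, uses_chopsticks = [], []
for marker_freq, chopstick_prob, n in populations:
    for _ in range(n):
        # Drawn INDEPENDENTLY within each population: no causal link at all.
        marker.append(random.random() < marker_freq)
        uses_chopsticks.append(random.random() < chopstick_prob)

# Pooled across populations, the marker "predicts" chopstick use anyway.
with_marker = [c for m, c in zip(marker, uses_chopsticks) if m]
without_marker = [c for m, c in zip(marker, uses_chopsticks) if not m]
print(f"chopstick use | marker present: {sum(with_marker) / len(with_marker):.2f}")
print(f"chopstick use | marker absent:  {sum(without_marker) / len(without_marker):.2f}")
```

Comparing marker carriers to non-carriers within Tokyo only, or within Indianapolis only, makes the association vanish, which is exactly what the stratification diagnosis predicts.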

In this way, most—if not all—of the results of genome-wide association studies (GWASs) can be accounted for by population stratification. Hamer and Sirota (2000) is a warning to psychiatric geneticists not to be quick to ascribe function and causation to hits on certain genes from association studies (of which GWASs are one kind).

Many studies, for example Sniekers et al (2017) and Savage et al (2018), purport to “account for” less than 10 percent of the variance in a trait like “intelligence” (derived from IQ tests lacking construct validity). Other GWA studies purport to show genes that affect testosterone production, such that those with a certain variant are more likely to have low testosterone (Ohlsson et al, 2011). Population stratification can have an effect in these studies, too: GWASs give rise to spurious correlations that arise due to population structure—which is what GWASs are actually measuring. They are measuring social class, not a “trait” (Richardson, 2017b; Richardson and Jones, 2019). Note that correcting for socioeconomic status (SES) fails, as the two are distinct (Richardson, 2002). (Note, too, that GWASs lead to polygenic scores (PGSs), which are, of course, flawed as well.)

Such papers presume that correlations are causes and that interactions between genes and environment either don’t exist or are irrelevant (see Gottfredson, 2009 and my reply). Both of these claims are false. Correlations can, of course, lead to figuring out causes, but, as with the chopstick example above, attributing causation to findings that are even “replicable” and “strongly significant” will still lead to false positives due to that same population stratification. Of course, GWAS and similar studies are attempting to account for the heritability estimates gleaned from twin, family, and adoption studies. The assumptions used in those kinds of studies, though, have been shown to be false; heritability estimates are therefore highly exaggerated (and flawed), which leads to “looking for genes” that aren’t there (Charney, 2012; Joseph et al, 2016; Richardson, 2017a).

Richardson’s (2017b) argument is simple: (1) there is genetic stratification in human populations which will correlate with social class; (2) since there is genetic stratification in human populations which will correlate with social class, the genetic stratification will be associated with the “cognitive” variation; (3) if (1) and (2) then what GWA studies are finding are not “genetic differences” between groups in terms of “intelligence” (as shown by “IQ tests”), but population stratification between social classes. Population stratification still persists even in “homogeneous” populations (see references in Richardson and Jones, 2019), and so, the “corrections for” population stratification are anything but.

So what accounts for the small pittance of “variance explained” in GWASs and other similar association studies (Sniekers et al, 2017 “explained” less than 5 percent of variance in IQ)? Population stratification—specifically, the capturing of genetic differences that arose through migration. GWA studies use huge samples in order to find the signals of genes of small effect that underlie the complex trait being studied. Take what Noble (2018) says:

As with the results of GWAS (genome-wide association studies) generally, the associations at the genome sequence level are remarkably weak and, with the exception of certain rare genetic diseases, may even be meaningless (1321). The reason is that if you gather a sufficiently large data set, it is a mathematical necessity that you will find correlations, even if the data set was generated randomly so that the correlations must be spurious. The bigger the data set, the more spurious correlations will be found (3).

Calude and Longo (2016; emphasis theirs) “prove that very large databases have to contain arbitrary correlations. These correlations appear only due to the size, not the nature, of data. They can be found in “randomly” generated, large enough databases, which — as we will prove — implies that most correlations are spurious.”
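Calude and Longo’s point is easy to demonstrate with a minimal simulation (the column count and the |r| threshold below are arbitrary choices of mine, not figures from their paper): every variable is pure noise, yet scanning enough of them against a target yields many “strong” correlations.

```python
import random

random.seed(0)

n_rows, n_cols = 20, 5000  # small sample, many random variables

def pearson(x, y):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# Every column is pure noise: no real relationships exist by construction.
target = [random.gauss(0, 1) for _ in range(n_rows)]
columns = [[random.gauss(0, 1) for _ in range(n_rows)] for _ in range(n_cols)]

rs = [pearson(target, col) for col in columns]
spurious = [r for r in rs if abs(r) > 0.5]
print(f"columns with |r| > 0.5 against pure noise: {len(spurious)} of {n_cols}")
```

With 20 observations and 5,000 noise columns, on the order of a hundred columns typically clear |r| > 0.5 by chance alone; the bigger the dataset you scan, the more such spurious hits appear.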

So why should we take association studies seriously when they fall prey to the problem of population stratification (measuring differences between social classes and other populations) along with the fact that big datasets lead to spurious correlations? I fail to think of a good reason why we should take these studies seriously. The chopsticks gene example perfectly illustrates the current problems we have with GWASs for complex traits: we are just seeing what is due to social—and other—stratification between populations and not any “genetic” differences in the trait that is being looked at.

The Modern Synthesis vs the Extended Evolutionary Synthesis

2050 words

The Modern Synthesis (MS) has dominated evolutionary thought since its inception in the 1930s and 1940s. The MS is the integration of Darwinian natural selection and Mendelian genetics. Key assumptions include “(i) evolutionarily significant phenotypic variation arises from genetic mutations that occur at a low rate independently of the strength and direction of natural selection; (ii) most favourable mutations have small phenotypic effects, which results in gradual phenotypic change; (iii) inheritance is genetic; (iv) natural selection is the sole explanation for adaptation; and (v) macro-evolution is the result of accumulation of differences that arise through micro-evolutionary processes” (Laland et al, 2015).

Laland et al (2015) even have a helpful table on core assumptions of both the MS and Extended Evolutionary Synthesis (EES). The MS assumptions are on the left while the EES assumptions are on the right.


Darwinian cheerleaders, such as Jerry Coyne and Richard Dawkins, would claim that neo-Darwinism can—and already does—account for what the EES covers. However, that claim is clearly false. At its core, the MS is a gene-centered perspective whereas the EES is an organism-centered perspective.

To followers of the MS, evolution occurs through random mutations and changes in allele frequencies, which get selected by natural selection when they increase the fitness of the organism; the traits the genes ’cause’ are then carried on to the next generation due to their contribution to fitness. Drift, mutation, and gene flow also account for changes in gene frequencies, but to the Darwinian, selection is the strongest of these modes of evolution. The debate between the MS and the EES comes down to gene selectionism vs. developmental systems theory.

On the other hand, the EES is an organism-centered perspective. Adherents of the EES state that the organism is inseparable from its environment. Järvilehto (1998) describes this well:

The theory of the organism-environment system (Järvilehto, 1994, 1995) starts with the proposition that in any functional sense organism and environment are inseparable and form only one unitary system. The organism cannot exist without the environment and the environment has descriptive properties only if it is connected to the organism.

At its core, the EES makes evolution about the organism—its developmental system—and relegates genes, not as active causes of traits and behaviors, but as passive causes, being used by and for the system as needed (Noble, 2011; Richardson, 2017).

One can see that the core assumptions of the MS are very much like what Dawkins describes in his book The Selfish Gene (Dawkins, 1976). In the book, Dawkins claimed that we are what amounts to “gene machines”—that is, just vehicles for the riders, the genes. So, for example, since we are just gene machines, and if genes are literally selfish “things”, then all of our actions and behaviors can be reduced to the fact that our genes “want” to survive. But the “selfish gene” theory “is not even capable of direct empirical falsification” (Noble, 2011) because Richard Dawkins emphatically stated in The Extended Phenotype (Dawkins, 1982: 1) that “I doubt that there is any experiment that could prove my claim” (quoted in Noble, 2011).

Noble (2011) goes on to discuss Dawkins’ view of genes:

Now they swarm in huge colonies, safe inside gigantic lumbering robots, sealed off from the outside world, communicating with it by tortuous indirect routes, manipulating it by remote control. They are in you and me; they created us, body and mind; and their preservation is the ultimate rationale for our existence. (1976, 20)

Noble then switches the analogy: he likens genes not to “selfish” agents but to “prisoners”, stuck in the body with no way of escape. Since there is no experiment that could distinguish between the two views (which Dawkins admitted), Noble concludes that, instead of seeing genes as “selfish”, the physiological sciences can just as well regard them as “cooperative”, since they need to “cooperate” with the environment, other genes, gene networks, and everything else that comprises the whole organism.

In his 2018 book Agents and Goals in Evolution, Samir Okasha distinguishes between type I and type II agential thinking. “In type 1 [agential thinking], the agent with the goal is an evolved entity, typically an individual organism; in type 2, the agent is ‘mother nature’, a personification of natural selection” (Okasha, 2018: 23). An example of type I agential thinking is Dawkins’ selfish genes, while type II is the personification one imputes onto natural selection—a type of thinking that Okasha states “Darwin was himself first to employ” (Okasha, 2018: 36).

Okasha states that each gene’s ultimate goal is to outcompete other genes—for the gene in question to increase its frequency in the population. Genes can also have intermediate goals, such as maximizing fitness. Okasha gives three criteria for what makes something “an agent”: (1) goal-directedness; (2) behavioral flexibility; and (3) adaptedness. The “selfish” element “constitutes the strongest argument for agential thinking” about genes (Okasha, 2018: 73). However, as Denis Noble has tirelessly pointed out, genes (DNA sequences) are inert molecules (and only one part of the developing system) and so do not show behavioral flexibility or goal-directedness. Genes can (along with other parts of the system working in concert with them) exert adaptive effects on the phenotype; but when genes (and traits) are coextensive, selection cannot distinguish between the fitness-enhancing trait and the free-riding trait, so it only makes logical sense to claim that organisms are selected, not any individual traits (Fodor and Piattelli-Palmarini, 2010a, 2010b).

It is because of this that the neo-Darwinian gene-centric paradigm has failed, and why we need a new evolutionary synthesis. Some wish only to tweak the MS a bit so that it incorporates what it currently leaves out, but others want to overhaul the entire thing and extend it.

Here is the main reason why the MS fails: there is absolutely no reason to privilege any level of the system above any other! Causation is multi-level and constantly interacting. There is no a priori justification for privileging any developmental variable over any other (Noble, 2012, 2017). Both downward and upward causation exist in biological systems (which means that molecules depend on organismal context). The organism is also able to control stochasticity—which is “used to … generate novelty” (Noble and Noble, 2018). Lastly, there is the creation of novelty at new levels of selection, as when the organism is an active participant in the construction of its environment.

Now, what does the EES bring that is different from the MS? A whole bunch. Most importantly, it makes a slew of novel predictions. Laland et al (2016) write:

For example, the EES predicts that stress-induced phenotypic variation can initiate adaptive divergence in morphology, physiology and behaviour because of the ability of developmental mechanisms to accommodate new environments (consistent with predictions 1–3 and 7 in table 3). This is supported by research on colonizing populations of house finches [68], water fleas [132] and sticklebacks [55,133] and, from a more macro-evolutionary perspective, by studies of the vertebrate limb [57]. The predictions in table 3 are a small subset of those that characterize the EES, but suffice to illustrate its novelty, can be tested empirically, and should encourage deriving and testing further predictions.

[Table 3]


There are other ways to verify EES predictions, and they are simple and can be done in the lab. In his book Above the Gene, Beyond Biology: Toward a Philosophy of Epigenetics, philosopher of biology Jan Baedke notes that epigenetic processes induced in the lab and those observed in nature share the same methodological framework. So we can use lab-induced epigenetic processes to ask evolutionary questions and get evolutionary answers in an epigenetic framework. There are two problems, though. First, we don’t know whether experimental and natural epigenetic inducements will match up; and second, we don’t know whether epigenetic explanations, which focus on proximate rather than ultimate causes, can address evolutionary explananda. Baedke (2018: 89) writes:

The first has been addressed by showing that studies of epigenetic processes that are experimentally induced in the lab (in molecular epigenetics) and those observed in natural populations in the field (in ecological or evolutionary epigenetics) are not that different after all. They share a similar methodological framework, one that allows them to pose heuristically fruitful research questions and to build reciprocal transparent models. The second issue becomes far less fundamental if one understands the predominant reading of Mayr’s classical proximate-ultimate distinction as offering a simplifying picture of what (and how) developmental explanations actually explain. Once the nature of developmental dependencies has been revealed, the appropriateness of developmentally oriented approaches, such as epigenetics, in evolutionary biology is secured.

Further arguments for epigenetics from an evolutionary approach can be found in Richardson’s (2017) Genes, Brains, and Human Potential (chapter 4 and 5) and Jablonka and Lamb’s (2005) Evolution in Four Dimensions. More than genes alone are passed on and inherited, and this throws a wrench into the MS.

Some may fault DST for not offering anything comparable to Darwinism, as Dupre (2003: 37) notes:

Critics of DST complain that it fails to offer any positive programme that has achievements comparable to more orthodox neo-Darwinism, and so far this complaint is probably justified.

But this is irrelevant. For if we look at DST as just a part of the whole EES programme, then it is the EES that needs to—and does—“offer a positive programme that has achievements comparable to more orthodox neo-Darwinism” (Dupre, 2003: 37). And that is exactly what the EES does: it makes novel predictions; it explains what needs to be explained better than the MS does; and the MS has been shown to be incoherent (that is, there cannot be selection on only one level; there can only be selection on the organism). That the main tool of the MS (natural selection) has been shown by Fodor to be vacuous and non-mechanistic is yet another strike against it.

Since DST is a main part of the EES, and DST is “a wholeheartedly epigenetic approach to development, inheritance and evolution” (Griffiths, 2015) and the EES incorporates epigenetic theories, then the EES will live or die on whether or not its evolutionary epigenetic theories are confirmed. And with the recent slew of books and articles that attest to the fact that there is a huge component to evolutionary epigenetics (e.g., Baedke, 2018; Bonduriansky and Day, 2018; Meloni, 2019), it is most definitely worth seeing what we can find in regard to evolutionary epigenetics studies, since epigenetic changes induced in the lab and those that are observed in natural populations in nature are not that different. This can then confirm or deconfirm major hypotheses of the EES—of which there are many. It is time for Lamarck to make his return.

It is clear that the MS is lacking, as many authors have pointed out. To understand evolutionary history and why organisms have the traits they do, we need much more than the natural selection-dominated neo-Darwinian Modern Synthesis. We need a new synthesis (which has been formulated for the past 15-20 years) and only through this new synthesis can we understand the hows and whys. The MS was good when we didn’t know any better, but the reductionism it assumes is untenable; there cannot be any direct selection on any level (i.e., the gene) so it is a nonsensical programme. Genes are not directly selected, nor are traits that enhance fitness. Whole organisms and their developmental systems are selected and propagate into future generations.

The EES (and DST along with it) holds to the causal parity thesis—“that genes/DNA play an important role in development, but so do other variables, so there is no reason to privilege genes/DNA above other developmental variables.” This causal parity between all the tools of development is telling: what is selected is not just one level of the system, as genetic reductionists (neo-Darwinists) would like to believe; selection occurs on the whole organism and what it interacts with (the environment); environments are inherited too. Once we purge the falsities the MS forced upon us regarding organisms, their relationship with the environment, and evolution as a whole, we can truly understand how and why organisms evolve the phenotypes they do; we cannot do so with genetic reductionist thinking and sloppy logic. So who wins? Not the MS, since it gets causation in biology wrong. This leaves the EES as the superior theory, predictor, and explainer.

The Human and Cetacean Neocortex and the Number of Neurons in it

2100 words

For the past 15 years, neuroscientist Suzana Herculano-Houzel has been revolutionizing the way we look at the human brain. In 2005, Herculano-Houzel and Lent (2005) pioneered a new way to ascertain the neuronal make-up of brains: dissolving brains into soup and counting the neurons in it. Herculano-Houzel (2016: 33-34) describes it so:

Because we [Herculano-Houzel and Lent] were turning heterogeneous tissue into a homogeneous—or “isotropic”—suspension of nuclei, he proposed we call it the “isotropic fractionator.” The name stuck for lack of any better alternative. It has been pointed out to me by none other than Karl Herrup himself that it’s a terribly awkward name, and I agree. Whenever I can (which is not often because journal editors don’t appreciate informality), I prefer to call our method of counting cells what it is: “brain soup.”

So, using this method, we soon came to know that humans have 86 billion neurons. This flew in the face of the accepted wisdom—humans have 100 billion neurons in the brain. However, when Herculano-Houzel searched for the original reference for this claim, she came up empty-handed. The claim that we have 100 billion neurons “had become such an established “fact” that neuroscientists were allowed to start their review papers with generic phrases to that effect without citing references. It was the neuroscientist’s equivalent to stating that genes were made of DNA: it had become a universally known “fact” (Herculano-Houzel, 2016: 27). Herculano-Houzel (2016: 27) further states that “Digging through the literature for the original studies on how many cells brains are made of, the more I read, the more I realized that what I was looking for simply didn’t exist.”

So this “fact” that the human brain is made up of 100 billion neurons was so entrenched in the literature that it became something like common knowledge—like the fact that the sun is 93 million miles from earth—that needed no reference in the scientific literature. Herculano-Houzel asked Roberto Lent, her co-author on the 2005 paper and the author of a textbook called 100 Billion Neurons, if he knew where the number came from, but of course he didn’t. Subsequent editions of the text added a question mark, making the title 100 Billion Neurons? (Herculano-Houzel, 2016: 28).

So, using this method, we now know that the cellular composition of the human brain is what is expected for a brain our size (Herculano-Houzel, 2009). According to the encephalization quotient (EQ), first used by Harry Jerison, humans have an EQ of between 7 and 8—the largest of any mammal. And so, since humans are the most intelligent species on earth, this must account for Man’s exceptional abilities. But does it?
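Jerison’s EQ is just the ratio of observed brain mass to the brain mass expected for a typical mammal of the same body mass, conventionally E = 0.12 × P^(2/3) with masses in grams. A minimal sketch (the human masses below are assumed round numbers for illustration, not figures from any of the cited papers):

```python
# Jerison-style encephalization quotient: observed brain mass divided by
# the brain mass expected for a generic mammal of the same body mass.
# Expected mass (grams): E = 0.12 * P**(2/3), with P the body mass in grams.

def eq(brain_g: float, body_g: float) -> float:
    expected = 0.12 * body_g ** (2 / 3)
    return brain_g / expected

# Assumed round numbers: ~1,350 g brain, ~65 kg body.
print(round(eq(1350, 65000), 1))  # prints 7.0 -- in the 7-8 range cited above
```

A value above 1 means a brain larger than expected for the body; the human value of roughly 7 is what the EQ tradition took as the mark of our exceptionalism.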

Herculano-Houzel et al (2007) showed that it isn’t humans, as popularly believed, whose brains are larger than expected; rather, the great apes—specifically orangutans and gorillas—have bodies too big for their brains. So the human brain is nothing but a linearly scaled-up primate brain—humans have the number of neurons expected for a primate brain of its size (Herculano-Houzel, 2012).

So Herculano-Houzel (2009) writes that “If cognitive abilities among non-human primates scale with absolute brain size (Deaner et al., 2007) and brain size scales linearly across primates with its number of neurons (Herculano-Houzel et al., 2007), it is tempting to infer that the cognitive abilities of a primate, and of other mammals for that matter, are directly related to the number of neurons in its brain.” Deaner et al (2007) showed that cognitive ability in non-human primates “is not strongly correlated with neuroanatomical measures that statistically control for a possible effect of body size, such as encephalization quotient or brain size residuals. Instead, absolute brain size measures were the best predictors of primate cognitive ability.” Herculano-Houzel et al (2007), meanwhile, showed that brain size scales linearly across primates with the number of neurons—as brain size increases, so does the neuronal count of that primate brain.

This can be seen in Fonseca-Azevedo and Herculano-Houzel’s (2012) study on the metabolic constraints between humans and gorillas. Humans cook food while great apes eat uncooked plant foods. Larger animals generally have larger brains. Gorillas, however, have larger bodies than we do but smaller brains than expected, while humans have a smaller body and a bigger brain. This comes down to the diets the two species eat: gorillas spend about 8-10 hours per day feeding, while humans with the same number of neurons, eating a raw, plant-based diet, would need to feed for about 9 hours a day to sustain a brain with that many neurons. This constraint was overcome by Homo erectus and his ability to cook food. Since he could cook food, he could afford a large brain with more neurons. Fonseca-Azevedo and Herculano-Houzel (2012) write that:

Given the difficulties that the largest great apes have to feed for more than 8 h/d (as detailed later), it is unlikely, therefore, that Homo species beginning with H. erectus could have afforded their combinations of MBD and number of brain neurons on a raw diet.

That cooking food unlocks a greater amount of energy can be seen in Richard Wrangham’s studies. Since the process of cooking gelatinizes the protein in meat, it makes it easier to chew and therefore digest; this same denaturation of proteins occurs in vegetables, too. So the claim that cooked food (one form of processing, along with mashing food with tools) yields no more usable calories (kcal) than raw food is false. It was the cooking of food (meat) that led to the expansion of the human brain—and, of course, allowed our linearly scaled-up primate brain to afford so many neurons. Large brains with high neuronal counts are extraordinarily expensive, as shown by Fonseca-Azevedo and Herculano-Houzel (2012).

Erectus had smaller teeth, reduced bite force, reduced chewing muscles, and a relatively smaller gut compared to earlier Homo. Zink and Lieberman (2016) show that slicing and mashing meat and underground storage organs (USOs) would decrease the number of chews per year by 2 million (13 percent), while total masticatory force would be reduced by about 15 percent. Further, by slicing and pounding foodstuffs into 41 percent smaller particles, the number of chews would be reduced by 5 percent and masticatory force by 12 percent. So it was not only cooking that led to the changes we see in erectus compared to others; it was also the beginning of food processing (slicing and mashing are forms of processing). (See also Catching Fire: How Cooking Made Us Human by Wrangham for the evidence that cooking catapulted our brains and neuronal capacity to their current size, along with Wrangham, 2017.)
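As a quick sanity check on the chewing figures quoted above (a 2-million-chew annual reduction said to amount to 13 percent), one can back out the implied baseline number of chews per year:

```python
# Back-of-envelope check of the food-processing figures cited above:
# slicing and mashing cuts about 2 million chews per year, described as
# a 13 percent reduction, which implies the baseline computed below.
reduction_chews = 2_000_000
reduction_frac = 0.13

baseline = reduction_chews / reduction_frac  # chews per year before processing
print(f"implied baseline: ~{baseline / 1e6:.1f} million chews per year")
```

That works out to roughly 15.4 million chews per year—on the order of tens of thousands of chews per day, which is the scale of feeding effort the food-processing argument is about.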

So, since the neuronal count of a brain is directly related to the cognitive ability that brain is capable of, and since Herculano-Houzel and Kaas (2011) showed that the modern range of neurons was already found in heidelbergensis and neanderthalensis, those species therefore had similar cognitive potential to us. This would then mean that “Compared to their societies, our outstanding accomplishments as individuals, as groups, and as a species, in this scenario, would be witnesses of the beneficial effects of cultural accumulation and transmission over the ages” (Herculano-Houzel and Kaas, 2011).

The diets of Neanderthals and humans—similar, though they differed with the availability of foods—are nevertheless a large part of why both have such large brains with so many neurons. It must be said, though, that there is no progress in hominin brain evolution (contra the evolutionary progressionists), as brain size is predicated on available food and nutritional quality (Montgomery et al, 2010).

But there is a problem for Herculano-Houzel’s thesis that cognitive ability scales up with the absolute number of neurons in the cerebral cortex. Mortensen et al (2014) used the optical fractionator (not to be confused with the isotropic fractionator) and came to the conclusion that “the long-finned pilot whale neocortex has approximately 37.2 × 10^9 neurons, which is almost twice as many as humans, and 127 × 10^9 glial cells. Thus, the absolute number of neurons in the human neocortex is not correlated with the superior cognitive abilities of humans (at least compared to cetaceans) as has previously been hypothesized.” This throws a wrench in Herculano-Houzel’s thesis—or does it?

There are a couple of glaring problems here—most importantly, it is not clear how many sections of the cortex Mortensen et al (2014) actually sampled. They refer to the flawed stereological estimate of Eriksen and Pakkenberg (2007), which put the Minke whale at an estimated 13 billion neurons, while Walloe et al (2010) estimated that the harbor porpoise had 15 billion cortical neurons with an even smaller cortex. These three studies all come from the same research team using the same stereological methods, so Herculano-Houzel’s (2016: 104-106) comments apply:

However, both these studies suffered from the same unfortunately common problem in stereology: undersampling, in one case drawing estimates from only 12 sections out of over 3,000 sections of the Minke whale’s cerebral cortex, sampling a total of only around 200 cells from the entire cortex, when it is recommended that around 700-1000 cells be counted per individual brain structure. With such extreme undersampling, it is easy to make invalid extrapolations—like trying to predict the outcome of a national election by consulting just a small handful of people.

It is thus very likely, given the undersampling of these studies and the neuronal scaling rules that apply to cetartiodactyls, that even the cerebral cortex of the largest whales is a fraction of the average 16 billion neurons that we find in the human cerebral cortex.


It seems fitting that great apes, elephants, and probably cetaceans have similar numbers of neurons in the cerebral cortex, in the range of 3 to 9 billion: fewer than humans have, but more than all other mammals do.
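The undersampling problem described in the quoted passage is easy to demonstrate with a toy simulation. The section counts and per-section densities below are invented purely for illustration—nothing here is real whale data—but the 12-versus-many-sections contrast mirrors the stereology critique:

```python
import random
import statistics

random.seed(1)

# Toy "cortex": 3,000 sections whose per-section neuron counts vary widely.
# All numbers are invented solely to illustrate sampling error.
sections = [random.lognormvariate(10, 0.8) for _ in range(3000)]
true_total = sum(sections)

def extrapolate(n_sampled: int) -> float:
    """Stereology-style extrapolation: mean of a few sampled sections
    multiplied by the total number of sections."""
    sample = random.sample(sections, n_sampled)
    return statistics.mean(sample) * len(sections)

# Median relative error of the extrapolated total over many trials.
median_error = {}
for k in (12, 300):
    errs = [abs(extrapolate(k) - true_total) / true_total for _ in range(500)]
    median_error[k] = statistics.median(errs)
    print(f"{k:4d} sections sampled: median relative error {median_error[k]:.1%}")
```

With only 12 of 3,000 sections sampled, the extrapolated total routinely misses by a large margin, while a denser sample stays close—exactly the worry raised about the cetacean stereology studies.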

Kazu et al (2014) state that “If the neuronal scaling rules for artiodactyls extend to all cetartiodactyls, we predict that the large cerebral cortex of cetaceans will still have fewer neurons than the human cerebral cortex.” Artiodactyls are cousins of cetaceans—the order is called Cetartiodactyla since whales are thought to have evolved from artiodactyls. If they did, then the artiodactyl neuronal scaling rules would apply to them (just as humans evolved from other primates and the primate neuronal scaling rules apply to us). So the predicted “cerebral cortex of Phocoena phocoena, Tursiops truncatus, Grampus griseus, and Globicephala macrorhyncha, at 340, 815, 1,127, and 2,045 cm3, to be composed of 1.04, 1.75, 2.11, and 3.01 billion neurons, respectively” (Kazu et al, 2014). The predicted number of cortical neurons in the pilot whale is thus around 3 billion—nowhere near the staggering number that humans have (16 billion).
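The four predictions quoted from Kazu et al (2014) are consistent with a strongly sublinear power law. A least-squares fit in log-log space—a sketch over just those four published points, not the authors’ actual scaling equation—recovers an exponent well below 1:

```python
import math

# Predicted cortical volumes (cm^3) and neuron counts (billions) for four
# cetacean species, as quoted from Kazu et al. (2014) in the text above.
volumes = [340, 815, 1127, 2045]
neurons = [1.04, 1.75, 2.11, 3.01]

# Fit a power law N = a * V**b by least squares in log-log space.
xs = [math.log(v) for v in volumes]
ys = [math.log(n) for n in neurons]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = math.exp(my - b * mx)

print(f"exponent b = {b:.2f}")  # prints 0.59 -- markedly sublinear
```

A six-fold increase in cortical volume buys less than a three-fold increase in neurons on this fit—very unlike the roughly linear primate scaling, which is why even enormous cetacean cortices are predicted to hold comparatively few neurons.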

Humans have the most cortical neurons of any animal on the planet—and this, according to Herculano-Houzel and her colleagues, accounts for the human advantage. Studies purporting to show that certain species of cetaceans have similar—or more—cortical neurons than humans rest on methodological flaws. The neuronal scaling rules that Herculano-Houzel and colleagues have for cetaceans predict far, far fewer cortical neurons in those species. For this reason, studies showing similar—or more—cortical neurons in other species that do not use the isotropic fractionator must be viewed with extreme caution.

However, if, when Herculano-Houzel and colleagues do finally use the isotropic fractionator on pilot whales, their prediction does not come to pass and the count instead falls in line with Mortensen et al (2014), this does not, in my opinion, cast doubt on her thesis. One must remember that cetaceans have completely different body plans from humans—most glaringly, we have hands to manipulate the world with. And Fox, Muthukrishna, and Shultz (2017) show that whales and dolphins have human-like cultures and societies, using tools and passing that information down to future generations—just as humans do.

In any case, I believe the prediction from Kazu et al (2014) will be borne out: substantially fewer cortical neurons than in humans. There is no logical reason to accept the cortical neuron estimates from the aforementioned studies, since they undersampled the cortex. Herculano-Houzel’s thesis still stands—what sets humans apart from other animals is the number of neurons packed into the cerebral cortex. The human brain is not that special.

The human advantage, I would say, lies in having the largest number of neurons in the cerebral cortex than any other animal species has managed—and it starts by having a cortex that is built in the image of other primate cortices: remarkable in its number of neurons, but not an exception to the rules that govern how it is put together. Because it is a primate brain—and not because it is special—the human brain manages to gather a number of neurons in a still comparatively small cerebral cortex that no other mammal with a viable brain, that is, smaller than 10 kilograms, would be able to muster. (Herculano-Houzel, 2016: 105-106)

Why Did I Change My Views?

1050 words

I started this blog in June of 2015. I recall thinking of names for the blog, trying “” at first, but the domain was taken. I then decided on the name “”. Back then, of course, I was a hereditarian pushing the likes of Rushton, Kanazawa, Jensen, and others. To be honest, I could never see myself disbelieving the “fact” that certain races were more or less intelligent than others—the idea seemed preposterous to me then. IQ tests, I believed, were a completely scientific instrument which showed, however crudely, that certain races were more intelligent than others. I held these beliefs for around two years after the creation of this blog.

Back then, I used to go to Barnes & Noble, browse the biology section, choose a book, and drink coffee all day while reading. (Black coffee, of course.) I recall, back in April of 2017, seeing the book DNA Is Not Destiny: The Remarkable, Completely Misunderstood Relationship between You and Your Genes on the shelf in the biology section. The baby blue cover caught my eye—but I scoffed at the title. DNA most definitely was destiny, I thought; without DNA we could not be who we are. I ended up buying the book and reading it. It took me about a week to finish, and by the end, Heine had me questioning my beliefs.

In the book, Heine discusses IQ, heritability, genes, DNA testing to catch diseases, the MAOA gene, and so on. All in all, the book is against genetic essentialism which is rife in public—and even academic—thought.

After I read DNA Is Not Destiny, over the next few weeks at Barnes & Noble I kept seeing Ken Richardson’s Genes, Brains, and Human Potential: The Science and Ideology of Intelligence. I recall scoffing at that title even more than I had at Heine’s. I did not buy the book at first, but I kept seeing it every time I went. When I finally bought it, my worldview was transformed. Before, I thought of IQ tests as able—however crudely—to measure intelligence differences between individuals and groups. The number they spit out was one’s “intelligence quotient”, and there was no way to raise it—though of course there were many ways to decrease it.

But Richardson’s book showed me that there were many biases implicit in the study of “intelligence”, both conscious and unconscious. The book showed me the many false assumptions that IQ-ists make when constructing tests. Perhaps most importantly, it showed me that IQ test scores were due to one’s social class—and that social class encompasses many other variables that affect test performance, and so stating that IQ tests are instruments to identify one’s social class due to the construction of the test seemed apt—especially due to the content on the test along with the fact that the tests were created by members of a narrow upper-class. This, to me, ensured that the test designers would get the result they wanted.

Not only did this book change my views on IQ, I did a complete 180 on evolution, too (which Fodor and Piattelli-Palmarini then solidified). Richardson, in chapters 4 and 5, shows that genes don’t work the way most people think they do—they are only used by and for the physiological system to carry out different processes. I don’t know which part of the book—the part on IQ or the part on evolution—most radically changed my beliefs. But after reading Richardson, I discovered Susan Oyama, Denis Noble, Eva Jablonka and Marion Lamb, David Moore, David Shenk, Paul Griffiths, Karola Stotz, Jerry Fodor, and others who oppose the neo-Darwinian Modern Synthesis.

Richardson’s most recent book then led me to his other work—and that of other critics of IQ and the current neo-Darwinian Modern Synthesis—and from then on I was what most would term an “IQ-denier” (since I disbelieve the claim that IQ tests test intelligence) and an “evolution denier” (since I deny the claim that natural selection is a mechanism). In any case, the radical changes in what I would term my two major views were slow-burning, occurring over the course of a few months.

This can be evidenced by reading the archives of this blog. For example, check the archives from May 2017 and read my article Height and IQ Genes, then read the article from April 2017 titled Reading Wrongthought Books in Public: over that two-month period my views slowly began shifting toward “IQ-denialism” and the Extended Evolutionary Synthesis (EES). Then, in June of 2017, after defending Rushton’s r/K selection theory for years, I recanted those views too, due to Anderson’s (1991) rebuttal of Rushton’s theory. That three-month period from April to June was pivotal in shaping the views I hold today.

After reading those two books, my views about IQ shifted from that of one who believed that nothing could ever shake his belief in them to one of the most outspoken critics of IQ in the “HBD” community. But the views on evolution that I now hold may be more radical than my current views on IQ. This is because Darwin himself—and the theory he formulated—is the object of attack, not a test.

The views I used to hold were staunch; I really believed that I would never recant my views, because I was privy to “The Truth ™” and everyone else was just a useful idiot who did not believe in the reality of intelligence differences which IQ tests showed. Though, my curiosity got the best of me and I ended up buying two books that radically shifted my thoughts on IQ and along with that evolution itself.

So why did I change my views on IQ and evolution? I changed them due to the conceptual and methodological problems on both points that Richardson and Heine pointed out to me. These changes, which I underwent more than two years ago, were pretty shocking to me. As I realized my views were beginning to shift, I couldn’t believe it, since I recall telling myself “I’ll never change my views.” The inadequacy of the replies to the critics was yet another reason for the shift.

It’s funny how things work out.

Shockley and Cattell

2500 words

William Shockley and Raymond Cattell were two of the most prolific eugenicists of the 20th century. In their time, both men put forth the notion that breeding should be restricted based on the results of IQ testing. Both, however, were motivated not so much by science as by racial biases. Historian Constance Hilliard discusses Shockley in her book Straightening the Bell Curve: How Stereotypes About Black Masculinity Drive Research on Race and Intelligence (Hilliard, 2012: Chapter 3), while psychologist William H. Tucker wrote a book on Cattell and his eugenic views called The Cattell Controversy: Race, Science, and Ideology (Tucker, 2009). This article will discuss the views of both men.


When Shockley was 51 he was in a near-fatal car accident. He was thrown many feet away from the car that he, his wife and their son were in. Their son escaped the accident with minor injuries but Shockley had a crushed pelvis and was in a body cast for months in the hospital. Hilliard (2012: 20) writes:

Chapter 3 details Shockley’s transformation from physicist to modern-day eugenicist, preoccupied with race and the superiority of white genes. Some colleagues believed that the car accident that crushed Dr. Shockley’s pelvis and left him disabled might have triggered mental changes in him as well. Whatever the case, not long after returning home from the hospital, Shockley began directing his anger toward the reckless driver who maimed him into racial formulations. His ideas began to coalesce around the notion of an inverse correlation between blacks’ cognition and physical prowess. Later, in donating his sperm at the age of seventy to a sperm bank for geniuses, Shockley suggested to an interviewer for Playboy that women who would otherwise pay little attention to his lack of physical appeal would compete for his cognitively superior sperm. But the sperm banks’ owner apparently concealed from Shockley a painful truth. Women employing its services rejected the sperm of the short, balding Shockley in favor of that from younger, taller, more physically attractive men, whatever their IQ.

Shockley was a small man, standing 5 foot 6 inches and weighing 150 pounds. How ironic that his belief that women would want his “cognitively superior sperm” (whatever that means) was rebuffed by the fact that women didn’t want a short, balding man’s sperm and chose that of young, attractive men instead, irrespective of IQ. How funny these eugenicists are.

Shockley’s views, of course, were not just science-driven; he harbored racial biases against certain groups. He disowned his son for marrying a Costa Rican woman, stated that his children had “regressed to the mean”, and blamed this supposed genetic misfortune on his first wife, since she was not as academically inclined as he was. Hilliard (2012: 48-49) writes:

Shockley’s growing preoccupation with eugenics and selective breeding was not simply an intellectual one. He disowned his eldest son for his involvement with a Costa Rican woman since this relationship, according to Professor Shockley, threatened to contaminate the family’s white gene pool. He also described his children to a reporter “as a significant regression” even though one possessed a PhD from the University of Southern California and another held a degree from Harvard College. Shockley even went as far as to blame this “genetic misfortune” on his first wife, who according to the scientist, “had no as high an academic achievement standing as I had.”

It’s funny, because Shockley described himself as a “lady’s man”, yet women passed over his sperm. I wonder how he would have reacted to this news?

This is the mark of a scientist with merely intellectual curiosity about “cognitive differences” between racial groups, of course. Racial—and other—biases have driven many research programmes over the decades, and it seems that, like most “intelligence researchers”, Shockley was subject to such biases as well.

One of Shockley’s former colleagues attributed his shift in research focus to the accident, stating that the “intense and (to my mind) ill-conceived concentration on socio-genetic matters occurred after a head-on automobile collision in which he was almost killed” (quoted in Hilliard, 2012: 48). Though we can’t know for certain the reason for Shockley’s change in research focus (from legitimate science to pseudoscience), racial biases were quite obviously a driver in the shift.

Hilliard (2012: 47) claims that “had it not been for the near fatal car accident [that occurred to Shockley] … the twentieth century’s preoccupation with pairing cognition and physical attributes might have faded from view. It may not have been so much the car crash as the damage it did to Shockley’s sense of self that changed the course of race science.” Evidence for this claim comes from the fact that Jensen was drawn to Shockley’s lectures. Hilliard (2012: 51-52) writes:

Jensen, who had described himself as a “frustrated symphony conductor,” may have had his own reasons for reverencing Shockley’s every word. The younger psychologist had been forced to abandon a career in music because his own considerable talents in that area nevertheless lacked “soul,” or the emotional intensity needed to succeed in so competitive a profession. He decided on psychology as a second choice, carrying along with him a grudge against those American subcultures perceived as being “more expressive” than the white culture from which he sprang.

So, it seems that had Shockley passed away, one of the “IQ giants” would not have become an IQ-ist and Jensenism would not exist. Then, maybe, we would not have this IQ pseudoscience that America is “obsessed with” (Castles, 2013).


Raymond B. Cattell was one of the most influential psychologists of the 20th century. Tucker (2009) shows how Cattell’s racial biases drove his research programs and how Cattell outlined his radical eugenic thought in numerous papers on personality and social psychology. Tucker describes Cattell’s near-acceptance of an award from the APA: just days before Cattell was to fly in from Hawaii to accept it, the APA got word of the racist views that drove his research. Cattell even created a religion called “Beyondism”, a “neo-fascist contrivance” (Mehler, 1997) of which eugenics was a part, though only on a voluntary basis.

Cattell titled a book on the matter A New Morality from Science: Beyondism (Cattell, 1972)—as if there could be a science of morality, though there cannot be one (contra Sam Harris). In the book, Cattell thought through how to create a system in which ecologically sustainable eugenic programs could be established. He then published Beyondism: Religion from Science in 1987 (Cattell, 1987). Cattell’s eugenic beliefs were so strong that he actually created a “religion” based on them. It is indeed ironic, since many HBDers are religious in their views.

Tucker (2009: 14) was one of two psychologists to explain to the APA that “this was not a case of a scientist who, parenthetically, happened to have objectionable political opinions; Cattell’s political ideology and his science were inseparable from each other.” So the APA postponed the award ceremony. Tucker (2009: 15) demonstrated “that [Cattell’s] impressive body of accomplishments in the former domain [his “scientific” accomplishments] was always intended to serve the goal of the latter [his eugenic/political beliefs].”

Cattell’s religion was based on evolution. A believer in group selection, he claimed that racial groups were selected by “natural selection“, thus marrying his religion to a form of group selection. Where Beyondism strayed from other religious movements is the main point of Cattell’s new religion: compassion was seen by Cattell as “evil.” Tucker (2009: 136) writes:

Cattell finally published A New Morality From Science: Beyondism, a 480-page prolix tome describing his religious thought in detail; fifteen years later Beyondism: Religion from Science provided some elaboration of Beyondist principles. Together these two books constituted the most comprehensive statement of his sociomoral beliefs and their relation to social science. Despite the adjective in the title of the earlier volume, Beyondism showed no significant discontinuity from the “evolutionary ethics” of the 1930s. If anything, the intervening decades had made all the traditional approaches to morality more contemptible than ever to Cattell. “The notion of ‘human rights'” was nothing more than “an instance of rigid, childish, subjective thinking,” and other humanistic principles “such … as ‘social justice and equality,’ ‘basic freedom’ and ‘human dignity,'” he dismissed as “whore phrases.” As always, conventional religion was the worst offender of all in his eyes, one of its “chief rasions detre [sic]” being the “succorance of failure of error” by prolonging the duration of genetic failures—both individuals and groups—which, “from the perspective Beyondism,” Cattell called “positively evil.” In contrast, in a religion based on evolution as the central purpose of humankind, “religious and scientific truth [would] be ultimately reducible to one truth … [obtained] by scientific discovery … therefore … developing morality out of science.” Embodying this unified truth, Beyondism would be “the finest ways to spend our lives.”

So intergroup competition, to Cattell, was the mechanism of “evolutionary progress” (whatever that means; see my most recent essay on the matter). The within-group eugenic policies that Beyondism would impose on racial groups were meant not only to increase a race’s quality of life, but to increase the chance of that race being judged “successful” in Cattell’s eyes.

Another main tenet of Beyondism is that one culture should not borrow from another: “cultural borrowing” separated “rewards” from their “genetic origins”, which then “confused the process of natural selection between groups.” So Beyondism required the steady elimination of “failing” races, which was essential if the earth was “not to be choked with … more primitive forerunners” (Cattell, quoted in Tucker, 2009: 146). Cattell did not use the term “genocide”, which he saved for the literal killing of a group’s members; he coined the neologism “genthanasia” for the process of “phasing out” a “moribund culture … by educational and birth measures, without a single member dying before his time” (Cattell, quoted in Tucker, 2009: 146). Quite obviously, Beyondism could not be practiced by one individual; it needed groups—societies—to adhere to its tenets to be truly effective. To Cattell, the main draw of Beyondism was that it made intergroup competition a necessary moral postulate while he used psychological data to parse out “winners” and “losers.”

Cattell was then put on the Editorial Advisory Board of Mankind Quarterly, a journal populated by individuals who opposed civil rights and supported German National Socialism. Cattell thus finally had a “journal” where he could publish his thoughts on what should be done in regard to his Beyondist religion. Tucker (2009: 153) writes:

… in an article titled “Virtue in ‘Racism’?” he offered an analysis similar to Pearson’s, arguing that racism was an innate “evolutionary force—a tendency to like the like and distrust the different” that in most cases had to be respected as “a virtuous gift”; the mere fact that society “has had to battle racism” was for Cattell “sufficient evidence that an innate drive exists.” And rather than regarding such natural inclination as a “perversion,” the appropriate response to racism, in his opinion, was “to shape society to adjust to it,” no doubt keeping groups separate from each other.

One of Cattell’s colleagues—Wilmot Robertson (a man who criticized Hitler for failing)—wrote a book in 1992 titled The Ethnostate: An Unblinkered Perspective for an Advanced Statecraft, in which he detailed a Cattellian plan for balkanization (the division of one large region into many smaller, sometimes hostile, ones). The Ethnostate reads like a bundle of Cattell’s ideas packaged into a “plan” for dividing America into ethnostates. Recall that Cattell eschewed “cultural borrowing”; so did Robertson. Tucker (2009: 166) writes:

Most important of all, in the competition between the different ethnostates each group was to rely solely “upon its own capabilities and resources,” prohibited from “borrowing” from more complex cultures advancements that “it could not create under its own power” or otherwise benefitting from outside assistance.


A critique of Cattell’s ethical system based in part on his involvement with others espousing odious opinions naturally runs the risk of charging guilt by association. But the argument advanced here is more substantive. It is not merely that he has cited a long list of Far Right authors and activists as significant influences on his own work, including arguably the three most important English-speaking Nazi theorists of the last thirty years—Pearson, Oliver, and Robertson. It is that, in addition to citing their writing as support for his own ideology, Cattell has acknowledged their ideas as “integrable”—that is, compatible—with his thought; expressed his gratitude for the influence these ideas have had on the evolution of Beyondism; graced the pages of their journals with his own contributions, thus lending his considerable prestige to publications dedicated to keeping blacks in second-class status; registered no objection when schemes of racial balkanization were predicated expressly on his writing—and indeed edited a publication that praised such a scheme for its intellectual indebtedness to his thought and called for its implementation; and provided a friendly interview to a periodical [American Renaissance] directly advocating that constitutionally protected rights be withheld from blacks. This is not guilt by association but rather guilt by collaboration: a core set of beliefs and a common vision of an ethnically cleansed future, to which he has lent his academic prominence, consciously and deliberately, in service of their intolerable goals. (Tucker, 2009: 171)


The types of views these two men held quite obviously drove their “scientific aspirations”, and their racial biases permeated their scientific thought. Shockley’s sudden shift in thinking after his car accident is quite possibly part of how and why Jensen published his seminal 1969 article, which reopened the race/”intelligence” debate. Shockley’s racial biases extended into his family life when his son married a Costa Rican woman; that, along with his claim that his children “regressed to the mean” due to his first wife’s lack of educational attainment, shows the kind of great guy that Shockley was. It also shows how his biases drove his thought.

The same goes for Cattell. His religion, Beyondism, grew out of his extreme racial biases; his collaboration with National Socialists and opponents of desegregation further shows how his political and racial beliefs drove his research. Beyondist propaganda stated that evolutionary “progress” occurred through competition between groups. Robertson then took Cattell’s ideas nearly wholesale and wrote a book on how America should be balkanized into ethnostates with no cultural borrowing. Cattell, further, stated that the most appropriate response to racism was to shape society to adjust to it, rather than attempt to eliminate it entirely.

The stories of these two men’s beliefs are why, in my opinion, we should know the past and motivations of individuals who push anything, no matter the field, because political biases everywhere cloud people’s judgment. Granted, holding such views was common at the time (as with Henry Goddard and his Kallikak family; see Zenderland, 1998).

High IQ Societies

1500 words

The most well-known high IQ society (HIS hereafter) is Mensa. But did you know that there are many more—and much more exclusive—high IQ societies? In his book The Genius in All of Us: Unlocking Your Brain’s Potential, Adam (2018) chronicles his quest to raise his IQ score using nootropics. (Nootropics are supposed brain-enhancers, such as creatine, that purportedly improve cognitive functioning.) Adam discusses his experience taking the Mensa test (“mensa” “is Mexican slang for stupid woman“; Adam, 2018) and talking to others who took it with him the same day. One high school student wanted to put Mensa membership on his CV; another test-taker said she had accepted a challenge from a family member: since other relatives were in Mensa, she wanted to show that she had what it took.

Adam states that they were handed two sheets of paper with 30 questions, to be answered in three or four minutes, with the questions increasing in difficulty. The first paper, he says, had a Raven-like aspect to it: rotating shapes, with the test-taker choosing the shape that comes next in the sequence. But since he ran out of time, he says that he answered “A” to the remaining questions when the instructor wasn’t looking, since he “was going to use cognitive enhancement to cheat later anyway” (Adam, 2018: 23). (I will show the results of Adam’s attempted “cognitive enhancement to cheat” on the Mensa exam at the end of this article.) The second paper was verbal: some words had to be defined, while others had to be placed into context or slotted into the right place in a sentence. Adam (2018: 23) gives an example of some of the verbal questions:

Is ‘separate’ the equivalent of ‘unconnected’ or ‘unrelated’? Or ‘evade’ — is it the same as ‘evert’, ‘elude’ or ‘escape’?

[Compare to other verbal questions on standard IQ tests:

‘What is the boiling point of water?’ ‘Who wrote Hamlet?’ ‘In what continent is Egypt?’ (Richardson, 2002: 289)


‘When anyone has offended you and asks you to excuse him—what ought you do?’ ‘What is the difference between esteem and affection?’ [this is from the Binet Scales, but “It is interesting to note that similar items are still found on most modern intelligence tests” (Castles, 2013).]]

It took a few weeks for Adam’s results to be delivered to his home. His wife opened the letter and informed him that he had gotten into Mensa. (He got in despite answering “A” to everything after the time limit was up.) This, though, threw a wrench into his plan, which was to use cognitive enhancers (nootropics) to boost his cognition, score higher, and get into Mensa that way. However, there are much more exclusive IQ clubs than Mensa. Adam (2018: 30) writes:

Under half of the Mensa membership, for example, would get into the Top One Percent Society (TOPS). And fewer than one in ten of those TOPS members would make the grade at the One in a Thousand Society. Above that the names get cryptic and the spelling freestyle.

There’s the Epida society, the Milenija, the Sthiq Society, and Ludomind. The Universal Genius Society takes just one person in 2,330, and the Ergo Society just one in 31,500. Members of the Mega Society, naturally, are one in a million. The Giga Society? One in a billion, which means, statistically, just seven people on the planet are qualified to join. Let’s hope they know about it. If you are friends with one of them, do tell them.

At the top of the tree is the self-proclaimed Grail Society, which sets its membership criteria so high — one in 76 billion — that it currently has zero members. It’s run by Paul Cooijmans, a guitarist from the Netherlands. About 2,000 people have tried and failed to join, he says. ‘Be assured that no one has come close.’

Wow, what exclusive clubs! Mensans are also more likely to have “psychological and physiological overexcitabilities” (Karpinski et al, 2018) such as ADHD, autism, and other physiologic diseases. How psycho and socially awkward a few members of Mensa are is evidenced in this tweet thread.


How spooooky. Surely the high IQ Mensans have un-thought-of ways of killing that us normies could never fathom. And surely, with their high IQs, they can outsmart the ones who would attempt to catch them for murder.

A woman named Jamie Loftus got into Mensa, and she says that you get a discount on Hertz car rentals, a link to the Geico insurance website, access to the Mensa dating site “Mensa Match” (there is also a separate “IQ” dating site), an email address, a cardboard membership card, and access to Mensa events in your area. Oh, and of course, you have to pay to take the test and pay yearly dues to stay in. (Also read Loftus’ other articles on her Mensa experience: one in which she describes the death threats she got, and another in which she describes how Mensans would like her to stop writing bad things about them. Seems like Mensans are in their “feels” about being attacked for their little—useless—club.)

One of the founders of Mensa—Lancelot Ware—stated that he “get[s] disappointed that so many members spend so much time solving puzzles” (quoted in Tammet, 2009: 40). If Mensa were anything but members who “spend so much time solving puzzles“, then I think Ware would have said as much. The other founder—Roland Berrill—“had intended Mensa as “an aristocracy of the intellect”, and was unhappy that a majority of Mensans came from humble homes” (the Wikipedia article on Mensa International cites Serebriakoff, 1986, as the reference for the quote).

So, when it comes to HISs, what do they bring to the world? Or are they just dues-paid clubs in which the people at the top collect money from members stroking their egos, saying “Yea, I scored high on a test and am in a club!”?
The supervisor of the Japanese Intelligence Network (JIN) writes (his emphasis):

Currently, the ESOTERIQ society has seven members and the EVANGELIQ has one member.

I can perfectly guarantee that the all members exactly certainly undoubtedly absolutely officially keep authentic the highest IQ score performances.

Especially, the EVANGELIQ is the most exclusive high IQ society which has at least one member.

Do you think the one member of EVANGELIQ talks to himself a lot? From the results of Karpinski et al (2018), I would hazard the guess that, yes, he does. Here is a list of 84 HISs, and there is an even more exclusive club than the Grail Society: the Terra Society (you need to score 205 on the test where the SD is 15 to join).
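These rarity figures follow directly from the normal distribution that IQ scores are defined to have (mean 100, SD 15). A minimal sketch (standard-library Python only; the cutoffs are taken from the societies’ own claims, and real test norms may differ) shows how a cutoff score converts into a 1-in-N rarity:

```python
import math

def rarity(iq, mean=100.0, sd=15.0):
    """Return N such that roughly 1 in N people score at or above `iq`,
    assuming scores are normally distributed with the given mean and SD."""
    z = (iq - mean) / sd
    # Upper-tail probability of the standard normal: P(Z > z) = erfc(z/sqrt(2)) / 2
    p = math.erfc(z / math.sqrt(2)) / 2
    return 1 / p

# Mensa's cutoff is roughly the top 2% (about IQ 130 on an SD-15 scale)
print(round(rarity(130)))    # ~1 in 44
# The Terra Society's stated cutoff of 205 is 7 SDs above the mean
print(f"{rarity(205):.2e}")
```

Note that a cutoff of 205 on an SD-15 scale (7 SDs above the mean) works out to a rarity in the hundreds of billions, far beyond any norming sample that has ever existed, which is another reason to doubt that such cutoffs measure anything at all.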

So is there a use for high IQ societies? I struggle to think of one. They seem to function as money-sinks—suckering people into paying dues just because they scored high on a test (one with no validity). The fact that one of the founders of Mensa was upset that members spend so much time doing puzzles is very telling. What else do they do with their ‘talent’ other than solve puzzles all day? What has the Mensa group—or any of the other possibly hundreds of HISs (84 are linked above)—done for the world?

Adam—although he guessed at the end of the first Mensa exam (the Raven-like one)—got into Mensa on the strength of the second test, the verbal one. He eventually retook the exam after taking his nootropic cocktails, and he writes (2018: 207):

The second envelope from Mensa was waiting for me when I returned from work, poking out beneath a gas bill. I opened the gas bill first. Its numbers were higher than I expected. I hoped the same would be true of the letter that announced my new IQ.

It was. My cognitively enhanced score on the language test had crept up to 156, from 154 before. And on the Culture Fair Test [the Raven-like test], the tough one with the symbols, it had soared to 137, from 128. That put me on the ninety-ninth percentile on both.

My IQ as measured by the symbols test — the one I had tried to improve on using the brain stimulation — was now 135, up from 125, and well above the required threshold for Mensa Membership.

Adam used modafinil (a drug used to treat excessive sleepiness caused by narcolepsy, obstructive sleep apnea, and shift work sleep disorder) and electrical brain stimulation. So Adam increased his scores, but he—of course—has no idea what caused the increase: the nootropic, the electrical stimulation, practice effects, already having an idea of what was on the test, etc.

In any case, that’s ancillary to the main point of this article: What has Mensa—or any other HIS—done for the world? Out of the hundreds of HISs in the world, have they done anything of note, or are they just clubs of people who score highly on a test and then pay money to stay members? There is no value to these kinds of ‘societies’; they’re just a circlejerk for good test-takers. Mensans also have a higher chance of having mental disorders (Karpinski et al, 2018), as the articles above by Jamie Loftus—in which Mensans threaten her life and invoke their “criminal element”—help illustrate.

So, until I’m shown otherwise, Mensa and other HISs are just a circlejerk where people have to pay to be in the club—and that’s all it is.

The “Interactionism Fallacy”

2350 words

A fallacy is an error in reasoning that makes an argument invalid. The “interactionism fallacy”—a term coined by Gottfredson (2009)—is the supposed fallacy of arguing that, since genes and environment interact, heritability estimates are not useful, especially for humans (they are useful for nonhuman animals, whose environments can be fully controlled; see Schonemann, 1997; Moore and Shenk, 2016). There are many reasons why this ‘fallacy’ is anything but a fallacy; it is a simple truism: genes and environment (along with other developmental products) interact to ‘construct’ the organism (what Oyama, 2000, terms ‘constructive interactionism’, “whereby each combination of genes and environmental influences simultaneously interacts to produce a unique result“). The causal parity thesis (CPT) is the thesis that genes/DNA play an important role in development, but so do other variables, so there is no reason to privilege genes/DNA above other developmental variables (see Noble, 2012 for a similar approach). Genes are not special developmental resources, nor are they more important than other developmental resources. So the thesis is that genes and other developmental resources are developmentally ‘on par’.

Genes need the environment. Without the environment, genes would not be expressed. Behavior geneticists claim to be able to partition genes from environment—nature from nurture—on the basis of heritability estimates, mostly gleaned from twin and adoption studies. However, the method is flawed: since genes interact with the environment and with other genes, how would it be possible to neatly partition the effects of genes from the effects of the environment? Behavior geneticists claim that we can. They—and others—invoke the “interactionism fallacy”, the supposed fallacy of holding that, since genes interact with the environment, heritability estimates are useless. This “fallacy”, though, confuses the issue.

Behavior geneticists claim to show how genes and the environment affect the ontogeny of traits in humans with twin and adoption studies (though these methods are highly flawed). The purpose of this “fallacy” is to disregard what developmental systems theorists claim about the interaction of nature and nurture—genes and environment.

Gottfredson (2009) coins the “interactionism fallacy”, which is “an irrelevant truth [which is] that an organism’s development requires genes and environment to act in concert” and the “two forces are … constantly interacting” whereas “Development is their mutual product.” Gottfredson also states that “heritability … refers to the percentage of variation in … the phenotype, which has been traced to genetic variation within a particular population.” (She also makes the false claim that “One’s genome is fixed at birth“; this is false, see epigenetics/methylation studies.) Heritability estimates, according to Philip Kitcher, are “‘irrelevant’” and the fact that behavior geneticists persist in using them is “‘an unfortunate tic from which they cannot free themselves’ (Kitcher, 2001: 413)” (quoted in Griffiths, 2002).

Gottfredson is engaging in developmental denialism. Developmental denialismoccurs when heritability is treated as a causal mechanism governing the developmental reoccurrence of traits across generations in individuals.” Gottfredson, with her “interactionism fallacy” is denying organismal development by attempting to partition genes from environment. As Rose (2006) notes, “Heritability estimates are attempts to impose a simplistic and reified dichotomy (nature/nurture) on non-dichotomous processes.” The nature vs nurture argument is over and neither has won—contra Plomin’s take—since they interact.

Gottfredson seems confused, since this point was debated by Plomin and Oyama back in the 80s (Plomin’s review of Oyama’s book The Ontogeny of Information; see Oyama, 1987, 1988; Plomin, 1988a, b). In any case, it is true that development requires genes to interact. But Gottfredson is talking about the concept of heritability—the attempt to partition genes and environment through twin, adoption and family studies (which have a whole slew of problems). For example, Moore and Shenk (2016: 6) write:

Heritability statistics do remain useful in some limited circumstances, including selective breeding programs in which developmental environments can be strictly controlled. But in environments that are not controlled, these statistics do not tell us much.

Susan Oyama writes in The Ontogeny of Information (2000, pg 67):

Heritability coefficients, in any case, because they refer not only to variation in genotype but to everything that varied (was passed on) with it, only beg the question of what is passed on in evolution. All too often heritability estimates obtained in one setting are used to infer something about an evolutionary process that occurred under conditions, and with respect to a gene pool, about which little is known. Nor do such estimates tell us anything about development.

Characters are produced by the interaction of genetic and nongenetic factors. The biological flaw, as Moore and Shenk note, throws a wrench into the claims of Gottfredson and other behavior geneticists: phenotypes are ALWAYS due to genetic and nongenetic factors interacting. So the two flaws of heritability—the environmental flaw and the biological flaw (Moore and Shenk, 2016)—come together, “interacting” to refute the simplistic claim that genes and environment, nature and nurture, can be separated.
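A toy norm-of-reaction example (with entirely made-up numbers) illustrates the problem: when genotype and environment interact fully, an ANOVA-style partition assigns zero variance to “genes” and zero to “environment” even though the phenotypes clearly vary:

```python
# Hypothetical norm-of-reaction table: two genotypes raised in two
# environments, with a full crossover interaction (G1 does best in E2,
# G2 does best in E1).
phenotype = {
    ("G1", "E1"): 10, ("G1", "E2"): 20,
    ("G2", "E1"): 20, ("G2", "E2"): 10,
}

values = list(phenotype.values())
grand_mean = sum(values) / len(values)  # 15.0

def mean_for(label):
    """Mean phenotype across all cells sharing a genotype or environment label."""
    vals = [v for (g, e), v in phenotype.items() if label in (g, e)]
    return sum(vals) / len(vals)

# "Main effect" variance components, as a simple ANOVA-style partition computes them
var_g = sum((mean_for(g) - grand_mean) ** 2 for g in ("G1", "G2")) / 2
var_e = sum((mean_for(e) - grand_mean) ** 2 for e in ("E1", "E2")) / 2
var_total = sum((v - grand_mean) ** 2 for v in values) / len(values)

print(var_g, var_e, var_total)  # 0.0 0.0 25.0 — all variance sits in the interaction
```

Here a heritability ratio Var(G)/Var(P) would come out to 0/25, even though genotype obviously matters to every outcome in the table; the partition simply has no sensible interpretation once the interaction dominates.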

For instance, as Moore (2016) writes, though “twin study methods are among the most powerful tools available to quantitative behavioral geneticists (i.e., the researchers who took up Galton’s goal of disentangling nature and nurture), they are not satisfactory tools for studying phenotype development because they do not actually explore biological processes.” (See also Richardson, 2012.) This is because twin studies ignore biological/developmental processes that lead to phenotypes.

Gamma and Rosenstock (2017) write that the concept of heritability that behavioral geneticists use is “a generally useless quantity” while “the behavioral genetic dichotomy of genes vs environment is fundamentally misguided.” This brings us back to the CPT: there is causal parity among all the processes/interactants that form the organism and its traits, and thus the concept of heritability that behavioral geneticists employ is a useless measure. Oyama, Griffiths, and Gray (2001: 3) write:

These often overlooked similarities form part of the evidence for DST’s claim of causal parity between genes and other factors of development. The “parity thesis” (Griffiths and Knight 1998) does not imply that there is no difference between the particulars of the causal roles of genes and factors such as endosymbionts or imprinting events. It does assert that such differences do not justify building theories of development and evolution around a distinction between what genes do and what every other causal factor does.

Behavior geneticists’ endeavor, though, is futile. Aaron Panofsky (2016: 167) writes that “Heritability estimates do not help identify particular genes or ascertain their functions in development or physiology, and thus, by this way of thinking, they yield no causal information.” (Also see Panofsky, 2014; Misbehaving Science: Controversy and the Development of Behavior Genetics.) So, the behavioral genetic method of partitioning genes and environment does not—and can not—show causation for trait ontogeny.

Now, while people like Gottfredson and others may deny it, they are genetic determinists. Genetic determinism, as defined by Griffiths (2002) is “the idea that many significant human characteristics are rendered inevitable by the presence of certain genes.” Using this definition, many behavior geneticists and their sympathizers have argued that certain traits are “inevitable” due to the presence of certain genes. Genetic determinism is literally the idea that genes “determine” aspects of characters and traits, though it has been known for decades that it is false.

Now we can take a look at Brian Boutwell’s article Not Everything Is An Interaction. Boutwell writes:

Albert Einstein was a brilliant man. Whether his famous equation of E=mc2 means much to you or not, I think we can all concur on the intellectual prowess—and stunning hair—of Einstein. But where did his brilliance come from? Environment? Perhaps his parents fed him lots of fish (it’s supposed to be brain food, after all). Genetics? Surely Albert hit some sort of genetic lottery—oh that we should all be so lucky. Or does the answer reside in some combination of the two? How very enlightened: both genes and environment interact and intertwine to yield everything from the genius of Einstein to the comedic talent of Lewis Black. Surely, you cannot tease their impact apart; DNA and experience are hopelessly interlocked. Except, they’re not. Believing that they are is wrong; it’s a misleading mental shortcut that has largely sown confusion in the public about human development, and thus it needs to be retired.


Most traits are the product of genetic and environmental influence, but the fact that both genes and environment matter does not mean that they interact with one another. Don’t be lured by the appeal of “interactions.” Important as they might be from time to time, and from trait to trait, not everything is an interaction. In fact, many things likely are not.

I don’t even know where to begin here. Boutwell, like Gottfredson, is confused. The only thing that needs to be retired because it “has largely sown confusion in the public about human development” is, ironically, the concept of heritability (Moore and Shenk, 2016)! I have no idea why Boutwell claimed that it’s false that “DNA and experience [environment] are hopelessly interlocked.” This is because, as Schneider (2007) notes, “the very concept of a gene requires an environment.” Since the concept of the gene requires the environment, how can we disentangle them into neat percentages like behavior geneticists claim to do? That’s right: we can’t. Do be lured by the appeal of interactions; all biological and nonbiological stuff constantly interacts with one another.

Boutwell’s claims are nonsense. It is worth quoting Richard Lewontin’s foreword to the 2000 second edition of Susan Oyama’s The Ontogeny of Information (emphasis Lewontin’s):

Nor can we partition variation quantitatively, ascribing some fraction of variation to genetic differences and the remainder to environmental variation. Every organism is the unique consequence of the reading of its DNA in some temporal sequence of environments and subject to random cellular events that arise because of the very small number of molecules in each cell. While we may calculate statistically an average difference between carriers of one genotype and another, such average differences are abstract constructs and must not be reified with separable concrete effects of genes in isolation from the environment in which the genes are read. In the first edition of The Ontogeny of Information Oyama characterized her construal of the causal relation between genes and environment as interactionist. That is, each unique combination of genes and environment produces a unique and a priori unpredictable outcome of development. The usual interactionist view is that there are separable genetic and environmental causes, but the effects of these causes acting in combination are unique to the particular combination. But this claim of ontogenetically independent status of the causes as causes, aside from their interaction in the effects produced, contradicts Oyama’s central analysis of the ontogeny of information. There are no “gene actions” outside environments, and no “environmental actions” can occur in the absence of genes. The very status of environment as a contributing cause to the nature of an organism depends on the existence of a developing organism. Without organisms there may be a physical world, but there are no environments. In like manner no organisms exist in the abstract without environments, although there may be naked DNA molecules lying in the dust. Organisms are the nexus of external circumstances and DNA molecules that make these physical circumstances into causes of development in the first place.
They become causes only at their nexus, and they cannot exist as causes except in their simultaneous action. That is the essence of Oyama’s claim that information comes into existence only in the process of ontogeny. (Oyama, 2000: 16)

There is an “interactionist consensus” (see Oyama, Griffiths, and Gray, 2001; What is Developmental Systems Theory? pg 1-13): the organism and the suite of traits it has is due to the interaction of genetic/environmental/epigenetic etc. resources at every stage of development. Therefore, for organismal development to be successful, it always requires the interaction of genes, environment, epigenetic processes, and interactions between everything that is used to ‘construct’ the organism and the traits it has. Thus “it makes no sense to ask if a particular trait is genetic or environmental in origin. Understanding how a trait develops is not a matter of finding out whether a particular gene or a particular environment causes the trait; rather, it is a matter of understanding how the various resources available in the production of the trait interact over time” (Kaplan, 2006).

Lastly, I will briefly comment on Sesardic’s (2005: chapter 2) critique of developmental systems theorists and their rejection of heritability and the concept of interactionism. Sesardic argues in that chapter that interaction between genes and environment (nature and nurture) does not undermine heritability estimates (the nature/nurture partition). Philosopher of science Helen Longino argues in her book Studying Human Behavior (2013):

By framing the debate in terms of nature versus nurture and as though one of these must be correct, Sesardic is committed to both downplaying the possible contributions of environmentally oriented research and to relying on a highly dubious (at any rate, nonmethodological) empirical claim.

In sum, the “interactionist fallacy” (coined by Gottfredson) is not a ‘fallacy’ (an error in reasoning) at all. For, as Oyama writes in Evolution’s Eye: A Systems View of the Biology-Culture Divide, “A not uncommon reaction to DST is, ‘‘That’s completely crazy, and besides, I already knew it” (pg 195). This is exactly what Gottfredson (2009) states: that she “already knew” there is an interaction between nature and nurture. But she goes on to deny the arguments from Oyama, Griffiths, Stotz, Moore, and others on the uselessness of heritability estimates, along with the claim that nature and nurture cannot be neatly partitioned into percentages since they are constantly interacting. Causal parity between genes and other developmental resources, too, upends the claim that heritability estimates for any trait make sense (not least given how heritability estimates are gleaned for humans: mostly from twin, family, and adoption studies). Developmental denialism, which Gottfredson and others often engage in, runs rampant in the “behavioral genetic” sphere; Oyama, Griffiths, Stotz, and others show why we should not deny development and should discard these estimates for human traits.

Heritability estimates imply that it is “nature vs nurture” when it is really “nature and nurture”, constantly interacting. Because of this interaction of numerous developmental resources, we should discard these estimates; it does not make sense to partition an interacting, self-organizing developmental system. The claim from behavior geneticists that genes and environment can be separated is clearly false.

Five Years Away Is Always Five Years Away

1300 words

Five years away is always five years away. When one makes such a claim, they can always fall back on the “just wait five more years!” canard. Charles Murray is one who makes such claims. In an interview with the editor of Skeptic Magazine, Murray stated to Frank Miele:

I have confidence that in five years from now, and thereafter, this book will be seen as a major accomplishment.

This interview was in 1996 (after the release of the softcover edition of The Bell Curve), so “five years” would be 2001. But “predictions” like this from HBDers—that the big vindication of their ideology is only X years away—happen a lot. I’ve seen many HBDers claim that the evidence for their position will come out in just 5 to 10 years. Such claims seem strangely religious to me, and there is a reason for that. (See Conley and Domingue, 2016, for a molecular genetic refutation of The Bell Curve. Murray’s prediction failed; indeed, 22 years after The Bell Curve’s publication, the claims of Murray and Herrnstein were refuted.)

Numerous people throughout history have made predictions regarding the date of Christ’s return, some using calculations derived from the Bible. The Wikipedia page for predictions and claims for the Second Coming of Christ lists many (obviously failed) predictions of His return.

Take John Wesley’s claim that Revelation 12:14 referred to the day that Christ should come. Or Charles Taze Russell’s (the first president of the Watch Tower Society of Jehovah’s Witnesses) claim that Jesus had returned in 1874 and was ruling invisibly from heaven.

Russell’s beliefs began with Adventist teachings. While Russell at first did not take to the claim that Christ’s return could be predicted, that changed when he met the Adventist author Nelson Barbour. The Adventists taught that the End Times began in 1799 and that Christ returned invisibly in 1874, with a physical return to follow in 1878. (When this did not come to pass, many followers left Barbour, and Russell maintained that Barbour had not gotten the date wrong, only the nature of the event.) All Christians who died before 1874 would be resurrected, and Armageddon would begin in 1914. Since WWI began in 1914, Russell took that as evidence that his prediction was coming to pass. So Russell sold his clothing stores, worth millions of dollars today, and began writing and preaching about Christ’s imminent return. It doesn’t need to be said, but the predictions obviously failed.

So the date of 1914 for Armageddon (when Christ was supposed to return) was arrived at by Russell from studying the Bible and the Great Pyramid:

A key component to the calculation was derived from the book of Daniel, Chapter 4. The book refers to “seven times“. He interpreted each “time” as equal to 360 days, giving a total of 2,520 days. He further interpreted this as representing exactly 2,520 years, measured from the starting date of 607 BCE. This resulted in the year 1914-OCT being the target date for the Millennium.
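The arithmetic of Russell’s chronology is easy to check in a couple of lines (using astronomical year numbering, in which 607 BCE is the year −606 because there is no year zero):

```python
# Russell's "seven times": 7 "times" of 360 prophetic days each,
# with each day reinterpreted as a year, counted from 607 BCE.
days = 7 * 360     # 2520 "days", read as 2520 years
start = -606       # 607 BCE in astronomical year numbering (no year zero)
target = start + days
print(days, target)  # 2520 1914
```

The numbers add up, of course; the problem was never the arithmetic but the interpretive leaps (days as years, the 607 BCE starting date) feeding into it.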

Here is the prediction in Russell’s words: “…we consider it an established truth that the final end of the kingdoms of this world, and the full establishment of the Kingdom of God, will be accomplished by the end of A.D. 1914” (1889). When 1914 came and went (apart from the outbreak of WWI, which he took to be a sign of the End Times), Russell changed his view.

Now, we can liken the Russell situation to Murray. Murray claimed that five years after his book’s publication, the “book would be seen as a major accomplishment.” Murray also made a similar claim back in 2016. Someone wrote to evolutionary biologist Joseph Graves about a talk Murray gave; Murray was offered an opportunity to debate Graves about his claims. Graves stated (my emphasis):

After his talk I offered him an opportunity to debate me on his claims at/in any venue of his choosing. He refused again, stating he would agree after another five years. The five years are in the hope of the appearance of better genomic studies to buttress his claims. In my talk I pointed out the utter weakness of the current genomic studies of intelligence and any attempt to associate racial differences in measured intelligence to genomic variants.

(Do note that this was back in April of 2016, about one year before I changed my hereditarian views to that of DST. I emailed Murray about this, he responded to me, and gave me permission to post his reply which you can read at the above link.)

Emil Kirkegaard stated on Twitter:

Do you wanna bet that future genomics studies will vindicate us? Ashkenazim intelligence is higher for mostly genetic reasons. Probably someone will publish mixed-ethnic GWAS for EA/IQ within a few years

Notice, though, that “within a few years” is vague; I would take that to be, as Kirkegaard states next, three years. Kirkegaard was much more specific for polygenic scores (PGS) and Ashkenazi Jews, stating that “causal variant polygenic scores will show alignment with phenotypic gaps for IQ eg in 3 years time.” I’ll remember this: January 6th, 2022. (Though it was just an “example given,” this is a good example of a prediction from an HBDer.) Never mind the problems with PGS/GWA studies (Richardson, 2017; Janssens and Joyner, 2019; Richardson and Jones, 2019).

I can see a prediction being made, it not coming to pass, and, just like Russell, one stating “No!! X, Y, and Z happened and that invalidated the prediction! The new one is X time away!” Being vague about timetables for as-of-yet-to-occur events is dishonest; stick to the claim, and if it does not occur… stop holding the view, just as Russell did. However, people like Murray won’t change their views; they’re too entrenched in this. Most readers may know that over two years ago I changed my views on hereditarianism (which “is the doctrine or school of thought that heredity plays a significant role in determining human nature and character traits, such as intelligence and personality“) due to two books: DNA Is Not Destiny: The Remarkable, Completely Misunderstood Relationship between You and Your Genes and Genes, Brains, and Human Potential: The Science and Ideology of Intelligence. But I may just be a special case here.

Genes, Brains, and Human Potential then led me to the work of Jablonka and Lamb, Denis Noble, David Moore, Robert Lickliter, and others—the developmental systems theorists. DST is completely at odds with the main “field” of “HBD”: behavioral genetics. See Griffiths and Tabery (2013) for why teasing apart genes and environment—nature and nurture—is problematic.

In any case, five years away is always five years away, especially with HBDers. That magic evidence is always “right around the corner,” despite the fact that none ever comes. I know that some HBDers will probably clamor that I’m wrong—that Murray or another “HBDer” has made a successful prediction and has not immediately pushed back its date. But, just like Charles Taze Russell, when the prediction does not come to pass, they can just make something up about how and why it failed, and everything should be fine.

I think Charles Murray should change his name to Charles Taze Russell, since he has pushed back the date of his prediction so many times. Though, to Russell’s credit, he did eventually recant his views. I would find it hard to believe that Murray would; he’s too deep in this game, and his career writing books and being an AEI pundit is on the line.

So I strongly doubt that Murray would ever come right out and say “I was wrong.” Too much money is on the line for him. (Note that Murray has a new book coming out in January titled Human Diversity: Gender, Race, Class, and Genes, and you know that I will give a scathing review of it, since I already know Murray’s MO.) It’s ironic to me: Most HBDers are pretty religious in their convictions and can and will explain away data that doesn’t line up with their beliefs, just like a theist.

Men Are Stronger Than Women

1200 words

The claim that “men are stronger than women” hardly needs to be said—it is obvious from observation. To my (non-)surprise, I saw someone on Twitter state:

“I keep hearing that the sex basis of patriarchy is inevitable because men are (on average) stronger. Notwithstanding that part of this literally results from women in all stages of life being denied access to and discourage from physical activity, there’s other stuff to note.”

To which I replied:

“I don’t follow – are you claiming that if women were encouraged to be physically active that women (the population) can be anywhere *near* men’s (the population) strength level?”

I then got told to “Fuck off,” because I’m a “racist” (due to the handle I use and my views on the reality of race). In any case, it is true that part of this difference stems from cultural differences—think of women wanting the “toned” look, not wanting to get “big and bulky” (as if that happens overnight), and not wanting to lift heavy weights because they think they will become cartoonish.

Here’s the thing, though: Men have about 61 percent more total muscle mass than women (which is attributed to higher levels of testosterone), and most of that difference is allocated to the upper body—men have about 75 percent more arm muscle mass than women, which accounts for their 90 percent greater upper-body strength. Men also have about 50 percent more lower-body muscle mass than women, which is related to their 65 percent greater lower-body strength (see references in Lassek and Gaulin, 2009: 322).

Men have around 24 more pounds of skeletal muscle mass than women, and in that study women were about 40 percent weaker in the upper body and 33 percent weaker in the lower body (Janssen et al, 2000). Miller et al (1993) found that women had a 45 percent smaller cross-sectional area (CSA) in the biceps brachii, 45 percent smaller in the elbow flexors, 30 percent smaller in the vastus lateralis, and 25 percent smaller in the knee extensors, as I wrote in Muscular Strength by Gender and Race, where I concluded:

The cause for less upper-body strength in women is due to the distribution of women’s lean tissue being smaller.

Men have larger muscle fibers, which in my opinion is a large part of the reason for men’s strength advantage over women. Now, even if women were “discouraged” from physical activity, that would be a problem for their bone density: our bones are porous, and by doing a lot of activity we can strengthen them (see e.g., Fausto-Sterling, 2005). Bishop, Cureton, and Collins (1987) show that the sex difference in strength in close-to-equally-trained men and women “is almost entirely accounted for by the difference in muscle size,” which lends credence to the claim I made above.

Lindle et al (1997) conclude that:

… the results of this study indicate that Con strength levels begin to decline in the fourth rather than in the fifth decade, as was previously reported. Contrary to previous reports, there is no preservation of Ecc compared with Con strength in men or women with advancing age. Nevertheless, the decline in Ecc strength with age appears to start later in women than in men and later than Con strength did in both sexes. In a small subgroup of subjects, there appears to be a greater ability to store and utilize elastic energy in older women. This finding needs to be confirmed by using a larger sample size. Muscle quality declines with age in both men and women when Con peak torque is used, but declines only in men when Ecc peak torque is used. [“Con” and “Ecc” strength refer to concentric and eccentric actions]

Women are shorter than men and have less fat-free mass. Women also have a weaker grip: even when matched for height and weight, men had higher levels of lean mass than women (92 and 79 percent, respectively; Nieves et al, 2009), and men had greater bone mineral density (BMD) and bone mineral content (BMC). Now do some quick thinking—do you think that someone with weaker bones could be stronger than someone with stronger bones? If person A had higher levels of BMC and BMD than person B, who do you think would perform better on any strength test—the one with the weaker or the stronger bones? Quite obviously, the stronger one’s bones are, the more weight they can bear on them. If one has weak bones (low BMC/BMD) and puts a heavy load on their back, their bones could snap during the lift.

Alswat (2017) reviewed the literature on bone density in men and women and found that men had higher BMD in the hip and higher BMC in the lower spine; women also had bone fractures earlier than men. Some of this is no doubt cultural, as explained above. However, even if a boy and a girl were locked in a room their whole lives and did the same exact things, ate the same food, and lifted the same weights, I would bet my freedom that there would still be a large difference between the two, skewed in the direction we know it would skew. Women are also more likely than men to suffer from osteoporosis (Sözen, Özışık, and Başaran, 2016).

So if women have weaker bones than men, how could they possibly be stronger? Even if men and women had the same kind of physical activity down to a tee, could you imagine women being stronger than men? I couldn’t—but that’s because I have more than a basic understanding of anatomy and physiology and what they mean for differences in strength—or running—between men and women.

I don’t doubt that there are cultural reasons that account for part of the large difference in strength between men and women—I do doubt, though, that the gap can be meaningfully closed. Yes, biology interacts with culture. The developmental variables that coalesce to make men “Men” and those that coalesce to make women “Women” converge in creating the stark differences in phenotype between the sexes, which then explains how the sex differences manifest themselves.

Differences in bone strength between men and women, along with distribution of lean tissue, differences in lean mass, and differences in muscle size explain the disparity in muscular strength between men and women. You can even imagine a man and woman of similar height and weight and they would, of course, look different. This is due to differences in hormones—the two main players being testosterone and estrogen (see Lang, 2011).

So yes, part of the difference in strength between men and women is rooted in culture and in how we view women who strength train (way more women should strength train, as a matter of fact), though I find it hard to believe that even if the “cultural stigma” around women who lift heavy weights at the gym disappeared overnight, women would be stronger than men. Differences in strength exist between men and women, and they exist due to the complex relationship between biology and culture—nature and nurture (which cannot be disentangled).
