NotPoliticallyCorrect


Hereditarianism and Religion

2200 words

In its essence the traditional notion of general intelligence may be a secularised version of the Puritan idea of the soul. … perhaps Galtonian intelligence had its roots in a far older kind of religious thinking. (John White, Personal space: The religious origins of intelligence testing)

In chapter 1 of Alas, Poor Darwin: Arguments Against Evolutionary Psychology, Dorothy Nelkin identifies the link between the founder of sociobiology E.O. Wilson’s religious beliefs and the epiphany he described when he learned of evolution. A Christian author then used Sociobiology to explain and understand the origins of our own sinfulness (Williams, 2000). But there is another hereditarian-type research program that has these kinds of assumptions baked-in—IQ.

Philosopher of education John White has looked into the shared origins of IQ testing and the Puritan religion. The main link between Puritanism and IQ is predestination. The first IQ-ists conceptualized IQ—‘g’ or general intelligence—as innate, predetermined, and hereditary. The predestination parallel between IQ and Puritanism is easy to see: to the Puritans, whether one went to Hell was predestined before one even existed as a human being, whereas to the IQ-ists, IQ was predestined by the genes.

John White (2006: 39) in Intelligence, Destiny, and Education notes the parallel between “salvation and success, damnation and failure”:

Can we usefully compare the saved/damned dichotomy with the perceived contribution of intelligence or the lack of it to success and failure in life, as conventionally understood? One thing telling against this is that intelligence testers claim to identify via IQ scores a continuous gamut of ability from lowest to highest. On the other hand, most of the pioneers in the field were … especially interested in the far ends of this range — in Galton’s phrase ‘the extreme classes, the best and the worst.’ On the one hand there were the ‘gifted’, ‘the eminent’, ‘those who have honourably succeeded in life’, presumably … the most valuable portion of our human stock. On the other, the ‘feeble-minded’, the ‘cretins’, the ‘refuse’, those seeking to avoid ‘the monotony of daily labor’, democracy’s ballast, ‘not always useless but always a potential liability’.

A Puritan-type parallel can be drawn here—the ‘cretins’ and ‘feeble-minded’ are ‘the damned’ whereas the ‘gifted’ and ‘eminent’ are ‘the saved.’ This kind of parallel can still be seen in modern conceptualizations of the debate and in current GWASs—certain people have a surfeit of genes that influence intellectual attainment. Contrast this with the Puritan view: certain people are chosen before they exist to be either damned or saved. Certain people are chosen, by random mix-ups of genes during conception, to be either successful or not, and this is predetermined by the genes. So genetic determinism about IQ is, in a way, just like Puritan predestination—according to Galton, Burt, and the other IQ-ists of the 1910s-1920s (ever since Goddard brought back the Binet-Simon Scales from France in 1910).

Some Puritans banned the poor from their communities, seeing them as “disruptors to Puritan communities.” Stone (2018: 3-4) in An Invitation to Satan: Puritan Culture and the Salem Witch Trials writes:

The range of Puritan belief in salvation usually extended merely to members of their own communities and other Puritans. They viewed outsiders as suspicious, and people who held different beliefs, creeds, or did things differently were considered dangerous or evil. Because Puritans believed the community shared the consequences of right and wrong, often community actions were taken to atone for the misdeed. As such, they did not hesitate to punish or assault people who they deemed to be transgressors against them and against God’s will. The people who found themselves punished were the poor, and women who stood low on the social ladder. These punishments would range from beatings to public humiliation. Certain crimes, however, were viewed as far worse than others and were considered capital crimes, punishable by death.

Could the Puritan treatment of the poor be due to their belief in predestination? The Puritan John Winthrop stated in his book A Model of Christian Charity that “some must be rich, some poor, some high and eminent in power and dignity, others mean and in subjection.” This, too, is still around today: IQ is said to set “upper limits” on one’s “ability ceiling” to achieve X. The poor are those who do not have the ‘right genes.’ This is also a reason why IQ tests were first introduced in America—to turn away the poor (Gould, 1996; Dolmage, 2018). The claim that one’s ability is predetermined in the genes—that each person has their own ‘ceiling of ability’ constrained by their genes—is just like the Puritan predestination thesis. But it is unverifiable and unfalsifiable, so it is not a scientific theory.

To White (2006), the claim that we have this ‘innate’, ‘general’ capacity—this ‘intelligence’—is wanting. He takes this further, though. In discussing Galton’s and Burt’s claim that there are ‘ability ceilings’—and in discussing a letter he wrote to Burt—White (2006: 16) imagines that we give instruction to all of the twin pairs and that their scores increase by 15 points. This, then, would have a large effect on the correlation: “So it must be an assumption made by the theorist — i.e. Burt — in claiming a correlation of 0.87, that coaching could not successfully improve IQ scores. Burt replied ‘I doubt whether, had we returned a second time, the coaching would have affected our correlations’” (White, 2006: 16). Burt seems to be implying that a “ceiling of ability” exists—an idea he got from his mentor, Galton. White continues:

It would appear that neither Galton nor Burt have any evidence for their key claim [that ability ceilings exist]. The proposition that, for all of us, there are individually differing ceilings of ability seems to be an assumption behind their position, rather than a conclusion based on telling grounds.

I have discussed elsewhere (White, 1974; 2002a: ch. 5) what could count as evidence for this proposition, and concluded that it is neither verifiable nor falsifiable. The mere fact that a child appears not able to get beyond, say, elementary algebra is not evidence of a ceiling. The failure of this or that variation in teaching approach fares no better, since it is always possible for a teacher to try some different approach to help the learner get over the hurdle. (With some children, so neurologically damaged that they seem incapable of language, it may seem that the point where options run out for the teacher is easier to establish than it is for other children. But the proposition in question is supposed to apply to all of us: we are all said to have our own mental ceiling; and for non-brain-damaged people the existence of a ceiling seems impossible to demonstrate.) It is not falsifiable, since for even the cleverest person in the world, for whom no ceiling has been discovered, it is always possible that it exists somewhere. As an untestable — unverifiable and unfalsifiable — proposition, the claim that we each have a mental ceiling has, if we follow Karl Popper (1963: ch. 1), no role in science. It is like the proposition that God exists or that all historical events are predetermined, both of which are equally untestable. As such, it may play a foundational role, as these two propositions have played, in some ideological system of belief, but has no place in empirical science. (White, 2006: 16)

Burt believed that we should use IQ tests to shoe-horn people into what they would be ‘best for’ on the basis of IQ. Indeed, this is one of the main reasons why Binet constructed what would then become the modern IQ test. Binet, influenced by Galton’s (1869) Hereditary Genius, believed that we could identify and help lower-‘ability’ children. Binet envisioned an ‘ideal city’ in which people were pushed to vocations that were based on their ‘IQs.’ Mensh and Mensh (1991: 23) quote Binet on the “universal applications” of his test:

Of what use is a measure of intelligence? Without doubt, one could conceive many possible applications of the process in dreaming of a future where the social sphere would be better organized than ours; where everyone would work according to his known aptitudes in such a way that no particle of psychic force should be lost for society. That would be the ideal city.

So, it seems, Binet wanted to use his test as an early aptitude-type test (like the ones we took in grammar school which ‘showed us’ which vocations we would be ‘good at’ based on a questionnaire). Having people in Binet’s ‘ideal city’ work based on their ‘known aptitudes’ would increase, not decrease, inequality—so Binet’s envisioned city is exactly like today’s world. Mensh and Mensh (1991: 24) continue:

When Binet asserted that everyone would work to “known” aptitudes, he was saying that the individuals comprising a particular group would work according to the aptitudes that group was “known” to have. When he suggested, for example, that children of lower socioeconomic status are perfectly suited for manual labor, he was simply expressing what elite groups “know,” that is, that they themselves have mental aptitudes, and others have manual ones. It was this elitist belief, this universal rationale for the social status quo, that would be upheld by the universal testing Binet proposed.

White (2006: 42) writes:

Children born with low IQs have been held to have no hope of a professional, well-paid job. If they are capable of joining the workforce at all, they must find their niche as the unskilled workers.

Thus, the similarities between IQ-ist and religious (Puritan) belief become clear. The Puritan concern for salvation parallels the IQ-ist belief that one’s ‘innate intelligence’ dictates whether one succeeds or fails in life (based on one’s genes); both associated those lower on the social ladder—their work ethic and morals—with the reprobate on the one hand and with low-IQ people on the other; both groups believed that the family is the ‘mechanism’ by which individuals are ‘saved’ or ‘damned’—the Puritans presuming that salvation is transmitted through one’s family, the IQ-ists that those with ‘high intelligence’ have children with the same; and both believed that their favored group should be at the top with the best jobs and the best education, while those lower on the social ladder should get what they accordingly deserve. Galton, Binet, Goddard, Terman, Yerkes, Burt, and others believed that one was endowed with ‘innate general intelligence’ by the genes—a concept the current-day IQ-ists have taken up.

White drew his parallel between IQ and Puritanism without being aware that one of the first anti-IQ-ists—an American journalist named Walter Lippmann—had drawn the same parallel in the mid-1920s. (See Mensh and Mensh, 1991 for a discussion of Lippmann’s grievances with the IQ-ists.) The parallel thus holds both for Galton’s concept of ‘intelligence’ and for that of the IQ-ists today. White (2005: 440) notes “that virtually all the major players in the story had Puritan connexions may prove, after all, to be no more than coincidence.” Still, the evidence that White has marshaled in favor of the claim is interesting, and, as noted, many parallels exist. It would be some huge coincidence for all of these parallels to exist without being causal (from Puritanistic beliefs to hereditarian IQ dogma).

This is similar to what Oyama (1985: 53) notes:

Just as traditional thought placed biological forms in the mind of God, so modern thought finds many ways of endowing the genes with ultimate formative power, a power bestowed by Nature over countless millennia.

“Natural selection” plays the role that God did before Darwin, as even Ernst Mayr stated (Oyama, 1985: 85).

But this parallel between Puritanism and hereditarianism doesn’t just go back to the early 20th century—it can still be seen today. The assumption that genes contain a type of ‘information’ before being activated by the physiological system for its own uses still pervades our thought, even though many have been at the forefront of changing that kind of thinking (Oyama, 1985, 2000; Jablonka and Lamb, 1995, 2005; Moore, 2002, 2016; Noble, 2006, 2011, 2016).

The links between hereditarianism and religion are compelling; eugenic and Puritan beliefs are similar (Durst, 2017). IQ tests have been identified as having their origins in eugenic beliefs, along with Puritan-like beliefs about being saved/damned on the basis of something predetermined and out of one’s control—just like one’s genetics. The conception of ‘ability ceilings’—measured using IQ tests—is neither verifiable nor falsifiable. Hereditarians believe in ‘ability ceilings’ and claim that genes contain a kind of “blueprint” (a view still held today) which predestines one toward certain dispositions/behaviors/actions. Early IQ-ists believed that one is destined for certain types of jobs based on what is ‘known’ about one’s group. When Binet wrote of ‘known aptitudes,’ the gene was yet to be conceptualized, but the idea has stayed with us ever since.

So not only did the concept of “IQ” emerge due to the ‘need’ to ‘identify’ individuals for their certain ‘aptitudes’ that they would be well-suited for in, for instance, Binet’s ideal city, it also arose from eugenic beliefs and religious (Puritan) thinking. This may be why IQ-ists seem so hysterical—so religious—when talking about IQ and the ‘predictions’ it ‘makes’ (see Nash, 1990).

Charles Murray’s Philosophically Nonexistent Defense of Race in “Human Diversity”

2250 words

Charles Murray published his Human Diversity: The Biology of Gender, Race, and Class on 1/28/2020. I have an ongoing thread on Twitter discussing it.

Murray talks of an “orthodoxy” that denies the biology of gender, race, and class. This orthodoxy, Murray says, is made up of social constructivists. Murray is here to set the record straight. I will discuss some of Murray’s other arguments in his book, but for now I will focus on the section on race.

Murray, it seems, has no philosophical grounding for his belief that the clusters identified in these genomic runs are races—he simply assumes that the groups that appear in these analyses are races. This assumption is unfounded, and asserting that the clusters are races without sound justification actually undermines his claim that races exist. That is one thing that really jumped out at me as I was reading this section of the book. Murray discusses what geneticists say, but he does not discuss what any philosophers of race say. And that is to his downfall.

Murray discusses the program STRUCTURE, in which geneticists specify the number of clusters (K) they want returned when the DNA is analyzed (see also Hardimon, 2017: chapter 4). Rosenberg et al (2002) sampled 1056 individuals from 52 different populations using 377 microsatellites. They defined the populations by culture, geography, and language, not skin color or race. When K was set to 5, the clusters represented folk concepts of race, corresponding to the Americas, Europe, East Asia, Oceania, and Africa. (See Minimalist Races Exist and are Biologically Real.) Yes, the number of clusters that come out of STRUCTURE is predetermined by the researchers, but the clusters “are genetically structured … which is to say, meaningfully demarcated solely on the basis of genetic markers” (Hardimon, 2017: 88).
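The point that the analyst fixes K in advance—so the program will always return exactly as many clusters as it is told to find—can be illustrated with a toy sketch. To be clear, this is only an analogy: the real STRUCTURE program fits a Bayesian admixture model over hundreds of microsatellite loci, not k-means, and the data below are invented for illustration.

```python
import random

random.seed(42)

# Invented "allele frequency" profiles for three hypothetical populations.
# (Illustrative only; not real genetic data.)
def sample_population(center, n=20):
    return [[c + random.gauss(0, 0.05) for c in center] for _ in range(n)]

data = (sample_population([0.1, 0.9])
        + sample_population([0.5, 0.5])
        + sample_population([0.9, 0.1]))

def kmeans(points, k, iters=50):
    """Plain k-means: partition `points` into exactly k groups."""
    centers = random.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda j: sum((a - b) ** 2
                                            for a, b in zip(p, centers[j])))
            groups[nearest].append(p)
        # Recompute each center; keep the old one if its group is empty.
        centers = [[sum(dim) / len(g) for dim in zip(*g)] if g else centers[i]
                   for i, g in enumerate(groups)]
    return groups

# The analyst chooses K; the algorithm returns exactly K clusters regardless
# of how many populations actually generated the data.
for k in (2, 3, 5):
    print(k, len(kmeans(data, k)))
```

Whatever K is chosen, K groups come back—which is why the further question (are these particular clusters races?) cannot be answered by the clustering output alone.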

Races as clusters

Murray then discusses Li et al, who set K to 7; with that setting, North Africa and the Middle East appeared as new clusters. Murray then provides a graph from Li et al:


So, Murray’s argument seems to be: “(1) If clusters that correspond to concepts of race appear when K is set to 5-7 in STRUCTURE and cluster analyses, then (2) race exists. (1). Therefore (2).” Murray is missing a few things here, namely the conditions (see below) that would place the clusters into the racial categories. His assumption that the clusters are races—although (partly) true—is not bound by any sound reasoning, as can be seen from his partitioning of Middle Easterners and North Africans as separate races. Rosenberg et al (2002) showed the Kalash as a distinct cluster at K = 6; are they a race too?

No, they are not. Just because STRUCTURE identifies a population as genetically distinct, it does not follow that the population in question is a race, because the Kalash do not fit the criteria for racehood. The fact that the clusters correspond to major geographic areas means that the clusters represent continental-level minimalist races; races, therefore, exist (Hardimon, 2017: 85-86). But to be counted as a continental-level minimalist race, a group must fit the following conditions (Hardimon, 2017: 31):

(C1) … a group is distinguished from other groups of human beings by patterns of visible physical features
(C2) [the] members are linked by a common ancestry peculiar to members of that group, and
(C3) [they] originate from a distinctive geographic location

[…]

…what it is for a group to be a race is not defined in terms of what it is for an individual to be a member of a race. What it means to be an individual member of a minimalist race is defined in terms of what it is for a group to be a race.

Murray (paraphrased): “Cluster analyses/STRUCTURE spit out these continental microsatellite divisions which correspond to commonsense notions of race.” What is Murray’s logic for assuming that the clusters are races? It seems that there is no logic behind it—just “commonsense.” (See also Fish, below.) Not finding any argument for accepting X number of clusters as the races Murray wants, I can only assume that Murray chose whichever clustering agreed with his preconceptions and used it for his book. (If I am in error and there is an argument in the book, then perhaps someone can quote it.) What kind of justification is that?

Compare this with Hardimon’s argument and definition. A race is:

a subdivision of Homo sapiens—a group of populations that exhibits a distinctive pattern of genetically transmitted phenotypic characters that corresponds to the group’s geographic ancestry and belongs to a biological line of descent initiated by a geographically separated and reproductively isolated founding population. (Hardimon, 2017: 99)

[…]

Step 1. Recognize that there are differences in patterns of visible physical features of human beings that correspond to their differences in geographic ancestry.

Step 2. Observe that these patterns are exhibited by groups (that is, real existing groups).

Step 3. Note that the groups that exhibit these patterns of visible physical features corresponding to differences in geographical ancestry satisfy the conditions of the minimalist concept of race.

Step 4. Infer that minimalist race exists. (Hardimon, 2017: 69)

While Murray is right that the clusters corresponding to the folk races appear at K = 5, you can clearly see that Murray assumes that ALL clusters would then be races—and this is where the philosophical emptiness of Murray’s account comes in. Murray has no criteria for his belief that the clusters are races; commonsense is not good enough.

Philosophical emptiness

Murray then lambasts the orthodoxy for claiming that race is a social construct.

Advocates of “race is a social construct” have raised a host of methodological and philosophical issues with the cluster analyses. None of the critical articles has published a cluster analysis that does not show the kind of results I’ve shown.

Murray does not, however, discuss a more critical article on Rosenberg et al (2002): Mills’ (2017) Are Clusters Races? A Discussion of the Rhetorical Appropriation of Rosenberg et al.’s “Genetic Structure of Human Populations.” Mills (2017) discusses the views of Neven Sesardic (2010)—a philosopher—and Nicholas Wade—a science journalist and author of A Troublesome Inheritance (Wade, 2014). Both Wade and Sesardic are what Kaplan and Winther (2014) term “biological racial realists,” whereas Rosenberg et al (2002), Spencer (2014), and Hardimon (2017) are bio-genomic/cluster realists. Mills (2017) discusses the “misappropriation” of the bio-genomic cluster concept due to the “structuring of figures [and] particular phrasings” found in Rosenberg et al (2002). Wade and Sesardic shifted from bio-genomic cluster realism to their own hereditarian stance (biological racial realism; Kaplan and Winther, 2014). While this is not a blow to the positions of Hardimon and Spencer, it is a blow to Murray et al’s conception of “race.”

Murray (2020: 144)—rightly—disavows the concept of folk races but wrongly accepts the claim that we should dispense with the term “race”:

The orthodoxy is also right in wanting to discard the word race. It’s not just the politically correct who believe that. For example, I have found nothing in the genetics technical literature during the last few decades that uses race except within quotation marks. The reasons are legitimate, not political, and they are both historical and scientific.

Historically, it is incontestably true that the word race has been freighted with cultural baggage that has nothing to do with biological differences. The word carries with it the legacy of nineteenth-century scientific racism combined with Europe’s colonialism and America’s history of slavery and its aftermath.

[…]

The combination of historical and scientific reasons makes a compelling case that the word race has outlived its usefulness when discussing genetics. That’s why I adopt contemporary practice in the technical literature, which uses ancestral population or simply population instead of race or ethnicity

[Murray also writes on pg 166]

The material here does not support the existence of the classically defined races.

(Nevermind the fact that Murray’s and Herrnstein’s The Bell Curve was highly responsible for bringing “scientific racism” into the 21st century—despite protestations to the contrary that his work isn’t “scientifically racist.”)

In any case, we do not need to dispense with the term race. We only need to deflate the term (Hardimon, 2017; see also Spencer, 2014). Rejecting the claims of those termed biological racial realists by Kaplan and Winther (2014), both Hardimon (2017) and Spencer (2014; 2019) deflate the concept of race—that is, their concepts only discuss what we can see, not what we can’t. Their concepts are deflationist in that they take the physical differences from the racialist concept and reject the psychological assumptions. Murray, in fact, is giving in to this “orthodoxy” when he says that we should stop using the term “race.” It’s funny: Murray cites Lewontin (an eliminativist about race) and himself advocates eliminating the word while still keeping the underlying “guts” of the concept, if you will.

We should take the concept of “race” out of our vocabulary if, and only if, the concept does not refer. So for us to take “race” out of our vocabulary, it would have to fail to refer to anything. But “race” does refer—to proper names for a set of human population groups and to social groups, too. So why should we get rid of the term? There is absolutely no reason to do so. But we should be eliminativist about the racialist concept of race—the very concept Murray’s notion of race would need in order to hold.

There is, contra Murray, material that corresponds to the “classically defined races.” That he misses this can be seen in Murray’s admission that he read the “genetics technical literature.” He didn’t say that he read any philosophy of race on the matter, and it clearly shows.

To quote Hardimon (2017: 97):

Deflationary realism provides a worked-out alternative to racialism—it is a theory that represents race as a genetically grounded, relatively superficial biological reality that is not normatively important in itself. Deflationary realism makes it possible to rethink race. It offers the promise of freeing ourselves, if only imperfectly, from the racialist background conception of race.

Spencer (2014) states that the population clusters found by Rosenberg et al’s (2002) K = 5 run are referents of racial terms used by the US Census. “Race terms” to Spencer (2014: 1025) are “a rigidly designating proper name for a biologically real entity …Spencer’s (2019b) position is now “radically pluralist.” Spencer (2019a) states that the set of races in OMB race talk (Office of Management and Budget) is one of many forms “race” can take when talking about race in the US; the set of races in OMB race talk is the set of continental human populations; and the continental set of human populations is biologically real. So “race” should be understood as proper names—we should only care if our terms refer or not.

Murray’s philosophy of race is philosophically empty—Murray just uses “commonsense” to claim that the clusters found are races, which is clear from his claim that ME/NA peoples constitute two more races. This is almost better than Rushton’s three-race model, but not by much. In fact, Murray’s defense of race seems almost identical to Jensen’s (1998: 425) definition, which Fish (2002: 6) critiqued:

This is an example of the kind of ethnocentric operational definition described earlier. A fair translation is, “As an American, I know that blacks and whites are races, so even though I can’t find any way of making sense of the biological facts, I’ll assign people to my cultural categories, do my statistical tests, and explain the differences in biological terms.” In essence, the process involves a kind of reasoning by converse. Instead of arguing, “If races exist there are genetic differences between them,” the argument is “Genetic differences between groups exist, therefore the groups are races.”

So, even two decades later, hereditarians are STILL just assuming that race exists WITHOUT arguments and definitions/theories of race. Rushton (1997) did not define “race” and just assumed the existence of his three races—Caucasians, Mongoloids, and Negroids; Levin (1997), too, just assumes their existence (Fish, 2002: 5). Lynn (2006: 11) also uses an argument similar to Jensen’s (1998: 425). Since the concept of race is so important to the hereditarian research paradigm, why have they not operationalized a definition, instead of just assuming that race exists without argument? Murray can now join the list of his colleagues who assume the existence of race sans definition/theory.

Conclusion

Hardimon’s and Spencer’s concepts get around Fish’s (2002: 6) objection—but Murray’s doesn’t. Murray simply claims that the clusters are races without really thinking it through or providing justification for his claim. On the other hand, philosophers of race (Hardimon, 2017; Spencer, 2014; 2019a, b) have provided sound justification for the belief in race. Murray is not fair to the social constructivist position (good accounts can be found in Zack (2002), Hardimon (2017), and Haslanger (2000)). Murray seems to be one of those “Social constructivists say race doesn’t exist!” people, but this is false: social constructs are real, and the social can and does have potent biological effects. Social constructivists are realists about race (Spencer, 2012; Kaplan and Winther, 2014; Hardimon, 2017), contra Helmuth Nyborg.

Murray (2020: 17) asks “Why me? I am neither a geneticist nor a neuroscientist. What business do I have writing this book?” If you are reading this book for a fair—philosophical—treatment of race, look instead to actual philosophers of race, not to Murray et al, who, as shown, have no definition of race and just assume its existence. Spencer’s Blumenbachian Partitions/Hardimon’s minimalist races are how we should understand race in American society, not philosophically empty accounts.

Murray is right—race exists. Murray is also wrong—his kinds of races do not exist. Murray is right, but he doesn’t give an argument for his belief. His “orthodoxy” is also right about race—since we should accept pluralism about race, there are many different ways of looking at race: what it is, its influence on society, and how society influences it. I would rather be wrong and have an argument for my belief than be right and appeal to “commonsense” without an argument.

Just-so Stories: Mass Killings

2000 words

Mass shootings occur about every 12.5 days (Meindl and Ivy, 2017) and so, figuring out why this is the case is of utmost importance. There are, of course, complex and multi-factorial reasons why people turn to mass killing, with popular fixes being to change the environment and attempt to identify at-risk individuals before they carry out such heinous acts.

Just-so stories take many forms—why men have beards, why humans fear snakes and spiders, why men go bald, why humans have big brains, why certain genes appear in different populations at different frequencies, etc. The trait—or the genes that influence it—is said to be fitness-enhancing and therefore selected-for; it becomes “naturally selected” (see Fodor and Piattelli-Palmarini, 2010, 2011) and goes to fixation in that species. Mass shootings are becoming more frequent and deadlier in America; is there any evolutionary rationale behind this? Don’t worry, the just-so storytellers are here to tell us why these sorts of actions are and have been prevalent in society.

The end result is a highly provocative interpretation of combining theories of human nature and evolutionary psychology. Additionally, community development and connectedness are described as evolved behaviors that help provide opportunities for individuals to engage and support each other in a conflicted society. In sum, this manuscript helps piece together centuries old [sic] theories describing human nature with current views addressing natural selection and adaptive behaviors that helped shape the good that we know in each person as well as the potential destruction that we seem to tragically be witnessing with increasing frequency. At the time of this manuscript publication yet another mass campus shooting had occurred at Umpqua Community College (near Roseburg, Oregon). (Hoffman, 2015: 3-4, Philosophical Foundations of Evolutionary Psychology)

It seems that Hoffman (a psychology professor at Metropolitan State University) is implying that such actions like “mass campus shootings” are a part of “the potential destruction that we seem to tragically be witnessing with increasing frequency.” Hoffman (2015: 175) speaks of “genetic skills” and that just “because an individual has the genetic skills to be an athlete, artist, or auto-mechanic does not mean that ipso facto it will happen—what actually defines the outcomes of a specific human behavior is a very complex social and environmental process.” So, at least, Hoffman seems to understand (and endorse) the GxE/DST view.

There are also more formal presentations claiming that such actions are “based on an evolutionary compulsion to take action against a perceived threat to their status as males, which may pose a serious threat to their viability as mates and to their ultimate survival” (Muscoreil, 2015). (Let’s hope they stayed an undergrad.)

Muscoreil (2015) claims that such actions are due to status-seeking—taking action against other males perceived as a threat to one’s social status and reproductive success. Of course, killing off the competition would spread that individual’s genes through the population, increasing the frequency of the relevant traits if he happens to have more children (so the just-so story goes). Though, the storytellers are hopeful: Muscoreil (2015) proposes that we be ready to work toward “peace and healing,” whereas Hoffman (2015: 176) proposes that we work on cooperation, which was evolutionarily adaptive, and so “communities not only have the capacity but also more importantly an obligation to create specific environments that stimulates and nurture cooperative relationships, such as the development of community service activities and civic engagement opportunities.” So it seems these authors aren’t so doom-and-gloom—through community outreach, we can come together and attempt to decrease these kinds of crimes, which have been on the rise since 1999.

There is a paraphilia called “hybristophilia” in which a woman gets sexually aroused at the thought of being cheated on, or even at the thought of her partner committing heinous crimes such as rape and murder. Some women are even attracted to serial killers, and they tend to be in their 40s and 50s—through the killer, it is said, the woman gains a sense of status in her head. Two kinds of women fall for serial killers: those who think they can “change” the killer and those who are attracted through news headlines about the killer’s actions. Others say lonely women who want attention will write to serial killers, since the killers are more likely to write back. This, supposedly, points to an innate evolutionary drive for women to be attracted to the killer so they can feel more protected—even if they are not physically with him.

Of course, if there were no guns there would still (theoretically) be mass killings, as anything and everything can be used as a weapon to cause harm to another (which is why this is about mass killings and not mass murders). So, evolutionary psychologists note that a certain action is still prevalent (the fact that autogenic massacre exists) and attempt to explain it in a way only they can—through the tried and tested just-so story method.

Klinesmith et al (2006) showed that men who interacted with a gun showed subsequent increases in testosterone levels compared to those who tinkered with the board game Mouse Trap. (Aggression was operationalized as how much hot sauce the participant added to a cup of water intended for another subject.) Those who had access to the gun showed greater increases in testosterone and thus added more hot sauce to the water. They conclude that “exposure to guns may increase later interpersonal aggression, but further demonstrates that, at least for males, it does so in part by increasing testosterone levels” (Klinesmith et al, 2006: 570). And so, on this view, guns may increase aggressive behavior via an increase in testosterone. The study has the usual pitfalls—a small sample (n=30) of college-age men (younger means more aggressive, on average)—and so cannot be generalized. But the idea is out there: Holding a gun makes a man feel more powerful and dominant, and so his testosterone rises—but the testosterone increase would not be the driving cause. It has even been said that mass shooters are “low dominance losers”. Lack of attention leads to decreased social status, which means fewer women are willing to talk with the guy, which makes him think his access to women is decreasing due to his lack of social status; when he gets access to a weapon, his testosterone increases and he can then give in to his evolutionary compulsions and thereby increase his virility and access to mates.

Elliot Rodger is one of these types. He killed six people because he was shunned and had no social life—he wanted to punish the women who rejected him and the men he envied. Himself half white and half Asian, he described his hatred for inter-racial couples and couples in general, the fact that he could never get a girlfriend, and the conflicts that occurred in his family. Of course, all of his life experiences coalesced into the actions he decided to undertake that day—and to the evolutionary psychologist, it is all understandable through an evolutionary lens. He could not get women and was jealous of the men who could, so why not take some of them out and get the “retributive justice” he so yearned for? Evolutionary psychology explains his and similar actions. (VanGeem, 2009 espouses similar ideas.)

These ideas on evolutionary psychology and mass killings can even be extended to terrorism—as I myself (stupidly) have written on (see Rushton, 2005). Rushton uses his (refuted) genetic similarity theory (GST; an extension of kin selection and Dawkins’ selfish gene theory) to explain why suicide bombers are motivated to kill.

These political factors play an indispensable role but from an evolutionary perspective aspiring to universality, people have evolved a ‘cognitive module’ for altruistic self-sacrifice that benefits their gene pool. In an ultimate rather than proximate sense, suicide bombing can be viewed as a strategy to increase inclusive fitness. (Rushton, 2005: 502)

“Genes … typically only whisper their wishes rather than shout” (Rushton, 2005: 502). Note the Dawkins-like wording. Rushton, wisely, cautions in his conclusion that his genetic similarity theory is only one of many reasons why things like this occur and that causation is complex and multi-factorial—right, nice cover. To Rushton, the suicide bomber acts to ensure that those more closely related to him (his family and his ethnic group as a whole) survive and propagate more of their genes, increasing the selfishness and ethnocentrism of that ethnic group. Note how Rushton, despite his protestations to the contrary, is trying to ‘rationalize’ racism and ethnocentric behavior as being ‘in the genes’, with the selfish genes having the ‘vehicle’ behave more selfishly in order to increase the frequencies of the copies of themselves found in co-ethnics. (See Noble, 2011 for a refutation of Dawkins’ theory.) Ethnic nationalism and genocide are the “dark side to altruism”, states Rushton (2005: 504), and this altruistic behavior, in principle, could show why Arabs commit suicide bombings and similar attacks.

Jetter and Walker (2018) show that “news coverage is suggested to cause approximately three mass shootings in the following week, which would explain 58 percent of all mass shootings in our sample”, looking at ABC World News Tonight coverage from January 1, 2013 to June 23, 2016. Others have also suggested that such a “media contagion” effect exists in regard to mass shootings (Towers et al, 2015; Johnston and Joy, 2016; Meindl and Ivy, 2017; Lee, 2018; Pescara-Kovach et al, 2019). The idea of such a “media contagion” makes sense: If one is already harboring ideas of attempting a mass killing, seeing them occur in one’s own country, committed by people around one’s own age, may prompt the thought “I can do that, too.” And so, this could be one of the reasons for the increase in such attacks—the sensationalist media constantly covering the events and blasting the name of the perpetrator all over the airwaves.

Though, contrary to popular belief, the race of a mass shooter is not more likely to be white—he is more likely to be Asian. Between 1982 and 2013, out of the last 20 mass killings of the time period, 45 percent (9) were committed by non-whites. Asians, being 6 percent of the US population, were 15 percent of the killers within those 31 years. So, relative to population size, Asians commit the most mass shootings, not whites. (See also Mass Shootings by Race; they have up-to-date numbers.) Chen et al (2015) showed that:

being exposed to a Korean American rampage shooter in the media and perceiving race as a cause for this violence was positively associated with negative beliefs and social distance toward Korean American men. Whereas prompting White-respondents to subtype the Korean-exemplar helped White-respondents adjust their negative beliefs about Korean American men according to their attribution of the shooting to mental illness, it did not eliminate the effect of racial attribution on negative beliefs and social distance

Mass shooters who were Asian or another non-white minority got a lot more attention and received longer stories than white shooters. “While the two most covered shootings are perpetrated by whites (Sandy Hook and the 2011 shooting of Congresswoman Gabrielle Giffords in Tucson, Arizona), both an Asian and Middle Eastern shooter garnered considerable attention in The Times” (Schildkraut, Elsass, and Meredith, 2016).
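The population-adjusted comparison a few paragraphs up is simple arithmetic; a sketch using only the figures quoted in the text (9 of 20 killings, 15 percent of shooters vs. 6 percent of the population—figures from the article, not independently verified):

```python
# Figures quoted in the text above (1982-2013 sample).
def representation_ratio(share_of_group_among_shooters, share_of_population):
    """Over- (>1) or under- (<1) representation relative to population share."""
    return share_of_group_among_shooters / share_of_population

print(9 / 20)                            # 0.45 -> the "45 percent (9 of 20) non-white" figure
print(representation_ratio(0.15, 0.06))  # 2.5 -> Asians at ~2.5x their population share
```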

Although Hoffman (2015) and Muscoreil (2015) state that we should look to the community to ensure that individuals are not socially isolated so that these kinds of things may be prevented, there is still no way to predict who a mass shooter will be. Others propose that, due to the sharp increase in school shootings, steps should be taken to evaluate the mental health of at-risk students (Paoloni, 2015; see also Knoll and Annas, 2016) and attempt to stop these kinds of things before they happen. Mental illness cannot predict mass shootings (Leshner, 2019), but “evolutionary psychologists” cannot either. We did not need the just-so storytelling of Rushton, Hoffman, and Muscoreil to explain why mass killers still exist—the solutions to such killings put forth by Hoffman and Muscoreil are fine, but we did not need just-so story ‘reasoning’ to arrive at them.

Race, Test Bias, and ‘IQ Measurement’

1800 words

The history of standardized testing—including IQ testing—is contentious. What causes score distributions to differ between groups of people? I have stated at least four possible reasons for the test gap:

(1) Differences in genes cause differences in IQ scores;

(2) Differences in environment cause differences in IQ scores;

(3) A combination of genes and environment causes differences in IQ scores; and

(4) Differences in IQ scores are built into the test based on the test constructors’ prior biases.

I hold to (4) since, as I have noted, the hereditarian-environmentalist debate is frivolous. There is, as I have been saying for years now, no agreed-upon definition of ‘intelligence’, since there are such disparate answers from the ‘experts’ (Lanz, 2000; Richardson, 2002).

For the lack of such a definition only reflects the fact that there is no worked-out theory of intelligence. Having a successful definition of intelligence without a corresponding theory would be like having a building without foundations. This lack of theory is also responsible for the lack of some principled regimentation of the very many uses the word ‘intelligence’ and its cognates are put to. Too many questions concerning intelligence are still open, too many answers controversial. Consider a few examples of rather basic questions: Does ‘intelligence’ name some entity which underlies and explains certain classes of performances, or is the word ‘intelligence’ only sort of a shorthand-description for ‘being good at a couple of tasks or tests’ (typically those used in IQ tests)? In other words: Is ‘intelligence’ primarily a descriptive or also an explanatorily useful term? Is there really something like intelligence or are there only different individual abilities (compare Deese 1993)? Or should we turn our backs on the noun ‘intelligence’ and focus on the adverb ‘intelligently’, used to characterize certain classes of behaviors? (Lanz, 2000: 20)

Nash (1990: 133-4) writes:

Always since there are just a series of tasks of one sort or another on which performance can be ranked and correlated with other performances. Some performances are defined as ‘cognitive performances’ and other performances as ‘attainment performances’ on essentially arbitrary, common sense grounds. Then, since ‘cognitive performances’ require ‘ability’ they are said to measure that ‘ability’. And, obviously, the more ‘cognitive ability’ an individual possesses the more that individual can achieve. These procedures can provide no evidence that IQ is or can be measured, and it is rather beside the point to look for any, since that IQ is a metric property is a fundamental assumption of IQ theory. It is impossible that any ‘evidence’ could be produced by such procedures. A standardised test score (whether on tests designated as IQ or attainment tests) obtained by an individual indicates the relative standing of that individual. A score lies within the top ten percent or bottom half, or whatever, of those gained by the standardisation group. None of this demonstrates measurement of any property. People may be rank ordered by their telephone numbers but that would not indicate measurement of anything. IQ theory must demonstrate not that it has ranked people according to some performance (that requires no demonstration) but that they are ranked according to some real property revealed by that performance. If the test is an IQ test the property is IQ — by definition — and there can in consequence be no evidence dependent on measurement procedures for hypothesising its existence. The question is one of theory and meaning rather than one of technique. It is impossible to provide a satisfactory, that is non-circular, definition of the supposed ‘general cognitive ability’ IQ tests attempt to measure and without that definition IQ theory fails to meet the minimal conditions of measurement.

This is similar to Mary Midgley’s critique of ‘intelligence’ in her last book before her death, What Is Philosophy For? (Midgley, 2018). The ‘definitions’ of ‘intelligence’ and, along with them, its ‘measurement’ have never been satisfactory. Haier (2016: 24) refers to Gottfredson’s ‘definition’ of ‘intelligence’, stating that ‘intelligence’ is a ‘general mental ability.’ But if that is the case—that it is a ‘general mental ability’ (g)—then ‘intelligence’ does not exist, because ‘g’ does not exist as a property in the brain. Lanz’s (2000) critique is also like Howe’s (1988; 1997): ‘intelligence’ is a descriptive, not explanatory, term.

Now that the concept of ‘intelligence’ has been covered, let’s turn to race and test bias.


Test items are biased when they have different psychological meanings across cultures (He and van de Vijver 2012: 7). If they have different meanings across cultures, then the tests will not reflect the same ‘ability’ between cultures. Being exposed to the knowledge on a test—and its correct usage—is imperative for performance. For if one is not exposed to the content on the test, how can one be expected to do well? Indeed, there is much evidence that minority groups are not acculturated to the items on the test (Manly et al, 1997; Ryan et al, 2005; Boone et al, 2007). This is what IQ tests measure: acculturation to the culture of the tests’ constructors, school curriculum, and school teachers—aspects of white, middle-class culture (Richardson, 1998). Ryan et al (2005) found that reading and educational level, not race or ethnicity, were related to worse performance on psychological tests.

Serpell et al (2006) took 149 white and black fourth-graders and randomly assigned them to ethnically homogeneous groups of three, working on a motion task on a computer. Both blacks and whites learned equally well, but the transfer outcomes were better for blacks than for whites.

Helms (1992) claims that standardized tests are “Eurocentric”, Eurocentrism being “a perceptual set in which European and/or European American values, customs, traditions and characteristics are used as exclusive standards against which people and events in the world are evaluated and perceived.” In her conclusion, she stated that “Acculturation and assimilation to White Euro-American culture should enhance one’s performance on currently existing cognitive ability tests” (Helms, 1992: 1098). There just so happens to be evidence for this (along with the studies referenced above).

Fagan and Holland (2002) showed that when exposure to different kinds of information was required, whites did better than blacks, but when items were based on generally available knowledge, there was no difference between the groups. Fagan and Holland (2007) asked whites and blacks to solve problems found on usual IQ-type tests (e.g., standardized tests). Half of the items were solvable on the basis of available information, but the other items were solvable only on the basis of previously acquired knowledge, which indicated test bias (Fagan and Holland, 2007). They, again, showed that when knowledge is equalized, so are IQ scores. Thus, cultural differences in information acquisition explain IQ score differences. “There is no distinction between crassly biased IQ test items and those that appear to be non-biased” (Mensh and Mensh, 1991). This is because each item is chosen because it agrees with the distribution that the test constructors presuppose (Simon, 1997).

How do the neuropsychological studies referenced above, along with Fagan and Holland’s studies, show that test bias—and, along with it, test construction—is built into the test and causes the observed distribution of scores? Simple: Since the test constructors come from a higher social class, and the items chosen for inclusion on the test are more likely to be familiar to certain cultural groups than others, it follows that lower-scoring groups score lower because they were not exposed to the culturally-specific knowledge used on the test (Richardson, 2002; Hilliard, 2012).


The [IQ] tests do what their construction dictates; they correlate a group’s mental worth with its place in the social hierarchy. (Mensh and Mensh, 1991)

This is very easily seen with how such tests are constructed. The biases go back to the beginning of standardized testing—the first one being the SAT. The tests’ constructors had an idea of who was or was not ‘intelligent’ and so constructed the tests to show what they already ‘knew.’

…as one delves further … into test construction, one finds a maze of arbitrary steps taken to ensure that the items selected — the surrogates of intelligence — will rank children of different classes in conformity with a mental hierarchy that is presupposed to exist. (Mensh and Mensh, 1991)

Garrison (2009: 5) states that standardized tests “exist to assess social function” and that “Standardized testing—or the theory and practice known as ‘psychometrics’ … is not a form of measurement.” The same way tests were constructed in the 1900s is the way they are constructed today—with arbitrary items and a presupposed mental hierarchy which then become baked into the tests by virtue of how they are constructed.

IQ-ists like to say that certain genes are associated with high intelligence (using their GWASes), but what could the argument possibly be that would show that variation in SNPs would cause variation in ‘intelligence’? What would a theory of that look like? How is the hereditarian hypothesis not a just-so story? Such tests were created to justify the hierarchies in society, the tests were constructed to give the results that they get. So, I don’t see how genetic ‘explanations’ are not just-so stories.

1 Blacks and whites are different cultural groups.

2 If (1), then they will have different experiences by virtue of being different cultural groups.

3 So blacks and whites, being different cultural groups, will score differently on tests of ability, since they are exposed to different knowledge structures due to their different cultures, and so all tests of ability are culture-bound.

Knowledge, Culture, Logic, and IQ

Rushton and Jensen (2005) claim that the evidence they review over the past 30 years of IQ testing points to a ‘genetic component’ in the black-white IQ gap, relying on the flawed Minnesota study of “twins reared apart” (Joseph, 2018)—among other methods—to generate heritability estimates, stating that “The new evidence reviewed here points to some genetic component in Black–White differences in mean IQ.” The concept of heritability, however, is a flawed metric (Bailey, 1997; Schonemann, 1997; Guo, 2000; Moore, 2002; Rose, 2006; Schneider, 2007; Charney, 2012, 2013; Burt and Simons, 2014; Panofsky, 2014; Joseph et al, 2015; Moore and Shenk, 2016; Panofsky, 2016; Richardson, 2017). That G and E interact means that we cannot tease out “percentages” of nature’s and nurture’s “contributions” to a “trait.” So one cannot point to heritability estimates as if they point to a “genetic cause” of the score gap between blacks and whites. Further note that the gap has closed in recent years (Dickens and Flynn, 2006; Smith, 2018).
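The point about teasing out “percentages” can be put in the standard quantitative-genetics notation (a textbook-style sketch, not a formula taken from the papers cited above):

```latex
V_P = V_G + V_E + V_{G \times E} + 2\,\mathrm{Cov}(G,E),
\qquad
h^2 = \frac{V_G}{V_P}
```

Heritability \(h^2\) reads as a clean “percentage due to genes” only under the assumption that the interaction term \(V_{G \times E}\) and the gene–environment covariance are negligible; when genes and environments interact, \(V_G\) has no context-free meaning and the neat partition into nature and nurture collapses.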

And now, here is another argument based on the differing experiences of cultural groups, which explain IQ score differences (e.g., Mensh and Mensh, 1991; Manly et al, 1997; Kwate, 2001; Fagan and Holland, 2002, 2007; Cole, 2004; Ryan et al, 2005; Boone et al, 2007; Au, 2008; Hilliard, 2012; Au, 2013).

(1) If children of different class levels have experiences of different kinds with different material; and
(2) if IQ tests draw a disproportionate amount of test items from the higher classes; then
(c) higher class children should have higher scores than lower-class children.

Nature, Nurture, and Athleticism

1600 words

Nature vs nurture can be said to be a debate on what is ‘innate’ and what is ‘acquired’ in an organism. Debates about how nature and nurture tie into athletic ability and race both fall back onto the dichotomous notion. “Athleticism is innate and genetic!”, the hereditarian proclaims. “That blacks of West African ancestry are over-represented in the 100m dash is evidence of nature over nurture!” How simplistic these claims are.

Steve Sailer, in his response to Birney et al on the existence of race, assumes that because those with West African ancestry have consistently produced the most finalists (and winners) in the Olympic 100m dash, race must therefore exist.

I pointed out on Twitter that it’s hard to reconcile the current dogma about race not being a biological reality with what we see in sports, such as each of the last 72 finalists in the Olympic 100-meter dash going all the way back to 1984 nine Olympics ago being at least half sub-Saharan in ancestry.

Sailer also states that:

the abundant data suggesting that individuals of sub-Saharan ancestry enjoy genetic advantages.

[…]

For example, it’s considered fine to suggest that the reason that each new Dibaba is fast is due to their shared genetics. But to say that one major reason Ethiopians keep winning Olympic running medals (now up to 54, but none at any distance shorter than the 1,500-meter metric mile because Ethiopians lack sprinting ability) is due to their shared genetics is thought unthinkable.

Sailer’s argument seems to be “Group X is better than Group Y at event A. Therefore, X and Y are races”, which is similar to the hereditarian arguments on the existence of ‘race’—just assume they exist.

The outright reductionism to genes in Sailer’s view of athleticism and race is plainly obvious. That blacks are over-represented in certain sports (e.g., football and basketball) is taken to be evidence for the type of reductionism that Sailer and others appeal to (Gnida, 1995). Such appeals implicitly say: “The reason why blacks succeed at sport is genes while whites succeed due to hard work, so blacks don’t need to work as hard as whites when it comes to sports.”

There are anatomic and physiological differences between groups deemed “black” and “white”, and these differences do influence sporting success. Even though this is true, this does not mean that race exists. Such reductionist claims—as I myself have espoused years ago—do not hold up. Yes, blacks have a higher proportion of type II muscle fibers (Caesar and Henry, 2015), but this does not alone explain success in certain athletic disciplines.

Current genetic testing cannot identify an athlete (Pitsiladis et al, 2013). I reviewed some of the literature on power genotypes and race and concluded that there are no genes yet identified that can be said to be a sufficient cause of success in power sports.

Just because group A has gene or gene network G and competes in competition C does not mean that G contributes in full—or in part—to sporting success. The correlations could be coincidental and non-functional in regard to the sport in question. Athletes should be studied in isolation—studying a specific athlete in a specific discipline to ascertain how, what, and why things work for that athlete—along with taking anthropometric measures, seeing how badly they want “it”, and other environmental factors such as nutrition and training. Looking at the body as a system takes us away from privileging one part over another—while still understanding that genes do play a role, just not the role that reductionists believe.

No evidence exists for DNA variants that are common to endurance athletes (Rankinen et al, 2016). But they do have one thing in common (an environmental effect on biology): those born at altitude have a permanently altered ventilatory response as adults, while “Peruvians born at altitude have a nearly 10% larger forced vital capacity compared to genetically matched Peruvians born at sea level” (Brutasaert and Parra, 2009: 16). Certain environmental effects on biology are well-known, and those biological changes do help in certain athletic events (Epstein, 2014). Yan et al (2016) conclude “that the traditional argument of nature versus nurture is no longer relevant, as it has been clearly established that both are important factors in the road to becoming an elite athlete.”

Georgiades et al (2017) go the other way; their argument is clear from the title of their paper, “Why nature prevails over nurture in the making of the elite athlete.” They continue:

Despite this complexity, the overwhelming and accumulating evidence, amounted through experimental research spanning almost two centuries, tips the balance in favour of nature in the “nature” and “nurture” debate. In other words, truly elite-level athletes are built – but only from those born with innate ability.

They use twin studies as an example, stating that heritability being greater than 50% but lower than 100% means “that the environment is also important.” But this is a strange take, especially from seasoned sports scientists (like Pitsiladis). Attempting to partition traits into ‘nature’ and ‘nurture’ components and then arguing that the emergence of a trait is due more to genetics than environment is an erroneous use of heritability estimates. It is not possible—nor feasible—to separate traits into genetic and environmental components. The question does not even make sense.

“… the question of how to separate the native from the acquired in the responses of man does not seem likely to be answered because the question is unintelligible.” (Leonard Carmichael 1925, quoted in Genes, Determinism and God, Alexander, 2017)

Tucker and Collins (2012) write:

Rather, individual performance thresholds are determined by our genetic make-up, and training can be defined as the process by which genetic potential is realised. Although the specific details are currently unknown, the current scientific literature clearly indicates that both nurture and nature are involved in determining elite athletic performance. In conclusion, elite sporting performance is the result of the interaction between genetic and training factors, with the result that both talent identification and management systems to facilitate optimal training are crucial to sporting success.

Tucker and Collins (2012) define training as the realization of genetic potential, while DNA “control the ceiling” of what one may be able to accomplish. “… training maximises the likelihood of obtaining a performance level with a genetically controlled ‘ceiling’, accounts for the observed dominance of certain populations in specific sporting disciplines” (Tucker and Collins, 2012: 6). “Training” is the environment here; the “genetically controlled ‘ceiling’” is the genes. The authors are arguing that while training is important, training is just realizing the ‘potential’ of what is ‘already in’ the genes—an erroneous way of looking at genes. Shenk (2010: 107) explains why:

As the search for athletic genes continues, therefore, the overwhelming evidence suggests that researchers will instead locate genes prone to certain types of interactions: gene variant A in combination with gene variant B, provoked into expression by X amount of training + Y altitude + Z will to win + a hundred other life variables (coaching, injuries, etc.), will produce some specific result R. What this means, of course, is that we need to dispense rhetorically with the thick firewall between biology (nature) and training (nurture). The reality of GxE assures that each person’s genes interact with his climate, altitude, culture, meals, language, customs and spirituality—everything—to produce unique lifestyle trajectories. Genes play a critical role, but as dynamic instruments, not a fixed blueprint. A seven- or fourteen- or twenty-eight-year-old is not that way merely because of genetic instruction.

The model proposed by Tucker and Collins (2012) is pretty reductionist (see Ericsson, 2012 for a response), while the model proposed by Shenk (2010) is more holistic. The hypothetical model explaining Kenyan distance running success (Wilbur and Pitsiladis, 2012) is, too, a more realistic way of assessing sport dominance:

[Figure 6: hypothetical model of Kenyan distance-running success, from Wilbur and Pitsiladis, 2012]

The formation of an elite athlete comes down to a combination of genes, training, and numerous other interacting factors. The attempt to boil the appearance of a certain trait down to either ‘genes’ or ‘environment’ and partition them into percentages is an unsound procedure. That a certain group continuously wins a certain event does not constitute evidence that the group in question is a race, nor does it constitute evidence that ‘genes’ are the cause of the outcome differences between groups in that event. The holistic model of human athletic performance—in which genes contribute to certain physiological processes along with training and other biomechanical and psychological differences—is the correct way to think about sport and race. Actually seeing an athlete in motion in his preferred sport is (and I believe always will be) superior to mere genetic analysis. Genetic tests also have “no role to play in talent identification” (Webborn et al, 2015).

One emerging concept is that there are many potential genetic pathways to a given phenotype []. This concept is consistent with ideas that biological redundancy underpins complex multiscale physiological responses and adaptations in humans []. From an applied perspective, the ideas discussed in this review suggest that talent identification on the basis of DNA testing is likely to be of limited value, and that field testing, which is essentially a higher order ‘bioassay’, is likely to remain a key element of talent identification in both the near and foreseeable future []. (Joyner, 2019; Genetic Approaches for Sports Performance: How Far Away Are We?)

Athleticism is irreducible to biology (Louis, 2004). Holistic (nature and nurture) views will beat reductionist (nature vs nurture) views; given how biological systems work, there is no reason to privilege one level over another (Noble, 2012), so there is no reason to privilege the gene over the environment, or the environment over the gene. The interaction of multiple factors explains sport success.

The Oppression of the High IQs

1250 words

I’m sure most people remember their days in high school. Popular kids, goths, preppies, the losers, jocks, and the geeks are some of the groups you may find in the typical American high school. Each group, most likely, had another group that they didn’t like and became their rival. For the geeks, their rivals are most likely the jocks. They get beat on, made fun of, and most likely sit alone at lunch.

Should there be legal protection for such individuals? One psychologist argues there should be. Sonja Falck of the University of East London specializes in high-“ability” individuals and argues that terms like “geek” and “nerd” should be treated as hate speech and categorized under the same laws as homophobic, religious and racial slurs. She has even published a book on the subject, Extreme Intelligence: Development, Predicaments, Implications (Falck, 2019). (Also see The Curse of the High IQ; see here for a review.)

She wants anti-IQ slurs to be classified as hate crimes. Sure, being in the top two percent of the population (on a constructed normal curve) does make them a “minority group”, just like those in the bottom two percent of the distribution. Some IQ-ists may say, “If the bottom two percent are afforded special protections, then so should the top two percent be.”
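For context, the “two percent” is just the upper tail of the IQ scale’s constructed normal curve (mean 100, SD 15 by definition); a quick sketch of the arithmetic, using the ‘gifted’ cutoff of 132 quoted in the press excerpt below:

```python
import math

# IQ is scaled by construction to a normal curve: mean 100, SD 15.
MEAN, SD = 100, 15

def fraction_above(score, mean=MEAN, sd=SD):
    """Upper-tail area of the normal distribution above `score`."""
    z = (score - mean) / sd
    return 0.5 * math.erfc(z / math.sqrt(2))

# A cutoff of 132 sits about 2.13 SDs above the mean,
# i.e. roughly the top 1.6-2% of the constructed distribution.
print(f"{fraction_above(132):.1%}")
```

Note that this tail area is a property of how the scale is normed, not a measured fact about any population—which is the article’s point about the “constructed” curve.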

While hostile or inciteful language about race, religion, sexuality, disability or gender identity is classed as a hate crime, “divisive and humiliating” jibes such as ‘smart-arse’, ‘smart alec’, and ‘know-it-all’ are dismissed as “banter” and used with impunity against the country’s high-IQ community, she said.
According to Dr Falck, being labelled a ‘nerd’ in the course of being bullied, especially as a child, can cause psychological damage that may last a lifetime.
Extending legislation to include so-called ‘anti-IQ’ slurs would, she claims, help stamp out the “archaic” victimisation of more than one million Britons with a ‘gifted’ IQ score of 132 or over.
Her views are based on eight years of research and after speaking to dozens of high-ability children, parents and adults about their own experiences.
Non-discrimination against those with very high IQ is also supported by Mensa, the international high IQ society and by Potential Plus UK, the national association for young people with high-learning potential. (UEL ACADEMIC: ANTI-IQ TERMS ARE HATE CRIME’S ‘LAST TABOO’)

I’m not going to lie—if I ever came across a job application and the individual had on their resume that they were a “Mensa member” or a member of some other high IQ club, it would go into the “No” pile. I would assume that is discrimination against high IQ individuals, no?

Dr. Falck seems to be implying that terms such as “smart arse”, “geek”, and “nerd” are similar to “moron” (a term with historical significance, coined by Henry Goddard; see Dolmage, 2018), “idiot”, “dumbass”, and “stupid”, and so should be covered by the same type of hate crime legislation. The comparison matters because people deemed “morons” or “idiots” were sterilized in America as the eugenics movement came to a head in the early 1900s.

Low IQ individuals were sterilized in America during the 20th century, and the translated Binet-Simon test (and other, newer tests) were used toward those ends. The Eugenics Board of North Carolina alone sterilized thousands of low IQ individuals; around 60,000 people were sterilized in total in America before the 1960s, and IQ was one criterion used to decide whom to sterilize. Sterilization in America (a fact that is not common knowledge) continued into the 1980s in some U.S. states (e.g., California).

There was true, real discrimination against low IQ people during the 20th century, and so laws were enacted to protect them. Like the ‘gifted’, they comprise 2 percent of the population (on a curve constructed by the test’s makers), and low IQ individuals are afforded protection by the law. Therefore, states the IQ-ist, high IQ individuals should be afforded the same legal protection.

But being called a ‘nerd’, ‘geek’, ‘smarty pants’, ‘dweeb’, or ‘smart arse’ (Falck calls these ‘anti-IQ words‘) is not the same as being called terms that originated during the eugenic era of the U.S. Falck wants the term ‘nerd’ to be a ‘hate-term’, and for the British Government to ‘force societal change’ and give special protections to those with high IQs. Yet people freely use terms like ‘moron’ and ‘idiot’ in everyday speech, along with the other terms cited by Falck.

Falck wants ‘intelligence’ to be afforded the same protections under the Equality Act of 2010 (even though ‘intelligence’ here means just scoring high on an IQ test and qualifying for Mensa; note that Mensans have a higher chance of psychological and physiological overexcitability; Karpinski et al, 2018). Now, Britain isn’t America (where we, thankfully, have free speech laws), but Falck wants there to be penalties for me if I call someone a ‘geek.’ How, exactly, is this supposed to work? Like with my example above of putting a resume listing ‘Mensa member’ in the “No” pile? Would that be discrimination? Or is it my choice as an employer who I want to work for me? Where do we draw the line?

By way of contrast, intelligence does not neatly fit within the definition of any of the existing protected characteristics. However, if a person is treated differently because of a protected characteristic, such as a disability, it is possible that derogatory comments regarding their intelligence might form part of the factual matrix in respect of proving less favourable treatment.

[…]

If the individual is suffering from work-related stress as a result of facing repeated “anti-IQ slurs” and related behaviour, they might also fall into the definition of disabled under the Equality Act and be able to bring claims for disability discrimination. (‘Anti-IQ’ slurs: Why HR should be mindful of intelligence-related bullying)

How would one know if the individual in question is ‘gifted’? Acting weird? They tell you? (How do you know if someone is a Mensan? Don’t worry, they’ll tell you.) Calling people names because they do X? That is ALL a part of workplace banter—better call up OSHA! What does it even mean for one to be mistreated in the workplace due to their ‘high intelligence’? If someone I work with seems to be doing things right, not messing up, and is good to work with, there will be no problem. On the other hand, if they start messing up and are bad to work with (say, they make things harder for the team by not being a team player), there will be a problem—and if their little quirks mean they have a ‘high IQ’ and I’m being an IQ bigot, then Falck would want there to be penalties for me.

I have yet to read the book (I will get to it after I read and review Murray’s Human Diversity and Warne’s Debunking 35 Myths About Human Intelligence—going to be a busy winter for me!), but the premise seems strange: where do we draw the line on which ‘minority groups’ get afforded special protections? The proposal is insane; name-calling (such as the examples cited in the articles) is normal workplace banter (you need thick skin, of course, rather than running to HR to rat out your co-workers). It seems that Mensa is looking out for its own here, attempting to afford them protections that they do not need. High IQ people are clearly oppressed and discriminated against in society and so need to be afforded special protection by the law. (sarcasm)

This, though, just speaks to the insanity surrounding special group protections and the law. I thought this was a joke when I read these articles—then I came across the book.

Goddard Undefended

A recent YouTube video from the user “Modern Heresy” purports to critique Ken Richardson’s work, both in his 2017 book “Genes, Brains and Human Potential: The Science and Ideology of Intelligence” and in some of his older publications. I have addressed the claims about social class specifically here, and RaceRealist has addressed the video as a whole here, but today I want to go into more detail on the issue of Goddard, since it took up roughly a third of the video. I thought the section was fairly pedantic, as it concerns barely two paragraphs of the book, but since the video creator thinks it so important [1], it must be addressed.

The issue of Goddard’s testing of immigrants on Ellis Island has long been the subject of academic controversy, debated in the journal American Psychologist by Herrnstein, Kamin, Albee and others (Albee 1980; Dorfman 1982; Kamin 1982; Samelson 1985; Snyderman & Herrnstein 1983). The most comprehensive rebuttal to Snyderman and Herrnstein’s erroneous paper can be found in Gelb et al. (1986), as noted by RaceRealist.

Basically, the video maker’s allegation rests on a number of smaller points. The first relates to the proportion of the individuals tested who were found to be feeble-minded, and what that result means; the other concerns how Goddard tested immigrants on Ellis Island.

Richardson claims in his 2017 book that:

That was after long and trying journeys, using the tests in English through interpreters. By these means, the country came to be told that 83 percent of Jews, 80 percent of Hungarians, 79 percent of Italians, and 87 percent of Russians were feebleminded. Almost as bad were the Irish, Italians, and Poles and, bottom of the list, the blacks. Only the Scandinavians and Anglo- Saxons escaped such extremes of labeling.

Richardson (2017)

The video creator correctly notes that Goddard’s study:

makes no determination of the actual percentage, even of these groups, who are feeble-minded.

Goddard (1917)

“Modern Heresy” then claims that this statement from the beginning of the Goddard paper demonstrates that Richardson was misrepresenting it. I agree that it is a misrepresentation insofar as Richardson did not clarify that the primary purpose of the paper was not to determine the percentage of ‘feeble-minded’ individuals, but this is separate from the primary claims Richardson was making. Richardson’s broader point is about the use of IQ tests to cause social harm, e.g. through the passage of the Immigration Act of 1924. Clearly, as Gelb et al. (1986) have shown, he is correct on this point. But the minutiae are the interesting part here, so let’s get into them.

When Richardson states that:

83 percent of Jews, 80 percent of Hungarians, 79 percent of Italians, and 87 percent of Russians were feebleminded

Richardson (2017)

he is indeed accurately quoting a real statistic from one of Goddard’s papers (contra Herrnstein 1981), which occurs in Goddard’s Table II (Dorfman 1982). And again contra Herrnstein (and Cochran et al. 2006), this sample is not (entirely) a group of immigrants specifically chosen for their feeble-mindedness, but includes a fairly representative sample. As noted by Goddard:

For the purpose of the first question an investigator selected 39 cases—20 were Italians and 19 were Russians—who appeared to her to be feeble-minded. These were then tested by the other investigator, the results being recorded for later study.

Goddard (1917)

Note that, contra Cochran et al. (2006), there are no Jews in the sample of feeble-minded individuals that Goddard specifically selected. But Goddard’s second selection is what is most relevant here, about which he states:

For the second question cases were picked who appeared to be representative of their respective groups. In this list we had 35 Jews, 22 Hungarians, 50 Italians and 45 Russians. (5 Jews, 2 Italians and 1 Russian were children under 12 years of age.)

Goddard (1917)

Despite Goddard’s caution at the beginning of the article that “the study makes no determination of the actual percentage, even of these groups, who are feebleminded” (Goddard 1917, p. 243), he later notes that the sample is “representative” (Goddard 1917, p. 244) and that, despite the selection involved due to the exclusion of “superior individuals”, the small number of them “did not noticeably affect the character of the group” (Goddard 1917, p. 244). As such, he stated that an estimate of the character of these national groups would only have to be “revised … by a relatively small amount” (Goddard 1917, p. 244). He finally concluded that “one can hardly escape the conviction that the intelligence of the average ‘third class’ immigrant is low, perhaps of moron grade”.

But the broader point Richardson makes, that “the country came to be told that …”, is likewise both slightly misrepresentative and broadly correct. A news article published in 1917 about Goddard’s paper noted that “the most favorable interpretation of their results is that two out of every five of the immigrants studied were feebleminded” (The Survey 1917). It also reported that 83 percent of Russians were found to be feebleminded using the typical criterion, meaning that Richardson’s note that the American public was exposed to claims of immigrant feeblemindedness is accurate, even if Goddard’s article itself can’t be used to draw those conclusions. Indeed, it was Pioneer Fund president Harry Laughlin who cited Goddard’s figures in his testimony to Congress during the debate over the Immigration Act of 1924 (Swanson 1995). It is well known that science is commonly misrepresented to the public, and this example may be one of many (Dumas-Mallet et al. 2017). The video creator alleges that because Goddard attributed this low intelligence level to environment rather than heredity, Richardson’s discussion of Goddard is yet again incorrect. But Richardson does not name Goddard as an anti-immigrant xenophobe; he merely points out that these figures later became the basis for anti-immigration xenophobia, which is a historical fact as noted above. Again, the video creator confuses Richardson’s discussion of Goddard with the discussion of xenophobia, and then conflates Richardson’s claims with the ones made by Kamin and Gould in the past.

There is one more issue brought up in the video as to Richardson’s portrayal of Goddard’s testing of immigrants, and it is Richardson’s claim that:

Amid distressing scenes at the infamous reception center on Ellis Island, he managed to ensure that all immigrants— men, women, and children of all ages— were given the IQ test as soon as they landed

Richardson (2017)

While it is unclear what Goddard’s specific role was in developing the use of testing during immigration proceedings on Ellis Island, it is uncontested that there was widespread use of mental tests upon entry to the island (Mullan 1917; Zenderland 1998, p. 419 note 17), and that Goddard was the person sent out to inspect the mental testing procedures the immigration enforcement officers were already engaging in (Goddard 1917; Sussman 2014, p. 84) following the US ban on “moronic immigrants” and the subsequent fear that moronic immigrants were still getting through (Davis 1925, p. 218-219; Wilkes-Barre Record 1907). Again, while Richardson’s treatment of the issue is curt and may seem a bit reductive, it is not wholly inaccurate. He’s not writing a history textbook or publishing a paper in The American Historical Review; he’s writing a book that covers numerous topics about IQ.

[1] The author again brings up the Goddard issue in the comment reply to RR’s recent article.

Social Class, Ken Richardson and “Modern Heresy”

Note: I am new to this blog. I blog at https://developmentalsystem.wordpress.com and https://www.sillyolyou.com

As RaceRealist has already noted, a recent YouTube video purports to critique Ken Richardson’s work on IQ tests and their relationship with social class. Since he already covered a number of the relevant parts about Goddard, construct validity and a number of other topics, I wanted to focus more on the question of social class. This is one of the claims that RaceRealist brings up from Richardson’s work quite frequently, so it’s probably the most pertinent at this point in time.

Note: I watched the video about a week ago, so if I get any of the details mixed up, please let me know.

One of the claims that the author of the YouTube video makes is that because a large proportion of the variation in IQ is within-family, rather than between-families, IQ cannot be a metric of social class. The logic here is quite simple: behavioral genetic studies tell us that the average difference between two siblings in the same family is about 12 IQ points, while the average difference between two randomly selected individuals in the population is about 17 points. That means that, according to the creator of the video, “about 70% of the IQ variation in society is due to within-family differences”. We should note two things. The first is that the figure cited here is incorrect, even if we accept the stated average differences. The video creator seems to have calculated the percent of variation as a proportion of the standard deviations (\sigma), rather than the actual variances (\sigma^2). The proper proportion would be \frac{12^2}{17^2}=0.498, which is 49.8%. That still leaves 50.2% of the variation between families. The other issue is that I very much doubt the figure of a typical difference of 12 IQ points between siblings. The formula for the expected absolute difference between two siblings in the same family is \frac{2\cdot\sqrt{\frac{V_a}{2}+V_e}}{\sqrt{\pi}} (here V_e=1-V_a, with variances expressed as fractions of the phenotypic variance). If we assume that the phenotypic standard deviation is 15 (as defined for IQ tests), we can compute the expected difference for different values of heritability. We can quickly note the problem: if the heritability of the trait is zero, then the expected phenotypic difference for siblings is \frac{2\cdot\sqrt{\frac{0\cdot15^2}{2}+(1-0)\cdot15^2}}{\sqrt{\pi}}=16.9, whereas two random strangers would have an expected difference of \int_{-\infty}^{\infty} \int_{-\infty}^{\infty}\text{PDF}(\mathcal{N}(0,1),x)\cdot\text{PDF}(\mathcal{N}(0,1),y)\cdot|y-x|\cdot 15\, dx\, dy=16.9 [1].
If we were to take the author’s proposal for how much of the variation is within-family vis-à-vis between-family, we would have to conclude that \frac{16.9}{16.9}=1, i.e. that 100% of the variance is within-family and 0% of the variance is between-families. Does it make sense that, even if heritability were zero, 0% of the variance could represent social class? No, it doesn’t, because there are several problems with the inference. The first is that variation in observed IQ is not the relevant metric here. If we take the notion that there is a fact of the matter about someone’s IQ score (i.e. there exists a ‘true score’), then IQ would have measurement error (Whitaker 2010). The consequence of this alleged measurement error would be to inflate the proportion of variation that is within-family. First, denote variances as such: V_a for additive genetic variance, V_e for environmental variance, and V_m for measurement error variance. The within-family calculation would be \frac{2\sqrt{\frac{V_a}{2}+V_e+V_m}}{\sqrt{\pi}} (per the formula above) and the between-family calculation \frac{2\sqrt{V_a+V_e+V_m}}{\sqrt{\pi}} for observed IQ scores, while the true relevant quantities would be \frac{2\sqrt{\frac{V_a}{2}+V_e}}{\sqrt{\pi}} and \frac{2\sqrt{V_a+V_e}}{\sqrt{\pi}}. The within-family fractions are thus \frac{\sqrt{\frac{V_a}{2}+V_e+V_m}}{\sqrt{V_a+V_e+V_m}} for observed scores and \frac{\sqrt{\frac{V_a}{2}+V_e}}{\sqrt{V_a+V_e}} for true scores (the \frac{2}{\sqrt{\pi}} prefactors cancel in the ratio). Some simple algebra shows that \frac{\sqrt{\frac{V_a}{2}+V_e+V_m}}{\sqrt{V_a+V_e+V_m}} > \frac{\sqrt{\frac{V_a}{2}+V_e}}{\sqrt{V_a+V_e}} [2], which means that so long as V_a (additive genetic variance) and V_m (measurement error variance) are both positive (i.e. there is both heritability and measurement error), the fraction of variation within families calculated from observed IQs will be an overestimate, because of measurement error.
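The expected-absolute-difference formulas and the measurement-error argument can be sanity-checked numerically. Below is a small Monte Carlo sketch of my own (not from the video or from Richardson); it assumes a simple additive model in which siblings share half the additive genetic variance, and all function names are mine.

```python
import math
import random

def expected_abs_diff_siblings(Va, Ve):
    # Var(S1 - S2) = 2*(Va/2 + Ve); with E|Z| = sigma*sqrt(2/pi) for
    # Z ~ N(0, sigma^2), this gives 2*sqrt(Va/2 + Ve)/sqrt(pi).
    return 2 * math.sqrt(Va / 2 + Ve) / math.sqrt(math.pi)

def expected_abs_diff_strangers(Va, Ve):
    # Var(X - Y) = 2*(Va + Ve), giving 2*sqrt(Va + Ve)/sqrt(pi).
    return 2 * math.sqrt(Va + Ve) / math.sqrt(math.pi)

def simulate(Va, Ve, Vm=0.0, n=100_000, seed=1):
    """Mean |difference| for sibling pairs and stranger pairs, with optional
    measurement-error variance Vm added to every observed score."""
    rng = random.Random(seed)
    sa2 = math.sqrt(Va / 2)             # SD of each additive "half"
    se, sm = math.sqrt(Ve), math.sqrt(Vm)
    sib_total = stranger_total = 0.0
    for _ in range(n):
        # Siblings share one additive component and each draw their own
        # second component, environment, and measurement error.
        shared = rng.gauss(0, sa2)
        s1 = shared + rng.gauss(0, sa2) + rng.gauss(0, se) + rng.gauss(0, sm)
        s2 = shared + rng.gauss(0, sa2) + rng.gauss(0, se) + rng.gauss(0, sm)
        sib_total += abs(s1 - s2)
        # Strangers share nothing.
        x = rng.gauss(0, sa2) + rng.gauss(0, sa2) + rng.gauss(0, se) + rng.gauss(0, sm)
        y = rng.gauss(0, sa2) + rng.gauss(0, sa2) + rng.gauss(0, se) + rng.gauss(0, sm)
        stranger_total += abs(x - y)
    return sib_total / n, stranger_total / n
```

With zero heritability and phenotypic variance 15^2 (`simulate(0.0, 225.0)`), the simulated sibling and stranger mean differences both land near the closed-form value 16.9 from the text; and with heritability above zero, adding a positive `Vm` raises the ratio of the sibling difference to the stranger difference, i.e. it inflates the apparent within-family share.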

The other issue with the creator’s claims is the assumption that “social class varies between families but not within families”. While it may seem obvious at first, there are a number of problems with it. The first is that an individual’s attained social class during adulthood (and thus the social class they inhabit) is not identical within families. Two kids may have started off with very similar IQs due to their family environments, but for various reasons (random chance, developed interests, etc.) they moved into different social classes in adulthood, after which their IQs diverged due to the different social classes they now inhabit. This is relevant because the statistic the creator uses to (erroneously) claim that 70% of the variation is within families does not specify the age at which the measurements were taken; they were likely taken later in life, by which point the social classes of the individuals had already diverged.

Moreover, even when children are within the same family at earlier ages, there is reason to argue that their social class is not identical either. Sociologists now understand social class as a dynamic structure reproduced not only by the economic access, resources, and opportunities available to individuals, but also by sociocultural signifiers, subjective experiences, interests, and other factors that contribute to one’s social class. As noted by Richardson himself, one’s subjective SES may better reflect the sociocognitive environment in which one develops (Richardson & Jones 2019):

It is self-conscious social status in relation to others in a class structure – i.e. what has been called subjective as opposed to objective social class – that seems to influence a wide range of educational and cognitive outcomes. There is only moderate correlation between that and current SES

And this subjective SES can quite evidently differ between siblings and differentially impact their IQs. Moreover, there are numerous other aspects of social class that can vary within-family, such as school quality (e.g. school resources; Leon & Valdiva 2015), peer effects (Hoxby 2000), teacher and classroom effects (Boyd-Zaharias 1999; Chetty et al. 2015), and societal expectations (Jensen & McHale 2016). Small initial differences in these environmental variables can be magnified through phenotype-environment processes (Beam & Turkheimer 2013), creating a seemingly large within-family variation that appears unattributable to social class but could in fact be a result of it.

Another vitally important thing to note is that even if we accept the concept of social class as measured by socioeconomic status (a simple standardized combination of income, parental education, and occupational status), that doesn’t mean this variable can’t differ in important ways within the family. First note that siblings are not always the same age (twins excepted). As a result, the developmental environment in which each child grows up is not identical. As Richardson noted over a decade ago, children within the same family do not experience the same environment (Richardson & Norgate 2005). This is not only because of perceptual experiences and phenotype→environment feedback loops, but also because the timing of developmental experiences matters: changes in parental socioeconomic status over time mean that children are exposed to different socioeconomic environments during critical periods (Sylva 1997). For instance, parental socioeconomic status, and as a result maternal stress, can differ between the mother’s pregnancies (Richardson 2019), causing differences between the two children’s development.

Social Class or Sociocognitive Preparedness?

We should note that this critique is premised entirely on the idea that Richardson posits social class as the overwhelming determinant of IQ, to the neglect of other environmental factors. This is an erroneous assumption: in the paper in which Richardson lays out his theory of what IQ tests are a proxy for (Richardson 2002; “What IQ Tests Test”), he argues that “population variance in IQ scores can be described in terms of a nexus of sociocognitive-affective factors that differentially prepares individuals for the cognitive, affective and performance demands of the test”, which, in effect, makes the test a measure of social class. Note that he does not quantify the specific amount of variation explained by social class, and that over 50% of the variation (the minimum under the assumptions questioned above) being explainable by social class (a correlation of about 0.7) could certainly qualify under his meaning. Regardless, Richardson’s primary explication in terms of the “nexus of sociocognitive-affective factors” is perfectly compatible with the within- versus between-family variance breakdown described in the video.

Richardson describes numerous factors influencing intelligence that can differ within families, such as “individuals’ psychological proximity to that set” of cultural tools, parental interactions with children (Hart & Risley 1995; Jensen & McHale 2016), affective preparedness, and so on. All of these factors can additionally explain IQ variance, meaning that the critique of Richardson’s explanation of IQ variance does not go through.

Predictive (In)Validity?

The creator of the video also claims that The Bell Curve demonstrated that IQ remains predictive once SES is controlled for, and that IQ is a much better predictor than SES. Despite this common claim by Bell Curve fanatics, it has been demonstrated to be incorrect more times than one can count (Fischer et al. 1996; Ragin & Fiss 2016). In fact, a closer analysis of the model Murray and Herrnstein fit shows that they predicted NOT ONE of their poverty cases correctly (Krenz n.d.; see also Dickens et al. 1995, Goldberger & Manski 1995). A more thorough examination of the claims related to the alleged predictive validity of IQ can be found here.

[1] A simpler way to get the result is to calculate the variance of the difference using the usual addition/difference rules for independent normal distributions, take its square root, and multiply by \sqrt{\frac{2}{\pi}} (the expected absolute value of a standard normal variable).

[2] Each step below is an equivalence for positive variances, so the chain establishes the claim: \frac{\sqrt{\frac{V_a}{2}+V_e+V_m}}{\sqrt{V_a+V_e+V_m}}\geq\frac{\sqrt{\frac{V_a}{2}+V_e}}{\sqrt{V_a+V_e}}\iff\frac{\frac{V_a}{2}+V_e+V_m}{V_a+V_e+V_m}\geq\frac{\frac{V_a}{2}+V_e}{V_a+V_e}\iff(\frac{V_a}{2}+V_e+V_m)(V_a+V_e)\geq(\frac{V_a}{2}+V_e)(V_a+V_e+V_m)\iff\frac{V_a^2}{2}+\frac{V_aV_e}{2}+V_eV_a+V_e^2+V_aV_m+V_eV_m\geq\frac{V_a^2}{2}+\frac{V_aV_e}{2}+\frac{V_aV_m}{2}+V_eV_a+V_e^2+V_eV_m\iff V_aV_m-\frac{V_aV_m}{2}\geq0\iff\frac{V_aV_m}{2}\geq0, which always holds, with equality only when V_aV_m=0.
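The inequality in footnote [2] can also be spot-checked numerically. This is a sketch of my own (the function name and the variance ranges are arbitrary illustrative choices), asserting the inequality over random positive values of V_a, V_e, and V_m:

```python
import random

def within_fraction(Va, Ve, Vm=0.0):
    # Within-family share of the expected absolute differences,
    # sqrt(Va/2 + Ve + Vm) / sqrt(Va + Ve + Vm); the 2/sqrt(pi)
    # prefactors of the two expected differences cancel in the ratio.
    return ((Va / 2 + Ve + Vm) / (Va + Ve + Vm)) ** 0.5

rng = random.Random(0)
for _ in range(100_000):
    Va, Ve, Vm = (rng.uniform(0.01, 100.0) for _ in range(3))
    # Positive Va and Vm always push the observed within-family share up.
    assert within_fraction(Va, Ve, Vm) > within_fraction(Va, Ve)
```

With V_a = 0 the two values coincide (measurement error then changes nothing), which is why the claim in the text is conditioned on both V_a and V_m being positive.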

Correlation and Causation Regarding the Etiology of Lung Cancer in Regard to Smoking

1550 words

The etiology of the increase in lung cancer over the course of the 20th century has been a large area of debate. Was it smoking that caused cancer? Or was some other, unknown, factor the cause? Causation is multifactorial and multi-level—that is, causes of anything are numerous and these causes all interact with each other. But when it comes to smoking, it was erroneously argued that genotypic differences between individuals were the cause of both smoking and cancer. We know now that smoking is directly related to the incidence of lung cancer, but in the 20th century, there were researchers who were influenced and bribed to bring about favorable conclusions for the tobacco companies.

Psychologist Hans Eysenck (1916-1997) was a controversial figure who researched many things, perhaps most controversially racial differences in intelligence. It came out recently that he published fraudulent papers with bad data (Smith, 2019). He also, among other odd positions, believed that smoking was not causal in regard to cancer. Now, why might Eysenck think that? Well, he was funded by many tobacco companies (Rose, 2010; Smith, 2019): he accepted money from them to attempt to undermine the strong correlation between smoking and cancer. Between 1977 and 1989, Eysenck accepted about £800,000 from tobacco companies. He is not alone in holding erroneous beliefs such as this, however.

Biostatistician Ronald Fisher (1890-1962), a pipe smoker himself and the inventor of many statistical techniques still used today, also held the erroneous belief that smoking was not causal in regard to cancer (Parascandola, 2004). Fisher (1957) argued in a letter to the British Medical Journal that while there was a correlation between smoking and the acquisition of lung cancer, “both [are] influenced by a common cause, in this case the individual genotype.” He went on to add that “Differentiation of genotype is not in itself an unreasonable possibility”, since it has been shown that genotypic differences in mice precede differences “in the frequency, age-incidence and type of the various kinds of cancer.”

So, if we look at the chain it goes like this: people smoke; people smoking is related to incidences of cancer; but it does not follow that, because people smoke, the smoking is the cause of cancer, since an unknown third factor could cause both. So we have four hypotheses: (1) smoking causes lung cancer; (2) lung cancer causes smoking; (3) both smoking and lung cancer are caused by an unknown third factor (in this case, the individual genotype); and (4) the relationship is spurious. Fisher was of the belief that “although lung cancer occurred in cigarette smokers it did not necessarily follow that the cancer was caused by cigarettes because there might have been something in the genetic make up of people destined to have lung cancer that made them addicted to cigarettes” (Cowen, 1999). Arguments of this type were popular in the 19th and 20th centuries—what I would term ‘genetic determinist’ arguments, in that genes dispose people to certain behaviors. In this case, genes disposed people to lung cancer and made them addicted to cigarettes.

Now, the argument is as follows: smoking, while correlated with cancer, is not causal in regard to cancer; those who choose to smoke would have acquired cancer anyway, as they were predisposed both to smoke and to acquire cancer at a given age. We now know, of course, that such claims are ridiculous—no matter which “scientific authorities” they come from. Fisher’s idea was that differences in genotype caused differences in cancer acquisition and, along with it, caused people either to acquire the behavior of smoking or not. While at the time such an argument could have been seen as plausible, the mounting evidence against it did nothing to sway Fisher’s belief that smoking did not outright cause lung cancer.

The fact that smoking caused lung cancer was initially resisted by the mainstream press in America (Cowen, 1999). Cowen (1999) notes that Eysenck stated that, just because smoking and lung cancer were statistically associated, it did not follow that smoking caused lung cancer. Of course, when thinking about what causes an observed disease, we must look at the habits the affected individuals share. If those with similar habits are more likely to show the hypothesized outcome (in this case, smokers having a higher incidence of lung cancer), then it would not be erroneous to conclude that the habit in question is a driving factor behind the disease.

It just so happens that we now have good sociological research on the foundations of smoking. Cockerham (2013: 13) cites Hughes’ (2003) Learning to Smoke: Tobacco Use in the West, which describes the five stages that smokers go through: “(1) becoming a smoker, (2) continued smoking, (3) regular smoking, (4) addicted smoking, and, for some, (5) stopping smoking.” Most people report their first few times smoking cigarettes as unpleasant, but power through it to become a part of the group. Smoking becomes something of a social ritual for kids in high school, with kids being taught how to light a cigarette and how to inhale properly. For many, starting to smoke is a social activity done with friends; just as some are social drinkers, they are social smokers. There is good evidence that, for many, their journey as smokers begins with and depends more on their social environment than on actual physical addiction (Johnson et al, 2003; Haines et al, 2009).

One individual interviewed in Johnson et al (2003: 1484) stated that “the social setting of it all [smoking] is something that is somewhat addictive itself.” So, not only is nicotine the addictive substance on the mind of the youth; so too is the social situation in which the smoking occurs. The need to fit in with their peers is one important driver for the beginning—and continuance—of the behavior of smoking. So we now have a causal chain linking smoking, the social, and disease: youths are influenced or pressured to smoke by their social group, which then leads to addiction and, eventually, health problems such as lung cancer.

The fact that the etiology of smoking is social leads us to a necessary conclusion: change the social network, change the behavior. Just as people begin smoking in social groups, so too do people quit smoking in social groups (Christakis and Fowler, 2008). We can then state, on the basis of the cited research, that the social is ultimately causal in the etiology of lung cancer—the vehicle of cancer-causation being the cigarettes pushed by the social group.

Eysenck and Fisher, two pioneers of statistical methods in psychology, were blinded by self-interest. It is very clear that the beliefs of both men were driven by Big Tobacco and the money they acquired from it. Philosopher Donald Davidson famously argued that reasons are causes for actions (Davidson, 1963). Eysenck’s and Fisher’s “pro-belief” (in this case, that smoking does not cause lung cancer) would be their “pro-attitude”, and their beliefs led to their actions (taking money from Big Tobacco in an attempt to show that cigarettes do not cause cancer).

The etiology of lung cancer as brought on by smoking is multifactorial, multilevel, and complex. We do have ample research showing that the beginnings of smoking for a large majority of smokers are social in nature. They begin smoking in social groups, and their identity as a smoker is then refined by others in their social group who see them as “a smoker.” Since individuals both begin and quit smoking in groups, it follows that the acquisition of lung cancer can be looked at as a social phenomenon as well, since most people start smoking in a peer group.

The lung cancer-smoking debate is one of the best examples of the dictum cum hoc, ergo propter hoc—or, correlation does not equal causation (indeed, this is where the dictum first originated). While Fisher and Eysenck did hold to that view in regard to the etiology of lung cancer (they did not believe that, since smokers were more likely to acquire lung cancer, smoking therefore caused lung cancer), it does speak to the biases the two men had in their personal and professional lives. These beliefs were disproven by showing a dose-dependent relationship between smoking and lung cancer: heavier smokers had a higher incidence of the disease, which tapered off the less an individual smoked. Fisher’s belief, though, that differences in genotype caused both the behavior that led to smoking and the lung cancer itself, while plausible at the time, was nothing more than a usual genetic determinist argument. We now know that genes are not causes on their own; they do not cause traits irrespective of their uses for the physiological system (Noble, 2012).

Everyone is biased—everyone. Now, this does not mean that objective science cannot be done. But what it does show is that “… scientific ideas did not develop in a vacuum but rather reflected underlying political or economic trends” (Hilliard, 2012: 85). This, and many more examples, speak to the biases of scientists. It is for reasons like this that science is about the reproduction of evidence. And, for that, the ideas of Eysenck and Fisher will be left in the dustbin of history.

African Neolithic Part 2: Misunderstandings Continue

The last time I posted on this subject, I was dissecting the association between magical thinking and technological progress among Africans proposed by Rindermann. In general, I found it wanting of an up-to-date understanding of African anthropology, whether in material culture or belief systems. This time, however, the central subject can’t be dismissed on grounds of ignorance. In fact, in a rather awkward way, his dismissal is on the grounds of how much he knows in both his field and his own writings.

The late Henry Harpending was listed by the SPLC as a “white nationalist”, though the specific content of his quotes in regard to Africans, in light of his admittedly impressive contributions to southwest African anthropology, is the major focus here rather than classifying the nature of his bias. The claims in question are:

  1. He has never met an African who has a hobby, that is, one with the inclination to work.
  2. Hunter-gatherers are impulsive, lazy, and violent.
  3. Africans are more interested in breeding than in raising children.
  4. Africans, Baltimore Aframs, and PNG natives all share the same behavior in regard to points 2 and 3.
  5. Superstitions are pan-African, and the only Herero atheist he ever met had an ethnic German father.

So Harpending seemingly has the edge, given his background in anthropology and his specific experiences with Africans…this only makes the response more painful to articulate.

  1. This will set the theme for the nature of my responses…his own work contradicts these assertions. The Herero of Botswana, descended from refugees and the main strain of Bantu-speaking Africans he had studied, were described and calculated as prosperous in regard to cattle per capita and ownership of rural homesteads that stand apart from typical Botswana farming infrastructure.

    Today, they are perceived as one of the wealthiest ethnic groups in Botswana and Namibia. They are known for their large herds, for their skill at managing cattle, and for their endogamy and staunch ethnicity even while participating fully in the developing economy and educational system of Botswana.

    His research even noted that age five is when Herero begin herding. A similar phenomenon is noted among the Dinka, which prompted Dutton to review the literature on their intelligence scores.

  2. Violence among the Khoi-San groups, I’ll admit, has been underestimated. However, the Hadza, known for their discouragement of conflict, express this through much lower rates of violent deaths compared to most other groups. The general consensus is that there is a mix, with the general trend toward higher rates but with cautious interpretation of the causes. The charge of laziness, however, is once again unfounded given his own work on the lesser resources faced by mobile bands of foragers compared to sedentary ones in labor camps. The same link on the Hadza also pinpoints the hours spent working in hunter-gatherer life and accounts for the difficulty of those hours.
  3. Harpending actually made a model of cads versus dads that he attributed to non-genetic factors. Otherwise, we are left with his work on the oddity of “Herero households”.

    If women cannot provision themselves and their offspring without assistance, then the “cad/breeder” corner of Fig. 2 is not feasible, and we are left with “dad/feeder.” Draper and Harpending argue that this is true of the Ju/’hoansi, and other mobile foragers in marginal habitats. Among swidden agriculturalists, on the other hand, female labor is more productive, and men can afford to do less work. The theory thus predicts that such populations will be more likely to fall into the “cad/breeder” equilibrium, as in fact they seem to do. Although this theory is couched in Darwinian terms, Harpending and Draper do not see genetic evolution as the engine that accounts for variation within and among societies. Instead, they suggest a facultative adaptation: humans have “evolved the ability to sense household structure in infancy” and to alter their developmental trajectories in response to what is learned during this early critical period (1980, p. 63).

    There does not seem to be any durable group of associated individuals that we could usefully characterize as a household among the Herero. If forced we would say that the Herero have two parallel types of households. The “male household” is the homestead, consisting of an adult male, his wives, sisters, and other relatives, and it is defined by the herd and the kraals that he maintains for the herd. The “female household” is a woman and the children for whom she has responsibility, localized in a hut or hut cluster within a larger homestead. These are gynofocal households, rather than matrifocal households, since matrifocal implies mother and offspring while the Herero unit is a woman and children under the care of that woman. These children may be her own, her grandchildren, children of relatives, or even children leased from other tribes to work on the cattle. Men do not appear prominently in daily domestic life. They are gone at first light pursuing their own interests and careers with cattle, with hunting parties, or with other stereotyped male activities. Similarly, women are not prominently present at male areas like the wells where the cattle are watered. There is, then, not a Herero household, but rather there is a Herero male household that includes cattle and female households, and these females may be wives, or sisters, or other female relatives. The female households are the loci of domestic production and consumption.

    However, it does not follow that a lack of interpersonal interaction means a lack of acknowledged parenthood within households. One line of evidence is residential association.


    We interviewed 161 adult Herero (112 females, 49 males) intensively about the residence of themselves, their siblings with whom they share a mother or a father and about their legal children (children born in marriage or children in whom they had purchased parental rights). None of the men we interviewed whose fathers were still alive (n = 10) considered his residence to be in a homestead different from his father’s. Only two of the men had sons with residences elsewhere – one was a child who had been purchased but was living with the mother and the other was a child borne by a wife from whom he was now divorced. We also heard of very few men ascertained in several hundred shorter demographic interviews that were residing in a homestead other than their fathers’. Most of the men (24/39) with deceased fathers had their own homesteads. Brothers who had both the same mother and father were more likely to stay together, however, than brothers who had different mothers.

    The other comes from Harpending’s own blog post regarding his Herero friend, who claims the children of his new wife as part of his household despite not having conceived them. Note that he also describes the man as “prosperous”.

  4. I sadly lack data on Bantu rates of violence, though I swear I once found data showing them to be low compared to those of the Khoi-San. If anybody has quantified data like that in the link regarding the Hadza, it would be appreciated. In regard to parenting, however, the data comparing non-resident fathers do not reflect his claim. Regarding Africans, here is one account from a woman’s perspective.
  5. This point once again warrants mentioning that superstitions are still quite common in non-Western societies like China, in regard to evil spirits and luck. Likewise, traditions are known to be modified or dropped among Herero in major urban centers. The Herero whom Harpending encountered and labeled “employees” nonetheless grew up near the study areas.

Before I end this, I want to cover some further discrepancies from another author who refers to his work, Steve Sailer.

The small, yellow-brown Bushmen, hunters who mated more or less for life and put much effort into feeding their nuclear families, reminded Henry of his upstate New York neighbors. If fathers didn’t work, their children would go hungry.

In contrast, the Bantu Herero (distant relatives of American blacks) were full of surprises. In general, black African men seemed less concerned with bringing home the bacon to provision their children than did Bushmen dads.

This doesn’t deviate too far from what Harpending explains. That comes later.

In black African farming cultures, women do most of the work because agriculture involves light weeding with hoes rather than heavy plowing. Men are less expected to contribute functionally to their children’s upkeep, but are expected to be sexy.

So technically this is correct, but only to a certain degree in regard to the division of labor. It’s particular to farming schemes where root/tuber agriculture is done, and in those areas forest clearing is done by males.

One parallel is that Baumann views climatic and environmental factors as closely associated with differences in the participation of the sexes in agriculture. He observes that the northern boundary of women’s predominance in agriculture is constrained by the limits of the tropical forest region, and that the boundary of female agriculture also tends to coincide with that between root crops and cereal grains. Baumann’s view of the economies of labor is also similar to our own. He emphasizes that land clearing is more difficult in the forest than in the savannah, and that males often perform clearing in the forest zones in spite of the predominance of female labor, whereas soil preparation is more difficult in the savannah than in the forest. He notes the higher male participation in agriculture in the savannah region of the Sudan than in the West African and Congo forest regions, and more generally, that women are much more likely to participate in root cultivation than in cereal cultivation.

Not to mention other activities such as crafts and trading. Baumann’s old scheme, as it stands, is still simplistic. More research shows that the division of labor between the sexes shifts depending on circumstances. See this on Igbo yam farming or West African cocoa farming. This trend of shifting continues into the modern day.

In many places in Africa, traditionally there has been a strict division of labor by gender in agriculture. This division of labor may be based on crop or task, and both types of division of labor by gender may occur simultaneously. Women may mobilize male labor for some tasks involved in their crops and men frequently mobilize women’s labor for crops that they control. These divisions are not static and may change in response to new economic opportunities.

Likewise, male development in connection with the father is represented in this early anthropological text through inheritance of property and apprenticeship. Collective fatherhood through the extended family is also recognized, with actual absence being largely due to migrant labor in southern African countries.

The “sexy” part is rather presumptuous and absent from the text, where it is in fact the woman who is scrutinized to uphold standards in the chapter on Ibo courtship, which seems to be widespread. The simple and lazy role reversal is obvious, as it assumes that female labor undermined the general trend of patriarchy since women weren’t always dependent on the male.

So, with all this said, what is there to make of it? As far as direct connections to the Herero go, Harpending didn’t show any particular malice. Any such malice was directed more toward the Western phenomena with which he draws parallels. As for his conference comments, obvious bias is just that. His blog posts don’t even read as such, clearly contradicting those comments with accounts of industry among Africans according to his own experience.

It may sadly suggest the type of filter through which experience is seen in this “field”.