NotPoliticallyCorrect


McNamara’s Morons

2650 words

The Vietnam War can be said to be the only war that America has lost. Due to a lack of men volunteering for combat (and a large number of young men getting exemptions from service from their doctors, among other ways), standards were lowered in order to meet quotas. The military recruited men with low test scores who came to be known as ‘McNamara’s Morons’—a group of some 357,000 men. With ‘mental standards’ lowered, the US had the men to fight in the war.

This decision was made by Secretary of Defense Robert McNamara and Lyndon B. Johnson. It came to be known as ‘McNamara’s Folly’—the title of a book on the subject (Hamilton, 2015). Hamilton (2015: 10) writes: “A total of 5,478 low-IQ men died while in the service, most of them in combat. Their fatality rate was three times as high as that of other GIs. An estimated 20,270 were wounded, and some were permanently disabled (including an estimated 500 amputees).”

Hamilton spends the first part of the book describing his friendship with a man named Johnny Gupton, who could neither read nor write and who spoke in hillbilly phrasing. According to Hamilton (2015: 14):

I was surprised that he knew nothing about the situation he was in. He didn’t understand what basic training was all about, and he didn’t know that America was in a war. I tried to explain what was happening, but at the end, I could tell that he was still in a fog.

Hamilton describes an instance in which the trainees were told not to write anything “raunchy” on the postcards they were to send home; the sergeant said, “Don’t be like that trainee who went through here and wrote ‘Dear Darlene. This is to inform you that Sugar Dick has arrived safely…’” (Hamilton, 2015: 16). Hamilton went on to write that Gupton did not ‘get’ the joke while “There was a roar of laughter” from everyone else. Since Gupton could not read or write, Hamilton wrote his postcard for him, but Gupton did not know his own address and could not give the full name of a family member, only “Granny.” He could not tie his boots correctly, so Hamilton did it for him every morning. But he was a great boot-shiner, having the shiniest boots in the barracks.

Writing home to his fiancée, Hamilton (2015: 18) told her that Gupton’s dogtags “provide him with endless fascination.”

Gupton had trouble distinguishing between left and right, which prevented him from marching in step (“left, right, left, right”) and knowing which way to turn for commands like “left face!” and “right flank march!” So Sergeant Boone tied an old shoelace around Gupton’s right wrist to help him remember which side of his body was the right side, and he placed a rubber band on the left wrist to denote the left side of the body. The shoelace and the rubber band helped, but Gupton was a bit slow in responding. For example, he learned how to execute “left face” and “right face,” but he was a fraction of a second behind everyone else.

Gupton was also not able to make his bunk to Army standards, so Hamilton and another soldier did it for him. Hamilton stated that Gupton could also not distinguish between sergeants and officers. “Someone in the barracks discovered that Gupton thought a nickel was more valuable than a dime because it was bigger in size” (Hamilton, 2015: 26). So after that, Hamilton took Gupton’s money and rationed it out to him.

Hamilton then describes a time when he was asked by a captain what they were doing and about the situation they were in, to which he gave the correct responses. The captain then asked Gupton, “Which rank is higher, a captain or a general?” to which Gupton responded, “I don’t know, Drill Sergeant.” (He was supposed to say ‘Sir.’) The captain then said to Hamilton:

Can you believe this idiot we drafted? I tell you who else is an idiot. Fuckin’ Robert McNamara. How can he expect us to win a war if we draft these morons? (Hamilton, 2015: 27)

Captain Bosch’s contemptuous remark about Defense Secretary McNamara was typical of the comments I often heard from career Army men, who detested McNamara’s lowering of enlistment standards in order to bring low-IQ men into the ranks. (Hamilton, 2015: 28)

Hamilton heard one sergeant tell others that “Gupton should absolutely never be allowed to handle loaded weapons on his own” (Hamilton, 2015: 41). Gupton was then sent to kitchen duty where, for 16 hours a day (5 am to 9 pm), he would have to peel potatoes, clean the floors, do the dishes, etc.

Hamilton (2015: 45) then describes another member of “The Muck Squad,” a man named Murdoch in a different platoon, who “was unfazed by the dictatorial authority of his superiors.” When an officer screamed at him for not speaking or acting correctly, he would give a slightly related answer. When asked if he had shaved one morning, he “replied with a rambling of pronouncements about body odor and his belief that the sergeants were stealing his soap and shaving cream” (Hamilton, 2015: 45). He was thought to be faking insanity, but he kept getting weirder; Hamilton was told that he would talk to an imaginary person in his bunk at night.

Murdoch was then told to find an electric floor buffer to buff the floors, and he “wandered around in battalion headquarters until he found the biggest office, which belonged to the battalion commander. He walked in without knocking or saluting or seeking permission to speak, and asked the commander—a lieutenant colonel—for a buffer.” While in the office, he “proceeded to play with a miniature cannon and other memorabilia on the commander’s desk…” (Hamilton, 2015: 45). Murdoch was then found to have schizophrenia and was given a medical discharge and sent home.

Right before the physical fitness tests that determined whether trainees qualified, young-looking sergeants shaved their heads, posed as the trainees, and took the tests in their place—Gupton got a 95 while Hamilton got an 80, which upset Hamilton because he knew he could have scored 100.

Hamilton ended up nearly getting heatstroke (with a 105-degree fever) and so was separated from Gupton. He eventually contacted someone who had spent time with Gupton, and who did not “remember much about Gupton except that he was protected by a friendly sergeant, who had grown up with a ‘mentally handicapped’ sister and was sensitive to his plight” (Hamilton, 2015: 51). This sergeant gave Gupton only menial jobs. Hamilton discovered that Gupton had died at age 57 in 2002.

Hamilton was then sent to Special Training Company: while he was out with his fever he had missed important training days, so his captain sent him to the Company to get “rehabilitation” before returning to another training company. They had to do log drills and a Physical Combat Proficiency Test, which most men failed; you needed 60 points per event to pass. The first event was the low crawl: covering 40 yards of dirt as fast as possible while keeping your body against the ground. “Most of the men failed to get any points at all because they were disqualified for getting up on their knees. They had trouble grasping the concept of keeping their trunks against the ground and moving forward like supple lizards” (Hamilton, 2015: 59).

The second event was the horizontal ladder—imagine a jungle gym, and think of swinging like an ape through the trees. Hamilton, though he admits he was not strong, traversed 36 rungs in under a minute for the full 60 points. When he attempted to show the men how to do it and watched them try, “none of the men were able to translate the idea into action” * (Hamilton, 2015: 60).

The third event was called run, dodge, and jump. They had to zig-zag, dodge obstacles, and side-step people and finally jump over a shallow ditch. To get the 60 points they had to make 2 trips in 25 seconds.

Some of the Special Training men were befuddled by one aspect of the course: the wooden obstacles had directional arrows, and if you failed to go in the right direction, you were disqualified. A person of normal intelligence would observe the arrows ahead of time and run in the right direction without pausing or breaking stride. But these men would hesitate in order to study the arrows and think about which way to go. For each second they paused, they lost 10 points. A few more men were unable to jump across the ditch, so they were disqualified. (Hamilton, 2015: 60-61)

Fourth was the grenade throw. They had to throw 5 training grenades 90 feet, with scoring similar to that of a dartboard: the closer you are to the bull’s eye, the higher your score. They had to throw from one knee in order to simulate battle conditions, but “Most of the Special Training men were too weak or uncoordinated to come close to the target, so they got a zero” * (Hamilton, 2015: 61). Most of them tried to throw it on a straight line, like a baseball catcher, rather than in an arc, like a center fielder throwing to the catcher to get someone out at home plate. “…the men couldn’t understand what he was driving at, or else they couldn’t translate it into action. Their throws were pathetic little trajectories” (Hamilton, 2015: 62).

Fifth was the mile run: they had to do it in eight minutes and 33 seconds, in combat boots. The other men in his group would immediately sprint, tiring themselves out; they could not, according to Hamilton, “grasp or apply what the sergeants told them about the need to maintain a steady pace (not too slow, not too fast) throughout the entire mile.”

Hamilton then discusses another instance in which sergeants told a soldier that there was a cat behind the garbage can and to pick it up. But the ‘cat’ turned out to be a skunk, and he spent the next two weeks in the hospital being treated for possible rabies. “He had no idea that the sergeants had played a trick on him.”

It was true that most of us were unimpressive physical specimens—overweight or scrawny or just plain unhealthy-looking, with unappealing faces and awkward ways of walking and running.

[…]

Sometimes trainees from other companies, riding by in trucks, would hoot at us and shout “morons!” and “dummies!” Once, when a platoon marched by, the sergeant led the men in singing,

If I had a low IQ,
I’d be Special Training, too!

(It was sung to the tune of the famous Jody songs, as in “Ain’t no use goin’ home/Jody’s got your girl and gone.”)

Hamilton states that there was “One exception to the general unattractiveness,” who “was Freddie Hensley.” He was consumed with “dread and anxiety,” always sighing. Freddie ended up being too slow to pass the rifle test with moving targets. Hamilton had wondered “why Freddie had been chosen to take the rifle test, but it soon dawned on me that he was selected because he was a handsome young man. Many people equate good looks with competence, and ugliness with incompetence. Freddie didn’t look like a dim bulb” (Hamilton, 2015: 72).

Freddie also didn’t know some ‘basic facts’, such as that lightning causes thunder. “As Freddy and I sat together on foot lockers and looked out the window, I passed the time by trying to figure out how close the lightning was. … I tried to explain what I was doing, and I was not surprised that Freddy could not comprehend. What was surprising was my discovery that Freddy did not know that lightning caused thunder. He knew what lightning was, he knew what thunder was, but he did not know that one caused the other” (Hamilton, 2015: 72).


The test used while the US was in Vietnam was the AFQT (Armed Forces Qualifying Test) (Maier, 1993: 1). As Maier (1993: 3) notes—as does Hamilton—men who chose to enlist could choose their occupation from a list, whereas those who were drafted had their occupations chosen for them.

For example, during the Vietnam period, the minimum selection standards were so low that many recruits were not qualified for any specialty, or the specialties for which they were qualified had already been filled by people with higher aptitude scores. These people, called no-equals, were rejected by the algorithm and had to be assigned by hand. Typically they were assigned as infantrymen, cooks, or stevedores. Maier (1993: 4)
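To make the assignment scheme concrete, here is a toy sketch of the procedure the quote describes (my construction, not Maier’s actual system): specialties are filled from the top scorers down, and anyone who qualifies for nothing still open falls out as a ‘no-equal’ and is hand-assigned to one of the default jobs Maier names. All names, scores, and cut-offs below are hypothetical.

```python
# Toy sketch of aptitude-based assignment with a "no-equals" fallback.
def assign(recruits, specialties, defaults=("infantryman", "cook", "stevedore")):
    """recruits: list of (name, score); specialties: dict of
    specialty -> [min_score, open_slots]. Mutates `specialties`."""
    assignments, no_equals = {}, []
    for name, score in sorted(recruits, key=lambda r: -r[1]):  # best scores first
        for spec, slot in specialties.items():
            min_score, open_slots = slot
            if score >= min_score and open_slots > 0:
                assignments[name] = spec
                slot[1] -= 1
                break
        else:
            no_equals.append(name)  # "rejected by the algorithm"
    # The hand-assignment step: no-equals cycle through the default jobs.
    for i, name in enumerate(no_equals):
        assignments[name] = defaults[i % len(defaults)]
    return assignments

recruits = [("Able", 92), ("Baker", 55), ("Charlie", 14), ("Dog", 12)]
specialties = {"radio operator": [80, 1], "clerk": [50, 1]}
print(assign(recruits, specialties))
# {'Able': 'radio operator', 'Baker': 'clerk',
#  'Charlie': 'infantryman', 'Dog': 'cook'}
```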

Most of McNamara’s Morons

came from economically unstable homes with non-traditional family structures. 70% came from low-income backgrounds, and 60% came from single-parent families. Over 80% were high school dropouts, 40% read below a sixth grade level, and 15% read below a fourth grade level. 50% had IQs of less than 85. (Hsiao, 1989: 16-17)

Such tests were constructed from their very beginnings, though, to get this result.

… the tests’ very lack of effect on the placement of [army] personnel provides the clue to their use. The tests were used to justify, not alter, the army’s traditional personnel policy, which called for the selection of officers from among relatively affluent whites, the assignment of whites of lower socioeconomic status to lower-status roles, and the assignment of African-Americans to the bottom rung. (Mensh and Mensh 1991: 31)


Reading through this book, it is clear that the individuals Hamilton describes had learning disabilities. We do not need IQ tests to identify individuals who clearly suffer from learning disabilities and other abnormalities (Sigel, 1989). Jordan Peterson claims that the military won’t accept people with IQs below 83, while Gottfredson states that

IQ 85 is a second important minimum threshold because the U.S. military sets its minimum enlistment standards at about this level. (2004, 28)

The laws in some countries, such as the United States, do not allow individuals with IQs below 80 to serve in the military because they lack adequate trainability. (2004, 18)

What “laws” do we have here in America, specifically, that disallow “individuals with IQs below 80 to serve in the military”? ** Where are the references? Why do Peterson and Gottfredson both make unevidenced claims when the claim in question most definitely needs a reference?

McNamara’s Folly is a good book; it shows why we should not send people with learning, physical, or mental disabilities to war. However, from the descriptions Hamilton gave, we did not need to learn their IQs to know that they could not be soldiers. It was clear as day that they weren’t all there, and their IQ scores are irrelevant to that. The people described in the book clearly had developmental disabilities; how is IQ causal in this regard? IQ is an outcome, not a cause (Howe, 1997).

Both Jordan Peterson and Linda Gottfredson claim that the military will not accept a recruit with an IQ score of 80 or below; but they simply assert it, and my attempts to validate the claim by searching through military papers came up empty. In any case, IQ scores are not needed to learn that an individual has a learning disability (as those described in the book clearly had). The unevidenced claims from Gottfredson and Peterson should not be accepted. And one’s IQ is not causal in regard to one’s inability to, say, become a soldier; other factors are important, not a reified number we call ‘IQ.’ Their IQ scores were not their downfalls.

* Note that if one does not have a good mind-muscle connection, then one won’t be able to carry out novel tasks such as the horizontal ladder event described above.

1/20/2020 Edit ** I did not look hard enough for a reference for the claims. It appears that there is indeed a law (10 USC Sec. 520) under which those who score in the 1st through 9th percentiles on the AFQT (category V) are deemed untrainable recruits and barred from induction. The ASVAB is not a measure of ‘general intelligence’ but a measure of “acculturated learning” (Roberts et al, 2000). The ‘IQ test’ used in Herrnstein and Murray’s The Bell Curve was the AFQT, and it “best indicates poverty” (Palmer, 2018). This letter relates AFQT scores to the Wechsler and Stanford-Binet—where the cut-off is 71 for the S-B and 80 for the Wechsler (both category V). Returning to Mensh and Mensh (1991), such tests were, from their very beginnings, used to justify the existing military order, placing lower-class recruits in the more menial jobs.
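For reference, the percentile bands behind that claim can be made explicit. A minimal sketch of the AFQT percentile-to-category mapping as it is commonly described (the bands are my summary of standard descriptions, not text quoted from 10 USC Sec. 520):

```python
def afqt_category(percentile: int) -> str:
    """Map an AFQT percentile score (1-99) to its commonly cited category."""
    if not 1 <= percentile <= 99:
        raise ValueError("AFQT percentiles run from 1 to 99")
    if percentile >= 93: return "I"
    if percentile >= 65: return "II"
    if percentile >= 31: return "III"
    if percentile >= 10: return "IV"
    return "V"  # percentiles 1-9: the group barred from induction

print(afqt_category(8))   # 'V'
print(afqt_category(50))  # 'III'
```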

The Oppression of the High IQs

1250 words

I’m sure most people remember their days in high school. Popular kids, goths, preppies, losers, jocks, and geeks are some of the groups you may find in the typical American high school. Each group most likely had a rival group it didn’t like. For the geeks, the rivals were most likely the jocks: they get beaten on, made fun of, and probably sit alone at lunch.

Should there be legal protection for such individuals? One psychologist argues there should be. Sonja Falck of the University of East London specializes in high-“ability” individuals and states that slurs like “geek” and “nerd” should be classed as hate crimes, under the same laws as homophobic, religious, and racial slurs. She has even published a book on the subject, Extreme Intelligence: Development, Predicaments, Implications (Falck, 2019). (Also see The Curse of the High IQ; see here for a review.)

She wants anti-IQ slurs to be classified as hate crimes. Sure, being two percent of the population (on a constructed normal curve) does mean they are a “minority group”, just like those at the bottom two percent of the distribution. Some IQ-ists may say “If the bottom two percent are afforded special protections then so should the top two percent.”
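Note where the two-percent figure comes from: nothing but the curve the test is constructed around. A quick check under the normality assumption (mean 100; whether a cut-off of 132 marks off roughly two percent depends on which SD the test uses):

```python
# How much of a constructed normal curve sits at IQ >= 132?
from scipy.stats import norm

for sd in (15, 16):
    share = norm.sf(132, loc=100, scale=sd)  # P(IQ >= 132)
    print(f"SD {sd}: {share:.1%} score 132 or above")
# SD 15: ~1.6%; SD 16: ~2.3%. The 'top two percent' is a property
# of the curve the constructors chose, not a discovered head count.
```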

While hostile or inciteful language about race, religion, sexuality, disability or gender identity is classed as a hate crime, “divisive and humiliating” jibes such as ‘smart-arse’, ‘smart alec’, and ‘know-it-all’ are dismissed as “banter” and used with impunity against the country’s high-IQ community, she said.
According to Dr Falck, being labelled a ‘nerd’ in the course of being bullied, especially as a child, can cause psychological damage that may last a lifetime.
Extending legislation to include so-called ‘anti-IQ’ slurs would, she claims, help stamp out the “archaic” victimisation of more than one million Britons with a ‘gifted’ IQ score of 132 or over.
Her views are based on eight years of research and after speaking to dozens of high-ability children, parents and adults about their own experiences.
Non-discrimination against those with very high IQ is also supported by Mensa, the international high IQ society and by Potential Plus UK, the national association for young people with high-learning potential. (UEL ACADEMIC: ANTI-IQ TERMS ARE HATE CRIME’S ‘LAST TABOO’)

I’m not going to lie—if I ever came across a job application and the individual had on their resume that they were a “Mensa member” or a member of some other high IQ club, it would go into the “No” pile. I would assume that is discrimination against high IQ individuals, no?

It seems that Dr. Falck is implying that terms such as “smart arse”, “geek”, and “nerd” are similar to “moron” (a term with historical significance, coined by Henry Goddard; see Dolmage, 2018), “idiot”, “dumbass”, and “stupid”, and so should be covered by the same type of hate crime legislation. Presumably this is because people deemed to be “morons” or “idiots” were sterilized in America as the eugenics movement came to a head in the 1900s.

Low-IQ individuals were sterilized in America in the 1900s, and the translated Binet-Simon test (and other, newer tests) were used for those ends. The Eugenics Board of North Carolina sterilized thousands of low-IQ individuals; around 60,000 people were sterilized in America in total before the 1960s, and IQ was one way of deciding whom to sterilize. Sterilization in America (which is not common knowledge) continued up until the 80s in some U.S. states (e.g., California).

There was true, real discrimination against low-IQ people during the 20th century, and so laws were enacted to protect them. Like the ‘gifted’, they comprise 2 percent of the population (on a curve constructed by the tests’ constructors), and low-IQ individuals are afforded protection by the law. Therefore, states the IQ-ist, high-IQ individuals should be afforded protection by the law as well.

But being called a ‘nerd’, ‘geek’, ‘smarty pants’, ‘dweeb’, or ‘smart arse’ (Falck calls these ‘anti-IQ words’) is not the same as being called terms that originated during the eugenic era of the U.S. Falck wants the term ‘nerd’ to be a ‘hate-term’; the British Government, she says, should ‘force societal change’ and give special protections to those with high IQs. People freely use terms like ‘moron’ and ‘idiot’ in everyday speech—along with the aforementioned terms cited by Falck.

Falck wants ‘intelligence’ to be afforded the same protections under the Equality Act of 2010 (even though ‘intelligence’ here just means scoring high on an IQ test and qualifying for Mensa; note that Mensans have a higher chance of psychological and physiological overexcitability; Karpinski et al, 2018). Now, Britain isn’t America (where we, thankfully, have free speech laws), but Falck wants there to be penalties for me if I call someone a ‘geek.’ How, exactly, is this supposed to work? Like with my example above of putting a resume with ‘Mensa member’ in the “No” pile? Would that be discrimination? Or is it my choice as an employer who I want to work for me? Where do we draw the line?

By way of contrast, intelligence does not neatly fit within the definition of any of the existing protected characteristics. However, if a person is treated differently because of a protected characteristic, such as a disability, it is possible that derogatory comments regarding their intelligence might form part of the factual matrix in respect of proving less favourable treatment.

[…]

If the individual is suffering from work-related stress as a result of facing repeated “anti-IQ slurs” and related behaviour, they might also fall into the definition of disabled under the Equality Act and be able to bring claims for disability discrimination. (‘Anti-IQ’ slurs: Why HR should be mindful of intelligence-related bullying)

How would one know if the individual in question is ‘gifted’? Acting weird? They tell you? (How do you know if someone is a Mensan? Don’t worry, they’ll tell you.) Calling people names because they do X? That is ALL a part of workplace banter—better call up OSHA! What does it even mean for one to be mistreated in the workplace due to their ‘high intelligence’? If someone I work with seems to be doing things right, not messing up, and is good to work with, there will be no problem. On the other hand, if they start messing up and are bad to work with (making things harder for the team, not being a team player), there will be a problem—and if their little quirks mean they have a ‘high IQ’ and I’m being an IQ bigot, then Falck would want there to be penalties for me.

I have yet to read the book (I will get to it after I read and review Murray’s Human Diversity and Warne’s Debunking 35 Myths About Human Intelligence—going to be a busy winter for me!), but the premise of the book seems strange: where do we draw the line on which ‘minority group’ gets afforded special protections? The proposal is insane; name-calling (such as the cited examples in the articles) is normal workplace banter (you, of course, need thick skin, not a habit of running to HR to rat out your co-workers). It seems like Mensa is looking out for its own, attempting to afford them protections that they do not need. High IQ people are clearly oppressed and discriminated against in society and so need to be afforded special protection by the law. (sarcasm)

This, though, just speaks to the insanity of special group protection and the law. I thought that this was a joke when I read these articles—then I came across the book.

Genes and Disease: Reductionism and Knockouts

1600 words

Genetic reductionism refers to the belief that understanding our genes will have us understand everything from human behavior to disease. The behavioral genetic approach claims to be the best way to parse through social and biological causes of health, disease, and behavior. The aim of genetic reductionism is to reduce a complex biological system to the sum of its parts. While there was some value in doing so when our technology was in its infancy and we did learn a lot about what makes us “us”, the reductionist paradigm has outlived its usefulness.

If we want to understand a complex biological system then we shouldn’t use gene scores, heritability estimates, or gene sequencing. We should be attempting to understand how the whole biological system interacts with its surroundings—its environment.

Reductionists may claim that “gene knockout” studies can point us in the direction of genetic causation—“knockout” a gene and, if there are any changes, then we can say that that gene caused that trait. But is it so simple? Richardson (2000) puts it well:

All we know for sure is that rare changes, or mutations, in certain single genes can drastically disrupt intelligence, by virtue of the fact that they disrupt the whole system.

Noble (2011) writes:

Differences in DNA do not necessarily, or even usually, result in differences in phenotype. The great majority, 80%, of knockouts in yeast, for example, are normally ‘silent’ (Hillenmeyer et al. 2008). While there must be underlying effects in the protein networks, these are clearly buffered at the higher levels. The phenotypic effects therefore appear only when the organism is metabolically stressed, and even then they do not reveal the precise quantitative contributions for reasons I have explained elsewhere (Noble, 2011). The failure of knockouts to systematically and reliably reveal gene functions is one of the great (and expensive) disappointments of recent biology. Note, however, that the disappointment exists only in the gene-centred view. By contrast it is an exciting challenge from the systems perspective. This very effective ‘buffering’ of genetic change is itself an important systems property of cells and organisms.

Moreover, even when a difference in the phenotype does become manifest, it may not reveal the function(s) of the gene. In fact, it cannot do so, since all the functions shared between the original and the mutated gene are necessarily hidden from view. Only a full physiological analysis of the roles of the protein it codes for in higher-level functions can reveal that. That will include identifying the real biological regulators as systems properties. Knockout experiments by themselves do not identify regulators (Davies, 2009).

All that knocking out or changing genes/alleles will do is show us that trait T is correlated with gene G, not that T is caused by G. Merely observing a correlation between a change in genes, or a knockout, and a change in a trait tells us nothing about biological causation. Reductionism will not let us understand the etiology of disease; the discipline of physiology is not reductionist at all—it is a holistic discipline.

Lewontin (2000: 12) writes in the introduction to The Ontogeny of Information: “But if I successfully perform knockout experiments on every gene that can be seen in such experiments to have an effect on, say, wing shape, have I even learned what causes the wing shape of one species or individual to differ from that of another? After all, two species of Drosophila presumably have the same relevant set of loci.”

But the loss of a gene can be compensated for by another gene—a phenomenon known as genetic compensation. In a complex bio-system, when one gene is knocked out, another similar gene may take over the ‘role’ of the knocked-out gene. Noble (2006: 106-107) explains:

Suppose there are three biochemical pathways A, B, and C, by which a particular necessary molecule, such as a hormone, can be made in the body. And suppose the genes for A fail. What happens? The failure of the A genes will stimulate feedback. This feedback will affect what happens with the sets of genes for B and C. These alternate genes will be more extensively used. In the jargon, we have here a case of feedback regulation; the feedback up-regulates the expression levels of the two unaffected genes to compensate for the genes that got knocked out.

Clearly, in this case, we can compensate for two such failures and still be functional. Only if all three mechanisms fail does the system as a whole fail. The more parallel compensatory mechanisms an organism has, the more robust (fail-safe) will be its functionality.
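Here is a minimal simulation of the feedback scheme Noble describes (a sketch of my own, not Noble’s model): three parallel pathways contribute to one product, and a feedback loop up-regulates whichever pathways remain after a knockout.

```python
def product_output(expression):
    """Total product made by all pathways at their current expression levels."""
    return sum(expression.values())

def regulate(expression, knocked_out, setpoint=3.0, steps=200, gain=0.1):
    """Feedback regulation: raise expression of the intact pathways
    until the set-point is restored (or no pathways remain)."""
    for gene in knocked_out:
        expression[gene] = 0.0
    for _ in range(steps):
        error = setpoint - product_output(expression)
        if abs(error) < 1e-6:
            break  # output restored: the knockout is phenotypically 'silent'
        intact = [g for g in expression if g not in knocked_out]
        if not intact:
            break  # all mechanisms have failed: the system as a whole fails
        for g in intact:
            expression[g] += gain * error / len(intact)
    return expression

genes = {"A": 1.0, "B": 1.0, "C": 1.0}
print(regulate(dict(genes), {"A"}))            # B and C are up-regulated
print(regulate(dict(genes), {"A", "B"}))       # C alone compensates
print(regulate(dict(genes), {"A", "B", "C"}))  # output stays at zero
```

Knocking out A changes nothing observable at the level of the product, which is exactly why a knockout that produces no phenotypic difference cannot be read as ‘gene A does nothing.’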

The Neo-Darwinian Synthesis has trouble explaining such compensatory genetic mechanisms—but the systems view (Developmental Systems Theory, DST) does not. Even if a knockout affects the phenotype, we cannot say that the gene outright caused the phenotype; the system was perturbed, and it responded as a whole.

The role of genetic networks in development became clear when geneticists used knockout techniques to disable genes known to be implicated in the development of characters, yet the phenotype remained unchanged—this, again, is an example of genetic compensation. Jablonka and Lamb (2005: 67) describe three reasons why the genome can compensate for the absence of a particular gene:

first, many genes have duplicate copies, so when both alleles of one copy are knocked out, the reserve copy compensates; second, genes that normally have other functions can take the place of a gene that has been knocked out; and third, the dynamic regulatory structure of the network is such that knocking out single components is not felt.

Using Waddington’s epigenetic landscape example, Jablonka and Lamb (2005: 68) go on to say that if you knocked a peg out, “processes that adjust the tension on the guy ropes from other pegs could leave the landscape essentially unchanged, and the character quite normal. … If knocking out a gene completely has no detectable effect, there is no reason why changing a nucleotide here and there should necessarily make a difference. The evolved network of interactions that underlies the development and maintenance of every character is able to accommodate or compensate for many genetic variations.”

Wagner and Wright (2007: 163) write:

“multiple alternative pathways . . . are the rule rather than the exception . . . such pathways can continue to function despite amino acid changes that may impair one intermediate regulator. Our results underscore the importance of systems biology approaches to understand functional and evolutionary constraints on genes and proteins.” (Quoted in Richardson, 2017: 132)

When it comes to disease, genes are said to be difference-makers—that is, a difference or mutation in one gene is what causes the disease phenotype. Genes, of course, interact with our lifestyles, and they are implicated in the development of disease—as necessary, not sufficient, causes. GWA studies (genome-wide association studies) have been all the rage for the past ten or so years. To find alleles ‘associated’ with disease, GWA practitioners take healthy people and diseased people, sequence their genomes, and then look for alleles that are more common in one group than in the other. Alleles more common in the disease group are said to be ‘associated’ with the disease, while alleles more common in the control group can be said to be protective against the disease (Kampourakis, 2017: 102). (This same process is how ‘intelligence‘ is GWASed.)
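In code, that comparison amounts to something like the following toy case/control test (hypothetical allele counts, not data from any real study):

```python
# A toy GWAS-style association check at a single SNP: is the allele
# more common among patients than among healthy controls?
from scipy.stats import chi2_contingency

#           allele present, allele absent (2 alleles per person)
cases    = [620, 380]   # 500 patients -> 1000 alleles
controls = [540, 460]   # 500 controls -> 1000 alleles

chi2, p, dof, expected = chi2_contingency([cases, controls])
print(f"chi2 = {chi2:.1f}, p = {p:.2g}")
# A small p-value licenses talk of *association* only: the allele is
# more frequent in cases than controls. Nothing here shows causation.
```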

Disease is a character difference” (Kampourakis, 2017: 132). So if disease is a character difference and differences in genes cannot explain the existence of different characters but can explain the variation in characters then the same must hold for disease.

“Gene for” talk is about the attribution of characters and diseases to DNA, even though it is not DNA that is directly responsible for them. … Therefore, if many genes produce or affect the production of the protein that in turn affects a character or disease, it makes no sense to identify one gene as the gene responsible “for” this character or disease. Single genes do not produce characters or disease … (Kampourakis, 2017: 134-135)

This all stems from the “blueprint metaphor”—the belief that the genome contains a blueprint for form and development. There are, however, no ‘genes for’ characters or disease; therefore, genetic determinism is false.

Genes, in fact, are weakly associated with disease. A new study (Patron et al, 2019) analyzed 569 GWA studies, looking at 219 different diseases. David Wishart (one of the co-authors) was interviewed by Reuters, where he said:

“Despite these rare exceptions [genes accounting for half of the risk of acquiring Crohn’s, celiac and macular degeneration], it is becoming increasingly clear that the risks for getting most diseases arise from your metabolism, your environment, your lifestyle, or your exposure to various kinds of nutrients, chemicals, bacteria, or viruses,” Wishart said.

“Based on our results, more than 95% of diseases or disease risks (including Alzheimer’s disease, autism, asthma, juvenile diabetes, psoriasis, etc.) could NOT be predicted accurately from SNPs.”

It seems like this is, yet again, another failure of the reductionist paradigm. We need to understand how genes interact in the whole human biological system, not reduce the system to the sum of its parts (‘genes’). Research programs like GWAS are premised on reductionist assumptions; it seems intuitive to think that many diseases are ’caused by’ genes, as if genes are ‘in control’ of development. However, what is truly ‘in control’ of development is the physiological system, in which genes are used only as resources, not causes. The reductionist (neo-Darwinist) paradigm cannot really explain genetic compensation after knocking out genes, but the systems view can. The complexity of bio-systems allows them to buffer against developmental miscues and missing genes in order to complete the development of the organism.

Genes are not active causes; they are passive causes, resources. They therefore cannot cause diseases or characters.

Behavioral Geneticists are Silenced!: Public Perceptions on Genetics

2000 words

HBDers like to talk about a perception that their ideas are not really discussed in public discourse; that the truth is somehow withheld from the public by a nefarious plot, a truth they so heroically attempt to get out to the dumb masses. They like to claim that the field and its practitioners are ‘silenced’, rejected outright for the ‘wrongthink’ ideas they hold. But if we look at what kinds of studies get out to the public, a different picture emerges.

The title of Cofnas’ (2019) paper is Research on group differences in intelligence: A defense of free inquiry; the title of Carl’s (2018) paper is How Stifling Debate Around Race, Genes and IQ Can Do Harm; and the title of Meisenberg’s (2019) paper is Should Cognitive Differences Research Be Forbidden? Meisenberg’s paper is the most direct response to my most recent article, an argument to ban IQ tests due to the class/racial bias they hold, which may then be used to enact undesirable consequences on a group that scores low—but like all IQ-ists, they assume that IQ tests are tests of intelligence, which is a dubious assumption. In any case, these three authors seem to think there is a silencing of their work.

For Darwin200 (Darwin’s 200th birthday) back in 2009, the question “Should scientists study race and IQ?” was asked in the journal Nature. Neuroscientist Steven Rose (2009: 788) answered “No”, writing:

The problem is not that knowledge of such group intelligence differences is too dangerous, but rather that there is no valid knowledge to be found in this area at all. It’s just ideology masquerading as science.

Ceci and Williams (2009: 789) answered “Yes” to the question, writing:

When scientists are silenced by colleagues, administrators, editors and funders who think that simply asking certain questions is inappropriate, the process begins to resemble religion rather than science. Under such a regime, we risk losing a generation of desperately needed research.

John Horgan wrote in Scientific American:

But another part of me wonders whether research on race and intelligence—given the persistence of racism in the U.S. and elsewhere–should simply be banned. I don’t say this lightly. For the most part, I am a hard-core defender of freedom of speech and science. But research on race and intelligence—no matter what its conclusions are—seems to me to have no redeeming value.

And when he says that “research on race and intelligence … should simply be banned“, he means:

Institutional review boards (IRBs), which must approve research involving human subjects carried out by universities and other organizations, should reject proposed research that will promote racial theories of intelligence, because the harm of such research–which fosters racism even if not motivated by racism–far outweighs any alleged benefits. Employing IRBs would be fitting, since they were formed in part as a response to one of the most notorious examples of racist research in history, the Tuskegee Syphilis Study, which was carried out by the U.S. Public Health Service from 1932 to 1972.

At the end of the 2000s, journalist William Saletan was big in the ‘HBD-sphere’ due to his writings on race and sport and on race and IQ. But in 2018, after the Harris/Murray fiasco on Harris’ podcast, Saletan wrote:

Many progressives, on the other hand, regard the whole topic of IQ and genetics as sinister. That too is a mistake. There’s a lot of hard science here. It can’t be wished away, and it can be put to good use. The challenge is to excavate that science from the muck of speculation about racial hierarchies.

What’s the path forward? It starts with letting go of race talk. No more podcasts hyping gratuitous racial comparisons as “forbidden knowledge.” No more essays speaking of grim ethnic truths for which, supposedly, we must prepare. Don’t imagine that if you posit an association between race and some trait, you can add enough caveats to erase the impression that people can be judged by their color. The association, not the caveats, is what people will remember.

If you’re interested in race and IQ, you might bristle at these admonitions. Perhaps you think you’re just telling the truth about test scores, IQ heritability, and the biological reality of race. It’s not your fault, you might argue, that you’re smeared and misunderstood. Harris says all of these things in his debate with Klein. And I cringe as I hear them, because I know these lines. I’ve played this role. Harris warns Klein that even if we “make certain facts taboo” and refuse “to ever look at population differences, we will be continually ambushed by these data.” He concludes: “Scientific data can’t be racist.”

Of course “scientific data can’t be racist”, but the data can be used by racists for racist motives, and the tool used to collect the data could be inherently biased against certain groups, meaning it favors certain other groups too.

Saletan claims that IQ tests can be ‘put to good use’, but it is “illogical” to think that the use of IQ tests was negative in some instances and positive in others; it’s either one or the other, and you cannot hold that IQ testing is good here and bad there.

Callier and Bonham (2015) write:

These types of assessments cannot be performed in a vacuum. There is a broader social context with which all investigators must engage to create meaningful and translatable research findings, including intelligence researchers. An important first step would be for the members of the genetics and behavioral genetics communities to formally and directly confront these challenges through their professional societies and the editorial boards of journals.

[…]

If traditional biases triumph over scientific rigor, the research will only exacerbate existing educational and social disparities.

Tabery (2015) states that:

it is important to remember that even if the community could keep race research at bay and out of the newspaper headlines, research on the genetics of intelligence would still not be expunged of all controversy.

IQ “science” is a subfield of behavioral genetics, so the overarching controversy is over ‘behavioral genetics’ (see Panofsky, 2014). Given how Cofnas (2019), Carl (2018), and Meisenberg (2019) talk about race and IQ, you would expect hardly any IQ research to be reported in mainstream outlets. But that’s not what we find. When we compare what is published regarding behavioral genetic studies to regular genetic studies, a stark contrast emerges.

Society at large already harbors genetic determinist attitudes and beliefs, and what the mainstream newspapers put out then solidifies the false beliefs of the populace. Even then, a populace more educated about genes and trait ontogeny will not necessarily be supportive of new genetics research and discoveries; people are even critical of such studies (Etchegary et al, 2012). Schweitzer and Saks (2007) showed that the popular show CSI pushes false concepts of genetic testing on the public, portraying DNA testing as quick, reliable, and decisive in prosecuting many cases; about 40 percent of the ‘science’ used on CSI does not exist, and this, too, promulgates false beliefs about genetics in society. Lewis et al (2000) asked schoolchildren “Why are genes important?”; 73 percent responded that genes are important because they determine characters, 14 percent responded that they are important because they transfer information, and none spoke of gene products.

In the book Genes, Determinism and God, Denis Alexander (2017: 18) states that:

Much data suggest that the stories promulgated by the kind of ‘elite media’ stories cited previously do not act as ‘magic bullets’ to be instantly absorbed by the reader, but rather are resisted, critiqued or accepted depending on the reader’s economic interests, health and social status and access to competing discourses. A recurring theme is that people display a ‘two-track model’ in which they can readily switch between more genetic deterministic explanations for disease or different behaviors and those which favour environmental factors or human choice (Condit et al., 2009).

The so-called two-track model is simple: one holds genetic determinist beliefs about a certain trait, like heart disease or diabetes, but then contradicts oneself and states that diet and exercise can ameliorate any future complications (Condit, 2010). Though, holding “behavioral causal beliefs” (that one’s behavior is causal in regard to disease acquisition) is associated with behavioral change (Nguyen et al, 2015). This seems to be an example of what Bo Winegard means when he uses the term “selective blank slatism” or “ideologically motivated blank slatism.” But if one’s ideology motivates one to accept, or reject, the claim that genes are causal regarding health, intelligence, and disease, then that ideology must be genetically mediated too. So how can we ever have objective science if people are biased by their genetics?

Condit (2011: 625) compiled a chart showing people’s attitudes to how ‘genetic’ a trait is or not:

[Chart from Condit (2011: 625): public ratings of how ‘genetic’ various traits are.]

Clearly, the public understands genes as playing more of a role when it comes to bodily traits and environment plays more of a role when it comes to things that humans have agency over—for things relating to the mind (Condit and Shen, 2011). “… people seem to deploy elements of fatalism or determinism into their worldviews or life goals when they suit particular ends, either in ways that are thought to ‘explain’ why other groups are the way they are or in ways that lessen their own sense of personal responsibility (Condit, 2011)” (Alexander, 2017: 19).

So, behavioral geneticists must be silenced, right? Bubela and Caulfield (2004: 1402) write:

Our data may also indicate a more subtle form of media hype, in terms of what research newspapers choose to cover. Behavioural genetics and neurogenetics were the subject of 16% of the newspaper articles. A search of PubMed on May 30, 2003, with the term “genetics” yielded 1 175 855 hits, and searches with the terms “behavioural genetics” and “neurogenetics” yielded a total of 3587 hits (less than 1% of the hits for “genetics”).

[Figure from Bubela and Caulfield (2004): newspaper coverage of genetics research by subject.]

So Bubela and Caulfield (2004) found that 11 percent of the newspaper articles they looked at had moderately to highly exaggerated claims, while 26 percent were slightly exaggerated. Behavioral genetic/neurogenetic stories comprised 16 percent of the newspaper articles, while such research made up less than one percent of the genetics literature itself (3,587 of 1,175,855 PubMed hits), which “might help explain why the reader gains the impression that much of genetics research is directed towards explaining human behavior; such copy makes newsworthy stories for obvious reasons” (Alexander, 2017: 17-18). Behavioral genetics research is indeed silenced!

[Cartoon by @barrydeutsch]

Conclusion

The public perception of genetics seems to line up with that of genetics researchers in some ways but not in others. The public at large is bombarded with numerous messages per day, especially in the TV programs they watch (inundated with ad after ad). Certain researchers claim that ‘free inquiry’ into race and IQ is being hushed. To Cofnas (2019) I would say, “In virtue of what is it ‘free inquiry’ that we should study how a group handles an inherently biased test?” To Carl (2018) I would say, “What about the harm done by assuming that the hereditarian hypothesis is true, that IQ tests test intelligence, and by the funneling of minority children into EMR classes?” And to Meisenberg (2019) I would say, “The answer to the question ‘Should research into cognitive differences be forbidden?’ should be ‘Yes, it should be forbidden and banned, since no good can come from a test that was biased from its very beginnings.’” There is no ‘good’ that can come from using inherently biased tests, which is why the hereditarian-environmentalist debate on IQ is useless.

It is due to newspapers and other media outlets that people hold the beliefs about genetics they do. Behavioral genetics studies are overrepresented in newspapers, and IQ is a subfield of behavioral genetics. Is contemporary research ignored in the mainstream press? Not at all. Recent articles on the social stratification of Britain have been in the mainstream press—so what are Cofnas, Carl, and Meisenberg complaining about? It seems to stem from a persecution complex: the wish to be seen as the new ‘Galileo’ who, in the face of oppression, told a truth that others did not want to hear and therefore attempted to silence.

Well, that’s not what is going on here: behavioral genetic studies are constantly pushed in the mainstream press. The complaining from the aforementioned authors means nothing; look to the newspapers and the public’s perception of genes to see that their claims are false.

An Argument for Banning IQ Tests

1650 words

In 1979, a California judge ruled that the state’s use of IQ testing was unconstitutional. Some claimed that the ruling discriminated against minority students, while others claimed that the ban protected them from testing that is racially and culturally biased. The judge in Larry P. v. Riles (see Wade, 1980 for an exposition) sided with the parents, stating that IQ tests were both racially and culturally biased and that it was therefore unconstitutional to use them to place minority children into EMR (educable mentally retarded) classes.

While his decision applied to only one test used in one state (California), its implications are universal: if IQ tests are biased against a particular group, they are not only invalid for one use but for all uses on that group. Nor is bias a one-dimensional phenomenon. If the tests are biased against one or more groups, they are necessarily biased in favor of one or more groups — and so invalid. (Mensh and Mensh, 1991: 2)

In 1987 in The Washington Times, Jay Matthews reported:

Unbeknownst to her and most other Californians, a lengthy national debate over intelligence tests in public schools had just ended in the nation’s most populous state, and anti-test forces had won.

Henceforth, no black child in California could be given a state-administered intelligence test, no matter how severe the student’s academic problems. Such tests were racially and culturally biased, U.S. District Court Judge Robert F. Peckham had ruled in 1979. After losing in the 9th U.S. Circuit Court of Appeals last year, the state agreed not to give any of the 17 banned IQ (intelligence quotient) tests to blacks.

But one year later, in 1980, there was another court case, Parents in Action on Special Ed v. Hannon, in which the court found that IQ tests were not discriminatory. However, that misses the point, because all of the items on IQ and similar tests are carefully chosen out of numerous trial items to get the types of score distributions the constructors want.

Although the ban on standardized testing for blacks in California was apparently lifted in the early 90s, Fox News reported in 2004 that “Pamela Lewis wanted to have her 6-year-old son Nicholas take a standardized IQ test to determine if he qualifies for special education speech therapy. Officials at his school routinely provide the test to kids but as Lewis soon found out, not to children who are black, due to a statewide policy that goes back to 1979.” The California Association of School Psychologists wants the ban on IQ tests for black children lifted, but they are not budging.


There is an argument somewhere here, and I will formalize it.

Judge Peckham sided with the parents in Larry P. v. Riles, stating that since IQ tests were racially and culturally biased, they should not be given to black children. He stated that we cannot truly measure nor define intelligence, and he also found that IQ tests were racially and culturally biased against blacks. Thus, the application of IQ testing was funneling more black children into EMR classrooms. All kinds of standardized tests have their origins in the IQ testing movement of the 1900s: there, it was decided which groups would and would not be deemed intelligent, and the tests were then constructed on this a priori assumption.

Let’s assume that hereditarianism is true, like Gottfredson (2005) does. Gottfredson (2005: 318) writes that “We might especially target individuals below IQ 80 for special support, intellectual as well as material. This is the cognitive ability (“trainability”) level below which federal law prohibits induction into the American military and below which no civilian jobs in the United States routinely recruit their workers.” This seems reasonable enough on its face; some people are ‘dumber’ than others and so they deserve special treatment and education in order to maximize their abilities (or lack thereof). But hereditarianism is false and rests on false pretenses.

But if it were false and we believed it to be true (as the trend seems to be going today), then we could enact undesirable social policies on the basis of that false belief. Believing in such falsities, while using IQ tests to prop up our a priori biases, can lead to social policies that may be destructive for a group that the IQ test ‘deems’ to be ‘unintelligent.’

So if we believe something that’s not true (say, that the Hereditarian Hypothesis is true and that IQ tests test one’s capacity for intellectual ability), then destructive social policy may be enacted that further harms the low-scoring group in question. The debate between hereditarians and environmentalists has been ongoing for the past one hundred years, but they are arguing about tests with the conclusion already in mind. Environmentalists lend credence to the claim that IQ tests are measures of intelligence on which environmental factors consign one to a low score, whereas hereditarians claim that they are measures of intelligence but that genes significantly influence one’s ability to be intelligent.

The belief that IQ tests test intelligence goes hand-in-hand with hereditarianism: since environmentalists lend credence to the Hereditarian Hypothesis by stating that environmental factors decrease intellectual ability, they are in effect co-signing the use of IQ tests as tests of ability. Whether we believe the Hereditarian or the Environmentalist Hypothesis, we are still presuming that these tests measure intellectual ability, and that this ability is constrained by genes, environment, or a combination of the two.

So, if a certain social policy could be enacted on the basis of these tests and have devastating consequences for, say, a social group’s educational attainment, then why shouldn’t we ban tests that put a label on individuals which follows them for many years? Labels shape outcomes; this is known as the Pygmalion effect. Rosenthal and Jacobson (1968) told teachers at the beginning of the new academic year that a new test would predict which students would ‘bloom’ intellectually throughout the year. They told the teachers that their most gifted students were chosen on the basis of this test, but the students were just randomly selected from 18 classrooms; their true scores did not show that they were ‘intellectual.’ Those who were designated as ‘bloomers’ showed a 2-point increase in VIQ, 7 points in reasoning, and 4 points in FSIQ. The experiment shows that a teacher’s beliefs about the abilities of their students affect the students’ academic output—that is, the prophecy becomes self-fulfilling. (Also see Boser, Wilhelm, and Hanna, 2014.)

So if a teacher believes their student to be less ‘intelligent’, then, most likely, the prophecy will be fulfilled in virtue of the teacher’s expectations of the student (the same can be said about maternal expectations; see also Jensen and McHale, 2015). This could then lead to students being placed into EMR classes and labeled for life—which would screw up their life prospects. For instance, Ercole (2009: 5) writes that:

According to Schultz (1983), the expectations teachers have of their students inevitably effects the way that teachers interact with them, which ultimately leads to changes in the student’s behavior and attitude. In a classic study performed by Robert Rosenthal, elementary school teachers were given IQ scores for all of their students, scores that, unbeknownst to the teachers, did not reflect IQ and, in fact, measured nothing. Yet just as researchers predicted, teachers formed a positive expectation for those students who scored high on the exam vs. those who scored low (Harris, 1991). In response to these expectations, the teachers inevitably altered their environment in four ways (Harris, 1991): First, the teaching climate was drastically different depending on if a “smart” child asked questions, or offered answers, vs. if a “dumb” child performed the same behaviors. The former was met with warm and supportive feedback while the latter was not. Second, the amount of input a teacher gave to a “smart” student was much higher, and entailed more material being taught, vs. if the student was “dumb”. Third, the opportunity to respond to a question was only lengthened for students identified as smart. Lastly, teachers made much more of an effort to provide positive and encouraging feedback to the “smart” children while little attention/feedback was given to the “dumb” students, even if they provided the correct answer.

Conclusion

This is one of many reasons why such labeling does more harm than good—and always keep in mind that such labeling begins and ends with the advent of IQ testing in the 1900s. In any case, teachers—and parents—can influence the trajectory of students and children just by the beliefs they hold about them. And believing that IQ = intelligence and that low scorers are somehow “dumber” than high scorers is how one gets ‘labeled’, and the label then follows one for years afterward.

Even though it is not explicitly stated, it is implicitly believed that the hereditarian hypothesis is true; believing that it is, while also believing that IQ tests test intelligence, is a recipe for disaster in the not-so-distant future. I only need to point to the uses of IQ testing at Ellis Island in the 1900s. I only need to point to the fact that American IQ tests have their origins in eugenic policies, policies premised on the assumptions built into IQ tests, which many American states and different countries throughout the world adopted (Wahlsten, 1997; Kevles, 1999; Farber, 2008; Reddy, 2008; Grennon and Merrick, 2014). Many people supported sterilizing those with low IQ scores (Wilson, 2017: 46-47).


The formalized argument is here:

(P1) The Hereditarian Hypothesis is false.
(P2) If the Hereditarian Hypothesis is false but believed to be true, then policy A could be enacted.
(P3) If policy A is enacted, then it will do harm to group G.
(C1) If the Hereditarian Hypothesis is false but believed to be true, then it will do harm to group G (hypothetical syllogism, P2, P3).
(P4) If the Hereditarian Hypothesis is false, is believed to be true, and would harm group G, then we should ban whatever policy A is derived from.
(P5) Policy A is derived from IQ tests.
(C2) Therefore, we should ban IQ tests (modus ponens on P4, using P1, C1, and P5).
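For readers who want the inferential structure laid bare, here is a minimal sketch of the two inference steps in Lean 4. The proposition names (HH for the Hereditarian Hypothesis, believedTrue, policyEnacted, harmsG, banTests) are placeholders I introduce purely for illustration, with P5 folded into P4's conclusion:

```lean
-- Propositional sketch of the argument above; all names are illustrative.
variable (HH believedTrue policyEnacted harmsG banTests : Prop)

-- C1 from P2 and P3 by hypothetical syllogism.
example (p2 : ¬HH ∧ believedTrue → policyEnacted)
    (p3 : policyEnacted → harmsG) :
    ¬HH ∧ believedTrue → harmsG :=
  fun h => p3 (p2 h)

-- C2 by modus ponens on P4, given P1, the belief assumption, and C1
-- (reading "whatever policy A is derived from" as the IQ tests, per P5).
example (p1 : ¬HH) (believed : believedTrue)
    (c1 : ¬HH ∧ believedTrue → harmsG)
    (p4 : ¬HH ∧ believedTrue ∧ harmsG → banTests) :
    banTests :=
  p4 ⟨p1, believed, c1 ⟨p1, believed⟩⟩
```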

The Frivolousness of the Hereditarian-Environmentalist IQ Debate: Gould, Binet, and the Utility of IQ Testing

1850 words

Hereditarians have argued that IQ score gaps are mostly caused by genetic factors, with environment accounting for a small amount of the gap, whereas environmentalists argue that the gaps can be fully accounted for by environmental factors such as access to resources, the educational attainment of parents, and so on. However, the debate is useless: not only does it prop up a false dichotomy, the tests get the results their constructors want.

Why the hereditarian-environmentalist debate is frivolous

This is due to the fact that when high-stakes tests were first created (eg the SAT in the mid-1920s), they were based on the first IQ tests brought to America. All standardized tests are based on the concept of IQ—this means that, since the concept of IQ is based on presuppositions about the ‘intelligence’ distribution in society, and high-stakes standardized tests are in turn based on that concept, they will be inherently biased as a rule. The SAT is even the “first offshoot of the IQ test” (Mensh and Mensh, 1991: 3). Nor are such tests objective, as is frequently claimed: “high-stakes, standardised testing has functions to mask the reality of structural race and class inequalities in the United States” (Au, 2013: 17; see also Knoester and Au, 2015).

The reasoning for the uselessness of the debate between hereditarians and environmentalists is simple: the first tests were constructed to yield the results the test constructors wanted to get; they assumed the distribution of test scores would be normal and created the test around that assumption, adding and removing items until they got the outcome they presupposed.
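To make that procedure concrete, here is a minimal simulation sketch in Python (the group labels, sample sizes, and bias parameters are all invented for illustration; this is not any historical test's actual procedure). Scores come entirely from item properties, and the 'constructor' manufactures a group gap simply by which items are kept:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented setup: 1,000 test-takers from two groups, 200 candidate items.
n_per_group, n_items = 500, 200
group = np.repeat([0, 1], n_per_group)

# Each candidate item's pass probability slightly favors one group or the other.
bias = rng.normal(0, 0.15, n_items)                      # positive favors group 1
p_pass = np.clip(0.6 + np.outer(group * 2 - 1, bias), 0.05, 0.95)
responses = rng.random((2 * n_per_group, n_items)) < p_pass

def score_gap(items):
    # Mean score of group 1 minus group 0 on the selected item set.
    scores = responses[:, items].sum(axis=1)
    return scores[group == 1].mean() - scores[group == 0].mean()

all_items = np.arange(n_items)
print("gap, all items: ", round(score_gap(all_items), 2))   # ~0 by construction

# The 'constructor' keeps only items on which group 1 outscores group 0.
kept = all_items[responses[group == 1].mean(axis=0) >
                 responses[group == 0].mean(axis=0)]
print("gap, kept items:", round(score_gap(kept), 2))        # a built-in gap
```

Nothing in the selection step consults anything beyond the item statistics themselves; the resulting gap is an artifact of the selection rule, which is the point being made here.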

Sure, someone may say that “It’s all genes and environment so the debate is useless”, though that’s not what the debate is actually about. The debate isn’t one of nature and nurture; it is a debate about tests created with prior biases in mind to attempt to justify certain social inequalities between groups. What these tests do is “sort human populations along socially, culturally, and economically determined lines” (Au, 2008: 151; c.f. Mensh and Mensh, 1991). And it is these socially, culturally, and economically determined lines that the tests are based on. The constructors assume that people at the bottom must be less intelligent, and so they build the test around that assumption.

If the test constructors had different presuppositions about the nature and distribution of “intelligence”, then they would get different results. This is argued by Hilliard (2012: 115-116) in Straightening the Bell Curve, where she shows that South African IQ test constructors removed a 15-20 point difference between two white South African groups:

A consistent 15-20 point IQ differential existed between the more economically privileged, better educated, urban-based, English-speaking whites and the lower-scoring, rural-based, poor, white Afrikaners. To avoid comparisons that would have led to political tensions between the two white groups, South African IQ testers squelched discussion about genetic differences between the two European ethnicities. They solved the problem by composing a modified version of the IQ test in Afrikaans. In this way, they were able to normalize scores between the two white cultural groups.

This, quite obviously, is an admission from test constructors themselves that score differences can be—and have been—built into and out of the tests based on prior assumptions.

It has been claimed that equal opportunity depends on standardized testing. This is a bizarre claim because standardized testing has its origins with Binet’s (and Goddard’s, Yerkes’ and Terman’s) IQ tests.

It is paradoxical to maintain that IQ tests, which are inherently biased, can promote equal opportunity. The tests do what their construction dictates; they correlate a group’s mental worth with its place in the social hierarchy. (Mensh and Mensh, 1991, The IQ Mythology, pg 30)

They wrote that in response to Gould, who believed that there was some use for IQ tests since his son was identified as learning disabled through IQ testing (even though IQ is irrelevant to the definition of learning disabilities; Siegel, 1989).

Testing, from its very beginnings, has been used to attempt to justify the current social order. The constructors ‘knew’ that certain classes and races were already less intelligent than other classes and races, and so they created their tests to line up with their biases.

Hereditarians may attempt to argue that the test bias debate was put to bed by Jensen (1980) in his Bias in Mental Testing, though he largely skirts around the issue and equivocates on certain terms. Environmentalists may attempt to argue that access to different resources and information causes such test score differences—and while this does seem to be the case (eg Ceci, 1990; Au, 2007, 2008), again, the debate rests on false assumptions made by people over 100 years ago.

There are at least 4 reasons for the test score gap:

(1) Differences in genes cause differences in IQ scores;

(2) Differences in environment cause differences in IQ scores;

(3) A combination of genes and environment cause differences in IQ scores; and

(4) Differences in IQ scores are built into the test based on the test constructors’ prior biases.

Hereditarians argue for (1) and (3) (eg Rushton and Jensen, 2005) while environmentalists argue for (2) (eg Klineberg, 1928) and test critics argue for (4) (eg Mensh and Mensh, 1991; Au, 2008). Knowing how and why such tests were originally created and used will show us that (4) is the correct answer.

Egalitarians may claim that IQ tests can be looked at as egalitarian devices and be used for good, such as identifying at-risk, lower-“ability” children. But such claims then end up justifying hereditarian arguments.

Like IQ tests, the hereditarian-environmentalist debate is immersed in mythology. In fact, this debate has revolved around IQ testing for so long that the myths surrounding each are not only intertwined but interdependent.

According to its image, the nature-nurture debate pits conservatives against liberals. One part of this image reflects reality; part is mythical: environmentalism has not only liberal and radical supporters, but many conservative ones as well.

One factor that sustains the debate’s liberal-versus-conservative image is that many environmentalists have condemned the hereditarians’ claims of genetic intelligence differentials between races and classes as a justification for class and racial inequality. At the same time, however, environmentalists present their own thesis — which accepts the claim of class and racial intelligence differentials but attributes the alleged differentials to environment rather than heredity — as an alternative to hereditarianism. But is their thesis in fact an alternative to hereditarianism? Or does it instead — irrespective of the intentions of many environmentalists — result in an alternative justification for class and racial inequality? (Mensh and Mensh, 1991: 10-11)

Gould and Binet

One of the most famous environmentalists is Stephen Jay Gould. In the 1970s, he compared craniometry in the 19th century to IQ testing in the 20th—seemingly to discredit the notion—but, according to Mensh and Mensh (1991: 13), he ended up disassociating psychometrics from its beginnings and then “proceeded to a defense of IQ testing.” This may seem strange given the title of his book (The Mismeasure of Man), but “by saying that “man” has been mismeasured, it suggests that man can also be properly measured.”

Binet himself said many contradictory things regarding the nature of the tests that he constructed. His test was designed to “separate natural intelligence and instruction” since it is “the intelligence we seek to measure” (Binet, quoted in Mensh and Mensh, 1991: 19). Gould then attempted to explain this away stating that Binet removed items in which one’s experience would bias test outcomes, but it seems that Gould forgets that all knowledge is acquired. Gould—and others—attempt to paint Binet as an antihereditarian, but if one reads Binet’s writings they will come to find out that he did indeed express many hereditarian sentiments. (Binet seems to contradict himself often enough, writing, for example, “Psychologists do not measure…we classify“, quoted in Richardson, 2004. But Binet and his contemporaries did indeed classify—they classified at-risk, low-“ability” children into their ‘correct’ educational setting based on their ‘intelligence’.)

Binet stated that special education needed to be tailored to different groups, but he did not, of course, assume that those who would need the special education would come from the general population: he assumed they would come from lower-income areas, and he constructed his test to fit that assumption.

Since all IQ-test scores are relative, or inherently dependent on each other, it is illogical to contend, as Gould did, that one test use is beneficial and the others are not. To be logical one must acknowledge that if the original test use was positive, as Gould maintained, then the others would be too. Conversely, if other test uses were negative, as Gould suggested in this instance (although not in others), then something was wrong with the original use, that is, intrinsically wrong with the test. (Mensh and Mensh, 1991: 23)

Mensh and Mensh then discuss Gould’s treatment of Yerkes’ Army qualification tests. They were administered in “Draconian traditions”, yet Gould did not reject the tests; he criticized not the earlier tests but the tests post-Goddard (after 1911). Because Gould “accepted the fallacious premise of mental measurement, he could overlook his technical criticism and, paradoxically, accept the figures he had apparently rejected; although the product of deviant methods, they nonetheless ranked races and classes in the same way as those produced by approved methods” (Mensh and Mensh, 1991: 29). Gould called the figures “rotten to the core” but then called them “pure numbers”, claiming that they could even be used to “promote equality of opportunity” (Gould, 1996: 228). In essence, Gould was arguing that Yerkes should have taken an environmentalist position (that a group’s intelligence is educationally determined) rather than a hereditarian one (that a group had not acquired a high level of educational attainment since it had lower intelligence).

Environmentalism perpetuates hereditarianism

It may seem counter-intuitive, but claims from environmentalists perpetuate hereditarianism in virtue of accepting the hereditarian claim that there are intelligence differences between classes, races, men and women. Otto Klineberg held the belief that IQ tests were used to justify the current racial hierarchy between blacks and whites, but unbeknownst to him, his environmentalist position perpetuates the hereditarian dogma (Klineberg, 1928).

Klineberg conducted his study with the exemplary aim of rebutting the selective migration thesis, but the study itself reinforced from an environmentalist standpoint the hereditarians’ claims that whites are superior in intelligence to blacks and that IQ tests and measures of school performance are measures of intelligence. (Mensh and Mensh, 1991: 91)

Conclusion

For these reasons, the hereditarian/environmentalist IQ debate is useless, as score differences can be—and have been—built into the tests, which IQ testers then used as justification that certain groups were less “intelligent” than others. For if the constructors had different presuppositions (say, that Europeans were inferior in “intelligence” to other races), then they would construct the tests to show that assumption.

Such tests are premised on subjective assumptions about ‘intelligence’ (whatever that is) and its distribution among groups. The hereditarian-environmentalist debate becomes ridiculous once one knows how and why IQ tests (the basis for the high-stakes standardized testing in use today) were created and what they were used for. Binet even held hereditarian views, contra claims from environmentalists.

But, as has been argued, the debate is meaningless—no meaningful dialogue can be had, as the test constructors’ assumptions about intelligence and its distribution are built into the test. Even when arguing against hereditarianism, environmentalist hypotheses still lend credence to the hereditarian position. For these reasons, the debate should cease.

Christianity and Sociobiology: Synthesizing Just-so Stories

1600 words

The story of Adam and Eve is critical to Christian thought. For many Christians, the story tells us how and why we fell from God’s grace and moved away from Him. Some Christians are Biblical literalists—they believe that the events in the Bible truly happened as described. Other Christians attempt to combine Christianity with ‘science’ in an attempt to explain the natural world. In the book Doing Without Adam and Eve: Sociobiology and Original Sin, Williams (2000) argues that Adam and Eve are symbolic figures and that they did not exist.

But suppose, as many Christians now do, that Adam and Eve are simply symbolic figures in an imaginary garden rather than the cause of all our woe. Suppose further that the idea of “the fall” from grace is not in Scripture? Does this destroy Christian theology? This book says no. This book says that doing without Adam and Eve while drawing on sociobiology improves Christian theology and helps us understand the origin and persistence of our own sinfulness. (Williams, 2000: 10)

How ironic it is for just-so storytellers to combine their doctrine with another doctrine of just-so stories. The Christian Bible is chock-full of just-so stories purporting to show the origins of how and why we do certain things. Stories such as those in the Christian Bible do serve a purpose—which explains why they are still told and why there are still so many believers today. In any case, Williams (2000) is replacing one way of storytelling with another: the combination of the doctrine of Christianity with Sociobiology (SB), attempting to use ‘science’ to lend an air of respectability to Christian thought.

How ironic that Christian storytelling would be combined with another form of storytelling—one masquerading as science. SB was the precursor to what is now known as ‘Evolutionary Psychology’ (EP) (see Buller, 2005; Wallace, 2010). What amounts to just-so storytelling today has its beginnings in E. O. Wilson’s (1975) book Sociobiology: The New Synthesis. Sociobiology is premised on the claim that both social and individual behaviors can become objects of selection, which then become fixated as species-typical behaviors. SB, then, was crafted to explain human nature and how and why we behave the way we do today. If certain genes cause or influence certain behaviors, and these behaviors increase group fitness, then the behavior in question will persist since it increases group fitness.

I have no qualms with the group selection claim (I think it is underappreciated, see Sterelny and Griffiths, 1999). But note that SB, like its cousin EP, attempts to explain the evolution of human behavior through Darwinian natural selection. And the problems with the assumption that traits persist because they are selected for their contribution to fitness have already been shown to be highly flawed and wanting by Fodor (2008) and Fodor and Piattelli-Palmarini (2010; 2011). In a nutshell, if a behavior (or a gene that causes/influences a behavior) is correlated—coextensive—with another that is not fitness-enhancing, then selection has no way of ‘knowing’ which of the correlated traits influences fitness; since both traits are selected, there is no fact of the matter (when it comes to evolution) about why a trait was selected-for. There can be for us humans, as we can attempt to find out which trait is fitness-enhancing and which takes a free ride—but ‘natural selection’ itself cannot distinguish between the cause and the correlate, since there is no mind doing the selecting nor laws of selection for trait fixation which hold in all ecologies.

In any case, even if we assume that natural selection is an explanatory mechanism, the Sociobiologist/Evolutionary Psychologist would still have a hard time explaining how and why humans behave the way they do (note that behavior is distinct from action in that behavior is dispositional and actions are intentional) as Hull (1986) notes. In fact, Hull has a very simple argument showing that if one believes in evolution, then they should not believe in a ‘human nature’:

If species are the things that evolve at least in large part through the action of natural selection, then both genetic and phenotypic variability are essential to biological species. If all species are variable, then Homo sapiens must be variable. Hence, it is very unlikely that the human species as a biological species can be characterized by a set of invariable traits.

This does not stop Sociobiologists/Evolutionary Psychologists, though, from attempting to carry out their framework premised under untenable assumptions (that traits can be ‘selected-for’ and that natural selection can explain trait fixation). This shows, though, that even accepting the Darwinian claims, they do not lead to the conclusion that Darwinists would like.

To explain human nature through scientific principles is the aim of SB/EP. Indeed, it was what E. O. Wilson wanted to do when he attempted his new synthesis. Though, Dorothy Nelkin—in the book Alas, Poor Darwin: Arguments Against Evolutionary Psychology (Rose and Rose, 2001)—has pointed out that Wilson was a religious man in his early years. This may have influenced his views on everything, from genetics to evolution.

When Harvard University entomologist Edward O. Wilson first learned about evolution, he experienced, in his words, an ‘epiphany’. He describes the experience: ‘Suddenly — that is not too strong a word — I saw the world in a wholly new way … A tumbler fell somewhere in my mind, and a door opened to a new world. I was enthralled, couldn’t stop thinking about the implications evolution has … for just about everything.’

Wilson, who was raised as a southern Baptist, believes in the power of revelation. Though he drifted away from the church, he maintained his religious feeling. ‘Perhaps science is a continuation on new and better tested ground to attain the same end. If so, then, in that sense science is religion liberated and writ large.’ (Nelkin, 2001)

The Sociobiological enterprise, though, was further kicked off by Richard Dawkins’ publication of The Selfish Gene just one year after Wilson published Sociobiology: The New Synthesis. This is when such storytelling truly got its start—and it has plagued us ever since. How ironic that what I would call ‘the disciplines of storytelling’ would be founded by a religious man and an atheist. The just-so storytellers are no better than any other just-so storytellers—Christians included. They have a ‘religious bent’ to their thinking, though they may vehemently deny it (in the case of Dawkins). Nelkin claims that, though Dawkins rejects a religious kind of purpose in life, “Dawkins does [find] ultimate purpose in human existence — the propagation of genes.”

Nelkin goes on to argue that Dawkins is “an extreme reductionist” for whom our bodies don’t matter; what really matters is our DNA sequences—what supposedly makes us who we are. On this view, the material body is ephemeral, but our genes are immortal, and what explains behavior is selfish genes attempting to propagate by causing/influencing certain behaviors. These kinds of metaphors are pushed by geneticists, too, with their claims that DNA is ‘the book of life’. Nelkin also quotes Wilson stating that “‘you get a sense of immortality’ as genes move on to future generations. Like the sacred texts of revealed religion, the ‘evolutionary epic’ explains our place in the world, our relationships, behaviour, morality and fate. It is indeed of truly epic proportions.”

Nelkin then claims that Evolutionary Psychologists are like missionaries attempting to proselytize people from one ‘religion’ to another. They have the answer to the meaning of life—and the meaning is in your genes and to propagate your genes.

[Evolutionary Psychologists] are convinced they have insights into the human condition that must be accepted as truth. And their insights often come through revelations. Describing his conversion experience, Wilson notes that his biggest ideas happened ‘within minutes … Those moments don’t happen very often in a career, but they’re climactic and exhilarating.’ He believes he is privy to ‘new revelations of great moral importance’, that from science ‘new intimations of immortality can be drawn and a new mythos evolved’. Convinced that evolutionary explanations should prevail over all other beliefs, he seeks conversions. (Nelkin, 2001)

Conclusion

It is ironic that Williams (2000) is attempting to reconcile Christian theology with Sociobiology. The parallels between the two are strikingly evident. Christian theology is based on faith, just as SB/EP just-so stories are (since there can be no independent verification of the hypotheses). Christians have missionaries who attempt to proselytize new converts, and so do those who push the doctrine of SB/EP; both share the attitude that anyone who disagrees with my doctrine is wrong and I am right. ‘Natural selection’ cannot explain the propagation of behavior today. The attempt to explain human nature through evolution, and by extension natural selection, was inevitable ever since Darwin formulated the theory of natural selection in the 19th century. However, if one believes in evolution, then it is illogical to believe that there IS a human nature; if one is a good evolutionist, then one believes that human nature is a fairy tale and that our species cannot be characterized by a set of invariable traits (Hull, 1986).

How ironic it is for theists and scientists to have similar kinds of convictions about the beliefs they hold near and dear to their hearts. The attempted synthesis of Christian theology and Sociobiology (an attempted synthesis itself) is very telling: it shows that the two groups who propagate such explanations are, in actuality, cut from the same cloth with the same kinds of beliefs—though each uses different language from the other.

Response to “A Critique of Ken Richardson: Initial Impressions and Social Class”

3700 words

I am now going on my fifth year blogging. In that time, my views have shifted considerably from what I would term HBD racial realism (reductionism of the Neo-Darwinian type, which is refuted by a holistic perspective of the organism) to a more holistic, systems approach to the organism and how it interacts with its environment—the gene-environment system.

Many long-time readers may know that I used to be a staunch hereditarian, especially when it came to IQ. However, back in the Spring of 2017, I read DNA is Not Destiny (Heine, 2017) and Genes, Brains, and Human Potential (Richardson, 2017a) (in the same month, no less). Heine had me questioning my views while Richardson completely changed them. I would say that the biggest catalysts were chapters 4 and 5, on genes—what they are and how they work in concert with the physiological system. Learning more about the history of IQ testing further led to these view changes. (See my article “Why Did I Change My Views?” for more information.)

This leads me to someone on Twitter by the name of “ModernHeresy” who, back in October, asked me which books best represent my views on IQ.

I replied, Genes, Brains, and Human Potential (Ken Richardson), On Intelligence (Stephen Ceci) and Inventing Intelligence (Elaine Castles). He then said that he thinks that Jensen et al are right about IQ, but that he will give Richardson’s book an honest chance. Well, I was heavily biased against anti-hereditarian arguments before I read Richardson’s book almost 3 years ago, and now look at me.

In any case, ModernHeresy (MH) had responded to some of Richardson’s arguments in his latest book in a video titled “A Critique of Ken Richardson: Initial Impressions and Social Class“. It seems like a well-researched video with four topics that I will also cover today. MH covers Goddard’s use of the Binet-Simon scales in turning away prospective immigrants who scored lower; the construct validity argument; IQ as a measure of social class; and IQ ‘predicts’ only through test construction. I will respond to each point per section.

Goddard

Goddard was the man who brought Binet’s original test to America, translating it into English in 1910. He was the director of the Vineland Training School for Feebleminded Boys and Girls in Vineland, New Jersey, and he believed that one’s intellectual potential was biologically determined. Goddard used his translated Binet to attempt to turn away those whom he deemed “feebleminded” or “morons” (indeed, he was the one to coin the latter term; see Castles, 2012; Wilson, 2017; Dolmage, 2018). Goddard is of Kallikak family fame—a pseudonymous name for a family of “feebleminded” people; see Smith and Wehmeyer (2014) for an exposition of how Goddard was wrong about the Kallikaks and for Deborah Kallikak’s true identity. To Goddard’s credit, though, he did recant some of his views in 1928, stating that “feeblemindedness” was not incurable, as he once thought.

MH then cites Snyderman and Herrnstein (1983), stating that they “thoroughly review the congressional record and testimony [and find] almost no evidence that intelligence tests had any influence over the content or the passage of the 1924 immigration act.” MH then goes on to say that the claim that IQ testing had anything to do with the 1924 immigration act has its roots in the 70s, specifically in Leon Kamin’s The Science and Politics of IQ, which Gould then reiterated in both editions of The Mismeasure of Man. (See here for a defense from Kamin and also see Dorfman.) MH then says that

Richardson’s book was published in 2017 this is completely inexcusable and I would argue an indication that Richardson’s work has a lot of its roots and arguments that originated in the 1970s and the formulation of these arguments have basically ignored or at best extremely selectively referenced any work in the almost 50 years since that have challenged them.

This is ridiculous. Snyderman and Herrnstein did nothing of the sort. Gelb et al (1986) write:

The historical record clearly documents that mental testing played a part in the national immigration debate between 1921 and 1924, though certainly in a less direct manner than Snyderman and Herrnstein purportedly sought to uncover.

[…]

In their distorted and simplistic account of the period, Snyderman and Herrnstein failed to account for the interconnections between psychometric, eugenic and political communities. While some historians of psychology have exaggerated the influence of the mental testers on the passage of the Immigration Act of 1924, Snyderman and Herrnstein’s attempt to exonerate the early testers contains flaws at least as serious as any of those they criticize. Important mental testers of the 1910s and 1920s were willing to use their fledgling science to promote immigration restriction. One cannot examine the relevant historical material without concluding that prominent testers promoted eugenic and racist interests and sought to, and in some degree succeeded in, providing those interests with a mantle of scientific respectability.

Ford (1985), likewise, writes that “If the long-standing acceptance of racial, ethnic, and sexual bias within intellectual circles prior to 1924 is considered, Snyderman and Herrnstein’s conclusion becomes invalid.” We know that racial, ethnic, and sex biases are built into the tests to get the score distributions the researchers want (Mensh and Mensh, 1991; Hilliard, 2012).

Dolmage (2018: 119) states that “Whenever [Henry Laughlin] testified [to the U.S. Congress], he brought charts, graphs, pedigree charts, and the results of hundreds of IQ tests as evidence of ‘the immigrant menace.’ Laughlin plastered the Congress committee room with charts and graphs showing ethnic differences in rates of institutionalization for various degenerative conditions, and he presented data about the mental and physical inferiority of recent immigrant groups.” So IQ tests were, quite clearly, used to stifle immigration from Eastern and Southern Europe (and though this was not specifically Goddard’s doing, it traces directly to his bringing the Binet-Simon test to America and translating it into English).

MH then cites Richardson’s (2002) paper What IQ Tests Test, stating that Richardson cited Leila Zenderland’s (1998) book Measuring Minds, a biography of Goddard. MH cites a passage from Zenderland on Goddard:

While Goddard believed that most of these immigrants were indeed mentally weak, he wondered about the cause. “Are these immigrants of low mentality cases of hereditary defect”, Goddard now asked pointedly, “or cases of apparent mental defect by deprivation?” If the former, they still posed a threat to posterity; if the latter, then Americans need have no fears about the succeeding generations. While Goddard knew of no data to settle this “vital question”, he himself believed it “far more probable that their condition is due to environment than it is due to heredity.” Their “environment has been poor” and “seems to account for the result,” he decided.

Such conclusions could hardly be said to support those calling for more restrictive legislation.

MH then says, “As we will see later, Richardson cites sources that if read in their entirety frequently contradict his claims.” This is ridiculous. In his 2002 paper, Richardson does indeed cite Zenderland 6 times, but here’s the thing: five of the citations are about Binet, and one is for the claim that IQ tests are ‘intelligence’ tests in the sense Galton claimed. As I showed above, IQ testing was indeed used to attempt to curtail the number of immigrants into America.

MH then claims that, because of a quote with ellipses in Richardson’s 2002 paper, Richardson was being deceptive in not giving the whole quote, and that he was

trying to dig up stuff where Spearman or Charles Murray or somebody is admitting that something he’s arguing against has major weaknesses. So he finds that quote and thinks ‘Hm pervasive. That makes it sound as if there is a lot of evidence for this, I don’t like that. But I like the part where he says the evidence is circumstantial and the reality remains arguable. So I’ll just cut that part out. Who’s actually going to check this? The vast majority of my readers wouldn’t be caught dead owning The Bell Curve, much less actually reading it in any detail. Besides, I put ellipses, it’s all legal and above board.’

I personally have read The Bell Curve a few times and I’m familiar with the quote; I don’t think that the ellipses in any way diminish Richardson’s point.

Construct Validity

I’ve written in-depth on this subject so I will be quick here. MH states that “it cannot be claimed that IQ tests have construct validity in the strict definitional sense.” He “partially agrees with the criticism” but he only “partially agrees” due to the “correlations” with regard to job performance and scholastic achievement.

Back in September, I wrote an article on test construction, item bias and item analysis. More recently, I wrote on the history of IQ testing and how tests are constructed with the presuppositions of the test’s constructors. Finally, in my most recent article on the ‘measurement’ of ‘intelligence’, I noted that IQ-ists first need to provide a definition of intelligence, then prove that IQ tests measure it (they assume the tests measure what has yet to be defined); only then, after all is said and done, can IQ-ists posit “genetic” causes of intelligence and other psychological traits and of variation between racial and ethnic groups. I have also created a syllogism in modus tollens form showing that, since IQ tests are not construct valid, the claim that they test intelligence is false:

Premise 1: If the claim “IQ tests test intelligence” is true, then IQ tests must be construct valid.
Premise 2: IQ tests are not construct valid.
Conclusion: Therefore, the claim “IQ tests test intelligence” is false. (modus tollens, P1, P2)
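The inference itself is elementary; a minimal Lean 4 rendering (the proposition names are illustrative placeholders) makes that explicit:

```lean
-- Modus tollens: from P1 (testsIntelligence → constructValid) and
-- P2 (¬constructValid), conclude ¬testsIntelligence.
example (testsIntelligence constructValid : Prop)
    (p1 : testsIntelligence → constructValid)
    (p2 : ¬constructValid) :
    ¬testsIntelligence :=
  fun h => p2 (p1 h)
```

The work, of course, is in defending P1 and P2, not in the inference.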

IQ ‘predicts’ things through test construction; it’s not really a ‘prediction’, in any case. Since IQ tests are related to other kinds of achievement tests—indeed, they are different versions of the same test—the claim that IQ is a predictor of future success is circular (Richardson, 2017b). Indeed, all of the claims that IQ specifically is predictive can be explained in other, less ‘mystical’ ways.

Social class and IQ

MH states that a problem for the “IQ as a measure of social class” argument is the fact that “most of the IQ variation in society is within families … about 70 percent of IQ variation is due to within-family differences.” MH then quotes Richardson stating that correlations between .6 and .7 have been reported between IQ and, for example, maternal encouragement, then states that Richardson did “not mention the strong caveats Mackintosh presents” following his summaries of these studies. MH then quotes Mackintosh stating that, while a developing child’s IQ correlates with variables like parental involvement and attitudes and the presence of books, toys and games in the home, “the establishment of these correlations alone will never prove that one is direct cause of the other.” MH then states that there are two possibilities: how the child acts can elicit certain responses from the parent, or parents influence child development through their actions toward their children along with the genes they pass on to them.

MH then invokes the “sociologist’s fallacy”—the tendency to treat a correlation between a social variable and a phenotype as causal without considering that genetics may mediate the relationship between the two, which is known as “genetic confounding” (genes confounding the relationship between two variables). However, for the genetic confounding claim to have any weight, there must be an identified mechanism that produces psychological variation; absent that, the genetic confounding claim, and with it the “sociologist’s fallacy” charge, is irrelevant.

Other aspects of social class can differ between siblings as well, such as teacher quality, teacher treatment, school quality and so on—all of which influence IQ (Ceci, 1990). Furthermore, Richardson never claimed that social class accounts for all of the variation in IQ. Richardson (2002) writes:

It suggests that all of the population variance in IQ scores can be described in terms of a nexus of sociocognitive-affective factors that differentially prepares individuals for the cognitive, affective and performance demands of the test—in effect that the test is a measure of social class background, and not one of the ability for complex cognition as such.

Richardson’s main claim (which he successfully argues for) is that variation in the sociocognitive-affective preparedness nexus accounts for the variation in IQ. IQ is “in effect” (to use Richardson’s words) a measure of social class, since social class is a significant determinant of the variables that make up the sociocognitive-affective preparedness nexus.

MH then cites Korenman and Winship (1995) who write that:

incredible as it may seem, our sibling analysis suggest that, even though Herrnstein and Murray’s parental SES index is poorly measured and narrowly conceived, it appears in most cases adequate for producing unbiased estimates of the effect of AFQT scores on socioeconomic outcomes.

MH then states that the AFQT (Armed Forces Qualification Test) “is really just an IQ test”. But, as Mensh and Mensh (1991) note, such tests were biased from their beginnings due to how they were constructed and how items were chosen to accord with the presupposed biases of the test’s constructors.

MH then brings up the Wilson Effect, which “is the observation that the heritability of IQ increases with age and, by adulthood, the effect of the home environment has almost zero contribution to individual differences in IQ on average” (MH). The Wilson Effect, too, is an artifact of test construction. Richardson (2000: 36) writes:

Another assumption adopted in the construction of tests for IQ is that, as a supposed physical measure like height, it will steadily “grow” with age, tailing off at around late puberty. This property was duly built into the tests by selecting items which a steady proportion of subjects in each age group passed. Of course, there are many reasons why intelligence, however we define it, may not develop like this. More embarrassing, though, has been the undesired, and unrealistic, side effect in which intelligence appeared to improve steadily up to the age of around eighteen years, and then start to decline. Again, this is all a matter of item selection, the effect easily being reversed by adding items on which older people perform better and reducing those on which younger people perform better. […] That [IQ score differences] are allowed to persist is a matter of prior assumption, not scientific fact. In all these ways, then, we find that the IQ-testing movement is not merely describing properties of people: rather, the IQ test has largely created them.
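A minimal sketch of the age-norming point in the quote above (the age bands, item pools, and pass rates are all invented for illustration): the apparent age trend in raw scores is simply whatever the item selection makes it.

```python
import numpy as np

# Invented example: 3 age bands and two pools of 30 candidate items each.
# Pool A items are passed more often by the young; pool B items by the old.
age_bands = ["18-30", "31-50", "51-70"]
pool_A = np.array([[0.80, 0.70, 0.60]] * 30)   # pass rates favoring the young
pool_B = np.array([[0.60, 0.70, 0.80]] * 30)   # pass rates favoring the old

def expected_score(selected_items):
    # Expected raw score per age band for the selected item set.
    return selected_items.sum(axis=0)

print("keep pool A:  ", expected_score(pool_A))   # scores 'decline' with age
print("keep pool B:  ", expected_score(pool_B))   # scores 'improve' with age
mixed = np.vstack([pool_A[:15], pool_B[:15]])
print("balanced mix: ", expected_score(mixed))    # flat: no age trend at all
```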

The claim that Richardson has never “operationalized” social class is false. In his most recent paper, Richardson and Jones (2019) cite a whole slew of more recent research to buttress Richardson’s (2002) sociocognitive-affective nexus, noting that social class is not just about money, cars, and things, but also about how we think and feel. Richardson and Jones (2019: 39) write:

Finally, different social conditions also lead to different affective orientations, such as self-confidence and achievement expectancies, that impact on school learning and test performances (Frankenhuis & de Weerth, 2013; Odgers, 2015; Schmader, Johns, & Forbes, 2008). The effects of test anxiety on cognitive performance are well known, and have been estimated to affect up to 15%–20% of school children (Chin, Williams, Taylor, & Harvey, 2017). In addition, feelings of social rejection affect test performances and self-regulation (Stillman & Baumeister, 2013).

In sum, whatever else CA and EA scores measure, they at least partly reflect a socio-psychological population structure in ways probably unrelated to any general cognitive or learning ability.

MH then quotes Richardson citing Hoge and Coladarci (1989), who found that teachers’ judgments correlate more highly with students’ future success than IQ scores do. MH states that since the teachers were presumably well acquainted with the children and their academic aptitudes, this explains why that correlation is higher than the one between IQ and students’ future success.

… the marginal time cost is small, nearly every child is already in school, but if you’re a parent being told your child needs to be placed in remedial classes, what are you more likely to trust? The judgment of a single random teacher or an IQ test standardized on thousands of children from a representative sample of the population with a test-retest reliability of .9?

The claim that teachers’ judgments can be had in a “fraction of the time” compared to IQ tests is indeed true. But I have noted that this is how these tests were constructed originally in the early 1900s: early test constructors related teachers’ judgments of ‘intelligence’ to their own subjective presuppositions, building the tests on the basis of those judgments and their own biases.

What explains professional success? IQ or social class? Ceci (1990: 87) notes that “the effects of IQ as a predictor of adult income were totally eliminated … when we entered parental social status, and years of schooling as covariates.” Ceci goes on to write that since education and social class were significant and positive indicators of adult income, “this indicates that the relationship between IQ and adult income is illusory … Thus, it appears that the IQ-income relationship is really the result of schooling and family background, not IQ” (pg 87). So it is one’s social standing (access to schooling and family background) that mediates the IQ-income relationship.
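Ceci’s covariate point can be sketched in a few lines (toy data; every coefficient is invented). In a toy world where social standing drives both test scores and income, the IQ coefficient looks large on its own but collapses once the covariate enters the regression:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 10_000

# Invented world: social standing drives both test scores and income;
# the test score has no independent effect on income at all.
ses = rng.normal(size=n)
iq_score = 0.8 * ses + rng.normal(scale=0.6, size=n)
income = 1.0 * ses + rng.normal(scale=0.5, size=n)

def ols_slopes(predictors, y):
    # Ordinary least squares; returns slope coefficients (intercept dropped).
    X = np.column_stack([np.ones(len(y)), *predictors])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

print("income ~ iq:      ", ols_slopes([iq_score], income))       # large 'effect'
print("income ~ iq + ses:", ols_slopes([iq_score, ses], income))  # iq coef ~ 0
```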

Mensh and Mensh (1991) note that Gould held contradictory views on IQ testing. He noted the racist and social origins of the testing movement, but accepted IQ tests for their utility for certain uses—most likely because they helped to identify his son’s learning disability. IQ tests are not objective scientific instruments; indeed, how can a human mind (in all of its subjectivity) create an unbiased test? That IQ tests are standardized on thousands of people is irrelevant; the test constructors can build what they want into and out of the test. So the judgment of a teacher with years of experience is superior to a (biased) IQ test—as Hoge and Coladarci (1989) do indeed show.

Lastly, MH cites brain imaging/head measurement studies showing correlations between IQ and those measures (Rushton and Ankney, 2009), while also purportedly showing that the correlation holds among siblings as well (Lee et al, 2019). But Schoenemann et al (2000) show that brain size does not predict general cognitive ability within families, while pre-registered studies show lower correlations, between .12 and .24 (Pietschnig et al, 2015; Nave et al, 2018).

Indeed, parents’ beliefs about their children’s abilities predict GPA (grade point average): even “after controlling for siblings’ average grades and prior differences in performance, parents’ beliefs about sibling differences in academic ability predicted differences in performance such that youth rated by parents as relatively more competent than their sibling earned relatively higher grades the following year” (Jensen and McHale, 2015: 469). More arguments showing why these things would differ within families can be found in Richardson and Jones (2019). MH then cites a table of motor vehicle fatalities in Australian army personnel under 40, noting that the death rate in motor vehicle accidents sharply increased the lower one’s IQ score (O’Toole, 1990). I don’t contest the data; I contest MH’s interpretation of it: am I supposed to accept IQ as causal in regard to motor vehicle fatalities—that one is just dumber than average, which then causes such fatalities? Or is the social class explanation stronger—in that one’s access to resources and education influences one’s IQ score? MH finally discusses reaction time (RT) in the context of its relationship to IQ. But Richardson’s (2002: 34) sociocognitive-affective nexus explains that relationship, too:

… low-IQ subjects regularly produce RTs equal to those of high-IQ subjects, but with less consistency over trials. This lack of consistency may well reflect poor self-confidence and high test anxiety and their effects on information processing, incursions of extraneous cognitions, sensory distractions and so on.

All in all, MH is implying that IQ’s correlations with brain imaging/skull measurement, the relationship between motor vehicle fatalities and IQ, and the relationship between RT and IQ all point to the claim that IQ measures intelligence and not social class. This is a strange claim, for the structure of and items on IQ (and similar) tests reflect those of the middle class. Indeed, that the Flynn Effect rises as the middle class grows is yet more evidence that IQ is a measure of social class. MH then claims that assuming IQ=intelligence explains these things better than assuming IQ=social class. However, there has been much sociological research into how social class affects health and, with it, scores on achievement tests (which are inherently biased by race, class, and sex; Mensh and Mensh, 1991; Au, 2007, 2008). IQ tests do not measure learning (what many IQ-ists use as a stand-in for ‘intelligence’); what IQ tests do is “sort human populations along socially, culturally, and economically determined lines” (Au, 2008: 151; c.f., Mensh and Mensh, 1991).

Conclusion

I think the video was well researched and well cited (up to a point: he didn’t discuss all of the critiques that Snyderman and Herrnstein received on their Immigration Act paper), but he failed to prove his ultimate claim: that IQ tests measure intelligence and not social class. Goddard was one of the most well-known eugenicists of the early 20th century, and his views had a devastating social impact, not only on European immigrants vying to emigrate to America, but also on the populace of ‘morons’ and the ‘feebleminded’ in America: they were sterilized, being deemed ‘unfit’ to have and care for children (Wilson, 2017). IQ tests are not construct valid (which MH agrees with), but he is still possessed by the delusion that success at jobs is causally related to IQ (see Richardson and Norgate, 2015). The ‘sociologist’s fallacy’ charge and the genetic confounding claim both fail, as one needs to identify a causal (genetic) mechanism responsible for variation in psychological traits. The observation that IQ score heritability increases as children age is, too, built into the test through item selection. The claim that Richardson does not operationalize social class is false (see Richardson and Jones, 2019). Neuroimaging analyses show lower relationships between brain size and IQ when they are pre-registered; his citation of vehicle fatalities and IQ is irrelevant, as is the part about RT and IQ—social class, too, explains those outcomes.

IQ most definitely is a measure of social class, as an analysis of the items on the tests will show (see Mensh and Mensh, 1991; Richardson, 2002; Castles, 2012)—not a ‘measure’ of ‘intelligence.’

Correlation and Causation Regarding the Etiology of Lung Cancer in Regard to Smoking

1550 words

The etiology of the increase in lung cancer over the course of the 20th century has been a large area of debate. Was it smoking that caused the cancer, or some other, unknown factor? Causation is multifactorial and multilevel—that is, the causes of anything are numerous, and these causes all interact with each other. But when it came to smoking, it was erroneously argued that genotypic differences between individuals were the cause of both the smoking and the cancer. We know now that smoking is directly related to the incidence of lung cancer, but in the 20th century there were researchers who were influenced and bribed to bring about conclusions favorable to the tobacco companies.

Hans Eysenck (1916-1997) was a controversial psychologist who researched many things—perhaps most controversially, racial differences in intelligence. It came out recently that he published fraudulent papers with bad data (Smith, 2019). He, among other strange things, believed that smoking was not causal in regard to cancer. Now, why might Eysenck think that? Well, he was funded by tobacco companies (Rose, 2010; Smith, 2019). He accepted money from them to attempt to disprove the strong correlation between smoking and cancer—about 800,000 pounds between 1977 and 1989. He is not alone in holding erroneous beliefs such as this, however.

Ronald Fisher (1890-1962), a pipe smoker himself and the inventor of many statistical techniques still used today, also held the erroneous belief that smoking was not causal in regard to cancer (Parascandola, 2004). Fisher (1957) argued in a letter to the British Medical Journal that while there was a correlation between smoking and the acquisition of lung cancer, “both [are] influenced by a common cause, in this case the individual genotype.” He went on to add that “Differentiation of genotype is not in itself an unreasonable possibility”, since it has been shown that genotypic differences in mice precede differences “in the frequency, age-incidence and type of the various kinds of cancer.”

So, if we look at the chain, it goes like this: people smoke; smoking is related to the incidence of cancer; but it does not follow that smoking is therefore the cause of the cancer, since an unknown third factor could cause both the smoking and the cancer. So now we have four hypotheses: (1) smoking causes lung cancer; (2) lung cancer causes smoking; (3) both smoking and lung cancer are caused by an unknown third factor—in Fisher’s case, the individual genotype; and (4) the relationship is spurious. Fisher was of the belief that “although lung cancer occurred in cigarette smokers it did not necessarily follow that the cancer was caused by cigarettes because there might have been something in the genetic make up of people destined to have lung cancer that made them addicted to cigarettes” (Cowen, 1999). Arguments of this type were popular in the 19th and 20th centuries—what I would term ‘genetic determinist’ arguments, in which genes dispose people to certain behaviors. In this case, genes disposed people to lung cancer and made them addicted to cigarettes.
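To see why Fisher’s hypothesis (3) was at least coherent as stated, here is a minimal simulation sketch (all probabilities invented) of a toy world in which a common cause produces a smoking-cancer association with zero direct causal effect:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Invented toy world: a 'genotype' raises both the chance of smoking and the
# chance of cancer; smoking has NO direct effect on cancer here.
genotype = rng.random(n) < 0.3
smokes = rng.random(n) < np.where(genotype, 0.7, 0.2)
cancer = rng.random(n) < np.where(genotype, 0.15, 0.03)  # independent of smoking

print("P(cancer | smoker)    =", round(cancer[smokes].mean(), 3))   # elevated
print("P(cancer | nonsmoker) =", round(cancer[~smokes].mean(), 3))

# Conditioning on the common cause makes the association vanish:
print("P(cancer | smoker, genotype)    =", round(cancer[smokes & genotype].mean(), 3))
print("P(cancer | nonsmoker, genotype) =", round(cancer[~smokes & genotype].mean(), 3))
```

The real evidence (dose-response relationships, prospective cohort data) ruled such a world out; the sketch only shows why the bare correlation could not do so by itself.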

Now, the argument is as follows: smoking, while correlated with cancer, is not causal in regard to cancer. Those who choose to smoke would have acquired cancer anyway, as they were predisposed both to smoke and to acquire cancer at a given age. We now know, of course, that such claims are ridiculous—no matter which “scientific authorities” they come from. Fisher’s idea was that differences in genotype caused differences in cancer acquisition and, along with them, caused people either to acquire the behavior of smoking or not. While at the time such an argument could have been seen as plausible, the mounting evidence against it did nothing to sway Fisher’s belief that smoking did not outright cause lung cancer.

The fact that smoking caused lung cancer was initially resisted by the mainstream press in America (Cowen, 1999). Cowen (1999) notes that Eysenck stated that, just because smoking and lung cancer were statistically associated, it did not follow that smoking caused lung cancer. Of course, when thinking about what causes an observed disease, we must look at the habits the affected share. And if those with a given habit develop the hypothesized outcome at a higher rate (smokers having a higher incidence of lung cancer, in this case), then it would not be erroneous to conclude that the habit in question is a driving factor behind the disease.

It just so happens that we now have good sociological research on the foundations of smoking. Cockerham (2013: 13) cites Hughes’ (2003) Learning to Smoke: Tobacco Use in the West, which describes the five stages that smokers go through: “(1) becoming a smoker, (2) continued smoking, (3) regular smoking, (4) addicted smoking, and, for some, (5) stopping smoking.” Most people report their first few times smoking cigarettes as unpleasant but power through to become a part of the group. Smoking becomes something of a social ritual for kids in high school—with kids being taught how to light a cigarette and how to inhale properly. For many, starting smoking is a social thing they do with their friends—much as there are social drinkers, they were social smokers. There is good evidence that, for many, the journey as a smoker starts with—and depends more on—the social environment than on actual physical addiction (Johnson et al, 2003; Haines et al, 2009).

One individual interviewed in Johnson et al (2003: 1484) stated that “the social setting of it all [smoking] is something that is somewhat addictive itself.” So not only is nicotine the addictive substance on the mind of the youth; so too is the social situation in which the smoking occurs. The need to fit in with their peers is one important driver of the beginning—and continuance—of smoking. So we now have a causal chain in regard to smoking, the social, and disease: youths are influenced/pressured to smoke by their social group, which then leads to addiction and then, eventually, to health problems such as lung cancer.

The fact that the etiology of smoking is social leads us to a necessary conclusion: change the social network, change the behavior. Just as people begin smoking in social groups, so too do people quit smoking in social groups (Christakis and Fowler, 2008). We can then state, on the basis of the cited research, that the social is ultimately causal in the etiology of lung cancer—the vehicle of cancer causation being the cigarettes pushed by the social group.

Eysenck and Fisher, two pioneers of statistics and of different methods in psychology, were blinded by self-interest. It is very clear, with both Eysenck and Fisher, that their beliefs were driven by Big Tobacco and the money they acquired from it. Philosopher Donald Davidson famously argued that reasons are causes of actions (Davidson, 1963). Eysenck’s and Fisher’s “pro-belief” (in this case, that smoking does not cause lung cancer) would be their “pro-attitude”, and their beliefs led to their actions (taking money from Big Tobacco in an attempt to show that cigarettes do not cause cancer).

The etiology of lung cancer as brought on by smoking is multifactorial, multilevel, and complex. We do have ample research showing that the beginnings of smoking for a large majority of smokers are social in nature. They begin smoking in social groups, and their identity as smokers is then refined by others in their social group who see them as “smokers.” Since individuals both begin smoking in groups and quit in groups, it follows that the acquisition of lung cancer can be looked at as a social phenomenon as well, since most people start smoking in a peer group.

The lung cancer-smoking debate is one of the best examples of the dictum cum hoc, ergo propter hoc—or, correlation does not equal causation (indeed, the smoking debate is a canonical illustration of it). While Fisher and Eysenck did hold to that view in regard to the etiology of lung cancer (they did not believe that, since smokers were more likely to acquire lung cancer, smoking therefore caused lung cancer), it does speak to the biases the two men had in their personal and professional lives. Their beliefs were disproven by showing a dose-dependent relationship between smoking and lung cancer: heavier smokers had higher incidences of cancer, tapering down the less an individual smoked. Fisher’s belief, though, that differences in genotype caused both the behavior that led to smoking and the lung cancer itself, while plausible at the time, was nothing more than the usual genetic determinist argument. We now know that genes are not causes on their own; they do not cause traits irrespective of their uses for the physiological system (Noble, 2012).

Everyone is biased—everyone. This does not mean that objective science cannot be done. But it does show that “… scientific ideas did not develop in a vacuum but rather reflected underlying political or economic trends” (Hilliard, 2012: 85). This, and many more examples, speak to the biases of scientists. Reasons like this are why science is about the reproduction of evidence. And, for that, the ideas of Eysenck and Fisher have been left in the dustbin of history.

“Definitions” of ‘Intelligence’ and its ‘Measurement’

1750 words

What ‘intelligence’ is and how, and if, we can measure it has puzzled us for the better part of 100 years. A few surveys have been done on what ‘intelligence’ is, and they show little agreement on what it is or even on whether IQ tests measure it. Richardson (2002: 284) noted that:

Of the 25 attributes of intelligence mentioned, only 3 were mentioned by 25 per cent or more of respondents (half of the respondents mentioned ‘higher level components’; 25 per cent mentioned ‘executive processes’; and 29 per cent mentioned ‘that which is valued by culture’). Over a third of the attributes were mentioned by less than 10 per cent of respondents (only 8 per cent of the 1986 respondents mentioned ‘ability to learn’).

As can be seen, even IQ-ists today cannot agree upon a definition. Indeed, even Ian Deary admits that “There is no such thing as a theory of human intelligence differences—not in the way that grown-up sciences like physics or chemistry have theories” (quoted in Richardson, 2012). (Note also that attempts to establish the tests’ validity are circular, relying on correlations with other, similar tests; Richardson and Norgate, 2015; Richardson, 2017b.)

Linda Gottfredson, University of Delaware sociologist and well-known hereditarian, is a staunch defender of JP Rushton (Gottfredson, 2013) and of the hereditarian hypothesis (Gottfredson, 2005, 2009). Her ‘definition’ of intelligence is among the most oft-cited; Gottfredson et al (1993: 13) write that (my emphasis):

Intelligence is a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience. It is not merely book learning, a narrow academic skill, or test-taking smarts. Rather, it reflects a broader and deeper capability for comprehending our surroundings—“catching on,” “making sense” of things, or “figuring out” what to do.

So ‘intelligence’ is “a very general mental capability,” its main ‘measure’ is the IQ test (a knowledge test), and yet ‘intelligence’ “is not merely book learning, a narrow academic skill, or test-taking smarts.” Here is some more hereditarian “reasoning” (which you can contrast with the hereditarian “reasoning” on race: just assume the thing exists). Gottfredson also argues that ‘intelligence,’ or ‘g,’ is learning ability. But, as Richardson (2017a: 100) notes, “it will always be quite impossible to measure ability with an instrument that depends on learning in one particular culture,” which he terms “the g paradox, or a general measurement paradox.”

Gottfredson (1997) also argues that the “active ingredient” in IQ testing is the “complexity” of the items, that is, whatever makes one item more difficult than another (a 3×3 matrix item, say, being more complex than a 2×2 one), and she gives examples of analogies which, she believes, demand a higher and more complex type of cognition to solve. (Also see Richardson and Norgate, 2014 for further critiques of Gottfredson.) Richardson (2017a: 91) responds:

The trouble with this argument is that IQ test items are remarkably simple in their cognitive demands compared with, say, the cognitive demands of ordinary social life and other activities that the vast majority of children and adults can meet adequately every day.

For example, many test items demand little more than rote reproduction of factual knowledge most likely acquired from experience at home or by being taught in school. Opportunities and pressures for acquiring such valued pieces of information, from books in the home to parents’ interests and educational level, are more likely to be found in middle-class than in working-class homes. So the causes of differences could be causes in opportunities for such learning.

The same could be said about other frequently used items, such as “vocabulary” (or word definitions); “similarities” (describing how two things are the same); “comprehension” (explaining common phenomena, such as why doctors need more training). This helps explain why differences in home background correlate so highly with school performance—a common finding. In effect, such items could simply reflect the specific learning demanded by the items, rather than a more general cognitive strength. (Richardson, 2017a: 91)

IQ-ists, of course, would reply that there is utility in such “simple-looking” test items, but we have to remember that items on IQ tests are not selected on the basis of a theoretical cognitive model; they are selected to produce the score distributions the test constructors want (Mensh and Mensh, 1991). “… those items in IQ tests have been selected because they help produce the expected pattern of scores. A mere assertion of complexity about IQ test items is not good enough” (Richardson, 2017a: 93). “The items selected for inclusion [on Binet’s test] were those that in the judgment of the teachers distinguished bright from dull students” (Castles, 2012: 88). It seems that all hereditarians do is “assert” or “assume” things: the equal environments assumption (EEA), the existence of race, and now the existence of “intelligence.” Presuppose what you want and, unsurprisingly, you get what you wanted. The IQ-ist then declares that the test did its job, sorting high- from low-quality thinkers on the basis of their IQ scores. But that is exactly the problem: prior assumptions about the nature of ‘intelligence’ and its distribution dictate the construction of the tests, as the sketch below illustrates.
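To make the circularity concrete, here is a minimal simulation (hypothetical data and thresholds, not any real test’s procedure) of item selection driven by a presupposed ranking of test-takers: candidate items are kept only if they agree with the presupposition, so the finished test cannot help but confirm it.

```python
# A minimal sketch (hypothetical data, not any real test's figures) of the
# item-selection circularity described above: items are kept or discarded
# according to how well they agree with a presupposed ranking of test-takers,
# so the finished test necessarily reproduces that ranking.
import numpy as np

rng = np.random.default_rng(0)
n_people, n_candidates = 1000, 200

# The constructors' prior judgment of who is "bright" (the presupposition).
presumed_rank = rng.normal(size=n_people)

# Candidate items: each pass/fail outcome depends partly on the presumed
# ranking and partly on unrelated noise, to varying degrees.
loadings = rng.uniform(0.0, 1.0, size=n_candidates)
noise = rng.normal(size=(n_people, n_candidates))
latent = loadings * presumed_rank[:, None] + (1 - loadings) * noise
responses = (latent > 0).astype(int)  # 1 = item answered correctly

# Selection step: keep only items that correlate with the presupposed
# ranking -- i.e., items that "distinguish bright from dull" students.
item_r = np.array([np.corrcoef(responses[:, j], presumed_rank)[0, 1]
                   for j in range(n_candidates)])
kept = item_r > 0.4

scores = responses[:, kept].sum(axis=1)
print(f"items kept: {kept.sum()} of {n_candidates}")
print(f"correlation of final scores with the presupposition: "
      f"{np.corrcoef(scores, presumed_rank)[0, 1]:.2f}")
```

However the presupposed ranking was arrived at (teacher judgments, class background, earlier tests), the final scores correlate strongly with it by construction; the high correlation is an artifact of the selection step, not independent evidence of an underlying ability.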

Mensh and Mensh (1991: 30) state that “The [IQ] tests do what their construction dictates; they correlate a group’s mental worth with its place in the social hierarchy.” That is, who is or is not “intelligent” is presupposed in advance. There are ample admissions of such presuppositions shaping the distribution of scores, as critics have documented: for example, Hilliard (2012) describes tests normed separately for two different white cultural groups in South Africa, and Terman removed items on which the sexes scored differently in order to equalize scores on his 1937 revision of the Stanford-Binet.

Herrnstein and Murray (1994: 1) write that:

That the word intelligence describes something real and that it varies from person to person is as universal and ancient as any understanding about the state of being human. Literate cultures everywhere and throughout history have had words for saying that some people are smarter than others. Given the survival value of intelligence, the concept must be still older than that. Gossip about who in the tribe is cleverest has probably been a topic of conversation around the fire since fires, and conversation, were invented.

Castles (2012: 83) responds to these assertions, stating that the concept of intelligence is in fact a “brashly modern notion.” Herrnstein and Murray, of course, are firmly in the “of COURSE intelligence exists!” camp: for them, intelligence conferred survival advantages, so it must exist, and therefore we can measure it in humans.

Howe (1997), in his book IQ in Question, asks us to imagine being asked to construct a vanity test. Vanity, like ‘intelligence,’ has no agreed-upon definition stating how it should be measured, nor anything that makes it possible to check that we are measuring the supposed construct correctly. So the would-be assessor of vanity must build a test out of questions he presumes tap vanity. If the answers accord with how others perceive vanity, the ‘vanity test’ is deemed successfully constructed, and the test constructor can believe he is measuring “differences in” vanity. But selecting items for such a test is a subjective matter; there is no objective way to do it. With length, we can say that line A is twice as long as line B. We could not say that person A is twice as vain as person B, nor that person A is twice as intelligent as person B on the basis of IQ scores; what would it even mean for someone to be twice as vain, or twice as intelligent, as someone else?
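The contrast between “twice as long” and “twice as vain” is a point about scale types, and a toy calculation (illustrative numbers only; “renorm” below is a hypothetical rescaling, not any real test’s norming procedure) shows it directly: ratios of lengths survive any admissible change of units, while ratios of IQ-like scores do not survive an equally admissible renorming.

```python
# A minimal sketch (illustrative numbers only) of the scale-type point above.
# Length is a ratio scale: changing units multiplies every value by a constant,
# so ratios between objects are preserved. An IQ-like score is at best an
# interval scale: renorming may shift as well as stretch it, so "twice as
# much" is not a stable claim.

line_a, line_b = 10.0, 5.0            # lengths in centimetres
print(line_a / line_b)                # 2.0 -- A is twice as long as B
inch = lambda cm: cm / 2.54           # admissible transform for length: x -> k*x
print(inch(line_a) / inch(line_b))    # still 2.0: the ratio is unit-invariant

iq_a, iq_b = 130.0, 65.0              # hypothetical scores under one norming
print(iq_a / iq_b)                    # 2.0 -- "twice as intelligent"?
renorm = lambda s: 0.8 * s + 40       # an equally admissible renorming: x -> a*x + b
print(renorm(iq_a) / renorm(iq_b))    # ~1.57: the "twice" claim evaporates
```

Because an interval-type score permits shifting as well as stretching, any “twice as much” claim depends on an arbitrary zero point; length has a true zero, which is what licenses the ratio talk.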

Howe (1997: 6) writes:

The measurement of intelligence is bedeviled by the same problems that make it virtually impossible to measure vanity. It is of course possible to construct intelligence tests, and the tests can be useful in a number of ways for assessing human mental abilities, but it is wrong to assume that such tests have the capability of measuring an underlying quality of intelligence, if by ‘measuring’ we have in mind the same operations that are involved in the measurement of a physical quality such as length. A psychological test score is no more than an indication of how well someone has performed at a number of questions that have been chosen for largely practical reasons. Nothing is genuinely being measured.

But if a psychological test score “is no more than an indication of how well someone has performed at a number of questions that have been chosen for largely practical reasons,” then differential exposure to the knowledge those questions draw on is what explains differences in psychological test scores. Richardson (1998: 127) writes:

The most reasonable answer to the question “What is being measured?”, then, is ‘degree of cultural affiliation’: to the culture of test constructors, school teachers and school curricula. It is (unconsciously) to conceal this that all the manipulations of item selection, evasions about test validities, and searches for post hoc theoretical underpinning seem to be about. What is being measured is certainly not genetically constrained complexity of general reasoning ability as such,

Mensh and Mensh (1991: 73) note that “In reality — which is precisely the opposite of what Jensen claims it to be — test discrimination among individuals within any group is the incidental by-product of tests constructed to discriminate between groups. Because the tests’ class and racial bias ensures that some groups will be higher and others lower in the scoring hierarchy, the status of an individual member of a group is as a rule predetermined by the status of that group.”

In sum, these tests test what their constructors presume (mainly, class and racial bias), so the constructors see what they want to see. If a test does not match their presuppositions, it is discarded or reconstructed to fit them. Thus definitions of ‘intelligence’ will always be, as Castles (2012: 29) puts it, “a cultural construct, specific to a certain time and place.” Gottfredson’s definition does not make sense, because the IQ test, the main ‘measure’ of intelligence, is itself a test of the very “test-taking smarts” her definition excludes, and it presupposes the distribution of scores built in by the test constructors (Mensh and Mensh, 1991). Herrnstein and Murray’s definition fares no better, since the concept of “intelligence” is a modern notion.

At best, IQ test scores measure the degree of acquisition of a culture’s knowledge; they do not, and cannot, measure ‘intelligence,’ which is a cultural concept that changes with the times. The tests are inherently biased against certain groups; the history and construction of IQ testing make that clear. They are middle-class knowledge tests, not tests of ‘intelligence.’