
The Non-Validity of IQ: A Response to The Alternative Hypothesis



1250 words

Ryan Faulk, like most IQ-ists, believes that the correlation between IQ and job performance is somehow evidence for IQ's validity. He further believes that the correlation between self- and peer-ratings of intelligence and IQ scores is further evidence for IQ's validity.

The Validity of IQ

Well, too bad for Faulk: correlations with other tests, including other IQ tests, rest on circular assumptions. The first problem, as I've covered before, is that there is no agreed-upon model or description of IQ/intelligence/'g', and so we cannot reliably and truthfully state that differences in 'g', this supposed 'mental power', this 'strength', are what cause differences in test scores. Unfortunately for Ryan Faulk and other IQ-ists, coming back to our good old friend test construction, it's no wonder that IQ tests correlate at around .5 (or so is claimed) with job performance, because IQ test scores also correlate at around .5 with school achievement, and part of that correlation comes from items that test knowledge learned in school, such as "In what continent is Egypt?", "Who wrote Hamlet?", and "What is the boiling point of water?" As Ken Richardson writes in his 2017 book Genes, Brains, and Human Potential: The Science and Ideology of Intelligence (pg 85):

So it should come as no surprise that performance on them [IQ tests] is associated with school performance. As Robert L. Thorndike and Elizabeth P. Hagen explained in their leading textbook, Educational and Psychological Measurement, “From the very way in which the tests were assembled [such correlation] could hardly be otherwise.”

So, obviously, neither of the two tests independently establishes that it measures intelligence, this so-called innate power; they correlate moderately because they are, in effect, different versions of the same test. This goes back to item analysis and test construction. Is it any wonder, then, that correlations between IQ and achievement increase with age? It's built into the test! And while Faulk does cite high correlations from one of Schmidt and Hunter's meta-analyses on the subject, what he doesn't tell you is that one review found a correlation of .66 between teachers' assessments and their students' future achievement later in life (higher than the correlation between job performance and IQ) (Hoge and Coladarci, 1989). They write (pg 303): "The median correlation, 0.66, suggests a moderate to strong correspondence between teacher judgments and student achievement." This is just like what I quoted the other day in my response to Grey Enlightenment, where I quoted Layzer (1972), who wrote:

Admirers of IQ tests usually lay great stress on their predictive power. They marvel that a one-hour test administered to a child at the age of eight can predict with considerable accuracy whether he will finish college. But as Burt and colleagues have clearly demonstrated, teachers' subjective assessments afford even more reliable predictors. This is almost a truism.

So the correlation of .5 between occupational level and IQ is self-fulfilling: the two are not independent measures. In regard to the correlation between IQ and job performance, which I've discussed in the past, studies in the 70s showed much lower correlations, between .2 and .3, as Jensen points out in The g Factor.
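The test-construction point above can be made concrete with a toy simulation (hypothetical numbers, not real test data): if "item analysis" keeps only candidate items that already track school achievement, the assembled test will correlate with achievement by construction, with no appeal to any underlying 'mental power'.

```python
import numpy as np

rng = np.random.default_rng(0)
n_students, n_items = 2000, 200

# Latent school achievement (standardised); purely hypothetical.
achievement = rng.normal(size=n_students)

# Candidate items: roughly half track achievement (with varying strength),
# the other half are pure noise.
signal = rng.uniform(0.0, 1.5, size=n_items) * (rng.random(n_items) < 0.5)
logits = achievement[:, None] * signal[None, :] + rng.normal(size=(n_students, n_items))
responses = (logits > 0).astype(float)  # 1 = item answered correctly

# "Item analysis": keep only items whose pass/fail pattern correlates
# with achievement, as test constructors do when screening items.
item_r = np.array([np.corrcoef(responses[:, j], achievement)[0, 1]
                   for j in range(n_items)])
kept = item_r > 0.3
test_score = responses[:, kept].sum(axis=1)

# The assembled "IQ test" correlates strongly with achievement,
# because that correlation was the selection criterion for its items.
r = np.corrcoef(test_score, achievement)[0, 1]
print(round(r, 2))
```

The noise items are screened out and only achievement-tracking items survive, so the summed score ends up strongly correlated with achievement regardless of what, if anything, the items "measure" in common.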

The problem with the so-called validity studies carried out by Schmidt and Hunter, as cited by Ryan Faulk, is that their analysis included numerous tests that were not IQ tests: memory tests, reading tests, the SAT, university admission tests, employment selection tests, and a variety of armed forces tests. "Just calling these "general ability tests," as Schmidt and Hunter do, is like reducing a diversity of serum counts to a "general blood test"" (Richardson, 2017: 87). Of course, the problem with using vastly different tests is that they tap into different abilities and different sources of individual differences. The correlation between SAT scores and high school grades is .28, whereas the correlation of each with IQ is about .2. So they are clearly not testing the same "general ability."

Furthermore, job performance in these studies rests on a single measure: supervisor ratings. These ratings are highly subjective and extremely biased; age effects and halo effects from height and facial attractiveness have been shown to sway judgments of how well one works. Measures of job performance are unreliable, especially ratings from supervisors, because of the assumptions and biases built into the measure.

I’ve also shown back in October that there is little relationship between IQ and promotion to senior doctor (McManus et al, 2013).

Do IQ tests test neural processes? Not really. One of the most-studied variables is reaction time: the quicker people react to a stimulus, the story goes, the higher their IQ is on average, because they are quicker to process information. Detterman (1987) notes that factors other than 'processing speed' can explain differences in reaction time, including but not limited to stress, understanding of instructions, motivation to do the task, attention, arousal, sensory acuity, and confidence. Khodadadi et al (2014) even write: "The relationship between reaction time and IQ is too complicated and reveal a significant correlation depends on various variables (e.g. methodology, data analysis, instrument etc.)." Complex cognition in real life is also completely different from the simple questions asked in the Raven (Richardson and Norgate, 2014).

It is easy to look at the puzzles that make up IQ tests and be convinced that they really do test brain power. But then we ignore the brain power that nearly everyone displays in their everyday lives. Some psychologists have noticed that people who stumble over formal tests of cognition can handle highly complex problems in their real lives all the time. As Michael Eysenck put it in his well-known book Psychology, "There is an apparent contradiction between our ability to deal effectively with our everyday environment and our failure to perform well on many laboratory reasoning tasks." We can say the same about IQ tests.


Real-life problems combine many more variables that change over time and interact. It seems that the ability to do pretentious problems in a pencil-and-paper (or computer) format, like IQ test items, is itself a learned, if not-so-complex skill. (Richardson, 2017: 95-96)

Finally, Faulk cites studies showing that how people and their peers rated their own and others' intelligence predicted how well they did on IQ tests. This isn't surprising. Since IQ scores correlate with academic achievement at .5, someone who does well academically will more often than not have a high test score. That friends rate friends highly and the ratings end up matching scores is no surprise either: people generally group together with others like themselves and will therefore have similar achievements. That is not evidence for test validity, though! See Richardson and Norgate (2015): "In scientific method, generally, we accept external, observable differences as a valid measure of an unseen function when we can mechanistically relate differences in one to differences in the other …" So even Faulk's attempt to 'validate' IQ tests using peer- and self-ratings of 'intelligence' (whatever that is) falls on its face, since it is not a true measure of validity. It's not construct validity. (EDIT: Psychological constructs are validated 'by testing whether they relate to measures of other constructs as specified by theory' (Strauss and Smith, 2009). This doesn't exist for IQ; therefore IQ isn't construct valid.)

In sum, Faulk’s article leaves a ton to be desired and doesn’t outright prove that there is validity to IQ tests because, as I’ve shown in the past, validity for IQ is nonexistent, though some have tried (using correlations with job performance as evidence) but Richardson and Norgate (2015) take down those claims and show that the correlation is between .2 and .3, not the .5+ cited by Hunter and Schmidt in their ‘validation studies’. The criteria laid out by Faulk does not prove that there is true construct validity to IQ tests and due to test construction, we see these correlations with educational achievement.



  1. About a year ago I read an article that said the frontal-parietal junction was most correlated with general intelligence. The parietal lobe deals with spatial awareness and locates objects in space from the temporal lobe. The frontal lobe deals with working memory so that objects in space can be manipulated mentally. The temporal lobe and the frontal lobe also handle verbal fluidity.

    The test I took said my (g) was 130.
    But processing speed was (86)
    Working memory (95)

    I had to do word problems in my head that became too long and complex to do. If I had a pen and paper I could do them but the test was meant to see what I could do in my head. My digit span is 5 so I understand that it is not just a problem with the test. Some people have digit spans of 10 and I bet they could do the word problems with ever-increasing difficulty. These questions rely on verbal brain areas.

    (g) questions are not timed because what is being measured is the ability to find the correct answer. You can be fast but not know how to manipulate objects in space.

    A big reason I got 130 in (g) is that I got 130 in Figure Weights

    It is reported in the WAIS-IV Technical Manual that the Figure Weights task measures quantitative and analogical reasoning [8]. These types of Piagetian tasks routinely assess quantitative reasoning that can be expressed mathematically through deductive and inductive logic [3]


  2. ron burgundy says:

    the SAT is an IQ test.

    only retards don’t understand that.


  3. ron burgundy says:

    rr can’t appreciate that the IQ jive is intentional and to some extent self-aware.

    that is, the phenomenon of positive correlation between tests of all kinds must have an explanation. the IQ-ists don’t explain it, but they give it a name. they call it g. this is itself positively correlated with brain volume.

    if rr has an explanation he should give it. but he can’t deny that it is an interesting phenomenon.

    the SAT and other such college entrance tests are superior to soi-disant IQ tests, because they are normed on millions rather than normed on a couple thousand.

    but even then the wechsler correlates at about .9 with the SAT for a representative population of test takers. peepee claims this correlation breaks down at very high scores. this may be, but the overall correlation is still about .9.



    • RaceRealist says:

      that is, the phenomenon of positive correlation between tests of all kinds must have an explanation. the IQ-ists don’t explain it, but they give it a name. they call it g. this is itself positively correlated with brain volume

      Other explanations exist and they don't rely on a mysterious 'power'. You can reverse the causality from 'good genes' to nutrition etc regarding the low brain size-IQ correlation. It's perfectly logical to reverse causality.

      The phenomenon is only ‘interesting’ if you believe the stories about ‘g’.

      but even then the wechsler correlates at about .9 with the SAT for a representative population of test takers. peepee claims this correlation breaks down at very high scores. this may be, but the overall correlation is still about .9.


      Why such a high correlation? Because IQ tests and achievement tests are different versions of the same test.


  4. ron burgundy says:

    rr’s criticisms of IQ applies to everything in psychology.

    it applies to abnormal psychology and thus psychiatry. not a single supposed mental illness has a physical test.

    it applies to motivation and personality.

    does this mean these things don’t exist?


    • Phil78 says:

      “rr’s criticisms of IQ applies to everything in psychology.”

      Could very well be, mainly in psychometrics of intelligence and personality, but your following examples don't exactly parallel.

      “it applies to abnormal psychology and thus psychiatry. not a single supposed mental illness has a physical test.”

      That’s because they are identified based on symptom diagnosis, not a “test” in the same sense as IQ or any other attribute aside from screenings which aren’t primary.

      Click to access Autism-Screening-Questionnaire-Diagnostic-validity.pdf

      “it applies to motivation and personality.”

      Not quite, seeing as physically based explanations do exist.

      And validity has been tested and improved upon along with a better understanding of personality flexibility.

      Unlike IQ tests, as explained here, not as many assumptions are made as they are with “intelligence”.

      “does this mean these things don’t exist?”

      No, what it means is that they aren't accurately captured in the tests.


    • RaceRealist says:

      Psychology and psychiatry as a whole are largely vehicles for Big Pharma anyway.

      does this mean these things don’t exist?

      If a test of "athletic abilities" existed and I criticized the whole test, would I be denying that individuals differ in athletic ability, or would I just be pointing out flaws in shitty tests? I made this argument a few weeks ago.

      Athletic Ability and IQ

