Construct validity for IQ is fleeting. Some people may point to Haier’s brain imaging data as evidence of construct validity for IQ, even though brain imaging has numerous problems and neuroreductionist explanations for cognition are “probably not” possible (Uttal, 2014; see also Uttal, 2012). Construct validity refers to how well a test measures what it purports to measure, and for IQ it is non-existent (see Richardson and Norgate, 2014). If the tests did test what they purport to (intelligence), then they would be construct valid. I will give an example of a measure that was validated and shown to be reliable without circular reliance on the instrument itself; I will show that the measures people use in an attempt to prove that IQ has construct validity fail; and finally I will argue that the claim “IQ tests test intelligence” is false, since the tests are not construct valid.
Jung and Haier (2007) formulated the P-FIT hypothesis—the Parieto-Frontal Integration Theory. The theory purports to show how individual differences in test scores are linked to variations in brain structure and function. There are, however, a few problems with the theory, as Richardson and Norgate (2007) point out in the same issue (pp. 162-163). IQ scores and brain region volumes are experience-dependent (e.g., Shonkoff et al, 2014; Betancourt et al, 2015; Lipina, 2016; Kim et al, 2019), so different experiences will produce different brains and different test scores. Richardson and Norgate (2007) argue that bigger brain areas are not the cause of IQ; rather, the experience-dependency of both explains the correlation: exposure to middle-class knowledge and skills provides a better knowledge base for test-taking (Richardson, 2002), while access to better nutrition is concentrated in the middle and upper classes and, as Richardson and Norgate (2007) note, lower-quality, more energy-dense foods are more common in the lower classes. Thus, Haier et al did not “find” what they purported to; they relied on simplistic correlations.
Now let me provide the argument about IQ test experience-dependency:
Premise 1: IQ tests are experience-dependent.
Premise 2: IQ tests are experience-dependent because some classes are more exposed to the knowledge and structure of the test by way of being born into a certain social class.
Premise 3: If IQ tests are experience-dependent because some social classes are more exposed to the knowledge and structure of the test, along with whatever else comes with membership of that social class, then the tests test distance from the middle class and its knowledge structure.
Conclusion 1: IQ tests test distance from the middle class and its knowledge structure (P1, P2, P3).
Premise 4: If IQ tests test distance from the middle class and its knowledge structure, then how an individual scores on a test is a function of that individual’s cultural/social distance from the middle class.
Conclusion 2: How an individual scores on a test is a function of that individual’s cultural/social distance from the middle class, since the items on the test are more likely to be encountered in the middle class (i.e., they are experience-dependent), and so one from a lower class will necessarily score lower, not having been exposed to the content of the items (C1, P4).
Conclusion 3: IQ tests test distance from the middle class and its knowledge structure, thus, IQ scores are middle-class scores (C1, C2).
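For readers who want the skeleton of this argument spelled out, here is a minimal sketch of its propositional form in Lean 4 (the letters E, D, F are my own labels for the premises, not part of the original):

```lean
-- E: IQ tests are experience-dependent via class-based exposure (Premises 1-2)
-- D: IQ tests test distance from the middle class and its knowledge structure
-- F: an individual's score is a function of cultural/social distance from the middle class
theorem iq_argument (E D F : Prop)
    (p1p2 : E)        -- Premises 1 and 2
    (p3 : E → D)      -- Premise 3
    (p4 : D → F)      -- Premise 4
    : D ∧ F :=        -- Conclusions 1-3
  ⟨p3 p1p2, p4 (p3 p1p2)⟩
```

The formal skeleton is valid; whether the conclusion holds, of course, turns entirely on the truth of the premises.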
Still further regarding neuroimaging, we need to take a look at William Uttal’s work.
Uttal (2014) shows that “The problem is that both of these approaches are deeply flawed for methodological, conceptual, and empirical reasons. One reason is that simple models composed of a few neurons may simulate behavior but actually be based on completely different neuronal interactions. Therefore, the current best answer to the question asked in the title of this contribution [Are neuroreductionist explanations of cognition possible?] is–probably not.”
Uttal also has a book on meta-analyses and brain imaging—which, of course, has implications for Jung and Haier’s P-FIT theory. In his book Reliability in Cognitive Neuroscience: A Meta-Meta-Analysis, Uttal (2012: 2) writes:
There is a real possibility, therefore, that we are ascribing much too much meaning to what are possibly random, quasi-random, or irrelevant response patterns. That is, given the many factors that can influence a brain image, it may be that cognitive states and brain image activations are, in actuality, only weakly associated. Other cryptic, uncontrolled intervening factors may account for much, if not all, of the observed findings. Furthermore, differences in the localization patterns observed from one experiment to the next nowadays seem to reflect the inescapable fact that most of the brain is involved in virtually any cognitive process.
Uttal (2012: 86) also warns about individual variability throughout the day, writing:
However, based on these findings, McGonigle and his colleagues emphasized the lack of reliability even within this highly constrained single-subject experimental design. They warned that: “If researchers had access to only a single session from a single subject, erroneous conclusions are a possibility, in that responses to this single session may be claimed to be typical responses for this subject” (p. 708).
The point, of course, is that if individual subjects are different from day to day, what chance will we have of answering the “where” question by pooling the results of a number of subjects?
That neural activations gleaned from neuroimaging studies vary from individual to individual, and even with the time of day within an individual, means that these differences are not accounted for in group analyses (meta-analyses). “… the pooling process could lead to grossly distorted interpretations that deviate greatly from the actual biological function of an individual brain. If this conclusion is generally confirmed, the goal of using pooled data to produce some kind of mythical average response to predict the location of activation sites on an individual brain would become less and less achievable” (Uttal, 2012: 88).
Clearly, individual differences in brain imaging are not stable and they change day to day, hour to hour. Since this is the case, how does it make sense to pool (meta-analyze) such data and then point to a few brain images as important for X if there is such large variation in individuals day to day? Neuroimaging data is extremely variable, which I hope no one would deny. So when such studies are meta-analyzed, inter- and intrasubject variation is obscured.
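Uttal’s pooling worry can be made concrete with a toy simulation (hypothetical numbers of my own, not real fMRI data): give each subject a stable “true” activation peak location plus day-to-day session noise, and the pooled “meta-analytic” average lands far from most individuals’ own peaks.

```python
# Hypothetical simulation: subjects' true peak locations differ widely,
# and each scanning session adds day-to-day noise on top of that.
import random
import statistics

random.seed(42)

N_SUBJECTS, N_SESSIONS = 20, 10
SUBJECT_SPREAD = 30.0   # mm: how far subjects' true peaks differ from one another
SESSION_NOISE = 8.0     # mm: within-subject, day-to-day variability

# One spatial coordinate of an activation peak, per subject per session.
true_peaks = [random.gauss(0.0, SUBJECT_SPREAD) for _ in range(N_SUBJECTS)]
sessions = [[random.gauss(mu, SESSION_NOISE) for _ in range(N_SESSIONS)]
            for mu in true_peaks]

# The pooled ("meta-analytic") average across all subjects and sessions.
pooled_mean = statistics.mean(x for subj in sessions for x in subj)

# Distance from the pooled average to each subject's own typical peak:
# it dwarfs the session-level noise, so the "average" location describes no one.
errors = [abs(statistics.mean(subj) - pooled_mean) for subj in sessions]
print(f"pooled mean location: {pooled_mean:.1f} mm")
print(f"mean |pooled - subject peak|: {statistics.mean(errors):.1f} mm")
```

With these made-up parameters, the average distance between the pooled location and any individual subject’s own peak is several times the within-subject noise, which is exactly the sense in which pooling obscures inter-subject variation.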
The idea of an average or typical “activation region” is probably nonsensical in light of the neurophysiological and neuroanatomical differences among subjects. Researchers must acknowledge that pooling data obscures what may be meaningful differences among people and their brain mechanisms. However, there is an even more negative outcome. That is, by reifying some kinds of “average,” we may be abetting and preserving some false ideas concerning the localization of modular cognitive function (Uttal, 2012: 91).
So when we are dealing with the raw neuroimaging data (i.e., the unprocessed locations of activation peaks), the graphical plots provided of the peaks do not lead to convergence onto a small number of brain areas for that cognitive process.
… inconsistencies abound at all levels of data pooling when one uses brain imaging techniques to search for macroscopic regional correlates of cognitive processes. Individual subjects exhibit a high degree of day-to-day variability. Intersubject comparisons between subjects produce an even greater degree of variability.
The overall pattern of inconsistency and unreliability that is evident in the literature to be reviewed here again suggests that intrinsic variability observed at the subject and experimental level propagates upward into the meta-analysis level and is not relieved by subsequent pooling of additional data or averaging. It does not encourage us to believe that the individual meta-analyses will provide a better answer to the localization of cognitive processes question than does any individual study. Indeed, it now seems plausible that carrying out a meta-analysis actually increases variability of the empirical findings (Uttal, 2012: 132).
So since reliability is low at all levels of neuroimaging analysis, it is very likely that the relations between particular brain regions and specific cognitive processes have not been established and may not even exist. The numerous reports purporting to find such relations report random and quasi-random fluctuations in extremely complex systems.
Construct validity (CV) is “the degree to which a test measures what it claims, or purports, to be measuring.” A “construct” is a theoretical psychological construct. So CV in this instance refers to whether IQ tests test intelligence. We accept a measure of an unseen function when differences in the measure are mechanistically related to differences in the function—e.g., breath alcohol and level of consumption, or the height of a mercury column and blood pressure. These measures are valid because they rely on well-known theoretical constructs. There is no such theory for individual intelligence differences (Richardson, 2012), so IQ tests can’t be construct valid.
Consider the thermometer. Thermometers measure temperature; IQ tests (supposedly) measure intelligence. The difference between the two is that the accuracy and reliability of thermometers were established without circular reliance on the instrument itself (see Chang, 2007).
In regard to IQ tests, it is proposed that the tests are valid since they predict school performance, adult occupation levels, income, and wealth. But this is circular reasoning, and it doesn’t establish the claim that IQ tests are valid measures (Richardson, 2017). Unlike thermometers, which were validated independently of the instrument itself, IQ tests are said to be valid because they predict other test scores and life success. Yet IQ tests and these other tests are different versions of the same test, so agreement between them cannot validate the measure: new tests are “validated” by their agreement with previous IQ tests, such as the Stanford-Binet. As Richardson (2002: 301) notes, “Most other tests have followed the Stanford–Binet in this regard (and, indeed are usually ‘validated’ by their level of agreement with it; Anastasi, 1990).” How weird… new tests are validated by their agreement with other tests that are themselves not construct valid, which does not, of course, prove the validity of IQ tests.
IQ tests are constructed by selecting and excising items according to how well they discriminate between better and worse test-takers, meaning, of course, that the bell curve is not natural, but forced (see Simon, 1997). Humans make the bell curve; it is not a natural phenomenon of IQ, since the first tests produced odd-looking distributions. (Also see Richardson, 2017a, Chapter 2 for more arguments against the bell curve distribution.)
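The “forced” bell curve is easy to demonstrate. Below is a minimal sketch (hypothetical raw scores, not real test data) of the kind of rank-based norming used to standardize IQ scores: whatever shape the raw scores take, the procedure maps them onto a normal curve with mean 100 and SD 15 by construction.

```python
# Rank-based norming: the Gaussian shape is imposed by the scorer,
# not discovered in the data. Raw scores here are deliberately skewed.
import random
from statistics import NormalDist, mean, stdev

random.seed(0)

# Hypothetical raw test scores with a strongly skewed, non-normal shape.
raw_scores = [random.random() ** 3 * 60 for _ in range(5000)]

# Map each score's percentile rank onto the normal curve N(100, 15).
norm = NormalDist(mu=100, sigma=15)
order = sorted(range(len(raw_scores)), key=lambda i: raw_scores[i])
iq = [0.0] * len(raw_scores)
for rank, i in enumerate(order):
    iq[i] = norm.inv_cdf((rank + 0.5) / len(raw_scores))

print(f"raw mean/sd: {mean(raw_scores):.1f} / {stdev(raw_scores):.1f}")
print(f"IQ  mean/sd: {mean(iq):.1f} / {stdev(iq):.1f}")  # ~100 / ~15 regardless of raw shape
```

The input distribution here is heavily skewed, yet the output is a textbook bell curve; the normality of IQ scores is a product of the scoring procedure, not evidence about the trait.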
Finally, Richardson and Norgate (2014) write:
In scientific method, generally, we accept external, observable, differences as a valid measure of an unseen function when we can mechanistically relate differences in one to differences in the other (e.g., height of a column of mercury and blood pressure; white cell count and internal infection; erythrocyte sedimentation rate (ESR) and internal levels of inflammation; breath alcohol and level of consumption). Such measures are valid because they rely on detailed, and widely accepted, theoretical models of the functions in question. There is no such theory for cognitive ability nor, therefore, of the true nature of individual differences in cognitive functions.
That “There is no such theory for cognitive ability” is even admitted by lead IQ-ist Ian Deary in his 2001 book Intelligence: A Very Short Introduction, in which he writes “There is no such thing as a theory of human intelligence differences—not in the way that grown-up sciences like physics or chemistry have theories” (Richardson, 2012). This is yet another barrier to IQ’s claimed validity: there is no theory of human intelligence differences.
In sum, neuroimaging meta-analyses (like Jung and Haier, 2007; see also Richardson and Norgate, 2007 in the same issue, pp. 162-163) do not show what they purport to show, for numerous reasons. (1) Malnutrition has consequences for brain development, and lower classes are more likely to not have their nutritional needs met (Ruxton and Kirk, 1996); (2) lower classes are more likely to be exposed to substance abuse (Karriker-Jaffe, 2013), which may well impact brain regions; (3) stress arising from a poor sense of control over circumstances, including financial and workplace insecurity, affects children and leaves “an indelible impression on brain structure and function” (Teicher 2002, p. 68; cf. Austin et al. 2005; Richardson and Norgate, 2007: 163); and (4) working-class attitudes are related to poor self-efficacy beliefs, which also affect test performance (Richardson, 2002). So Jung and Haier’s (2007) theory “merely redescribes the class structure and social history of society and its unfortunate consequences” (Richardson and Norgate, 2007: 163).
In regard to neuroimaging, pooling (meta-analyzing) numerous studies is fraught with conceptual and methodological problems, since there is a high degree of individual variability. Attempts to find “average” brain differences across individuals fail, and the meta-analytic technique used (e.g., by Jung and Haier, 2007) does not find what it seeks: average brain areas where cognition supposedly occurs across individuals. Meta-analyzing such disparate studies does not reveal an “average” locus of cognitive processes and thus cannot explain differences in IQ test-taking. Reductionist neuroimaging studies do not, as is popularly believed, pinpoint where cognitive processes take place in the brain; such localizations have not been established and may not even exist.
Neuroreductionism does not work; attempts to reduce cognitive processes to different regions of the brain, even using the meta-analytic techniques discussed here, fail. Neuroreductionist explanations for cognition are “probably not” possible (Uttal, 2014), and so using these studies to pinpoint where in the brain cognition supposedly occurs for things such as IQ test-taking fails. (Neuro)reductionism fails.
Since there is no theory of individual differences in IQ, IQ tests cannot be construct valid. Even if there were such a theory, IQ tests would still not be construct valid, since a mechanistic relation between IQ scores and the construct they purport to measure would also need to be established. Attempts at validating IQ tests rely on correlations with other tests and older IQ tests—but that validity is precisely what is under contention, so correlating with older tests does not give IQ tests the requisite validity to make the claim “IQ tests test intelligence” true. IQ tests do not even measure the ability for complex cognition; real-life tasks are more complex than the most complex items on any IQ test (Richardson and Norgate, 2014b).
Now, having said all that, the argument can be formulated very simply:
Premise 1: If the claim “IQ tests test intelligence” is true, then IQ tests must be construct valid.
Premise 2: IQ tests are not construct valid.
Conclusion: Therefore, the claim “IQ tests test intelligence” is false. (modus tollens, P1, P2)
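The modus tollens step here can be checked mechanically. A minimal sketch in Lean 4 (with P standing for “IQ tests test intelligence” and Q for “IQ tests are construct valid”):

```lean
-- Modus tollens, as used above: from P → Q and ¬Q, infer ¬P.
-- P: "IQ tests test intelligence"; Q: "IQ tests are construct valid".
theorem iq_modus_tollens (P Q : Prop) (p1 : P → Q) (p2 : ¬Q) : ¬P :=
  fun hp => p2 (p1 hp)
```

The inference form is unimpeachable; anyone who rejects the conclusion must reject Premise 1 or Premise 2.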
Reductionists would claim that athletic success comes down to the molecular level. I disagree. Of course, understanding the molecular pathways of how and why certain athletes excel in certain sports can and will increase our understanding of elite athleticism, but reductionist accounts do not tell the full story. I used to be a reductionist in this area (see my article Racial Differences in Muscle Fiber Typing Cause Differences in Elite Sporting Competition, which argues that elite athletic success comes down to the molecular level). That I no longer hold such views does not mean I deny that certain things make an elite athlete. Rather, I was wrong to reduce a complex bio-system to one variable and call it “the cause” of elite athletic success.
In the book The Genius in All of Us: New Insights into Genetics, Talent, and IQ, David Shenk dispenses with reductionist accounts of athletic success in the fifth chapter. He writes:
2. GENES DON’T DIRECTLY CAUSE TRAITS; THEY ONLY INFLUENCE THE SYSTEM.
Consistent with other lessons of GxE [Genes x Environment], the surprising finding of the $3 billion Human Genome Project is that only in rare instances do specific gene variants directly cause specific traits or diseases. …
As the search for athletic genes continues, therefore, the overwhelming evidence suggests that researchers will instead locate genes prone to certain types of interactions: gene variant A in combination with gene variant B, provoked into expression by X amount of training + Y altitude + Z will to win + a hundred other life variables (coaching, injuries, etc.), will produce some specific result R. What this means, of course, What this means, of course, is that we need to dispense rhetorically with thick firewall between biology (nature) and training (nurture). The reality of GxE assures that each persons genes interacts with his climate, altitude, culture, meals, language, customs and spirituality—everything—to produce unique lifestyle trajectories. Genes play a critical role, but as dynamic instruments, not a fixed blueprint. A seven- or fourteen- or twenty-eight-year-old is not that way merely because of genetic instruction. (Shenk, 2010: 107) [Also read my article Explaining African Running Success Through a Systems View.]
This looks at the entire system: genes, training, altitude, will to win, and numerous other variables conducive to athletic success. You can’t pinpoint one variable in the system and say that it is the cause: the variables work together in concert to produce the athletic phenotype. One can invoke Noble’s (2012) argument that there is no privileged level of causation in the production of an athletic phenotype. Too many factors go into the production of an elite athlete for reducing it to one or a few of them to be anything but a fool’s errand. So we can say that there is no privileged level of causation in regard to the athletic phenotype.
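Shenk’s GxE point can be illustrated with a toy example (arbitrary numbers of my own invention): when genes and training interact, asking which gene variant is “better” has no answer independent of the environment, which is one way of putting Noble’s point that no level of the system is causally privileged.

```python
# Hypothetical gene-by-training interaction: performance scores are made up
# to show a crossover interaction, not drawn from any real study.
performance = {
    # (gene_variant, training): performance score
    ("A", "low_training"):  50,
    ("A", "high_training"): 95,   # variant A excels only with heavy training
    ("B", "low_training"):  70,   # variant B is better under low training
    ("B", "high_training"): 80,
}

def better_variant(training: str) -> str:
    """Which variant scores higher, holding the environment fixed?"""
    return max("AB", key=lambda g: performance[(g, training)])

# The answer flips with the environment: "which gene is better?" is ill-posed.
print(better_variant("low_training"))   # B
print(better_variant("high_training"))  # A
```

Averaging over environments to assign the gene a fixed effect would misdescribe both rows of the table, which is exactly what the “thick firewall” between nature and nurture gets wrong.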
In his paper Sport and common-sense racial science, Louis (2004: 41) writes:
The analysis and explanation of racial athleticism is therefore irreducible to biological or socio-cultural determinants and requires a ‘biocultural approach’ (Malina, 1988; Burfoot, 1999; Entine, 2000) or must account for environmental factors (Himes, 1988; Samson and Yerlès, 1988).
Reducing anything, sports included, to either socio-cultural determinants or biology alone doesn’t make sense; I agree with Louis that we need a ‘biocultural approach’, since biology and socio-cultural determinants are linked. This, of course, upends the nature vs. nurture debate: neither “nature” nor “nurture” has won; they causally depend on one another to produce the elite athletic phenotype.
Louis (2004) further writes:
In support of this biocultural approach, Entine (2001) argues that athleticism is irreducible to biology because it results from the interaction between population-based genetic differences and culture that, in turn, critiques the Cartesian dualism ‘which sees environment and genes as polar-opposite forces’ (p. 305). This critique draws on the centrality of complexity, plurality and fluidity to social description and analysis that is significant within multicultural common sense. By pointing to the biocultural interactivity of racial formation, Entine suggests that race is irreducible to a single core determinant. This asserts its fundamental complexity that must be understood as produced through the process of articulation across social, cultural and biological categories.
Of course, race is irreducible to a single core determinant; but it is a genuine kind in biology, and so we must understand the social, cultural, and biological causes and how they interact with each other to produce the athletic phenotype. We can look at athlete A, see that he is black, look at his somatotype, and ascertain that part of the reason he is a good athlete is his biology. Indeed, one needs the requisite morphology to succeed in a given sport, though morphology is quite clearly not the only variable needed to produce the athletic phenotype.
One prevalent example here is the Kalenjin (see my article Why Do Jamaicans, Kenyans, and Ethiopians Dominate Running Competitions?). There is no core determinant of Kalenjin running success; indeed, one study I cited in that article shows that Germans had higher levels of a physiological variable conducive to long-distance running success than the Kalenjin did. On the systems view of athleticism, this is beside the point. Low Kenyan BMI (the lowest in the world), combined with altitude training (they live at higher altitudes and presumably compete at lower ones), a meso-ecto somatotype, the will to train, and even running to and from wherever they need to go all combine to show how and why this small tribe of Kenyans excels so much in long-distance running competitions.
Sure, we can say from what we know about anatomy and physiology that a certain parameter may be “better” or “worse” in the context of the sport in question; no one denies that. What is denied is the claim that athleticism reduces to biology. It does not, because biology, society, and culture all interact, and the interaction itself is irreducible; it makes no sense to partition biology, society, and culture into percentage points in order to say that one variable has primacy over another, because each level of the system interacts with every other level. Genes, anatomy and physiology, the individual, the overarching society, cultural norms, peers, and a whole slew of other factors explain athletic success not only in the Kalenjin but in all athletes.
Broos et al (2016) showed that the RR genotype, coupled with the right morphology and fast-twitch muscle fibers, leads to more explosive contractions. They write:
In conclusion, this study shows that a-actinin-3 deficiency decreases the contraction velocity of isolated type IIa muscle fibers. The decreased cross-sectional area of type IIa and IIx fibers may explain the increased muscle volume in RR genotypes. Thus, our results suggest that, rather than fiber force, combined effects of morphological and contractile properties of individual fast muscle fibers attribute to the enhanced performance observed in RR genotypes during explosive contractions.
This shows the interaction between the genotype, morphology, fast-twitch fibers (which blacks have more of; Caeser and Henry, 2015), and, of course, the grueling training these elite athletes go through. All of these factors interact, which further buttresses my argument that different levels of the system causally interact with each other to produce the athletic phenotype.
Pro athletes also have “extraordinary skills for rapidly learning complex and neutral dynamic visual scenes” (Faubert, 2013). This is yet another part of the system, along with the physical variables, that an elite athlete needs. Indeed, as Lippi, Favaloro, and Guidi (2008) write:
An advantageous physical genotype is not enough to build a top-class athlete, a champion capable of breaking Olympic records, if endurance elite performances (maximal rate of oxygen uptake, economy of movement, lactate/ventilatory threshold and, potentially, oxygen uptake kinetics) (Williams & Folland, 2008) are not supported by a strong mental background.
So now we have: (1) a strong mental background; (2) genes; (3) morphology; (4) VO2 max; (5) altitude; (6) will to win; (7) training; (8) coaching; (9) injuries; (10) peer/familial support; (11) fiber typing; (12) heart strength; etc. There are, of course, myriad other variables conducive to athletic success, but they are irreducible: each must be looked at in the context of the whole system we are observing.
In conclusion, athleticism is irreducible to biology. Since athleticism is irreducible to biology, explaining it requires looking at the entire system, from the individual all the way up to the society that individual is in (and everything in between), to see how and why athletic phenotypes develop. There is no logical reason to attempt to reduce athleticism to biology, since all of these factors interact. Therefore, the systems view of athleticism is the way we should view the development of athletic phenotypes.
(i) Nature and Nurture interact.
(ii) Since nature and nurture interact, it makes no sense to attempt to reduce anything to one or the other.
(iii) Since it makes no sense to attempt to reduce anything to nature or nurture alone, we must dispense with the idea that reductionism can causally explain differences in athleticism between individuals.