Category Archives: Refutations
Unless you’ve been living under a rock since the new year, you have heard of the “coup attempt” at the Capitol building on Wednesday, January 6th. Upset that the election was “stolen” from Trump, his supporters showed up at the building and rushed it, causing mass chaos. But why did they do this? Why the violence when they did not get their way in a fair election? Michael Ryan, author of The Genetics of Political Behavior: How Evolutionary Psychology Explains Ideology (2020), has the answer: what he terms “rightists” and “leftists” evolved at two different times in our evolutionary history, which then explains the trait differences between the two political parties. This article will review part of the book—the evolutionary sections (chapters 1-3).
EP and ideology
Ryan’s goal is to explain why individuals who call themselves “rightists” and “leftists” behave differently from one another. He argues, at length, that the two parties have two different personality profiles. This, he claims, is because the ancestors of rightists and leftists evolved at two different times in human history. He calls these periods “Trump Island” and “Obama Island”—apt names, especially given what occurred last week. Ryan claims that what makes Trump different from, say, Obama is that his ancestors evolved in a different place at a different time than Obama’s ancestors did. He further claims, drawing on the Stanford Prison Experiment, that “we may not all be capable of becoming Nazis, after all. Just some, and conservatives especially so” (pg 12).
In the first chapter he begins with the usual adaptationism that Evolutionary Psychologists use. Reading between the lines of his implicit claims, he is arguing that “rightists and leftists” are natural kinds—that is, they are *two different kinds of people.* He explains some personality differences between rightists and leftists and then says that such trait differences are “rooted in biology and governed by genes” (pg 17). Ryan then makes a strong adaptationist claim—that traits are due to adaptation to the environment (pg 17). What makes you and me different from Trump, he claims, is that our ancestors and his evolved in different places at different times, where different traits were imperative for survival. Over time, different traits got selected-for in these two populations, leading to the trait differences we see today. Each environment, he claims, led to the fixation of different adaptive traits, which explains the differences we now see between the two parties.
Ryan then shifts from the evolution of personality differences to… the evolution of the beaks of Darwin’s finches and Tibetan adaptation to high-altitude living (pg 18), as if the evolution of physical traits is anything like the evolution of psychological traits. His folly is assuming that such physical traits can be likened to personality/mental traits. The ancestors of rightists and leftists, like Darwin’s finches, Ryan claims, evolved on different islands at different moments of evolutionary time. They evolved different brains, and different adaptive behaviors on the basis of those different brains. Trump’s ancestors were authoritarian, and their island came early in human history, “which accounts for why Trump’s behavior seems so archaic at times” (pg 18).
The different traits that leftists show in comparison to rightists are, on this account, due to the fact that their island came at a different point in evolutionary time—more recent than the so-called archaic dominance behavior portrayed by Trump and other rightists. Ryan says that Obama Island was more crowded than Trump Island and that, instead of scowling, its inhabitants smiled, which “forges links with others and fosters reciprocity” (pg 19). Due to environmental adversity, they had a more densely populated “island”—in this novel situation, compared to the more “archaic” earlier time, the small bands needed to cooperate, rather than fight with each other, to survive. This, according to Ryan, explains why studies show more smiling behavior in leftists compared to rightists.
Some of our ancestors evolved traits such as cooperativeness that aided the survival of all even though not everyone acquired the trait … Eventually a new genotype or subpopulation emerged. Leftist traits became a permanent feature of our genome—in some at least. (pg 19-20)
So the argument goes: differences between rightists and leftists show us that the two did not evolve at the same points in time, since they show different traits today. Different traits were adaptive at different points in time, some more archaic, some more modern. Since Trump Island came first in our evolutionary history, those whose ancestors evolved there show more archaic behavior. Since Obama Island came later, those whose ancestors evolved there show newer, more modern behaviors. Due to environmental uncertainty, those on Obama Island had to cooperate with each other. The trait differences between these two subpopulations were selected for in the environments they evolved in, which is why they are different today. This, Ryan says, led to the “arguing over the future direction of our species. This is the origin of human politics” (pg 20).
Models of evolution
Ryan then discusses four models of evolution: (1) the standard model, where “natural selection” is the main driver of evolutionary change; (2) epigenetic models like Jablonka and Lamb’s (2005) in Evolution in Four Dimensions; (3) behavioral models, where behavioral changes alter genes; and (4) plasticity models, where phenotypic plasticity gives the organism a way to respond to sudden environmental changes. “Leftists and rightists”, writes Ryan, “are distinguished by their own versions of phenotypic plasticity. They change behavior more readily than rightists in response to changing environmental signals” (pg 29-30).
In perhaps the most outlandish part of the book, Ryan articulates one of my now-favorite just-so stories. The passage is worth quoting in full:
Our direct ancestor Homo erectus endured for two million years before going extinct 400,000 years ago when earth temperatures dropped far below the norm. Descendants of erectus survived till as recently as 14,000 years ago in Asia. The round head and shovel-shaped teeth of some Asians, including Vladimir Putin, are an erectile legacy. Archeologists believe erectus was a mix of Ted Bundy and Adolf Hitler. Surviving skulls point to a life of constant violence and routine killing. Erectile skulls are thick like a turtle’s, and the brows are ridged for protection from potentially fatal blows. Erectus’ life was precarious and violent. To survive, it had to evolve traits such as vigilant fearfulness, prejudice against outsiders, bonding with kin allies, callousness toward victims, and a penchant for inflexible habits of life that were known to guarantee safety. It had to be conservative. 34 Archeologists suggest that some of our most characteristic conservative emotions such as nationalism and xenophobia were forged at the time of Homo erectus. 35 (pg 33-34)
It is clear that Ryan is arguing that rightists have more erectus-like traits whereas leftists have more modern, Sapiens traits: “The contemporary coexistence of a population with more ‘modern’ traits and a population with more ‘archaic’ traits came into being” (pg 37). He is implicitly assuming that the two “populations” he discusses are natural kinds, and with his “modern”/“archaic” distinction (see Crisp and Cook 2005, who argue against a form of this distinction) he is also implying that there is a sort of “progress” to evolution.
Twin studies, it is claimed, show “one’s genetically informed psychological disposition” (Hatemi et al, 2014); they “suggest that leftists and rightists are born not made” while a so-called “consensus has emerged amongst scientists: political behavior is genetically controlled and heritable” (pg 43). But Beckwith and Morris (2008), Charney (2008), and Joseph (2009; 2013) argue that twin studies can do no such thing due to the violation of the equal environments assumption (EEA) (Joseph, 2014; Joseph et al, 2015). Thus, Ryan’s claims about the “genetic origins” of political behavior rest on studies that can neither prove nor disprove “genetic causation” (Shultziner, 2017)—and since the EEA is false, we must discount “genetic causation” for psychological traits, not least because it is impossible for genes to cause/influence psychological traits (see argument (iii)).
The arguments he provides are a form of inference to the best explanation (IBE) (Smith, 2016). However, this is how just-so stories are created: the conclusion is already in mind, and the story is then crafted using “natural selection” to explain how a trait came to fixation and why it exists today. The whole book is full of such adaptive stories: we have the traits we do, in the distributions we find them in across the “populations”, because they were adaptive at a certain point in our evolutionary history, which led to the individuals with those traits passing on more of their genes, eventually leading to trait fixation (see Fodor and Piattelli-Palmarini, 2010).
Ryan makes outlandish claims such as “Rightists are more likely than leftists to keep their desks neat. If in the distant past you knew exactly where the weapons were, you could find them quickly and react to danger more effectively. 26” (pg 45). He talks about how “time-consuming and effort-demanding accuracy of perception [were] more characteristic of leftist cognition … leftist cognition is more reflective” while “rightist cognition is intuitive rather than reflective” (pg 47). Rightists being more likely to endorse the status quo, he claims, is “an adaptive trait when scarce resources made energy management essential to getting by” (pg 48). Rightist language, he argues, uses more nouns since they are “more concrete, and anxious personalities prefer concrete to abstract language because it favors categorial rigidity and guarantees greater certainty”, while leftists “use words that suggest anxiety, anger, threats, certainty, resistance to change, power, security, and conformity” (pg 49). There is “a connection between archaic physiology and rightist moral ideology” (pg 52). Certain traits that leftists have were “adaptive traits [that] were suited to later stage human evolution” (pg 53). Ryan simply cites studies that show differences between rightists and leftists and then, with great leaps and mental gymnastics, molds the findings into products of evolution in the two different time periods he describes in chapter 1 (Trump Island and Obama Island).
I have not read one page in this book that does not contain some kind of adaptive just-so story attempting to explain certain traits/behaviors of rightists and leftists in evolutionary terms. Ryan uses the same kind of “reasoning” that Evolutionary Psychologists use—have your conclusion in mind first and then craft an adaptive story to explain why the traits you see today are there. Ryan outright says that “[t]raits are the result of adaptation to the environment” (pg 17), which is a rare, strongly adaptationist claim to make.
His book ticks all of the usual EP boxes: strong adaptationism, just-so storytelling, and the claim that traits were selected-for due to their contribution to survival in certain environments at different points in time. The strong adaptationist claims appear, for example, where he says that erectus’ large brows “are ridged for protection from potentially fatal blows” (pg 34). Such claims imply that Ryan believes all traits are the result of adaptation and that they are still here today because they all served a function in our evolutionary past. His arguments are, for the most part, all evolutionary and follow the same patterns as the usual EP arguments (see Smith, 2016 for an explication of just-so stories and what constitutes them). Due to the problems with evolutionary psychology, his adaptive claims should be ignored.
The arguments that Ryan provides are not scientific and, although they give off a veneer of science by invoking “natural selection” and adaptationism, they are anything but. The book is just a long-winded explanation of how and why rightists and leftists—liberals and conservatives—are different and why they cannot change, since these differences are “encoded” into our genome. The implicit claim of the book, then, that rightists and leftists are two different—natural—kinds, rests on the false foundation of EP and, therefore, the arguments provided in the book will fail to sway anyone who does not already believe such fantastic storytelling masquerading as science. While he does discuss other evolutionary theories, such as epigenetic ones from Jablonka and Lamb (2005), the book is largely strongly adaptationist, using “natural selection” to explain why we still have the traits we do in different “populations” today.
Discussions about whiteness and privilege have become more and more common. Whites, it is argued, have a form of unearned societal privilege which then explains certain gaps between whites and non-whites. White privilege is the privilege that whites have in society—this type of privilege does not have to be in America; it can hold for groups that are viewed as ‘white’ in other countries. This view presupposes a social conception of race: its proponents are realists about race in a social/political context and do not have to recognize race as biological (although race can become biologicized through social/cultural practices). This article will discuss (1) what white privilege is; (2) who has white privilege; (3) arguments against white privilege; and (4) if race doesn’t exist, why white privilege matters.
What is white privilege?
The concept of white privilege, like most concepts, evolves with the times and current social thought. The concept was originally created to account for whites’ (unearned) privileges and the conscious bias that went into creating and then maintaining those privileges; it has since expanded to cover the unconscious favoritism/psychological advantages that whites give other whites (Bennett, 2012: 75). That is, white privilege is “an invisible package of unearned assets that I can count on cashing in each day, but about which I was ‘meant’ to remain oblivious. White privilege is like an invisible weightless knapsack of special provisions, maps, passports, codebooks, visas, clothes, tools, and blank checks” (McIntosh, 1988).
More simply, we can say that white privilege is the privilege conferred, either consciously or subconsciously, on one based on their skin color—or, as Sullivan (2016, 2019) argues, their class status along with their whiteness: ‘white privilege’ with ‘class’ in between ‘white’ and ‘privilege’. In this sense, one’s class status and their whiteness together are explanatory, not the concept of whiteness (i.e., their socialrace) alone. Whiteness—one’s skin color—taken as the whole of the privilege leaves out numerous intricacies in how whiteness confers and upholds systemic advantage. When we add the concept of ‘class’ into ‘white privilege’ we get what Sullivan terms ‘white class privilege’.
While one’s race is an important variable in whether or not they have certain privileges, such privileges are largely held by middle- to upper-middle-class whites. Thus, numerous examples of ‘white privilege’ are better understood as examples of ‘white class privilege’, since lower-class whites do not have the same kinds of privileges, outlooks, and social status as middle- and upper-middle-class whites. Of course, lower-class whites can still benefit from their whiteness—they definitely can. But the force of Sullivan’s concept of ‘white class privilege’ is this: white privilege is not monolithic among whites, and some non-whites are better-off (economically and in regard to health) than some whites. Thus, according to Sullivan, ‘white privilege’ should be amended to ‘white class privilege’.
Who has white privilege?
Lower-class whites can, in a way, be treated differently than middle- and upper-class whites—even though they are of the same race. Lower-class whites are seen to have ‘white privilege’ in everyday thought, since most think of the privilege as down to skin color alone; yet there is an unspoken class dimension at play here, one which can even give higher-class blacks an advantage over lower-class whites while upholding the privilege of upper-class whites.
Non-whites who are of a higher social class than whites would also receive different treatment. Sullivan states that the revised concept of ‘white class privilege’ must be used intersectionally—that is, privilege must be considered as interacting with class, gender, nationality, and other social experiences. Sure, lower-class whites may be treated differently than higher-class blacks in certain contexts, but this does not mean that the lower-class white has ‘more privilege’ than the upper-class black. This shows that we should not assume that lower-class whites have the same kinds of privilege conferred by society as middle- and upper-class whites. Upper-class blacks and ‘Hispanics’ may attempt to distinguish themselves from lower-class blacks and ‘Hispanics’, as Sullivan (2019: 18-19) explains:
Class privilege shows up as a feature of most if not all racial groups in which members with “more”—more money, education, or whatever else is valued in society—are treated better than those with “less.” For that reason, we might think that white class privilege actually is an intragroup pattern of advantage and disadvantage among whites, rather than an intergroup pattern that gives white people a leg up over non-white people. After all, many Black middle-class and upper-middle-class Americans also go to great lengths to make sure that they are not mistaken for the Black poor in public spaces: when they are shopping, working, walking, or driving in town, and so on (Lacy, 2007). A similar pattern can be found with middle-to-upper-class Hispanic/Latinx people in the United States, who can “protect” themselves from being seen as illegal immigrants by ensuring that they are not identified as poor (Masuoka and Junn, 2013).
Sullivan then goes on to state that these situations are not equivalent, since wealth, fame, and education do not protect upper-class blacks from racial discrimination. The certain privileges that upper-class whites have thus do not transfer to upper-class blacks. Further, middle- to upper-class whites distinguish themselves as ‘good whites’ who are not racist, while dumping the accusations of racism on lower-class whites: “…the line between ‘good’ and ‘bad’ white people drawn by many (good) white people is heavily classed. Good white people tend to be middle-to-upper-class, and they often dump responsibility for racism onto lower-class white people” (Sullivan, 2019: 35). Even though lower-class whites get used as a ‘shield’, so to speak, by upper-class whites, they still have some semblance of white privilege, in that they are not assumed to be non-citizens of the US—something that ‘Hispanics’ do have to deal with (no matter their race).
While wealthy white people generally have more affordances than poor white people do, in a society that prizes whiteness all white people have some racial affordances, at least some of the time.
Paradoxically, whites are not the only ones who benefit from ‘white privilege’—even non-whites can benefit, though it ultimately helps upper-class whites. Non-whites can benefit by being brought up in a white home, around whites (for example, by being adopted, or by having one white parent and spending most of their childhood with their white family). Thus, white privilege can cross racial lines while still benefitting whites.
Sullivan (2019: chapter 2) discusses some blacks who benefit from white privilege. One of the people she discusses has a white parent. This is what gives her her lighter skin, but that is not where her privilege comes from (think colorism in the black community where lighter skin is more prized than darker skin). Her privilege came from “her implicit knowledge of white norms, sensibilities, and ways of doing things that came from living with and being accepted by white family members” (Sullivan, 2019: 26). This is what Sullivan calls “family familiarity” and is one of the ways that blacks can benefit from white privilege. Another way in which blacks can benefit from white privilege is due to “ancestral ties to whiteness.”
Colorism is the discrimination within the black community by skin color. Certain blacks may talk about “light-” and “dark-skinned” blacks and they may—ironically or not—discriminate on the basis of skin color. Such colorism is even somewhat instilled in the black community—where darker-skinned black sons and lighter-skinned black daughters report higher-quality parenting. Landor et al (2014) report that their “findings provide evidence that parents may have internalized this gendered colorism and as a result, either consciously or unconsciously, display higher quality of parenting to their lighter skin daughters and darker skin sons.” Thus, even certain blacks—in virtue of being ‘part white’—would benefit from white (skin) privilege within their own (black) community, which would therefore give them certain advantages.
Arguments against white privilege
Two recent articles with arguments against white privilege (Why White Privilege Is Wrong — Quillette and The Fallacy of White Privilege — and How It Is Corroding Society) erroneously argue that since other minority groups rose quickly upon arrival in America, white privilege is therefore a myth. These takes, though, are quite confused. It does not follow from the fact that other groups have risen upon entry into America, or from the fact that whites have worse outcomes on some—and not other—health measures, that the concept of white privilege is ‘fallacious’; we just need something more fine-grained.
For example, the claim that X minority group is over-represented compared to whites in America gets used as evidence that ‘white privilege’ does not exist (e.g., Avora’s article). Avora discusses the experiences and data of many black immigrants, proclaiming:
These facts challenge the prevailing progressive notion that America’s institutions are built to universally favor whites and “oppress” minorities or blacks. On the whole, whatever “systemic racism” exists appears to be incredibly ineffectual, or even nonexistent, given the multitude of groups who consistently eclipse whites.
How does that follow? How does the discussion of, for example, Japanese Americans now outperforming whites show that white privilege is a ‘fallacy’? I ask because Asian immigrants to America are hyper-selected (Noam, 2014; Zhou and Lee, 2017): what explains higher Asian academic achievement is academic effort (Hsin and Xie, 2014) and the fact that Asians are hyper-selected—meaning that they have a higher chance of holding an advanced degree than the populations they derive from.
The educational credentials of these recent [Asian] arrivals are striking. More than six-in-ten (61%) adults ages 25 to 64 who have come from Asia in recent years have at least a bachelor’s degree. This is double the share among recent non-Asian arrivals, and almost surely makes the recent Asian arrivals the most highly educated cohort of immigrants in U.S. history.
Compared with the educational attainment of the population in their country of origin, recent Asian immigrants also stand out as a select group. For example, about 27% of adults ages 25 to 64 in South Korea and 25% in Japan have a bachelor’s degree or more. In contrast, nearly 70% of comparably aged recent immigrants from these two countries have at least a bachelor’s degree. (The Rise of Asian Americans)
Avora even discusses some African immigrants, namely Nigerians and Ghanaians. However, just like Asian immigrants to America, Nigerian and Ghanaian immigrants to America are more likely to hold advanced degrees, signifying that they too are hyper-selected in comparison to the populations they derive from (Duvivier, Burch, and Boulet, 2017). So, to go along with the stats that Avora cites on the children of Nigerian immigrants: their parents already held higher degrees, signifying that they are indeed a hyper-selected group. This means that such ethnic groups cannot be used as evidence against white privilege.
While Avora does discuss “class” in his article, what he actually shows is that it is not ‘white privilege’ alone, but the class element that comes along with whiteness in America. He thereby unknowingly shows that once you add the ‘class’ factor and form the concept of ‘white class privilege’, this privilege can cross racial lines and benefit non-whites.
In the Quillette article, Harinam and Henderson argue that since non-whites have more of some things we call ‘good’ than whites do, the concept of ‘white privilege’ cannot explain the existence of disparities between ethnic groups in the US—some bad things happen to whites and some good things happen to non-whites. But this is an oversimplification. The whites that do receive privileges over other ethnic/racial groups do so not merely in virtue of their (white) skin privilege, but in virtue of their class privilege. This can be seen in the citations above on class being the explanatory variable in Asian academic success (showing how class values get reproduced in the new country, which then explains the academic success of Asians in America).
Both of these articles assume that showing some minority groups in America have more ‘good’ things than whites, or better outcomes on bad things (like suicide), undermines white privilege—but this misses the point. That whites kill themselves at higher rates than other American ethnic groups does not mean that whites do not have privilege in America compared to other groups.
If race doesn’t exist, then why does white privilege matter?
Lastly, those who argue against the concept of white privilege may say that its proponents often also deny that race—and therefore whites—exist, so, in effect, what are they talking about if ‘whites’ don’t exist because race does not exist? This is a confused objection. One can reject the claims of biological racial realists and still believe that race exists as a socially constructed reality. Thus, one can reject the claim that there is a ‘biological’ European race while accepting the claim that there is an ever-changing ‘white’ race, in which groups get added or subtracted based on current social thought (e.g., the Irish, Italians, Jews), changing with how society views certain groups.
It is perfectly possible for race to exist socially and not biologically. The social creation of races slots the arbitrarily-created racial groups into certain positions in a hierarchy of races. Roberts (2011: 15) states that “Race is not a biological category that is politically charged. It is a political category that has been disguised as a biological one.” She argues that we are not biologically separated into races; we are politically separated into them, signifying that race is a political construct. Most people believe that the claim “Race is a social construct” means that “Race does not exist.” But that does not follow. The social constructivist just believes that society divides people into races on the basis of how we look (i.e., how we are born): society takes the phenotype and creates races out of differences which then correlate with certain continents.
So, there is no contradiction in the claim that “Race does not exist” and the claim that “Whites have certain unearned privileges over other groups.” Being an antirealist about biological race does not mean that one is an antirealist about socialraces. Thus, one can believe that whites have certain privileges over other groups, all the while being antirealists about biological races (saying that “Races don’t exist biologically”).
In this article I have explained what white privilege is and who has it. I have also discussed arguments against white privilege, and the claim that those who argue against race are hypocrites since they still talk about “whites” while claiming that race does not exist. After showing the conceptual confusions that people have about white privilege, along with the fact that the groups that do better than whites in America (the groups that supposedly show that white privilege is “a fallacy”) are hyper-selected, I then forwarded Sullivan’s (2016, 2019) argument for white class privilege. This shows that whiteness is not the sole reason why middle- and upper-class whites prosper—their whiteness along with their middle-to-upper-middle-class status explains why they prosper. It furthermore shows that while lower-class whites do have some sort of white privilege, they do not have all of the affordances of white privilege, due to their class status. Blacks, too, can benefit from white privilege, whether due to their proximity to whiteness or their ancestral heritage.
White privilege does exist, but to fully understand it, we must add in the nexus of class with it.
1. If differences in mental abilities are inherited, and
2. if success requires those abilities, and
3. if earnings and prestige depend on success,
4. then social standing will be based to some extent on inherited differences among people. (Herrnstein, 1971)
Richard Herrnstein’s article I.Q. in The Atlantic (Herrnstein, 1971) caused much controversy (Herrnstein and Murray, 1994: 10). Herrnstein’s syllogism argued that, as environments become more similar, if differences in mental abilities are inherited, if success in life requires those abilities, and if earnings and prestige depend on success, then social standing will be based “to some extent on inherited differences among people.” Herrnstein does not say so outright in the syllogism, but he is quite obviously talking about genetic inheritance. One can, however, read the syllogism through an environmental lens, as I will show. Lastly, Herrnstein’s syllogism crumbles since social class predicts success in life even when IQ is equated. Since family background and schooling explain the IQ-income relationship (income being a measure of success), Herrnstein’s argument fails.
Note that Herrnstein came to measurement due to being a student of William Sheldon’s somatotyping. “Somatotyping lured the impressionable and young Herrnstein into a world promising precision and human predictability based on the measurement of body parts” (Hilliard, 2012: 22).
- If differences in mental abilities are inherited
Premise 1 is simple: “If differences in mental ability are inherited …” Herrnstein is obviously talking about genetic transmission, but we can look at this through a cultural/environmental lens. For example, Berg and Belmont (1990) showed that Jewish children of different socio-cultural backgrounds had different patterns of mental abilities, clustered by socio-cultural group (all Jewish), showing that mental abilities are, in large part, culturally derived. Another objection is that since there are no laws linking psychological/mental states with physical states (the mental is irreducible to the physical, meaning that mental states cannot be transmitted through (physical) genes), the genetic transmission of psychological/mental traits is impossible. In any case, one can appeal to the cultural transmission of mental abilities, disregard the genetic transmission of psychological traits, and the argument fails.
We can accept all of the premises of Herrnstein’s syllogism and argue an environmental case, in fact (bracketed words are my additions):
1. If differences in mental abilities are [environmentally] inherited, and
2. if success requires those [environmentally inherited] abilities, and
3. if earnings and prestige depend on [environmentally inherited] success,
4. then social standing will be based to some extent on [environmentally] inherited differences among people.
The syllogism hardly changes, but my additions change what Herrnstein was arguing for—environmental, not genetic differences cause success and along with it social standing among groups of people.
The Bell Curve (Herrnstein and Murray, 1994) can, in fact, be seen as an at-length attempt to prove the validity of the syllogism in an empirical manner. Herrnstein and Murray (1994: 105, 108-110) have a full discussion of the syllogism. “As stated, the syllogism is not fearsome” (Herrnstein and Murray, 1994: 105). They go on to state that if intelligence (IQ scores, AFQT scores) is only a bit influenced by genes, and if success is only a bit influenced by intelligence, then only a small amount of success is inherited (genetically). Note that their measure of “IQ” is the AFQT—which is a measure of acculturated learning, measuring school achievement (Roberts et al, 2000; Cascio and Lewis, 2005).
“How much is IQ a matter of genes?“, Herrnstein and Murray ask. They then discuss the heritability of IQ, relying, of course, on twin studies. They claim that the heritability of IQ is .6 based on the results of many twin studies. But the fatal flaw with twin studies is that the EEA is false and, therefore, genetic conclusions should be dismissed outright (Burt and Simons, 2014, 2015; Joseph, 2015; Joseph et al, 2015; Fosse, Joseph, and Richardson, 2015; Moore and Shenk, 2016). Herrnstein (1971) also discusses twin studies in the context of heritability, attempting to buttress his argument. But if the main vehicle used to show that “intelligence” (whatever that is) is heritable is twin studies, why, then, should we accept the conclusions of twin research if the assumptions that make the foundation of the field are false?
Murray has claimed: “When I – when we – say 60 percent heritability, it’s not 60 percent of the variation. It is 60 percent of the IQ in any given person.” Later, he repeated that for the average person, “60 percent of the intelligence comes from heredity” and added that this was true of the “human species,” missing the point that heritability makes no sense for an individual and that heritability statistics are population-relative.
So Murray used the flawed concept of heritability in the wrong way—hilarious.
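Murray’s misstatement is easy to make concrete. Below is a minimal sketch (in Python, with invented variance figures, not real IQ data) of what a heritability coefficient actually is: a ratio of variances across a population, which has no meaning for a single person.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy population: phenotype = genetic value + independent environmental
# deviation. The variances (0.6 and 0.4) are made up for illustration.
n = 100_000
genetic = rng.normal(0.0, np.sqrt(0.6), n)
environment = rng.normal(0.0, np.sqrt(0.4), n)
phenotype = genetic + environment

# Heritability is a *population* statistic: the share of phenotypic
# variance associated with genetic variance in this group.
h2 = genetic.var() / phenotype.var()
print(round(h2, 2))  # close to 0.6 by construction

# For one individual there is no variance to partition, so a statement
# like "60 percent of this person's IQ comes from heredity" is undefined.
```

The ratio only exists because a population of differing individuals exists; change the population (or equalize its environments) and the number changes with it.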
So the main point of Herrnstein’s argument is that as environments become more uniform for everyone, the power of heredity will shine through, since the environment would then be the same for everyone. But even if we could make the environment “the same,” what would that even mean? How is my environment the same as yours, even with identical surroundings, if I react to or perceive the same thing differently than you do? The subjectivity of the mental undercuts the claim that environments can be made “more uniform.” Herrnstein claimed that if no variance in environment exists, then the only thing left to influence success is heredity. That much is not wrong, but how would it be possible to equalize environments? Are we supposed to start from square one, with the rich and powerful giving up their wealth and status to “equalize environments”? Even then, according to Herrnstein and the ‘meritocracy’, those whose earnings and prestige depended on success, which in turn depended on inherited mental abilities, would still float to the top.
But what happens when both social class and IQ are equated? What then predicts life success? Stephen Ceci reanalyzed the data on Terman’s “Termites” (the name coined for the study’s subjects) and found something quite different from what Terman had assumed. There were three groups in Terman’s study—groups A, B, and C. Groups A and C comprised the top and bottom 20 percent of the full sample in terms of life success. At the start of the study, all of the children “were about equal in IQ, elementary school grades, and home evaluations” (Ceci, 1996: 82). Depending on the test used, the IQs of the children ranged between 142 and 155, which then decreased by ten points at the second wave due to regression to the mean and measurement error. So although groups A and C had equivalent IQs, they had starkly different life outcomes. (Group B comprised the middle 60 percent of the sample and enjoyed mediocre life success.)
Ninety-nine percent of the men in the group that had the best professional and personal accomplishments, i.e., group A were individuals who came from professional or business-managerial families that were well educated and wealthy. In contrast, only 17% of the children from group C came from professional and business families, and even these tended to be poorer and less well educated than their group A peers. The men in the two groups present a contrast on all social indicators that were assessed: group A individuals preferred to play tennis, while group C men preferred to watch football and baseball; as children, the group A men were more likely to collect stamps, shells, and coins than were the group C men. Not only were the fathers of the group A men better educated than those of group C, but so were their grandfathers. In short, even though the men in group C had equivalent IQs to group A, they did not have equivalent social status. Thus, when IQ is equated and social class is not, it is the latter that seems to be deterministic of professional success. Therefore, Terman’s findings, far from demonstrating that high IQ is associated with real-world success, show that the relationship is more complex and that the social status of these so-called geniuses’ families had a “long reach,” influencing their personal and professional achievements throughout their adult lives. Thus, the title of Terman’s volumes, Genetic Studies of Genius, appears to have begged the question of the causation of genius. (Ceci, 1996: 82-83)
Ceci used the Project Talent dataset to analyze the impact of IQ on occupational success. This study, unlike Terman’s, looked at a nationally representative sample of 400,000 high-school students “with both intellectual aptitude and parental social class spanning the entire range of the population” (Ceci, 1996: 85). The students were interviewed in 1960, then about 4,000 were again interviewed in 1974. “For all practical purposes, this subgroup of 4,000 adults represents a stratified national sample of persons in their early 30s” (Ceci, 1996: 86). So Ceci and his co-author, Henderson, ran several regression analyses that involved years of schooling, family and social background and a composite score of intellectual ability based on reasoning, math, and vocabulary. They excluded those who were not working at the time due to being imprisoned, being housewives or still being in school. This then left them with a sample of 2,081 for the analysis.
They looked at IQ as a predictor of variance in adult income in one analysis, which then showed an impact for IQ. “However, when we entered parental social status and years of schooling completed as additional covariates (where parental social status was a standardized score, mean of 100, SD = 10, based on a large number of items having to do with parental income, housing costs, etc.—ranging from low of 58 to high of 135), the effects of IQ as a predictor were totally eliminated” (Ceci, 1996: 86). Social class and education were very strong predictors of adult income. So “this illustrates that the relationship between IQ and adult income is illusory because the more completely specified statistical model demonstrates its lack of predictive power and the real predictive power of social and educational variables” (Ceci, 1996: 86).
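The logic behind Ceci and Henderson’s finding can be illustrated with a toy simulation. The sketch below (Python; the data-generating numbers are invented and are not Project Talent’s) shows how a test score can appear to predict income in a simple regression, yet have its coefficient collapse to near zero once parental social status and years of schooling are entered as covariates:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000

# Invented data-generating process: social class drives schooling,
# test scores, and income; the score itself has NO causal effect.
ses = rng.normal(100, 10, n)                             # parental status
school = 10 + 0.2 * (ses - 100) + rng.normal(0, 1, n)    # years of school
iq = 100 + 1.0 * (ses - 100) + rng.normal(0, 8, n)       # score tracks SES
income = 2_000 + 80 * (ses - 100) + 400 * school + rng.normal(0, 500, n)

def ols(predictors, y):
    """Ordinary least squares; returns [intercept, slopes...]."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    return np.linalg.lstsq(X, y, rcond=None)[0]

slope_simple = ols([iq], income)[1]              # income ~ IQ alone
slope_full = ols([iq, ses, school], income)[1]   # + SES and schooling

print(round(slope_simple, 1))  # sizable spurious "effect" of IQ
print(round(slope_full, 1))    # near zero once covariates are entered
```

This is exactly the pattern of a confounded predictor: the bivariate slope is real as a correlation, but it vanishes under the more completely specified model.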
They considered high, average, and low IQ groups of about equal size, examining the regressions of earnings on social class and education within each group.
Regressions were essentially homogeneous and, contrary to the claims by those working from a meritocratic perspective, the slope for the low IQ group was steepest (see Figure 4.1). There was no limitation imposed by low IQ on the beneficial effects of good social background on earnings and, if anything, there was a trend toward individuals with low IQ actually earning more than those with average IQ (p = .09). So it turns out that although both schooling and parental social class are powerful determinants of future success (which was also true in Terman’s data), IQ adds little to their influence in explaining adult earnings. (Ceci, 1996: 86)
The same was also true for the Project Talent participants who continued school. For each increment of school completed, there was also an effect on their earnings.
Individuals who were in the top quartile of “years of schooling completed” were about 10 times as likely to be receiving incomes in the top quartile of the sample as were those who were in the bottom quartile of “years of schooling completed.” But this relationship does not appear to be due to IQ mediating school attainment or income attainment, because the identical result is found even when IQ is statistically controlled. Interestingly, the groups with the lowest and highest IQs both earned slightly more than average-IQ students when the means were adjusted for social class and education (adjusted means at the modal value of social class and education = $9,094, $9,242, and $9,997 for low, average, and high IQ groups, whereas the unadjusted means at this same modal value = $9,972, $9,292, and $9,278 for the low, average, and high IQs.) (Perhaps the low IQ students were tracked into plumbing, cement finishing and other well-paying jobs and the high-IQ students were tracked into the professions, while average IQ students became lower paid teachers, social workers, ministers, etc.) Thus, it appears that the IQ-income relationship is really the result of schooling and family background, and not IQ. (Incidentally, this range in IQs from 70 to 130 and in SES from 58 to 135 covers over 95 percent of the entire population.) (Ceci, 1996: 87-88)
Ceci’s analysis is just like Bowles and Nelson’s (1974), in which they found that earnings in adulthood were more influenced by social status and schooling than by IQ. Bowles and Nelson (1974: 48) write:
Evidently, the genetic inheritance of IQ is not the mechanism which reproduces the structure of social status and economic privilege from generation to generation. Though our estimates provide no alternative explanation, they do suggest that an explanation of intergenerational immobility may well be found in aspects of family life related to socio-economic status and in the effects of socio-economic background operating both directly on economic success, and indirectly via the medium of inequalities in educational attainments.
(Note how this also refutes claims from PumpkinPerson that IQ explains income—clearly, as was shown, family background and schooling explain the IQ-income relationship, not IQ. So the “incredible correlation between IQ and income” is not due to IQ, it is due to environmental factors such as schooling and family background.)
Herrnstein’s syllogism—along with The Bell Curve (an attempt to prove the syllogism)—is therefore refuted. Since social class/family background and schooling explains the IQ-income relationship and not IQ, then Herrnstein’s syllogism crumbles. It was a main premise of The Bell Curve that society is becoming increasingly genetically stratified, with a “cognitive elite”. But Conley and Domingue (2015: 520) found “little evidence for the proposition that we are becoming increasingly genetically stratified.”
IQ testing legitimizes social hierarchies (Chomsky, 1972; Roberts, 2015) and, in Herrnstein’s case, attempted to show that social hierarchies are an inevitability due to the genetic transmission of mental abilities that influence success and income. Such research cannot be socially neutral (Roberts, 2015) and so, this is yet another reason to ban IQ tests, as I have argued. IQ tests are a measure of social class (Ceci, 1996; Richardson, 2002, 2017), and such tests were created to justify existing social hierarchies (Mensh and Mensh, 1991).
Thus, the very purpose of IQ tests was to confirm the current social order as naturally proper. Intelligence tests were not misused to support hereditary theories of social hierarchies; they were perfected in order to support them. The IQ supplied an essential difference among human beings that deliberately reflected racial and class stratifications in order to justify them as natural.9 Research on the genetics of intelligence was far from socially neutral when the very purpose of theorizing the heritability of intelligence was to confirm an unequal social order. (Roberts, 2015: S51)
Herrnstein’s syllogism seems sound, but in actuality it is not. Herrnstein was implying that genes cause mental abilities and then, eventually, success and prestige. But one can read Herrnstein’s syllogism from an environmentalist point of view (do note that the hereditarian/environmentalist debate is futile and perpetuates the claim that IQ tests test ‘intelligence’, whatever that is). When subjects were matched for IQ—in regard to Terman’s Termites—family background and schooling explained the IQ-income relationship. Ceci’s (1996) further analyses of the Project Talent data, consistent with Bowles and Nelson (1974), showed again that social class and schooling, not IQ, explain the IQ-income relationship.
The conclusion of Herrnstein’s argument can, as I’ve already shown, be an environmental one, working through cultural, not genetic, transmission. Such arguments hold that IQ is ‘genetic’ and that, therefore, certain individuals and groups will tend to stay in their social class; as Pinker (2002: 106) states: “Smarter people will tend to float into the higher strata, and their children will tend to stay there.” This, as has been shown, is due to social class, not ‘smarts’ (scores on an IQ test). In any case, this is yet another reason why IQ tests and the research behind them should be banned: IQ tests attempt to justify the current social order as ‘inevitable’ due to genes that influence mental abilities. This claim, though, is false and, therefore, along with the fact that America is not becoming more genetically stratified (Conley and Domingue, 2015), Herrnstein’s syllogism crumbles. The argument attempts to justify the claim that class has a ‘genetic’ component (as Murray, 2020, attempts to show), but subsequent analyses and arguments have shown that Herrnstein’s argument does not hold.
Validity for IQ tests is fleeting. IQ tests are said to be “validated” on the basis of correlations with other IQ tests and with job performance (see Richardson and Norgate, 2015). Further, IQ tests are claimed not to be biased against any social class or racial group. Moreover, through the process of “item selection”, test constructors produce the type of distribution they want (normal) and get the results they want through the subjective procedure of removing items that don’t agree with their preconceived notions of who is or is not “intelligent.” Lastly, “intelligence” is a descriptive measure, not an explanatory concept, and treating it as an explanatory one can—and does—lead to circularity (which is rife in the subject of IQ testing; see Richardson, 2017b and Taleb’s article IQ is largely a pseudoscientific swindle). This article will show that, on the basis of test construction, item analysis (the selection and deselection of items), and the fact that there is no theory of what is being measured, so-called intelligence tests do not test what they purport to.
Richardson (1991: 17) states that “To measure is to give … a more reliable sense of quantity than our senses alone can provide”, and that “sensed intelligence is not an objective quantity in the sense that the same hotness of a body will be felt by the same humans everywhere (given a few simple conditions); what, in experience, we choose to call ‘more’ intelligence, and what ‘less’, is a social judgement that varies from people to people, employing different criteria or signals.” Richardson (1991: 17-18) goes on to say that:
Even if we arrive at a reliable instrument to parallel the experience of our senses, we can claim no more for it than that, without any underlying theory which relates differences in the measure to differences in some other, unobserved, phenomena responsible for those differences. Without such a theory we can never be sure that differences in the measure that correspond with our sensed intelligence aren’t due to something else, perhaps something completely different. The phenomenon we at first imagine may not even exist. Instead of such verification, most inventors and users of measures of intelligence … have simply constructed the source of differences in sensed intelligence as an underlying entity or force, rather in the way that children and naïve adults perceive hotness as a substance, or attribute the motion of objects to a fictitious impetus. What we have in cases like temperature, of course, are collateral criteria and measures that validate the theory, and thus the original measures. Without these, the assumed entity remains a fiction. This proved to be the case with impetus, and with many other naïve conceptions of nature, such as phlogiston (thought to account for differences in health and disease). How much greater such fictions are likely to be with unobserved, dynamic and socially judged concepts like intelligence.
Richardson (1991: 32-35) then goes on to critique many of the old IQ tests, in that they had no way of being construct valid, and that the manuals did not even discuss the validity of the test—it was just assumed.
If we do not know what exactly is being measured when test constructors make and administer these tests, then how can we logically state that “IQ tests test intelligence”? Even Arthur Jensen admitted that psychometricians can create any type of distribution they please (1980: 71); he tacitly admits that tests are devised through the selection and deselection of items that correspond to the test constructors’ preconceived notions of what “intelligence” is. This, again, is admitted by Jensen (1980: 147-148), who writes “The items must simply emerge arbitrarily from the heads of test constructors.”
To build on Richardson’s temperature example: we know exactly what is being measured when we look at the amount of mercury in a thermometer. That is, the concept of “temperature” and the instrument to measure it (the thermometer) were verified independently, without circular reliance on the thermometer itself (see Hasok Chang’s 2004 book Inventing Temperature). IQ tests, on the other hand, are supposedly “validated” through measures of job performance and correlations with other, previous tests assumed to be (construct) valid—but those earlier tests were themselves just assumed to be valid; their validity was never shown.
Another example (besides IQ, as I’ve shown many times) of a psychological construct that is not valid is ASD (autism spectrum disorder). Waterhouse, London, and Gillberg (2016) review “14 groups of findings … that together argue that ASD lacks neurobiological and construct validity,” writing that “No unitary ASD brain impairment or replicated unitary model of ASD brain impairment exists.” That a construct is valid—that it tests what it purports to—is of utmost importance in test measurement. Without validity, we don’t know whether we are measuring something completely different from what we hope—or purport—to measure.
There is another problem: for one of the most-used IQ tests there is no underlying theory of item selection, as seen in John Raven’s personal notes (see Carpenter, Just, and Shell, 1990). Items on the Raven were selected based on Raven’s intuition, not on any formal theory—and the same can be said, of course, about modern-day IQ tests. Carpenter, Just, and Shell (1990) write that John Raven “used his intuition and clinical experience to rank order the difficulty of the six problem types . . . without regard to any underlying processing theory.”
These preconceived notions of what “intelligence” is fail without (1) a theory of what intelligence is (as Ian Deary (2001) admits, there is no theory of human intelligence of the kind physics has); and (2) what is termed “construct validity”—evidence that a test measures what it purports to measure. There are a few kinds of validity, and what IQ-ists claim most often is that IQ tests have predictive validity—that is, that they can predict an individual’s life outcomes and job performance. However, “intelligence” is “a descriptive measure, not an explanatory concept … [so] measures of intelligence level have little or no predictive value” (Howe, 1988).
Howe (1997: ix) also tells us that “Intelligence is … an outcome … not a cause. … Even the most confidently stated assertions about intelligence are often wrong, and the inferences that people have drawn from those assertions are unjustified.”
The correlation between IQ and school performance, according to Richardson (1991: 34) “may be a necessary aspect of the validity of tests, but is not a sufficient one. Such evidence, as already mentioned, requires a clear connection between a theory (a model of intelligence), and the values on the measure.” But, as Richardson (2017: 85) notes:
… it should come as no surprise that performance on them [IQ tests] is associated with school performance. As Robert L. Thorndike and Elizabeth P. Hagen explained in their leading textbook, Educational and Psychological Measurement, “From the very way in which the tests were assembled [such correlation] could hardly be otherwise.”
Gottfredson (2009) claims that the construct validity argument against IQ is “fallacious”, listing it as one of her “fallacies” of intelligence testing (another of her “fallacies”, the “interactionism fallacy”, I have previously discussed). However, unfortunately for Gottfredson (2009), “the phenomena that testers aim to capture” are built into the test and, as noted here numerous times, preconceived by the constructors of the test. So Gottfredson’s (2009) claim fails.
Such construction, too, comes into the claim of a “normal distribution.” Just as with preconceptions of who is or is not “intelligent,” the normal distribution is an artifact of test construction, namely the selection and deselection of items to conform with the test constructors’ presuppositions; the “bell curve” of IQ is created by the presuppositions that the test constructors have about people and society (Simon, 1997).
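The claim that the bell curve is manufactured by item selection can be demonstrated directly. The sketch below (Python; a made-up logistic item-response rule, not any real test’s data) constructs two “tests” from one item pool: keeping items with pass rates near 50 percent yields a roughly symmetric, bell-shaped score distribution, while keeping only very easy items yields a skewed one.

```python
import numpy as np

rng = np.random.default_rng(2)
n_people, n_pool = 2_000, 300

# Made-up item pool: each examinee has a latent "preparedness" and each
# item a difficulty; P(correct) follows a simple logistic rule.
theta = rng.normal(0, 1, n_people)
difficulty = rng.uniform(-3, 3, n_pool)
p_correct = 1 / (1 + np.exp(-(theta[:, None] - difficulty[None, :])))
responses = rng.random((n_people, n_pool)) < p_correct

def skewness(x):
    z = (x - x.mean()) / x.std()
    return (z ** 3).mean()

# "Item analysis," test-constructor style: keep items with pass rates
# near 50% and the total-score distribution comes out bell-shaped.
pass_rate = responses.mean(axis=0)
balanced = responses[:, (pass_rate > 0.4) & (pass_rate < 0.6)]

# Keep only very easy items instead, and the distribution is skewed,
# piled up at the ceiling.
easy = responses[:, pass_rate > 0.9]

print(round(skewness(balanced.sum(axis=1)), 2))  # near 0: a "bell curve"
print(round(skewness(easy.sum(axis=1)), 2))      # clearly negative skew
```

The underlying examinees are identical in both cases; only the item selection differs, and with it the shape of the “curve.”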
Charles Spearman, in the early 1900s, claimed to have found a “general factor” that explains correlations between different tests. This positive manifold he termed “g” for “general intelligence.” Spearman stated “The (g) factor was taken, pending further information, to consist in something of the nature of an ‘energy’ or ‘power’…” (quoted in Richardson, 1991: 38). The refutation of “g” is a simple, logical one: while a correlation between performances “may be a necessary requirement for a general factor … it is not a sufficient one.” This is because “it is quite possible for quite independent factors to produce a hierarchy of correlations without the existence of any underlying ‘general’ factor (Fancher, 1985a; Richardson and Bynner, 1984)” (Richardson, 1991: 38). The fact of the matter is, Spearman’s “g” has been refuted for decades (it was shown to be reified by Gould (1981), and further defenses of the concept of “general intelligence,” like Jensen’s (1998), have been refuted, most forcefully by Peter Schonemann). In any case, “g” is something built into the test by way of test construction (Richardson, 2002).
Castles (2013: 93) notes that “Spearman did not simply discover g lurking in his data. Instead, he chose one peculiar interpretation of the relationships to demonstrate something in which he already believed—unitary, biologically based intelligence.”
So what explains differences in “g”? The same test construction noted above, along with differences in social class, acting through stress, self-confidence, test preparedness, and other factors correlated with social class, termed the “sociocognitive-affective nexus” (Richardson, 2002).
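The claim that independent factors can produce a hierarchy of correlations is an old one: Godfrey Thomson’s “sampling” (or “bonds”) model showed it formally in 1916. A minimal sketch of that model (Python, invented numbers) generates a positive manifold with no general factor anywhere in the data-generating process:

```python
import numpy as np

rng = np.random.default_rng(3)
n_people, n_bonds = 5_000, 100

# Thomson's "bonds" model: every person has many small, mutually
# independent abilities; no general factor exists by construction.
bonds = rng.normal(0, 1, (n_people, n_bonds))

# Each "test" samples a random, overlapping subset of the bonds.
scores = np.column_stack([
    bonds[:, rng.choice(n_bonds, size=40, replace=False)].sum(axis=1)
    for _ in range(6)
])

# Every pairwise correlation is positive (a "positive manifold"),
# purely because the tests share some bonds by chance.
r = np.corrcoef(scores, rowvar=False)
off_diag = r[np.triu_indices(6, k=1)]
print(off_diag.round(2))  # all positive, despite no underlying "g"
```

Factor-analyzing such scores would happily extract a “general factor,” which is exactly the point: a positive manifold is a necessary but not sufficient condition for “g.”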
Constance Hilliard, in her book Straightening the Bell Curve (Hilliard, 2012), notes that there were differences in IQ between rural and urban white South Africans. Differences between those who spoke Afrikaans and those who spoke another language were completely removed through test construction (Hilliard, 2012: 116). Hilliard (2012) notes that if the tests the constructors formulate don’t agree with their preconceived notions, they are thrown out:
If the individuals who were supposed to come out on top didn’t score highly or, conversely, if the individuals who were assumed would be at the bottom of the scores didn’t end up there, then the test designers scrapped the test.
Sex differences in “intelligence” (IQ) were the subject of considerable debate in the early-to-mid-1900s. Test constructors debated amongst themselves what to do about such differences between the sexes. Hilliard (2012) quotes Harrington (1984; in Perspectives on Bias in Mental Testing), who writes about normalizing test scores between men and women:
It was decided [by IQ test writers] a priori that the distribution of intelligence-test scores would be normal with a mean (X=100) and a standard deviation (SD=15), also that both sexes would have the same mean and distribution. To ensure the absence of sex differences, it was arranged to discard items on which the sexes differed. Then, if not enough items remained, when discarded items were reintroduced, they were balanced, i.e., for every item favoring males, another one favoring females was also introduced.
One who would construct a test for intellectual capacity has two possible methods of handling the problem of sex differences.
1 He may assume that all the sex differences yielded by his test items are about equally indicative of sex differences in native ability.
2 He may proceed on the hypothesis that large sex differences on items of the Binet type are likely to be factitious in the sense that they reflect sex differences in experience or training. To the extent that this assumption is valid, he will be justified in eliminating from his battery test items which yield large sex differences.
The authors of the New Revision have chosen the second of these alternatives and sought to avoid using test items showing large differences in percents passing. (McNemar 1942:56)
Change “sex differences” to “race” or “social class” differences and we could, likewise, change the distribution of the curve, along with notions of who is or is not “intelligent.” Previously low scorers can, by way of test construction, become high scorers, and vice versa. There is no logical—or empirical—justification for the inclusion of specific items on whatever IQ test is in question. That is, to put it another way, the inclusion of items on a test is subjective: it comes down to the test designers’ preconceived notions, not an objective measure of what types of items should be on the test. As Raven’s notes showed, there is no underlying theory for the inclusion of items in the test; it is based on “intuition” (which is the same thing that modern-day test constructors rely on). These two quotes from IQ-ists in the early 20th century are paramount in the attack on the validity of IQ tests—and on explanations of the causes of score differences between groups.
He and van de Vijver (2012: 7) write that “An item is biased when it has a different psychological meaning across cultures. More precisely, an item of a scale (e.g., measuring anxiety) is said to be biased if persons with the same trait, but coming from different cultures, are not equally likely to endorse the item (Van de Vijver & Leung, 1997).” Indeed, Reynolds and Suzuki (2012: 83) write that “Item bias due to“:
… “poor item translation, ambiguities in the original item, low familiarity/appropriateness of the item content in certain cultures, or influence of culture specifics such as nuisance factors or connotations associated with the item wording” (p. 127) (van de Vijver and Tanzer, 2004)
Drame and Ferguson (2017) note that their “Results indicate that use of the Ravens may substantially underestimate the intelligence of children in Mali” while the cause may be due to the fact that:
European and North American children may spend more time with play tasks such as jigsaw puzzles or connect the dots that have similarities with the Ravens and, thus, train on similar tasks more than do African children. If African children spend less time on similar tasks, they would have fewer opportunities to train for the Ravens (however unintentionally) reflecting in poorer scores. In this sense, verbal ability need not be the only pitfall in selecting culturally sensitive IQ testing approaches. Thus, differences in Ravens scores may be a cultural artifact rather than an indication of true intelligence differences. [Similar arguments can be found in Richardson, 2002: 291-293]
The same was also found by Dutton et al (2017) who write that “It is argued that the undeveloped nature of South Sudan means that a test based around shapes and analytic thinking is unsuitable. It is likely to heavily under-estimate their average intelligence.” So if the Raven has these problems cross-culturally (country), then it SHOULD have such biases within, say, America.
It is also true that the types of items on IQ tests are not as complex as everyday life (see Richardson and Norgate, 2014). Types of questions on IQ tests are, in effect, ones of middle-class knowledge and skills and, knowing how IQ tests are structured will make this claim clear (along with knowing the types of items that eventually make it onto the particular IQ test itself). Richardson (2002) has a few questions on modern-day IQ tests whereas Castles (2013), too, has a few questions from the Stanford-Binet. This, of course, is due to the social class of the test constructors. Some examples of some questions can be seen here:
‘What is the boiling point of water?’ ‘Who wrote Hamlet?’ ‘In what continent is Egypt?’ (Richardson, 2002: 289)
‘When anyone has offended you and asks you to excuse him—what ought you do?’ ‘What is the difference between esteem and affection?’ [this is from the Binet Scales, but “It is interesting to note that similar items are still found on most modern intelligence tests” (Castles, 2013).]]
Castles (2013: 150) further notes made-up examples of what is on the WAIS (since she cannot legally give questions away since she is a licensed psychologist), and she writes:
One section of the WAIS-III, for example, consists of arithmetic problems that the respondent must solve in his or her head. Others require test-takers to define a series of vocabulary words (many of which would be familiar only to skilled-readers), to answer school-related factual questions (e.g., “Who was the first president of the United States?” or “Who wrote the Canterbury Tales?”), and to recognize and endorse common cultural norms and values (e.g., “What should you do it a sale clerk accidentally gives you too much change?” or “Why does our Constitution call for division of powers?”). True, respondents are also given a few opportunities to solve novel problems (e.g., copying a series of abstract designs with colored blocks). But even these supposedly culture-fair items require an understanding of social conventions, familiarity with objects specific to American culture, and/or experience working with geometric shapes or symbols.
All of these factors coalesce into forming the claim—and the argument—that IQ tests are one of middle-class knowledge and skills. The thing is, contrary to the claims of IQ-ists, there is no such thing as a culture-free IQ test. Richardson (2002: 293) notes that “Since all human cognition takes place through the medium of cultural/psychological tools, the very idea of a culture-free test is, as Cole (1999) notes, ‘a contradiction in terms . . . by its very nature, IQ testing is culture bound’ (p. 646). Individuals are simply more or less prepared for dealing with the cognitive and linguistic structures built in to the particular items.”
Cole (1981) notes that “the notion of a culture free IQ test is an absurdity” because “all higher psychological processes are shaped by our experiences and these experiences are culturally organized” (a point that Richardson has driven home for decades) while also—rightly—stating that “IQ tests sample school activities, and therefore, indirectly, valued social activities, in our culture.”
One of the last stands of the IQ-ist is the claim that IQ tests are useful for identifying individuals at risk for learning disabilities (the purpose for which Binet originally created his tests). However, IQ tests are neither necessary nor sufficient for the identification of those with learning disabilities. Siegel (1989) states that “On logical and empirical grounds, IQ test scores are not necessary for the definition of learning disabilities.”
When Goddard brought the first IQ tests to America and translated them from French into English, the IQ testing conglomerate really took off (see Zenderland, 1998 for a review). These tests were used to justify existing social ranks. As Richardson (1991: 44) notes, “The measurement of intelligence in the twentieth century arose partly out of attempts to ‘prove’ or justify a particular world view, and partly for purposes of screening and social selection. It is hardly surprising that its subsequent fate has been one of uncertainty and controversy, nor that it has raised so many social and political issues (see, for example, Joynson 1989 for discussion of such issues).” So what need did the constructors of such tests have for actual validation in the 20th century? They knew full well what they wanted to show and, unsurprisingly, they observed it—since they constructed the tests to guarantee that outcome.
The conceptual arguments just given here point to a few things:
(1) IQ tests are not construct valid because there is no theory of intelligence, nor an underlying theory relating differences in IQ (the unseen function) to, for example, a physiological variable. (See Uttal, 2012; 2014 for arguments against fMRI studies that purport to tie cognition to physiological variables.)
(2) The fact that items on the tests are biased against certain classes/cultures; this obviously matters since, as noted above, there is no theory for the inclusion of items, it comes down to the subjective choice of the test designers, as noted by Jensen.
(3) ‘g’ is a reified mathematical abstraction; Spearman “discovered” nothing, he just chose the interpretation that, of course, went with his preconceived notion.
(4) The fact that sex differences in IQ scores were seen as a problem and, through item analysis, made to go away. This tells us that we can do the same for class/race differences in intelligence. Score differences are a function of test construction.
(5) The fact that the Raven has been shown to be biased in two African countries lends credence to the claims here.
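Point (4) above can be made concrete with a toy simulation (my own construction, not from any of the cited papers): start with a pool of candidate items on which two groups differ, then “select” only the items showing a small group gap. The finished “test” shows almost no group difference, purely as an artifact of construction—and the same selection procedure could just as easily preserve or enlarge a gap.

```python
import random

random.seed(3)

# Hypothetical item pool: each item has a per-group pass probability.
# Group B is disadvantaged on many (culturally loaded) items.
pool = []
for _ in range(200):
    p_a = random.uniform(0.3, 0.9)
    bias = random.uniform(0.0, 0.3)          # extra difficulty for group B
    pool.append((p_a, max(0.0, p_a - bias)))

def group_gap(items):
    """Mean difference in expected score between groups A and B."""
    return sum(pa - pb for pa, pb in items) / len(items)

# "Test construction": keep only items whose group gap is small.
selected = [item for item in pool if item[0] - item[1] < 0.05]

print(round(group_gap(pool), 2))      # sizable gap on the full pool
print(round(group_gap(selected), 2))  # gap nearly gone on the curated test
```

The point is not that real test constructors run exactly this procedure, but that item analysis gives the constructor direct control over which group differences survive into the final instrument.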
This brings us to the ultimate claim of this article: IQ tests don’t test intelligence; they test middle-class knowledge and skills. The scores on IQ tests are thus not measures of intelligence but an index of one’s knowledge of middle-class culture and its knowledge structure—IQ scores are, in actuality, “middle-class knowledge and skills” scores. So, contra Jensen (1980), there is bias in mental testing due to the items chosen for inclusion on the test (we have admission from IQ-ists themselves that score variances and distributions can change).
Why do some groups of people use chopsticks and others do not? Years back, Hamer and Sirota (2000) constructed a thought experiment to answer this question. A hypothetical researcher finds a few hundred students from a university and gathers DNA samples from their cheeks, which are then mapped for candidate genes associated with chopstick use. Come to find out, one of the genetic markers was associated with chopstick use—accounting for 50 percent of the variation in the trait. The effect even replicated many times and was highly significant: but it was biologically meaningless.
One may look at East Asians and say “Why do they use chopsticks” or “Why are they so good at using them while Americans aren’t?” and come to such ridiculous studies such as the one described above. They may even find an association between the trait/behavior and a genetic marker. They may even find that it replicates and is a significant hit. But, it can all be for naught, since population stratification reared its head. Population stratification “refers to differences in allele frequencies between cases and controls due to systematic differences in ancestry rather than association of genes with disease” (Freedman et al, 2004). It “is a potential cause of false associations in genetic association studies” (Oetjens et al, 2016).
Such population stratification in the chopsticks gene study described above should have been anticipated since they studied two different populations. Kaplan (2000: 67-68) described this well:
A similar argument, by the way, holds true for molecular studies. Basically, it is easy to mistake mere statistical associations for a causal connection if one is not careful to properly partition one’s samples. Hamer and Copeland develop an amusing example of some hypothetical, badly misguided researchers searching for the “successful use of selected hand instruments” (SUSHI) gene (hypothesized to be associated with chopstick usage) between residents in Tokyo and Indianapolis. Hamer and Copeland note that while you would be almost certain to find a gene “associated with chopstick usage” if you did this, the design of such a hypothetical study would be badly flawed. What would be likely to happen here is that a genetic marker associated with the heterogeneity of the group involved (Japanese versus Caucasian) would be found, and the heterogeneity of the group involved would independently account for the differences in the trait; in this case, there is a cultural tendency for more people who grow up in Japan than people who grow up in Indianapolis to learn how to use chopsticks. That is, growing up in Japan is the causally important factor in using chopsticks; having a certain genetic marker is only associated with chopstick use in a statistical way, and only because those people who grow up in Japan are also more likely to have the marker than those who grew up in Indianapolis. The genetic marker is in no way causally related to chopstick use! That the marker ends up associated with chopstick use is therefore just an accident of design (Hamer and Copeland, 1998, 43; Bailey 1997 develops a similar example).
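Kaplan’s point is easy to reproduce with a small simulation (a toy sketch of my own, with made-up frequencies): let a causally irrelevant marker and a culturally transmitted behavior both differ between two populations, then pool the samples. The marker comes out strongly “associated” with the behavior even though it never causes it.

```python
import random

random.seed(42)

def sample_person(pop):
    """Toy model: marker frequency and chopstick use both differ between
    populations, but the marker never causes the behavior."""
    if pop == "Tokyo":
        marker = random.random() < 0.8      # marker common here
        chopsticks = random.random() < 0.9  # cultural, not genetic
    else:  # Indianapolis
        marker = random.random() < 0.2
        chopsticks = random.random() < 0.1
    return marker, chopsticks

people = [sample_person(p) for p in ["Tokyo"] * 500 + ["Indianapolis"] * 500]

# Pooled sample: chopstick use looks strongly "associated" with the marker,
# because both track population membership, not because of any causal link.
with_marker = [c for m, c in people if m]
without_marker = [c for m, c in people if not m]
print(sum(with_marker) / len(with_marker))        # high rate among carriers
print(sum(without_marker) / len(without_marker))  # low rate among non-carriers
```

Within either population alone the association vanishes, which is exactly the signature of stratification rather than causation.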
In this way, most—if not all—of the results of genome-wide association studies (GWASs) can be accounted for by population stratification. Hamer and Sirota (2000) is a warning to psychiatric geneticists not to be quick to ascribe function and causation to hits on certain genes from association studies (of which GWASs are one kind).
Many studies (for example, Sniekers et al, 2017; Savage et al, 2018) purport to “account for” less than 10 percent of the variance in a trait like “intelligence” (derived from non-construct-valid IQ tests). Other GWA studies purport to show genes that affect testosterone production, such that those who carry a certain variant are more likely to have low testosterone (Ohlsson et al, 2011). Population stratification can have an effect in these studies, too: GWASs give rise to spurious correlations arising from population structure, which is what they are actually measuring—social class, not a “trait” (Richardson, 2017b; Richardson and Jones, 2019). Note that correcting for socioeconomic status (SES) fails, as the two are distinct (Richardson, 2002). (Note, too, that GWASs lead to PGSs, which are, of course, flawed as well.)
Such papers presume that correlations are causes and that interactions between genes and environment either don’t exist or are irrelevant (see Gottfredson, 2009 and my reply). Both presumptions are false. Correlations can, of course, lead to figuring out causes but, as with the chopstick example above, attributing causation to hits that are “replicable” and “strongly significant” will still produce false positives due to that same population stratification. Of course, GWAS and similar studies attempt to account for the heritability estimates gleaned from twin, family, and adoption studies. But the assumptions behind those study designs have been shown to be false, so heritability estimates are highly exaggerated (and flawed), which leads to “looking for genes” that aren’t there (Charney, 2012; Joseph et al, 2016; Richardson, 2017a).
Richardson’s (2017b) argument is simple: (1) there is genetic stratification in human populations which will correlate with social class; (2) since there is genetic stratification in human populations which will correlate with social class, the genetic stratification will be associated with the “cognitive” variation; (3) if (1) and (2) then what GWA studies are finding are not “genetic differences” between groups in terms of “intelligence” (as shown by “IQ tests”), but population stratification between social classes. Population stratification still persists even in “homogeneous” populations (see references in Richardson and Jones, 2019), and so, the “corrections for” population stratification are anything but.
So what accounts for the small pittance of “variance explained” in GWASs and other similar association studies (Sniekers et al, 2017 “explained” less than 5 percent of the variance in IQ)? Population stratification—specifically, the capture of genetic differences that arose through migration. GWA studies use huge samples in order to find the signals of genes of small effect that supposedly underlie the complex trait being studied. Take what Noble (2018) says:
As with the results of GWAS (genome-wide association studies) generally, the associations at the genome sequence level are remarkably weak and, with the exception of certain rare genetic diseases, may even be meaningless (13, 21). The reason is that if you gather a sufficiently large data set, it is a mathematical necessity that you will find correlations, even if the data set was generated randomly so that the correlations must be spurious. The bigger the data set, the more spurious correlations will be found (3).
Calude and Longo (2016; emphasis theirs) “prove that very large databases have to contain arbitrary correlations. These correlations appear only due to the size, not the nature, of data. They can be found in “randomly” generated, large enough databases, which — as we will prove — implies that most correlations are spurious.”
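Calude and Longo’s result is easy to demonstrate empirically (a toy sketch of my own, not their proof): generate purely random “traits” for a small sample and scan all pairs, and strong correlations appear from nothing but the number of comparisons made.

```python
import random
import statistics

random.seed(0)

def pearson(x, y):
    """Plain Pearson correlation coefficient."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

n_subjects, n_variables = 20, 300    # small sample, many measured "traits"
data = [[random.gauss(0, 1) for _ in range(n_subjects)]
        for _ in range(n_variables)]

# Scan every pair of purely random variables for the strongest correlation.
best = max(abs(pearson(data[i], data[j]))
           for i in range(n_variables) for j in range(i + 1, n_variables))
print(round(best, 2))   # a "strong" correlation, from pure noise
```

With 300 variables there are nearly 45,000 pairs to test, so some pairs are all but guaranteed to correlate strongly by chance alone; scale this up to a genome’s worth of markers and the problem only worsens.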
So why should we take association studies seriously when they fall prey to the problem of population stratification (measuring differences between social classes and other populations) along with the fact that big datasets lead to spurious correlations? I fail to think of a good reason why we should take these studies seriously. The chopsticks gene example perfectly illustrates the current problems we have with GWASs for complex traits: we are just seeing what is due to social—and other—stratification between populations and not any “genetic” differences in the trait that is being looked at.
A fallacy is an error in reasoning that makes an argument invalid. The “interactionism fallacy”—coined by Gottfredson (2009)—is the purported fallacy of holding that, since genes and environment interact, heritability estimates are not useful—especially for humans (they are useful for nonhuman animals, where environments can be fully controlled; see Schonemann, 1997; Moore and Shenk, 2016). There are many reasons why this ‘fallacy’ is anything but: it is a simple truism that genes and environment (along with other developmental products) interact to ‘construct’ the organism—what Oyama (2000) terms ‘constructive interactionism’, “whereby each combination of genes and environmental influences simultaneously interacts to produce a unique result.” The causal parity thesis (CPT) is the thesis that genes/DNA play an important role in development, but so do other variables, so there is no reason to privilege genes/DNA above other developmental variables (see Noble, 2012 for a similar approach). Genes are not special developmental resources, nor are they more important than other developmental resources. So the thesis is that genes and other developmental resources are developmentally ‘on par’.
Genes need the environment. Without the environment, genes would not be expressed. Behavior geneticists claim to be able to partition genes from environment—nature from nurture—on the basis of heritability estimates, mostly gleaned from twin and adoption studies. However, the method is flawed: since genes interact with the environment and with other genes, how would it be possible to neatly partition the effects of genes from the effects of the environment? Behavior geneticists claim that we can. They—and others—invoke the “interactionism fallacy”: the charge that it is fallacious to conclude, from the fact that genes interact with the environment, that heritability estimates are useless. This “fallacy”, though, confuses the issue.
Behavior geneticists claim to show how genes and the environment affect the ontogeny of traits in humans with twin and adoption studies (though these methods are highly flawed). The purpose of this “fallacy” is to disregard what developmental systems theorists claim about the interaction of nature and nurture—genes and environment.
Gottfredson (2009) coins the “interactionism fallacy”, which is “an irrelevant truth [which is] that an organism’s development requires genes and environment to act in concert” and the “two forces are … constantly interacting” whereas “Development is their mutual product.” Gottfredson also states that “heritability … refers to the percentage of variation in … the phenotype, which has been traced to genetic variation within a particular population.” (She also falsely claims that “One’s genome is fixed at birth”—see epigenetics/methylation studies.) Heritability estimates, according to Philip Kitcher, are “‘irrelevant’ and the fact that behavior geneticists persist in using them is ‘an unfortunate tic from which they cannot free themselves’ (Kitcher, 2001: 413)” (quoted in Griffiths, 2002).
Gottfredson is engaging in developmental denialism. Developmental denialism “occurs when heritability is treated as a causal mechanism governing the developmental reoccurrence of traits across generations in individuals.” Gottfredson, with her “interactionism fallacy” is denying organismal development by attempting to partition genes from environment. As Rose (2006) notes, “Heritability estimates are attempts to impose a simplistic and reified dichotomy (nature/nurture) on non-dichotomous processes.” The nature vs nurture argument is over and neither has won—contra Plomin’s take—since they interact.
Gottfredson seems confused, since this point was debated by Plomin and Oyama back in the 80s (Plomin’s review of Oyama’s book The Ontogeny of Information; see Oyama, 1987, 1988; Plomin, 1988a, b). In any case, it is true that development requires genes and environment to interact. But Gottfredson is talking about the concept of heritability—the attempt to partition genes and environment through twin, adoption, and family studies (which have a whole slew of problems). For example, Moore and Shenk (2016: 6) write:
Heritability statistics do remain useful in some limited circumstances, including selective breeding programs in which developmental environments can be strictly controlled. But in environments that are not controlled, these statistics do not tell us much.
Susan Oyama writes in The Ontogeny of Information (2000, pg 67):
Heritability coefficients, in any case, because they refer not only to variation in genotype but to everything that varied (was passed on) with it, only beg the question of what is passed on in evolution. All too often heritability estimates obtained in one setting are used to infer something about an evolutionary process that occurred under conditions, and with respect to a gene pool, about which little is known. Nor do such estimates tell us anything about development.
Characters are produced by the interaction of nongenetic and genetic factors. The biological flaw, as Moore and Shenk note, throws a wrench into the claims of Gottfredson and other behavior geneticists. Phenotypes are ALWAYS due to genetic and nongenetic factors interacting. So the two flaws of heritability—the environmental flaw and the biological flaw (Moore and Shenk, 2016)—come together to “interact” to refute the simplistic claim that genes and environment—nature and nurture—can be separated.
For instance, as Moore (2016) writes, though “twin study methods are among the most powerful tools available to quantitative behavioral geneticists (i.e., the researchers who took up Galton’s goal of disentangling nature and nurture), they are not satisfactory tools for studying phenotype development because they do not actually explore biological processes.” (See also Richardson, 2012.) This is because twin studies ignore biological/developmental processes that lead to phenotypes.
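The problem with partitioning variance in the presence of gene-environment interaction can be shown with a deterministic toy model (my own construction, not from the cited papers): let the phenotype be a pure interaction of genotype and environment, then compute a naive “heritability” (the share of phenotypic variance tracked by genotype means) under two different environmental distributions. The same genotypes yield wildly different figures.

```python
import statistics

def heritability(environments):
    """Naive variance partition: fraction of phenotypic variance that
    tracks genotype, computed the way a between-group comparison would."""
    people = [(g, e) for g in (0, 1) for e in environments]
    # Pure interaction: the genotype's effect depends entirely on environment.
    phenos = [g * e for g, e in people]
    total = statistics.pvariance(phenos)
    by_g = {g: statistics.mean(p for (gg, _), p in zip(people, phenos) if gg == g)
            for g in (0, 1)}
    between = statistics.pvariance([by_g[g] for g, _ in people])
    return between / total

# Identical genotypes, different environmental distributions:
print(heritability([1, 1, 1, 1]))  # 1.0  -> looks "fully genetic"
print(heritability([0, 0, 0, 1]))  # ~0.14 -> same genes, mostly "environmental"
```

The estimate is a fact about the particular population and its spread of environments, not about any causal contribution of the genes—which is exactly Oyama’s point about inferring evolution or development from heritability coefficients.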
Gamma and Rosenstock (2017) write that the concept of heritability that behavioral geneticists use is “a generally useless quantity” while “the behavioral genetic dichotomy of genes vs environment is fundamentally misguided.” This brings us back to the CPT; there is causal parity among all processes/interactants that form the organism and its traits, thus the concept of heritability that behavioral geneticists employ is a useless measure. Oyama, Griffiths, and Gray (2001: 3) write:
These often overlooked similarities form part of the evidence for DST’s claim of causal parity between genes and other factors of development. The “parity thesis” (Griffiths and Knight 1998) does not imply that there is no difference between the particulars of the causal roles of genes and factors such as endosymbionts or imprinting events. It does assert that such differences do not justify building theories of development and evolution around a distinction between what genes do and what every other causal factor does.
Behavior geneticists’ endeavor, though, is futile. Aaron Panofsky (2016: 167) writes that “Heritability estimates do not help identify particular genes or ascertain their functions in development or physiology, and thus, by this way of thinking, they yield no causal information.” (Also see Panofsky, 2014; Misbehaving Science: Controversy and the Development of Behavior Genetics.) So, the behavioral genetic method of partitioning genes and environment does not—and can not—show causation for trait ontogeny.
Now, while people like Gottfredson and others may deny it, they are genetic determinists. Genetic determinism, as defined by Griffiths (2002) is “the idea that many significant human characteristics are rendered inevitable by the presence of certain genes.” Using this definition, many behavior geneticists and their sympathizers have argued that certain traits are “inevitable” due to the presence of certain genes. Genetic determinism is literally the idea that genes “determine” aspects of characters and traits, though it has been known for decades that it is false.
Now we can take a look at Brian Boutwell’s article Not Everything Is An Interaction. Boutwell writes:
Albert Einstein was a brilliant man. Whether his famous equation of E=mc2 means much to you or not, I think we can all concur on the intellectual prowess—and stunning hair—of Einstein. But where did his brilliance come from? Environment? Perhaps his parents fed him lots of fish (it’s supposed to be brain food, after all). Genetics? Surely Albert hit some sort of genetic lottery—oh that we should all be so lucky. Or does the answer reside in some combination of the two? How very enlightened: both genes and environment interact and intertwine to yield everything from the genius of Einstein to the comedic talent of Lewis Black. Surely, you cannot tease their impact apart; DNA and experience are hopelessly interlocked. Except, they’re not. Believing that they are is wrong; it’s a misleading mental shortcut that has largely sown confusion in the public about human development, and thus it needs to be retired.
Most traits are the product of genetic and environmental influence, but the fact that both genes and environment matter does not mean that they interact with one another. Don’t be lured by the appeal of “interactions.” Important as they might be from time to time, and from trait to trait, not everything is an interaction. In fact, many things likely are not.
I don’t even know where to begin here. Boutwell, like Gottfredson, is confused. The only thing that needs to be retired because it “has largely sown confusion in the public about human development” is, ironically, the concept of heritability (Moore and Shenk, 2016)! I have no idea why Boutwell claimed that it’s false that “DNA and experience [environment] are hopelessly interlocked.” This is because, as Schneider (2007) notes, “the very concept of a gene requires an environment.” Since the concept of the gene requires the environment, how can we disentangle them into neat percentages like behavior geneticists claim to do? That’s right: we can’t. Do be lured by the appeal of interactions; all biological and nonbiological stuff constantly interacts with one another.
Boutwell’s claims are nonsense. It would be worth it to quote Richard Lewontin’s forward in the 2000 2nd edition of Susan Oyama’s The Ontogeny of Information (emphasis Lewontin’s):
Nor can we partition variation quantitatively, ascribing some fraction of variation to genetic differences and the remainder to environmental variation. Every organism is the unique consequence of the reading of its DNA in some temporal sequence of environments and subject to random cellular events that arise because of the very small number of molecules in each cell. While we may calculate statistically an average difference between carriers of one genotype and another, such average differences are abstract constructs and must not be reified with separable concrete effects of genes in isolation from the environment in which the genes are read. In the first edition of The Ontogeny of Information Oyama characterized her construal of the causal relation between genes and environment as interactionist. That is, each unique combination of genes and environment produces a unique and a priori unpredictable outcome of development. The usual interactionist view is that there are separable genetic and environmental causes, but the effects of these causes acting in combination are unique to the particular combination. But this claim of ontogenetically independent status of the causes as causes, aside from their interaction in the effects produced, contradicts Oyama’s central analysis of the ontogeny of information. There are no “gene actions” outside environments, and no “environmental actions” can occur in the absence of genes. The very status of environment as a contributing cause to the nature of an organism depends on the existence of a developing organism. Without organisms there may be a physical world, but there are no environments. In like manner no organisms exist in the abstract without environments, although there may be naked DNA molecules lying in the dust. Organisms are the nexus of external circumstances and DNA molecules that make these physical circumstances into causes of development in the first place.
They become causes only at their nexus, and they cannot exist as causes except in their simultaneous action. That is the essence of Oyama’s claim that information comes into existence only in the process of ontogeny. (Oyama, 2000: 16)
There is an “interactionist consensus” (see Oyama, Griffiths, and Gray, 2001; What is Developmental Systems Theory? pg 1-13): the organism and the suite of traits it has is due to the interaction of genetic/environmental/epigenetic etc. resources at every stage of development. Therefore, for organismal development to be successful, it always requires the interaction of genes, environment, epigenetic processes, and interactions between everything that is used to ‘construct’ the organism and the traits it has. Thus “it makes no sense to ask if a particular trait is genetic or environmental in origin. Understanding how a trait develops is not a matter of finding out whether a particular gene or a particular environment causes the trait; rather, it is a matter of understanding how the various resources available in the production of the trait interact over time” (Kaplan, 2006).
Lastly, I will comment briefly on Sesardic’s (2005: chapter 2) critiques of developmental systems theorists and their critique of heritability and the concept of interactionism. Sesardic argues in the chapter that interaction between genes and environment—nature and nurture—does not undermine heritability estimates (the nature/nurture partition). Philosopher of science Helen Longino argues in her book Studying Human Behavior (2013):
By framing the debate in terms of nature versus nurture and as though one of these must be correct, Sesardic is committed to both downplaying the possible contributions of environmentally oriented research and to relying on a highly dubious (at any rate, nonmethodological) empirical claim.
In sum, the “interactionism fallacy” (coined by Gottfredson) is not a ‘fallacy’ (error in reasoning) at all. For, as Oyama writes in Evolution’s Eye: A Systems View of the Biology-Culture Divide, “A not uncommon reaction to DST is, ‘‘That’s completely crazy, and besides, I already knew it” (pg 195). This is exactly what Gottfredson (2009) states: that she “already knew” that there is an interaction between nature and nurture; but she goes on to deny arguments from Oyama, Griffiths, Stotz, Moore, and others on the uselessness of heritability estimates, along with the claim that nature and nurture cannot be neatly partitioned into percentages since they are constantly interacting. Causal parity between genes and other developmental resources, too, upends the claim that heritability estimates for any trait make sense (not least because of how heritability estimates are gleaned for humans—mostly twin, family, and adoption studies). Developmental denialism—what Gottfredson and others often engage in—runs rampant in the “behavioral genetic” sphere; Oyama, Griffiths, Stotz, and others show why we should not deny development and should discard these estimates for human traits.
Heritability estimates imply that there is a “nature vs nurture” when it is really “nature and nurture”—the two constantly interacting. Because of this, we should discard these estimates: it does not make sense to partition an interacting, self-organizing developmental system. Claims from behavior geneticists—that genes and environment can be separated—are clearly false.
The claim that “Men are stronger than women” does not need to be said—it is obvious through observation that men are stronger than women. To my (non-)surprise, I saw someone on Twitter state:
“I keep hearing that the sex basis of patriarchy is inevitable because men are (on average) stronger. Notwithstanding that part of this literally results from women in all stages of life being denied access to and discourage from physical activity, there’s other stuff to note.”
To which I replied:
“I don’t follow – are you claiming that if women were encouraged to be physically active that women (the population) can be anywhere *near* men’s (the population) strength level?”
I then got told to “Fuck off,” because I’m a “racist” (due to the handle I use and my views on the reality of race). In any case, it is true that part of this difference stems from cultural factors—think of women wanting the “toned” look, not wanting to get “big and bulky” (as if that happens overnight), and so avoiding heavy weights for fear of becoming cartoonish.
Here’s the thing though: Men have about 61 percent more total muscle mass than women (attributed to higher levels of testosterone), and most of that difference is allocated to the upper body—men have about 75 percent more arm muscle mass than women, which accounts for their 90 percent greater upper body strength. Men also have about 50 percent more lower body muscle mass than women, which is related to their 65 percent greater lower body strength (see references in Lassek and Gaulin, 2009: 322).
Men have around 24 more pounds of skeletal muscle mass than women; in this study, women were about 40 percent weaker in the upper body and 33 percent weaker in the lower body (Janssen et al, 2000). Miller et al (1993) found that women had a 45 percent smaller cross-sectional area (CSA) in the biceps brachii, 45 percent smaller in the elbow flexors, 30 percent smaller in the vastus lateralis, and 25 percent smaller CSA in the knee extensors, as I wrote in Muscular Strength by Gender and Race, where I concluded:
The cause for less upper-body strength in women is due to the distribution of women’s lean tissue being smaller.
Men have larger muscle fibers, which in my opinion is a large part of the reason for men’s strength advantage over women. Now, if women were indeed “discouraged” from physical activity, that would be a problem for their bone density: our bones are porous, and by doing a lot of activity we can strengthen them (see e.g., Fausto-Sterling, 2005). Bishop, Cureton, and Collins (1987) show that the sex difference in strength in close-to-equally-trained men and women “is almost entirely accounted for by the difference in muscle size,” which lends credence to the claim I made above.
Lindle et al (1997) conclude that:
… the results of this study indicate that Con strength levels begin to decline in the fourth rather than in the fifth decade, as was previously reported. Contrary to previous reports, there is no preservation of Ecc compared with Con strength in men or women with advancing age. Nevertheless, the decline in Ecc strength with age appears to start later in women than in men and later than Con strength did in both sexes. In a small subgroup of subjects, there appears to be a greater ability to store and utilize elastic energy in older women. This finding needs to be confirmed by using a larger sample size. Muscle quality declines with age in both men and women when Con peak torque is used, but declines only in men when Ecc peak torque is used. [“Con” and “Ecc” strength refer to concentric and eccentric actions]
Women are shorter and have less fat-free muscle mass than men. Women also have a weaker grip—even when matched for height and weight, men had higher levels of lean mass compared to women (92 and 79 percent respectively; Nieves et al, 2009). Men also had greater bone mineral density (BMD) and bone mineral content (BMC) compared to women. Now do some quick thinking—do you think that someone with weaker bones could be stronger than someone with stronger bones? If person A had higher levels of BMC and BMD compared to person B, who do you think would perform better on a strength test? Quite obviously, the stronger one’s bones are, the more weight they can bear on them. So if one has weak bones (low BMC/BMD) and puts a heavy load on their back, their bones could snap during the lift.
Alswat (2017) reviewed the literature on bone density between men and women and found that men had higher BMD in the hip and higher BMC in the lower spine. Women also had bone fractures earlier than men. Some of this is no doubt cultural, as explained above. However, even if we had a boy and a girl locked in a room for their whole lives and they did the same exact things, ate the same food, and lifted the same weights, I would bet my freedom that there still would be a large difference between the two, skewing where we know it would skew. Women are more likely to suffer from osteoporosis than are men (Sözen, Özışık, and Başaran 2016).
So if women have weaker bones compared to men, then how could they possibly be stronger? Even if men and women had the same kind of physical activity down to a tee, could you imagine women being stronger than men? I couldn’t—but that’s because I have more than a basic understanding of anatomy and physiology and what that means for differences in strength—or running—between men and women.
I don’t doubt that there are cultural reasons that account for part of the large difference in strength between men and women—I do doubt, though, that the gap can be meaningfully closed. Yes, biology interacts with culture: the developmental variables that coalesce to make men “Men” and those that coalesce to make women “Women” converge in creating the stark differences in phenotype between the sexes, which then explains how sex differences manifest themselves.
Differences in bone strength between men and women, along with distribution of lean tissue, differences in lean mass, and differences in muscle size explain the disparity in muscular strength between men and women. You can even imagine a man and woman of similar height and weight and they would, of course, look different. This is due to differences in hormones—the two main players being testosterone and estrogen (see Lang, 2011).
So yes, part of the difference in strength between men and women is rooted in culture and in how we view women who strength train (way more women should strength train, as a matter of fact). Still, I find it hard to believe that even if the “cultural stigma” around women who lift heavy weights at the gym disappeared overnight, women would be stronger than men. Differences in strength exist between men and women, and they exist due to the complex relationship between biology and culture—nature and nurture (which cannot be disentangled).
Action and behavior are distinct concepts, although in everyday speech they are used interchangeably. The two concepts are needed to distinguish what one intends to do from what one merely reacts to, and how. In this article, I will explain the distinction between the two and how and why people get it wrong when discussing them—since using the terms interchangeably is inaccurate.
Actions are intentional; they are done for reasons (Davidson, 1963). An action is determined by the agent’s current intentional state, and the agent then acts for reasons. So, in effect, the agent’s intentional states cause the action, but the action is carried out for reasons. Actions are that which is done by an agent—but stated this way, the term could be used interchangeably with behavior. The Wikipedia article on “action” states:
action is an intentional, purposive, conscious and subjectively meaningful activity
So actions are conscious, compared to behaviors which are reflexive and unconscious—not done for reasons.
Davidson (1963: 685) writes:
Whenever someone does something for a reason, therefore, he can be characterized as (a) having some sort of pro attitude toward actions of a certain kind, and (b) believing (or knowing, perceiving, noticing, remembering) that his action is of that kind.
So providing the reason why an agent did A requires naming the pro-attitude—beliefs paired with desires—or the related belief that caused the agent’s action. When I explain behavior, this will become clear.
Behavior is different: behavior is a reaction to a stimulus, and this reaction is unconscious. For example, take a doctor’s office visit. Hitting the knee in the right spot causes the knee to jerk up; doctors use this reflex to check for nerve damage. It tests the L2, L3, and L4 segments of the spinal cord, so if there is no reflex, the doctor knows there is a problem.
This is done without thought—the patient does not think about the reflex. This then shows how and why action and behavior are distinct concepts. Here’s what occurs when the doctor hits the patient’s knee:
When the doctor hits the knee, the patient’s thigh muscle stretches. When the thigh muscle stretches, a signal is then sent along the sensory neuron to the spinal cord where it interacts with a motor neuron which goes to the thigh muscle. The muscle then contracts which causes the reflex. (Recall my article on causes of muscle movement.)
So this, compared to consciously taking a step—or consciously jerking your leg the way the patellar reflex would—is what distinguishes action from behavior. Sure, the behavior of the patellar reflex occurred for a reason, but it was not done consciously by the agent, so it is therefore not an action.
Perhaps it would be important at this point to explain the differences between action, conduct, and behavior, because we have used these three terms in the discussion of caring. …
Teleology, the reader is reminded, involves goals or lures that provide the reasons for a person acting in a certain way. It is goals or reasons that establish action from simple behavior. On the other hand the concept of efficient causation is involved in the concept of behavior. Behavior is the result of antecedent conditions. The individual behaves in response to causal stimuli or antecedent conditions. Hence, behavior is a reaction to what already is—the result of a push from the past to do something in the present. In contrast, an action aims at the future. It is motivated by a vision of what can be. (Brencick and Webster, 2000: 147)
This is also another thing that Darwin got wrong. He believed that instincts and reflexes are inherited—this much is not wrong, since they are behaviors, and behaviors are dispositional, which means they can be selected. However, he believed that before they were inherited as instincts and reflexes, they were intentional acts. As Badcock (2000: 56) writes in Evolutionary Psychology: A Critical Introduction:
Darwin explicitly states this when he says that ‘it seems probable that some actions, which were at first performed consciously, have become through habit and association converted into reflex actions, and are now firmly fixed and inherited.’
This is quite obviously wrong, as I have explained above; instead of “reflexive actions”, Darwin meant “reflexive behaviors”. So, it seems that Darwin did not grasp the distinction between “action” and “behavior” either.
We can then form a simple argument about cognition:

Premise 1: Thinking (cognition) is intentional.
Premise 2: Whatever is intentional is an action, and nothing is both an action and a behavior (behaviors are dispositional).
Conclusion: Therefore, cognition is an action and cannot be responsible for behavior.

This is a natural outcome of what has been argued here, given the distinction between action and behavior. When we think of “cognition,” what comes to mind? Thinking. Thinking is an action—so thinking (cognition) is intentional. Intentionality is “the power of minds and mental states to be about, to represent, or to stand for, things, properties and states of affairs.” So, when we think, our minds/mental states can represent, or stand for, things, properties, and states of affairs. Therefore, cognition is intentional. Since cognition is intentional and behavior is dispositional, it directly follows that cognition cannot be responsible for behavior.
Thinking is a mental activity which results in a thought. So if thinking is a mental activity which results in a thought, what is a thought? A thought is a mental state of considering a particular idea or answer to a question or committing oneself to an idea or answer. These mental states are, or are related to, beliefs. When one considers a particular answer to a question they are paving the way to holding a particular belief; when they commit themselves to an answer they have formulated a new belief.
Beliefs are propositional attitudes: believing p involves adopting the belief attitude toward proposition p. So cognition is thinking: a mental process that results in the formation of a propositional belief. When one acquires a propositional attitude by thinking, the process takes place in stages: later propositional attitudes are justified by earlier ones. So cognition is thinking, and thinking is the mental act of considering a particular view (proposition).
Therefore, thinking is an action (since it is intentional) and cannot possibly be a behavior (a disposition). Something can be either an action or a behavior—it cannot be both.
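The argument that cognition, being intentional, cannot be a behavior can even be checked mechanically. Here is a minimal sketch in Lean (the predicate and constant names are mine, introduced purely for illustration):

```lean
-- Hypothetical vocabulary (names are mine, for illustration only):
-- Intentional x : "x is intentional"
-- Action x     : "x is an action"
-- Behavior x   : "x is a behavior"
variable {Thing : Type} (Intentional Action Behavior : Thing → Prop)
variable (cognition : Thing)

-- Premise 1: whatever is intentional is an action.
-- Premise 2: nothing is both an action and a behavior.
-- Premise 3: cognition (thinking) is intentional.
-- Conclusion: cognition is not a behavior.
theorem cognition_not_behavior
    (h1 : ∀ x, Intentional x → Action x)
    (h2 : ∀ x, ¬(Action x ∧ Behavior x))
    (h3 : Intentional cognition) : ¬Behavior cognition :=
  fun hb => h2 cognition ⟨h1 cognition h3, hb⟩
```

The proof simply combines the premises: from cognition’s intentionality we get that it is an action, and the exclusivity premise then rules out its being a behavior.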
Let’s say that I have the belief that food is downtown. I desire to eat. So I intend to go downtown to get some food, while the cause of all this is the sensation of hunger. This chain shows how actions are intentional—how one intends to act.
Furthermore, using the example I explained above, the patellar reflex the doctor elicits is a behavior—it is not an action, since the agent himself did not cause it. One could say that administering the reflex test is an action for the doctor performing it, but it cannot be an action for the agent the test is being done on—for that agent, it is a behavior.
I have explained the difference between action and behavior and how and why they are distinct. I gave an example of action (cognition) and behavior (patellar reflex) and explained how they differ. I then gave an argument showing that cognition (an action) cannot possibly be responsible for behavior. I showed how Darwin falsely believed that actions could eventually become behaviors—he pretty much stated “Actions can be selected and eventually become behaviors.” This is nonsense. Actions, by virtue of being intentional, cannot be selected; even if they are done over and over again, they do not eventually become behaviors. Behavior, on the other hand, by virtue of being dispositional, can be selected. In any case, I have shown that the two concepts are distinct and that it is nonsense to conflate the terms.
Construct validity for IQ is nonexistent. Some people may point to Haier’s brain imaging data as evidence of construct validity for IQ, even though there are numerous problems with brain imaging and neuroreductionist explanations for cognition are “probably not” possible (Uttal, 2014; also see Uttal, 2012). Construct validity refers to how well a test measures what it purports to measure—and this is non-existent for IQ (see Richardson and Norgate, 2014). If the tests did test what they purport to (intelligence), then they would be construct valid. I will show an example of a measure that was validated and shown to be reliable without circular reliance on the instrument itself; I will show that the measures people use in attempting to prove that IQ has construct validity fail; and finally I will provide an argument that the claim “IQ tests test intelligence” is false, since the tests are not construct valid.
Jung and Haier (2007) formulated the P-FIT hypothesis—the Parieto-Frontal Integration Theory. The theory purports to show how individual differences in test scores are linked to variations in brain structure and function. There are, however, a few problems with the theory (as Richardson and Norgate, 2007 point out in the same issue; pg 162-163). IQ and brain region volumes are experience-dependent (eg Shonkoff et al, 2014; Betancourt et al, 2015; Lipina, 2016; Kim et al, 2019). Since they are experience-dependent, different experiences will form different brains and different test scores. Richardson and Norgate (2007) argue that bigger brain areas are not the cause of IQ; rather, both reflect the same experience-dependency: exposure to middle-class knowledge and skills provides a better knowledge base for test-taking (Richardson, 2002), and access to better nutrition is found in the middle and upper classes, whereas, as Richardson and Norgate (2007) note, lower-quality, more energy-dense foods are more likely to be found in the lower classes. Thus, Haier et al did not “find” what they purported to, based as they were on simplistic correlations.
Now let me provide the argument about IQ test experience-dependency:
Premise 1: IQ tests are experience-dependent.
Premise 2: IQ tests are experience-dependent because some classes are more exposed to the knowledge and structure of the test by way of being born into a certain social class.
Premise 3: If IQ tests are experience-dependent because some social classes are more exposed to the knowledge and structure of the test along with whatever else comes with the membership of that social class then the tests test distance from the middle class and its knowledge structure.
Conclusion 1: IQ tests test distance from the middle class and its knowledge structure (P1, P2, P3).
Premise 4: If IQ tests test distance from the middle class and its knowledge structure, then how an individual scores on a test is a function of that individual’s cultural/social distance from the middle class.
Conclusion 2: How an individual scores on a test is a function of that individual’s cultural/social distance from the middle class, since the items on the test are more likely to be encountered in the middle class (i.e., they are experience-dependent), and so one from a lower class will necessarily score lower due to not being exposed to the items on the test (C1, P4).
Conclusion 3: IQ tests test distance from the middle class and its knowledge structure, thus, IQ scores are middle-class scores (C1, C2).
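The chain of premises above is just a pair of nested implications, and it can be rendered in Lean to make the inferential structure explicit (the proposition names are mine, for illustration only):

```lean
-- Hypothetical proposition names (mine, for illustration only):
-- E : IQ tests are experience-dependent via class-specific exposure (P1, P2)
-- D : IQ tests test distance from the middle class and its knowledge structure
-- F : an individual's score is a function of cultural/social distance
--     from the middle class
variable (E D F : Prop)

-- P3 is E → D and P4 is D → F; conclusions C1–C3 follow by chaining.
theorem iq_distance (p12 : E) (p3 : E → D) (p4 : D → F) : D ∧ F :=
  ⟨p3 p12, p4 (p3 p12)⟩
```

Nothing deep is happening logically—the argument’s force rests entirely on the empirical premises, which is why the text spends its effort defending them.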
Still further regarding neuroimaging, we need to take a look at William Uttal’s work.
Uttal (2014) writes: “The problem is that both of these approaches are deeply flawed for methodological, conceptual, and empirical reasons. One reason is that simple models composed of a few neurons may simulate behavior but actually be based on completely different neuronal interactions. Therefore, the current best answer to the question asked in the title of this contribution [Are neuroreductionist explanations of cognition possible?] is–probably not.”
Uttal even has a book on meta-analyses and brain imaging—which, of course, has implications for Jung and Haier’s P-FIT theory. In his book Reliability in Cognitive Neuroscience: A Meta-meta Analysis, Uttal (2012: 2) writes:
There is a real possibility, therefore, that we are ascribing much too much meaning to what are possibly random, quasi-random, or irrelevant response patterns. That is, given the many factors that can influence a brain image, it may be that cognitive states and brain image activations are, in actuality, only weakly associated. Other cryptic, uncontrolled intervening factors may account for much, if not all, of the observed findings. Furthermore, differences in the localization patterns observed from one experiment to the next nowadays seems to reflect the inescapable fact that most of the brain is involved in virtually any cognitive process.
Uttal (2012: 86) also warns about individual variability throughout the day, writing:
However, based on these findings, McGonigle and his colleagues emphasized the lack of reliability even within this highly constrained single-subject experimental design. They warned that: “If researchers had access to only a single session from a single subject, erroneous conclusions are a possibility, in that responses to this single session may be claimed to be typical responses for this subject” (p. 708).
The point, of course, is that if individual subjects are different from day to day, what chance will we have of answering the “where” question by pooling the results of a number of subjects?
That such neural activations gleaned from neuroimaging studies vary from individual to individual, and even with time of day within an individual, means that these differences are not accounted for in group analyses (meta-analyses). “… the pooling process could lead to grossly distorted interpretations that deviate greatly from the actual biological function of an individual brain. If this conclusion is generally confirmed, the goal of using pooled data to produce some kind of mythical average response to predict the location of activation sites on an individual brain would become less and less achievable” (Uttal, 2012: 88).
Clearly, individual differences in brain imaging are not stable and they change day to day, hour to hour. Since this is the case, how does it make sense to pool (meta-analyze) such data and then point to a few brain images as important for X if there is such large variation in individuals day to day? Neuroimaging data is extremely variable, which I hope no one would deny. So when such studies are meta-analyzed, inter- and intrasubject variation is obscured.
The idea of an average or typical “activation region” is probably nonsensical in light of the neurophysiological and neuroanatomical differences among subjects. Researchers must acknowledge that pooling data obscures what may be meaningful differences among people and their brain mechanisms. However, there is an even more negative outcome. That is, by reifying some kinds of “average,” we may be abetting and preserving some false ideas concerning the localization of modular cognitive function (Uttal, 2012: 91).
So when we are dealing with the raw neuroimaging data (i.e., the unprocessed locations of activation peaks), the graphical plots provided of the peaks do not lead to convergence onto a small number of brain areas for that cognitive process.
… inconsistencies abound at all levels of data pooling when one uses brain imaging techniques to search for macroscopic regional correlates of cognitive processes. Individual subjects exhibit a high degree of day-to-day variability. Intersubject comparisons between subjects produce an even greater degree of variability.
The overall pattern of inconsistency and unreliability that is evident in the literature to be reviewed here again suggests that intrinsic variability observed at the subject and experimental level propagates upward into the meta-analysis level and is not relieved by subsequent pooling of additional data or averaging. It does not encourage us to believe that the individual meta-analyses will provide a better answer to the localization of cognitive processes question than does any individual study. Indeed, it now seems plausible that carrying out a meta-analysis actually increases variability of the empirical findings (Uttal, 2012: 132).
So since reliability is low at all levels of neuroimaging analysis, it is very likely that the relations between particular brain regions and specific cognitive processes have not been established and may not even exist. The numerous reports purporting to find such relations report random and quasi-random fluctuations in extremely complex systems.
Construct validity (CV) is “the degree to which a test measures what it claims, or purports, to be measuring.” A “construct” is a theoretical psychological entity. So CV in this instance refers to whether IQ tests test intelligence. We accept a measure of an unseen function as valid when differences in the measure are mechanistically related to differences in that function: e.g., blood alcohol and consumption level, or the height of the mercury column and blood pressure. These measures are valid because they rely on well-known theoretical constructs. There is no such theory for individual intelligence differences (Richardson, 2012), so IQ tests cannot be construct valid.
Compare thermometers. Thermometers measure temperature; IQ tests (supposedly) measure intelligence. There is a crucial difference between the two, though: the accuracy of thermometers in measuring temperature was established without circular reliance on the thermometer itself (see Chang, 2007).
In regard to IQ tests, it is proposed that the tests are valid since they predict school performance, adult occupation levels, income, and wealth. This, though, is circular reasoning and does not establish that IQ tests are valid measures (Richardson, 2017). Unlike the thermometer, which was validated without circular reliance on the instrument itself, IQ tests rely on other tests to “prove” they are valid. IQ and other similar tests are different versions of the same test, so they cannot be said to be validated on that basis, since their “validity” amounts to their level of agreement with previous IQ tests—for example, the Stanford-Binet. Indeed, “Most other tests have followed the Stanford–Binet in this regard (and, indeed are usually ‘validated’ by their level of agreement with it; Anastasi, 1990)” (Richardson, 2002: 301). How weird: new tests are validated by their agreement with other, non-construct-valid tests, which does not, of course, establish the validity of IQ tests.
IQ tests are constructed by excising items that discriminate between better and worse test takers, meaning, of course, that the bell curve is not natural, but forced (see Simon, 1997). Humans make the bell curve, it is not a natural phenomenon re IQ tests, since the first tests produced weird-looking distributions. (Also see Richardson, 2017a, Chapter 2 for more arguments against the bell curve distribution.)
Finally, Richardson and Norgate (2014) write:
In scientific method, generally, we accept external, observable, differences as a valid measure of an unseen function when we can mechanistically relate differences in one to differences in the other (e.g., height of a column of mercury and blood pressure; white cell count and internal infection; erythrocyte sedimentation rate (ESR) and internal levels of inflammation; breath alcohol and level of consumption). Such measures are valid because they rely on detailed, and widely accepted, theoretical models of the functions in question. There is no such theory for cognitive ability nor, therefore, of the true nature of individual differences in cognitive functions.
That “There is no such theory for cognitive ability” is even admitted by lead IQ-ist Ian Deary in his 2001 book Intelligence: A Very Short Introduction, in which he writes “There is no such thing as a theory of human intelligence differences—not in the way that grown-up sciences like physics or chemistry have theories” (Richardson, 2012). This is yet another barrier to IQ’s claimed validity, since there is no theory of human intelligence differences.
In sum, neuroimaging meta-analyses (like Jung and Haier, 2007; see also Richardson and Norgate, 2007 in the same issue, pg 162-163) do not show what they purport to show for numerous reasons. (1) There are, of course, consequences of malnutrition for brain development and lower classes are more likely to not have their nutritional needs met (Ruxton and Kirk, 1996); (2) low classes are more likely to be exposed to substance abuse (Karriker-Jaffe, 2013), which may well impact brain regions; (3) “Stress arising from the poor sense of control over circumstances, including financial and workplace insecurity, affects children and leaves “an indelible impression on brain structure and function” (Teicher 2002, p. 68; cf. Austin et al. 2005)” (Richardson and Norgate, 2007: 163); and (4) working-class attitudes are related to poor self-efficacy beliefs, which also affect test performance (Richardson, 2002). So, Jung and Haier’s (2007) theory “merely redescribes the class structure and social history of society and its unfortunate consequences” (Richardson and Norgate, 2007: 163).
In regard to neuroimaging, pooling together (meta-analyzing) numerous studies is fraught with conceptual and methodological problems, since a high degree of individual variability exists. Attempts to find “average” brain differences between individuals fail, and the meta-analytic technique (used, eg, by Jung and Haier, 2007) does not find what it wants to find: average brain areas where, supposedly, cognition occurs across individuals. Meta-analyzing such disparate studies does not show an “average” location where cognitive processes occur that could then cause differences in IQ test-taking. Reductionist neuroimaging studies do not, as is popularly believed, pinpoint where cognitive processes take place in the brain; the purported relations have not been established and may not even exist.
Neuroreductionism does not work; attempting to reduce cognitive processes to different regions of the brain, even using the meta-analytic techniques discussed here, fails. There “probably cannot” be neuroreductionist explanations for cognition (Uttal, 2014), and so using these studies to attempt to pinpoint where in the brain cognition (supposedly) occurs for such ancillary things as IQ test-taking fails. (Neuro)reductionism fails.
Since there is no theory of individual differences in intelligence, IQ tests cannot be construct valid. Even if there were such a theory, IQ tests would still not be construct valid, since it would need to be established that there is a mechanistic relation between IQ test scores and the function in question. Attempts at validating IQ tests rely on correlations with other tests and older IQ tests—but that is precisely what is under contention, so correlating with older tests does not give IQ tests the requisite validity to make the claim “IQ tests test intelligence” true. IQ tests do not even measure ability for complex cognition; real-life tasks are more complex than the most complex items on any IQ test (Richardson and Norgate, 2014b).
Now, having said all that, the argument can be formulated very simply:
Premise 1: If the claim “IQ tests test intelligence” is true, then IQ tests must be construct valid.
Premise 2: IQ tests are not construct valid.
Conclusion: Therefore, the claim “IQ tests test intelligence” is false. (modus tollens, P1, P2)
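The concluding inference is a textbook modus tollens, which can likewise be verified in Lean (the proposition names are mine, for illustration only):

```lean
-- Hypothetical proposition names (mine, for illustration only):
-- T : "IQ tests test intelligence"
-- V : "IQ tests are construct valid"
variable (T V : Prop)

-- Premise 1: T → V.  Premise 2: ¬V.  Conclusion: ¬T (modus tollens).
theorem iq_tests_do_not_test_intelligence (p1 : T → V) (p2 : ¬V) : ¬T :=
  fun t => p2 (p1 t)
```

As with any modus tollens, the logic is unimpeachable; whether the conclusion holds depends entirely on whether Premise 2 has been established, which is what the preceding sections argue.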
Michael Hardimon published Rethinking Race: The Case for Deflationary Realism last year (Hardimon, 2017). I was awaiting some critical assessment of the book, and at the end of March some criticism finally came—from another philosopher, Joshua Glasgow, in the journal Mind (Glasgow, 2018). The article argues mainly against Hardimon’s minimalist race concept and one thought experiment he brings up in the book: the case of a twin earth populated by what are, in effect, out-and-out clones of ourselves. Glasgow makes some good points, but I think he is largely misguided on Hardimon’s view of race.
Hardimon (2017) is the latest defense of the existence of race that denies the existence of “racialist races”—races differing in mores, “intelligence,” and so on. Hardimon takes the racialist view, “strips it down to its barebones,” and shows that race exists in a minimal way. Racialist races, in Hardimon’s eyes, are socially constructed in a pernicious sense: the racialist concept does not represent any “facts of the matter” and “supports and legalizes domination” (pg 62). The minimalist concept, on the other hand, does not “support and legalize domination,” nor does it assume that there are differences in “intelligence,” mores, or other mental characters; it demarcates races only on the basis of superficial physical features. These superficial physical features pattern with geographic ancestry, and the groups that exhibit them are real. Thus, race, in a minimal sense, exists. Glasgow, however, has a few things to say about that.
Glasgow (2018) begins by praising Hardimon (2017) for “dispatching racialism” in his first chapter, also claiming that “academic writings have decisively shown why racialism is a bad theory” (pg 2). Hardimon argues that to believe in race, one need not believe what the racialist concept pushes; one must only acknowledge and accept that there are:
1) differences in visible physical features which correspond to geographic ancestry; 2) these differences in visible features which correspond to geographic ancestry are exhibited between real groups; 3) these real groups that exhibit these differences in physical features which correspond to geographic ancestry satisfy the conditions of minimalist race; C) therefore race exists.
This is a simple enough argument, but Glasgow disagrees. As a counter, Glasgow brings up the “twin earth” argument. Imagine a twin earth was created. On Twin Earth, everything is exactly the same: there are copies of you and me, copies of companies, animals, history mirrored down to exact minutiae, etc. The main contention here is that Hardimon claims ancestry is essential to our concept of race. But on the twin earth argument, since everything is the same down to the last detail, the people who live on twin earth look just like us (share patterns of visible physical features) but do not share ancestry with us—so what race would we call them? Glasgow thus states that “sharing ancestry is not necessary for a group to count as a race” (pg 3). But, clearly, sharing ancestry is important to our conception of race. While the thought experiment is a good one, it fails, since ancestry is very clearly necessary for a group to count as a race, as Hardimon has argued.
Hardimon (2017: 52) addresses this, writing:
Racial Twin Americans might share our concept of race and deny that races have different geographical origins. This is because they might fail to understand that this is a component of their race concept. If, however, their belief that races do not have different geographical origins did not reflect a misunderstanding of their “race concept,” then their “race concept” would not be the same concept as the concept that is the ordinary race concept in our world. Their use of ‘race’ would pick out a different subject matter entirely from ours.
and on page 45 writes:
Glasgow envisages Racial Twin Earth in such a way that, from an empirical (that is, human) point of view, these groups would have distinctive ancestries, even if they did not have distinctive ancestries an sich. But if this is so, the groups [Racial Twin Earthings] do not provide a good example of races that lack distinctive ancestries and so do not constitute a clear counterexample to C(2) [that members of a race are “linked by a common ancestry peculiar to members of that group”].
C(2) (premise 2 in the simple argument for the existence of race) is fine, and Glasgow’s objections do not show that C(2) is false at all. The Racial Twin Earth argument is a clever one, but it is not sound: as Hardimon had already noted in his book, Glasgow’s objection to C(2) does not rebut the fact that races share a peculiar ancestry unique to them.
Next, Glasgow criticizes Hardimon’s views on “Hispanics” and Brazilians. These two groups, says Glasgow, show that two siblings with the same ancestry but different skin colors would count as different races in Brazil. He uses this example to state that “This suggests that race and ancestry can be disconnected” (pg 4). He criticizes Hardimon’s solution to the problem of race and Brazilians, stating that our term “race” and the corresponding term in Brazil do not track the same things. “This is jarring. All that anthropological and sociological work done to compare Brazil with the rest of the world (including the USA) would be premised on a translation error” (pg 4). Since Americans and Brazilians, in Glasgow’s eyes, can have a serious conversation about race, this suggests to Glasgow that “our concept of race must not require that races have distinct ancestral groups” (pg 5).
I have covered Brazilians and "Hispanics" as regards the minimalist race concept. Some argue that the "color system" in Brazil is actually a "racial system" (Guimaraes 2012: 1160). While Brazilians denote race by 'COR' (Portuguese for 'color'), one can argue that the term picks out what we call 'race', and that we would have no problem discussing 'race' with Brazilians, since Brazilians and Americans have similar views on what 'race' really is. Hardimon (2017: 49) writes:
On the other hand, it is not clear that the Brazilian concept of COR is altogether independent of the phenomenon we Americans designate using ‘race.’ The color that ‘COR’ picks out is racial skin color. The well-known, widespread preference for lighter (whiter) skin in Brazil is at least arguably a racial preference. It seems likely that white skin color is preferred because of its association with the white race. This provides a reason for thinking that the minimalist concept of race may be lurking in the background of Brazilian thinking about race.
Since 'COR' picks out racial skin color, it can safely be argued that Brazilians and Americans are, generally speaking, talking about the same thing. Since the color system in Brazil largely mirrors what we know as a racial system, demarcating races on the basis of physical features, we are, it can be argued, talking about the same (or similar) things.
Further, the fact that "Latinos" do not fit Hardimon's minimalist race concept is not a problem with Hardimon's arguments about race, but with how "Latinos" see themselves and racialize themselves as a group. "Latinos" can count as a socialrace, but they do not (and cannot) count as a minimalist race (such as the Caucasian, African, or Asian minimalist races), since they do not share visible physical patterns that correspond to differences in geographic ancestry. Since they do not exhibit characters that demarcate minimalist races, they are not minimalist races. Comparing Cubans to, say, Mexicans (on average) is enough to buttress this point.
Glasgow then argues that there are similar problems with the claim "that having a distinct geographical origin is required for a group to be a race" (pg 5). He says that a "Twin Trump" and a "Twin Clinton" might be created from "whole cloth" on two different continents, yet we would still call both "white." Glasgow then claims that "I worry that visible trait groups are not biological objects because the lines between them are biologically arbitrary" (pg 5). He argues, for example, that there is no principled "dividing line" for skin color, which would make it an arbitrary trait by which to divide races. But if we look at skin color as an adaptation to the climate of the people in question (Jones et al, 2018), then the trait is not "arbitrary"; it is linked to geographic ancestry.
Glasgow then goes down the old and tired route that "There is no biological reason to mark out one line as dividing the races rather than another, simply based on visible traits" (pg 5). He then goes on to discuss the fact that Hardimon invokes Rosenberg et al (2002), who show that our genes cluster by specific geographic ancestries, and that this is biological evidence for the existence of race. Glasgow raises two objections to demarcating races by physical appearance and by genetic analyses. First, picture the color spectrum: "Now thicken the orange part, and thin out the light red and yellow parts on either side of orange. You've just created an orange 'cluster'" (pg 6), and ask the question:
Does the fact that there are more bits in the orange part mean that drawing a line somewhere to create the categories orange and yellow now marks a scientifically principled line, whereas it didn’t when all three zones on the spectrum were equally sized?
I admit this is a good question, and the objection does apply to the visible trait of skin color in regard to race. But, as I said above, since skin color can be conceptualized as a physical adaptation to climate, it is a good proxy for geographic ancestry. Whether or not skin color varies "smoothly" as you move away from the equator, it is evidence that "races" have biological differences, and these differences start with the biggest organ in the human body. Glasgow's objection is just the classic continuum fallacy in action: X and Y are two ends of a continuum; there is no definable point where X becomes Y; therefore there is no difference between X and Y.
As for Glasgow’s other objection, he writes (pg 6):
if we find a large number of individuals in the band below 62.3 inches, and another large grouping in the band above 68.7 inches, with a thinner population in between, does that mean that we have a biological reason for adopting the categories ‘short’ and ‘tall’?
It really depends on what the average height is in regard to "adopting the categories 'short' and 'tall'" (pg 6). The first question was better than the second; in any case, neither does a good job of objecting to Hardimon's race concept.
In sum, Glasgow's (2018) review of Hardimon's (2017) book Rethinking Race: The Case for Deflationary Realism is a decent review, though it leaves a lot to be desired; his critique could have been more strongly argued. Minimalist races do exist and are biologically real.
I am of the opinion that what matters regarding the existence of race is not the biological science (i.e., testing to see which populations have which allele frequencies, etc.) but the philosophical aspects of race. The debates in the philosophical literature regarding race are extremely interesting (I will cover them in the future), and they center on positions such as racial naturalism and racial eliminativism.
(Racial naturalism "signifies the old, biological conception of race"; racial eliminativism "recommends discarding the concept of race entirely"; racial constructivism holds that races "have come into existence and continue to exist through 'human culture and human decisions'" (Mallon 2007, 94); thin constructivism "depicts race as a grouping of humans according to ancestry and genetically insignificant, 'superficial properties that are prototypically linked with race,' such as skin tone, hair color and hair texture" (Mallon 2006, 534); and racial skepticism "holds that because racial naturalism is false, races of any type do not exist".) (Also note that Spencer (2018) critiques Hardimon's viewpoints in his book as well, which will be covered in the future, along with the back-and-forth debate in the philosophical literature between Quayshawn Spencer (e.g., 2015) and Adam Hochman (e.g., 2014).)