I was in Best Buy a few months ago looking at TVs and something caught my eye. Reading the descriptions and features, I noticed that certain TVs were being described as “intelligent,” or as having “IQs.” This is one way IQ-ism has seeped into mainstream culture, and it is evidence of how IQ and intelligence are conflated.
The Bravia XR was heralded as “The world’s first TV with cognitive intelligence.” It seems the phrase wasn’t thought through by the marketing department before the ad was released, since cognition is an activity and “intelligence” just is cognition. In any case, this TV is said to “have the power of the human brain.” The phrasing implies that the human brain can be replicated in a machine. The “power” of the human brain is that it is a necessary precondition for our minds, so the implication is that this TV has the capacity for mindedness. Implicit in the claim is that the mind is a collection of parts—that a collection of physical parts can count as a mind. The arguments below refute the claim that TVs are “intelligent,” or that they ever could be.
Bravia states that the TV “in a sense” thinks like a human brain (never mind the mereological fallacy here) and that this cognitive intelligence is what allows the TV to do so.
I cannot imagine a complex, purely physical object having consciousness. So purely physical objects cannot have minds. Therefore my brain cannot possibly be my mind.
Individual physical particles are mindless. No collection of mindless things counts as a mind. Therefore a mind cannot be an arrangement of physical particles.
A mind is a single sphere of consciousness; it is not a complicated arrangement of mental parts. But physical systems are always complicated arrangements of different parts and subsystems. Therefore the mind is nonphysical, and not a physical system.
Physical parts of the natural world lack intentionality, meaning they are not “about” anything in the same way thoughts are. No arrangement of intentionality-less parts will ever count as having intentionality. Therefore a mind cannot be an arrangement of physical parts.
The arguments given above conclude that a mind cannot be an arrangement of physical parts, that a (purely) physical object cannot have a mind, that physical systems are arrangements of different parts, and that physical parts lack intentionality, which is a mark of the mental. Thus, TVs cannot have “cognition” or “intelligence,” since machines cannot be conscious.
Smart TVs are basically a mixture of a computer, a media player, and a TV. They can connect to the internet and do a whole slew of things that regular TVs cannot, since regular TVs lack the processing power. The “smartness” (the ability to hook up to the internet, to play media, and to use applications not found on regular TVs) runs through me—the TV only does what I tell it to do with the remote. Though the TV has to have the capacity for apps, media, and so on, it is but a passive machine that does nothing until I tell it to do something. (This is just like how genes work: they do nothing on their own until activated by and for the physiological system, and likewise the TV does nothing on its own until it’s activated by me.)
Just because a TV can be described as “smart” or as having “cognitive intelligence” does not mean the description is true. Indeed, the power of our TVs has increased since I was a kid. But this does not license the claim that TVs can do anything on their own—the human needs to tell the TV what to do, and the human needs to be conscious of what they are doing in order to tell the TV what to do.
People want “smart” TVs for what they can do, and so applying these descriptors to TVs will probably increase sales, since people conflate IQ with intelligence. The TV is but a passive machine that awaits instruction from the human; it does not take initiative and do things without human input. Therefore TVs are not—and, per the arguments above, cannot be—intelligent.
Calling TVs “intelligent” is nothing more than a marketing campaign to sell more TVs. Sure, the TVs are nice and they have a whole lot packed into them, making it convenient for the consumer to have one machine that does many things. But the claim that these machines are “intelligent” or have “IQs” fails, as it is (logically) impossible for machines to have those properties, since machines are fully physical.
That TVs are now said to have “cognitive intelligence” and “IQs”—even if these terms are used in a specific manner—just speaks to how deeply the IQ=intelligence conflation has seeped into our daily life, our daily discourse, and what we buy. This shows how IQ exists in a cultural context and how it is basically with us in our everyday lives. Machines cannot be intelligent, cognize, or have minds, because machines are purely physical and what allows cognition (mind) is immaterial.
…but the question “What is intelligence?” has only ever been answered by a shifting social consensus. So perhaps, like the stuff of dreams and nightmares, it too belongs in the realm of mere appearances. (Goodey, 2011)
IQ groupings/cutoffs are arbitrary. What I mean by “arbitrary” is without reason or justification—not supported by facts or reasons. What is the justification for the groupings? The arbitrariness of IQ is also seen historically when we look at how score distributions were changed when different assumptions were made about the “nature” of “intelligence” (e.g., Terman, 1916; Hilliard, 2012). In this article, I will argue that IQ cutoffs are arbitrary, with no rational justification behind them; they are used simply because they produce the desired distributions.
The arbitrariness of such cutoffs and groupings has been known since the first tests were being created by American test constructors, after Binet and Simon’s test was brought over from France by Goddard in 1910. (See here for a history of the testing movement and how the tests are constructed.) Terman (1916: 89) warned “That the boundary lines between such groups [feebleminded, dull, superior, genius etc.] are arbitrary.” It is also in this same book—The Measurement of Intelligence—that Terman adjusted the scores of men and women, adding and subtracting the items that men and women respectively got right or wrong most often in order to even out their scores. Terman put items on the test that men were good at (“arithmetical reasoning, giving differences between a president and a king, solving the form board, making change, reversing hands of a clock, finding similarities, and solving ‘the induction test’” [Terman, 1916: 81]) while he also put items on the test that women were good at (“drawing designs from memory, aesthetic comparison, comparing objects from memory, answering the ‘comprehension questions’, repeating digits and sentences, tying a bow-knot, and finding rhymes” [Terman, 1916: 81]). The same can be seen in SAT differences between men and women, as Rosser (1989) points out. It is a matter of item selection/analysis and of what distribution of scores you want.
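The item-selection point can be made concrete with a toy model. This is purely illustrative—the item pass rates below are invented, not drawn from any real test—but it shows how, once each candidate item has a known pass rate for each group, the constructor’s choice of which items to keep fixes the size of any group gap.

```python
# Toy model of test construction by item selection (invented numbers).
# Each item is a pair of expected pass rates: (group A, group B).
a_favored = [(0.8, 0.6)] * 10  # hypothetical items group A passes more often
b_favored = [(0.6, 0.8)] * 10  # hypothetical items group B passes more often

def expected_scores(items):
    """Expected number of items answered correctly by each group."""
    score_a = round(sum(p_a for p_a, _ in items), 6)
    score_b = round(sum(p_b for _, p_b in items), 6)
    return score_a, score_b

# Keeping only A-favored items builds a two-point gap into the test,
# while a balanced mix of five of each erases the gap entirely.
biased = expected_scores(a_favored)                    # (8.0, 6.0)
balanced = expected_scores(a_favored[:5] + b_favored[:5])  # (7.0, 7.0)
```

Nothing about either group changed between the two “tests”; only the item mix did, which is the sense in which the resulting gap (or its absence) is a construction decision.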
Such arbitrary IQ cutoffs for these “groups”—on which Terman imposed value judgments—reflect the need of IQ-ists to conceptualize “intelligence” as normally distributed, with most people falling in the middle and fewer on the tails, where “geniuses” sit on the right and the “mildly impaired and delayed” on the left, per the 5th edition of the Stanford-Binet. But the normal distribution for “IQ” is a myth (Richardson, 2017: chapter 2). Because IQ tests are constructed to be normally distributed, any and all “group distinctions” and “cutoffs” are arbitrary. The test was created first, AND THEN its constructors attempted to deduce what it “measures” on the basis of correlations with other tests and with academic achievement. Further, even showing that there is a relationship between IQ scores and academic achievement is irrelevant, because they are different versions of the same test—the item content is similar between the tests (Schwartz, 1975; Beaujean et al, 2018). The distribution is a creation of the test’s constructors, not something we just so happened to find when these tests were created.
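The arithmetic behind the familiar cutoffs follows mechanically once a normal shape with mean 100 and SD 15 is imposed. A short sketch using Python’s standard library (no IQ data is involved here, just the assumed distribution):

```python
from statistics import NormalDist

# The conventional IQ scale: scores are *forced* to fit a normal
# distribution with mean 100 and standard deviation 15.
iq = NormalDist(mu=100, sigma=15)

# Proportion falling below the IQ 70 cutoff (two SDs below the mean):
below_70 = iq.cdf(70)   # ~0.0228, i.e. about 2.2% by construction

# Any other cutoff yields its share just as mechanically:
below_85 = iq.cdf(85)   # ~0.159, about 16% fall one SD below the mean
```

The percentages are properties of the imposed curve, not discoveries about any population; choose a different cutoff or a different SD and the “share” of people below it changes accordingly.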
Thus, the “bell curve” is an artifact, not a fact, of test construction (Simon, 1997). Items are added and removed, using trials on a sample population, until the desired distribution is reached. It is this artificial distribution that all IQ theorizing rests on, and it is this artificial distribution that IQ-ists use for their cutoffs between different “grades” of “intelligence.” On the constructed bell curve, about 2.2 percent of people fall below 70—the test was constructed to get this result. So, if the bell curve is an artificial production created by humans, then so is the classification system (“intelligence”). And if the classification system is an artificial creation, then so too is the concept of “learning disability.” Bazemore, Shinaprayoon, and Martin write that:
By developing an exclusion-inclusion criteria that favored the aforementioned groups, test developers created a norm “intelligent” (Gersh, 1987, p.166) population “to differentiate subjects of known superiority from subjects of known inferiority” (Terman, 1922, p. 656).
So, basically, test constructors had in mind—before they developed the test—who was or was not “intelligent,” and then built the test to fit their expectations. I can see someone asking, “Why does this matter if it happened 100 years ago?” It matters because there is no conceptual support for hereditarian thinking about psychological traits, and if there is no support, then the only reason such thinking persists is prejudice (Mensh and Mensh, 1991). Furthermore, newer IQ tests use similar items to older ones, and newer tests are “validated” against older tests (like the Stanford-Binet), so biases in those tests carry over, even without conscious bias toward groups being the goal (Richardson, 2002: 287).
The arbitrariness of IQ can also be seen in the cutoff for learning disability—a score of 70 or below is taken to mean that the individual needs remedial help, and so the IQ test is said to be a good instrument for these purposes. But IQ tests are arbitrary instruments for reflecting deficits in everyday functioning (Arvidsson and Granlund, 2016). Cutoffs for learning disabilities have fluctuated between IQ 70 and 85 over the years. Someone in the US is defined as “learning disabled” if there is a discrepancy between their academic achievement and their “intelligence” (i.e., IQ test score). But is there any justification for such a cutoff, where falling under a certain magic number makes one “learning disabled”?
The answer is no, because IQ is irrelevant to the definition of learning disabilities (Siegel, 1988, 1989, 1993). It is absolutely unnecessary to give IQ tests to identify the learning disabled, and the existence of a discrepancy is not a necessary condition (Gunderson and Siegel, 2001). People under IQ 70 frequently do not need specialist services, whereas people with IQs over 70 frequently do (Whitaker, 2004). Such tests only see WHAT a person has learned; they DO NOT estimate one’s intellectual “capability.” Since IQ tests are tests of a certain type of knowledge, it follows that exposure to the items on the test and to the test’s structure—along with other non-cognitive variables (Richardson, 2002)—explains test score differences, and that these differences can be built into and out of the test on the basis of a priori assumptions. It further follows that a low score means one was not exposed to the item content and structure of the test, not that one has a “deficit of intelligence,” as IQ-ists claim.
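To make the “discrepancy” definition concrete, here is a hypothetical sketch of the kind of rule the text describes. Real state criteria vary, and the 1.5-SD gap and the 15-point SD scale are illustrative assumptions, not any statute or diagnostic manual:

```python
def flags_discrepancy(iq_score, achievement_score, gap_sds=1.5, sd=15):
    """Flag a 'learning disability' when achievement falls at least
    gap_sds standard deviations below the IQ score -- the kind of
    IQ-achievement discrepancy rule criticized above (illustrative)."""
    return (iq_score - achievement_score) >= gap_sds * sd

# The same achievement score is flagged or not depending solely on IQ:
flags_discrepancy(110, 85)  # True: 25-point gap exceeds the 22.5 threshold
flags_discrepancy(95, 85)   # False: identical achievement, no "disability"
```

This illustrates the critique: the child’s actual achievement is identical in both calls; only the IQ score, with all its construction problems, changes the label.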
Webb and Whitaker (2012) describe the double think employed by many clinical psychologists, privately acknowledging the limitations of IQ tests and the arbitrary nature of the cut-off score of 70 IQ points that defines learning disability, whilst publicly and professionally talking about learning disabilities “as if it were a real, naturally occurring condition” (p. 440). Thus the diagnostic procedure involving IQ tests can be seen as a way of passing off culturally specific norms of competence (measured through arcane rituals of assessment) as if they were universal and incontrovertible. (Chinn, 2021: 137-138)
The arbitrariness of IQ 70 as the cutoff for mental disability also rears its head in the courtroom, when defendants are on trial for murder. In Atkins v. Virginia, SCOTUS ruled that it was unconstitutional to execute intellectually disabled people. Then, in Hall v. Florida, it was ruled that an IQ score was not, by itself, sufficient justification in sentencing; other medical/diagnostic criteria were needed. Some people may cry something like, “But IQ only matters to its critics when a defendant escapes execution because he is found to have an IQ below 70!” Never mind the ethical debate on the death sentence: the arbitrary cutoff of 70 for mental retardation—which, as has been shown, does not hold—has numerous legal and societal consequences for the individual so unluckily deemed “disabled.”
Kanaya and Ceci (2007) argue that the point in a test’s norming cycle at which an individual was tested (near the beginning or near the end) could dictate whether or not they fell under the arbitrary IQ 70 cutoff exempting them from execution. So the year in which a defendant on trial for murder was tested can literally determine whether or not they are put to death. Prosecutors in many US states have also successfully argued for “ethnic adjustments” to IQ scores. Sanger (2015) reviews many US cases in which prosecutors have done so. Arguing that “ethnic adjustments” are “logically, clinically, and constitutionally unsound,” he reviews studies showing that abuse, neglect, poverty, and trauma decrease test scores, and that their effects can be epigenetically passed on through multiple generations. Sanger (2015: 148-149) concludes:
Furthermore, any correlations between the average IQ test scores of racial cohorts (or average scores of cohorts to the overall community norm) are not attributable to race and are heavily influenced by race-neutral environmental factors. Those race-neutral environmental factors include the effects of the environment of childhood abuse, stress, poverty, and trauma. Such adverse environmental (but race-neutral) factors likely result in phenotypic manifestations, which include epigenetic changes affecting intellectual ability and result in greater numbers of persons with intellectual disabilities within that population. The individuals whose intellectual ability is adversely affected by those harmful environmental factors are disproportionately represented by minority groups and among those facing the death penalty in the United States.

Therefore, the actual recipients of death sentences—the people on death row—are poor, of color, and have disproportionately been subjected to stress, poverty, abuse, and trauma. These very people are likely to suffer from actual phenotypic/biological impairment in intellectual functioning that can be passed down by way of programmed epigenetic gene expression through generations.
Quite clearly, this arbitrary IQ 70 cutoff for “intellectual disability” has real-life implications, and in some cases it is a matter of life or death, turning on “ethnic adjustments” and on when in a test’s lifecycle, before renorming, an individual happened to take it. Sanger showed that the IQ scores of black and “Hispanic” defendants are routinely adjusted upwards—pushed above the “cutoff”—so that they can face the death penalty.
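The Kanaya and Ceci point can be sketched numerically. Assuming the commonly cited Flynn-effect drift of roughly 0.3 IQ points per year (an approximate published figure, not a constant of nature, and one that varies by test and era), the same raw performance earns a higher IQ when scored against aging norms:

```python
# Rough sketch of norm obsolescence; the ~0.3 points/year figure
# is an approximate Flynn-effect estimate, used here illustratively.
FLYNN_POINTS_PER_YEAR = 0.3

def iq_against_old_norms(iq_on_fresh_norms, norm_age_years):
    """IQ the same performance would receive against norms that are
    norm_age_years old: the population has drifted upward since the
    norming sample, so the old reference group is easier to outscore."""
    return iq_on_fresh_norms + FLYNN_POINTS_PER_YEAR * norm_age_years

# Under this assumption, a defendant scoring 68 on a freshly renormed
# test would score about 72.5 against 15-year-old norms -- landing on
# opposite sides of the life-or-death IQ 70 cutoff.
```

On this sketch, nothing about the defendant changes; only the age of the comparison sample does, which is the sense in which the timing of the test administration becomes a life-or-death variable.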
In my view, such distinctions between “IQ groups”—like those created by Terman, and continuing into the present day—are an attempt at naturalizing “intellectual disability,” an attempt at saying that these are “natural kinds.” But “intelligent people and intellectually disabled people are not natural kinds but historically contingent forms of human self-representation and social reciprocity, of relatively recent historical origin” (Goodey, 2011: 13). So intellectual disability, learning disability, intelligence—these are all social constructs (which do not denote natural kinds), and they change with the times.
But Herrnstein and Murray (1994: 1) argued that “the word intelligence describes something real and…it varies from person to person is as universal and ancient as any understanding about the state of being human. Literate cultures everywhere and throughout history have words for saying that some people are smarter than others.” But unfortunately for Herrnstein and Murray, “Intelligence as currently and conventionally understood by psychologists is a brashly modern notion” (Daston, 1992: 211).
The arbitrariness of the designation “intelligence” means that “IQ/intelligence” is not a “thing,” nor a “natural kind”; it is a socially constructed historical notion (Goodey, 2011), as is the concept of “giftedness” (Borland, 1997). The creation of these tests, and indeed the label “intellectually disabled,” is thoroughly racialized (Chinn, 2021). The arbitrariness and social construction of “intelligence” can be seen just by analyzing the test items—they are heavily classed and racialized, specifically toward the white middle class. When it comes to the death penalty and IQ, the issues are very serious: when an individual was given a test may be the deciding factor between life and death, and minorities are both more likely to be on death row and more likely to experience abuse and trauma, which can be passed on generationally and then influence test scores. Add to this test construction, where there is no justification for any particular set of items—whatever yields the desired distribution counts as “right”—and that is why “IQ” is arbitrary.
We need to dispense with the idea that there is a biological “thing” called “intelligence”; we need to understand that what we call “intelligence” is socially constructed—what psychologists call “intelligent” is answering items correctly and getting a higher score on tests that are heavily biased toward certain races and classes in America. Once we understand that this concept is socially constructed and not biological, maybe we won’t repeat past mistakes, like sterilizing tens of thousands of people in the name of eugenics.
In their fight to get “critical race theory” (CRT) (or what they call CRT) banned from schools, James Lindsay and Christopher Rufo have twisted the terms “equity” and “equality” so they can better “beat” their opponents. But, unfortunately for them, there is a distinction between the two, and the distinction matters for issues of social justice. Although they don’t really understand the concepts they attempt to discuss, their readings of CRT/social justice have been repeated as if they had any merit. By attempting to change the definition of equity, they are shifting the discussion to what equity IS NOT, thereby side-stepping social justice issues. (Which is actually their goal.) This article will discuss what equity and equality are and aren’t, and how we can achieve equity not only in health but in all facets of our society.
What equity is; what equity isn’t
Lindsay, for example, claims that equity means “adjusting the shares in order to make citizens A and B equal,” which would make equity “something like a kind of ‘social communism.’” Lindsay’s confusion will become apparent by the end of this article, as he is clearly strawmanning what “equity” means.
Chris Rufo (professional charlatan, like Lindsay), in his article Critical Race Fragility claims that:
critical race theorists, on the other hand, have embraced a philosophy of European-style pessimism, dismissing equality under the law as “mere nondiscrimination.” They would replace it with a system of “equity” that treats individuals unequally in order to arrive at equal group outcomes.
Lindsay and Rufo are two of the spearheads leading the charge against “CRT” (what they call “CRT” is up to their discretion; it’s basically anything they don’t like) and they—quite clearly—have no idea what “equity” truly means. By equivocating on the term, they can then begin to redefine it and, when people hear the term, they won’t think of the original meaning, they will think of the new meaning that they have manufactured. (This is pretty much what Rufo has done with CRT.)
Back in January, I wrote on the distinction between “equity” and “equality” and what it means in the context of public health.
There is a distinction between “equity” and “equality.” To continue with the public health example, take public health equality and public health equity. Here, “equality” means giving everyone the same thing, whereas “equity” means giving individuals what they need to be the healthiest they can possibly be. “Strong equality of health” is “where every person or group has equal health,” while weak health equity “states that every person or group should have equal health except when: (a) health equality is only possible by making someone less healthy, or (b) there are technological limitations on further health improvement” (Norheim and Asada, 2009). But we should not attempt to “level down” people’s health to achieve equity; rather, we should attempt to “level up” people’s health. That is, it is impossible to reach strong health equality (making all groups equal), but we should—and indeed have a moral responsibility to—lift up those who are worse off. Poverty is what is objectionable, not inequality. It is impossible to achieve true equality between groups, but we can—and indeed have a moral obligation to—lift up those who are in poverty, which is also a social determinant of health (Braveman and Gottlieb, 2014; Frankfurt, 2015; Islam, 2019).
We achieve health equity when all individuals have the same access to be the healthiest individuals they can be; we achieve health equality when all health outcomes are the same for all groups. Health equity is, further, the absence of avoidable differences between different groups (Evans, 2020). One of these is feasible, the other is not. But racism does not allow us to achieve health equity.
So, basically, the distinction is that equality means giving people the same things, whereas equity means ensuring people are not held back from being the best they can be. Health inequities are differences in health that are unjust, avoidable, and unfair (Sudana and Blas, 2013)—systematic, unfair, and avoidable differences between social groups (McCartney et al, 2019). Social justice efforts that attempt to bring down barriers impeding people from becoming the best they can be are imperative to a fair and equitable society. But this does not mean that efforts to make society more equitable toward social groups are efforts to make society more equal—the aim is a more equitable society, with greater equality merely a byproduct. So, in the end, “inequities” refer to differences between groups that are avoidable, unjust, and unfair (see also Benjamins and De Maio, 2021).
In her new book, behavioral geneticist Paige Harden (2021: 120-125) has a discussion of “equity” where she quotes Conley and Fletcher, who claim that “heritability” estimates are a necessary but not sufficient measure of fairness (that is, of an equitable society). I’ll discuss this in my review of the book, but for now I will discuss her figure 8.2 on page 123. She shows a photograph of her daughter’s pre-K sign, which states, “Fair isn’t everybody getting the same thing. Fair is everybody getting what they need to be successful.” This perfectly embodies what equity is and how it is distinct from equality. The key phrase is “what they need to be successful,” and we can liken that to social advantage/disadvantage—certain social groups do not have what is needed to be successful, which in this case means being the healthiest person they can be. Since they lack what they need to be successful, and that lack is systemic (think food deserts/swamps), this is an inequity. (This may well be one of the only good things I have to say about the book.)
Braveman (2003: 182) succinctly defines what “equity” means:
Equity means fairness (7-10) or justice (8-10). Because these terms are open to interpretation, an operational definition is needed to guide measurement in diverse settings. In operational terms, pursuing equity in health can be defined as striving to eliminate disparities in health between more and less-advantaged social groups, i.e. groups that occupy different positions in a social hierarchy (8). Health inequities are disparities in health or its social determinants that favour the social groups that were already more advantaged. Inequity does not refer generically to just any inequalities between any population groups, but very specifically to disparities between groups of people categorized a priori according to some important features of their underlying social position. For example, individuals may be grouped by their income or material possessions, or by characteristics of their occupations, education, or geographic location, or by their gender, race/ethnicity, or religious group. What all of these factors have in common is that they often are strongly associated with different levels of social advantage or privilege as characterized by wealth, power, and/or prestige (8).
From these definitions and the discussion I have given, it is clear that equity is conceptually distinct from equality.
Since racism is a cause of health problems, eliminating racism would help ensure health equity. CRT is a “race-equity methodology” (Ford and Airhihenbuwa, 2010), and so, by applying CRT to issues of public health and working to ameliorate racist attitudes—which can and do get “under the skin” to cause differences in physiology (see Sullivan, 2015)—we can begin to achieve racial equity. We know that experiencing racism causes accelerated biological aging (i.e., shorter telomere length; Shammas, 2012), and we know that black women in the 47-55 age group were 7.5 years “biologically older” than white women (Geronimus et al, 2011). Thus, racism is a driver of racial health disparities, and if we are to achieve racial equity, we need to eliminate racism. We can use the framework of CRT (which accepts race as a social construct) to understand how and why racist attitudes arise and how they are driven by ignorance. Experiencing racist attitudes changes physiology and makes it more likely that the person on the receiving end will acquire disease states in the future.
If systemic biases against certain groups are removed (like how doctors hold unconscious biases against blacks; Hoberman, 2012, and how medical students believe that blacks have a higher threshold for pain; Hoffman et al, 2012), then we can achieve health equity. If health EQUITY is the ABSENCE OF systemic health disparities (differences in health between and within groups that are unfair, avoidable, and unjust), then we achieve health equity by eliminating systemic health disparities—that is, disparities caused by racism and by social determinants of health (Braveman and Gruskin, 2003). Once groups are no longer impeded by social goings-on (like living in food deserts/swamps, which predict obesity; Cooksey-Stowers, Schwartz, and Brownell, 2017) that prevent them from being their best, we will have achieved ‘equity.’
When public health researchers speak of health equity, they are not operating under the assumption that they will equalize outcomes within and between groups. They are operating under the assumption that they will need to bring down the barriers that impede one from being the healthiest person one can be. In all my years reading public health research, I have never once read a call for EQUALITY in health; it is simply not a feasible goal. But EQUITY in health is, and once we begin to change what causes INEQUITIES (health, educational—any kind), people will no longer be held back by (social) circumstances outside of their control that impede their health, education, and so on.
Therefore, the distinction is this: “equality” is making everyone the same or ensuring they get the same things, whereas “equity” is ensuring that people are not held back from being the best person they can be. The distinction between the two concepts that I have drawn here is very clear. Achieving equity—real equity, not the Lindsay-Rufo straw concept—is a moral imperative, and we should indeed attempt it. But achieving equality is just not possible; we would pretty much need to level down higher groups. The point is, social factors (like racism) should not impede people from being their best, and so we need to eliminate factors that lead to INEQUITIES of ANY kind—be it in public health or education—to achieve a just society. Very loosely, equity can be said to be about fairness, but it is more complicated than that, as Braveman’s conceptual discussion shows.
All in all, we live in a racist society, and racist attitudes affect health, leading to health inequities—differences that cannot be said to be biological; they are differences between social groups that are unfair, unjust, and avoidable. To change these health inequities between social groups (i.e., races, social classes), people’s attitudes need to change, since the physiology of those targeted by racist attitudes can change based on how they experience them. Differences between groups that have arisen from their position in society, and that then cause differences in social outcomes, are completely avoidable—like differences in obesity between social classes and races (whose access to food is impeded by societal factors). And so, to achieve equity—that is, social justice—we must eliminate any and all systemic barriers that cause health inequities within and between groups. Equity in health and education are good things, and to reach these goals we need to change our attitudes as a society toward certain groups. We need to do what we can to achieve feasible goals, ensuring that everyone is as healthy as physically possible and that no one is unhealthy for systemic reasons.
Assertions derived from genetic reductionist ideas also ignore the abundant and burgeoning evidence that genes are outcomes of evolutionary processes and not bases of them. (Lerner, 2021: 449)
Genetic reductionism places (social) problems “in the genes,” and if these problems are “in the genes,” then we can either (1) use gene therapy, (2) reduce the frequency of “the bad genes” in the population (eugenics), or (3) just live with these genetically caused problems. Social groups differ materially, and they also differ genetically. To the gene-determinist, social positioning is genetically determined, owing to a genetically determined intelligence. (See here for arguments against the claim.)
On the basis of heritability estimates derived from flawed methodologies like twin and adoption studies (Richardson and Norgate, 2005; Joseph, 2014; Burt and Simons, 2015; Moore and Shenk, 2016), hereditarians claim that traits like “IQ” (“intelligence”) are strongly genetically determined, and that if a trait is strongly genetically determined, then environmental interventions are doomed to fail (Jensen, 1969). Since IQ is said to have a heritability of .8, the reductionist claims that environmental interventions are useless or nearly so. Indeed, this was the conclusion of Jensen’s infamous (1969) paper: compensatory education (an environmental intervention) has failed, and so the differences are genetic in nature.
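For readers unfamiliar with where such heritability numbers come from: the classical twin-study estimate is Falconer’s formula, which doubles the difference between identical (MZ) and fraternal (DZ) twin correlations. A minimal sketch follows; the correlations below are illustrative, not from any particular study, and the formula is only valid under the very assumptions (equal environments, no gene-environment interplay) that the critiques cited above reject.

```python
def falconer_h2(r_mz, r_dz):
    """Classical twin-study heritability: h^2 = 2 * (rMZ - rDZ).
    Assumes MZ twins share all, and DZ twins half, of their additive
    genetic variance, plus equally similar environments -- the
    contested assumptions behind figures like the .8 in the text."""
    return 2 * (r_mz - r_dz)

# Illustrative correlations chosen to reproduce an oft-cited value:
h2 = falconer_h2(r_mz=0.85, r_dz=0.45)  # ~0.8
```

The formula makes the critics’ point visible: the estimate is entirely a function of two correlations plus background assumptions, so if the equal-environments assumption fails, the number changes with no change in any gene.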
Arguments like those have been advanced for the better part of 100 years—and the arguments are false because they rely on false assumptions. The false assumptions are (1) that natural selection has caused trait differences between populations and (2) that genes are active—not passive—causes. (1) and (2) can be combined into (3): genes that cause differences between groups were naturally selected and eventually fixed in the populations. This article will review some hereditarian thinking on natural selection and human variation, show how the theorizing is false, show how the theory of natural selection itself cannot possibly be true (Fodor and Piattelli-Palmarini, 2010), and finally show that by accepting genetic reductionism we cannot achieve social justice, since the causes of social problems are reduced to genes.
The ultimate claim from hereditarians is that human behavior, social life, and development can be reduced to—and explained by—genes. Social inequities are the target for social justice. Inequities refer to differences between groups that are avoidable and unjust. So the hereditarian attempts to reduce social ills to genes, thereby getting around what social justice activists want; they just reduce it all to genes, leading to possibilities (1)-(3) above. This could be disastrous, for if problems we can in fact fix are deemed “genetic” by the hereditarians, then countless lives will not be made better.
Hereditarianism and natural selection
The crucial selection pressure responsible for the evolution of race differences in intelligence is identified as the temperate and cold environments of the northern hemisphere, imposing greater cognitive demands for survival and acting as selection pressures for greater intelligence. (Lynn, 2006: 135)
Hereditarians are neo-Darwinians, and since they are neo-Darwinians, they hold that natural selection is the most powerful “mechanism” of evolution, causing trait changes by culling organisms with “bad” traits, which then decreases the frequency of the genes that supposedly cause the trait. But (1) natural selection cannot possibly be a mechanism, as there is no agent of selection (that is, no mind selecting organisms with fitness-enhancing traits for a certain environment), nor are there laws of selection for trait fixation that hold across all ecologies (Fodor and Piattelli-Palmarini, 2010); and (2) genes aren’t causes of traits on their own—they are caused to give the information in them by and for the physiological system (Noble, 2011).
In his article Epistemological Objections to Materialism, in The Waning of Materialism, Koons (2010: 338) has an argument against natural selection with the same force as Fodor and Piattelli-Palmarini (2010):
The materialist must suppose that natural selection and operant conditioning work on a purely physical basis (without presupposing any prior designer or any prior intentionality of any kind). According to anti-Humean materialism, only microphysical properties can be causally efficacious. Nature cannot select a property unless that property is causally efficacious (in particular, it must causally contribute to survival and reproduction). However, few, if any, of the biological features that we all suppose to have functions (wings for flying, hearts for pumping bloods) constitute microphysical properties in a strict sense. All biological features (at least, all features above the molecular level) are physically realized in multiple ways (they consist of extensive disjunctions of exact physical properties). Such biological features, in the world of the anti-Humean materialist, don’t have effects—only their physical realizations do. Hence, the biological features can’t be selected. Since the exact physical realizations are rarely, if ever repeated in nature, they too cannot be selected. If the materialist responds by insisting that macrophysical properties can, in some loose and pragmatically useful way of speaking, be said to have real effects, the materialist has thereby returned to the Humean account, with the attendant difficulties described in the last sub-section. Hence, the materialist is caught in the dilemma.
We can grant that “nature” cannot select a trait if it isn’t causally efficacious. But combining Fodor’s argument with Koons’, if traits are linked then the fitness-enhancing trait cannot be directly selected-for since when you have one, you have the other. In any case, “natural selection” is part of the bedrock of hereditarian theorizing. It was natural selection—according to the hereditarian—that caused racial differences in behavior and “intelligence.” And so, if the hereditarian has no response to these two arguments against natural selection, then they cannot logically claim that the differences they describe are due to “natural selection.”
So the hereditarian theorist asserts that those with genes that conferred a fitness advantage had more children than those that didn’t which led to the selection of the genes that became fixed in certain populations. This is a familiar story—and the hereditarian uses this as a basis for the claim that racial differences in traits are the outcome of natural selection. These views are noted in Rushton (2000: 228-231), Jensen (1998: 170, 434-436) and Lynn (2006: Chapters 15, 16, and 17). But as Noble (2012) noted, there is no privileged level of causation—that is, before performing the relevant experiments, we cannot state that genes are causes of traits so this, too, refutes the hereditarian claim.
Rushton’s “Differential K” theory holds that Mongoloids, Caucasians, and Africans differ on a suite of traits, influenced by their life histories and by whether they are r- or K-strategists. Rushton (2000: 27) also claimed that “different environments cause, via natural selection, biological differences“, and by this he means that the environment acts as a filter. But the claim that the environment is a filter that causes variation in traits by “selecting against” genes fails, too. When traits are correlated, environmental filters (the mechanism by which selection theory purportedly works) cannot distinguish between causes of fitness and mere correlates of causes of fitness. So appealing to environments causing biological differences fails.
But unfortunately for hereditarians, a new analysis by Kevin Bird refutes the claim that natural selection is responsible for racial differences in “IQ” (Bird, 2021). So now, even assuming that genes can be selected-for their contribution to fitness and assuming that psychological traits can be genetically transmitted (which is false), hereditarianism still fails.
Hereditarianism and genetic reductionism
The ideology of IQ-ism is inherently reductionist. Behavioral geneticists, although they claim to be able to partition the relative contributions of genes and environment into neat little percentages, are also reductionists about “traits”—such as “IQ.” Further, if one is an IQ-ist, there is a good chance that they fall into the reductionist camp of attempting to explain “intelligence” as reducible to physiological brain states and parts of the brain (such as Deary, 1996; Deary, Penke, and Johnson, 2010; Jung and Haier, 2007; Haier, 2016; Deary, Cox, and Hill, 2021).
Reductionism can be simply stated as the view that the parts have a sort of causal primacy over the whole. When it comes to psychological reduction, it is often assumed that genes are the ultimate level to which psychological traits are reduced, thereby explaining how and why those traits differ between individuals—most importantly to the IQ-ist, “intelligence.” Behavioral geneticists have been reductionists since the field’s inception, and this has carried over to the present day (Panofsky, 2014). Even now, in the third decade of the 21st century, reductionist accounts of behavior and psychology are still being pushed, and the attempted reduction is a reduction to genes. This does not mean that environmental reduction has primacy either—although we can and have identified environmental insults that impede the ontogeny of certain traits.
Deary, Cox, and Hill (2021) argue for a “systems biology” approach to the study of “intelligence.” They review GWAS and neuroimaging studies and attempt to lay the groundwork for a “mechanistic account” of intelligence, attempting to pick up where Jung and Haier (2007) left off. Unfortunately, the claims they make about GWAS fail (Richardson and Jones, 2020; Richardson, 2017b, 2021), and so do the claims they make about neuroreduction (Uttal, 2012).
This kind of genetic reductionism about psychological traits—along with social ills such as addiction, violence, etc.—then becomes ideological, in thinking that genes can explain how and why we have these kinds of problems. Indeed, this was why the first “IQ” tests were translated and brought to America—to screen and bar immigrants the IQ-ists saw as “feebleminded” (Richardson, 2003, 2011; Allen, 2006; Dolmage, 2018). Such tests were also used to sterilize people in the name of a eugenic ideology that was said to be for the betterment of society (Wilson, 2017). Thus, when such kinds of reductionism are applied to society and become an ideology, we can definitely see how such pseudoscientific beliefs manifest in negative outcomes for the populace.
Ladner (2020: 10) “constructed an economic analysis grounded in evolutionary biology.” Ladner claims that “Natural Selection is the main force that determines economic behavior,” and that socialism will always fail since authoritarian regimes stifle our selfish proclivities, while capitalism is grounded in selfishness and greed and so will always prevail over socialism. This is quite the unique argument… Of course Dawkins gets cited, since Ladner is talking about selfishness, and these selfish genes are what supposedly cause the selfish behavior that allows capitalism to flourish. But the claim that genes are selfish is not a physiologically testable hypothesis (Noble, 2011), and DNA can’t be regarded as a replicator that is independent of the cell (Noble, 2018). In any case, the argument of the book is that inequality is due to natural selection and that there isn’t much we can do about capitalism, since genes make us selfish and capitalism is all about selfishness. But being too selfish leads to the huge wealth inequalities we see in America today. The argument is pretty novel, but it fails since it is a just-so story and the claims about “natural selection” are false.
Hereditarianism and mind-brain identity
Pairing hereditarianism with physicalism about the brain is an implicit assumption of the theory. Ever since the power of our neuroimaging methods increased at the beginning of the new millennium, many studies have come out correlating different psychological traits with different brain states. Processes of the mind, to the mind-brain identity theorist, are identical to states and processes of the brain (Smart, 2000). And in the past two decades, studies correlating physiological brain states with psychological traits have increased in number.
The leading theorists here are Haier and Jung with their P-FIT model. P-FIT stands for the Parieto-Frontal Integration Theory, first proposed by Jung and Haier (2007), who analyzed 37 neuroimaging studies. This, they claim, will “articulate a biology of intelligence.” (Also see Colom et al., 2009.) Again, correlations are to be expected, but we can’t then claim that the brain states cause the trait (in this case, “IQ”). (See Klein, 2009 for a primer on the philosophical issues in neuroimaging.)
But in 2012, psychologist William Uttal published his book Reliability in Cognitive Neuroscience: A Meta-meta Analysis, where he argues that pooling these kinds of studies for a meta-analysis (exactly what Jung and Haier (2007) did) “could lead to grossly distorted interpretations that could deviate greatly from the actual biological function of an individual brain.” Pooling multiple studies of different individuals, taken at different times of the day under different conditions, would capture a wide variation in physiologies—never mind the facts that motion artifacts can influence neuroimages and that emotion and cognition are intertwined (Richardson, 2017a: 193).
The point is, we cannot pool together these types of studies in an attempt to localize cognitive processes to states of the brain. This is exactly what P-FIT does (or attempts to do). In any case, the correlations found by Jung and Haier (2007) can be explained by experience. IQ tests are experience-dependent (that is, one must have been exposed to the knowledge on the test and must be familiar with test-taking), and so too are the parts of the brain that change based on what a person experiences. We cannot say that the physiological states are the cause of the IQ score—since the items on the test draw on knowledge more likely to be found in the middle class, middle-class test-takers will be better prepared for the test.
Socially disastrous claims
Views from the likes of Robert Plomin—that there’s “not much we can do” about “environmental effects” (Plomin, 2018: 174)—are socially disastrous. If such ideas become mainstream, then we may desist from programs that actually help people, on the basis that “it doesn’t work.” But this claim—that environmental effects are “unsystematic and unstable”—is derived from conclusions based largely on twin studies, in which whatever variance is left over is attributed to the environment. (Do note, though, that Plomin’s claim that DNA is a blueprint is false.)
Hereditarians like Plomin then claim that environmental effects derive from one’s genotype, so in actuality environmental effects are genetic effects—this is called “genetic nurture.” By using this new concept, the reductionist can skirt around environmental effects and claim that the effect itself is genetic even though it is environmental in nature. Genes, on this concept, are active causes of parental behavior, which then influences how parents treat their children. In this way, behavioral geneticists can claim that environmental effects are genetic effects too. (This is like Joseph’s (2014) Argument A in its circularity.)
By applying and accepting genetic reductionist claims, we rob people of certain life chances and we do not commit ourselves to social justice. Of course, to the hereditarian, since the environment doesn’t matter, genes do—so we need to look at society from the gene’s-eye view. But this view obscures how and why our current social structures are the way they are. “IQ” tests were originally created to show that the current social hierarchy is the “right one,” and the hereditarian believes he has shown that the hierarchy is “genetic”—that each group has its place in the social hierarchy on the basis of IQ scores, which reduce to genes (Mensh and Mensh, 1991).
But humans are social creatures, and although hereditarians attempt to reduce human social life to genes (in a circular manner), they fail. And their failing has led to the destruction of thousands of lives (see the sterilizations in America during the 1900s and around the world, e.g., in Cohen, 2016 and Wilson, 2017). Attempts to reduce social behavior to genes have been made over the past 20 years (e.g., Jensen, 1998; Rushton, 2000; Lynn, 2006; Hart, 2007), but they all fail (Lerner, 2018, 2021). On the hereditarian view, social (environmental) changes cannot undo what the genes have “set” in individuals, and so we need not pour money into social programs.
For instance, many hereditarians and criminologists have espoused eugenic views—like Jensen’s claims that welfare could lead to the genetic enslavement of a part of the population (Jensen, 1969: 95) and that we can “estimate a person’s genetic standing on intelligence” based on their IQ score (Jensen, 1970: 13), to name two. It is no surprise to me that people who hold such reductionist views of genes and society would also hold eugenic views like these. It is, in fact, a logical endpoint of hereditarianism—“phasing out” populations, as Lynn described in his review of Cattell’s Beyondism (see Tucker, 2009).
The answer to hereditarianism
Since we have to reject hereditarianism, the answer to hereditarian dogma is relational developmental systems (RDS) theory, which emphasizes the actions of all developmental resources rather than reducing development to one primary resource as hereditarians do. Similar things have been noted by other developmental systems theorists, most notably Oyama (1985/2000). What is selected isn’t genes or behaviors; what is selected is the whole developmental system. Genes aren’t active causes. So if we look at development as a dance with music, as Noble (2006, 2016) does, there are no sufficient causes for development, but there are necessary causes, of which genes are but one part of the whole system.
The answer to hereditarianism is to simply show that it fails conceptually, that its “causal” framework for explaining the differences (“natural selection”) is unsound, and that multiple interacting factors are responsible for human development in the womb and throughout the life course. “Theories derived from RDS meta-theory focus on the ‘rules,’ the processes that govern, or regulate, exchanges between (the functioning of) individuals and their contexts” (Lerner, 2021: 457). Hereditarianism relies on gene-selectionism. But genes are not leaders in evolution; development is inherently holistic, not reductionist.
The hereditarian program has its beginnings with Francis Galton, and then, after the first “IQ” test (Binet’s) was made, American eugenicists used it to “show” who was a “moron” (meaning, who had a low “IQ,” meaning “intelligence”). Tens of thousands of sterilizations were soon carried out, since the causes of these problems were said to be in these people’s genes and so, it was reasoned, negative eugenics needed to be practiced in order to cull from the population the genes that lead to socially undesirable traits.
The hereditarian hypothesis is, therefore, a racist hypothesis, contra Carl (2019), who argued that the hereditarian hypothesis is not racist while citing many arguments from critics. I won’t get into that here, as I have many articles on the matters Carl (2019) discusses. But what I will say is that the hereditarian hypothesis is racist in virtue of (1) not being logically plausible (reductionism about the mind and physicalism are both false) and (2) ranking races on a scale of “higher to lower” (that is, a hierarchy). Racism “is a system of ranking human beings for the purpose of gaining and justifying an unequal distribution of political and economic power” (Lovechik, 2018). Therefore, the hereditarian hypothesis is a racist hypothesis, contra Carl’s protestations. Hereditarians may claim that their claims are stifled in public debate, but for behavioral genetics at large this is false (see Kampourakis, 2017). Carl (2018) claims that “stifling” the debate around race, genes, and IQ can do harm, but he is sadly mistaken! By believing that differences that can be changed are “genetic,” those differences are deemed unfixable, and the groups with a higher frequency of whichever genes are (supposedly) causally efficacious for IQ will then be treated differently.
If neuroreduction (mind-brain reduction) is false, if genetic reduction is false, and if natural selection isn’t a mechanism, then hereditarianism cannot possibly be true. The arguments given here pair with my conceptual arguments against hereditarianism for more force against the hereditarian hypothesis. Just as with my argument to ban IQ tests, we must ban hereditarian research too, since the outcomes can be socially disastrous (Lerner, 2021, part VI, Developmental Theory and the Promotion of Social Justice). By now, these kinds of “theories” and claims have been refuted to hell and back, and so the only reason to hold these kinds of beliefs is racist attitudes (combined with some mental gymnastics).
So for these, and many more, reasons, we must outright reject genetic reductionism (not least because these claims derive from flawed studies with false assumptions like twin studies) along with its partner “natural selection.” We therefore must commit ourselves to social justice to ameliorate the effects of racist attitudes and views.
Science is one of Man’s greatest methods. Being a social convention, science is done in conjunction with other people. Since a “method” is how goals are achieved, using a “scientific method” means we are achieving scientific goals. What we now know as “science” was formulated by F. Bacon in 1620 in his Novum Organum. He described three steps: (1) collect facts; (2) classify the facts into certain categories; and (3) reject what does not cohere with the hypothesis and accept what does. But before F. Bacon espoused what many hold to be the bedrock of modern science, there was another Bacon who developed similar ideas.
Some of the very beginnings of the practice now known as “science” can be attributed to Roger Bacon. R. Bacon is even called “Britain’s first scientist” (Sidebottom, 2013). R. Bacon developed his thought on the basis of the Islamic scholar Ibn al-Haytham’s empirical method. The principles of what is now known as science (or should I say scientism?) were first expressed by R. Bacon in the 13th century, in his Opus Majus:
Having laid down fundamental principles of the wisdom of the Latins so far as they are found in language, mathematics, and optics, I now wish to unfold the principles of experimental science, since without experiment nothing can be sufficiently known. There are two ways of acquiring knowledge, one through reason, the other by experiment. Argument reaches a conclusion and compels us to admit it, but it neither makes us certain nor so annihilates doubt that the mind rests calm in the intuition of truth, unless it finds this certitude by way of experience. Thus many have arguments toward attainable facts, but because they have not experienced them, they overlook them and neither avoid a harmful nor follow a beneficial course. Even if a man that has never seen fire, proves by good reasoning that fire burns, and devours and destroys things, nevertheless the mind of one hearing his arguments would never be convinced, nor would he avoid fire until he puts his hand or some combustible thing into it in order to prove by experiment what the argument taught. But after the fact of combustion is experienced, the mind is satisfied and lies calm in the certainty of truth. Hence argument is not enough, but experience is. (Quoted in Sidebottom, 2013; Sidebottom’s emphasis)
This seems to me to be a proto-view of ‘scientism’—the claim that we can only gain knowledge through our five senses. When R. Bacon said that “argument is not enough, but experience is,” this was a clear predecessor of current scientistic thinking. But the a priori is irrelevant to the a posteriori—that is, empirical evidence is irrelevant to a priori (deductive) arguments. In any case, R. Bacon’s writings on this matter were partly the catalyst for Europe’s scientific revolution. You can also see how R. Bacon distinguished between deductive and inductive arguments/thinking—a distinction that would come into play in 1600s Europe. Lastly, there is no “either-or” here, as both modes of thinking (deduction and induction) are more than sufficient for generating knowledge.
Deductive reasoning (which was pioneered by Rene Descartes) is where we attempt to see the implications of information that we already know. For example, one can construct an a priori argument—an argument that provides justification for thinking that p (a proposition) is true based merely on thinking about or understanding p. If all of the premises in the argument are true, then the conclusion necessarily follows. On the other hand, inductive reasoning (pioneered by R. and F. Bacon) is where we attempt to locate patterns in natural phenomena while attempting to predict what will occur under controlled conditions (or amassing observations to draw specific conclusions). For example, a scientist can observe a phenomenon and then predict what will occur under the controlled environment of an experiment. The conclusion of an inductive argument is not certain (as it is in deductive arguments); it is only a prediction of what may be. Inductive and deductive reasoning need not be at odds, though—lest we fall into the trap of scientism, the claim that all knowledge is derived from the five senses.
F. Bacon argued that attempts to falsify (that is, test) and verify hypotheses are a group effort. That is, science is a social convention. Science is predicated on prediction—predicting the future from what we currently know under a set of controlled conditions (the scientific experiment). Basically, a scientific prediction is a claim about an event that has yet to transpire. So the test of an explanatory theory is whether or not it is successful at predicting novel facts (facts that were unknown before the formulation of the hypothesis). And if a hypothesis generates a novel fact-of-the-matter, then we are justified in believing the hypothesis, since the only way the prediction would come to pass is if either (1) the hypothesis is true or (2) it was chance. If the same result keeps being generated, then we are justified in stating that the prediction derived from the hypothesis is not due to chance, and so one can be justified in believing the scientific hypothesis. This is what is known as “predictivism.” But there is a danger we must be wary of—we must take care not to retrofit facts in order to save a pet theory. A theory has to have some reach outside of what is already known; this is where the generation of novel facts comes into play.
Even before F. Bacon, the scientific method had predecessors (who came after R. Bacon) in the works of Galileo, Copernicus, Tycho, and Kepler. Going against the accepted wisdom of the day, Copernicus claimed that the sun—and not the earth—was the center of the solar system, that the earth rotates daily on its axis, and that all planets revolve around the sun. Copernicus did this using only his eyes, as the telescope had not yet been invented (Galileo built his first in 1609). This came to be known as Copernicus’ “heliocentric” theory—the theory that the sun, and not the earth, is at the center of the solar system. During the European middle ages, the people were more religious (even though science was just starting to blossom), and since they were religious they believed in God and thought that Man was special: he is the ‘highest’ organism, has dominion over all animals, and the planet that God created for him is the center of it all.
But when Galileo pointed his telescope at the heavens, he confirmed Copernicus’ hypothesis that the planets revolve around the sun—the sun does not revolve around the earth. He did this by observing the moons of Jupiter (what he called the “4 Medicean stars”), which he mapped in the night sky, showing that not everything orbits the earth. Galileo’s collection and analysis of data is seen as science “before science,” as he utilized the methods of observation and prediction that scientists use today (and which were also espoused in previous centuries).
Tycho was not like Copernicus; instead of holding what we now know about the solar system, he—using observation—suggested that the planets orbit the sun, while the whole system revolves around the earth. So Tycho could account for the motions of the different planets without upsetting the Ptolemaic order in which the earth is the center of the system. Then, in the late 1590s, Tycho took all of the data he had amassed over the years and became court astronomer to the Holy Roman Emperor. This is where Tycho met Johannes Kepler, who believed that everything that was created had been created according to mathematical laws. After Tycho died, Kepler inherited Tycho’s position along with all of his notes and data. Tycho, being an Aristotelian, believed that the planets had circular orbits and that planetary motion was uniform. But Kepler showed that the planets have elliptical orbits (his first law) and that planetary speed varies as a function of distance from the sun (his second law).
Today, we have a four-step scientific method that is somewhat similar to what R. and F. Bacon, Galileo, and Copernicus used: (1) observe; (2) formulate a hypothesis to explain the observation; (3) predict effects using the hypothesis; and (4) carry out experiments to see if the predicted effects hold. That is, of course, very simplistic. There is no one “scientific method,” although we can identify ways in which scientists use similar methods to derive their conclusions based on their hypotheses and experiments. If you think about it, there are numerous different fields of science, so why should there be “one true scientific method”?
Copernicus, Galileo, and R. and F. Bacon all paved the way for the modern world, creating and utilizing tools and modes of thought that are still in use today. Copernicus and Galileo overturned centuries-old knowledge that was based on unfounded assumptions and replaced it with a method in which one has to observe a thing—so one would assume it has something to do with “reality.” Their observations led to them being seen as heretics, since they went against the Church’s teachings, and so they were driven out of society. As can be seen throughout history, developing something new to further knowledge and challenge current-day hierarchies may have seemed like a bad idea at the time (to Galileo), but in the end the truth won out: he used the principles of science and he learned a new fact.
Newton was interested in optics, mathematics, and gravity. Newton showed that white light is composed of different-colored rays, which refuted Descartes’ belief that color is a secondary quality produced by the speed of particulate rotation and that light is actually white. He also invented integral and differential calculus. Lastly, there is what he is perhaps most famous for: his theory of gravity. Why did the apple fall straight down and not, say, sideways? Why, because it was drawn to the earth. (Newton did not speak on what causes gravity.) It was when Edmund Halley (of Halley’s comet) asked Newton whether there was any mathematical proof that the planets had elliptical orbits that Newton set out the work that became the Principia.
But what does it mean to “explain a phenomenon scientifically”? A “phenomenon” is an observable thing that happens. Science deals with nature, with things that occur in nature. “What happened?” and “Why did it happen?” are two questions an inquisitive mind may ask. The scientist asks questions, so in a way they create puzzles for themselves which is what a “scientist” is to Kuhn (1996: 144), “a solver of puzzles, not a tester of paradigms.” So if we are attempting to explain a phenomenon scientifically, we are attempting to solve a puzzle—how and why something may happen, for example.
What can be seen today—just as it could be seen centuries ago with Galileo and Copernicus—is that science is a social institution driven by politics, contrary to those who claim that scientists are “objective observers in a search for truth.” The biases of scientists—and of the society they are in—influence both their research questions and their conclusions. Their own prejudices and preconceptions cloud their thoughts, what they want to research, and the conclusions they draw. If science is a human tool, then science will be used for whatever humans want it to be used for. Social institutions can certainly attempt to stymie certain forms of research (as happened to Galileo, and as did not happen to hereditarians from the 1900s to the present day; see Jackson and Winston, 2020). So we can see how science can be used to confirm or disconfirm certain things (i.e., people’s preconceived notions about the world). Thomas Kuhn said that “The answers you get depend on the questions you ask.” And if you think about the questions that certain people who fancy themselves scientists ask, then quite obviously the conclusion (the answer) is already known, and they are merely trying to justify their own prejudices and a priori beliefs (e.g., hereditarians).
Using the methods developed by Francis and Roger Bacon (no relation), we have achieved what our ancestors would have thought impossible—they would have called much of what we do today "magic," since they would not have understood—that is, they would not have had the frame of reference to see—that what they were witnessing is natural, coming from the natural world. The modern world needed the scientific revolution that came from Europe; without it (along with what was invented and thought at the time, which would later become the bedrock for today's inventions and scientific thought), the world would be a different place. What the so-called "heretics" of the time showed was perseverance, getting what they took to be the truth out no matter the cost; with these new thoughts and ways of seeing the world, they changed it.
Phylogeny-reading is hard for some. So hard that there are numerous papers in the literature that correct the misunderstandings many students bring to reading these trees (e.g., Crisp and Cook, 2004; Baum, Smith, and Donovan, 2005; Gregory, 2008; Omland, Cook, and Crisp, 2008). Some may read certain trees as showing a type of "evolutionary progress" in the history of life, from "primitive" to more "advanced" life forms—for example, the notion that if a lineage hasn't "branched" on the tree, it is "less evolved" than organisms that "branched" more. Notions of "progress"—both in society and in evolution—continue even to this day (see Bowler, 2021 for a great discussion). This is illustrated wonderfully by PumpkinPerson's misunderstanding where he claims:
If you’re the first branch, and you don’t do anymore branching, then you are less evolved than higher branches
This conceptual confusion comes from his idea that more branching = more evolution, and therefore that more branching equals "more evolved" organisms. But, unless an organism is extinct, all organisms have been evolving for the same amount of time, which defeats his claim here.
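The defeater can be made concrete with a toy ultrametric tree—a hypothetical sketch (the lineage names and branch lengths here are invented for illustration): every extant tip sits the same distance from the root, no matter how much "branching" happened along the way.

```python
# Hypothetical tree: each node maps to (child, branch length) pairs.
# Branch lengths are in arbitrary time units; tips are extant lineages.
tree = {
    "root": [("ancestor1", 100), ("lineage_A", 400)],   # A branches early, then never again
    "ancestor1": [("lineage_B", 300), ("lineage_C", 300)],  # B and C branch later
}

def tip_depths(tree, node="root", depth=0, out=None):
    """Return total root-to-tip time for every extant lineage."""
    out = {} if out is None else out
    children = tree.get(node)
    if not children:          # a tip: record its total elapsed time
        out[node] = depth
    else:
        for child, length in children:
            tip_depths(tree, child, depth + length, out)
    return out

depths = tip_depths(tree)
# Every extant tip has the same root-to-tip time (400 units here),
# even though lineage_A "branched" only once.
```

Less branching does not mean less elapsed evolutionary time: all three tips have been evolving for exactly as long as one another.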
Such fantastical claims of "evolutionary progress" in humans come from JP Rushton who, although he didn't explicitly state it (Lynn did), said he "had alluded to similar ideas in previous writings" (Rushton, 1997: 293). But Rushton (1992) was more explicit—he said that "One theoretical possibility is that evolution is progressive and that some populations are more "advanced" than others." This is in reference to his long-debunked theory that Asians are more K-selected than whites, who are in turn more K-selected than blacks, dubbed "r/K selection theory" or "Differential K theory." But I'm not aware of Rushton wrongly inferring this from tree-reading—that's a PP thing.
So Rushton, like PP, assumed that those groups that emerged after older groups are more "evolutionarily advanced" than the others. But, although Rushton published further editions of his book after Gould's (1996) Full House—in which Gould refutes the claim that evolution is "progressive"—Rushton is strangely silent on the matter. In any case, any form of "progress" in evolution—if it did exist—would be upended by decimations leading toward species extinction.
Progressionists think that evolution is both directional and, obviously, progressive. That is, there seems to be a goal to get more and more complex, or at least to reach a bigger body size, and this is "good." But there seems to be a kind of inherent, unspoken "value" one has to attach to views about "evolutionary progress." For instance, Bonner (2015: 1187) states that "If we look at evolution from a great distance, we see a progression." For example, see Bonner's (2019) Figure 1, where he shows an apparent increase in body size which can be said to be "progression." This can, though, be explained passively—that is, explained by a non-directedness for body size in evolution—as Gould (2011: 162), using his drunkard's walk analogy, writes (Gould's emphasis):
Given these three conditions, we note an increase in size of the largest species only because founding species start at the left wall, and the range of size can therefore expand in only one direction. Size of the most common species (the modal decade) never changes, and descendants show no bias for arising at larger sizes than ancestors. But, during each act, the range of size expands in the only open direction by increase in the total number of species, a few of which (and only a few) become larger (while none can penetrate the left wall and get smaller). We can say only this for Cope’s Rule: in cases with boundary conditions like the three listed above, extreme achievements in body size will move away from initial values near walls. Size increase, in other words, is really random evolution away from small size, not directed evolution toward large size.
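Gould's passive model is easy to simulate. The following is a minimal sketch (my own construction, not Gould's model or code): lineages start at a "left wall" of minimal size/complexity and step up or down at random with no directional bias, yet the maximum drifts rightward over time while the mode stays at the wall—exactly the pattern Gould describes.

```python
import random

random.seed(42)
LEFT_WALL = 1  # minimal complexity/size: no lineage can go below this

def passive_walk(n_lineages=500, n_steps=200):
    """Unbiased random walk with a left wall (Gould's drunkard's walk):
    each lineage starts at the wall and steps up or down with equal
    probability, never crossing below the wall."""
    sizes = [LEFT_WALL] * n_lineages
    for _ in range(n_steps):
        for i in range(n_lineages):
            sizes[i] = max(LEFT_WALL, sizes[i] + random.choice((-1, 1)))
    return sizes

final = passive_walk()
mode = max(set(final), key=final.count)  # most common size stays near the wall
right_tail = max(final)                  # the extreme drifts rightward
```

No step in the simulation favors increase, yet the range expands in the only open direction—"random evolution away from small size, not directed evolution toward large size."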
Such notions of “evolutionary progress” do date back to Aristotle, as Rushton rightly notes, who classified “lower” and “higher” organisms. The modern view of the scala naturae is that there is a steady line from less complex to more complex organisms, with humans at or near the end. Bonner, in his 1988 book, does argue for “higher or lower” species, but in his newer 2013 Randomness in Evolution he argues that evolutionary change is mostly passive or non-driven. Rushton (1997: 294) cites Bonner (1988: 6) saying that it is acceptable to use the terms “higher” and “lower” organisms. But Diogo et al (2013: 16) write:
There are two main problems with this latter statement. Firstly, there are many examples of how older animals (from 'lower' strata) are often considered, in various aspects of their biology and physiology, more complex than more recent 'higher' animals (from 'higher' strata). … Secondly, and perhaps more important in the context of the present review, in the original idea of scala naturae the term 'higher' taxa referred to humans and to the animals that are anatomically more similar to humans, and this is still the way in which this term is used by many authors nowadays (reviewed by Diogo & Wood, 2012a, 2013)
Although humans went through more transitions than other primates, this did not result in more muscles than other primates have: "there is effectively no general trend to increase the number of muscles at the nodes leading to hominoids and to modern humans" (Diogo et al, 2013: 18). Thus, using the tortured logic of progressionists, humans are less evolved than other primates.
Using PP’s tortured logic on tree-reading, I asked him “Who is more evolved?” in the following tree from Strassert et al (2021):
PP then says that "new research inspires fresh look at evolutionary progress." But some confusions from PP must first be noted. He "predicted" Amorphea to be "less evolved" than Diaphoretickes; but humans are in Amorphea, therefore humans—to PP—are less evolved than plants. PP then said that Wiki says that Amorphea is "unranked"—but all "unranked" means here is that the classification is not part of the traditional Linnean taxonomy. PP likes his simpler trees where he can get the "conclusion" that he hopes for—that there are more and less evolved organisms, which conforms to his a priori biases on the nature of evolution. He then said that Amorphea does not appear to be a widely recognized taxon… but it has been noted that "Amorphea is robustly supported in most phylogenomic analyses" (Burki et al, 2019: 7), while Amorphea and Diaphoretickes form two Domains in Eukaryotes (Adl et al, 2019). So, it seems, Amorphea IS a widely-accepted supergroup.
The philosopher of biology and mind Jianhui Li (2019) argues against many of the arguments Gould forwarded in Full House and Wonderful Life (Gould, 1989, 1996). Attempting to refute one of Gould's arguments—that belief in evolutionary progress stems from human arrogance—Li (2019) tries to argue that Gould objects to the idea of evolutionary progress on the basis that such "a belief in evolutionary progress may cause human arrogance and racism and even inequality among different species, and arrogance, racism, and inequality are morally wrong; thus, the idea of evolutionary progress is wrong. Such an argument is obviously untenable" (Li, 2019: 301). The thing is, Gould is not incorrect in his argument that a view of evolutionary "progress" (social Darwinism) would lead to racism and to the thought that we hold dominion over other animals. Social Darwinistic thought was indeed used to enact racist policies (Pressman, 2017), and this thought was based on a view of progress in evolution. (Rushton's attempted revival of the scala naturae in humans can, of course, be seen in Gould's eyes as using evolution to justify certain types of attitudes—in this case, racist attitudes—which are due to certain kinds of thought in society.)
In attempting to refute Gould's next argument—that value terms have no use in evolution—Li tries to show that, going off the previous argument, Gould himself used value judgments in trying to show that belief in evolutionary progress would lead to racist and speciesist views. In a nutshell, Li says that evolutionary progress is, quoting Ayala, "directional change toward the better." But, as Gould has always argued, these kinds of value-judgments do not make any sense. What is "better" in one environment may, in comparison to another environment, be "worse" than another so-called adaptation. I have even said in the past that the terms "superior" (higher) and "inferior" (lower) only make sense in light of anatomy, where the head is superior to the foot and the foot is inferior to the head.
Li then discusses the possibility that "natural selection" can serve as the basis for evolutionary progress, contra Gould. Gould did say that, if progress in evolution were real, any kind of progress would be wiped out during mass extinction events. Invoking Gould's punctuated equilibrium theory, Li says that the theory posits mass extinctions as well as mass explosions (rapid speciation), and that those organisms that do not go extinct continue on to show forms of progress. Li then says that certain traits are not only local adaptations but non-local adaptations, since they can be seen to be useful in all environments. But that certain traits are useful in all environments does not mean that evolutionary progress is real; it only means that, at that time and place for that organism, the trait is useful and will persist—and, if it becomes non-useful, the trait will fade from the lineage. It is, like most everything, based on context. Li says next that although a replaying of life's tape would lead to unpredictability as regards what kinds of animals evolve, we do know that there will be complex organisms. The emergence of an organism similar to humans would be an inevitability, says Li, which means that evolution is both directional and driven toward complexity. But, as Gould, McShea, and Bonner argue, evolution is a series of random, non-driven processes that, through our biased lens, look like "progress."
Li then tries to show that Gould's drunkard's walk argument is false. The argument goes: Imagine a drunk person leaving a bar. Now imagine a wall and a gutter. After being kicked out of the bar, the drunk has the bar's wall on one side and the street gutter on the other. Although the drunkard has no intention of going anywhere, since he is extremely drunk, by statistical chance he will eventually end up in the gutter after staggering off the wall, near the gutter, and everywhere in between. Using this argument by analogy, Gould likens the evolutionary process to the drunkard's walk. Li then tries to argue that Gould's rejection of adaptationism and natural selection is the wrong way to go—but Fodor and Piattelli-Palmarini (2009; 2013) argue that "natural selection" is not and cannot be a mechanism, since there are no laws of selection for trait fixation and no mind behind the process of selection. So this argument from Li, too, fails.
Lastly, Li attempts to take down what Gould terms his "modal bacter" argument in Full House. Bacteria are some of the simplest organisms on earth, while humans are some of the most complex, says Li. He also says that Gould does not deny that complexity has increased since the dawn of bacteria—another fact. But upon a close reading of Full House, it can be appreciated that the evolution of complexity is not driven; it is passive and non-driven. Li (2019: 307-308), though, says that "although bacteria rule the earth, human beings are higher than them, not only because human beings have more complex organic structures but also because human beings have abilities that are higher than those of bacteria. The evolutionary history of life from bacteria to humans is a history of constant progress." But what Li fails to realize is that Gould's modal bacter wonderfully illustrates his case: Life began at the left wall of minimal complexity, the bacteria sit right next to this left wall, and random "walks" dictate the evolution of complexity. Of the mode of life—bacteria—Gould (2011: 170) rightly asks, "can we possibly argue that progress provides a central defining thrust to evolution if complexity's mode has never changed?" The bacterial mode never alters, but the distribution of complexity becomes increasingly skewed toward the right, away from the modal bacter, during evolutionary time. And Gould (2011: 171) swiftly takes care of the claim that this right tail shows progress:
A claim for general progress based on the right tail alone is absurd for two primary reasons: First, the tail is small and occupied by only a tiny percentage of species (more than 80 percent of multicellular animal species are arthropods, and we generally regard almost all members of this phylum as primitive and non progressive). Second, the occupants of the extreme right edge through time do not form an evolutionary sequence, but rather a motley series of disparate forms that have tumbled into this position, one after the other. Such a sequence through time might read: bacterium, eukaryotic cell, marine alga, jellyfish, trilobite, nautiloid, placoderm fish, dinosaur, saber-toothed cat, and Homo sapiens. Beyond the first two transitions, not a single form in this sequence can possibly be a direct ancestor of the next in line.
Li, it seems, is confused about the modal bacter argument—it is an inevitability that more complex organisms (the right tail) would arise after the less complex left wall, but this does not denote progress in the Darwinian sense; it only denotes that change and evolution are random. What this does show is, as Gould argued, that our anthropocentric biases lead us to the conclusion that we are "higher" than other animals on the basis of our accomplishments.
Using Gould’s arguments in Full House, I constructed this syllogism with the knowledge that “progress” can be justified if and only if “more advanced” organisms outnumber “less advanced” organisms:
P1 The claim that evolutionary “progress” is real and not illusory can only be justified iff organisms deemed more “advanced” outnumber “lesser” organisms.
P2 There are more “lesser” organisms (bacteria/insects) on earth than “advanced” organisms (mammals/species of mammals).
C Therefore evolutionary “progress” is illusory.
York and Clark, in their article Stephen Jay Gould's Critique of Progress (2011), put Gould's opposition to evolutionary and social progress well:
However, Gould also focused on contingency and the critique of progress to make a larger point about science and society. The belief in progress is a prime example of how social biases can distort science. Gould aimed to show that the natural world does not conform to human aspirations. Nature does not have human meaning embedded in it, and it does not provide direction to how humans should live. We live, instead, in a world that only has meaning of our own making. Rather than viewing this situation as disheartening, Gould saw it as liberating because it empowers us to make our own purpose. Gould stressed, similar to Karl Marx and other radical thinkers, that we make our own history and that the future is open.
Hold-outs for the claim that evolution is progressive are rare in contemporary biology. Rushton was one of the last big names to try to argue that evolution is progressive. (These arguments are discussed here, here, and here.) Although Bonner used to be a progressionist, he changed his view in 2013, agreeing with Gould and McShea that evolution is random and non-driven—that it is passive. Diogo et al (2013) showed that there is no increase in muscles at the nodes leading toward Homo sapiens, so "humans are relatively simplified primates" (Diogo et al, 2013: 18). Li (2019) makes some of the best attempts at taking down Gould's anti-progress arguments, but he comes up well short. Evolution just is not progressive, no matter who wants it to be (Ruse, 1996).
All in all, the concept of progress in evolution seems to be trending away from being touted as reality. As we learn more and more about the passive, non-driven evolutionary process, we will put to rest such simplistic notions of "more or less evolved" and "superior and inferior" organisms. All organisms that are not extinct have been evolving for the same amount of evolutionary time. This does not, of course, speak against the fact that MORE evolutionary change could happen in certain species in certain timespans, but this DOES NOT mean that the species that undergoes more change is "more evolved" or "superior." Gould, contrary to some, has definitively and convincingly put these kinds of anthropocentric arguments to bed. By conflating value judgments with evolution, we lose the beauty of what evolution really is—random, non-driven change that has produced all of the biological wonder we see around us today.
The hereditarian-environmentalist debate has been ongoing for over 100 years. In this time frame, many theories have been forwarded to explain the disparities between individuals and groups. In one camp you have the hereditarians, who claim that any non-zero heritability for IQ scores means that hereditarianism is true (eg Warne, 2020); while in the other camp you have the environmentalists, who claim that differences in IQ are explained by environmental factors. This debate, raging since the 1870s when Francis Galton coined the "nature-nurture" dichotomy, still rages today. Unfortunately, the environmentalists lend credence to IQ-ist claims that, however imperfect, IQ tests are "measures" of intelligence.
Three recent books on the matter are A Terrible Thing to Waste: Environmental Racism and its Assault on the American Mind (Washington, 2019), Making Kids Cleverer: A Manifesto for Closing the Advantage Gap (Didau, 2019), and Young Minds Wasted: Reducing Poverty by Enhancing Intelligence in Known Ways (Schick, 2019). All three of these authors are clearly environmentalists and they accept the IQ-ist canard that IQ—however crudely—is a “measure” of “intelligence.”
There are, however, no sound arguments that IQ tests "measure" intelligence, and there is no response to the Berka/Nash measurement objection to the claim that IQ tests are a "measure", since no hereditarian can articulate the specified measured object, the object of measurement, and the measurement unit for IQ; there is, also, no accepted definition or theory of "intelligence". So how can we say that some"thing" is being "measured" with a certain instrument if we have not satisfactorily defined what we claim to be measuring with a well-accepted theory of what we are measuring (Richardson and Norgate, 2015; Richardson, 2017), with a specified measured object, object of measurement, and measurement unit (Berka, 1983a, 1983b; Nash, 1990; Garrison, 2003, 2009) for the construct we want to measure?
But the point of this article is that environmentalists push the hereditarian canard that IQ is equal to, however crudely, intelligence. And though the authors do have great intentions and are pointing to things that we can do to attempt to ameliorate differences between individuals in different environments, they still lend credence to the hereditarian program.
A Terrible Thing to Waste
Washington (2019) discusses the detrimental effects (actual and possible) of lead, mercury, and other metals that are more likely to be found in low-income black and "Hispanic" communities, along with iodine deficiencies. These environmental exposures retard normal brain development. But one is not justified in claiming that IQ tests are measures of "intelligence"—at best, as Washington (2019) argues, we can claim that they are indexes of the effects of environmental polluters on the brains of developing children.
Intelligence is a product of environment and experience that is forged, not inherited; it is malleable, not fixed. (Washington, 2019: 20)
While it is true, as Washington claims, that we can mitigate these problems from the toxic metals and from the lack of other nutrients pertinent to brain development by addressing the problems in these communities, it does not follow that IQ is a "biological" thing. Yes, IQ is malleable (contra hereditarian claims), and Headstart does work to improve life outcomes, even though such gains "fade out" after the child leaves the enriched environment. Lead poisoning, for example, has led to a loss of 23 million IQ points per year (Washington, 2019: 15). But I am not worried about lost IQ points (even though, by saving the IQ points from being lost, we would then be directly improving the environments that lead to such a decrease). I am worried about the detrimental effects of these toxic chemicals on the developing minds of children; lost IQ points are an outcome of this effect. At best, IQ tests can track cognitive damage due to pollutants in these communities (Washington, 2019), but they do NOT "measure" intelligence. (Also note that lead consumption is associated with higher rates of crime, so this is yet another reason to reduce the consumption of lead in these communities.)
Speaking of “measuring intelligence”, Washington (2019: 29) noted that Jensen (1969: 5) stated that while “intelligence” is hard to define, it can be measured… But how does that make any sense? How can you measure what you can’t define? (See arguments (i), (ii), and (iii) here.)
Big Lead, though, “actively encouraged landlords to rent to families with vulnerable young children by offering financial incentives” (Washington, 2019: 55). This was in reference to the researchers who studied the deleterious effects of lead consumption on developing humans. “The participation of a medical researcher, who is ethically and legally responsible for protecting human subjects, changes the scenario from a tragedy to an abusive situation. Moreover, this exposure was undertaken to enrich landlords and benefit researchers at the detriment of children” (Washington, 2019: 55). We realized that lead had deleterious effects on development as early as the 1800s (Rabin, 2008), but Big Lead pushed back:
[Lead Industries Association's] vigorous "educational" campaign sought to rehabilitate lead's image, muddying the waters by extolling the supposed virtues of lead over other building materials. It published flooding guides and dispatched expert lecturers to tutor architects, water authorities, plumbers, and federal officials in the science of how to repair and "safely" install lead pipes. All the while the [Lead Industries Association] staff published books and papers and gave lectures to architects and water authorities that downplayed lead's dangers. (Washington, 2019: 60)
In any case, Washington's book is a good read on the effects of toxic metals on brain development; and while we must do what we can to ameliorate the effects of these metals in low-income communities, IQ increases would be a side effect of removing the toxic metals from these communities.
Making Kids Cleverer
Didau (2019: 86) outright claims that "intelligence is measured by IQ tests"—he is pushing the hereditarian view that IQ tests "measure intelligence." (A strange claim, since on pg 95-96 he says that IQ tests are "a measure of relative intelligence.")
In the book, Didau accepts many hereditarian premises—like the claims that IQ tests measure intelligence and that heritability can partition genetic and environmental variation. Further, Didau says in the Acknowledgements (pg 11) that Ritchie's (2015) Intelligence: All That Matters "forms the backbone for much of the information in Chapters 3 and 5." So we can see here how the hereditarian IQ-ist stance colors his view of the relationship between "IQ" and "intelligence." He also makes the bald claims that "intelligence is a good candidate for being the best researched and best understood characteristic of the human brain" and that it's "also probably the most stable construct in all psychology" (pg 81).
Didau takes the view that intelligence is both a way to acquire knowledge and a matter of what type of knowledge we know (pg 83)—basically, it's what we know and what we do with what we know, along with ways to acquire said knowledge. What one knows is obviously a product of the environment one grows up in, and what we do with the knowledge we have is similarly down to environmental factors. Didau states that "Possibly the strongest correlations [with IQ] are those with educational outcomes" (pg 92). But Didau, it seems, fails to realize that this strong correlation is built into the test, since IQ tests and scholastic achievement tests are different versions of the same test (Schwartz, 1975; Richardson, 2017).
In one of the "myths of intelligence" he discusses (Myth 3: Intelligence cannot be increased, pg 102), Didau uses an analogy similar to my own. In an article on "the fade-out effect", I argued that if one goes to the gym, works out, gets bigger, and then stops going, then by the same logic we could say that going to the gym is useless, since once one leaves the enriched environment one loses one's gains. The direct parallel between Headstart and my gym/muscle-building analogy, then, is clear.
In another myth (Myth 4: IQ tests are unfair), Didau claims that if you get a low IQ score then you are probably unintelligent, while if you get a high one, it means you know the answers to the questions—which is obviously true. Of course, to know the answers to the questions (and to be able to reason the answers for some of the questions), one must be exposed to the knowledge that is contained in that test, or they won’t score high.
We can reject the use of IQ scores by racists, he says, who would use them to justify the superiority of their own groups and the inferiority of "the other", all while not rejecting that IQ tests are valid (where have they been validated?). "Something real and meaningful" is being measured by these tests, and we have chosen to call this "intelligence" (pg 107). But we can say this about anything. Imagine having a test Y for X. We don't really know what X is, nor that Y really measures it. But because it accords with our a priori biases, and since we have constructed Y to get the results we think we should see, we assume that we are measuring what we set out to measure—even though we have no idea what X is, and all without the basic requirements of measurement.
While Didau does seem to agree with some of the criticisms I've leveled at IQ tests over the years (cross-cultural testing is pointless, IQ scores can be changed), he is, obviously, pushing a hereditarian IQ-ist agenda cloaked as environmentalism. He contradicts himself by claiming outright that intelligence is measured by IQ tests and then later calling them "a measure of relative intelligence"—and I don't think one should assume that he meant they are an "imperfect measure" of intelligence. (Imagine an imperfect measure of length—would we still be using it to build houses if it was only somewhat accurate?) Didau also agrees with the g theorists that there is a "general cognitive ability" as well. He further agrees with Ritchie and Tucker-Drob (2018) and Ceci (1996) that schooling can and does increase IQ scores (as summer vacations show that IQ scores do decrease without schooling) (see Didau, 2019: Chapter 5). So while he does agree that IQ isn't static and that education can and does increase it, he is still pushing a hereditarian IQ-ist model of "intelligence"—even though, as he admits, the concept of "intelligence" has yet to be satisfactorily defined.
Young Minds Wasted
In the last book, Young Minds Wasted, Schick (2019) does dispense with many hereditarian myths (such as the myth of the normal distribution, see here), yet he still—through an environmentalist lens—justifies the claim that IQ tests test intelligence. While he masterfully dispenses with the "IQ is normally distributed" claim (see the discussion on pg 180-186), the tagline of the book is "reducing poverty by enhancing intelligence in known ways."
The poor’s intelligence is wasted, he says, by an intelligence-depressing environment. We can see the parallels here with Washington’s (2019) A Terrible Thing to Waste. Schick claims that “the single most important and widespread cause of poverty is the environmental constraints on intelligence” (pg 12, Schick’s emphasis). Now, like Washington, Schick says that a whole slew of chemicals and toxins decrease IQ (a truism) and by identity, intelligence. Of course, living in a deprived environment where one is exposed to different kinds of toxins and chemicals can retard brain development and lead to deleterious life outcomes down the line. But this fact does not mean that intelligence is being measured by these tests; it only shows that there are environments that can impede brain development which then is mirrored in a decrease in IQ scores.
Schick says that as intelligence increases, societal problems decrease. But, as I have argued at length, this is due to the way the tests themselves are constructed, reflecting the a priori biases of the tests' constructors. A test can be constructed with any kind of distribution we want, and the items emerge arbitrarily from the heads of the test's constructors, who then try them out on a standardization sample (Jensen, 1980: 71), looking for the results they want and assume a priori. Given this, what we accept as truisms regarding the relationship between IQ and life events could be turned on their head—there is no logical reason to accept one set of items over another, other than that one set upholds a test constructor's previously-held biases.
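The point that a test's score distribution is a construction choice can be illustrated with a small simulation (a sketch of my own, not Jensen's procedure): if constructors keep only items that roughly half of the standardization sample passes, total scores come out approximately bell-shaped regardless of what, if anything, the items "measure."

```python
import random
import statistics

random.seed(0)

def simulate_scores(n_people=1000, n_items=50, pass_rate=0.5):
    """Score a test whose items were retained for their ~50% pass rate.

    Each item is passed independently with the chosen pass rate. Summing
    many such items yields an approximately normal score distribution by
    the central limit theorem—the bell curve is built in by item
    selection, not discovered in the test-takers."""
    return [sum(random.random() < pass_rate for _ in range(n_items))
            for _ in range(n_people)]

scores = simulate_scores()
mean = statistics.mean(scores)   # clusters near n_items * pass_rate = 25
sd = statistics.pstdev(scores)
```

Choosing items with different pass rates (say, 0.9 or 0.1) would skew the distribution at will, which is the point: the shape of the curve reflects the constructors' choices.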
Schick does agree that "intelligent behavior" can change throughout life, based on one's life experiences. But "Human intelligence is based on several genetically determined capabilities such as cognitive functions" (pg 39). He also claims that genetic factors determine, while environmental factors influence, cognitive functions, memory, and universal grammar.
Along with his acceptance that genetic factors can influence IQ scores and other aspects of the mind, he also champions heritability estimates as being able to partition genetic and environmental variation in traits (even though they can do no such thing; Moore and Shenk, 2016). He uncritically accepts the 80/20 genetic-environmental heritability estimate from Bouchard and the 60/40 estimate from Jensen and from Murray and Herrnstein. These "estimates"—drawn mostly from family, twin, and adoption studies (Joseph, 2015)—are invalid due to the false assumptions the researchers hold, nevermind the conceptual difficulties with the concept of heritability itself (Moore and Shenk, 2016).
While Washington and Schick both make important points—that those who live in poor environments are at risk of being exposed to things that disrupt their development—they both, along with Didau, accept the hereditarian claim that IQ tests are tests of intelligence. While each author has his own specific caveats (some of which I agree with, and others I do not), they keep the hereditarian claim alive by lending credence to hereditarian arguments, even if they do not look at the matter through a genetic lens.
While the authors have good intentions in mind and while the research they discuss is extremely important and interesting (like the effects of toxins and metals on the development of the brain and of the child), they—like their intellectual environmentalist ancestors—unwittingly lend credence to the hereditarian claim that IQ tests measure intelligence, while going about the causes of individual and group differences in completely different ways. These authors, with their assertions, then accept the claim that certain groups are less “intelligent” than others—but it’s not genes that are the cause, it’s differences in environment. And while that claim is true—the deleterious effects Washington and Schick discuss can and do retard normal development—it in no way, shape, or form means that “intelligence” is being measured.
Normal (brain) development is indeed a terrible thing to waste; we can teach kids more by exposing them to more things, and young minds are wasted by poverty. But in accepting these premises, one need not accept the hereditarian dogma that IQ tests are measures of some undefined thing with no theory behind it. While poverty and the environments that those in poverty live in impede normal brain development, which is then reflected in IQ scores, it does not follow that these tests are “measuring” intelligence—at best, they show environmental challenges that change the brain of the individual taking the test.
One needs to be careful with the language they use, lest they lend credence to hereditarian pseudoscience.
“Congenital Insensitivity to Pain” (CIPA, or congenital analgesia; CIPA hereafter) is an autosomal recessive disease (Indo, 2002) and was first observed in 1932 (Daneshjou, Jafarieh, and Raeeskarami, 2012). It is called a “congenital disorder” since it is present from birth. Since the disease is autosomal recessive, the closer the two parents are in relatedness, the more likely it is they will pass on a recessive disorder, since they are more likely to carry and pass on the same autosomal recessive mutations (Hamamy, 2012). First cousins, for example, have a 1.7-2.8% higher risk of having a child with an autosomal recessive disease (Teeuw et al, 2013). Consanguinity is common in North Africa (Anwar, Khyatti, and Hemminki, 2014) and the Bedouin have a high rate of this disease (Schulman et al, 2001; Lopez-Cortez et al, 2020; Singer et al, 2020). Three mutations in the TrkA gene (AKA NTRK1) have been shown to induce protein mis-folding which affects the function of the protein, and different mutations in the TrkA gene have been shown to be associated with different disease outcomes (Franco et al, 2016). Since the mutated gene in question is needed for nerve growth factor signaling, pain signals cannot be transferred to the brain, as the nerve fibers that would carry them are hardly present (Shin et al, 2016).
Individuals unfortunate enough to be afflicted with CIPA cannot feel pain—whether from biting their tongues or from extreme temperatures. People with CIPA have said that while they can tell the difference between extreme temperatures—hot and cold—they cannot feel the pain actually associated with those temperatures on their skin (see Schon et al, 2018). When they bump into things, they may not be aware of what happened, and injuries may occur which heal incorrectly for lack of medical attention; the fractures and other damage are often only noticed years later, when they see doctors for what may be complications of the disease. People with CIPA are thought to be “dumb” because they constantly bump into things. But what is really happening is that, since they cannot feel pain, they have never learned that bumping into things can damage their bodies—pain is obviously an experience-dependent deterrent. So these people learn, throughout their lives, to fake being in pain so as not to draw suspicion from people who may not be aware of the condition. Children with the disease are often thought to be victims of child abuse, but when it is discovered that the child is afflicted with CIPA (van den Bosch et al, 2014; Amroh et al, 2020), treatment shifts toward managing the disease.
About twenty percent of people with CIPA die by three years of age (Lear, 2011), these early deaths being from complications due to hyperpyrexia—an elevated body temperature over 106 degrees Fahrenheit—since they cannot feel the heat and get themselves to cool down (Rosemberg, Marie, and Kliemann, 1994; Schulmann et al, 2001; Indo, 2002; Nabyev et al, 2018). Due to a low life expectancy (many who survive childhood live only until about 25 years of age), this disease is really hard to study (Inoyue, 2007; Daneshjou, Jafarieh, and Raeeskarami, 2012). People hardly make it past that age: either they do not feel pain and so do things that other people, through the experience of pain and discomfort, know not to do, or they commit suicide because they have no quality of life due to damaged joints. Furthermore, since they cannot feel pain, people with this disease are more likely to self-mutilate, since they cannot learn that self-mutilation causes pain (pain being a deterrent against future actions that may cause pain). They also cannot sweat, meaning that controlling the body temperature of one afflicted with CIPA is of utmost precedence (since they could overheat and die). Thus, these deaths of individuals with CIPA do not occur due to CIPA per se; they occur due to, say, not feeling heat, not sweating, and not attempting to regulate body temperature and cool down (whether by sweating or by getting out of the extreme heat causing the elevated body temperature). This cause of death—hyperpyrexia—affects around 20 percent of CIPA patients (Sasnur, Sasnur, and Ghaus-ul, 2011). Furthermore, they are more likely to have thick, leathery skin and to show little muscular definition.
Not sweating is associated with CIPA, and if one cannot sweat, one cannot regulate one’s body temperature when one gets too hot. So if they get too hot they cannot feel it, and they may die of heat stroke. The disease, though, is rare: only 17-60 people in America currently have it, while there are about 600 known cases worldwide (Inoyue, 2007; Lear, 2011). The disease is quite hard to identify, but clinicians may be able to detect its presence in the following ways: infants biting their lips, fingers, or cheeks without crying or showing any sign of being in pain afterward; repeated fractures in older children; a history of burns with no medical attention; many healed joint injuries and bone fractures for which the child’s parents never sought medical care; and a patient who does not react to hot or cold events—though they may say they can feel a difference between the two, they make errors in distinguishing whether something is hot or cold (Indo, 2008).
Children who have this disease are at a higher risk of certain kinds of bodily deformities, since they cannot feel the pain that would make them hesitant to repeat a harmful action in the future. Due to this, people with this disease must constantly check themselves for cuts, abrasions, broken bones, etc., since they cannot feel these injuries when they actually occur. They do not cry, or show any discomfort, when experiencing an event that would cause someone without CIPA to cry. CIPA-afflicted individuals are more likely to have bodily deformities since their joints and bones do not heal correctly after injury, which in turn affects their gait and appearance. This is one of many reasons why the parents of children with CIPA must constantly check them for signs of bodily harm or unintentional injuries. One thing to look out for is what is termed Charcot joint—a degenerative joint disorder (Gucev et al, 2020).
A related form of congenital insensitivity to pain—HSAN-V—was discovered in Vittangi, a village in northern Sweden, where it was traced to the founder of the village itself in the 1600s. Since the village was remote with such a small population, the only people around to marry and have children with were people closely related to each other. This, then, is the reason why this village has a high rate of people afflicted with the disease (Norberg, 2006; Minde, 2006). This, again, goes back to the above on consanguinity and autosomal recessive diseases—since CIPA is an autosomal recessive disease, one would expect to find it in populations that marry close relatives, whether due to custom or to population density.
Many features have been noted as indicating that an individual is afflicted with CIPA: absent pain sensation from birth; the inability to sweat; mental retardation; and lower height and weight for their age (Safari, Khaledi, and Vojdani, 2011; Perez-Lopez et al, 2015). Children with CIPA have lower IQs than children without CIPA, and there is an inverse relationship between IQ and age: the older the child with CIPA, the lower their IQ (Erez et al, 2010). One girl, for example, had a WISC-III IQ of 49, and she self-mutilated by picking at her nails until they were no longer there (Zafeirou et al, 2004). Another girl with CIPA was seen to have an IQ of 52, be afflicted with mental retardation, have a low birth weight, and be microcephalic (Nolano et al, 2000). Others were noted to have IQs in the normal range (Daneshjou, Jafarieh, and Raaeskarami, 2012). People with a specific form of this disease (HSN type II) were observed to have IQs in the normal range (though it is “caused by” a different set of genes than CIPA, HSN type IV; Kouvelas and Terzoglou, 1989). However, it has been noted that the cut-off of 70 for mental retardation is arbitrary (see Arvidsson and Granlund, 2016). By running a full gamut of tests on an individual thought to have CIPA, we can better attempt to ensure a higher quality of life for those afflicted with the disease. In sum, the IQ scores of CIPA individuals do not reflect that the mutations in TrkA “cause” IQ scores; low scores are an outcome of a disrupted system (in this case, mutations on the TrkA gene).
There is currently no cure for this disease, and so the only way to manage complications stemming from CIPA is to work on the injuries to the joints as they happen, to ensure that the individual has a good quality of life. Treatment for CIPA, therefore, is not actually curing the disease but treating what occurs due to the disease (bone breaks, joint destruction), which heightens the quality of life of the person with CIPA (Nabiyev, Kara, and Aksoy, 2016). Naloxone may temporarily relieve CIPA (Rose et al, 2018), while others suggest treatments such as remifentanil (Takeuchi et al, 2018). We can treat outcomes that arise from the disease (like self-mutilation), but we cannot outright cure the disease itself (Daneshjou, Jafarieh, and Raaeskarami, 2012). The current best way to manage the disease is to identify it early in children and to do full-body scans of afflicted individuals to treat the by-products of the disease (such as limb/joint damage and other injuries). Maybe one day we can use gene therapy to help the afflicted, but for now, the best way forward is early identification along with frequent check-ups. The disease can be managed by controlling body temperature, having frequent check-ups, modifying the behavior of the child so as to avoid injuries, wearing a mouth guard so they do not grind their teeth or bite their tongue, and avoiding hot or cold environments and food (Indo, 2008; Rose et al, 2018).
CIPA is a very rare—and very interesting—disease. By better understanding its aetiology, we can better help the extremely low number of people in the world who suffer from this disease.
An amputation is a preventative measure. It is done for a few reasons: to stop the spread of a gangrenous infection, and to save more of a limb after blood flow to it has been cut off for a period of time. Other reasons are trauma and diabetes. Trauma, infection, and diabetes are leading causes of amputation in developing countries, whereas in developed countries it is peripheral vascular disease (Sarvestani and Azam, 2013). Poor circulation to an affected limb leads to tissue death—when the tissue begins turning black, it means there is no or low blood flow to it—and to save more of the limb, it is amputated just above where the infection is. About 1.8 million Americans are living as amputees. After amputation, there is a phenomenon called “phantom limb” in which amputees can “feel” the limb they previously had, and even feel pain in it; it is very common, with about 60-80 percent of amputees reporting “feeling” a phantom limb (see Collins et al, 2018; Kaur and Guan, 2018). The sensation can occur either immediately after amputation or years later. Phantom limb pain is neuropathic pain—pain caused by damage to the somatosensory system (Subedi and Grossberg, 2011). Amputees even have shorter lifespans: when foot amputation is performed due to uncontrolled diabetes, mortality ranges between 13-40 percent in year one, 35-65 percent in year three, and 39-85 percent in year five (Beyaz, Guller, and Bagir, 2017).
Race and amputation
Amputations of the lower extremities are the most common (Molina and Faulk, 2020). Minority populations are less likely to receive preventative care, such as preventative vascular screenings, which leads to them being more likely to undergo amputations. Such populations are more likely to suffer from disease of the lower extremities, and it is due to this that minorities undergo amputations more often than whites in America. Minorities in America—i.e., blacks and “Hispanics”—are about twice as likely as whites to undergo lower-extremity amputation (Rucker-Whitaker, Feinglass, and Pearce, 2003; Lowe and Tariman, 2008; Lefebvre and Lavery, 2011; Mustapha et al, 2017; Arya et al, 2018)—it is an epidemic for black America. Blacks are even more likely to undergo repeat amputation (Rucker-Whitaker, Feinglass, and Pearce, 2003). In fact, here is a great essay chronicling the stories of some double-amputee black patients.
Why do blacks undergo amputations more often than whites? One answer is, of course, physician bias. For example, after controlling for demographic, clinical, and chronic disease status, blacks were 1.7 times more likely than whites to undergo lower-leg amputations (Feinglass et al, 2005; Regenbogen et al, 2007; Lefebvre and Lavery, 2011). One cause of this is inequity in healthcare—note that “inequity” here means differences in care that are avoidable and unjust (Sudana and Blas, 2013).
Another reason is complications from diabetes. Blacks have higher rates of diabetes than whites (Rodriguez and Campbell, 2007; but see Signorello et al, 2007). Muscle fiber differences between races may also play a role (see also here), and differences in hours slept between blacks and whites, too, could explain the severity of the disease. But what could also be driving differences in diabetes between races is the fact that blacks are more likely than whites to live in “food swamps.” Food swamps are areas saturated with calorie-dense, nutrient-poor food, whereas food deserts are areas where there is little access to healthy, nutritious food. In fact, a neighborhood’s being a food swamp is more predictive of the obesity status of its population than its being a food desert (Cooksey-Stowers, Schwartz, and Brownell, 2017). Along with the slew of advertisements directed at low-income neighborhoods (see Cassady, Liaw, and Miller, 2015), we can now see how such things as food swamps contribute to high hospitalization rates in low-income neighborhoods (Phillips and Rodriguez, 2019). These amputations are preventable—and so we can say that there is a lack of equity in healthcare between races which leads to these different rates of amputation—before even considering physician bias. Amputation rates for blacks in the southeast can be almost seven times higher than in other regions (Goodney et al, 2014).
Stapleton et al (2018: 644) conclude in their study on physician bias and amputation:
Our study demonstrates that such justifications may be unevenly applied across race, suggesting an underlying bias. This may reflect a form of racial paternalism, the general societal perception that minorities are less capable of “taking care of themselves,” even including issues related to health and disease management.23 Underlying bias may prompt more providers to consider amputation for minority patients. Furthermore, unlike in transplant surgery, there is currently no formal process for assessing patient compliance with treatment protocols or self-care in vascular surgery.24 Asking providers to make snap judgments about patient compliance, without a protocol for objective assessment, allows subconscious bias to influence patient care.
Physician bias is pervasive (Hoberman, 2012)—whether it is conscious or unconscious racial bias. Such biases can and do lead to outcomes that should not occur. By attempting to reduce disparities in healthcare that then lead to negative outcomes, we can then attempt to improve the quality of healthcare given to lower-income groups, like blacks. Such biases lead to negative health outcomes for blacks (such as the claim that blacks feel less pain than whites), and if they were addressed and conquered, then we could increase equity between groups until access to healthcare is equal—and physician bias is an impediment to access to equal healthcare due to the a priori biases that physicians may hold about certain racial/ethnic groups. Medical racism, therefore, drives a lot of the amputation differences between blacks and whites. Hospitals that are better equipped to offer revascularization services (attempting to save the limb by increasing blood flow to the affected limb) even had a higher rate of amputations in blacks when compared to whites (Durazzo, Frencher, and Gusberg, 2013).
For example. Mustapha et al (2017) write:
Compared to Caucasian patients, several studies have found that African-Americans with PAD are more likely to be amputated and less likely to have their lower limb revascularized either surgically or via an endovascular approach [3–9]. In an early analysis of data from acute-care hospitals in Florida, Huber et al. reported that the incidence of amputation (5.0 vs. 2.5 per 10,000) was higher and revascularization lower (4.0 vs. 7.1 per 10,000) among African-Americans compared to Caucasians, even though the incidence of any procedure for PAD was comparable (9.0 vs. 9.6 per 10,000). Other studies have reported that the probability of undergoing a revascularization or angioplasty was reduced by 28–49% among African-Americans relative to Caucasians [3–6].
Pro-white unconscious biases were also found among physicians, as Kandi and Tan (2020) note:
There is evidence of both healthcare provider racism and unconscious racial biases. Green et al. found significant pro-White bias among internal medicine and emergency medicine residents, while James SA supported this finding, indicating a “pro-white” unconscious bias in physician’s attitudes towards, and interactions with, patients [43,44]. In a survey assessing implicit and explicit racial bias by Emergency Department (ED) providers in care of NA children, it was discovered that many ED providers had an implicit preference for white children compared to those who identified as NA. Indeed, racism and stigmatization are identified as being many American Indians’ experiences in healthcare.
One major cause of the disparity is that blacks are not offered revascularization services at the same rate as whites. Holman et al (2011: 425) write:
Finally, given that patients’ decisions are necessarily confined to the options offered by their physicians, racial differences in limb salvage care might be attributable to differences in physician decision making. There are some data to suggest lower vein graft patency rates in black patients compared to whites.18,19 A patient’s race, therefore, may influence a vascular surgeon’s judgment about the efficacy of revascularization in preventing or delaying amputation. Similarly, a higher proportion of black patients in our sample were of low SES, which correlates with tobacco use,20-22 and we know that continued tobacco use increases the risk of lower extremity graft failure approximately three-fold.23 It is possible that a higher proportion of black patients in our sample were smokers who refused to quit, in which case vascular surgeons would be much less likely to offer them the option of revascularization. While Medicare data include an ICD-9 diagnosis code for tobacco use, the prevalence in our study sample was approximately 2%, suggesting that this code was grossly unreliable as a means of directly measuring and adjusting for tobacco use.
Smoking, of course, could be a reason why revascularization would not be offered to black patients. Though, as I have noted, smoking ads are more likely to be found in lower-income neighborhoods which increases the prevalence of smokers in the community.
With this, I am reminded of two stories I have seen on television programs (I watch Discovery Health a lot—so much so that I have seen most of the programs they show).
In Untold Stories of the ER, a man came in with his hand cut off. He refused medical care; he would not let the doctors attempt to sew his hand back on. Upon the police entering his home to check for evidence (where his hand was found), they searched his computer. It seems that he had a paraphilia called “acrotomophilia,” in which one is sexually attracted to people with amputations—though he wanted it done to himself, and indeed he had inflicted the wound on himself. After the doctor tried to reason with the man to have his hand sewed back on, the man would not let up. He did not want his hand sewed back on. I wonder if, years down the line, the man regretted his decision.
In another program (Mystery Diagnosis), a man had said that as a young boy, he had seen a single-legged war veteran amputee. He said that ever since then, he would do nothing but think about becoming an amputee. He lived his whole life thinking about it without doing anything about it. He then went to a psychiatrist and spoke of his desire to become an amputee. After some time, he eventually flew to Taiwan and got the surgery done. He, eventually, found happiness since he had done what he always wanted to.
While these stories are interesting in their own right, they also speak to something deep in the minds of individuals who mutilate themselves or seek surgery on otherwise healthy limbs.
Blacks are more likely than whites to receive amputations of affected limbs and are less likely to receive treatments that may be able to save the affected limb (Holman et al, 2011; Hughes et al, 2013; Minc et al, 2017; Massada et al, 2018). Physician bias is a large driver of this. So, to better public health, we must attempt to mitigate the biases that physicians hold that lead to these kinds of disparities in healthcare. Medical and other kinds of racism have led to this disparity in amputations between blacks and whites. Thus, to attempt to mitigate this disparity, blacks must get the preventative care needed to save the affected limb rather than immediately going to amputation. Thankfully, such disparities have been noticed and work is being done to decrease them.
So race is a factor in the decision on whether or not to amputate a limb, and blacks are less likely to receive revascularization services.
Unless you’ve been living under a rock since the new year, you have heard of the “coup attempt” at the Capitol building on Wednesday, January 6th. Upset at the fact that the election was “stolen” from Trump, his supporters showed up at the building and rushed it, causing mass chaos. But, why did they do this? Why the violence when they did not get their way in a fair election? Well, Michael Ryan, author of The Genetics of Political Behavior: How Evolutionary Psychology Explains Ideology (2020) has the answer—what he terms “rightists” and “leftists” evolved at two different times in our evolutionary history which, then, explains the trait differences between the two political parties. This article will review part of the book—the evolutionary sections (chapters 1-3).
EP and ideology
Ryan’s goal is to explain why individuals who call themselves “rightists” and “leftists” behave and act differently from one another. He argues, at length, that the two parties have two different personality profiles. This, he claims, is due to the fact that the ancestors of rightists and leftists evolved at two different times in human history. He calls these “Trump Island” and “Obama Island”—apt names, especially given what occurred last week. Ryan claims that what makes Trump different from, say, Obama is that his ancestors evolved in a different place at a different time compared to Obama’s ancestors. He further claims, citing the Stanford Prison Experiment, that “we may not all be capable of becoming Nazis, after all. Just some, and conservatives especially so” (pg 12).
In the first chapter he begins with the usual adaptationism that Evolutionary Psychologists use. Reading between the lines in his implicit claims, he is arguing that “rightists and leftists” are natural kinds—that is, they are *two different kinds of people.* He explains some personality differences between rightists and leftists and then says that such trait differences are “rooted in biology and governed by genes” (pg 17). Ryan then makes a strong adaptationist claim—that traits are due to adaptation to the environment (pg 17). What makes you and I different from Trump, he claims, is that our ancestors and his ancestors evolved in different places at different times where different traits would be imperative to survival. So, over time, different traits got selected-for in these two populations leading to the trait differences we see today. So each environment led to the fixation of different adaptive traits which explains the differences we see today between the two parties, he claims.
Ryan then shifts from the evolution of personality differences to… the evolution of the beaks of Darwin’s finches and Tibetan adaptation to high-altitude living (pg 18), as if the evolution of physical traits is anything like the evolution of psychological traits. His folly is assuming that such physical traits can be likened to personality/mental traits. The ancestors of rightists and leftists, like Darwin’s finches, Ryan claims, evolved on different islands in different moments of evolutionary time. They evolved different brains and different adaptive behaviors on the basis of the evolution of those different brains. Trump’s ancestors were authoritarian, and this island occurred early in human history, “which accounts for why Trump’s behavior seems so archaic at times” (pg 18).
The different traits that leftists show in comparison to rightists is due to the fact that their island came at a different point in evolutionary time—it was not recent in comparison to the so-called archaic dominance behavior portrayed by Trump and other rightists. Ryan says that Obama Island was more crowded than Trump Island where, instead of scowling, they smiled which “forges links with others and fosters reciprocity” (pg 19). So due to environmental adversity, they had a more densely populated “island”—in this novel situation, compared to the more “archaic” earlier time—the small bands needed to cooperate, rather than fight with each other, to survive. So this, according to Ryan, explains why studies show more smiling behavior in leftists compared to rightists.
Some of our ancestors evolved traits such as cooperativeness that aided the survival of all even though not everyone acquired the trait … Eventually a new genotype or subpopulation emerged. Leftist traits became a permanent feature of our genome—in some at least. (pg 19-20)
So the argument goes: Differences between rightists and leftists show us that the two did not evolve at the same points in time, since they show different traits today. Different traits were adaptive at different points in time, some more archaic, some more modern. Since Trump Island came first in our evolutionary history, those whose ancestors evolved there show more archaic behavior. Since Obama Island came later, those whose ancestors evolved there show newer, more modern behaviors. Due to environmental uncertainty, those on Obama Island had to cooperate with each other. The trait differences between these two subpopulations were selected for in the environments they evolved in, which is why they are different today. This, in turn, led to the “arguing over the future direction of our species. This is the origin of human politics” (pg 20).
Models of evolution
Ryan then discusses four models of evolution: (1) the standard model, where “natural selection” is the main driver of evolutionary change; (2) epigenetic models like Jablonka’s and Lamb’s (2005) in Evolution in Four Dimensions; (3) models where behavioral changes change genes; and (4) models where organisms have phenotypic plasticity, which is a way for the organism to respond to sudden environmental changes. “Leftists and rightists“, writes Ryan, “are distinguished by their own versions of phenotypic plasticity. They change behavior more readily than rightists in response to changing environmental signals” (pg 29-30).
In perhaps the most outlandish part of the book, Ryan articulates one of my now-favorite just-so stories. The passage is worth quoting in-full:
Our direct ancestor Homo erectus endured for two million years before going extinct 400,000 years ago when earth temperatures dropped far below the norm. Descendants of erectus survived till as recently as 14,000 years ago in Asia. The round head and shovel-shaped teeth of some Asians, including Vladimir Putin, are an erectile legacy. Archeologists believe erectus was a mix of Ted Bundy and Adolf Hitler. Surviving skulls point to a life of constant violence and routine killing. Erectile skulls are thick like a turtle’s, and the brows are ridged for protection from potentially fatal blows. Erectus’ life was precarious and violent. To survive, it had to evolve traits such as vigilant fearfulness, prejudice against outsiders, bonding with kin allies, callousness toward victims, and a penchant for inflexible habits of life that were known to guarantee safety. It had to be conservative. 34 Archeologists suggest that some of our most characteristic conservative emotions such as nationalism and xenophobia were forged at the time of Homo erectus. 35 (pg 33-34)
It is clear that Ryan is arguing that rightists have more erectus-like traits whereas leftists have more modern, Sapiens traits. “The contemporary coexistence of a population with more “modern” traits and a population with more “archaic” traits came into being” (pg 37). He is implicitly assuming that the two “populations” he discusses are natural kinds and with his “modern” “archaic” distinction (see Crisp and Cook 2005 who argue against a form of this distinction) he is also implying that there is a sort of “progress” to evolution.
Twin studies, it is claimed, show “one’s genetically informed psychological disposition” (Hatemi et al, 2014); they “suggest that leftists and rightists are born not made” while a so-called “consensus has emerged amongst scientists: political behavior is genetically controlled and heritable” (pg 43). But Beckwith and Morris (2008), Charney (2008), and Joseph (2009; 2013) argue that twin studies can do no such thing due to the violation of the equal environments assumption (Joseph, 2014; Joseph et al, 2015). Thus, Ryan’s claims about the “genetic origins” of political behavior rest on studies that cannot prove or disprove “genetic causation” (Shultziner, 2017)—and since the EEA is false we must discount “genetic causation” for psychological traits, not least because it is impossible for genes to cause/influence psychological traits (see argument (iii)).
The arguments he provides are a form of inference to the best explanation (IBE) (Smith, 2016). However, this is exactly how just-so stories are created: the conclusion is already in mind, and a story is then crafted, invoking “natural selection,” to explain how a trait came to fixation and why it exists today. The whole book is full of such adaptive stories: we are said to have our current traits, in their current distributions across the two “populations,” because those traits were adaptive at some point in our evolutionary history, which led the individuals bearing them to pass on more of their genes, eventually driving the traits to fixation (see Fodor and Piattelli-Palmarini, 2010).
Ryan makes outlandish claims such as “Rightists are more likely than leftists to keep their desks neat. If in the distant past you knew exactly where the weapons were, you could find them quickly and react to danger more effectively. 26” (pg 45). He talks about how “time-consuming and effort-demanding accuracy of perception [were] more characteristic of leftist cognition … leftist cognition is more reflective” while “rightist cognition is intuitive rather than reflective” (pg 47). Rightists’ greater tendency to endorse the status quo, he claims, is “an adaptive trait when scarce resources made energy management essential to getting by” (pg 48). Rightist language, he argues, uses more nouns since they are “more concrete, and anxious personalities prefer concrete to abstract language because it favors categorial rigidity and guarantees greater certainty,” while leftists “use words that suggest anxiety, anger, threats, certainty, resistance to change, power, security, and conformity” (pg 49). There is “a connection between archaic physiology and rightist moral ideology” (pg 52). Certain traits that leftists have were “adaptive traits [that] were suited to later stage human evolution” (pg 53). Ryan simply cites studies showing differences between rightists and leftists and then, with great leaps and mental gymnastics, tries to frame those findings as products of evolution in the two different time periods he describes in chapter 1 (Trump and Obama Island).
I have not read a single page in this book that does not contain some kind of adaptive just-so story attempting to explain, in evolutionary terms, certain differences in traits and behaviors between rightists and leftists. Ryan uses the same kind of “reasoning” that Evolutionary Psychologists use: have your conclusion in mind first, then craft an adaptive story to explain why the traits you see today are there. Ryan outright says that “[t]raits are the result of adaptation to the environment” (pg 17), which is a remarkably strong adaptationist claim to make.
His book ticks all of the usual EP boxes: strong adaptationism, just-so storytelling, and the claim that traits were selected for due to their contribution to fitness in certain environments at different points in time. The strong adaptationism shows, for example, where he says that erectus’ large brows “are ridged for protection from potentially fatal blows” (pg 34). Such claims imply that Ryan believes all traits are the result of adaptation and that they persist today because they all served a function in our evolutionary past. His arguments are, for the most part, evolutionary and follow the same patterns the usual EP arguments do (see Smith, 2016 for an explication of just-so stories and what constitutes them). Given the well-known problems with Evolutionary Psychology, his adaptive claims should be discounted.
The arguments that Ryan provides are not scientific; although they give off a veneer of science by invoking “natural selection” and adaptationism, they are anything but. The book is just a long-winded explanation of how and why rightists and leftists (liberals and conservatives) are different and why they cannot change, since these differences are supposedly “encoded” into our genome. The book’s implicit claim that rightists and leftists are two distinct natural kinds thus rests on the shaky foundation of EP, and therefore the arguments provided in the book will fail to sway anyone who does not already believe such fantastic storytelling masquerading as science. While he does discuss other evolutionary theories, such as the epigenetic account of Jablonka and Lamb (2005), the book remains largely and strongly adaptationist, using “natural selection” to explain why we still have the traits we do in different “populations” today.