
On the Development of Science in Europe

2150 words

Science is one of Man’s greatest methods. Being a social convention, science is done in conjunction with other people. Since a “method” is how goals are achieved, using a “scientific method” means achieving scientific goals. What we now know as “science” was formulated by F. Bacon in 1620 in his Novum Organum. He describes three steps: (1) collect facts; (2) classify the facts into certain categories; and (3) reject what does not cohere with the hypothesis and accept what does. But before F. Bacon espoused what many hold to be the bedrock of modern science, there was another Bacon who developed similar ideas.

Some of the very beginnings of the practice now known as “science” can be attributed to Roger Bacon. R. Bacon is even called “Britain’s first scientist” (Sidebottom, 2013). R. Bacon developed his thought on the basis of the Islamic scholar Ibn al-Haytham’s empirical method. The principles of what is now known as science (or should I say scientism?) were first expressed by R. Bacon in his thirteenth-century Opus Majus:

Having laid down fundamental principles of the wisdom of the Latins so far as they are found in language, mathematics, and optics, I now wish to unfold the principles of experimental science, since without experiment nothing can be sufficiently known. There are two ways of acquiring knowledge, one through reason, the other by experiment. Argument reaches a conclusion and compels us to admit it, but it neither makes us certain nor so annihilates doubt that the mind rests calm in the intuition of truth, unless it finds this certitude by way of experience. Thus many have arguments toward attainable facts, but because they have not experienced them, they overlook them and neither avoid a harmful nor follow a beneficial course. Even if a man that has never seen fire, proves by good reasoning that fire burns, and devours and destroys things, nevertheless the mind of one hearing his arguments would never be convinced, nor would he avoid fire until he puts his hand or some combustible thing into it in order to prove by experiment what the argument taught. But after the fact of combustion is experienced, the mind is satisfied and lies calm in the certainty of truth. Hence argument is not enough, but experience is. (Quoted in Sidebottom, 2013; Sidebottom’s emphasis)

This seems to me to be a proto-view of “scientism”—the claim that we can only gain knowledge through our five senses. When R. Bacon said that “argument is not enough, but experience is”, this was a clear predecessor of current scientistic thinking, on which the a priori is kept separate from the a posteriori—that is, empirical evidence is irrelevant to a priori (deductive) arguments. In any case, R. Bacon’s writings on this matter were partly the catalyst for Europe’s scientific revolution. You can also see how R. Bacon distinguished between deductive and inductive arguments/thinking—a distinction which would come into play in 1600s Europe. Lastly, there is no “either-or” here, as both modes of thinking (deduction and induction) are more than sufficient for generating knowledge.

Deductive reasoning (which was pioneered by Rene Descartes) is where we attempt to see the implications of information that we already know. For example, one can construct an a priori argument—an argument that provides justification for thinking that p (a proposition) is true based on thinking about or understanding p. If all of the premises in the argument are true, then the conclusion necessarily follows. On the other hand, inductive reasoning (pioneered by R. and F. Bacon) is where we attempt to locate patterns in natural phenomena while attempting to predict what will occur under controlled conditions (or amassing observations to draw specific conclusions). For example, a scientist can observe a phenomenon and then predict what will occur under the controlled environment of an experiment. The conclusion of an inductive argument is not certain (as it is in a deductive argument); it is only a prediction of what may be. Inductive and deductive reasoning need not be at odds, though (lest we fall into the trap of scientism—the claim that all knowledge is derived from the five senses).

F. Bacon argued that attempting to falsify (that is, test) and verify hypotheses is a group effort. That is, science is a social convention. Science is predicated on prediction—predicting the future from what we currently know under a set of controlled conditions (the scientific experiment). Basically, a scientific prediction is a claim about an event that has yet to transpire. So the test of an explanatory theory is whether or not it is successful at predicting novel facts (facts that were unknown before the formulation of the hypothesis). And if a hypothesis generates a novel fact-of-the-matter, then we are justified in believing the hypothesis, since the only way the prediction would come to pass is either that (1) the hypothesis is true or (2) chance. If the same result keeps being generated, then we are justified in stating that the prediction derived from the hypothesis is not due to chance, and so one can be justified in believing the scientific hypothesis. This is what is known as “predictivism.” But there is a danger we must be wary of—we must take care not to retrofit facts in order to save a pet theory. A theory has to have some reach outside of what is already known; this is where the generation of novel facts comes into play.

Even before F. Bacon, the scientific method had predecessors (who came after R. Bacon) in the works of Galileo, Copernicus, Tycho, and Kepler. Going against the accepted wisdom of the day, Copernicus claimed that the sun—and not the earth—was the center of the solar system, and that the earth rotates on its axis and revolves around the sun along with all of the other planets. Copernicus did this using only his naked eye, as the telescope was not invented until 1608 (Galileo built his own shortly thereafter). This came to be known as Copernicus’ “heliocentric” theory—the theory that the sun, and not the earth, is at the center of the solar system. During the European middle ages, people were more religious (even though science was just starting to blossom), and since they were religious they believed in God and thought that Man was special: He is the ‘highest’ organism, has dominion over all animals, and the planet that God created for him is the center of it all.

But when Galileo pointed his telescope at the heavens, he confirmed Copernicus’ hypothesis that the planets revolve around the sun; the sun does not revolve around the earth. He discovered this by observing the moons of Jupiter (what he called the “4 Medicean stars”), which he mapped in the night sky. Galileo’s obtaining and analyzing of data is seen as science “before science”, as he utilized the methods of observation and prediction that scientists use today (and which were also espoused in previous centuries).

Tycho was not like Copernicus; instead of believing in what we currently know about the solar system, he—using observation—suggested that the planets orbit the sun while the whole system revolves around the earth. So Tycho could account for the motions of the different planets without upsetting the Ptolemaic order in which the earth is the center of the system. Then, in the late 1590s, Tycho took all of the data that he had amassed over the years and became the court astronomer to the Holy Roman Emperor. This is where Tycho met Johannes Kepler. Kepler believed that everything that was created had been created according to mathematical laws. After Tycho died, Kepler inherited Tycho’s position and all of his notes and data. Tycho, being an Aristotelian, believed that the planets had circular orbits and that planetary motion was uniform. But Kepler showed that the planets have elliptical orbits (his first law) and that planetary speed varies as a function of distance from the sun (his second law).

Today we have a four-step scientific method which is somewhat similar to what R. and F. Bacon, Galileo, and Copernicus used: (1) observe; (2) formulate a hypothesis to explain the observation; (3) predict effects using the hypothesis; and (4) carry out experiments to see if the predicted effects hold. That is, of course, very simplistic. There is no one “scientific method”, although we can identify ways in which scientists use similar methods to derive their conclusions based on their hypotheses and experiments. If you think about it, there are numerous different fields of science, so why should there be “one true scientific method”?

Copernicus, Galileo, and R. and F. Bacon all paved the way for the modern world, creating and utilizing tools and modes of thought that are still in use today. Copernicus and Galileo overturned centuries-old “knowledge” which was based on unfounded assumptions and replaced it with a method in which one has to observe a thing, so that one’s claims have something to do with “reality.” The observations of Copernicus and Galileo led to their being seen as heretics, since they went against the Church’s teachings, and so they were ostracized. As can be seen throughout history, developing something new to further knowledge and challenge current-day hierarchies may have seemed like a bad idea at the time (to Galileo), but in the end the truth won out: he used the principles of science and he learned a new fact.

Newton was interested in optics, mathematics, and gravity. Newton showed that white light was composed of different-colored rays, which refuted Descartes’ belief that color was a secondary quality produced by the speed of particulate rotation and that light itself was white. He also invented integral and differential calculus. Lastly, and perhaps most famously, there was his theory of gravity. Why did the apple fall straight down and not, say, sideways? Why, because it was drawn to the earth. (Newton did not speak on what causes gravity.) It was Edmund Halley (of Halley’s comet fame) who asked Newton whether there was any mathematical proof for the claim that the planets had elliptical orbits, a question which prompted Newton to write the Principia.

But what does it mean to “explain a phenomenon scientifically”? A “phenomenon” is an observable thing that happens. Science deals with nature, with things that occur in nature. “What happened?” and “Why did it happen?” are two questions an inquisitive mind may ask. The scientist asks questions, and so in a way creates puzzles for himself—which is just what a “scientist” is to Kuhn (1996: 144): “a solver of puzzles, not a tester of paradigms.” So if we are attempting to explain a phenomenon scientifically, we are attempting to solve a puzzle—how and why something happens, for example.

What can be seen today—just as it could be seen centuries ago with Galileo and Copernicus—is that science is a social institution that is driven by politics, contrary to those who claim that scientists are “objective observers in a search for truth.” The biases of scientists—and of the society they are in—influence both their research questions AND the conclusions they draw from their research. Their own prejudices and preconceptions color what they want to research and the conclusions they reach. If science is a human tool, then science will be used for whatever humans want it to be used for. Social institutions can definitely attempt to stymie certain forms of research (as happened to Galileo, and NOT to hereditarians in the 1900s to the present day; see Jackson and Winston, 2020). So we can see how science can be used to confirm or disconfirm certain things (i.e., people’s preconceived notions about the world). Thomas Kuhn said that “The answers you get depend on the questions you ask.” And if you think about certain questions that certain people who fancy themselves scientists may ask, then quite obviously the conclusion (the answer) is already known, and they are merely trying to justify their own prejudices and a priori beliefs (eg hereditarians).

Using the methods developed by Francis and Roger Bacon (no relation), we have achieved what our ancestors would have thought impossible—they would have called much of what we do today “magic”, since they would not understand—they would not have the frame of reference—that what they were seeing is natural, coming from the natural world. The modern world needed the scientific revolution that came from Europe; without it (along with what was invented at the time, and the thought that would later become the bedrock for today’s inventions and scientific thought), the world would be a different place. What the so-called ‘heretics’ of the time showed was perseverance in getting what they thought to be the truth out no matter the cost, and with these ways of thinking and seeing the world, they changed it.

Evolutionary Progress

2750 words

Phylogeny-reading is hard for some. So hard, in fact, that there are numerous papers in the literature that correct the many misunderstandings students bring to reading these trees (eg Crisp and Cook, 2004; Baum, Smith, and Donovan, 2005; Gregory, 2008; Omland, Cook, and Crisp, 2008). Some read certain trees as showing a type of “evolutionary progress” in the history of life, from “primitive” to more “advanced” life forms. Notions of “progress”—both in society and in evolution—continue even to this day (see Bowler, 2021 for a great discussion). The idea is that if a lineage hasn’t “branched” on the tree, it is “less evolved” than organisms that “branched” more. This is illustrated wonderfully by PumpkinPerson’s misunderstanding, where he claims:

If you’re the first branch, and you don’t do anymore branching, then you are less evolved than higher branches

This conceptual confusion comes from his idea that more branching = more evolution, and therefore that more branching equals “more evolved” organisms. But unless an organism is extinct, all organisms have been evolving for the same amount of time, which defeats his claim here.

Such fantastical claims of “evolutionary progress” in humans come from JP Rushton who, although he didn’t explicitly state it (Lynn did), said he “had alluded to similar ideas in previous writings” (Rushton, 1997: 293). But Rushton (1992) was more explicit—he said that “One theoretical possibility is that evolution is progressive and that some populations are more “advanced” than others.” This is in reference to his long-debunked theory that Asians are more K-selected than whites, who are in turn more K-selected than blacks, dubbed “r/K selection theory” or “Differential K theory.” But I’m not aware of Rushton wrongly inferring this from tree-reading; that’s a PP thing.

So Rushton, like PP, assumed that those groups that emerged after older groups are more “evolutionarily advanced” than others. But although Rushton published further editions of his book after Gould’s (1996) Full House, in which Gould refutes the claim that evolution is “progressive”, Rushton is strangely silent on the matter. In any case, any form of “progress” in evolution—if it did exist—would be upended by decimations leading toward species extinction.

Progressionists think that evolution is both directional and, obviously, progressive. That is, there seems to be a goal of getting more and more complex, or at least of attaining bigger body size, and this is “good.” So there is a kind of inherent, unspoken “value” one has to attach to views about “evolutionary progress.” For instance, Bonner (2015: 1187) states that “If we look at evolution from a great distance, we see a progression.” For example, see Bonner’s (2019) Figure 1, where he shows an apparent increase in body size which can be said to be “progression.” This can, though, be explained passively—that is, by a non-directedness of body size in evolution—as Gould (2011: 162) writes, using his drunkard’s walk analogy (Gould’s emphasis):

Given these three conditions, we note an increase in size of the largest species only because founding species start at the left wall, and the range of size can therefore expand in only one direction. Size of the most common species (the modal decade) never changes, and descendants show no bias for arising at larger sizes than ancestors. But, during each act, the range of size expands in the only open direction by increase in the total number of species, a few of which (and only a few) become larger (while none can penetrate the left wall and get smaller). We can say only this for Cope’s Rule: in cases with boundary conditions like the three listed above, extreme achievements in body size will move away from initial values near walls. Size increase, in other words, is really random evolution away from small size, not directed evolution toward large size.
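
Gould’s left-wall model is easy to see in a toy simulation. The sketch below is a minimal illustration of my own (the number of lineages, step size, and wall position are made-up assumptions, not values from Gould): every lineage starts at a wall of minimal size and takes unbiased random steps, reflecting off the wall. The maximum drifts rightward while the mode stays pinned at the wall, which is exactly the passive, non-driven “trend” Gould describes.

```python
import random

random.seed(42)

WALL = 1.0          # left wall of minimal viable size (illustrative value)
N_LINEAGES = 1000   # founding lineages, all starting at the wall
N_STEPS = 500       # rounds of undirected change

sizes = [WALL] * N_LINEAGES
for _ in range(N_STEPS):
    for i in range(N_LINEAGES):
        step = random.uniform(-0.1, 0.1)       # no bias toward larger size
        sizes[i] = max(WALL, sizes[i] + step)  # lineages reflect off the wall

near_wall = sum(s < WALL + 0.5 for s in sizes) / N_LINEAGES
print(f"largest lineage: {max(sizes):.2f}")        # drifts right over time
print(f"fraction near the wall: {near_wall:.0%}")  # the mode never moves
```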

Such notions of “evolutionary progress” do date back to Aristotle, who classified “lower” and “higher” organisms, as Rushton rightly notes. The modern view of the scala naturae is that there is a steady line from less complex to more complex organisms, with humans at or near the end. Bonner, in his 1988 book, does argue for “higher and lower” species, but in his newer 2013 Randomness in Evolution he argues that evolutionary change is mostly passive, or non-driven. Rushton (1997: 294) cites Bonner (1988: 6) saying that it is acceptable to use the terms “higher” and “lower” organisms. But Diogo et al (2013: 16) write:

There are two main problems with this latter statement. Firstly, there are many examples of how older animals (from ‘lower’ strata) are often considered, in various aspects of their biology and physiology, more complex than more recent ‘higher’ animals (from ‘higher’ strata). … Secondly, and perhaps more important in the context of the present review, in the original idea of scala naturae the term ‘higher’ taxa referred to humans and to the animals that are anatomically more similar to humans, and this is still the way in which this term is used by many authors nowadays (reviewed by Diogo & Wood, 2012a, 2013)

Although humans went through more transitions than other primates, this did not result in more muscles than in other primates: “there is effectively no general trend to increase the number of muscles at the nodes leading to hominoids and to modern humans” (Diogo et al, 2013: 18). Thus, by the tortured logic of progressionists, humans are less evolved than other primates.

Using PP’s tortured logic on tree-reading, I asked him “Who is more evolved?” regarding a tree from Strassert et al (2021).

PP then said that “new research inspires fresh look at evolutionary progress.” But some confusions from PP must first be noted. He “predicted” Amorphea to be “less evolved” than Diaphoretickes; but humans are in Amorphea, therefore humans—to PP—are less evolved than plants. PP then said that Wiki says that Amorphea is “unranked”—but all “unranked” means here is that the classification is not a part of the traditional Linnean taxonomy. PP likes his simpler trees, where he can get the “conclusion” that he hopes for—that there are more and less evolved organisms, which conforms to his a priori biases about the nature of evolution. He then said that Amorphea does not appear to be a widely recognized taxon… but it has been noted that “Amorphea is robustly supported in most phylogenomic analyses” (Burki et al, 2019: 7), while Amorphea and Diaphoretickes form two Domains in Eukaryotes (Adl et al, 2019). So, it seems, Amorphea IS a widely accepted supergroup.

The philosopher of biology and mind Jianhui Li (2019) argues against many of the arguments Gould forwarded in Wonderful Life and Full House (Gould, 1989, 1996). Attempting to refute one of Gould’s arguments—that belief in evolutionary progress stems from human arrogance—Li (2019) construes Gould as objecting to the idea of evolutionary progress on the basis that such “a belief in evolutionary progress may cause human arrogance and racism and even inequality among different species, and arrogance, racism, and inequality are morally wrong; thus, the idea of evolutionary progress is wrong. Such an argument is obviously untenable” (Li, 2019: 301). The thing is, Gould is not incorrect in his argument that a view of evolutionary “progress” (social Darwinism) would lead to racism and to the thought that we hold dominion over other animals. Social Darwinistic thought was indeed used to enact racist policies (Pressman, 2017), and this thought was based on a view of progress in evolution. (Rushton’s attempted revival of the scala naturae in humans can, of course, be seen in Gould’s eyes as using evolution to justify certain types of attitudes—in this case, racist attitudes—which are due to certain kinds of thought in society.)

In attempting to refute Gould’s next argument—that value terms have no use in evolution—Li tries to show that, going off the previous argument, Gould himself used value judgments in trying to show that belief in evolutionary progress would lead to racist and speciesist views. In a nutshell, Li says that evolutionary progress is, quoting Ayala, “directional change toward the better.” But, as Gould always argued, these kinds of value judgments do not make any sense. What is “better” in one environment may, in comparison to another environment, be “worse” than another so-called adaptation. I have even said in the past that the terms “superior” (higher) and “inferior” (lower) only make sense in light of anatomy, where the head is superior to the foot and the foot is inferior to the head.

Li then discusses the possibility that “natural selection” can serve as the basis for evolutionary progress, contra Gould. Gould did say that, if progress in evolution were real, any kind of progress would be wiped out during mass extinction events. Invoking Gould’s punctuated equilibrium theory, Li says that the theory posits mass extinctions as well as mass explosions (rapid speciation), and that those organisms that do not go extinct continue on to show forms of progress. Li then says that certain traits are not only local adaptations but non-local adaptations, since they can be seen to be useful in all environments. But that certain traits are useful in all environments does not mean that evolutionary progress is real; it only means that, at that time and place for that organism, the trait is useful and will persist and, if it becomes non-useful, the trait will be lost from the lineage. It is only, like most everything, based on context. Li says next that although a replaying of life’s tape would lead to unpredictability as regards what kinds of animals evolve, we do know that there would be complex organisms. The emergence of an organism similar to humans would be an inevitability, says Li, which would mean that evolution is both directional and driven towards complexity. But, as Gould, McShea, and Bonner argue, evolution is a series of random, non-driven processes that, through our biased lens, looks like “progress.”

Li then tries to show that Gould’s drunkard’s walk argument is false. The argument goes: Imagine a drunk person leaving a bar. Now imagine a wall and a gutter. After being kicked out of the bar, the drunk has the bar’s wall on one side and the street gutter on the other. Although the drunkard has no intention of going anywhere, since he is extremely drunk, by statistical chance he will eventually end up in the gutter after bouncing off the wall, staggering near the gutter, and wandering everywhere in between. Using this argument by analogy, Gould likens the evolutionary process to the drunkard’s walk. Li then tries to argue that Gould’s rejection of adaptationism and natural selection is the wrong way to go—but Fodor and Piattelli-Palmarini (2009; 2013) argue that “natural selection” is not and cannot be a mechanism, since there are no laws of selection for trait fixation and no mind behind the process of selection. So this argument from Li, too, fails.

Lastly, Li attempts to take down what Gould terms his “modal bacter” argument in Full House. Bacteria are some of the simplest organisms on earth, while humans are some of the most complex, says Li. He also says that Gould does not deny that complexity has increased since the dawn of bacteria—another fact. But upon a close reading of Full House, it can be appreciated that the evolution of complexity is not driven; it is passive and non-driven. Li (2019: 307-308) says that “although bacteria rule the earth, human beings are higher than them, not only because human beings have more complex organic structures but also because human beings have abilities that are higher than those of bacteria. The evolutionary history of life from bacteria to humans is a history of constant progress.” But what Li fails to realize is that Gould’s modal bacter wonderfully illustrates his case: Life began at the left wall of minimal complexity, the bacteria sit right next to this left wall, and random “walks” dictate the evolution of complexity. The bacterial mode never alters; the distribution of complexity merely becomes increasingly skewed toward the right, away from the modal bacter, over evolutionary time. As Gould (2011: 170) rightly asks, “can we possibly argue that progress provides a central defining thrust to evolution if complexity’s mode has never changed?” And Gould (2011: 171) swiftly takes care of the claim that the right tail shows progress:

A claim for general progress based on the right tail alone is absurd for two primary reasons: First, the tail is small and occupied by only a tiny percentage of species (more than 80 percent of multicellular animal species are arthropods, and we generally regard almost all members of this phylum as primitive and non progressive). Second, the occupants of the extreme right edge through time do not form an evolutionary sequence, but rather a motley series of disparate forms that have tumbled into this position, one after the other. Such a sequence through time might read: bacterium, eukaryotic cell, marine alga, jellyfish, trilobite, nautiloid, placoderm fish, dinosaur, saber-toothed cat, and Homo sapiens. Beyond the first two transitions, not a single form in this sequence can possibly be a direct ancestor of the next in line.

Li, it seems, is confused about the modal bacter argument—it is an inevitability that more complex organisms (the right tail) would arise after life began at the left wall of minimal complexity, but this does not denote progress in the Darwinian sense; it only denotes that evolutionary change is random. What this does show, as Gould argued, is that our anthropocentric biases lead us to the conclusion that we are “higher” than other animals, on the basis of our accomplishments.

Using Gould’s arguments in Full House, I constructed this syllogism with the knowledge that “progress” can be justified if and only if “more advanced” organisms outnumber “less advanced” organisms:

P1 The claim that evolutionary “progress” is real and not illusory can only be justified iff organisms deemed more “advanced” outnumber “lesser” organisms.
P2 There are more “lesser” organisms (bacteria/insects) on earth than “advanced” organisms (mammals/species of mammals).
C Therefore evolutionary “progress” is illusory.
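
The syllogism is a one-step application of the biconditional in P1; for readers who like it formal, here is a minimal sketch in Lean (my own hypothetical encoding; the proposition names are illustrative):

```lean
-- Progress:  evolutionary "progress" is real and not illusory
-- Outnumber: "advanced" organisms outnumber "lesser" organisms
theorem progress_illusory (Progress Outnumber : Prop)
    (P1 : Progress ↔ Outnumber)  -- P1: justified iff advanced outnumber lesser
    (P2 : ¬Outnumber)            -- P2: "lesser" organisms in fact outnumber "advanced" ones
    : ¬Progress :=
  fun h => P2 (P1.mp h)          -- C: assuming Progress contradicts P2
```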

York and Clark, in their article “Stephen Jay Gould’s Critique of Progress” (2011), put Gould’s opposition to evolutionary and social progress well:

However, Gould also focused on contingency and the critique of progress to make a larger point about science and society. The belief in progress is a prime example of how social biases can distort science. Gould aimed to show that the natural world does not conform to human aspirations. Nature does not have human meaning embedded in it, and it does not provide direction to how humans should live. We live, instead, in a world that only has meaning of our own making. Rather than viewing this situation as disheartening, Gould saw it as liberating because it empowers us to make our own purpose. Gould stressed, similar to Karl Marx and other radical thinkers, that we make our own history and that the future is open.

Conclusion

Hold-outs for the claim that evolution is progressive are rare in contemporary biology. Rushton was one of the last big names to try to argue that evolution is progressive. (These arguments are discussed here, here, and here.) Although Bonner used to be a progressionist, he changed his view in 2013, agreeing with Gould and McShea that evolution is random and non-driven—that it is passive. Diogo et al (2013) showed that there is no increase in muscles at the nodes leading towards Homo sapiens, so “humans are relatively simplified primates” (Diogo et al, 2013: 18). Li (2019) makes some of the best attempts at taking down Gould’s anti-progress arguments, but he comes up short. Evolution just is not progressive, no matter who wants it to be (Ruse, 1996).

All in all, the concept of progress in evolution is trending away from being touted as reality. As we learn more and more about the passive, non-driven evolutionary process, we will put to rest such simplistic notions as “more or less evolved” and “superior and inferior” organisms. All organisms that are not extinct have undergone the same amount of evolutionary time. This does not, of course, speak against the fact that MORE evolutionary change could happen in certain species in certain timespans, but this DOES NOT mean that the species that undergoes more change is “more evolved” or “superior.” Gould, contrary to some, has definitively and convincingly put these kinds of anthropocentric arguments to bed. By conflating value judgments with evolution, we lose the beauty of what evolution really is—random, non-driven change that has produced all of the biological wonder we see around us today.

Three Recent Environmentalist Books that Perpetuate the “IQ-As-A-Measure-of-Intelligence” Myth

2500 words

The hereditarian-environmentalist debate has been ongoing for over 100 years. In this time frame, many theories have been forwarded to explain the disparities between individuals and groups. In one camp you have the hereditarians, who claim that any non-zero heritability for IQ scores means that hereditarianism is true (eg Warne, 2020); in the other camp you have the environmentalists, who claim that differences in IQ are explained by environmental factors. This debate, which has been raging since the 1870s when Francis Galton coined the “nature-nurture” dichotomy, still rages today. Unfortunately, the environmentalists lend credence to IQ-ist claims that, however imperfect, IQ tests are “measures” of intelligence.

Three recent books on the matter are A Terrible Thing to Waste: Environmental Racism and its Assault on the American Mind (Washington, 2019), Making Kids Cleverer: A Manifesto for Closing the Advantage Gap (Didau, 2019), and Young Minds Wasted: Reducing Poverty by Enhancing Intelligence in Known Ways (Schick, 2019). All three of these authors are clearly environmentalists and they accept the IQ-ist canard that IQ—however crudely—is a “measure” of “intelligence.”

There are, however, no sound arguments that IQ tests “measure” intelligence, and there is no response to the Berka/Nash measurement objection to the claim that IQ tests are a “measure”, since no hereditarian can articulate the specified measured object, the object of measurement, and the measurement unit for IQ. There is, also, no accepted definition or theory of “intelligence”. So how can we say that some “thing” is being “measured” with a certain instrument if we have not satisfactorily defined what we claim to be measuring with a well-accepted theory of what we are measuring (Richardson and Norgate, 2015; Richardson, 2017), nor specified the measured object, object of measurement, and measurement unit (Berka, 1983a, 1983b; Nash, 1990; Garrison, 2003, 2009) for the construct we want to measure?

But the point of this article is that environmentalists push the hereditarian canard that IQ is equal to, however crudely, intelligence. And though the authors do have great intentions and are pointing to things that we can do to attempt to ameliorate differences between individuals in different environments, they still lend credence to the hereditarian program.

A Terrible Thing to Waste

Washington (2019) discusses the detrimental effects (and the possible effects of other pollutants) of lead, mercury, and other metals that are more likely to be found in low-income black and “Hispanic” communities, along with iodine deficiencies. These environmental exposures retard normal brain development. But one is not justified in claiming that IQ tests are measures of “intelligence”—at best, as Washington (2019) argues, we can claim that they are indexes of the effects of environmental polluters on the brains of developing children.

Intelligence is a product of environment and experience that is forged, not inherited; it is malleable, not fixed. (Washington, 2019: 20)

While it is true, as Washington claims, that we can mitigate the problems caused by toxic metals and by the lack of nutrients pertinent to brain development by addressing the problems in these communities, it does not follow that IQ is a “biological” thing. Yes, IQ is malleable (contra hereditarian claims), and Headstart does work to improve life outcomes, even though such gains “fade out” after the child leaves the enriched environment. Lead poisoning, for example, has led to a loss of 23 million IQ points per year (Washington, 2019: 15). But I am not worried about lost IQ points (even though, by saving the IQ points from being lost, we would be directly improving the environments that lead to such losses). I am worried about the detrimental effects of these toxic chemicals on the developing minds of children; lost IQ points are merely an outcome of this effect. At best, IQ tests can track cognitive damage due to pollutants in these communities (Washington, 2019), but they do NOT “measure” intelligence. (Also note that lead exposure is associated with higher rates of crime, so this is yet another reason to reduce lead exposure in these communities.)

Speaking of “measuring intelligence”, Washington (2019: 29) noted that Jensen (1969: 5) stated that while “intelligence” is hard to define, it can be measured… But how does that make any sense? How can you measure what you can’t define? (See arguments (i), (ii), and (iii) here.)

Big Lead, though, “actively encouraged landlords to rent to families with vulnerable young children by offering financial incentives” (Washington, 2019: 55). This was in reference to the researchers who studied the deleterious effects of lead exposure on developing humans: “The participation of a medical researcher, who is ethically and legally responsible for protecting human subjects, changes the scenario from a tragedy to an abusive situation. Moreover, this exposure was undertaken to enrich landlords and benefit researchers at the detriment of children” (Washington, 2019: 55). We realized that lead had deleterious effects on development as early as the 1800s (Rabin, 2008), but Big Lead pushed back:

[Lead Industries Association’s] vigorous “educational” campaign sought to rehabilitate lead’s image, muddying the waters by extolling the supposed virtues of lead over other building materials. It published flooding guides and dispatched expert lecturers to tutor architects, water authorities, plumbers, and federal officials in the science of how to repair and “safely” install lead pipes. All the while the [Lead Industries Association] staff published books and papers and gave lectures to architects and water authorities that downplayed lead’s dangers. (Washington, 2019: 60)

In any case, Washington’s book is a good read on the effects of toxic metals on brain development; while we must do what we can to ameliorate the effects of these metals in low-income communities, IQ increases are merely a side effect of removing the toxic metals from these communities.

Making Kids Cleverer

Didau (2019: 86) outright claims that “intelligence is measured by IQ tests”—pushing the hereditarian view that IQ tests “measure intelligence.” (A strange claim, since on pg 95-96 he says that IQ tests are “a measure of relative intelligence.”)

In the book, Didau accepts many hereditarian premises—like the claims that IQ tests measure intelligence and that heritability can partition genetic and environmental variation. Further, Didau says in the Acknowledgements (pg 11) that Ritchie’s (2015) Intelligence: All That Matters “forms the backbone for much of the information in Chapters 3 and 5.” So we can see how the hereditarian IQ-ist stance colors his view on the relationship between “IQ” and “intelligence.” He also makes the bald claims that “intelligence is a good candidate for being the best researched and best understood characteristic of the human brain” and that it’s “also probably the most stable construct in all psychology” (pg 81).

Didau takes the view that intelligence is both a way to acquire knowledge and a matter of what type of knowledge we know (pg 83)—basically, it’s what we know and what we do with what we know, along with ways to acquire said knowledge. What one knows is obviously a product of the environment one grows up in, and what we do with the knowledge we have is similarly down to environmental factors. Didau states that “Possibly the strongest correlations [with IQ] are those with educational outcomes” (pg 92). But Didau, it seems, fails to realize that this strong correlation is built into the test, since IQ tests and scholastic achievement tests are different versions of the same test (Schwartz, 1975; Richardson, 2017).

In one of the “myths of intelligence” he discusses (Myth 3: Intelligence cannot be increased, pg 102), Didau uses an analogy similar to my own. In an article on “the fade-out effect“, I argued that if one goes to the gym, works out, and gets bigger and then stops going, then by the fade-out logic we would have to say that going to the gym was useless, since once one leaves the enriched environment one loses the gains. The direct parallel between Headstart and my gym/muscle-building analogy, then, is clear.

In another myth (Myth 4: IQ tests are unfair), Didau claims that if you get a low IQ score then you are probably unintelligent, while if you get a high one, it means you know the answers to the questions—which is obviously true. Of course, to know the answers to the questions (and to be able to reason out the answers to some of them), one must have been exposed to the knowledge that is contained in the test, or one won’t score high.

We can reject the use of IQ scores by racists, he says, who would use them to justify the superiority of their own group and the inferiority of “the other”, all while not rejecting that IQ tests are valid (where have they been validated?). “Something real and meaningful” is being measured by these tests, and we have chosen to call this “intelligence” (pg 107). But we can say this about anything. Imagine having a test Y for X, where we don’t really know what X is, nor whether Y really measures it. Because the results accord with our a priori biases, and since we have constructed Y to get the results we think we should see, we assume that we are measuring what we set out to measure—all without the basic requirements of measurement, and even though we have no idea what X is.

While Didau does seem to agree with some of the criticisms I’ve levied at IQ tests over the years (cross-cultural testing is pointless; IQ scores can be changed), he is, in effect, pushing a hereditarian IQ-ist agenda cloaked as environmentalism. He contradicts himself by saying that intelligence is measured by IQ tests and then saying what he says later about them—and I don’t think one should assume that he merely meant they are an “imperfect measure” of intelligence. (Imagine an imperfect measure of length—would we still be using it to build houses if it were only somewhat accurate?) Didau also agrees with the g theorists that there is a “general cognitive ability.” He further agrees with Ritchie and Tucker-Drob (2018) and Ceci (1996) that schooling can and does increase IQ scores (as summer vacations show that IQ scores do decrease without schooling) (see Didau, 2019: Chapter 5). So while he does agree that IQ isn’t static and that education can and does increase it, he is still pushing a hereditarian IQ-ist model of “intelligence”—even though, as he admits, the concept of “intelligence” has yet to be satisfactorily defined.

Young Minds Wasted

The last book, Young Minds Wasted (Schick, 2019), dispenses with many hereditarian myths (such as the myth of the normal distribution, see here), but Schick still—through an environmentalist lens—accepts the claim that IQ tests test intelligence. While he masterfully dispenses with the “IQ is normally distributed” claim (see the discussion on pg 180-186), the tagline of the book is “reducing poverty by increasing intelligence, in known ways.”

The intelligence of the poor, he says, is wasted by an intelligence-depressing environment. We can see the parallels here with Washington’s (2019) A Terrible Thing to Waste. Schick claims that “the single most important and widespread cause of poverty is the environmental constraints on intelligence” (pg 12, Schick’s emphasis). Like Washington, Schick says that a whole slew of chemicals and toxins decrease IQ (a truism) and, by identity, intelligence. Of course, living in a deprived environment where one is exposed to different kinds of toxins and chemicals can retard brain development and lead to deleterious life outcomes down the line. But this fact does not mean that intelligence is being measured by these tests; it only shows that there are environments that can impede brain development, which is then mirrored in a decrease in IQ scores.

Schick says that as intelligence increases, societal problems decrease. But, as I have argued at length, this relationship is due to the way the tests themselves are constructed, which involves the a priori biases of the tests’ constructors. We can construct a test with any kind of distribution we want; the items emerge arbitrarily from the heads of the test’s constructors, who then try them out on a standardization sample (Jensen, 1980: 71), looking for the results they want and assume a priori. It follows that what we accept as truisms regarding the relationship between IQ and life events could be turned on their heads, since there is no logical reason to accept one set of items over another, other than that one set upholds the test constructor’s previously-held biases. A toy simulation of this point is sketched below.
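
The sketch is my own illustration, not from any of the books; the item counts and pilot pass rates are made-up assumptions, and items are treated as statistically independent for simplicity. The same number of items yields a bell curve or a ceiling-heavy curve depending solely on which items the constructor keeps, so the shape of the score distribution is a design choice, not a discovery:

```python
import numpy as np

rng = np.random.default_rng(0)
N_PEOPLE = 10_000

def administer(pass_rates):
    """Score each test-taker: each item is passed independently with its
    pilot pass rate; a person's score is the number of items passed."""
    passed = rng.random((N_PEOPLE, len(pass_rates))) < np.asarray(pass_rates)
    return passed.sum(axis=1)

# Keep 50 mid-difficulty items -> an approximately normal score distribution
bell = administer(np.linspace(0.3, 0.7, 50))
# Keep 50 easy items from the same pool -> a skewed, ceiling-heavy distribution
skewed = administer(np.linspace(0.85, 0.99, 50))

for name, scores in [("mid-difficulty items", bell), ("easy items", skewed)]:
    print(f"{name}: mean={scores.mean():.1f}, "
          f"share scoring 45+ of 50: {np.mean(scores >= 45):.1%}")
```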

Schick does agree that “intelligent behavior” can change throughout life, based on one’s life experiences. But “Human intelligence is based on several genetically determined capabilities such as cognitive functions” (pg 39). He also claims that genetic factors determine, while environmental factors influence, cognitive functions, memory, and universal grammar.

Along with his acceptance that genetic factors can influence IQ scores and other aspects of the mind, he also champions heritability estimates as being able to partition genetic and environmental variation in traits (even though they can do no such thing; Moore and Shenk, 2016). He uncritically accepts the 80/20 genetic/environmental heritability estimate from Bouchard and the 60/40 genetic/environmental estimate from Jensen and from Murray and Herrnstein. These “estimates”—drawn mostly from family, twin, and adoption studies (Joseph, 2015)—are invalid due to the false assumptions the researchers hold, to say nothing of the conceptual difficulties with the concept of heritability itself (Moore and Shenk, 2016).

Conclusion

While Washington and Schick both make important points—that those who live in poor environments are at risk of being exposed to certain things that disrupt their development—they both, along with Didau, accept the hereditarian claim that IQ tests are tests of intelligence. While each author has their own specific caveats (some of which I agree with, and others I do not), they keep the hereditarian claim alive by lending credence to hereditarian arguments, even though they do not look at the causes through a genetic lens.

While the authors have good intentions and while the research they discuss is extremely important and interesting (like the effects of toxins and metals on the development of the brain and of the child), they—like their intellectual environmentalist ancestors—unwittingly lend credence to the hereditarian claim that IQ tests measure intelligence, even as they explain the causes of individual and group differences in completely different ways. These authors, with their assertions, then accept the claim that certain groups are less “intelligent” than others—but hold that it’s not genes that cause this, it’s differences in environment. And while that claim is true—the deleterious effects Washington and Schick discuss can and do retard normal development—it in no way, shape, or form means that “intelligence” is being measured.

Normal (brain) development is indeed a terrible thing to waste; we can teach kids more by exposing them to more things; and young minds are indeed wasted by poverty. But in accepting these premises, one need not accept the hereditarian dogma that IQ tests are measures of some undefined thing with no theory behind it. Granting that poverty, and the environments those in poverty live in, impedes normal brain development, which is then reflected in IQ scores, it does not follow that these tests are “measuring” intelligence—at best, they show the environmental challenges that change the brain of the individual taking the test.

One needs to be careful with the language they use, lest they lend credence to hereditarian pseudoscience.

Not Feeling Pain: What is CIPA (Congenital Insensitivity to Pain with Anhidrosis)?

1750 words

“Congenital Insensitivity to Pain with Anhidrosis” (also called congenital analgesia; CIPA hereafter) is an autosomal recessive disease (Indo, 2002) and was first observed in 1932 (Daneshjou, Jafarieh, and Raeeskarami, 2012). It is called a “congenital disorder” since it is present from birth. Since the disease is autosomal recessive, the closer the two parents are in relatedness, the more likely it is that they will pass on the disorder, since they are more likely to carry and pass on the same autosomal recessive mutations (Hamamy, 2012). First cousins, for example, have a 1.7-2.8% higher risk of having a child with an autosomal recessive disease (Teeuw et al, 2013). Consanguinity is common in North Africa (Anwar, Khyatti, and Hemminki, 2014), and the Bedouin have a high rate of this disease (Schulman et al, 2001; Lopez-Cortez et al, 2020; Singer et al, 2020). Three mutations in the TrkA gene (AKA NTRK1) have been shown to induce protein misfolding which affects the function of the protein, and different mutations in the TrkA gene have been shown to be associated with different disease outcomes (Franco et al, 2016). Since the mutated gene in question is needed for nerve growth factor signaling, pain signals cannot be transferred to the brain, since hardly any of the relevant nerve fibers are there (Shin et al, 2016).
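
Since CIPA is autosomal recessive, the arithmetic behind the consanguinity risk is textbook Mendelian probability. A minimal sketch (my own illustration; the allele labels are hypothetical stand-ins for a recessive NTRK1 variant): a child of two unaffected carriers has a 1-in-4 chance of inheriting two recessive copies.

```python
from itertools import product

# Each unaffected carrier parent has one normal allele "A" and one
# recessive allele "a" (illustrative labels for an NTRK1 variant).
parent1 = ("A", "a")
parent2 = ("A", "a")

offspring = list(product(parent1, parent2))           # four equally likely combinations
affected = [g for g in offspring if g == ("a", "a")]  # two recessive copies -> affected

print(f"P(affected child of two carriers) = {len(affected)}/{len(offspring)}")  # 1/4
```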

Individuals unfortunate enough to be afflicted with CIPA cannot feel pain, whether from biting their tongues or from extreme temperatures. People with CIPA have said that while they can tell the difference between extreme temperatures—hot and cold—they cannot feel the pain that is actually associated with those temperatures on their skin (see Schon et al, 2018). When they bump into things, they may not be aware of what happened, and injuries may occur which heal incorrectly due to a lack of medical attention; the fractures and other damage that occur due to CIPA are often only noticed years later, when they see doctors for what may well be complications of the disease. People with CIPA are thought to be “dumb” because they constantly bump into things. But what is really happening is that, since they cannot feel pain, they have not learned that bumping into things can be damaging to their bodies, as pain is obviously an experience-dependent event. So these people learn, throughout their lives, to fake being in pain so as not to draw suspicion from people who may not be aware of the condition. Children with the disease are often thought to be victims of child abuse, but once it is discovered that the child thought to be a victim of abuse is afflicted with CIPA (van den Bosch et al, 2014; Amroh et al, 2020), treatment shifts toward managing the disease.

Nearly twenty percent of people with CIPA do not live past three years of age (Lear, 2011), dying from complications of hyperpyrexia (an elevated body temperature over 106 degrees Fahrenheit), since they cannot feel the heat and get themselves to cool down (Rosemberg, Marie, and Kliemann, 1994; Schulmann et al, 2001; Indo, 2002; Nabyev et al, 2018). Due to this low life expectancy (many more live only until about 25 years of age), the disease is really hard to study (Inoyue, 2007; Daneshjou, Jafarieh, and Raeeskarami, 2012). People hardly make it past that age: either they do not feel pain and so do things that others, through experience, know not to do (since we can feel pain and know not to do things that cause us pain and discomfort), or they commit suicide because they have no quality of life due to damaged joints. Furthermore, since they cannot feel pain, people with this disease are more likely to self-mutilate, as they cannot learn that self-mutilation causes pain (pain being a deterrent for future actions that may cause pain to an individual). They also cannot sweat, meaning that controlling the body temperature of one afflicted with CIPA is of the utmost importance (since they could overheat and die). Thus, these deaths do not occur due to CIPA per se; they occur due to, say, not feeling heat, not sweating, and so never regulating body temperature and cooling down (whether by sweating or by getting out of the extreme heat causing the elevated body temperature). This is known as “hyperpyrexia”, and this cause of death affects around 20 percent of CIPA patients (Sasnur, Sasnur, and Ghaus-ul, 2011). Furthermore, they are more likely to have thick, leathery skin and to show little muscular definition.

Not sweating is part of CIPA, and if one cannot sweat, one cannot regulate one’s body temperature when one gets too hot; and since one cannot feel that one is too hot, one may die of heat stroke. The disease, though, is rare: only 17-60 people in America currently have it, while there are about 600 cases of the disease worldwide (Inoyue, 2007; Lear, 2011). The disease is quite hard to identify, but clinicians may be able to detect its presence in the following ways: infants biting their lips, fingers, or cheeks and not crying or showing any sign of being in pain afterwards; repeated fractures in older children; a history of burns with no medical attention; many healed joint injuries and bone fractures for which the child’s parents never sought medical care; and a patient who does not react to hot or cold events (though they may say they can feel a difference between the two, they make errors in distinguishing whether something is hot or cold) (Indo, 2008).

Children who have this disease are at a higher risk of certain kinds of bodily deformities, since they cannot feel the pain that would make them hesitant to perform a given action in the future. Due to this, people with this disease must constantly check themselves for cuts, abrasions, broken bones, etc., since they cannot feel these injuries when they actually occur. They do not cry or show any discomfort when experiencing events that would cause someone without CIPA to cry. CIPA-afflicted individuals are more likely to have bodily deformities since their joints and bones do not heal correctly after injury, which then affects their walking and appearance. This is one of many reasons why the parents of people with CIPA must constantly check their children for signs of bodily harm or unintentional injuries. One thing that needs to be looked out for is what is termed Charcot joint—a degenerative joint disorder (Gucev et al, 2020).

A specific form of CIPA—called HSAN-IV—was discovered in the village of Vittangi in northern Sweden, where it was traced to the founder of the village itself in the 1600s. Since the village was remote, with such a small population, the only people around to marry and have children with were people who were closely related to each other. This, then, is the reason why this village has a high rate of people afflicted with the disease (Norberg, 2006; Minde, 2006). This goes back to the above points on consanguinity and autosomal recessive diseases—since CIPA is an autosomal recessive disease, one would expect to find it in populations that marry close relatives, whether due to custom or to low population density.

Many features have been noted as indicating that an individual is afflicted with CIPA: absent pain sensation from birth; the inability to sweat; mental retardation; and lower height and weight for their age (Safari, Khaledi, and Vojdani, 2011; Perez-Lopez et al, 2015). Children with CIPA have lower IQs than children without CIPA, and there is an inverse relationship between IQ and age: the older the child with CIPA, the lower their IQ (Erez et al, 2010). One girl, for example, had a WISC-III IQ of 49, and she mutilated herself by picking at her nails until they were no longer there (Zafeirou et al, 2004). Another girl with CIPA was seen to have an IQ of 52, be afflicted with mental retardation, have a low birth weight, and be microcephalic (Nolano et al, 2000). Others were noted to have IQs in the normal range (Daneshjou, Jafarieh, and Raaeskarami, 2012). People with a related form of the disease (HSN type II) were also observed to have IQs in the normal range (though it is “caused by” a different set of genes than CIPA, HSN type IV; Kouvelas and Terzoglou, 1989). However, it has been noted that the cut-off of 70 for mental retardation is arbitrary (see Arvidsson and Granlund, 2016). By running a full gamut of tests on an individual thought to have CIPA, we can better ensure a higher quality of life for individuals afflicted with the disease. In sum, the IQ scores of CIPA individuals do not show that the mutations in TrkA “cause” IQ scores; the scores are an outcome of a disrupted system (in this case, mutations on the TrkA gene).

There is currently no cure for this disease, and so the only way to manage complications stemming from CIPA is to treat the injuries to the joints as they happen, to ensure that the individual has a good quality of life. Treatment for CIPA, therefore, does not actually cure the disease; it treats what occurs due to the disease (bone breaks, joint destruction), which then heightens the quality of life of the person with CIPA (Nabiyev, Kara, and Aksoy, 2016). Naloxone may temporarily relieve CIPA (Rose et al, 2018), while others suggest treatments such as remifentanil (Takeuchi et al, 2018). We can treat outcomes that arise from the disease (like self-mutilation), but we cannot outright cure the disease itself (Daneshjou, Jafarieh, and Raaeskarami, 2012). The current best way to manage the disease is to identify it early in children and to do full-body scans of afflicted individuals in order to treat the by-products of the disease (such as limb/joint damage and other injuries). Maybe one day we can use gene therapy to help the afflicted, but for now the best way forward is early identification along with frequent check-ups—managing body temperature, modifying the behavior of the child so as to avoid injuries, wearing a mouth guard so they do not grind their teeth or bite their tongue, and avoiding hot or cold environments and food (Indo, 2008; Rose et al, 2018).

CIPA is a very rare—and very interesting—disease. By better understanding its aetiology, we can better help the extremely low number of people in the world who suffer from this disease.

Racial Differences in Amputation

1850 words

Overview

An amputation is a preventative measure. It is done for a few reasons: to stop the spread of a gangrenous infection and to save more of a limb once blood flow to it has been cut off for a period of time. Other reasons are trauma and diabetes. Trauma, infection, and diabetes are leading causes of amputation in developing countries, whereas in developed countries it is peripheral vascular disease (Sarvestani and Azam, 2013). Poor circulation to an affected limb leads to tissue death—when the tissue begins turning black, it means that there is no or low blood flow to the tissue, and to save more of the limb, it is amputated just above where the infection is. About 1.8 million Americans are living as amputees. After amputation, there is a phenomenon called "phantom limb" where amputees can "feel" the limb they previously had, and even feel pain in it; it is very common—about 60-80 percent of amputees report "feeling" a phantom limb (see Collins et al, 2018; Kaur and Guan, 2018). The sensation can occur either immediately after amputation or years later. Phantom limb pain is neuropathic pain—pain that is caused by damage to the somatosensory system (Subedi and Grossberg, 2011). Amputees also have shorter lifespans. When foot amputation is performed due to uncontrolled diabetes, mortality ranges between 13-40 percent at year one, 35-65 percent at year three, and 39-85 percent at year five (Beyaz, Guller, and Bagir, 2017).

Race and amputation

Amputations of the lower extremities are the most common (Molina and Faulk, 2020). Minority populations are less likely to receive preventative care, such as preventative vascular screenings and care, which leads to them being more likely to undergo amputations. Such populations are more likely to suffer from disease of the lower extremities, and it is due to this that minorities undergo amputations more often than whites in America. Minorities in America—i.e., blacks and "Hispanics"—are about twice as likely as whites to undergo lower-extremity amputation (Rucker-Whitaker, Feinglass, and Pearce, 2003; Lowe and Tariman, 2008; Lefebvre and Lavery, 2011; Mustapha et al, 2017; Arya et al, 2018)—so it is an epidemic for black America. Blacks are even more likely to undergo repeat amputation (Rucker-Whitaker, Feinglass, and Pearce, 2003). In fact, here is a great essay chronicling the stories of some double-amputee black patients.

Why do blacks undergo amputations more often than whites? One answer is, of course, physician bias. For example, after controlling for demographic, clinical, and chronic disease status, blacks were 1.7 times more likely than whites to undergo lower-leg amputations (Feinglass et al, 2005; Regenbogen et al, 2007; Lefebvre and Lavery, 2011). One cause of this is inequity in healthcare—note that "inequity" here means differences in care that are avoidable and unjust (Sadana and Blas, 2013).

Another reason is complications from diabetes. Blacks have higher rates of diabetes than whites (Rodriguez and Campbell, 2007; but see Signorello et al, 2007). Muscle fiber differences between races may also play a part (see also here). Differences in hours slept between blacks and whites could also explain the severity of the disease. But what could also be driving differences in diabetes between races is the fact that blacks are more likely than whites to live in "food swamps." Food swamps are areas saturated with calorie-dense, nutritionally poor food that swamps out healthier options, whereas food deserts are areas where there is little access to healthy, nutritious food at all. In fact, a neighborhood being a food swamp is more predictive of the obesity status of the population in the area than its being a food desert (Cooksey-Stowers, Schwartz, and Brownell, 2017). Along with the slew of advertisements directed at low-income neighborhoods (see Cassady, Liaw, and Miller, 2015), we can now see how such things as food swamps contribute to high hospitalization rates in low-income neighborhoods (Phillips and Rodriguez, 2019). These amputations are preventable—and so we can say that there is a lack of equity in healthcare between races which leads to these different rates of amputation—before even thinking about physician bias. Amputation rates for blacks in the southeast can be almost seven times higher than in other regions (Goodney et al, 2014).

Stapleton et al (2018: 644) conclude in their study on physician bias and amputation:

Our study demonstrates that such justifications may be unevenly applied across race, suggesting an underlying bias. This may reflect a form of racial paternalism, the general societal perception that minorities are less capable of “taking care of themselves,” even including issues related to health and disease management.23 Underlying bias may prompt more providers to consider amputation for minority patients. Furthermore, unlike in transplant surgery, there is currently no formal process for assessing patient compliance with treatment protocols or self-care in vascular surgery.24 Asking providers to make snap judgments about patient compliance, without a protocol for objective assessment, allows subconscious bias to influence patient care.

Physician bias is pervasive (Hoberman, 2012)—whether conscious or unconscious. Such biases can and do lead to outcomes that should not occur. By attempting to reduce the disparities in healthcare that lead to negative outcomes, we can improve the quality of healthcare given to lower-income groups, like blacks. Such biases (for example, the belief that blacks feel less pain than whites) lead to negative health outcomes for blacks, and if they were addressed and conquered, we could increase equity between groups until access to healthcare is equal—physician bias is an impediment to equal healthcare precisely because of the a priori assumptions that physicians may hold about certain racial/ethnic groups. Medical racism, therefore, drives a lot of the amputation differences between blacks and whites. Hospitals that are better equipped to offer revascularization services (attempting to save the limb by increasing blood flow to the affected limb) even had a higher rate of amputations in blacks when compared to whites (Durazzo, Frencher, and Gusberg, 2013).

For example, Mustapha et al (2017) write:

Compared to Caucasian patients, several studies have found that African-Americans with PAD are more likely to be amputated and less likely to have their lower limb revascularized either surgically or via an endovascular approach [39]. In an early analysis of data from acute-care hospitals in Florida, Huber et al. reported that the incidence of amputation (5.0 vs. 2.5 per 10,000) was higher and revascularization lower (4.0 vs. 7.1 per 10,000) among African-Americans compared to Caucasians, even though the incidence of any procedure for PAD was comparable (9.0 vs. 9.6 per 10,000) [4]. Other studies have reported that the probability of undergoing a revascularization or angioplasty was reduced by 28–49 % among African-Americans relative to Caucasians [3 6]

Pro-white unconscious biases were also found among physicians, as Kandi and Tan (2020) note:

There is evidence of both healthcare provider racism and unconscious racial biases. Green et al. found significant pro-White bias among internal medicine and emergency medicine residents, while James SA supported this finding, indicating a “pro-white” unconscious bias in physician’s attitudes towards, and interactions with, patients [43,44]. In a survey assessing implicit and explicit racial bias by Emergency Department (ED) providers in care of NA children, it was discovered that many ED providers had an implicit preference for white children compared to those who identified as NA [45]. Indeed, racism and stigmatization are identified as being many American Indians’ experiences in healthcare.

One major cause of the disparity is that blacks are not offered revascularization services at the same rate as whites. Holman et al (2011: 425) write:

Finally, given that patients’ decisions are necessarily confined to the options offered by their physicians, racial differences in limb salvage care might be attributable to differences in physician decision making. There are some data to suggest lower vein graft patency rates in black patients compared to whites.18,19 A patient’s race, therefore, may influence a vascular surgeon’s judgment about the efficacy of revascularization in preventing or delaying amputation. Similarly, a higher proportion of black patients in our sample were of low SES, which correlates with tobacco use,20-22 and we know that continued tobacco use increases the risk of lower extremity graft failure approximately three-fold.23 It is possible that a higher proportion of black patients in our sample were smokers who refused to quit, in which case vascular surgeons would be much less likely to offer them the option of revascularization. While Medicare data include an ICD-9 diagnosis code for tobacco use, the prevalence in our study sample was approximately 2%, suggesting that this code was grossly unreliable as a means of directly measuring and adjusting for tobacco use.

Smoking, of course, could be a reason why revascularization would not be offered to black patients. Though, as I have noted, smoking ads are more likely to be found in lower-income neighborhoods, which increases the prevalence of smokers in the community.

With this, I am reminded of two stories I have seen on television programs (I watch Discovery Health a lot—so much so that I have seen most of the programs they show).

In Untold Stories of the ER, a man came in with his hand cut off. He refused medical care—he would not let the doctors attempt to sew his hand back on. Upon entering his home to check for evidence (where his hand was found), the police searched his computer. It turned out that he had a paraphilia called "acrotomophilia," which is where one is sexually attracted to people with amputations—although in his case he wanted the amputation done to himself; indeed, he had inflicted the wound on himself. After the doctor tried to reason with the man to have his hand sewn back on, the man would not let up. He did not want his hand sewn back on. I wonder if, years down the line, the man regretted his decision.

In another program (Mystery Diagnosis), a man said that, as a young boy, he had seen a one-legged war veteran. He said that ever since then, he would do nothing but think about becoming an amputee. He lived his whole life thinking about it without doing anything about it. He then went to a psychiatrist and spoke of his desire to become an amputee. After some time, he eventually flew to Taiwan and had the surgery done. He eventually found happiness, since he had done what he had always wanted.

While these stories are interesting, they speak to something deep in the minds of the individuals who mutilate themselves or get surgery on otherwise healthy limbs.

Conclusion

Blacks are more likely than whites to receive amputations of affected limbs and are less likely to receive treatments that may be able to save the affected limb (Holman et al, 2011; Hughes et al, 2013; Minc et al, 2017; Massada et al, 2018). Physician bias is a large driver of this. So, to better public health, we must attempt to mitigate these biases that physicians have which lead to these kinds of disparities in healthcare. Medical and other kinds of racism have led to this disparity in amputations between blacks and whites. Thus, to attempt to mitigate this disparity, blacks must get the preventative care needed to save the affected limb rather than immediately going to amputation. Thankfully, such disparities have been noticed and work is being done to decrease them.

So race is a factor in the decision on whether or not to amputate a limb, and blacks are less likely to receive revascularization services.

Evolutionary Psychology Does Not Explain Differences Between Rightists and Leftists

2000 words

Unless you've been living under a rock since the new year, you have heard of the "coup attempt" at the Capitol building on Wednesday, January 6th. Upset at the fact that the election was "stolen" from Trump, his supporters showed up at the building and rushed it, causing mass chaos. But why did they do this? Why the violence when they did not get their way in a fair election? Well, Michael Ryan, author of The Genetics of Political Behavior: How Evolutionary Psychology Explains Ideology (2020), has the answer—what he terms "rightists" and "leftists" evolved at two different times in our evolutionary history, which then explains the trait differences between the two political parties. This article will review part of the book—the evolutionary sections (chapters 1-3).

EP and ideology

Ryan's goal is to explain why individuals who call themselves "rightists" and "leftists" behave and act differently from one another. He argues, at length, that the two groups have two different personality profiles. This, he claims, is due to the fact that the ancestors of rightists and leftists evolved at two different times in human history. He calls this "Trump Island" and "Obama Island"—apt names, especially given what occurred last week. Ryan claims that what makes Trump different from, say, Obama is that his ancestors evolved at a different place in a different time compared to Obama's ancestors. He further claims, using the Stanford Prison Experiment, that "we may not all be capable of becoming Nazis, after all. Just some, and conservatives especially so" (pg 12).

In the first chapter he begins with the usual adaptationism that Evolutionary Psychologists use. Reading between the lines of his implicit claims, he is arguing that rightists and leftists are natural kinds—that is, that they are *two different kinds of people.* He explains some personality differences between rightists and leftists and then says that such trait differences are "rooted in biology and governed by genes" (pg 17). Ryan then makes a strong adaptationist claim—that traits are due to adaptation to the environment (pg 17). What makes you and me different from Trump, he claims, is that our ancestors and his ancestors evolved in different places at different times, where different traits were imperative to survival. So, over time, different traits got selected-for in these two populations, leading to the trait differences we see today. Each environment, he claims, led to the fixation of different adaptive traits, which explains the differences we now see between the two parties.

Ryan then shifts from the evolution of personality differences to… the evolution of the beaks of Darwin's finches and Tibetan adaptation to high-altitude living (pg 18), as if the evolution of physical traits is anything like the evolution of psychological traits. His folly is assuming that these physical traits can be likened to personality/mental traits. The ancestors of rightists and leftists, Ryan claims, evolved—like Darwin's finches—on different islands at different moments of evolutionary time. They evolved different brains and different adaptive behaviors on the basis of the evolution of those different brains. Trump's ancestors were authoritarian, and their island occurred early in human history, "which accounts for why Trump's behavior seems so archaic at times" (pg 18).

The different traits that leftists show in comparison to rightists are due to the fact that their island came at a later point in evolutionary time—more recent than the so-called archaic dominance behavior portrayed by Trump and other rightists. Ryan says that Obama Island was more crowded than Trump Island and that, instead of scowling, its inhabitants smiled, which "forges links with others and fosters reciprocity" (pg 19). Due to environmental adversity on this more densely populated "island"—a novel situation compared to the more "archaic" earlier time—the small bands needed to cooperate, rather than fight with each other, to survive. This, according to Ryan, explains why studies show more smiling behavior in leftists compared to rightists.

Some of our ancestors evolved traits such as cooperativeness that aided the survival of all even though not everyone acquired the trait … Eventually a new genotype or subpopulation emerged. Leftist traits became a permanent feature of our genome—in some at least. (pg 19-20)

So the argument goes: differences between rightists and leftists show us that the two did not evolve at the same points in time, since they show different traits today. Different traits were adaptive at different points in time, some more archaic, some more modern. Since Trump Island came first in our evolutionary history, those whose ancestors evolved there show more archaic behavior. Since Obama Island came later, those whose ancestors evolved there show newer, more modern behaviors. Due to environmental uncertainty, those on Obama Island had to cooperate with each other. The trait differences between these two subpopulations were selected for in the environments in which they evolved, which is why they differ today. And today, this has led to the "arguing over the future direction of our species. This is the origin of human politics" (pg 20).

Models of evolution

Ryan then discusses four models of evolution: (1) the standard model, where "natural selection" is the main driver of evolutionary change; (2) epigenetic models like Jablonka's and Lamb's (2005) in Evolution in Four Dimensions; (3) models where behavioral changes change genes; and (4) models where organisms have phenotypic plasticity, a way for the organism to respond to sudden environmental changes. "Leftists and rightists", writes Ryan, "are distinguished by their own versions of phenotypic plasticity. They change behavior more readily than rightists in response to changing environmental signals" (pg 29-30).

In perhaps the most outlandish part of the book, Ryan articulates one of my now-favorite just-so stories. The passage is worth quoting in full:

Our direct ancestor Homo erectus endured for two million years before going extinct 400,000 years ago when earth temperatures dropped far below the norm. Descendants of erectus survived till as recently as 14,000 years ago in Asia. The round head and shovel-shaped teeth of some Asians, including Vladimir Putin, are an erectile legacy. Archeologists believe erectus was a mix of Ted Bundy and Adolf Hitler. Surviving skulls point to a life of constant violence and routine killing. Erectile skulls are thick like a turtle's, and the brows are ridged for protection from potentially fatal blows. Erectus' life was precarious and violent. To survive, it had to evolve traits such as vigilant fearfulness, prejudice against outsiders, bonding with kin allies, callousness toward victims, and a penchant for inflexible habits of life that were known to guarantee safety. It had to be conservative. 34 Archeologists suggest that some of our most characteristic conservative emotions such as nationalism and xenophobia were forged at the time of Homo erectus. 35 (pg 33-34)

It is clear that Ryan is arguing that rightists have more erectus-like traits whereas leftists have more modern, Sapiens traits. "The contemporary coexistence of a population with more "modern" traits and a population with more "archaic" traits came into being" (pg 37). He is implicitly assuming that the two "populations" he discusses are natural kinds, and with his "modern"/"archaic" distinction (see Crisp and Cook, 2005, who argue against a form of this distinction) he is also implying that there is a sort of "progress" to evolution.

Twin studies, it is claimed, show "one's genetically informed psychological disposition" (Hatemi et al, 2014); they "suggest that leftists and rightists are born not made" while a so-called "consensus has emerged amongst scientists: political behavior is genetically controlled and heritable" (pg 43). But Beckwith and Morris (2008), Charney (2008), and Joseph (2009; 2013) argue that twin studies can show no such thing, due to the violation of the equal environments assumption (Joseph, 2014; Joseph et al, 2015). Thus, Ryan's claims about the "genetic origins" of political behavior rest on studies that cannot prove or disprove "genetic causation" (Shultziner, 2017)—and since the EEA is false, we must discount "genetic causation" for psychological traits, not least because it is impossible for genes to cause/influence psychological traits (see argument (iii)).

The arguments he provides are a form of inference to the best explanation (IBE) (Smith, 2016). However, this is how just-so stories are created: the conclusion is already in mind, and then the story is crafted using "natural selection" to explain how a trait came to fixation and why it currently exists today. The whole book is full of such adaptive stories—claims that we have the traits we currently do, in the distributions they are in in the "populations", because they were, at a certain point in our evolutionary history, adaptive, which then led to individuals with those traits passing on more of their genes, eventually leading to trait fixation (see Fodor and Piattelli-Palmarini, 2010).

Ryan makes outlandish claims such as "Rightists are more likely than leftists to keep their desks neat. If in the distant past you knew exactly where the weapons were, you could find them quickly and react to danger more effectively. 26" (pg 45). He talks about how "time-consuming and effort-demanding accuracy of perception [were] more characteristic of leftist cognition" and how "leftist cognition is more reflective" while "rightist cognition is intuitive rather than reflective" (pg 47). Rightists being more likely to endorse the status quo, he claims, is "an adaptive trait when scarce resources made energy management essential to getting by" (pg 48). Rightist language, he argues, uses more nouns since they are "more concrete, and anxious personalities prefer concrete to abstract language because it favors categorial rigidity and guarantees greater certainty", while leftists "use words that suggest anxiety, anger, threats, certainty, resistance to change, power, security, and conformity" (pg 49). There is "a connection between archaic physiology and rightist moral ideology" (pg 52). Certain traits that leftists have were "adaptive traits [that] were suited to later stage human evolution" (pg 53). Ryan simply cites studies that show differences between rightists and leftists and then uses great leaps and mental gymnastics to mold the findings as being due to evolution in the two different time periods he describes in chapter 1 (Trump and Obama Island).

Conclusion

I have not read one page in this book that does not have some kind of adaptive just-so story attempting to explain certain traits/behaviors of rightists and leftists in evolutionary terms. Ryan uses the same kind of "reasoning" that Evolutionary Psychologists use—have your conclusion in mind first and then craft an adaptive story to explain why the traits you see today are there. Ryan outright says that "[t]raits are the result of adaptation to the environment" (pg 17), which is a rare—strong adaptationist—claim to make.

His book ticks off all of the usual EP boxes: strong adaptationism, just-so storytelling, and the claim that traits were selected-for due to their contribution to survival in certain environments at different points in time. The strong adaptationist claims appear, for example, where he says that erectus' large brows "are ridged for protection from potentially fatal blows" (pg 34). Such strong adaptationist claims imply that Ryan believes that all traits are the result of adaptation and that they, as a result, are still here today because they all served a function in our evolutionary past. His arguments are, for the most part, all evolutionary and follow the same kinds of patterns that the usual EP arguments do (see Smith, 2016 for an explication of just-so stories and what constitutes them). Due to the problems with evolutionary psychology, his adaptive claims should be ignored.

The arguments that Ryan provides are not scientific and, although they give off a veneer of being scientific by invoking "natural selection" and adaptationism, they are anything but. The book is just a long-winded explanation of how and why rightists and leftists—liberals and conservatives—are different and why they cannot change, since these differences are "encoded" into our genome. The implicit claim of the book—that rightists and leftists are two different, natural, kinds—rests on the false bed of EP and, therefore, the arguments provided in the book will fail to sway anyone who does not already believe such fantastic storytelling masquerading as science. While he does discuss other evolutionary theories, such as epigenetic ones from Jablonka and Lamb (2005), the book is largely strongly adaptationist, using "natural selection" to explain why we still have the traits we do in different "populations" today.

Racism, Action, and Health Inequity

1500 words

‘Health inequalities are the systematic, avoidable and unfair differences in health outcomes that can be observed between populations, between social groups within the same population or as a gradient across a population ranked by social position.’ (McCartney et al, 2019)

Health inequities, however, are differences in health that are judged to be avoidable, unfair, and unjust. (Sadana and Blas, 2013)

Asking "Is X racist?" is the wrong question to ask. If X is factual, then stating it cannot be racist (facts themselves cannot be racist). But one can perform a racist action—either consciously or subconsciously—on the basis of a fact. Facts themselves cannot be racist, but one can use facts to be racist. One can hold a belief and the belief can be racist (X group is better than Y group at Z), and systemic racism would be the result (the outcome) of holding said belief. (Some examples of systemic racism can be found in Gee and Ford, 2011.) Someone who holds the belief that, say, whites are more "intelligent" than blacks, or Jews more "intelligent" than whites, could be said to be racist—they hold a racist belief and are making an invalid inference from a fact (blacks score 15 points lower on IQ tests compared to whites, so blacks are less intelligent). Truth cannot be racist, but truth can be used to attempt to justify certain policies.

I have argued that we should ban IQ tests on the basis that, if we believe that the hereditarian hypothesis is true and it is in fact false, then we can enact policies on the basis of false information. If we enact policies on the basis of false information, then certain groups may be harmed. If certain groups may be harmed, then we should ban whatever led to the policy in question. If the policy in question is derived from IQ tests, then IQ tests must be banned. This is one example of how we can take a fact (like the IQ gap between blacks and whites) and use that fact for a racist action (to shuttle those who perform under a certain expectation into remedial classes based on the fact that they score lower than some average value). Believing that X group has a higher quality of life, educational achievement, and life outcomes on the basis of IQ scores—or their genes—is a racist belief, and this racist belief can then be used to perform a racist action.

I have also discussed different definitions of "racism." Each definition discussed can be construed as having a possible action attached to it. Racism is an action—something that we perform on the basis of certain beliefs, motivated by "what can be" possible in the future. Beliefs can be racist; we can say that racism is an ideology that one acts on and that has real consequences for people. Truth can't be racist; people can use the truth to perform and justify certain actions. Racism, though, can be said to be a "cultural and structural system" that assigns value based on race; further, the actions and intent of individuals are not necessary for structural mechanisms of racism (e.g., Bonilla-Silva, 1997).

We can, furthermore, take facts about differences between races in health outcomes and say that certain rationalizations of those outcomes can be construed as racist. "It's in the genes!" or similar statements could be construed as racist, since they imply that certain inequalities are "immutable" on the basis of a strong genetic determination of disease.

Racism is indeed a public health issue. Physicians can hold biases about race—just like the average person. Differences in healthcare between majority and minority populations, for instance, can be said to be systemic in nature (Reschovsky and O'Malley, 2008). This needs to be talked about, since racism can be and is a determinant of health—as many places in the country are beginning to recognize. Racism is rightly noted as a public health crisis because it leads to disparate outcomes between whites and blacks based on certain assumptions about the ancestral background of both groups.

Quach et al (2012) showed that not receiving referrals to a specialist is discriminatory—Asians, too, were exposed to medical discrimination, along with blacks. Such discrimination can also lead to accelerated cellular aging (on the basis of measured telomere lengths, where shorter telomeres indicate a higher biological, compared to chronological, age; Shammas et al, 2012) in black men and women (Geronimus et al, 2006; 2011; Schrock et al, 2017; Forrester et al, 2019). We understand the reasons why such discrimination on the basis of race happens, and we understand the mechanism by which it leads to adverse health outcomes between races (chronic elevation in allostatic load leading to higher-than-normal levels of certain stress hormones, which will, eventually, lead to differences in health outcomes).

The idea that genes or behavior lead to differences in health outcomes is racist (Bassett and Graves, 2018). This idea can then lead to racist actions—acting on the belief that a group's genetic constitution impedes them from being "near-par" with whites, or that their behavior (sans context) is the cause of the health disparities. Valles (2018: 186) writes:

…racism is a cause with devastating health effects, but it manifests via many intermediary mechanisms ranging from physician implicit biases leading to over-treatment, under-treatment and other clinical errors (Chapman et al. 2013; Paradies et al. 2015) to exposing minority communities to waterborne contaminants because of racist political disenfranchisement and neglect of community infrastructure (e.g., the infamous Flint Water Crisis afflicting my Michigan neighbors) (Krieger 2016; Sherwin 2017; Michigan Civil Rights Commission 2017).

There is a distinction between "equity" and "equality." To continue with the public health example, take public health equality and public health equity. In this instance, "equality" means giving everyone the same thing, whereas "equity" means giving individuals what they need to be the healthiest individuals they can possibly be. "Strong equality of health" is "where every person or group has equal health", while weak health equity "states that every person or group should have equal health except when: (a) health equality is only possible by making someone less healthy, or (b) there are technological limitations on further health improvement" (Norheim and Asada, 2009). But we should not attempt to "level down" people's health to achieve equity; we should attempt to "level up" people's health. Poverty is what is objectionable; inequality is not. It is impossible to reach strong health equality (making all groups equal), but we can—and indeed have a moral obligation to—lift up those who are worse-off, including those in poverty, which is also a social determinant of health (Braveman and Gottlieb, 2014; Frankfurt, 2015; Islam, 2019).

We achieve health equity when all individuals have the same access to the means of being the healthiest individuals they can be; we achieve health equality when health outcomes are the same for all groups. Health equity is, further, the absence of avoidable differences between different groups (Evans, 2020). One of these is feasible; the other is not. But racism does not allow us to achieve health equity.

The moral foundation for public health thus rests on general obligations in beneficence to promote good health. (Powers and Faden, 2006: 24)

Social justice is not only a matter of how individuals fare, but also about how groups fare relative to one another whenever systemic racism is linked to group membership. (Powers and Faden, 2006: 103)

…inequalities in well-being associated with severe poverty are inequalities of the highest moral urgency. (Powers and Faden, 2006: 114)

Public health is directly a matter of social justice. If public health is directly a matter of social justice, and if health outcomes due to discrimination are caused by social injustice, then we need to address the causes of such inequalities, which would be, for example, conscious or unconscious prejudice against certain groups.

Certain inequalities between groups are, therefore, due to systemic racism, which stems from actions that can be conscious or unconscious. But which inequalities matter most? In my view, the inequalities that matter most are those that impede an individual or a group from having a certain quality of life. Racism can and does lead to health inequalities, and by addressing the causes of such actions, we can then begin to ameliorate the causes of structural racism. This is more evidence that the social can indeed manifest in biology.

Holding certain beliefs can lead to certain actions that can be construed as racist and that negatively impact health outcomes for certain groups. By committing ourselves to a framework of social justice and health, we can then attempt to ameliorate the inequities between social classes/races, etc. that have plagued us for decades. We should strive for equity in health, which is a goal of social justice. We should not believe that such differences are "innate" and that there is nothing we can do about group differences (some of which are no doubt caused by systemically racist policies). Health equity is something we should strive for, and we have a moral obligation to do so; health equality is not obligatory and is not even a feasible idea.

If we can avoid certain health outcomes for certain groups that arise from the beliefs that we hold, then we should do so.

Polygenic Scores and Causation

1400 words

The use of polygenic scores has caused much excitement in the field of socio-genomics. A polygenic score is derived from statistical gene associations found in what is known as a genome-wide association study (GWAS). Using the genes that such studies associate with traits, researchers propose, they will be able to unlock the genomic causes of diseases and socially-valued traits. The methods of GWA studies also assume that the 'information' that is 'encoded' in the DNA sequence is "causal in terms of cellular phenotype" (Baverstock, 2019).
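To make the object under discussion concrete, here is a minimal sketch of how such a score is typically computed: a weighted sum of effect-allele counts, with the weights being the per-SNP regression coefficients (betas) estimated by a GWAS. The SNP IDs, effect sizes, and genotype below are hypothetical, for illustration only.

```python
# Minimal sketch of a polygenic score (PGS): a weighted sum of effect-allele
# counts. The weights (betas) are the per-SNP regression coefficients that a
# GWAS estimates; the SNP IDs, betas, and genotype here are hypothetical.

# GWAS summary statistics: SNP ID -> estimated effect size (beta).
gwas_betas = {"rs0001": 0.12, "rs0002": -0.05, "rs0003": 0.08}

# One individual's genotype: SNP ID -> effect-allele count (0, 1, or 2).
genotype = {"rs0001": 2, "rs0002": 1, "rs0003": 0}

def polygenic_score(betas, genotype):
    """Sum of (effect size * effect-allele count) over all scored SNPs."""
    return sum(beta * genotype.get(snp, 0) for snp, beta in betas.items())

print(polygenic_score(gwas_betas, genotype))  # 0.12*2 + (-0.05)*1 + 0.08*0 = 0.19
```

Nothing in this computation is mechanistic: the weights are association statistics, nothing more, so the score inherits whatever confounding (such as population stratification) the underlying GWAS contains. Whether such a score can bear any causal weight is the question at issue.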

For instance, it is claimed by Robert Plomin that "predictions from polygenic scores have unique causal status. Usually correlations do not imply causation, but correlations involving polygenic scores imply causation in the sense that these correlations are not subject to reverse causation because nothing changes the inherited DNA sequence variation."

Take the stronger claim from Plomin and Stumm (2018):

GPS are unique predictors in the behavioural sciences. They are an exception to the rule that correlations do not imply causation in the sense that there can be no backward causation when GPS are correlated with traits. That is, nothing in our brains, behaviour or environment changes inherited differences in DNA sequence. A related advantage of GPS as predictors is that they are exceptionally stable throughout the life span because they index inherited differences in DNA sequence. Although mutations can accrue in the cells used to obtain DNA, like any cells in the body these mutations would not be expected to change systematically the thousands of inherited SNPs that contribute to a GPS.

This is a strange claim for two reasons.

(1) They do not, in fact, imply causation, since the scores are derived from GWA studies, which are associational and therefore cannot show causes—GWA studies are pretty much giant correlational studies that scan the genomes of hundreds of thousands of people and look for variants that are more likely to appear in the sample population with the disease/"trait" in question. These studies are also heavily skewed toward European populations and, even if they were valid for European populations (which they are not), they would not be valid for non-European ethnic groups (Martin et al, 2017; Curtis, 2018; Haworth et al, 2018).

(2) The claim that “nothing changes inherited DNA sequence variation” is patently false; what one experiences throughout their lives can most definitely change their inherited DNA sequence variation (Baedke, 2018; Meloni, 2019).

But, as pointed out by Turkheimer, Plomin and Stumm are assuming that no top-down causation exists (see, e.g., Ellis, Noble, and O'Connor, 2011). We know that both top-down (downward) and bottom-up (upward) causation exist (e.g., Noble, 2012; see Noble, 2017 for a review). Plomin, it seems, is coming from a very hardline view of genes and how they work—a view, it seems to me, that derives from the Darwinian view of genes and how they 'work.'

Such work is also carried out under the assumption that 'nature' and 'nurture' are independent and can therefore be separated. Indeed, the title of Plomin's 2018 book Blueprint implies that DNA is a blueprint, and in the book he makes the claim that DNA is a "fortune-teller" and that things like PGSs are "fortune-telling devices" (Plomin, 2018: 6). PGS studies are also carried out on the assumption that the heritability estimates derived from twin/family/adoption studies tell us something about how "genetic" a trait is. But since the EEA is false (Joseph, 2014; Joseph et al, 2015), we should outright reject any and all genetic interpretations of these kinds of studies. PGS studies are premised on the assumption that the aforementioned twin/adoption/family studies show the "genetic variation" in traits. But if the main assumptions are false, then their conclusions crumble.

Indeed, lifestyle factors are better indicators of one's disease risk than polygenic scores: "This means that a person with a "high" gene score risk but a healthy lifestyle is at lower risk than a person with a "low" gene score risk and an unhealthy lifestyle" (Joyner, 2019). Janssens (2019) argues that PRSs (polygenic risk scores) "do not 'exist' in the same way that blood pressure does … [nor do they] 'exist' in the same way clinical risk models do …" Janssens and Joyner (2019) also note that "Most [SNP] hits have no demonstrated mechanistic linkage to the biological property of interest." By showing mechanistic relations between the proposed gene(s) and the disease phenotype, researchers would, then, be on their way to showing "causation" for PGS/PRS.

Nevertheless, Sexton et al (2018) argue that "While research has shown that height is a polygenic trait heavily influenced by common SNPs [7–12], a polygenic score that quantifies common SNP effect is generally insufficient for successful individual phenotype prediction." Smith-Woolley et al (2018) write that "… a genome-wide polygenic score … predicts up to 5% of the variance in each university success variable." But think about the words "predicts up to"—this is a nearly meaningless phrase. Such language is, of course, causal, yet neither they nor anyone else has shown that such scores are indeed causal (mechanistically).

Spurious correlations

What these studies are indexing are not causal genic variants for disease and other "traits"; they are indexing the population structure of the sampled population (Richardson, 2017; Richardson and Jones, 2019). Furthermore, the demographic history of the sample in question can also mediate the stratification in the population (Zaidi and Mathieson, 2020). Therefore, claims that PGSs are causal are unfounded—indeed, GWA studies cannot show causation. GWA studies survive on the correlational model—but, as has been shown by many authors, these studies show spurious correlations, not the "genetics" of any studied "trait", and they therefore do not show causation.

One further nail in the coffin for hereditarian claims about PGS/PRS and GWA studies is the fact that the larger the dataset (the greater the number of datapoints), the more spurious correlations will be found (Calude and Longo, 2017). When it comes to hereditarian claims, this is relevant to twin studies (e.g., Polderman et al, 2015) and GWA studies of "intelligence" (e.g., Sniekers et al, 2017). It is entirely possible, as Richardson and Jones (2019) argue, that the results from GWA studies "for intelligence" are entirely spurious, since the correlations may appear due to the size of the dataset, not the nature of it (Calude and Longo, 2017). Zhou and Zao (2019) argue that "For complex polygenic traits, spurious correlation makes the separation of causal and null SNPs difficult, leading to a doomed failure of PRS." This is troubling for hereditarian claims when it comes to "genes for" "intelligence" and other socially-valued traits.
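A toy simulation conveys the flavor of the problem (this is the familiar multiple-comparisons face of it; Calude and Longo's result is a stronger, purely mathematical one about the inevitability of spurious regularities in large datasets). Generate a random "phenotype" and thousands of random "SNPs" with no genetic signal whatsoever, and a predictable fraction of the "SNPs" will still correlate with the phenotype at conventional significance thresholds:

```python
# Toy illustration: among enough purely random variables, some will correlate
# "significantly" with a purely random outcome by chance alone.
import numpy as np

rng = np.random.default_rng(0)
n_people, n_snps = 1000, 5000

phenotype = rng.normal(size=n_people)               # random "trait": no signal
snps = rng.integers(0, 3, size=(n_snps, n_people))  # random genotypes (0/1/2)

# Pearson correlation of every "SNP" with the phenotype.
snps_c = snps - snps.mean(axis=1, keepdims=True)
pheno_c = phenotype - phenotype.mean()
r = (snps_c @ pheno_c) / (
    np.sqrt((snps_c**2).sum(axis=1)) * np.sqrt((pheno_c**2).sum())
)

# |r| > ~0.062 corresponds to p < .05 (two-tailed) at n = 1000. With 5,000
# signal-free SNPs we expect ~5%, i.e. ~250 chance "hits".
print((np.abs(r) > 0.062).sum())
```

The count of chance "hits" grows with the number of variants tested, which is why, per the argument above, larger samples alone cannot turn associations into causes.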

How can hereditarians show PGS/PRS causation?

This is a hard question to answer, but I think I have an answer. The hereditarian must:

(1) provide a valid deductive argument, in which the conclusion is the phenomenon to be explained; (2) provide an explanans (the sentences adduced as the explanation for the phenomenon) that contains at least one lawlike generalization; and (3) show that the remaining premises, which state the antecedent conditions, have empirical content and are true.

An explanandum is a description of the event that needs explaining (in this case, the PGS/PRS findings), while an explanans does the explaining—the explanans is the set of sentences adduced as the explanation of the explanandum. Garson (2018: 30) gives the example of zebra stripes and flies. The explanans is Stripes deter flies while the explanandum is Zebras have stripes. So we can then say that zebras have stripes because stripes deter flies.
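For reference, the three conditions above paraphrase Hempel's deductive-nomological (D-N) schema, which can be put as follows (a standard presentation, not anything specific to polygenic scores):

```latex
\[
\begin{array}{ll}
C_1, C_2, \ldots, C_k & \text{(antecedent conditions: true, with empirical content)}\\
L_1, L_2, \ldots, L_r & \text{(at least one lawlike generalization)}\\
\hline
E & \text{(explanandum: the phenomenon to be explained)}
\end{array}
\]
```

On this schema, a causal explanation of PGS/PRS findings would need a true lawlike generalization connecting the scored variants to the trait, which is precisely the mechanistic link that, as argued above, has not been shown.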

Causation for PGS would not be shown, for example, by showing that certain races/ethnies have higher PGSs for "intelligence". The claim is that since Jews have higher PGSs for "intelligence", it follows that PGSs can show causation (e.g., Dunkel et al, 2019; see Freese et al, 2019 for a response). But this just shows how ideology can and does color the conclusions one gleans from certain data. That is NOT sufficient to show causation for PGS.

Conclusion

PGSs cannot, currently, show causation. The studies that such scores are derived from fall prey to the fact that spurious correlations are inevitable in large datasets, which is also a problem for other hereditarian claims (about twins and GWA studies of "intelligence"). Thus, PGSs do not show causation, and since large datasets lead to spurious correlations, even increasing the number of subjects in a study would not elucidate "genetic causation."

Binet and Simon’s “Ideal City”

1500 words

Ranking human worth on the basis of how well one compares in academic contests, with the effect that high ranks are associated with privilege, status, and power, does suggest that psychometry is best explored as a form of vertical classification and attending rankings of social value. (Garrison, 2009: 36)

Binet and Simon's (1916) book The Development of Intelligence in Children is something of a Bible for IQ-ists. The book chronicles the methods Binet and Simon used to construct their tests for children, to identify those children who needed more help at school. In the book, they describe the anatomic measures they used. Indeed, before becoming a self-taught psychologist, Binet measured skulls and concluded that skull measurements did not correlate with teachers' assessments of their students' "intelligence" (Gould, 1995, chapter 5).

In any case, despite Binet’s protestations that Gould discusses, he wanted to use his tests to create what Binet and Simon (1916: 262) called an “ideal city.”

It now remains to explain the use of our measuring scale which we consider a standard of the child's intelligence. Of what use is a measure of intelligence? Without doubt one could conceive many possible applications of the process, in dreaming of a future where the social sphere would be better organized than ours; where every one would work according to his known aptitudes in such a way that no particle of force should be lost for society. That would be the ideal city. It is indeed far from us. But we have to remain among the sterner and matter-of-fact realities of life, since we here deal with practical experiments which are the most commonplace realities.

Binet disregarded his skull measurements as a correlate of 'intelligence' since they did not agree with teachers' ratings. But Binet and Simon (1916: 309) then discuss how teachers assessed students (and give an example). This is how Binet made sure that the new psychological 'measure' he devised related to how teachers assessed their students. Binet and Simon's "theory" grouped certain children as "superior" and others as "inferior" in 'intelligence' (whatever that is), but did not pinpoint biology as the cause of the differences between the children. These groupings, though, corresponded to the social class of the children.

Thus, in effect, what Binet and Simon wanted to do was organize society along social class lines while using their 'intelligence tests' to place individuals where they "belonged" on the hierarchy on the basis of their "intelligence"—whether that "intelligence" was "innate" or "learned." Indeed, Binet and Simon did originally develop their scales to distinguish children who needed more help in school than others. They assumed that individuals had certain (intellectual) properties which then related to their class position, and that by using their scales, they could identify certain children and then place them into certain classes for remedial help. But a closer reading of Binet and Simon shows two hereditarians who wanted to use their tests for reasons similar to those for which the tests were eventually brought to America!

Binet and Simon’s test was created to “separate natural intelligence and instruction” since they attempted to ‘measure’ the “natural intelligence” (Mensh and Mensh, 1991). Mensh and Mensh (1991: 23) continue:

Although Binet’s original aim was to construct an instrument for classifying unsuccessful school performers inferior in intelligence, it was impossible for him to create one that would do only that, i.e., function at only one extreme. Because his test was a projection of the relationship between concepts of inferiority and superiority—each of which requires the other—it was intrinsically a device for universal ranking according to alleged mental worth.

This "ideal city" that Binet and Simon imagined would have individuals work according to their "known aptitudes"—meaning that individuals would work where their social class dictated they would work. This was, in fact, eerily similar to the uses of the test that Goddard translated and of the test—the Stanford-Binet—that Terman developed in 1916.

Binet and Simon (1916: 92) also discuss further uses for their tests, irrespective of job placement for individuals:

When the work, which is here only begun, shall have taken its definite character, it will doubtless permit the solution of many pending questions, since we are aiming at nothing less than the measure of intelligence; one will thus know how to compare the different intellectual levels not only according to age, but according to sex, social condition, and to race; applications of our method will be found useful to normal anthropology, and also to criminal anthropology, which touches closely upon the study of the subnormal, and will receive the principal conclusion of our study.

Binet, therefore, had views similar to Goddard's and Terman's regarding "tests of intelligence", and Binet wanted to stratify society by 'intelligence' using his own tests (which were culturally biased against certain classes). Binet's writings on the uses of his tests, ironically, mirrored what the creators of the Army Alpha and Beta tests believed. Binet believed that his tests could select individuals who were right for the roles they would be designated to work in. Binet, nevertheless, contradicted himself numerous times (Spring, 1972; Mensh and Mensh, 1991).

This dream of an "ideal city" was taken a step further when Binet's test was brought to America, translated by Goddard, and used for selecting military recruits (call it an "ideal country"). The test would be constructed in order to "ensure" that the right percentages of "the right" people would be in the spots designated to them on the basis of their intelligence.

What Binet was attempting to do was mark individual social value with his test. He claimed that we can use his (practical) test to select people for certain social roles. Thus, Binet's dream of what his tests would do—tests which were then further developed by Goddard, Yerkes, Terman, et al—is inherent in what the IQ-ists of today want to do. They believe that there are "IQ cutoffs", meaning that people with an IQ above or below a certain threshold won't be able to do job X. However, the causal efficacy of IQ is what is in question, along with the fact that IQ-ists construct their own biases into tests that they believe are 'objective.' But where Binet differed from the IQ-ists of today and his contemporaries was in believing that 'intelligence' is relative to one's social situation (Binet and Simon, 1916: 266-267).

It is ironic that Gould believed that we could use Binet's test (along with contemporary tests constructed and 'validated'—correlated—with Terman's Stanford-Binet test) for 'good'; this is what Binet thought would be done. But once the hereditarians had Binet's test, they took Binet's arguments to a logical conclusion. This also has to do with the fact that the test was constructed AND THEN they attempted to 'see' what was 'measured' with correlational studies. The 'meaning' of test scores, thus, is seen after the fact with—wait for it—correlations with other tests that were 'validated' against other (unvalidated) tests.

This comes back to the claim that the mental can be 'measured' at all. If physicalism is false—and there are dozens of (a priori) arguments that establish this—and the mental is therefore irreducible to the physical, then psychological traits—and with them the mind—cannot be measured. Further, rankings are not measures (Nash, 1990: 63); therefore, ability and achievement tests cannot be 'measures' of any property of individuals or groups. The object of measurement is the human, and this was inherent both in Binet's original conception of his test and in what the IQ-ists in America attempted with their restrictions on immigration in the early 1900s.

This speaks to the fatalism that is inherent in IQ-ism—and has been since the creation of the first standardized tests (of which IQ tests are one kind). These tests are—and have been since their inception—attempts to measure human worth and the differences in value between persons. The IQ-ist claims that "IQ tests must measure something", and this 'measurement', it is claimed, is shown by the fact that the tests have 'predictive validity.' But such claims that a 'property' is inherent in individuals and groups fail. The real 'function' of standardized testing is assessment, not measurement.

The "ideal city", it seems, is just a city of IQ-ism—where one's social roles are allotted by where one scores on a test that is constructed to get the results the constructors want. Therefore, what Binet wanted his tests to do was mark social worth (and some may even argue that this is what they are still being used for) (Garrison, 2004, 2009). Psychometry is therefore a political ring. It is inherently political and not "value-free." Psychologists/psychometricians do not have an 'objective science', as the object of study (the human) can reflexively change their behavior when they know they are being studied. Their field is inherently political, and they mark individuals and groups—whether they admit it or not. "Ideal cities" can lead to eugenic thinking, in any case, and striving for "ideality" can lead to social harms—even if the intentions are 'good.'

White Privilege: What It Is and Who Has It?

2550 words

Discussions about whiteness and privilege have become more and more common. Whites, it is argued, have a form of unearned societal privilege which then explains certain gaps between whites and non-whites. White privilege is the privilege that whites have in society—this type of privilege does not have to be in America; it can hold for groups that are viewed as 'white' in other countries. This, then, perpetuates social views of race—hence those who hold this view are realists about race, but in a social/political context, and do not have to recognize race as biological (although race can become biologicized through social/cultural practices). This article will discuss (1) what white privilege is; (2) who has white privilege; (3) arguments against white privilege; and (4) if race doesn't exist, why white privilege matters.

What is white privilege?

The concept of white privilege, like most concepts, evolves with the times and current social thought. The concept was originally created in order to account for whites' (unearned) privileges and the conscious bias that went into creating and then maintaining those privileges; it has since expanded to cover the unconscious favoritism/psychological advantages that whites confer on other whites (Bennett, 2012: 75). That is, white privilege is "an invisible package of unearned assets that I can count on cashing in each day, but about which I was "meant" to remain oblivious. White privilege is like an invisible weightless knapsack of special provisions, maps, passports, codebooks, visas, clothes, tools, and blank checks" (McIntosh, 1988).

More simply, we can say that white privilege is the privilege conferred, either consciously or subconsciously, on one based on their skin color—or, as Sullivan (2016, 2019) argues, what we should be talking about is one's class status ALONG WITH one's whiteness: white privilege with CLASS in between 'white' and 'privilege'. In this sense, one's class status AND one's whiteness are explanatory, not the concept of whiteness alone (i.e., one's socialrace). The concept of whiteness—one's skin color—as the privilege leaves out numerous intricacies in how whiteness confers and upholds systemic discrimination. When we add the concept of 'class' to 'white privilege' we get what Sullivan terms 'white class privilege'.

While, yes, one's race is an important variable in whether or not one has certain privileges, such privileges mainly hold for middle- to upper-middle-class whites. Thus, numerous examples of 'white privilege' are better understood as examples of 'white class privilege', since lower-class whites don't have the same kinds of privileges, outlooks, and social status as middle- and upper-middle-class whites. Of course, lower-class whites can benefit from their whiteness—they definitely can. But the force of Sullivan's concept of 'white class privilege' is this: white privilege is not monolithic across whites, and some non-whites are better off (economically and in regard to health) than some whites. Thus, according to Sullivan, 'white privilege' should be amended to 'white class privilege'.

Who has white privilege?

Lower-class whites could, in a way, be treated differently than middle- and upper-class whites—even though they are of the same race. Lower-class whites can be seen to have 'white privilege' in everyday thought, since most think of the privilege as coming down to just skin color, yet there is an under-discussed class dimension at play here, one which can even give upper-class blacks an advantage while upholding the privilege of upper-class whites.

Non-whites who are of a higher social class than some whites would also receive different treatment. Sullivan states that the revised concept of 'white class privilege' must be used intersectionally—that is, privilege must be considered as interacting with class, gender, national, and other social experiences. Sure, lower-class whites may be treated differently than higher-class blacks in certain contexts, but this does not mean that the lower-class white has 'more privilege' than the upper-class black. This shows that we should not assume that lower-class whites have the same kinds of privilege conferred by society as middle- and upper-class whites. Upper-class blacks and 'Hispanics' may attempt to distinguish themselves from lower-class blacks and 'Hispanics', as Sullivan (2019: 18-19) explains:

Class privilege shows up as a feature of most if not all racial groups in which members with “more”—more money, education, or whatever else is valued in society—are treated better than those with “less.” For that reason, we might think that white class privilege actually is an intragroup pattern of advantage and disadvantage among whites, rather than an intergroup pattern that gives white people a leg up over non-white people. After all, many Black middle-class and upper-middle-class Americans also go to great lengths to make sure that they are not mistaken for the Black poor in public spaces: when they are shopping, working, walking, or driving in town, and so on (Lacy, 2007). A similar pattern can be found with middle-to-upper-class Hispanic/Latinx people in the United States, who can “protect” themselves from being seen as illegal immigrants by ensuring that they are not identified as poor (Masuoka and Junn, 2013).

Sullivan then goes on to state that these situations are not equivalent, since wealth, fame, and education do not protect upper-class blacks from racial discrimination. The privileges that upper-class whites have thus do not transfer to upper-class blacks. Further, middle- to upper-class whites distinguish themselves as 'good whites' who are not racist, while dumping the accusations of racism onto lower-class whites: "…the line between "good" and "bad" white people drawn by many (good) white people is heavily classed. Good white people tend to be middle-to-upper-class, and they often dump responsibility for racism onto lower-class white people" (Sullivan, 2019: 35). Even though lower-class whites get used as a 'shield', so to speak, by upper-class whites, they still have some semblance of white privilege, in that they are not assumed to be non-citizens of the US, something that 'Hispanics' do have to deal with (no matter their race).

While wealthy white people generally have more affordances than poor white people do, in a society that prizes whiteness all white people have some racial affordances, at least some of the time.

Paradoxically, whites are not the only ones who benefit from 'white privilege'; even non-whites can benefit, though the privilege ultimately helps upper-class whites most. Non-whites can benefit by being brought up in a white home, around whites (for example, by being adopted or by having one white parent and spending most of their childhood with their white family). Thus, white privilege can cross racial lines while still benefitting whites.

Sullivan (2019: chapter 2) discusses some blacks who benefit from white privilege. One of the people she discusses has a white parent. This is what gives her her lighter skin, but it is not where her privilege comes from (think of colorism in the black community, where lighter skin is more prized than darker skin). Her privilege came from "her implicit knowledge of white norms, sensibilities, and ways of doing things that came from living with and being accepted by white family members" (Sullivan, 2019: 26). This is what Sullivan calls "family familiarity", and it is one of the ways that blacks can benefit from white privilege. Another way in which blacks can benefit from white privilege is through "ancestral ties to whiteness."

Colorism is discrimination within the black community on the basis of skin color. Certain blacks may talk about "light-" and "dark-skinned" blacks and may, ironically or not, discriminate on the basis of skin color. Such colorism is even somewhat instilled in the black community, where darker-skinned black sons and lighter-skinned black daughters report higher-quality parenting. Landor et al (2014) report that their "findings provide evidence that parents may have internalized this gendered colorism and as a result, either consciously or unconsciously, display higher quality of parenting to their lighter skin daughters and darker skin sons." Thus, even certain blacks, in virtue of being 'part white', would benefit from white (skin) privilege within their own (black) community, which would give them certain advantages.

Arguments against white privilege

Two recent articles with arguments against white privilege (Why White Privilege Is Wrong — Quillette and The Fallacy of White Privilege — and How It Is Corroding Society) erroneously argue that since other minority groups quickly rose up upon arrival to America, white privilege must be a myth. These kinds of takes are quite confused. It does not follow from the fact that other groups have risen upon entry into America, or from the fact that whites have worse outcomes on some (and not other) health measures, that the concept of white privilege is 'fallacious'; we just need a more fine-grained concept.

For example, the claim that X minority group is over-represented compared to whites in America gets used as evidence that 'white privilege' does not exist (e.g., Avora's article). Avora discusses the experiences of and data on many black immigrants, proclaiming:

These facts challenge the prevailing progressive notion that America’s institutions are built to universally favor whites and “oppress” minorities or blacks. On the whole, whatever “systemic racism” exists appears to be incredibly ineffectual, or even nonexistent, given the multitude of groups who consistently eclipse whites.

How does that follow? How does the observation that, for example, Japanese Americans now outperform whites show that white privilege is a 'fallacy'? I ask because Asian immigrants to America are hyper-selected (Noam, 2014; Zhou and Lee, 2017): what explains higher Asian academic achievement is academic effort (Hsin and Xie, 2014) along with the fact that hyper-selected Asian immigrants are more likely to already hold higher degrees.

The educational credentials of these recent [Asian] arrivals are striking. More than six-in-ten (61%) adults ages 25 to 64 who have come from Asia in recent years have at least a bachelor’s degree. This is double the share among recent non-Asian arrivals, and almost surely makes the recent Asian arrivals the most highly educated cohort of immigrants in U.S. history.

Compared with the educational attainment of the population in their country of origin, recent Asian immigrants also stand out as a select group. For example, about 27% of adults ages 25 to 64 in South Korea and 25% in Japan have a bachelor’s degree or more. In contrast, nearly 70% of comparably aged recent immigrants from these two countries have at least a bachelor’s degree. (The Rise of Asian Americans)

Avora even discusses some African immigrants, namely Nigerians and Ghanaians. However, just like Asian immigrants to America, Nigerian and Ghanaian immigrants to America are more likely to hold advanced degrees, signifying that they are indeed hyper-selected in comparison to the populations they derive from (Duvivier, Burch, and Boulet, 2017). Thus, regarding the stats that Avora cites on the children of Nigerian immigrants, their parents already held higher degrees, signifying that they are indeed a hyper-selected group. This means that such hyper-selected ethnic groups cannot be used to show that white privilege is a myth.

While Avora does discuss "class" in his article, what he shows is that it is not only 'white privilege' at work, but the class element that comes along with whiteness in America. He therefore unknowingly shows that once you add the 'class' factor and arrive at the concept of 'white class privilege', this privilege can cross racial lines and benefit non-whites.

In their Quillette article, Harinam and Henderson argue that since non-whites have more of some things we call 'good' than whites do, the concept of 'white privilege' does not explain the existence of disparities between ethnic groups in the US: some bad things happen to whites and some good things happen to non-whites. But this is an oversimplification. The whites who do receive privileges over other ethnic/racial groups receive them not in virtue of their (white) skin alone, but in virtue of the class privilege that comes along with it. This can be seen in the above citations on class being the explanatory variable in Asian academic success (showing how class values get reproduced in the new country, which then explains the academic success of Asians in America).

Both articles assume that showing that some minority groups in America have more 'good' things than whites, or fare better on bad things (like suicide), refutes white privilege; this misses the point. That whites kill themselves at higher rates than other American ethnic groups does not mean that whites lack privilege in America compared to other groups.

If race doesn’t exist, then why does white privilege matter?

Lastly, those who argue against the concept of white privilege may say that its proponents often also deny that race, and therefore 'whites', exists; so, in effect, what are they talking about if 'whites' don't exist because race does not exist? This objection fails. One can reject the claims of biological racial realists while still believing that race exists as a socially constructed reality. Thus, one can reject the claim that there is a 'biological' European race while accepting the claim that there is an ever-changing 'white' race, in which groups get added or subtracted based on current social thought (e.g., the Irish, Italians, Jews), changing with how society views certain groups.

It is perfectly possible, then, for race to exist socially and not biologically. The social creation of races places the arbitrarily created racial groups at certain positions in the hierarchy of races. Roberts (2011: 15) states that "Race is not a biological category that is politically charged. It is a political category that has been disguised as a biological one." She argues that we are not biologically separated into races; we are politically separated into them, marking race as a political construct. Most people believe that the claim "Race is a social construct" means that "Race does not exist." That, however, would be ridiculous. The social constructivist just believes that society divides people into races on the basis of how we look (i.e., how we are born): society takes the phenotype and creates races out of differences which then correlate with certain continents.

So there is no contradiction between the claim that "Race does not exist (biologically)" and the claim that "Whites have certain unearned privileges over other groups." Being an antirealist about biological race does not mean that one is an antirealist about socialraces. Thus, one can believe that whites have certain privileges over other groups while being an antirealist about biological races (saying that "Races don't exist biologically").

Conclusion

In this article I have explained what white privilege is and who has it. I have also discussed arguments against white privilege, along with the claim that those who argue against race are hypocrites since they still talk about "whites" while denying that race exists. After showing the conceptual confusions that people have about white privilege, and after showing that the groups that do better than whites in America (the groups that supposedly show white privilege to be "a fallacy") are hyper-selected, I then forwarded Sullivan's (2016, 2019) argument for white class privilege. This shows that whiteness is not the sole reason why privileged whites prosper; their whiteness along with their middle-to-upper-middle-class status explains why they prosper. It furthermore shows that while lower-class whites do have some sort of white privilege, they do not have all of the affordances of white privilege, due to their class status. Blacks, too, can benefit from white privilege, whether due to their proximity to whiteness or their ancestral ties to it.

White privilege does exist, but to fully understand it, we must consider it in its nexus with class.