
Free Will and the Immaterial Self: How Free Will Proves that Humans Aren’t Fully Physical Beings

2200 words

Introduction

That humans have freedom of will demonstrates that there is an immaterial aspect to humans: a nonphysical aspect, and thus humans aren’t fully physical beings. I will use the Ross-Feser argument on the immateriality of thought to strengthen that conclusion. But before that, I will demonstrate that we do indeed have free will. That conclusion will then be used to infer that we are not fully physical beings—a conclusion also supported by arguments for many flavors of dualism. I will conclude by providing a case against the physicalist, materialist view that seeks to reduce human beings to purely physical entities, since that view is directly contested by the conclusion of my argument.

CID and free will

I recently argued for a view I call cognitive interface dualism (CID). The argument I formulated used action potentials (APs) as the intermediary between the mental and physical realms that Descartes was looking for (he thought this interaction took place at the pineal gland, but he was wrong). On CID, free will can be seen as a product of mental autonomy, non-deterministic mental causation, and the emergent properties of mind. So CID can accommodate free will and allow for its existence without relying on determinism.

The CID framework also argues that M is irreducible to P, consistent with other forms of dualism. This suggests that the mind has a level of autonomy that isn’t completely determined by physical or material processes. Decision-making, furthermore, occurs in the mental realm. CID allows mental states to causally influence physical states (mental causation), so free will operates when humans make choices, and these choices can initiate actions which aren’t determined by physical factors. Free will is also compatible with the necessary role of the human brain for minds—it’s an emergent property of the interaction of M and P. The fact of the matter is that minds allow agency, the ability to reason and make choices. That is, humans are unique, special animals, and they are unique and special because they have an immaterial mind which allows the capacity to make decisions and have freedom.

Overall, the CID framework provides a coherent explanation for the existence of free will, alongside the role of the brain in human cognition. It further allows for a nuanced perspective on human agency, while emphasizing the unique qualities of human decision-making and freedom.

Philosopher Peter van Inwagen has an argument using modus ponens which states: if moral responsibility exists, then free will exists. Moral responsibility exists, because individuals are held accountable for their actions in the legal system, in ethical discussions, and in everyday life. Thus, free will exists. Basically, if you’ve ever said to someone “That’s your fault,” you’re holding them accountable for their actions, assuming that they had the capacity to make choices and decisions independently. So this aligns with the concept of free will, since you’re implying that the person had the ability to act differently and make alternative choices.

The Libet experiments are taken to show that unconscious brain processes are initiated before an action is made, preceding the conscious intention to move. But neither the original Libet experiment nor any similar ones justify the claim that the brain initiates freely-willed processes (Radder and Meynen, 2012)—because the mind is what initiates these freely-willed actions.

Furthermore, when we introspect and reflect on our conscious experiences, we unmistakably perceive ourselves as making choices and decisions in various situations in our lives. These choices and decisions feel unconstrained and open, and we experience a sense of deliberation when making them. But if we had no free will and our choices were entirely determined by external factors, then our experience of making choices would be illusory; our choices would be mere illusions of free will. Thus, the fact that we have a direct and introspective awareness of making choices implies that free will exists; it’s a fundamental aspect of our human experience. So while this argument doesn’t necessarily prove that free will exists, it highlights the compelling phenomenological aspects of human decision-making, which can be seen as evidence for free will.

Having said all of this, I can now make the following argument: If humans have the ability to reason and make logical decisions, then humans have free will. Humans have the ability to reason and make logical decisions. So humans have free will. I will take this conclusion and use it in a later argument to infer that humans aren’t purely physical beings.

Freedom and the immaterial self

James Ross (1992) argued that all formal thinking is incompossibly determinate, while no physical process or function of physical processes is incompossibly determinate, which allowed him to infer that thought isn’t a functional or physical process. Ed Feser (2013) then argued that Ross’ argument cannot be refuted by any neuroscientific discovery. Feser added to the argument and correctly inferred that humans aren’t fully physical beings.

A, B, and C are, after all, only the heart of Ross’s position.  A little more fully spelled out, his overall argument essentially goes something like this:

A. All formal thinking is determinate.

B. No physical process is determinate.

C. No formal thinking is a physical process. [From A and B]

D. Machines are purely physical.

E. Machines do not engage in formal thinking. [From C and D]

F. We engage in formal thinking.

G. We are not purely physical. [From C and F] (Ed Feser, Can Machines Beg the Question?)

This is a conclusion that I myself have come to: machines are purely physical, and since thinking isn’t a physical process (though physical processes are necessary for thinking), machines cannot think, because they are purely physical and thinking isn’t a physical or functional process.

Only beings with minds can intend. This is because the mind allows a being to think. Since the mind isn’t physical, it follows that a physical system can’t intend to do something—it wouldn’t have the capacity to think. Take an alarm system. The alarm system does not intend to sound alarms when it is tripped; it’s merely doing what it was designed to do, not intending to bring about the outcome. The alarm system is a physical thing made up of physical parts. We can then liken this to, say, AI. AI is made up of physical parts, so AI (a computer, a machine) can’t think. Moreover, individual physical parts are mindless, and no collection of mindless things counts as a mind. Thus, a mind isn’t a collection of physical parts. Physical systems are always complicated systems of parts, but the mind isn’t. So it seems to follow that nothing physical can ever have a mind.

Physical parts of the natural world lack intentionality. That is, they aren’t “about” anything. It is impossible for an arrangement of physical particles to be “about” anything—meaning no arrangement of intentionality-less parts will ever count as having a mind. So a mind can’t be an arrangement of physical particles, since individual particles are mindless. Since mind is necessary for intentionality, it follows that whatever doesn’t have a mind cannot intend to do anything, like nonhuman animals. Human psychology is normative, and since the normative ingredient for any normative concept is the concept of reason, and only beings with minds can have reasons to act, human psychology is thus irreducible to anything physical. Indeed, physicalism is incompatible with intentionality (Johns, 2020). The problem of intentionality is therefore yet another kill-shot for physicalism. It is therefore impossible for intentional states (i.e., cognition) to be reduced to, or explained by, physicalist theories or physical things. (Why Purely Physical Things Will Never Be Able to Think: The Irreducibility of Intentionality to Physical States)

Now that I have argued for the existence of free will, I will argue that our free will implies that there is an aspect of our selves and our existence that is not purely physical but immaterial. Effectively, I will be arguing that humans aren’t fully physical beings.

So if humans were purely physical beings, then our actions and choices would be solely determined by physical laws and processes. However, if we have free will, then our actions are not solely determined by physical laws and processes, but are influenced by our capacity to make decisions independently. So humans possess a nonphysical aspect—free will, which is allowed by the immaterial mind and consciousness—which allows us to transcend the purely deterministic nature of purely physical things. Consequently, humans cannot be fully physical beings, since the existence of free will and the immaterial mind and consciousness suggests a nonphysical, immaterial aspect to our existence.

Either humans have free will, or humans do not have free will. If humans have free will, then humans aren’t purely physical. If humans don’t have free will, then that contradicts the conclusion established above that we do have free will. So humans must have free will. Consequently, humans aren’t fully physical beings.

Humans aren’t fully physical beings, since we have the capacity for free will and thought—where free will is the capacity to make choices that are not determined by external factors alone. If humans have the ability to reason and make logical decisions, then humans have free will. Humans have the ability to reason and make logical decisions. So humans have free will. Reasoning and the ability to make logical decisions are based on thinking. Thinking is an immaterial—non-physical—process. So if thinking is an immaterial process, and what allows thinking are minds which can’t be physical, then we aren’t purely physical. Put into premise and conclusion form, it goes like this:

(1) If humans have the ability to reason and make logical decisions, then humans have free will.
(2) Humans have the ability to reason and make logical decisions.
(3) Reasoning and making logical decisions are based on thinking.
(4) Thinking is an immaterial—non-physical—process.
(5) If humans have free will, and what allows free will is the ability to think and make decisions, then humans aren’t purely physical beings.
(C) So humans aren’t purely physical beings.

This argument suggests that humans possess free will and engage in immaterial thinking processes, which according to the Ross-Feser argument, implies the existence of immaterial aspects of thought. So what allows this is consciousness, and the existence of consciousness implies the existence of a nonphysical entity. This nonphysical entity is the mind.

So in CID, the self (S) is the subject of experience, the mind (M) encompasses mental states, subjective experiences, thoughts, emotions, and consciousness, and consciousness (C) refers to the awareness of one’s own mental states and experiences. CID also recognizes that the brain is a necessary precondition for human mindedness but not a sufficient condition, so for there to be a mind at all there needs to be a brain—basically, for there to be mental facts, there must be physical facts. The self is what has the mind, and the mind is the realm in which mental states and experiences occur. So CID posits that the self is the unified experiencer—the entity that experiences and interacts with the contents of the mind through APs.

So the argument I’ve mounted in this article and in my original article on CID is that humans aren’t fully physical beings, since it is based on the idea that thinking and conscious experiences are immaterial, nonphysical processes.

Conclusion

So CID offers a novel perspective on the mind-body problem, arguing that APs are the interface between the mental and the physical world. The arguments I’ve made here establish that humans aren’t purely physical beings. Through the argument that mental states are irreducible to physical states, CID acknowledges that the existence of an immaterial self plays a fundamental role in human mental life. This immaterial self—the seat of our conscious experiences, thoughts, decisions and desires—bridges the gap between M and P. This further underscores the argument that the mind is immaterial, and thus so is the self (“I”, the experiencer, the subject of experience), and that the subject isn’t the brain or the nervous system.

CID recognizes that human mental life is characterized by its intrinsic mental autonomy and free will. We are not mere products of deterministic physical processes; rather, we are agents capable of making genuine choices and decisions. The conscious experience of making choices, along with the profound sense of freedom in our decisions, is an immediate and undeniable aspect of our reality, which further cements the existence of free will. So the concept of free will reinforces the claim and argument that humans aren’t fully physical beings. These aspects of our mental life defy reduction to physical causation.

Hypertension, Brain Volume, and Race: Hypotheses, Predictions and Actionable Strategies

2300 words

Introduction

Hypertension (HT, also known as high blood pressure, BP) has traditionally been defined as a BP of 140/90. More recently, the guidelines were changed, defining HT as a BP over 130/80 (Carey et al, 2022; Iqbal and Jamal, 2022). One 2019 study showed that in a sample with an age range of 20-79, 24 percent of men and 23 percent of women could be classified as hypertensive based on the old guidelines (140/90) (Deguire et al, 2019). Having consistently high BP can lead to devastating consequences like (from the patient’s perspective) hot flushes, dizziness, and mood disorders (Goodhart, 2016). One serious problem with HT is that consistently high BP is associated with a decrease in brain volume (BV). This has been seen in systematic reviews and meta-analyses (Alosco et al, 2013; Beauchet et al, 2013; Lane et al, 2019; Alateeq, Walsh and Cherbuin, 2021; Newby et al, 2022), and we know that long-standing hypertension has deleterious effects on brain health (Salerno et al, 1992). However, it’s not only high BP that’s related to this; so is lower BP in conjunction with lower pulse pressure (Muller et al, 2010; Foster-Dingley, 2015). So what this says to me is that too much or too little blood flow to the brain is deleterious for brain health. I will state the hypothesis and then the predictions that follow from it. I will then provide three reasons why I think this relationship occurs.

The hypothesis

The hypothesis is simple: high BP (hypertension, HT) is associated with a reduced brain volume. This relationship is dose-dependent, meaning that the extent and duration of HT correlates with the degree of BV changes. So the hypothesis suggests that there is a relationship—an association—between HT and brain volume, where people with HT will be more likely to have decreased BVs than those who lack HT—that is, those with BP in the normal range.

This dose-dependent relationship has been observed (Alateeq, Walsh and Cherbuin, 2021), which shows that as HT increases and persists over time, the effects on BV become more pronounced. This suggests that it’s not a binary, present-or-absent situation, but one that varies along a continuum. So people with shorter-lasting HT will show smaller effects than those with constant, consistently elevated BP, who will show correspondingly greater decreases in BV. This dose-dependent relationship also suggests that as BP continues to be elevated, the decrease in BV will worsen.

This dose-dependent relationship implies a few things. The consequences of HT for BV aren’t binary (either-or) but are related to the severity of HT, how long one has had it, and the age at which one develops it, varying along a continuum. For instance, people with mild or short-lasting HT would experience smaller reductions in BV than those with severe or long-standing HT. The dose-dependent relationship also suggests that the longer one has HT without treatment, the more severe the reduction in BV will be if it remains uncontrolled. So the relationship between HT and BV isn’t uniform; it varies with the intensity and duration of high BP.

So the hypothesis suggests that HT isn’t just a risk factor for cardiovascular disease; it’s also a risk factor for decreased BV. This seems intuitive, since the higher one’s BP, the more likely it is that there are the beginnings of a blockage somewhere in the intricate system of blood vessels in the body. And since the brain is a vascular organ, decreasing the amount of blood flowing to it would lead to cell death and white matter lesions, which would lead to a smaller BV. One newer study, with a sample of Asians, whites, blacks, and “Latinos,” showed that, compared to those with normal BP, those who were transitioning to higher BP or already had higher BP had lower brain connectivity and decreased cerebral gray matter and frontal cortex volume, and this change was worse for men (George et al, 2023). Shang et al (2021) showed that HT diagnosed in early and middle life, but not late life, was associated with decreased BV and increased risk of dementia. This, of course, is due to the slow, cumulative effects of HT on the brain. Power et al (2016) found that “The pattern of hypertension ~15 years prior and hypotension concurrent with neuroimaging was associated with smaller volumes in regions preferentially affected by Alzheimer’s disease.” But not only is BP relevant here, so is the variability of BP at night (Gutteridge et al, 2022; Yu et al, 2022). Alateeq, Walsh and Cherbuin (2021) conclude that:

Although reviews have been previously published in this area, they only investigated the effects of hypertension on brain volume [86]. To the best of our knowledge, this study’s the first systematic review with meta-analysis providing quantitative evidence on the negative association between continuous BP and global and regional brain volumes. Our results suggest that heightened BP across its whole range is associated with poorer cerebral health which may place individuals at increased risk of premature cognitive decline and dementia. It is therefore important that more prevention efforts be directed at younger populations with a greater focus on achieving optimal BP rather than remaining below clinical or pre-clinical thresholds[5].

One would think that high BP would actually increase blood flow to the brain, but HT actually causes alterations in the flow of blood to the brain which lead to ischaemia, and it causes the blood-brain barrier to break down (Pires et al, 2013). Essentially, HT has devastating effects on the brain which could lead to dementia and Alzheimer’s (Iadecola and Davisson, 2009).

So the association between HT and decreased BV means that individuals with HT can experience alterations in BV in comparison to those with normal BP. The hypothesis also suggests that there are several mechanisms (detailed below), which may lead to various physiological and anatomic changes in the brain, such as vascular damage, inflammation and tissue atrophy.

The mechanisms

(1) High BP can damage blood vessels in the brain, which leads to reduced blood flow. This is called “cerebral hypoperfusion.” The reduced blood flow can deprive brain cells of oxygen and nutrients, causing them to shrink or die, which leads to decreased brain volume (BV). Over time, high BP can also damage the arteries, making them less elastic.

(2) Having high BP over a long period of time can cause hypertensive encephalopathy, which is basically brain swelling. A rapid increase in BP could increase BV over the short term, but left untreated it could lead to brain damage and atrophy over time.

And (3) chronically high BP can lead to white matter lesions in the brain—areas of damaged brain tissue resulting from microvascular changes caused by high BP. Over time, the accumulation of white matter lesions could lead to a decrease in brain volume. HT thus contributes to white matter lesions in the brain, which are associated with cognitive changes and decreased BV, and these lesions increase with BP severity.

So we have (1) cerebral hypoperfusion, (2) hypertensive encephalopathy, and (3) white matter lesions. I need to think/read more on which of these could lead to decreased BV, or if they all actually work together to decrease BV. We know that HT damages blood vessels, and of course there are blood vessels in the brain, so it then follows that HT would decrease BV.

I can also detail a step-by-step mechanism. The process begins with consistently elevated BP, which could be due to various factors like genetics, diet/lifestyle, and underlying medical conditions. High BP then places increased strain on the blood vessels in the body, including those in the brain. This higher pressure could then lead to structural changes in the blood vessels over time. Chronic HT can then lead to endothelial dysfunction, which could impair the ability of blood vessels to regulate blood flow and maintain vessel integrity. This dysfunction can result in oxidative stress and inflammation.

Then, as a response to prolonged elevated BP, blood vessels in the brain could undergo vascular remodeling, which involves changes in blood vessel structure and thickness, which can then affect blood flow dynamics. Furthermore, in some cases this could lead to cerebral small vessel disease, which involves damage to the small blood vessels in the brain, including capillaries and arterioles. This could impair delivery of oxygen and nutrients to brain tissue, leading to cell death and consequently a decrease in BV. Reduced blood flow along with compromised blood vessel integrity could then lead to cerebral ischaemia—reduced blood supply—and hypoxia—reduced oxygen supply—in certain parts of the brain. This can result in neural damage and eventually cell death.

HT-related vascular changes and cerebral small vessel disease can then trigger brain inflammation. Prolonged exposure to neuroinflammation, hypoxia, and ischemia can lead to neuronal atrophy, where neurons shrink and lose their functional integrity. HT can also increase the incidence of white matter lesions in the brain—areas of damaged white matter tissue visible on neuroimaging. Finally, over time, the cumulative effects of the aforementioned processes—vascular changes, inflammation, neural atrophy, and white matter changes—could lead to a decrease in BV. This reduction can manifest as brain atrophy in the parts of the brain that are most susceptible and vulnerable to the effects of HT.

So the step-by-step mechanism goes like this: elevated BP —> increased vascular strain —> endothelial dysfunction —> vascular remodeling —> cerebral small vessel disease —> ischemia and hypoxia —> inflammation and neuroinflammation —> neuronal atrophy —> white matter changes —> reduction in BV.

Hypotheses and predictions

H1: The severity of HT directly correlates with the extent of BV reduction. One prediction would be that people with more severe HT would exhibit greater BV decreases than those with moderate (less severe) HT, which is where the dose-dependent relationship comes in.

H2: The duration of HT is a critical factor in BV reduction. One prediction would be that people with long-standing HT will show more significant BV changes than those with recent onset HT.

H3: Effective BP management can mitigate BV reduction in people with HT. One prediction would be that people with more controlled HT would show less significant BV reduction than those with uncontrolled HT.

H4: Certain subpopulations may be more susceptible to BV decreases due to HT. One prediction is that factors like age of onset (HT at a younger age), genetic factors (some may have gene variants that make them more susceptible and vulnerable to damage caused by elevated BP), comorbidities (people with diabetes, obesity, and heart problems could be at higher risk of decreased BV due to the interaction of these factors), and ethnic/racial factors (some populations—like blacks—could be at higher risk of having HT and could be more at risk due to experiencing disparities in healthcare and treatment) will make some people more susceptible to HT-related decreases in BV.

The hypotheses and predictions generated from the main proposition that HT is associated with a reduction in BV and that the relationship is dose-dependent can be considered risky, novel predictions. They are risky in the sense that they are testable and falsifiable. Thus, if the predictions don’t hold, then it could falsify the initial hypothesis.
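As a sketch of how these predictions could be tested, consider H1. The following Python snippet is purely illustrative: the data are synthetic, and every number in it (sample size, slope, noise) is an assumption rather than a result. It simply shows the kind of dose-dependent regression that would confirm or disconfirm the prediction:

```python
import numpy as np
from scipy import stats

# Illustrative test of H1: regress brain volume on blood-pressure severity.
# The data below are synthetic; a real analysis would use measured systolic BP
# (or years of untreated hypertension) and MRI-derived brain volumes.
rng = np.random.default_rng(0)

n = 200
systolic_bp = rng.normal(135, 15, n)        # mmHg, hypothetical sample
assumed_slope = -1.5                        # cm^3 lost per mmHg (assumed for illustration only)
brain_volume = 1200 + assumed_slope * (systolic_bp - 120) + rng.normal(0, 40, n)

# H1 predicts a negative, dose-dependent association: higher BP, lower volume.
slope, intercept, r, p, se = stats.linregress(systolic_bp, brain_volume)
print(f"slope = {slope:.2f} cm^3 per mmHg, r = {r:.2f}, p = {p:.3g}")

# A slope near zero (or positive) in real data would count against H1,
# which is what makes the prediction risky and falsifiable.
```

A real test would of course also control for age, sex, and intracranial volume, and H2 and H3 would swap in duration of hypertension and treatment status as the predictor.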

Blacks and blood pressure

For populations like black Americans, this is significant. About 33 percent of blacks have hypertension (Peters, Aroian, and Flack, 2006), while urban blacks are more likely to have elevated BP than whites (Lindhorst et al, 2007). Non, Gravlee, and Mulligan (2012), though, showed that racial differences in education—not genetic ancestry—explained differences in BP between blacks and whites. Further, Victor et al (2018) showed that in black male barbershop attendees with uncontrolled BP, medication combined with outreach led to a decrease in BP. Williams (1992) cited stress, socioecologic stress, social support, coping patterns, health behavior, sodium, calcium, and potassium consumption, alcohol consumption, and obesity as social factors which lead to increased BP.

Moreover, consistent with the hypothesis discussed here (that chronically elevated BP leads to reductions in BV, which lead to a higher chance of dementia and Alzheimer’s), it’s been shown that vulnerability to HT is a major determinant of the risk of acquiring Alzheimer’s (Clark et al, 2020; Akushevic et al, 2022). It has also been shown that “a lifetime of racism makes Alzheimer’s more common in black Americans,” and consistent with the discussion here, since racism is associated with stress which is associated with elevated BP, consistent experiences of racial discrimination would lead to consistently elevated BP, which would then lead to decreased BV and a higher chance of acquiring Alzheimer’s. But there is evidence that blood pressure drugs (in this case telmisartan) reduce the incidence of Alzheimer’s in black Americans (Zhang et al, 2022), and the same result was seen using antihypertensive medications in blacks, which led to a reduction in the incidence of dementia (Murray et al, 2018), which lends credence to the discussed hypothesis. Stress and poverty—experiences—and not ancestry could explain higher rates of dementia in black Americans as well. Thus, since blood pressure could explain higher rates of dementia in black populations, this lends credence to the discussed hypothesis.

Conclusion

The evidence that chronically elevated BP leads to reductions in BV is well-studied, and the mechanisms are well-known. I discussed the hypothesis that chronically elevated BP leads to reduced blood flow to the brain, which decreases BV. I then discussed the mechanisms behind the relationship, and the hypotheses and predictions that follow from them. Lastly, I discussed the well-known fact that blacks have higher rates of hypertension, and also higher rates of dementia and Alzheimer’s, and linked their higher rates of elevated BP to those maladies.

So by catching chronically elevated BP at early ages—since the earlier one has high BP, the more likely one is to have reduced brain volume and the associated maladies—we can begin to fight the associated issues before they coalesce, since we know the mechanisms behind them, and since blood pressure drugs and antihypertensive medications decrease the incidence of dementia and Alzheimer’s in black Americans.

Cope’s (Deperet’s) Rule, Evolutionary Passiveness, and Alternative Explanations

4450 words

Introduction

Cope’s rule is an evolutionary hypothesis which suggests that, over geological time, species have a tendency to increase in body size. (Although it has been proposed that Cope’s rule be renamed Deperet’s rule, since Cope didn’t explicitly state the hypothesis while Deperet did; Bokma et al, 2015.) Named after Edward Drinker Cope, it proposes that, on average, through the process of “natural selection” species have a tendency to get larger, and so it implies a directionality to evolution (Hone and Benton, 2005; Liow and Taylor, 2019). There are a few explanations for the so-called rule: either it’s due to passive or driven evolution (McShea, 1994; Gould, 1996; Raia et al, 2012) or due to methodological artifacts (Solow and Wang, 2008; Monroe and Bokma, 2010).

However, Cope’s rule has been subject to debate and scrutiny in paleontology and evolutionary biology. The interpretation of Cope’s rule hinges on how “body size” is interpreted (mass or length), along with the alternative explanations. I will trace the history of Cope’s rule, discuss studies in which this directionality was proposed to have been empirically shown, and discuss methodological issues. I propose alternative explanations that don’t rely on the claim that evolution is “progressive” or “driven.” I will also show that developmental plasticity throws a wrench into this claim. I will then end with a constructive dilemma argument showing that either Cope’s rule is a methodological artifact or it’s due to passive evolution, since it’s not a driven trend as progressionists claim.

How developmental plasticity refutes the concept of “more evolved”

In my last article on this issue, I showed the logical fallacies inherent in the argument PumpkinPerson uses—it affirms the consequent, assuming its truth leads to a logical contradiction, and of course reading phylogenies the way he does just isn’t valid.

If the claim “more speciation events within a given taxon = more evolution” were valid, then we would consistently observe a direct correlation between the number of speciation events and the extent of evolutionary change in all cases. But we don’t, since evolutionary rates vary and other factors influence evolution, so the claim isn’t universally valid.

Take these specific examples: the horseshoe crab has a lineage going back hundreds of millions of years with few speciation events, but it has undergone evolutionary changes. Conversely, microorganisms can undergo many speciation events with relatively minor genetic change. The cichlid fishes (fishes that have undergone rapid evolutionary change and speciation) show great genetic and phenotypic diversity, but the diversity between them doesn’t solely depend on speciation events, since factors like ecological niche partitioning and sexual selection also play a role in why they are different even though they are relatively young species (a specific claim that Herculano-Houzel made in her 2016 book The Human Advantage). Lastly, human evolution has relatively few speciation events, but the extent of evolutionary change in our species is vast. Speciation events are of course crucial to evolution. But if one reads too much into the abstractness of the evolutionary tree, then one will not read it correctly. The position of the terminal nodes is meaningless.

It’s important to realize that evolution isn’t just morphological change which leads to the creation of a new species (macro-evolution); there is also micro-evolution. Species that underwent evolutionary change without speciation include peppered moths (industrial melanism), bacteria with antibiotic resistance, humans with lactase persistence, and Darwin’s finches. These are quite clearly evolutionary changes, and they’re due to microevolutionary changes.

Developmental plasticity directly refutes the contention of “more evolved,” since individuals within a species can exhibit significant trait variation without speciation events. This isn’t captured by phylogenies: they’re typically modeled on genetic data, and they don’t capture developmental differences that arise due to environmental factors during development. (See West-Eberhard’s outstanding Developmental Plasticity and Evolution for more on how, in many cases, development precedes genetic change, meaning the inference can be drawn that genes aren’t leaders in evolution but mere followers.)

If “more evolved” is solely determined by the number of speciation events (branches) in a phylogeny, then species that exhibit greater developmental plasticity should be considered “more evolved.” But it is empirically observed that some species exhibit significant developmental plasticity which allows them to rapidly change their traits during development in response to environmental variation without undergoing speciation. So since the species with more developmental plasticity aren’t considered “more evolved” based on the “more evolved” criteria, then the assumption that “more evolved” is determined by speciation events is invalid. So the concept of “more evolved” as determined by speciation events or branches isn’t valid since it isn’t supported when considering the significant role of developmental plasticity in adaptation.

There is anagenesis and cladogenesis. Anagenesis is the creation of a species without a branching of the ancestral species. Cladogenesis is the formation of a new species by evolutionary divergence from an ancestral form. So due to evolutionary changes within a lineage, the organism that underwent evolutionary changes replaces the older one. So anagenesis shows that a species can slowly change and become a new species without there being a branching event. Horse, human, elephant, and bird evolution are examples of this.

Nonetheless, developmental plasticity can lead to anagenesis. Developmental, or phenotypic, plasticity is the ability of an organism to produce different phenotypes with the same genotype based on environmental cues that occur during development. Developmental plasticity can facilitate anagenesis, and since developmental plasticity is ubiquitous in development of not only an individual in a species but a species as a whole, then it is a rule and not an exception.

Directed mutation and evolution

Back in March, I wrote on the existence of directed mutations. Directed mutation directly speaks against the concept of “more evolved.” Here’s the argument:

(1) If directed mutations play a crucial role in helping organisms adapt to changing environments, then the notion of “more evolved” as a linear hierarchy is invalid.
(2) Directed mutations are known to occur and contribute to a species’ survivability in an environment undergoing change during development (the concept of evolvability is apt here).
(C) So the concept of “more evolved” as a linear hierarchy is invalid.

A directed mutation is a mutation that occurs due to environmental instability and helps an organism survive in the environment that changed while the individual was developing. Two mechanisms of DM are transcriptional activation (TA) and supercoiling. TA can cause changes to single-stranded DNA, and can also cause supercoiling (the over- or under-winding of the DNA helix). TA can be caused by derepression (a mechanism that occurs due to the absence of some repressor molecule) or induction (the activation of an inactive gene which then gets transcribed). So these are examples of how nonrandom (directed) mutation and evolution can occur (Wright, 2000). Such changes are possible through the plasticity of phenotypes during development and are ultimately due to developmental plasticity. These stress-directed mutations can be seen as quasi-Lamarckian (Koonin and Wolf, 2009). It’s quite clear that directed mutations are real and have been demonstrated.

DMs, along with developmental plasticity and evo-devo as a whole refute the simplistic thinking of “more evolved.”

Now here is the argument that PP is using, and why it’s false:

(1) More branches on a phylogeny indicate more speciation events.
(2) More speciation events imply a higher level of evolutionary advancement.
(C) Thus, more branches on a phylogeny indicate a higher level of evolutionary advancement.

The false premise is (2) since it suggests that more speciation events imply a higher level of evolutionary advancement. It implies a goal-directed aspect to evolution, where the generation of more species is equated with evolutionary progress. It’s just reducing evolution to linear advancement and progress; it’s a teleological bent on evolution (which isn’t inherently bad if argued for correctly, see Noble and Noble, 2022). But using mere branching events on a phylogeny to assume that more branches = more speciation = more evolved is simplistic thinking that doesn’t make sense.

If evolution encompasses changes in an organism’s phenotype, then changes in an organism’s phenotype, even without changing its genes, are considered examples of evolution. Evolution encompasses changes in an organism’s phenotype, so changes in an organism’s phenotype even without changes in genes are considered examples of evolution. There is nongenetic “soft inheritance” (see Bonduriansky and Day, 2018).

Organisms can exhibit similar traits due to convergent evolution. So it’s not valid to assume a direct and strong correlation between an organism’s position on a phylogeny and its degree of resemblance to a common ancestor.

Dolphins and ichthyosaurs share similar traits, but dolphins are mammals while ichthyosaurs are reptiles that lived millions of years ago. Their convergent morphology demonstrates that common ancestry doesn’t determine resemblance. The Tasmanian wolf and the grey wolf independently evolved similar body plans and ecological roles; despite different genetics and evolutionary histories, they share a physical resemblance due to similar ecological niches. The last common ancestor (LCA) of bats and birds didn’t have wings, yet both have wings, so the trait emerged twice independently. These examples show that the degree of resemblance to a common ancestor is not determined by an organism’s position on a phylogeny.

Now, there is a correlation between body size and branches (splits) on a phylogeny (Cope’s rule), and I will explain it later. That there is a correlation doesn’t mean there is a linear progression, and it doesn’t imply one. Back in 2017 I used the example of floresiensis, and that holds here too. And Terrence Deacon’s (1990) work suggests that pseudoprogressive trends in brain size can be explained by bigger whole organisms being selected—this is important because the whole animal is selected, not any one of its individual parts. The correlation isn’t indicative of a linear progression up some evolutionary ladder, either: it’s merely a byproduct of selection on larger whole animals (the only things that are selected).

I will argue that it is this remarkable parallelism, and not some progressive selection for increasing intelligence, that is responsible for many pseudoprogressive trends in mammalian brain evolution. Larger whole animals were being selected—not just larger brains—but along with the correlated brain enlargement in each lineage a multitude of parallel secondary internal adaptations followed. (Deacon, 1990)

Nonetheless, the claim here is one from DST—the whole organism is selected, so obviously so is its body plan (bauplan). Nevertheless, the last two havens for the progressionist are in the realm of brain size and body size. Deacon refuted the selection-for-brain-size claim, so we’re now left with body size.

Does the evolution of body size lend credence to claims of driven, progressive evolution?

The tendency for bodies to grow larger over evolutionary time is something of a truism. Since small bacteria eventually evolved into larger (see Gould’s modal bacter argument), more complex multicellular organisms, this must mean that evolution is progressive and driven, at least for body size, right? Wrong. I will argue here, using a constructive dilemma, that either evolution is passive and that is what explains the evolution of body size increases, or the apparent trend is due to methodological flaws in how body size is measured (length or mass).

In Full House, Gould (1996) argued that the evolution of body size isn’t driven, but that it is passive, namely that it is evolution away from smaller size. Nonetheless, it seems that Cope’s (Deperet’s) rule is due to cladogenesis (the emergence of new species), not selection for body size per se (Bokma et al, 2015).

Given these three conditions, we note an increase in size of the largest species only because founding species start at the left wall, and the range of size can therefore expand in only one direction. Size of the most common species (the modal decade) never changes, and descendants show no bias for arising at larger sizes than ancestors. But, during each act, the range of size expands in the only open direction by increase in the total number of species, a few of which (and only a few) become larger (while none can penetrate the left wall and get smaller). We can say only this for Cope’s Rule: in cases with boundary conditions like the three listed above, extreme achievements in body size will move away from initial values near walls. Size increase, in other words, is really random evolution away from small size, not directed evolution toward large size. (Gould, 1996)

Dinosaurs were some of the largest animals to ever live. So we might say that there is a drivenness in their bodies to become larger and larger, right? Wrong. The evolution of body size in dinosaurs is passive, not driven (progressive) (Sookias, Butler, and Benson, 2012). Gould (1996) also showed passive trends in body size in planktonic forams. He also cited Stanley (1973), who argued that groups starting at the left wall of minimum size will increase in mean size as a consequence of randomness, not any driven tendency toward larger bodies.

In other, more legitimate cases, increases in means or extremes occur, as in our story of planktonic forams, because lineages started near the left wall of a potential range in size and then filled available space as the number of species increased—in other words, a drift of means or extremes away from a small size, rather than directed evolution of lineages toward large size (and remember that such a drift can occur within a regime of random change in size for each individual lineage—the “drunkard’s walk” model).

In 1973, my colleague Steven Stanley of Johns Hopkins University published a marvelous, and now celebrated, paper to advance this important argument. He showed (see Figure 27, taken from his work) that groups beginning at small size, and constrained by a left wall near this starting point, will increase in mean or extreme size under a regime of random evolution within each species. He also advocated that we test his idea by looking for right-skewed distributions of size within entire systems, rather than by tracking mean or extreme values that falsely abstract such systems as single numbers. In a 1985 paper I suggested that we speak of “Stanley’s Rule” when such an increase of means or extremes can best be explained by undirected evolution away from a starting point near a left wall. I would venture to guess (in fact I would wager substantial money on the proposition) that a large majority of lineages showing increase of body size for mean or extreme values (Cope’s Rule in the broad sense) will properly be explained by Stanley’s Rule of random evolution away from small size rather than by the conventional account of directed evolution toward selectively advantageous large size. (Gould, 1996)

Gould (1996) also discusses the results of McShea’s study, writing:

Passive trends (see Figure 33) conform to the unfamiliar model, championed for complexity in this book, of overall results arising as incidental consequences, with no favored direction for individual species. (McShea calls such a trend passive because no driver conducts any species along a preferred pathway. The general trend will arise even when the evolution of each individual species conforms to a “drunkard’s walk” of random motion.) For passive trends in complexity, McShea proposes the same set of constraints that I have advocated throughout this book: ancestral beginnings at a left wall of minimal complexity, with only one direction open to novelty in subsequent evolution.
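The passive model that Gould and McShea describe is easy to make concrete. The following is a minimal simulation sketch (all parameter values are arbitrary and chosen only for illustration): every lineage takes an unbiased random walk in log body size, but a reflecting “left wall” of minimum viable size blocks movement in one direction. The mean and maximum size drift upward even though no lineage is driven anywhere, while the most common size stays near the wall:

```python
import numpy as np

# Sketch of the passive-trend ("drunkard's walk") model: unbiased change in log body
# size for every lineage, plus a reflecting left wall of minimum viable size.
# All parameter values are arbitrary and purely illustrative.
rng = np.random.default_rng(42)

n_lineages = 2000
n_steps = 500
left_wall = 0.0                      # log of the minimum viable body size
sizes = np.zeros(n_lineages)         # every lineage starts at the left wall

for _ in range(n_steps):
    sizes += rng.normal(0.0, 0.05, n_lineages)   # unbiased step: no drive toward larger size
    sizes = np.maximum(sizes, left_wall)         # lineages cannot cross the left wall

# Despite unbiased steps, the mean and the extreme drift upward while the modal
# (most common) size stays near the wall -- a passive, not driven, trend.
counts, edges = np.histogram(sizes, bins=50)
print(f"mean log-size:  {sizes.mean():.2f}")
print(f"max log-size:   {sizes.max():.2f}")
print(f"modal log-size: {edges[np.argmax(counts)]:.2f}")
```

Nothing in this sketch selects for large size, yet the right tail of the size distribution keeps expanding, which is exactly the pattern Stanley's Rule says we should expect near a left wall.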

But Baker et al (2015) claim that body size is an example of driven evolution. However, the fact that they did not model cladogenetic factors calls their conclusion into question, and I think their claim doesn’t follow. If a taxon possesses a potential size range and the ancestral size approaches the lower limit of that range, there will be a passive inclination for descendants to exceed the size of their ancestors. The taxon in question possesses a potential size range, and the ancestral size is at the lower end of that range. So there will be a passive tendency for descendants of this taxon to be larger than their predecessors.

Here’s an argument that concludes that evolution is passive and not driven. I will then give examples of P2.

(1) Extant animals that are descended from more nodes on an evolutionary tree tend to be bigger than animals descended from fewer nodes (the initial premise).
(2) There exist cases where extant animals descended from fewer nodes are larger or more complex than those descended from more nodes (counterexamples: whales are descended from fewer nodes while having some of the largest body sizes in the world, while bats are descended from more nodes while having comparatively much smaller body sizes).
(C1) Thus, either P1 doesn’t consistently hold (not all extant animals descended from more nodes are larger), or it is not a reliable rule (given the counterexamples).
(3) If P1 does not consistently hold true (not all extant animals descended from more nodes are larger), then it is not a reliable rule.
(4) P1 does not consistently hold true.
(C2) P1 is not a reliable rule.
(5) If P1 is not a reliable rule (given the existence of counterexamples), then it is not a valid generalization.
(6) P1 is not a reliable rule.
(C3) So P1 is not a valid generalization.
(7) If P1 isn’t a valid generalization in the context of evolutionary biology, then there must be exceptions to the observed trend.
(8) The existence of passive evolution, as suggested by the inconsistencies in P1, implies that the trends aren’t driven by progressive forces.
(C4) Thus, the presence of passive evolution and exceptions to P1’s trend challenge the notion of a universally progressive model of evolution.
(9) If the presence of passive evolution and exceptions to P1’s trend challenges the notion of a universally progressive model of evolution, then the notion of a universally progressive model of evolution isn’t supported by the evidence.
(10) The presence of passive evolution and exceptions to P1’s trend challenge the notion of a universally progressive model of evolution.
(C5) So the notion of a universally progressive model of evolution isn’t supported by the evidence.

(1) Bluefin tuna are known to have a wide potential range of sizes, with some being small and others massive (think of the TV show Wicked Tuna and the huge range of sizes, in both length and mass, of the tuna those fishermen catch). Imagine a population of bluefin tuna in which the ancestral size is close to the lower end of that size range. So P2 is satisfied, because bluefin tuna have a potential size range, and the ancestral size of these tuna was relatively small in comparison to the maximum size the tuna can reach.

(2) African elephants in some parts of Africa are small due to ecological constraints and hunting pressures, and these smaller-sized ancestors are close to the lower limit of the potential size range of African elephants. Thus, according to P1, there will be a passive tendency for descendants of these elephants to be larger than their smaller-sized ancestors over time.

(3) Consider Galapagos tortoises, which are also known for their large variation in size among the different species and populations on the Galapagos Islands. Consider a case of Galapagos tortoises which have smaller body sizes due to resource conditions or the conditions of their ecologies. In this case, the size of the ancestors of these tortoises is close to the lower limit of their potential size range. Therefore, we can expect a passive tendency for descendants of these tortoises to evolve larger body sizes.

Further, in Stanley’s (1973) study of Cope’s rule using fossil rodents, he observed that body size distributions in these rodents became bigger over time while the modal size stayed small. This doesn’t even touch the fact that, because there are more small than large mammals, there would be a passive tendency toward larger body sizes in mammals. Nor does it touch the methodological issues in determining body size for the rule—mass or length? Nonetheless, Monroe and Bokma’s (2010) study showed that while there is a tendency for species to be larger than their ancestors, it was a mere 0.5 percent difference. So the increase in body size is explained by an increase in variance in body size (passiveness), not drivenness.

Explaining the rule

I think there are two explanations: Either a methodological artifact or passive evolution. I will discuss both, and I will then give a constructive dilemma argument that articulates this position.

Monroe and Bokma (2010) showed that even when Cope’s rule is assumed, the ancestor-descendant increase in body size showed a mere .4 percent increase. They further discussed methodological issues with the so-called rule, citing Solow and Wang (2008) who showed that Cope’s rule “appears” based on what assumptions of body size are used. For example, Monroe and Bokma (2010) write:

If Cope’s rule is interpreted as an increase in the mean size of lineages, it is for example possible that body mass suggests Cope’s rule whereas body length does not. If Cope’s rule is instead interpreted as an increase in the median body size of a lineage, its validity may depend on the number of speciation events separating an ancestor-descendant pair.

If size increase were a general property of evolutionary lineages – as Cope’s rule suggests – then even if its effect were only moderate, 120 years of research would probably have yielded more convincing and widespread evidence than we have seen so far.
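The measurement point in the quote can be illustrated with a toy simulation (all numbers here are invented, and the cubic mass-length scaling is only a rough allometric assumption): under purely symmetric, undirected change in log body length, the median length shows no trend at all, yet the mean body mass still drifts upward, so whether “Cope’s rule” appears depends on the measure and summary statistic chosen.

```python
import numpy as np

# Toy illustration of the measurement issue: symmetric, undirected evolution of log
# body length in many lineages. The median length shows no trend, but the mean body
# mass (assumed here to scale roughly with length cubed) still drifts upward, so
# whether "Cope's rule" appears depends on whether mass or length, mean or median,
# is used. All numbers are invented for illustration.
rng = np.random.default_rng(1)

n_lineages = 5000
n_steps = 200
log_length = np.zeros(n_lineages)    # all lineages start at the same ancestral length

for _ in range(n_steps):
    log_length += rng.normal(0.0, 0.02, n_lineages)   # mean-zero (undirected) change

length = np.exp(log_length)
mass = length ** 3                   # crude allometric assumption

print(f"median length (ancestor = 1.00): {np.median(length):.2f}")   # stays ~1.00: no trend
print(f"mean mass     (ancestor = 1.00): {mass.mean():.2f}")         # > 1.00: apparent trend
```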

Gould (1997) suggested that Cope’s rule is a mere psychological artifact, but I think it goes deeper than that. Now that I have ruled out body size increase being due to progressive, driven evolution, I will provide my constructive dilemma argument.

The form of a constructive dilemma goes: (1) A ∨ B. (2) If A, then C. (3) If B, then D. (C) C ∨ D. P1 is a disjunction: there are two possible options, A and B. P2 and P3 are conditional statements that give the implications of each option. And the conclusion states that at least one of the consequents (C or D) must be true.
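For reference, the schema just described can be written compactly in standard propositional notation:

```latex
\[
(A \lor B),\ (A \rightarrow C),\ (B \rightarrow D)\ \vdash\ (C \lor D)
\]
```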

Now, Gould’s Full House argument can be formulated using either modus tollens or constructive dilemma:

(1) If evolution were a deterministic, teleological process, there would be a clear overall progression and a predetermined endpoint. (2) There is no predetermined endpoint or progression to evolution. (C) So evolution isn’t a deterministic or teleological process.

(1) Either evolution is a deterministic, teleological process (A) or it’s not (B). (2) If A, then there would be a clear overall direction and predetermined endpoint. (3) If B, then there is no overall direction or predetermined endpoint. (4) So either there is a clear overall direction and predetermined endpoint, or there isn’t. (5) There is no clear overall direction or predetermined endpoint, so not A. (6) Therefore, B.

Or: (1) Life began at a relatively simple state (the left wall of complexity). (2) Evolution is influenced by a combination of chance events, environmental factors, and genetic variation. (3) Organisms may stumble in various directions along the path of evolution. (4) Evolution lacks a clear path or predetermined endpoint.

Now here is the overall argument, combining the methodological issues pointed out by Solow and Wang and the implications of passive evolution with Gould’s Full House argument:

(1) Either Cope’s rule is a methodological artifact (A), or it’s due to passive, not driven evolution (B). (2) If Cope’s rule is a methodological artifact (A), then different ways to measure body size (length or mass) can come to different conclusions. (3) If Cope’s rule is due to passive, not driven evolution (B), then it implies that larger body sizes simply accumulate over time without being actively driven by selective pressures. (4) Either evolution is a deterministic, teleological process (C), or it is not (D). (5) If C, then there would be a clear overall direction and predetermined endpoint in evolution (Gould’s argument). (6) If D, then there is no clear overall direction or predetermined endpoint in evolution (Gould’s argument). (7) Therefore, either there is a clear overall direction (C) or there isn’t (D) (Constructive Dilemma). (8) If there is a clear overall direction (C) in evolution, then it contradicts passive, not driven evolution (B). (9) If there isn’t a clear overall direction (D) in evolution, then it supports passive, not driven evolution (B). (10) Therefore, either Cope’s rule is due to passive evolution or it’s a methodological artifact.

Conclusion

Evolution is quite clearly passive and non-driven (Bonner, 2013). The fact of the matter is, as I’ve shown, evolution isn’t driven (progressive); it is passive, due to the drunken, random walk that organisms take from the minimum left wall of complexity. The discussions of developmental plasticity and directed mutation further show that evolution can’t be progressive or driven. Organismal body plans had nowhere to go but up from the left wall of minimal complexity, which means the increase in the variance of, say, body size is due to passive trends. Given the discussion here, we can draw one main inference: since evolution isn’t directed or progressive, the so-called Cope’s (Deperet’s) rule is either due to passive trends or a mere methodological artifact. The argument I have mounted for that claim is sound, and so it must be accepted that evolution is a random, drunken walk, not one of overall drivenness and progress; we must therefore look at the evolution of body size in this way.

Rushton tried to use the concept of evolutionary progress to argue that some races may be “more evolved” than other races, like “Mongoloids” being “more evolved” than “Caucasoids” who are “more evolved” than “Negroids.” But Rushton’s “theory” was merely a racist one, and it obviously fails upon close inspection. Moreover, even the claims Rushton made at the end of his book Race, Evolution, and Behavior don’t even work. (See here.) Evolution isn’t progressive so we can’t logically state that one population group is more “advanced” or “evolved” than another. This is of course merely Rushton being racist with shoddy “explanations” used to justify it. (Like in Rushton’s long-refuted r/K selection theory or Differential-K theory, where more “K-evolved” races are “more advanced” than others.)

Lastly, this argument I constructed based on the principles of Gould’s argument shows that there is no progress to evolution.

P1 The claim that evolutionary “progress” is real and not illusory is justified only if organisms deemed more “advanced” outnumber “lesser” organisms.
P2 There are more “lesser” organisms (bacteria/insects) on earth than “advanced” organisms (mammals/species of mammals).
C Therefore, evolutionary “progress” is illusory.

The Theory of African American Offending versus Hereditarian Explanations of Crime: Exploring the Roots of the Black-White Crime Disparity

3450 words

Why do blacks commit more crime? Biological theories (racial differences in testosterone and testosterone-aggression, AR gene, MAOA) are bunk. So how can we explain it? The Unnever-Gabbidon theory of African American offending (TAAO) (Unnever and Gabbidon, 2011)—where blacks’ experience of racial discrimination and stereotypes increases criminal offenses—has substantial empirical support. To understand black crime, we need to understand the unique black American experience. The theory not only explains African American criminal offending, it also makes predictions which were borne out in independent, empirical research. I will compare the TAAO with hereditarian claims about why blacks commit more crime (higher testosterone and higher aggression due to testosterone, the AR gene, and MAOA). I will show that hereditarian theories make no novel predictions while the TAAO does. Then I will discuss the recent research which has borne out the predictions made by Unnever and Gabbidon’s TAAO. I will conclude by offering suggestions on how to combat black crime.

The folly of hereditarianism in explaining black American offending

Hereditarians have three main explanations of black crime: (1) higher levels of testosterone and high levels of testosterone leading to aggressive behavior which leads to crime; (2) low activity MAOA—also known in the popular press as “the warrior gene”—could be more prevalent in some populations which would then lead to more aggressive, impulsive behavior; and (3) the AR gene and AR-CAG repeats with lower CAG repeats being associated with higher rates of criminal activity.

When it comes to (1), the evidence is mixed on which race has higher levels of testosterone (due to the low-quality studies that hereditarians cite for their claim). In fact, two recent studies showed that non-Hispanic blacks didn’t have higher levels of testosterone than other races (Rohrmann et al, 2007; Lopez et al, 2013). Contrast this with the classical hereditarian response that blacks indeed do have higher levels of testosterone than whites (Rushton, 1995)—using Ross et al (1986) to make the claim. (See here for my response on why Ross et al is not evidence for the hereditarian position.) Although Nyante et al (2012) showed a small increase in testosterone in blacks compared to whites and Mexican Americans using longitudinal data, the body of evidence shows that there are small to no differences in testosterone between blacks and whites (Richard et al, 2014). So despite claims that “African-American men have repeatedly demonstrated serum total and free testosterone levels that are significantly higher than all other ethnic groups” (Alvarado, 2013: 125), claims like this are derived from flawed studies, and newer, more representative analyses show small to no differences in testosterone between blacks and whites.

Nevertheless, even if blacks have higher levels of testosterone than other races, this would still not explain racial differences in crime, since heightened aggression explains increases in testosterone; high testosterone doesn’t explain heightened aggression. HBDers seem to have cause and effect backwards for this relationship. Injecting individuals with supraphysiological doses of testosterone as high as 200 and 600 mg per week does not cause heightened anger or aggression (Tricker et al, 1996; O’Connor et al, 2002). If the hereditarian hypothesis on the relationship between testosterone and aggression were true, then we would see the opposite of what Tricker et al and O’Connor et al found. Thus this discussion shows that hereditarians are wrong about racial differences in testosterone and wrong about causality when it comes to the T-aggression relationship. (The actual relationship is aggression causing increases in testosterone.) So this shows that the hereditarian simplification of the T-aggression relationship is false. (But see Pope, Kouri, and Hudson, 2000, where a 600 mg dose of testosterone caused increased manic symptoms in some men, although in most men there was little to no change; there were 8 “responders” and 42 “non-responders.”)

When it comes to (2), MAOA is said to explain why those who carry the low-activity version of the gene have higher rates of aggression and violent behavior (Sohrabi, 2015; McSwiggin, 2017). Sohrabi shows that while the low-activity version of MAOA is related to higher rates of aggression and violent behavior, the relationship is mediated by environmental effects. But MAOA, to quote Heine (2017), can be seen as the “Everything but the kitchen sink gene“, since MAOA is correlated with so many different things. At the end of the day, we can’t blame “warrior genes” for violent, criminal behavior. The relationship isn’t so simple, so this doesn’t work for hereditarians either.

Lastly, when it comes to (3), due to the failure of (1), hereditarians tried looking to the AR gene. Researchers tried to relate CAG repeat length to criminal behaviors. For instance, Geniole et al (2019) tried to argue that “Testosterone thus appears to promote human aggression through an AR-related mechanism.” Ah, the last gasps to explain crime through testosterone. But there is no relationship between CAG repeats and adolescent risk-taking, depression, dominance, or self-esteem (Vermeer, 2010), nor between the number of CAG repeats and such outcomes in men and women (Valenzuela et al, 2022). So this, too, fails. (Also take a look at the just-so story on why African slave descendants are more sensitive to androgens; Aiken, 2011.)

Now that I have shown that the three main hereditarian explanations for higher black crime are false, I will show why blacks have higher rates of criminal offending than other races; the answer isn’t to be found in biology, but in sociology and criminology.

The Unnever-Gabbidon theory of African American criminal offending and novel predictions

In 2011, criminologists Unnever and Gabbidon published their book A Theory of African American Offending: Race, Racism, and Crime. In the book, they explain why they formulated the theory and why it doesn’t have any explanatory or predictive power for other races. That’s because it centers on the lived experiences of black Americans. In fact, the TAAO “incorporates the finding that African Americans are more likely to offend if they associate with delinquent peers but we argue that their inadequate reinforcement for engaging in conventional behaviors is related to their racial subordination” (Unnever and Gabbidon, 2011: 34). The TAAO focuses on the criminogenic effects of racism.

Our work builds upon the fundamental assumption made by Afrocentists that an understanding of black offending can only be attained if their behavior is situated within the lived experiences of being African American in a conflicted, racially stratified society. We assert that any criminological theory that aims to explain black offending must place the black experience and their unique worldview at the core of its foundation. Our theory places the history and lived experiences of African American people at its center. We also fully embrace the Afrocentric assumption that African American offending is related to racial subordination. Thus, our work does not attempt to create a “general” theory of crime that applies to every American; instead, our theory explains how the unique experiences and worldview of blacks in America are related to their offending. In short, our theory draws on the strengths of both Afrocentricity and the Eurocentric canon. (Unnever and Gabbidon, 2011: 37)

Two kinds of racial injustices highlighted by the theory—racial discrimination and pejorative stereotyping—have empirical support. Blacks are more likely to express anger, exhibit low self-control and become depressed if they believe the racist stereotype that they’re violent. It has also been studied whether a sense of racial injustice is related to offending when controlling for low self-control (see below).

The core predictions of the TAAO and how they follow from it with references for empirical tests are as follows:

(Prediction 1) Black Americans with a stronger sense of racial identity are less likely to engage in criminal behavior than black Americans with a weak sense of racial identity. How does this prediction follow from the theory? TAAO suggests that a strong racial identity can act as a protective factor against criminal involvement. Those with a stronger sense of racial identity may be less likely to engage in criminal behavior as a way to cope with racial discrimination and societal marginalization. (Burt, Simons, and Gibbons, 2013; Burt, Lei, and Simons, 2017; Gaston and Doherty, 2018; Scott and Seal, 2019)

(Prediction 2) Experiencing racial discrimination increases the likelihood of black Americans engaging in criminal actions. How does this follow from the theory? TAAO posits that racial discrimination can lead to feelings of frustration and marginalization, and to cope with these stressors, some individuals may resort to committing criminal acts as a way to exert power or control in response to their experiences of racial discrimination. (Unnever, 2014; Unnever, Cullen, and Barnes, 2016; Herda, 2016, 2018; Scott and Seal, 2019)

(Prediction 3) Black Americans who feel socially marginalized and disadvantaged are more prone to committing crime as a coping mechanism and have weakened school bonds. How does this follow from the theory? TAAO suggests that those who experience social exclusion and disadvantage may turn to crime as a way to address their negative life circumstances and regain a sense of agency. (Unnever, 2014; Unnever, Cullen, and Barnes, 2016)

The data show that there is a racialized worldview shared by blacks, and that a majority of blacks believe that their fate rests on what generally happens to black people in America. Around 38 percent of blacks report being discriminated against and most blacks are aware of the stereotype of them as violent. (Though a new Pew report states that around 8 in 10—about 80 percent—of blacks have experienced racial discrimination.) Racial discrimination and the belief in the racist stereotype that blacks are more violent are significant predictors of black arrests. It’s been shown that the more blacks are discriminated against and the more they believe that blacks are violent, the more likely they are to be arrested. Unnever and Gabbidon also theorized that the aforementioned isn’t just related to criminal offending but also to substance and alcohol abuse. Unnever and Gabbidon also hypothesized that racial injustices are related to crime since they increase the likelihood of experiencing negative emotions like anger and depression (Simons et al, 2002). It’s been experimentally demonstrated that blacks who perceive racial discrimination and who believe the racist stereotype that blacks are more violent express less self-control. The negative emotions from racial discrimination predict the likelihood of committing crime and similar behavior. It’s also been shown that blacks who have less self-control, who are angrier and are depressed have a higher likelihood of offending. Further, while controlling for self-control, anger and depression and other variables, racial discrimination predicts arrests and substance and alcohol abuse. Lastly, the experience of being black in a racialized society predicts offending, even after controlling for other measures. Thus, it is ruled out that the reason why blacks are arrested more and perceive more racial injustice is due to low self-control. (See Unnever, 2014 for the citations and arguments for these predictions.) The TAAO also has more empirical support than racialized general strain theory (RGST) (Isom, 2015).

So the predictions of the theory are: Racial discrimination as a contributing factor; a strong racial identity could be a protective factor while a weak racial identity would be associated with a higher likelihood of engaging in criminal activity; blacks who feel socially marginalized would turn to crime as a response to their disadvantaged social position; poverty, education and neighborhood conditions play a significant role in black American offending rates, and that these factors interact with racial identity and discrimination which then influence criminal behavior; and lastly it predicts that the criminal justice system’s response to black American offenders could be influenced by their racial identity and social perceptions which could then potentially lead to disparities in treatment compared to other racial groups.

Ultimately, the unique experiences of black Americans explain why they commit more crime. Thus, given the unique experiences of black Americans, there needs to be a race-centric theory of crime for black Americans, and this is exactly what the TAAO is. The predictions that Unnever and Gabbidon (2011) made from the TAAO have independent empirical support. This is way more than the hereditarian explanations can say on why blacks commit more crime.

One way, which follows from the theory, to insulate black youth from discrimination and prejudice is racial socialization, where racial socialization is the process whereby “thoughts, ideas, beliefs, and attitudes regarding race and racism are communicated across generations” (Burt, Lei, & Simons, 2017; Hughes, Smith, et al., 2006; Lesane-Brown, 2006) (Said and Feldmeyer, 2022).

But also related to the racial socialization hypothesis is the question “Why don’t more blacks offend?” Gaston and Doherty (2018) set out to answer this question, finding that positive racial socialization buffered the effects of weak school bonds on adolescent substance abuse and criminal offending for males but not females. This is yet another prediction from the theory that has come to pass—the fact that weak school bonds increase criminal offending.

Doherty and Gaston (2018) argue that black Americans face racial discrimination that whites in general just do not face:

Empirical studies have pointed to potential explanations of racial disparities in violent crimes, often citing that such disparities reflect Black Americans’ disproportionate exposure to criminogenic risk factors. For example, Black Americans uniquely experience racial discrimination—a robust correlate of offending—that White Americans generally do not experience (Burt, Simons, & Gibbons, 2012; Caldwell, Kohn-Wood, Schmeelk-Cone, Chavous, & Zimmerman, 2004; Simons, Chen, Stewart, & Brody, 2003; Unnever, Cullen, Mathers, McClure, & Allison, 2009). Furthermore, Black Americans are more likely to face factors conducive to crime such as experiencing poor economic conditions and living in neighborhoods characterized by concentrated disadvantage.

They conclude that:

The support we found for ethnic-racial socialization as a crime-reducing factor has important implications for broader criminological theorizing and practice. Our findings show the value of race-specific theories that are grounded in the unique experiences of that group and focus on their unique risk and protective factors. African Americans have unique pathways to offending with racial discrimination being a salient source of offending. While it is beyond the scope of this study to determine whether TAAO predicts African American offending better than general theories of crime, the general support for the ethnic-racial socialization hypothesis suggests the value of theories that account for race-specific correlates of Black offending and resilience.

TAAO draws from the developmental psychology literature and contends, however, that positive ethnic-racial socialization offers resilience to the criminogenic effect of weak school bonds and is the main reason more Black Americans do not offend (Unnever & Gabbidon, 2011, p. 113, 145).

Thus, combined with the fact that blacks face racial discrimination that whites in general just do not face, and combined with the fact that racial discrimination has been shown to increase criminal offending, it follows that racial discrimination can lead to criminal offending, and therefore, to decrease criminal offending we need to decrease racial discrimination. Since racism is borne of ignorance and low education, it follows that education can decrease racist attitudes and, along with them, crime (Hughes et al, 2007; Kuppens et al, 2014; Donovan, 2019, 2022).

Even partial tests of the TAAO have shown that racial discrimination is related to offending, and I would say that it is pretty well established that positive ethnic-racial socialization acts as a protective factor for blacks—this also explains why more blacks don’t offend (see Gaston and Doherty, 2018). It is also known that bad (ineffective) parenting increases the risk for lower self-control (Unnever, Cullen, and Agnew, 2006). Black Americans share a racialized worldview and view the US as racist, due to their personal lived experiences with racism (Unnever, 2014).

The TAAO and situationism

Looking at what the TAAO is and the predictions it makes, we can see how the TAAO is a situationist theory. Situationism is a psychological-philosophical theory which emphasizes the influence of the situation on human behavior. It posits that people’s actions and decisions are primarily shaped by the situational context that they find themselves in. It highlights the role of the situation in explaining behavior, suggests that people may act differently based on the context they find themselves in, holds that situational cues present in the immediate environment can trigger specific behavioral responses, maintains that understanding the situation one finds oneself in is important in explaining why people act the way they do, and asserts that behavior is context-dependent, less predictable, and can vary across different situations. Although it seems that situationism conflicts with action theory, it doesn’t. Action theory explains how people form intentions and make decisions within specific situations, basically addressing the how and why. Situationism actually complements action theory, since it addresses the where and when of behavior from an external, environmental perspective.

So the TAAO suggests that experiencing racial discrimination can contribute to criminal involvement as a response to social marginalization. So situationism can provide a framework for exploring how specific instances of environmental stressors, discrimination, or situational factors can trigger criminal behavior in context. So while TAAO focuses on historical and structural factors which lead to why blacks commit more crime, adding in situationism could show how the situational context interacts with historical and structural factors to explain black American criminal behavior.

Thus, combining situationism and the TAAO can lead to novel predictions: predictions about how black Americans, when faced with specific discriminatory situations, may be more or less likely to engage in criminal behavior based on their perception of the situation; predictions about the influence of immediate peer dynamics in moderating the relationship between structural factors like discrimination and criminal behavior in the black American community; and predictions about how criminal responses vary with different types of situational cues—like encounters with law enforcement, experiences of discrimination, and economic stress—within the broader context of the TAAO’s historical-structural framework.

Why we should accept the TAAO over hereditarian explanations of crime

Overall, I’ve explained why hereditarian explanations of crime fail. They fail because when looking at the recent literature, the claims they make just do not hold up. Most importantly, as I’ve shown, hereditarian explanations lack empirical support, and the logic they try to use in defense of them is flawed.

We should accept the TAAO over hereditarianism because of its empirical validity: the TAAO is grounded in empirical research, and its predictions and hypotheses have been subjected to empirical tests and have been found to hold. The TAAO also recognizes that crime is a complex phenomenon influenced by factors like historical and contemporary discrimination, socioeconomic conditions, and the overall situational context. It also addresses the broader societal issues related to disparities in crime, which makes it more relevant for policy development and social interventions, acknowledging that to address these disparities, we must address the contemporary and historical factors which lead to crime. The TAAO also doesn’t stigmatize and stereotype, while it does emphasize the situational and contextual factors which lead to criminal activity. On the other hand, hereditarian theories can lead to stereotypes and discrimination, and since hereditarian explanations are false, we should also reject them (as I’ve explained above). Lastly, the TAAO has the power to generate specific, testable predictions which have clear empirical support. Thus, to claim that hereditarian explanations are true while disregarding the empirical power of the TAAO is irrational, since hereditarian explanations don’t generate novel predictions while the TAAO does.

Conclusion

I have contrasted the TAAO with hereditarian explanations of crime. I showed that the three main hereditarian explanations—racial differences in testosterone and testosterone-caused aggression, the AR gene, and MAOA—all fail. I have also shown that the TAAO is grounded in empirical research, and that it generates specific, testable predictions on how we can address racial differences in crime. On the other hand, hereditarian explanations lack empirical support, specificity, and causality, which makes them ill-suited for generating testable predictions and informing effective policies. The TAAO’s complexity, empirical support, and potential for addressing real-world issues make it a more comprehensive framework for understanding and attempting to ameliorate racial crime disparities, in contrast to the genetic determinism of hereditarianism. In fact, I was unable to find any hereditarian response to the TAAO, which should be telling on its own.

Overall, I have shown that the predictions Unnever and Gabbidon generated from the TAAO enjoy empirical support, and I have shown that hereditarian explanations fail, so we should reject hereditarian explanations and accept the TAAO, due to the considerations above. I have also shown that the TAAO makes actionable policy recommendations: to decrease criminal offending, we thus need to educate more, since racism is borne of ignorance and education can decrease racial bias.

Action Potentials and their Role in Cognitive Interface Dualism

3000 words

Introduction

Rene Descartes proposed that the pineal gland was the point of contact—the interface—between the immaterial mind and physical body. He thought that the pineal gland in humans was different and special compared to that of nonhuman animals, where in humans the pineal gland was the seat of the soul (Finger, 1995). This view was eventually shown to be false. However, claims that the mental can causally interact with the physical (interactionist dualism) have been met with similar criticism. The objection runs: if the mental does in fact causally interact with the physical, then, given physical laws like the conservation of energy, the mental must really be identical with the physical after all; that is, the mental would be reducible to the physical, contrary to the dualist’s claim that it is irreducible. This seems to be an issue for the truth of an interactionist dualist theory. But there are solutions. Deny that causal closure of the physical (CCP) is true (the world isn’t causally closed), or argue that CCP is compatible with interactionist dualism, or argue that CCP is question-begging (assuming in a premise what it seeks to establish and conclude) and assumes without proper justification that all physical events must be due to physical causes, which thereby illogically excludes the possibility of mental causation.

In this article I will provide some reasons to believe that CCP is question-begging, and I will argue that mental causation is invisible (see Lowe, 2008). I will also argue that action potentials are the interface by which the mental and the physical interact, which makes it possible for a conscious decision to issue in movement. I will provide arguments that show that interactionist dualism is consistent with physics, while showing that action potentials are the interface that Descartes was looking for. Ultimately, I will show how the mental interacts with the physical for mental causation to be carried out and how this isn’t an issue for the CCP. The view I argue for here I call “cognitive interface dualism”, since it centers on the influence of mental states on action potentials and on the physical realm, and it conveys the idea that mental processes interface with physical processes through the conduit of action potentials, without implying a reduction of the mental to the physical, making it a substance dualist position since it still holds that the mental and the physical are two different substances.

Causal closure of the physical

It is claimed that the world is causally closed—meaning that every physical event is due to physical causes. Basically, no non-physical (mental) factors can cause or influence physical events. Here’s the argument:

(1) Every event in the world has a cause.
(2) Causes and effects within the physical world are governed by the laws of physics.
(3) Non-physical factors or entities, by definition, don’t belong to the physical realm.
(4) If a nonphysical factor were to influence a physical event, it would violate the laws of physics.
(5) Thus, the world is causally closed, meaning that all causes and effects in it are governed by physical interactions and laws.

But the issue here for the physicalist who wants to use causal closure is the fact that mental events and states are qualitatively different from physical events and states. This is evidenced in Lowe’s distinction between intentional (mental) and event (physical) causation. Mental states like thoughts and consciousness possess qualitatively different properties than physical states. The causal closure argument assumes that physical events are the only causes of other physical events. But mental states appear to exert causal influence over physical events, for instance voluntary action based on conscious decision, like my action right now to write this article. So if M states do influence P events, then there must be interaction between the mental and physical realms. This interaction contradicts the idea of strict causal closure of the physical realm. Since mental causation is necessary to explain aspects of human action and consciousness, it then follows that the physical world may not be causally closed.

The problem of interaction for interactionist dualism is premised on the CCP. Interaction supposedly violates the conservation of energy (CoE). If P energy is needed to do P work, then a conversion of mental into physical energy results in an increase in energy that is inexplicable. I think there are many ways to attack this supposed knock-down argument against interactionist dualism, and I will make the case in an argument below, arguing that action potentials are where the brain and the mind interact to carry out intentions. However, there are no strong arguments for causal closure that don’t beg the question (eg see Bishop, 2005; Dimitrijevic, 2010; Gabbani, 2013; Gibb, 2015), and the inductive arguments commit sampling errors or are non sequiturs (Buhler, 2020). So the CCP is either question-begging or unsound (Menzies, 2015). I will discuss this issue before concluding this article, and I will argue that my claim that APs serve as the interface between the mental and the physical, along with the question-beggingness of causal closure, actually strengthens my argument.
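To see the shape of the objection, and of the reply I develop below, here is a schematic bit of energy bookkeeping (a sketch for illustration, not a physical derivation):

\[
\underbrace{\Delta E_{\mathrm{phys}} = 0}_{\text{what CoE requires of a closed physical system}}
\qquad\text{versus}\qquad
\underbrace{\Delta E_{\mathrm{phys}} = W_{\mathrm{mental}} \neq 0}_{\text{what the objection assumes interaction must involve}}
\]

The objection only goes through if mental influence has to show up as an extra energy term, the W term on the right. If instead mental states merely modulate when or whether energy already stored and metabolically funded in the system is released (for instance, by shifting a firing threshold, as I argue below), the first equation is left untouched.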

The argument for action potentials as the interface between the mind and the brain

The view that I will argue for here, I think, is unique and has never been argued for in the philosophical literature on mental causation. In the argument that follows, I will show how arguing that action potentials (APs) are the point of contact—the interface—between the mind and brain doesn’t violate the CCP nor does it violate CoE.

In an article on strength and neuromuscular coordination, I explained the relationship between the mind-muscle connection and action potentials:

The above diagram I drew depicts the process by which muscle action occurs. In my recent article on fiber typing and metabolic disease, I explained the process by which muscles contract:

But the skeletal muscle will not contract unless the skeletal muscles are stimulated. The nervous system and the muscular system communicate, which is called neural activation—defined as the contraction of muscle generated by neural stimulation. We have what are called “motor neurons”—neurons located in the CNS (central nervous system) which can send impulses to muscles to move them. This is done through a special synapse called the neuromuscular junction. A motor neuron that connects with muscle fibers is called a motor unit and the point where the muscle fiber and motor unit meet is called the neuromuscular junction. It is a small gap between the nerve and muscle fiber called a synapse. Action potentials (electrical impulses) are sent down the axon of the motor neuron from the CNS and when the action potential reaches the end of the axon, hormones called neurotransmitters are then released. Neurotransmitters transport the electrical signal from the nerve to the muscle.

So the signal that action potentials (APs) carry is transmitted at the synapse—the junction between nerve and muscle. So, regarding acetylcholine, when it is released it diffuses across the synapse (a small space which separates the muscle from the nerve) and then binds onto the receptors of the muscle fibers. Now we know that, in order for a muscle to contract, the brain sends the chemical message (acetylcholine) across synapses which then initiates movement. So, as can be seen from the diagram above, the MMC refers to the chemo-electric connection between the motor cortex, the cortico-spinal column, peripheral nerves and the neuromuscular junction. A neuromuscular junction is a synapse formed by the contact between a motor neuron and a muscle fiber.

This explanation will set the basis for my argument on how action potentials are the interface—the point of contact—by which the mind and brain meet.

As I have already shown, APs are electrochemical events that transmit signals within the nervous system; they are generated as the result of neural activity, which can be influenced by mental states like thoughts and intentions. The brain operates in accordance with physical laws and obeys the CoE, yet the initiation of APs could be (and, though not always, is) influenced by mental intentions and processes. Mental processes could modulate the threshold or likelihood of AP firing through complex biochemical mechanisms that do not violate the CoE. Of course, the energy that is required for generating APs ultimately derives from metabolic processes within the body, which could be influenced by mental states like attention, intention and emotional states. This interaction between mental states and physical processes does not violate the CoE, nor does it require a violation of the laws of physics, since it operates within the bounds of biochemical and electrochemical processes that respect the CoE. Therefore, APs serve as the point of controlled interaction between the mental and physical realms, allowing for mental causation without disrupting the overall energy balance in the physical world.
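To make the threshold-modulation idea concrete, here is a purely illustrative sketch in Python using a standard leaky integrate-and-fire neuron model (a toy model with assumed, hypothetical parameter values; it is not a claim about the actual biophysics of mental causation). The point it illustrates is narrow: shifting the firing-threshold parameter changes whether and how often action potentials occur, while the energy for every spike comes from the modeled input current, a stand-in for metabolic processes, not from the threshold itself.

def simulate_lif(threshold_mv, duration_ms=200.0, dt=0.1, tau_m=10.0,
                 v_rest=-70.0, v_reset=-75.0, r_m=10.0, i_input=1.6):
    """Count spikes from a leaky integrate-and-fire neuron for a given threshold.

    The depolarizing drive comes entirely from the input current (the stand-in
    for metabolic/synaptic energy); the threshold only gates when a spike fires.
    """
    v = v_rest
    spikes = 0
    for _ in range(int(duration_ms / dt)):
        # membrane potential decays toward rest and is driven by the input current
        dv = (-(v - v_rest) + r_m * i_input) / tau_m
        v += dv * dt
        if v >= threshold_mv:   # threshold crossing counts as an action potential
            spikes += 1
            v = v_reset         # reset after the spike
    return spikes

# A small shift in the threshold parameter changes the firing pattern entirely,
# even though nothing about the energy supplied to the neuron has changed.
for thr in (-56.0, -55.0, -54.0):
    print(f"threshold {thr} mV -> {simulate_lif(thr)} spikes")

In the toy model, the threshold is just a gating parameter: it determines whether the metabolically funded drive gets expressed as a spike, without itself adding any energy to the system.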

Lowe argued that mental causation is invisible, and since it is invisible, it is not amenable to scientific investigation. This view can be integrated into my argument that APs serve as the interface between the two substances, mental and physical. APs are observable electrochemical events in a neuron which could be influenced by mental states. So, as I argued above, mental processes could influence or modulate the generation of APs. When it comes to the invisibility of mental causation, this refers to the idea that mental events like thoughts, intentions, and consciousness are not directly perceptible like physical objects or events are. Mental states are not observable in the same way that physical events or objects are. In my view, APs hold a dual role. They function as the interface between the mental and the physical, providing the means by which the mental can influence physical events while shaping APs, and they also act as the causal mechanism connecting mental states to physical events.

Thus, given the distinction between physical events (like APs) and the subjective nature of mental states, the view I have argued for above is consistent with the invisibility of mental causation. Mental causation involves the idea that mental states can influence physical events, and that they have causal efficacy on the physical world. So our mental experiences can lead to physical changes in the world based on the actions we carry out. But since mental states aren’t observable like physical states are, it’s challenging to show how they could lead to effects on the physical world. We infer the influence of mental states on physical events through their effects on observable physical processes. We can’t directly observe intention; we infer it on the basis of one’s action. Mental states could influence physical events through complex chains of electrochemical and biochemical processes, which would then make the causative relationship less apparent. So while APs serve as the interface, this doesn’t mean that mental states and APs are identical. This is because the mental can’t be reduced to physiology (the physical); it encompasses a range of subjective experiences, emotions, thoughts, and intentions that transcend the mechanistic explanations of neural activity.

It is quite obviously an empirical fact that the mental can influence the physical. Think of the fight-or-flight response. When one sees something that they are fearful of (like, say, an animal), there is then a concurrent change in certain hormones. This simple example shows how the mental can have an effect on the physical—where the perceptual event of seeing something fearful (which would also be a subjective experience) would then lead to a physical change. So the initial mental event of seeing something fearful is a subjective experience which occurs in the realm of consciousness and mental states. The subjective experience of fear then triggers the fight-or-flight response, which leads to the release of stress hormones like cortisol and adrenaline. These physiological changes are part of the body’s response to a perceived threat based on the subject’s personal subjective experience. So the release of stress hormones is a physical event, and these hormones then have a measurable effect on the body like an increase in heart rate, heightened alertness and energy mobilization which then prepares the subject for action. These physiological changes then prepare the subject to either fight or flee from the situation that caused them fear. This is a solid example of how the mental can influence the physical.

The only way, I think, that my view can be challenged is by arguing that the CCP is true. But if the CCP is question-begging, then my proposition that mental states can influence APs is less contentious. Furthermore, my argument on APs could be open to multiple interpretations of causal closure. So instead of strictly adhering to causal closure, my view could accommodate various interpretations that allow mental causation to have an effect in the physical realm. Thus, since I view causal closure as question-begging, this provides a basis for my view that mental states can influence APs and, by extension, the physical world. And if the CCP is false, my view on action potentials is actually strengthened.

The view I have argued for here is a simplified perspective on the relationship between the mental and the physical. But my intention isn’t to offer a comprehensive account of all aspects of mental and physical interaction, rather, it is to highlight the role of APs as a point of connection between the mental and physical realms.

Cognitive interface dualism as a form of substance dualism

The view I have argued for here is a substance dualist position. Although it posits an intermediary in APs that facilitates interaction between the mental and physical realms, it still maintains the fundamental duality between mental and physical substances. Mental states are irreducible to physical states, and they interact through APs without collapsing into a single substance. Mental states involve subjective experiences, intentionality, and qualia which are fundamentally different from the objective and quantifiable nature of the physical realm, as I have argued before. APs serve as the bridge—the interface—between the mental and the physical realms, so my dualistic perspective allows for interaction while still preserving the unique properties of the mental and the physical.

Although APs serve as the bridge between the mental and the physical, the interaction between mental states and APs suggests that mental causation operates independently of physical processes. This, then, implies that the self, which originates in mental states, isn’t confined to the physical realm and isn’t reducible to the physical. The self’s subjective experiences, consciousness, and self-awareness cannot be explained by physical or material processes, which indicates an immaterial substance beyond the physical. The unity of consciousness, the integrated sense of self and personal identity over time, is better accounted for by an immaterial self that transcends changes in physical states. Lastly, mental states possess qualitative properties like qualia that defy reduction to physical properties. These qualities, then, point to a distinct and immaterial self.

My view posits a form of non-reductive mental causation, where mental states influence APs, acknowledging the nonphysical influence of the mental on the physical. Interaction doesn’t imply reduction; mental states remain irreducible even though they impact physical processes. My view also accommodates consciousness, subjectivity, and intentionality, which can’t be accounted for by material or physical processes. My view also addresses the explanatory gap between objective physical processes and subjective mental processes, which can’t be closed by reduction to physical brain (neural) processes.

Conclusion

The exploration of APs within the context of cognitive interface dualism offers a perspective on the interplay between the mental and physical substances. My view acknowledges APs as the bridge of interaction between the mental and the physical, and it fosters a deeper understanding of the role of mental causation in helping us understand reality.

Central to my view is the recognition that while APs do serve as the interface or conduit by which the mental and the physical interact, and by which mental states can influence physical events, this does not entail that the mental is reducible to the physical. My cognitive interface dualism therefore presents a nuanced approach that navigates the interface between the seen and the unseen, the physical and the mental.

While traditional views of causal closure may raise questions about the feasibility of mental causation, the concept’s rigidity is challenged by the intermediary role of APs. And while I do hold that the CCP is question-begging, the view I have argued for here explores an alternative avenue which seemingly transcends that limitation. So even if the strict view of the CCP were to fall, my view would remain strong.

This view is also inherently anti-reductionist, asserting that personal identity, consciousness, subjectivity and intentionality cannot be reduced to the physical. Thus, it doesn’t succumb to the traditional limitations of physicalism. Cognitive interface dualism also challenges the notion that we are reducible to our physical brains or our mental activity. The self—the bearer of mental states—isn’t confined to neural circuitry; although the physical is necessary for our mental lives, it isn’t a sufficient condition (Gabriel, 2018).

Lastly, of course, this view means that since the mental is irreducible to the physical, psychometrics isn’t a measurement enterprise. Any argument that espouses the view that the mental is irreducible to the physical would entail that psychometrics isn’t measurement. So by acknowledging that mental states, consciousness, and subjective experiences transcend the confines of physical quantification, cognitive interface dualism dismantles the assumption that the human mind can be measured and encapsulated using numerical metrics. This view holds that the mental resists quantification, since only the physical is quantifiable: only the physical has a specified measured object, object of measurement, and measurement unit.

All in all, the view I title cognitive interface dualism explains how mental causation occurs through action potentials. It still holds that the mental is irreducible to the physical, but that the mental and physical interact without M being reduced to P. This view I have espoused is, I think, unique, and it shows how mental causation does occur and how we perform actions.

IQ, Achievement Tests, and Circularity

2150 words

Introduction

In the realm of educational assessment and psychometrics, a distinction between IQ and achievement tests needs to be upheld. It is claimed that IQ is a measure of one’s potential learning ability, while achievement tests show what one has actually learned. However, this distinction is not strongly supported in my reading of this literature. IQ and achievement tests are merely different versions of the same evaluative tool. This is what I will argue in this article: that IQ and achievement tests are different versions of the same test, and so any attempt to “validate” IQ tests against other IQ tests, achievement tests, or job performance is circular. I will also argue that, of course, the goal of psychometrics in measuring the mind is impossible. The hereditarian argument, when it comes to defending their concept and the claim that they are measuring some unitary and hypothetical variable, then, fails. At best, these tests show one’s distance from the middle class, since that’s where most of the items on the test derive from. Thus, IQ and achievement tests are different versions of the same test and so they merely show one’s “distance” from a certain kind of class-specific knowledge (Richardson, 2012), due to the cultural and psychological tools one must possess to score well on these tests (Richardson, 2002).

Circular IQ-ist arguments

IQ-ists have been using IQ tests since they were brought to America by Henry Goddard in 1913. But one major issue (one they still haven’t solved—and quite honestly never will) was that they didn’t have any way to ensure that the test was construct valid. So this is why, in 1923, Boring stated that “intelligence is what intelligence tests test“, while Jensen (1972: 76) said “intelligence, by definition, is what intelligence tests measure.” However, such statements are circular and they are circular because they don’t provide real evidence or explanation.

Boring’s claim that “intelligence is what intelligence tests test” is circular since it defines intelligence based on the outcome of “intelligence tests.” So if you ask “What is intelligence“, and I say “It’s what intelligence tests measure“, I haven’t actually provided a meaningful definition of intelligence. The claim merely rests on the assumption that “intelligence tests” measure intelligence, not telling us what it actually is.

Jensen’s (1972) claim that “intelligence, by definition, is what intelligence tests measure” is circular for similar reasons to Boring’s, since it also defines intelligence by referring to “intelligence tests” and at the same time assumes that intelligence tests are accurately measuring intelligence. Neither claim actually provides an independent understanding of what intelligence is; each merely ties the concept of “intelligence” back to its “measurement” (by IQ tests). Jensen’s defense of Spearman’s hypothesis on the nature of black-white differences has also been criticized as circular (Wilson, 1985). Not only was Jensen (and by extension Spearman) guilty of circular reasoning, so too was Sternberg (Schlinger, 2003). Such a circular claim was also made by van der Maas, Kan, and Borsboom (2014).

But Jensen seemed to have changed his view, since in his 1998 book The g Factor he argues that we should dispense with the term “intelligence”, but curiously that we should still study the g factor and assume identity between IQ and g… (Jensen made many more logical errors in his defense of “general intelligence”, like saying not to reify intelligence on one page and then a few pages later reifying it.) Circular arguments have been identified not only in Jensen’s writings on Spearman’s hypothesis, but also in using construct validity to validate a measure (Gordon, Schonemann; Guttman, 1992: 192).

The same circularity can be seen when the correlation between IQ and achievement tests is brought up. “These two tests correlate so they’re measuring the same thing” is an example one may come across. But the error here is assuming that mental measurement is possible and that IQ and achievement tests are independent of each other. However, IQ and achievement tests are different versions of the same test. This is an example of circular validation, which occurs when a test’s “validity” is established by the test itself, leading to a self-reinforcing loop.

IQ tests are often validated against older editions of the test. For example, the newer version of the S-B would be “validated” against the older version of the test that the newer version was created to replace (Howe, 1997: 18; Richardson, 2002: 301), which not only leads to circular “validation”, but also means that the assumptions of the older test constructors (like Terman) are still alive in the test itself (since Terman assumed men and women should be equal in IQ, and this assumption is still there today). IQ tests are also often “validated” by comparing IQ test results to outcomes like job performance and academic performance. Richardson and Norgate (2015) have a critical review of the correlation between IQ and job performance, arguing that it’s inflated by “corrections”, while Sackett et al (2023) show “a mean observed validity of .16, and a mean corrected for unreliability in the criterion and for range restriction of .23. Using this value drops cognitive ability’s rank among the set of predictors examined from 5th to 12th” for the correlation between “general cognitive ability” and job performance.
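For readers unfamiliar with what these “corrections” actually do, the two standard adjustments are correction for unreliability in the criterion and correction for range restriction (Thorndike’s Case II). The numerical values below are purely illustrative assumptions, not the inputs Sackett et al actually used:

\[
r_{c_1} = \frac{r_{xy}}{\sqrt{r_{yy}}},
\qquad
r_{c_2} = \frac{u\, r_{c_1}}{\sqrt{1 + r_{c_1}^{2}\left(u^{2} - 1\right)}},
\qquad
u = \frac{SD_{\text{unrestricted}}}{SD_{\text{restricted}}}.
\]

With an observed validity of .16, an assumed criterion reliability of .60, and an assumed restriction ratio of u = 1.1, the coefficient climbs to roughly .21 after the first correction and roughly .23 after the second. The corrected figure, in other words, is only as strong as the reliability and restriction assumptions fed into the formulas, which is the sense in which such validities are “inflated by corrections.”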

Validating IQ tests against outcomes like these can also be circular: if a high IQ is used as a predictor of success in school or work, and success in school or work is then used as evidence validating the IQ test, the argument loops back on itself. The test’s validity is being supported by the very outcome it’s supposed to predict.

Achievement tests are designed to see what one has learned or achieved regarding a certain kind of subject matter. Achievement tests are often validated by correlating test scores with grades or other kinds of academic achievement (which would also be circular). But if high achievement test scores are used to validate the test and those scores are also used as evidence of academic achievement, then that would be circular. Achievement tests are “validated” on their relationship with IQ tests and grades. Heckman and Kautz (2013) note that “achievement tests are often validated using other standardized achievement tests or other measures of cognitive ability—surely a circular practice” and “Validating one measure of cognitive ability using other measures of cognitive ability is circular.” But it should also be noted that the correlation between college grades and job performance 6 or more years after college is only .05 (Armstrong, 2011).

Now what about the claim that IQ tests and achievement tests correlate so they measure the same thing? Richardson (2017) addressed this issue:

For example, IQ tests are so constructed as to predict school performance by testing for specific knowledge or text‐like rules—like those learned in school. But then, a circularity of logic makes the case that a correlation between IQ and school performance proves test validity. From the very way in which the tests are assembled, however, this is inevitable. Such circularity is also reflected in correlations between IQ and adult occupational levels, income, wealth, and so on. As education largely determines the entry level to the job market, correlations between IQ and occupation are, again, at least partly, self‐fulfilling

The circularity inherent in likening IQ and achievement tests has also been noted by Nash (1990). There is no distinction between IQ and achievement tests, since there is no theory or definition of intelligence that would link it to answering questions correctly on an IQ test.

But how, to put first things first, is the term ‘cognitive ability’ defined? If it is a hypothetical ability required to do well at school then an ability so theorised could be measured by an ordinary scholastic attainment test. IQ measures are the best measures of IQ we have because IQ is defined as ‘general cognitive ability’. Actually, as we have seen, IQ theory is compelled to maintain that IQ tests measure ‘cognitive ability’ by fiat, and it therefore follows that it is tautologous to claim that IQ tests are the best measures of IQ that we have. Unless IQ theory can protect the distinction it makes between IQ/ability tests and attainment/ achievement tests its argument is revealed as circular. IQ measures are the best measures of IQ we have because IQ is defined as ‘general cognitive ability’: IQ tests are the only measures of IQ.

The fact of the matter is, IQ “predicts” (is correlated with) school achievement since they are different versions of the same test (Schwartz, 1975; Beaujean et al, 2018). Since the main purpose of IQ tests in the modern day is to “predict” achievement (Kaufman et al, 2012), if we correctly identify IQ and achievement tests as different versions of the same test, then we can rightly state that the “prediction” is itself a form of circular reasoning. What is the distinction between “intelligence” tests and achievement tests? They both have similar items on them, which is why they correlate so highly with each other. This, therefore, makes the comparison of the two in an attempt to “validate” one or the other circular.

I can now argue that the distinction between IQ and achievement tests is nonexistent. IQ and achievement tests contain similar informational content, and so they can both be considered knowledge tests—tests of class-specific knowledge. They share the same domain of assessing knowledge and skills. Put simply, if IQ and achievement tests are different versions of the same test, then they will have similar item content; they do have similar item content, so we can correctly regard them as different versions of the same test.
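As a toy illustration of this point, here is a short Python sketch. It assumes (as the argument above does) that test performance reflects only differential exposure to a shared pool of class-specific knowledge; there is no latent “general ability” anywhere in the model, and every name and parameter is hypothetical. Two “different” tests assembled from that pool still end up highly correlated, because they sample the same knowledge:

import numpy as np

rng = np.random.default_rng(0)
n_people, n_items = 1000, 200

# Each person's probability of having encountered any given piece of
# class-specific knowledge (a stand-in for class-linked exposure, not "g").
exposure = rng.uniform(0.2, 0.8, size=n_people)
knowledge = rng.random((n_people, n_items)) < exposure[:, None]

# Two tests drawn from the same item pool, with partially overlapping content.
pool = rng.permutation(n_items)
iq_items = pool[:60]                       # the "IQ test"
ach_items = np.concatenate([pool[20:60],   # 40 items shared with the "IQ test"
                            pool[60:80]])  # plus 20 items of its own: the "achievement test"

iq_score = knowledge[:, iq_items].sum(axis=1)
ach_score = knowledge[:, ach_items].sum(axis=1)

# A high correlation appears even though no "ability" variable was modeled;
# shared item content and shared exposure to the pool do all the work.
print(round(np.corrcoef(iq_score, ach_score)[0, 1], 2))

On the reading the sketch encodes, a high IQ-achievement correlation is exactly what we should expect from two tests built out of the same class-specific knowledge, so the correlation cannot by itself validate either test as a measure of a distinct ability.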

Moreover, even constructing tests has been criticized as circular:

Given the consistent use of teachers’ opinions as a primary criterion for validity of the Binet and Wechsler tests, it seems odd to claim  then that such tests provide “objective alternatives to the subjective judgments of teachers and employers.”  If the tests’ primary claim to predictive validity is that their results have strong correlations with academic success, one wonders how an objective test can predict performance in an acknowledged subjective environment?  No one seems willing to acknowledge the circular and tortuous reasoning behind the development of tests that rely on the subjective judgments of secondary teachers in order to develop an assessment device that claims independence of those judgments so as to then be able to claim that it can objectively assess a student’s ability to  gain the approval of subjective judgments of college professors.  (And remember, these tests were used to validate the next generation of tests and those tests validated the following generation and so forth on down to the tests that are being given today.) Anastasi (1985) comes close to admitting that bias is inherent in the tests when he confesses the tests only measure what many anthropologists would called a culturally bound definition of intelligence. (Thorndike and Lohman, 1990)

Conclusion

It seems clear to me that almost the whole field of psychometrics is plagued by the problem of inferring causes from correlations and by circular arguments that attempt to justify and validate the claim that IQ tests measure intelligence by relating IQ to job and academic performance. This whole approach is very confused. Moreover, circular arguments aren’t only restricted to IQ and achievement tests, but appear in twin studies as well (Joseph, 2014; Joseph et al, 2015). IQ and achievement tests merely show what one knows, not one’s learning potential, since they are general knowledge tests—tests of class-specific knowledge. So even Gottfredson’s “definition” of intelligence fails, since Gottfredson presumes IQ to be a measure of learning ability (never mind the fact that the “definition” is so narrow that I struggle to think of a valid way to operationalize it in culture-bound tests).

The fact that newer versions of tests already in circulation are “validated” against older versions of the same test means that the tests are circularly validated. The original test (say, the S-B) was never itself validated, and so they’re just “validating” the newer test on the assumption that the older one was valid. The newer test, in being compared to its predecessor, is “validated” against an older test which has similar principles, assumptions, and content. The issue of content overlap, too, is a problem, since some questions or tasks on the newer test could be identical to questions or tasks on the older test. The point is, both IQ and achievement tests are merely knowledge tests, not tests of a mythical general cognitive ability.

Challenging the Myth of Objective Testing with an Absolute Scale in the Face of Non-Cognitive Influences

2200 words

The IQ-ists are at it again. This time, PP is claiming that the little tests he created are on an absolute scale—meaning that they have a true 0 point. This has been the Achilles heel of psychometrics for many decades. But abstract concepts don’t have true 0 points, and this is why “cognitive measurement” isn’t possible. I will conceptually analyze PP’s arguments for his “spatial intelligence test” and his “verbal intelligence test” and show that they aren’t on absolute scales. I will then use the IQ-ists’ favorite measurement—temperature (one they try to claim is like IQ)—and show the folly in his reasoning in claiming that these tests are on an absolute scale. I will then discuss the real reasons for score disparities, relate them to social class and one’s life experiences, and argue that the score results reflect merely environmental variables.

Fixed reference points and absolute scales

There are no fixed reference points for “IQ” like there are for temperature. IQ-ists have claimed for decades that temperature is like IQ while thermometers are like IQ tests (Nash, 1990). But I have shown the confused thinking of hereditarians on this issue. An absolute scale requires a fixed reference point or a true 0 point which can be objectively established. Physical quantities like distance, weight, and temperature have natural objective 0 points which can serve as fixed reference points. But nonphysical or abstract concepts lack inherent or universally agreed-upon 0 points which can serve as consistent reference points. So only physical quantities can truly be measured in an absolute scale, since they possess natural 0 points which provide a foundation for measurement.
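A worked illustration of why the fixed zero matters: the same two physical states give a meaningful ratio on the Kelvin scale, which has a true zero, but not on the Celsius scale, whose zero is an arbitrary convention (the freezing point of water):

\[
\frac{300\ \mathrm{K}}{150\ \mathrm{K}} = 2,
\qquad\text{but}\qquad
\frac{26.85\ ^{\circ}\mathrm{C}}{-123.15\ ^{\circ}\mathrm{C}} \approx -0.22,
\]

even though 300 K and 26.85 °C, and 150 K and −123.15 °C, pick out exactly the same temperatures. “Twice as much” is only defined relative to a non-arbitrary zero. By the same token, a score of 140 on one of these tests is not “twice” a score of 70, because no non-arbitrary zero point of the purported trait has ever been specified.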

If “spatial intelligence” is a unitary and objectively measurable cognitive trait, then all individuals’ spatial abilities should consistently align across various tasks. But individuals often exhibit significant variability in their performance across spatial tasks, excelling in one aspect and not others. This variability suggests that “spatial intelligence” isn’t a unitary concept. So the concept of a single, unitary, measurable “spatial intelligence” is questionable.

If the test is on an absolute scale for measuring “spatial intelligence”, then the scores obtained directly reflect the inherent “spatial intelligence” of individuals, without being influenced by factors like puzzle complexity, practice, or other variables. But the scores are influenced by factors like puzzle complexity and practice effects (like having done similar tasks in the past). Since the scores are influenced by various factors, the test is not on an absolute scale.

If a measurement is on an absolute scale, then it should produce consistent results across different contexts and scenarios, reflecting a stable underlying trait. But cognitive abilities can be influenced by various external factors like stress, fatigue, motivation, and test-taking conditions. These external factors can lead to fluctuations in performance which aren’t indicative of the “trait” that’s supposedly being measured; they merely reflect the circumstances of the moment one took the test in. So the concept of an absolute scale for measuring cognitive abilities fails to account for the impact of external variables which can introduce variability and inaccuracies into the “measurement.” This undermines the claim that this—or any—test is on an absolute scale, since motivation, stress, and other socio-cognitive factors influence performance, as Richardson (2002: 287-288) notes:

the basic source of variation in IQ test scores is not entirely (or even mainly) cognitive, and what is cognitive is not general or unitary. It arises from a nexus of sociocognitive-affective factors determining individuals’ relative preparedness for the demands of the IQ test. These factors include (a) the extent to which people of different social classes and cultures have acquired a specific form of intelligence (or forms of knowledge and reasoning); (b) related variation in ‘academic orientation’ and ‘self-efficacy beliefs’; and (c) related variation in test anxiety, self-confidence, and so on, which affect performance in testing situations irrespective of actual ability.

Such factors, which influence test scores, merely show what one was exposed to in their lives, under my DEC framework. Socio-cognitive factors related to social class could introduce bias, since people from different backgrounds are exposed to different information, have unequal access to information and test prep, along with familiarity with item content. Thus, we can then look at these scores as mere social class surrogates.

If test scores are influenced by stress, anxiety, fatigue, motivation, familiarity, non-cognitive factors, and socio-cognitive factors tied to social class, then the concept of an absolute scale for measuring cognitive abilities cannot hold. I have established that test scores can indeed be influenced by myriad external factors. So given that these factors affect test scores and undermine the assumption of an absolute scale, the concept of measuring cognitive ability on such a scale is challenged (don’t forget the irreducibility arguments). Further, the argument that “spatial intelligence” is not measurable on an absolute scale due to its nonphysical nature aligns with this perspective, which further supports the idea that the concept of an absolute scale isn’t applicable in these contexts. Thus, the implications for testing are profound: score differences are due to social class and one’s life experiences, not any kind of “genotypic IQ” (which is an oxymoron).

Regarding vocabulary, this is influenced by the home environment—the types of words one is exposed to while growing up (and can therefore also be integrated into the DEC). Kids from lower-SES families hear fewer words at home and in their neighborhoods (low-SES children hear 30 million fewer words than higher-SES children) (Brito, 2017). We know that word usage is the strongest determinant of child vocabulary growth, and that less educated parents use fewer words with less complex syntax (Perkins, Finegood, and Swain, 2013). The quality of the language that is addressed to children also matters (Golinkoff et al, 2023). We can then liken this to the Vygotskian More Knowledgeable Other (MKO). An MKO would have knowledge that their dependent doesn’t. But if the MKO in this instance is less educated or low income, then they will use fewer words, and this feature will be present in their home. Such tests merely show what one was exposed to in their lives, not any underlying unitary “thing” like the IQ-ists claim.

Increasing both the amount and diversity of language within the home can positively influence language development, regardless of SES. Repeated exposure to words and phrases increases the child’s opportunity to learn and remember (McGregor, Sheng, & Ball, 2007). The complexity of grammar, the responsiveness of language to the child, and the use of questions all aid language development (Bornstein, Tamis-LeMonda, Hahn, & Haynes, 2008; Huttenlocher, Waterfall, Vasilyeva, Vevea, & Hedges, 2010). Besides frequency of language input, how caregivers communicate with children also affects children’s language skills. Children from higher SES families experience more gestures by their care-givers during parent–child interactions; these SES differences predict vocabulary differences at 54 months of age (Rowe & Goldin-Meadow, 2009). Parent–child interactions provide a context for language exposure and mold the child’s language development. Specific characteristics of the caregiver, including affect, responsiveness, and sensitivity predict children’s early and later language skills (Murray & Hornbaker, 1997; Tamis-LeMonda, Bornstein, Baumwell, & Melstein Damast, 1996). Maternal sensitivity partially explains links between SES and both children’s receptive and expressive language skills at age 3 years (Raviv, Kessenich, & Morrison, 2004). These differences also appear across culture (Mistry, Biesanz, Chien, Howes, & Benner, 2008). Maternal supportiveness partially explained the link between SES and language outcomes at 3 years of age, for both immigrant and native families in the United States. (Brito, 2017: 3-4)

The issue of temperature

This can be illustrated using the IQ-ists’ favorite (real) measurement—temperature. The Kelvin scale avoids the issues in the first argument. On the Kelvin scale, temperature is measured in relation to absolute 0 (the point where molecular motion theoretically stops). It doesn’t involve factors like variability in measurement techniques, practice effects, or individual differences. The Kelvin scale has a consistent reference point—absolute 0—which provides a consistent and fixed baseline for temperature measurement. The values on the Kelvin scale are directly tied to a true 0 point.

There are no external influences on the measurement of temperature (beyond that which influences the mercury in the thermometer to move up or down), like the type of thermometer used or one’s familiarity with temperature measurement. External factors like these aren’t relevant to the Kelvin scale, unlike puzzle complexity and practice effects on the spatial abilities test.

Finally, temperature values on the Kelvin scale are universally applicable, which means that a specific temperature corresponds to the same level of molecular motion regardless of who performs the measurement, or what measurement instrument is used. So the Kelvin temperature scale doesn’t have the same issues as PP’s little “spatial intelligence” test. It has a clear and consistent measurement framework, where values directly represent the underlying physical phenomenon of molecular motion without being influenced by external factors or individual differences. When you think about actual, established measurements like temperature and then try to relate them to IQ, then the folly of “mental measurement” reveals itself.
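To make the contrast concrete, here is a minimal sketch in Python (the numbers and function names are my own, chosen purely for illustration) of why a true 0 point matters: ratios are meaningful on the Kelvin scale because 0 K is a genuine zero, they are not meaningful on an interval scale like Celsius, and for IQ there is no known conversion to any scale with a true zero at all:

```python
# Minimal illustration (hypothetical values): ratio claims only make sense
# on a scale with a true zero, like Kelvin.

def celsius_to_kelvin(t_c: float) -> float:
    """Convert a Celsius reading to Kelvin, the scale with a true 0 point."""
    return t_c + 273.15

def ratio(a: float, b: float) -> float:
    """Ratio of two readings; only meaningful if the scale has a true zero."""
    return a / b

# On the Kelvin scale, 300 K really is twice the absolute temperature of 150 K:
print(ratio(300.0, 150.0))  # 2.0

# But "30 °C is twice as hot as 15 °C" is an artifact of Celsius's arbitrary zero;
# the same two readings expressed in Kelvin are nowhere near a 2:1 ratio:
print(ratio(celsius_to_kelvin(30.0), celsius_to_kelvin(15.0)))  # ~1.05

# An IQ of 140 vs. 70 is like the Celsius case, except worse: there is no
# conversion to any scale with a true zero, so no ratio is recoverable at all.
```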

Now, having said all of this, I can draw a parallel between the argument against an absolute scale for cognitive abilities and the concept of temperature.

Temperature measurements, while influenced by external factors (since this is what makes the mercury travel up or down in the thermometer) like atmospheric pressure and humidity, still have an absolute 0 point on the Kelvin scale which represents a complete absence of thermal energy. Unlike “spatial intelligence”, temperature has a fixed reference point which serves as an objective 0 point, which allows it to be measured on an absolute scale. The external factors influencing temperature measurement are fundamentally different from the factors which influence one’s performance on a test, since they don’t introduce subjective variations in the same manner. So while temperature is influenced by external factors, its measurement is fundamentally different from that of nonphysical concepts, due to the presence of an objective 0 point and the distinct nature of the influencing factors. This is put wonderfully by Nash (1990: 131):

First, the idea that the temperature scale is an interval scale is a myth and, second, a scale zero can be established for an intelligence scale by the same method of extrapolation used in defining absolute zero temperature. In this manner Eysenck (p. 16) concludes, ‘if the measurement of temperature is scientific (and who would doubt that it is?) then so is that of intelligence.’ It should hardly be necessary to point out that all of this is special pleading of the most unabashed sort. In order to measure temperature three requirements are necessary: (i) a scale, (ii) some thermometric property of an object and, (iii) fixed points of reference. Zero temperature is defined theoretically and successive interval points are fixed by the physical properties of material objects. As Byerly (p. 379) notes, that ‘the length of a column of mercury is a thermometric property presupposes a lawful relationship between the order of length and the temperature order under certain conditions.’ It is precisely this lawful relationship which does not exist between the normative IQ scale and any property of intelligence. The most obvious problem with the theory of IQ measurement is that although a scale of items held to test ‘intelligence’ can be constructed there are no fixed points of reference. If the ice point of water at one atmosphere fixes 276.16 K, what fixes 140 points of IQ? Fellows of the Royal Society? Ordinal scales are perfectly adequate for certain measurements, Moh’s scale of scratch hardness consists of ten fixed points, from talc to diamond, and is good enough for certain practical purposes. IQ scales (like attainment test scales) are ordinal scales, but this is not really to the point, for whatever the nature of the scale it could not provide evidence for the property IQ or, therefore, that IQ has been measured.

Conclusion

It’s quite obvious that IQ-ists have no leg to stand on, which is why they need to claim that their tests are on absolute scales even when it leads to an absurd conclusion. The fact that test performance is influenced by myriad non-cognitive traits due to one’s social class (Richardson, 2002) shows that these—and all tests—take place in certain cultural contexts, meaning that all tests are culture-bound, as argued by Cole (2004) with his West African Binet argument.

The fact of the matter is, “mental measurement” is impossible, and all these tests do is show the proximity to a certain kind of class-specific knowledge, not any kind of general cognitive “strength”. Taking a Vygotskian perspective on this issue will allow us to see how and why people score differently from each other, and it comes down to their home environment and what they learn in their lives.

Nevertheless, the claims from IQ-ists that they have a specified measured object, object of measurement, and measurement unit for IQ, or that their tests have a true 0 point, are absurd, since these things are properties of physical objects, not non-physical, mental ones. The Vygotskian perspective will allow us to understand score variances between individuals and groups, as I have argued before. We don’t need to claim that there is an absolute scale for cognitive assessment, nor do we need to claim that mental measurement is possible, for this to be a truism. So, yet again, PP’s argument fails.

Ashkenazi Jews Are White

2700 words

Introduction

Recently, I have been seeing people say that Ashkenazi Jews (AJs) are not white. Some may say that Jews “pretend to be white”, so they can accomplish their “group goals” (like pitting whites and blacks against each other in an attempt to sow racial strife, due to their ethnic nepotism due to their genetic similarity). I have also seen people deriding Jews for saying “I’m white” and then finding an instance of them saying “I’m Jewish” (see here for an example), as if that’s a contradiction, but it’s not. It’s the same thing as saying “I’m Italian… I’m white” or “I’m German… I’m white.” But since pluralism about race is true, there could be some contexts and places that Jews aren’t white, due to the social construction of racial identities. However, in the American context it is quite clear: In both historical and contemporary thought in America, AJs are white.

But a claim like this, then, raises an important question: If AJs are not white, then what race are they? This is a question I will answer in this article, and I will of course show that AJs are indeed white in an American conception of race. Using Quayshawn Spencer’s racial identity argument, I will assume that Ashkenazi Jews aren’t white, and then I will argue that this leads to a contradiction, so Jews must be white. And while there was discussion about the racial status of Jews after they began emigrating to America through Ellis Island, I will show that Jews arrived to America as whites.

White or not?

The question of whether or not AJs are white is a vexing one. Of course, AJs are a religious group. However, this doesn’t mean that they themselves have their own specific racial category. It’s like if one says they are German, or Italian, or British. Those are mere ethnicities which make up the white racial group. One study found that AJs have “White privilege vis-á-vis persons of color. This privilege, however, is limited to Jews who can “pass” as White gentiles” (Blumenfeld, 2009). Jews that can “pass as white” are obviously white, and there is no other race for them to be.

This is due to the social nature of race. Since race is a social construct, then the way people’s racial background is perceived in America is based on how they look (their phenotype). An Ashkenazi Jew saying “I’m Jewish. I’m white” isn’t a contradiction, since AJs aren’t a race. It’s just like saying “I’m Italian. I’m white” or “I’m German. I’m white.” It’s quite obviously an ethnic group which is a part of the white race. Jews are white and whites are a socialrace.

This discussion is similar to the one where it is claimed that “Hispanic/Latino/Spanish” aren’t white. But that, too, is a ridiculous claim. In cluster studies, HLSs don’t have their own cluster, but they cluster near the group where their majority ancestry derives (Risch et al, 2002). Saying that AJs aren’t white is similar to this.

But during WWII, Jews were persecuted in Nazi Germany and eventually some 6 million Jews were killed. Jews, in this instance, were seen as a socialrace in Germany, and so they were themselves racialized. It has been shown that Germans who grew up under the Nazi regime are much more anti-Semitic than Germans who were born before or after the Nazi regime, and it was Nazi schooling which contributed to this the most (Voigtlander and Voth, 2015). This shows how malleable one’s beliefs—and those of a whole society—are, along with how effective propaganda is. The Nuremberg laws of 1935 codified anti-Jewish sentiment in the Nazi racial state, and so the Nazis had to have a way to identify Jews. They settled on the religious affiliation of one’s 4 grandparents as a way to identify Jews. But when one’s origins were in doubt, the Reich Kinship Office was deployed in order to ascertain one’s genealogy. And in the event this could not be done, one’s physical attributes would be assessed, with 120 physical measures compared between the individual and their parents (Rupnow, 2020: 373-374).

This can now be centered on Whoopi Goldberg’s divisive comment from February, 2022, where she states that the attempted genocide of Jews in Nazi Germany “wasn’t about race“, but it was about “man’s inhumanity to man; [it involved] two groups of white people.” Of course Goldberg is operating under an American conception of race, so I could see why she would say that. However, at the time in Nazi Germany, Jews were Racialized Others, and so they were a socialrace in Germany.

Per Pew, most Jews in America identify as white:

92% of U.S. Jews describe themselves as White and non-Hispanic, while 8% say they belong to another racial or ethnic group. This includes 1% who identify as Black and non-Hispanic; 4% who identify as Hispanic; and 3% who identify with another race or ethnicity – such as Asian, American Indian or Hawaiian/Pacific Islander – or with more than one race.

A supermajority (94%) of American Jews identified as white and non-“Hispanic” in Pew’s 2013 research (Lugo et al, 2013), slightly higher than the figure in the 2020 research above:

From Lugo et al, 2013

AJs were viewed as white even as early as 1790, when the Naturalization Act was put into law, which stated that only free white persons could be naturalized as American citizens (Tanner, 2021). Even in 1965, Srole (1965) stated that “Jews are white.” The perception that all Jews are white came after WWII (Levine-Rasky, 2020), and this claim is of course false—all Jews certainly aren’t white, but some Jews are white. Thus, even historically in America, AJs were seen as white. Yang and Koshy (2016) write:

We found no evidence from U.S. censuses, naturalization legislation, and court cases that the racial categorization of some non-Anglo-Saxon European immigrant groups such as the Irish, Italians, and Jews changed to white. They were legally white and always white, and there was no need for them to switch to white.

White ethnics could be considered ethnically inferior and discriminated against because of their ethnic distinctions, but in terms of race or color, they were all white and had access to resources not available to nonwhites.

It was precisely because of the changing meanings of race that “the Irish race,” “the German race,” “the Dutch race,” “the Jewish race,” “the Italian race,” and so on changed their races and became white. In today’s terminology, it should be read that these European groups changed their ethnicities to become part of whites, or more precisely they were racialized to become white.

Our findings help resolve the controversy over whether certain U.S. non-Anglo-Saxon European immigrant groups became white in historical America. Our analysis suggests that “becoming white” carries different meanings: change in racial classification, and change in majority/minority status. In terms of the former, “becoming white” for non-Anglo-Saxon European immigrant groups is bogus. Hence, the argument of Eric Arnesen (2001), Aldoph Reed (2001), Barbara Fields (2001), and Thomas Guglielmo (2003) that the Irish, Italians, and Jews were white on arrival in America is vindicated.

But one article in The Forward argued that “Ashkenazi Jews are not functionally white.” The author (Danzig) attempts to make an analogy between the NAACP leader Walter White, who was “white-passing” (both of his parents were born into slavery), and Jews who are “white-passing” “due to years of colonialism, expulsion and exile in European lands.” The author then claims that as long as Jews maintain their unique Jewish identity, they are therefore a racial group. This article is a response to another which claims that Ashkenazi Jews are “functionally white” (Burton). Danzig discusses Burton’s claim that a “white-passing ‘Latinx’” person could be deported if their immigration status is discovered. This of course implies that “Hispanics” are themselves a racial group (they aren’t). Danzig also discusses the discrimination that his family went through in the 1920s, stating that there were certain things they could not do because they were Jewish. The argument in Danzig’s article, I think, is confused. It’s confused because the fact that Jews were discriminated against in the past doesn’t mean they weren’t white. In fact, Jews, Italians, and the Irish were white on arrival to the United States (Steward, 1964; Yang and Koshy, 2016). But this doesn’t mean that they didn’t face discrimination. That is, Jews, Italians, and the Irish didn’t change to white; they were always legally white in America. (But see Gardaphe, 2002, Bisesi, 2017, Baddorf, 2020, and Rubin, 2021. Italians didn’t become white as those authors claim; they were white upon arrival.) So Danzig’s claim fails—Jews are functionally white because they are white and they arrived in America as white. Claims to the contrary that AJs (and Italians and the Irish) became white are clearly false.

So despite claims that Jews became white after WWII, Jews are in fact white in America (Pearson and Geronimus, 2011). Of course, in the early 1900s as immigrants were arriving at Ellis Island, the question of whether or not Jews (“Hebrews” in this instance) were white, or even whether they were their own racial group, received a decent amount of discussion at the time (Goldstein, 2005; Pearlman, 2018). But the fact that there was ethnic strife between new-wave immigrants at Ellis Island doesn’t entail that they were racial groups or that those European immigrants weren’t white. It’s quite clear that Jews—like Italians and the Irish—were considered white upon arrival.

Now that I have established the fact that AJs are indeed white (and arrived in America as white) despite the confused protestations of some authors, I will formalize the argument that AJs are white, since if they aren’t white, then they would need to fit into one of the other 4 racial categories.

Many may know that I push Quayshawn Spencer’s OMB race theory, and that I am a pluralist about race. In the volume What is Race?: Four Philosophical Views, philosopher of race Quayshawn Spencer (2019: 98) writes:

After all, in OMB race talk, White is not a narrow group limited to Europeans, European Americans, and the like. Rather, White is a broad group that includes Arabs, Persians, Jews, and other ethnic groups originating from the Middle East and North Africa.

Although there is some research on the racial identity of MENA (Middle Eastern/North African people) and how they may not perceive themselves as white or be perceived as white (Maghbouleh, Schachter, and Flores, 2022), the OMB is quite clear that the social group designated “white” doesn’t refer only to Europeans (Spencer, 2019).

So, if AJs aren’t white, then they must be part of another of the 4 OMB races (black, Native American, East Asian, or Pacific Islander). Part of this racial scheme is K=5—when K is set to 5 in STRUCTURE, 5 clusters are produced and these map onto the OMB races. But among those 5 clusters, there is no Jewish cluster. Note that I am not denying that there is some kind of genetic structure to AJs; I’m just denying that this would entail that they are a racial group. If they were, then they would appear in these runs. AJs are merely an ethno-religious group within the white socialrace. So let’s assume this is true: Ashkenazi Jews are not white.

When we consider the complexities of racial classification, it becomes apparent that societies tend to sort individuals into distinct categories based on physical traits, cultural background, and ancestry. If AJs aren’t white in an American context, then they would have to fall into one of the four other racial groups in a Spencerian OMB race theory.

But there is one important aspect to consider here—that of the phenotype of Ashkenazi Jews. Many Ashkenazi Jews exhibit physical traits which are more likely associated with “white” populations. This simple observation shows that AJs don’t fit into the established categories of East Asian, Pacific Islander, black or Native American. AJs’ typical phenotype aligns more closely with that of white populations.

So, examining the racial landscape in America, we can see how social perceptions and classifications can significantly impact how individuals are positioned in a broader framework. AJs have historically been classified and perceived as white in the American racial context, as can be seen above. So within American racetalk, AJs are predominantly classified in the white racial grouping.

So taking all of this together, I can rightly state that Jews are white. We assumed at the outset that if they weren’t white they would belong to some other racial group, but they don’t look like any other racial group; they look like, and are treated as, white people (both in contemporary thought and historically). So AJs are most definitely seen as white in American racetalk. Here’s the formalized argument:

P1: If AJs aren’t white, then they must belong to one of the other 4 racial categories (black, Native American, East Asian or Pacific Islander).
P2: AJs do not belong to any of the four racial categories mentioned (based on their phenotype typical of white people).
P3: In the American racial context, AJs are predominantly classified and perceived as white.
Conclusion: Assume AJs aren’t white. Then from P1, they must belong to one of the other 4 racial groups. But from P2, AJs do not belong to any of those categories, and from P3, AJs are perceived and classified as white. The assumption therefore leads to a contradiction, since it cannot be true together with P1–P3.

So we must reject the assumption that AJs aren’t white, and the logical conclusion is that AJs are considered white in the American context, based on their phenotype (and the fact that they arrived in America as white). Jews didn’t “become white” like some claim (eg, Brodkin, 2004). American Jews even benefit from white privilege (Schraub, 2019). MacDonald-Dennis’ (2005, 2006) qualitative research (although small and not generalizable) shows that some Ashkenazi Jews think of themselves as white. AJs are legally and politically white.

All Jews aren’t white, but some (most) Jews are white (in America).

Conclusion

Thus, AJs are white. Although many authors have claimed that Jews became white after arrival to America (or even after WWII), this claim is false. It is false even as far back as 1790. If we accept the assumption that AJs aren’t white, then it leads to a contradiction, since they would have to be one of the other 4 racial groups, but since they look white, they cannot be a part of those racial groups.

There are white Jews and there are non-white Jews. But when it comes to AJs, the question “When did they become white?” is nonsense, since they were always perceived and treated as white in America from its founding. Some AJs are white, some aren’t; some Mizrahi Jews are white, some aren’t. However, in the context of this discussion, it is quite clear that AJs are white, and there is no other race for them to be, based on the OMB race theory. In fact, in the minds of most Americans, Jews aren’t a racialized group, but they are perceived as outsiders (Levin, Filindra, and Kopstein, 2022). And there were some instances in history where Jews were racialized, and others where they weren’t (Hochman, 2017). But what I have decisively shown here is that, in the American context ever since its inception, AJs are most definitely white. Saying that AJs are white is like saying that Italians or Germans are white. There is no contradiction. Jews get treated as white in the American social context, they look white, and they have been considered white since they arrived in America in the early 1900s (like the Irish and Italians).

The evidence and reasoning presented in this article points to one conclusion: That AJs are indeed white. This of course doesn’t mean that all AJs are white, it merely means that some (and I would say most) are white. AJs have been historically, legally, and politically white. Mere claims that they aren’t white are irrelevant.

Examining Misguided Notions of Evolutionary “Progress”

2650 words

Introduction

For years, PumpkinPerson (PP) has been pushing an argument which states that “if you’re the first branch, and you don’t do anymore branching, then you’re less evolved than higher branches.” This is the concept of “more evolved”—the concept of evolutionary progress. Over the years I have written a few articles on the confused nature of this thinking. PP seems to like the argument since Rushton deployed a version of it for his r/K selection (Differential K) theory, which stated that “Mongoloids” are more “K evolved” than “Caucasoids”, who are more “K evolved” than “Negroids”, to use Rushton’s (1992) language. Rushton posited that this ordering occurred due to the cold winters that the ancestors of “Mongoloids” and “Caucasoids” underwent, and he theorized that this led to evolutionary progress, which would mean that certain populations are more advanced than others (Rushton, 1992; see here for response). It is in this context that PP’s statement above needs to be carefully considered and analyzed to determine its implications and relevance to Rushton’s argument. It commits the fallacy of affirming the consequent, and assuming the statement is true leads to many logical inconsistencies, like there being a “most evolved” species.

Why this evolutionary progress argument is fallacious

if you’re the first branch, and you don’t do anymore branching, then you’re less evolved than higher branches.

This is one of the most confused statements I have ever read on the subject of phylogenies. This misconception, though, is so widespread that there have been quite a few papers that discuss it and how to steer students away from this kind of thinking about evolutionary trees (Crisp and Cook, 2004; Baum, Smith, and Donovan, 2005; Gregory, 2008; Omland, Cook, and Crisp, 2008). This argument is invalid since the concept of “evolved” in evolutionary trees doesn’t refer to a hierarchical scale where higher branches are “more evolved” than lower branches (which are “less evolved”). What evolutionary trees do is show historical relationships between different species—common ancestry and divergence over time. So each branch represents a lineage, and all living organisms have been evolving for the same amount of time since the last common ancestor (LCA). Thus, the position of a branch on the tree doesn’t determine a species’ level of evolution.
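To illustrate the point concretely, here is a minimal sketch in Python using a made-up tree with hypothetical branch lengths (the species names, tree shape, and numbers are my own, chosen only for illustration). The “first branch” lineage that never splits again ends up exactly as far from the LCA, in elapsed time, as the tips of the lineage that kept branching:

```python
# Minimal illustration: every living tip on a time-calibrated tree is the same
# temporal distance from the last common ancestor, regardless of how much
# branching its lineage did along the way.

from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    branch_length: float = 0.0              # time along the branch leading to this node
    children: list = field(default_factory=list)

def time_to_tips(node: Node, elapsed: float = 0.0) -> dict:
    """Return the total time from the root to each living tip."""
    elapsed += node.branch_length
    if not node.children:
        return {node.name: elapsed}
    times = {}
    for child in node.children:
        times.update(time_to_tips(child, elapsed))
    return times

# Lineage A split off first and never branched again;
# lineage B kept branching, producing tips B1 and B2.
root = Node("LCA", 0.0, [
    Node("A", 10.0),                                            # the "first branch"
    Node("B-ancestor", 4.0, [Node("B1", 6.0), Node("B2", 6.0)]),
])

print(time_to_tips(root))  # {'A': 10.0, 'B1': 10.0, 'B2': 10.0} -- all tips equally "old"
```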

The argument is invalid since it incorrectly assumes that the position of the branch on a phylogeny determines the evolution or the “evolutionary advancement” of a species. Here’s how I formulate this argument:

(P1) If you’re the first branch on the evolutionary tree and you don’t do any more branching, then you’re less evolved than higher branches.
(P2) (Assumption) Evolutionary advancement is solely determined by the position on the tree and the number of branches.
(C) So species represented by higher branches on the evolutionary tree are more evolved than species represented by lower branches.

P2 is false, since as I explained above, each branch represents a new lineage and every species on the tree is equally evolved. PP’s assumption seems to be that newer branches have different traits than the species that preceded them, implying that an advancement is occurring. Nevertheless, I can use a reductio to refute the argument.

Let’s consider a hypothetical scenario in which this statement is true: “If you’re the first branch and you don’t do any more branching, then you’re less evolved than higher branches.” This suggests that the position of a species on a phylogeny determines its level of evolution. So according to this concept, if a species occupies a higher branch, it should be “more evolved” than a species on a lower branch. So following this line of reasoning, a species that has undergone extensive branching and diversification should be classified as “more evolved” compared to a species that has fewer branching points.

Now imagine that in this hypothetical scenario, we have species A and species B in a phylogeny. Suppose that species A is the first branch and that it hasn’t undergone any branching. Conversely, species B, which is represented on a higher branch, has experienced extensive branching and diversification, which adheres to the criteria for a species to be considered “more evolved.” But there are logical implications for the concept concerning the positions of species A and species B on the phylogeny.

So according to the concept of linear progression which is implied in the original statement, if species B is “more evolved” than species A due to its higher branch position, it logically follows that species B should continue to further evolve and diversify. This progression should lead to new branching points, as each subsequent stage would be considered “more evolved” than the last. Thus, applying the line of reasoning in the original statement, there should always be a species represented on an even higher branch than species B, and this should continue ad infinitum, with no endpoint.

The logical consequence of the statement is an infinite progression of increasingly evolved species, each species being represented by a higher branch than the one before, without any final or ultimate endpoint at a “most evolved” species. This result is an absurdity, since it contradicts our understanding of evolution as an ongoing and continuous process. The idea of a linear and hierarchical progression of species in an evolutionary tree culminating in a “most evolved” species isn’t supported by our scientific understanding, and it leads to an absurd outcome.

Thus, the logical implications of the statement “If you’re the first branch and you don’t do any more branching, then you’re less evolved than higher branches” lead to an absurd and contradictory result, and so the statement must be false. The idea that the position of a species on an evolutionary tree indicates its level of evolution isn’t supported by scientific evidence and understanding. Phylogenies represent historical relationships and divergence events over time.

(1) Assume the original claim is true: If you’re the first branch and you don’t do any more branching, then you’re less evolved than higher branches.

(2) Suppose species A is the first branch and undergoes no further branching.

(3) Now take species B, which is on a higher branch and has undergone extensive diversification and branching, making it “more evolved”, according to the statement in (1).

(4) But based on the concept of linear progression implied in (1), species B should continue to evolve and diversify even further, leading to new branches and increased evolution.

(5) Following the logic in (1), there should always be a species represented on an even higher branch than species B, which is even more evolved.

(6) This process should continue ad infinitum, with species continually branching and becoming “more evolved” without an endpoint.

(7) This leads to an absurd result, since it implies that no species could ever actually count as “more evolved” or reach a final stage of evolution, even though the claim in (1) presupposes such a ranking; it also contradicts our understanding of evolution as a continuous, ongoing process with no ultimate endpoint.

(8) So since the assumption in (1) leads to an absurd result, it must be false.

So the original statement is false, and a species’ position on a phylogeny doesn’t determine its level of evolution or the superiority of a species. The concept of a linear and hierarchical progression of advancement in a phylogeny is not supported by scientific evidence, and assuming the statement in (1) is true leads to a logically absurd outcome. Each species evolves in its unique ecological context, without reaching a final state of evolution or a place on a hierarchical scale of superiority. This reductio ad absurdum argument therefore reveals the fallacy in the original statement.

Also, think about the claim that there are species that are “more evolved” than other species. This implies that there are “less evolved” species. Thus, a logical consequence of the claim is that there could be a “most evolved” species.

So if a species is “most evolved”, it would mean that that species has surpassed all others in evolutionary advancement and that there are no other species more advanced than it. Following this line of reasoning, there should be no further branching or diversification of this species, since it has already achieved the highest level of evolution. But evolution is an ongoing process. Organisms continuously adapt to and change their surroundings (the organism-environment system), and change in response to them. If the “most evolved” species were static, this would contradict what we know about evolution, namely that it is continuous, ongoing, dynamic change. Further, as the environment changes, the “most evolved” species could become less suited to the environment’s conditions over time, leading to a decline in its numbers or even its extinction. This would then imply that there would have been other species that are “more evolved.” (It merely shows the response of the organism to its environment and how it develops differently.) Finally, the idea of a “most evolved” species implies an endpoint of evolution, which contradicts our knowledge of evolution and the diversification of life on earth. Therefore, the assumption that there is a “most evolved” species leads to a logical contradiction and an absurdity based on what we know about evolution and life on earth.

The statement embodies scala naturae thinking, also known as the great chain of being. This is something Rushton (2004) sought to bring back to evolutionary biology. However, the assumptions that need to hold for this to be true—that is, the assumptions that need to hold for this kind of tree reading to even be within the realm of possibility—are false. This is wonderfully noted by Gregory (2008), who states that “The order of terminal nodes is meaningless.” Crisp and Cook (2004) also note how such tree-reading is intuitive, and this intuition is of course mistaken:

Intuitive interpretation of ancestry from trees is likely to lead to errors, especially the common fallacy that a species-poor lineage is more ‘ancestral’ or ‘diverges earlier’ than does its species-rich sister group. Errors occur when trees are read in a one-sided way, which is more commonly done when trees branch asymmetrically.

There are several logical implications of that statement. I’ve already covered the claim that there is a kind of progression and advancement in evolution—a linear and hierarchical ranking—and the fixed endpoint (“most evolved”). Further, in my view, this leads to value judgments, that some species are “better” than or “superior” to others. It also ignores the fact that branching signifies not which species has undergone more evolution, but the evolutionary relationships between species. Finally, evolution occurs independently in each lineage, based on its specific history and interactions between developmental resources, so it isn’t valid to call one species “more evolved” than another based on the relationships shown on evolutionary trees; any such comparison between species is arbitrary.

Finally, I can refute this using Gould’s full house argument.

P1: If evolution is a ladder of progress, with “more evolved” species on higher rungs, then the fossil record should demonstrate a steady increase in complexity over time.
P2: The fossil record does not show a steady increase in complexity over time.
C: Therefore, evolution is not a ladder of progress and species cannot be ranked as “more evolved” based on complexity.


P1: If the concept of “more evolved” is valid, then there would be a linear and hierarchical progression in the advancement of evolution, with certain species considered superior to others based on their perceived level of evolutionary change.
P2: If there is a linear and hierarchical progression of advancement in evolution, then the fossil record should demonstrate a steady increase in complexity over time, with species progressively becoming more complex and “better” in a hierarchical sense.
P3: The fossil record does not show a steady increase in complexity over time; it instead shows a diverse and branching pattern of evolution.
C1: So the concept of “more evolved” isn’t valid, since there is an absence of a steady increase in complexity in the fossil record and this refutes the notion of a linear and hierarchical progression of advancement in evolution.
P4: If the concept of “more evolved” is not valid, then there is no objective hierarchy of superiority among species based on their positions on an evolutionary tree.
C2: Thus, there is no objective hierarchy of superiority among species based on their positions on an evolutionary tree.

There is one final fallacy contained in that statement: it affirms the consequent. That fallacy takes the form: if P then Q; Q is true; therefore P is true—which is invalid (the valid form, modus ponens, infers Q from P, not P from Q). Even if the concept of “more evolved” were valid, the mere fact that a species doesn’t do any more branching wouldn’t license the conclusion that it’s less evolved. The conditional being appealed to is: if you’re the first branch and you don’t do any more branching (P and Q), then you’re less evolved than higher branches (R). The inference actually drawn is: this lineage didn’t do any more branching (Q), so it must be less evolved than the higher branches (R). But the conditional doesn’t license that inference—Q alone isn’t a sufficient condition for R—so the conclusion doesn’t follow, any more than the conclusion of an argument that affirms the consequent follows. There could be numerous reasons why a lineage didn’t branch further, and a lack of branching doesn’t by itself determine evolutionary status. Since the argument rests on this invalid inference, it fails.
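To make the logical forms explicit, here is a minimal formalization (the letters and notation are my own, chosen for illustration; none of it is quoted from PP):

```latex
\begin{align*}
\text{Modus ponens (valid):} \quad & P \rightarrow Q,\; P \;\vdash\; Q \\
\text{Affirming the consequent (invalid):} \quad & P \rightarrow Q,\; Q \;\nvdash\; P \\[0.5em]
\text{The conditional in PP's claim:} \quad & (F \wedge N) \rightarrow L, \\
& \text{where } F = \text{``first branch''},\ N = \text{``no further branching''},\ L = \text{``less evolved''} \\
\text{The inference actually drawn:} \quad & N \;\therefore\; L \quad \text{(invalid: } N \text{ alone does not entail } L\text{)}
\end{align*}
```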

Conclusion

Reading phylogenies in such a manner—in a way that would lead one to infer that evolution is progressive and that there are “more evolved” species—is intuitive but false. Misconceptions like this, along with many others about reading evolutionary trees, are so persistent that much thought has been put into educating the public on right and wrong ways to read them.

As I showed in my reductio ad absurdum arguments, where I accepted the claim as true, it leads to logical inconsistencies and goes against everything we know about evolution. Evolution is not progressive; it’s merely local change. That a species changes over time from another species doesn’t imply anything about “how evolved” it is (“more” or “less”) in comparison to the other. Excising this thinking is tough, but it is doable by understanding how evolutionary trees are constructed and how to read them correctly. The claim further affirms the consequent, leading to an unsupported conclusion.

All living species have spent the same amount of time evolving. Branching merely signifies a divergence, not a linear scale of advancement. Of course, one might think that if one species evolves into another, and this relationship is shown on a tree, then the newer species must be “better” in some way in comparison to the species it derived from. But it merely means that the species faced different challenges which influenced its evolution; each species adapted and survived in its own unique evolutionary ecology, leading to diversification and the formation of newer branches on the tree. Evolution does not follow a linear path of progress, and there is no inherent hierarchy of superiority among species based on their position on the evolutionary tree. While the tree visually represents relationships between species, it doesn’t imply judgments like “better” or “worse”, “more evolved” or “less evolved.” It merely highlights the complexity and diversity of all life on earth.

Evolution is quite obviously not progressive, and even if it were, we wouldn’t be able to read evolutionary progression off of evolutionary trees. The evolutionary relationships between species can be drawn in ladderized form or not, with many kinds of branching arrangements that may not be intuitive to those who read evolutionary trees as showing “more evolved” species; whatever their arrangement, the trees nevertheless show valid evolutionary relationships, not a ranking.

Dissecting Genetic Reductionism in Lead Litigation: Big Lead’s Genetic Smokescreen

2300 words

Lead industries have a history of downplaying or shifting the blame to avoid accountability for the deleterious effects of lead on public health, especially in vulnerable populations like children. As of the year 2002, about 35 percent of all low-income housing had lead hazards (Jacobs et al, 2002). A more recent analysis stated that 38 million homes in the US (about 40 percent of homes) contained at least trace levels of lead, which was added to paint before the use of lead in residential paint was banned in 1978. The American Healthy Homes Survey showed that 37.5 million homes had at least some level of lead in the paint (Dewalt et al, 2015). Since lead paint is more likely to be found in low-income households and public housing (Rabito, Shorter, and White, 2003), and minorities are more likely to be low-income, it follows that minorities are more likely to be exposed to lead paint in the home—and this is what we find (Carson, 2018; Eisenberg et al, 2020; Baek et al, 2021; McFarland, Hauer, and Reuben, 2022). The fact of the matter is, there is a whole host of negative effects of lead on the developing child, and there is no “safe level” of lead exposure, a point I made back in 2018:

There is a large body of studies which show that there is no safe level of lead exposure (Needleman and Landrigan, 2004; Canfield, Jusko, and Kordas, 2005; Barret, 2008; Rossi, 2008; Abelsohn and Sanborn, 2010; Betts, 2012; Flora, Gupta, and Tiwari, 2012; Gidlow, 2015; Lanphear, 2015; Wani, Ara, and Usmani, 2015; Council on Environmental Health, 2016; Hanna-Attisha et al, 2016; Vorvolakos, Aresniou, and Samakouri, 2016; Lanphear, 2017). So the data is clear that there is absolutely no safe level of lead exposure, and even small effects can lead to deleterious outcomes.

This story reminds me of a similar one, which I will discuss at the end: that of Waneta Hoyt and SIDS. I will compare the two and argue that the underlying issue is the same: privileging genetic factors over other, more obvious environmental factors. After discussing how Big Lead attempted to downplay and shift the blame for what lead was doing to these children, I will liken it to the Waneta Hoyt case.

Big Lead’s downplaying of the deleterious effects of lead on developing children

We have known that lead pipes were a cause of lead poisoning since the late 1800s, and lead companies attempted to counter this by publishing studies and reports claiming that lead was better than other materials that could be used for the same purposes (Rabin, 2008). The Lead Industries Association (LIA) even blocked bans on lead paint and pipes, even after becoming aware of the problems they caused. So why, even after it was known that lead pipes were a primary cause of lead poisoning, were lead pipes and lead paint still used to distribute water and to paint homes? The answer is simple: corporate lobbying and outright lying about and downplaying of the deleterious effects of lead. Due to our knowledge of the effects of lead in pipes, and consequently in drinking water, lead pipes began to be phased out around the 1920s. One way the industry attempted to downplay the obviously causal association between lead pipes and deleterious effects was to question it and say it still needed to be tested, as one LIA member noted (quoted in Rabin, 2008):

Of late the lead industries have been receiving much undesirable publicity regarding lead poisoning. I feel the association would be wise to devote time and money on an impartial investigation which would show once and for all whether or not lead is detrimental to health under certain conditions of use.

Lead industries even promoted the use of lead in paint after it was known that ingesting paint chips harms children (Rabin, 1989; Markowitz and Rosner, 2000). So we now have two examples of how Big Lead arranged to downplay the obviously causal, deleterious effects of lead on the developing child. But there are some more sinister events hiding in these shadows—namely, actually placing low-income (mostly black) families into homes with lead paint in order to study their outcomes and blood-lead levels, as Harriet Washington (2019: 56-57) wrote in her A Terrible Thing to Waste: Environmental Racism and its Assault on the American Mind:

But Baltimore slumlords find removing this lead too expensive and some simply abandon the toxic houses. Cost concerns drove the agenda of the KKI researchers, who did not help parents completely remove children from sources of lead exposure. Instead, they allowed unwitting children to be exposed to lead in tainted homes, thus using the bodies of the children to evaluate cheaper, partial lead-abatement techniques of unknown efficacy in the old houses with peeling paint. Although they knew that only full abatement would protect these children, scientists decided to explore cheaper ways of reducing the lead threat.

So the KKI encouraged landlords of about 125 lead-tainted housing units to rent to families with young children. It offered to facilitate the landlords’ financing for partial lead abatement—only if the landlords rented to families with young children. Available records show that the exposed children were all black.

KKI researchers monitored changes in the children’s health and blood-lead levels, noting the brain and developmental damage that resulted from different kinds of lead-abatement programs.

These changes in the children’ bodies told the researchers how efficiently the different, economically stratified abatement levels worked. The results were compared to houses that either had been completely lead-abated or that were new and presumed not to harbor lead.

Scientists offered parents of children in these lead-laden homes incentives such as fifteen-dollar payments to cooperate with the study, but did not warn parents that the research potentially placed their children at risk of lead exposure.

Instead, literature given to the parents promised that researchers would inform them of any hazards. But they did not. And parents were not warned that their children were in danger, even after testing showed rising lead content in their blood.

Quite obviously, the KKI (Kennedy Krieger Institute) and the landlords were part of an unethical study with no informed consent. The study was undertaken to test the effectiveness of three abatement measures of differing cost (Rosner and Markowitz, 2012), but it was clearly unethical (Sprigg, 2004).

The Maryland Court of Appeals (2001) called this “innately inappropriate.” This is also obviously a case in which lower-income (majority black) people, who were already more likely to be exposed to higher levels of lead, were then put into homes that were only “partially abated” of lead and compared to homes that had no lead. The researchers knew that only full lead abatement would have been protective, but they still chose to place families into homes with “partial abatement”; they knowingly chose the cheaper option at the cost of the health of children. They also didn’t give the parents the full context of what they were trying to accomplish, thereby putting unwitting people into their clearly unethical study.

In 2002, Tamiko Jones and others brought a suit against the owner of the apartment building and National Lead Industries, claiming that lead paint in the home was the cause of their children’s maladies and negative outcomes (Tamiko Jones, et al., v. NL Industries, et al. (Civil Action No. 4:03CV229)). Unfortunately, after a 3-week trial, the plaintiffs lost the case and their subsequent appeals were denied. But some of the testimony from the witnesses the defense brought before the court caught my attention, since it’s similar to the story of Waneta Hoyt.

NL Industries attempted what I am calling “the gene defense.” The gene defense they used was that the children’s problems weren’t caused by lead in the paint, but by genetic and familial factors which then led to environmental deprivation. One of the mothers in the case, Sherry Wragg, was quoted as saying “My children didn’t have problems until we moved in here.” So the children who moved into this apartment building with their parents began to have behavioral and cognitive problems after they moved in, and the parents stated that this was due to the paint that had lead in it.

So the plaintiffs were arguing that the behavioral and cognitive deficits the children had were due to the leaded paint. Although the defense did acknowledge that the plaintiffs suffered from “economic deprivation”, which was a contributor to their maladies, they argued that a familial history of retardation along with environmental and economic deprivation—familial factors and genes which then explained the environmental deprivation—accounted for the cognitive and behavioral deficits. (Though the Court did recognize that the defense witnesses did not have expertise in toxicology.)

Plaintiffs first seek to strike two experts who provide arguably duplicative expert testimony that plaintiffs’ neurological deficits were most likely caused by genetic, familial and environmental factors, rather than lead exposure. For example, Dr. Barbara Quinten, director of the medical genetics department at Howard University, testified to her view that various plaintiffs had familial histories of low intelligence and/or mental retardation which explained  their symptoms. Dr. Colleen Parker, professor of pediatrics and neurology at the University of Mississippi Medical Center, similarly testified that such factors as “familial history of retardation, poor environmental stimulation, and economic deprivation,” rather than elevated blood lead levels, explained the plaintiffs’ deficits.

So it seems that the defense was using the “genes argument” for behavior and cognition to try to make it ambiguous as to what caused the issues the children were having. This is, yet again, another way in which IQ tests have been weaponized: “IQ has a hereditary, genetic component, and this family has a familial history of these issues, so it can’t be shown that our lead paint was the cause of the issues.” The use of the genetic component of IQ has clearly screwed people out of being awarded what they should have rightfully gotten. This is, of course, an example of environmental racism.

Parallels with the Waneta Hoyt case

The story of Big Lead and its denial of the deleterious effects of lead paint reminds me of another, similar issue: the case of Waneta Hoyt and SIDS. The parallel works like this: Waneta Hoyt was killing her children by suffocating them, and a SIDS researcher—Alfred Steinschneider—claimed that the cause was genetic, ignoring all signs that Waneta was the cause of her children’s deaths. Here, genes are represented by Steinschneider and environment by Waneta. In the case of the current discussion, genes are represented by Big Lead and their attempts to pinpoint genetic causes for what lead did, and environment by the actual environmental effects of lead on the developing child.

There is a pattern in these two cases: looking to genetic causes and bypassing the actual environmental cause. Genetic factors are represented by Steinschneider and Big Lead, while they ignore or downplay the actual environmental causes (represented by Waneta Hoyt and the actual effects of lead on developing children). Selective focus like this quite clearly did lead to ignoring or overlooking crucial information. In the Hoyt case, it led to the deaths of several infants which could have been prevented (if Steinschneider hadn’t had such tunnel vision for his genetic causation of SIDS). In the Big Lead case, NL Industries and its witnesses pointed to genetic factors or individual behaviors as the causes of the children’s negative behaviors and cognitive deficits. In both cases, confirmation bias was thus a main culprit.

Conclusion

The search for genetic causes and the assumption that certain things are genetically caused have done great harm. Big Lead’s attempted downplaying of the deleterious effects of lead paint while shifting blame to genetic factors reminds us that genetic reductionism and determinism are still here, and that corporate entities will use genetic arguments to ensure their interests are secured. Just as in the Waneta Hoyt case, where a misdirection towards genetic factors cloaked the true cause of the harm (which was environmental), the focus on genetics by Big Lead shifted attention away from the true cause and put it on causes coming from inside the body.

The lobbying efforts of Big Lead caused damage to countless children and their families. By hiding behind genetic arguments in order to deflect attention from the harmful effects of their leaded paint on children, they reached for the genes argument pushed by hereditarians as an explanation of the IQ gap. This, as well, is yet more evidence that IQ tests (along with association studies used to identify causal genes for IQ) should be banned, since they have clearly caused harm to people—in this case, by denying families what they should have gotten by winning a court case that they should have won. Big Lead successfully evaded accountability here, and they did so with genetic reductionism.

Quite obviously, the KKI knowingly placed black families into homes that it knew had lead paint for experimentation purposes, and this was highly unethical. This shows environmental injustice and environmental racism, where vulnerable populations are used for nothing more than an experiment without their consent or knowledge. The parallels are obvious in how Big Lead attempted to divert blame from the environmental effects of lead and implicate genetic factors and familial histories of retardation. This strategy is mirrored in the Waneta Hoyt case. Although Steinschneider didn’t have the same corporate interests as Big Lead, he did have a bias in attempting to pinpoint a genetic cause for SIDS, which left him unable to see that the cause of the deaths was the mother of the children, Waneta Hoyt.