Yearly Archives: 2023
Directed Mutations, Epigenetics and Evolution
2400 words
A mutation can be said to be directed if it arises due to the needs of the developing organism, and such mutations occur at higher frequencies when they are beneficial (Foster, 2000; Saier et al, 2017). Under some sort of stress, an adaptive mutation would then occur. The existence of this kind of mechanism has been debated in the literature, and it spells trouble for neo-Darwinian theory, whose proponents claim that mutations are random and then “selected-for” in virtue of their contributions to fitness. Indeed, the concept challenges a core tenet of neo-Darwinism (Sarkar, 1991). I will argue that directed mutation/non-random mutation/stress-directed adaptation (DM, directed mutation for short) undermines the neo-Darwinian paradigm.
The issue at hand
The possibility of DMs was argued for by Cairns, Overbaugh, and Miller (1988), who argued that environmental pressure can cause adaptive changes to genes that would be beneficial to the organism. This spurred a long debate about whether such mutations were possible (see Sarkar, 1991; Fox Keller, 1992; Brisson, 2003; Jablonka and Lamb, 2014). Although Cairns, Overbaugh, and Miller were wrong—that is, they were not dealing with mutations that were due to the environmental disturbances they posited (Jablonka and Lamb, 2014: 84)—their paper did raise the possibility that some mutations could be a direct consequence of environmental disturbances, which would then be amplified by the homeodynamic physiology of the organism.
Saier et al (2017) state the specific issue with DM and its existence:
Recently, strong support for directed mutation has emerged, not for point mutations as independently proposed by Cairns, Hall and their collaborators, but for transposon-mediated mutations (12, 13). If accepted by the scientific community, this concept could advance (or revise) our perception of evolution, allowing increased rates of mutational change in times of need. But this concept goes against the current dogma that states that mutations occur randomly, and only the beneficial ones are selected for (14, 15). The concept of directed mutation, if established, would require the reversal of a long accepted precept.
This is similar to the concept of phenotypic plasticity: the phenomenon of a given genotype expressing different phenotypes due to environmental factors. The concept is basically a physiological one; when talking about how plastic a phenotype is, its relation to the physiology of the organism is paramount. We know that physiological changes are homeodynamic. That is, changes in physiology are constantly happening due to the effects of the environment the organism finds itself in. For example, acute changes in heart rate occur due to what happens in the environment, such as a predator chasing its prey: the heart rates of both predator and prey increase as blood flow increases due to stress hormones. I will discuss phenotypic plasticity on its own in the future, but for now I will just note that genetic and environmental factors influence the plasticity of phenotypes (Ledon-Rettig and Ragsdale, 2021) and that phenotypic plasticity and development play a role in evolution (West-Eberhard, 2003, 2005; Wund, 2015).
The fact of the matter is, phenotypic plasticity is directly related to the concept of directed mutation, since DM is a largely physiological concept. I will argue that this refutes a central Darwinian premise. Namely: if directed mutations are possible, then mutations are not random. And if they are not random, then, due to what occurs during the development of an organism, a directed mutation could be adaptive. This, then, is the answer to how phenotypic traits become fixed in the genome without the need for natural selection.
Directed mutations
Sueoka (1988) showed that basically all organisms are subject to directed mutations. It has been noted by mathematicians that, on a purely random mutational model, there would not be enough time to explain all of the phenotypic diversity we see today (Wright, 2000). Doubt is thus placed on three principles of neo-Darwinism: that mutations occur independently of the environment the organism is in (this is empirically false); that mutations are due to replication errors (this is true, but not always the case); and that mutation rates are constant (Brisson, 2003).
One of the main claims of the neo-Darwinian paradigm is that mutations occur at random, and the mutation is then selected-for or against based on its relationship to fitness. Fodor’s argument has refuted the concept of natural selection, since “selection-for” is an intensional context and so can’t distinguish between correlated traits. However, we now know that since physiology is sensitive to the environment, and since adaptive changes to physiology would occur not only in an organism but during its development, it follows that directed mutations would occur, and so mutations wouldn’t be random as neo-Darwinian dogma claims.
In her review Stress-directed adaptive mutations and evolution, Wright (2004) concludes:
In nature, where cell division must often be negligible as a result of multiple adverse conditions, beneficial mutations for evolution can arise in specific response to stressors that target related genes for derepression. Specific transcription of these genes then results in localized DNA secondary structures containing unpaired bases vulnerable to mutation. Many environmental stressors can also affect supercoiling and [stress-directed mutation] directly.
But what are the mechanisms of DMs? “Mechanism” in this sense would “refer to the circumstances affecting mutation rates” (Wright, 2000). She also defines what “random” means in neo-Darwinian parlance: “a mutation is random if it is unrelated to the metabolic function of the gene and if it occurs at a rate that is undirected by specific selective conditions of the environment.” Thus, the existence of DMs would refute this tenet of neo-Darwinism. Two of the mechanisms of such DMs are transcriptional activation and supercoiling. Transcriptional activation (TA) can cause changes to single-stranded DNA (ssDNA) and also to supercoiling (the addition of more coils onto DNA). TA can be caused either by derepression (the release of a gene from repression when a repressor molecule is absent or inactivated) or by induction (the activation of an inactive gene, which then becomes transcribed). Thus, knowing this, “genetic derepression may be the only mechanism by which particular environmental conditions of stress target specific regions of the genome for higher mutation rates (hypermutation)” (Wright, 2000). Such adaptations rely on a quick response, and this is due to the plastic phenotypes of the organism, which then allow such DMs to occur. It then follows that stress-induced changes would allow organisms to survive in new environments, without a need for neo-Darwinian “mechanisms”—mainly natural selection. Thus, the biochemical mechanism for such mutations is transcriptional activation. Such stress-directed mutation could be seen as “quasi-Lamarckian” (Koonin and Wolf, 2009).
In nature, nutritional stress and associated genetic derepression must be rampant. If mutation rates can be altered by the many variables controlling specific, stress-induced transcription, one might reasonably argue that many mutations are to some extent directed as a result of the unique metabolism of every organism responding to the challenges of its environment. (Wright, 2000)
This is noted wonderfully by Jablonka and Lamb (2014: 92) in Evolution in Four Dimensions:
No longer can we think about mutation solely in terms of random failures in DNA maintenance and repair. We now know that stress conditions can affect the operation of the enzyme systems that are responsible for maintaining and repairing DNA, and parts of these systems sometimes seem to be coupled with regulatory elements that control how, how much, and where DNA is altered.
Jablonka and Lamb present solid evidence that mutations are semi-directed. Such mutations, as we have seen, can be induced by the environment in response to stress, which is due to our plastic, homeodynamic physiology. They discuss “four dimensions” of evolution: genetic (DNA), epigenetic, behavioral, and cultural. Their works (including their Epigenetic Inheritance and Evolution: The Lamarckian Dimension; see Jablonka and Lamb, 2015) provide solid evidence and arguments against the neo-Darwinian view of evolution. The fact of the matter is, there are multiple inheritance systems over and above DNA, which then contribute to nonrandom, directed mutations. Lamarckism wasn’t wrong, and Jablonka and Lamb have strongly argued for that conclusion. Epigenetics clearly influences evolution, and this vindicates Lamarckism. Epigenetic variation can be inherited too (Jablonka and Lamb, 1989). Since phenotypic plasticity is relevant to how organisms adapt to their environment, epigenetic mechanisms contribute to evolution (Ashe, Colot, and Oldroyd, 2021). Changes that arise due to epigenetic mechanisms can indeed influence mutation (Meyer, 2015), and I would say, more directly, that certain epigenetic mechanisms play a part in how an adaptive, directed mutation would arise during the development of an organism. Stochastic epigenetic variation can indeed become adaptive (Feinberg and Irizarry, 2010).
Non-random mutations have been known to be pretty ubiquitous (Tsunoyama, Bellgard, and Gojobori, 2001). This has even been shown in the plant Arabidopsis (Monroe et al, 2022), which shows that, basically, mutations are not random (Domingues, 2023). A similar concept to DMs is blind stochasticity. Noble and Noble (2017, 2018; cf Noble, 2017) have shown that organisms harness stochastic processes in order to adapt to the environment—to harness function. A stochastic process is one whose future states cannot be fully predicted even given complete knowledge of the current state of the system.
Even all the way back in 1979, such changes were beginning to be noticed by evolutionists, such as Ho and Saunders (1979) who write that variations in the phenotype
are produced by interactions between the organism and the environment during development. We propose, therefore, that the intrinsic dynamical structure of the epigenetic system itself, in its interaction with the environment, is the source of non-random variations which direct evolutionary change, and that a proper study of evolution consists in the working out of the dynamics of the epigenetic system and its response to environmental stimuli as well as the mechanisms whereby novel developmental responses are canalized.
The organism participates in its own evolution (as considerations from niche construction show), and “evolutionary novelties” can and do arise nonrandomly (Ho, 2010). This is completely at odds with the neo-Darwinian paradigm. Indeed, the creators of the Modern Synthesis ignored developmental and epigenetic issues when formulating their theory. Fortunately, in the new millennium, we have come to understand and appreciate how development and evolution occur and how dynamic the physiological system itself truly is.
There have been critical takes on the concept of DM (Lenski and Mittler, 2003; Charlesworth, Barton, and Charlesworth, 2017; see Noble and Shapiro, 2021 for critique); Futuyma (2017), for example, claims that DM is “groundless.” However, James Shapiro’s (1992, 2013, 2014) concept of natural genetic engineering holds that cells can restructure their own genomes; this “means viewing genetic change as a coordinated cell biological process, the reorganization of discrete genomic modules, resulting in the formation of new DNA structures” (Shapiro, 1993). DNA is harnessed by and for the physiological system to carry out certain tasks. Since development is self-organizing and dynamic (Smith and Thelen, 2003; Saetzler, Sonnenschein, and Soto, 2012), since development is spurred on by physiological processes, and since physiology is sensitive to the goings-on of the environment that the developing organism finds itself in, it follows that mutations can and would arise due to need, which would refute the neo-Darwinian claim that mutations arise due to chance and not need.
Conclusion
It is clear that mutations can be (1) adaptive and (2) environmentally induced. Such adaptive mutations, clearly, arise due to need and not chance. If they arise due to need and not chance, then they are directed and adaptive. They are directed by the plastic physiology of the organism, which constructs the phenotype in a dialectical manner, using genes as its passive products, not active causes. This is because biological causation is multi-leveled, not one-way (Noble, 2012). There is also the fact that “genetic change is far from random and often not gradual” (Noble, 2013).
As can be seen in this discussion, adaptive, directed mutations are a fact of life, and so one more domino of neo-Darwinism has fallen. Berkeley claims that “The genetic variation that occurs in a population because of mutation is random” and that “mutations are random,” but as we’ve seen here, this is not the case. Through the biological process of physiology and its relationship to the ebbs and flows of the environment, the organism’s phenotype, as it is constructed by the self-organizing system, can respond to changes in the cellular and overall environment and thus direct changes in the phenotype and genes which would then enhance survival in the face of the environmental insult.
Lamarckism has been vindicated over the past 25 or so years, and it’s due to a better understanding of epigenetic processes in evolution and in the developing organism. Since what Lamarck is known for is the claim that the environment can affect the phenotype in a heritable manner, and since we now know that DNA is not the only thing inherited but epigenetically-modified DNA sequences are too, it follows that Lamarck was right. What we need to understand development and evolution is the Extended Evolutionary Synthesis, which does make novel predictions and predictions that the neo-Darwinian paradigm doesn’t (Laland et al, 2015).
Such directed changes in the genome which are caused by the physiological system due to the plastic nature of organismal construction refute a main premise of the neo-Darwinian paradigm. This is the alternative to neo-Darwinian natural selection, as Fodor noted in his attack on neo-Darwinism:
The alternative possibility to Darwin’s is that the direction of phenotypic change is very largely determined by endogenous variables. The current literature suggests that alterations in the timing of genetically controlled developmental processes is often the endogenous variable of choice; hence the ‘devo’ in ‘evo-devo’.
Darwin got quite a bit wrong, through no fault of his own. But those who claim that Darwin discovered mechanisms or articulated the random process of mutations quite obviously need to update their thoughts in the new millennium on the basis of new information from systems biologists and epigeneticists. The process of the construction of organisms is dynamic and self-organizing, and this is how phenotypic traits become fixed in populations of organisms. Plasticity is in fact a major driver of evolution, along with genetic assimilation, which results in the canalization of the plastic trait and thereby eliminates the plastic response to the environment (Sommer, 2020). Phenotypic plasticity can give rise to adaptive traits, but natural selection can’t be the mechanism of evolution, due to Fodor’s considerations. Development can lead to evolution, not only evolution leading to development (West-Eberhard, 2003). In fact, development in many cases precedes evolution.
Fodor’s Argument and Mechanisms
2500 words
It’s been almost 5 years since I read What Darwin Got Wrong (WDGW) (Fodor and Piattelli-Palmarini, 2010; F&PP), which changed my view on the theory of natural selection (ToNS). In the book, they argue that natural selection cannot possibly be a mechanism because it cannot distinguish between correlated traits: there is no mind (agent) doing the selecting, nor are there laws of selection for trait fixation across all ecologies. Fodor had originally published Why Pigs Don’t Have Wings in the London Review of Books in 2007, and then he published Against Darwinism (Fodor, 2008), where he mounted and formulated his argument against the ToNS.
A precursor to the argument against the ToNS
Although Fodor had begun articulating his argument in the late 2000s, he already had a precursor to it in a 1990 paper A Theory of Content (Fodor, 1990: 72-73):
The Moral, to repeat, is that (within certain broad limits, presently to be defined) Darwin doesn’t care how you describe the intentional objects of frog snaps. All that matters for selection is how many flies the frog manages to ingest in consequence of its snapping, and this number comes out exactly the same whether one describes the function of the snap-guidance mechanisms with respect to a world that is populated by flies that are, de facto, ambient black dots, or with respect to a world that is populated by ambient black dots that are, de facto, flies. “Erst kommt das Fressen, denn kommt die Morale.” Darwin cares how many flies you eat, but not what description you eat them under. (Similarly, by the way, flies may be assumed to be indifferent to the descriptions under which frogs eat them.) So it’s no use looking to Darwin to get you out of the disjunction problem.
In Against Darwinism and WDGW, F&PP reformulate and add to this argument, stating that it is a selection-for problem:
In a nutshell: if the assumption of local coextensivity holds (as, of course, it perfectly well might), then fixing the cause of the frog’s snaps doesn’t fix the content of its intention in snapping: either an intention to snap at a fly or an intention to snap at an ABN would be compatible with a causal account of what the frog has in mind when it snaps. So causal accounts of content encounter a selection-for problem: If something is a fly if and only if it is an ABN, the frog’s behaviour is correctly described either as caused by flies or as caused by ABNs. So, it seems, a causal theory of content cannot distinguish snaps that manifest intentions to catch the one from snaps that manifest intentions to catch the other. (Fodor and Piattelli-Palmarini, 2010: 108)
The argument
Fodor formulated the argument twice: in Against Darwinism (Fodor, 2008: 11-12) and again in WDGW (Fodor and Piattelli-Palmarini, 2010: 114).
Contrary to Darwinism, the theory of natural selection can’t explain the distribution of phenotypic traits in biological populations.
(i) To do so would require a notion of ‘selection for’ a trait. ‘Selects for…’ (unlike ‘selects…’) is opaque to substitution of co-referring expressions at the ‘…’ position.
(ii) If T1 and T2 are coextensive traits, the distinction between selection for T1 and selection for T2 depends on counterfactuals about which of them would be selected in a possible world where the coextension does not hold. The truth makers for such counterfactuals must be either (a) the intentions of the agent that effects the selection, or (b) laws that determine the relative fitness of having the traits.
(iii) But:
Not (a) because there is no agent of natural selection.
Not (b) because considerations of contextual sensitivity make it unlikely that there are laws of relative fitness (‘laws of selection’).
QED (Fodor, 2008)
- Selection-for is a causal process.
- Actual causal relations aren’t sensitive to counterfactual states of affairs: if it wasn’t the case that A, then the fact that it’s being A would have caused its being B doesn’t explain its being the case that B.
- But the distinction between traits that are selected-for and their free-riders turns on the truth (or falsity) of relevant counterfactuals.
- So if T and T’ are coextensive, selection cannot distinguish the case in which T free-rides on T’ from the case that T’ free-rides on T.
- So the claim that selection is the mechanism of evolution cannot be true. (Fodor and Piattelli-Palmarini, 2010: 114)
Selection for
Ernst Mayr wrote in One Long Argument that “Selection-for specifies the particular phenotypic attribute and corresponding component of the genotype (DNA) that is responsible for the success of the selected individual.” Selection-for a trait is needed for adaptationism to be true. But what hallmarks of adaptation are there that a free-riding trait, which is not an adaptation, would not also have? Selection-for claims need to appeal to counterfactuals, and to ground counterfactuals, they need laws of relative fitness. Selection-for problems arise when so-called explanations require distinguishing the causal role of coextensive properties (traits) (Fodor and Piattelli-Palmarini, 2010: 111).
But for there to be counterfactual-supporting laws able to distinguish the correlation and select the fit trait over the free-riding trait, there need to be laws that apply across all ecologies and phenotypes. And whether T1 or T2 is conducive to fitness is massively context-sensitive. So there can’t be laws of relative fitness. T1 may be helpful in one ecology and not another. T1 may also be helpful for survival for one organism in a specific ecology but not for another organism. Therefore, there is no law that explains why T1 would win a trait competition over T2; so there aren’t any laws of relative fitness. The ToNS implies a filtering mechanism, which is the environment. But the environment can only access the correlation, and so it explains the selection of both traits without being able to say anything about what’s selected-for.
A mechanism?
What is a causal mechanism? A causal mechanism is a sequence of events or processes governed by lawlike regularities; causal claims, basically, are claims about the existence of a mechanism. In the case of the ToNS, the lawlike regularities would be laws of relative fitness. In the Fodorian sense, the mechanism would need to be sensitive to the difference between correlated traits, but the only thing that could be so sensitive would be laws of relative fitness. So the question is: how can natural selection be a mechanism if it can’t ground counterfactuals which distinguish selection of from selection-for? Laws would support such counterfactuals. The kind of law Fodor is looking for is: “All else being equal, the probability that a t1 wins a competition with a t2 in ecological situation E is p.” Basically, if there are such laws, they can support the counterfactuals. However, due to massive context-sensitivity, it seems unlikely that there are ceteris paribus laws of relative fitness. If there are, why hasn’t anyone articulated them?
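To make the target of the objection explicit, the schematic form of such a putative law can be written out (this rendering is my own, not Fodor’s notation):

```latex
% Schematic form of a putative ceteris paribus law of relative fitness.
% t_1, t_2 are competing traits; E is an ecological situation; p a fixed probability.
\Pr\bigl(t_1 \text{ wins a competition with } t_2 \mid E\bigr) = p
\quad \text{(ceteris paribus)}
```

The massive context-sensitivity just discussed means that p would have to be indexed to every ecology E and every kind of organism, so no single law of this form holds across all ecologies, which is precisely the point.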
But the fact of the matter is this: Fodor has successfully argued against the claim that natural selection is a mechanism, and he is not alone in that conclusion, since others have argued that natural selection isn’t a mechanism without using his arguments (eg Skipper and Millstein, 2005; Havstad, 2011).
The question is: how can natural selection be a mechanism if it can’t ground the counterfactuals that distinguish selection of from selection for? If T and T’ are correlated, the same story explains the selection of both traits, so we can’t use the ToNS to show which was selected-for its causal contribution to fitness and which was the free-riding trait that just came along for the ride. Thus, the ToNS doesn’t explain the trait, and if it doesn’t explain the trait, then it doesn’t predict the trait: natural selection makes no prediction as to which trait will be selected-for. That’s not to say that we (humans) can’t know what was selected-for from what was merely selected, and Fodor never claimed otherwise, contrary to those who claim he did. The best example is Pigliucci (2010), who states in his review that:
functional analyses rooted in physiology, genetics and developmental biology, and why observations of selection in the field are whenever possible coupled with manipulative experiments that make it possible to distinguish between [correlated traits].
Fodor was emphatic about this—he never claimed that humans couldn’t distinguish between correlated traits, only that, using the ToNS, we can’t know which trait was selected-for its contribution to fitness since, due to the correlation, the same story explains both traits. Fodor and Piattelli-Palmarini (2010) stated as much in Replies to Our Critics:
Many of the objections that have been raised against us seem unable to discriminate this claim from such quite different ones that we didn’t and don’t endorse, such as: when traits are coextensive, there is no fact of the matter about which is a cause of fitness; or, when traits are coextensive, there is no way to tell which of them is a cause of fitness; or when traits are coextensive Science cannot determine which is a cause of fitness…etc. Such views are, we think, preposterous on the face of them; we wouldn’t be caught dead holding them. To the contrary, it is precisely because there is a fact of the matter about which phenotypic traits cause fitness, and because there is no principled reason why such facts should be inaccessible to empirical inquiry, that the failure of TNS to explain what distinguishes causally active traits from mere correlates of causally active traits, shows that something is seriously wrong with TNS.
In his book Agents and Goals in Evolution, Samir Okasha (2018) states that Darwin was the first to employ what Okasha terms “type II agential thinking”: personifying “mother nature” as selecting fit traits. Darwin’s analogy between natural and artificial selection fails, though. It fails because in artificial selection there is a mind attempting to select fit traits, while in natural selection there is no mind, so the only way around this is laws of relative fitness.
Gould and Lewontin (1979), although criticizing adaptive explanations, also held that natural selection is the most powerful mechanism of evolution. This claim—that natural selection is a mechanism—is ubiquitous in the literature. And it is Gould and Lewontin’s Spandrel argument that partly inspired the correlated/coextensive trait argument devised by Fodor. However, as F&PP note, Gould and Lewontin didn’t take their argument to its logical conclusion, which of course was the rejection of Darwinian natural selection in explaining the fixation of traits in organisms.
What are selected are not traits but organisms. And just because an organism with T was selected doesn’t mean that T was the cause of its fitness. We can then say that the phrase “survival of the fittest” is a tautology, since the fit are those who survive and those who survive are fit. But Hunt (2014) claims that we can reformulate it: that it should be defined as a theory that attempts to predict and retrodict evolutionary change which acts upon organisms through the environment. However, this reformulation runs right into Fodor’s argument, since there is no way for the exogenous selector (the environment) to distinguish between correlated traits. Portin (2012) claims that we can reduce the tautology to “those who reproduce more, reproduce more,” stating that it merely “seems” like a tautology. Even Coyne and Dawkins, in their books Why Evolution is True and The Greatest Show on Earth, make the mistake multiple times of explaining natural selection in a tautologous way (Hunt, 2012). The fact of the matter is, natural selection is nothing more than an oxymoronic tautology (Brey, 2002).
To explain something means to identify the causal mechanism responsible for it. In the case of natural selection, if we are to explain why T1 rather than T2 was selected-for, we would need to identify a causal mechanism able to distinguish between correlated traits. The ToNS is a probabilistic theory, so if NS is to be a mechanism, it has to be a stochastic mechanism; but the leading accounts of mechanisms are deterministic, so NS can’t be a mechanism (Skipper and Millstein, 2005). There are, as a matter of fact, no stable organizations of component parts in NS, so it can’t be a mechanism.
Conclusion
Nanay (2022: 175) tells us that Fodor’s book was disregarded by many academic publishers, which speaks to the emotionality of Fodor’s detractors that Leal (2022) notes:
I knew about [What Darwin Got Wrong] long before the 2009 publication date (as early as 2005, as I recall), because all my friends and colleagues working in philosophy of biology kept telling me how they had to reject, in no uncertain terms, this book from all major and then not so major academic publishers. Fodor then took it to a non-academic publisher, and it did get published,
So the question is: how does the ToNS explain a trait if it can’t discern between two correlated traits and so can’t predict which of them moves to fixation? How does the ToNS predict which trait is fitness-enhancing when two traits are correlated, prior to performing an experiment? That, again, is where humans come in. We can perform experimental manipulations and then discern the fit trait from the correlated trait that does not cause fitness. But we can’t use the ToNS alone to do this.
There is also the fact that most if not all respondents to Fodor did not understand the argument, and this can be seen from their responses to him, like Pigliucci’s, where he talks about experimentation. Humans can have access to the fit trait, but the environment—the exogenous filter—only has access to the correlation, and so the same story explains both traits.
So neo-Darwinism is false and natural selection isn’t a mechanism. What, then, is the alternative? Fodor ended Why Pigs Don’t Have Wings writing:
The alternative possibility to Darwin’s is that the direction of phenotypic change is very largely determined by endogenous variables. The current literature suggests that alterations in the timing of genetically controlled developmental processes is often the endogenous variable of choice; hence the ‘devo’ in ‘evo-devo’.
“Endogenous variables” means causes within the organism. For instance, West-Eberhard (2003: 179) noted that “If genes are usually followers rather than leaders in evolution—that is, most gene-frequency change follows, rather than initiates, the evolution of adaptive traits—then the most important role of mutation in evolution may be to contribute not so much to the origin of phenotypic novelties as to the store of genetic variation available for long-term gradual genetic change under selection.” So that is one way that endogenous variables can direct phenotypic change. This has been noted by many recent authors (eg Noble et al, 2014).
One of the Darwinian premises is that mutations occur randomly. However, we now know that this is not always the case: “genetic change is far from random and often not gradual” (Noble, 2013). All of the assumptions of neo-Darwinism have been disproved, most importantly, the theory of natural selection being the causal mechanism of evolution.
P1: Natural selection is a mechanism iff it can distinguish between causes and correlates of causes.
P2: Natural selection can’t distinguish between causes and correlates of causes.
C: Therefore, natural selection isn’t a mechanism.
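For what it’s worth, the bare logical form of this closing argument is a simple instance of modus tollens and can be checked mechanically. Here is a sketch in Lean 4, where `Mech` and `Dist` are my own labels standing for the two claims:

```lean
-- Mech : "natural selection is a mechanism"
-- Dist : "natural selection can distinguish causes from correlates of causes"
-- P1 : Mech ↔ Dist,  P2 : ¬Dist  ⊢  ¬Mech
example (Mech Dist : Prop) (P1 : Mech ↔ Dist) (P2 : ¬Dist) : ¬Mech :=
  fun h => P2 (P1.mp h)
```

The validity of the form is not in dispute, of course; the argument’s force rests entirely on the truth of P1 and P2, which the preceding sections defend.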
The Myth of “General Intelligence”
5000 words
Introduction
“General intelligence,” or g, is championed as the hallmark “discovery” of psychology. It was first “discovered” by Charles Spearman in 1904: noting that schoolchildren who scored highly on one test tended to score highly on others, and vice versa for lower-scoring children, he assumed that the correlation between tests must have an underlying physiological basis, which he posited to be some kind of “mental energy,” stating that the central nervous system (CNS) explained the correlation. He proclaimed that g really existed and that he had verified Galton’s claim of a unitary general ability (Richardson, 2017: 82-83). Psychometricians then claim, from these intercorrelations of scores, that what is hidden from us is revealed, and that the correlations show that something exists which is driving them. That is the goal of psychometrics/psychology—to quantify and then measure psychological traits/mental abilities. However, I have argued at length that this is a conceptual impossibility—the goal of psychometrics is unattainable since psychometrics isn’t measurement. Therefore, claims that IQ tests measure g are false.
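To make concrete what “intercorrelations of scores” amounts to, here is a minimal sketch of how a g loading is conventionally extracted from a test battery. The correlation matrix is invented for illustration, and the first principal component is used as a simple stand-in for the factor-analytic methods psychometricians actually use:

```python
import numpy as np

# Invented correlation matrix among four hypothetical mental tests,
# showing the "positive manifold" Spearman observed: every test
# correlates positively with every other.
R = np.array([
    [1.0, 0.6, 0.5, 0.4],
    [0.6, 1.0, 0.5, 0.4],
    [0.5, 0.5, 1.0, 0.3],
    [0.4, 0.4, 0.3, 1.0],
])

# The reported "g loadings" are, in essence, the eigenvector of R with
# the largest eigenvalue (the first principal component).
eigenvalues, eigenvectors = np.linalg.eigh(R)  # eigenvalues in ascending order
g = eigenvectors[:, -1]
g = np.sign(g.sum()) * g  # fix the arbitrary sign of the eigenvector

# Share of total variance the first component accounts for.
share = eigenvalues[-1] / eigenvalues.sum()
print(g, share)
```

The point to notice is that nothing in this procedure establishes the existence of a unitary cause: the loadings are a redescription of the correlations themselves, which is exactly the reification at issue in what follows.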
First, I will discuss the reification of g and its relation to brain properties. I will argue that if g is a thing, then it must have a biological basis—that is, it must be a brain property. Reductionists like Jensen have said as much. But it is the reification of g as a concrete, physical thing that leads people to hold such beliefs. Second, I will discuss Geary’s theory that g is identical with mitochondrial functioning. I will describe what mitochondria do and what powers them, and then discuss the theory. I will take a negative view of it, since Geary co-opts real, actual functions of a bodily process and attempts to weave g theory into them. Third, I will discuss whether psychological traits are indeed quantifiable and measurable, and whether there is a definition psychometricians can use to ground their empirical investigations. I will argue negatively on all three counts. Fourth, I will discuss Herrnstein and Murray’s 6 claims about IQ in The Bell Curve and respond to each in turn. Fifth, I will discuss the real cause of score variation, which is not some assumed biological process/mechanism but affective factors and exposure to the specific type of knowledge items on the test. Lastly, I will conclude and give an argument for why g isn’t a thing and is therefore immeasurable.
On reifications and brain properties
Contrary to protestations from psychometricians, they do in fact reify correlations and then claim that there exists some unitary, general factor that pervades all mental tests. If reification is treating the abstract as something physical, and if psychometricians treat g as something physical, then they are reifying g based on mere intercorrelations between tests. I am aware that, try as they might, they do attempt to show that there is an underlying biology to g, but these claims are defeated by the myriad arguments I’ve raised against the reducibility of the mental to the physical. Another thing that Gould gets at is that psychometricians claim that they can rank people—this is where the psychometric assumption arises that, because we can put a number to their reified thing, there must be something being measured.
Reification is “the propensity to convert an abstract concept (like intelligence) into a hard entity (like an amount of quantifiable brain stuff)” (Gould, 1996: 27). So g theorists treat g as a concrete, physical thing, which then guides their empirical investigations. They basically posit that the mental has a material basis, and they claim that, by using correlations between different test batteries, they can elucidate the causal biological mechanisms/brain properties responsible for the correlation.
Spearman’s theory—and IQ—is a faculty theory (Nash, 1990). It is a theory in which it is claimed that the mind is separated into different faculties, where mental entities cause intellectual performance. Such a theory needs to maintain the claim that a cognitive faculty is causally efficacious for information processing. But the claim that the mind is “separated” into different faculties fails, since the mind is a single sphere of consciousness, not a complicated arrangement of mental parts. Physicalists like Jensen and Spearman don’t even have a sound philosophical basis on which to ground their theories. Their psychology is inherently materialist/physicalist, but materialism/physicalism is false, and so it follows that their claims do not hold any water. The fact of the matter is, Spearman saw what he wanted to see in his data (Schlinger, 2003).
I have already argued that since dualism is true, the mental is irreducible to the physical, and that since psychometrics isn’t measurement, what psychometricians claim to do just isn’t possible. I have further argued that science can’t study first-personal subjective states since science is third-personal and objective. The fact of the matter is, hereditarian psychologists are physicalists, but it is impossible for a purely physical thing to think. Claims from psychometricians about their “mental tests” basically reduce to one singular claim: that g is a brain property. I have been saying this for years—if g exists, it has to be a brain property. But for it to be a brain property, one needs to provide defeaters for my arguments for the irreducibility of the mental, and one also needs to counter the arguments that psychometrics isn’t measurement and that psychology isn’t quantifiable. They can assume all they want that it is quantifiable, and that since they are giving tests, questionnaires, Likert scales, and other kinds of “assessments” to people, they are really measuring something; but, ultimately, if they are actually measuring something, then that thing has to be physical.
Jensen (1999) made a suite of claims trying to argue for a physical basis for g—to reduce g to biology—though, upon conceptual examination (which I have provided above), these claims outright fail:
g…[is] a biological [property], a property of the brain
The ultimate arbiter among various “theories of intelligence” must be the physical properties of the brain itself. The current frontier of g research is the investigation of the anatomical and physiological features of the brain that cause g.
…psychometric g has many physical correlates…[and it] is a biological phenomenon.
As can be seen, Jensen is quite obviously claiming that g is a biological brain property—and this is what I’ve been saying to IQ-ists for years: If g exists, then it MUST be a property of the brain. That is, it MUST have a physical basis. But for g proponents to show that this is in fact the case, they need to discredit the arguments for dualism; that is, they need to show that the mental is reducible to the physical. Jensen is quite obviously saying that a form of mind-brain identity is true, and so my claim that it was inevitable for hereditarianism to become a form of mind-brain identity theory is quite obviously true. The fact of the matter is, Jensen’s beliefs rely on an outmoded concept of the gene, and indeed on a biologically implausible heritability (Richardson, 1999; Burt and Simons, 2014, 2015).
But Jensen (1969) contradicted himself when it comes to g. On page 9, he writes that “We should not reify g as an entity, of course, since it is only a hypothetical construct intended to explain covariation among tests. It is a hypothetical source of variance (individual differences) in test scores.” But 10 pages later, on pages 19-20, he completely contradicts himself, writing that g is “a biological reality and not just a figment of social conventions.” That’s quite the contradiction: “Don’t reify X, but X is real.” Jensen then spent the rest of his career trying to reduce g to biology/the brain (brain properties), as we see above.
But we are now in the year 2023, and so of course there are new theoretical developments which attempt to show that Spearman’s hypothesized mental energy really does exist, and that it is the cause of variations in scores and of the positive manifold. This is now where we will turn.
g and mitochondrial functioning
In a series of papers, David Geary (2018, 2019, 2020, 2021) tries to argue that mitochondrial functioning is the core component of g. At last, Spearman’s hypothetical construct has been found in the biology of our cells—or has it?
One of the main functions of mitochondria is to carry out oxidative phosphorylation to produce adenosine triphosphate (ATP). All living cells use ATP as fuel; it acts as a signaling molecule and is also involved in cellular differentiation and cell death (Khakh and Burnstock, 2009). The role of mitochondrial functioning in spurring disease states has been known for a while, as with cardiovascular diseases such as cardiomyopathy (Murphy et al, 2016; Ramaccini et al, 2021).
So due to the positive manifold, where performance in one thing is correlated with performance in another, Geary assumes—as Spearman and Jensen did before him—that there must be some underlying biological mechanism which then explains the correlation. Geary then uses established outcomes of irregular mitochondrial functioning to argue that the mental energy Spearman was looking for could be found in mitochondrial functioning. Basically, this mental energy is ATP. I don’t deny that mitochondrial functioning plays a role in the acquisition of disease states; indeed, this has been well known (eg, Gonzales et al, 2022). What I deny is Geary’s claim that mitochondrial functioning has identity with Spearman’s g.
His theory is, like all other hereditarian-type theories, merely correlative—just like g theory. He hasn’t shown any direct, causal evidence of mitochondrial functioning in “intelligence” differences (nor for a given “chronological age”). That as people age their bodies change, which then has an effect on their functioning, doesn’t mean that ATP—the output of the powerhouse of the cell—is causing said individual differences and the intercorrelations between tests (Sternberg, 2020). Indeed, environmental pollutants affect mitochondrial functioning (Byun and Baccarelli, 2014; Lambertini and Byun, 2016). Indeed, most—if not all—of Geary’s hypotheses do not pass empirical investigation (Schubert and Hagemann, 2020). So while Geary’s theory is interesting and certainly novel, it fails to explain what he set out to explain.
Quantifiable, measurable, definable, g?
The way that g is conceptualized is that there is a quantity of it—where one has “more of it” than other people, and this, then, explains how “intelligent” they are in comparison to others—so implicit in so-called psychometric theory is that whatever it is their tests are tests of, something is being quantified. But what does it mean to quantify something? Basically, what is quantification? Simply, it’s the act of giving a numerical value to a thing that is measured. Now we have come to an impasse—if it isn’t possible to measure what is immaterial, how can we quantify it? That’s the thing, we can’t. The g approach is inherently a biologically reductionist one. Biological reductionism is false. So the g approach is false.
Both Gottfredson (1998) and Plomin (1999) make similar claims to Jensen, where they talk about the “biology of g” and the “genetics of g“. Plomin (1999) claims that studies of twins show that g has a substantial heritability, while Gottfredson (1998) claims that the heritability of IQ increases up until adulthood, where it “rises to 60 percent in adolescence and to 80 percent by late adulthood“, citing Bouchard’s MISTRA (Minnesota Study of Twins Reared Apart). (See Joseph 2022 for critique and for the claim that the heritability of IQ in that study is 0 percent.) They, being IQ-ists, of course assume a genetic component to this mystical g. However, their arguments are based on numerous false assumptions and studies with bad designs (and hidden results), and so they must be rejected.
If X is quantitative, then X is measurable. If X is measurable, then X has a physical basis. Psychological traits don’t have a physical basis. So psychological traits aren’t quantitative and therefore not measurable. Geary’s attempt at arguing for identity between g and mitochondrial functioning is an attempt at a specified measured object for g, though his theory just doesn’t hold. Stating truisms about a biological process and then attempting to liken the process with the construct g just doesn’t work; it’s just a post-hoc rationalization to attempt to liken g with an actual biological process.
Furthermore, if X is quantitative, then there is a specified measured object, object of measurement, and measurement unit for X. But this is where things get rocky for g theorists and psychometricians. Psychometry is merely pseudo-measurement. Psychometricians cannot give a specified measured object, and if they can’t give a specified measured object, they cannot give an object of measurement. They thusly also cannot construct a measurement unit. Therefore, “the necessary conditions for metrication do not exist” (Nash, 1990: 141). Even Haier (2014, 2018) admits that IQ test scores don’t have a unit like inches, liters, or grams. This is because those are ratio scales and IQ is ordinal. That is, there is no “0-point” for IQ, as there is for other actual, real measures like temperature. That’s the thing—if you have a thing to be measured, then you have a physical object and consequently a measurement unit. But this is just not possible for psychometry. I then wonder why Haier doesn’t follow what he wrote to its logical conclusion—that the project of psychometrics is just not possible. Of course the concept of intelligence doesn’t have a referent; that is, it doesn’t name a property like height, weight, or temperature (Midgley, 2018: 100-101). Even the most-cited definition of intelligence—Gottfredson’s—still fails, since she contradicts herself in her very definition.
Of course IQ “ranks” people by their performance—some people perform better on the test than others (which is an outcome of prior experience). So g theorists and IQ-ists assume that the IQ test is measuring some property that varies between groups which then leads to score differences on their psychometric tests. But as Roy Nash (1990: 134) wrote:
It is impossible to provide a satisfactory, that is non-circular, definition of the supposed ‘general cognitive ability’ IQ tests attempt to measure and without that definition IQ theory fails to meet the minimal conditions of measurement.
But Boeck and Kovas (2020) try to sidestep this issue with an extraordinary claim, “Perhaps we do not need a definition of intelligence to investigate intelligence.” How can we investigate something sans a definition of the object of investigation? How can we claim that a thing is measured if we have no definition, and no specified measured object, object of measurement and measurement unit, as IQ-ists seem to agree with? Again, IQ-ists don’t take these conclusions to their further logical conclusion—that we simply just cannot measure and quantify psychological traits.
Haier claims that PGS and “DNA profiles” may lead to “new definitions of intelligence” (however ridiculous a claim). He also, in 2009, had a negative outlook on identifying a “neuro g,” since “g-scores derived from different test batteries do not necessarily have equivalent neuro-anatomical substrates, suggesting that identifying a “neuro-g” will be difficult” (Haier, 2009). But one more important reason exists, and it won’t just make it “difficult” to identify a neuro g; it makes it conceptually impossible. That is the fact that cognitive localizations are not possible, and that we reify a kind of average of brain activations when we look at brain scans using fMRI. The fact of the matter is, neuroreduction just isn’t possible, empirically (Uttal, 2001, 2012, 2014) or conceptually.
Herrnstein and Murray’s 6 claims
Herrnstein and Murray (1994) make six claims about IQ (and also g):
(1) There is such a thing as a general factor of cognitive ability on which human beings differ.
Of course implicit in this claim is that it’s a brain property, and that people have it in different quantities. However, the discussion above puts this claim to bed, since psychological traits aren’t quantitative. The claim, of course, comes from the intercorrelations of test scores. But we will see that most of the source of variation isn’t even entirely cognitive; it is largely affective and due to one’s life experiences (due to the nature of the item content).
(2) All standardized tests of academic aptitude or achievement measure this general factor to some degree, but IQ tests expressly designed for that purpose measure it most accurately.
Of course Herrnstein and Murray are married to the idea that these tests are measures of something, that since they give different numbers due to one’s performance, there must be an underlying biology behind the differences. But of course, psychometry isn’t true measurement.
(3) IQ scores match, to a first degree, whatever it is that people mean when they use the word intelligent or smart in ordinary language.
That’s because the tests are constructed to agree with prior assumptions about who is or is not “intelligent.” Terman, for instance, constructed his Stanford-Binet to agree with his own preconceived notions of who is or is not “intelligent”: “By developing an exclusion-inclusion criteria that favored the aforementioned groups, test developers created a norm “intelligent” (Gersh, 1987, p. 166) population “to differentiate subjects of known superiority from subjects of known inferiority” (Terman, 1922, p. 656)” (Bazemoore-James, Shinaprayoon, and Martin, 2017). Of course, since newer tests are “validated” (that is, correlated with) older tests (Richardson, 1991, 2000, 2002, 2017; Howe, 1997), this assumption is still alive today.
(4) IQ scores are stable, although not perfectly so, over much of a person’s life.
IQ test scores are malleable, and this of course would be due to the experience one has in their lives which would then have them ready to take a test. Even so, if this claim were true, it wouldn’t speak to the “biology” of g.
(5) Properly administered IQ tests are not demonstrably biased against social, economic, ethnic, or racial groups.
This claim is outright false and can be known quite simply: the items on IQ tests derive from specific classes, mainly the white middle-class. Since this is true, it would then follow that people who are not exposed to the item content and test structures wouldn’t be as prepared as those who are. Thus, IQ tests are biased against different groups, and if they are biased against different groups it also follows that they are biased for certain groups, mainly white Americans. (See here for considerations on Asians.)
(6) Cognitive ability is substantially heritable, apparently no less than 40 percent and no more than 80 percent.
It’s nonsense to claim that one can apportion heritability into genetic and environmental causes, due to the interaction between the two. IQ-ists may claim that twin, family, and adoption studies show that IQ is X amount heritable, so there must thusly be a genetic component to differences in test scores. But the issues with heritability have been noted for decades (see Charney, 2012, 2016, 2022; Joseph, 2014; Moore and Shenk, 2016; Richardson, 2017), so this claim also fails. There is also the fact that behavioral genetics doesn’t have any “laws.” It’s simply fallacious to believe that nature and nurture, genes and environment, contribute additively to the phenotype, and that their relative contributions to the phenotype can be apportioned. But hereditarians need to keep that facade up, since it’s the only way their ideas can have a chance at working.
What explains the intercorrelations?
We still need an explanation of the intercorrelations between test scores. I have exhaustively argued that the usual explanations from hereditarianism outright fail—g isn’t a biological reality, and IQ tests aren’t a measure at all because psychometrics isn’t measurement. So what explains the intercorrelations? We know that IQ tests are composed of different items, whether knowledge items or more “abstract” items like the Raven. Therefore, we need to look to the fact that people aren’t exposed to certain things: if one comes across something novel that they’ve never been exposed to, they won’t know how to answer it, and their score will be affected by their ignorance of the relationship between the question and answer on the test. There are also other factors at work irrespective of the relationship between one’s social class and the knowledge one is exposed to, though social class would still have an effect on the outcome.
IQ scores are merely numerical surrogates for class affiliation (Richardson, 1999, 2002, 2022). The fact of the matter is, all human cognizing takes place in specific cultural contexts in which cultural and psychological tools are used. This means, quite simply, that culture-fair tests are impossible and, therefore, that such tests are necessarily biased against certain groups, and so they are biased for certain groups. Lev Vygotsky’s sociocultural theory of cognitive development and his concepts of psychological and cultural tools are apt here. This is wonderfully noted by Richardson (2002: 288):
IQ tests, the items of which are designed by members of a rather narrow social class, will tend to test for the acquisition of a rather particular set of cultural tools: in effect, to test, or screen, for individuals’ psychological proximity to that set per se, regardless of intellectual complexity or superiority as such.
Thinking is culturally embedded and contextually specific (although irreducible to physical things), mediated by specific cultural tools (Richardson, 2002). This is because one is immersed in culture immediately from birth. But what is a cultural tool? Cultural tools include language (Weitzman, 2013) (it’s also a psychological tool), along with “different kinds of numbering and counting, writing schemes, mnemonic technical aids, algebraic symbol systems, art works, diagrams, maps, drawings, and all sorts of signs (John-Steiner & Mahn, 1996; Stetsenko, 1999)” (Robbins, 2005). Children are born into cultural environments, and also linguistically-mediated environments (Vasileva and Balyasnikova, 2019). But what are psychological tools? One psychological tool (which would of course also be a cultural tool) would be words and symbols (Vallotton and Ayoub, 2012).
Vygotsky wrote: “In human behavior, we can observe a number of artificial means aimed at mastering one’s own psychological processes. These means can be conditionally called psychological tools or instruments… Psychological tools are artificial and intrinsically social, rather than natural and individual. They are aimed at controlling human behavior, no matter someone else’s or one’s own, just as technologies are aimed at controlling nature” (Vygotsky, 1982, vol. 1, p. 103, my translation). (Falikman, 2021).
We can now identify the source of variation in IQ test scores, having argued that social class is a compound of the cultural tools one is exposed to. Furthermore, it has been shown that the language and numerical skills used on IQ tests are class-dependent (Brito, 2017). Thus, the compounded cultural tools of different classes and racial groups coalesce to explain how and why they score the way they do. Richardson (2002: 287-288) writes
that the basic source of variation in IQ test scores is not entirely (or even mainly) cognitive, and what is cognitive is not general or unitary. It arises from a nexus of sociocognitive-affective factors determining individuals’ relative preparedness for the demands of the IQ test. These factors include (a) the extent to which people of different social classes and cultures have acquired a specific form of intelligence (or forms of knowledge and reasoning); (b) related variation in ‘academic orientation’ and ‘self-efficacy beliefs’; and (c) related variation in test anxiety, self-confidence, and so on, which affect performance in testing situations irrespective of actual ability.
Basically, what explains the intercorrelations of test scores—so-called g—are affective, non-cognitive factors (Richardson and Norgate, 2015). Being prepared for the tests, being exposed to the items on the tests (which are drawn from the white middle class), explains IQ score differences, not a mystical g that some have more of than others. That is, what explains IQ score variation is one’s “distance” from the middle class—this follows from the item content on the test. At the end of the day, IQ tests don’t measure the ability for complex cognition (Richardson and Norgate, 2014). So one can see that differing acquisition of cultural tools by different cultures and classes would then explain how and why individuals of those groups attain different knowledge. This, then, would license the claim that one’s IQ score is a mere outcome of their proximity to the cultural tools in use in the tests in question (Richardson, 2012).
The fact of the matter is, children do not enter school with the same degree of readiness (Richardson, 2022), and this is due to their social class and the types of things they are exposed to in virtue of their class membership (Richardson and Jones, 2019). Therefore, the explanation for these differences in scores need not be some kind of energy that people have in different quantities, it’s only the fact that from birth we are exposed to different cultures and therefore different cultural and psychological tools which then causes differences in the readiness of children for school. We don’t need to posit any supposed biological mechanism for that, when the answer is clear as day.
Conclusion
As can be seen from this discussion, it is clear that IQ-ist claims of g as a biological brain property fail. They fail because psychometrics isn’t measurement. They fail because psychometricians assume that what they are “measuring” (supposedly psychological traits) has a physical basis and has the necessary components for metrication. They fail because the proposed biology meant to back up g theory doesn’t work, and claiming identity between g and a biological process doesn’t mean that g is identical with that biological process. Merely describing facts about physiology and then attempting to liken them to g doesn’t work.
Psychologists try so very hard for psychology to be a respected science, even when what they are studying bears absolutely no relationship to the objects of scientific study. Their constructs are claimed to be natural kinds, but they are merely historically contingent. Given the way these tests are constructed, is it any wonder why such score differences arise?
The so-called g factor is also an outcome of the way tests are constructed:
Subtests within a battery of intelligence tests are included on the basis of them showing a substantial correlation with the test as a whole, and tests which do not show such correlations are excluded. (Tyson, Jones, and Elcock, 2011: 67)
This is why there is a correlation between all subtests that comprise a test: it is an artificial creation of the test constructors, just like their normal curve. Of course, if you pick and choose what you want in your battery or test, you can coax it to get the results you want and then proclaim that what explains the correlations is some sort of unobserved, hidden variable that individuals have in different quantities. But the assumption that there is a quantity of course assumes that there is a physical basis to that thing. Physicalists like Jensen, Spearman, and then Haier of course presume that intelligence has a physical basis and is either driven by genes or can be reduced to neurophysiology. These claims don’t pass empirical and conceptual analysis. For these reasons and more, we should reject claims from hereditarian psychologists when they claim that they have discovered a genetic or neurophysiological underpinning to “intelligence.”
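The selection procedure Tyson, Jones, and Elcock describe can be illustrated with a small simulation. This is a minimal sketch, not anyone’s published model: it assumes a hypothetical setup in which subtest scores share variance only through a class-linked “exposure” variable plus independent noise, with no general ability simulated at all. Selecting subtests by their correlation with the battery total still manufactures a positive manifold among the retained subtests.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_subtests = 1000, 12

# Hypothetical model: each person's subtest score reflects a shared,
# non-cognitive "exposure" variable (e.g. class-linked familiarity with
# item content) plus independent noise. No unitary ability is simulated.
exposure = rng.normal(size=n_people)
loadings = rng.uniform(0.0, 1.0, size=n_subtests)  # how class-loaded each subtest is
scores = np.outer(exposure, loadings) + rng.normal(size=(n_people, n_subtests))

# The selection rule described in the quote above: keep only subtests
# that correlate substantially with the battery total, drop the rest.
total = scores.sum(axis=1)
r_with_total = np.array([np.corrcoef(scores[:, j], total)[0, 1]
                         for j in range(n_subtests)])
kept = scores[:, r_with_total > 0.4]

# The retained subtests now intercorrelate positively ("positive
# manifold"), driven entirely by the shared exposure variable.
corr = np.corrcoef(kept, rowvar=False)
off_diag = corr[~np.eye(corr.shape[0], dtype=bool)]
```

On this toy model, the retained subtests intercorrelate substantially even though nothing like g was built in; the “general factor” is an artifact of the shared exposure variable plus the selection rule.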
At the end of the day, the goal of psychometrics is clearly impossible. Try as they might, psychometricians will always fail. Their “science” will never be on the level of physics or chemistry, because they have no definition of intelligence, nor a specified measured object, object of measurement, or measurement unit. They know this, and they attempt to construct arguments to argue their way out of the logical conclusions of those facts, but it just doesn’t work. “General intelligence” doesn’t exist. It’s a mere creation of psychologists and how they make their tests, so it’s basically just like the bell curve. Intelligence as an essence or quality is a myth; just because we have a noun “intelligence” doesn’t mean that there really exists a thing called “intelligence” (Schlinger, 2003). The fact of the matter is, intelligence is simply not an explanatory concept (Howe, 1997).
IQ-ist ideas have been subject to an all-out conceptual and empirical assault for decades. The model of the gene they use is false (DNA sequences have no privileged causal role in development), heritability estimates can’t do what they need them to do, the derivation of those estimates rests on highly environmentally-confounded studies, the so-called “laws” of behavioral genetics are anything but, and they lack definitions and specified measured objects, objects of measurement, and measurement units. It is quite simply clear that hereditarian ideas are not only empirically false, but conceptually false too. They don’t even have their concepts in order, nor have they articulated exactly WHAT it is they are doing, and it clearly shows. The reification of what they claim to be measuring is central to that claim.
This is yet another arrow in the quiver of the anti-hereditarian—their supposed mental energy, their brain property, simply does not, nor can it, exist. And if it doesn’t exist, then they aren’t measuring what they think they’re measuring. If they’re not measuring what they think they’re measuring, then they’re showing relationships between score outcomes and something else: social class membership, along with everything else related to social class, like exposure to the test items, along with other affective variables.
Now here is the argument (hypothetical syllogism):
P1: If g doesn’t exist, then psychometricians are showing other sources of variation for differences in test scores.
P2: If psychometricians are showing other sources of variation for differences in test scores and we know that the items on the tests are class-dependent, then IQ score differences are mere surrogates for social class.
C: Therefore, if g doesn’t exist, then IQ score differences are mere surrogates for social class.
How Large of a Role Did Jews Play in the Trans-Atlantic Slave Trade?
1550 words
Introduction
The role of the Jews in the slave trade—and in the civil war—has garnered a great amount of scholarly attention. Over the past few years, since the rise of the alt-right, claims have been leveled that Jews were disproportionately slave owners AND slave transporters. Of course, it would be ridiculous to claim that they had no role in the trade, just as Christians, Muslims, and African tribes had their own roles to play. The claim of a large, disproportionate role of Jews in the slave trade came from the discredited book The Secret Relationship Between Blacks and Jews, published by the Nation of Islam (NoI). The book is nothing more than a masterclass in quote-mining. Nevertheless, the overall role of Jews in the slave trade, other than in the Second Phase, was extremely minuscule, as I will show.
Jews and the slave trade
Slave labor from Africa has been used since the 14th century, starting with the Portuguese—Europeans justified it by stating that they were going to convert the enslaved to Christianity—while the trade was officially barred by the British in 1808. The practice of slavery by and on Africans was carried out long before Europeans arrived on the continent, but the European need for slaves was so great that they searched inland for slaves. Estimates vary widely, but it is said that between 12 and 28 million Africans were enslaved during this time period. Between 1450 and 1850, some 12 million Africans endured the Middle Passage, the forced voyage of Africans across the Atlantic Ocean. In 1619, the first twenty slaves arrived at Jamestown (having been taken from a Portuguese slave ship), signifying the beginning of slavery in what would become America. In 1654, the African former indentured servant Anthony Johnson won a court ruling that made John Casor the first legally recognized slave in America. But what would become America got comparatively fewer slaves than other places—some 427,000 slaves came to America, while some 4 million went to Brazil.
Nevertheless, what was the role of Jews in this system? How much of a hand did they have to play in it and which parts?
It has been claimed by an Afrocentrist that “Everyone knows rich Jews helped finance the slave trade.” Sephardic Jews in Spain and Portugal had decent numbers, but they were soon forced to either convert to Christianity or flee. Those who did not flee Portugal became known as New Christians once they were baptized, though many apparently still practiced Judaism in secret; they are also known as marranos. It is this group of Jews that had the largest role in the slave trade, and it was in the second phase that they had the most influence; only in Brazil and the Caribbean could Jews be said to have had more than a minuscule role in the trade (Drescher, 2010). Jews also had a presence as slavers in Jamaica (Mirvis, 2020).
The economic, social, legal, and racial pattern of the Atlantic Slave trade was in place before Jews made their way back to the Atlantic ports of northwestern Europe, to the coasts and islands of Africa, or to European colonies in the Americas. They were marginal collective actors in most places and during most periods of the Atlantic system: its political and legal foundations; its capital formation; its maritime organization; and its distribution of coerced migrants from Europe and Africa. Only in the Americas—momentarily in Brazil, more durably in the Caribbean—can the role of Jewish traders be described as significant. If we consider the whole complex of major class actors in the transatlantic slave trade, the share of Jews in this vast network is extremely modest. (Drescher, 2010)
Jews helped the Dutch as middlemen in Brazil, controlling about 17 percent of the trade during the 1640s, when the Dutch had become the largest suppliers of slaves to the New World. Indeed, Drescher (1993) notes that when it comes to the Dutch slave trade “Jews can be said to have had tangible significance, but even here their involvement was relatively marginal” and that “little direct involvement can be identified.”
It was at the first western margin of the Dutch transatlantic trade that Jews played their largest role. Around 1640, the Dutch briefly became Europe’s principal slave traders. They welcomed Jews as colonizers and as onshore middlemen in newly conquered Brazil. During the eight years between 1637 and 1644, Jewish merchants accounted for between 8 and 63 percent of first onshore purchasers of the twenty-five thousand slaves landed by the West India Company in Dutch-held Brazil. Perhaps a third of these captives must have reached planters through Jewish traders. (Drescher, 2010)
While it’s not an anti-Semitic attitude to talk about the (marginal) role of Jews in the slave trade, when one begins to talk about a mythical disproportionate role by the Jews in the slave trade, that’s when it does become anti-Semitic, as noted by Davis in the NYRB:
Much of the historical evidence regarding alleged Jewish or New Christian involvement in the slave system was biased by deliberate Spanish efforts to blame Jewish refugees for fostering Dutch commercial expansion at the expense of Spain. Given this long history of conspiratorial fantasy and collective scapegoating, a selective search for Jewish slave traders becomes inherently anti-Semitic unless one keeps in view the larger context and the very marginal place of Jews in the history of the overall system. It is easy enough to point to a few Jewish slave traders in Amsterdam, Bordeaux, or Newport, Rhode Island. But far from suggesting that Jews constituted a major force behind the exploitation of Africa, closer investigation shows that these were highly exceptional merchants, far outnumbered by thousands of Catholics and Protestants who flocked to share in the great bonanza.
In the first phase of the Middle Passage (1500-1640), about 800,000 Africans made the voyage. In the second phase (1640-1700)—the phase in which Jews could be said to have had more than a minuscule involvement—about 817,000 Africans made the voyage. And in the third and final phase (1700 to 1807, when Britain passed the Slave Trade Act of 1807, abolishing the slave trade in the British Empire), 6,686,000 slaves made the voyage (Drescher, 2010). Even so, a 1975 article by historian Virginia Platt states that out of over 200 trade voyages between 1760 and 1776, the merchant Aaron Lopez sent a mere 14 ships to Africa for the procurement of slaves (Platt, 1975). (Note that it is claimed in the media that blacks have higher rates of hypertension than whites today because the slaves on the ships carried genes for salt retention, which then causes higher rates of hypertension today. But that is a mere just-so story.)
Conclusion
It is clear that Jews played no more than a minuscule role—they were never dominant in the slave trade. Historians do wonder, though, why they played such a small role in this trade when they played large roles in others. While we don't know the answer to that, we do know definitively that they did not play a large part in the trade:
Considering the number of African captives who passed into and through the hands of captors and dealers from capture in Africa until sale in America, it is unlikely that more than a fraction of 1 percent of the twelve million enslaved and relayed Africans were purchased or sold by Jewish merchants even once. If one expands the classes of participants to include all those in Europe, Africa, Asia, and the Americas who produced goods for the trade or who processed goods produced by the slaves, and all those who ultimately produced goods with slave labor and consumed slave-produced commodities, the conclusion remains the same. At no point along the continuum of the slave trade were Jews numerous enough, rich enough, and powerful enough to affect significantly the structure and flow of the slave trade or to diminish the suffering of its African victims. (Drescher, 2010)
Such exhaustive research puts to bed anti-Semitic claims from people like the NoI’s Louis Farrakhan (who was involved in the publication of The Secret Relationship Between Blacks and Jews), from neo-Nazis like David Duke, and from Brooklyn Nets star Kyrie Irving. Jews did own slaves in America, as in North Carolina; but about 99.9 percent of the “big plantation owners” in the South were non-Jews.
Such fantastic claims of a disproportionate Jewish role in the slave trade, I think—at least for the NoI and similar parties—have to do with the claim, made by a group calling themselves “black Hebrew Israelites,” that “Blacks are the real Jews” and that Jews—in particular Ashkenazi Jews—are “fake Jews” and “Khazars.” They claim that blacks are the real 12 Tribes of Israel and that white Jews are imposters merely masquerading as Jews, and that this is one of the reasons why Jews supposedly played a disproportionate role in the trans-Atlantic slave trade. (I’ve had a few run-ins with them before in Manhattan.)
Nevertheless, the claims pushed by these anti-Semitic groups are clearly false. Jews had a minuscule role in the slave trade.
The Case for Reparations for Black Americans
2050 words
Introduction
“Reparations” refers to the act or process of righting a historical wrong. Should we give reparations to black Americans, given that they are the descendants of slaves, such that whatever would have been owed to the enslaved would now be owed to their descendants? The slaves worked without pay for hundreds of years, so untold amounts of money were stolen from them; should we therefore pay reparations to their descendants? Note that in most cases “should” claims and questions are moral claims and questions, so this issue is one of morality. There is also the issue of Jim Crow laws and segregation. In this article I will argue that, since the US government has given reparations to other groups it has wronged in the past, black Americans too should receive reparations from the US government. Though I will not state exactly what or how much they should receive, I will cite some literature that speaks to the question; I will merely argue that they should receive reparations. I will discuss one pro-argument and one anti-argument for reparations, and then give my own.
Reparations given to other groups in the past
Throughout the history of the United States, many heinous acts have been committed. Over the 500+ years since colonization began, Native Americans have been massacred and have had their identities systematically almost erased. In 1946, a commission was formed to hear grievances from Native American tribes; the US government set aside 1.3 billion dollars for 173 tribes, but has of course been dodgy on payments. There is even a more recent push for reparations for Native Americans in California.
In WW2, about 127,000 Japanese Americans were placed in internment camps, out of fear that they would spy on America for Japan. (Most of these camps were near the west coast.) This was part of the anti-Asian sentiment of the time. In 1944, in Korematsu v. United States, SCOTUS upheld keeping Japanese Americans in these camps (a 6-3 decision). In 1988, the Reagan administration gave $20,000 to each surviving internment camp prisoner, which is about $51,000 today. But the National Archives state:
The Japanese American Evacuation Claims Act of July 2, 1948, provided compensation to Japanese American citizens removed from the West Coast during World War II (WWII) for losses of real and personal property. Approximately 26,550 claims totaling $142,000 were filed. The program was administered by the Justice Department, which set a $100,000,000 limit on the total claims. Over $36,974,240 was awarded.
In the 1900s, America was under the spell of eugenic ideas. (Eugenic ideas go back centuries, to ancient Greece.) Eugenics was not a theoretical or even mathematical idea; it was a purely social/political idea holding that only the fit should breed (positive eugenics) and the unfit should not (negative eugenics). This led to forced sterilizations, with “IQ” tests used as a vehicle for them. The most famous case is perhaps that of Carrie Buck, in which a physician stated that her sterilization would be for the “good of society” since she scored low on an IQ test (the Binet)—Carrie had a mental age of 9 years while her mother Emma had a mental age of 7 years and 11 months, although Carrie’s daughter was actually quite a normal girl (Gould, 1984). Carrie’s was the first sterilization carried out, in 1927, under a new law stipulating that epileptics and the feebleminded were to be sterilized. All in all, about 64,000 people were sterilized between 1907 and 1963, and the American Eugenics Society had sought to sterilize 1/10th of the US population (Farber, 2008). Even in the present day, some women have been sterilized without their knowledge, showing gross disregard for women’s bodily autonomy. Starting in 2022, the state of California paid out reparations to people who were sterilized during the eugenics movement and, more recently, to people who were sterilized in its prison system.
When it comes to reparations for black Americans, 77 percent of blacks agreed that descendants of enslaved people should receive reparations, while only 18 percent of whites agreed. About 3 in 10 US adults think that some form of reparations should be given to the descendants of slaves, while about 68 percent believe they should not be paid, per Pew. It is estimated that it would take $10-12 trillion, or $800,000 per black household, to eliminate the black-white wealth gap. It has also been estimated that, since the start of slavery, racism has cost blacks something on the order of $70 trillion. Craemer et al (2020) argue that reparations should be something along the lines of $12-13 trillion (Craemer himself estimates $20.3 trillion). It has even been noted that wealth gaps between whites and blacks are associated with longevity differences between them, so reparations would close that gap some (Himmelstein et al, 2022). (Systemic racism also has a say in longevity differences, along with conscious or unconscious bias by physicians.) Nevertheless, many white Americans reject the case for reparations due to, among other reasons, denying that there are lasting effects of slavery. I won’t argue about how much black Americans should receive; I will argue only that they should receive reparations—and since other groups that were historically harmed in the US have received reparations, it follows that black Americans should receive reparations.
As we can see from the above, the US government has given reparations to groups it has wronged in the past. But there is a good amount of philosophy on the morality of reparations and on whether black Americans should receive them (which then becomes a moral argument). I will look at two of these arguments—Bernard Boxill’s (2003) Lockean argument for black reparations (a pro-reparations argument) and Stephen Kershnar’s (2002) inheritance-based discussion of reparations (an anti-reparations argument). After I describe both arguments, I will provide my own pro-reparations argument, which I don’t think has been made in the literature (though I won’t make any claims as to how much should be paid; I merely cited above what some scholars have argued).
A pro-argument
Boxill developed two arguments in his paper—an inheritance argument and a counterfactual argument. Boxill (2003: 73) writes:
This reparation was never paid. Instead each white generation passed on its entire assets to the next white generation. I am not speaking of those few who inherited specific parcels of land or property from the supporters of slavery. I am speaking of whole generations. The whole of each generation of whites passed on its assets to the whole of the next white generation because each generation of whites specified that only whites of the succeeding generation were permitted to own or compete for the assets it was leaving behind. But as I have already shown, the slaves had titles to reparation against these assets. And we can assume that the present generation of African Americans are the slaves’ heirs. Hence the present generation of African Americans have inherited titles to a portion of the assets held by the present white population, with the qualification that they cannot insist on these titles if doing so would put the present white population in danger of perishing.
This is how Boxill gets around a possible objection to the argument—namely, that only those few white Americans who inherited specific property from slave owners, or from those complicit in slavery, could owe anything. The argument can be put in the following form:
(1) Slavery owners passed on assets to successive generations, with each generation passing on assets gained from slavery.
(2) Present-day black Americans are heirs to those who were enslaved.
(C) Therefore, the present white population owes reparations to the present black population in America, since present white Americans are heirs to assets that were gained through the enslavement of the ancestors of present black Americans.
Danielson (2004) states the same, writing:
Some legal scholars suggest that the government should directly address the issue of reparations for slaves because America profited from slave labor for over two centuries, so America should compensate slaves for their labor. Slaves were deprived of fair wages for almost three hundred years and their descendents were therefore deprived of economic inheritance. The slave masters, ergo their descendents through inheritance, benefited from the withheld wages that rightfully belonged to their slaves.
So if a group of people benefitted long ago from certain actions and still benefits today from those actions, and the result of those actions was an untold amount of unpaid—and therefore stolen—wages, then it follows that the group that benefitted owes reparations to the descendants of the group that was historically wronged.
An anti-argument
While Kershnar (1999) seems to provide a pro-argument for reparations based on inheritance, Kershnar (2002) walks the claim back and argues that inheritance-based claims for reparations fail. Kershnar (2002) argues that since slavery brought about the existence of present-day black Americans, without slavery there would be no black Americans and hence no conversation about reparations. I don’t see how this matters—the historical injustice DID happen, and given the wages stolen through hundreds of years of free labor, the case can still be made that blacks are owed reparations.
He also argued that the US did not cause slavery but merely permitted it. This is true. However, it took a war to end slavery, when the South attempted to secede from the US in order to continue the practice; it took the North winning the Civil War to abolish it. The US government was thus complicit in slavery from its inception, by allowing it to occur.
I will now provide and defend an argument that, since other groups in the US were wronged in the past and have received reparations from the US government, so too should black Americans.
The case for reparations for black Americans argument
Here is my argument:
(P1) The US government has a history of giving reparations to people who have suffered injustices (like the Japanese and Natives).
(P2) Black Americans have suffered injustices (slavery, Jim Crow, segregation).
(C) So black Americans deserve reparations from the US government.
Strictly speaking, C follows once P1 is read as an implicit generalization—that any group which has suffered such injustices deserves reparations from the US government—in which case the argument is a simple instance of universal instantiation and modus ponens, and so valid. P1 was argued for in the first section. P2 is common knowledge. So C follows: black Americans deserve reparations from the US government due to their ancestors’ enslavement and the injustices they suffered in the 1900s, from the end of Reconstruction up through the era of the Civil Rights Act of 1964. I don’t see how anyone could plausibly reject either premise and so undermine the argument.
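For what it is worth, the logical form of the argument can be checked mechanically. Here is a minimal sketch in Lean (my own formalization, not from the essay; the predicate names `SufferedInjustice` and `DeservesReparations` and the constant `blackAmericans` are placeholders), reading P1 as the implicit generalization that any group which has suffered such injustices deserves reparations:

```lean
-- Sketch of the argument's form, with P1 read as a universal claim.
-- All names here are my own placeholders, not established notation.
variable (Group : Type)
variable (SufferedInjustice DeservesReparations : Group → Prop)

-- Given P1 (generalized) and P2, C follows by universal
-- instantiation and modus ponens.
example (blackAmericans : Group)
    (p1 : ∀ g, SufferedInjustice g → DeservesReparations g)
    (p2 : SufferedInjustice blackAmericans) :
    DeservesReparations blackAmericans :=
  p1 blackAmericans p2
```

The formalization makes explicit that the only ways to resist the conclusion are to deny the bridging generalization or to deny P2.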
Conclusion
The legacy of slavery still continues today (most Americans today believe that the legacy of slavery still affects blacks today), and it’s partly reflected in low birth weights of black Americans (Jasienska, 2009). The untold negative effects of slavery have combined to further depress black Americans.
There is even a new bill in the works discussing what reparations for black Americans would look like, which would create a task force to study the question. One team has even argued that giving black Americans reparations would have decreased COVID-19 transmission among black Americans (Richardson et al, 2021). One city—Evanston, Illinois—has already enacted a plan to give reparations to black Americans, and California has also stated that the legacy of slavery requires reparations; the state is now considering its next steps. Further, there is also a public health case for reparations. Seeing as the US Congress apologized for the enslavement and segregation of black Americans only in 2008, there is a better way to right these wrongs than mere lip service—and that is to pay reparations to black Americans.
Slavery was a moral wrongdoing, and along with how blacks were treated after they were emancipated from the racist South (Jim Crow laws, segregation), this combines to create a powerful argument for the moral case for reparations for black Americans, since other groups in the country that were wronged received reparations, like victims of sterilization in the 1900s and new millennium, Japanese Americans during WW2, and Native Americans. Thus, it follows that black Americans, too, should receive reparations.
On Asian Immigration to the United States, Hyper-Selectivity, and Hereditarian Musings on Asian Academic Success
5500 words
Introduction
Hereditarians champion Asians (specifically East Asians) as proof of their gene-centric worldview—that their genetic constitution explains their stellar educational and life outcomes. However, scholars have noted for decades that Asian Americans are a specially selected group, a fact captured by the concepts of “hyper-selectivity” and “educational selectivity”: immigrants are more likely to hold a college degree than both those in their native country and those in their host nation, and they bring with them class-specific tools that help their progeny in the next generation. This selectivity gives the children of immigrants—whether 1.5-generation (children who emigrated during adolescence) or second-generation—a better “starting point” and, along with those cultural tools, allows them to succeed in America. In this article, I will describe the process of immigration of certain Asian groups to America, argue that what explains their success today is not genes, as hereditarians claim, but the selectivity of the populations in question, and then argue against the hereditarian position.
Although they may seem dissimilar, educational selectivity and hyper-selectivity share common ground. Immigrant selectivity describes the fact that those who emigrate are not a random sample of the population from which they derive but have better educational accolades than those who stayed behind (Borjas, 1987; Borjas, Kauppinen, and Poutvaara, 2018; Sporlein and Kristen, 2019). There is also the concept of negative selection (contrasted with positive selection, which is what educational and hyper-selectivity are). Both positive and negative selection occur, and immigrants are indeed a self-selected group, with selection also occurring for unobserved traits (Aydemir, 2003). Indeed, migrants to less equal countries like the US are positively selected (Parry et al, 2017), and those who do migrate are more skilled, ambitious, and motivated (Cattaneo, 2007). Immigrants are in general more educated than those who do not migrate, though this differs by country of origin (Feliciano, 2005), while economic migrants are favorably self-selected (Chiswick, 1999).
From immigrant yellow peril to model minority
Asian immigration to the United States has occurred in large numbers since the 1860s. At that time, Chinese immigrants wanted to escape the horrid situation in China and try their luck in the California gold rush, with aspirations of returning to China after they had made some money. They mostly came from the Guangdong province (Jorae, 2009). This was the first wave of Asian immigration to America. Between 1882 and 1943 the US government severely restricted Chinese immigration; the Chinese had been immigrating to work on the transcontinental railroad, and the legislation was passed so that native-born Americans could get those jobs (Zellar, 2003; Gates, 2017). (It is also worth noting that immigrant labor between the 1880s and 1920s was a necessary condition for the industrial revolution; Hirschman and Mogford, 2009.) The first exclusionary act was the Chinese Exclusion Act of May 6, 1882, and it had lasting negative effects until at least the 1940s (Long et al, 2022). Chinese immigrants then began a “revolving door system” in which young workers replaced older workers (Chew, Leach, and Liu, 2018). In 1885, the first Chinese-only school was opened, and in 1892 a second piece of exclusionary legislation—the Geary Act—was passed. Porteus and Babcock (1926: 37) noted that by 1888 the Chinese in Hawaii “had infiltrated every trade and occupation in the islands.” It was in 1943 that FDR signed the repeal of these exclusionary laws.
But perceptions of the Chinese began to change. After being known as “the yellow peril” in the late 19th and early 20th centuries, the Chinese were described in a 1942 Gallup poll as “hardworking, honest, brave, religious, intelligent, and practical,” while in that same poll the Japanese were described as “treacherous, sly, cruel, and warlike.” This speaks to the xenophobic attitudes of Americans at the time, and further to a kind of “villain of the week” mentality.
The second wave of Asian immigration was the Japanese, who became the new source of cheap labor after the Chinese in the early 20th century. They were treated as the Chinese had been, and due to a “gentlemen’s agreement” between Japan and America in 1908, Japan limited the migration of Japanese to America to non-laborers (Hirschman and Wong, 1987: 6). The Immigration Act of 1924—the Johnson-Reed Act—then barred Asian immigration even from countries from which it had previously been allowed. Nevertheless, these shifting attitudes toward the Chinese and Japanese show one important thing—that racist ideals toward a group of people can and do change over the years.
When it comes to the Taiwanese, they had already secured a spot in America through the large number of Taiwanese immigrants who held college degrees before 1965. After the Hart-Celler Act was passed, they stayed in the country and sponsored their highly educated family members to come to America, which is one explanation for the hyper-selectivity we see (Model, 2017).
From a “peril,” “treacherous and warlike,” to “hardworking, honest and intelligent” in mere decades. Americans in the early 20th century, in fact, looked at Asians much as blacks are looked at today, with similar claims about genital and brain size made about Asians back then.
Asians are said to be “model minorities” today, due to their educational attainment and higher incomes. Lee and Zhou (2015: 31-32) state three things about “model minority” status:
(1) It overlooks the fact that Asians aren’t a monolith and comprise many different ethnic groups that don’t have the same model outcomes.
(2) It has been used to claim that “race doesn’t matter” in America since Asians can apparently make it in America despite non-white status.
(3) It pits Asian Americans against other minorities.
It has been said that the model minority stereotype “masks a history of discrimination,” “holds Asian Americans back at work,” and “hurts us all.” I will address higher educational attainment below, but when it comes to higher incomes, Asian families are more likely to live in extended (auxiliary) families in which multiple members contribute to household income (Reyes, 2019). Asian American families average 3.5 people, which makes them larger than the average US family. As Jennifer Lee notes:
High household incomes among Asian Americans can also be explained by “the fact that some live in multi-generational homes with more than one person earning an income,” said Jennifer Lee, a sociology professor at the University of California at Irvine, and co-author of the book “The Asian-American Achievement Paradox.” “You have parents, grandparents, an aunt, some children.”
Nevertheless, the history of Asians in America—from the racism they faced when they first arrived to their being seen as “model minorities” today—is suggestive as to why they are so successful in America now. They are successful because it is not just any people of a given country who emigrate; it is a specific kind of person, with specific outlooks and qualifications. This, in effect, explains the how and why of Asian academic achievement.
Hyper-selectivity and the Asian American experience
Hyper-selectivity refers to the “higher percentage of college graduates among immigrants compared to non-migrants from their country of origin, and a higher percentage of college graduates compared to the host country” (Lee and Zhou, 2015: 15). This selective process began in the 1960s, and federal policies themselves select a particular kind of entrant into the country (Junn, 2007; Ho, 2017; Model, 2017). Asians in America can be said to be a “middleman minority” (Hirschman and Wong, 1987), where a “middleman minority” refers to “minority entrepreneurs who mediate between the dominant and subordinate group” (Douglas and Saenz, 2008; see also Bonacich, 1973); it is an occupational pattern rather than a status (Lou, 1988). Lee and Rong (1988) seek explanations of Asian educational success in terms of family structure, along with middleman and niche theories of migration.
Some would uphold a culturalist thesis—that what explains Asians’ exceptional educational outcomes is their culture. For example, Asian Americans study about one hour more per day than whites (Tang, 2021), and one 2011 analysis found that Asians spent 13 hours per week studying while whites studied for only 5. Asian Americans spend significantly more time studying than other racial groups (Ramey and Shao, 2017). When it comes to homework, black students spent 36 minutes on homework, “Hispanic” students 50 minutes, white students 56 minutes, and Asian students 2 hours and 14 minutes, while Asians also spent more time on other supplementary educational tasks (Dunachik and Park, 2022). Asian American parents were also more likely to spend 20 minutes with their children helping with homework (Garcia, 2013).
Some would state that this is due to an “Asian culture”, but reality tells a different story. The hyper-selectivity of Asians explains this, and their successes cannot be reduced to their culture. Lee and Zhou (2017) state that “Asian immigrants to the United States are hyper-selected, which results in the transmission and recreation of middle-class specific cultural frames, institutions, and practices, including a strict success frame as well as an ethnic system of supplementary education to support the success frame for the second generation.” Yiu (2013) notes that Chinese in Spain have much lower educational attainment and ambitions in comparison to other ethnies in Spain. Merely twenty percent of Chinese youth were enrolled in post-secondary school, while 40 percent of all youths and 30 percent of all immigrants were (Yiu, 2013).
Context matters. The ambitions of a group of people depend on national context. This is what Noam (2014) found for the Chinese in the Netherlands—where Chinese Americans accept the cultural values of high educational attainment, the Chinese Dutch oppose them:
In the United States and the Netherlands the second-generation Chinese approach their ethnocultural values regarding education in dissimilar ways—either accepting or opposing them—yet they both adjust them to their national context.
What is termed the “immigrant paradox” is stronger among Asian and African immigrants than others (Crosnoe and Turley, 2017). Tran et al (2018) note how much more likely certain immigrant groups are to hold a college degree compared with those in their countries of origin:
Among the population age twenty-five and older, first-generation immigrants reported significantly higher percentages of having a bachelor’s degree or higher than their nonmigrant counterparts in respective home countries. This achievement gap is most striking between Chinese nonmigrants and Chinese immigrants in the United States, but also substantial for the other three groups. Only 3.6 percent of nonmigrant Chinese reported having a college education, but 52.7 percent of immigrant Chinese held a bachelor’s degree. This hyper-selectivity ratio of 17:1 between immigrant and nonmigrant means that Chinese immigrants were disproportionately well educated relative to non-migrants. This ratio is about 8:1 for Asian Indians. This gap is also quite stark among Nigerians. Immigrant Nigerians (63.8 percent) were six times more likely than their nonmigrant counterparts to report having a bachelor’s degree or more (11.5 percent). Their hyper-selectivity ratio is about 6:1. Similarly, 23.5 percent of immigrant Cubans reported having a college degree relative to only 14.2 percent of nonmigrant Cubans, a gap of 9 percent. Among Armenians, the corresponding gap is about 10 percent.
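The “hyper-selectivity ratio” used in the passage above is simply the immigrant college-completion rate divided by the nonmigrant rate in the origin country. A minimal sketch (my own illustration, not code from Tran et al; it uses the rounded percentages quoted above, so the computed Chinese figure comes out somewhat below the published 17:1, which presumably reflects unrounded data):

```python
# Illustration of the hyper-selectivity ratio: share of immigrants
# with a bachelor's degree divided by the share of nonmigrants in
# the origin country with one. Figures are the rounded percentages
# quoted from Tran et al (2018).

def selectivity_ratio(pct_immigrants: float, pct_nonmigrants: float) -> float:
    """Ratio of immigrant to nonmigrant college-completion rates."""
    return pct_immigrants / pct_nonmigrants

groups = {
    "Chinese":  (52.7, 3.6),   # quoted ratio ~17:1 (from unrounded data)
    "Nigerian": (63.8, 11.5),  # quoted ratio ~6:1
    "Cuban":    (23.5, 14.2),  # quoted as a 9-point percentage gap
}

for name, (imm, nonmig) in groups.items():
    print(f"{name}: {selectivity_ratio(imm, nonmig):.1f}:1")
```

The point the computation makes concrete is that hyper-selectivity is a relative measure: Cuban immigrants are only modestly selected (a ratio well under 2:1), while Chinese immigrants are selected by an order of magnitude.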
Genetic and cultural hypotheses have been contrasted in an attempt to explain why Asian Americans excel over and above whites. Sue and Okazaki (1990) take a structuralist interpretation—they argue that Asians believe that education is paramount for social mobility. Lynn (1991) rejects Sue and Okazaki’s relative functionalism hypothesis, though it should be noted that hereditarian beliefs about genes and IQ are highly suspect and, frankly, do not work. There is also the fact that, as Sue and Okazaki (1990: 48) note, “Lynn failed to take into account the fact that the Japanese samples tended to have higher socioeconomic standing and a higher representation of urban than rural children than did the American samples from which the norms were constructed.” (Also see Sautman, 1994 and Yee, 1992: 111.) Sue and Okazaki showed that Asians differed from white Americans on one question—they were more likely than white Americans to believe that success in life is related to school success, which is consistent with the Lee and Zhou account.
In Lynn’s (1991) reply to Sue and Okazaki, he argues that their relative functionalism hypothesis must be dismissed, though he does not discount the role of motivation, staying longer in school, and doing more homework. He then—in typical Lynn style—claims that these traits have high heritability and so a genetic hypothesis should not be discounted. Sue and Okazaki (1991) responded, discussing Lynn’s views on CWT, Asian adoptees, and what he says about their relative functionalism hypothesis. In any case, Lynn’s reply is in no way satisfactory, since his belief that genes contribute to IQ scores (that IQ is genetically mediated) is false. Nevertheless, Flynn showed that when IQ is held constant, “Asian’s achievements exceed those of Whites by a huge amount.”
PumpkinPerson claims, using the Coleman report (Coleman, 1966), that “the incredible scores of Oriental Americans is not at all explained by selective immigration” and that he “decided to compare them in the first grade before environment has had much time to cause differences.” I will take both of these claims in turn.
(1) This is false. While selection wasn’t really a factor for Chinese immigrants, it has been noted that the children of Chinese immigrants during the Exclusion period had “greater human capital than those of unrestricted immigrants, despite restricted immigrants having lower skill,” which “suggests particularly strong intergenerational transmission of skill among Chinese immigrants of the exclusion era” (Chen, 2015). It is true that the Chinese of this period were not selected in the manner that Asian immigrants are today, but discrimination did shape their assimilation (Chen and Xie, 2020). Indeed, second-generation Chinese Americans attending American schools had good schooling (Djang, 1935: 101). And for Japanese Americans, Hirschman and Wong (1986: 9) point out:
Another important feature of Asian immigration was the educational selectivity of different streams of immigrants. While the educational composition of recent Asian immigrants has been extraordinary (Chen 1977; North 1974; Pernia 1976), this was not always the case. Most of the early Asian immigrants to the United States, like their counterparts from Europe, arrived with only minimal educational qualifications. The important exception was early Japanese immigrants. Data from the 1960 Census show that Japanese immigrants, above age 65 in 1960, had a median eight years of schooling-comparable to the figure for the white population of the same age (U.S. Bureau of the Census 1963a, 1963c). This finding is corroborated by earlier studies which report a very selective pattern of Japanese immigration to the United States, particularly to the mainland (Ichihashi 1932; Kitano 1976; Petersen 1971; Strong [1934] 1970).
(2) The home environment before first grade does have a large effect on outcomes (e.g., Brooks-Gunn et al, 1996). Of course, exposure to different kinds of things in the household would explain certain outcomes later in life, such as test scores.
In the book Temperament and Race, Porteus and Babcock (1926: 119-120) discussed one researcher’s racial rankings of grades in the following chart, which shows findings similar to Coleman’s:

[Chart not reproduced here.]
They also discussed the Thorndike Examination of High School Graduates in Hawaii from 1922-1923, in which the Chinese and Japanese scored below whites, though this could reflect their lack of full English proficiency. Chun (1940: 35) showed that “Anglo Saxons” had Binet IQs of 100, with IQs of 87 and 85 for the Chinese and Japanese respectively, similar to what Porteus and Babcock (1926) showed for the Chinese and Japanese; this too could be due to low English proficiency. Chun (1940) also shows that there were a large number of schools for the Chinese. Coupled with the fact that immigrants aren’t a random sample of the population from which they derive, selection therefore explains these values. It’s quite clear that the Chinese had good education from the 1880s onward with the introduction of Chinese schools on the mainland and in Hawaii; given also that Japanese immigrants had education on par with whites at the time, the selectivity of the population along with the education they received clearly mattered.
When it comes to Asian immigration post-1965, “The new preference system allowed highly skilled professionals, primarily doctors, nurses, and engineers from Asian countries, to immigrate and eventually to sponsor their families” (Hirschman, 2015), and the Act also meant that a majority of newly arriving nurses came from Asia (Rockett et al, 1989; Masselink and Jones, 2015). Erika Lee notes in The Making of Asian America (2017: 287):
As in the past, Asian immigrants are highly regulated by immigration laws, but the emphasis of U.S. laws in admitting family-sponsored immigrants and professional, highly skilled individuals has meant that the majority of new arrivals come to join family already here and bring a different set of educational and professional skills than earlier immigrants.
Hsin and Xie (2014) showed that higher academic effort, rather than “cognitive ability” or sociodemographics, explains the Asian-White achievement gap. They argue that beliefs about academic effort, along with immigrant status, explain the relationship. Teachers have higher, more positive expectations for Asian students, and such positive stereotypes further influence their excelling—a pygmalion effect (Hsin and Xie, 2014). And so the Asian-White achievement gap is driven by the Asian-White difference in academic effort, not by IQ or SES. I don’t see an issue with using teacher ratings, since teacher assessments correlate highly with IQ—at .65 as one review notes (Hoge and Coladarci, 1989), while a newer analysis showed a correlation of .80 (Kaufmann, 2019). Lee (2014) described why Asians have higher academic effort in comparison to Americans:
differences in the cultural frame and the resources used to support it help to explain why the children of some Asian immigrant groups get ahead, despite their socioeconomic disadvantage.
However, Hsin and Xie (2014) do note a suite of negative effects:
Studies show that Asian-American youth are less psychologically adjusted (32) and socially engaged (33) in school than their white peers. They may experience more conflict in relationships with parents because of the high educational expectations their parents place on them (32). Asian-American youth are under pressure to meet extraordinarily high standards because they consider other high achieving coethnics, rather than native-born whites, to be their reference group (7).
Even low-SES Asians have a high drive to succeed in academics, having a work ethic similar to the white and Asian middle class, and one attempted explanation appeals to Confucian values (Liu and Xie, 2016). Lee and Zhou (2020), though, have successfully argued against this claim, noting that second-generation Chinese in Spain do not have such high educational attainment (Yiu, 2013), which refutes the reduction of Asian educational attainment to Confucian beliefs, especially since other Asian immigrants who do not share such Confucian beliefs are also hyper-selected. And while Asian American parents do hold higher educational expectations for their children than white American parents do (Kao, 1995), this too is consistent with the Lee and Zhou account.
In a series of papers, Sakamoto (2017) and Sakamoto and Wang (2020) try to argue against the hyper-selectivity thesis. Sakamoto and Wang, I think, underestimate the importance of hyper-selectivity in explaining Asian educational achievements. They argue that cultural factors explain Asian American success, while Zhou and Lee (2017) argue that it’s due to selective migration patterns that favor highly able immigrants. Sakamoto and Wang claim that cultural factors explain most of Asian achievement, but Zhou and Lee state that cultural factors alone—factors like Confucianism—cannot account for it. While individual effort does play a role, as Hsin and Xie (2014) argue, cultural and structural factors of course also play a role, and the argument given by Sakamoto and Wang can be refuted by the following argument:
(1) If selective migration is a significant factor in explaining the success of Asian Americans, then class background can’t be the sole explanation of their success. (2) Selective migration is a significant factor in explaining the success of Asian Americans. (3) But Sakamoto and Wang claim that class background is basically the only reason for higher Asian American achievement. Since (3) contradicts (2), and (2) is true, we can reject (3). Thus, the argument in Sakamoto and Wang does not refute the argument in Zhou and Lee.
Further, not all Asian immigrants enjoy the same level of success, since some Asian immigrant groups (like South Asians) are less likely to have selective migratory patterns than East Asians. This supports the claim that selective migration, not culture alone, is paramount in explaining Asian American academic achievement. Hyper-selectivity on its own does not set the stage for Asian American achievement, but it does set the stage for the remaking of cultural practices which then foster educational success. Culture does matter, but not in the way that most conceptualize it. Sakamoto and Wang do not refute Zhou and Lee, since Zhou and Lee (2017: 8) provide evidence that “culture has structural roots and that cultural patterns emerge from structural circumstances of contemporary immigration.”
Hereditarian explanations of Asian educational achievement
For decades, hereditarians have argued that Asian educational achievements in contrast to whites’ are due to their “cognitive ability” (“IQ”), held to be genetically mediated on the basis of heritability estimates. For instance, hereditarians use data from transracial adoptees to try to argue that genetic differences cause IQ differences between Asians and whites and between whites and blacks. However, these results can be explained by the beneficial effects of adoption on IQ and by the Flynn effect (Thomas, 2017).
Hereditarians claim that, since they argue for East Asian superiority, they therefore are not racists. Sautman (1994: 80) noted how hereditarians claim that, since they speak of East Asians as superior to whites, they show a lack of bias in their assessment of racial differences:
In clustering East Asians and whites as genetically-favored and Africans, Southeast Asians and others as disfavored, Western race theorists use East Asians as a “racial wedge” against other non-whites. They argue that highlighting East Asian, not white, superiority shows an absence of bias. Thus, a criminologist who links putatively higher crime rates of US blacks to r-strategy reproduction underscores that he is “not a member of the least criminal racial group” (i.e. East Asians). A professor of management writes that whites will feel more comfortable in recognizing black inferiority if they know that East Asians outscore whites on IQ tests. A British journalist has queried “If they [East Asians] can be cleverer than we are, why can’t we be cleverer than some other group?”
This is just as Hilliard (2012: 86) remarks:
[Herrnstein, Murray and Rushton] used this representation of whites as more cognitively advanced than blacks but less than Asians to silence those critics who insisted that the race researchers’ findings were ethnically self-serving. Rushton thus posed the question, “If my work was motivated by racism, why would I want Asians to have bigger brains than whites?” … it became useful to tout the Asians’ cognitive superiority but only so long as whites remained above blacks in the cognitive hierarchy.
The phrase “Mongoloid idiot” was coined due to supposed similarities between Asians and people with Down syndrome. Asians were also seen as a sexual danger to white women, and this corresponded with how they were perceived: race scientists concluded that they had smaller brains than whites. This is noted in Lieberman’s (2001) Table 1 on the ever-changing skull-size differences between the races.

[Table not reproduced here.]
The hierarchy changed right as East Asia began to modernize and boom economically (Lieberman, 2001). So we go from racism against East Asians (naming syndromes after them, saying they have small brains and large penises) to model minorities with high IQs, larger brains, lower sex drives, and booming economies. This speaks to the context-dependence of such claims, and shows that attitudes toward certain groups do indeed change over time.
To attempt to explain IQ and other differences between races, Lynn proposed that the harshness of cold winters shaped the cognitive skills of Europeans and East Asians over millennia, and that this explains why Asians score higher than whites and whites higher than blacks (Lynn, 1991, 2006a: 135-136, 2019; Rushton, 1997: 228-230, 2012). Many issues with these just-so stories and evolutionary theories (r/K theory) have been raised, showing that they merely “explain” observations while making no novel predictions, never mind the anthropological misunderstandings from Lynn, Rushton, Jensen, and Kanazawa. Lynn (1991) attempted to show that children from Hong Kong had faster reaction times and higher IQs than British children, which he interpreted as having a neurological basis. Though, due to omissions and misinterpretations of data, we cannot accept Lynn’s conclusions (Thomas, 2011).
Lynn (2006b) repeats the same claims he has made since he started to collate studies on national “IQs” (see Richardson, 2004). Beginning in 2002, Lynn and Vanhanen attempted to collate a mass of IQ studies from around the world to show the “intelligence of nations” (Lynn and Vanhanen, 2002; Lynn and Becker, 2019). But beyond the fact that Lynn cherry-picked Chinese IQ studies that fit his a priori beliefs, “‘National IQ’ datasets do not provide accurate, unbiased or comparable measures of cognitive ability worldwide” (Sear, 2022; also see Moreale and Levendis, 2012; Ebbeson, 2020).
On that same note, the Chinese are notorious for cheating on standardized tests, including the SAT, the GRE, and other examinations, and some pay up to $6000 to have people take tests for them. A large UCLA cheating ring was also recently busted. There is also the fact that the OECD allows China to administer the PISA in only select regions, so PISA results cannot be claimed to be representative of China. Further, the Chinese have what is called the “hukou system,” a tool for controlling migration from rural to urban areas. So even though some children may, for example, attend school in Shanghai, their hukou requires them to return to their province of origin. It’s clear that the Chinese game standardized tests: they game the PISA system by being selective about which students in Shanghai are administered the test, which the hukou system makes possible.
Lynn (2010) argued that it is unnecessary to attribute the success of East Asians to Confucian values (this is true), and that IQ explains East Asian success in math and science. Though what does explain their success is their selectivity, not their IQ. Lynn (2006a: 89) claimed that “The Chinese and Japanese who emigrated to the United States in the second half of the nineteenth century were largely peasants who came to do unskilled work on the construction of the railways and other building work.” While this is true to a point, it’s irrelevant and skirts the fact that, as Hirschman and Wong noted, Japanese immigrants had educational parity with whites before the 1960s, and that Chinese laborers were indeed selected, which also affected their children in a positive manner.
In a now-retracted paper, Rushton (1992) opined that one “theoretical possibility” to explain why Asians have more “K” traits than “r” traits (see Anderson, 1991 for critique) is that evolution is progressive and that Asians are “more “advanced”” than other groups. But the fact of the matter is that evolution isn’t progressive. Nevertheless, Rushton (1995) attempted to defend his arguments from Yee (1992) by rehashing the same claims, bringing up Lynn’s study on reaction time and IQ (refuted by Thomas) along with Jensen’s (refuted by Sautman). He brings up the “evidence” from transracial adoption studies (see Thomas, 2017). Rushton then brings up brain size, citing the supposedly larger brains of Asians (see above from Lieberman on how this seems to change with the times). Rushton then discusses “other variables,” like his crime data (refuted by Cernovsky and Littman, 2019), testosterone, and twinning (see Allen et al, 1992). This is all beside the point that Gorey and Cryns (1995) showed that any behavioral differences between Rushton’s three races can be explained by environment, while Peregrine, Ember, and Ember (2003) showed no cross-cultural statistical support for Rushton’s theory.
Hereditarians hold up Asians in order to claim that they are not racists, since why would a racist say that Asians are “better” than whites? This, though, merely gives hereditarians cover. Nevertheless, the arguments hereditarians use about Asian academic achievement and IQ fail, since they rely on numerous false assumptions and arguments.
Conclusion
Immigration in the past was mixed between positive and negative selection, but today is largely positive (Abramitzky and Boustan, 2017). In recent years, Asian immigrants were more highly selected than non-Asian immigrants (Huang, 2022). Asians have been the largest percentage of immigrants since 2009 (National Academy of Science, 2018). Lynn (2006a: 97) claims that “environmentalists do not offer any explanation for the consistently high IQ of East Asians, and it is doubtful whether any credible environmental explanation can be found.” But this claim fails since hyper-selectivity explains Asian educational achievements over whites.
The study of race differences, then, is completely political (Jackson, 2006). Since science is a social activity, then one’s political leanings and values would influence the science they seek out to do (Barnes, Zieff, and Anderson, 1999). This is wonderfully illustrated by the claims of hereditarians about Asians who are just using them as a cover to peddle racist inferiority tropes about blacks.
I have described how Asians have come in waves to America over the past 150 years. I have also shown how most immigrants today, and specifically Asians, are positively selected. I have further described a process of selection in certain Asian groups during the early 1900s. The hyper-selectivity thesis explains Asian American achievement, given what hyper-selectivity entails and the processes that hyper-selected immigrants go through. I then explained how hereditarians attempt to use Asians as cover for their racism, but their arguments are invalid and rely on numerous false assumptions. Having said all of that, here are the arguments:
The hyper-selectivity thesis does not ignore the challenges faced by working-class and lower-income Asians; it merely highlights unique characteristics of the Asian American experience which allow them to overcome economic barriers and achieve high levels of academic and economic success. It also does not ignore the role of racism and discrimination, but it suggests that even in the face of these, Asian Americans have unique characteristics due to their selectivity that still enable them to achieve highly. And it is supported by a large body of empirical and theoretical evidence which shows the robustness of the phenomenon across different contexts and time periods. Thus, the thesis is of value for understanding the Asian American experience in the United States. Furthermore, we can reject the genetic hypothesis of Lynn, as Sue and Okazaki have successfully argued. Having said all that, I have formalized the arguments made in this article.
P1: If the unique cultural and socioeconomic resources of Asian American immigrants have allowed them to achieve high levels of success, then hyper-selectivity is true.
P2: Empirical evidence shows that Asian immigrants and their children have achieved high levels of success, outperforming other racial and ethnic groups in the US in education and income.
C: Thus, the hyper-selectivity thesis is true.
P1: If Asian American immigrants possess unique cultural and socioeconomic resources which allow them to achieve high levels of success, then hyper-selectivity is true.
P2: If Asian American immigrants have achieved high levels of success in the US, then they possess unique cultural and socioeconomic resources.
C: Thus, if Asian American immigrants have high levels of success in the US, then hyper-selectivity is true.
Now let me connect these two arguments:
P1: If hyper-selectivity is true, then the academic achievements of Asian Americans are not due solely to socioeconomic status.
P2: If the academic achievements of Asian Americans isn’t solely due to socioeconomic status, then the achievement gap between groups cannot be fully explained by socioeconomic status (but it can be explained by effort, not cognitive ability).
P3: Hyper-selectivity is true (see arguments above).
C: Thus the achievement gap between Asians and other races cannot be fully explained by socioeconomic status (1, 2, and 3).
P4: (Using addition) Overwhelming evidence shows that Asian Americans outperform other races in America, regardless of socioeconomic status.
C2: So hyper-selectivity remains the best explanation of Asian American academic success, despite critics who state it’s solely due to socioeconomic status (2, 3, and 4 using addition).
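The core inference chain above (P1-P3 to the first conclusion) can be checked mechanically. As a minimal sketch (the variable names H, S, and F are my own shorthand for “hyper-selectivity is true,” “the gap is solely due to SES,” and “the gap is fully explained by SES”), a brute-force truth table confirms the form is valid:

```python
from itertools import product

# P1: H -> not S, P2: not S -> not F, P3: H; conclusion: not F.
# The form is valid iff no truth assignment makes all premises true
# while the conclusion is false.
def argument_is_valid() -> bool:
    for H, S, F in product([True, False], repeat=3):
        premises = (
            (not H or not S)   # P1: H implies not S
            and (S or not F)   # P2: not S implies not F
            and H              # P3: H holds
        )
        if premises and F:     # here the conclusion "not F" would be false
            return False
    return True

print(argument_is_valid())  # True
```

This only establishes that the argument form (a hypothetical syllogism plus modus ponens) is valid; the truth of the premises themselves is what the body of the article argues for.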
Does the G-Spot Exist?
2100 words
Many people believe that a thing called “the G-spot”—the Grafenberg spot—exists. The g-spot has been referred to as the female prostate (Puppo, 2014), and it has also been theorized to be an extension of the clitoris (O’Connell et al, 2005). A debate is raging in the urology literature, with one side saying that the g-spot exists and the other saying it does not.
In the 1660s, the anatomist and physiologist Regnier de Graaf studied male testicles. In 1668, he made a drawing of dissected male testicles, theorizing that the tubule of the epididymis was necessary for sperm to be ejaculated into the vagina, since it was known at the time that the testes were necessary for what would later be called spermatozoa. We now know that he was right (Turner, 2015). He also correctly theorized that the genesis of life is within the fertilized egg, so he “was the first researcher to solve the mystery of reproduction” (Thiery, 2009). It is possible that de Graaf knew of an erogenous zone inside the vagina that caused immense sexual pleasure when enough pressure was put on it, but it was the German gynecologist Ernst Grafenberg who identified what is today known as “the g-spot” in the anterior wall of the vagina (Rabinerson et al, 2007; Edwards, 2022). But recently there have been many papers attempting to show that it is either reality or myth. Does it exist?
In 1981, Perry and Whipple taught Kegel exercises to women to help their stress urinary incontinence. Women who lost fluid through the urethra during sexual stimulation had strong pelvic floor muscles, while women with stress urinary incontinence had weak pelvic floor muscles. Women with strong pelvic floor muscles reported losing fluid from the urethra during sexual stimulation, and some even during orgasm. They thus found that women with stronger pelvic floor muscles were more likely to experience female ejaculation than women with weak pelvic floor muscles. The women reported feeling the sensation in the anterior wall of the vagina, and when that wall was stimulated with two fingers using a “come here” motion, the area swelled. They then called it the “Grafenberg spot,” or g-spot (Whipple, 2015).
Whether or not the g-spot exists has implications for whether or not there is a distinction between clitoral and vaginal orgasms. If the spot is real, then vaginal orgasms are possible. But if the spot is not real, then vaginal orgasms are impossible and all orgasms are clitoral orgasms. Puppo et al (2015) claim that the so-called vaginal orgasms that women report are “always caused by the surrounding erectile organs (triggers of female orgasm).” What Puppo (2014) calls “the female penis; female erectile organs” “are believed to be responsible for female orgasm.” (also see Whipple, 2015). The g-spot is said to be located below the front of the vagina. Schubach (2002) claims there is identity between the female prostate, g-spot, and Skene’s gland, while also stating that the g-spot is not really a “spot” but more of an area. Using histology, Thabet et al (2009) showed that about 18 percent of their sample of Egyptian women didn’t have a g-spot.
The g-spot is basically said to be a vaginal erogenous zone that, when sufficiently stimulated, can produce a vaginal orgasm independent of clitoral stimulation. But Mollaioli et al (2021) reviewed the history of the vagina in reproductive anatomy, stating that it was once thought that the vagina was an inert organ only for delivering babies. They conclude:
that the G-spot surely exists and is present, developed, and active on a tremendously individual basis. However, it is not a spot, and to reduce the risks of misinterpretations and vacuous discussions, it cannot be called G anymore. It is indeed a functional, hormone-dependent area, which may trigger VAOs and in some cases also FEs, well defined as CUV.
There is also what is termed the “A-spot”, which is the anterior fornix erogenous zone, and is said to be 2 inches above the g-spot, the “U-spot”, which is above the urethral opening, and the “C-spot” (clitourethrovaginal complex) (Jannini et al, 2014; Vieira-Baptista, 2021). While it is generally accepted that the anterior vaginal wall is the most sensitive part of the vagina, there seems to be no clear-cut anatomic thing—despite claims to the contrary—that can be termed a “g-spot” in the vagina. Now, this doesn’t mean that vaginal orgasms aren’t a thing, as many women can attest to.
According to Ostrzenski (2019), the g-spot is defined as a physiological response but has no apparent anatomic correlate; yet if there is a physiologic response, there must be an anatomic correlate that allows it. Using a cadaver, Ostrzenski (2012) observed that it does indeed have an anatomic structure, near the upper part of the urethral meatus. He observed that it “appeared as a well-delineated sac,” with anatomic similarities to erectile tissue. But he has financial conflicts of interest here, since he, as a gynecologist, offers “g-spot fat augmentation and g-spot surgical augmentation,” a plastic surgery intervention (Herold et al, 2015; Ostrzenski, 2018; Triana, 2019).
Reviews of the “spot” agree that there is no single anatomic area in the vagina that we can call “the g-spot” (Jannini et al, 2014; Vieira-Baptista, 2021). But Maratos et al (2015) showed that there is evidence for an “in vivo morphological correlate” of the g-spot and that its visibility in MRI can be enhanced using certain techniques (also see Wylie, 2016). Hoag et al (2017) argue that there is no discrete anatomic entity that can be termed a “g-spot,” though Ostrzenski (2018) claims that the “spot” is observable in Hoag et al’s figure 4A. In a study of 309 Turkish women, about half of the sample (n=151) stated that the g-spot does exist, and those who believed in it had better scores in genital perception and sexual functioning (Kaya and Caliskan, 2018); this shows the importance of what one thinks about the g-spot for one’s sexual satisfaction. Buisson et al (2010) used ultrasound on a volunteer couple to ascertain the existence of the g-spot. They observed the penis inflating the vagina, which led to a stretched clitoral root that “has consequently a very close relationship with the anterior vaginal wall. This could explain the pleasurable sensitivity of this anterior vaginal area called the G-spot.” However, Sivaslioglu et al (2021) studied live tissue (not tissue from a cadaver) and concluded that there is no g-spot on the anterior vaginal wall.
In an ultrasonographic study, Gravina et al (2008) observed a correlation of .863 between the thickness of the distal urethrovaginal segment and vaginal orgasm and a .884 correlation between vaginal wall thickness and the likelihood of experiencing a vaginal orgasm. They also found that women with a thinner vaginal wall were less likely to report having a vaginal orgasm. This could be explained by more nerve endings in women who have thicker vaginal walls.
It is claimed that women who prefer longer penises are more likely to achieve vaginal orgasm (Costa, Miller, and Brody, 2012; evolutionary psychologist Geoffrey Miller is an author on this paper). There is also a just-so story saying that the female orgasm could be an adaptation or byproduct (see Puts, Dawood and Welling, 2012 and Wheatley and Puts, 2015). However, adaptationist hypotheses are nothing more than just-so stories, and this is another example of Panglossian thinking. In any case, 95 percent of women report clitoral orgasm, 65 percent report vaginal orgasm, and 35 percent report orgasm from stimulation of the cervix (Jannini et al, 2019).
Other researchers reject the claim that there is one spot that causes a vaginal orgasm, and that the vagina is not passive, but is dynamically active in causing pleasure to the woman. Due to the anatomic and dynamic relationships between the clitoris, urethra, and anterior vaginal walls (where the spot is hypothesized to be located), this “led to the concept of a clitourethrovaginal (CUV) complex, defining a variable, multifaceted morphofunctional area that, when properly stimulated during penetration, could induce orgasmic responses” (Jannini et al, 2014).
Conclusion
The debate on the existence of the g-spot and vaginal orgasms continues with no clear-cut answer in the literature. It is a vexing question, and there are many people with many different views on its structure and physiology. I think the CUV complex is a better candidate than an actual localized “spot” or “button” in the vagina, as it speaks to the dynamic nature of the vagina. Pfaus et al (2016) conclude:
The distinction between different orgasms, then, is not between sensations of the external clitoris and internal vagina, but between levels of what a woman understands a ‘whole’ orgasm to consist of. This depends on the experience with direct stimulation of the external clitoris, internal clitoris, and/or cervix, but also with knowledge of the arousing and erotic cues that predict orgasm, knowledge of her own pattern of movements that lead to it, and experience with stimulation of multiple external and internal genital and extra-genital sites (e.g. lips, nipples, ears, neck, fingers, and toes) that can be associated with it. Orgasms do not have to come from one site, nor from all sites; and they do not have to be the same for every woman, nor for every sexual experience even in the same woman, to be whole and valid. And it is likely that such knowledge changes across the lifespan, as women experience different kinds of orgasms from different types of sensations in different contexts and/or with different partners. Thus, what constitutes a ‘whole’ orgasm depends on how a woman sums the parts and the individual manner in which she scales them along flexible dimensions of arousal, desire, and pleasure. The erotic body map a woman possesses is not etched in stone, but rather is an ongoing process of experience, discovery, and construction which depends on her brain’s ability to create optimality between the habits of what she expects and an openness to new experiences.
While for a negative view, Kilchevsky et al (2012) conclude:
The distal part of the anterior vaginal wall appears to be the most sensitive region of the vagina, yet the existence of an anatomical “G-spot” on the anterior wall remains to be demonstrated. Objective investigative measures, either not available or not applied when Hines first published his review article over a decade ago, still fail to provide irrefutable evidence for the G-spot’s existence. This may be, in part, because of the extreme variability of the female genitalia on an individual level or, more likely, that this mythical location does not exist.
I think there is something to the anterior vaginal wall that could lead to full-body orgasms, but Hoch (1986) states that “the entire anterior vaginal wall” was “found to be erotically sensitive in most of the women examined.” It is indeed accepted that the anterior vaginal wall is the most sensitive part of the vagina, but that doesn’t mean that the g-spot is a thing (Pan et al, 2015). Ling et al (2014) showed that the proximal and distal thirds of the anterior vaginal wall had more innervation (nerve endings) and better vascularization, which implies that the vagina may have a sex-sensitive function just like the clitoris. Song et al (2015) also showed that the distal part of the anterior vaginal wall had more innervation in “seven fresh Korean cadavers.” But such studies of vaginal innervation were noted in one review to be contradictory (Vieira-Baptista et al, 2021).
The experiences of women who claim to have had a vaginal orgasm should not be discarded, but, as a paper cited above notes, it is possible that such an orgasm is merely a clitoral orgasm too. Nevertheless, I don’t see this debate being settled anytime soon, and both sides have good arguments. What I think would be best is to just accept the clitourethrovaginal (CUV) complex, a larger erogenous zone—not a spot—composed of the urethra, vaginal wall, paraurethral glands, and the root of the clitoris, since most of the clitoral components are under the skin (Pauls, 2015). Though Puppo (2015) claims that the entire clitoris “is an external organ”, there seem to be “clitoral bulbs” between the crura and vaginal wall (O’Connell et al, 2005).
On the So-Called “Laws of Behavioral Genetics”
2400 words
In the year 2000, psychologist Eric Turkheimer proposed three “laws of behavioral genetics” (LoBG hereafter):
● First Law. All human behavioral traits are heritable.
● Second Law. The effect of being raised in the same family is smaller than the effect of genes.
● Third Law. A substantial portion of the variation in complex human behavioral traits is not accounted for by the effects of genes or families. (Turkheimer, 2000: 160)
In March of 2021, I asked Turkheimer how he defined “law.” He responded: “With tongue in cheek. In fact, it’s a null hypothesis: an expected result when nothing in particular is going on.”
Chabris et al (2015) proposed a 4th “law”: a typical behavioral trait is associated with many variants, each of which explains a small amount of behavioral variability. They state that the “4th law” explains the failure of candidate gene studies and also the need for larger sample sizes in GWA studies. (It seems they are not aware that larger sample sizes increase the probability of spurious correlations—which is all GWA studies are; Calude and Longo, 2016; Richardson, 2017; Richardson and Jones, 2019.) Nice ad hoc hypothesis to save their thinking.
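As a toy illustration of that parenthetical point (my own sketch, not taken from the cited papers), the following simulates a trait and thousands of “SNPs” that, by construction, have no effect on it. Roughly 5% of the null markers nonetheless cross a nominal p < .05 threshold, so the more variants a study scans, the more purely spurious “hits” it accumulates:

```python
import math
import random

random.seed(42)

N_PEOPLE = 500   # sample size
N_SNPS = 2000    # independent null markers with NO true effect

# Trait values: pure noise, independent of every simulated "SNP".
trait = [random.gauss(0.0, 1.0) for _ in range(N_PEOPLE)]

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Approximate two-sided p < .05 cutoff for a null correlation:
# |r| > 1.96 / sqrt(N)
cutoff = 1.96 / math.sqrt(N_PEOPLE)

hits = 0
for _ in range(N_SNPS):
    snp = [random.choice((0, 1, 2)) for _ in range(N_PEOPLE)]  # genotypes
    if abs(pearson_r(snp, trait)) > cutoff:
        hits += 1

# Roughly 5% of purely random markers "associate" with the trait.
print(f"spurious hits: {hits} of {N_SNPS}")
```

With around a million variants, as in a real GWA study, the same arithmetic predicts on the order of tens of thousands of spurious associations before any multiple-testing correction, which is why such corrections and ever-larger samples are needed in the first place.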
One huge proponent of the LoBG is JayMan, who has been on a crusade for years pushing this nonsense. He added a “5th law” proposed by Emil Kirkegaard, which states that “All phenotypic relationships are to some degree genetically mediated or confounded.”
But what is a “law”, and are these “laws of behavioral genetics” laws in the actual sense? First I will describe what a “law” is and whether there even are biological laws. Then I will address each “law” in turn. I will conclude that the LoBG aren’t real “laws”: they are derived from faulty thinking about the relationship between genes, traits, environment and the system, and their derivations rest on false assumptions.
What is a law? Are there biological laws?
Laws are “true generalizations that are ‘purely quantitative’ … They have counterfactual force” (Sober, 1993: 458). Philosopher of mind Donald Davidson argued that laws are strict and exceptionless (Davidson, 1970; David-Hillel, 2003). That is, there must be no exceptions to the law. Sober (1993) discusses Rosenberg’s and Beatty’s arguments against laws of biology—where Rosenberg states that the only law in biology is “natural selection.” (See Fodor, 2008 and Fodor and Piattelli-Palmarini, 2009, 2010 for the argument against that claim and for arguments against the existence of laws of selection that can distinguish between causes and correlates of causes.) It has even been remarked that there are “so few” laws in biology (Dhar and Giuliani, 2010; also see Ruse, 1970).
Biology isn’t reducible to chemistry or physics (Marshal, 2021), since there are certain things about biology that neither chemistry nor physics has. If there are laws of biology, then they will be found at the level of the organism or its ecology (Rull, 2022). In fact, although three laws of biology have been proposed (Trevors and Sailer Jr., 2008), they appear to be mere regularities, including McShea and Brandon’s (2010) first law of biology; all “laws of biology” seem to be mere laws of physics (Wayne, 2020). The “special sciences”, it seems, “are not fit for laws” (Kim, 2010). There seem to be, then, no uncontroversial laws or regularities in biology (Hamilton, 2007).
Now that I have described what laws are and have argued that there probably aren’t any biological laws, what does that mean for the LoBG? I will take each “law” in turn.
“Laws” of behavioral genetics
(1) All human behavioral traits are heritable.
JayMan gives derivations for the “laws”, and (1) and (2) have their bases in twin studies. We know that the equal environments assumption is false (Charney, 2012; Joseph, 2014; Joseph et al, 2015), and so if the EEA is false then we must reject genetic claims from twin study proponents. Nevertheless, the claim that these “laws” have any meaning gets pushed around a lot.
When it comes to the first law, the claim is that “ALL human behavioral traits are heritable”—note the emphasis on “ALL.” So this means that if we find only ONE behavioral trait that isn’t heritable, then the first law is false.
Reimann, Schilke, and Cook (2017) used a sample of MZ and DZ twins and asked questions related to trust and distrust. They, of course, claim that “MZ and DZ twins share comparable environments in their upbringing”—which is false, since MZ twins have more comparable environments. Nevertheless, they conclude that while trust has a heritability of 30%, “ACE analyses revealed that the estimated heritability [for] distrust is 0%.” This, therefore, means that the “1st law” is false.
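For context on where ACE figures like that 30% and 0% come from, here is a minimal sketch of the classic Falconer decomposition of twin correlations (the correlations below are hypothetical, chosen only to reproduce figures of that size; the formula itself presupposes the EEA criticized above):

```python
def falconer_ace(r_mz, r_dz):
    """Classic Falconer decomposition of MZ/DZ twin correlations into
    A (additive genetic, the "heritability"), C (shared environment),
    and E (non-shared environment + error). Presupposes the EEA."""
    a2 = 2 * (r_mz - r_dz)   # additive genetic ("heritability")
    c2 = 2 * r_dz - r_mz     # shared environment
    e2 = 1 - r_mz            # non-shared environment + error
    return a2, c2, e2

# Hypothetical twin correlations that yield a ~30% heritability estimate:
a2, c2, e2 = falconer_ace(0.30, 0.15)
print(f"trust-like:    a2={a2:.2f}  c2={c2:.2f}  e2={e2:.2f}")

# When MZ and DZ twins correlate equally, the estimate is 0%:
a2, c2, e2 = falconer_ace(0.25, 0.25)
print(f"distrust-like: a2={a2:.2f}  c2={c2:.2f}  e2={e2:.2f}")
```

The point is that the “heritability” figure is just an arithmetic rearrangement of twin correlations; if the EEA behind that rearrangement is false, the number inherits that falsity.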
This “first law”, the basis of which is twin, family, and adoption studies, is why we have poured countless dollars into this research, and of course people have their careers (in what is clear pseudoscience) to worry about, so they won’t stop these clearly futile attempts in their search for “genes for” behavior.
(2) The effect of being raised in the same family is smaller than the effect of genes.
This claim is clearly nonsense, and one reason why is that the first “law” is false. In any case, there is a large effect of birth order on children’s outcomes, and parental attitudes—particularly mothers’—affect child outcomes (Lehmann, Nuevo-Chiquero, and Vidal-Fernandez, 2018).
Why would birth order have an effect? Quite simply, the first-born child will get more care and attention than children born after, and so variations in parental behavior due to birth order can explain differences in education and life outcomes. They conclude that “broad shifts in parental behavior appear to set later-born children on a lower path for cognitive development and academic achievement, with lasting impact on adult outcomes.” Thus, Murray’s (2002) claim that birth order doesn’t matter and JayMan’s claim that “the family/rearing environment has no effect on eventual outcomes” are clearly false. Thus, along with the falsity of the “1st law”, the “2nd law” is false, too.
(3) A substantial portion of the variation in complex human behavioral traits is not accounted for by the effects of genes or families.
This “law” covers the rest of the variance not covered by the first two “laws.” It was formulated because the first two “laws” left variance that they did not “explain.” So this is basically unique experience—what behavioral geneticists call “non-shared environment.” Of course, unique experiences (that is, subjective experiences) would definitely “shape who we are”, and part of our unique experiences are cultural. We know that cultural differences can have an impact on psychological traits (Prinz, 2014: 67). So culture would explain why these differences aren’t “accounted for” by the first two “laws.”
Yet we didn’t need the LoBG to know that individual differences are difference-makers for differences in behavior and psychology. So what we choose to do can affect our propensities and then, of course, our behavior. Non-shared environmental effects are specific to the individual and can include differing life events; that is, they are effectively random. Non-shared environment, then, is the part of the environment that isn’t shared. Going back to Lehmann, Nuevo-Chiquero, and Vidal-Fernandez (2018) above: although children grow up in the same family under the same household, they are different ages and so experience different life events. They also experience the same things differently, due to the subjectivity of experience.
In any case, the dichotomy between shared and non-shared environment upholds the behavioral geneticists’ main tool—the heritability estimate—from which these “laws” derive (from studies of twins, adoptees, and families). So, given how this “law” was formulated (to cover the portions “unaccounted for” by the first two “laws”), and since it rests on those two false “laws”, the third “law” is false as well.
(4) Human behavioral traits are associated with many genes of small effect which contribute to a small amount of behavioral variability.
This “law” was formulated by Chabris et al (2015) due to the failure of molecular genetic studies, which had hoped to find genes of large effect to explain behavior. This “law” also “explains why the results of ‘candidate-gene’ studies, which focus on a handful of genetic variants, usually fail to replicate in independent samples.” What this means to me is simple—it’s an ad-hoc account, formulated to save the gene-searching of behavioral geneticists after the candidate gene era proved a clear failure, as Jay Joseph noted in his discussion of the “4th law.”
So here is the timeline:
(1) Twin studies show above-0 heritabilities for behavioral traits.
(2) Since twin studies show high heritabilities for behavioral traits, then there must be genes that will be found upon analyzing the genome using more sophisticated methods.
(3) Once we started to peer into the genome after the completion of the Human Genome Project, we came to find candidate genes associated with behavior. Candidate gene studies “look at the genetic variation associated with disease within a limited number of pre-specified genes”; they examine genes “believed to be” associated with the trait in question. Kwon and Goate (2000) wrote that “The candidate gene approach is useful for quickly determining the association of a genetic variant with a disorder and for identifying genes of modest effect.” But Sullivan (2020) noted that “Historical candidate gene studies didn’t work, and can’t work.” Charney (2022) noted that the candidate gene era was a “failure” and is now a “cautionary tale.”
Quite clearly, they were wrong then, and the failure of the candidate gene era led to the ad-hoc “4th law.” This has followed us into the GWAS and PGS era, where it is claimed that we aren’t finding all of the heritability that twin studies say we should find, since the traits under review are due to many genes of small effect. It’s literally a shell game—when one claim is shown to be false, invent a reason why what you expected to find wasn’t found, and then continue to search for genes “for” behavior. But genetic interactions create a “phantom heritability” (Zuk et al, 2011), while behavioral geneticists assume that gene effects are additive. They simply ignore interactions, although they pay them lip service.
So why, then, should we believe behavioral geneticists today in 2023 when they say we need larger and larger samples to find these mythical genes “for” behavior using GWAS? We shouldn’t. They will abandon GWAS and PGS in a few years when the new kid on the block shows up that they can champion, claiming that the mythical genes “for” behavior will finally be found.
(5) All phenotypic relationships are to some degree genetically mediated or confounded.
This claim is something that comes up a lot—the claim of genetic confounding (and mediation). A confound is a third variable that influences both the dependent and independent variable. The concept of genetic confounding was introduced during the era when it was debated whether or not smoking caused lung cancer (Pingault et al, 2021). (Do note that Ronald Fisher (1957), who was a part of this debate, claimed that smoking and lung cancer were both “influenced by a common cause, in this case individual genotype.”)
However, in order for the genetic confounding claim to work, its proponents need to articulate a mechanism that explains the supposed confounding: G confounds X and Y iff there is a genetic mechanism which causally explains X and Y, causally explains X independent of Y, and causally explains Y independent of X. Absent such a mechanism, the genetic confounding claim is a cop-out that holds no water.
Conclusion
The “laws of behavioral genetics” uphold the false dichotomy of genes and environment, nature and nurture. Developmental systems theorists, though, have rightly argued that it is a false dichotomy (Homans, 1979; Moore, 2002; Oyama, 2002; Moczek, 2012) and that it is just not biologically plausible (Lewkowicz, 2012). In fact, the h2 statistic assumes that G and E are independent, non-interacting factors, so if that assumption is false then—for this among many other reasons—we shouldn’t accept their conclusions. The fact that G and E interact means that, of course, we should reject h2 estimates, and along with them, the entire field of behavioral genetics.
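The independence assumption can be made explicit with the standard variance decomposition behind h2 (a textbook-level sketch, not a formula from the works cited):

```latex
% Decomposition assumed by heritability analysis: phenotypic variance
% splits cleanly into a genetic part and an environmental part.
V_P = V_G + V_E, \qquad h^2 = \frac{V_G}{V_P}

% Once G and E are allowed to covary and interact, extra terms appear
% that belong to neither "genes" nor "environment" alone:
V_P = V_G + V_E + 2\,\mathrm{Cov}(G,E) + V_{G \times E}
```

With the covariance and interaction terms present, the ratio V_G/V_P no longer isolates a purely “genetic” share of the variance, which is exactly the problem with treating G and E as independent, non-interacting factors.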
Since the EEA is false, h2 equals c2. Furthermore, h2 equals 0. So Polderman et al’s (2015) meta-analysis doesn’t show that, for all traits in the analysis, h2 equals 49%. (See Jay Joseph’s critique.) Turkheimer (2000: 160) claimed that the nature-nurture debate is over, since everything is heritable. However, the debate is over because the developmental systems approach has upended the false dichotomy of nature vs nurture: all developmental resources interact, and their contributions to development are therefore irreducible.
However, for the field to continue to exist, it needs to promulgate the false dichotomy, since its heritability estimates depend on it. It also needs to hold onto the claim that twin, family and adoption studies can show the “genetic influence” on traits, to justify the continued search for genes “for” behavior. Zuk and Spencer (2020) called the nature-nurture “debate” “a zombie idea, one that, no matter how many times we think we have disposed of it, springs back to life.” This is just like Oyama (2000), who compared arguing against gene determinism to battling the undead (Griffiths, 2006).
Jay Joseph proposed a 5th “law” in 2015 where he stated:
Behavior genetic Laws 1-4 should be ignored because they are based on many false assumptions, concepts, and models, on negative gene finding attempts, and on decades of unsubstantiated gene discovery claims.
The “laws” should quite obviously be ignored. Since the whole field of behavioral genetics is based on them, why not abandon the search for “genes for behavior”? At the end of the day, it seems like there are no “laws” of behavioral genetics, since laws are strict and exceptionless. So why do they keep up with their claims that their “laws” tell us anything about human behavior? Clearly, it’s due to the ideology of those who hold that the all-important gene causes traits and behavior, so they will do whatever it takes to “find” them. But in 2023, we know that this claim is straight up false.
The Answer to Hereditarianism is Developmental Systems Theory
4150 words
Introduction
It is claimed that genes (DNA sequences) have a special, privileged role in the development of all traits. But once we understand what genes do and their role in development, we will understand that the role ascribed to genes by gene-selectionists and hereditarians outright fails. Indeed, the whole “nature vs nurture” debate implies that genes determine traits and that it’s possible to partition the relative contributions of genes and environment to traits. This, however, is far from reality (as are heritability estimates, which presuppose such a partition).
DST isn’t a traditional scientific theory—it is more a theoretical perspective on developmental biology, heredity, and evolution, though it does make some general predictions (Griffiths and Hochman, 2015). But aspects of it have been used to generate novel predictions in accordance with the extended evolutionary synthesis (Laland et al, 2015).
Wilson (2018: 65) notes six themes of DST:
● Joint determination by multiple causes: development is a process of multiple interacting sources.
● Context sensitivity and contingency: development depends on the current state of the organism.
● Extended inheritance: an organism inherits resources from the environment in addition to genes.
● Development as a process of construction: the organism helps shape its own environment, such as the way a beaver builds a dam to raise the water level to build a lodge.
● Distributed control: no single source of influence has central control over an organism’s development.
● Evolution as construction: the evolution of an entire developmental system, including whole ecosystems of which organisms are parts, not just the changes of a particular being or population.
Genes (DNA sequences) as resources and outcomes
Hereditarians have a reductionist view of genes and what they do. Genes, to the hereditarian, are causes not only of development but of traits and evolution, too. However, the hereditarian is sorely mistaken—there is no a priori justification for treating genes as privileged causes over and above other developmental resources (Noble, 2012). I take Noble’s argument to mean that strong causal parity is true—where causal parity means that all developmental resources are on par with each other, with no one resource having primacy over another. They all need to “dance in tune” with the “music of life” to produce the phenotype, to borrow Noble’s (2006, 2017) analogy. Hereditarian dogma also has its basis in the neo-Darwinian Modern Synthesis, which has gotten causality in biology wrong. Genes are, simply put, passive, not active, causes:
Genes, as DNA sequences, do not of course form selves in any ordinary sense. The DNA molecule on its own does absolutely nothing since it reacts biochemically only to triggering signals. It cannot even initiate its own transcription or replication. … It would therefore be more correct to say that genes are not active causes; they are, rather, caused to give their information by and to the system that activates them. The only kind of causation that can be attributed to them is passive, much in the way a computer program reads and uses databases. (Noble, 2011)
These ideas, of course, also cut against the claim that genes are blueprints or recipes, as Plomin (2018) claims in his most recent book (Joseph, 2022). The blueprint metaphor implies that genes are context-independent, but we have known for years that genes are massively context-sensitive. Since DNA is but one of the developmental resources the physiological system uses to create the phenotype, the hereditarian claim fails. Genes are not causes on their own.
Behavioral geneticist and evolutionary psychologist J. P. Rushton (1997: 64) claims that a study shows that “genes are like blueprints or recipes providing a template for propelling development forward to some targeted endpoint.” That is, Rushton is saying that there is context-independent “information” in genes and that genes, in essence, guide development toward a targeted endpoint. Noah Carl (2019) claims that the hereditarian hypothesis “states that these differences [in cognitive ability] are partly or substantially explained by genetics.” When he says the differences are “partly or substantially explained by genetics”, he’s saying that “cognitive ability” is caused by genes. The claim that genes cause (either partly or substantially) cognitive ability—and all traits, for that matter—fails, and it fails because genes don’t do what hereditarians think they do. (Never mind the conceptual reasons.) These claims are laughable, given what Noble, Oyama, Moore, and Jablonka and Lamb have argued. It is outright false that genes are like blueprints or recipes. Rushton’s reductionism is of a sociobiological type, while Plomin’s is of a behavioral-genetic type.
In The Dependent Gene, David Moore (2002: 81) talks about the context-dependency of genes:
Such contextual dependence renders untenable the simplistic belief that there are coherent, long-lived entities called “genes” that dictate instructions to cellular machinery that merely constructs the body accordingly. The common belief that genes contain context-independent “information”—and so are analogous to “blueprints” or “recipes”—is simply false.
Genes are always expressed in context and cannot be divorced from said context, like hereditarians attempt using heritability analyses. Phenotypes aren’t “in the genes”, they aren’t innate. They develop through the lifespan (Blumberg, 2018).
Causal parity and hereditarianism
Hereditarianism can be said to be a form of genetic reductionism (and mind-brain identity). The main idea of reductionism is to reduce the whole to the sum of its parts and then analyze those parts. Humans (the whole) are made up of genes (the parts), so to understand human behavior, and humans as a whole, we must then understand genes, so the story goes.
Cofnas (2020) makes several claims regarding the hereditarian hypothesis and genes:
But if we find that many of the same SNPs predict intelligence in different racial groups, a risky prediction made by the hereditarian hypothesis will have passed a crucial test.
…
But if work on the genetics and neuroscience of intelligence becomes sufficiently advanced, it may soon become possible to give a convincing causal account of how specific SNPs affect brain structures that underlie intelligence (Haier, 2017). If we can give a biological account of how genes with different distributions lead to race differences, this would essentially constitute proof of hereditarianism. As of now, there is nothing that would indicate that it is particularly unlikely that race differences will turn out to have a substantial genetic component. If this possibility cannot be ruled out scientifically, we must face the ethical question of whether we ought to pursue the truth, whatever it may be.
Haier is a reductionist of not only the gene variety but the neuro variety—he attempts to reduce “intelligence” to genes and neurology (brain physiology). I have, though, strongly criticized the use of fMRI neuroimaging studies regarding IQ; cognitive localizations in the brain are untenable (Uttal, 2001, 2011), and this is because mind-brain identity is false.
Cofnas asks “How can we disentangle the effects of genes and environment?” and states that the behavioral geneticist has two ways—correlations between twins and adoptees, and GWAS. Unfortunately for Cofnas, twin and adoption studies show no such thing (see Ho, 2013), most importantly because the EEA is false (Joseph, 2022a, b). GWA studies are also fatally confounded (Janssens and Joyner, 2019), and PGS don’t show what behavioral geneticists need them to show (Richardson, 2017, 2022). The concept of “heritability” is also a bunk notion (Moore and Shenk, 2016). (Also see below for further discussion of heritability.) At the end of the day, we can’t do what the hereditarian needs done for their explanations to hold any water. And this is even before we look at the causal parity between genes and other developmental resources. Quite obviously, the hereditarian hypothesis is a gene-centered view, and it is of course a reductionist view. And since it is a reductionist, gene-centered view, it is false.
Genetic, epigenetic, and environmental factors operate as a system to form the phenotype. Since this is true, both genetic and epigenetic determinism are false (also see Wagoner and Uller, 2015). They are false because the genes one is born with, or develops with, don’t dictate or determine anything, especially not academic achievement, as hereditarian gene-hunters would so gleefully claim. And one’s early experience need not dictate an expected outcome, since development is a continuous process. That does not mean, though, that environmental maladies experienced during childhood won’t have lasting effects into adulthood, since they may affect one’s psychology, anatomy, or physiology.
The genome is responsive—that is, it is inert until it is activated by the physiological system. When we put DNA in a petri dish, it does nothing, because DNA cannot be said to be a replicator separate from the cell (Noble, 2018). So genes don’t do anything independent of the context they’re in; they do what they do DUE TO the context they’re in. This is like Gottlieb’s (2007) probabilistic epigenesis, where the development of an organism is due to the coaction of irreducible, bidirectional biological and environmental influences. David S. Moore, in The Developing Genome: An Introduction to Behavioral Epigenetics, states this succinctly:
Genes—that is, DNA segments—are always influenced by their contexts, so there is never a perfect relationship between the presence of a gene and the ultimate appearance of a phenotype. Genes do not determine who we become, because nongenetic factors play critical roles in trait development; genes do what they do at least in part because of their contexts.
What he means by “critical roles in trait development” is clear if one understands Developmental Systems Theory (DST). DST was formulated by Susan Oyama (1985) in her landmark book The Ontogeny of Information, where she argues that nature and nurture are not antagonistic but cooperative in shaping the development of organisms. Genes do not play a unique informational role in development. Thus, nature vs. nurture is a false dichotomy—it’s nature interacting with nurture, or GxE. This interactionism between nature and nurture—genes and environment—is a direct refutation of hereditarianism. What matters is context, and the context is never independent of what is going on during development. Genes aren’t the units of selection; the developmental system is, as Oyama explains in Evolution’s Eye:
If one must have a “unit” of evolution, it would be the interactive developmental system: life cycles of organisms in their niches. Evolution would then be change in the constitution and distribution of these systems (Oyama, 2000b)
Genes are important, of course, for the construction of the organism—but so are other resources. Without genes, there would be nothing for the cell to read to initiate transcription. However, without the cellular environment, we wouldn’t have DNA. Lewontin puts this wonderfully in the introduction to the 2000 edition of Ontogeny:
There are no “gene actions” outside environments, and no “environmental actions” can occur in the absence of genes. The very status of environment as a contributing cause to the nature of an organism depends on the existence of a developing organism. Without organisms there may be a physical world, but there are no environments. In like manner no organisms exist in the abstract without environments, although there may be naked DNA molecules lying in the dust. Organisms are the nexus of external circumstances and DNA molecules that make these physical circumstances into causes of development in the first place. They become causes only at their nexus, and they cannot exist as causes except in their simultaneous action. That is the essence of Oyama’s claim that information comes into existence only in the process of Ontogeny. (2000, 15-16)
Genes aren’t causes on their own, they are resources for development. And being resources for development, they have no privileged level of causation over other developmental resources, such as “methylation patterns, membrane templates, cytoplasmic gradients, centrioles, nests, parental care, habitats, and cultures” (Griffiths and Stotz, 2018). All of these things, and more of course, need to work in concert with each other.
Indeed, this is the causal parity argument—the claim that genes aren’t special developmental resources, that they are “on par” with other developmental resources (Griffiths and Gray, 1994; Griffiths and Stotz, 2018). Gene knockout studies show that the loss of a gene can be compensated by other genes—which is known as “genetic compensation.” None of the developmental resources play a more determinative role than other resources (Noble, 2012; Gamma and Liebrenz, 2019). This causal parity, then, has implications for thinking about trait ontogeny.
The causal parity of genes and other developmental factors also implies that genes cannot constitute sufficient causal routes to traits, let alone provide complete explanations of traits. Full-blown explanations will integrate various kinds of causes across different levels of organizational hierarchy, and across the divide between the internal and the external. The impossibly broad categories of nature vs. nurture that captured the imagination of our intellectual ancestors a century ago are no longer fit for the science of today. (Gamma and Liebrenz, 2019)
Oyama (2000a: 40) articulates the causal parity thesis like this:
What I am arguing for here is a view of causality that gives formative weight to all operative influences, since none is alone sufficient for the phenomenon or for any of its properties, and since variation in any or many of them may or may not bring about variation in the result, depending on the configuration of the whole.
While Griffiths and Hochman (2015) formulate it like this:
The ‘parity thesis’ is the claim that if some role is alleged to be unique to nucleic acids and to justify relegating nongenetic factors to a secondary role in explaining development, it will turn out on closer examination that this role is not unique to nucleic acids, but can be played by other factors.
Genes are necessary preconditions for trait development, just as the other developmental resources are. No humans without genes—so genes are necessary preconditions. But “if genes, then humans” would make genes sufficient for human life, and that is false: genes are but one part of what makes humans human. Only when all of the interactants are present can the phenotype be constructed. So it is all of the developmental resources interacting together that are sufficient.
The nature vs. nurture dichotomy can be construed in such a way that nature and nurture are competing explanations. However, we now know that the dichotomy is a false one and that the third way—interactionism—is how we should understand development. Despite hereditarian protestations, DST/interactionism refutes their claims. The "information" in the genes, then, cannot explain how organisms are made, since information is constructed dialectically between the resources and the system. There is a multiplicity of causal factors involved in this process, and genes can't be privileged among them. Thus the phrase "genetic causation" isn't a coherent concept. Moreover, DNA sequences aren't even coherent outside of their cellular context (Noble, 2008).
Griffiths and Stotz (2018) put the parity argument like this:
In The Ontogeny of Information Oyama pioneered the parity argument, or the ‘parity thesis’, concerning genetic and environmental causes in development (see also Griffiths and Gray 1994; Griffiths and Gray 2005; Griffiths and Knight 1998; Stotz 2006; Stotz and Allen 2012). Oyama relentlessly tracked down failures of parity of reasoning in earlier theorists. The same feature is accorded great significance when a gene exhibits it, only to be ignored when a non-genetic factor exhibits it. When a feature thought to explain the unique importance of genetic causes in development is found to be more widely distributed across developmental causes, it is discarded and another feature is substituted. Griffiths and Gray (1994) argued in this spirit against the idea that genes are the sole or even the main source of information in development. Other ideas associated with ‘parity’ are that the study of development does not turn on a single distinction between two classes of developmental resources, and that the distinctions useful for understanding development do not all map neatly onto the distinction between genetic and non-genetic.
Shea (2011) tries to argue that genes do have a special role, and that is to transport information. Genes are, of course, inherited, but so is every other part of the system (resources). Claiming that there is information “in the genes” is tantamount to saying that there is a special role for DNA in development. But, as I hope will be clear, this claim fails due to the nature of DNA and its role in development.
This line of argument leads to one clear conclusion—genes are followers, not leaders; most evolution begins with environmentally-mediated phenotypic change, and genetic changes occur afterward (West-Eberhard, 2003). Ho and Saunders (1979) state that variation in organisms is constructed during development due to an interaction between genetic and non-genetic factors. That is, genes follow what the developmental system needs done; they aren't leading development, they are but one party in the whole symphony of development. Development can be said to be irreducible, so we cannot reduce development to genes or anything else, as all interactants need to be present for development to be carried out. Since genes are activated by other factors, it is incoherent to talk of "genetic causes." Genes affect the phenotype only when they are expressed, and other resources, too, affect the phenotype. This is, ultimately, an argument against genes as blueprints, codes, recipes, or any other kind of flowery language one can use to impute what amounts to intention to inert DNA.
Even though epigenetics invalidates all genetic reductionism (Lerner and Overton, 2017), genetic reductionist ideas still persist. Lerner and Overton give three reasons why these ideas persist despite the conceptual, methodological, and empirical refutations: (1) use of terms like "mechanism", "trait", and "interaction"; (2) constantly shifting to other genes once the purported "genes for" traits didn't work out; and (3) they "buried opponents under repetitive results" (Panofsky, quoted in Lerner and Overton, 2017). The fact of the matter is, there are so many lines of evidence and argument that refute hereditarian claims that it is clear the only reason why one would still be a hereditarian in this day and age is ignorance—or racism.
Genes, that is, are servants, not masters, of the development of form and individual differences. Genes do serve as templates for proteins: but not under their own direction. And, as entirely passive strings of chemicals, it is logically impossible for them to initiate and steer development in any sense. (Richardson, 2016)
DST and hereditarian behavioral genetics
I would say that DST challenges three claims from hereditarian behavioral genetics (HBG hereafter):
(1) The claim that we can neatly apportion genes and environment into different causes for the ontogeny of traits;
(2) Genes are the only thing that are inherited and that genes are the unit of selection and a unique—that is, special and privileged cause over and above other resources;
(3) That genes vs. environment, blank slate vs. human nature, are valid dichotomies.
(1) HBG relies on attempting to portion out the causes of traits into genetic and environmental causes. The heritability statistic presumes additivity, that is, it assumes no interaction. This is patently false. Charney (2016) gives the example of schizophrenia—it is claimed that 50 percent of the heritability of schizophrenia is accounted for by 8,000 genes, which means that each SNP accounts for 1/8000 of that half of the heritability. This claim is clearly false, as genetic effects aren't additive, and the additivity assumption precludes the interaction of genes with genes and with the environment, interactions which create new developmental contexts. Biological systems are not additive, they're interactive. Heritability estimates, therefore, are attempts at dichotomizing what is not dichotomizable (Rose, 2005).
An approach that partitions variance into independent main effects will never resolve the debate because, by definition, it has no choice but to perpetuate it. (Goldhaber, 2012)
This approach, of course, is the one that attempts to partition variance into G and E components. The assumption is that G and E are additive. But as DST theorists have argued for almost 40 years, they are interactive, not additive, so heritability estimates fail on conceptual grounds (as well as many others). Heritability estimates have been, and continue to be, at the heart of the continuance of the nature vs. nurture distinction—the battle, if you will. But if we accept Oyama's causal parity argument—and given the reality of how genes work in the system, I see no reason why we shouldn't—then we should reject hereditarianism. Hereditarians have no choice but to continue the false dichotomy of nature vs. nurture. Their "field" depends on it. But despite the fact that the main tools of the behavioral geneticist (twin and adoption studies) rest on false pretenses, they still try to show that heritability estimates are valid in explaining trait variation (Segalowitz, 1999; Taylor, 2006, 2010).
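To make the additivity point concrete, here is a minimal sketch (with made-up, illustrative numbers, not real data) of why an additive G + E variance partition breaks down under G x E interaction. With a crossover interaction, the genotype and environment "main effects" are both zero, yet all of the phenotypic variance remains:

```python
# Illustrative sketch: when genotype (G) and environment (E) interact,
# an additive decomposition of phenotypic variance leaves a residual
# interaction component that cannot be assigned to G or E alone.
# The values below are hypothetical, chosen to give a crossover interaction.
from itertools import product
from statistics import mean, pvariance

score = {("g1", "e1"): 10, ("g1", "e2"): 2,
         ("g2", "e1"): 2,  ("g2", "e2"): 10}

grand = mean(score.values())

# Additive "main effects": deviations of genotype/environment means
# from the grand mean.
g_effect = {g: mean(score[(g, e)] for e in ("e1", "e2")) - grand
            for g in ("g1", "g2")}
e_effect = {e: mean(score[(g, e)] for g in ("g1", "g2")) - grand
            for e in ("e1", "e2")}

# Interaction residual: what remains after removing both main effects.
resid = {(g, e): score[(g, e)] - grand - g_effect[g] - e_effect[e]
         for g, e in product(("g1", "g2"), ("e1", "e2"))}

total_var = pvariance(list(score.values()))
interaction_var = pvariance(list(resid.values()))

print(g_effect, e_effect)           # both main effects are 0.0
print(interaction_var / total_var)  # -> 1.0
```

In this deliberately extreme case, an additive G + E model "explains" none of the variance: every bit of it lives in the interaction term that the additivity assumption rules out by fiat.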
(2) More than genes are inherited. Jablonka and Lamb (2005) argue that there are four dimensions—interactants—to evolution: genetic, epigenetic, behavioral, and symbolic. They show the context-dependency of the genome, meaning that genotype does not determine phenotype. What does determine the phenotype, as can be seen from the discussion here, is the interaction of developmental resources during development. Clearly, there are many inheritance systems other than genes. There is also the fact that the gene as popularly conceived does not exist—so it should be the end of the gene as we know it.
(3) Lastly, DST throws out the false dichotomy of genes and environment, nature and nurture. DST—in all of its forms—rejects this outright false dichotomy. Nature and nurture are not in a battle with each other, attempting to decide which is to be the determining factor in trait ontogeny. They interact, and this interaction is irreducible. So we can't reduce development to genes or environment (Moore, 2016). Development isn't predetermined, it's probabilistic. The stability of phenotypic form isn't found in the genes (Moore and Lickliter, 2023).
Conclusion
Genes are outcomes, not causes, of evolution, and they are not causes of trait ontogeny on their own. The reality is that strong causal parity is true, so genes cannot be regarded as a developmental resource special or distinct from other resources—that is, genes are not privileged resources. Since they are not privileged resources, we need, then, to dispense with any and all concepts of development that champion genes as the leader of the developmental process. It is the system that develops, not genes, with genes being but one of the many interactants that shape phenotypic development.
By relying on the false narrative that genes are causes and that they cause not only our traits but our psychological traits and what we deem “good” and “bad”, we would then be trading social justice for hereditarianism (genetic reductionism).
These recommended uses of bad science reinforce fears of institutionalized racism in America and further the societal marginalization of minority groups; these implications of their recommendations are never publicly considered by those who promulgate these flawed extensions of counterfactual genetic reductionism. (Lerner, 2021)
Such [disastrous societal] applications can only rob people of life chances and destroy social justice. Because developmental science has the knowledge base to change the life course trajectories of people who are often the targets of genetic reductionist ideas, all that remains to eradicate genetic reductionism from scientific discussion is to have sufficient numbers of developmental scientists willing to proclaim loudly and convincingly that the naked truth is that the “emperor” (of genetic reductionism) has no clothes. (Lerner, 2021: 338)
Clearly, hereditarians need the nature vs. nurture debate to continue so they can push their misunderstandings about genes and psychology. However, given our richer understanding of genes and how they work, we now know that hereditarianism is untenable, and DST conceptions of the gene and of development as a whole have led us to that conclusion. Lerner (2017) stated that as soon as the failure of one version of genetic reductionism is observed, another one pops up—making it like a game of whack-a-mole.
The cure for hereditarian genetic reductionism is a relational developmental systems (RDS) model. This model has its origins in Uri Bronfenbrenner's ecological systems theory (Bronfenbrenner and Ceci, 1994; Ceci, 1996; Patel, 2011; Rosa and Tudge, 2013). Development is about the interaction and relation between the individual and the environment, and this is where RDS theory comes in. Biology, physiology, culture, and history are all studied to explain human development (Lerner, 2021). Hereditarian ideas cannot give us anything like what models derived from developmental systems ideas can. An organism-environment systems view can be more fruitful, since the organism and environment are inseparable (Jarvilehto, 1998; Griffiths and Gray, 2002). And it is for these reasons, among many, many more, that hereditarian genetic reductionist ideas should become mere sand in the wind.
Having said all that, here’s the argument:
P1: If hereditarianism is true, then strong causal parity is false.
P2: Strong causal parity is true.
C: Therefore hereditarianism must be false.
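The argument has the form of modus tollens, so its validity (the inference form, not the truth of the premises) can even be machine-checked. As a sketch in Lean 4, with `H` standing for "hereditarianism is true" and `S` for "strong causal parity is true" (the propositional names are my own labels):

```lean
-- Modus tollens sketch of the closing argument (Lean 4).
-- H : hereditarianism is true; S : strong causal parity is true.
theorem hereditarianism_false
    {H S : Prop}
    (p1 : H → ¬S)  -- P1: if hereditarianism is true, strong causal parity is false
    (p2 : S)       -- P2: strong causal parity is true
    : ¬H :=        -- C: therefore hereditarianism is false
  fun h => p1 h p2
```

The proof term simply feeds the assumed `H` into P1 and collides the resulting `¬S` with P2; whether the conclusion holds thus turns entirely on whether P1 and P2 are true, which is what the essay above argues.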