NotPoliticallyCorrect

The “Great Replacement Theory”

2550 words

Introduction

The “Great Replacement Theory” (GRT hereafter) is a white nationalist conspiracy theory (conceptualized by the French writer Renaud Camus) which holds that there is an intentional effort by some shadowy group (i.e., Jews and global elites) to bring large numbers of immigrants with high total fertility rates (TFRs) into majority-white countries where whites have low TFRs, in order to displace and replace whites in those countries (Beirich, 2021). Vague statements have been made about these immigrants' “IQs”: that they would be easier to “control,” and that they would intermix with whites, further decreasing the IQ of the nation and making it more controllable, all the while the main goal of the GRT (the destruction of the white race) would come to fruition. Here, I will go through what I take to be the two premises of the GRT, and then I will show how those premises (which I hold to be obviously true) don't guarantee the conclusion that the GRT is true and that there is an intentional demographic replacement. I will discuss precursors of the theory that are roughly 100 years old. I will then discuss what “theory” and “conspiracy theory” mean and how, by definition, the GRT is both a theory (an attempted explanation of observed facts) and a conspiracy theory (suggesting a secret plan for the destruction and replacement of the white race).

The genesis of the GRT

The idea behind the GRT is older than the discussion it spurred in the new millennium, but its modern usage can be traced to the French writer Renaud Camus in his book Le Grand Remplacement.

But one of the earliest iterations of the GRT is the so-called “Kalergi plan.” Richard von Coudenhove-Kalergi was one of the founders of the Pan-European Union (Wiedemer, 1993). In his 1925 book Practical Idealism, Kalergi wrote that “The man of the future will be of mixed race. Today's races and classes will gradually disappear owing to the vanishing of space, time, and prejudice. The Eurasian-Negroid race of the future will replace the diversity of peoples with a diversity of individuals.” This is similar to what Grant (1922: 110) wrote in The Passing of the Great Race:

All historians are familiar with the phenomenon of a rise and decline in civilization such as has occurred time and again in the history of the world but we have here in the disappearance of the Cro-Magnon race the earliest example of the replacement of a very superior race by an inferior one. There is great danger of a similar replacement of a higher by a lower type here in America unless the native American uses his superior intelligence to protect himself and his children from competition with intrusive peoples drained from the lowest races of eastern Europe and western Asia.

The idea of a great replacement is obviously much older than what spurs it on today. Movement was much tougher back then, as the technology for mass migration was only beginning to become mainstream (think of the mass migrations of European groups to America from the 1860s up until the 1930s). Even the migration of other whites from Europe was cast as a kind of “replacement” of Protestant Anglo-Saxon ways of life. Nonetheless, these ideas of a great replacement are not new, and these two men (one of whom, Kalergi, didn't intend his words in the nefarious way that modern white nationalists read them when citing the quote as evidence of the GRT) are used as evidence that it is occurring.

Kalergi envisioned a positive blending of the races, whereas Grant expressed concern about so-called “inferior” groups replacing so-called “superior” groups. Grant, in arguing that the superior Cro-Magnon race had been replaced by an inferior one, expressed worry about demographic replacement, which is the basis of the GRT today and what the GRT essentially reduces to. The combination of these opposing perspectives on the mixing of races (the positive one from Kalergi and the negative one from Grant) shows that the idea of a great replacement is much older than Camus' worry in his book. (And, as I will argue, the fact that the two premises below are true doesn't guarantee the conclusion of the GRT.)

The concept of the GRT

The GRT has two premises:

(1) Whites have below-replacement fertility (a low TFR).
(2) Immigrants have above-replacement fertility (a high TFR).

Which then should get us to:

(C) Therefore, the GRT is true.

But how does (C) follow from (1) and (2)? The GRT suggests not only a demographic shift in which the majority (whites) is replaced and displaced by minorities (in America, mostly “Hispanics”), but that this is intentional; that is, it is one man or group's intention for this to occur. The two premises above refer to factual, verifiable claims: whites have fewer children; immigrants coming into America have more children. But just because those two premises are true, it does not follow that the conclusion (that the GRT is true) is also true. The two premises concern the fertility rates of two groups (American whites and immigrants to America), and accepting both of them does not commit one to the claim that an act of intentional displacement is occurring. We can allow the truth of both premises without being led to the truth of the GRT, because the GRT adds a further, unsupported claim: that the change is intentionally driven by some secret, shadowy, and sinister group (the Jews or some other amalgamation of elites who want easy “slave labor”).
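To make this gap concrete, here is a minimal sketch (my own formalization, not part of the original argument) that treats the two premises and the conclusion as independent propositions and brute-forces the truth table: an argument is valid only if no assignment makes every premise true while the conclusion is false.

```python
from itertools import product

def entails(premises, conclusion, n_vars=3):
    """Return True iff the argument is valid: no truth assignment
    makes every premise true while the conclusion is false."""
    for row in product([True, False], repeat=n_vars):
        if all(p(*row) for p in premises) and not conclusion(*row):
            return False
    return True

# p1: whites have below-replacement TFR
# p2: immigrants have above-replacement TFR
# g:  the replacement is intentional (the GRT's conclusion)
p1 = lambda a, b, g: a
p2 = lambda a, b, g: b
grt = lambda a, b, g: g

# The two demographic premises alone do not entail the GRT:
print(entails([p1, p2], grt))  # False

# Only by smuggling in the bridge premise "(p1 and p2) -> g"
# does the conclusion follow; that hidden premise is exactly
# what GRT proponents never establish.
bridge = lambda a, b, g: (not (a and b)) or g
print(entails([p1, p2, bridge], grt))  # True
```

The counterexample row is simply (p1 true, p2 true, g false): both fertility facts hold and yet no intentional plan exists.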

The GRT was even endorsed by the Buffalo shooter, who heinously shot and killed people in a Tops supermarket. He was driven by claims of the GRT. (The US Congress condemned the GRT as a “White supremacist conspiracy theory,” and I will show how it is a theory and even a conspiracy theory below.) The shooter even plagiarized the “rationale section” of his manifesto (Peterka-Benton and Benton, 2023). This shows that conspiracy theories like the GRT can indeed radicalize people.

Even ex-presidential hopeful Vivek Ramaswamy made reference to the GRT, stating that “great replacement theory is not some grand right-wing conspiracy theory, but a basic statement of the Democratic Party's platform.” Former Fox News political commentator Tucker Carlson espoused these beliefs on his former show as well. Belief in such conspiratorial thinking can quite obviously, as seen with the Buffalo shooter, have devastating negative consequences (Adam-Troian et al, 2023). Thus, these views have hit the mainstream and now strike many Americans as “plausible.”

Such thinking is applied to both Europe and America (the “Islamization”/“Africanization” of Europe and the “browning” of America with “Hispanics” and other groups), where a nefarious plot to replace the white population of both places is alleged. These claims mostly originate in places like 4chan, where users try to “meme” what they want into reality (Aguilar, 2023).

On theories and conspiracy theories

Some may say that the GRT isn't a theory, nor even a conspiracy theory, but a mere observation. I've already allowed that both premises of the argument (whites have below-replacement fertility while immigrants have above-replacement fertility) are true. But that doesn't mean the conclusion that the GRT is true follows, because, as argued above, the GRT posits intentional demographic replacement. Intentional by whom? The Jews and other global elites, the story goes, who want a “dumb” slave population that just listens, produces, and has more children, so as to continue the so-called enslavement of the lower populations.

But, by definition, the GRT is a theory and even a conspiracy theory. The GRT is a theory in virtue of being an explanation for observed demographic changes and the two premises I stated above. It is a conspiracy theory because it suggests a deliberate, intentional plan by the so-called global elite to replace whites with immigrants. Of course, labeling something a conspiracy theory doesn't by itself imply that it's inaccurate or invalid, but the acceptance of both premises does not guarantee the conclusion that those who push the GRT want.

The acceptance of both premises doesn’t mean that the GRT is true. The differential fertility of two groups, where one group (the high fertility group) is migrating into the country of another group (the low fertility group) doesn’t mean that there is some nefarious plot by some group to spur race mixing and the destruction and replacement of one group over another.

As shown above, people may interpret and respond to the GRT in different ways. Some may use it to interpret and understand demographic changes without committing heinous actions, while others, like the Buffalo shooter, may use it in a negative way and take many innocent lives on the basis of belief in the theory. Extreme interpretations of the GRT can shape beliefs which then contribute to negative actions based on the conviction that one's group is being replaced (Obaidi et al, 2021). Conspiracy theories also rely on attributing intent to certain events, which proponents of the GRT do.

Some white nationalists who hold to the GRT state that the Jews are behind this for a few reasons. One I stated above: that they want dumber people with higher TFRs to come in and replace the native white population. Another, which has even less support (if that's even possible), is that the Jews are orchestrating the great migration of non-whites into European countries as revenge and retaliation for Europeans expelling Jews from European countries during the Middle Ages (the oft-repeated “109 countries” claim). This is the so-called “white genocide” conspiracy theory. This is the kind of hate that Trump ran with in his presidential run and in his time in office as president of the United States (Wilson, 2018). It can also be seen in the phrase “Jews/You will not replace us!” during the Charlottesville protests of 2017 (Wilson, 2021). “You” in “You will not replace us!” could refer to Jews, or to the people that the Jews are supposedly having migrate into white countries to replace the white population. Beliefs in such baseless conspiracy theories have led to mass murder in America, Australia, and Norway (Davis, 2024).

One of the main actors in shaping the view that Jews are planning to replace (that is, genocide) whites is the white nationalist and evolutionary psychologist Kevin MacDonald, specifically in his book series on the origin of Jewish evolutionary group strategies: A People that Shall Dwell Alone (1994), Separation and its Discontents (1998a), and The Culture of Critique (1998b). A main argument of this series is that the Jews have an evolved group evolutionary strategy that has them try to undermine and destroy white societies (see Blutinger, 2021, and also Nathan Cofnas' responses to MacDonald's theory). MacDonald's theory of a group evolutionary strategy is nothing more than a just-so story. Such baseless views have been the “rationale” of many mass killings in the 2010s (e.g., Fekete, 2011; Nilsson, 2022). Basically it's “white genocide is happening and the Jews are behind it, so we need to kill those whom the Jews are using to enact their plan, and we need to kill Jews.” (Note that this isn't a call for any kind of violence; it's just a simplified version of what many of these mass killers imply in their writings and motivations for carrying out their heinous attacks.) One thing driving these beliefs, the GRT included, is anti-Semitism (Allington, Buarque, and Flores, 2020). Overall, such claims of a GRT or “white genocide” flourish online (Keulennar and Reuters, 2023). In this instance, it is claimed that Jews are using their ethnic genetic interests and nepotism to spur these events.

Conclusion

I have discussed the GRT argument and with it so-called “white genocide” (since the two are linked). The two premises of the GRT are true (American whites have a low TFR and immigrants have a high TFR), but the truth of the premises doesn't guarantee the conclusion that some great replacement is occurring, since the GRT reduces to a kind of intentional demographic replacement by some group (say, the Jews and other elites in society who want cheap, dumb, easily controllable labor who have more children). The GRT is happening, it is claimed, because the Jews want revenge on whites for expelling them from so many countries. That is, the GRT posits an intentional demographic replacement. Those who push the GRT take the two true premises and then incorrectly conclude that there is some kind of plan to eradicate whites through both the mixing of races and the importation of groups who have more children than whites do.

I have scrutinized what I take to be the main argument of GRT proponents and have shown that the conclusion they want doesn't logically follow. Inherent in the argument as I have formalized it are a hasty generalization fallacy and a fallacy of composition. This shows the disconnect between the premises and the desired conclusion. Further, the classification of the GRT as a conspiracy theory comes from its attribution of an intention to eliminate and eradicate whites through the mass migration of non-white immigrant groups who have more children than whites, along with racial mixing.

The Buffalo shooting in a Tops supermarket in 2022 shows the impact of these beliefs on people primed to believe there is some kind of plan behind demographic change. Even mainstream pundits and a political candidate have pushed the GRT to a wider audience. And as can be seen, belief in such a false theory can, does, and has led to the harm and murder of innocent people.

Lastly, I showed how the GRT is a theory (since it is an attempted explanation for an observed trend) and a conspiracy theory (since it holds that there is a secret plan, with people behind the scenes orchestrating events). Such a shift in demographics need not be the result of some conspiracy with the intention to wipe out one race of people. Of course, some may use the GRT to try to understand how and why demographics are changing in the West, but it is mostly used as a way to pin blame for why whites aren't having more children and why mass immigration is occurring.

All in all, my goal here was to show that the GRT has true premises but a conclusion that doesn't follow, and that it is indeed a theory and a conspiracy theory. I have also shown how such beliefs can and have led to despicable actions. Clearly, beliefs can have negative effects on society. But by rationally thinking about and analyzing such claims, we can show not only that they are baseless, but that the GRT is not merely an observation of demographic trends. Evidence and logic should be valued here, while we reject unwarranted, centuries-old stereotypes about the purported plan of racial domination by certain groups.

Race, Racism, Stereotypes, and Crime: An Argument for Why Racism is Morally Wrong

2300 words

Introduction

(1) Crime is bad. (2) Racism causes crime. (C) Thus, racism is morally wrong. (1) is self-evident, based on people not wanting to be harmed. (2) is known upon empirical examination, as with the TAAO and its successful novel predictions. (C) then logically follows. In this article, I will give the argument in formal notation, show its validity while defending the premises, and show how the conclusion follows from them. I will then discuss two possible counterarguments and show how they fail. I will show that you can derive normative conclusions from ethical and factual statements combined (which bypasses the naturalistic fallacy), and then I will give the general argument. I will discuss other reasons why racism is bad (since it leads to negative physiological and mental health outcomes), conclude that the argument is valid and sound, and discuss how stereotypes and self-fulfilling prophecies also contribute to black crime.

Defending the argument

This argument is obviously valid and I will show how.

B stands for “crime is bad,” C stands for “racism causes crime,” and D stands for “racism is morally wrong.” The argument has two premises: the conditional (B ^ C) -> D (if crime is bad and racism causes crime, then racism is morally wrong) and the conjunction B ^ C. From these, D follows by modus ponens, which is what makes the argument valid.

Saying “crime is bad” is an ethical judgment. “Bad” here expresses a negative moral assessment: engaging in criminal actions is morally undesirable or ethically wrong. The premise asserts a moral viewpoint, claiming that actions that cause harm, including crime, are inherently bad. It takes a normative stance on which criminal behavior is wrong, aligning with the idea that causing harm, violating laws, or infringing upon others is morally undesirable.

When it comes to the premise “racism causes crime,” this needs to be centered on the theory of African American offending (TAAO). It's been established that blacks' experience of racism is causal for crime. So the premise implies that racism is a factor in, or contributes to, criminal behavior among blacks who experience racism. Discriminatory practices based on race (racism) can lead to social inequalities, marginalization, and frustration, which then contribute to criminal behavior among those affected. This also highlights systemic issues, where racist policies or structures create an environment conducive to crime. And on the individual level, experiences of racism can influence certain individuals to engage in criminal activity as a response or coping mechanism (Unnever, 2014; Unnever, Cullen, and Barnes, 2016). Perceived racial discrimination “indirectly predicted arrest, and directly predicted both illegal behavior and jail” (Gibbons et al, 2021). Racists propose that what causes the racial crime gap is a slew of psychological traits, genetic factors, and physiological variables, but even in the 1960s, criminologists and geneticists rejected the genetic hypothesis of crime (Wolfgang, 1964). However, we do know there is a protective effect when parents prepare their children for bias (Burt, Simons, and Gibbons, 2013). Even the role of institutions exacerbates the issue (Hetey and Eberhardt, 2014). And in my article on the Unnever-Gabbidon theory of African American offending, I wrote about one of the predictions that follows from the theory, which was borne out when it was tested.

So it’s quite obvious that the premise “racism causes crime” has empirical support.

So if the conditional (B ^ C) -> D holds and B and C are both true, then D follows. The logical connection between the premises leads to the conclusion that “racism is morally wrong.” I can express this argument using modus ponens.

(1) If (B ^ C) then D. (Expressed as (B ^ C) -> D).

(2) (B ^ C) is true.

(3) Thus, D is true.
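As a sanity check (mine, not part of the original argument), the validity of this modus ponens form can be verified mechanically by brute-forcing all eight truth assignments of B, C, and D:

```python
from itertools import product

def is_valid(premises, conclusion):
    """Valid iff no assignment of B, C, D makes every premise
    true while the conclusion is false."""
    for b, c, d in product([True, False], repeat=3):
        if all(p(b, c, d) for p in premises) and not conclusion(b, c, d):
            return False
    return True

conditional = lambda b, c, d: (not (b and c)) or d  # (B ^ C) -> D
conjunction = lambda b, c, d: b and c               # B ^ C
racism_wrong = lambda b, c, d: d                    # D

print(is_valid([conditional, conjunction], racism_wrong))  # True
```

Validity here is purely formal; soundness additionally requires that B and C are in fact true, which is what the surrounding sections defend.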

The argument as a whole can be generalized: harm is bad, racism causes harm, so racism is morally wrong.

Furthermore, I can generalize the argument further and state that not only that crime is bad, but that racism leads to psychological harm and harm is bad, so racism is morally wrong. We know that racism can lead to “weathering” (Geronimus et al, 2006, 2011; Simons, 2021) and increased allostatic load (Barr 2014: 71-72). So racism leads to a slew of unwanted physiological issues (of which microaggressions are a species of; Williams, 2021).

Racism leads to negative physiological and mental health outcomes (P), and negative physiological and mental health outcomes are undesirable (Q), so racism is morally objectionable (R). The factual statement (P) establishes the link, providing evidence that racism leads to these negative health outcomes. The ethical statement (Q) asserts that negative health outcomes are morally undesirable, which aligns with the common ethical principle that causing harm is morally objectionable. The conjunction (Q ^ P) combines the factual observation of harm caused by racism with the ethical judgment that harm is morally undesirable. The normative conclusion (R) then follows: racism is morally objectionable since it leads to negative health outcomes. So this argument is (Q ^ P) -> R.

Racism can lead to stereotyping of certain groups as more prone to criminal behavior, and this stereotype can be internalized and perpetuated, which then contributes to biased law enforcement and, with it, unjust profiling. It can also lead to systemic inequalities in education, employment, and housing, which are then linked to higher crime rates (in this instance, racism and stereotyping cause the black-white crime gap, as predicted by Unnever and Gabbidon, 2011, and then verified by numerous authors). Further, as I've shown, racism can negatively affect mental health, leading to stress, anxiety, and trauma, and people facing these challenges are more vulnerable to engaging in criminal acts.

Stereotypes and self-fulfilling prophecies

In his book Concepts and Theories of Human Development, Lerner (2018: 298) discusses how stereotyping arises and how self-fulfilling prophecies follow from it. People, based on their skin color, are placed into an unfavorable category. Negative behaviors are then attributed to the group. These attributions are associated with different experiences in comparison to other skin-color groups, and those different experiences delimit the range of possible behaviors that can develop. So the group is forced into a limited number of possible behaviors, the same behaviors it was stereotyped to have. The group finally develops the behavior due to being “channeled” (to use Lerner's word), so that “the end result of the physically cued social stereotype was a self-fulfilling prophecy” (Lerner, 2018: 298).

From the analysis of the example I provided and, as well, from empirical literature in support of it (e.g., Spencer, 2006; Spencer et al., 2015), a strong argument can be made that the people of color in the United States have perhaps experienced the most unfortunate effects of this most indirect type of hereditary contribution to behavior–social stereotypes. Thus, it may be that African Americans for many years have been involved in an educational and intellectual self-fulfilling prophecy in the United States. (Lerner, 2018: 299)

This is an argument about how social stereotypes can spur behavioral development, and it has empirical support. Lerner’s claim that perception influences behavior is backed by Spencer, Swanson and Harpalani’s (2015) article on the development of the self and Spencer, Dupree, and Hartman’s (1997) phenomenological variant of ecological systems theory (PVEST). (Also see Cunningham et al, 2023). Spencer, Swanson and Harpalani (2015: 764) write:

Whether it is with images of the super-athlete, criminal, gangster, or hypersexed male, it seems that most of society’s views of African Americans are defined by these stereotypes. The Black male has, in one way or another, captured the imagination of the media to such a wide extent that media representations create his image far more than reality does. Most of the images of the Black male denote physical prowess or aggression and downplay other characteristics. For example, stereotypes of Black athletic prowess can be used to promote the notion that Blacks are unintelligent (Harpalani, 2005). These societal stereotypes, in conjunction with numerous social, political, and economic forces, interact to place African American males at extreme risk for adverse outcomes and behaviors.

The argument can be put as a chain. A -> B: stereotypes can lead to self-fulfilling prophecies (if there are stereotypes, then they can result in self-fulfilling prophecies). B -> C: self-fulfilling prophecies can increase the chance of crime for blacks (if there are self-fulfilling prophecies, then they can increase the chance of crime for blacks). So A -> C: stereotypes can increase the chance of crime for blacks (if there are stereotypes, then they can increase the chance of crime for blacks). Going back to the empirical studies on the TAAO, we know that racism and stereotypes cause the black-white crime gap (Unnever, 2014; Unnever, Cullen, and Barnes, 2016; Herda, 2016, 2018; Scott and Seal, 2019), and so the argument by Spencer et al and Lerner is yet more evidence that racism and stereotypes lead to self-fulfilling prophecies which then cause black crime. Behavior can quite clearly be shaped by stereotypes and self-fulfilling prophecies.
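The chain A -> B, B -> C, therefore A -> C is a hypothetical syllogism, and its validity can likewise be checked by exhausting the truth table (a sketch of my own, using the letters above):

```python
from itertools import product

def implies(p, q):
    """Material conditional p -> q."""
    return (not p) or q

def chain_valid():
    """Hypothetical syllogism: from A -> B and B -> C, infer A -> C.
    Valid iff no assignment makes both premises true and the
    conclusion false."""
    for a, b, c in product([True, False], repeat=3):
        if implies(a, b) and implies(b, c) and not implies(a, c):
            return False
    return True

print(chain_valid())  # True
```

The formal chain shows only that the inference pattern is valid; the empirical strength of each link (hedged with “can” above) rests on the cited studies.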

Responses to possible counters

I think there are 3 ways that one could try to refute the argument—(1) Argue that B is false, (2) argue that C is false, or (3) argue that the argument commits the is-ought fallacy.

(1) Counter premise: B’: “Not all crimes are morally bad, some may be morally justifiable or necessary in certain contexts. So if not all crimes are morally bad, then the conclusion that racism is morally wrong based on the premises (B ^ C) isn’t universally valid.”

Premise B reflects a broad ethical judgment based on social norms that generally view actions that cause harm as morally undesirable. My argument is based on consequences: that racism causes crime. The legal systems of numerous societies categorize certain actions as crimes because they are deemed morally reprehensible and harmful to individuals and communities. Thus, there is a broad moral stance against actions that cause harm, which is reflected in societal norms.

(2) Counter premise: C’: “Racism does not necessarily cause crime. Since racism does not necessarily cause crime, then the conclusion that racism is objectively wrong isn’t valid.”

Premise C states that racism causes crime. When I say that, it doesn’t mean that every instance of racism leads to an instance of crime. Numerous social factors contribute to criminal actions, but there is a relationship between racial discrimination (racism) and crime:

Experiencing racial discrimination increases the likelihood of black Americans engaging in criminal actions. How does this follow from the theory? TAAO posits that racial discrimination can lead to feelings of frustration and marginalization, and to cope with these stressors, some individuals may resort to committing criminal acts as a way to exert power or control in response to their experiences of racial discrimination. (Unnever, 2014; Unnever, Cullen, and Barnes, 2016; Herda, 2016, 2018; Scott and Seal, 2019)

(3) “The argument commits the naturalistic fallacy by inferring an ‘ought' from an ‘is.' It appears to derive a normative conclusion from factual and ethical statements. So the transition from descriptive premises to moral judgments lacks a clear ethical justification.” This possible counter contends that the ethical statement B and the factual statement C aren't enough to justify the normative conclusion D. It therefore questions whether the argument has good justification for the ethical transition to the conclusion D.

I can show this simply. Observe X causing Y (C). Y is morally undesirable (B). Y is morally undesirable and X causes Y (B ^ C). So X is morally objectionable (D). C begins with an empirical finding; B is the ethical premise. The logical connection is then established with B ^ C (which can be glossed as “harm is morally objectionable and racism causes harm”). This allows me to infer the normative conclusion D while bypassing the charge of committing the naturalistic fallacy: since the premises already include an ethical statement, the argument does not derive an “ought” from an “is” alone. Thus, the ethical principle that harm is morally undesirable, combined with the fact that racism causes harm, allows me to derive the conclusion that racism is morally wrong. So factual statements can be combined with ethical statements to derive ethical conclusions.

Conclusion

This discussion centered on my argument (B ^ C) -> D. The argument was:

(P1) Crime is bad (whatever causes harm is bad). (B)

(P2) Racism causes crime. (C)

(C) Racism is morally wrong. (D)

I defended the truth of both premises and then answered two possible objections, one rejecting B and one rejecting C. I then defended my argument against the charge of committing the naturalistic fallacy by noting that ethical statements can be combined with factual statements to derive normative conclusions. Addressing the possible counters (B' and C'), I argued that there is evidence that racism leads to crime (and other negative health outcomes, generalized as “harm”) in black Americans, and that harm is generally seen as bad, so B' and C' fail. Spencer's and Lerner's arguments, furthermore, show how stereotypes can spur behavioral development, meaning that social stereotypes increase the chance of adverse behavior, namely crime. The TAAO has strong empirical support, and so, since crime is bad and racism causes crime, racism is morally wrong. So to decrease the rate of black crime, we as a society need to change our negative attitudes toward certain groups of people.

Thus, my argument builds a logical connection between harm being bad, racism causing harm, and moral wrongness. In addressing potential objections and clarifying the ethical framework, I arrive at the general argument: harm is bad, racism causes harm, so racism is morally wrong.

The Rockefeller Foundation’s Failure in Finding a General Intelligence Factor in Dogs

2000 words

Introduction

Hereditarians have been trying to prove the existence of a genetic basis of intelligence for over 100 years. In this time, they have used everything from twin, family, and adoption studies to tools from the molecular genetics era like GCTA and GWAS. Using heritability estimates, behavior geneticists claim that since intelligence is highly heritable, there must be a genetic basis to intelligence controlled by many genes of small effect, meaning it's highly polygenic.

In his outstanding book Misbehaving Science, Panofsky (2014) discusses an attempt, funded by the Rockefeller Foundation (RF), to show a genetic basis for dog intelligence and thereby support a genetic basis for intelligence generally. But it didn't work out for them; in fact, the investigation showed the opposite of the result they were looking for. While the researchers did find evidence of some genetic differences between the dog breeds studied, they didn't find evidence for the existence of a “general factor of intelligence” in the dogs. This was explored in Scott and Fuller's 1965 book Genetics and the Social Behavior of the Dog. These researchers outright failed in their task of discovering a “general intelligence” in dogs, and modern-day research corroborates this.

The genetic basis of dog intelligence?

This push to breed a dog that was highly intelligent was funded by the Rockefeller Foundation for ten years at the Jackson Laboratory. Panofsky (2014: 55) explains:

Over the next twenty years many scientists did stints at Jackson Laboratory working on its projects or attending its short courses and training programs. These projects and researchers produced dozens of papers, mostly concerning dogs and mice, that would form much of the empirical base of the emerging field. In 1965 Scott and John Fuller, his research partner, published Genetics and the Social Behavior of the Dog. It was the most important publication to come out of the Jackson Lab program. Scott and Fuller found many genetic differences between dog breeds; they did not find evidence for general intelligence or temperament. Dogs would exhibit different degrees of intelligence or temperamental characteristics depending on the situation. This evidence of interaction led them to question the high heritability of human intelligence—thus undermining a goal of the Rockefeller Foundation sponsors who had hoped to discredit the idea that intelligence was the product of education. Although the behavioral program at Jackson Laboratory declined after this point, it had been the first important base for the new field.

Quite obviously this was the opposite result of what they wanted—dog intelligence was based on the situation and therefore context-dependent.

Scott and Fuller (1965) discuss how they used to call their tests "intelligence tests" but then switched to calling them "performance tests", "since the animals seemed to solve their problems in many ways other than through pure thought or intellect" (Scott and Fuller, 1965: 37), while also writing that "no evidence was found for a general factor of intelligence which would produce good performance on all tests" (1965: 328). They also stated that they found nothing in dogs like the general intelligence factor that is claimed for humans (1965: 472), and that it's a "mistaken notion" to believe in the general intelligence factor (1965: 512). They then conclude, basically, that situationism holds for dogs, writing that their "general impression is that an individual from any dog breed will perform well in a situation in which he can be highly motivated and for which he has the necessary physical capacities" (1965: 512). Indeed, due to the heritability estimates of dog intelligence, Scott came to the conclusion that human heritability estimates "are far too high" (quoted in Paul, 1998: 279). This is something that even Schonemann (1997) noted: the estimates are "too high" because heritability is inflated by the false assumptions of twin studies, which led to the missing heritability crisis. One principal finding was that genetic differences did not appear all at once early in development, to be molded by later experience; rather, they themselves developed under environmental influence. Behavior was extraordinarily variable within an individual and surprisingly similar between individuals.

The results were quite unexpected but scientifically exciting. During the very early stages of development there was so little behavior observed that there was little opportunity for genetic differences to be expressed. When the complex patterns of behavior did appear, they did not show pure and uncontaminated effects of heredity. Instead, they were extraordinarily variable within an individual and surprisingly similar between individuals. In short, the evidence supported the conclusion that genetic differences in behavior do not appear all at once early in development, to be modified by later experience, but are themselves developed under the influence of environmental factors and may appear in full flower only relatively late in life. (Scott and Fuller, 1965)

The whole goal of this study by the Jackson Lab was to show that there was a genetic basis to intelligence in dogs and that they therefore could breed a dog that was intelligent and friendly (Paul, 1998). They also noted that there was no breed which was far and above the best at the task in question. Scott and Fuller found that performance on their tests was strongly affected by motivational and emotional factors. They also found that breed differences were strongly influenced by the environment, where two dogs from different breeds became similar when raised together. We also know that dogs raised with cats show a more favorable disposition towards them (Fox, 1958; cf. Feuerstein and Terkel, 2008; Menchetti et al, 2020). Scott and Fuller (1965: 333) then concluded that:

On the basis of the information we now have, we can conclude that all breeds show about the same average level of performance in problem solving, provided they can be adequately motivated, provided physical differences and handicaps do not affect the tests, and provided interfering emotional reactions such as fear can be eliminated. In short, all the breeds appear quite similar in pure intelligence.

The issue is that by believing that heritability shows anything about how "genetic" a trait is, one then infers that there has to be a genetic basis to the trait in question, and that the higher the estimate, the more strongly the trait is controlled by genes. However, we now know this claim to be false (Moore and Shenk, 2016). More to the point, the simple fact that IQ shows higher heritability than traits in the animal kingdom should have given behavioral geneticists pause. Nonetheless, it is interesting that this study, begun in the 1940s, returned a negative result in the quest to show a genetic basis to intelligence using dogs, an analogy that was always strained since dogs and humans quite obviously are different. Panofsky (2014: 65) also framed these results alongside those of rats that were selectively bred to be "smart" and "dumb":

Further, many animal studies showed that strain differences in behavior were not independent of environment. R. M. Cooper and J. P. Zubek’s study of rats selectively bred to be “dull” and “bright” in maze-running ability showed dramatic differences between the strains in the “normal” environment. But in the “enriched” and especially the “restricted” developmental environments, both strains’ performance were quite similar. Scott and Fuller made a similar finding in their comparative study of dog breeds: “The behavior traits do not appear to be preorganized by heredity. Rather a dog inherits a number of abilities which can be organized in different ways to meet different situations.” Thus even creatures that had been explicitly engineered to embody racial superiority and inferiority could not demonstrate the idea in any simple way

Psychologist Robert Tryon (1940) devised a series of mazes, ran rats through them, and then selectively bred the rats that learned quickest and slowest (Innis, 1992). These differences then seemed to persist across rat generations. But Searle (1949) discovered that the so-called "dumb" rats were merely afraid of the mechanical noise of the maze, showing that Tryon had unknowingly selected for emotionality rather than learning ability. Marlowitz (1969) then concluded that "the labels 'maze-bright' and 'maze-dull' are inexplicit and inappropriate for use with these strains."
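The genotype-by-environment pattern that Cooper and Zubek reported can be sketched in a few lines. The maze-error numbers below are invented placeholders chosen only to reproduce the qualitative pattern described above (large strain gap in the normal environment, little or none in the restricted and enriched ones); they are not the study's actual data:

```python
# Illustrative (invented) maze-error scores in the spirit of Cooper and
# Zubek's selectively bred rat strains. Lower score = fewer errors.
errors = {
    "bright": {"restricted": 170, "normal": 117, "enriched": 112},
    "dull":   {"restricted": 170, "normal": 164, "enriched": 120},
}

for env in ("restricted", "normal", "enriched"):
    gap = errors["dull"][env] - errors["bright"][env]
    print(f"{env:>10}: strain gap = {gap}")

# The "strain difference" is large only in the normal environment and
# shrinks or vanishes in the other two: a genotype-by-environment
# interaction, not a fixed genetic ranking of the strains.
```

The design point is that a single number for a "strain difference" is meaningless without specifying the rearing environment, which is exactly the situationist conclusion Scott and Fuller drew for dogs.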

Dogs and human races are sometimes said to be similar, in that a dog breed can be likened to a human race (see Norton et al, 2019). However, dog breeds are the result of conscious human selection for certain traits, which then creates the breed. So while Scott and Fuller did find evidence for a good amount of genetic differences between the breeds they studied, they did not find any evidence of a genetic basis for intelligence or temperament. This is also good evidence for the claim that a trait can be heritable (have high heritability) but have no genetic basis. Moreover, we know that high levels of training improve dogs' problem-solving ability (Marshall-Pescini et al, 2008, 2016). Further, perceived differences in trainability are due to physical capabilities and not cognitive ones (Helton, 2008). And in Labrador Retrievers, playful activity after learning improved training performance (Affenzeller, Palme, and Zulch, 2017; Affenzeller, 2020). Dogs' body language during operant conditioning was also related to their success rate in learning (Hasegawa, Ohtani, and Ohta, 2014). We also know that dogs performed tasks better and faster the more experience they had with them, and could not solve a task before seeing it demonstrated by the human administering it (Albuquerque et al, 2021). Gnanadesikan et al (2020) state that cognitive phenotypes seem to vary by breed and have strong potential to be artificially selected, but we have seen that this is an error. Morrill et al (2022) found no evidence that the behavioral tendencies of certain breeds reflected intentional selection by humans, though they could not discount the possibility.

Conclusion

Dog breeds have been used by hereditarians for decades as a model for intelligence differences between human races. The analogy between dog breeds and human races has also been used to argue that there is a genetic basis for human race, and that human races are thus a biological reality. (Note that I am a pluralist about race.) But we have seen that the study undertaken in the 1940s to prove a hereditary basis to dog intelligence, and then liken it to human intelligence, quite obviously failed. This led one of the authors to conclude, correctly, that human heritability estimates are inflated (which has led to the missing heritability problem of the 2000s).

Upon studying the dogs, they found that there was no general factor of intelligence and that the situation was paramount in how a dog would perform on a given task. This led Scott to conclude that human heritability estimates are too high, a conclusion echoed by modern-day researchers like Schonemann. The issue is: if dogs, with their numerous breeds and genetic variation, defy a single general factor, what would that mean for humans? This is just more evidence that "general intelligence" is a mere myth, a statistical abstraction. There was also no evidence for a general temperament, since breeds that were scared in one situation were confident in another (showing yet again that situationism held here). The failure of the RF-funded study then led to the questioning of the high heritability of human intelligence (IQ), which wasn't forgotten as the decades progressed. Nonetheless, this study cast doubt on the claim that intelligence has a genetic basis.

Why, though, would a study of dogs be informative here? Well, the goal was to show that intelligence in dogs had a hereditary component and that a kind of designer dog could thus be created that was friendly and intelligent, and this could then be likened to humans. But when the results were the opposite of what was desired, the project was quickly abandoned. If only modern-day behavioral geneticists would get the memo that heritability isn't useful for what they want it to be useful for (Moore and Shenk, 2016).

A Critical Examination of Responses to Berka’s (1983) and Nash’s (1990) Philosophical Inquiries on Mental Measurement from Brand et al (2003)

2750 words

Introduction

What I term "the Berka-Nash measurement objection" is, I think, one of the most powerful arguments against not only the concept of IQ "measurement" but against psychological "measurement" as a whole; it also complements my irreducibility-of-the-mental arguments. (Although there are of course contemporary authors who argue that IQ, and other psychological traits, are immeasurable, the Berka-Nash measurement objection I think touches the heart of the matter extremely well.) The argument that Karel Berka (1983) mounted in Measurement: Its Concepts, Theories, and Problems is a masterclass in defining what "measurement" means and the rules needed to designate X as a true measure and Y as a true measurement device. Then Roy Nash (1990), in Intelligence and Realism: A Materialist Critique of IQ, brought Berka's critique of extraphysical (mental) measurement to a broader audience, simplifying some of the concepts that Berka discussed and applying them to the IQ debate, arguing that there is no true property that IQ tests measure; therefore IQ tests aren't a measurement device and IQ isn't a measure.

I have found only one response to this critique of mental measurement by hereditarians: that of Brand et al (2003). Brand et al think they have shown that Berka's and Nash's critique of mental measurement is consistent with IQ, and that IQ can be seen as a form of "quasi-quantification." But their response misses the mark. In this article I will argue that it misses the mark for these reasons: (1) they didn't articulate the specified measured object, object of measurement, and measurement unit for IQ, and they overlooked the challenges that Berka raised about mental measurement; (2) they ignored the lack of objectively reproducible measurement units; (3) they misinterpreted what Berka meant by "quasi-quantification" and then likened it to IQ; and (4) they failed to engage with Berka's call for precision and reliability.

IQ, therefore, isn’t a measurable construct since there is no property being measured by IQ tests.

Brand et al’s arguments against Berka

The response from Brand et al to Berka's critique of mental measurement in the context of IQ raises concerns about Berka's overarching analysis of measurement. But examining their arguments against Berka reveals a few shortcomings which undermine their attempted rebuttal of Berka's thesis of measurement. From failing to articulate the fundamental components of IQ measurement to overlooking the broader philosophical issues that Berka addressed, Brand et al's response falls short of providing a comprehensive rebuttal. In actuality, despite Brand et al's claims, a close, careful reading of Berka (and then Nash) shows that Berka's argument against mental measurement doesn't lend credence to IQ measurement; it effectively destroys it.

(1) The lack of articulation of a specified measured object, object of measurement and measurement unit for IQ

This is critical for any claim that X is a measure and that Y is a measurement device—one needs to articulate the specified measured object, object of measurement and measurement unit for what they claim to be measuring. To quote Berka:

If the necessary preconditions under which the object of measurement can be analyzed on a higher level of qualitative aspects are not satisfied, empirical variables must be related to more concrete equivalence classes of the measured objects. As a rule, we encounter this situation at the very onset of measurement, when it is not yet fully apparent to what sort of objects the property we are searching for refers, when its scope is not precisely delineated, or if we measure it under new conditions which are not entirely clarified operationally and theoretically. This situation is therefore mainly characteristic of the various cases of extra-physical measurement, when it is often not apparent what magnitude is, in fact, measured, or whether that which is measured really corresponds to our projected goals. (Berka, 1983: 51)

Both specific postulates of the theory of extraphysical measurement, scaling and testing – the postulates of validity and reliability – are then linked to the thematic area of the meaningfulness of measurement and, to a considerable extent, to the problem area of precision and repeatability. Both these postulates are set forth particularly because the methodologists of extra-physical measurement are very well aware that, unlike in physical measurement, it is here often not at all clear which properties are the actual object of measurement, more precisely, the object of scaling or counting, and what conclusions can be meaningfully derived from the numerical data concerning the assumed subject matter of investigation. Since the formulation, interpretation, and application of these requirements is a subject of very vivid discussion, which so far has not reached any satisfactory and more or less congruent conclusions, in our exposition we shall limit ourselves merely to the most fundamental characteristics of these postulates. (Berka, 1983: 202-203)

At any rate, the fact that, in the case of extraphysical measurement, we do not have at our disposal an objectively reproducible and significantly interpretable measurement unit, is the most convincing argument against the conventionalist view of a measurement, as well as against the anti-ontological position of operationalism, instrumentalism, and neopositivism. (Berka, 1983: 211)

One glaring flaw in Brand et al's response, and I think it is the biggest, is their failure to articulate the specified measured object, object of measurement, and measurement unit for IQ. Berka's insistence on precision in measurement requires a detailed conception of what IQ tests aim to measure. We know this is said to be "IQ" or "intelligence" or "g", but they would then of course have run into the problem of how to articulate and define it in a physical way. Berka emphasized that the concept of measurement demands precision in defining what is being measured (the specified measured object), the entity being measured (the object of measurement), and the unit applied for measurement (the measurement unit). Thus, for IQ to be a valid measure and for IQ tests to be a valid measurement device, it is crucial to elucidate exactly what the tests measure, the nature of the mental attribute supposedly under scrutiny, and the standardized unit of measurement.

Berka's insistence on precision aligns with a fundamental aspect of scientific measurement: the need for a well-defined and standardized procedure to quantify a particular property. This is evident in physical measurement, as when the length of an object is measured in meters. But when transitioning to the mental, the challenge lies in measuring something that lacks a unit of measurement. (As Richard Haier (2014) even admits, there is no measurement unit for IQ like inches, liters, or grams.) Without a clear and standardized unit for mental properties, claims of measurement are suspect, indeed impossible. Moreover, by sidestepping this crucial aspect of what Berka was getting at, their argument remains vulnerable to Berka's foundational challenge regarding the essence of what is being measured and how it is quantified.

Furthermore, Brand et al failed to grapple with what Berka wrote on mental measurement. Their response would have been more robust had it engaged with Berka's exploration of the inherent intricacies and nuances involved in establishing a clear object of measurement for IQ, or for any mental attribute.

A measurement unit has to be a standardized and universally applicable quantity or physical property, allowing for standardized comparisons across different measures. None exists for IQ, nor for any other psychological trait. So we can safely argue that psychometrics isn't measurement, even without touching contemporary arguments against mental measurement.

(2) Ignoring the lack of objectively reproducible measurement units

A crucial aspect of Berka's critique involves the absence of objectively reproducible measurement units in extraphysical measurement. Berka contended that, in the absence of such a standardized unit, the foundations for a robust enterprise of measurement are compromised. This is yet another thing that Brand et al overlooked in their response.

Brand et al's response lacks any comprehensive examination of how the absence of objectively reproducible measurement units in mental measurement undermines the claim that IQ is a measure. This inattention weakens, and I think destroys, their response. They should have explored the ramifications of a so-called measure without a measurement unit. This then brings me to their claim that IQ is a form of "quasi-quantification."

(3) Misinterpretation of “quasi-quantification” and its application to IQ

Brand et al hinge their defense of IQ on Berka’s concept of “quasi-quantification”, which they misinterpret. Berka uses “quasi-quantification” to describe situations where the properties being measured lack the clear objectivity and standardization found in actual physical measurements. But Brand et al seem to interpret “quasi-quantification” as a justification for considering IQ as a valid form of measurement.

Brand et al's misunderstanding of Berka's conception of "quasi-quantification" is evident in their attempt to equate it with a validation of IQ as a form of measurement. Berka was not endorsing quasi-quantification as a fully-fledged form of measurement; he was highlighting its limitations and its distinctiveness compared to genuine quantification and measurement. Berka distinguishes between quantification, pseudo-quantification, and quasi-quantification, and explicitly states that numbering and scaling, in contrast to counting and measurement, cannot be regarded as kinds of quantification. (Note that "counting" in this framework isn't a variety of measurement, since measurement is much more than enumeration, and the counted elements of a set aren't magnitudes.) Brand et al fail to grasp this nuanced difference, mischaracterizing quasi-quantification as a blanket acceptance of IQ as a form of measurement.

Berka's reservations about quasi-quantification are rooted in the challenges and complexities associated with mental properties; he acknowledged that they fall short of the clear objectivity found in actual physical measurements. Brand et al's interpretation overlooks this critical aspect, which leads them to erroneously argue that accepting IQ as quasi-quantification is sufficient to justify its status as measurement.

Brand et al’s arguments against Nash

Nash's book, on the other hand, is a much more accessible and pointed attack on the concept of IQ and its so-called "measurement." He moves from the beginnings of IQ testing to the Flynn effect to Berka's argument, and ends with a discussion of test bias. IQ doesn't have a true "0" point (unlike temperature, which IQ-ists have tried to liken to IQ, and the thermometer to IQ tests; there is no lawful relation between IQ and intelligence like that between mercury and temperature in a thermometer, so again the hereditarian claim fails). But most importantly, Nash made the claim that there is actually no property measured by IQ tests. What did he mean by this?

Nash of course doesn't deny that IQ tests rank individuals on their performance. But the claim that IQ is a metric property is simply assumed in IQ theory: the very fact that people are ranked doesn't justify the claim that they are ranked according to a property revealed by their performance (Nash, 1990: 134). Moreover, if intelligence/"IQ" were truly quantifiable on an interval scale, then the difference between IQs of 80 and 90 would represent the same cognitive difference as that between IQs of 110 and 120. But this isn't the case.
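The ordinal-versus-interval point can be made concrete with a few lines of code (my own illustration, not Nash's). Any strictly increasing transformation of a set of scores preserves every "less than, equal to, greater than" comparison, yet changes the differences between scores, so equal score gaps carry no quantitative meaning unless an interval scale has actually been established:

```python
import math

scores = [80, 90, 110, 120]

# Any strictly increasing transformation preserves ordinal information...
transformed = [math.exp(s / 20) for s in scores]
rank = lambda xs: sorted(range(len(xs)), key=lambda i: xs[i])
assert rank(scores) == rank(transformed)  # same ordering of test-takers

# ...but not interval information: the two equal 10-point gaps in the
# raw scores do not stay equal under the transformation.
raw_gap_low = scores[1] - scores[0]    # 90 - 80 = 10
raw_gap_high = scores[3] - scores[2]   # 120 - 110 = 10
assert raw_gap_low == raw_gap_high

t_gap_low = transformed[1] - transformed[0]
t_gap_high = transformed[3] - transformed[2]
assert t_gap_high > t_gap_low  # the "same" 10-point gap is now far larger at the top
```

Since IQ scoring conventions fix the numbers by fiat (rank-ordering plus normalization), nothing rules out such rescalings, which is exactly why ordinal ranking alone licenses no claims about equal cognitive differences.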

Nash is a skeptic of the claim that IQ tests measure some property. (As I am.) So he challenges the idea that there is a distinct and quantifiable property that can be objectively measured by IQ tests (the construct “intelligence”). Nash also questions whether intelligence possesses the characteristics necessary for measurement—like a well-defined object of measurement and measurement unit. Nash successfully argued that intelligence cannot be legitimately expressed in a metric concept, since there is no true measurement property. But Brand et al do nothing to attack the arguments of Berka and Nash and they do not at all articulate the specified measured object, object of measurement and measurement unit for IQ, which was the heart of the critique. Furthermore, a precise articulation of the specified measured object when it comes to the metrication of X (any psychological trait) is necessary for the claim that X is a measure (along with articulating the object of measurement and measurement unit). But Brand et al did not address this in their response to Nash, which I think is very telling.

Brand et al do rightly note Nash's key points, but they fall far, far from the mark in mounting a sound argument against his view. Nash argues that IQ test results can, at best, only be used for ordinal comparisons of "less than, equal to, greater than" (which is also what Michell (2022) argues, reaching the same conclusion as Nash). This is of course true, since people take a test and their performance depends on the type of culture they are exposed to (their cultural and psychological tools). But Brand et al did not grapple at all with this:

The psychometric literature is full of plaintive appeals that despite all the theoretical difficulties IQ tests must measure something, but we have seen that this is an error. No precise specification of the measured object, no object of measurement, and no measurement unit, means that the necessary conditions for metrication do not exist. (Nash, 1990: 145)

All in all, a fair reading of both Berka and Nash will show that Brand et al slithered away from doing any actual philosophizing on the phenomena that Berka and Nash discussed. And, therefore, that their “response” is anything but.

Conclusion

Berka's and Nash's arguments against mental measurement/IQ show the insurmountable challenges that the peddlers of mental measurement have to contend with. Berka emphasized the necessity of clearly defining the measured object, the object of measurement, and the measurement unit for a genuine quantitative measurement; these are the necessary conditions for metrication, and they are nonexistent for IQ. Nash then extended this critique to IQ testing, concluding that the lack of a measurable property undermines the claim that IQ is a true measurement.

Brand et al’s response, on the other hand, was pitiful. They attempted to reconcile Berka’s concept of “quasi-quantification” with IQ measurement. Despite seemingly having some familiarity with both Berka’s and Nash’s arguments, they did not articulate the specified measured object, object of measurement and measurement unit for IQ. If Berka really did agree that IQ is “quasi-quantification”, then why did Brand et al not articulate what needs to be articulated?

When discussing Nash, Brand et al failed to address Nash's claim that IQ can only allow for ordinal comparisons. Nash emphasized numerous times in his book that the absence of a true measurement property undermines the claim that IQ can be measured. Thus, again, Brand et al's response did not successfully and effectively engage with Nash's key points and his overall argument against the possibility of intelligence/IQ measurement (and mental measurement as a whole).

Berka's and Nash's critiques highlight the difficulties of treating intelligence (and psychological traits as a whole) as quantifiable properties. Brand et al did not adequately address the issues I brought up above, and they outright tried to weasel their way into having Berka "agree" with them (on quasi-quantification). So they didn't provide any effective counterargument, nor did they do the simplest thing they could have done, which was articulate the specified measured object, object of measurement, and measurement unit for IQ. The very fact that there is no true "0" point is devastating for claims that IQ is a measure. I've been told on more than one occasion that "IQ is a unit-less measure", but that doesn't make sense. It's just an attempt to cover for the fact that there is no measurement unit at all, and consequently no specified measured object and no object of measurement.

For these reasons, the Berka-Nash measurement objection remains untouched and the questions raised by it remain unanswered. (It's simple: IQ-ists just need to admit that they can't answer the challenge and that psychological traits aren't measurable like physical traits. But then their whole worldview would crumble.) Maybe we'll wait another 40 (and 30) years for a response to the Berka-Nash measurement objection, and hopefully it will at least try harder than Brand et al did in their failure to address the conceptual issues raised by Berka and Nash.

Jensen’s Default Hypothesis is False: A Theory of Knowledge Acquisition

2000 words

Introduction

Jensen's default hypothesis proposes that individual and group differences in IQ are primarily explained by genetic factors. But Fagan and Holland (2002) question this hypothesis. For if differences in experience lead to differences in knowledge, and differences in knowledge lead to differences in IQ scores, then Jensen's assumption that blacks and whites have the same opportunity to learn the tested content is questionable, and I think it is false. It is obvious that there are differences in opportunity to acquire knowledge, which would then lead to differences in IQ scores. I will argue that Jensen's default hypothesis is false due to this very fact.

In fact, there is no good reason to accept Jensen's default hypothesis and the assumptions that come with it. Different cultural groups are of course exposed to different kinds of knowledge, so this, and not genes, would explain why different groups score differently on IQ tests, which are tests of knowledge (even so-called culture-fair tests are biased; Richardson, 2002). We need to reject Jensen's default hypothesis on these grounds: it is clear that groups aren't exposed to the same kinds of knowledge, and so Jensen's assumption is false.

Jensen’s default hypothesis is false due to the nature of knowledge acquisition

Jensen (1998: 444) (cf. Rushton and Jensen, 2005: 335) claimed that what he called the "default hypothesis" should be the null that needs to be disproved. He claimed that individual and group differences are "composed of the same stuff", in that they are "controlled by differences in allele frequencies", that such differences in allele frequencies exist for all "heritable" characters, and that we would find such differences within populations too. So if the default hypothesis is true, it would suggest that differences in IQ between blacks and whites are primarily attributable to the same genetic and environmental influences that account for individual differences within each group. This implies that the genetic and environmental variances that contribute to IQ are the same for blacks and whites, which supposedly supports the idea that group differences are a reflection of individual differences within each group.

But if the default hypothesis were false, then it would challenge the assumption that genetic and environmental influences on IQ differences between blacks and whites are proportionally the same as those seen within each group. This allows us to talk about other causes of variance in IQ between blacks and whites, factors beyond what is accounted for by the default hypothesis, like socioeconomic, cultural, and historical influences that play a more substantial role in explaining IQ differences between blacks and whites.
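The logical gap in the default hypothesis can be illustrated with a toy simulation (my own sketch; all parameters are invented, and it is a version of the familiar thought experiment that within-group heritability puts no constraint on the cause of a between-group gap). Here two groups draw from the identical genetic distribution, one group suffers a uniform environmental deficit, and yet within-group "heritability" is high in both:

```python
import random

random.seed(7)

def make_group(n, env_shift):
    """Toy model: score = genetic value + individual environment + group shift.
    The genetic distribution is identical in both groups."""
    people = []
    for _ in range(n):
        g = random.gauss(100, 12)  # same gene pool in both groups
        e = random.gauss(0, 4)     # individual environmental noise
        people.append((g, g + e + env_shift))
    return people

def within_group_h2(people):
    """Share of within-group score variance attributable to genes."""
    gs = [g for g, _ in people]
    ss = [s for _, s in people]
    var = lambda xs: sum((x - sum(xs) / len(xs)) ** 2 for x in xs) / len(xs)
    return var(gs) / var(ss)

group_a = make_group(50000, env_shift=0)    # no deprivation
group_b = make_group(50000, env_shift=-15)  # uniform environmental deficit

mean = lambda xs: sum(xs) / len(xs)
gap = mean([s for _, s in group_a]) - mean([s for _, s in group_b])

print(round(within_group_h2(group_a), 2))  # high within-group heritability (about 0.9)
print(round(within_group_h2(group_b), 2))  # equally high within group B
print(round(gap, 1))                       # about a 15-point gap, 100% environmental here
```

In this toy world the default hypothesis is flatly false (the between-group gap is entirely environmental) even though within-group heritability is high and identical in both groups, which is why the hypothesis has to be argued for, not assumed as a null.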

Fagan and Holland (2002) explain their study:

In the present study, we ensured that Blacks and Whites were given equal opportunity to learn the meanings of relatively novel words and we conducted tests to determine how much knowledge had been acquired. If, as Jensen suggests, the differences in IQ between Blacks and Whites are due to differences in intellectual ability per se, then knowledge for word meanings learned under exactly the same conditions should differ between Blacks and Whites. In contrast to Jensen, we assume that an IQ score depends on information provided to the learner as well as on intellectual ability. Thus, if differences in IQ between Blacks and Whites are due to unequal opportunity for exposure to information, rather than to differences in intellectual ability, no differences in knowledge should obtain between Blacks and Whites given equal opportunity to learn new information. Moreover, if equal training produces equal knowledge across racial groups, than the search for racial differences in IQ should not be aimed at the genetic bases of IQ but at differences in the information to which people from different racial groups have been exposed.

There are reasons to think that Jensen's default hypothesis is false. For instance, IQ tests are culture-bound, that is, culturally biased: a test biased against one group is thereby biased in favor of another. This introduces a confounding factor which challenges the assumption of equal genetic and environmental influences between blacks and whites. And since we know that exposure to information and knowledge varies by cultural group, what explains the black-white IQ gap is exposure to information (Fagan and Holland, 2002, 2007).

The Default Hypothesis of Jensen (1998) assumes that differences in IQ between races are the result of the same environmental and genetic factors, in the same ratio, that underlie individual differences in intelligence test performance among the members of each racial group. If Jensen is correct, higher and lower IQ individuals within each racial group in the present series of experiments should differ in the same manner as had the African-Americans and the Whites. That is, in our initial experiment, individuals within a racial group who differed in word knowledge should not differ in recognition memory. In the second, third, and fourth experiments individuals within a racial group who differed in knowledge based on specific information should not differ in knowledge based on general information. The present results are not consistent with the default hypothesis. (Fagan and Holland, 2007: 326)

Historical and systematic inequalities could also lead to differences in knowledge acquisition. The existence of cultural biases in educational systems and materials can create disparities in knowledge acquisition. Thus, if IQ tests—which reflect this bias—are culture-bound, this also questions the assumption that the same genetic and environmental factors account for IQ differences between blacks and whites. The default hypothesis assumes that genetic and environmental influences are essentially the same for all groups. But SES/class differences significantly affect knowledge acquisition, so this too challenges the default hypothesis.

For years I have been asking: what if all humans have the same potential, but it just crystallizes differently due to differences in knowledge acquisition/exposure and motivation? There is a new study which shows that although some children appeared to learn faster than others, they merely had a head start in learning. So it seems that students have the same ability to learn, and that so-called “high achievers” simply had a head start (Koedinger et al, 2023). The authors found that students vary significantly in their initial knowledge. So although the students had different starting points (which created the illusion of “natural” talent), they differed in their knowledge base while all learning at a similar rate. They also state that “Recent research providing human tutoring to increase student motivation to engage in difficult deliberate practice opportunities suggests promise in reducing achievement gaps by reducing opportunity gaps” (63, 64).
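The head-start point can be illustrated with a toy model (my own sketch, not Koedinger et al's actual analysis): two hypothetical students share an identical learning rate but differ in starting knowledge, so the gap between them persists even though neither learns "faster" than the other.

```python
# Toy sketch (an editor's illustration, not Koedinger et al.'s model):
# two "students" share the same learning rate but start with
# different amounts of prior knowledge.

def knowledge(initial, rate, opportunities):
    """Knowledge after a number of practice opportunities,
    assuming a constant per-opportunity learning rate."""
    return initial + rate * opportunities

# Same rate (0.1 per opportunity), different starting points.
head_start = [knowledge(0.6, 0.1, t) for t in range(5)]
late_start = [knowledge(0.2, 0.1, t) for t in range(5)]

# The gap between them never changes: it reflects the head start,
# not a difference in learning ability.
gaps = [round(h - l, 2) for h, l in zip(head_start, late_start)]
print(gaps)  # [0.4, 0.4, 0.4, 0.4, 0.4]
```

On this toy picture, a snapshot test taken at any single time point will show stable "ability differences" even though the learning rates are identical by construction.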

So we know that different experiences lead to differences in knowledge (its type and content), and we also know that racial groups, for example, have different experiences in virtue of being different social groups. These different experiences lead to differences in knowledge, which are then reflected in group IQ scores. This raises questions about the truth of Jensen’s default hypothesis described above. Thus, if individuals from different racial groups have unequal opportunities to be exposed to information, then Jensen’s default hypothesis is questionable (and I’d say it’s false).

Intelligence/knowledge crystallization is a dynamic process shaped by extensive practice and consistent learning opportunities. The journey toward expertise involves iterative refinement, with each practice opportunity contributing to the crystallization of knowledge. So if intelligence/knowledge crystallizes through extensive practice, and if students don’t show substantial differences in their rates of learning, then it follows that the crystallization of intelligence/knowledge is more reliant on the frequency and quality of learning opportunities than on inherent differences in individual learning rates. It’s clear that my position enjoys some substantial support. “It’s completely possible that we all have the same potential but it crystallizes differently based on motivation and experience.” The Fagan and Holland papers show exactly that in the context of the black-white IQ gap, showing that Jensen’s default hypothesis is false.

I recently proposed a non-IQ-ist definition of intelligence where I said:

So a comprehensive definition of intelligence in my view—informed by Richardson and Vygotsky—is that of a socially embedded cognitive capacity—characterized by intentionality—that encompasses diverse abilities and is continually shaped by an individual’s cultural and social interactions.

So I think that IQ is the same way. It is obvious that IQ tests are culture-bound and are tests of a certain kind of knowledge (middle-class knowledge). So we need to understand how social and cultural factors shape opportunities for exposure to information. And per my definition, the idea that intelligence is socially embedded aligns with the notion that varying sociocultural contexts influence the development of knowledge and cognitive abilities. We also know that summer vacation increases educational inequality, and that IQ decreases during the summer months. This is due to the nature of IQ and achievement tests—they’re different versions of the same test. So higher-class children will return to school with an advantage over lower-class children. This is yet more evidence of how knowledge exposure and acquisition can affect test scores and motivation, and of how such differences crystallize, even though we all have the same potential (for learning ability).

Conclusion

So intelligence is a dynamic cognitive capacity characterized by intentionality, cultural context, and social interactions. It isn’t a fixed trait, as IQ-ists would like you to believe, but evolves over time due to the types of knowledge one is exposed to. Knowledge acquisition occurs through repeated exposure to information and intentional learning. This challenges Jensen’s default hypothesis, which attributes the black-white IQ gap primarily to genetics. Since diverse experiences lead to varied knowledge, and IQ tests test a certain type of knowledge, individuals with different ranges of life experience will show varying performance on these tests, which then reflects the types of knowledge they were exposed to over the course of their lives. So knowing what we know about blacks and whites being different cultural groups, and about different cultures having different knowledge bases, we can rightly state that disparities in IQ scores between blacks and whites are due to environmental factors.

Unequal exposure to information creates divergent knowledge bases, which then influence scores on the test of knowledge (the IQ test). And since we now know that, despite differences in initial performance, students show a surprising regularity in learning rates, this suggests that once exposed to information, the rate of knowledge acquisition remains consistent across individuals, which challenges the assumption of innate disparities in learning abilities. So the sociocultural context becomes pivotal in shaping the kinds of knowledge that people are exposed to. Cultural tools, environmental factors, and social interactions contribute to diverse cognitive abilities and knowledge domains, which then emphasize the contextual nature of not only intelligence but performance on IQ tests. So what this shows is that test scores reflect the kinds of experience the testee was exposed to. Disparities in test scores therefore indicate differences in learning opportunities and cultural contexts.

So a conclusive rejection of Jensen’s default hypothesis asserts that the black-white IQ gap is due to exposure to different types of knowledge. Thus, what explains disparities not only between blacks and whites but between groups generally is unequal opportunity for exposure to information—most importantly the type of information found on IQ tests. My sociocultural theory of knowledge acquisition and crystallization offers a compelling counter to hereditarian perspectives, and asserts that diverse experiences and intentional learning efforts contribute to cognitive development. The claim that all groups or individuals are exposed to similar types of knowledge, as Jensen assumes, is false. By virtue of being different groups, they are exposed to different knowledge bases. Since this is true, and IQ tests are culture-bound tests of a certain kind of knowledge, it follows that what explains group differences in IQ and knowledge is differences in exposure to information.

What If Charles Darwin Never Existed and the Theory of Natural Selection Was Never Formulated?

2200 words

Introduction

Let’s say that we either use a machine to teleport to another reality where Darwin didn’t exist, or one where he died early, before formulating the theory of natural selection (ToNS). Would our evolutionary knowledge suffer? On what grounds could we say that it wouldn’t? Well, since Darwin humbly stated that what he said wasn’t original and that he merely assembled numerous pieces of evidence into a coherent whole to make his ToNS, we obviously already knew that species change over time. That’s what evolution is—change over time—and Darwin, in formulating his ToNS, attempted to prove that natural selection was a mechanism of evolutionary change. But if Darwin never existed, or if the ToNS was never formulated by him, I don’t think that our evolutionary knowledge would suffer. This is because people before Darwin observed that species change over time, like Lamarck and Darwin’s grandfather, Erasmus Darwin.

So in this article I will argue that had Darwin not existed, or died young and never formulated the ToNS, we would still have adequate theories of speciation, trait fixation, and evolutionary change, since naturalists at the time knew that species changed over time. I will discuss putative mechanisms of evolutionary change and show that, without Darwin or the ToNS, we would still be able to have coherent theories of speciation events and trait fixation. Mechanisms like genetic drift, mutation and neutral evolution, environmental constraints, Lamarckian mechanisms, epigenetic factors, and ecological interactions would have been plausible mechanisms sans Darwin and his ToNS, even in the modern day, as our scientific knowledge advanced without him.

What if Darwin never existed?

For years I have been critical of Darwin’s theory of natural selection as being a mechanism for evolutionary change since it can’t distinguish between causes and correlates of causes. I was convinced by Fodor’s (2008) argument and Fodor and Piattelli-Palmarini’s (2010) argument in What Darwin Got Wrong that Darwin was wrong about natural selection being a mechanism of evolutionary change. I even recently published an article on alternatives to natural selection (which will be the basis of the argument in this article).

So, if Darwin never existed, how would the fact that species can change over time (due to, for instance, selective breeding) be explained? Well, before Charles Darwin, we had his grandfather Erasmus Darwin and Jean Baptiste Lamarck, of Lamarckian inheritance fame. So if Charles Darwin didn’t exist, there would still be enough for a theory of evolution had Darwin not been alive to formulate the ToNS.

We now know that Charles did read Erasmus’ The Temple of Nature (TToN) (1803) due to the annotations in his copy, and that the TToN bore resemblance not to Darwin’s On the Origin of Species but to The Descent of Man (Hernandez-Avilez and Ruiz-Guttierez, 2023). So although it is tentative, we know that Charles had knowledge of Erasmus’ writings on evolution. But before TToN, Erasmus wrote Zoonomia (1794), where he proposed a theory of common descent and also speculated on the transmutation of species over time. Being very prescient for the time he was writing in, he also discussed how the environment can influence the development of organisms, and how variations in species can arise due to the environment (think directed mutations). Erasmus also discussed the concept of use and disuse—where traits that an organism used more would develop while traits it used less would diminish over time—which was a precursor to Lamarck’s thoughts.

An antecedent to the “struggle for existence” is seen in Erasmus’ 1794 work Zoonomia (p. 503) (which Darwin underlined in his annotations, see Hernandez-Avilez and Ruiz-Guttierez, 2023):

The birds, which do not carry food to their young, and do not therefore marry, are armed with spurs for the purpose of fighting for the exclusive possession of the females, as cocks and quails. It is certain that these weapons are not provided for their defence against other adversaries, because the females of these species are without this armour. The final cause of this contest amongst the males seems to be, that the strongest and most active animal should propagate the species, which should thence become improved.

Jean Baptiste Lamarck wrote Philosophie Zoologique (Philosophical Zoology) in 1809. His ideas on evolution were from the same time period as Erasmus’, and they discussed similar subject matter. Lamarck believed that nature could explain species differentiation, and that environmentally induced behavioral changes could explain changes in species, eventually leading to speciation. Lamarck’s first law was that use or disuse would cause appendages to enlarge or shrink, while his second law was that the changes in question were heritable. We also know that in many cases development precedes evolution (West-Eberhard, 2005; Richardson, 2017), so these ideas in the modern day, along with the observations showing they’re true, lend credence to Lamarck’s ideas.

First Law: In every animal that has not reached the end of its development, the more frequent and sustained use of any organ will strengthen this organ little by little, develop it, enlarge it, and give to it a power proportionate to the duration of its use; while the constant disuse of such an organ will insensibly weaken it, deteriorate it, progressively diminish its faculties, and finally cause it to disappear.

Second Law: All that nature has caused individuals to gain or lose by the influence of the circumstances to which their race has been exposed for a long time, and, consequently, by the influence of a predominant use or constant disuse of an organ or part, is conserved through generation in the new individuals descending from them, provided that these acquired changes are common to the two sexes or to those which have produced these new individuals (Lamarck 1809, p. 235). [Quoted in Burkhardt Jr., 2013]

Basically, Lamarck’s idea was that acquired traits during an organism’s lifetime could be passed onto descendants. If an organism developed a particular trait in response to its environment, then that trait could be inherited by its descendants. He was also one of the first—along with Erasmus—to go against the accepted wisdom of the time and propose that species could change over time and that they weren’t fixed. Basically, I think that Lamarck’s main idea was that the environment could have considerable effects on the evolution of species, and that these environmentally-induced changes could be heritable.

Well, today we have evidence that Lamarck was right, for example with the discovery of, and experiments showing, directed mutation. There was a lot that Lamarck got right which has been integrated into current evolutionary theory. We also know that there is evidence that “parental environment-induced epigenetic alterations are transmitted through both the maternal and paternal germlines and exert sex-specific effects” (Wang, Liu, and Sun, 2017). So we can then state Lamarck’s dictum: environmental change leads to behavioral change, which leads to morphological change (Ward, 2018) (and with what we know about how the epigenetic regulation of transposable elements regulates punctuated equilibrium, see Zeh, Zeh, and Ishida, 2009, we have a mechanism that can lead to this). And since we know that environmental epigenetics and transgenerational epigenetic inheritance provide mechanisms for Lamarck’s proposed process (Skinner, 2015), it seems that Lamarck has been vindicated. Indeed, Lamarckian inheritance is now seen as a mechanism of evolutionary change (Koonin, 2014).

So knowing all of this, what if Charles Darwin never existed? How would the course of evolutionary theory be changed? We know that Darwin merely put the pieces of the puzzle together (from animal breeding, to the thought that transmutation could occur, etc.), but I won’t take anything away from Darwin: even though I think he was wrong about natural selection being a mechanism of evolution, he did a lot of good work to assemble the pieces of the puzzle into a theory of evolution that—at the time—could explain the fixation of traits and speciation (though I think there are other ways to show that without relying on natural selection). The components of the theory that Darwin proposed were all there, but he was the one who coalesced them into a theory (no matter whether it was wrong or not). Non-Darwinian evolution obviously was “the in thing” in the 19th century, and I don’t see how or why that would have changed. But Bowler (2013) argues that Alfred Russel Wallace would have articulated a theory of natural selection based on competition between varieties, not individuals as Darwin did. He argues that an equivalent of Darwin’s ToNS wouldn’t have been articulated until one recognized the similarities between what would become natural selection and artificial selection (where humans attempt to consciously select for traits) (Bowler, 2008). Though I do think that the ToNS is wrong, false, and incoherent, I recognize how one could think it a valid theory for explaining the evolution of species and the fixation of traits in biological populations. (Though I do of course think that my proposed explanation linking saltation, internal physiological mechanisms, and decimationism would have played a part in a world without Charles Darwin in explaining what we see around us.)

Now I will sketch out how I think our understanding of evolutionary theory would go had Charles Darwin not existed.

Although Lamarckism was pretty much discredited by the time Darwin articulated the ToNS (although Darwin did take to some of Lamarck’s ideas), the Lamarckian emphasis on the role of the environment in shaping the traits of organisms would have persisted and remained influential. Darwin was influenced by many different observations that were known before he articulated his theory, and so even if Darwin hadn’t existed to articulate the ToNS, the concept that species changed over time (that is, the concept that species evolved) was current before the observations which led to his theory, along with the numerous lines of evidence that led Darwin to formulating the ToNS after his voyage on The Beagle. So while Darwin’s work did accelerate the acceptance of evolution, it is very plausible that other mechanisms that don’t rely on selection would have been articulated. Both Erasmus and Lamarck had a kind of teleology in their thinking, which is alive today in modern conceptions of the EES, like the arguments forwarded by Denis Noble (Noble and Noble, 2020, 2022). Indeed, Lamarck was one of the first to propose a theory of change over time.

Punctuated equilibrium (PE) can also be integrated with these ideas. PE is where rapid speciation events occur and are then followed by periods of stasis, and this can be interpreted as purposeful evolutionary change based on the environment (similar to directed mutations). So each punctuated episode could align with Lamarck’s idea that organisms actively adapt to specific conditions, and it could also play a role in explaining the inheritance of acquired characters. Organisms could rapidly acquire traits due to environmental cues that the embryo’s physiology detects (since physiology is homeodynamic), there would be a response to the environmental change, and this would then contribute to the bursts of evolutionary change. Further, in periods of stasis, it could be inferred that there was little change in the environment—not enough, anyway, to lead to change in the traits of a species—and so organisms would have been in equilibrium with their environment, maintaining their traits, until a new environmental challenge triggered a burst of evolutionary change which would kick the species out of stasis and lead to punctuated events of evolutionary change. Therefore, this model (which is a holistic approach) would allow for a theory of evolution that is responsive, directed, and linked with the striving of organisms in their environmental context.

Conclusion

So in a world without Charles Darwin, the evolutionary narrative would have been significantly shaped by Erasmus and Lamarck. This alternative world would focus on Lamarckian concepts, the idea of transmutation over time, purposeful adaptation over time along with directed mutations and the integration of PE with these other ideas to give us a fuller and better understanding of how organisms change over time—that is, how organisms evolve. The punctuated episodic bursts of evolutionary change can be interpreted as purposeful evolutionary change based on Lamarckian concepts. Environmental determinism and stability shape the periods between bursts of change. And since we know that organisms in fact can adapt to complex, changing environments due to their physiology (Richardson, 2020), eventually as our scientific knowledge advanced we would then come to this understanding.

Therefore, the combination of Erasmus’ and Lamarck’s ideas would have provided a holistic, non-reductive narrative to explain the evolution of species. While I do believe that someone would have eventually articulated something similar to Darwin’s ToNS, I think that it would have been subsumed under a framework built off of Erasmus and Lamarck. So there was quite obviously enough evolutionary thought before Darwin for there to be a relevant and explanatory theory of evolution had he not been alive to formulate the ToNS, and this shows how mechanisms to explain the origin of life, speciation, and trait fixation would still have been articulated, even in the absence of Darwin.

The Illusion of Separation: A Philosophical Analysis of “Variance Explained”

2050 words

Introduction

“Variance explained” (VE) is a statistical concept which is used to quantify the proportion of variance in a trait that can be accounted for or attributed to one or more independent variables in a statistical model. VE is represented by “R squared” (r2), which ranges from 0 to 100 percent. An r2 of 0 percent means that none of the variance in the dependent variable is explained by the independent variable, whereas an r2 of 100 percent means that all of the variance is explained. But VE doesn’t imply causation; it merely quantifies the degree of association or predictability between two variables.
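For concreteness, here is a minimal sketch of how r2 is computed for a simple least-squares fit (the code and data are my own illustration, using only Python's standard library). Note that a perfect r2 says nothing about which variable, if either, causes the other.

```python
# A minimal sketch of "variance explained" (R squared) for a simple
# bivariate linear fit, using only the standard library.

def r_squared(xs, ys):
    """Proportion of variance in ys accounted for by a least-squares
    line on xs. This quantifies association, not causation."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return (sxy ** 2) / (sxx * syy)

# A perfectly linear relationship yields r2 = 1.0 (100% of variance
# "explained") regardless of which variable, if either, is the cause.
xs = [1, 2, 3, 4]
ys = [2, 4, 6, 8]
print(r_squared(xs, ys))  # 1.0
```

Swapping `xs` and `ys` gives the same r2, which is one quick way to see that the number is symmetric and carries no causal direction at all.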

So in the world of genetics, heritability, and GWAS, the VE concept has been employed as a fundamental measure to quantify the extent to which a specific trait’s variability can be attributed to genetic factors. It may seem intuitive that G and E factors can be separated and their relative influences disentangled for human traits. But beneath its apparent simplicity lies a philosophically contentious issue, most importantly the claim/assumption that G and E factors can be separated into percentages.

But I think the concept of VE in psychology/psychometrics and GWAS is mistaken, because (1) it implies a causal relationship that may not exist; (2) it implies reductionism; (3) it upholds the nature-nurture dichotomy; (4) it doesn’t account for interaction and epigenetics; and (5) it doesn’t account for context-dependency. In this article, I will argue that the concept of VE is confused, since it assumes too much while explaining too little. Overall, I will explain the issues using a conceptual analysis and then give a few arguments on why I think the phrase is confused.

Arguments against the phrase “variance explained”

While VE doesn’t necessarily imply causation, in the psychology/psychometrics and GWAS literature it seems to be used as somewhat of a causal phrase. The phrase also reduces the trait in question to a single percentage, which is of course not accurate—it basically attempts to reduce the trait to a number, a percentage.

But more importantly, the notion of VE is subject to philosophical critique in virtue of the implications of what the phrase inherently means, particularly when it comes to the separation of genetic and environmental factors. The idea of VE most often perpetuates the nature-nurture dichotomy, assuming that G and E can be neatly separated into percentages of causes of a trait. Thus this simplistic division between G and E oversimplifies the intricate interplay between genes, environment and all levels of the developmental system and the irreducible interaction between all developmental resources that lead to the reliable ontogeny of traits (Noble, 2012).

Moreover, VE can be reductionist in nature, since it implies that a certain percentage of a trait’s variance can be attributable to genetics, disregarding the dynamic and complex interactions between genes and other resources in the developmental system. Therefore, this reductionism fails to capture the holistic and emergent nature of human development and behavior. So just like the concept of heritability, the reductionism inherent in the concept of VE focuses on isolating the contributions of G and E, rather than treating them as interacting factors that are not reducible.

Furthermore, we know that epigenetics demonstrates that environmental factors can influence gene expression which then blurs the line between G and E. Therefore, G and E are not separable entities but are intertwined and influence each other in unique ways.

It also may inadvertently carry implicit value judgements about which traits or outcomes are deemed desirable or significant. In a lot of circles, a high heritability is seen as evidence for the belief that a trait is strongly influenced by genes—however wrong that may be (Moore and Shenk, 2016). Further, it could also stigmatize environmental influences if a trait is perceived as primarily genetic. This could then contribute to a bias that downplays the importance of environmental factors, overlooking their potential impact on individual development and behavior.

This concept, moreover, doesn’t provide clarity on questions like identity and causality. Even if a high percentage of variance is attributed to genetics, it doesn’t necessarily reveal the causal mechanisms or genetic factors responsible, which leads to philosophical indeterminacy regarding the nature of causation. Human traits are highly complex, and the attempt to quantify them and break them apart into neat percentages of variance explained by G and E vastly oversimplifies the complexity of these traits. This oversimplification further contributes to philosophical indeterminacy about the nature and true origins (which would be the irreducible interactions between all developmental resources) of these traits.

The act of quantifying variance also inherently involves power dynamics, where certain variables are deemed more significant or influential than others. This introduces a potential bias that may reflect existing societal norms or power structures. “Variance explained” may inadvertently perpetuate and reinforce these power dynamics by quantifying and emphasizing certain factors over others (like, e.g., the results of Hill et al, 2019 and Barth, Papageorge, and Thom, 2020; and see Joseph’s critique of these claims). The claim, basically, is that these differences between people in income and other socially-important traits are due to genetic differences between them. (Even though there is no molecular genetic evidence for the claim made in The Bell Curve that we are becoming more genetically stratified; Conley and Domingue, 2016.)

The concept of VE also implies a kind of predictive precision that may not align with the uncertainty of human behavior. The illusion of certainty created by high r2 values can lead to misplaced confidence in predictions. In reality, the complexity of human traits often defies prediction and overreliance on VE may create a false sense of certainty.

We also have what I call the “veil of objectivity” argument. This argument challenges the notion that VE provides an entirely objective view. Behind the numerical representation lies a series of subjective decisions, from the initial selection of variables to be studied to the interpretation of results; researchers exercise subjective judgments which could introduce biases and assumptions. So if “variance explained” is presumed to offer an entirely objective view of human traits, then the numerical representation is taken to be an objective measure of variance attribution. But if, behind this numerical representation, subjective decisions are involved in variable selection and results interpretation, then the presumed objectivity of VE becomes a veil masking underlying subjectivity. And if the veil of objectivity conceals subjective decisions, then there exists a potential for biases and assumptions to influence the quantitative analysis. Thus, if biases and assumptions are inherent in the quantitative analysis due to the veil of objectivity, then the objectivity attributed to VE is compromised, and a more critical examination of subjective elements becomes imperative. This argument of course applies to “IQ” studies, heritability studies of socially-important human traits and the like, along with GWASs. In interpreting associations, GWASs and h2 studies also fall prey to the veil of objectivity argument since, as seen above, many people would like the hereditarian claim to be true. So when it comes to GWAS and heritability studies, VE refers to the proportion of phenotypic variance attributed to genetic variance.

So the VE concept assumes a clear separation between genetic and environmental factors, which is often reductionist and unwarranted. It doesn’t account for the dynamic nature of these influences, nor—of course—the influence of unmeasured factors. The concept’s oversimplification can lead to misunderstandings and has ethical implications, especially when dealing with complex human traits and behaviors. Thus, the VE concept is conceptually flawed and should be used cautiously, if at all, in the fields in which it is applied. It does not adequately represent the complex reality of genetic and environmental influences on human traits. So the VE concept is conceptually limited.

If the concept of VE accurately separates genetic and environmental influences, then it should provide a comprehensive and nuanced representation of factors that contribute to a trait. But the concept does not adequately consider the dynamic interactions, correlations, contextual dependencies, and unmeasured variables. So if the concept does not and cannot address these complexities, then it cannot accurately separate genetic and environmental influences. So if a concept can’t accurately separate genetic and environmental influences, then it lacks coherence in the context of genetic and behavioral studies. Thus the concept of VE lacks coherence in the context of genetic and behavioral studies, as it does not and cannot adequately separate genetic and environmental influences.
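One way to see the separation problem concretely is with a toy model (the phenotype function below is invented purely for illustration, not drawn from any real study): when the "genetic effect" depends on the environment, the "percent of variance explained by G" is not a fixed property of the trait at all; it changes with the distribution of environments sampled.

```python
# A toy sketch of why a "percent of variance explained by G" is not a
# fixed property of a trait when genes and environment interact.
# The phenotype function here is invented for illustration only.
import statistics

def phenotype(g, e):
    # Pure interaction: the "genetic effect" reverses across environments.
    return g * e

def var_explained_by_g(envs):
    """Share of phenotypic variance tracked by genotype alone, for a
    fixed pair of genotypes (+1, -1) raised across the given
    environments (a between-genotype / total variance ratio)."""
    phenos = [phenotype(g, e) for g in (+1, -1) for e in envs]
    g_means = [statistics.mean(phenotype(g, e) for e in envs)
               for g in (+1, -1)]
    total = statistics.pvariance(phenos)
    between_g = statistics.pvariance(g_means)
    return between_g / total

# In a population where everyone shares environment e = +1, genotype
# appears to "explain" all the variance...
print(var_explained_by_g([1, 1]))   # 1.0
# ...but with environments split between +1 and -1, the genotype means
# coincide and genotype "explains" none of it.
print(var_explained_by_g([1, -1]))  # 0.0
```

Since the same two genotypes yield 100% or 0% "variance explained by G" depending solely on which environments happen to be present, the percentage describes a particular population-environment configuration, not the trait itself.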

Conclusion

In exploring the concept of VE and its application in genetic studies, heritability research and GWAS, a series of nuanced critiques have been uncovered that challenge its conceptual coherence. The phrase quantifies the proportion of variance in a trait that is attributed to certain variables, typically genetic and environmental ones. The reductionist nature of VE is apparent since it attempts to distill the interplay between G and E into percentages (like h2 studies). But this oversimplification neglects the complexity and dynamic nature of these influences, which then perpetuates the nature-nurture dichotomy and fails to capture the intricate interactions between all developmental resources in the system. The concept’s inclination to overlook G-E interactions, epigenetic influences, and context-dependent variability further speaks to its limitations. Lastly, normative assumptions intertwined with the concept then introduce ethical considerations, as implicit judgments may stigmatize certain traits or downplay the role and importance of environmental factors. Philosophical indeterminacy, therefore, arises from the inability of the concept of VE to offer clarity on identity, causality, and the complex nature of human traits.

So by considering the reductionist nature, the perpetuation of the false dichotomy between nature and nurture, the oversight of G-E interactions, and the introduction of normative assumptions, I have demonstrated through multiple cases that the phrase “variance explained” falls short in providing a nuanced and coherent understanding of the complexities involved in the study of human traits.

In all reality, this concept is refuted by the fact that the interaction between all developmental resources shows that the separation of the influences/factors is an impossible project, along with the fact that we know that there is no privileged level of causation. Claims of “variance explained”, heritability, and GWAS all push forth the false notion that the relative contributions of genes and environment to the causes of a trait in question can be quantified. However, we now know that this is false since it is conceptually confused: the organism and environment are interdependent. So the inseparability of nature and nurture, genes and environment, means that the ability of GWAS and heritability studies to meet their intended goals will necessarily fall short, especially given the missing heritability problem. The phrase “variance explained by” implies a direct causal link between independent and dependent variables. A priori reasoning suggests that the intricacies of human traits are probabilistic and context-dependent, implicating a vast web of bidirectional influences with feedback loops and dynamic interactions. So if the a priori argument advocates for a contextual, nuanced and probabilistic view of human traits, then it challenges the conceptual foundations of VE.

At the molecular level, the nurture/nature debate currently revolves around reactive genomes and the environments, internal and external to the body, to which they ceaselessly respond. Body boundaries are permeable, and our genome and microbiome are constantly made and remade over our lifetimes. Certain of these changes can be transmitted from one generation to the next and may, at times, persist into succeeding generations. But these findings will not terminate the nurture/nature debate – ongoing research keeps arguments fueled and forces shifts in orientations to shift. Without doubt, molecular pathways will come to light that better account for the circumstances under which specific genes are expressed or inhibited, and data based on correlations will be replaced gradually by causal findings. Slowly, “links” between nurture and nature will collapse, leaving an indivisible entity. But such research, almost exclusively, will miniaturize the environment for the sake of accuracy – an unavoidable process if findings are to be scientifically replicable and reliable. Even so, increasing recognition of the frequency of stochastic, unpredictable events ensures that we can never achieve certainty. (Locke and Pallson, 2016)

Mechanisms that Transcend Natural Selection in the Evolutionary Process: Alternatives to Natural Selection

2250 words

Fodor’s argument was a general complaint against adaptationism. Selection can’t be the mechanism of evolution since it can’t distinguish between causes and correlates of causes—so it thusly can’t account for the creation (arrival) of new species. Here, I will provide quotes showing that the claim that natural selection is a mechanism is ubiquitous in the literature—claims that either Darwin discovered the mechanism or claims that it is a mechanism—and that’s what Fodor was responding to. I will then provide an argument combining saltation, internal physiological mechanisms, decimationism and the EES into a coherent explanatory framework to show that there are alternatives to Darwinian evolution, and that these thusly explain speciation and the proliferation of traits while natural selection can’t, since it isn’t a mechanism.

Grant and Grant, 2007: “the driving mechanism of evolutionary change was natural selection”

American Museum of Natural History: “Natural selection is a simple mechanism that causes populations of living things to change over time.”

Andrews et al, 2010: “Natural selection is certainly an important mechanism of allele-frequency change, and it is the only mechanism that generates adaptation of organisms to their environments.”

Pianka: “Natural selection is the only directed evolutionary mechanism resulting in conformity between an organism and its environment”

Cottner and Wassenberg, 2020: “This mechanism is natural selection: individuals who inherit adaptations simply out-compete (by out-surviving and out-reproducing) individuals that do not possess the adaptations.”

So natural selection is seen as the mechanism by which traits become fixed in organisms and how speciation happens. Indeed, Darwin (1859: 54) wrote in On the Origin of Species:

“From these several considerations I think it inevitably follows, that as new species in the course of time are formed through natural selection, others will become rarer and rarer, and finally extinct.”

[And some more of the same from authors in the modern day]

“The role of natural selection in speciation, first described by Darwin, has finally been widely accepted” (Via, 2009)

“Selection must necessarily be involved in speciation” (Barton, 2010)

“Darwin’s theory shows how some natural phenomena may be explained (including at least adaptations and speciation)” (SEP, Natural Selection)

“Natural selection has always been considered a key component of adaptive divergence and speciation (2, 15–17)” (Schneider, 2000)

“Natural selection plays a prominent role in most theories of speciation” (Schluter and Nagel, 1995)

So quite obviously, natural selection is seen as a mechanism, and this mechanism supposedly explains speciation of organisms. But since Fodor (2008) and Fodor and Piattelli-Palmarini (2010) showed that natural selection isn’t a mechanism and can’t explain speciation, then there are obviously other ways that evolution happened. There are alternatives to natural selection, and that’s where I will now turn. I will discuss saltation, internal physiological mechanisms and decimationism and then cohere them into a framework that shows how species can arise sans selection.

Explaining speciation

Saltation is the concept of abrupt and substantial changes which lead to the creation of new species, and it challenges phyletic gradualism through natural selection. Instances of sudden genetic alterations, along with other goings-on in the environment that lead to things such as directed mutation, can eventually result in the emergence of distinct species. Saltation, therefore, challenges Darwinism by showing that certain traits can arise quickly, leading to the emergence of new species within a short time frame. We also have internal physiological mechanisms, which play a role in speciation while influencing the development and divergence of traits within biological populations. These don’t rely on external selective pressures—although goings-on in the environment of course can affect physiology—but instead emphasize internal factors like developmental constraints, epigenetic modifications and genetic regulatory networks. These can then lead to the expression of novel traits and then on to speciation without the need for external selective forces. And finally decimationism—which emphasizes periodic mass extinctions as drivers of evolutionary change—offers another alternative.

Catastrophic events create holes in ecological niches which then allow for the rapid adaptation and diversification of surviving species. So the decimation and recurrent re-colonizing of ecological niches can then lead to the establishment of distinct lineages (species), which then highlight the role of external and non-selective factors in the process of evolution.

So the interaction between saltation, internal physiological mechanisms, and decimationism thusly provides a novel and comprehensive framework for understanding speciation. Sudden genetic changes and other changes to the system can then initiate the development of unique physiological traits (due to the interaction of the developmental resources, any change to one resource would cause a cascading change through the system), while internal mechanisms then ensure the stabilization and heritability of the traits within the population. And when this is coupled with the environmental upheaval caused by decimation leading to mass extinctions, these processes then contribute to the formation of new species, which then offers a framework and novel perspective on the ARRIVAL of the fittest (Darwin’s theory said nothing about arrival, only the struggle for existence), which extends beyond the concept of natural selection.

So if abrupt genetic and other internal changes (saltation) can passively respond to external stimuli and/or environmental pressures, leading to the emergence of distinct traits within a population, and if internal physiological mechanisms influence the expression and development of these traits, then it follows that saltation, coupled with internal physiological mechanisms, can explain and contribute to the rise of new species. If periodic mass extinctions (decimationism) create ecological vacuums and opportunities for adaptive radiation, and if internal physiological mechanisms play a role in the heritability and stability of traits, then it follows that decimationism in conjunction with internal physiological mechanisms can contribute to the speciation of surviving lineages. Also note that all of this is consistent with Gould’s punctuated equilibrium (PE) model.

Punctuated equilibrium was proposed by Eldredge and Gould as an alternative to phyletic gradualism (Eldredge and Gould, 1972). It proposes that species evolve in rapid bursts, not gradually. A developmental gene hypothesis also exists for PE (Casanova and Conkel, 2020).

One prediction of PE is rapid speciation events. On the PE model, there will be relatively short intervals of rapid speciation which then result in the emergence of new species. This follows from the theory in that it posits that speciation occurs rapidly, concentrated in short bursts, which leads to the prediction that distinct species should emerge more quickly during these punctuated periods. So if species undergo long periods of stasis with occasional rapid change, then it logically follows that new species should arise quickly during these punctuated periods. Seeing that the PE model was developed to explain the lack of transitional fossils, it proposes that species undergo long periods of morphological stasis, with evolutionary changes occurring in short bursts during speciation events, which therefore provides a framework that accounts for the intermittent presence of transitional fossils in the fossil record.

Another prediction is that during periods of stasis (equilibrium), species will exhibit stability in terms of morphology and adaptation. This follows from the theory in that PE posits that stability characterizes the majority of a species’ existence and that change should occur in quick bursts. Thus, between these bursts, there should be morphological stability. So the prediction is that observable changes are concentrated in specific intervals.
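These two predictions (change concentrated in bursts, stasis otherwise) can be caricatured in a toy simulation (entirely my own, illustrative only, not a real evolutionary model): a punctuated trait history is flat almost everywhere with rare large jumps, whereas a gradualist history drifts at every step.

```python
# Toy caricature of the two PE predictions above (my own illustration,
# not a real evolutionary model): stasis with rare large jumps vs.
# constant gradual drift.
import random

random.seed(1)

def punctuated(steps=1000, jump_p=0.01):
    """Trait value over time: flat (stasis) except for rare, abrupt jumps."""
    x, path = 0.0, []
    for _ in range(steps):
        if random.random() < jump_p:  # rare speciation-like burst
            x += random.gauss(0, 5.0)
        path.append(x)                # otherwise: stasis, no change at all
    return path

def gradual(steps=1000):
    """Trait value over time: a small change at every single step."""
    x, path = 0.0, []
    for _ in range(steps):
        x += random.gauss(0, 0.1)
        path.append(x)
    return path

p, g = punctuated(), gradual()
# Fraction of time steps with literally zero change:
p_static = sum(b == a for a, b in zip(p, p[1:])) / (len(p) - 1)
g_static = sum(b == a for a, b in zip(g, g[1:])) / (len(g) - 1)
print(p_static, g_static)  # punctuated series is static at almost every step
```

The caricature makes the fossil-record point intuitive: if you sample the punctuated series at coarse intervals, you almost always land in a stasis plateau and almost never catch a transition in progress.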

The epigenome along with transposable elements have been argued to be at the heart of PE, and that “physiological stress, associated with major climatic change or invasion of new habitats, disrupts epigenetic silencing, resulting in TE reactivation, increased TE expression and/or germ-line infection by exogenous retroviruses” (Zeh, Zeh, and Ishida, 2009: 715). Further, this hypothesis—that the epigenetic regulation of transposable elements regulates PE—makes testable predictions (Zeh, Zeh and Ishida, 2009: 721). This is also a mechanism to further explain how stress-induced directed mutations occur. Thus, there is an epigenetic basis for the rapid transformation of species which involves the silencing of transposable elements. So calls for an epigenetic synthesis have been made (Crews and Gore, 2012). We, furthermore, know that Lamarckian inheritance is a major mechanism of evolution (Koonin, 2014). We also know that epigenetic processes like DNA methylation contribute to the evolutionary course (Ash, Colot, and Oldroyd, 2021). Such epigenetic mechanisms have been given solid treatment in West-Eberhard’s (2003) Developmental Plasticity and Evolution. (See also West-Eberhard, 2005 on how developmental plasticity leads to the origin of species differences and Wund, 2015 on the impact of phenotypic plasticity on the evolutionary process.)

Integrating the mechanisms into the EES

So in integrating saltation, internal physiological mechanisms, decimationism, epigenetic processes, phenotypic plasticity and directed mutations into the EES (extended evolutionary synthesis), we can then get a more comprehensive framework. Phenotypic plasticity allows organisms to exhibit various phenotypes in response to various environmental cues, so this introduces a broader aspect of adaptability that goes beyond genetic change while emphasizing the capacity of populations to change based on what is going on in the immediate environment during development.

Genetic drift and neutral evolution also play a role. So beyond the selective pressures emphasized by the modern synthesis, the EES recognizes that genetic changes can occur through stochastic mechanisms which then influence the genetic constitution of a population. Evo-devo then contributes to the synthesis by highlighting the role of developmental processes in evolutionary outcomes. Thus, by understanding how changes in gene regulation during development contribute to morphological diversity, evo-devo therefore provides insight into evolutionary mechanisms which transcend so-called natural selection.

Moreover, the integration of epigenetic inheritance and cultural evolution also extends the scope of the EES. Epigenetic mechanisms can influence gene expression without a change to the DNA sequence, and can contribute to heritability and adaptability. Cultural evolution, then, in acknowledging the influence of transmitted knowledge and practices on adaptive success, also broadens our understanding of evolution beyond biological factors. Thus, by incorporating all of the discussed mechanisms, the EES fosters a unique, integrative approach while recognizing that the evolutionary process is influenced by a mixture of biological, environmental, cultural and developmental factors. There is also the fact that the EES has better predictive and explanatory power than the modern synthesis—it also makes novel predictions (Laland et al, 2015).

Conclusion

This discussion has delved into diverse facets of evolutionary theory, shown that natural selection is seen as a mechanism in the modern day, that Darwin and modern-day authors see natural selection as the mechanism of speciation, and considered a few mechanisms of evolution beyond natural selection. Fodor’s argument was introduced to question the applicability of “selection-for” traits, and challenged the notion of natural selection as a mechanism of evolutionary change. Fodor’s argument therefore paved the way for the mechanisms I discussed and opened the door for the reevaluation of saltation, internal physiological mechanisms, decimationism and the EES more broadly in explaining the fact of evolution. So this discussion has shown that we have to think about evolution not as selection-centric, but in a more holistic manner.

There are clearly epigenetic mechanisms which influence speciation on a PE model, and these epigenetic mechanisms then also contribute to the broader understanding of evolution beyond PE. In the PE model, where speciation events are characterized by rapid and distinct changes, epigenetic mechanisms play a crucial role in influencing the trajectory of evolutionary transitions. These epigenetic mechanisms, then, contribute to the heritability of traits and the adaptability of populations. These epigenetic mechanisms also extend beyond their impact on speciation within the PE model. So by influencing gene expression in response to environmental cues, epigenetic changes then provide a dynamic layer to the evolutionary process which allows populations to adapt more rapidly to changing conditions. Therefore, epigenetic mechanisms become integral components in explaining evolutionary dynamics which then align with the principles of the EES.

The integration of these concepts into the EES then further broadens our understanding of evolution. So by incorporating genetic drift, phenotypic plasticity, evo-devo, epigenetic inheritance, directed mutation, and cultural evolution, the EES provides a comprehensive framework which recognizes the complexity of the evolutionary process beyond mere reductive genetic change. Phenotypic plasticity allows organisms to respond to cues during development and change the course of their development based on what is occurring in the environment, without relying solely on genetic changes. Genetic drift then introduces stochastic processes and neutral evolution. Evo-devo then contributes to the synthesis by highlighting the role of developmental processes in evolutionary outcomes. Epigenetic inheritance also brings a non-genetic layer to heritability, acknowledging the impact of environmentally responsive gene regulation. Cultural evolution then recognizes the transmission of knowledge and practices within populations as a factor which influences adaptive success. So putting this all together, these integrations then suggest that evolution is a multifaceted interplay of irreducible levels (Noble, 2012) which then challenges natural selection as a primary or sole mechanism of evolution, and as a mechanism at all, since we can explain what natural selection purports to explain without reliance on it.

So if evolutionary processes encompass mechanisms beyond natural selection like saltation, internal physiological mechanisms, decimationism, punctuated equilibrium, and phenotypic plasticity, and if we are to reject natural selection as an explanation for trait fixation and speciation based on Fodor’s argument, and if these mechanisms are an integral part of the EES, then the EES offers a more comprehensive framework for understanding evolution. Evolutionary processes do encompass mechanisms beyond natural selection, as evidenced by critiques of selection-centric views and those views that are seen as alternatives to natural selection like saltation, internal physiological mechanisms and decimationism. Thus, by incorporating the aforementioned mechanisms, we will have a better understanding of evolution than if merely relying on the non-mechanism of natural selection to explain trait fixation and speciation.

Rushton, Race, and Twinning

2500 words

As is the case with the other lines of evidence that intend to provide sociobiological evidence in support of the genetic basis of human behavior and development (relating to homology, heritability, and adaptation), Rushton’s work reduces to no evidence at all. (Lerner, 2018)

Introduction

From 1985 until his death in 2012, J. P. Rushton attempted to marshal all of the data and support he could for a theory called r-K selection theory or Differential K theory (Rushton, 1985). The theory posited that while humans were the most K species of all, some human races were more K than others, so it then followed that some human races were more r than others. Rushton then collated mass amounts of data and wrote what would become his magnum opus, Race, Evolution and Behavior (Rushton, 1997). So in the r/K theory first proposed by MacArthur and Wilson, unstable, unpredictable environments favored an r strategy whereas a stable, predictable environment favored a K strategy. (See here for my response to Rushton’s r/K.)

So knowing this, one of the suite of traits Rushton put on his r/K matrix was twinning rates. Rushton (1997: 6) stated:

the rate of dizygotic twinning, a direct index of egg production, is less than 4 per 1,000 births among Mongoloids, 8 per 1,000 among Caucasoids, and 16 or greater per 1,000 among Negroids.

I won’t contest the claim that the rate of DZ twinning differs by race—because it’s pretty well-established with recent data that blacks are more likely to have twins than whites (that is, blacks have a slightly higher chance of having twins than whites, who have a slightly higher chance of having twins than Asians) (Santana, Surita, and Cecatti, 2018; Wang, Dongarwar, and Salihu, 2020; Monden, Pison, and Smits, 2021)—I’m merely going to contest the causes of DZ twinning. Because it’s clear that Rushton was presuming this to be a deeply evolutionary trait, since a high rate of twins—in an evolutionary context—would mean a higher chance for children of a particular family to survive and therefore spread their genes, and thusly would, in his eyes, lend credence to his claim that Africans were more r compared to whites, who were more r compared to Asians.

But to the best of my knowledge, Rushton didn’t explain why, biologically, blacks would have more twins than whites—he merely said “This race has more twins than this race, so this lends credence to my theory.” That is, he didn’t posit a biological mechanism that would instantiate a higher rate of twinning in blacks compared to whites and Asians, nor explain how environmental effects wouldn’t have any say in the rate of twinning between the races. However, I am aware of environmental factors that would lead to higher rates of twinning, and I am also aware of the mechanisms of action that allow twinning to occur (eg phytoestrogens, FSH, LH, and IGF). And while these are of course biological factors, I will show that there are considerable effects of environmental interactions like diet on the levels of these hormones which are associated with twinning. I will also explain how these hormones are related to twinning.

While the claim that there is a difference in the rate of DZ twinning by race seems to be true, I don’t think it’s a biological trait, never mind an evolutionary one as Rushton proposed (because even if Rushton’s r/K were valid, “Negroids” would be K and “Mongoloids” would be r; Anderson, 1991). Nonetheless, Rushton’s r/K theory is long-refuted, though he did call attention to some interesting observations (which other researchers never ignored; they just didn’t attempt some grand theory of racial differences).

Follicle stimulating hormone, luteinizing hormone, and insulin-like growth factor

We know that older women are more likely to have twins while younger women are less likely (Oleszczuk et al, 2001), so maternal age is a factor. As women age, a hormone called follicle stimulating hormone (FSH) increases due to a decline in estrogen, and it is one of the earliest signs of female reproductive aging (McTavish et al, 2007), being one of the main biomarkers of ovarian reserve tested on day 3 of the menstrual cycle (Roudebush, Kivens, and Mattke, 2008). It is well established that twinning is different in different geographic locations, that the rate of MZ twins is constant at around 3.5 to 4 per 1,000 births (so what is driving the differences is the birth of DZ twins), and that it increases due to an increase in FSH (Santana, Surita, and Cecatti, 2018). We also know that pre-menopausal women who have given birth to DZ twins have higher levels of FSH on the third day of their menstrual cycle (Lambalk et al, 1998).

So if FSH levels stay too high for too long, then multiple eggs may be released, which could lead to an increase in DZ twinning. FSH stimulates the maturation and growth of ovarian follicles, each of which contains an immature egg called an oocyte. FSH acts on the ovaries to promote the development of multiple ovarian follicles during the follicular phase of the menstrual cycle, a process which is called recruitment. In a normal menstrual cycle, only one follicle is stimulated to release one egg; but when FSH levels are elevated, this can result in the development and maturation of more than one follicle, which is known as polyovulation. Polyovulation then increases the chance of the release of multiple eggs during ovulation. Thus, if more than one egg is released during a menstrual cycle and both are fertilized, it can then lead to the development of DZ twins.
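The causal chain in this paragraph can be sketched as a hypothetical toy model (all probabilities below are invented for illustration; they are not clinical rates): DZ twinning requires polyovulation plus two fertilizations, so anything that raises the polyovulation rate, as elevated FSH is argued to do, raises the DZ rate proportionally.

```python
# Hypothetical toy model of the FSH -> polyovulation -> DZ-twin chain
# described above. All numbers are made up for illustration; only the
# multiplicative structure is the point.
def dz_twin_chance(p_polyovulation: float, p_fertilized: float) -> float:
    """P(DZ twins) when two eggs must be released AND each be fertilized."""
    return p_polyovulation * p_fertilized ** 2

baseline = dz_twin_chance(p_polyovulation=0.02, p_fertilized=0.3)
elevated = dz_twin_chance(p_polyovulation=0.06, p_fertilized=0.3)  # "high FSH" cycle

# Tripling the polyovulation rate triples the DZ-twin rate:
print(round(baseline, 4), round(elevated, 4))  # prints 0.0018 0.0054
```

The sketch also shows why an environmental lever (diet raising FSH, hence raising the polyovulation term) is enough to move population twinning rates without any genetic difference in the other terms.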

Along with FSH, we also have luteinizing hormone (LH). FSH and LH act synergistically (Raju et al, 2013). LH, like FSH, isn’t directly responsible for the increase in twinning, but the process that it allows (playing a role in ovulation) is a crucial factor in twinning. So LH is responsible for triggering ovulation, which is the release of a mature egg from the ovarian follicle. (Ovulation typically occurs 24 to 36 hours after LH surges.) In a typical menstrual cycle, only one follicle is stimulated to release one egg, which is triggered by the surge in LH. But if there are multiple mature follicles in the ovaries (which could be influenced by FSH), then a surge in LH can lead to the release of more than one egg. So the interaction of LH with other hormones like FSH, along with the presence of multiple mature follicles, can be associated with a higher chance of having DZ twins. FSH therapies are also used in assisted reproduction (eg Munoz et al, 1995 in mice; Ferraretti et al, 2004; Pang, 2005; Pouwer, Farquhar, and Kremer, 2015; Fatemi et al, 2021).

So when it comes to FSH, we know that malnutrition may play a role in twinning, and also that wild yams—a staple food in Nigeria—increase phytoestrogens, which increase FSH in the body of women (Bartolus, et al, 1999). Wild yams have been used to increase estrogen in women’s bodies (due to the phytoestrogens they contain), and they enhance estradiol through the mechanism of binding to estrogen receptor sites (Hywood, 2008). And since Nigeria has the highest rate of twinning in the world (Santana, Surita, and Cecatti, 2018), and their diet is wild yam-heavy (Bartolus, et al, 1999), it seems that this fact would go a long way in explaining why they have higher rates of twinning. Mount Sinai says that “Although it does not seem to act like a hormone in the body, there is a slight risk that wild yam could produce similar effects to estrogen.” It acts as a weak phytoestrogen (Park et al, 2009). (But see Beckham, 2002.) But when phytoestrogens are consumed, they can then bind to estrogen receptors in the body and trigger estrogenic effects, which could then lead to the potential stimulation and release of multiple eggs, which would increase the chance of DZ twinning.

One study showed that black women, in comparison to white women, had “lower follicular phase LH:FSH ratios” (Reuttman et al, 2002; cf Marsh et al, 2011), while Randolph et al (2004) showed that black women had higher FSH than Asian and white women. So the lower LH:FSH ratio could affect the timing and regulation of ovulation, and a lower LH:FSH ratio could reduce the chances of premature ovulation and could affect the release of multiple eggs.

Lastly, when it comes to insulin-like growth factor (IGF), this could be influenced by a high-protein diet or a high-carb diet. Diets high in high-glycemic carbs can lead to increased insulin production, which would then lead to increased IGF levels. Just like with FSH and LH, increased levels of IGF could also, in concert with the other two hormones, influence the maturation and release of multiple eggs during a menstrual cycle, which would then increase the chance of twinning (Yoshimura, 1998). IGF can also stimulate the growth and development of multiple follicles (Stubbs et al, 2013) and have them mature early if IGF levels are high enough (Mazerbourgh and Monget, 2018). This could then also lead to polyovulation, triggering the release of more than one egg during ovulation. IGF can also influence the sensitivity of the ovaries to hormonal signals, like those from the pituitary gland, which then leads to enhanced ovarian sensitivity to hormones like FSH and LH, which then, of course, would act synergistically, increasing the rate of dizygotic twinning. (See Mazerbourgh and Monget, 2018 for a review.)

So we know that black women have higher levels of IGF-1 and free IGF-1—but lower IGF-2 and IGFBP-3—than white women (Berrigan et al, 2010; Fowke et al, 2011). The higher IGF-1 levels in black women could lead to increased ovarian sensitivity to FSH and LH, and thus enhanced ovarian sensitivity could lead to the promotion and release of multiple eggs during ovulation. The lower IGF-2 levels could alter the balance of IGF-1 and IGF-2, which would then further influence ovarian sensitivity to other hormones. IGFBP-3 is a binding protein which regulates the bioavailability of IGF-1, so lower levels of IGFBP-3 could lead to higher concentrations of free IGF-1, which would then further stimulate the ovarian follicles and could lead to polyovulation, leading to increased twinning. Though there is some evidence that this difference does have a “genetic basis” (Higgins et al, 2005), we know that dietary factors do have an effect on IGF levels (Heald et al, 2003).

Rushton’s misinterpretations

Rushton got a ton wrong, but he was right about some things too (which is to be expected if you’re looking to create some grand theory of racial differences). I’m not too worried about that. But what I AM worried about is Rushton’s outright refusal to address his most serious critics in the literature, most importantly Anderson (1991) and Graves (2002a, b). If you check his book (Rushton, 1997: 246-248), his responses are hardly sufficient to address the devastating critiques of his theory. (Note how Rushton never responded to Graves, 2002—ever.) Gorey and Cryns (1995) showed how Rushton cherry-picked what he liked for his theory while stating that “any behavioral differences which do exist between blacks, whites and Asian Americans for example, can be explained in toto by environmental differences which exist between them,” while Ember, Ember, and Peregrine (2003) concluded similarly. (Rushton did respond to Gorey and Cryns, but not Ember, Ember, and Peregrine.) Cernovsky and Littman (2019) also showed how Rushton cherry-picked his INTERPOL crime data.

Now that I have set the stage for Rushton’s “great” scholarship, let’s talk about the response he got to his twinning theory.

Allen et al (1992) have a masterful critique of Rushton’s twinning theory. They review twinning stats in other countries across different time periods and conclude that “With such a wide overlap between races, and such great variation within races, twinning rate is probably no better than intelligence as an index of genetic status for racial groups.” They also showed that the twinning mechanism didn’t seem to be a relevant factor in survival until the modern day, with the advancement of our medical technologies, since twinning increases the risk of death for the mother (Steer, 2007; Santana et al, 2018). Rushton also misinterpreted numerous traits associated with twinning:

individual twin proneness and its correlates do not provide Rushton's desired picture of a many-faceted r-strategy (even if such individual variation could have evolutionary meaning). With the exception of shorter menstrual cycles found in one study, the traits Rushton cites as r-selected in association with twinning are either statistical artifacts of no reproductive value or figments of misinterpretation.

Conclusion

I have discussed a few biological variables that lead to higher rates of twinning and I have cited some research which shows that black women have higher rates of some of the hormones that are related to higher rates of twinning. But I have also shown that it’s not so simple to jump to a genetic conclusion, since these hormones are of course mediated by environmental factors like diet.

Rushton quite clearly takes these twinning rate differences to be "genetic" in nature, but we are in the 2020s now, not the 1980s, and we now know that genes are necessary, but passive, players in the formation of phenotypes (Noble, 2011, 2012, 2016; Richardson, 2017, 2021; Baverstock, 2021; McKenna, Gawne, and Nijhout, 2022). These new ways of looking at genes—as passive, not active, causes, and as no more special than any other developmental resource—show that the reductionist thinking of Rushton and his contemporaries was flat-out false. Nonetheless, while Rushton did get it right that there is a racial difference in twinning, the difference, I think, isn't a genetic one, and I certainly don't think it lends credence to his Differential K theory, since Anderson showed that if we were to accept Rushton's premises, then Africans would be K and Asians would be r. So while there are also differences in menarche between blacks and whites, this too seems to be environmentally driven.

Rushton’s twinning thesis was his “best bet” at attempting to show that his r/K theory was “right” about racial differences. But the numerous devastating critiques of not only Rushton’s thesis on twinning but his Differential K theory itself show that Rushton was merely a motivated reasoner (David Duke also consulted with Rushton when Duke wrote his book My Awakening, in which Duke describes how psychologists led to his “racial awakening”), so “The claim that Rushton was acting only as a scientist is not credible given this context” (Winston, 2020). Even the usefulness of psychometric life history theory (which derives from Rushton’s Differential K) has recently been questioned (Sear, 2020).

But it is now generally accepted that Rushton’s r/K theory, and the psychometric life history theory that rose from its ashes, just aren’t good ways to conceptualize how humans live in the numerous biomes we inhabit.

Racial Differences in Motor Development: A Bio-Cultural View of Motor Development

3050 words

Introduction

Psychologist J. P. Rushton was perhaps most famous for attempting to formulate a grand theory of racial differences. He tried to argue that, on a matrix of different traits, the “hierarchy” was basically Mongoloids > Caucasoids > Negroids. But Rushton’s theory was met with much force, and authors across the many disciplines from which he drew his data attacked his r/K selection theory, also known as Differential K theory (where all humans are K, but some humans are more K than others, and so some humans are more r than others). Nonetheless, although his theory has been falsified for decades, did he get some things right about race? Well, a stopped clock is right twice a day, so it wouldn’t be that outlandish to believe that Rushton got some things right about racial differences, especially when it comes to physical differences. While we can be certain that there are physical differences between the groups we term “racial groups” and designate “white”, “black”, “Asian”, “Native American”, and “Pacific Islander” (the five races in American racetalk), this doesn’t lend credence to Rushton’s r/K theory.

In this article, I will discuss Rushton’s claims on motor development between blacks and whites. I will argue that he basically got this right, but that it is of no consequence to the overall truth of his grand theory of racial differences. We know that there are physical differences between racial groups, but that fact doesn’t entail that Rushton’s grand theory is true. The only thing it entails, I think, is that physical differences between races could exist; it is a leap to attribute these differences to Rushton’s r/K theory, since that theory has been falsified on logical, empirical, and methodological grounds. So I will argue that while Rushton got this right, a stopped clock is right twice a day, and this doesn’t mean that his r/K theory is true for human races.

Was Rushton right? Evaluating newer studies on black-white motor development

Imagine three newborns: one white, one black, and one Asian. Observing the first few weeks of their lives, you begin to notice differences in motor development between them. The black infant is more motorically advanced than the white infant, who is more motorically advanced than the Asian infant. The black infant begins to master movement and coordination, showing a remarkable level of motoric dexterity, while the white infant shows less motoric dexterity than the black infant, and the Asian infant shows still less than the white infant.

These disparities in motor development are evident in the early stages of life, so are they genetic? Cultural? Bio-cultural? I will argue that a bio-cultural view explains them, and so I will of course eschew reductionism; as infants grow and navigate their cultural milieu and family lives, these contexts have a significant effect on their experiences and, along with them, their motoric development.

Although Rushton got a lot wrong, it seems that he got this issue right—there do seem to be differences in the precocity of motor development between the races, and the references he cites below from his 2000 edition of Race, Evolution, and Behavior—although most are ancient by today’s standards—hold up to scrutiny today: blacks walk earlier than whites, who walk earlier than Asians.

Rushton (2000: 148-149) writes:

Revised forms of Bayley’s Scales of Mental and Motor Development administered in 12 metropolitan areas of the United States to 1,409 representative infants aged 1-15 months showed black babies scored consistently above whites on the Motor Scale (Bayley, 1965). This difference was not limited to any one class of behavior, but included: coordination (arm and hand); muscular strength and tonus (holds head steady, balances head when carried, sits alone steadily, and stands alone); and locomotion (turns from side to back, raises self to sitting, makes stepping movements, walks with help, and walks alone).

Similar results have been found for children up to about age 3 elsewhere in the United States, in Jamaica, and in sub-Saharan Africa (Curti, Marshall, Steggerda, & Henderson, 1935; Knobloch & Pasamanik, 1953; Williams & Scott, 1953; Walters, 1967). In a review critical of the literature Warren (1972) nonetheless reported evidence for African motor precocity in 10 out of 12 studies. For example, Geber (1958:186) had examined 308 children in Uganda and reported an “all-round advance of development over European standards which was greater the younger the child.” Freedman (1974, 1979) found similar results in studies of newborns in Nigeria using the Cambridge Neonatal Scales (Brazelton & Freedman, 1971).

Mongoloid children are motorically delayed relative to Caucasoids. In a series of studies carried out on second- through fifth-generation Chinese-Americans in San Francisco, on third- and fourth-generation Japanese-Americans in Hawaii, and on Navajo Amerindians in New Mexico and Arizona, consistent differences were found between these groups and second- to fourth-generation European-Americans using the Cambridge Neonatal Scales (Freedman, 1974, 1979; Freedman & Freedman, 1969). One measure involved pressing the baby’s nose with a cloth, forcing it to breathe with its mouth. Whereas the average Chinese baby fails to exhibit a coordinated “defense reaction,” most Caucasian babies turn away or swipe at the cloth with the hands, a response reported in Western pediatric textbooks as the normal one.

On other measures including “automatic walk,” “head turning,” and “walking alone,” Mongoloid children are more delayed than Caucasoid children. Mongoloid samples, including the Navajo Amerindians, typically do not walk until 13 months, compared to the Caucasian 12 months and Negro 11 months (Freedman, 1979). In a standardization of the Denver Developmental Screening Test in Japan, Ueda (1978) found slower rates of motoric maturation in Japanese as compared with Caucasoid norms derived from the United States, with tests made from birth to 2 months in coordination and head lifting, from 3 to 5 months in muscular strength and rolling over, at 6 to 13 months in locomotion, and at 15 to 20 months in removing garments.

Regarding newer studies on this matter, there are differences between European and Asian children in the direction that Rushton claimed. Infants from Hong Kong displayed a different sequence of rolling compared to Canadian children, and there does seem to be a disparity in motoric development between Asian and white children (Mayson, Harris, and Bachman, 2007). These authors cite some of the same studies, like the DDST (which is now outdated), which showed that Asian children were motorically delayed compared to white children. And although they urge caution about the findings of their literature review, it’s quite clear that this pattern exists and that it is a bio-cultural one. They conclude their literature review by writing that “the literature reviewed suggests differences in rate of motor development among children of various ethnic origins, including those of Asian and European descent” and that “Limited support suggests also that certain developmental milestones, such as rolling, may differ between infants of Asian and European origin.” Further, cultural practices in northern China—for example, laying babies on their backs on sandbags—delay the onset of sitting, crawling, and walking by a few months (Karasik et al, 2011).

This is related to the muscles that are used to roll from a supine to a prone position and vice versa. Since some Asian children spend a long time in apparatuses that aren’t conducive to building the strong muscular base needed to roll from supine to prone, to crawl, and eventually to walk, this is the “cultural” in the “bio-cultural” approach I will argue for.

One study on Norwegian children found that half of the children were walking by 13 months (the median), while 25 percent were walking by 12 months and 75 percent by 14 months (Storvold, Aarethun, and Bratberg, 2013). One reason for the delayed walking onset could be supine sleeping, which was promoted by the Back to Sleep campaign to mitigate deaths from SIDS. Although it obviously saved tens of thousands of infant lives, it came at the cost of slightly stunted motoric development. It also seems that infant milestones such as walking have poor predictive value when it comes to later health (Jenni et al, 2012).

Black Caribbean, black African and Indian infants were less likely to show delays in gross motor milestones compared to white infants. But Pakistani and Bangladeshi infants were more likely to be delayed in motoric development and communicative gestures, which was partly attributed to socio-cultural factors (Kelly et al, 2006). Kelly et al (2006: 828) also warn against genetic conclusions based on their large findings of difference between white and African and Caribbean infants:

The differences we observed between Black African and Black Caribbean compared with White infants are large and remain unaffected after adjusting for important covariates. This makes it tempting to conclude that the remaining effect must be a consequence of genetic differences. However, such a conclusion would be prematurely drawn. First, we have not included the measurement of genetic factors in our analysis, and, therefore, the presence of such effects cannot be demonstrated. Second, speculating on such effects should only be done alongside recognition that the model we have been able to test contains imperfect measurement.

It has also been observed that black and white children achieved greater mastery of motoric ability (locomotor skills) compared to Asian children, though there was no difference by age group (Adeyemi-Walker et al, 2018). It was also found that infants with higher motor development scores had a lower weight relative to their length as they grew; that is, delayed motor development was associated with higher weight relative to length (Shoaibi et al, 2018). Black infants are also more motorically advanced, and this is seen up to two years of age (Malina, 1988), while black children perform better on tests of motor ability than white children (Okano et al, 2001). Kilbride et al (1970) also found that Baganda infants in Uganda showed better motoric ability than white American children. Campbell and Heddeker (2001) also showed that black infants were more motorically advanced than infants of other races.

It is clear that research like this blows up the claim that there should be a “one-size-fits-all” chart for motoric development in infants; instead, there should be race-specific milestones. This means that we should throw out WEIRD assumptions when it comes to the motoric development of infants (Karasik et al, 2011). They discuss research in other cultures where African, Caribbean, and Indian caregivers massage the muscles of babies, stretch their limbs, toss them in the air, sit them up, and walk with them while supporting them, which shapes their muscles and teaches them the mind-muscle connections needed to eventually learn how to walk. It also seems that random assignment to exercise accelerates how quickly an infant walks. White infants also sit at 6 months while black infants sit at 4 months. Nonetheless, it is clear that culture and context can indeed shape motoric development in groups around the world.

A bio-cultural view of motor development

When it comes to biological influences on motor development, sex and age are two important variables (Escolano-Perez, Sanchez-Lopez, and Herrero-Nivela, 2021). Important to this, of course, is that the individual must be developing normally: they must have a normal brain with normal vision and spatial skills, and they must be able to hear (to eventually follow commands and monitor what is going on in their environment so they can change course if need be). Further, the child’s home environment and gestational age influence different portions of motor development (Darcy, 2022). After infants begin crawling, their whole world changes: they process visual motion better and faster, becoming able to differentiate between different speeds and directions, so a stimulating environment can spur the development of the infant’s brain (Van der Meer and Van der Weel, 2022). Biological maturation and body weight also affect motor development. Walking develops naturally, but walking and motor competence need to be nurtured for the child to reach their full potential; lower motor competence is related to higher body weight (Drenowatz and Greier, 2019).

One study on Dutch and Israeli infants even found—using developmental niche construction—that “infant motor development indeed is at least partly culturally constructed [which] emphasizes the importance of placing infant motor development studies into their ‘cultural cradle’” (Oudgeneong, Atun-Eni, and Schaik, 2020). Gross motor development—rolling over, crawling, alternating kicks, moving from lying to sitting, and having tummy time—is recognized by the WHO. Further, children from different cultures have different experiences, which could also mean, for example, not doing things that are conducive to the development of gross motor skills (Angulo-Barroso et al, 2010). Moreover, motor development is embodied, enculturated, embedded, and enabling (Adolph and Hoch, 2020). It is also known that differences in the cultural environment “have a non-negligible effect on motor development” (Bril, 1986). Motor development also takes place in physical environments and is purposive and goal-directed (Hallemans, Verbeque, and de Walle, 2020).

So putting this all together, we can conceptualize motor development as a dynamic process influenced by a complex interplay of biological and cultural factors (Barnes, Zieff, and Anderson, 1999). Biological factors like sex, age, health, and sensory abilities, and socio-cultural factors like home environment and developmental niches, explain motor development and the differences in it between individuals. Cultural differences, though, can impede motor development and keep one from reaching milestones they would have otherwise reached in a different cultural environment, just as one who couldn’t hear or see would have trouble reaching developmental milestones.

Children of course grow up in cultural environments and contexts, and so they are culturally situated. What this means is that the cultural and social environment the child finds themselves in will influence their physical and mental development and the milestones they hit, which is dictated by their normal biology as allowed by the socio-cultural environment they are born into. So we have the bio-cultural view of motor development: beyond the cultural environment the child finds themselves in, the interactions they have with parents and caregivers (more knowledgeable others) can be pertinent to their motor development and their reaching of developmental milestones. Cultural practices and expectations could emphasize certain milestones over others and then guide the child toward that trajectory. So the framework recognizes that normal biology and sensory perception are needed for normal motor development, but that cultural and social differences in context will spur motor development differently in children who find themselves in different cultures.

Conclusion

Was Rushton right about this? Yes, I think he was. The recent literature on the matter speaks to this. But that doesn’t mean that his r/K selection theory is true. There are differences in motor development between races; what is interesting is the interaction between biological and cultural factors that spurs motor development. The question of black motor precocity, however, is a socio-political question, since science is a social convention influenced by the values of the scientist in question. Now, to the best of my knowledge, Rushton himself never carried out studies on this; he just collated them to use for his racial trait matrix. However, it’s quite clear that Rushton was politically and socially motivated to prove that his theory was true.

But physical differences between the races are easy enough to prove, and of course they are due to biological and cultural interactions. There are differences in skin color and its properties between blacks and whites (Campiche et al, 2019). There is a 3 percent difference in center of mass between blacks and whites, which explains why blacks excel at running and whites at swimming (Bejan, Jones, and Charles, 2010). There are differences in body composition between Asians and whites, which means that, at the same BMI, Asians would have thicker skin folds and higher body fat than whites (Wang et al, 1994; WHO expert consultation, 2004; Wang et al, 2011). Just as, at the same BMI, blacks have lower body fat and thinner skin folds than whites (Vickery, Cureton, and Collins, 1988; Wagner and Heyward, 2000; Flegal et al, 2010). There are differences in menarche and thelarche between blacks and whites (Wagner and Heyward, 2000; Kaplowitz, 2008; Reagan et al, 2013; Cabrera et al, 2014; Deardorff et al, 2014). There are differences in anatomy, physiology, and somatotype between blacks and whites, and these differences would explain how the races would perform on the big four lifts. There are interesting and real physical differences between races.

So obviously, what is considered “normal” differs between cultures, and motor development is no different. Just as I think we should have different BMI and skin fold charts for different races, so too should we have different developmental milestones for different races and cultures. The upshot here is clear, since what is “average” and “normal” differs by race and culture. For instance, black babies begin walking around 11 months, white babies around 12 months, and Native American babies around 13 months. So while parents may worry that their child didn’t hit a certain developmental milestone like walking, sitting, or rolling, taking a bio-cultural approach will assuage these worries.

Nonetheless, while Rushton was right about race and motor development, we need to place his research project in context. He was clearly motivated, despite the numerous and forceful critiques of his framework, to prove that he was right. His continued pushing of his theory up until his death shows me that he was quite obviously socially and politically motivated, contrary to what he may have said.

We have approached this paper from the stance that science is a social activity, with all observations influenced by, as well as reflective of, the values of scientists and the political leanings of the sociocultural context within which research is conducted. We suggest that when questions of group difference are pursued in science, awareness of how the categories themselves have been shaped by social and historical forces, as well as of the potential effects on society, is important. (Barnes, Zieff, and Anderson, 1999)
