NotPoliticallyCorrect


Yearly Archives: 2023

A Critical Examination of Responses to Berka’s (1983) and Nash’s (1990) Philosophical Inquiries on Mental Measurement from Brand et al (2003)

2750 words

Introduction

What I term “the Berka-Nash measurement objection” is—I think—one of the most powerful arguments against not only the concept of IQ “measurement” but against psychological “measurement” as a whole—it also complements my irreducibility of the mental arguments. (Although there are of course contemporary authors who argue that IQ—and other psychological traits—are immeasurable, the Berka-Nash measurement objection I think touches the heart of the matter extremely well.) The argument that Karel Berka (1983) mounted in Measurement: Its Concepts, Theories, and Problems is a masterclass in defining what “measurement” means and the rules needed for what designates X as a true measure and Y as a true measurement device. Then Roy Nash (1990) in Intelligence and Realism: A Materialist Critique of IQ brought Berka’s critique of extraphysical (mental) measurement to a broader audience, simplifying some of the concepts that Berka discussed and applying them to the IQ debate, arguing that there is no true property that IQ tests measure, and therefore that IQ tests aren’t a measurement device and IQ isn’t a measure.

I have found only one response to this critique of mental measurement by hereditarians—that of Brand et al (2003). Brand et al think they have shown that Berka’s and Nash’s critique of mental measurement is consistent with IQ, and that IQ can be seen as a form of “quasi-quantification.” But their response misses the mark. In this article I will argue that it misses the mark for these reasons: (1) they didn’t articulate the specified measured object, object of measurement and measurement unit for IQ, and they overlooked the challenges that Berka discussed about mental measurement; (2) they ignored the lack of objectively reproducible measurement units; (3) they misinterpreted what Berka meant by “quasi-quantification” and then likened it to IQ; and (4) they failed to engage with Berka’s call for precision and reliability.

IQ, therefore, isn’t a measurable construct since there is no property being measured by IQ tests.

Brand et al’s arguments against Berka

The response from Brand et al to Berka’s critique of mental measurement takes aim, in the context of IQ, at Berka’s overarching analysis of measurement. But examining their arguments against Berka reveals a few shortcomings which undermine their attempted rebuttal of Berka’s thesis of measurement. From failing to articulate the fundamental components of IQ measurement, to overlooking the broader philosophical issues that Berka addressed, Brand et al’s response falls short of providing a comprehensive rebuttal to Berka’s thesis. In actuality—despite the claims from Brand et al—Berka’s argument against mental measurement doesn’t lend credence to IQ measurement—it effectively destroys it, upon a close, careful reading of Berka (and then Nash).

(1) The lack of articulation of a specified measured object, object of measurement and measurement unit for IQ

This is critical for any claim that X is a measure and that Y is a measurement device—one needs to articulate the specified measured object, the object of measurement and the measurement unit for whatever one claims to be measuring. To quote Berka:

If the necessary preconditions under which the object of measurement can be analyzed on a higher level of qualitative aspects are not satisfied, empirical variables must be related to more concrete equivalence classes of the measured objects. As a rule, we encounter this situation at the very onset of measurement, when it is not yet fully apparent to what sort of objects the property we are searching for refers, when its scope is not precisely delineated, or if we measure it under new conditions which are not entirely clarified operationally and theoretically. This situation is therefore mainly characteristic of the various cases of extra-physical measurement, when it is often not apparent what magnitude is, in fact, measured, or whether that which is measured really corresponds to our projected goals. (Berka, 1983: 51)

Both specific postulates of the theory of extraphysical measurement, scaling and testing – the postulates of validity and reliability – are then linked to the thematic area of the meaningfulness of measurement and, to a considerable extent, to the problem area of precision and repeatability. Both these postulates are set forth particularly because the methodologists of extra-physical measurement are very well aware that, unlike in physical measurement, it is here often not at all clear which properties are the actual object of measurement, more precisely, the object of scaling or counting, and what conclusions can be meaningfully derived from the numerical data concerning the assumed subject matter of investigation. Since the formulation, interpretation, and application of these requirements is a subject of very vivid discussion, which so far has not reached any satisfactory and more or less congruent conclusions, in our exposition we shall limit ourselves merely to the most fundamental characteristics of these postulates. (Berka, 1983: 202-203)

At any rate, the fact that, in the case of extraphysical measurement, we do not have at our disposal an objectively reproducible and significantly interpretable measurement unit, is the most convincing argument against the conventionalist view of a measurement, as well as against the anti-ontological position of operationalism, instrumentalism, and neopositivism. (Berka, 1983: 211)

One glaring flaw—and I think it is the biggest—in Brand et al’s response is their failure to articulate the specified measured object, object of measurement and measurement unit for IQ. Berka’s insistence on precision in measurement requires a detailed conception of what IQ tests aim to measure—we know this is “IQ” or “intelligence” or “g”—but they then of course would have run into how to articulate and define it in a physical way. Berka emphasized that the concept of measurement demands precision in defining what is being measured (the specified measured object), the entity being measured (the object of measurement), and the unit applied for measurement (the measurement unit). Thus, for IQ to be a valid measure and for IQ tests to be a valid measurement device, it is crucial to elucidate exactly what the tests measure, the nature of the mental attribute supposedly under scrutiny, and the standardized unit of measurement.

Berka’s insistence on precision aligns with a fundamental aspect of scientific measurement—the need for a well-defined and standardized procedure to quantify a particular property. This is evident in physical measurement, as when the length of an object is measured in meters. But when transitioning to the mental, the challenge lies in actually measuring something that lacks a unit of measurement. (And as Richard Haier (2014) even admits, there is no measurement unit for IQ like inches, liters or grams.) So without a clear and standardized unit for mental properties, claims of measurement are therefore suspect—and indeed impossible. Moreover, by sidestepping this crucial aspect of what Berka was getting at, their argument remains vulnerable to Berka’s foundational challenge regarding the essence of what is being measured and how it is quantified.

Furthermore, Brand et al failed to grapple with what Berka wrote on mental measurement. Brand et al’s response would have been more robust if it had engaged with Berka’s exploration of the inherent intricacies and nuances involved in establishing a clear object of measurement for IQ, or for any mental attribute.

A measurement unit has to be a standardized and universally applicable quantity or physical property that allows for standardized comparisons across different measurements. And none exists for IQ, nor for any other psychological trait. So we can safely argue that psychometrics isn’t measurement, even without touching contemporary arguments against mental measurement.

(2) Ignoring the lack of objectively reproducible measurement units

A crucial aspect of Berka’s critique involves the absence of objectively reproducible measurement units in the realm of mental measurement. Berka contended that in the absence of such a standardized unit of measurement, the foundations for a robust enterprise of measurement are compromised. This is yet another thing that Brand et al overlooked in their response.

Brand et al’s response lacks a comprehensive examination of how the absence of objectively reproducible measurement units in mental measurement undermines the claim that IQ is a measure. They simply do not engage with Berka’s concern on this point. So their lack of attention to the absence of such units weakens, and I think destroys, their response. They should have explored the ramifications of a so-called measure without a measurement unit. This then brings me to their claim that IQ is a form of “quasi-quantification.”

(3) Misinterpretation of “quasi-quantification” and its application to IQ

Brand et al hinge their defense of IQ on Berka’s concept of “quasi-quantification”, which they misinterpret. Berka uses “quasi-quantification” to describe situations where the properties being measured lack the clear objectivity and standardization found in actual physical measurements. But Brand et al seem to interpret “quasi-quantification” as a justification for considering IQ as a valid form of measurement.

Brand et al’s misunderstanding of Berka’s conception of “quasi-quantification” is evident in their attempt to equate it with a validation of IQ as a form of measurement. Berka was not endorsing quasi-quantification as a fully-fledged form of measurement; he highlighted its limitations and distinctiveness compared to traditional quantification and measurement. Berka distinguishes between quantification, pseudo-quantification, and quasi-quantification. He explicitly states that numbering and scaling—in contrast to counting and measurement—cannot be regarded as kinds of quantification. (Note that “counting” in this framework isn’t a variety of measurement, since measurement is much more than enumeration, and counted elements in a set aren’t magnitudes.) Brand et al fail to grasp this nuanced difference, mischaracterizing quasi-quantification as a blanket acceptance of IQ as a form of measurement.

Berka’s reservations about quasi-quantification are rooted in the challenges and complexities associated with mental properties, acknowledging that they fall short of the clear objectivity found in actual physical measurements. Brand et al’s interpretation overlooks this critical aspect, which leads them to erroneously argue that accepting IQ as quasi-quantification is sufficient to justify its status as measurement.

Brand et al’s arguments against Nash

Nash’s book, on the other hand, is a much more accessible and pointed attack on the concept of IQ and its so-called “measurement.” He takes the reader from the beginnings of IQ testing to the Flynn effect and Berka’s argument, and ends with a discussion of test bias. IQ doesn’t have a true “0” point, unlike temperature—and IQ-ists have tried to liken IQ to temperature and IQ tests to the thermometer, but there is no lawful relation between IQ and intelligence like the relation between mercury and temperature in a thermometer, so again the hereditarian claim fails. But most importantly, Nash made the claim that there is actually no property to be measured by IQ tests—what did he mean by this?

Nash of course doesn’t deny that IQ tests rank individuals on their performance. But the claim that IQ is a metric property is simply assumed in IQ theory. The very fact that people are ranked doesn’t justify the claim that people are ranked according to a property revealed by their performance (Nash, 1990: 134). Moreover, if intelligence/“IQ” were truly quantifiable, then the difference between IQ 80 and 90 and the difference between IQ 110 and 120 would represent the same cognitive difference. But this isn’t the case.
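
To see why ranking alone cannot license interval claims, here is a minimal sketch in Python (my own illustration, not Nash’s). Any strictly increasing rescaling of the scores preserves every ordinal comparison, yet changes the sizes of the gaps:

```python
# A monotone (order-preserving) rescaling of IQ scores: ranks survive,
# but interval claims do not. Hypothetical illustration only.

def rescale(score):
    # Any strictly increasing function preserves "less than / greater than".
    return score ** 2

scores = [80, 90, 110, 120]
rescaled = [rescale(s) for s in scores]

# Order is preserved...
assert rescaled == sorted(rescaled)

# ...but the "equal" 10-point gaps are no longer equal:
print(rescaled[1] - rescaled[0])  # 90^2 - 80^2  = 1700
print(rescaled[3] - rescaled[2])  # 120^2 - 110^2 = 2300
```

Both scalings fit the ranking data equally well, so nothing in the scores themselves establishes that a 10-point gap represents the same cognitive difference everywhere on the scale.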

Nash is a skeptic of the claim that IQ tests measure some property. (As I am.) So he challenges the idea that there is a distinct and quantifiable property that can be objectively measured by IQ tests (the construct “intelligence”). Nash also questions whether intelligence possesses the characteristics necessary for measurement—like a well-defined object of measurement and measurement unit. Nash successfully argued that intelligence cannot be legitimately expressed in a metric concept, since there is no true measurement property. But Brand et al do nothing to attack the arguments of Berka and Nash and they do not at all articulate the specified measured object, object of measurement and measurement unit for IQ, which was the heart of the critique. Furthermore, a precise articulation of the specified measured object when it comes to the metrication of X (any psychological trait) is necessary for the claim that X is a measure (along with articulating the object of measurement and measurement unit). But Brand et al did not address this in their response to Nash, which I think is very telling.

Brand et al do rightly note Nash’s key points, but they fall far, far from the mark in effectively mounting a sound argument against his view. Nash argues that IQ test results can, at best, be used only for ordinal comparisons of “less than, equal to, greater than” (which is also what Michell, 2022 argues, concluding the same as Nash). This is of course true, since people take a test and their performance is based on the type of culture they are exposed to (their cultural and psychological tools). Brand et al failed to acknowledge this and grapple with its full implications. Most importantly, they did not grapple at all with this:

The psychometric literature is full of plaintive appeals that despite all the theoretical difficulties IQ tests must measure something, but we have seen that this is an error. No precise specification of the measured object, no object of measurement, and no measurement unit, means that the necessary conditions for metrication do not exist. (Nash, 1990: 145)

All in all, a fair reading of both Berka and Nash will show that Brand et al slithered away from doing any actual philosophizing on the phenomena that Berka and Nash discussed. And, therefore, that their “response” is anything but.

Conclusion

Berka’s and Nash’s arguments against mental measurement/IQ show the insurmountable challenges that the peddlers of mental measurement have to contend with. Berka emphasized the necessity of clearly defining the measured object, object of measurement and measurement unit for a genuine quantitative measurement—these are the necessary conditions for metrication, and they are nonexistent for IQ. Nash then extended this critique to IQ testing, concluding that the lack of a measurable property undermines the claim that IQ is a true measurement.

Brand et al’s response, on the other hand, was pitiful. They attempted to reconcile Berka’s concept of “quasi-quantification” with IQ measurement. Despite seemingly having some familiarity with both Berka’s and Nash’s arguments, they did not articulate the specified measured object, object of measurement and measurement unit for IQ. If Berka really did agree that IQ is “quasi-quantification”, then why did Brand et al not articulate what needs to be articulated?

When discussing Nash, Brand et al failed to address Nash’s claim that IQ can only allow for ordinal comparisons. Nash emphasized numerous times in his book that the absence of a true measurement property challenges the claim that IQ can be measured. Thus, again, Brand et al’s response did not successfully and effectively engage with Nash’s key points and his overall argument against the possibility of measuring intelligence/IQ (and against mental measurement as a whole).

Berka’s and Nash’s critiques highlight the difficulties of treating intelligence (and psychological traits as a whole) as quantifiable properties. Brand et al did not adequately address and consider the issues I brought up above, and they outright tried to weasel their way into having Berka “agree” with them (on quasi-quantification). So they didn’t provide any effective counterargument, nor did they do the simplest thing they could have done—which was articulate the specified measured object, object of measurement and measurement unit for IQ. The very fact that there is no true “0” point is devastating for claims that IQ is a measure. I’ve been told on more than one occasion that “IQ is a unit-less measure”—but that doesn’t make sense. That’s just trying to cover for the fact that there is no measurement unit at all, and consequently, no specified measured object and no object of measurement.

For these reasons, the Berka-Nash measurement objection remains untouched and the questions raised by them remain unanswered. (It’s simple: IQ-ists just need to admit that they can’t answer the challenge and that psychological traits aren’t measurable like physical traits. But then their whole worldview would crumble.) Maybe we’ll wait another 40 and 30 years (counting from Berka and Nash respectively) for a response to the Berka-Nash measurement objection, and hopefully it will at least try harder than Brand et al did in their failure to address these conceptual issues.

Jensen’s Default Hypothesis is False: A Theory of Knowledge Acquisition

2000 words

Introduction

Jensen’s default hypothesis proposes that individual and group differences in IQ are primarily explained by genetic factors. But Fagan and Holland (2002) question this hypothesis. For if differences in experience lead to differences in knowledge, and differences in knowledge lead to differences in IQ scores, then Jensen’s assumption that blacks and whites have the same opportunity to learn the content is questionable—and I think it false. It is obvious that there are differences in opportunity to acquire knowledge, which would then lead to differences in IQ scores. I will argue that Jensen’s default hypothesis is false due to this very fact.

In fact, there is no good reason to accept Jensen’s default hypothesis and the assumptions that come with it. Of course different cultural groups are exposed to different kinds of knowledge, so this—and not genes—would explain why different groups score differently on IQ tests, which are tests of knowledge (even so-called culture-fair tests are biased; Richardson, 2002). I will argue that we need to reject Jensen’s default hypothesis on these grounds, because it is clear that groups aren’t exposed to the same kinds of knowledge, and so Jensen’s assumption is false.

Jensen’s default hypothesis is false due to the nature of knowledge acquisition

Jensen (1998: 444) (cf. Rushton and Jensen, 2005: 335) claimed that what he called the “default hypothesis” should be the null that needs to be disproved. He also claimed that individual and group differences are “composed of the same stuff”, in that they are “controlled by differences in allele frequencies”, that these differences in allele frequencies exist for all “heritable” characters, and that we would find such differences within populations too. So if the default hypothesis is true, it would suggest that differences in IQ between blacks and whites are primarily attributed to the same genetic and environmental influences that account for individual differences within each group. This implies that the genetic and environmental variances that contribute to IQ are the same for blacks and whites, which supposedly supports the idea that group differences are a reflection of individual differences within each group.

But if the default hypothesis were false, then it would challenge the assumption that genetic and environmental influences in IQ between blacks and whites are proportionally the same as seen in each group. Thus, this allows us to talk about other causes of variance in IQ between blacks and whites—factors other than what is accounted for by the default hypothesis—like socioeconomic, cultural, and historical influences that play a more substantial role in explaining IQ differences between blacks and whites.

Fagan and Holland (2002) explain their study:

In the present study, we ensured that Blacks and Whites were given equal opportunity to learn the meanings of relatively novel words and we conducted tests to determine how much knowledge had been acquired. If, as Jensen suggests, the differences in IQ between Blacks and Whites are due to differences in intellectual ability per se, then knowledge for word meanings learned under exactly the same conditions should differ between Blacks and Whites. In contrast to Jensen, we assume that an IQ score depends on information provided to the learner as well as on intellectual ability. Thus, if differences in IQ between Blacks and Whites are due to unequal opportunity for exposure to information, rather than to differences in intellectual ability, no differences in knowledge should obtain between Blacks and Whites given equal opportunity to learn new information. Moreover, if equal training produces equal knowledge across racial groups, than the search for racial differences in IQ should not be aimed at the genetic bases of IQ but at differences in the information to which people from different racial groups have been exposed.

There are reasons to think that Jensen’s default hypothesis is false. For instance, since IQ tests are culture-bound—that is, culturally biased—then if they are biased against one group they are therefore biased in favor of another. This introduces a confounding factor which challenges the assumption of equal genetic and environmental influences between blacks and whites. And since we know that cultural differences in the acquisition of information and knowledge vary by race, what explains the black-white IQ gap is exposure to information (Fagan and Holland, 2002, 2007).

The Default Hypothesis of Jensen (1998) assumes that differences in IQ between races are the result of the same environmental and genetic factors, in the same ratio, that underlie individual differences in intelligence test performance among the members of each racial group. If Jensen is correct, higher and lower IQ individuals within each racial group in the present series of experiments should differ in the same manner as had the African-Americans and the Whites. That is, in our initial experiment, individuals within a racial group who differed in word knowledge should not differ in recognition memory. In the second, third, and fourth experiments individuals within a racial group who differed in knowledge based on specific information should not differ in knowledge based on general information. The present results are not consistent with the default hypothesis. (Fagan and Holland, 2007: 326)

Historical and systematic inequalities could also lead to differences in knowledge acquisition. The existence of cultural biases in educational systems and materials can create disparities in knowledge acquisition. Thus, if IQ tests—which reflect this bias—are culture-bound, this also questions the assumption that the same genetic and environmental factors account for IQ differences between blacks and whites. The default hypothesis assumes that genetic and environmental influences are essentially the same for all groups. But SES/class differences significantly affect knowledge acquisition, so this too challenges the default hypothesis.

For years I have been saying: what if all humans have the same potential but it just crystallizes differently due to differences in knowledge acquisition/exposure and motivation? There is a new study showing that although some children appeared to learn faster than others, they merely had a head start in learning. So it seems that students have the same ability to learn, and that so-called “high achievers” simply had a head start (Koedinger et al, 2023). The students varied significantly in their initial knowledge, so although they had different starting points (which gave the illusion of “natural” talents), all of them had a similar rate of learning. The authors also state that “Recent research providing human tutoring to increase student motivation to engage in difficult deliberate practice opportunities suggests promise in reducing achievement gaps by reducing opportunity gaps” (63, 64).
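
A toy simulation can make this concrete. (This is my own sketch, not Koedinger et al’s model or data; every number in it is made up.) Give all students the identical learning rate but different initial knowledge, and the early gaps persist across dozens of practice opportunities:

```python
import numpy as np

rng = np.random.default_rng(0)

n_students, n_opportunities = 100, 50
head_start = rng.normal(20, 6, n_students)  # varied initial knowledge
rate = 1.5                                  # identical learning rate for everyone

# Each practice opportunity adds the same expected increment for every student.
steps = rate + rng.normal(0, 0.3, (n_students, n_opportunities))
knowledge = head_start[:, None] + np.cumsum(steps, axis=1)

# The early gaps persist: equal-rate learners still look like unequal "talents".
print("spread at start:", knowledge[:, 0].std().round(2))
print("spread at end:  ", knowledge[:, -1].std().round(2))
```

The spread between students at the end is essentially the spread they started with, so nothing about differing learning ability is needed to produce the persistent gap that looks like “natural talent.”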

So we know that different experiences lead to differences in knowledge (its type and content), and we also know that racial groups, for example, have different experiences in virtue of their being different social groups. These different experiences lead to differences in knowledge which are then reflected in the group IQ score. This, then, raises questions about the truth of Jensen’s default hypothesis described above. Thus, if individuals from different racial groups have unequal opportunities to be exposed to information, then Jensen’s default hypothesis is questionable (and I’d say it’s false).

Intelligence/knowledge crystallization is a dynamic process shaped by extensive practice and consistent learning opportunities. The journey towards expertise involves iterative refinement, with each practice opportunity contributing to the crystallization of knowledge. So if intelligence/knowledge crystallizes through extensive practice, and if students don’t show substantial differences in their rates of learning, then it follows that the crystallization of intelligence/knowledge is more reliant on the frequency and quality of learning opportunities than on inherent differences in individual learning rates. It’s clear that my position enjoys some substantial support. “It’s completely possible that we all have the same potential but it crystallizes differently based on motivation and experience.” The Fagan and Holland papers show exactly that in the context of the black-white IQ gap, showing that Jensen’s default hypothesis is false.

I recently proposed a non-IQ-ist definition of intelligence where I said:

So a comprehensive definition of intelligence in my view—informed by Richardson and Vygotsky—is that of a socially embedded cognitive capacity—characterized by intentionality—that encompasses diverse abilities and is continually shaped by an individual’s cultural and social interactions.

So I think that IQ is the same way. It is obvious that IQ tests are culture-bound and are tests of a certain kind of knowledge (middle-class knowledge). So we need to understand how social and cultural factors shape opportunities for exposure to information. And per my definition, the idea that intelligence is socially embedded aligns with the notion that varying sociocultural contexts influence the development of knowledge and cognitive abilities. We also know that summer vacation increases educational inequality, and that IQ decreases during the summer months. This is due to the nature of IQ and achievement tests—they’re different versions of the same test. So higher-class children will return to school with an advantage over lower-class children. This is yet more evidence of how knowledge exposure and acquisition can affect test scores and motivation, and how such differences crystallize, even though we all have the same potential (for learning ability).

Conclusion

So intelligence is a dynamic cognitive capacity characterized by intentionality, cultural context and social interactions. It isn’t a fixed trait, as IQ-ists would like you to believe; it evolves over time due to the types of knowledge one is exposed to. Knowledge acquisition occurs through repeated exposure to information and intentional learning. This, then, challenges Jensen’s default hypothesis, which attributes the black-white IQ gap primarily to genetics. Since diverse experiences lead to varied knowledge, and there is a certain type of knowledge on IQ tests, individuals with a broad range of life experiences will show varying performance on these tests, which then reflects the types of knowledge they were exposed to during the course of their lives. So knowing what we know about blacks and whites being different cultural groups, and what we know about different cultures having different knowledge bases, we can rightly state that disparities in IQ scores between blacks and whites are due to environmental factors.

Unequal exposure to information creates divergent knowledge bases which then influence the score on the test of knowledge (the IQ test). And since we now know that, despite differences in initial performance, students show a surprising regularity in learning rates, this suggests that once exposed to information, the rate of knowledge acquisition remains consistent across individuals, which then challenges the assumption of innate disparities in learning abilities. So the sociocultural context becomes pivotal in shaping the kinds of knowledge that people are exposed to. Cultural tools, environmental factors and social interactions contribute to diverse cognitive abilities and knowledge domains, which then emphasizes the contextual nature of not only intelligence but performance on IQ tests. So what this shows is that test scores are reflective of the kinds of experience the testee was exposed to. Disparities in test scores therefore indicate differences in learning opportunities and cultural contexts.

So a conclusive rejection of Jensen’s default hypothesis asserts that the black-white IQ gap is due to exposure to different types of knowledge. Thus, what explains disparities not only between blacks and whites but between groups generally is unequal opportunity for exposure to information—most importantly the type of information found on IQ tests. My sociocultural theory of knowledge acquisition and crystallization offers a compelling counter to hereditarian perspectives, and asserts that diverse experiences and intentional learning efforts contribute to cognitive development. The claim that all groups or individuals are exposed to similar types of knowledge, as Jensen assumes, is false. By virtue of being different groups, they are exposed to different knowledge bases. Since this is true, and IQ tests are culture-bound tests of a certain kind of knowledge, it follows that what explains group differences in IQ and knowledge is differences in exposure to information.

What If Charles Darwin Never Existed and the Theory of Natural Selection Was Never Formulated?

2200 words

Introduction

Let’s say that we either use a machine to teleport to another reality where Darwin didn’t exist, or one where he died early, before formulating the theory of natural selection (ToNS). Would our evolutionary knowledge suffer? On what grounds could we say that it wouldn’t? Well, since Darwin humbly stated that what he said wasn’t original and that he merely assembled numerous pieces of evidence into a coherent whole to make his ToNS, we would obviously still know that species change over time. That’s what evolution is—change over time—and Darwin, in formulating his ToNS, attempted to prove that natural selection was a mechanism of evolutionary change. But if Darwin never existed, or if the ToNS was never formulated by him, I don’t think that our evolutionary knowledge would suffer. This is because people before Darwin observed that species change over time, like Lamarck and Darwin’s grandfather, Erasmus Darwin.

So in this article I will argue that had Darwin not existed, or had he died young without formulating the ToNS, we would still have adequate theories of speciation, trait fixation and evolutionary change and processes, since naturalists at the time knew that species changed over time. I will discuss putative mechanisms of evolutionary change and show that without Darwin or the ToNS we would still be able to have coherent theories of speciation events and trait fixation. Genetic drift, mutation and neutral evolution, environmental constraints, Lamarckian mechanisms, epigenetic factors, and ecological interactions would have been plausible mechanisms sans Darwin and his ToNS, even in the modern day as our scientific knowledge advanced.

What if Darwin never existed?

For years I have been critical of Darwin’s theory of natural selection as being a mechanism for evolutionary change since it can’t distinguish between causes and correlates of causes. I was convinced by Fodor’s (2008) argument and Fodor and Piattelli-Palmarini’s (2010) argument in What Darwin Got Wrong that Darwin was wrong about natural selection being a mechanism of evolutionary change. I even recently published an article on alternatives to natural selection (which will be the basis of the argument in this article).

So, if Darwin never existed, how would the fact that species can change over time (due to, for instance, selective breeding) be explained? Well, before Charles Darwin, we had his grandfather Erasmus Darwin and Jean Baptiste Lamarck, of Lamarckian inheritance fame. So even if Charles Darwin had not been alive to formulate the ToNS, there would still have been enough for a theory of evolution.

We now know that Charles did read Erasmus’ The Temple of Nature (TToN) (1803), due to the annotations in his copy, and that the TToN bore resemblance not to Darwin’s On the Origin of Species but to The Descent of Man (Hernandez-Avilez and Ruiz-Guttierez, 2023). So although it is tentative, we know that Charles had knowledge of Erasmus’ writings on evolution. But before TToN, Erasmus wrote Zoonomia (1794), where he proposed a theory of common descent and also speculated on the transmutation of species over time. Being very prescient for the time he was writing in, he also discussed how the environment can influence the development of organisms, and how variations in species can arise due to the environment (think directed mutations). Erasmus also discussed the concept of use and disuse—where traits that an organism used more would develop while traits it used less would diminish over time—which was a precursor to Lamarck’s thoughts.

An antecedent to the “struggle for existence” is seen in Erasmus’ 1794 work Zoonomia (p. 503) (which Darwin underlined in his annotations, see Hernandez-Avilez and Ruiz-Guttierez, 2023):

The birds, which do not carry food to their young, and do not therefore marry, are armed with spurs for the purpose of fighting for the exclusive possession of the females, as cocks and quails. It is certain that these weapons are not provided for their defence against other adversaries, because the females of these species are without this armour. The final cause of this contest amongst the males seems to be, that the strongest and most active animal should propagate the species, which should thence become improved.

Jean Baptiste Lamarck wrote Philosophie Zoologique (Philosophical Zoology) in 1809. His ideas on evolution were from the same time period as Erasmus’, and they discussed similar subject matter. Lamarck believed that nature could explain species differentiation, and that environmentally induced behavioral changes could explain changes in species, eventually leading to speciation. Lamarck’s first law was that use or disuse would cause appendages to enlarge or shrink, while his second law was that the changes in question were heritable. We also know that in many cases development precedes evolution (West-Eberhard, 2005; Richardson, 2017), so these ideas, along with the modern observations showing they’re true, also lend credence to Lamarck’s ideas.

First Law: In every animal that has not reached the end of its development, the more frequent and sustained use of any organ will strengthen this organ little by little, develop it, enlarge it, and give to it a power proportionate to the duration of its use; while the constant disuse of such an organ will insensibly weaken it, deteriorate it, progressively diminish its faculties, and finally cause it to disappear.

Second Law: All that nature has caused individuals to gain or lose by the influence of the circumstances to which their race has been exposed for a long time, and, consequently, by the influence of a predominant use or constant disuse of an organ or part, is conserved through generation in the new individuals descending from them, provided that these acquired changes are common to the two sexes or to those which have produced these new individuals (Lamarck 1809, p. 235). [Quoted in Burkhardt Jr., 2013]

Basically, Lamarck’s idea was that traits acquired during an organism’s lifetime could be passed on to descendants. If an organism developed a particular trait in response to its environment, then that trait could be inherited by its descendants. He was also one of the first—along with Erasmus—to go against the accepted wisdom of the time and propose that species could change over time and weren’t fixed. Basically, I think that Lamarck’s main idea was that the environment could have considerable effects on the evolution of species, and that these environmentally-induced changes could be heritable.

Well, today we have evidence that Lamarck was right, for example with the discovery of and experiments showing that directed mutation is a thing. There was a lot that Lamarck got right which has been integrated into current evolutionary theory. We also know that there is evidence that “parental environment-induced epigenetic alterations are transmitted through both the maternal and paternal germlines and exert sex-specific effects” (Wang, Liu, and Sun, 2017). So we can then state Lamarck’s dictum: environmental change leads to behavioral change which leads to morphological change (Ward, 2018) (and with what we know about how the epigenetic regulation of transposable elements regulates punctuated equilibrium, see Zeh, Zeh, and Ishida, 2009, we have a mechanism that can lead to this). And since we know that environmental epigenetics and transgenerational epigenetic inheritance provide mechanisms for Lamarck’s proposed process (Skinner, 2015), it seems that Lamarck has been vindicated. Indeed, Lamarckian inheritance is now seen as a mechanism of evolutionary change today (Koonin, 2014).

So knowing all of this, what if Charles Darwin never existed? How would the course of evolutionary theory be changed? We know that Darwin merely put the pieces of the puzzle together (from animal breeding, to the thought that transmutation could occur, etc.), but I won’t take anything away from Darwin: even though I think he was wrong about natural selection being a mechanism of evolution, he did a lot of good work to put the pieces of the puzzle together into a theory of evolution that—at the time—could explain the fixation of traits and speciation (though I think that there are other ways to show that without relying on natural selection). The components of the theory that Darwin proposed were all there, but he was the one who coalesced them into a theory (no matter if it was wrong or not). Non-Darwinian evolution obviously was “the in thing” in the 19th century, and I don’t see how or why that would have changed. But Bowler (2013) argues that Alfred Russel Wallace would have articulated a theory of natural selection based on competition between varieties, not between individuals as Darwin did. He argues that an equivalent of Darwin’s ToNS wouldn’t have been articulated until one recognized the similarities between what would become natural selection and artificial selection (where humans attempt to consciously select for traits) (Bowler, 2008). Though I do think that the ToNS is wrong, false, and incoherent, I do recognize how one would think that it’s a valid theory in explaining the evolution of species and the fixation of traits in biological populations. (Though I do of course think that my proposed explanation linking saltation, internal physiological mechanisms and decimationism would have played a part, in a world without Charles Darwin, in explaining what we see around us.)

Now I will sketch out how I think our understanding of evolutionary theory would go had Charles Darwin not existed.

Although Lamarckism was pretty much discredited by the time Darwin articulated the ToNS (although Darwin did take to some of Lamarck’s ideas), the Lamarckian emphasis on the role of the environment in shaping the traits of organisms would have persisted and remained influential. Darwin was influenced by many different observations that were known before he articulated his theory, and the concept that species changed over time (that is, the concept that species evolved) was current before the observations and numerous lines of evidence that led Darwin to formulate the ToNS after his voyage on The Beagle. So while Darwin’s work did accelerate the acceptance of evolution, it is very plausible that other mechanisms that don’t rely on selection would have been articulated. Both Erasmus and Lamarck had a kind of teleology in their thinking, which is alive today in modern conceptions of the extended evolutionary synthesis (EES), as in the arguments forwarded by Denis Noble (Noble and Noble, 2020, 2022). Indeed, Lamarck was one of the first to propose a theory of change over time.

Punctuated equilibrium (PE) can also be integrated with these ideas. PE is where rapid speciation events occur followed by periods of stasis, and this can then be interpreted as purposeful evolutionary change based on the environment (similar to directed mutations). So each punctuated episode could align with Lamarck’s idea that organisms actively adapt to specific conditions, and it could also play a role in explaining the inheritance of acquired characters. Organisms could rapidly acquire traits due to environmental cues that the embryo’s physiology detects (since physiology is homeodynamic); there would be a response to the environmental change, and this would then contribute to the bursts of evolutionary change. Further, in periods of stasis, it could be inferred that there was no real change in the environment—not enough, anyway, to lead to a change in the traits of a species—and so organisms would have been in equilibrium with their environment, maintaining their traits until a new environmental challenge triggered a burst of evolutionary change which would kick the species out of stasis and lead to punctuated events of evolutionary change. Therefore, this model (which is a holistic approach) would allow for a theory of evolution that is responsive, directed, and linked with the striving of organisms in their environmental context.

Conclusion

So in a world without Charles Darwin, the evolutionary narrative would have been significantly shaped by Erasmus and Lamarck. This alternative world would focus on Lamarckian concepts, the idea of transmutation over time, and purposeful adaptation over time, along with directed mutations and the integration of PE with these other ideas, to give us a fuller and better understanding of how organisms change over time—that is, how organisms evolve. The punctuated episodic bursts of evolutionary change can be interpreted as purposeful evolutionary change based on Lamarckian concepts. Environmental determinism and stability shape the periods between bursts of change. And since we know that organisms in fact can adapt to complex, changing environments due to their physiology (Richardson, 2020), we would eventually have come to this understanding as our scientific knowledge advanced.

Therefore, the combination of Erasmus’ and Lamarck’s ideas would have provided a holistic, non-reductive narrative to explain the evolution of species. While I do believe that someone would have eventually articulated something similar to Darwin’s ToNS, I think that it would have been subsumed under the framework built off of Erasmus and Lamarck. So there was quite obviously enough evolutionary thought before Darwin for there to be a relevant and explanatory theory of evolution had he not been alive to formulate the ToNS, and this shows how mechanisms to explain the origin of life, speciation, and trait fixation would have been articulated even in Darwin’s absence.

The Illusion of Separation: A Philosophical Analysis of “Variance Explained”

2050 words

Introduction

“Variance explained” (VE) is a statistical concept which is used to quantify the proportion of variance in a trait that can be accounted for or attributed to one or more independent variables in a statistical model. VE is represented by R² (“R squared”), which ranges from 0 to 100 percent. An R² of 0 percent means that none of the variance in the dependent variable is explained by the independent variable, whereas an R² of 100 percent means that all of the variance is explained. But VE doesn’t imply causation; it merely quantifies the degree of association or predictability between two variables.
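
As a minimal sketch of the arithmetic (a toy example of my own; the variables and numbers are made up), here is how R² is computed for a simple linear association in Python:

```python
import numpy as np

rng = np.random.default_rng(1)

x = rng.normal(size=500)             # independent variable
y = 0.6 * x + rng.normal(size=500)   # dependent variable plus noise

# For a simple linear fit, R^2 is the squared correlation of x and y:
# the share of y's variance that a linear function of x can predict.
r = np.corrcoef(x, y)[0, 1]
print(f"variance 'explained': {r**2:.0%}")  # roughly 26% with these numbers
```

Note what the number does and doesn’t say: it summarizes predictive association in this sample, and nothing in the computation tells us why x and y covary.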

So in the world of genetics, heritability and GWAS, the VE concept has been employed as a fundamental measure to quantify the extent to which a specific trait’s variability can be attributed to genetic factors. One may think it intuitive that G and E factors can be separated and their relative influences disentangled for human traits. But beneath its apparent simplicity lies a philosophically contentious issue, most importantly due to the claim/assumption that G and E factors can be separated into percentages.

But I think the concept of VE in psychology/psychometrics and GWAS is mistaken, because (1) it implies a causal relationship that may not exist; (2) it implies reductionism; (3) it upholds the nature-nurture dichotomy; (4) it doesn’t account for interaction and epigenetics; and (5) it doesn’t account for context-dependency. In this article, I will argue that the concept of VE is confused, since it assumes too much while explaining too little. Overall, I will explain the issues using a conceptual analysis and then give a few arguments on why I think the phrase is confused.

Arguments against the phrase “variance explained”

While VE doesn’t necessarily imply causation, in the psychology/psychometrics and GWAS literature it seems to be used as somewhat of a causal phrase. The phrase also reduces the trait in question to a single percentage, which is of course not accurate—basically, it attempts to reduce the trait T to a number, a percentage.

But more importantly, the notion of VE is subject to philosophical critique in virtue of what the phrase inherently implies, particularly when it comes to the separation of genetic and environmental factors. The idea of VE most often perpetuates the nature-nurture dichotomy, assuming that G and E can be neatly separated into percentage contributions to a trait. This simplistic division between G and E thus oversimplifies the intricate interplay between genes, environment and all levels of the developmental system, and the irreducible interaction between all developmental resources that leads to the reliable ontogeny of traits (Noble, 2012).

Moreover, VE can be reductionist in nature, since it implies that a certain percentage of a trait’s variance can be attributed to genetics, disregarding the dynamic and complex interactions between genes and other resources in the developmental system. This reductionism fails to capture the holistic and emergent nature of human development and behavior. So just like the concept of heritability, the reductionism inherent in the concept of VE focuses on isolating the contributions of G and E, rather than treating them as interacting factors that are not reducible.

Furthermore, we know that epigenetics demonstrates that environmental factors can influence gene expression which then blurs the line between G and E. Therefore, G and E are not separable entities but are intertwined and influence each other in unique ways.

It also may inadvertently carry implicit value judgements about which traits or outcomes are deemed desirable or significant. In a lot of circles, a high heritability is seen as evidence for the belief that a trait is strongly influenced by genes—however wrong that may be (Moore and Shenk, 2016). Further, it could also stigmatize environmental influences if a trait is perceived as primarily genetic. This, then, could contribute to a bias that downplays the importance of environmental factors, overlooking their potential impact on individual development and behavior.

This concept, moreover, doesn’t provide clarity on questions like identity and causality. Even if a high percentage of variance is attributed to genetics, it doesn’t necessarily reveal the causal mechanisms or genetic factors responsible, which then leads to philosophical indeterminacy regarding the nature of causation. Human traits are highly complex, and the attempt to quantify them and break them apart into neat percentages of variance explained by G and E vastly oversimplifies them. This oversimplification then further contributes to philosophical indeterminacy about the nature and true origins (which would be the irreducible interactions between all developmental resources) of these traits.

The act of quantifying variance also inherently involves power dynamics, where certain variables are deemed more significant or influential than others. This, then, introduces a potential bias that may reflect existing societal norms or power structures. “Variance explained” may inadvertently perpetuate and reinforce these power dynamics by quantifying and emphasizing certain factors over others. (Like, e.g., the results of Hill et al, 2019 and Barth, Papageorge, and Thom, 2020—and see Joseph’s critique of these claims.) Basically, the claim is that differences between people in income and other socially-important traits are due to genetic differences between them. (Even though there is no molecular genetic evidence for the claim made in The Bell Curve that we are becoming more genetically stratified; Conley and Domingue, 2016.)

The concept of VE also implies a kind of predictive precision that may not align with the uncertainty of human behavior. The illusion of certainty created by high r2 values can lead to misplaced confidence in predictions. In reality, the complexity of human traits often defies prediction and overreliance on VE may create a false sense of certainty.
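
A toy simulation illustrates the illusion (my own hypothetical sketch). A high R² can arise where neither variable causes the other at all, because a third factor drives both:

```python
import numpy as np

rng = np.random.default_rng(2)

# A hidden common cause drives both a and b; neither causes the other.
confounder = rng.normal(size=2000)
a = confounder + rng.normal(scale=0.3, size=2000)
b = confounder + rng.normal(scale=0.3, size=2000)

r = np.corrcoef(a, b)[0, 1]
print(f"R^2 of a 'predicting' b: {r**2:.0%}")  # high, around 84% here

# A regression of b on a would report most of b's variance as "explained",
# yet intervening on a would change nothing about b.
```

The high number invites confident talk of prediction and explanation, but it licenses nothing about mechanism.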

We also have what I call the “veil of objectivity” argument. This argument challenges the notion that VE provides an entirely objective view. Behind the numerical representation lies a series of subjective decisions, from the initial selection of the variables to be studied to the interpretation of the results, and these subjective judgments can introduce biases and assumptions. So if “variance explained” is presumed to offer an entirely objective view of human traits, then the numerical representation is taken to be an objective measure of variance attribution. If, behind this numerical representation, subjective decisions are involved in variable selection and results interpretation, then the presumed objectivity implied by VE becomes a veil masking underlying subjectivity. If subjective decisions are integral to the process, then there exists a potential for biases and assumptions to influence the quantitative analysis. Thus, if biases and assumptions are inherent in the quantitative analysis due to the veil of objectivity, then the objectivity attributed to VE is compromised, and a more critical examination of subjective elements becomes imperative. This argument of course applies to “IQ” studies, heritability studies of socially-important human traits and the like, along with GWASs. In interpreting associations, GWASs and h2 studies also fall prey to the veil of objectivity argument, since, as seen above, many people would like the hereditarian claim to be true. And when it comes to GWAS and heritability studies, VE refers to the proportion of phenotypic variance attributed to genetic variance.
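
To illustrate how variable selection alone shifts the attribution, here is a hypothetical sketch with made-up “genetic” and “environmental” proxies (not a model of any real study):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000

g = rng.normal(size=n)                  # hypothetical "genetic" proxy
e = 0.8 * g + 0.6 * rng.normal(size=n)  # "environmental" proxy, correlated with g
y = 0.7 * g + 0.3 * e + rng.normal(size=n)

def r2_simple(pred, outcome):
    # R^2 of a one-predictor linear fit: squared correlation.
    return np.corrcoef(pred, outcome)[0, 1] ** 2

# Researcher 1 enters only g; researcher 2 enters only e.
print(f"'explained' by g alone: {r2_simple(g, y):.0%}")  # ~46% in expectation
print(f"'explained' by e alone: {r2_simple(e, y):.0%}")  # ~39% in expectation

# Full model with both predictors, via least squares:
X = np.column_stack([g, e, np.ones(n)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
full_r2 = 1 - resid.var() / y.var()
print(f"'explained' by g and e together: {full_r2:.0%}")  # ~48% in expectation

# The single-variable attributions sum far past the full model's R^2 because
# the predictors share variance; how that shared portion gets credited is a
# modeling choice, not a fact about separable causes.
```

Which partition gets reported depends on which variables the researcher chose to enter, which is exactly the kind of subjective decision the veil of objectivity conceals.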

So the VE concept assumes a clear separation between genetic and environmental factors, which is often reductionist and unwarranted. It doesn’t account for the dynamic nature of these influences, nor—of course—for the influence of unmeasured factors. The concept’s oversimplification can lead to misunderstandings and has ethical implications, especially when dealing with complex human traits and behaviors. Thus, the VE concept is conceptually flawed and should be used cautiously, if at all, in the fields in which it is applied. It does not adequately represent the complex reality of genetic and environmental influences on human traits. So the VE concept is conceptually limited.

If the concept of VE accurately separates genetic and environmental influences, then it should provide a comprehensive and nuanced representation of factors that contribute to a trait. But the concept does not adequately consider the dynamic interactions, correlations, contextual dependencies, and unmeasured variables. So if the concept does not and cannot address these complexities, then it cannot accurately separate genetic and environmental influences. So if a concept can’t accurately separate genetic and environmental influences, then it lacks coherence in the context of genetic and behavioral studies. Thus the concept of VE lacks coherence in the context of genetic and behavioral studies, as it does not and cannot adequately separate genetic and environmental influences.

Conclusion

In exploring the concept of VE and its application in genetic studies, heritability research and GWAS, a series of nuanced critiques have been uncovered that challenge its conceptual coherence. The phrase quantifies the proportion of variance in a trait that is attributed to certain variables, typically genetic and environmental ones. The reductionist nature of VE is apparent, since it attempts to distill the interplay between G and E into percentages (like h2 studies). But this oversimplification neglects the complexity and dynamic nature of these influences, perpetuating the nature-nurture dichotomy and failing to capture the intricate interactions between all developmental resources in the system. The concept’s inclination to overlook G-E interactions, epigenetic influences, and context-dependent variability further speaks to its limitations. Lastly, the normative assumptions intertwined with the concept introduce ethical considerations, as implicit judgments may stigmatize certain traits or downplay the role and importance of environmental factors. Philosophical indeterminacy, therefore, arises from the inability of the concept of VE to offer clarity on identity, causality, and the complex nature of human traits.

So by considering the reductionist nature, the perpetuation of the false dichotomy between nature and nurture, the oversight of G-E interactions, and the introduction of normative assumptions, I have demonstrated through multiple cases that the phrase “variance explained” falls short in providing a nuanced and coherent understanding of the complexities involved in the study of human traits.

In all reality, this concept is refuted by the fact that the interaction between all developmental resources shows that the separation of these influences/factors is an impossible project, along with the fact that we know there is no privileged level of causation. Claims of “variance explained”, heritability, and GWAS all push the false notion that the relative contributions of genes and environment can be quantified as causes of the trait in question. But we now know this is false, and conceptually confused, since the organism and environment are interdependent. The inseparability of nature and nurture, genes and environment, means that GWAS and heritability studies will necessarily fall short of their intended goals, especially given the missing heritability problem. The phrase “variance explained by” implies a direct causal link between independent and dependent variables. A priori reasoning suggests that human traits are probabilistic and context-dependent, implicating a vast web of bidirectional influences with feedback loops and dynamic interactions. So if the a priori argument advocates for a contextual, nuanced and probabilistic view of human traits, then it challenges the conceptual foundations of VE.

At the molecular level, the nurture/nature debate currently revolves around reactive genomes and the environments, internal and external to the body, to which they ceaselessly respond. Body boundaries are permeable, and our genome and microbiome are constantly made and remade over our lifetimes. Certain of these changes can be transmitted from one generation to the next and may, at times, persist into succeeding generations. But these findings will not terminate the nurture/nature debate – ongoing research keeps arguments fueled and forces shifts in orientations to shift. Without doubt, molecular pathways will come to light that better account for the circumstances under which specific genes are expressed or inhibited, and data based on correlations will be replaced gradually by causal findings. Slowly, “links” between nurture and nature will collapse, leaving an indivisible entity. But such research, almost exclusively, will miniaturize the environment for the sake of accuracy – an unavoidable process if findings are to be scientifically replicable and reliable. Even so, increasing recognition of the frequency of stochastic, unpredictable events ensures that we can never achieve certainty. (Lock and Palsson, 2016)

Mechanisms that Transcend Natural Selection in the Evolutionary Process: Alternatives to Natural Selection

2250 words

Fodor’s argument was a general complaint against adaptationism. Selection can’t be the mechanism of evolution since it can’t distinguish between causes and correlates of causes, so it thus can’t account for the creation (arrival) of new species. Here, I will provide quotes showing that the claim that natural selection is a mechanism is ubiquitous in the literature—whether as claims that Darwin discovered the mechanism or as claims that it is a mechanism—and that’s what Fodor was responding to. I will then provide an argument combining saltation, internal physiological mechanisms, and decimationism with the EES into a coherent explanatory framework to show that there are alternatives to Darwinian evolution, and that these thus explain speciation and the proliferation of traits while natural selection can’t, since it isn’t a mechanism.

Grant and Grant, 2007: “the driving mechanism of evolutionary change was natural selection”

American Museum of Natural History: “Natural selection is a simple mechanism that causes populations of living things to change over time.”

Andrews et al, 2010: “Natural selection is certainly an important mechanism of allele-frequency change, and it is the only mechanism that generates adaptation of organisms to their environments.”

Pianka: “Natural selection is the only directed evolutionary mechanism resulting in conformity between an organism and its environment”

Cotner and Wassenberg, 2020: “This mechanism is natural selection: individuals who inherit adaptations simply out-compete (by out-surviving and out-reproducing) individuals that do not possess the adaptations.”

So natural selection is seen as the mechanism by which traits become fixed in organisms and how speciation happens. Indeed, Darwin (1859: 54) wrote in On the Origin of Species:

“From these several considerations I think it inevitably follows, that as new species in the course of time are formed through natural selection, others will become rarer and rarer, and finally extinct.”

[And some more of the same from authors in the modern day]

“The role of natural selection in speciation, first described by Darwin, has finally been widely accepted” (Via, 2009)

“Selection must necessarily be involved in speciation” (Barton, 2010)

“Darwin’s theory shows how some natural phenomena may be explained (including at least adaptations and speciation)” (SEP, Natural Selection)

“Natural selection has always been considered a key component of adaptive divergence and speciation (2, 15–17)” (Schneider, 2000)

“Natural selection plays a prominent role in most theories of speciation” (Schluter and Nagel, 1995)

So quite obviously, natural selection is seen as a mechanism, and this mechanism supposedly explains the speciation of organisms. But since Fodor (2008) and Fodor and Piattelli-Palmarini (2010) showed that natural selection isn’t a mechanism and can’t explain speciation, there must be other ways that evolution happened. There are alternatives to natural selection, and that’s where I will now turn. I will discuss saltation, internal physiological mechanisms, and decimationism, and then cohere them into a framework that shows how species can arise sans selection.

Explaining speciation

Saltation is the concept of abrupt and substantial changes which lead to the creation of new species, and it challenges phyletic gradualism through natural selection. Sudden genetic alterations, along with other goings-on in the environment that lead to things such as directed mutation, can eventually result in the emergence of distinct species. Saltation therefore challenges Darwinism by showing that certain traits can arise quickly, leading to the emergence of new species within a short time frame. We also have internal physiological mechanisms, which play a role in speciation by influencing the development and divergence of traits within biological populations. These don’t rely on external selective pressures—although goings-on in the environment can of course affect physiology—but instead emphasize internal factors like developmental constraints, epigenetic modifications, and genetic regulatory networks. These can then lead to the expression of novel traits and then on to speciation without the need for external selective forces. And finally decimationism—which emphasizes periodic mass extinctions as drivers of evolutionary change—offers another alternative.

Catastrophic events open holes in ecological niches, which then allows for the rapid adaptation and diversification of surviving species. So the decimation and recurrent re-colonizing of ecological niches can then lead to the establishment of distinct lineages (species), which highlights the role of external and non-selective factors in the process of evolution.

So the interaction between saltation, internal physiological mechanisms, and decimationism thus provides a novel and comprehensive framework for understanding speciation. Sudden genetic changes and other changes to the system can then initiate the development of unique physiological traits (due to the interaction of the developmental resources, any change to one resource causing a cascading change through the system), while internal mechanisms then ensure the stabilization and heritability of the traits within the population. And when this is coupled with the environmental upheaval caused by decimation leading to mass extinctions, these processes then contribute to the formation of new species, which offers a framework and novel perspective on the ARRIVAL of the fittest (Darwin’s theory said nothing about arrival, only the struggle for existence), extending beyond the concept of natural selection.

So if abrupt genetic and other internal changes (saltation) can arise in passive response to external stimuli and/or environmental pressures, leading to the emergence of distinct traits within a population, and if internal physiological mechanisms influence the expression and development of these traits, then it follows that saltation, coupled with internal physiological mechanisms, can explain and contribute to the rise of new species. If periodic mass extinctions (decimationism) create ecological vacuums and opportunities for adaptive radiation, and if internal physiological mechanisms play a role in the heritability and stability of traits, then it follows that decimationism in conjunction with internal physiological mechanisms can contribute to the speciation of surviving lineages. Also note that all of this is consistent with Gould’s punctuated equilibrium (PE) model.

Punctuated equilibrium was proposed by Eldredge and Gould as an alternative to phyletic gradualism (Eldredge and Gould, 1972). It proposes that species spend most of their history in stasis, with evolutionary change concentrated in rapid speciation events rather than occurring gradually. A developmental gene hypothesis also exists for PE (Casanova and Conkel, 2020).

One prediction of PE is rapid speciation events: there will be relatively short intervals of rapid speciation which then result in the emergence of new species. This follows from the theory in that it posits that speciation occurs rapidly, concentrated in short bursts, which leads to the prediction that distinct species should emerge more quickly during these punctuated periods. So if species undergo long periods of stasis with occasional rapid change, then it logically follows that new species should arise quickly during the punctuations. And seeing that the PE model was developed to explain the lack of transitional fossils, it proposes that species undergo long periods of morphological stasis, with evolutionary changes occurring in short bursts during speciation events, which provides a framework that accounts for the intermittent presence of transitional fossils in the fossil record.

Another prediction is that during periods of stasis (equilibrium), species will exhibit stability in terms of morphology and adaptation. This follows from the theory in that PE posits that stability characterizes the majority of a species’ existence and that change should occur in quick bursts. Thus, between these bursts, there should be morphological stability. So the prediction is that observable changes are concentrated in specific intervals.

The epigenome along with transposable elements has been argued to be at the heart of PE: “physiological stress, associated with major climatic change or invasion of new habitats, disrupts epigenetic silencing, resulting in TE reactivation, increased TE expression and/or germ-line infection by exogenous retroviruses” (Zeh, Zeh, and Ishida, 2009: 715). Further, this hypothesis—that the epigenetic regulation of transposable elements regulates PE—makes testable predictions (Zeh, Zeh, and Ishida, 2009: 721). This is also a mechanism that further explains how stress-induced directed mutations occur. Thus, there is an epigenetic basis for the rapid transformation of species which involves the silencing of transposable elements. So calls for an epigenetic synthesis have been made (Crews and Gore, 2012). We furthermore know that Lamarckian inheritance is a major mechanism of evolution (Koonin, 2014). We also know that epigenetic processes like DNA methylation contribute to the evolutionary course (Ashe, Colot, and Oldroyd, 2021). Such epigenetic mechanisms have been given solid treatment in West-Eberhard’s (2003) Developmental Plasticity and Evolution. (See also West-Eberhard, 2005 on how developmental plasticity leads to the origin of species differences, and Wund, 2015 on the impact of phenotypic plasticity on the evolutionary process.)

Integrating the mechanisms into the EES

So in integrating saltation, internal physiological mechanisms, decimationism, epigenetic processes, phenotypic plasticity, and directed mutations into the EES (extended evolutionary synthesis), we can then get a more comprehensive framework. Phenotypic plasticity allows organisms to exhibit various phenotypes in response to various environmental cues, so this introduces a broader aspect of adaptability that goes beyond genetic change while emphasizing the capacity of populations to change based on what is going on in the immediate environment during development.

Genetic drift and neutral evolution also play a role. So beyond the selective pressures emphasized by the modern synthesis, the EES recognizes that genetic changes can occur through stochastic mechanisms which then influence the genetic constitution of a population. Evo-devo then contributes to the synthesis by highlighting the role of developmental processes in evolutionary outcomes. Thus, by understanding how changes in gene regulation during development contribute to morphological diversity, evo-devo provides insight into evolutionary mechanisms which transcend so-called natural selection.
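To illustrate the drift point with a toy example (a bare-bones Wright-Fisher sketch of my own, offered only as an illustration, not anything from Laland et al): allele frequencies can change, and an allele can even go to fixation, with no selection whatsoever:

```python
# Neutral Wright-Fisher drift: frequency change by sampling alone, with no
# fitness differences anywhere in the model.
import numpy as np

rng = np.random.default_rng(42)
N = 200          # diploid population size, so 2N gene copies (arbitrary choice)
p = 0.5          # starting allele frequency
generations = 0
while 0.0 < p < 1.0 and generations < 10_000:
    # Each generation's gene pool is a binomial sample from the previous one.
    p = rng.binomial(2 * N, p) / (2 * N)
    generations += 1
print(f"after {generations} generations the allele frequency is {p}")
# Typically ends at 0.0 (loss) or 1.0 (fixation) on the order of N generations.
```

Nothing in the loop “selects for” anything; the change is pure sampling error, which is the sense in which drift is an evolutionary mechanism in its own right.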

Moreover, the integration of epigenetic inheritance and cultural evolution also extends the scope of the EES. Epigenetic mechanisms can influence gene expression without a change to the DNA sequence, and can contribute to heritability and adaptability. Cultural evolution, in acknowledging the power of transmitted knowledge and practices on adaptive success, also broadens our understanding of evolution beyond biological factors. Thus, by incorporating all of the discussed mechanisms, the EES integrates numerous different mechanisms while recognizing that the evolutionary process is influenced by a mixture of biological, environmental, cultural, and developmental factors. There is also the fact that the EES has better predictive and explanatory power than the modern synthesis—it also makes novel predictions (Laland et al, 2015).

Conclusion

This discussion has delved into diverse facets of evolutionary theory, shown that natural selection is treated as a mechanism in the modern day, shown that Darwin and modern-day authors see natural selection as the mechanism of speciation, and considered a few mechanisms of evolution beyond natural selection. Fodor’s argument was introduced to question the applicability of “selection-for” traits, and it challenged the notion of natural selection as a mechanism of evolutionary change. Fodor’s argument therefore paved the way for the mechanisms I discussed and opened the door for the reevaluation of saltation, internal physiological mechanisms, decimationism, and the EES more broadly in explaining the fact of evolution. So this discussion has shown that we have to think about evolution not as selection-centric, but in a more holistic manner.

There are clearly epigenetic mechanisms which influence speciation on a PE model, and these epigenetic mechanisms then also contribute to the broader understanding of evolution beyond PE. In the PE model, where speciation events are characterized by rapid and distinct changes, epigenetic mechanisms play a crucial role in influencing the trajectory of evolutionary transitions. These epigenetic mechanisms, then, contribute to the heritability of traits and the adaptability of populations. And their reach extends beyond their impact on speciation within the PE model: by influencing gene expression in response to environmental cues, epigenetic changes provide a dynamic layer to the evolutionary process which allows populations to adapt more rapidly to changing conditions. Therefore, epigenetic mechanisms become integral components in explaining evolutionary dynamics, which then aligns with the principles of the EES.

The integration of these concepts into the EES then further broadens our understanding of evolution. So by incorporating genetic drift, phenotypic plasticity, evo-devo, epigenetic inheritance, directed mutation, and cultural evolution, the EES provides a comprehensive framework which recognizes the complexity of the evolutionary process beyond mere reductive genetic change. Phenotypic plasticity allows organisms to respond to cues during development and change the course of their development based on what is occurring in the environment, without relying solely on genetic changes. Genetic drift then introduces stochastic processes and neutral evolution. Evo-devo then contributes to the synthesis by highlighting the role of developmental processes in evolutionary outcomes. Epigenetic inheritance also brings a non-genetic layer to heritability, acknowledging the impact of environmentally responsive gene regulation. Cultural evolution then recognizes the transmission of knowledge and practices within populations as a factor which influences adaptive success. So putting this all together, these integrations suggest that evolution is a multifaceted interplay of irreducible levels (Noble, 2012), which then challenges natural selection as a primary or sole mechanism of evolution and as a mechanism at all, since we can explain what natural selection purports to explain without reliance on it.

So if evolutionary processes encompass mechanisms beyond natural selection like saltation, internal physiological mechanisms, decimationism, punctuated equilibrium, and phenotypic plasticity, and if we are to reject natural selection as an explanation for trait fixation and speciation based on Fodor’s argument, and if these mechanisms are an integral part of the EES, then the EES offers a more comprehensive framework for understanding evolution. Evolutionary processes do encompass mechanisms beyond natural selection, as evidenced by critiques of selection-centric views and by the alternatives to natural selection like saltation, internal physiological mechanisms, and decimationism. Thus, by incorporating the aforementioned mechanisms, we will have a better understanding of evolution than if we merely relied on the non-mechanism of natural selection to explain trait fixation and speciation.

Rushton, Race, and Twinning

2500 words

As is the case with the other lines of evidence that intend to provide sociobiological evidence in support of the genetic basis of human behavior and development (relating to homology, heritability, and adaptation), Rushton’s work reduces to no evidence at all. (Lerner, 2018)

Introduction

From 1985 until his death in 2012, J. P. Rushton attempted to marshal all of the data and support he could for a theory called r-K selection theory or Differential K theory (Rushton, 1985). The theory posited that while humans were the most K species of all, some human races were more K than others, so it then followed that some human races were more r than others. Rushton then collated mass amounts of data and wrote what would become his magnum opus, Race, Evolution and Behavior (Rushton, 1997). In the r/K theory first proposed by MacArthur and Wilson, unstable, unpredictable environments favored an r strategy whereas a stable, predictable environment favored a K strategy. (See here for my response to Rushton’s r/K.)

So knowing this, one of the suite of traits Rushton put on his r/K matrix was twinning rates. Rushton (1997: 6) stated:

the rate of dizygotic twinning, a direct index of egg production, is less than 4 per 1,000 births among Mongoloids, 8 per 1,000 among Caucasoids, and 16 or greater per 1,000 among Negroids.

I won’t contest the claim that the rate of DZ twinning differs by race—it’s pretty well-established with recent data that blacks are more likely to have twins than whites (that is, blacks have a slightly higher chance of having twins than whites, who have a slightly higher chance of having twins than Asians) (Santana, Surita, and Cecatti, 2018; Wang, Dongarwar, and Salihu, 2020; Monden, Pison, and Smits, 2021)—I’m merely going to contest the causes of DZ twinning. Because it’s clear that Rushton was presuming this to be a deeply evolutionary trait: a high rate of twins—in an evolutionary context—would mean a higher chance for the children of a particular family to survive and therefore spread their genes, and this would, in his eyes, lend credence to his claim that Africans were more r compared to whites, who were more r compared to Asians.

But to the best of my knowledge, Rushton didn’t explain why, biologically, blacks would have more twins than whites—he merely said “This race has more twins than that race, so this lends credence to my theory.” That is, he didn’t posit a biological mechanism that would instantiate a higher rate of twinning in blacks compared to whites and Asians and then explain how environmental effects wouldn’t have any say in the rate of twinning between the races. However, I am aware of environmental factors that would lead to higher rates of twinning, and I am also aware of the mechanisms of action that allow twinning to occur (eg phytoestrogens, FSH, LH, and IGF). And while these are of course biological factors, I will show that there are considerable effects of environmental interactions like diet on the levels of these hormones which are associated with twinning. I will also explain how these hormones are related to twinning.

While the claim that there is a difference in the rate of DZ twinning by race seems to be true, I don’t think it’s a biological trait, never mind an evolutionary one as Rushton proposed (because even if Rushton’s r/K were valid, “Negroids” would be K and “Mongoloids” would be r; Anderson, 1991). Nonetheless, Rushton’s r/K theory is long-refuted, though he did call attention to some interesting observations (which other researchers never ignored; they just didn’t attempt some grand theory of racial differences).

Follicle stimulating hormone, luteinizing hormone, and insulin-like growth factor

We know that older women are more likely to have twins while younger women are less likely (Oleszczuk et al, 2001), so maternal age is a factor. As women age, a hormone called follicle stimulating hormone (FSH) increases due to a decline in estrogen, and this is one of the earliest signs of female reproductive aging (McTavish et al, 2007), FSH being one of the main biomarkers of ovarian reserve tested on day 3 of the menstrual cycle (Roudebush, Kivens, and Mattke, 2008). It is well established that twinning rates differ across geographic locations, that the rate of MZ twins is constant at around 3.5 to 4 per 1,000 births (so what is driving the differences is the birth of DZ twins), and that DZ twinning increases due to an increase in FSH (Santana, Surita, and Cecatti, 2018). We also know that pre-menopausal women who have given birth to DZ twins have higher levels of FSH on the third day of their menstrual cycle (Lambalk et al, 1998).

So if FSH levels stay too high for too long, then multiple eggs can be released, which could lead to an increase in DZ twinning. FSH stimulates the maturation and growth of ovarian follicles, each of which contains an immature egg called an oocyte. FSH acts on the ovaries to promote the development of multiple ovarian follicles during the follicular phase of the menstrual cycle, a process which is called recruitment. In a normal menstrual cycle, only one follicle is stimulated to release one egg; but when FSH levels are elevated, more than one follicle can develop and mature, which can result in polyovulation—the release of more than one egg during ovulation. Thus, if more than one egg is released during a menstrual cycle and both are fertilized, this can then lead to the development of DZ twins.

Along with FSH, we also have luteinizing hormone (LH). FSH and LH act synergistically (Raju et al, 2013). LH, like FSH, isn’t directly responsible for the increase in twinning, but the process that it allows (playing a role in ovulation) is a crucial factor in twinning. So LH is responsible for triggering ovulation, which is the release of a mature egg from the ovarian follicle. (Ovulation typically occurs 24 to 36 hours after LH increases.) In a typical menstrual cycle, only one follicle is stimulated to release one egg, which is triggered by the surge in LH. But if there are multiple mature follicles in the ovaries (which could be influenced by FSH), then a surge in LH can lead to the release of more than one egg. So the interaction of LH with other hormones like FSH, along with the presence of multiple mature follicles, can be associated with a higher chance of having DZ twins. FSH therapies are also used in assisted reproduction (eg Munoz et al, 1995 in mice; Ferraretti et al, 2004; Pang, 2005; Pouwer, Farquhar, and Kremer, 2015; Fatemi et al, 2021).

So when it comes to FSH, we know that malnutrition may play a role in twinning, and also that wild yams—a staple food in Nigeria—increase phytoestrogens, which increase FSH in the bodies of women (Bortolus et al, 1999). Wild yams have been used to increase estrogen in women’s bodies (due to the phytoestrogens they contain), and they enhance estradiol through the mechanism of binding to estrogen receptor sites (Hywood, 2008). And since Nigeria has the highest rate of twinning in the world (Santana, Surita, and Cecatti, 2018), and their diet is wild yam-heavy (Bortolus et al, 1999), it seems that this fact would go a long way toward explaining why they have higher rates of twinning. Mount Sinai says that “Although it does not seem to act like a hormone in the body, there is a slight risk that wild yam could produce similar effects to estrogen.” It acts as a weak phytoestrogen (Park et al, 2009). (But see Beckham, 2002.) But when phytoestrogens are consumed, they can bind to estrogen receptors in the body and trigger estrogenic effects, which could then lead to the potential stimulation and release of multiple eggs, which would increase the chance of DZ twinning.

One study showed that black women, in comparison to white women, had “lower follicular phase LH:FSH ratios” (Reuttman et al, 2002; cf Marsh et al, 2011), while Randolph et al (2004) showed that black women had higher FSH than Asian and white women. So the lower LH:FSH ratio could affect the timing and regulation of ovulation, and a lower LH:FSH level could reduce the chances of premature ovulation and could affect the release of multiple eggs.

Lastly, when it comes to insulin-like growth factor (IGF), this could be influenced by a high-protein or high-carb diet. Diets high in high-glycemic carbs can lead to increased insulin production, which would then lead to increased IGF levels. Just as with FSH and LH, increased levels of IGF could, in concert with the other two hormones, influence the maturation and release of multiple eggs during a menstrual cycle, which would then increase the chance of twinning (Yoshimura, 1998). IGF can also stimulate the growth and development of multiple follicles (Stubbs et al, 2013) and have them mature early if IGF levels are high enough (Mazerbourg and Monget, 2018). This could then also lead to polyovulation, triggering the release of more than one egg during ovulation. IGF can also influence the sensitivity of the ovaries to hormonal signals, like those from the pituitary gland, which leads to enhanced ovarian sensitivity to hormones like FSH and LH, which would then, of course, act synergistically, increasing the rate of dizygotic twinning. (See Mazerbourg and Monget, 2018 for a review of this.)

So we know that black women have higher levels of IGF-1 and free IGF-1—but lower IGF-2 and IGFBP-3—than white women (Berrigan et al, 2010; Fowke et al, 2011). The higher IGF-1 levels in black women could lead to increased ovarian sensitivity to FSH and LH, and this enhanced ovarian sensitivity could lead to the promotion and release of multiple eggs during ovulation. The lower IGF-2 levels could alter the balance of IGF-1 and IGF-2, which would then further influence ovarian sensitivity to other hormones. IGFBP-3 is a binding protein which regulates the bioavailability of IGF-1, so lower levels of IGFBP-3 could lead to higher concentrations of free IGF-1, which would then further stimulate the ovarian follicles and could lead to polyovulation, leading to increased twinning. Though there is some evidence that this difference has a “genetic basis” (Higgins et al, 2005), we know that dietary factors do have an effect on IGF levels (Heald et al, 2003).

Rushton’s misinterpretations

Rushton got a ton wrong, but he was right about some things too (which is to be expected if you’re looking to create some grand theory of racial differences). I’m not too worried about that. But what I AM worried about is Rushton’s outright refusal to address his most serious critics in the literature, most importantly Anderson (1991) and Graves (2002a, b). If you check his book (Rushton, 1997: 246-248), his responses are hardly sufficient to address the devastating critiques of his theory. (Note how Rushton never responded to Graves, 2002—ever.) Gorey and Cryns (1995) showed how Rushton cherry-picked what he liked for his theory while stating that “any behavioral differences which do exist between blacks, whites and Asian Americans for example, can be explained in toto by environmental differences which exist between them”, and Ember, Ember, and Peregrine (2003) concluded similarly. (Rushton did respond to Gorey and Cryns, but not to Ember, Ember, and Peregrine.) Cernovsky and Littman (2019) also showed how Rushton cherry-picked his INTERPOL crime data.

Now that I have set the stage for Rushton’s “great” scholarship, let’s talk about the response he got to his twinning theory.

Allen et al (1992) have a masterful critique of Rushton’s twinning theory. They review twinning stats in other countries across different time periods and conclude that “With such a wide overlap between races, and such great variation within races, twinning rate is probably no better than intelligence as an index of genetic status for racial groups.” They also showed that the twinning mechanism didn’t seem to be a relevant factor in survival until the modern day, with the advancement of our medical technologies. Indeed, twinning increases the risk of death for the mother (Steer, 2007; Santana et al, 2018). Rushton also misinterpreted numerous traits associated with twinning:

individual twin proneness and its correlates do not provide Rushton’s desired picture of a many-faceted r-strategy (even if such individual variation could have evolutionary meaning). With the exception of shorter menstrual cycles found in one study, the traits Rushton cites as r-selected in association with twinning are either statistical artifacts of no reproductive value or figments of misinterpretation.

Conclusion

I have discussed a few biological variables that lead to higher rates of twinning, and I have cited some research which shows that black women have higher levels of some of the hormones that are related to higher rates of twinning. But I have also shown that it’s not so simple to jump to a genetic conclusion, since these hormones are of course mediated by environmental factors like diet.

Rushton quite clearly takes these twinning rate differences to be “genetic” in nature, but we are in the 2020s now, not the 1980s, and we now know that genes are necessary, but passive, players in the formation of phenotypes (Noble, 2011, 2012, 2016; Richardson, 2017, 2021; Baverstock, 2021; McKenna, Gawne, and Nijhout, 2022). These new ways of looking at genes—as passive, not active, causes, and as not special compared to any other developmental resources—show how the reductionist thinking of Rushton and his contemporaries was straight-out false. Nonetheless, while Rushton did get it right that there is a racial difference in twinning, the difference, I think, isn’t a genetic difference, and I certainly don’t think it lends credence to his Differential K theory, since Anderson showed that if we were to accept Rushton’s premises, then Africans would be K and Asians would be r. So while there are also differences in menarche between blacks and whites, this too seems to be environmentally driven.

Rushton’s twinning thesis was his “best bet” at attempting to show that his r/K theory was “right” about racial differences. But the numerous devastating critiques of not only Rushton’s thesis on twinning but his Differential K theory itself show that Rushton was merely a motivated reasoner (David Duke also consulted with Rushton when Duke wrote his book My Awakening, where Duke describes how psychologists led to his “racial awakening”), so “The claim that Rushton was acting only as a scientist is not credible given this context” (Winston, 2020). Even the usefulness of psychometric life history theory, which derives from Rushton’s Differential K, has recently been questioned (Sear, 2020).

But it is now generally accepted that Rushton’s r/K, along with the current psychometric life history theory that rose from its ashes, just isn’t a good way to conceptualize how humans live in the numerous biomes we inhabit.

Racial Differences in Motor Development: A Bio-Cultural View of Motor Development

3050 words

Introduction

Psychologist J. P. Rushton was perhaps most famous for attempting to formulate a grand theory of racial differences. He tried to argue that, on a matrix of different traits, the “hierarchy” was basically Mongoloids > Caucasoids > Negroids. But Rushton’s theory was met with much force, and many authors in the different disciplines from which he derived his data attacked his r/K selection theory, also known as Differential K theory (where all humans are K, but some humans are more K than others, so some humans are more r than others). Nonetheless, although his theory was falsified decades ago, did he get some things right about race? Well, a stopped clock is right twice a day, so it wouldn’t be that outlandish to believe that Rushton got some things right about racial differences, especially when it comes to physical differences. While we can be certain that there are physical differences in groups we term “racial groups” and designate “white”, “black”, “Asian”, “Native American”, and “Pacific Islander” (the five races in American racetalk), this doesn’t lend credence to Rushton’s r/K theory.

In this article, I will discuss Rushton’s claims on motor development between blacks and whites. I will argue that he basically got this right, but that it is of no consequence to the overall truth of his grand theory of racial differences. We know that there are physical differences between racial groups. But that there are physical differences between racial groups doesn’t entail that Rushton’s grand theory is true. The only thing that follows, I think, is the possibility that physical differences between races exist; it is a leap to attribute these differences to Rushton’s r/K theory, since it is a theory falsified on logical, empirical, and methodological grounds. So I will argue that while Rushton got this right, this doesn’t mean that his r/K theory is true for human races.

Was Rushton right? Evaluating newer studies on black-white motor development

Imagine three newborns: one white, one black, and one Asian, and observe the first few weeks of their lives. Upon observing the beginnings of their lives, you begin to notice differences in motor development between them. The black infant is more motorically advanced than the white infant, who is more motorically advanced than the Asian infant. The black infant begins to master movement, coordination, and dexterity, showing a remarkable level of motoric precocity, while the white infant shows less motoric dexterity than the black infant, and the Asian infant less still than the white infant.

These disparities in motor development are evident in the early stages of life, so is the explanation genetic? Cultural? Bio-cultural? I will argue that what explains this is a bio-cultural view, which will of course eschew reductionism; as infants grow and navigate their cultural milieu and family lives, this will have a significant effect on their experiences and, along with it, their motoric development.

Although Rushton got a lot wrong, it seems that he got this issue right—there do seem to be differences in the precocity of motor development between the races, and the references he cites below in the 2000 edition of Race, Evolution, and Behavior—although most are old by today’s standards—hold up to scrutiny today, where blacks walk earlier than whites, who walk earlier than Asians.

Rushton (2000: 148-149) writes:

Revised forms of Bayley’s Scales of Mental and Motor Development administered in 12 metropolitan areas of the United States to 1,409 representative infants aged 1-15 months showed black babies scored consistently above whites on the Motor Scale (Bayley, 1965). This difference was not limited to any one class of behavior, but included: coordination (arm and hand); muscular strength and tonus (holds head steady, balances head when carried, sits alone steadily, and stands alone); and locomotion (turns from side to back, raises self to sitting, makes stepping movements, walks with help, and walks alone).

Similar results have been found for children up to about age 3 elsewhere in the United States, in Jamaica, and in sub-Saharan Africa (Curti, Marshall, Steggerda, & Henderson, 1935; Knobloch & Pasamanik, 1953; Williams & Scott, 1953; Walters, 1967). In a review critical of the literature Warren (1972) nonetheless reported evidence for African motor precocity in 10 out of 12 studies. For example, Geber (1958:186) had examined 308 children in Uganda and reported an “all-round advance of development over European standards which was greater the younger the child.” Freedman (1974, 1979) found similar results in studies of newborns in Nigeria using the Cambridge Neonatal Scales (Brazelton & Freedman, 1971).

Mongoloid children are motorically delayed relative to Caucasoids. In a series of studies carried out on second- through fifth-generation Chinese-Americans in San Francisco, on third- and fourth-generation Japanese-Americans in Hawaii, and on Navajo Amerindians in New Mexico and Arizona, consistent differences were found between these groups and second- to fourth-generation European-Americans using the Cambridge Neonatal Scales (Freedman, 1974, 1979; Freedman & Freedman, 1969). One measure involved pressing the baby’s nose with a cloth, forcing it to breathe with its mouth. Whereas the average Chinese baby fails to exhibit a coordinated “defense reaction,” most Caucasian babies turn away or swipe at the cloth with the hands, a response reported in Western pediatric textbooks as the normal one.

On other measures including “automatic walk,” “head turning,” and “walking alone,” Mongoloid children are more delayed than Caucasoid children. Mongoloid samples, including the Navajo Amerindians, typically do not walk until 13 months, compared to the Caucasian 12 months and Negro 11 months (Freedman, 1979). In a standardization of the Denver Developmental Screening Test in Japan, Ueda (1978) found slower rates of motoric maturation in Japanese as compared with Caucasoid norms derived from the United States, with tests made from birth to 2 months in coordination and head lifting, from 3 to 5 months in muscular strength and rolling over, at 6 to 13 months in locomotion, and at 15 to 20 months in removing garments.

Regarding newer studies on this matter, there are differences between European and Asian children in the direction that Rushton claimed. Infants from Hong Kong displayed a different sequence of rolling compared to Canadian children. There does seem to be a disparity in motoric development between Asian and white children (Mayson, Harris, and Bachman, 2007). These authors do cite some of the same studies, like the DDST (which is currently outdated), which showed how Asian children were motorically delayed compared to white children. And although they urge caution about the findings of their literature review, it’s quite clear that this pattern exists and that it is a bio-cultural one. So they conclude their literature review writing that “the literature reviewed suggests differences in rate of motor development among children of various ethnic origins, including those of Asian and European descent” and that “Limited support suggests also that certain developmental milestones, such as rolling, may differ between infants of Asian and European origin.” Further, cultural practices in northern China—for example, laying babies on their backs on sandbags—stall the onset of sitting, crawling, and walking by a few months (Karasik et al, 2011).

This is related to the muscles that are used to roll from a supine to prone position and vice versa. Since some Asian children spend a longer time in apparatuses that aren’t conducive to growing a strong muscular base to be able to roll from the supine to prone position, to crawl and eventually walk, this is the “cultural” in the “bio-cultural” approach I will argue for.

One study on Norwegian children found that half of the children were walking by 13 months (the median), while 25 percent were walking by 12 months and 75 percent were walking by 14 months (Storvold, Aarethun, and Bratberg, 2013). One reason for the delayed onset of walking could be supine sleeping, which was encouraged by the Back to Sleep campaign to mitigate deaths from SIDS. Although it obviously saved tens of thousands of infant lives, it came at the cost of slightly stunted motoric development. It also seems that there is poor predictive value for infant milestones such as walking when it comes to health (Jenni et al, 2012).

Black Caribbean, black African, and Indian infants were less likely to show delays in gross motor milestones compared to white infants. But Pakistani and Bangladeshi infants were more likely to be delayed in motoric development and communicative gestures, which was partly attributed to socio-cultural factors (Kelly et al, 2006). Kelly et al (2006: 828) also warn against genetic conclusions based on the large differences they found between white and African and Caribbean infants:

The differences we observed between Black African and Black Caribbean compared with White infants are large and remain unaffected after adjusting for important covariates. This makes it tempting to conclude that the remaining effect must be a consequence of genetic differences. However, such a conclusion would be prematurely drawn. First, we have not included the measurement of genetic factors in our analysis, and, therefore, the presence of such effects cannot be demonstrated. Second, speculating on such effects should only be done alongside recognition that the model we have been able to test contains imperfect measurement.

It has also been observed that black and white children achieved greater mastery of motoric ability (locomotor skills) compared to Asian children, with no difference by age group (Adeyemi-Walker et al, 2018). It was also found that infants with higher motor development scores had a lower weight relative to their length as they grew—that is, delayed motor development was associated with higher weight relative to length (Shoaibi et al, 2018). Black infants are also more motorically advanced, and this is seen up to two years of age (Malina, 1988), while black children perform better on tests of motor ability than white children (Okano et al, 2001). Kilbride et al (1970) also found that Baganda infants in Uganda showed better motoric ability than white American children. Campbell and Hedeker (2001) also showed that black infants were more motorically advanced than infants of other races.

It is clear that research like this blows up the claim that there should be a “one-size-fits-all” chart for motoric development in infants, and it suggests instead that there should be race-specific milestones. This means that we should throw out the WEIRD assumptions when it comes to the motoric development of infants (Karasik et al, 2011). They discuss research in other cultures where African, Caribbean, and Indian caregivers massage the muscles of babies, stretch their limbs, toss them in the air, sit them up, and walk with them while helping them, which then shapes their muscles and has them learn the mind-muscle connections needed to eventually learn how to walk. It also seems that random assignment to exercise accelerates how quickly an infant walks. White infants also sit at 6 months while black infants sit at 4 months. Nonetheless, it is clear that culture and context can indeed shape motoric development in groups around the world.

A bio-cultural view of motor development

When it comes to biological influences on motor development, sex and age are two important variables (Escolano-Perez, Sanchez-Lopez, and Herrero-Nivela, 2021). Important to this, of course, is that the individual must be developmentally normal: a normal brain with normal vision and spatial skills. They must be able to hear (to eventually follow commands and register what is going on in their environment so as to change their course of action if need be). Further, the child’s home environment and gestational age influence different portions of motor development (Darcy, 2022). After infants begin crawling, their whole world changes and they process visual motion better and faster, being able to differentiate between different speeds and directions, so a stimulating environment for the infant can spur the development of the brain (Van der Meer and Van der Weel, 2022). Biological maturation and body weight also affect motor development. Walking develops naturally, but walking and motor competence need to be nurtured for the child to reach their full potential; lower motor competence is related to higher body weight (Drenowatz and Greier, 2019).

One study on Dutch and Israeli infants even found—using developmental niche construction—that “infant motor development indeed is at least partly culturally constructed [which] emphasizes the importance of placing infant motor development studies into their ‘cultural cradle’” (Oudgeneong, Atun-Eni, and Schaik, 2020). Gross motor development—rolling over, crawling, alternating kicks, moving from lying to sitting, and having tummy time—is recognized by the WHO. Further, children from different cultures have different experiences, which could also lead to, for example, not doing things that are conducive to the development of gross motor skills (Angulo-Barroso et al, 2010). Moreover, motor development is embodied, enculturated, embedded, and enabling (Adolph and Hoch, 2020). It is also known that differences in the cultural environment “have a non-negligible effect on motor development” (Bril, 1986). Motor development also takes place in physical environments and is purposive and goal-directed (Hallemans, Verbeque, and de Walle, 2020).

So putting this all together, we can conceptualize motor development as a dynamic process which is influenced by a complex interplay of biological and cultural factors (Barnes, Zieff, and Anderson, 1999). Biological factors like sex, age, health, and sensory abilities, along with socio-cultural factors like home environment and developmental niches, explain motor development and the differences in it between individuals. Cultural differences, though, can impede motor development and prevent one from reaching milestones one would otherwise have reached in a different cultural environment, just as one who couldn’t hear or see would have trouble reaching developmental milestones.

Children of course grow up in cultural environments and contexts, and so they are culturally situated. What this means is that the cultural and social environment the child finds themselves in will influence their physical and mental development and the milestones they hit, made possible by their normal biology and then enabled (or not) by the socio-cultural environment they are born into. So we have the bio-cultural view of motor development: beyond the cultural environment the child finds themselves in, the interactions they have with parents and caregivers—more knowledgeable others—can be pertinent to their motor development and their reaching of developmental milestones. Cultural practices and expectations can emphasize certain milestones over others and then guide the child toward that trajectory. So the framework recognizes that normal biology and sensory perception are needed for normal motor development, but that cultural and social differences in context will spur motor development differently for children who find themselves in different cultures.

Conclusion

Was Rushton right about this? Yes, I think he was. The recent literature on the matter speaks to this. But that doesn’t mean that his r/K selection theory is true. There are differences in motor development between races. But what is interesting is the interaction between biological and cultural factors that spur motor development. The question of black motor precocity, however, is a socio-political question, since science is a social convention influenced by the values of the scientist in question. Now, to the best of my knowledge, Rushton himself never carried out studies on this; he just collated them to use for his racial trait matrix. However, it’s quite clear that Rushton was politically and socially motivated to prove that his theory was true.

But physical differences between the races are easy enough to prove, and of course they are due to biological and cultural interactions. There are differences in skin color and its properties between blacks and whites (Campiche et al, 2019). There is a 3 percent center of mass difference between blacks and whites which explains why each race excels at running or swimming (Bejan, Jones, and Charles, 2010). There are differences in body composition between Asians and whites, which means that, at the same BMI, Asians would have thicker skin folds and higher body fat than whites (Wang et al, 1994; WHO expert consultation, 2004; Wang et al, 2011). Just the same, at the same BMI, blacks have lower body fat and thinner skin folds than whites (Vickery, Cureton, and Collins, 1988; Wagner and Heyward, 2000; Flegal et al, 2010). There are differences in menarche and thelarche between blacks and whites (Wagner and Heyward, 2000; Kaplowitz, 2008; Reagan et al, 2013; Cabrera et al, 2014; Deardorff et al, 2014). There are differences in anatomy, physiology, and somatotype between blacks and whites, and these differences would explain how the races would perform on the big four lifts. There are interesting and real physical differences between races.

So obviously, what is considered “normal” is different in different cultures, and motor development is no different. So just like I think we should have different BMI and skin fold charts for different races, so too should we have different developmental milestones for different races and cultures. The discussion here is clear, since what is “average” and “normal” differs based on race and culture. For instance, black babies begin walking around 11 months, white babies around 12 months, and Native American babies around 13 months. So while parents may worry that their child didn’t hit a certain developmental milestone like walking, sitting, or rolling, taking a bio-cultural approach will assuage these worries.

Nonetheless, while Rushton was right about race and motor development, we need to place his research project in context. He was clearly motivated, despite the numerous and forceful critiques of his framework, to prove that he was right. And Rushton’s pushing of his theory up until his death shows me that he was quite obviously socially and politically motivated, contrary to what he may have said.

We have approached this paper from the stance that science is a social activity, with all observations influenced by, as well as reflective of, the values of scientists and the political leanings of the sociocultural context within which research is conducted. We suggest that when questions of group difference are pursued in science, awareness of how the categories themselves have been shaped by social and historical forces, as well as of the potential effects on society, is important. (Barnes, Zieff, and Anderson, 1999)

“Missing Heritability” and Missing Children: On the Issues of Heritability and Hereditarian Interpretations

3100 words

“Biological systems are complex, non-linear, and non-additive. Heritability estimates are attempts to impose a simplistic and reified dichotomy (nature/nurture) on non-dichotomous processes.” (Rose, 2006)

“Heritability estimates do not help identify particular genes or ascertain their functions in development or physiology, and thus, by this way of thinking, they yield no causal information.” (Panofsky, 2016: 167)

“What is being reported as ‘genetic’, with high heritability, can be explained by difference-making interactions between real people. In other words, parents and children are sensitive, reactive, living beings, not hollow mechanical or statistical units.” (Richardson, 2022: 52)

Introduction

In the world of behavioral genetics, it is claimed that studies of twins, adoptees, and families can point us to the interplay between genetic and environmental influences on complex behavioral traits. To study this, they use a concept called “heritability”—taken from animal breeding—which estimates the degree of variation in a phenotypic trait that is due to genetic variation amongst individuals in the studied population. But upon the advent of molecular genetic analysis after the human genome project, something happened that troubled behavioral genetic researchers: the heritability estimates gleaned from twin, family, and adoption studies did not match the estimates gleaned from the molecular genetic studies. This creates a conundrum—why don’t the estimates from one way of gleaning heritability match those from the others? I think it’s because these models represent a simplistic (and false) picture of biological causation (Burt and Simon, 2015; Lala, 2023). This is what is termed “missing heritability.” It raises questions that aren’t dissimilar to when a child disappears.

Imagine a missing child. Imagine the fervor a family and the authorities go through in order to find the child and bring them home. The initial fervor, the relentless pursuit, and the agonizing uncertainty constitute a parallel narrative in behavioral genetics, where behavioral geneticists—like the family of a missing child and the authorities—find themselves grappling with unforeseen troubles. In this discussion, I will argue that the additivity assumption is false, that this kind of thinking is a holdover from the neo-Darwinian Modern Synthesis, that hereditarians have been told for decades that heritability just isn’t useful for what they want to do, and finally that “missing heritability” and missing children are in some ways analogous, but with a key difference: the missing children actually existed, while the “missing heritability” never existed at all.

The additivity assumption

Behavioral geneticists pay lip service to “interactions”, but then conceptualize these interactions as due to additive heritability (Richardson, 2017a: 48-49). But the fact of the matter is, genetic interactions create phantom heritability (Zuk et al, 2012). The additivity claim of heritability is straight-up false.

The additive claim is one of the most important things for the utility of the concept of heritability for the behavioral geneticist. The claim that heritability estimates for a trait are additive means that the contribution of each gene variant is independent and that they all sum up to explain the overall heritability (Richardson 2017a: 44 states that “all genes associated with a trait (including intelligence) are like positive or negative charges“). But in reality, gene variants don’t have independent effects; they interact with other genes, the environment, and other developmental resources. In fact, violations of the additivity assumption are large (Daw, Guo, and Harris, 2015).

Gene-gene interactions, gene-environment interactions, and environmental factors can lead to overestimates of heritability, and they are non-additive. So after the 2000s, with the completion of the human genome project, these researchers realized that the heritability they identified using molecular genetics did not jibe with the heritability they had computed from twin studies from the 1920s until the late 1990s (and then even into the 2020s). The expected additive contributions identified with molecular genetic data fell short of actually explaining the heritability gleaned from twin studies.

Thinking of heritability as a complex jigsaw puzzle may better help to explain the issue. The traditional view of heritability assumes that each genetic piece fits neatly into the puzzle to then complete the overall genetic picture. But in reality, these pieces may not be additive. They can interact in unexpected ways, which then creates gaps in our understanding, like a missing puzzle piece. So the non-additive effects of gene variants, including interactions and their complexities, can be likened to missing pieces in the heritability puzzle. The unaccounted-for genetic interactions and nuances then contribute to what is called “missing heritability.” So just as one may search and search for missing puzzle pieces, so too do behavioral geneticists search and search for the “missing heritability.”

So heritability assumes no gene-gene and gene-environment interaction and no gene-environment correlation, among other false or questionable assumptions. But the main issue, I think, is the additivity assumption—it’s outright false, and since it’s outright false, it cannot accurately represent the intricate ways in which genes and other developmental resources interact to form the phenotype.

If heritability estimates assume that genetic influences on a trait are additive and independent, then heritability estimates oversimplify genetic complexity. If heritability estimates oversimplify genetic complexity, then heritability estimates do not adequately account for gene-environment interactions. If heritability does not account for gene-environment interactions, then heritability fails to capture the complexity of trait inheritance. Thus, if heritability assumes that genetic influences on a trait are additive and independent, then heritability fails to capture the complexity of trait inheritance due to its oversimplified treatment of genetic complexity and omission of gene-environment interactions.
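Schematically, that argument is a chain of hypothetical syllogisms (my formalization of the paragraph above, not something in the cited sources):

\[
A \to B,\quad B \to C,\quad C \to D\ \ \therefore\ A \to D
\]

where \(A\) = heritability estimates assume additive, independent genetic influences; \(B\) = they oversimplify genetic complexity; \(C\) = they fail to account for gene-environment interactions; and \(D\) = they fail to capture the complexity of trait inheritance. The form is valid, so the argument stands or falls with the truth of the individual conditionals, which is what the empirical citations above speak to.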

One more issue is that of the “heritability fallacy” (Moore and Shenk, 2016). One commits the heritability fallacy when one assumes that heritability is an index of genetic influence on traits and that heritability can tell us anything about the relative contributions of trait inheritance and ontogeny. Moore and Shenk (2016) then draw a valid conclusion regarding the false belief that heritability tells us anything about the “genetic strength” of a trait:

In light of this, numerous theorists have concluded that ‘the term “heritability,” which carries a strong conviction or connotation of something “[in]heritable” in the everyday sense, is no longer suitable for use in human genetics, and its use should be discontinued.’31 Reviewing the evidence, we come to the same conclusion. Continued use of the term with respect to human traits spreads the demonstrably false notion that genes have some direct and isolated influence on traits. Instead, scientists need to help the public understand that all complex traits are a consequence of developmental processes.

“Missing heritability”, missing children

Twin studies traditionally estimate heritability at between 50 and 80 percent for numerous traits (eg, Polderman et al, 2015; see Joseph’s critique). But as alluded to earlier, molecular studies have found heritabilities of 10 percent or lower (eg, Sniekers et al, 2017; Savage et al, 2018; Zabaneh et al, 2018). This discrepancy between heritability estimates derived using different tools is what is termed “missing heritability” (Matthews and Turkheimer, 2022). But the issue is, increasing the sample sizes will merely increase the chance of spurious correlations (Calude and Longo, 2018), which is all these studies show (Richardson, 2017b; Richardson and Jones, 2019).
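For reference, twin-based estimates in that 50-to-80-percent range typically come from something like Falconer’s formula, which doubles the difference between identical (MZ) and fraternal (DZ) twin correlations; the numbers below are hypothetical, purely for illustration:

\[
h^2 \approx 2(r_{MZ} - r_{DZ})
\]

So if MZ twins correlate at .85 on a trait and DZ twins at .55, the formula yields \(h^2 \approx 2(.85-.55) = .60\). Note that the formula itself presupposes strictly additive genetic effects and the equal environments assumption, which are exactly the assumptions under attack here.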

This tells me one important thing—behavioral geneticists have so much faith in the heritability estimates gleaned from twin studies that they assume the heritability is merely “missing” in the newer molecular genetic studies. But if something is “missing”, that implies it can be found. They have so much faith that, as samples get larger and larger in GWAS and similar studies, we will eventually find the heritability that is missing and be able to identify genetic variants responsible for traits of interest such as IQ. However, I think this is confused, and a simple analogy will show why.

When a child goes missing, it is implied that they will be found by the authorities, whether dead or alive. Now I can liken this to heritability. The term “missing heritability” comes from the disconnect between heritability estimates gleaned from twin studies and heritability estimates gleaned from molecular genetic studies like GWAS. Since twin studies show X percent heritability (high) and molecular genetic studies show Y percent heritability (low), a huge difference between estimates from different tools, the implication is that there is “missing heritability” that must be explained by rare variants or other factors.

So just like parents and authorities try so hard to find their missing children, so too do behavioral geneticists try so hard to find their “missing heritability.” The anguish families endure as they search for their children is mirrored in the efforts of behavioral geneticists to close the gap between two different kinds of tools that glean heritability.

But there is an important issue at play here—namely the fact that missing children actually exist, but “missing heritability” doesn’t, and that’s why we haven’t found it. Although some parents, sadly, may never find their missing children, the analogy here is that behavioral geneticists will never find their own “children” (their missing heritability) because it simply does not exist.

Spurious correlations

Even increasing the sample sizes won’t do anything, since the larger the sample size, the greater the chance of spurious correlations, and that’s all GWAS studies for IQ are (Richardson and Jones, 2019); correlations with GWAS are inevitable and meaningless (Richardson, 2017b). Denis Noble (2018) puts this well:

As with the results of GWAS (genome-wide association studies) generally, the associations at the genome sequence level are remarkably weak and, with the exception of certain rare genetic diseases, may even be meaningless (1321). The reason is that if you gather a sufficiently large data set, it is a mathematical necessity that you will find correlations, even if the data set was generated randomly so that the correlations must be spurious. The bigger the data set, the more spurious correlations will be found (3). The current rush to gather sequence data from ever larger cohorts therefore runs the risk that it may simply prove a mathematical necessity rather than finding causal correlations. It cannot be emphasized enough that finding correlations does not prove causality. Investigating causation is the role of physiology.

Nor does finding higher overall correlations by summing correlations with larger numbers of genes showing individually tiny correlations solve the problem, even when the correlations are not spurious, since we have no way to find the drugs that can target so many gene products with the correct profile of action.
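The Calude and Longo point can be illustrated with a toy simulation (all numbers are arbitrary choices of mine, purely for illustration): generate a completely random “trait” and completely random “genotypes”, and the strongest correlation you find grows with the amount of data searched.

```python
import numpy as np

# Toy sketch of the Calude-Longo point: search enough random data and
# sizable correlations appear as a mathematical necessity, not a discovery.
rng = np.random.default_rng(0)

n_people = 500
trait = rng.normal(size=n_people)                  # purely random "phenotype"
trait_z = (trait - trait.mean()) / trait.std()

for n_variants in (10, 1_000, 20_000):
    # purely random "genotypes" (0/1/2 allele counts) with no real effects
    geno = rng.integers(0, 3, size=(n_variants, n_people)).astype(float)
    geno_z = (geno - geno.mean(axis=1, keepdims=True)) / geno.std(axis=1, keepdims=True)
    corrs = geno_z @ trait_z / n_people            # Pearson r for every variant at once
    print(f"{n_variants:>6} random variants tested -> max |r| = {np.abs(corrs).max():.3f}")

# Every "variant" here is pure noise, yet the best "hit" grows as more
# variants are searched: spurious correlation by mathematical necessity.
```

The maximum correlation climbs as the search space grows even though nothing causal is in the data, which is the worry about ever-larger GWAS panels and cohorts.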

The Darwinian model

But the claim that there is a straight line from G (genes) to P (phenotype) is a mere holdover from the neo-Darwinian Modern Synthesis. The fact of the matter is, “HBD” and hereditarianism are based on reductionistic models of genes and how they work. But genes don’t work the way they think they do; reality is much more complex than they assume. Feldman and Ramachandran (2018) ask “Missing compared to what?”, effectively challenging the “missing heritability” claim. They also ask: would Herrnstein and Murray have written The Bell Curve if they believed that the heritability of IQ were 0.30? I don’t think they would have. In any case, the belief that the heritability of IQ is between 0.4 and 0.8 shows the genetic determinist assumptions which are inherent in this type of “HBD” genetic determinist thinking.

Amusingly, as Ned Block (1995) noted, Murray said in an interview that “60 percent of the intelligence comes from heredity” and that this heritability is “not 60 percent of the variation. It is 60 percent of the IQ in any given person.” Such a major blunder from one of the “intellectual spearheads” of the “HBD race realist” movement…

Behavioral geneticists claim that the heritability is missing only because sample sizes are low, and that as sample sizes increase, the missing heritability based on associated genes will be found. But this doesn’t follow at all: increasing sample sizes will just increase spurious hits of genes correlated with the trait in question, and it says absolutely nothing about causation. Only a developmental perspective can provide mechanistic knowledge; the so-called heritability of a phenotype cannot give us such information, because heritability isn’t a mechanistic variable and doesn’t show causation.

Importantly, a developmental perspective provides mechanistic knowledge that can yield practical treatments for pathologies. In contrast, information about the “heritability” of a phenotype—the kind of information generated by twin studies—can never be as useful as information about the development of a phenotype, because only developmental information produces the kind of thorough understanding of a trait’s emergence that can allow for successful interventions. (Moore 2015: 286)

The Darwinian model and its assumptions are inherent in thinking about heritability and genetic causation as a whole, and they are antithetical to developmental, EES-type thinking. Since hereditarianism and HBD-type thinking are neo-Darwinist, such assumptions are built into their beliefs and arguments.

Conclusion

Assumptions of heritability simply do not hold. Heritability, quite simply, isn’t a characteristic of traits; it is a characteristic of “relationships in a population observed in a particular setting” (Oyama, 1985/2000). Heritability estimates tell us absolutely nothing about development, nor about the causes of development. Heritability is a mere breeding statistic and tells us nothing at all about the causes of development or whether genes are “causal” for a trait in question (Robette, Genin, and Clerget-Darpoux, 2022). It is key to understand that heritability, along with the so-called “missing heritability”, is based on reductive models of genetics that just do not hold, especially given the newer knowledge that we have from systems biology (eg, Noble, 2012).

The assumption that heritability estimates tell us anything useful about genetics, traits, and causes, along with a reductive belief in genetic causation for the ontogeny of traits, has wasted millions of dollars. Now we need to grapple with the fact that heritability just doesn’t tell us anything about genetic causes of traits; genes are necessary, not sufficient, causes of traits, because no genes (along with other developmental resources) means no organism. Also coming from twin, family and adoption studies are Turkheimer’s (2000) so-called “laws of behavioral genetics.” Further, the falsity of the EEA (equal environments assumption) is paramount here, and since the EEA is false, genetic conclusions from such studies are invalid (Joseph et al, 2015). There is also the fact that heritability is based on a false biological model. The issue is that heritability rests on a conceptual model that “is unsound and the goal of heritability studies is biologically nonsensical given what we now know about the way genes work” (Burt and Simons, 2015: 107). What Richardson (2022) terms “the agricultural model of heritability” is known to be false. In fact, the heritability of “IQ” is higher than any heritability found in the animal kingdom (Schonemann, 1997). Why this doesn’t give any researcher pause is beyond me.

Nonetheless, the Darwinian assumptions inherent in behavioral genetic, HBD “race realist” thinking are false. And the fact of the matter is, increasing the sample size of molecular genetic studies will only increase the chances of spurious correlations and of picking up population stratification. So using heritability to show genetic and environmental causes is a bust, and it has been a bust ever since Jensen revived the race and IQ debate in 1969; the responses Jensen received made the 1970s a decade in which numerous arguments were mounted against the concept of heritability (eg, Layzer, 1974).

It has also been pointed out to racial hereditarians for literally decades that heritability is a flawed metric (Layzer, 1974; Taylor, 1980; Bailey, 1997; Schonemann, 1997; Guo, 2000; Moore, 2002; Rose, 2006; Schneider, 2007; Charney, 2012, 2013; Panofsky, 2014; Burt and Simons, 2015; Joseph et al, 2015; Moore and Shenk, 2016; Panofsky, 2016; Richardson, 2017; Lerner, 2018). These issues—among many more—lead Lerner to conclude:

However, the theory and research discussed across this chapter and previous ones afford the conclusion that no psychological attribute is pre-organized in the genes and unavailable to environmental influence. That is, any alleged genetic difference (or “inferiority”) of African Americans based on the high heritability of intelligence would seem to be an attribution built on a misunderstanding of concepts basic to an appropriate conceptualization of the nature–nurture controversy. An appreciation of the coaction of genes and context—of genes↔context relations—within the relational developmental system, and of the meaning, implications, and limitations of the heritability concept, should lead to the conclusion that the genetic-differences hypothesis of racial differences in IQ makes no scientific sense. (Lerner, 2018: 636)

That heritability doesn’t address mechanisms, ignores genetic factors, and is inherently reductionist means that there is little to no utility to heritability for humans. And applying heritability to the complex, non-additive, non-linear aspects of biological systems is an attempt at reducing biological systems to their component parts (Rose, 2006), making heritability, again, inherently reductionist. We have to analyze causes, not variances (Lewontin, 1974), which heritability cannot do. So it’s very obvious that the hereditarian programme which was revived by Jensen (1969)—and based on twin studies which were first undertaken in the 1920s—rests on a seriously flawed model of genes and how they work. But, of course, hereditarians have an ideological agenda to uphold, so that’s why they continue to pursue “heritability” in order to “prove” that racial differences in many socio-behavioral traits—IQ included—are “in part” due to genes. But this type of argumentation quite clearly fails.

The fact of the matter is, “there are very good reasons to believe gene variations are at best irrelevant to common disorders and at worst a distraction from the social and political roots of major public health problems generally and of their unequal distribution in particular” (Chaufan and Joseph, 2013: 284). (Also see Joseph’s (2015) The Trouble with Twin Studies for more argumentation against the use of heritability and its inflation due to false assumptions, along with arguments against “missing heritability.”) In fact, claims of “missing heritability” rest on “genetic determinist beliefs, a reliance on twin research, the use of heritability estimates, and the failure to seriously consider the possibility that presumed genes do not exist” (Joseph, 2012). Although it has been claimed that so-called rare variants explain the “missing heritability” (Genin, 2020), this is nothing but cope. So the heritability was never missing; it never existed at all.

The Multilingual Encyclopedia: On the Context-Dependency of Human Knowledge and Intelligence

3250 words

Introduction

Language is the road map of a culture. It tells you where its people come from and where they are going. – Rita Mae Brown

Communication bridges gaps. The words we use and the languages we speak along with the knowledge that we share serve as a bridge to weave together human culture and intelligence. So imagine a multilingual encyclopedia that encompasses the whole of human knowledge, a book of human understanding from the sciences, the arts, history and philosophy. This encyclopedia is a testament to the universal nature of human knowledge, but it also shows the interplay between culture, language, knowledge and human intelligence.

In my most recent article, I argued that human intelligence is shaped by interactions within a cultural and social context. So here I will argue that: there are necessary aspects of knowledge; knowledge is context-dependent; and language, culture and knowledge interact with specific contexts to form intelligence, mind and rationality. My multilingual encyclopedia analogy shows that while there is what I term “universal core knowledge”, this knowledge becomes context-dependent based on the needs of different cultures, and I will also use this example to again argue against IQ. Finally, I will conclude that the arguments in this article and the previous one show how the mind is socially formed on the basis of the necessary physical substrates, but that socio-cultural contexts are what is necessary for human intelligence, mindedness, and rationality.

Necessary aspects of knowledge

There are two necessary and fundamental aspects of knowledge and thought—cognition and the brain. The brain is a necessary pre-condition for human mindedness, and cognition is influenced by culture, although my framework posits that cognitive processes play a necessary role in human thought, just as the brain provides the necessary physical substrate for these processes. While cognition and knowledge are intertwined, they’re not synonymous. To cognize is to actively think about something, meaning it is an action. There is a minimal structure, and it’s accounted for by cognition: pattern recognition, categorization, sequential processing, sensory integration, associative memory and selective attention. These processes are necessary—they are inherent in “cognition”—and they set the stage for more complex mental abilities, which is what Vygotsky was getting at with his theory of the social formation of mind.

Individuals do interpret their experiences through a cultural lens, since culture provides the framework for understanding, categorizing, and making sense of experiences. But I also recognize the role of individual experiences and personal interpretations. So while cultural lenses may shape initial perceptions, people can also think critically and reflect on their interpretations over time due to the differing experiences they have.

Fundamental, necessary aspects of knowledge like sensory perception are also pivotal. By “fundamental”, I mean “necessary”—that is, we couldn’t think or cognize without the brain, and it therefore follows that we couldn’t think without cognition. These things are necessary for thinking, language, culture and eventually intelligence, but what is sufficient for mind, thinking, language and rationality are the specific socio-cultural interactions and knowledge formulations that we get by being immersed in linguistically-mediated cultural environments.

The context-dependence of knowledge

“Context-dependent knowledge” refers to information or understanding that can take on different meanings or interpretations based on the specific context in which it is applied or used. But I also mean something else by this: I mean that an individual’s performance on IQ tests is influenced by their exposure to specific cultural, linguistic, and contextual factors. This means that IQ tests aren’t culture-neutral or universally applicable; they are biased towards people who share similar class-cultural backgrounds and experiences.

There is something about humans that allows us to be receptive to cultural and social contexts so as to form mind, language, rationality and intelligence (and I would say that something is the immaterial self). But I wouldn’t call it “innate”, for so-called “innate” traits need certain environmental contexts to be able to manifest themselves. So-called “innate” traits are experience-dependent (Blumberg, 2018).

So while humans actively adapt, shape, and create cultural knowledge through cultural processes, knowledge acquisition isn’t solely mediated by culture. Individual experiences matter, as do interactions with the environment along with the accumulation of knowledge from various cultural contexts. So human cognitive capacity isn’t entirely a product of culture, and human cognition allows for critical thinking, creative problem solving, along with the ability to adapt cultural knowledge.

Finally, knowledge acquisition is cumulative—and by this, I mean it is qualitatively cumulative. As individuals acquire knowledge from their cultural contexts, individual experiences and so on, this knowledge becomes internalized in their cognitive framework. They can then build on this existing knowledge to further adapt and shape culture.

The statement “knowledge is context-dependent” is a description of the nature of knowledge itself. It means that knowledge can take on different meanings or interpretations in different contexts. So when I say “knowledge is context-dependent”, I am acknowledging that this applies in all contexts; I’m describing the contextual nature of knowledge itself.

One example of the context-dependence of universal knowledge is how English-speakers use the “+” sign for addition, while the Chinese have “加” or “jiā”. So while the fundamental principle is the same, the two cultures have different symbols and notations to signify the operation. Furthermore, there are differences in thinking between Eastern and Western cultures, with thinking being more analytic in Western cultures and more holistic in Eastern cultures (Yates and de Oliveira, 2016; also refer to their paper for more differences between cultures in decision-making processes). There are also differences between cultures in visual attention (Jurkat et al, 2016). While this isn’t “knowledge” per se, it does attest to how cultures differ in their perceptions and cognitive processes, which underscores the broader idea that cognition, including visual attention, is influenced by cultural contexts and social situations. Even the brain’s neural activity (the brain’s physiology) is context-dependent—thus culture is context-dependent (Northoff, 2013).

But when it comes to culture, how does language affect the meaning of culture, and with it intelligence and how it develops?

Language, culture, knowledge, and intelligence

Language plays a pivotal role in shaping the meaning of culture, and by extension, intelligence and its development. Language is not only a way to communicate; it is also a psychological tool that molds how we think, perceive and relate to the world around us. It therefore serves as the bridge between individual cognition and shared cultural knowledge, while acting as the interface through which cultural values and norms are conveyed and internalized.

So language allows us to encode and decode cultural information, which is how culture is transmitted across generations. Language provides the framework for expressing complex thoughts, concepts, and emotions, which enables us to discuss and negotiate the cultural norms that define our societies. Different languages offer unique structures for expressing ideas, which can then influence how people perceive and make sense of their cultural surroundings. And important for this understanding is the fact that a human can’t have a thought unless they have language (Davidson, 1982).

Language is also intimately linked with cognitive development. Under Vygotsky’s socio-historical theory of learning and development, language is a necessary cognitive tool for thought and the development of higher mental functions. So language not only reflects our cognitive abilities, it also plays an active role in their formation. Thus, through social interactions and linguistic exchanges, individuals engage in a dynamic process of cultural development, building on the foundation of their native language and culture.

Feral children and deaf linguistic isolates illustrate this dictum: there is a critical window in which language can be acquired, and thus the importance of human culture in human development (Vyshedskiy, Mahapatra, and Dunn, 2017). Cases of feral children, then, show us how children would develop without human culture and show the importance of early language hearing and use for normal brain development. In fact, this shows how social isolation has negative effects on children, and since human culture is inherently social, it shows the importance of human culture and society in forming and nurturing mind, intelligence, rationality and knowledge.

So the relationship between language, culture and intelligence is intricate and reciprocal. Language allows us to express ourselves and our cultural knowledge while shaping our cognitive processes and influencing how we acquire and express our intelligence. On the other hand, intelligence—as shaped by cultural contexts—contributes to the diversification of language and culture. This interplay underscores how language impacts our understanding of intelligence within its cultural framework.

Furthermore, in my framework, intelligence isn’t a static, universally-measurable trait; it is a dynamic and constantly-developing trait shaped by social and cultural interactions along with individuals’ experiences, and so intentionality is inherent in it. Moreover, in the context of acquiring cultural knowledge, Vygotsky’s ZPD concept shows that individuals can learn and internalize things outside of their current toolkit as guided by more knowledgeable others (MKOs). It also shows that learning and development occur mostly in this zone between what someone can do alone and what someone can do with help, which then allows them to expand their cognitive abilities and cultural understanding.

Cultural and social exposure

Cultural and social exposure are critical to my conception of intelligence. Because, as we can see in cases of feral children, there is a clear developmental window of opportunity to gain language and to think and act like a human through the interaction of the individual with human culture. The base cognitive capacities that we are born with and develop through infancy, toddlerhood, childhood and then adulthood aren’t just inert, passive things that merely receive information through vision such that we then gain minds and intelligence and become human. Critically, they need to be nurtured through culture and socialization. The infant needs the requisite experiences to be able to learn how to roll over, crawl, and finally walk, and needs to be exposed to different things in order to be properly enculturated into the culture they were born into. So while we are born into cultural and linguistically-mediated environments, it’s these environments—along with what individuals do themselves when they finally learn to walk, talk, and gain their mind, intelligence and rationality—that shape individual humans, the knowledge they gain and ultimately their intelligence.

If humans possess foundational cognitive capacities that aren’t entirely culturally determined or influenced, and culture serves as a mediator in shaping how these capacities are expressed and applied, then it follows that culture influences cognitive development while cognitive abilities provide the foundation for being able to learn at all, as well as being able to speak and to internalize the culture and language they are exposed to. So if culture interacts dynamically with cognitive capacities, and crucial periods exist during which cultural learning is particularly influential (cases of feral children), then it follows that early cultural exposure and socialization are critical. So it follows that my framework acknowledges both cognitive capacities and cultural influences in shaping human cognition and intelligence.

In his book Vygotsky and the Social Formation of Mind, Wertsch (1985) noted that Vygotsky didn’t discount the role of biology (as in development in the womb), but held that beyond a certain point biology can no longer be viewed as the sole or even primary force of change for the individual, and that the explanation necessarily shifts to a sociocultural one:

However, [Vygotsky] argued that beyond a certain point in development, biological forces can no longer be viewed as the sole, or even the primary, force of change. At this point there is a fundamental reorganization of the forces of development and a need for a corresponding reorganization in the system of explanatory principles. Specifically, in Vygotsky’s view the burden of explanation shifts from biological to social factors. The latter operate within a given biological framework and must be compatible with it, but they cannot be reduced to it. That is, biological factors are still given a role in this new system, but they lose their role as the primary force of change. Vygotsky contrasted embryological and psychological development on this basis:

The embryological development of the child … in no way can be considered on the same level as the postnatal development of the child as a social being. Embryological development is a completely unique type of development subordinated to other laws than is the development of the child’s personality, which begins at birth. Embryological development is studied by an independent science—embryology, which cannot be considered one of the chapters of psychology … Psychology does not study heredity or prenatal development as such, but only the role and influence of heredity and prenatal development of the child in the process of social development. ([Vygotsky] 1972, p. 123)

The multilingual encyclopedia

Imagine a multilingual encyclopedia that encompasses knowledge of multiple disciplines from the sciences to the humanities to religion. This encyclopedia has what I term universal core knowledge. This encyclopedia is maintained by experts from around the world and is available in many languages. So although the information in the encyclopedia is written in different languages and upheld by people from different cultures, fundamental scientific discoveries, historical events and mathematical theorems remain constant across all versions of the encyclopedia. So this knowledge is context-independent because it holds true no matter the language it’s written in or the cultural context it is presented in. But the encyclopedia’s entries are designed to be used in specific contexts. The same scientific principles can be applied in labs across the world, but the specific experiments, equipment and cultural practices could vary. Moreover, historical events could be studied differently in different parts of the world, but the events themselves are context-independent.

So this thought experiment challenges the claim that context-independent knowledge requires an assertion of absolute knowledge. Context-independent knowledge exists in the encyclopedia, but it isn’t absolute. It’s merely a collection of universally-accepted facts, principles and theories that are applied in different contexts taking into account linguistic and cultural differences. Thus the knowledge in the encyclopedia is context-independent in that it remains the same across the world, across languages and cultures, but it is used in specific contexts.

Now, likening this to IQ tests is simple. When I say that “all IQ tests are culture-bound, and this means that they’re class-specific”, this is a specific claim. What this means, in my view, is that people grow up in different class-cultural environments, and so they are exposed to different knowledge bases and kinds of knowledge. Since they are exposed to different knowledge bases and kinds of knowledge, when test time comes, if they haven’t been exposed to the knowledge bases and kinds of knowledge on the test, they necessarily won’t score as high as someone who was immersed in them. Cole’s (2002) argument that all tests are culture-bound is true. Thus IQ tests aren’t culture-neutral; they are all culture-bound, and culture-neutral tests are an impossibility. This further buttresses my argument that intelligence is shaped by the social and cultural environment, underscoring the idea that the specific knowledge bases and cognitive resources that individuals are exposed to within their unique socio-cultural contexts play a pivotal role in the expression and development of their cognitive abilities.

IQ tests are mere cultural artifacts. So IQ tests, like the entries in the multilingual encyclopedia, are not immune to cultural biases. So although the multilingual encyclopedia has universal core knowledge, the way that the information is presented in the encyclopedia, like explanations and illustrations, would be culturally influenced by the authors/editors of the encyclopedia. Remember—this encyclopedia is an encyclopedia of the whole of human knowledge written in different languages, seen through different cultural lenses. So different cultures could have ways of explaining the universal core knowledge or illustrating the concepts that are derived from them.

So IQ tests, just like the entries in the encyclopedia, are only usable in certain contexts. But while an encyclopedia entry could be usable in more than one context, there is a difference for IQ testing. The tests are created by people from a narrow social class, and so the items on them are class-specific. This then results in cultural biases, because people from different classes and cultures are exposed to varying knowledge bases, so people will be differentially prepared for test-taking on this basis alone. So the knowledge that people are exposed to based on their class membership—or even different cultures within America, or an immigrant culture—would influence test scores. So while there is universal core knowledge, and some of this knowledge may be on IQ tests, the fact is that different classes and cultures are exposed to different knowledge bases, and that’s why they score differently—the specific language and numerical skills on IQ tests are class-specific (Brito, 2017). I have noted for years how culturally-dependent IQ tests are, and this interpretation is reinforced when we consider knowledge and its varying interpretations in the multilingual encyclopedia, which highlights the intricate relationship between culture, language, and IQ. This then serves to show that IQ tests are mere knowledge tests—class-specific knowledge tests (Richardson, 2002).

So my thought experiment shows that while there are fundamental scientific discoveries, historical events and mathematical theorems that remain constant throughout the world and across different languages and cultures, the encyclopedia’s entries are designed to be used in specific contexts. So the multilingual encyclopedia thought experiment supports my claim that even when knowledge is context-independent (like that of scientific discoveries, historical facts), it can become context-dependent when it is used and applied within specific cultural and linguistic contexts. This, then, aligns with the part of my argument that knowledge is not entirely divorced from social, cultural and contextual influences.

Conclusion

The limitations of IQ tests become evident when we consider how individuals produce and acquire knowledge and the cultural and linguistic diversity and contexts that define our social worlds. The analogy of the multilingual encyclopedia shows that while certain core principles remain constant, the way that we perceive and apply knowledge is deeply entwined within the cultural and social contexts in which we exist. This dynamic relationship between culture, language, knowledge and intelligence, then, underscores the need to recognize the social formation of mind and intelligence.

Ultimately, human socio-cultural interactions, language, and the knowledge we accumulate together mold our understanding of intelligence and how we acquire it. The understanding that intelligence arises through these multifaceted exchanges and interactions within a social and cultural framework points to a more comprehensive perspective. So by acknowledging the vital role of culture and language in the formation of human intelligence, we not only deconstruct the limitations of IQ tests, but we also lay the foundation for a more encompassing way of thinking about what it truly means to be intelligent, and how it is shaped and nurtured by our social lives in our unique cultural contexts and the experiences that we have.

Thus, to truly grasp the essence of human intelligence, we don’t need IQ tests, and we certainly don’t need claims that genes cause IQ or psychological traits and thereby make certain people or groups more intelligent than others; we have to embrace the fact that human intelligence thrives within the web of social and cultural influences and interactions, which collectively form what we understand as the social formation of mind.

Intelligence without IQ: Towards a Non-IQist Definition of Intelligence

3000 words

Introduction

In the disciplines of psychology and psychometrics, intelligence has long been the subject of study, with attempts to reduce intelligence to a number based on what a class-biased test spits out when an individual takes an IQ test. But what if intelligence resists quantification, and we can’t state that IQ tests put a number to one’s intelligence? The view I will present here conceptualizes intelligence as a psychological trait, and since it’s a psychological trait, it’s resistant to being reduced to anything physical and it’s also resistant to quantification. I will draw on Vygotsky’s socio-cultural theory of learning and development and his emphasis on the role of culture, social interactions and cultural tools in shaping intelligence, and I will explain that Vygotsky’s theory supports the notion that intelligence is socially and contextually situated. I will then draw on Ken Richardson’s view that intelligence is a socially dynamic, irreducible trait created by sociocultural tools.

All in all, the definition that I will propose here will be irrelevant to IQ. Although I do conceptualize psychological traits as irreducible, it is obvious that IQ tests are class-specific knowledge tests—that is, they are biased against certain classes, and so it follows that they are biased for certain classes. But the view that I will articulate here suggests that intelligence is a complex and multifaceted construct that is deeply influenced by cultural and social factors and that it resists quantification because intentionality is inherent in it. And I don’t need to posit a specified measured object, object of measurement and measurement unit for my conception, because I’m not claiming measurability.

Vygotsky’s view

Vygotsky is most well-known for his concepts of private speech, more knowledgeable others, and the zone of proximal development (ZPD). Intelligence involves the internalization of private speech, where individuals engage in a self-directed dialogue to solve problems and guide their actions. This internalized private speech then represents an essential aspect of one’s cognitive development, and reflects an individual’s ability to think and reason independently.

Intelligence is then nurtured through interactions with more knowledgeable others (MKOs) in a few ways. MKOs are individuals who possess a deeper understanding or expertise in specific domains. MKOs provide guidance, support, and scaffolding, helping individuals to reach higher levels of cognitive functioning and problem solving.

Along with MKOs, the ZPD is a crucial aspect of understanding intelligence. It represents a range of tasks that individuals can’t perform independently but can achieve with guidance and support—it is the “zone” where learning and cognitive development take place. So intelligence isn’t only about what one can do alone, but also what one can achieve with the assistance of an MKO. Thus, in this context, intelligence is seen as a dynamic process of development where individuals continuously expand their ZPD through sociocultural interactions. So MKOs play a pivotal role in facilitating learning and cognitive development by providing the necessary help to individuals within their ZPD. The ZPD concept underscores the idea that learning is most effective when it is in this zone, where the learner is neither too challenged nor too comfortable, but is guided by an MKO to reach higher levels of competence in what they’re learning.

So the takeaway from this discussion is this: Intelligence isn’t merely a product of individual cognitive abilities; it is deeply influenced by cultural and social interactions. It encompasses the capacity for private speech, which demonstrates an individual’s capacity to think and reason independently. It also involves learning and development as facilitated by MKOs, who contribute to an individual’s cognitive growth. And the ZPD underscores the importance of sociocultural guidance in shaping and expanding an individual’s intelligence, while reflecting the dynamic and collaborative nature of cognitive development within the sociocultural context. So intelligence, as understood here, is inseparable from Vygotsky’s concepts of private speech, more knowledgeable others and the ZPD, and this highlights the dynamic interplay between individual cognitive processes and sociocultural interactions in the development of intelligence.

Davidson (1982) stated that “Neither an infant one week old nor a snail is a rational creature. If the infant survives long enough, he will probably become rational, while this is not true of the snail.” And on Vygotsky’s theory, the infant becomes rational—that is, intelligent—by interacting with MKOs and internalizing private speech as they learn to talk and think in cultural contexts within their ZPD. Infants quite clearly have the capacity to become rational, and they begin to become rational through interactions with MKOs and caregivers who guide their cognitive growth within their ZPD. This perspective, then, highlights the role of social and cultural influences in the development of infants’ intelligence and their becoming rational creatures. Children are born into both cultural and linguistically-mediated environments, which is put well by Vasileva and Balyasnikova (2019):

Based on the conceptualization of cultural tools by Vygotsky (contrary to more traditional socio-cultural schools), it follows that a child can be enculturated from birth. Children are not only born in a human-created environment, but in a linguistically mediated environment that becomes internalized through development.

Richardson’s view

Ken Richardson has been a critic of IQ testing since the 1970s, being an editor of the volume Race and Intelligence: The Fallacies Behind the Race-IQ Controversy. He has published numerous books critiquing the concept of IQ, most recently Understanding Intelligence (Richardson, 2022). (In fact, Richardson’s book was what cured me of my IQ-ist delusions and set me on the path to DST.) Nonetheless, Richardson (2017: 273) writes:

Again, these dynamics would not be possible without the co-evolution of interdependencies across levels: between social, cognitive, and affective interactions on the one hand and physiological and epigenetic processes on the other. As already mentioned, the burgeoning research areas of social neuroscience and social epigenetics are revealing ways in which social/cultural experiences ripple through, and recruit, those processes.

For example, different cognitive states can have different physiological, epigenetic, and immune-system consequences, depending on social context. Importantly, a distinction has been made between a eudaimonic sense of well-being, based on social meaning and involvement, and hedonic well-being, based on individual pleasure or pain. These different states are associated with different epigenetic processes, as seen in the recruitment of different transcription factors (and therefore genes) and even immune system responses.18 All this is part of the human intelligence system.

In that way human evolution became human history. Collaboration among brains and the emergent social cognition provided the conceptual breakout from individual limits. It resulted in the rapid progress seen in human history from original hunter-gatherers to the modern, global, technological society—all on the basis of the same biological system with the same genes.

So intelligence emerges from the specific activities, experiences, and resources that individuals encounter throughout their development. Richardson’s view, too, is a Vygotskian one. And like Vygotsky, he emphasizes the significant cultural and social aspects in shaping human intelligence. He rejects the claim that human intelligence is reducible to a number (on IQ tests), genes, brain physiology etc.

Human intelligence cannot be divorced from the sociocultural context in which it is embedded and operates. So on this view, intelligence is not “fixed” as the genetic reductionist IQ-ists would have you believe; instead it can evolve and adapt over time in response to learning, the environment, and experiences. Indeed, this is the basis for his argument about the intelligent developmental system. Richardson (2012) even argues that “IQ scores might be more an index of individuals’ distance from the cultural tools making up the test than performance on a singular strength variable.” And given what we know about the inherent bias in the items on IQ tests (how they’re basically middle-class cultural knowledge tests), it seems that Richardson is right here. Richardson (1991; cf 2001) even showed that when Raven’s Progressive Matrices items were couched in familiar contexts, children were able to complete them, even though the exact same rules governed Richardson’s re-built items and the abstract Raven’s items. This shows that cultural context matters for these kinds of items, even when the rules are held constant.

Returning to the concept of cultural tools that Richardson brought up in the previous quote (which is derived from Vygotsky’s theory): cultural tools encompass language, knowledge, and problem-solving abilities which are culturally-specific and influenced by that culture. These tools are embedded in IQ tests, influencing the problems presented and the types of questions asked. Thus, it follows that if one is exposed to different psychological and cultural tools (basically, if one is exposed to knowledge bases different from the test’s), then they will score lower on the test compared to another person who is exposed to the item content and structure of the test. So individuals who are more familiar with the cultural references, language patterns, and knowledge will score better than those who aren’t. Of course, there is still room here for differences in individual experiences, and these differences influence how individuals approach problem solving on the tests. Thus, Richardson’s view highlights that IQ scores can be influenced by how closely aligned an individual’s experiences are with the cultural tools embedded in the test. He has also argued that non-cognitive, cultural, and affective factors explain why individuals score differently on IQ tests, with IQ not measuring the ability for complex cognition (Richardson, 2002; Richardson and Norgate, 2014, 2015).

So contrary to how IQ-ists want to conceptualize intelligence (as something static, fixed, and genetic), Richardson’s view is more dynamic, and looks to the cultural and social context of the individual.

Culture, class, and intelligence

Since I have conceptualized intelligence as a socially embedded, culturally-influenced and dynamic trait, class and culture are deeply intertwined in my conception of intelligence. My definition recognizes that intelligence is shaped by cultural contexts. Culture provides different tools (cultural and psychological) which then develop an individual’s cognitive abilities. Language is a critical cultural (also psychological) tool which shapes how individuals think and communicate. So intelligence, in my conception and definition, encompasses the ability to effectively use these cultural tools. Furthermore, individuals from different cultures may develop unique problem-solving strategies which are embedded in their cultural experiences.

Social class influences access to educational and cultural resources. Higher social classes often have greater access to quality education, books, and cultural experiences and this can then influence and impact an individual’s cognitive development and intelligence. My definition also highlights the limitations of reductionist approaches like IQ tests. It has been well-documented that IQ tests have class-specific knowledge and skills on them, and they also include knowledge and scenarios which are more familiar to individuals from certain social and cultural backgrounds. This bias, then, leads to disparities in IQ scores due to the nature of IQ tests and how the tests are constructed.

A definition of intelligence

Intelligence: Noun

Intelligence, as a noun, refers to the dynamic cognitive capacity—characterized by intentionality—possessed by individuals. It is characterized by a connection to one’s social and cultural context. This capacity includes a wide range of cognitive abilities and skills, reflecting the multifaceted nature of human cognition. This, then, shows that only humans are intelligent, since intentionality is a human-specific ability, which is due to the fact that we humans are minded beings, and minds give rise to and allow intentional action.

A fundamental aspect of intelligence is intentionality, which signifies that cognitive processes are directed towards goals, problem solving, or understanding within the individual’s social and cultural context. So intelligence is deeply rooted in one’s cultural and social context, making it socially embedded. It’s influenced by cultural practices, social interactions, and the utilization of cultural tools for learning and problem solving. This dynamic trait evolves over time as individuals engage with their environment and integrate new cultural and social experiences into their cognitive processes.

Intelligence is the dynamic capacity of individuals to engage effectively with their sociocultural environment, utilizing a diverse range of cognitive abilities (psychological tools), cultural tools, and social interactions. Richardson’s perspective emphasizes that intelligence is multifaceted and not reducible to a single numerical score, acknowledging the limits of IQ testing. Vygotsky’s socio-cultural theory underscores that intelligence is deeply shaped by cultural context, social interactions, and the use of cultural tools for problem solving and learning. So a comprehensive definition of intelligence in my view—informed by Richardson and Vygotsky—is that of a socially embedded cognitive capacity—characterized by intentionality—that encompasses diverse abilities and is continually shaped by an individual’s cultural and social interactions.

In essence, within this philosophical framework, intelligence is an intentional, multifaceted cognitive capacity that is intricately connected to one’s cultural and social life and surroundings. It reflects the dynamic interplay of intentionality, cognition and socio-cultural influences. It is thus closely related to the concept of cognition in philosophy, which is concerned with how individuals process information, make sense of the world, acquire knowledge and engage in thought processes.

What IQ-ist conceptions of intelligence miss

The two concepts I’ll discuss are the two most oft-cited concepts that hereditarian IQ-ists talk about—that of Gottfredson’s “definition” of intelligence and Jensen’s attempt at relating g (the so-called general factor of intelligence) to PC1.

Gottfredson’s “definition” is the most-commonly cited one in the psychometric IQ-ist literature:

Intelligence is a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience. It is not merely book learning, a narrow academic skill, or test-taking smarts. Rather, it reflects a broader and deeper capability for comprehending our surroundings—“catching on,” “making sense” of things, or “figuring out” what to do.

I have pointed out the nonsense that is her “definition”: she says intelligence is “not merely book learning, a narrow academic skill or test-taking smarts”, yet supposedly IQ tests “measure” this, and they’re based on… book learning, academic skill, and knowledge of the items on the test. That this “definition” is cited as something that is related to IQ tests is laughable. A research paper from Microsoft Research even cited this “definition” in the paper “Sparks of Artificial General Intelligence: Early Experiments with GPT-4” (Bubeck et al, 2023), but the reference was seemingly removed. Strange…

Spearman “discovered” g in 1903, but his g theory was refuted mere years later. (Never mind the fact that Spearman saw what he wanted to see in his data; Schlinger, 2003.) In fact, Spearman’s g was falsified in 1947 by Thurstone and then again in 1992 by Guttman (Heene, 2008). Then Jensen came along trying to revive the concept, and he likened it to PC1. Here are the steps that show the circularity in Jensen’s conception:

(1) If there is a general intelligence factor “g,” then it explains why people perform well on various cognitive tests.

(2) If “g” exists and explains test performance, the absence of “g” would mean that people do not perform well on these tests.

(3) We observe that people do perform well on various cognitive tests (i.e., test performance is generally positive).

(4) Therefore, since “g” would explain this positive test performance, we conclude that “g” exists.
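Laid out formally, steps (1) through (4) instantiate affirming the consequent (my schematization of the steps above):

\[
g \to P,\qquad P\ \ \therefore\ g
\]

where \(g\) = “a general intelligence factor exists” and \(P\) = “people perform well across cognitive tests.” From \(g \to P\) and \(P\), \(g\) does not follow; and since any pattern of positive correlations gets counted as confirmation, nothing is allowed to disconfirm the posit, hence the unfalsifiability.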

Nonetheless, Jensen’s g is an unfalsifiable tautology—it’s circular. These are the “best” conceptions of intelligence the IQ-ists have, and they’re either self-contradictory nonsense (Gottfredson’s), already falsified (Spearman’s) or an unfalsifiable circular tautology (Jensen’s). What makes Spearman’s g even more nonsensical is that he posited g as a mental energy (Jensen, 1999), and more recently it has been proposed that this mental energy can be found in mitochondria (Geary, 2018, 2019, 2020, 2021). Though I have also shown how this is nonsense.
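On the PC1 point specifically: a dominant first principal component falls out of any battery of positively correlated variables, whatever produced the correlations. Here is a toy sketch (my illustration with arbitrary numbers, in the spirit of Thomson’s sampling model): the “test scores” are built from overlapping pools of independent noise, with no general factor anywhere in the construction, and a large PC1 still appears.

```python
import numpy as np

# Toy sketch: a "g-like" PC1 from data built with NO general factor.
# Each "test" sums a random subset of independent "bonds" (Thomson-style),
# so tests correlate positively only because their subsets overlap.
rng = np.random.default_rng(0)

n_people, n_bonds, n_tests, bonds_per_test = 2000, 100, 10, 40
bonds = rng.normal(size=(n_people, n_bonds))       # independent noise, no g

scores = np.column_stack([
    bonds[:, rng.choice(n_bonds, size=bonds_per_test, replace=False)].sum(axis=1)
    for _ in range(n_tests)
])

corr = np.corrcoef(scores, rowvar=False)           # a positive manifold appears
eigvals = np.linalg.eigvalsh(corr)[::-1]           # eigenvalues, largest first
print(f"PC1 share of variance: {eigvals[0] / n_tests:.2f}")
# A large first component emerges even though, by construction, there is
# no single underlying "general ability": PC1 summarizes the correlations;
# it does not certify a common cause.
```

The moral is only that extracting a big first component from positively correlated tests cannot, by itself, license the inference to a real underlying entity, which is exactly the circularity just described.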

Conclusion

In this article, I have conceptualized intelligence as a socially embedded and culturally-influenced cognitive capacity characterized by intentionality. It is a dynamic trait which encompasses diverse abilities and is continually shaped by an individual’s cultural and social context and social interactions. I explained Vygotsky’s theory and also explained how his three main concepts relate to the definition I have provided. I then discussed Richardson’s view of intelligence (which is also Vygotskian), and showed how IQ tests are merely an index of one’s distance from the cultural tools that are embedded on the IQ test.

In discussing my conception of intelligence, I then contrasted it with the two “best”, most oft-cited conceptions of “intelligence” in the psychological/psychometric literature (Gottfredson’s and Spearman’s/Jensen’s) and showed how they fail. My conception of intelligence isn’t reductionist like the IQ-ists’ (they try to reduce intelligence/IQ to genes or physiology or brain structure); it is inherently holistic in recognizing how intelligence develops over the course of the lifespan, from birth to death. My definition recognizes intelligence as a dynamic, changing trait that’s not fixed like the hereditarians claim it is, and in my conception there is no use for IQ tests. At best, IQ tests merely show what kind of knowledge and experiences one was exposed to in life, via the cultural tools inherent in the test. So my inherently Vygotskian view shows how intelligence can be conceptualized and then developed over the course of the human lifespan.

Intelligence, as I have conceived of it, is a dynamic and constantly-developing trait which evolves through our experiences, cultural backgrounds, and how we interact with the world. It is a multifaceted, context-sensitive capacity. Note that I am not claiming that this is measurable; it cannot be reduced to a single quantifiable measure. And since intentionality is inherent in the definition, this further underscores how it resists quantification and measurability.

In sum, the discussions here show that the IQ-ist concept is lacking—it’s empty. We should instead understand intelligence as an irreducible, socially and culturally-influenced, dynamic and constantly-developing trait, which is completely at odds with the hereditarian conception. Thus, I have argued for intelligence without IQ, since IQ “theory” is empty and doesn’t do what its proponents claim it does (Nash, 1990). I have been arguing for the massive limitations of IQ for years, and my definition here presents a multidimensional view, highlights cultural and contextual influence, and emphasizes intelligence’s dynamic nature. The same cannot be said for reductionist hereditarian conceptions.