NotPoliticallyCorrect


Monthly Archives: March 2023

The AR Gene, Aggression and Prostate Cancer: Yet Another Hereditarian Reduction-to-Biology Fails

2100 words

Due to the outright failure to link testosterone to racial genetic differences, prostate cancer (PCa), and aggression, those who would still claim genetic or biological causation for race differences in PCa incidence and aggression/crime needed to search other avenues for their long-awaited mechanism for the proposed relationships. This is where the AR (androgen receptor) gene comes in. The AR gene codes for the androgen receptor, which is where androgens “dock”, if you will, allowing the physiological system to use them to carry out what it needs to. In this article, I will discuss the AR gene, CAG repeats, aggression, PCa, a just-so story, and finally what may explain the differences in PCa acquisition and aggression between races without appealing to genes.

Hereditarianism is concerned with the supposed biological/genetic transmission of socially desired traits and the genetic causation of socially undesired traits. So, with the fall of the testosterone-causes-aggressive-behavior paradigm, surely some other biological mechanism could explain why blacks have higher rates of aggression, crime and PCa along with testosterone? Surely, if testosterone isn’t driving the relationship, it would somehow be implicated in it, through some other mechanism in some other kind of way? This is where the androgen receptor (AR) gene comes into play. Since CAG repeat length is assumed to be related to androgen receptor sensitivity, and since one report states that lower CAG repeats are associated with aggression (Simmons and Roney, 2011), this is where the hereditarian looks next for the proposed relationships between testosterone, aggression and PCa. Simmons and Roney also, similarly to Rushton’s r/K, claim that “shorter AR-CAG repeats would have been beneficial for males inhabiting tropical regions because this genetic trait would have encouraged an androgenic response, reportedly along with higher testosterone levels” (Oubre, 2020: 293). Just-so stories all the way down.

Racial differences in AR gene and aggression/PCa

Claims that the AR polymorphism followed racial lines and was correlated with PCa incidence began to appear in the late 1990s (eg, Giovannucci et al, 1997; Pettaway, 1999). It has been shown that a lower number of CAG repeats on the AR gene is related to heightened androgen receptor activity, and that blacks are more likely to have fewer CAG repeats on the AR gene (Sartor et al, 1997; Platz et al, 2000; Bennett et al, 2002; Gilligan et al, 2004; Ackerman et al, 2013). (But also see Gilligan et al (2004), Lange et al (2008), and Sun and Lee (2013) for contrary evidence to these claims.) African populations have shorter CAG repeats on the AR gene than non-African populations (Samtal et al, 2022), and since carriers of short CAG repeats had a higher incidence of PCa (Weng et al, 2017; Qin et al, 2022), this would be the next-best spot to look after the testosterone/PCa/aggression hypothesis failed so spectacularly. And since “Androgen receptor (AR) mediates the peripheral effects of testosterone” (Tirabassi et al, 2015), this has become the new haven for the hereditarian looking for a relationship between aggression and biology.
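
To make “CAG repeat length” concrete, here is a minimal sketch in Python of what counting the repeat amounts to (my own illustration; the sequence fragment is made up, and real genotyping sizes the repeat region with flanking PCR primers rather than scanning raw sequence with a regex):

```python
import re

def longest_cag_run(seq: str) -> int:
    """Length, in repeat units, of the longest uninterrupted run of CAG triplets."""
    runs = re.findall(r"(?:CAG)+", seq.upper())
    return max((len(run) // 3 for run in runs), default=0)

# Made-up stand-in for part of AR exon 1 carrying 19 CAG repeats (illustrative only).
fragment = "GCTGCT" + "CAG" * 19 + "CCTCCT"
print(longest_cag_run(fragment))  # 19
```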

Fewer CAG repeats have been linked to self-reported aggression (Mettman et al, 2014; Butovskaya et al, 2015; Fernandez-Castillo and Cormand, 2016). Though, unfortunately for hereditarian theorists, they need to look elsewhere yet again, since the number of CAG repeats wasn’t related to aggressive behavior in men or in women (Valenzuela et al, 2022). Vermeer (2010) found no relationship between CAG repeats and adolescent risk-taking, depression, dominance, or self-esteem. These findings are contrary to other claims, such as this from Geniole et al (2019): “Testosterone thus appears to promote human aggression through an AR-related mechanism“. Rajender et al (2008) showed that rapists and murderers had fewer CAG repeats than controls (18.44, 17.59, and 21.19 repeats, respectively). This is significant due to what was referenced above about testosterone modulating human aggression through an androgen receptor mechanism. Butovskaya et al (2012) also found no relationship between the AR gene and any of the aggression subscales they used.

Since shorter CAG repeats on the AR gene were also related to the severity of PCa (Giovannucci et al, 1997), what explains blacks’ roughly 2-times-higher incidence of PCa compared to whites and 3-to-4-times-higher incidence compared to Asians (Hinata and Fujisawa, 2022; Yamoah et al, 2022) should then be related to the AR gene and CAG repeats. (Though shorter CGN repeats don’t increase PCa risk in whites and blacks; Li et al, 2017.) However, when blacks and whites had similar preventative care, differences almost entirely vanished (Dess et al, 2019; Yamoah et al, 2022). Lewis and Cropp (2020) have a good review of PCa incidence in blacks. Thus, external—not internal—factors influenced mortality rates, and even though there may be some biological factors that influence either the incidence of PCa or survival once it metastasizes, that doesn’t preclude the possibility that inequities in healthcare cause this relationship (Reddick, 2018). But how can we explain this in an evolutionary context, either recently or in the deep past? Don’t worry, the just-so storytellers have us covered.

Just-so stories and androgen receptors

Urological surgeon William Aiken (2011), publishing in the prestigious journal Medical Hypotheses, “speculated” that slaves that survived the Middle Passage were more sensitive to androgens, which would then have protected them from the conditions they found themselves in on the slave ships during the Passage. This, he surmised, is why African descendants are so disproportionately represented in sprinting records and why, then, blacks have a higher incidence of PCa than whites. Aiken (2011: 1122) explains the “reasoning” behind his hypothesis:

This hypothesis emerged from an exploration of the possible interplay between historical events and biological mechanisms resulting in the similarity in the disproportionate racial and geographic distributions in seemingly unrelated phenomena such as sprinting ability and prostate cancer. The hypothesis is equally a synthesis of the interpretations of observations of a disparate nature such as the high incidence and mortality rates of prostate cancer amongst men of African descent in the Americas while West Africans residing in urban West African centre’s have a lower prostate cancer incidence and mortality [2], the 3-fold greater prostate cancer incidence in Afro-Trinidadians compared to Indo Asian-Trinidadians despite exposure to largely similar environmental conditions [5], the improvement in athletic sprinting performance observed when athletes take anabolic steroids [3], the observation that both sprinting ability and prostate cancer are related to specific hand patterns which in turn are related to antenatal exposure to high testosterone levels [6,7], the observation that prostate cancer is androgen-dependent and undergoes involution when testosterone is inhibited or withdrawn [4], the observation that West Africans born in West Africa are under-represented amongst the elite sprinters [1] despite their relatively large populations and despite West Africa being the region of origin of the ancestors of today’s elite sprinters and finally the observation that prostate cancer is related to androgen receptor responsiveness which in turn is related to its CAG-repeat length [8].

One of Aiken’s predictions is that black Americans and Caribbean blacks should have shorter CAG repeats than the populations they descend from in Africa. Unfortunately for him, West Africans seem to have shorter CAG repeats than descendants of the Middle Passage (Kittles et al, 2001). Not least, neither of the two predictions he proposed to explain the relationship is risky or novel. By risky prediction I mean a prediction that would disprove the overarching hypothesis should the relationship not hold under scrutiny. By novel fact I mean a predicted fact that’s not used in the construction of the hypothesis. Quite clearly, Aiken’s hypothesis doesn’t meet these criteria, and so it is a just-so story.

But such fantastical, selection-type stories have appeared in the media relatively recently. Take Oprah’s and Dr. Oz’s assertions that blacks who survived the Middle Passage did so in virtue of their ability to retain salt during the voyage, which then, today, leads to higher incidences of hypertension. This is known as the slavery hypertension hypothesis (Lujan and DiCarlo, 2018) and is, of course, also a just-so story. Just like the just-so story cited above, the Africans who took the voyage across the sea supposedly had some kind of advantage which explained why they survived and, consequently, explains relationships between maladies in their descendants. These types of stories—no matter how well-crafted—are nothing more than stories that explain what they purport to explain, with no novel evidence that would raise the probability of the hypothesis being true.

Aggression

Aggression is related to crime, in that it is surmised that more aggressive individuals commit more crimes. I’ve noted the failure of hereditarian explanations over the years, so what do I think best explains the relationship between aggression and, ultimately, criminal activity? Well, since crime is an action, it is irreducible. I would propose a kind of situationism to explain this.

Situational action theory (SAT) (eg Wikstrom, 2010, 2019) is a cousin of situationism and a kind of moral action theory: it places the agent in situations (environments) which then make criminal action a course to take. “The core principle of SAT is that crime is ultimately the outcome of certain ‘kinds of people’ being exposed to certain ‘kinds of situations’” (Messner, 2012). Black Americans provide a good example. Mazur’s (2016) honor culture hypothesis states that blacks who are constantly vigilant for threats to their status and self have higher levels of testosterone, in virtue of the fact that aggression increases testosterone.

So this would then be an example of the kind of relationship that SAT would look for. So SAT and the honor culture hypothesis are interactionist in that they recognize the interaction between the agent and the environment (situations) the agent finds themselves in. Violence is merely situational action (Wikstrom and Treiber, 2009), so to explain a violent crime, we need to know the status of the agent and the environment that the crime occurred in, along with the victim and motivating factors for the action in question. The fact of the matter is, actions are irreducible and what is irreducible isn’t physical, so physical (biological) explanations won’t work here. Further, the longer that people stay in criminogenic environments, the more likely they are to commit crime, due to the situations they find themselves in. Thus, a kind of analytic criminology should be employed to discover how and why crimes occur (Wikstrom and Kroneberg, 2022). Considerations in biology should not be looked at when talking about actions and their causes.

Prostate cancer

I have discussed this in the past: what best explains the differences in PCa incidence between races is diet. For instance, blacks have lower levels of vitamin D than other races (Gutierrez et al, 2022; Thamattoor, 2021). People with lower levels of vitamin D are more likely to acquire PCa, and those with the lowest levels of vitamin D were more likely to have aggressive PCa (Xie et al, 2017). Since consuming high IUs of vitamin D seems to stave off PCa (Khan and Parton, 2004; Nair-Shalliker et al, 2021), since there seems to be a dose-response relationship between vitamin D consumption and PCa mortality (Song et al, 2018), and since vitamin D seems to reverse low-grade PCa (Samson, 2015), it stands to reason that the higher incidence of PCa in blacks in comparison to whites is due to socio-environmental dietary factors. We don’t need any assumed biological/genetic factors to explain the relationship when we know the etiology of PCa.

Conclusion

The old 1980s and 1990s hereditarian explanations of the etiology of PCa and aggression, with their link to race and testosterone, failed, so researchers had to look to other avenues to find a “biological etiology” for the relationships. They then pivoted to the AR gene and CAG repeats to explain the relationship between PCa and testosterone when the original testosterone-causes-PCa-and-aggression claim was refuted (Tricker et al, 1996; Book, Starzyk, and Quinsey, 2001; O’Connor et al, 2002; Stattin et al, 2004; Archer, Graham-Kevan, and Davies, 2005; Book and Quinsey, 2005; Michaud, Billups, and Partin, 2015; Boyle et al, 2016).

But as can be seen, the explanations proposed in order to continue pushing biological/genetic theories of PCa and aggression linked with testosterone and race also fail. Rushton’s r/K theory, for instance, implicated testosterone as a “master switch” (Rushton, 1999). Attempted reductions to biology were also seen in (the now-retracted) Rushton and Templer (2012) (see responses here and here). Reductions to biology quite clearly fail, but that doesn’t deter the hereditarian from pushing the racist theory that genes and biology explain the poor outcomes of blacks.


Vygotsky’s Socio-Historical Theory of Learning and Development, Knowledge, Social Class, and IQ

4050 words

Three of the main concepts that Soviet psychologist Lev Vygotsky is known for are cultural and psychological tools, private speech, and the zone of proximal development (ZPD). The ZPD is the distance between what a learner can do with and without help—the gap between actual and potential development. Vygotsky’s socio-historical theory of learning and development states that human development and learning take place in specific social and cultural contexts. When one thinks about how knowledge acquisition occurs, quite obviously, one can surmise that knowledge acquisition (learning) and human development take place in specific cultural and social contexts, and so knowledge is culture-dependent (Richardson, 2002).

In this article, I will discuss the intersection of culture and Vygotsky’s concepts of private speech, cultural and psychological tools, and the zone of proximal development along with how these relate to IQ. Basically, the argument will be that what one is exposed to in childhood and during development will dictate how one performs on a test, and that the ZPD predicts school performance better than “IQ.”

What is culture and where does it come from?

This question is asked a lot by “HBDers” and I think it is a loaded question. It is loaded because they are fishing for a specific kind of answer—they want you to answer that culture derives from a people’s genetic constitution. This, though, fails. It fails because of how culture is conceptualized. Culture is simply what is socially transmitted by groups of people. It is physically visible (public), though the meaning of each cultural thing is invisible—private to the people who espouse the culture in question.

The basic sources of culture are values, beliefs, and norms. Cultures lay down strict norms of what is OK and what isn’t, for example the foods people eat, along with the beliefs and attitudes shared by the social group. So a basic definition of culture would be: beliefs and ways of life that a social group shares—a human activity which is socially transmitted. Knowing this, we can see how learning, and in some ways development, can be culturally loaded. Since a culture dictates not only what is learned but also how to think in that culture, we can begin to see how different cultures lead people to think in different ways, and with that how different cultures lead to differences not only in knowledge but also in the acquisition of that knowledge.

UNESCO defines culture as “the set of distinctive spiritual, material, intellectual and emotional features of society or a social group, that encompasses, not only art and literature but lifestyles, ways of living together, value systems, traditions and beliefs” (UNESCO, 2001). (What is Culture?)

the term “culture” can refer to the set of norms, practices and values that characterize minority and majority groups (Stanford Encyclopedia of Philosophy, Culture)

Material culture consists of tangible objects that people create: tools, toys, buildings, furniture, images, and even print and digital media—a seemingly endless list of items. … Non-material culture includes such things as: beliefs, values, norms, customs, traditions, and rituals (Culture as Thought and Action)

Since society consists of individuals who form a group living in a certain region, it stands to reason that learning and human development are due to the kinds of cultural and social interactions between the individuals who make up a certain society and therefore a culture. The types of things that allow me to survive, learn, and grow in one culture won’t allow me to survive, learn, and grow to the same degree in another culture.

Now that I’ve touched on what culture is, where does it come from? Why are there different cultures? Quite simply, cultures are different because people are different; and although different cultures are composed of individuals, those individuals together comprise a group. These groups of people live in different environments/ecologies (physical environments), and considerations of these ecologies lead not only a group to construct a society that is necessarily in tune with its environment, but also to “mental environments” among the people that comprise the group in question. So we can say that culture comes from the way that groups of people live their lives.

If we think about culture as thought and action, then we can begin to get at what culture really is. Values and beliefs influence our thought, attitudes, and behavior. “Culture influences action … by … shaping a repertoire or ‘toolkit’ of habits, skills, and styles from which people construct ‘strategies of action’” (Swidler, 1986). Action is distinct from behavior in that action is future- or goal-directed, whereas behavior is due to antecedent conditions. That is, actions are done for reasons, to actualize a goal of the agent performing the action. Crudely, culture can then be said to be what a group of people does. Culture is “human-created environment, artifacts, and practices” (Vasileva and Balyasnikova, 2019).

How culture comes into play in Vygotsky’s socio-historical theory of learning and development is now clear—the ways that people interact with others in a specific culture dictate the knowledge that they acquire, which then shapes their mental abilities. This is a purely developmental theory. The socio-historical theory makes three claims: social interaction plays a role in learning, knowledge acquisition, and development; language is an essential cultural/psychological tool in learning; and learning occurs within the zone of proximal development (ZPD). Now that I have shown how I will be using the term “culture”, its relevance to Vygotsky’s theory of human learning and development is clear. I will now discuss cultural and psychological tools and then turn to the three aforementioned tenets that make up the theory.

Psychological and cultural tools

Psychological tools are symbols, signs, texts, and language, to name a few. They are internally oriented, but in their external appearance they take the aforementioned forms. Language and mathematics are two kinds of psychological tools, though we can also rightly say that language, at least, is a cultural tool as well.

Cultural tools are tools specific to a culture which allow an individual to navigate that culture. Cultural tools don’t determine thinking but they do constrain it, since they carry “information about the expected or appropriate actions in relation to a particular performance in a community. This is indirectly social in that it is not interpersonal, though it nevertheless stems from the social context” (Gauvain, 2001: 129). Language can be seen as both a cultural and a psychological tool; humans are born into culturally- and linguistically-mediated environments, and so they are immersed in culture from the day they are born (Vasileva and Balyasnikova, 2019).

Cultural tools include historically evolved patterns of co-action; the informal and institutionalized rules and procedures governing them; the shared conceptual representations underlying them; styles of speech and other forms of communication; administrative, management and accounting tools; specific hardware and technological tools; as well as ideologies, belief systems, social values, and so on (Vygotsky, 1988). (Richardson, 2002: 288)

Robbins (2005: 146) writes:

Another important concept within sociocultural theory, which we can highlight through Rogoff’s (1995, 1998) contextual or community focus of analysis, is the use of cultural tools (both material and psychological) in the development of understanding. As Lemke (2001) points out, we grow and live within a range of different contexts, and our lives within these communities and institutions give us tools for making sense of, and to, those around us. Vygotsky described psychological tools as those that can be used to direct the mind and behaviour, while technical tools are used to bring about changes in other objects (Daniels, 2001). Commonly cited examples of cultural tools include language, different kinds of numbering and counting, writing schemes, mnemonic technical aids, algebraic symbol systems, art works, diagrams, maps, drawings, and all sorts of signs (John-Steiner & Mahn, 1996; Stetsenko, 1999).

So cultural tools, then, become “internalized in individuals as the dominant ‘psychological tools’” (Richardson, 2002: 288).

Social interaction plays a role in learning

This seems quite intuitive. As a human develops, they begin to take cues from their overall environment and from those rearing them. They are immersed in a specific culture immediately from birth. They then begin to internalize certain aspects of the environment around them, along with the specific cultural and psychological tools inherent to that culture.

Tomasello (2019: 13) states that his theory is that “uniquely human forms of cognition and sociality emerge in human ontogeny through, and only through, species-unique forms of sociocultural activity” and so it is not only Vygotskian, but neo-Vygotskian. So children are in effect scaffolded by the culture they are immersed in, which is how “more knowledgeable others” (MKOs) affect the learning trajectory of the child. An MKO is an individual who has a better understanding of, or a higher ability than, the learner. So MKOs aren’t merely for teaching children; they are strewn throughout the world teaching less knowledgeable others (LKOs). MKOs guide individuals in their ZPD: since the MKO has access to knowledge that the LKO lacks, the MKO can guide the LKO in their learning and provide the instruction the LKO needs to perform a certain task. Learning to play baseball, ride a bike, or lift weights are but a few ways that MKOs guide the development and task-acquisition of children—these are perfect examples of the concept of “scaffolding.”

Although Vygotsky never used the term “scaffolding”, it’s a direct implication of his socio-historical theory of learning and development. The concept of scaffolding has been argued to be related to the ZPD, but see Shabani, Khatib, and Ebadi (2010) and Xi and Lantolf (2021) for criticism of this relationship. However, it has been experimentally shown that the concept of scaffolding along with the ZPD can be used to extend a student’s ZPD for critical thinking (Wass, Harland, and Mercer, 2011). That is, the students can better reach their potential and therefore become independent learners.

What this means is that culture is significant in learning, language is necessary for culture, and people learn from others in their communities. Interacting with other people while developing, and even after, is how humans develop. Since we are a social species, it stands to reason that concepts like MKOs and the significance of the cultural context in the acquisition of certain skills and learning play a significant role in the development of all children and even adults. Each stage of a child’s development builds upon a previous stage, and so play can also be seen as a form of learning—a form of sociocultural learning. Imaginative play, then, allows the self-regulation of children and also challenges them just enough in their ZPD.

Private speech

“Private speech” is when a child talks to themselves while performing a task (Alderson-Day and Fernyhough, 2015). It is one’s “inner speech”, one’s own “voice” in one’s head. It is the act of talking to oneself while performing a task, and it is ubiquitous around the world, implying that it is a hallmark of human cognizing (Vissers, Tomas, and Law, 2020). This is basically the “voice” you hear in your head as you live your daily life. It is, of course, a natural consequence of thinking and talking. Speech acts are a natural outgrowth of acts of thought, as Vygotsky argued, which is similar to Davidson’s (1982) argument against the possibility of animal mentality, since for organisms to be thinking and rational they must be able to express numerous thoughts and interpret the speech of others. This kind of speech, furthermore, has been shown to be related to working memory and cognitive reflexivity (Skipper, 2022).

The zone of proximal development

The ZPD is the distance between what a learner can do without help and what they can do with it. Vygotsky originally developed it to oppose the concept of “IQ” (Neugeurela, Garcia, and Buescher, 2015; Kazemi, Bagheri, and Rassei, 2020; Offori-Attah, 2021). This is perhaps the most-used and most-discussed concept that Vygotsky forwarded. Central to this concept, which is part of Vygotsky’s overall theory of child development, is imitation. Imitation is a goal-directed activity, and so it is an action. There is intention behind the imitation, because the imitator is copying what the MKO is doing. But Vygotsky was using “imitation” in a way that it is not normally used: to imitate, one has to be able to carry out the imitation of what they see the MKO doing. So Vygotsky’s concept of the ZPD is that a child can learn something they don’t yet know how to do by imitating an MKO, with the MKO guiding them through to complete the task. It has been argued that the ZPD can improve a learner’s thinking ability, along with making learning more relevant and efficient for the learner, since it gives the learner the ability to learn from instruction with an MKO guiding them to complete a task, which then becomes internalized (Abdurrahman, Abdullah, and Osman, 2019).

So the ZPD indicates what a child can do independently; they are then given harder, guided problems which they imitate and further internalize. MKOs are able to recognize where a child is in their development and can then help them complete harder tasks. The ZPD is related to learning not only in school but also in play (Hakkarainen and Bredikyte, 2008). For instance, the Strong Museum of Play states that “Learners develop concepts and skills through meaningful play. Play supports physical, emotional, cognitive, and social development.” Children definitely learn from play, and this interactive kind of learning also helps them better understand their bodies, since play is in part a physical activity (a guided, goal-directed, intentional one). Play is developmentally beneficial (Eberle, 2014; UNICEF, 2018), and it is related to the ZPD since a child can learn to do something from a peer or coach who knows how to do the action they want to learn, and then internalize it. An individual who is playing is an active participant in their own learning. Play, in effect, creates the ZPD (Hakkarainen and Bredikyte, 2014). Though Vygotsky’s conception of “play” is different from how the word is used in common parlance. Play

is limited to the dramatic or make-believe play of preschoolers. Vygotsky’s play theory therefore differs from other play theories, which also include object-oriented exploration, constructional play, and games with rules. Real play activities, according to Vygotsky, include the following components: (a) creating an imaginary situation, (b) taking on and acting out roles, and (c) following a set of rules determined by specific roles (Bodrova & Leong, 2007). (Scharer, 2017: 63)

Further, “symbolic play may scaffold development because it facilitates infants’ communicative success by promoting them to ‘co-constructors of meaning’” (Creaghe and Kidd, 2022). “Play creates a zone of proximal development of the child. In play a child always behaves beyond his average age, above his daily behavior; in play it is as though he were a head taller than himself” (Vygotsky, 1978, 102 quoted in Gray and Feldman, 2004: 113).

The scaffolding occurs due to the relationship between play, the ZPD, and what an individual internalizes, which then becomes embedded in their muscle memory. This is where MKOs come into play. When one is first learning to work out, they may seek out a personal trainer knowledgeable in the mechanics of the human body to learn how to lift weights. Through instruction, they begin to learn and then internalize the movements in their heads, until, after successive attempts at a certain motion, they can simply perform the lift well. Or take baseball. Baseball coaches would be the MKOs; they teach children to play baseball, and the children learn how to hit pitches, catch balls, throw, and be part of a team. Through the action of play, then, one can reach their ZPD and even extend it.

ZPD and IQ

Vygotsky showed that whether one has a large or small ZPD better “predicts” performance than does “IQ”, and he also noted that those who scored higher on IQ tests “did so at the cost of their zone of proximal development“, since they exhaust their ZPD earlier, leaving a smaller ZPD.

Vygotsky reported that not only did the size of the children’s ZPD turn out to correlate well with their success in school (large ZPD children were more successful than small ZPD children) but that ZPD size was actually a better predictor of school performance than IQ. (Poehner, 2008: 35; cf Smirni and Smirni, 2022)

It has even been experimentally demonstrated that children with high IQs have a smaller ZPD, while children with low IQs have a larger ZPD (Kusmaryono and Kusmaningsih, 2021). It has also been shown that those who received ZPD scaffolding instruction improved more and even outperformed the other group on subsequent IQ tests (Stanford-Binet and Mensa) after a first test was administered (Ghelot, 2021). Responsiveness to remediation, not “IQ”, was the better predictor of school performance (Amini, Hassaskhah, and Sibet, 2017), and the degree of responsiveness wasn’t related to high or low IQ, since some learners had high responsiveness and a low score while others had a high score but low responsiveness (Poehner, 2017: 156). As for those who took a test in one year and did not get better in subsequent years, Vygotsky argued, this merely meant that they were not pushed outside of what they already knew. So children with a large ZPD were more likely to be successful irrespective of IQ, while children with a small ZPD were less likely to be successful, irrespective of IQ. Though the concepts of ZPD and IQ have been seen as related rather than contradictory (Modarresi and Jeddy, 2021), “IQ” isn’t a measure of learning ability: it merely shows what one has learned and so has been exposed to, while the ZPD shows how one would do in the future, given how large their ZPD is. It shows not only where someone has reached, but also where they can reach. Thus, instead of the (undeserved) emphasis on IQ, we should put the ZPD in its place, since it is a dynamic (relational) assessment and not a standardized test (Din, 2017).

What’s class got to do with it?

Since children acquire knowledge and beliefs based on their class background (what they are exposed to in their daily lives as they grow), it follows that children will be differentially prepared for taking certain kinds of tests. So if the content on a test is biased toward one group, then it is biased against another group. It is biased against that group because its members are not exposed to the relevant material and kinds of thinking needed to perform on the test in a sufficient manner. Knowing what we now know about the acquisition of cultural and psychological tools, we can state that “high IQ may simply be an accident of immersion in middle-class cultural tools (aspects of literacy, numeracy, cultural knowledge, and so on) … the environment is made up of socially structured devices and cultural tools, and in which development consists of the acquisition of such cultural tools” (Richardson, 1998: 163-164). It is due to these considerations that culture-fair IQ tests are an impossibility: people are encompassed by different cultures (which amount to learning environments where they acquire knowledge and cultural and psychological tools), and abilities are themselves cultural devices—culture-free tests are therefore an illusion (Cole, 2002; Richardson, 2002).

So if there are different cultural groups, then they by definition have different cultures. If they have different cultures, then they have different experiences (of course), and so they acquire different kinds of knowledge and, along with it, different cultural and psychological tools. We can then rightly state that different cultural groups will be differentially prepared for doing certain tasks. And so, if one culture is more dominant and one culture’s way of thinking is more prevalent, then people will be prepared for a given test at different stages of being able to perform its tasks or answer its questions. Social status, also, isn’t merely related to material things; it also influences how we think and act (Richardson and Jones, 2019), and so emotional and motivational—affective—factors would also play a role in one’s test score, since tests are constructed from a narrow range of test items, chosen to get the results the test constructors assumed a priori. And since one’s class is related to affective factors, and since IQ tests reflect mere class-specific items, it follows that the “affective state is one of the most important aspects of learning” (Shelton-Strong and Maynard, 2018). It is by using the concepts of cultural and psychological tools (which are acquired in social relations) that we can rightly state that IQ tests are best looked at as mere class surrogates.

Conclusion

Basically, “in order to understand the individual, one must first understand the social relations in which the individual exists” (Wertsch, 1985: 63). Vygotsky’s theory is one in which the mind is formed and constructed through social and cultural interactions with those who are already immersed in the culture that the individual’s mind is developing in. And so, by using the concepts of cultural and psychological tools, we can see how and why different classes are differentially prepared for taking tests, which is then reflected in the score outcomes. Since growing individuals learn what they are exposed to, and they learn from those who are already immersed in the culture at large, it follows that individuals learn culturally-specific forms of learning and thusly acquire different “tool sets” with which they then navigate the social world they are in. The concepts of private speech, cultural and psychological tools, MKOs, scaffolding, and the ZPD all coalesce into a theory of learning and development in which the learner is an active participant in their development, and so these things also combine to show how and why groups score differently on IQ tests.

Knowledge is the content of thought, and the ability to speak is how we convey thoughts to others and how we actualize the thoughts we have into action. Thus all higher human cognitive functioning is social in nature (van der Veer, 2009). Though it is wrongly claimed that IQ is shown to be a measure of learning potential, it is rightly said that the ZPD is social in nature (Khalid, 2015). IQ doesn’t show one’s learning potential, it merely shows what one was or was not exposed to in regard to the relevant test items (Lavin and Nakano, 2017). Culture is a fluid and dynamic experience (Rublik, 2017) in which one is engrossed in the culture they are born into, and so, by understanding this, we can then understand why different groups of people score differently on IQ tests, without the need for genes or biological processes.

There have been good criticisms of Vygotsky’s socio-historical theory of learning and development, though much of Vygotsky’s theorizing has led to predictions that do have some empirical support (Morin, 2012). One argument against the ZPD is that it doesn’t explain development or how it really occurs. But if you think about development from a Vygotskian perspective, we see that it is as much a cultural and social activity as it is mere individual learning. By learning from people more knowledgeable than themselves, learners are able to learn how to do something and, through repetition, become able to do it on their own without the MKO.

The fact of the matter is, IQ tests aren’t as good as either teacher assessment (Kaufman, 2019) or the ZPD in predicting where a learner will end up. It is for these reasons (and more) that we should stop using IQ tests and use the relational ZPD instead. (One can also look at the ZPD as related to considerations from relational developmental systems theory; Lerner, 2011, 2013; Lerner, Johnson, and Buckingham, 2015; Ettekal et al, 2017; Bell, 2019.) It is for these reasons that standardized tests should not be used anymore; we should use dynamic assessment instead. The empirical research on the issue bears out this claim.

Who Believes in an Afterlife in America? If Heaven Exists, Will There Be Races?

2500 words

(Note: I don’t believe in an afterlife and I’m not a theist.)

What do Americans think about the existence of an afterlife and what are the differences between races?

What do Americans think about the existence of an afterlife—of heaven and hell? Americans’ views are clear—more Americans believe in heaven than in hell, per Pew. But 26% of the respondents didn’t believe in either heaven or hell. Those who did not believe in heaven or hell but did believe in an afterlife were asked to describe their views:


Respondents who believe in neither heaven nor hell but do still believe in an afterlife were given the opportunity to describe their idea of this afterlife in the form of an open-ended question that asked: “In your own words, what do you think the afterlife is like?”

Within this group, about one-in-five people (21%) express belief in an afterlife where one’s spirit, consciousness or energy lives on after their physical body has passed away, or in a continued existence in an alternate dimension or reality. One respondent describes their view as “a resting place for our spirits and energy. I don’t think it’s like the traditional view of heaven but I’m also not sure that death is the end.” And another says, “I believe that life continues and after my current life is done, I will go on in some other form. It won’t be me, as in my traits and personality, but something of me will carry on.”

Blacks were slightly more likely than whites to believe in heaven, though a supermajority of both races believe in heaven, while considerably more blacks than whites believed in the existence of hell. Others professed less-widely-held views, like existing as a spirit, consciousness, or energy in the afterlife. Those who believe state that heaven is free from earthly matters such as suffering, while hell is the opposite—hell is nothing but eternal suffering, not due to any fire and brimstone, but because it is eternal separation from God. America is, to my surprise, still a very superstitious country when it comes to God and Satan and the existence of heaven and hell. People believe that their prayers can be answered and that interactions between the living and the dead are possible. Black Americans are more likely than white Americans to believe that their prayers can be directly answered (83 percent compared to 65 percent, respectively), while 67 percent of Americans overall think it’s possible. Black Americans are also more likely than white Americans to believe that revelations from a higher power are possible (85 percent and 66 percent, respectively), and black Americans are more likely to believe that they have experienced contact from a higher power compared to white Americans (53 percent compared to 25 percent, respectively).

Blacks are also slightly more likely than whites to believe in near-death experiences (79 percent compared to 73 percent, respectively). Thus, blacks are more superstitious than whites. The Pew poll also tracks with other studies—black Americans and Caribbean blacks were more likely to be religious than whites (Joseph et al, 1996; Franzini et al, 2005; Taylor, Chatters, and Jackson, 2007; Chatters et al, 2009). Men in general are less religious than women, and black men are less religious than black women but more religious than white women. But although blacks are more likely than whites to believe in an afterlife and to be religious, there is an apparent shift away from religiosity in the black community (and Americans in general seem to be shifting away from being religious ever so slightly, though 81 percent of Americans are still believers); but blacks are still more likely to pray, say grace, and attend church than other racial groups.

Moreover, black men over age 50 who attend church had a 47 percent reduction in all-cause mortality compared to those who did not attend (Bruce et al, 2022), so there seems to be a protective effect of attending church services (Assari and Lankarani, 2018; Carter-Edwards et al, 2018; Majee et al, 2022). It has been found that blacks consistently report lower odds of having depression, and the answer is probably due to attending religious services (Reese et al, 2012). However, when it comes to church attendance, for white women the relationship between church attendance and body mass is either nonexistent or protective, while for black women consistent relations between church attendance and body mass have been shown (Godbolt et al, 2018). Given the fact that black women have been consistently more likely to be obese than white women since at least the late 80s and 90s (Gillum, 1987; Kumanyika, 1987; Allison et al, 1997) and today (Tilghman, 2003; Johnson et al, 2012; Agyemang and Powell-Wiley, 2014; Tucker et al, 2021), this finding is not surprising. But the effects of racism can not only explain the higher rates of obesity in black women (Cozier et al, 2014), they could also explain the higher rates of “weathering” of black women’s bodies (Geronimus et al, 2006).

Nevertheless, blacks are more likely to be religious and report religious experiences in comparison to whites, and blacks are also more likely to be religious in comparison to the general US population. Why may blacks be more religious than whites? This is a question I will try to answer in the future.

Is it possible for races to exist in heaven?

Some Christians claim that there will be racial/ethnic diversity in both heaven and hell. The article Will heaven be multicultural and have different races? claims that:

The ultimate answer to your question is found in Revelation 21-22 which describes the new heaven and earth. In Revelation 21:24 we are told that people from the various nations will be in heaven. That is, those who believe in Jesus Christ and follow Him will live there for eternity. But the culture of heaven will be God’s culture. Everything is new. Heaven and earth will be new. The old will have disappeared and the new will have come. Sin will be gone and racial prejudices and alliances will be gone.

While the article Will There Be Ethnic Diversity in Heaven? claims that “ethnic diversity seems to be maintained and apparent in Heaven, for eternity“, the article Racial Diversity in Hell claims that:

The difference between heaven and hell is that in heaven—that is, in the new heaven and new earth—there will be perfect racial and ethnic harmony, but in hell, racial and ethnic animosities will reach their fullest fury and last forever.

So what is RACE? In my view, race is a suite of physical characteristics demarcated by geographic ancestry, as argued by Hardimon and Spencer. So if race is physical, then if a thing isn’t physical—that is, if a thing is immaterial—there would be no way to identify which racial group it belonged to in life. If we take the afterlife to be a situation in which a person has died but then exists again as a disembodied soul/mind, then there can’t possibly be races in heaven, since what identified the person as part of a racial group (the physical) doesn’t exist anymore.

In the book The Myth of an Afterlife, Drange (2015: 329-330) articulates what he calls the nonidentification argument: it is inconceivable for a person to be identified if they are bodiless. And if race is a property of physical bodies, and disembodied souls by definition lack physical bodies, then there would be no way for the identities of bodiless people to be established; and if people’s identities cannot be established, then it follows that their racial identities cannot be established either.

  1. Bodiless people would have no sense organs and no body of any sort.
  2. Therefore, they could not feel anything by touch or see or hear anything (in the most common senses of “see” and “hear”).
  3. Thus, if they were to have any thoughts about who they are, then they would have no way to determine for sure that the thoughts are (genuine) memories, as opposed to mere figments of imagination.
  4. So, bodiless people would have no way to establish their own identities.
  5. Also, there would be no way for their identities to be established by anyone else.
  6. Hence, there would be no way whatever for the identities of bodiless people to be established.
  7. But for a person to be in an afterlife at all, it is conceptually necessary for his or her identity to be capable of being established.
  8. It follows that a totally disembodied personal afterlife is not conceivable.

Drange’s argument is against a certain conception of the afterlife, namely one where souls are disembodied: there would be no way to identify them, and so it follows that there would be no races in heaven, since race is a physical property of humans and their bodies. But there are different ways of looking at the possibility of races in heaven, depending on which theory of race one holds to.

Nathan Placencia (2021) argues that whether or not races exist in heaven depends on which philosophy of race you hold to, though he does make the positive claim that there may be racial identities in heaven. For racial constructivists, since race exists merely due to social conventions and racialization, race wouldn’t exist in heaven. For the racial skeptic, since race doesn’t exist as a biological category, races don’t exist at all. That is, since racial naturalism is false, races of any kind cannot exist, where racial naturalism is basically the hereditarian conception (or non-conception, if you will) of race (see Kaplan and Winther, 2015). Racial naturalists argue that race is grounded in genetically-mediated biological differences. I am of course sympathetic to the skeptics’ rejection of racial naturalism, though I hold that race is a social construct of a biological reality and I am a pluralist about race. The last conception that Placencia discusses is deflationary realism, on which race is genetically grounded but not itself normatively important (Hardimon, 2017). So Placencia claims that for the racial constructivists and skeptics, races won’t exist in heaven, while for the deflationary realist the “answer is maybe”, which then of course depends on what the resurrected heavenly bodies would look like.

Theists who believe in heaven state that believers will have new, physical bodies there. But Jesus wasn’t immediately recognizable to his followers, though they did come to know that it was actually him after spending time with him. So theists believe that we get new physical bodies in heaven but that we will look different than we did in our physical, earthly existence. Certain verses in Revelation (21:4, 22:4) talk about God wiping away tears and a name appearing on foreheads, which implies that there will be new, physical bodies in heaven. But the question now is: would heavenly bodies fall along racial lines as we currently understand them in this life? The question is obviously unanswerable, but certain texts in the Bible state that after his resurrection Jesus looked different than he did while he was alive on earth.

Baker-Hytch (2021: 182) argues that “the new creation is depicted as an everlasting reality whose human inhabitants from all nations will have resurrection bodies that—after the pattern of Jesus’ resurrection body—neither age nor die and that will partake in shared pleasures such as eating and drinking together.” So there is a trend in Christian and theistic thought that in heaven, we will all have new heavenly bodies and not exist as mere disembodied souls. But talk of new heavenly bodies faces an issue—if they are bodies in the sense that we think of bodies now, the bodies that we inhabit now, then would they grow old, decay and eventually die? Would God then give us new heavenly bodies? It would stand to reason that, if God is indeed all-powerful and all-knowing, then he would have thought these issues through and so heavenly bodies wouldn’t have the same properties as physical, earthly bodies and so they wouldn’t get older, die and eventually decay.

Conclusion

If the afterlife is completely disembodied, then it follows that race wouldn’t exist in the afterlife, since there would be no way for the identities of persons to be established, and thusly there would be no way for the race of the disembodied soul to be established. Most theists contend that we will have new, heavenly bodies in heaven, but whether or not they would look the same as the former earthly bodies is up in the air, since Jesus after his resurrection apparently looked different, since it states in the Bible that it took some time for Jesus’ followers to recognize him. So, if Heaven exists, will there be races? The concept RACE is a physical one. So if there are disembodied souls in heaven, and they have no physical bodies, then races won’t exist in heaven.

I obviously am a realist about race who holds to radical pluralism about racial kinds—there can be many concepts of race which are true and context-dependent. Though I do not believe in an afterlife, I do believe that if an afterlife is nothing but disembodied souls living in heaven with God, then it follows that there won’t be races in heaven, since there are no physical bodies on which to ground racial ontologies. On the other hand, if what most theists contend is true—that we get new heavenly bodies after our death and entrance into the afterlife—then whether or not race would exist in heaven is questionable, and it depends on which concept of RACE one holds to. If one is a constructivist or skeptic (AKA eliminativist or anti-realist) about race, then race wouldn’t exist in heaven, as race is due to social conventions and the racialization of groups as races. But if one is a deflationary realist about race (which I myself am), then the answer to the question of whether or not races would exist in heaven is maybe.

Nevertheless, whether or not one believes in the existence of an afterlife is slightly drawn on racial lines, with blacks being more likely to believe in an afterlife compared to whites, while they are also more likely than whites to believe that their prayers can be directly answered and that they can have contact with a higher power.

So depending on how races get squared away in heaven upon receiving new heavenly bodies, it is unknown whether or not races will exist in heaven.

The Argument from Causality and the Argument from Prediction for a Mind-Independent World

1200 words

How can we know that a mind-independent world exists outside of our senses if our senses are subjective? We have first-personal perspectives (FPP) and so, if our first-personal experience is subjective, how can we know that an objective world exists outside of our senses? Well, I have two arguments for the existence of a mind-independent world—what I call “the argument from prediction” and “the argument from causality.”

P1: If we can make consistent predictions about the world and perceive it, then there is a physical world independent of human minds.

P2: We can make consistent predictions about the world and perceive it.

C: So there is a physical world independent of human minds.


P1: If there is causality in the world, then there is a world independent of human minds.

P2: There is causality in the world.

C: Therefore there is a world independent of human minds.

In this article, I will justify each premise of both arguments.

For this article, I will be operating under this definition of mind-independence: X is mind-independent if the existence of X is not dependent on a thinking or perceiving thing. This is a form of metaphysical realism, where there are two theories:

a. that physical objects do not depend for their existence on being perceived or conceived by mind, and

b. that there are physical objects.
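
In logical shorthand (my own notation, just to fix ideas; MI, Mind, and DependsOn are labels I am introducing for this sketch), the working definition can be put as:

```latex
% Mind-independence, schematically: X is mind-independent iff X's
% existence does not depend on any thinking or perceiving thing m.
\[
\mathrm{MI}(x) \;\leftrightarrow\; \neg\,\exists m\,\big(\mathrm{Mind}(m) \wedge \mathrm{DependsOn}(x,\, m)\big)
\]
```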

The argument from prediction

Premise 1: Only if there were a world independent of our minds could we then make predictions about what occurs in the world. For if what we perceive wasn’t independent of our minds, then we wouldn’t be able to make predictions about the world (ones that turn out to be true, of course). A mind-independent thing is a thing that exists without a thing that thinks and perceives it; so it would exist without an external observer. If humans weren’t here anymore, and if all animals went extinct while the earth was still intact, then the world would still exist.

Premise 2: We constantly make predictions about scientific phenomena, and while some of those predictions are of course wrong, some are right. If some of them are right, then it follows that there is a world out there that’s independent of our minds. A great example is the prediction of Halley’s comet. Edmond Halley—after whom the comet is named—observed that 3 comets which appeared in 1531, 1607, and 1682 had similar orbits. He then reasoned that they were the same comet. So using these 3 data points, he stated: “Hence I dare venture to foretell, that it will return again in the year 1758.” Halley didn’t live to see his prediction come to fruition, but 16 years after his death—right on time—the comet appeared and verified his prediction. This is a solid example of the need for risky, novel predictions: predictions which could falsify the hypothesis in question but which, if the observation holds, are evidence for that hypothesis. It is, of course, because there is a mind-independent world that Halley’s prediction came to fruition, and this prediction best justifies the truth of P2.
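
To make the arithmetic behind Halley’s inference concrete, here is a toy sketch in Python (my own illustration; Halley’s actual calculation modeled the comet’s orbit and planetary perturbations, not a simple average of intervals):

```python
# Apparition years Halley worked from (given in the text above).
appearances = [1531, 1607, 1682]

# Intervals between successive apparitions.
intervals = [later - earlier for earlier, later in zip(appearances, appearances[1:])]
print(intervals)  # [76, 75] -- similar enough to suggest one recurring comet

# Naive extrapolation: add the mean interval to the last apparition.
mean_interval = sum(intervals) / len(intervals)  # 75.5 years
print(round(appearances[-1] + mean_interval))    # 1758, the predicted return
```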

Conclusion: Thus, because we can make predictions about what occurs in the world, we can say that there is a mind-independent world. The example of Halley’s comet best illustrates this. It is only because we can make consistent, successful predictions about what occurs in the world that we can conclude that a mind-independent world exists.

Quite obviously, scientific inquiry allows us to generate risky, novel predictions and therefore knowledge, and it allows for correct predictions about the future states of the world.

Similar to my argument from prediction is my argument from causality. Predictions are derived from possible causal effects. So if one has a hunch about a possible arrow of causation and makes a risky, novel prediction on its basis, then if the prediction comes to fruition, the causal inference may be valid. And since “causal explanations necessarily generate predictions”, the two arguments I’ve mounted are indeed related.

The argument from causality

Premise 1: P1 is simple—if there is causality in the world, then there is a mind-independent world. If there were no external observers in the world, there would still be chains of causation—say, the wind knocking a tree over, or waves crashing into cliffs and taking chunks of them down into the ocean. The world is a physical thing, and cause and effect relate physical things—the relations between physical things can be predicted, and we can use scientific experiments to show the causal relations between variables.

Premise 2: This premise is undoubtedly true. For example, if I take a rubber band, place it on my left thumb, pull it back with my right finger, and let it go, then it will become a projectile and fly toward the general area I aimed at. These considerations are of course those of scientific realism—and the SEP article on scientific realism states that “a general recipe for realism is widely shared: our best scientific theories give true or approximately true descriptions of observable and unobservable aspects of a mind-independent world.” And it is our scientific predictions, which use causation as the benchmark, that show that a mind-independent world indeed exists.

Conclusion: So we can then rightly state that due to causality existing in the world, that a mind-independent world does exist. If we did not observe causation, then we could say that there is no mind-independent world.

Conclusion

Reality is clearly mind-independent, based on these two arguments. If it weren’t, then we wouldn’t be able to make successful, novel scientific predictions, and there would be no causality in the world. The fact that we use the scientific method to generate novel predictions underwrites the claim that there is a mind-independent world. Physical objects exist, and since only the physical can be measured, we can construct scientific theories about phenomena and then generate predictions about what we think will occur should certain conditions hold.

The existence of a mind-independent world is put well by Lavazza (2016):

If there were no external reality independent of the knowing mind—a reality that can be investigated insofar as it is accessible by our senses and our tools, predictable in its change and mostly interpretable according to law-like regularities—scientific inquiry would be neither practicable nor would it give us knowledge. And in any case this knowledge would not be effective and practical in the sense of allowing for correct predictions about future states of the world.

Basically, both arguments can be reduced to:

P1: If we generate successful, novel, risky predictions using the scientific method, and if causal explanations necessarily generate predictions, then there is a mind-independent world.

P2: We generate successful, novel, risky predictions using the scientific method, via causal explanations which necessarily generate predictions.

C: Therefore, there is a mind-independent world.
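For those who want the skeleton laid bare, the combined argument is just modus ponens. Here is a minimal propositional sketch in Lean (the encoding and the names P and M are mine, purely for illustration):

-- P : we generate successful, risky predictions via causal explanations
-- M : there is a mind-independent world
-- Given P → M (premise 1) and P (premise 2), M follows by modus ponens.
example (P M : Prop) (h1 : P → M) (h2 : P) : M := h1 h2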

The three arguments I’ve given are valid, and I have argued for the truth of each premise. So it follows that a mind-independent world does indeed exist.

Directed Mutations, Epigenetics and Evolution

2400 words

A mutation can be said to be directed if it arises due to the needs of the developing organism, and such mutations occur at higher frequencies when they are beneficial (Foster, 2000; Saier et al, 2017). If there is some sort of stress, then an adaptive mutation would occur. The existence of this kind of mechanism has been debated in the literature, but its existence spells trouble for neo-Darwinian theory, whose proponents claim that mutations are random and then “selected-for” in virtue of their contributions to fitness. Indeed, the concept challenges a core tenet of neo-Darwinism (Sarkar, 1991). I will argue that directed mutation/non-random mutation/stress-directed adaptation (DM, directed mutation for short) spells trouble for the neo-Darwinian paradigm.

The issue at hand

The possibility of DMs was argued for by Cairns, Overbaugh, and Miller (1988), who argued that environmental pressure can cause adaptive changes to genes that would be beneficial to the organism. This spurred a long debate about whether such mutations were possible (see Sarkar, 1991; Fox Keller, 1992; Brisson, 2003; Jablonka and Lamb, 2014). Although Cairns, Overbaugh, and Miller were wrong—that is, they were not dealing with mutations that were due to the environmental disturbances they posed (Jablonka and Lamb, 2014: 84)—their paper did raise the possibility that some mutations could be a direct consequence of environmental disturbances, which would then be catapulted by the homeodynamic physiology of the organism.

Saier et al (2017) state the specific issue with DM and its existence:

Recently, strong support for directed mutation has emerged, not for point mutations as independently proposed by Cairns, Hall and their collaborators, but for transposon-mediated mutations (12, 13). If accepted by the scientific community, this concept could advance (or revise) our perception of evolution, allowing increased rates of mutational change in times of need. But this concept goes against the current dogma that states that mutations occur randomly, and only the beneficial ones are selected for (14, 15). The concept of directed mutation, if established, would require the reversal of a long accepted precept.

This is similar to the concept of phenotypic plasticity—the phenomenon of a given genotype expressing different phenotypes due to environmental factors. The concept is basically a physiological one. When talking about how plastic a phenotype is, its relation to the physiology of the organism is paramount. We know that physiological changes are homeodynamic. That is, changes in physiology are constantly happening due to the effects of the environment the organism finds itself in. For example, acute changes in heart rate occur due to what happens in the environment, say a predator chasing its prey. The heart rates of both predator and prey increase as blood flow increases due to stress hormones. I will discuss phenotypic plasticity on its own in the future, but for now I will just note that genetic and environmental factors influence the plasticity of phenotypes (Ledon-Rettig and Ragsdale, 2021) and that phenotypic plasticity and development play a role in evolution (West-Eberhard, 2003, 2005; Wund, 2015).

The fact of the matter is, phenotypic plasticity is directly related to the concept of directed mutation, since DM is a largely physiological concept. I will argue that this refutes a central Darwinian premise—namely, that mutations are random. If mutations can be directed, then they are not random; and if they are not random, then, owing to what occurs during the development of an organism, a directed mutation could be adaptive. This, then, is the answer to how phenotypic traits become fixed in the genome without the need for natural selection.

Directed mutations

Sueoka (1988) showed that basically all organisms are subject to directed mutations. It has been noted by mathematicians that, on a purely random mutational model, there would not be enough time to explain all of the phenotypic diversity we see today (Wright, 2000). Doubt is thereby placed on three principles of neo-Darwinism: that mutations occur independently of the environment the organism is in (this is empirically false); that mutations are due to replication errors (this is true, but not always the case); and that mutation rates are constant (Brisson, 2003).

One of the main claims of the neo-Darwinian paradigm is that mutations occur at random, and that a mutation is selected-for or against based on its relationship to fitness. Fodor’s argument has refuted the concept of natural selection, since “selection-for” is an intensional context and so can’t distinguish between correlated traits. However, we now know that physiology is sensitive to the environment, and that adaptive changes to physiology would occur not only in the mature organism but during its development; it then follows that directed mutations would occur, and so mutations wouldn’t be random, as neo-Darwinian dogma claims.

In her review Stress-directed adaptive mutations and evolution, Wright (2004) concludes:

In nature, where cell division must often be negligible as a result of multiple adverse conditions, beneficial mutations for evolution can arise in specific response to stressors that target related genes for derepression. Specific transcription of these genes then results in localized DNA secondary structures containing unpaired bases vulnerable to mutation. Many environmental stressors can also affect supercoiling and [stress-directed mutation] directly.

But what are the mechanisms of DMs? “Mechanism” in this sense would “refer to the circumstances affecting mutation rates” (Wright, 2000). She also defines what “random” means in neo-Darwinian parlance: “a mutation is random if it is unrelated to the metabolic function of the gene and if it occurs at a rate that is undirected by specific selective conditions of the environment.” Thus, the existence of DMs would refute this tenet of neo-Darwinism. Two of the mechanisms of such DMs are transcriptional activation and supercoiling. Transcriptional activation (TA) can cause changes to single-stranded DNA (ssDNA) and also to supercoiling (the addition of more coils onto DNA). TA can be caused either by derepression (the release of a gene from repression, as when a repressor molecule is absent) or by induction (the activation of an inactive gene, which then becomes transcribed). Thus, knowing this, “genetic derepression may be the only mechanism by which particular environmental conditions of stress target specific regions of the genome for higher mutation rates (hypermutation)” (Wright, 2000). Such responses must be quick, and this is due to the plastic phenotypes of the organism, which then allow such DMs to occur. It follows that stress-induced changes would allow organisms to survive in new environments without a need for neo-Darwinian “mechanisms”—mainly natural selection. Thus, the biochemical mechanism for such mutations is transcriptional activation. Such stress-directed mutation could be seen as “quasi-Lamarckian” (Koonin and Wolf, 2009).
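To make the contrast with the “random mutation” tenet concrete, here is a toy simulation in Python (my illustration only; the base rate and fold-increase are invented parameters, not empirical values). On the neo-Darwinian assumption, the per-locus mutation rate is constant; on stress-directed mutation, stress multiplies the rate at derepressed target loci:

import random

# Toy contrast (illustration only): constant vs. stress-modulated
# mutation rates. BASE_RATE and STRESS_FOLD are invented parameters.
BASE_RATE = 1e-5     # assumed background rate per locus per generation
STRESS_FOLD = 100    # assumed fold-increase at derepressed loci under stress

def mutants_per_generation(n_loci, stressed):
    """Count loci that mutate in one generation."""
    rate = BASE_RATE * (STRESS_FOLD if stressed else 1)
    return sum(random.random() < rate for _ in range(n_loci))

random.seed(0)
print(mutants_per_generation(100_000, stressed=False))  # expect ~1
print(mutants_per_generation(100_000, stressed=True))   # expect ~100

Wright summarizes the biological picture: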

In nature, nutritional stress and associated genetic derepression must be rampant. If mutation rates can be altered by the many variables controlling specific, stress-induced transcription, one might reasonably argue that many mutations are to some extent directed as a result of the unique metabolism of every organism responding to the challenges of its environment. (Wright, 2000)

This is noted wonderfully by Jablonka and Lamb (2014: 92) in Evolution in Four Dimensions:

No longer can we think about mutation solely in terms of random failures in DNA maintenance and repair. We now know that stress conditions can affect the operation of the enzyme systems that are responsible for maintaining and repairing DNA, and parts of these systems sometimes seem to be coupled with regulatory elements that control how, how much, and where DNA is altered.

Jablonka and Lamb present solid evidence that mutations are semi-directed. Such mutations, as we have seen, can be induced by the environment in response to stress, which is due to our plastic, homeodynamic physiology. They discuss “four dimensions” of evolution: the genetic (DNA), the epigenetic, the behavioral, and the symbolic. Their works (including their Epigenetic Inheritance and Evolution: The Lamarckian Dimension; see Jablonka and Lamb, 2015) provide solid evidence and arguments against the neo-Darwinian view of evolution. The fact of the matter is, there are multiple inheritance systems over and above DNA, which then contribute to nonrandom, directed mutations. Lamarckism wasn’t wrong, and Jablonka and Lamb have strongly argued for that conclusion. Epigenetics clearly influences evolution, and this therefore vindicates Lamarckism. Epigenetic variation can be inherited too (Jablonka and Lamb, 1989). Since phenotypic plasticity is relevant to how organisms adapt to their environment, epigenetic mechanisms contribute to evolution (Ashe, Colot, and Oldroyd, 2021). Changes that arise due to epigenetic mechanisms can indeed influence mutation (Meyer, 2015), and I would say—more directly—that certain epigenetic mechanisms play a part in how an adaptive, directed mutation would arise during the development of an organism. Stochastic epigenetic variation can indeed become adaptive (Feinberg and Irizarry, 2010).

Non-random mutations have been known to be pretty ubiquitous (Tsunoyama, Bellgard, and Gojobori, 2001). This has even been shown in the plant Arabidopsis (Monroe et al, 2022), which shows that, basically, mutations are not random (Domingues, 2023). A similar concept to DMs is blind stochasticity. Noble and Noble (2017, 2018; cf Noble, 2017) have shown that organisms harness stochastic processes in order to adapt to the environment—to harness function. A stochastic process is one whose future states cannot be predicted even given knowledge of the current state of the system.

Even all the way back in 1979, such changes were beginning to be noticed by evolutionists, such as Ho and Saunders (1979) who write that variations in the phenotype

are produced by interactions between the organism and the environment during development. We propose, therefore, that the intrinsic dynamical structure of the epigenetic system itself, in its interaction with the environment, is the source of non-random variations which direct evolutionary change, and that a proper study of evolution consists in the working out of the dynamics of the epigenetic system and its response to environmental stimuli as well as the mechanisms whereby novel developmental responses are canalized.

The organism participates in its own evolution (as considerations from niche construction show), and “evolutionary novelties” can and do arise nonrandomly (Ho, 2010). This is completely at odds with the neo-Darwinian paradigm. Indeed, the creators of the Modern Synthesis ignored developmental and epigenetic issues when formulating their theory. Fortunately, in the new millennium, we have come to understand and appreciate how development and evolution occur and how dynamic the physiological system truly is.

There have been critical takes on the concept of DM (Lenski and Mittler, 2003; Charlesworth, Barton, and Charlesworth, 2017; see Noble and Shapiro, 2021 for a critique)—for example, Futuyma (2017) claims that DM is “groundless.” However, James Shapiro’s (1992, 2013, 2014) concept of natural genetic engineering states that cells can restructure their genomes, which “means viewing genetic change as a coordinated cell biological process, the reorganization of discrete genomic modules, resulting in the formation of new DNA structures” (Shapiro, 1993). DNA is harnessed by and for the physiological system to carry out certain tasks. Since development is self-organizing and dynamic (Smith and Thelen, 2003; Saetzler, Sonnenschein, and Soto, 2012), since development is spurred on by physiological processes, and since physiology is sensitive to the goings-on of the environment the developing organism finds itself in, it follows that mutations can and would arise due to need—which refutes the neo-Darwinian claim that mutations arise due to chance and not need.

Conclusion

It is clear that mutations can be (1) adaptive and (2) environmentally-induced. Such adaptive mutations, clearly, arise due to need and not chance. If they arise due to need and not chance, then they are directed and adaptive. They are directed by the plastic physiology of the organism, which constructs the phenotype in a dialectical manner, using genes as passive resources, not active causes. This is because biological causation is multi-leveled, not one-way (Noble, 2012). There is also the fact that “genetic change is far from random and often not gradual” (Noble, 2013).

As can be seen from this discussion, adaptive, directed mutations are a fact of life, and so one more domino of neo-Darwinism has fallen. Berkeley claims that “The genetic variation that occurs in a population because of mutation is random” and that “mutations are random”, but as we’ve seen here, this is not the case. Through the biological process of physiology and its relationship to the ebbs and flows of the environment, the organism’s phenotype, as it is constructed by the self-organizing system, can respond to changes in the cellular and overall environment and thusly direct changes in the phenotype and genes which would then enhance survival in the face of the environmental insult.

Lamarckism has been vindicated over the past 25 or so years, thanks to a better understanding of epigenetic processes in evolution and in the developing organism. Since what Lamarck is known for is the claim that the environment can affect the phenotype in a heritable manner, and since we now know that DNA is not the only thing inherited—epigenetically-modified DNA sequences are too—it follows that Lamarck was right. What we need to understand development and evolution is the Extended Evolutionary Synthesis, which makes novel predictions that the neo-Darwinian paradigm doesn’t (Laland et al, 2015).

Such directed changes in the genome, caused by the physiological system due to the plastic nature of organismal construction, refute a main premise of the neo-Darwinian paradigm. This is the alternative to neo-Darwinian natural selection, as Fodor noted in his attack on neo-Darwinism:

The alternative possibility to Darwin’s is that the direction of phenotypic change is very largely determined by endogenous variables. The current literature suggests that alterations in the timing of genetically controlled developmental processes is often the endogenous variable of choice; hence the ‘devo’ in ‘evo-devo’.

Darwin got quite a bit wrong, through no fault of his own. But those who claim that Darwin discovered mechanisms, or that he articulated the random process of mutation, quite obviously need to update their thinking in the new millennium on the basis of new information from systems biologists and epigeneticists. The process of the construction of organisms is dynamic and self-organizing, and this is how phenotypic traits become fixed in populations of organisms. Plasticity is in fact a major driver of evolution, along with genetic assimilation, which results in the canalization of the plastic trait and thereby eliminates the plastic response to the environment (Sommer, 2020). Phenotypic plasticity can give rise to adaptive traits, but natural selection can’t be the mechanism of evolution, due to Fodor’s considerations. Development can lead to evolution, not only evolution to development (West-Eberhard, 2003). In fact, development in many cases precedes evolution.

Fodor’s Argument and Mechanisms

2500 words

It’s been almost 5 years since I read What Darwin Got Wrong (WDGW) (Fodor and Piattelli-Palmarini, 2009; F&PP), which changed my view on the theory of natural selection (ToNS). In the book, they argue that natural selection cannot possibly be a mechanism since it cannot distinguish between correlated traits: there is no mind (agent) doing the selecting, nor are there laws of selection for trait fixation across all ecologies. Fodor had originally published Why Pigs Don’t Have Wings in the London Review of Books in 2007, and then Against Darwinism (Fodor, 2008), where he mounted and formulated his argument against the ToNS.

A precursor to the argument against the ToNS

Although Fodor had begun articulating his argument in the late 2000s, he already had a precursor to it in a 1990 paper A Theory of Content (Fodor, 1990: 72-73):

The Moral, to repeat, is that (within certain broad limits, presently to be defined) Darwin doesn’t care how you describe the intentional objects of frog snaps. All that matters for selection is how many flies the frog manages to ingest in consequence of its snapping, and this number comes out exactly the same whether one describes the function of the snap-guidance mechanisms with respect to a world that is populated by flies that are, de facto, ambient black dots, or with respect to a world that is populated by ambient black dots that are, de facto, flies. “Erst kommt das Fressen, denn kommt die Morale.” Darwin cares how many flies you eat, but not what description you eat them under. (Similarly, by the way, flies may be assumed to be indifferent to the descriptions under which frogs eat them.) So it’s no use looking to Darwin to get you out of the disjunction problem.

In Against Darwinism and WDGW, F&PP reformulate and add to this argument, stating that it is a selection-for problem:

In a nutshell: if the assumption of local coextensivity holds (as, of course, it perfectly well might), then fixing the cause of the frog’s snaps doesn’t fix the content of its intention in snapping: either an intention to snap at a fly or an intention to snap at an ABN would be compatible with a causal account of what the frog has in mind when it snaps. So causal accounts of content encounter a selection-for problem: If something is a fly if and only if it is an ABN, the frog’s behaviour is correctly described either as caused by flies or as caused by ABNs. So, it seems, a causal theory of content cannot distinguish snaps that manifest intentions to catch the one from snaps that manifest intentions to catch the other. (Fodor and Piattelli-Palmarini, 2010: 108)

The argument

Fodor formulated the argument twice: in Against Darwinism (Fodor, 2008: 11-12) and again in WDGW (Fodor and Piattelli-Palmarini, 2010: 114).

Contrary to Darwinism, the theory of natural selection can’t explain the distribution of phenotypic traits in biological populations.

(i) To do so would require a notion of ‘selection for’ a trait. ‘Selects for…’ (unlike ‘selects…’) is opaque to substitution of co-referring expressions at the ‘…’ position.

(ii) If T1 and T2 are coextensive traits, the distinction between selection for T1 and selection for T2 depends on counterfactuals about which of them would be selected for. The truth-makers for such counterfactuals must be either (a) the intentions of the agent that effects the selection, or (b) laws that determine how the relative fitness of having the traits would be selected in a possible world where the coextension does not hold.

(iii) But:

Not (a) because there is no agent of natural selection.

Not (b) because considerations of contextual sensitivity make it unlikely that there are laws of relative fitness (‘laws of selection’).

QED (Fodor, 2008)


  1. Selection-for is a causal process.
  2. Actual causal relations aren’t sensitive to counterfactual states of affairs: if it wasn’t the case that A, then the fact that it’s being A would have caused its being B doesn’t explain its being the case that B.
  3. But the distinction between traits that are selected-for and their free-riders turns on the truth (or falsity) of relevant counterfactuals.
  4. So if T and T’ are coextensive, selection cannot distinguish the case in which T free-rides on T’ from the case that T’ free-rides on T.
  5. So the claim that selection is the mechanism of evolution cannot be true. (Fodor and Piattelli-Palmarini, 2010: 114)

Selection for

Ernst Mayr wrote in One Long Argument that “Selection-for specifies the particular phenotypic attribute and corresponding component of the genotype (DNA) that is responsible for the success of the selected individual.” Selection-for a trait is needed for adaptationism to be true. But what hallmarks of adaptation are there that a free-riding trait—one that is not an adaptation—would not also display? Selection-for claims need to appeal to counterfactuals, and to appeal to counterfactuals, they need laws of relative fitness. Selection-for problems arise when so-called explanations require distinguishing the causal roles of coextensive properties (traits) (Fodor and Piattelli-Palmarini, 2010: 111).

But for there to be counterfactual-supporting laws that could distinguish the correlated traits and select the fit trait over the free-riding trait, there need to be laws that apply across all ecologies and phenotypes. But whether T1 or T2 is conducive to fitness is massively context-sensitive, so there can’t be laws of relative fitness. T1 may be helpful in one ecology and not another. T1 may also be helpful for survival in one organism in a specific ecology but not in another organism. Therefore, there is no law that explains why T1 would win a trait competition over T2; so there aren’t any laws of relative fitness. The ToNS implies a filtering mechanism, which is the environment. But the environment can only access the correlation, and so it explains the selection of both traits without being able to say anything about what’s selected-for.

A mechanism?

What is a causal mechanism? A causal mechanism is a sequence of events or processes governed by lawlike regularities. Basically, causal claims are claims about the existence of a mechanism. In the case of the ToNS, the lawlike regularities would be laws of relative fitness. In the Fodorian sense, the mechanism would need to be sensitive to correlated traits. But the only thing that could be so sensitive would be laws of relative fitness. So the question is: how can natural selection be a mechanism if it can’t ground counterfactuals which distinguish selection-of from selection-for? Laws would support the counterfactuals. The kind of law Fodor is looking for is: “All else being equal, the probability that a t1 wins a competition with a t2 in ecological situation E is p.” Basically, if there are laws, they can support the counterfactuals. However, due to massive context-sensitivity, it seems unlikely that there are ceteris paribus laws of relative fitness. If there are, why hasn’t anyone articulated them?

But the fact of the matter is this: Fodor has successfully argued against the claim that natural selection is a mechanism, and he is not even alone in that, since others have argued that natural selection isn’t a mechanism without using his arguments (eg, Skipper and Millstein, 2005; Havstad, 2011).

The question is: how can natural selection be a mechanism if it can’t ground the counterfactuals that distinguish selection-of from selection-for? Because if T and T’ are correlated, the same story explains the selection of both traits, so we can’t use the ToNS to show which was selected-for its causal contributions to fitness and which was the free-riding trait that just came along for the ride. Thus, the ToNS doesn’t explain the trait, and if it doesn’t explain the trait, then it doesn’t predict the trait. Natural selection makes no prediction as to which trait will be selected-for. That’s not to say that we (humans) can’t distinguish what was selected-for from what was merely selected—Fodor never claimed otherwise, contrary to those who claim that he did. The best example is Pigliucci (2010), who states in his review that:

functional analyses rooted in physiology, genetics and developmental biology, and why observations of selection in the field are whenever possible coupled with manipulative experiments that make it possible to distinguish between [correlated traits].

Fodor was emphatic about this—he never claimed that humans couldn’t distinguish between correlated traits, only that, using the ToNS, we can’t know which trait was selected-for its contribution to fitness, since, due to the correlation, the same story explains both traits. Fodor and Piattelli-Palmarini (2010) stated as much in Replies to Our Critics:

Many of the objections that have been raised against us seem unable to discriminate this claim from such quite different ones that we didn’t and don’t endorse, such as: when traits are coextensive, there is no fact of the matter about which is a cause of fitness; or, when traits are coextensive, there is no way to tell which of them is a cause of fitness; or when traits are coextensive Science cannot determine which is a cause of fitness…etc. Such views are, we think, preposterous on the face of them; we wouldn’t be caught dead holding them. To the contrary, it is precisely because there is a fact of the matter about which phenotypic traits cause fitness, and because there is no principled reason why such facts should be inaccessible to empirical inquiry, that the failure of TNS to explain what distinguishes causally active traits from mere correlates of causally active traits, shows that something is seriously wrong with TNS.

In his 2018 book Agents and Goals in Evolution, Samir Okasha states that Darwin was the first to employ what he terms “type II agential thinking”: personifying “mother nature” as selecting fit traits. Darwin’s analogy between natural and artificial selection fails, though. It fails because in artificial selection there is a mind attempting to select fit traits, while in natural selection there is no mind, so the only way around this is laws of relative fitness.

Gould and Lewontin (1979), although they criticized adaptive explanations, also held that natural selection is the most powerful mechanism of evolution. This claim—that natural selection is a mechanism—is ubiquitous in the literature. And it is Gould and Lewontin’s spandrel argument that partly inspired the correlated/coextensive-trait argument devised by Fodor. However, as F&PP note, Gould and Lewontin didn’t take their argument to its logical conclusion, which of course is the rejection of Darwinian natural selection as an explanation of the fixation of traits in organisms.

What are selected are not traits but organisms. And just because an organism with T was selected doesn’t mean that T was the cause of fitness. We can then say that the phrase “survival of the fittest” is a tautology, since the fit are those who survive and those who survive are fit. But Hunt (2014) claims that we can reformulate it: that it should be defined as a theory that attempts to predict and retrodict evolutionary change which acts upon organisms through the environment. However, this reformulation runs right into Fodor’s argument, since there is no way for the exogenous selector (the environment) to distinguish between correlated traits. Portin (2012) claims that we can reduce the tautology to “those who reproduce more, reproduce more”, stating that it merely “seems” like a tautology. Even Coyne and Dawkins, in their books Why Evolution is True and The Greatest Show on Earth, make the mistake of explaining natural selection in a tautologous way multiple times (Hunt, 2012). The fact of the matter is, natural selection is nothing more than an oxymoronic tautology (Brey, 2002).

To explain something is to identify a causal mechanism for it. In the case of natural selection, if we are to explain why T1 rather than T2 was selected-for, we would need to identify a causal mechanism that can distinguish between correlated traits. The ToNS is a probabilistic theory, so if NS is to be a mechanism, it would have to be a stochastic mechanism; but the leading accounts of mechanisms are deterministic, and therefore NS can’t be a mechanism (Skipper and Millstein, 2005). There are, as a matter of fact, no stable organizations of component parts in NS, so it can’t be a mechanism.

Conclusion

Nanay (2022: 175) tells us that Fodor’s book was rejected by many academic publishers, which speaks to the emotionality of Fodor’s detractors that Leal (2022) notes:

I knew about [What Darwin Got Wrong] long before the 2009 publication date (as early as 2005, as I recall), because all my friends and colleagues working in philosophy of biology kept telling me how they had to reject, in no uncertain terms, this book from all major and then not so major academic publishers. Fodor then took it to a non-academic publisher, and it did get published…

So the question is: how does the ToNS explain the trait if it can’t discern between the two correlated traits and so can’t predict which of them moves to fixation? How does the ToNS predict which trait is fitness-enhancing when two traits are correlated, prior to performing an experiment? That’s, again, where humans come in. We can perform experimental manipulations and then discern the fit trait from the correlated trait that does not cause fitness. But we can’t merely use the ToNS to do this.

There is also the fact that most if not all respondents to Fodor did not understand the argument, as can be seen from their responses to him—like Pigliucci’s, where he talks about experimentation. Humans can have access to the fit trait, but the environment—the exogenous filter—only has access to the correlation, and so the same story explains both traits.

So neo-Darwinism is false and natural selection isn’t a mechanism. What, then, is the alternative? Fodor ended Why Pigs Don’t Have Wings by writing:

The alternative possibility to Darwin’s is that the direction of phenotypic change is very largely determined by endogenous variables. The current literature suggests that alterations in the timing of genetically controlled developmental processes is often the endogenous variable of choice; hence the ‘devo’ in ‘evo-devo’.

“Endogenous variables” means causes within the organism. For instance, West-Eberhard (2003: 179) noted that: “If genes are usually followers rather than leaders in evolution—that is, most gene-frequency change follows, rather than initiates, the evolution of adaptive traits—then the most important role of mutation in evolution may be to contribute not so much to the origin of phenotypic novelties as to the store of genetic variation available for long-term gradual genetic change under selection.” So that is one way that endogenous variables can direct phenotypic change. This has been noted by many recent authors (eg, Noble et al, 2014).

One of the Darwinian premises is that mutations occur randomly. However, we now know that this is not always the case: “genetic change is far from random and often not gradual” (Noble, 2013). All of the assumptions of neo-Darwinism have been disproved—most importantly, the claim that natural selection is the causal mechanism of evolution.

P1: Natural selection is a mechanism iff it can distinguish between causes and correlates of causes.

P2: Natural selection can’t distinguish between causes and correlates of causes.

C: Therefore, natural selection isn’t a mechanism.
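The structure here is modus tollens through a biconditional, which can be checked mechanically. Here is a minimal Lean sketch (the encoding and the names Mech and Dist are mine, purely for illustration):

-- Mech : natural selection is a mechanism
-- Dist : it can distinguish causes from correlates of causes
-- Given Mech ↔ Dist (P1) and ¬Dist (P2), ¬Mech follows.
example (Mech Dist : Prop) (h1 : Mech ↔ Dist) (h2 : ¬Dist) : ¬Mech :=
  fun hm => h2 (h1.mp hm)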

The Myth of “General Intelligence”

5000 words

Introduction

“General Intelligence” or g is championed as the hallmark “discovery” of psychology. Charles Spearman first “discovered” it in 1904: noting that schoolchildren who scored highly on one test tended to score highly on others, and vice versa for lower-scoring children, he assumed that the correlation between tests must have an underlying physiological basis, which he posited to be some kind of “mental energy,” stating that the central nervous system (CNS) explained the correlation. He proclaimed that g really existed and that he had verified Galton’s claim of a unitary general ability (Richardson, 2017: 82-83). Psychometricians then claim, from these intercorrelations of scores, that what is hidden from us is revealed, and that the correlations show that something exists and is driving the correlation in question. That’s the goal of psychometrics/psychology—to quantify and then measure psychological traits/mental abilities. However, I have argued at length that this is a conceptual impossibility—the goal of psychometrics is unachievable since psychometrics isn’t measurement. Therefore, the claim that IQ tests measure g is false.

First, I will discuss the reification of g and its relation to brain properties. I will argue that if g is a thing then it must have a biological basis—that is, it must be a brain property. Reductionists like Jensen have said as much. But it’s due to the reification of g as a concrete, physical thing that people hold such beliefs. Second, I will discuss Geary’s theory that g is identical with mitochondrial functioning. I will describe what mitochondria do and what powers them, and then discuss the theory. I will take a negative view of it, since he is attempting to co-opt the real, actual functions of a bodily process and weave g theory into them. Third, I will discuss whether psychological traits are indeed quantifiable and measurable, and whether there is a definition psychometricians can use to ground their empirical investigations. I will argue negatively on all three counts. Fourth, I will discuss Herrnstein and Murray’s 6 claims about IQ in The Bell Curve and provide a response to each in turn. Fifth, I will discuss the real cause of score variation, which isn’t a reduction to the assumed existence of some biological process/mechanism, but is due to affective factors and exposure to the specific type of knowledge items on the test. Lastly, I will conclude and give an argument for why g isn’t a thing and is therefore immeasurable.

On reifications and brain properties

Contrary to protestations from psychometricians, they do in fact reify correlations and then claim that there exists some unitary, general factor that pervades all mental tests. If reification is treating the abstract as something physical, and if psychometricians treat g as something physical, then they are reifying g based on mere intercorrelations between tests. I am aware that, try as they might, they do attempt to show that there is an underlying biology to g, but these claims are defeated by the myriad arguments I’ve raised against the reducibility of the mental to the physical. Another thing that Gould gets at is the psychometricians’ claim that they can rank people—this is where the psychometric assumption comes in that, because we can put a number to their reified thing, there must be something being measured.

Reification is “the propensity to convert an abstract concept (like intelligence) into a hard entity (like an amount of quantifiable brain stuff)” (Gould, 1996: 27). So g theorists treat g as a concrete, physical thing, which then guides their empirical investigations. They posit that the mental has a material basis, and they claim that, by using correlations between different test batteries, we can elucidate the causal biological mechanisms/brain properties responsible for the correlation.
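The statistical move being reified is easy to exhibit. Here is a minimal Python sketch (the correlation numbers are made up): any battery of positively intercorrelated tests will, as a matter of linear algebra, yield a dominant first factor, and extracting it shows nothing about a physical entity causing the correlations:

import numpy as np

# Toy positive manifold: assumed intercorrelations of three tests.
R = np.array([[1.0, 0.5, 0.4],
              [0.5, 1.0, 0.6],
              [0.4, 0.6, 1.0]])

# The first principal component of an all-positive correlation matrix
# is always large -- this is the "g" that gets reified.
eigenvalues = np.linalg.eigvalsh(R)
print(eigenvalues[-1] / eigenvalues.sum())  # ~0.67 of total variance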

Spearman’s theory—and IQ—is a faculty theory (Nash, 1990). It is a theory in which the mind is claimed to be separated into different faculties, where mental entities cause the intellectual performance. Such a theory needs to maintain the claim that a cognitive faculty is causally efficacious for information processing. But the claim that the mind is “separated” into different faculties fails, and it fails since the mind is a single sphere of consciousness, not a complicated arrangement of mental parts. Physicalists like Jensen and Spearman don’t even have a sound philosophical basis on which to ground their theories. Their psychology is inherently materialist/physicalist, but materialism/physicalism is false, and so it follows that their claims don’t hold any water. The fact of the matter is, Spearman saw what he wanted to see in his data (Schlinger, 2003).

I have already proven that, since dualism is true, the mental is irreducible to the physical; and since psychometrics isn’t measurement, what psychometricians claim to do just isn’t possible. I have further argued that science can’t study first-personal subjective states since science is third-personal and objective. The fact of the matter is, hereditarian psychologists are physicalists, but it is impossible for a purely physical thing to think. Claims from psychometricians about their “mental tests” reduce to one singular claim: that g is a brain property. I have been saying this for years—if g exists, it has to be a brain property. But for it to be a brain property, one needs to provide defeaters for my arguments for the irreducibility of the mental, and one also needs to counter the arguments that psychometrics isn’t measurement and that psychology isn’t quantifiable. They can assume all they want that it is quantifiable, and that since they are giving tests, questionnaires, Likert scales, and other kinds of “assessments” to people they are really measuring something; but, ultimately, if they are actually measuring something, then that thing has to be physical.

Jensen (1999) made a suite of claims trying to argue for a physical basis for g—to reduce g to biology—though, upon conceptual examination (which I have provided above), these claims outright fail:

g…[is] a biological [property], a property of the brain

The ultimate arbiter among various “theories of intelligence” must be the physical properties of the brain itself. The current frontier of g research is the investigation of the anatomical and physiological features of the brain that cause g.

…psychometric g has many physical correlates…[and it] is a biological phenomenon.

As can be seen, Jensen is quite obviously claiming that g is a biological brain property—and this is what I’ve been saying to IQ-ists for years: If g exists, then it MUST be a property of the brain. That is, it MUST have a physical basis. But for g proponents to show this is in fact reality, they need to attempt to discredit the arguments for dualism, that is, they need to show that the mental is reducible to the physical. Jensen is quite obviously saying that a form of mind-brain identity is true, and so my claim that it was inevitable for hereditarianism to become a form of mind-brain identity theory is quite obviously true. The fact of the matter is, Jensen’s beliefs are reliant upon an outmoded concept of the gene, and indeed even a biologically implausible heritability (Richardson, 1999; Burt and Simons, 2014, 2015).

But Jensen (1969) contradicted himself when it comes to g. On page 9, he writes that “We should not reify g as an entity, of course, since it is only a hypothetical construct intended to explain covariation among tests. It is a hypothetical source of variance (individual differences) in test scores.” But then 10 pages later, on pages 19-20, he completely contradicts himself, writing that g is “a biological reality and not just a figment of social conventions.” That’s quite the contradiction: “Don’t reify X, but X is real.” Jensen then spent the rest of his career trying to reduce g to biology/the brain (brain properties), as we see above.

But we are now in the year 2023, and so of course there are new theoretical developments which attempt to show that Spearman’s hypothesized mental energy really does exist, and that it is the cause of variations in scores and of the positive manifold. This is now where we will turn.

g and mitochondrial functioning

In a series of papers, David Geary (2018, 2019, 2020, 2021) tries to argue that mitochondrial functioning is the core component of g. At last, Spearman’s hypothetical construct has been found in the biology of our cells—or has it?

One of the main functions of mitochondria is oxidative phosphorylation, which produces adenosine triphosphate (ATP). All living cells use ATP as fuel; it acts as a signaling molecule; and it is also involved in cellular differentiation and cell death (Khakh and Burnstock, 2009). The role of mitochondrial functioning in spurring disease states has been known for a while, as with cardiovascular diseases such as cardiomyopathy (Murphy et al, 2016; Ramaccini et al, 2021).

So due to the positive manifold, where performance on one thing is correlated with performance on another, Geary assumes—as Spearman and Jensen did before him—that there must be some underlying biological mechanism which then explains the correlation. Geary then uses established outcomes of irregular mitochondrial functioning to argue that the mental energy Spearman was looking for could be found in mitochondrial functioning. Basically, this mental energy is ATP. I don’t deny that mitochondrial functioning plays a role in the acquisition of disease states; indeed, this is well known (eg, Gonzales et al, 2022). What I deny is Geary’s claim that mitochondrial functioning has identity with Spearman’s g.

His theory is, like all other hereditarian-type theories, merely correlative—just like g theory. He hasn’t shown any direct, causal evidence of mitochondrial functioning in “intelligence” differences (nor for a given “chronological age”). That people’s bodies change as they age, which then has an effect on their functioning, doesn’t mean that the powerhouse of the cell (the mitochondrion, via the ATP it produces) is causing said individual differences and the intercorrelations between tests (Sternberg, 2020). Indeed, environmental pollutants affect mitochondrial functioning (Byun and Baccarelli, 2014; Lambertini and Byun, 2016). Indeed, most—if not all—of Geary’s hypotheses do not pass empirical investigation (Schubert and Hagemann, 2020). So while Geary’s theory is interesting and certainly novel, it fails to explain what he set out to explain.

Quantifiable, measurable, definable, g?

The way that g is conceptualized is that there is a quantity of it—where one has “more of it” than other people, and this, then, explains how “intelligent” they are in comparison to others—so implicit in so-called psychometric theory is that whatever it is their tests are tests of, something is being quantified. But what does it mean to quantify something? Basically, what is quantification? Simply, it’s the act of giving a numerical value to a thing that is measured. Now we have come to an impasse—if it isn’t possible to measure what is immaterial, how can we quantify it? That’s the thing, we can’t. The g approach is inherently a biologically reductionist one. Biological reductionism is false. So the g approach is false.

Both Gottfredson (1998) and Plomin (1999) make claims similar to Jensen’s, where they talk about the “biology of g” and the “genetics of g“. Plomin (1999) claims that studies of twins show that g has a substantial heritability, while Gottfredson (1998) claims that the heritability of IQ increases up until adulthood, where it “rises to 60 percent in adolescence and to 80 percent by late adulthood“, citing Bouchard’s MISTRA (Minnesota Study of Twins Reared Apart). (See Joseph 2022 for critique and for the claim that the heritability of IQ in that study is 0 percent.) They, being IQ-ists, of course assume a genetic component to this mystical g. However, their arguments are based on numerous false assumptions and studies with bad designs (and hidden results), and so they must be rejected.

If X is quantitative, then X is measurable. If X is measurable, then X has a physical basis. Psychological traits don’t have a physical basis. So psychological traits aren’t quantitative and therefore not measurable. Geary’s attempt at arguing for identity between g and mitochondrial functioning is an attempt at a specified measured object for g, though his theory just doesn’t hold. Stating truisms about a biological process and then attempting to liken the process with the construct g just doesn’t work; it’s just a post-hoc rationalization to attempt to liken g with an actual biological process.

Furthermore, if X is quantitative, then there is a specified measured object, an object of measurement, and a measurement unit for X. But this is where things get rocky for g theorists and psychometricians. Psychometry is merely pseudo-measurement. Psychometricians cannot give a specified measured object, and if they can’t give a specified measured object, they cannot give an object of measurement. They thusly also cannot construct a measurement unit. Therefore, “the necessary conditions for metrication do not exist” (Nash, 1990: 141). Even Haier (2014, 2018) admits that IQ test scores don’t have a unit like inches, liters, or grams. This is because those are ratio scales and IQ is ordinal. That is, there is no “0-point” for IQ, like there is for other actual, real measures like temperature. That’s the thing—if you have a thing to be measured, then you have a physical object and consequently a measurement unit. But this is just not possible for psychometry. I then wonder why Haier doesn’t follow what he wrote to its logical conclusion—that the project of psychometrics is just not possible. Of course the concept of intelligence doesn’t have a referent; that is, it doesn’t name a property like height, weight, or temperature (Midgley, 2018: 100-101). Even the most-cited definition of intelligence—Gottfredson’s—still fails, since she contradicts herself in her very definition.
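The point about scales can be made concrete. Deviation IQ, as standardly computed, is just a rescaling of one’s position within a norming sample (a minimal Python sketch with made-up raw scores):

import statistics

# Made-up raw scores from a hypothetical norming sample.
norming_sample = [12, 15, 18, 20, 22, 25, 25, 28, 30, 35]

mu = statistics.mean(norming_sample)       # 23.0
sigma = statistics.stdev(norming_sample)   # ~7.04

def deviation_iq(raw):
    """Rescale a raw score to mean 100, SD 15 -- a position in a
    distribution, not a ratio-scale quantity with a true zero."""
    return 100 + 15 * (raw - mu) / sigma

print(round(deviation_iq(28)))  # ~111: above the norming-sample mean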

Of course IQ “ranks” people by their performance—some people perform better on the test than others (which is an outcome of prior experience). So g theorists and IQ-ists assume that the IQ test is measuring some property that varies between groups which then leads to score differences on their psychometric tests. But as Roy Nash (1990: 134) wrote:

It is impossible to provide a satisfactory, that is non-circular, definition of the supposed ‘general cognitive ability’ IQ tests attempt to measure and without that definition IQ theory fails to meet the minimal conditions of measurement.

But Boeck and Kovas (2020) try to sidestep this issue with an extraordinary claim: “Perhaps we do not need a definition of intelligence to investigate intelligence.” How can we investigate something sans a definition of the object of investigation? How can we claim that a thing is measured if we have no definition, and no specified measured object, object of measurement, or measurement unit—points IQ-ists seem to concede? Again, IQ-ists don’t take these admissions to their logical conclusion—that we simply cannot measure and quantify psychological traits.

Haier claims that PGS and “DNA profiles” may lead to “new definitions of intelligence” (however ridiculous a claim). He also, in 2009, had a negative outlook on identifying a “neuro g”, since “g-scores derived from different test batteries do not necessarily have equivalent neuro-anatomical substrates, suggesting that identifying a “neuro-g” will be difficult” (Haier, 2009). But one more important reason exists, and it won’t just make it “difficult” to identify a neuro g—it makes it conceptually impossible. That reason is the fact that cognitive localizations are not possible, and that we reify a kind of average of brain activations when we look at brain scans using fMRI. The fact of the matter is, neuroreduction just isn’t possible empirically (Uttal, 2001, 2012, 2014), nor is it possible conceptually.

Herrnstein and Murray’s 6 claims

Herrnstein and Murray (1994) make six claims about IQ (and also g):

(1) There is such a thing as a general factor of cognitive ability on which human beings differ.

Of course, implicit in this claim is that it’s a brain property and that people have it in different quantities. However, the discussion above puts this claim to bed, since psychological traits aren’t quantitative. The claim, of course, comes from the intercorrelations of test scores. But we will see that the source of variation isn’t even entirely cognitive: it is largely affective and due to one’s life experiences (given the nature of the item content).

(2) All standardized tests of academic aptitude or achievement measure this general factor to some degree, but IQ tests expressly designed for that purpose measure it most accurately.

Of course, Herrnstein and Murray are married to the idea that these tests are measures of something—that since they give different numbers depending on one’s performance, there must be an underlying biology behind the differences. But of course, psychometry isn’t true measurement.

(3) IQ scores match, to a first degree, whatever it is that people mean when they use the word intelligent or smart in ordinary language.

That’s because the tests are constructed to agree with prior assumptions about who is or is not “intelligent.” Terman, for instance, constructed his Stanford-Binet to agree with his own preconceived notions of who is or is not “intelligent”: “By developing an exclusion-inclusion criteria that favored the aforementioned groups, test developers created a norm “intelligent” (Gersh, 1987, p. 166) population “to differentiate subjects of known superiority from subjects of known inferiority” (Terman, 1922, p. 656)” (Bazemoore-James, Shinaprayoon, and Martin, 2017). Of course, since newer tests are “validated” (that is, correlated with) older tests (Richardson, 1991, 2000, 2002, 2017; Howe, 1997), this assumption is still alive today.

(4) IQ scores are stable, although not perfectly so, over much of a person’s life.

IQ test scores are malleable, and this of course is due to the experiences one has in life, which then leave one more or less prepared to take such a test. Even so, if this claim were true, it wouldn’t speak to the “biology” of g.

(5) Properly administered IQ tests are not demonstrably biased against social, economic, ethnic, or racial groups.

This claim is outright false and can be known quite simply: the items on IQ tests derive from specific classes, mainly the white middle-class. Since this is true, it would then follow that people who are not exposed to the item content and test structures wouldn’t be as prepared as those who are. Thus, IQ tests are biased against different groups, and if they are biased against different groups it also follows that they are biased for certain groups, mainly white Americans. (See here for considerations on Asians.)

(6) Cognitive ability is substantially heritable, apparently no less than 40 percent and no more than 80 percent.

It’s nonsense to claim that one can apportion phenotypic variance into genetic and environmental causes, due to the interaction between the two. IQ-ists may claim that twin, family, and adoption studies show that IQ is X amount heritable, so there must thusly be a genetic component to differences in test scores. But the issues with heritability have been noted for decades (see Charney, 2012, 2016, 2022; Joseph, 2014; Moore and Shenk, 2016; Richardson, 2017), so this claim also fails. There is also the fact that behavioral genetics doesn’t have any “laws.” It’s simply fallacious to believe that nature and nurture, genes and environment, contribute additively to the phenotype, and that their relative contributions to the phenotype can be apportioned. But hereditarians need to keep that facade up, since it’s the only way their ideas have a chance of working.
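For concreteness, the additive model being rejected here can be written out (my rendering of the standard quantitative-genetic decomposition, not a formula from the sources cited). It assumes phenotypic variance splits cleanly:

$$V_P = V_G + V_E, \qquad h^2 = \frac{V_G}{V_P}$$

whereas once interaction and gene-environment covariance are admitted,

$$V_P = V_G + V_E + V_{G \times E} + 2\,\mathrm{Cov}(G,E),$$

the neat apportionment of the phenotype into separate genetic and environmental shares is no longer available.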

What explains the intercorrelations?

We still need an explanation of the intercorrelations between test scores. I have exhaustively argued that the usual explanations from hereditarianism outright fail—g isn’t a biological reality, and IQ tests aren’t a measure at all because psychometrics isn’t measurement. So what explains the intercorrelations? We know that IQ tests comprise different items, whether knowledge items or more “abstract” items like the Raven. Therefore, we need to look to the fact that people aren’t exposed to certain things: if one comes across a novel item one has never been exposed to, one won’t know how to answer it, and one’s score will be affected by one’s ignorance of the relationship between the question and the answer on the test. There are also other factors, beyond the relationship between one’s social class and the knowledge one is exposed to, though social class would still have an effect on the outcome.

IQ scores are merely numerical surrogates for class affiliation (Richardson, 1999, 2002, 2022). The fact of the matter is, all human cognizing takes place in specific cultural contexts in which cultural and psychological tools are used. This means, quite simply, that culture-fair tests are impossible and, therefore, that such tests are necessarily biased against certain groups—and so biased for certain groups. Lev Vygotsky’s sociocultural theory of cognitive development and his concepts of psychological and cultural tools are apt here. This is wonderfully noted by Richardson (2002: 288):

IQ tests, the items of which are designed by members of a rather narrow social class, will tend to test for the acquisition of a rather particular set of cultural tools: in effect, to test, or screen, for individuals’ psychological proximity to that set per se, regardless of intellectual complexity or superiority as such.

Thinking is culturally embedded and contextually specific (although irreducible to physical things), mediated by specific cultural tools (Richardson, 2002). This is because one is immersed in culture from birth. But what is a cultural tool? Cultural tools include language (Weitzman, 2013) (which is also a psychological tool), along with “different kinds of numbering and counting, writing schemes, mnemonic technical aids, algebraic symbol systems, art works, diagrams, maps, drawings, and all sorts of signs (John-Steiner & Mahn, 1996; Stetsenko, 1999)” (Robbins, 2005). Children are born into cultural, and also linguistically-mediated, environments (Vasileva and Balyasnikova, 2019). And what are psychological tools? Words and symbols, for instance, are psychological tools (which are of course also cultural tools) (Vallotton and Ayoub, 2012).

Vygotsky wrote: “In human behavior, we can observe a number of artificial means aimed at mastering one’s own psychological processes. These means can be conditionally called psychological tools or instruments… Psychological tools are artificial and intrinsically social, rather than natural and individual. They are aimed at controlling human behavior, no matter someone else’s or one’s own, just as technologies are aimed at controlling nature” (Vygotsky, 1982, vol. 1, p. 103; translated by and quoted in Falikman, 2021).

What, then, is the source of variation in IQ test scores, having argued that social class is a compound of the cultural tools one is exposed to? It has been shown that the language and numerical skills used on IQ tests are class-dependent (Brito, 2017). Thus, the compounded cultural tools of different classes and racial groups coalesce to explain how and why their members score the way they do. Richardson (2002: 287-288) writes

that the basic source of variation in IQ test scores is not entirely (or even mainly) cognitive, and what is cognitive is not general or unitary. It arises from a nexus of sociocognitive-affective factors determining individuals’ relative preparedness for the demands of the IQ test. These factors include (a) the extent to which people of different social classes and cultures have acquired a specific form of intelligence (or forms of knowledge and reasoning); (b) related variation in ‘academic orientation’ and ‘self-efficacy beliefs’; and (c) related variation in test anxiety, self-confidence, and so on, which affect performance in testing situations irrespective of actual ability.

Basically, what explains the intercorrelations of test scores, the so-called g, are affective, non-cognitive factors (Richardson and Norgate, 2015). Being prepared for the tests and being exposed to the items on them (items which are drawn from the white middle class) explains IQ score differences, not a mystical g that some have more of than others. That is, what explains IQ score variation is one’s “distance” from the middle class; this follows from the item content of the tests. At the end of the day, IQ tests don’t measure the ability for complex cognition (Richardson and Norgate, 2014). So one can see that the differing acquisition of cultural tools by different cultures and classes explains how and why individuals of those groups attain different knowledge. This licenses the claim that one’s IQ score is a mere outcome of their proximity to the cultural tools in use in the tests in question (Richardson, 2012).

The fact of the matter is, children do not enter school with the same degree of readiness (Richardson, 2022), and this is due to their social class and the types of things they are exposed to in virtue of their class membership (Richardson and Jones, 2019). Therefore, the explanation for these differences in scores need not be some kind of energy that people possess in different quantities; it is simply that, from birth, we are exposed to different cultures and therefore to different cultural and psychological tools, which then produces differences in children’s readiness for school. We don’t need to posit any biological mechanism when the answer is clear as day.

Conclusion

As can be seen from this discussion, it is clear that IQ-ist claims of g as a biological brain property fail. They fail because psychometrics isn’t measurement. They fail because psychometricians assume that what they are “measuring” (supposedly psychological traits) has a physical basis and has the necessary components for metrication. They fail because the proposed biological underpinnings of g theory don’t work, and claiming an identity between g and a biological process doesn’t establish that any such identity actually holds. Merely describing facts about physiology and then attempting to liken them to g doesn’t work.

Psychologists try so very hard for psychology to be a respected science, even when what they are studying bears absolutely no relationship to the objects of scientific study. Their constructs are claimed to be natural kinds, but they are merely historically contingent. Given the way these tests are constructed, is it any wonder that such score differences arise?

The so-called g factor is also an outcome of the way tests are constructed:

Subtests within a battery of intelligence tests are included on the basis of them showing a substantial correlation with the test as a whole, and tests which do not show such correlations are excluded. (Tyson, Jones, and Elcock, 2011: 67)

This is why all the subtests that comprise a test correlate with one another: the correlation is an artificial creation of the test constructors, just like their normal curve. Of course, if you pick and choose what goes into your battery or test, you can coax it into giving the results you want and then proclaim that what explains the correlations is some unobserved, hidden variable that individuals have in different quantities. But the assumption that there is a quantity presupposes that there is a physical basis to that thing. Physicalists like Jensen, Spearman, and then Haier of course presume that intelligence has a physical basis and is either driven by genes or can be reduced to neurophysiology. These claims don’t pass empirical or conceptual analysis, and the selection effect itself is easy to demonstrate, as the sketch below shows. For these reasons and more, we should reject the claims of hereditarian psychologists when they say they have discovered a genetic or neurophysiological underpinning to “intelligence.”
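Here is a minimal sketch of that selection effect (my own toy illustration; the factor structure, loadings, and cutoffs are all arbitrary): start with candidate subtests that, by construction, share no general factor, apply the keep-what-correlates-with-the-total rule that Tyson, Jones, and Elcock describe, and the retained battery shows a much larger first factor than the pool it came from.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_candidates = 2_000, 40

# 40 candidate subtests, each loading on ONE of four mutually independent
# factors, so the candidate pool has no general factor by construction.
factors = rng.standard_normal((n_subjects, 4))
which = rng.integers(0, 4, n_candidates)
loads = rng.uniform(0.4, 0.9, n_candidates)
subtests = factors[:, which] * loads + rng.standard_normal((n_subjects, n_candidates))

def first_factor_share(x):
    """Fraction of total variance on the first principal component."""
    eig = np.linalg.eigvalsh(np.corrcoef(x, rowvar=False))
    return eig[-1] / eig.sum()

# Apply the selection rule iteratively: drop whichever subtest correlates
# least with the running battery total, until a 12-subtest battery remains.
battery = list(range(n_candidates))
while len(battery) > 12:
    total = subtests[:, battery].sum(axis=1)
    rs = [np.corrcoef(subtests[:, j], total)[0, 1] for j in battery]
    battery.pop(int(np.argmin(rs)))

print("first-factor share, full pool:   ", round(first_factor_share(subtests), 2))
print("first-factor share, kept battery:", round(first_factor_share(subtests[:, battery]), 2))
```

The culling loop is a feedback process: whichever factor happens to dominate the running total pulls its own subtests’ correlations up and pushes the others out, so the finished battery exhibits a strong “general factor” that was nowhere in the candidate pool. The selection rule manufactures the positive manifold it then claims to discover.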

At the end of the day, the goal of psychometrics is clearly impossible. Try as they might, psychometricians will always fail. Their “science” will never be on the level of physics or chemistry, because they have no definition of intelligence, nor a specified measured object, object of measurement, or measurement unit. They know this, and they attempt to construct arguments to argue their way out of the logical conclusions of those facts, but it just doesn’t work. “General intelligence” doesn’t exist. It’s a mere creation of psychologists and the way they make their tests, so it’s basically just like the bell curve. Intelligence as an essence or quality is a myth; just because we have a noun “intelligence” doesn’t mean that there really exists a thing called “intelligence” (Schlinger, 2003). The fact of the matter is, intelligence is simply not an explanatory concept (Howe, 1997).

IQ-ist ideas have been subject to an all-out conceptual and empirical assault for decades. The model of the gene they use is false (DNA sequences have no privileged causal role in development); heritability estimates can’t do what hereditarians need them to do; the estimates are derived from highly environmentally-confounded studies; the so-called “laws” of behavioral genetics are anything but; and they lack definitions, specified measured objects, objects of measurement, and measurement units. It is quite simply clear that hereditarian ideas are not only empirically false but conceptually false too. They don’t even have their concepts in order, nor have they articulated exactly WHAT it is they are doing, and it clearly shows. Their whole program rests on reifying what they claim to be measuring.

This is yet another arrow in the quiver of the anti-hereditarian: their supposed mental energy, their brain property, simply does not, and cannot, exist. And if it doesn’t exist, then they aren’t measuring what they think they’re measuring. If they’re not measuring what they think they’re measuring, then they’re showing relationships between score outcomes and something else, which would be social class membership along with everything else related to social class, like exposure to the test items and other affective variables.

Now here is the argument (hypothetical syllogism):

P1: If g doesn’t exist, then psychometricians are showing other sources of variation for differences in test scores.

P2: If psychometricians are showing other sources of variation for differences in test scores and we know that the items on the tests are class-dependent, then IQ score differences are mere surrogates for social class.

C: Therefore, if g doesn’t exist, then IQ score differences are mere surrogates for social class.
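For readers who want the inference spelled out formally, here is a schematic rendering in Lean 4 (my own sketch; the proposition names are mere placeholders for the informal claims above, and nothing in the formalization establishes the premises themselves):

```lean
-- Placeholder propositions for the informal claims in the syllogism.
variable (GExists ShowingOtherSources ItemsClassDependent ScoresClassSurrogate : Prop)

-- P1: if g doesn't exist, psychometricians are showing other sources of variation.
-- P2: other sources plus class-dependent items yield scores as class surrogates.
-- `known` encodes the stated fact that the items are class-dependent.
example
    (p1 : ¬GExists → ShowingOtherSources)
    (p2 : ShowingOtherSources ∧ ItemsClassDependent → ScoresClassSurrogate)
    (known : ItemsClassDependent) :
    ¬GExists → ScoresClassSurrogate :=
  fun h => p2 ⟨p1 h, known⟩
```

The conclusion follows by simple chaining: the nonexistence of g, fed through P1 and then (with the class-dependence of the items) through P2, delivers the claim that IQ score differences are mere surrogates for social class.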