HBD and (the Lack of) Novel Predictions
2250 words
a predicted fact is a novel fact for a theory if it was not used to construct that theory — where a fact is used to construct a theory if it figures in the premises from which that theory was deduced. (Musgrave, 1988; cf Mayo, 1991: 524)
Introduction
Previously I demonstrated that the HBD movement is a racist movement. I showed this by arguing that it perfectly tracks with John Lovchik’s definition of racism, which holds that “racism is a system of ranking human beings for the purpose of gaining and justifying an unequal distribution of political and economic power.” There is, however, a different issue—an issue that comes from the philosophy of science. A theory is scientific if and only if it is based on empirical evidence, is subject to falsifiability and testability, is open to modification or rejection based on further experimentation or observation and—perhaps most importantly—is capable of generating novel predictions, where a novel prediction goes beyond existing knowledge and expectation and can be verified through empirical testing.
Here I will show that HBD doesn’t make any novel predictions, and I will also discuss one old attempt at showing that it does. Effectively, I will argue that, contrary to what is claimed, HBD is a degenerating research programme.
On so-called novel predictions
HBD and evolutionary psychology fall prey to the same issues, which invalidate both of them: they rely on ad hoc and post hoc storytelling. In a previous article on novel predictions, I stated:
A risky, novel prediction refers to a prediction made by a scientific theory or hypothesis that goes beyond what is expected or already known within an existing framework (novelness). It involves making a specific claim about a future observation or empirical result that, if confirmed, would provide considerable evidence in support of the scientific theory or hypothesis.
So EP and HBD are cut from the same cloth. John Beerbower (2016) puts the issue succinctly:
At this point, it seems appropriate to address explicitly one debate in the philosophy of science—that is, whether science can, or should try to, do more than predict consequences. One view that held considerable influence during the first half of the twentieth century is called the predictivist thesis: that the purpose of science is to enable accurate predictions and that, in fact, science cannot actually achieve more than that. The test of an explanatory theory, therefore, is its success at prediction, at forecasting. This view need not be limited to actual predictions of future, yet to happen events; it can accommodate theories that are able to generate results that have already been observed or, if not observed, have already occurred. Of course, in such cases, care must be taken that the theory has not simply been retrofitted to the observations that have already been made—it must have some reach beyond the data used to construct the theory.
HBDers promote the tenets that intelligence (IQ), along with behavior and socioeconomic outcomes, is strongly associated with genetic differences among individuals and groups. They also use the cold winter theory (CWT) to tie these tenets together and show how the traits supposedly evolved over time. According to the CWT, the challenges of surviving in colder climates, such as the need to hunt, plan ahead, and cooperate, exerted selective pressures which favored genes that fostered higher intelligence in the populations that inhabited those regions. I have previously shown that the CWT lacks novel predictive power, and that there are devastating responses to the CWT which show the invalidity of the theory. Rushton used it in his long-refuted r/K selection theory for human races. Further, Jablonski and Chaplin (2000), for example, successfully predicted that “multiple convergences of light skin evolved in different modern human populations and separately in Neanderthals” (Chaplin and Jablonski, 2009: 457). This was a successfully predicted novel fact, something that HBD doesn’t do.
Urbach (1974) (see Deakin, 1976 for a response), in criticizing “environmentalism” and contrasting it with “hereditarianism”, claimed that hereditarianism made novel predictions. He also claimed that the “hard core” of the hereditarian research programme was that (1) the cognitive ability of all people is due to general intelligence and (2) individual and group differences are due to heredity. We know that (1) is false, since general intelligence is a myth, and we know that (2) is false, since group differences are due to environmental factors (Jensen’s default hypothesis is false, and Asians are a selected population). Further, Urbach (1974: 134-135) writes that four novel facts of hereditarianism are “(i) of the degree of family resemblances in IQ, (ii) of IQ-related social mobility, (iii) of the distribution of IQ’s, and (iv) of the differences in sibling regression for American Negroes and whites.”
But the above aren’t novel predictions.
(i) Hereditarianism predicts that intelligence has a significant hereditary component, leading to similarities in IQ scores among family members. (Never mind the fact that environments are inherited by these family members as well.) The prediction appears specific, but it’s not novel within the framework of hereditarianism. The idea that IQ is heritable and that family members share similarities in IQ had been a main tenet of hereditarianism for decades, even in 1974 when Urbach’s paper was published, rather than offering a new or unexpected insight.
(ii) Hereditarianism also suggests that differences in IQ have implications for social mobility, with people with higher IQs having a greater chance of upward social mobility. This, too, isn’t novel within the hereditarian framework, since it was already known in 1974 and in the decades before.
(iii) Hereditarianism also predicts that IQ scores follow a normal distribution, with a majority of people clustering around the middle. This, too, isn’t a novel prediction, since even Binet unconsciously built his test to have a normal distribution (Nash, 1987: 71). (Also note that Binet knew that his scales weren’t measures but thought that for practical purposes they were; Michell, 2012.) Terman constructed his test to have it as well. Urbach (1974: 131) states that “even if researchers had set out to obtain a particular distribution of IQ’s, there was no divine guarantee that their efforts would have been successful.” But we know that the normal distribution is built in during test construction: only items that conform to normality are selected, keeping items that most test-takers are likely to get right along with items at both ends of the difficulty range (see the sketch after this list). In their psychometrics textbook, Rust and Golombok (2009: 85) state that “it is common practice to carry out item analysis in such a way that only items that contribute to normality are selected.” Jensen (1980: 71) even stated: “It is claimed that the psychometrist can make up a test that will yield any kind of score distribution he pleases. This is roughly true, but some types of distributions are much easier to obtain than others.”
(iv) Lastly, hereditarianism predicts that differences in sibling regression, or the extent to which sibling IQ scores regress toward the population mean, could vary between racial and ethnic groups. The prediction seems specific, but it reflects assumptions of genetic influences on psychological traits—assumptions that hereditarian thought already held at that time and still holds today. Thus, it’s not a new or unexpected insight.
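As a minimal sketch of the item-selection point in (iii) above (all numbers and the selection band are my own illustrative assumptions, not any real test-construction pipeline): keep only candidate items with middling pass rates, and total scores on the retained items pile up in a bell-like curve around the mean.

```python
import random
from collections import Counter

# Toy illustration of item selection producing a bell-shaped score
# distribution. The pool, pass rates, and selection band are invented
# for illustration only.

random.seed(0)
N_PEOPLE, N_CANDIDATES, N_KEPT = 5_000, 400, 60

# Each candidate item has some probability of being answered correctly.
pool = [random.uniform(0.05, 0.95) for _ in range(N_CANDIDATES)]

# "Item analysis": retain only items that roughly half of test-takers pass.
kept = [p for p in pool if 0.35 <= p <= 0.65][:N_KEPT]

# Total score = number of retained items answered correctly; a sum of many
# independent pass/fail items clusters around its mean.
scores = [sum(random.random() < p for p in kept) for _ in range(N_PEOPLE)]

# Crude text histogram: the totals form an approximate bell curve.
counts = Counter(scores)
for s in range(min(scores), max(scores) + 1):
    print(f"{s:3d} {'#' * (counts[s] // 25)}")
```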
Therefore, the so-called novel predictions referenced by Urbach are anything but: they reflect existing assumptions and concepts in the field at the time of publication, or he’s outright wrong (as is the case with the normal distribution).
Modern-day hereditarians may claim that the correlation between genetics and IQ/educational attainment validates their theories and therefore counts as novel. However, the claim that genes would correlate with IQ has been a central tenet in this field for literally 100 years; thus, a prediction that there would be a relationship between genes and IQ isn’t new. Never mind the fact that such correlations are spurious and meaningless (Richardson, 2017; Richardson and Jones, 2019), along with the missing heritability problem. Also note that as sample size increases, so too does the chance of spurious correlations (Calude and Longo, 2016). The hereditarian may also claim that predicting group differences in IQ based on genetic and environmental factors is a novel prediction. Yet again, the idea that these contribute to IQ has been known for decades. The general prediction isn’t novel at all.
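As a toy illustration of that last point (my own construction with arbitrary thresholds; Calude and Longo’s argument is mathematical, not simulational): among purely random variables, the number of variable pairs showing a “notable” correlation grows as the dataset grows, even though every such correlation is spurious by construction.

```python
import random

# Count "notable" correlations among purely random variables as the
# number of variables grows. Every hit is spurious by construction.

random.seed(0)

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

N_OBS = 30  # observations per variable

for n_vars in (10, 50, 100):
    data = [[random.random() for _ in range(N_OBS)] for _ in range(n_vars)]
    hits = sum(
        1
        for i in range(n_vars)
        for j in range(i + 1, n_vars)
        if abs(pearson(data[i], data[j])) > 0.35
    )
    print(f"{n_vars} random variables -> {hits} 'correlated' pairs")
```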
So, quite obviously, using the above definition of “novel fact” from Musgrave, HBD doesn’t make any novel predictions of previously unknown facts not used in the construction of the theory. The same, then, holds for an HBDer who says something like “I predict that a West African descendant will win the 100m dash at the next Olympics.” This doesn’t qualify as a novel prediction of a novel fact either, because it relies on existing knowledge of athletics and racial/ethnic demographics. It’s based on historical data and trends of West African descendants having been successful at previous Olympic 100m dashes. Since it isn’t a novel insight that goes beyond the bounds of the theory, it doesn’t qualify as “novel” for the theory.
Why novel predictions matter
Science thrives on progress, so without theories/hypotheses that make novel predictions, a scientific program would stagnate. The inability of hereditarianism to generate risky, novel predictions severely limits its ability to explain human behavior. Novel predictions also provide opportunities for empirical testing, so without them, hereditarianism lacks the opportunity for rigorous empirical testing. A proponent could reply that, whether or not the predictions are novel, predictions based on hereditarian ideas still come to pass.
Without novel predictions, hereditarianism is confined to testing hypotheses that are already well-known or widely accepted in the framework or the field itself. This results in a narrow focus, where researchers merely confirm their pre-existing beliefs instead of challenging them. Further, constantly testing beliefs that aren’t novel leads to confirmation bias, where researchers selectively seek out what agrees with them while ignoring what doesn’t (Rushton was guilty of this with his r/K selection theory). Without the generation of novel predictions, hereditarianism lacks innovation. Lastly, the non-existence of novel predictions raises questions about the progressiveness of the framework. True scientific progress is predicated on the formulation and testing of novel hypotheses which challenge existing paradigms. Merely claiming that a field generates testable, successful novel predictions, and that the field is therefore progressive, is unfounded.
Thus, all hereditarianism does is accommodate; it has no true novel predictive power. Instead of generating risky, novel predictions that could potentially falsify the framework, hereditarians merely resort to post-hoc explanations, better known as just-so stories, to fit their preconceived notions about human behavior and diversity. HBD claims are also vague and lack the detail needed for rigorous testing—the neck isn’t stuck out far enough that a failed prediction would refute the framework. That’s because the predictions are based on assumptions they already hold. Thus, HBD is merely narrative construction, and we can construct a narrative about any trait we observe today and have the story conform with the fact that the trait still exists today. Therefore hereditarianism is in the same bad way as evolutionary psychology.
I have previously compared and contrasted hereditarian explanations of crime with the Unnever-Gabbidon theory of African American offending (TAAO) (Unnever and Gabbidon, 2011). I showed not only that hereditarian explanations of crime fail, but that they lack novel predictive power. Unnever and Gabbidon, on the other hand, explicitly state hypotheses and predictions which follow from the TAAO, and when they were tested they were found to hold, validating the TAAO.
Conclusion
In this discussion I have tried to show that hereditarian/HBD theories make no novel predictions. They are merely narrative construction. The proposed evolutionary explanation for racial differences in IQ relying on the CWT is ad hoc, meaning it’s a just-so story. Lynn even had to add in something about population size and mutation rates (since Arctic peoples, who have the biggest brain sizes, don’t have the highest IQs), which is nothing more than special pleading.
Urbach’s (1974) four so-called novel predictions of hereditarianism are anything but, since they are based on assumptions hereditarianism already held. They represent extensions or reformulations of existing assumptions, while also relying on retrospective storytelling.
I have provided a theory (the TAAO) which does make novel predictions. If the predictions had not held, the theory would have been falsified. However, tests of the theory found that they do hold (Burt, Simons, and Gibbons, 2013; Unnever, 2014; Unnever, Cullen, and Barnes, 2016; Herda, 2016, 2018; Burt, Lei, and Simons, 2017; Gaston and Doherty, 2018; Scott and Seal, 2019). The hereditarian dream of matching the predictive and explanatory power of the TAAO quite obviously fails.
Therefore, the failure of hereditarianism to produce successful, risky novel predictions should rightly raise concerns about its scientific validity and credibility. So the only rational view is to reject hereditarianism as a scientific enterprise, since it doesn’t make novel predictions and it’s merely, quite obviously, a way to make prejudices scientific. Clearly, based on what a novel prediction of a novel fact entails, HBD/hereditarian theory doesn’t make any such predictions of novel facts.
Jensen’s Default Hypothesis is False: A Theory of Knowledge Acquisition
2000 words
Introduction
Jensen’s default hypothesis proposes that individual and group differences in IQ are primarily explained by genetic factors. But Fagan and Holland (2002) question this hypothesis. For if differences in experience lead to differences in knowledge, and differences in knowledge lead to differences in IQ scores, then Jensen’s assumption that blacks and whites have the same opportunity to learn the test content is questionable, and I think it false. It is obvious that there are differences in opportunity to acquire knowledge, which would then lead to differences in IQ scores. I will argue that Jensen’s default hypothesis is false due to this very fact.
In fact, there is no good reason to accept Jensen’s default hypothesis and the assumptions that come with it. Different cultural groups are of course exposed to different kinds of knowledge, so this—and not genes—would explain why different groups score differently on IQ tests, which are tests of knowledge (even so-called culture-fair tests are biased; Richardson, 2002). I will argue that we need to reject Jensen’s default hypothesis on these grounds: it is clear that groups aren’t exposed to the same kinds of knowledge, and so Jensen’s assumption is false.
Jensen’s default hypothesis is false due to the nature of knowledge acquisition
Jensen (1998: 444) (cf. Rushton and Jensen, 2005: 335) claimed that what he called the “default hypothesis” should be the null that needs to be disproved. He also claimed that individual and group differences are “composed of the same stuff“, in that they are “controlled by differences in allele frequencies“, that such differences in allele frequencies exist for all “heritable” characters, and that we would find such differences within populations too. So if the default hypothesis is true, it would suggest that differences in IQ between blacks and whites are primarily attributable to the same genetic and environmental influences that account for individual differences within each group. This implies that the genetic and environmental variances that contribute to IQ are the same for blacks and whites, which supposedly supports the idea that group differences are a reflection of individual differences within each group.
But if the default hypothesis were false, then it would challenge the assumption that genetic and environmental influences on IQ between blacks and whites are proportionally the same as those seen within each group. This allows us to talk about other causes of variance in IQ between blacks and whites—factors other than what is accounted for by the default hypothesis—like socioeconomic, cultural, and historical influences that play a more substantial role in explaining IQ differences between blacks and whites.
Fagan and Holland (2002) explain their study:
In the present study, we ensured that Blacks and Whites were given equal opportunity to learn the meanings of relatively novel words and we conducted tests to determine how much knowledge had been acquired. If, as Jensen suggests, the differences in IQ between Blacks and Whites are due to differences in intellectual ability per se, then knowledge for word meanings learned under exactly the same conditions should differ between Blacks and Whites. In contrast to Jensen, we assume that an IQ score depends on information provided to the learner as well as on intellectual ability. Thus, if differences in IQ between Blacks and Whites are due to unequal opportunity for exposure to information, rather than to differences in intellectual ability, no differences in knowledge should obtain between Blacks and Whites given equal opportunity to learn new information. Moreover, if equal training produces equal knowledge across racial groups, then the search for racial differences in IQ should not be aimed at the genetic bases of IQ but at differences in the information to which people from different racial groups have been exposed.
There are reasons to think that Jensen’s default hypothesis is false. For instance, since IQ tests are culture-bound—that is, culturally biased—they are biased against some groups and therefore biased in favor of others. This introduces a confounding factor which challenges the assumption of equal genetic and environmental influences between blacks and whites. And since we know that cultural differences in the acquisition of information and knowledge vary by race, what explains the black-white IQ gap is exposure to information (Fagan and Holland, 2002, 2007).
The Default Hypothesis of Jensen (1998) assumes that differences in IQ between races are the result of the same environmental and genetic factors, in the same ratio, that underlie individual differences in intelligence test performance among the members of each racial group. If Jensen is correct, higher and lower IQ individuals within each racial group in the present series of experiments should differ in the same manner as had the African-Americans and the Whites. That is, in our initial experiment, individuals within a racial group who differed in word knowledge should not differ in recognition memory. In the second, third, and fourth experiments individuals within a racial group who differed in knowledge based on specific information should not differ in knowledge based on general information. The present results are not consistent with the default hypothesis. (Fagan and Holland, 2007: 326)
Historical and systematic inequalities could also lead to differences in knowledge acquisition. The existence of cultural biases in educational systems and materials can create disparities in knowledge acquisition. Thus, if IQ tests—which reflect this bias—are culture-bound, that also questions the assumption that the same genetic and environmental factors account for IQ differences between blacks and whites. The default hypothesis assumes that genetic and environmental influences are essentially the same for all groups. But SES/class differences significantly affect knowledge acquisition, which challenges the default hypothesis.
For years I have been asking: what if all humans have the same potential, but it just crystallizes differently due to differences in knowledge acquisition/exposure and motivation? A new study shows that although some children appeared to learn faster than others, they merely had a head start in learning, so it seems that students have the same ability to learn and that so-called “high achievers” simply started ahead (Koedinger et al, 2023). The authors found that students vary significantly in their initial knowledge. So although the students had different starting points (which created the illusion of “natural” talent) because some began with a larger knowledge base, all of the students learned at a similar rate. They also state that “Recent research providing human tutoring to increase student motivation to engage in difficult deliberate practice opportunities suggests promise in reducing achievement gaps by reducing opportunity gaps (63, 64).”
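To make the “head start” point concrete, here is a toy sketch under assumptions of my own (a simple linear learning curve with made-up numbers; Koedinger et al fit far richer statistical models): if two students share the same learning rate but start with different amounts of prior knowledge, the early gap persists across practice opportunities without any difference in learning ability.

```python
# Toy linear learning-curve model: mastery = initial_knowledge + rate * practice.
# All numbers are hypothetical illustrations, not estimates from
# Koedinger et al. (2023); the point is only qualitative.

def mastery(initial_knowledge: float, rate: float, opportunities: int) -> float:
    """Predicted proportion of items mastered after some practice opportunities."""
    return initial_knowledge + rate * opportunities

RATE = 0.0025  # the same learning rate for both students

for opportunities in (0, 40, 80):
    head_start = mastery(0.55, RATE, opportunities)
    late_start = mastery(0.40, RATE, opportunities)
    gap = head_start - late_start
    print(f"after {opportunities:2d} opportunities: "
          f"{head_start:.2f} vs {late_start:.2f} (gap {gap:.2f})")

# The gap stays at 0.15 throughout: unequal starting knowledge, equal learning.
```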
So we know that different experiences lead to differences in knowledge (its type and content), and we also know that racial groups, for example, have different experiences in virtue of being different social groups. These different experiences lead to differences in knowledge, which are then reflected in the group IQ score. This raises questions about the truth of Jensen’s default hypothesis described above. Thus, if individuals from different racial groups have unequal opportunities to be exposed to information, then Jensen’s default hypothesis is questionable (and I’d say it’s false).
Intelligence/knowledge crystallization is a dynamic process shaped by extensive practice and consistent learning opportunities. The journey towards expertise involves iterative refinement, with each practice opportunity contributing to the crystallization of knowledge. So if intelligence/knowledge crystallizes through extensive practice, and if students don’t show substantial differences in their rates of learning, then it follows that the crystallization of intelligence/knowledge relies more on the frequency and quality of learning opportunities than on inherent differences in individual learning rates. It’s clear that my position enjoys some substantial support. “It’s completely possible that we all have the same potential but it crystallizes differently based on motivation and experience.” The Fagan and Holland papers show exactly that in the context of the black-white IQ gap, showing that Jensen’s default hypothesis is false.
I recently proposed a non-IQ-ist definition of intelligence where I said:
So a comprehensive definition of intelligence in my view—informed by Richardson and Vygotsky—is that of a socially embedded cognitive capacity—characterized by intentionality—that encompasses diverse abilities and is continually shaped by an individual’s cultural and social interactions.
So I think that IQ is the same way. It is obvious that IQ tests are culture-bound tests of a certain kind of knowledge (middle-class knowledge). So we need to understand how social and cultural factors shape opportunities for exposure to information. And per my definition, the idea that intelligence is socially embedded aligns with the notion that varying sociocultural contexts influence the development of knowledge and cognitive abilities. We also know that summer vacation increases educational inequality, and that IQ decreases during the summer months. This is due to the nature of IQ and achievement tests—they’re different versions of the same test. So higher-class children will return to school with an advantage over lower-class children. This is yet more evidence of how knowledge exposure and acquisition can affect test scores and motivation, and how such differences crystallize, even though we all have the same potential (for learning ability).
Conclusion
So intelligence is a dynamic cognitive capacity characterized by intentionality, cultural context, and social interactions. It isn’t a fixed trait, as IQ-ists would like you to believe; it evolves over time due to the types of knowledge one is exposed to. Knowledge acquisition occurs through repeated exposure to information and intentional learning. This, then, challenges Jensen’s default hypothesis, which attributes the black-white IQ gap primarily to genetics. Since diverse experiences lead to varied knowledge, and IQ tests contain a certain type of knowledge, individuals with different ranges of life experience will perform differently on these tests, and performance will reflect the types of knowledge one was exposed to over the course of one’s life. So knowing what we know about blacks and whites being different cultural groups, and about different cultures having different knowledge bases, we can rightly state that disparities in IQ scores between blacks and whites are due to environmental factors.
Unequal exposure to information creates divergent knowledge bases, which then influence the score on the test of knowledge (the IQ test). And since we now know that, despite differences in initial performance, students show a surprising regularity in learning rates, it seems that once exposed to information, the rate of knowledge acquisition remains consistent across individuals, which then challenges the assumption of innate disparities in learning abilities. So the sociocultural context becomes pivotal in shaping the kinds of knowledge that people are exposed to. Cultural tools, environmental factors, and social interactions contribute to diverse cognitive abilities and knowledge domains, which then emphasizes the contextual nature of not only intelligence but also performance on IQ tests. What this shows is that test scores reflect the kinds of experience the testee was exposed to. Disparities in test scores therefore indicate differences in learning opportunities and cultural contexts.
So a conclusive rejection of Jensen’s default hypothesis asserts that the black-white IQ gap is due to exposure to different types of knowledge. Thus, what explains disparities not only between blacks and whites but between groups generally is unequal opportunity for exposure to information—most importantly the type of information found on IQ tests. My sociocultural theory of knowledge acquisition and crystallization offers a compelling counter to hereditarian perspectives, asserting that diverse experiences and intentional learning efforts contribute to cognitive development. The claim that all groups or individuals are exposed to similar types of knowledge, as Jensen assumes, is false. By virtue of being different groups, they are exposed to different knowledge bases. Since this is true, and IQ tests are culture-bound tests of a certain kind of knowledge, it follows that what explains group differences in IQ and knowledge is differences in exposure to information.
Mind, Culture, and Test Scores: Dualistic Experiential Constructivism’s Insights into Social Disparities
2450 words
Introduction
Last week I articulated a framework I call Dualistic Experiential Constructivism (DEC). DEC is a theoretical framework which draws on mind-body dualism, experiential learning, and constructivism to explain human development, knowledge acquisition, and the formation of psychological traits and mind. In the DEC framework, knowledge construction and acquisition are seen as due to a dynamic interplay between individual experiences and the socio-cultural contexts in which they occur. It strongly emphasizes the significance of personal experiences and interaction with others in shaping cognitive processes, social understanding, and the social construction of knowledge, drawing on Vygotsky’s socio-historical theory of learning and development, which emphasizes the importance of cultural tools and the social nature of learning. It recognizes that genes are necessary but not sufficient for psychological traits. It emphasizes that the manifestation of psychological traits and mind is shaped by experiences and interactions with the socio-cultural-environmental context.
My framework is similar to some other frameworks, like constructivism, experiential learning theory (Kolb) (Wijnen-Meyer et al, 2022), social constructivism, socio-cultural theory (Vygotsky), relational developmental systems theory (Lerner and Lerner, 2019) and ecological systems theory (Bronfenbrenner, 1994).
DEC shares a key point with constructivism—that of rejecting passive learning and highlighting the importance of the learner’s active engagement in the construction of knowledge. Kolb’s experiential learning theory proposes that people learn best through direct experiences and reflection on those experiences, while DEC emphasizes the significance of experiential learning in shaping one’s cognitive processes and understanding of knowledge. DEC also relies heavily on Vygotsky’s socio-historical theory of learning and development; both DEC and Vygotsky’s theory emphasize the role of socio-cultural factors in shaping human development along with the construction of knowledge. Vygotsky’s theory also highlights the importance of social interaction, cultural and psychological tools, and historical contexts, which DEC draws from. Cognitive development and knowledge arise from dynamic interactions between individuals and their environment, while also acknowledging the reciprocal influences between the individual and their social context. (This is how DEC can also be said to be a social constructivist position.) DEC is also similar to Urie Bronfenbrenner’s ecological systems theory, which emphasizes the influence of multiple environmental systems on human development. With DEC’s focus on how individuals interact with their cultural contexts, it is therefore similar to ecological systems theory. Finally, DEC shares similarities with Lerner’s relational developmental systems theory in focusing on interactions, treating genes as necessary but not sufficient causes for the developing system, rejecting reductionism, and acknowledging environmental and cultural contexts in shaping human development. They differ in the treatment of mind-body dualism and the emphasis on cultural tools in shaping cognitive development and knowledge acquisition.
Ultimately, DEC posits that individuals actively construct knowledge through their engagement with the world, drawing upon their prior experiences, interactions with others, and cultural resources. So the socio-cultural context in which the individual finds themselves plays a vital role in shaping the nature of learning experiences along with the construction of meaning and knowledge. Knowing this, how would race, gender, and class be integrated into DEC, and how would this then explain test score disparities between different classes, men and women, and races?
Social identities and test score differences: The impact of DEC on gender, race and class discrepancies
Race, class, and gender can be said to be social identities. Since they are social identities, they aren’t inherent or fixed characteristics of individuals; they are social categories which influence an individual’s experiences, opportunities, and interactions within society. These social identities are shaped by cultural, historical, and societal factors which intersect in complex ways, leading to different experiences.
When it comes to gender, it has long been known that boys and girls have different interests and so have different knowledge bases. This has been evident since Terman specifically constructed his Stanford-Binet to eliminate differences between men and women, and is also evidenced by the ETS changing the SAT to reflect these differences between men and women (Rosser, 1989; Mensh and Mensh, 1991). So when it comes to the construction of knowledge and engagement with the world, an individual’s gender influences the way they perceive the world, interpret social dynamics, and act in social situations. There is also gendered test content, as Rosser (1989) shows for the SAT. Thus, the concept of gender in society influences test scores, since men and women are exposed to different kinds of knowledge; hence the existence of “gendered test items” (items that reflect or perpetuate gender biases, stereotypes, or assumptions in their presentation).
But men and women have negligible differences in full-scale IQ, so how can DEC work here? It’s simple: men are better spatially and women are better verbally. Thus, by choosing which items they want on the test, test constructors can build the conclusions they want into the test. DEC emphasizes socio-cultural influences on knowledge exposure, stating that unique socio-cultural and historical experiences and contexts influence one’s knowledge acquisition. Cultural/social norms and gendered socialization can also shape one’s interests and experiences, which would then influence knowledge production. Further, test content can have gender bias (as Rosser, 1989 pointed out), and subjects that either sex is more likely to have an interest in can skew answer outcomes (as Rosser showed). Stereotype threat could also influence this, with one study conceptualizing stereotype threat as being responsible for gender differences in advanced math (Spencer, Steele, and Quinn, 1999). Although stereotype threat affects different groups in different ways, one analysis showed empirical support “for mediators such as anxiety, negative thinking, and mind-wandering, which are suggested to co-opt working memory resources under stereotype threat” (Pennington et al, 2016). Lastly, intersectionality is inherent in DEC. Of course the experiences of a woman from a marginalized group would be different from the experiences of a woman from a privileged group, and these differences could influence how gender intersects with other identities when it comes to knowledge production.
When it comes to racial differences in test scores, DEC would emphasize the significance of understanding test score variations as reflecting multifaceted variables resulting from the interaction of cultural influences, experiential learning, societal contexts, and historical influences. DEC rejects the biological essentialism and reductionism of hereditarianism and its claims of innate, genetic differences in IQ—it contextualizes test score differences. It views test scores as dynamic outcomes, influenced by social contexts, cultural influences, and experiential learning. It also highlights cultural tools as mediators of knowledge production which would then influence test scores. Language, communication styles, educational values, and other cultural resources influence how people engage with test content and respond to test items. Of course, social interactions play a large part in the acquisition of knowledge in different racial groups; cultural tools are shared and transmitted through social interactions within racial communities. Historical legacies and social structures could impact access to the cultural tools and educational opportunities that would be useful for scoring well on the test, which would then affect test performance. Blacks and whites are different cultural groups, so they’re exposed to different kinds of knowledge, which then influences their test scores.
Lastly, we come to social class. People from families in higher social strata benefit from greater access to educational resources—along with enriching experiences—like attending quality pre-schools and having access to educational materials, materials whose content is likely to appear in test items. These early learning experiences then set the foundation for performing well on standardized tests. Lower-class people could have limited access to these kinds of opportunities, which would impact their readiness and therefore their performance on standardized tests. Cultural tools and language also play a pivotal role in shaping class differences in test scores. Parents of higher social class could use language and communication patterns that potentially contribute to higher test scores. Conversely, lower social classes could lack exposure to the specific language conventions used in test items, which would then influence their performance. Social interactions also influence knowledge production. Higher social classes foster discussions and educational discourses which support academic achievement, and their peer groups would also provide additional academic support and encouragement, which would lend itself to higher test scores. On the other hand, lower-class groups have limited academic support along with fewer opportunities for social interactions conducive to learning the types of items and the structure of the test. It has also been shown that there are SES disparities in language acquisition due to the home learning environment, and this contributes to the achievement gap and to school readiness (Brito, 2017). Thus, class dictates whether one is or is not ready for school, due to one’s exposure to language in the home learning environment. Therefore, in effect, IQ tests are middle-class knowledge tests (Richardson, 2001, 2022). So one who is not exposed to the specific cultural knowledge on the test won’t score as well as someone who is. Richardson (1999; cf. Richardson, 2002) puts this well:
So relative acquisition of relevant background knowledge (which will be closely associated with social class) is one source of the elusive common factor in psychometric tests. But there are other, non-cognitive, sources. Jensen seems to have little appreciation of the stressful effects of negative social evaluation and systematic prejudice which many children experience every day (in which even superficial factors like language dialect, facial appearance, and self-presentation all play a major part). These have powerful effects on self concepts and self-evaluations. Bandura et al (1996) have shown how poor cognitive self-efficacy beliefs acquired by parents become (socially) inherited by their children, resulting in significant depressions of self-expectations in most intellectual tasks. Here, g is not a general ability variable, but one of ‘self-belief’.
…
Reduced exposure to middle-class cultural tools and poor cognitive self-efficacy beliefs will inevitably result in reduced self-confidence and anxiety in testing situations.
…
In sum, the ‘common factor’ which emerges in test performances stems from a combination of (a) the (hidden) cultural content of tests; (b) cognitive self-efficacy beliefs; and (c) the self-confidence/freedom-from-anxiety associated with such beliefs. In other words, g is just a mystificational numerical surrogate for social class membership. This is what is being distilled when g is statistically ‘extracted’ from performances. Perhaps the best evidence for this is the ‘Flynn effect,’ (Flynn 1999) which simply corresponds with the swelling of the middle classes and greater exposure to middle-class cultural tools. It is also supported by the fact that the Flynn effect is more prominent with non-verbal than with verbal test items – i.e. with the (covertly) more enculturated forms.
I can also make this argument:
(1) If children of different class levels have experiences of different kinds with different material, and (2) if IQ tests draw a disproportionate number of test items from the experiences of the higher classes, then (3) higher-class children should have higher scores than lower-class children.
The point that ties together this analysis is that different groups are exposed to different knowledge bases, which are shaped by their unique cultural tools, experiential learning activities, and social interactions. Ultimately, these divergent knowledge bases are influenced by social class, race, and gender, and they play a significant role in how people approach educational tests which therefore impacts their test scores and academic performance.
Conclusion
DEC offers a framework with which we can explain how and why groups score differently on academic tests. It recognizes the intricate interplay between experiential learning, societal contexts, socio-historical contexts, and cultural tools in shaping human cognition and knowledge production. The part that the irreducibility of the mental plays is pivotal in refuting hereditarian dogma: since the mental is irreducible, neither genes nor brain structure/physiology can explain test scores and differences in mental abilities. In my framework, the irreducibility of the mental is used to emphasize the importance of considering subjective experiences, emotions, conscious awareness, and the unique perspectives of individuals in understanding human learning.
Using DEC, we can better understand how and why races, social classes, and men and women score differently from each other. It allows us to understand experiential learning and how groups have access to different cultural and psychological tools in shaping cognitive development, which then provides a more nuanced perspective on test score differences between different social groups. DEC moves beyond the rigid gene-environment false dichotomy and allows us to understand how groups score differently, rejecting hereditarianism and explaining how and why groups score differently through a constructivist lens. Since all human cognizing takes place in cultural contexts, it follows that groups not exposed to the cultural contexts emphasized in standardized testing may perform differently due to variations in experiential learning and cultural tools.
In rejecting the claim that genes cause or influence mental abilities/psychological traits and differences in them, I am free to reason that social groups score differently not due to inherent genetic differences, but as a result of varying exposure to knowledge and cultural tools. With my DEC framework, I can explore how diverse cultural contexts and learning experiences shape psychological tools. This allows a deeper understanding of the dynamic interactions between the individual and their environment, emphasizing the role of experiential learning and socio-cultural factors in knowledge production. Gene-environment interactions and the irreducibility of the mental allow me to steer clear of genetic determinist explanations of test score differences and to correctly identify such differences as due to what one is exposed to in one’s life. In recognizing G-E interactions, DEC acknowledges that genetic factors are necessary pre-conditions for the mind, but genes alone cannot explain how mind arises, due to the irreducibility principle. So by considering the interplay between genes and experiential learning in different social contexts, DEC offers a more comprehensive understanding of how individuals construct knowledge and how psychological traits and mind emerge, steering away from genetically reductionist approaches to human behavior, action, and psychological traits.
I have also argued that mind-body dualism and developmental systems theory refute hereditarianism; thus the framework I’ve created is a further exposition which challenges traditional assumptions in psychology, providing a more holistic and nuanced understanding of human cognition and development. By incorporating mind-body dualism, it rejects the hereditarian perspective of reducing psychology and mind to genes and biology. Thus, hereditarianism is discredited, since it has a narrow focus on genetic determinism/reductionism. DEC also integrates developmental systems theory, where development is a dynamic process influenced by multiple irreducible interactions between the parts that make up the system, along with how the human interacts with their environment to acquire knowledge. Thus, by addressing the limitations (and impossibility) of hereditarian genetic reductionism, my DEC framework provides a richer framework for explaining how mind arises and how people acquire different psychological and cultural tools, which then influence their outcomes and performance on standardized tests.
The Concept of Genotypic IQ is False and Socially Destructive
2050 words
Introduction
The concept of “genotypic IQ” (GIQ) refers to a theoretical genetic potential for IQ. Basically, GIQ is one’s IQ without any corresponding environmental insults, and of course it is said to be due to the interaction of many genes each with small effect (which is the justification for GWAS). This, though, is like the concept of a true score: “A true score is the hypothetical average of a thousand parallel testings of someone’s intellectual abilities.” Nevertheless, this concept of GIQ is used by hereditarians to proclaim that “genotypic intelligence is deteriorating” (Lynn, 1998) due to “dysgenic fertility“, which is “a negative correlation between intelligence and the number of children” (Lynn and Harvey, 2008: 112). “Genotypic intelligence is the genetic component of intelligence and it is this that has been declining” (Lynn and Harvey, 2008: 113); alternatively, GIQ is the IQ people would have if they had access to optimal environments. I will argue in this article that the concept of GIQ is nonsense.
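For reference, the “true score” notion comes from classical test theory, where an observed score X is decomposed as

X = T + E, with E[E] = 0, so that T = E[X]

that is, the true score T is the expected score over hypothetical parallel testings, and E is random error averaging to zero. This is the standard textbook decomposition, and it is the sense in which GIQ is an analogous unobservable idealization.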
What is GIQ?
So GIQ is the so-called genetic component of intelligence. This, of course, is based on the assumption that genes are causative for IQ, which in turn rests on the assumption that heritability, however weakly, can tell us anything about genetic causation (it can’t).
Lynn (2015) talks about the GIQs of Africans, pygmies, and aborigines. He also claims that the IQ of African Americans is “solely genetically determined”, since it hasn’t changed in some 80 years. This claim, though, is false (Dickens and Flynn, 2006). Nevertheless, the claim of GIQ arises due to the assumption—which hasn’t been tested, nor can it be—that IQ and other psychological traits are caused/influenced by genes. I have argued at length that this claim is false.
It seems that the only people discussing this concept are the usual suspects (Lynn, 2015, 2018; Woodley of Menie, 2015; Madison, Woodley of Menie, and Sanger, 2016; Kirkegaard, Lasker, and Kura, 2019; Piffer, 2023). The decline in so-called genotypic IQ is used as a cudgel to argue that the “dysgenic effect” of low-IQ women having more children is driving the decline. Weiss (2021: 35) puts it like this:
If women with a low IQ give birth to their children earlier than women with a high IQ, the mean genotypic IQ of the population will also decrease (Comings 1996), even if the number of children in both population strata should be the same. If the number of children across the IQ distribution is not equal (Blake 1989), the next generation will have a different IQ distribution.
Quite obviously, the hereditarian claim of GIQ is that some individuals—and of course groups—are genetically more intelligent than others. Nevertheless, a woman “with a low IQ” doesn’t have a low IQ due to genetics; if we think about the nature of IQ and the types of items on the test, we come to the conclusion that these tests aren’t tests of one’s genetic potential for learning ability (as many have claimed), but merely of what one has been exposed to and learned.
This concept of GIQ has also been used in attempts to show that the genes found to be associated with IQ have been in decline. Cretan (2016)—in a paper titled “Was Cro-Magnon the Most Intelligent Modern Human?“—tries to argue that GIQ has decreased since Neolithic times, and that the decrease in height and brain size since then is expected, since the two are moderately correlated. However, the so-called brain size increase seems to be an artifact (Deacon, 1990a, 1990b). Cretan (2016: 158-159) writes:
“Genotypic” intelligence changes across millennia because the genetic variants, or alleles, that enable people to develop higher intelligence change their frequencies due to mutation and selection. Evolution by mutation and selection implies that at a certain selection pressure favoring higher intelligence, the genotypic intelligence of a population remains constant. At selection pressures below this break-even point, intelligence will decrease; at higher selection pressure, intelligence will increase. In the complete absence of selection, genotypic IQ will not remain constant.
As we can see, this concept of GIQ and its so-called decrease has been sounding hereditarian alarm bells for decades. People like Lynn and Jensen pushed eugenic ideals on the basis of low-intelligence people having more children, advocating negative eugenic practices to prevent people with low IQs from having children. Jensen, in his infamous 1969 paper, was pretty much explicit about these aims, and then in 1970 he stated that heritability can tell us one’s genetic standing when it comes to intelligence. Richard Lynn, in his review of Cattell’s Beyondism, called for “realistically phasing out” certain populations, while claiming that it wasn’t genocide:
“Is there a danger that current welfare policies, unaided by eugenic foresight, could lead to the genetic enslavement of a substantial segment of our population?” – Jensen, 1969: 95, How Much Can We Boost IQ and Scholastic Achievement?
“What the evidence on heritability tells us is that we can, in fact, estimate a person’s genetic standing on intelligence from his score on an IQ test.” – Jensen, 1970, Can We and Should We Study Race Difference?
…
“What is called for here is not genocide, the killing off of the populations of incompetent cultures. But we do need to think realistically in terms of “phasing out” of such peoples.” [Lynn]
This is an example of negative eugenics—preventing those who were thought to have undesirable traits from breeding. William Shockley—who was Arthur Jensen’s inspiration—talked about paying people to undergo sterilization. This was called the voluntary sterilization bonus plan:
Shockley is proposing varying bonuses to anyone with an IQ under 100 who agrees to be sterilized upon reaching child-bearing age. He would pay volunteers $1,000 for every IQ point below 100, with “$30,000 put into a trust fund for a 70-IQ moron, potentially capable of producing 20 children.”
…
Under the plan, bonuses would also go to potential parents based on the “best scientific estimates” of their having such “genetically carried disabilities as hemophilia, sickle cell anemia, epilepsy, Huntington’s chorea and so on,” with taxpayers getting no money to participate.
This is another example of negative eugenics, but there is of course also positive eugenics—encouraging those with desired traits to have more children. In his article Bright New World, Moen (2016) discusses this kind of positive eugenics, while endorsing the claim of GIQ. Moen proposed that women should be paid modest sums of cash to have children with high IQ sperm donors, not their husbands:
Here I would like to suggest an alternative way to raise global IQ: giving prospective mothers modest monetary incentives to have children that genetically belong not to their husbands (or to ordinary sperm donors) but to high-IQ sperm donors.
These are the kinds of views and ultimate consequences that derive from thinking that there is a GIQ. Since we know that IQ can’t be genetic, there can be no GIQ. If there can be no GIQ, then proposals like the negative and positive eugenic ideas I just cited would merely be getting rid of people who are not socially desirable—mainly the lower class, along with blacks, since they are more likely to be lower class and to have lower IQs (due to knowledge exposure and differential access to cultural and psychological tools). This concept of GIQ has, since the advent of IQ tests in America, been used to sterilize people in the name of eugenics. The moral wrongness of eugenics is why we should reject this concept, never mind the irreducibility arguments. Eugenic policies discriminate against people based on arbitrary criteria and violate their reproductive rights.
Arguments against GIQ
Now that I have described what GIQ is and how it has been used in the past in the name of eugenics, here are a few arguments to invalidate the concept.
P1: If IQ is solely determined by one’s genetic makeup, then IQ scores should remain stable throughout one’s lifetime.
P2: IQ scores do not remain stable throughout one’s lifetime.
C: Thus, IQ is not solely determined by one’s genetic makeup.
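To make the logical form explicit, here is a minimal sketch in Lean 4 (the proposition names are my own labels): the argument is an instance of modus tollens, so its validity turns entirely on whether P1 and P2 are true.

```lean
-- Modus tollens form of the argument above, as a check that the inference
-- is valid. `GeneticOnly` stands for "IQ is solely determined by one's
-- genetic makeup" and `Stable` for "IQ scores remain stable throughout
-- one's lifetime"; both names are my own labels.
example (GeneticOnly Stable : Prop)
    (p1 : GeneticOnly → Stable)  -- P1
    (p2 : ¬Stable) :             -- P2
    ¬GeneticOnly :=              -- C
  fun h => p2 (p1 h)
```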
P1: If IQ is solely determined by genetics, then individuals with high IQ parents should also have high IQ scores.
P2: If individuals with high IQ parents also have high IQ scores, then adoption should not affect their IQ scores.
P3: Adoption does affect the IQ scores of individuals with high IQ parents.
C: Thus, IQ is not solely determined by genetics.
This argument contradicts the main claim of GIQ, since adoption has been shown to raise IQ (see Capron and Duyme, 1989; Locurto, 1990; Flynn, 1993; Duyme, Dumaret, and Tomkiewicz, 1999; Kendler et al, 2015; see Nisbett et al, 2012 for review).
P1: If the concept of GIQ were true, then one’s IQ would be determined by their genetics.
P2: Genes don’t determine traits, never mind psychological ones.
C: Therefore, the concept of GIQ is false.
P1: If psychological traits are reducible to genetics, then environment plays no role in shaping IQ and the concept of GIQ is true.
P2: The environment plays a significant role in shaping IQ, as adoption studies show.
C: Therefore psychological traits are not reducible to genetics and the concept of GIQ is false.
And
P1: If psychological traits are irreducible, then the concept of GIQ is false.
P2: Psychological traits are irreducible.
C: Therefore, the concept of GIQ is false.
Both of these arguments draw on the irreducibility-of-the-mental arguments I’ve been making for years. If the mental is irreducible to the physical, then the concept of GIQ can’t possibly be true.
P1: Either the concept of GIQ is true and implies that IQ is determined by genes alone, or the concept of GIQ is false and factors other than genes contribute to IQ.
P2: If the concept of GIQ is true and implies genetic determinism, then it ignores the significant impact that environmental factors have on IQ and may perpetuate discrimination against those with low IQs.
P3: If the concept of GIQ is false and factors other than genes contribute to IQ, then efforts should be focused on addressing those other factors rather than assuming that genes are the sole determinant of IQ.
C: Thus, either the concept of GIQ perpetuates discriminatory attitudes if true, or it distracts from addressing the true determinants of IQ if false.
P1 is logically true, while P2 and P3 are supported by scientific evidence, so the argument is plausible.
The concept of GIQ assumes that IQ is largely determined by genetics, and that individuals have different genetic potentials for IQ. But there is no clear, consistent definition of intelligence, and the factors that contribute to IQ are complex and multifaceted. So any attempt to reduce one’s IQ to one’s genes, or to make predictions about one’s IQ from one’s genes alone, is inherently flawed and oversimplified. Thus, the concept of GIQ is not a valid or useful way of understanding intelligence, and attempts to use it to make policy or social decisions would be misguided. So this argument challenges the concept of GIQ on the ground that there is no accepted definition of intelligence. That’s more than enough to discount the concept entirely.
Conclusion
I have described the concept of GIQ that many hereditarians in the literature have espoused. It is described as one’s genetic potential for IQ sans environmental insults. The usual suspects are arguing for a GIQ. However, as can be seen historically, this concept has led to destructive consequences for groups and individuals who are deemed less intelligent. It has been argued that those who have low IQs should not have children: either people should be paid to be sterilized and not have children, or mothers should be induced to have children not with their husbands but with high-IQ sperm donors. Eugenics is morally wrong, so we should not do this, never mind the fact that genes don’t work how hereditarians need them to. Nevertheless, I have given a few arguments that the concept of GIQ is misleading at best and socially destructive at worst. This is yet another reason why we should ban IQ tests.
Thus, the concept of GIQ is merely false eugenic nonsense.
Devastating Objections to the Rushton-Lynn Cold Winters Theory
3500 words
Introduction
Cold winters theory (CWT) attempts to explain the variation in IQ scores between countries. According to the theory, what explains a suite of observed differences is differential evolution by natural selection in different environments. The exodus out of Africa led to the colonization of new biomes with novel challenges that early humans would not have been accustomed to, so they would have needed to adapt their actions and behavior to their new environments. Since they were in novel environments, early man would have needed to acquire new skills to survive; those who could not had a lower chance to reproduce, and so there was selection-for and selection-against certain traits. Over time, this led to differences in phenotype between groups that evolved in different environments, with natural selection as the driver. Hereditarians have said as much, and this theory is a cornerstone of their thinking. The observed differences, in order to be of any use to hereditarians, must be due to evolution, particularly evolution by natural selection.
However, although natural selection isn't itself a mechanism (Fodor, 2008; Fodor and Piattelli-Palmarini, 2010), it is generally understood that natural selection actually decreases genetic variation in a trait (Howe, 1997: 70; Richardson, 2017: 46). Thus, if the differences in IQ between races were due to natural selection, then there would be decreased, not increased, variability in IQ/intelligence between races.
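To make the variance-reducing point concrete, here is a minimal sketch in Python—a toy model with made-up numbers, not anyone's actual data—of a quantitative trait under truncation selection, where only the top half of each generation reproduces:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model (hypothetical numbers, no real data): an IQ-like trait under
# truncation selection, where only individuals above the median reproduce.
N = 10_000
trait = rng.normal(100, 15, N)  # initial trait values

for gen in range(8):
    parents = trait[trait > np.median(trait)]  # selection: keep the top half
    sampled = rng.choice(parents, N)           # draw parents for N offspring
    trait = sampled + rng.normal(0, 5, N)      # offspring = parent value + noise
    print(f"generation {gen + 1}: variance = {trait.var():.1f}")
```

The printed variance falls from about 225 toward a much lower equilibrium: selection-for a trait shrinks variation in that trait. If racial IQ differences were the product of strong selection, reduced variability in the selected groups is the signature we would expect—and, as Objection 3 below notes, it is not what we see.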
Emil Kirkegaard has a good overview of the history of this theory. Nevertheless, I have made my own critiques of CWT, which rely on the fact that it makes no risky, novel predictions (contra Lynn). In this article, I will mount some more arguments against CWT, and I will further show how the logic of the theory crumbles due to shoddy reasoning and the use of ad hoc hypotheses to save the theory from falsification. I will conclude that the CWT has no scientific value and is nothing more than a just-so story—it explains what it purports to explain while not successfully predicting novel evidence.
Cold winter theory – Lynn
One of the earliest instances of CWT can be found in Wallace (1864), who stated things that contemporary hereditarians would later argue. In 1987, Richard Lynn argued that the selective pressures of cold winters explain the high IQs of "Mongoloids" (Asians) (Lynn, 1987): the higher IQs of Asians, on his view, are explained by the selective pressures of cold environments, which selected-for the adaptations he posits. In 1991, Lynn argued that surviving in novel environments that our species didn't evolve in led to selective pressures which increased the IQs of "Caucasoids" and "Mongoloids." The two groups had to survive in cognitively demanding environments and, due to the cold, needed to create shelters, make clothes and fire, and hunt game; this, supposedly, explains why the two groups evolved greater intelligence than Africans. Although Ian Deary is himself an IQ-ist, he rightly states that Lynn's theory is nothing more than a just-so story:
Another review of the thorny issue which Lynn deals with in the first paper may be judged worthwhile if there is a wealth of convincing new evidence, or a Flynn-like (1987) fine-toothcombing of the past evidence. Neither of these objectives is achieved. Therefore, the Pandora’s box has been opened once more, some may say, to no great purpose. What of Lynn’s evolutionary account of the origins of intelligence test score differences between groups? It puts me in mind of Kipling’s Just So stories. When one is more used to examining factor analyses or anova tables the type of evolutionary evidence that is offered here is difficult to evaluate. One suspects that there is an infinite number of more or less plausible historical accounts of the causes of racial differences in IQ test scores, and that all would leave aside uncomfortable facts (like the intelligence needed to exist in hot arid climates). The issue addressed in Lynn’s first paper is difficult enough, but the evidence is far too sparse to be telling the story of how the eskimo got his/her flat nose. (Deary, 1991: 157)
Thus, if this relationship were to hold, then those who experienced the harshest, coldest conditions should have the highest IQs. However, this is not what we see. Arctic people have IQs around 91, which seems to be a piece of evidence against CWT. Lynn, though, has an ad hoc hypothesis for why they don't have higher IQs—they had a small population size, so high-IQ genetic mutations didn't have a large chance to appear and become stabilized in the genome as they did for Asians (population size for Arctic people 56,000; for Asians 1.4 billion; Lynn, 2006: 157). So due to geographic isolation along with a small population size, Arctic people did not have the chance to gain higher IQs. This is nothing more than an ad hoc hypothesis—an ad hoc hypothesis is produced "for this" purpose, and a hypothesis is ad hoc if it cannot be independently verified. It's a case of special pleading, as Scott McGreal argues.
The fact of the matter about CWT is that the conclusion was known first (higher IQs in certain geographic areas), and then a form of reverse reasoning was used to attempt to ascertain the causes of the observed differences between groups. This is known as reverse engineering, where reverse engineering is defined as "a process of figuring out the design of a mechanism on the basis of an analysis of the tasks it performs" (Buller, 2005: 92). This is also one of Smith's (2016: 227-228) just-so story triggers:
1) proposing a theory-driven rather than a problem-driven explanation, 2) presenting an explanation for a change without providing a contrast for that change, 3) overlooking the limitations of evidence for distinguishing between alternative explanations (underdetermination), 4) assuming that current utility is the same as historical role, 5) misusing reverse engineering, 6) repurposing just-so stories as hypotheses rather than explanations, and 7) attempting to explain unique events that lack comparative data.
Lynn (1990) attempted to integrate gonadotropin levels, testosterone, and prostate cancer into the theory, stating that by having fewer children and showing more care to them, non-African populations shifted to a K strategy, which led to a concomitant decrease in testosterone and, subsequently, aggressive tendencies (Rushton, 2000: 263). However, this is based on the false assumption that testosterone is directly responsible for aggression—that as testosterone increases, so does aggression. This gets cause and effect backwards: aggression leads to an increase in testosterone, so Lynn's explanation fails.
Rushton then comes along and champions Lynn's "contributions to science" (Rushton, 2012), praising Lynn's theory as explaining why northerly populations evolved higher IQs and larger brains than southerly populations (Rushton, 2005), while making the grandiose claim that "documenting global race differences in intelligence and analysing how these have evolved may be his crowning achievement" (Rushton, 2012: 855). Rushton reviewed Lynn's book on Amazon and then again in the white nationalist magazine VDare. Of course Rushton would go to bat for Lynn, since Lynn's theory is a cornerstone of Rushton's r/K selection theory, to which we will now turn.
Cold winter theory – Rushton
Starting in 1985, Rushton argued that there was a suite of dozens of traits on which the races differed (Rushton, 1985). He collated his arguments in his first book, Race, Evolution, and Behavior (Rushton, 1995), arguing that what explained the differences in these traits between his races was the set of selective factors that influenced and dictated survival in those environments. Rushton and Jensen (2005: 265-266; cf Andrade and Redondo, 2019) argued that there are genetically-driven differences in IQ scores between races (blacks and whites, in this instance), and that one of the largest reasons for these differences was the different types of environments the two races were exposed to:
Evolutionary selection pressures were different in the hot savanna where Africans lived than in the cold northern regions Europeans experienced, or the even colder Arctic regions of East Asians. These ecological differences affected not only morphology but also behavior. It has been proposed that the farther north the populations migrated out of Africa, the more they encountered the cognitively demanding problems of gathering and storing food, gaining shelter, making clothes, and raising children successfully during prolonged winters (Rushton, 2000). As these populations evolved into present-day Europeans and East Asians, the ecological pressures selected for larger brains, slower rates of maturation, and lower levels of testosterone—with concomitant reductions in sexual potency, aggressiveness, and impulsivity; increases in family stability, advanced planning, self-control, rule following, and longevity; and the other characteristics listed in Table 3.
So this is where Rushton's r/K selection comes in. He proposed that "some groups of people are more K selected than others" (Rushton, 1990: 137). If some groups are more K selected than others, then some groups would have different trait values when compared to others, and this seems to support Rushton's theory. However, the data Rushton marshals can be explained environmentally, without appealing to genetics (Gorey and Cryns, 1995), and his findings have not been independently replicated (Peregrine, Ember, and Ember, 2003).
Devastating Objections to CWT
Objection 1: When it comes to CWT, we have a perfect example of ideas and beliefs that shift with the times based on current observations. Aristotle argued that since the ancient Greeks occupied the middle geographic position between Asia and the rest of Europe, they were both spirited and intelligent and therefore remained free; those who inhabited cold places in Europe had spirit but lacked intelligence and skill, while those in Asia were intelligent and skillful in temperament but subject to slavery. The Greeks, right in the middle—just right, like Goldilocks—had all of the good traits and none of the bad traits they associated with those in other geographic locales. Meloni (2019: 42) cites one Roman officer who advised against recruiting individuals from cold climates "as they had too much blood and, hence, inadequate intelligence. Instead, he argued, troops from temperate climates be recruited, as they possess the right amount of blood, ensuring their fitness for camp discipline (Irby, 2016)." This is solid evidence that who is or is not "intelligent" can and has changed with the times, along with other explanations of differences between people. This proves the contingency of the concept of "more intelligent people": people will marshal any kind of evidence for their pet theories based on what they have observed at the time and work backwards to form an argument, a kind of inference to the best explanation. Thus, an evolutionary psychologist or IQ-ist transported back to antiquity would have formulated a different theory of intelligence, one obviously at odds with what they try to argue for today.
Objection 2: In 2019, I contrasted the CWT with the vitamin D hypothesis. I argued that there was one successful novel prediction made by the VDH—namely the convergent evolution of skin color in hominids that left Africa (Chaplin and Jablonski, 2009: 452), which was successfully predicted by Jablonski and Chaplin (2000). I wrote:
If high 'intelligence' is supposedly an adaptation to cold temperatures, then what is the observation that disconfirms a byproduct hypothesis? On the other hand, if 'intelligence' is a byproduct, which observation would disconfirm an adaptationist hypothesis? No possible observation can confirm or disconfirm either hypothesis, therefore they are just-so stories. A byproduct explanation would explain the same phenomena, since byproducts are also inherited—one could just as well say that 'intelligence' is a byproduct of, say, needing larger heads to dissipate heat (Lieberman, 2015). One can make any story they want to fit the data, but if there is no prediction of novel facts then how useful is the hypothesis if it explains the data it purports to explain and only the data it purports to explain?
It is possible to think up any kind of story to explain any observation and give it an air of scientific objectivity. Of course it is possible to argue that other climates can select for higher intelligence, as Anderson (1991), Graves (2002), and Flynn (2019) have argued. Sternberg, Grigorenko, and Kidd (2005) have also argued that it is possible to think of any kind of explanation/story for any kind of observed data. Nevertheless, the fact of the matter is this: there is no reason to accept the CWT, since there is no independent evidence for the theory in question.
Objection 3: If the Lynn-Rushton CWT were correct, then we would observe lower variation in IQ scores among whites and Asians, since it is well accepted that natural selection reduces genetic variation in traits that are important for survival (Howe, 1997: 70; Richardson, 2017: 46). On the hereditarian conception, intelligence is of course important for survival, so if the hereditarian argument for CWT were true, we should observe lower variance in IQ in whites and Asians compared to blacks—but we don't see this. (Also see Bird, 2020 for an argument against the hereditarian hypothesis, showing that there is no natural selection on cognitive performance in blacks and whites.)
Objection 4: Hereditarians have relied on the concept of heritability for decades. The claim is: if T is highly heritable, then T has a genetic component, and what explains the variance in T is genetics, not environment. Many critiques of the heritability concept have been mounted (e.g., Moore and Shenk, 2016), and they spell trouble for the hereditarian CWT and the hereditarian hypothesis as a whole. These estimates are derived from highly confounded studies, and so the "laws" derived from them are anything but.
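For reference, the standard quantitative-genetic variance decomposition behind heritability estimates (textbook material, not specific to any one study cited here) makes the confounding problem explicit:

```latex
% Phenotypic variance decomposes into genetic, environmental,
% covariance, and interaction components; heritability is a ratio.
V_P = V_G + V_E + 2\,\mathrm{Cov}(G,E) + V_{G \times E}, \qquad h^2 = \frac{V_G}{V_P}
```

The ratio h² isolates a "genetic" contribution only if Cov(G,E) and the interaction term are assumed to be zero—precisely the assumptions that the confounded twin, family, and adoption designs cannot warrant.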
Objection 5: Rushton and Lynn posit that Asians are K- while Africans are r-selected. Rushton rightly stated that Africans endure endemic and infectious disease, but wrongly stated that this was an r trait. He also stated that cold winters shaped K traits in Asian and European populations. However, based on the (then-accepted) tenets of r/K selection, it would actually be Africans that are K and Asians that are r, since groups that move out of environments they evolved in and into new ones are freed from density-dependent control (Anderson, 1991: 59).
Objection 6: The irreducibility of the mental to the physical means that psychology can't be an object of selection, since it is not physical. Intelligence is posited as a psychological trait, so it cannot be selected. This is a devastating objection not only to the CWT but to most hereditarian hypotheses, which reduce mental states to brain states or genes. Such irreducibility arguments make hereditarianism untenable.
Arguments against CWT
With all this being said, here are a few arguments derived from the discussion above. As established, the CWT hardly has any evidentiary basis; it is merely the argument of ideologues.
P1: If CWT were true, then there would be independent evidence for it.
P2: There is no independent evidence for the CWT.
P3: The correlation between race and IQ is better explained by social and environmental factors than by the CWT.
P4: The evidence cited in support of the CWT, including Lynn’s national IQ data, is fraudulent and lacks scientific rigor.
C: Therefore, the CWT is false.
Premise 1: This is a basic tenet of scientific explanation. Independent evidence is evidence that was not used in the construction of the hypothesis—evidence not based on the original assumptions or data used to construct it. The only evidence for CWT is the observation of differences in IQ between people who inhabit different geographic locations. So if CWT were true, there should be independent, novel evidence to support it; if there is, that raises the probability that the proposed state of affairs is true. Independent, novel evidence is important since it helps confirm or disconfirm a theory or hypothesis by providing additional support from sources that were not originally taken into account. Evidence is novel when it is not already known or expected on the basis of prior knowledge or previous observations. So in order for CWT to be scientifically valid, there would need to be independent evidence showing a direct causal link between cold winters and intelligence.
Premise 2: This is a denial of the claim that there is independent evidence that supports CWT, on the accepted definition of “novel, independent evidence.”
Premises 3 and 4: These two premises are linked—access to education along with nutrition better explains the relationship between latitude and IQ. There is also the fact that Lynn's "national IQs" are fraudulent (Sear, 2022). Thus, there is no evidentiary reason to accept Lynn's IQs (the only reasons are bias and that they "explain" the differing civilizational states of different races). It's merely working backwards (returning to reverse engineering): they have their conclusion in mind and then construct an argument to prove it.
So the conclusion follows—CWT is false, since there is no independent, novel evidence for it. The only reason to believe it is bias in thinking against groups of people.
P1: The CWT suggests that differences in average IQ scores between racial groups can be largely explained by differences in the coldness of the winter climates that these groups evolved in.
P2: All of the evidence used to support the CWT is based on previously existing data, such as Lynn’s national IQ data or historical temperature records.
P3: There is no new independent evidence that supports the CWT beyond this existing data.
C: Thus, there is no novel, independent evidence for the CWT.
Or
P1: If there is new independent evidence for the CWT, then the CWT can be independently supported.
P2: There is no novel independent evidence for CWT beyond the existing data.
C: So the CWT cannot be supported by new independent evidence.
These arguments are valid, and I hold them to be sound, based on the discussion in this article and my previous articles on CWT and the prediction of novel facts.
Conclusion
We don't need evolutionary stories to explain IQ differences between countries (Wicherts, Borsboom, and Dolan, 2010). Lynn's national IQ data is highly suspect and should not be used (Sear, 2022). High intelligence would be useful in all environments. The Rushton-Lynn CWT states that those who migrated to more northerly, colder biomes needed to plan ahead for the winter and to plan and create hunting parties to procure food. This, of course, is ridiculous, because you need to plan ahead to survive anywhere. Moreover, Will et al (2021) state that their:
analyses detected no such association of temperature with brain size. … These results suggest that brain size within Homo is less influenced by environmental variables than body size during the past 1.0 Ma.
This is of course a huge strike against the Rushton-Lynn CWT. Anthropological evidence also conflicts with the CWT (MacEachern, 2006).
Since I have shown that the evidentiary basis of the CWT doesn't hold, it isn't logical to hold the belief that the CWT is true. Views like this are expressed in Rushton (2000: 228-231), Jensen (1998: 170, 434-436), and Lynn (2006: Chapters 15, 16, and 17). Since the main proponents of the model hold eugenicist ideas, it can be posited that they have underlying ulterior motives for pushing this theory. Even the claim that there is "molecular genetic evidence" for CWT fails, due, again, to the irreducibility of the mental.
Nevertheless, there is no novel, independent evidence for the belief that cold winters shaped our minds and racial differences in psychological traits after the exodus out of Africa. There can be no evidence for it since we lack time machines and we can’t deconfound correlated traits. So these considerations point to the conclusion that the CWT is a mere story based on data which was then used to work backwards from an already-held conclusion. Thus, CWT is false.
Rushton, Lynn, Kanazawa (2008, 2012) (Kanazawa assumed a flat earth in his 2008 paper; Wicherts et al, 2012), Hart (2009), and Winegard, Winegard, and Anomaly (2020), therefore, are nothing more than just-so storytellers, since they lack novel evidence for their assertions. So the so-called argument for evolutionary differences in intelligence/IQ rests on a house of cards that is simple to push over. The six objections laid out in this article are devastating for the CWT. There never was any evidentiary support for CWT—the kind that scientific hypotheses need in order to be valid; it is merely an ideological series of statements, not an actual scientific hypothesis.
The Distinction Between Psychological and Racial Hereditarianism
2000 words
Introduction
The hereditarian hypothesis posits that genetic/biological factors are responsible for IQ ("intelligence") and other psychological traits. The claim is, basically, that IQ is heritable—heritable on the basis of twin, family, and adoption studies, along with results from GCTA, GWAS, and other newer tools that were created to lend credence to the twin, family, and adoption estimates.
I have distinguished before between what I call "psychological hereditarianism" and "racial hereditarianism." In this article, I will draw the distinction out further; while psychological hereditarianism isn't necessarily racist, it can be used for racist aims.
Psychological hereditarianism
Psychological hereditarianism is the belief that psychological differences between people are due largely to genetic or biological factors rather than environmental ones. Claims such as this have been coming from twin studies for decades, and it has been commonly said that such studies have proven that aspects of our psychological constitution are genetically heritable, that is, genetically transmitted.
Four kinds of studies exist which lend credence to psychological hereditarianism—family studies, twin studies, adoption studies, and GWAS.
Family studies
Family studies examine the similarities between individuals of the same family when it comes to their cognitive abilities (scores on IQ tests). These studies show that those who share more genes have more similar scores than those who share fewer. To the hereditarian, this is evidence for the hypothesis that genetic factors contribute to psychological traits and to differences in them. Correlations are used to measure the strength of the relationship: a correlation of .5 is expected between siblings, as they share half of their genes on average, while the expected correlation between unrelated individuals is 0, since they presumably don't share genes (that is, they're not from the same family).
However, there is one huge issue for family studies—environmental confounding (see the sketch below). While people in the same family of course share genes, they also share the same environments. So family studies can't be used as evidence for the psychological hereditarian hypothesis. Behavioral geneticists agree that these studies can't be used for the genetic hypothesis for psychological traits, though they disagree about the implications of this point for the next kind of study I will discuss.
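Here is a minimal sketch of the confound in Python (purely hypothetical numbers, no real data): sibling IQ scores generated with no genetic effect at all—each pair shares only a family-level environment—still produce the .5 correlation that hereditarians read as genetic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: sibling "IQ" scores with ZERO genetic contribution.
# Each pair shares only a family-level environmental component.
n_families = 50_000
family_env = rng.normal(0, 10, n_families)               # shared family environment
sib1 = 100 + family_env + rng.normal(0, 10, n_families)  # plus unique environment
sib2 = 100 + family_env + rng.normal(0, 10, n_families)

r = np.corrcoef(sib1, sib2)[0, 1]
print(f"sibling correlation with no genetic effect: {r:.2f}")  # ~0.50
```

A .5 sibling correlation is exactly what the additive-genetic story predicts, yet here it is produced by shared environment alone—which is why family resemblance, by itself, cannot adjudicate between the two explanations.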
Twin studies
Twin studies again use the correlation coefficient, comparing twins raised together or "apart" to argue that genes play a substantial role in the etiology of psychological traits like "IQ." These studies have found that identical twins have more similar cognitive abilities than fraternal twins, which to twin researchers points to the conclusion that genetic factors contribute substantially to IQ and other psychological traits. However, the main limitation of such studies comes down to twins reared together: it is assumed that identical and fraternal twins share equally similar environments. This claim, as admitted by twin researchers themselves, is false (Joseph, 2014; Joseph et al, 2015). They then pivot to two arguments—Argument A and Argument B (Joseph et al, 2015)—but A is merely circular, and B needs to be shown to be true by twin researchers; that is, they need to identify and rule out trait-relevant environmental factors.
Limitations of twin studies include: not being generalizable to the general population; being based on many of the same (false) assumptions that were originally formulated in the 1920s at the advent of twin studies; findings that are misunderstood and blown out of proportion; volunteer/recruitment bias; and the inability to disentangle G and E, since they interact (Sahu and Prasuna, 2016). The "advantages" of these studies aren't even advantages, since it is conceptually impossible to tease out the relative contributions of G and E to a trait. Nevertheless, twin studies don't show that psychological hereditarianism is true, and perhaps the most famous twin study of all—the MISTRA—hid the data of its fraternal twins (the controls). Joseph (2022) has an in-depth critique of the MISTRA and why conclusions from it should be outright rejected.
Adoption studies
The issues with adoption studies are as large as the issues with twin studies. Assignment of adoptees to homes isn't random: agencies look for homes that are similar to the home of the biological mother. This restriction of range reduces the correlation between the adopted children and adoptive parents. Adoptees also experience the womb of their biological mothers (obviously). The adoptive parents are also given information about the adoptee's family, and this, along with conscious and unconscious treatment of the adoptee, may help make the adopted child different (see Richardson and Norgate, 2006; Moore, 2006; Joseph, 2014). Basically, the additive gene model is false, and adoptions don't simulate a randomized design.
GWAS
The larger issue at hand is how the aforementioned studies have been used to search the genome for the genes purportedly behind the high heritabilities of IQ, which has led to the creation of polygenic scores. GWASs examine the association between genes and IQ in large samples, comparing the genomes of people who have a certain trait and looking for correlations between genes and the trait in that population. GWASs may miss rare variants with large effects, and they show only associations between genes and traits, not causation. Another issue is population stratification—"differences in allele frequencies between cases and controls due to systematic differences in ancestry" (Freedman et al, 2004). GWAS results are compromised by this stratification, and attempts to correct for it have been found wanting (Richardson, 2017; Richardson and Jones, 2019; Richardson, 2022). Larger sample sizes won't help the endeavor of proving that genes contribute to psychological traits either—since large databases contain arbitrary correlations, increasing the sample size greatly increases the chance of spurious correlations (Calude and Longo, 2017). At the end of the day, the associations found are weak and could well be meaningless (Noble, 2018). There is also the fact that PGS ignore development and epigenetics (Moore, 2023). Basically, genes don't work how hereditarians need them to.
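Calude and Longo's point is easy to demonstrate with a toy simulation (arbitrary sizes, random data, no real genotypes): correlate a trait that has no genetic signal whatsoever against enough random "SNPs" and hundreds of nominally "significant" hits appear by chance alone.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical data: 1,000 people, 10,000 random "SNPs", and a trait
# with no genetic signal at all.
n_people, n_snps = 1_000, 10_000
trait = rng.normal(size=n_people)
snps = rng.integers(0, 3, size=(n_snps, n_people)).astype(float)  # 0/1/2 allele counts

# Pearson correlation of every SNP with the trait, vectorized
snps -= snps.mean(axis=1, keepdims=True)
trait_c = trait - trait.mean()
r = (snps @ trait_c) / (np.sqrt((snps**2).sum(axis=1)) * np.sqrt((trait_c**2).sum()))

# |r| > 0.062 corresponds roughly to p < .05 at n = 1,000
hits = int((np.abs(r) > 0.062).sum())
print(f"nominally 'significant' SNPs: {hits} of {n_snps}")  # ~500, i.e. ~5%
```

Real GWASs correct for multiple comparisons, of course, but the sketch shows what sheer scale does: the larger the database, the more correlations it contains that have no causal interpretation at all, which is exactly Calude and Longo's argument.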
The fact of the matter is that these research methods continue to push the false dichotomy of nature vs. nurture (the first instance of which appeared in a 13th-century French novel on gender). There is also the fact that the "laws of behavioral genetics" rest on twin, family, and adoption studies. So if the assumptions of these studies are false, then there is no reason to accept the conclusions drawn from them. There are no "laws" in biology, especially not the "laws of behavioral genetics."
Racial hereditarianism
Racial hereditarianism, on the other hand, is the belief that there are inherent, genetic differences in cognitive ability and other psychological traits between racial and ethnic groups. One—most often unstated—claim is that one group of people is inferior to another (as evidenced by the labels of the categories Terman used), and it has been used to justify discriminatory policies and the forced sterilization of people found to have lower IQs. Genetic inheritance, on this view, explains how and why some races have higher IQs than others.
The most famous racial hereditarians are Lynn, Rushton, and Jensen. Over the last 50+ years, these authors dedicated their lives to proving that certain racial groups have higher IQs than others for genetic reasons. These differences aren't due just to environment or culture, they say; there is a significant genetic component to the differences in scores between racial and ethnic groups. Since IQ is related to success in life—that is, since IQ is said to be needed for success—what explains average differences in life outcomes between racial and ethnic groups, on this view, are their IQs, and the ultimate cause is their genes. Due to the strength of genetic factors on IQ, they say (like Jensen), social programs are doomed to fail.
The argument against psychological hereditarianism and racial hereditarianism
The argument against both is simple: the mental is irreducible to the physical. While there are of course correlations between "traits" like IQ and genes, that doesn't mean the relationship is causal, and due to the irreducibility of the mental to the physical, we can't find what hereditarians need us to find in order to prove their theses.
P1: If racial hereditarianism is true, then cognitive differences between racial groups are primarily due to genetic factors.
P2: There is no empirical (or logical) evidence that supports the claim that cognitive differences between racial groups are primarily due to genetic factors.
C: Thus, racial hereditarianism is false.
P1: If psychological hereditarianism is true, then individual differences in psychological traits are due primarily to genetic factors.
P2: There is no empirical (or logical) evidence that supports the claim that individual differences in psychological traits are primarily due to genetic factors.
C: Thus, psychological hereditarianism is false.
The irreducibility of mental states/psychological traits means that it's impossible for them to be caused or influenced by genetics, meaning that both psychological and racial hereditarianism are false. Both rely, as an unstated assumption, on a type of physicalism on which mental states can be reduced to genes or to the brain/brain states. Both kinds are physicalist theories of mind, and since physicalism is false, so are psychological and racial hereditarianism. This is yet more evidence that hereditarianism is false, and it strengthens the argument for banning IQ tests.
Conclusion
Both forms of hereditarianism I've discussed here are false, and they are ultimately false because the mental is irreducible to the physical. Both of them, however, are inherently reductionist and attempt to reduce people to their genes or their brains. They have, in the past, led to the sterilization of people deemed "unfit." Of course, the hereditarian hypothesis isn't necessarily racist, though it can be used for racist aims. It can also be used for classist aims: it can be launched at whatever a society deems "unfit," with attempts then made to correlate biological factors with what is deemed "unfit." The very notion that certain races are superior or inferior in intelligence is a form of racism, and such ideas have been used in the recent past to justify discriminatory policies against people. So while the psychological hereditarian hypothesis may not be racist (it could be classist, though), how it has been articulated and even put into practice is inherently racist. In any case, here's the argument that the hereditarian hypothesis is a racist hypothesis.
P1: If the hereditarian hypothesis is true, then differences in IQ and other traits among racial and ethnic groups are primarily due to genetic factors rather than environmental or social factors.
P2: Differences in IQ and other traits among racial and ethnic groups are not primarily due to genetic factors, but rather environmental or social factors.
C1: Therefore, the hereditarian hypothesis is not true.
P3: If the hereditarian hypothesis is not true, then it cannot be used to make claims about inferiority or superiority.
P4: The hereditarian hypothesis has historically been used to make claims about the innate superiority or inferiority of certain racial groups, thereby justifying discriminatory policies and harmful stereotypes.
C2: Therefore, the hereditarian hypothesis is a racist hypothesis.
I've shown exhaustively how P1 and P2 are true, so C1 follows from those two premises. P3 follows from the conclusion in C1, and P4 is a historical fact. So C2 follows. By referring to the hereditarian hypothesis as a racist hypothesis, I mean that the hypothesis has been entangled with racist and discriminatory policies since its inception.
So I have articulated a distinction between psychological and racial hereditarianism: psychological hereditarianism concerns the genetic transmission of psychological traits, while racial hereditarianism is the belief that there are inherent racial differences in psychological traits due to genetic differences between groups. While there are of course genetic differences between groups and individuals, it doesn't follow that those genetic differences cause differences in psychological traits, which is the main claim of hereditarianism. The issue of the reducibility of the mental isn't an empirical matter; it's a conceptual one. So the hereditarian hypothesis is refuted on conceptual, a priori grounds.
Knowledge, Culture, Logic, and IQ
5050 words
… what IQ tests actually assess is not some universal scale of cognitive strength but the presence of skills and knowledge structures more likely to be acquired in some groups than in others. (Richardson, 2017: 98)
For the past 100 years, the black-white IQ gap has puzzled psychometricians. There are two camps—hereditarians (those who believe that individual and group differences in IQ are due largely to genetics) and environmentalists/interactionists (those who believe that individual and group differences in IQ are largely due to differences in learning, exposure to knowledge, culture and immediate environment).
Knowledge
However, one of the most forceful arguments for the environmentalist side (i.e., that differences in IQ are due to the cultural and social environment; note that an interactionist framework can be used here, too) is from Fagan and Holland (2007). They show that half of the questions on IQ tests had no racial bias, whereas the other problems on the test were solvable only with a specific type of knowledge—knowledge found specifically in the middle class. So if blacks are more likely to be lower class than whites, then what explains lower test scores for blacks is differential exposure to knowledge—specifically, the knowledge needed to complete the items on the test.
But some hereditarians say otherwise—they claim that since knowledge is easily accessible to everyone, everyone who wants to learn something will learn it, and thus access to information has nothing to do with cultural/social effects.
A hereditarian can, for instance, state that anyone who wants to can learn the types of knowledge that are on IQ tests, since they are widely available everywhere. But racial gaps in IQ stay the same even though all racial groups have the same access to the specific types of cultural knowledge on IQ tests. Therefore, differences in IQ are not due to differences in one's immediate environment and what one is exposed to—they are due to innate, genetic differences between blacks and whites. Put into premise-and-conclusion form, the argument goes something like this:
P1 If racial gaps in IQ were due specifically to differences in knowledge, then anyone who wants to and is able to learn the stuff on the tests can do so for free on the Internet.
P2 Anyone who wants to and is able to learn stuff can do so for free on the Internet.
P3 Blacks score lower than whites on IQ tests, even though they have the same access to information if they would like to seek it out.
C Therefore, differences in IQ between races are due to innate, genetic factors, not any environmental ones.
This argument is strange. (1) One would have to assume that blacks and whites have the same access to knowledge—we know that lower-income people have less access to knowledge in virtue of the environments they live in. For instance, they may have libraries with low funding or bad schools with teachers who do not care enough to teach the students what they need to succeed on these standardized tests (IQ tests, the SAT, etc. are all different versions of the same test). (2) One would have to assume that everyone has the same motivation to learn what amounts to answers for questions on a test with no real-world implications. And (3) the type of knowledge one is exposed to dictates what one can tap into while attempting to solve a problem. All three of these factors can cascade to cause the racial gaps in IQ test performance.
Familiarity with the items on the tests allows faster processing of information, allowing one to correctly identify an answer in a shorter period of time. If we look at IQ tests as tests of middle-class knowledge and skills, and we rightly observe that blacks are more likely to be lower class while whites are more likely to be middle class, then it logically follows that the cause of differences in IQ between blacks and whites is cultural—not genetic—in origin. This paper—and others—solves the century-old debate on racial IQ differences: what accounts for differences in IQ scores is differential exposure to knowledge. Claiming that people have the same access to knowledge and simply won't learn it unless they seek it out does not make sense.
Differing experiences lead to differing amounts of knowledge. If differing experiences lead to differing amounts of knowledge, and IQ tests are tests of knowledge—culturally specific knowledge—then those who are not exposed to the knowledge on the test will score lower than those who are. Therefore, Jensen's Default Hypothesis is false (Fagan and Holland, 2002). Fagan and Holland (2002) compared blacks and whites on their knowledge of the meanings of words, which is highly "g"-loaded and shows black-white differences. They review research showing that blacks have lower exposure to words and are therefore unfamiliar with certain words (keep this in mind for the end). They mixed novel words in with previously known words to see if there was a difference.
Fagan and Holland (2002) picked random words from the dictionary and put them into sentences to give the testee some context. They carried out five experiments in all, and each one showed that, when equal opportunity was given to the groups, they were "equal in knowledge" (IQ); whites were simply more likely to know the items of the kind found on IQ tests. Thus, there were no racial differences between blacks and whites when looked at from an information-processing point of view; to explain racial differences in IQ, we must look to differences in the cultural/social environment. Fagan (2000), for instance, states that "Cultures may differ in the types of knowledge their members have but not in how well they process. Cultures may account for racial differences in IQ."
The results of Fagan and Holland (2002) are completely at odds with Jensen's Default Hypothesis—that the 15-point gap in IQ is due to the same genetic and environmental factors that underlie individual differences within a group. As Fagan and Holland (2002: 382) show:
Contrary to what the default hypothesis would predict, however, the within racial group analyses in our study stand in sharp contrast to our between racial group findings. Specifically, individuals within a racial group who differed in general knowledge of word meanings also differed in performance when equal exposure to the information to be tested was provided. Thus, our results suggest that the average difference of 15 IQ points between Blacks and Whites is not due to the same genetic and environmental factors, in the same ratio, that account for differences among individuals within a racial group in IQ.
Exposure to information is critical, in fact. For instance, Ceci (1996) shows that familiarity with words dictates the speed of processing used in identifying the correct answer to a problem. In regard to differences in IQ, Ceci (1996) does not deny the role of biology—indeed, it's part of his bio-ecological model of IQ, a theory that postulates the development of intelligence as an interaction between biological dispositions and the environment in which those dispositions manifest themselves. Ceci (1996) does note that there are biological constraints on intelligence, but that "… individual differences in biological constraints on specific cognitive abilities are not necessarily (or even probably) directly responsible for producing the individual differences that have been reported in the psychometric literature." That such potentials may be "genetic" in origin, of course, does not license the claim that genetic factors contribute to variance in IQ: "Everyone may possess them to the same degree, and the variance may be due to environment and/or motivations that led to their differential crystallization." (Ceci, 1996: 171)
Ceci (1996) further shows that people can differ in intellectual performance due to three things: (1) the efficiency of the underlying cognitive potentials relevant to the cognitive ability in question; (2) the structure of knowledge relevant to the performance; and (3) the contextual/motivational factors relevant to crystallizing the underlying potentials gained through one's knowledge. Thus, if one lacks knowledge of the items on the test due to what one learned in school, then the test will be biased against them, since they did not learn the relevant information on the tests.
Cahan and Cohen (1989) note that nine-year-olds in fourth grade had higher IQs than nine-year-olds in third grade. This is to be expected if we take IQ scores as indices of culture-specific knowledge and skills, because fourth-graders have been exposed to more information than third-graders. In virtue of being exposed to more information than their same-age cohort in a lower grade, they score higher on IQ tests.
Cockroft et al (2015) studied South African and British undergrads on the WAIS-III. They conclude that "the majority of the subtests in the WAIS-III hold cross-cultural biases", and that this is "most evident in tasks which tap crystallized, long-term learning, irrespective of whether the format is verbal or non-verbal", so "This challenges the view that visuo-spatial and non-verbal tests tend to be culturally fairer than verbal ones (Rosselli and Ardila, 2003)".
IQ tests "simply reflect the different kinds of learning by children from different (sub)cultures: in other words, a measure of learning, not learning ability, and are merely a redescription of the class structure of society, not its causes … it will always be quite impossible to measure such ability with an instrument that depends on learning in one particular culture" (Richardson, 2017: 99-100). This is the logical position to hold: if IQ tests test class-specific knowledge and certain classes are not exposed to the items, then they will score lower. Therefore, since IQ tests are tests of a certain kind of knowledge, IQ tests cannot be "a measure of learning ability"; contra Gottfredson, "g" or "intelligence" (IQ test scores) cannot be called "basic learning ability", since we cannot create culture-free—knowledge-free—tests, because all human cognizing takes place in a cultural context from which it cannot be divorced.
Since all human cognition takes place through the medium of cultural/psychological tools, the very idea of a culture-free test is, as Cole (1999) notes, ‘a contradiction in terms . . . by its very nature, IQ testing is culture bound’ (p. 646). Individuals are simply more or less prepared for dealing with the cognitive and linguistic structures built in to the particular items. (Richardson, 2002: 293)
Heine (2017: 187) gives some examples of the World War I Alpha Test:
1. The Percheron is a kind of
(a) goat, (b) horse, (c) cow, (d) sheep.
2. The most prominent industry of Gloucester is
(a) fishing, (b) packing, (c) brewing, (d) automobiles.
3. “There’s a reason” is an advertisement for
(a) drink, (b) revolver, (c) flour, (d) cleanser.
4. The Knight engine is used in the
(a) Packard, (b) Stearns, (c) Lozier, (d) Pierce Arrow.
5. The Stanchion is used in
(a) fishing, (b) hunting, (c) farming, (d) motoring.
Such test items are similar to those on modern-day IQ tests. See, for example, Castles (2013: 150), who writes:
One section of the WAIS-III, for example, consists of arithmetic problems that the respondent must solve in his or her head. Others require test-takers to define a series of vocabulary words (many of which would be familiar only to skilled readers), to answer school-related factual questions (e.g., "Who was the first president of the United States?" or "Who wrote the Canterbury Tales?"), and to recognize and endorse common cultural norms and values (e.g., "What should you do if a sales clerk accidentally gives you too much change?" or "Why does our Constitution call for division of powers?"). True, respondents are also given a few opportunities to solve novel problems (e.g., copying a series of abstract designs with colored blocks). But even these supposedly culture-fair items require an understanding of social conventions, familiarity with objects specific to American culture, and/or experience working with geometric shapes or symbols. [Since these are questions found on the WAIS-III, go back and read Cockroft et al (2015), since they used the British version, which, of course, is similar.]
If one is not exposed to the structure of the test along with the items and information on them, how can we say that the test is "fair" to other cultural groups (social classes included)? If all tests are culture-bound and different groups of people have different cultures, histories, etc., then they will score differently by virtue of what they know. This is why it is ridiculous to state so confidently that IQ tests—however imperfectly—test "intelligence." They test certain skills and knowledge more likely to be found in certain groups/classes than others—specifically in the dominant group. So what dictates IQ scores is differential access to knowledge (i.e., cultural tools) and to how to use such cultural tools (which then become psychological tools).
Lastly, take an Amazonian people called the Pirahã. They have a different counting system than we do in the West, called the "one-two-many" system, "where quantities beyond two are not counted but are simply referred to as 'many'" (Gordon, 2005: 496). A Pirahã adult was shown an empty can. The investigator put six nuts into the can and took five out, one at a time, then asked the adult if there were any nuts remaining in the can—the man answered that he had no idea. Everett (2005: 622) notes that "Pirahã is the only language known without number, numerals, or a concept of counting. It also lacks terms for quantification such as 'all,' 'each,' 'every,' 'most,' and 'some.'"
(hbdchick, quite stupidly, wrote on Twitter: "remember when supermisdreavus suggested that the tsimane (who only count to s'thing like two and beyond that it's 'many') maybe went down an evolutionary pathway in which they *lost* such numbers genes?" Riiiight. Surely the Tsimane "went down an evolutionary pathway in which they *lost* such numbers genes." This is the idiocy of "HBDers" in action. Of course, I wouldn't expect them to read the actual literature beyond knowing something basic—Tsimane numbers beyond "two" are referred to as "many"—and then positing a just-so story for why they don't count above "two.")
Non-verbal tests
Take a non-verbal test, such as the Bender-Gestalt test. There are nine index cards with different geometrical designs on them, and the testee needs to copy what he saw before the next card is shown. The testee is then scored on how accurate his recreation of the design is. Seems culture-fair, no? It's just shapes and similar figures—how would that be influenced by class and culture? One would, on a cursory look, claim that such tests have no basis in knowledge structure and exposure and so would rightly be called "culture-free." But while the figures that appear on such tests (like the Raven's) are novel, the rules governing them are not.
Hoffman (1966) studied 80 children (20 Kickapoo Indians (KIs), 20 low-SES blacks (LSBs), 20 low-SES whites (LSWs), and 20 middle-class whites (MCWs)) on the Bender-Gestalt test. The Kickapoo were selected from 5 urban schools; the blacks from majority-black elementary schools in Oklahoma City; the low-SES whites from low-SES areas of Oklahoma; and the middle-class whites from majority-white schools in Oklahoma. All of the children were 8-10 years old and in the third grade, and all had IQs in the range of 90-110. They were matched on a whole slew of variables. Hoffman (1966: 52) states "that variations in cultural and socio-economic background affect Bender Gestalt reproduction."
Hoffman (1966: 86) writes that:
since the four groups were shown to exhibit no significant differences in motor, or perceptual discrimination ability it follows that differences among the four groups of boys in Bender Gestalt performance are assignable to interpretative factors. Furthermore, significant differences among the four groups in Bender performance illustrates that the Bender Gestalt test is indeed not a so called “culture-free” test.
Hoffman concluded that MCWs, KIs, LSBs, and LSWs did not differ in copying ability, nor did they differ significantly in discriminating between the different phases of the Bender-Gestalt; there was also no bias in the figures depicting the two different sexes. They did differ in their reproductions of Bender-Gestalt designs, and their differing performance can, of course, be interpreted differently by different people. If we start from the assumption that all IQ tests are culture-bound (Cole, 2004), then living in a culture different from the majority culture will lead one to score differently, in virtue of having different, culture-specific knowledge and experience. The four groups looked at the test in different ways, too. Thus, the main conclusion is that:
The Bender Gestalt test is not a “culture-free” test. Cultural and socio-economic background appear to significantly affect Bender Gestalt reproduction. (Hoffman, 1966: 88)
Drame and Ferguson (2017) and Dutton et al (2017) also show that there is bias in the Raven's test in Mali and Sudan. This, of course, is due to differential exposure to the types of problems on the items (Richardson, 2002: 291-293): their cultures do not expose them to the kinds of items on the test, and they will therefore score lower in virtue of not being exposed to them. Richardson (1991) took 10 of the hardest Raven's items and couched them in familiar terms with familiar, non-geometric objects. Twenty eleven-year-olds performed far better on the new items than the originals, even though the problems Richardson (1991) devised used the same exact logic. This, obviously, shows that the Raven is not a "culture-free" measure of inductive and deductive logic.
The Raven is administered in a testing environment, which is itself a cultural device. Testees are handed a paper with black and white figures ordered from left to right. (Note that Abdel-Khalek and Raven (2006: 171) write that Raven's items "were transposed to read from right to left following the custom of Arabic writing." So this is another way that the tests are culture-bound and therefore not "culture-free.") Richardson (2000: 164) writes that:
For example, one rule simply consists of the addition or subtraction of a figure as we move along a row or down a column; another might consist of substituting elements. My point is that these are emphatically culture-loaded, in the sense that they reflect further information-handling tools for storing and extracting information from the text, from tables of figures, from accounts or timetables, and so on, all of which are more prominent in some cultures and subcultures than others.
Richardson (1991: 83) quotes Keating and Maclean (1987: 243), who argue that tests like the Raven "tap highly formal and specific school skills related to text processing and decontextualized rule application, and are thus the most systematically acculturated tests" (their emphasis). Keating and Maclean (1987: 244) also state that the variation in scores between individuals is due to "the degree of acculturation to the mainstream school skills of Western society" (their emphasis). That's the thing: all testing is biased toward a certain culture in virtue of the kinds of things its members are exposed to—not being exposed to the items and structure of the test means that the test is, in effect, biased against certain cultural/social groups.
Davis (2014) studied the Tsimane, a people from Bolivia, on the Raven. Average eleven-year-olds scored 78 percent or more of the questions correct, whereas lower-performing individuals answered 47 percent correct. Eleven-year-old Tsimane, though, answered only 31 percent correct. Another group of Tsimane went to school and lived in villages—not in the rainforest like the first group—and they scored 72 percent correct, compared to the unschooled Tsimane's 31 percent. "… the cognitive skills of the Tsimane have developed to master the challenges that their environment places on them, and the Raven's test simply does not tap into those skills. It's not a reflection of some kind of true universal intelligence; it just reflects how well they can answer those items" (Heine, 2017: 189). Thus, what "intelligence" tests measure is not an innate skill but something learned through experience—what we learn from our environments.
Heine (2017: 190) discusses as-yet-unpublished results on the Hadza, who are known as "the most cognitively complex foragers on Earth." So "the most cognitively complex foragers on Earth" should be pretty "smart," right? Well, the Hadza were given six-piece jigsaw puzzles to complete—the kinds of puzzles that American four-year-olds do for fun. They had never seen such puzzles before and so were stumped as to how to complete them. Even those who were able to complete them took several minutes. Is the conclusion then licensed that the Hadza are less smart than four-year-old American children? No! The jigsaw puzzle is a specific cultural tool that the Hadza had never seen before, and so their performance mirrored their unfamiliarity with the test.
Logic
The term "logical" comes from the Greek term logos, meaning "reason, idea, or word." So "logical reasoning" is reasoning based on reason and sound ideas, irrespective of bias and emotion. A simple syllogistic structure could be:
If X, then Y
X
∴ Y
We can substitute terms, too, for instance:
If it rains today, then I must bring an umbrella.
It’s raining today.
∴ I must bring an umbrella.
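As an aside, the validity of this pattern can even be machine-checked. Here is a minimal sketch in Lean (the theorem name is mine, for illustration):

```lean
-- Modus ponens: from "if P then Q" and "P", conclude "Q".
theorem modus_ponens (P Q : Prop) (h : P → Q) (hp : P) : Q :=
  h hp
```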
Richardson (2000: 161) notes how cross-cultural studies show that what is or is not logical thinking is neither objective nor simple, but "comes in socially determined forms." He notes how the cross-cultural psychologist Sylvia Scribner presented syllogisms to Kpelle farmers, couched in terms familiar to them. One syllogism given to them was:
All Kpelle men are rice farmers
Mr. Smith is not a rice farmer
Is he a Kpelle man? (Richardson, 2000: 162)
The respondent repeatedly replied that he did not know Mr. Smith, so how could he know whether or not he was a Kpelle man? Another example was:
All people who own a house pay a house tax
Boima does not pay a house tax
Does Boima own a house? (Richardson, 2000: 162)
The answer here was that Boima did not have any money to pay a house tax.
In regard to the first syllogism, the intended answer is that since Mr. Smith is not a rice farmer, he is not a Kpelle man. Regarding the second, since Boima does not pay a house tax, Boima does not own a house. The respondent, however, was in effect reasoning with a syllogism like this:
All the deductions I can make are about individuals I know.
I do not know Mr. Smith.
Therefore I cannot make a deduction about Mr. Smith. (Richardson, 2000: 162)
They reason with terms familiar to them, and so they get the answer right for their culture, based on the knowledge they have. These examples show that what passes for "logical reasoning" depends on the time and place in which it is uttered. The deductions the Kpelle made were perfectly valid, though they were not the ones the syllogism-designers had in mind (the designers' intended deduction is also formally valid, as the sketch below shows). In fact, I would say that there are many equally valid ways of answering such syllogisms, and the answers will vary by culture and custom.
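For completeness, the test-designers' intended reading of the first Kpelle syllogism can also be machine-checked. A minimal Lean sketch (the theorem and hypothesis names are mine, for illustration only):

```lean
-- "All Kpelle men are rice farmers; Mr. Smith is not a rice farmer;
--  therefore Mr. Smith is not a Kpelle man."
theorem kpelle_intended
    (Person : Type) (kpelle riceFarmer : Person → Prop)
    (allFarm : ∀ x, kpelle x → riceFarmer x)
    (smith : Person) (notFarmer : ¬ riceFarmer smith) :
    ¬ kpelle smith :=
  fun h => notFarmer (allFarm smith h)
```

The point of the cross-cultural studies is not that this inference is invalid, but that treating it as the only correct answer smuggles in a culturally specific way of handling the premises.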
The bio-ecological framework, culture, and social class
The bio-ecological model of Ceci and Bronfenbrenner is a model of human development that relies on gene-environment interaction. The model is a Vygotskian one, in that learning is a social process in which support from parents, teachers, and society at large plays an important role in the ontogeny of higher psychological functioning. (For a good primer on Vygotskian theory, see Vygotsky and the Social Formation of Mind; Wertsch, 1985.) It is thus a model of human development that most hereditarians would say they "use too." This is, of course, contested by Ceci, who compares his bio-ecological framework with other theories (Ceci, 1996: 220, table 10.1):
[Table 10.1, Ceci (1996: 220): a comparison of the bio-ecological framework with other theories of intelligence.]
Cognition (thinking) is extremely context-sensitive. Along with many ecological influences, individual differences in cognition are best understood within the bio-ecological framework, which consists of three components: (1) 'g' does not exist, but multiple cognitive potentials do; (2) motivational forces, the social and physical aspects of a task or setting, and the elaborateness of one's knowledge domains matter not only in the development of the human but also, of course, during testing; and (3) knowledge and aptitude are inseparable, "such that cognitive potentials continuously access one's knowledge base in the cascading process of producing cognitions, which in turn alter the contents and structure of the knowledge base" (Ceci, 1996: 123).
Block (1995) notes that "Blacks and Whites are to some extent separate cultural groups." Sternberg (2004) defines culture as "the set of attitudes, values, beliefs and behaviors shared by a group of people, communicated from one generation to the next via language or some other means of communication." In regard to social class (itself a form of culture), blacks and whites differ as well; Richardson (2002: 298) notes that "Social class is a compound of the cultural tools (knowledge and cognitive and psycholinguistic structures) individuals are exposed to; and beliefs, values, academic orientations, self-efficacy beliefs, and so on." The APA notes that "Social status isn't just about the cars we drive, the money we make or the schools we attend — it's also about how we feel, think and act …" And the APS notes that social class can be seen as a form of culture. Since culture is a set of attitudes, beliefs, and behaviors shared by a group of people, social classes are therefore forms of culture, as different classes have different attitudes, beliefs, and behaviors.
Ceci (1996: 119) notes that:
large-scale cultural differences are likely to affect cognition in important ways. One’s way of thinking about things is determined in the course of interactions with others of the same culture; that is, the meaning of a cultural context is always negotiated between people of that culture. This, in turn, modifies both culture and thought.
While Manstead (2018) argues that:
There is solid evidence that the material circumstances in which people develop and live their lives have a profound influence on the ways in which they construe themselves and their social environments. The resulting differences in the ways that working‐class and middle‐ and upper‐class people think and act serve to reinforce these influences of social class background, making it harder for working‐class individuals to benefit from the kinds of educational and employment opportunities that would increase social mobility and thereby improve their material circumstances.
In fact, the bio-ecological model of human development (and IQ) is a developmental-systems-type model. The types of things that go into the model are just like those in Richardson's (2002) "sociocognitive affective nexus." Richardson (2002) posits that the sources of IQ variation are mostly non-cognitive, writing that such factors include (pg 288):
(a) the extent to which people of different social classes and cultures have acquired a specific form of intelligence (or forms of knowledge and reasoning); (b) related variation in ‘academic orientation’ and ‘self-efficacy beliefs’; and (c) related variation in test anxiety, self-confidence, and so on, which affect performance in testing situations irrespective of actual ability
Cole (2004) concludes that:
Our imagined study of cross-cultural test construction makes it clear that tests of ability are inevitably cultural devices. This conclusion must seem dreary and disappointing to people who have been working to construct valid, culture-free tests. But from the perspective of history and logic, it simply confirms the fact, stated so clearly by Franz Boas half a century ago, that “mind, independent of experience, is inconceivable.”
The role of context is huge—and most psychometricians ignore it, as Ceci (1996: 107) quotes Bronfenbrenner (1989: 207) who writes:
It is a noteworthy feature of all preceding (cognitive approaches) that they make no reference whatsoever to the environment in which the person actually lives and grows. The implicit assumption is that the attributes in question are constant across place; the person carries them with her wherever she goes. Stating the issue more theoretically, the assumption is that the nature of the attribute does not change, irrespective of the context in which one finds one’s self.
Such contextual differences can be found in the intrinsic and extrinsic motivations of the individual in question. Self-efficacy, what one learns and how one learns it, and the motivation instilled by parents all form part of the context in which a specific individual develops, and that context then influences IQ scores (which are scores of middle-class knowledge and skills).
(For a primer on the bio-ecological model, see Armour-Thomas and Gopaul-McNicol, 1997; Papierno et al, 2005; Bronfenbrenner and Morris, 2007; and O'Toole, 2016.)
Conclusion
If blacks and whites are, to some extent, different cultural groups, then they will, by definition, have differing cultures. "[C]ultural differences are known to exist, and cultural differences can have an impact on psychological traits" (Prinz, 2014: 67, Beyond Human Nature), and also on the knowledge one acquires, which in turn partly dictates test scores. If blacks and whites are "separate cultural groups" (Block, 1995), and if they have different experiences by virtue of being different cultural groups, then they will score differently on any test of ability (including IQ; see Fagan and Holland, 2002, 2007), since all tests of ability are culture-bound (see Cole, 2004). The argument, spelled out (a schematic formalization follows the list):
1 Blacks and whites are different cultural groups.
2 If (1), then they will have different experiences, and thereby acquire different knowledge structures, by virtue of being different cultural groups.
3 If groups acquire different knowledge structures, then they will score differently on tests of ability, since all such tests are culture-bound.
4 So blacks and whites, being different cultural groups, will score differently on tests of ability.
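The argument is formally valid: it is just two chained applications of modus ponens. A schematic Lean sketch (the proposition names are mine, standing in for the premises above):

```lean
-- Premises taken as hypotheses; the conclusion follows by chaining them.
theorem culture_argument
    (DifferentCultures DifferentKnowledge DifferentScores : Prop)
    (p1 : DifferentCultures)
    (p2 : DifferentCultures → DifferentKnowledge)
    (p3 : DifferentKnowledge → DifferentScores) :
    DifferentScores :=
  p3 (p2 p1)
```

Whether the premises are true is, of course, the substantive question; the sketch only shows that, granting them, the conclusion follows.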
So, what accounts for the intercorrelations between tests of "cognitive ability"? Test constructors validate new tests against older, 'more established' tests, so "based on this it is unlikely that a measure unrelated to g will emerge as a winner in current practice … [so] it is no wonder that the intelligence hierarchy for different racial/ethnic groups remains consistent across different measures. The tests are highly correlated among each other and are similar in item structure and format" (Suzuki and Aronson, 2005: 321).
Therefore, what accounts for differences in IQ is not intellectual ability but cultural/social exposure to information, specifically the type of information used in the construction of IQ tests, along with test constructors attempting to build new tests that correlate with the old ones. They thereby get the foregone conclusion of there being racial differences in IQ, which they trumpet as evidence for a "biological cause"; but it is anything but, since such differences are built into the test (Simon, 1997). (Note that Fagan and Holland, 2002, also found evidence of test bias.)
Thus, we should draw the logical conclusion: what explains racial IQ differences is not biological factors but environmental ones, specifically differential exposure to knowledge, along with how new tests are created (see Suzuki and Aronson, 2005). All human cognizing takes place in specific cultural contexts, so "culture-free tests" (i.e., tests devoid of cultural knowledge and context) are an impossibility. IQ tests are experience-dependent: if one has not been exposed to the experiences relevant to a testing situation, one will score lower than one would have with the requisite culturally specific knowledge.
The Evolution of Human Skin Variation
4050 words
Human skin variation comes down to how much UV radiation a population is exposed to. Over time, this leads to changes in the frequencies of pigmentation alleles: if a variant is advantageous in a given environment, it will be selected for. To see how human skin variation evolved, we must first look to chimpanzees, since they are our closest relatives.
The evolution of black skin
Humans and chimps diverged around 6-12 mya. Since we share nearly 99 percent of our genome with them, it's safe to say that when we diverged, we had pale skin and a lot of fur on our bodies (Jablonski and Chaplin, 2000). After we lost the fur on our bodies, we were better able to thermoregulate, which then primed Erectus for running (Lieberman, 2015). Fur loss coincides with the appearance of sweat glands in Erectus, which would have been paramount for persistence hunting on the African savanna 1.9 mya, when a modern pelvis, and most likely a modern gluteus maximus, emerged in the fossil record (Lieberman et al, 2006). This set the stage for one of the most important factors in regard to the ability to persistence hunt: the evolution of dark skin to protect against high amounts of UV radiation.
After Erectus lost his fur, the unforgiving UV radiation beamed down on him, and selection would then have favored darker skin, since darker skin protects against UV radiation. Dark skin in our genus evolved between 1 and 2 mya. We know this because the variant of the melanocortin 1 receptor (MC1R) promoting black skin arose 1-2 mya, right around the time Erectus appeared and lost its fur (Lieberman, 2015).
Greaves (2014) argues that skin cancer itself was a selective force for the evolution of black pigmentation in early hominins. Other researchers, however, reject Greaves' explanation (Jablonski and Chaplin, 2014), citing Blum (1961), who argued that skin cancer is acquired too late in life to have any real effect on reproductive success. Still, skin cancer rates in black Americans are low compared to white Americans: a survey from 1977-8 showed that 30 percent of blacks had basal cell carcinoma while 80 percent of whites did (Moon et al, 1987). That blacks have a lower rate of one type of skin cancer is consistent with Greaves' hypothesis that dark skin is protective. On this view, black skin evolved due to the need for protection from high levels of UVB radiation and from skin cancers.
Highly melanized skin also protects against folate destruction (Jablonski and Chaplin, 2000). As populations move away from high-UV areas, the selective constraint to maintain high levels of folate by blocking UV is relaxed, while selection for less melanin prevails, allowing enough radiation through to synthesize vitamin D. Black skin is important near the equator to protect against folate deficiency. (Also see Nina Jablonski's TED talk, Skin color is an illusion.)
The evolution of white skin
The evolution of white skin, of course, is much debated as well. Hypotheses range from sexual selection to diet to reduced UV radiation. All three have real explanatory power, and I believe all of them drove the evolution of white skin, though to different degrees.
The main driver of white skin is living in colder environments with fewer UV rays: the body needs to synthesize vitamin D, and in low-UV areas, lighter skin is the only way to let in enough radiation to do so.
White skin is a recent trait in humans, appearing only around 8 kya. A myriad of hypotheses has been proposed to explain it. One is sexual selection (Frost, 2007). Another is better vitamin D synthesis, ensuring more calcium for pregnancy and lactation (which would in turn benefit the intelligence of the babes) (Jablonski and Chaplin, 2000). Others see light skin as part of a suite of childlike traits (smoother skin, a higher-pitched voice, a more childlike face) that would have elicited less aggressiveness and more provisioning from men (Guthrie, 1970; from Frost, 2007). Finally, van den Berghe and Frost (1986) proposed that selection for white skin involved unconscious selection by men for lighter-skinned women, lighter skin serving "as a measure of hormonal status and thus childbearing potential" (Frost, 2007). These hypotheses have sexual selection for lighter skin as a proximate cause, but the ultimate cause is something else entirely.
The hypothesis that white skin evolved to better facilitate vitamin D synthesis, ensuring more calcium for pregnancy and lactation, makes the most sense. Darker-skinned individuals have a myriad of health problems outside their ancestral climates, one of which is a higher rate of prostate cancer attributed to lack of vitamin D. If darker skin is a problem in cooler climates with fewer UV rays, then lighter skin, since it ensures better vitamin D synthesis, will be selected for. White skin lets in more UV in colder, low-UV climates, which allows better and more vitamin D synthesis; the ultimate cause of the evolution of white skin is therefore a lack of sunlight and thus fewer UV rays.
Peter Frost believes Europeans became white 11,000 years ago; as shown above, however, white skin evolved around 8 kya. Further, contrary to popular belief, Europeans did not gain the alleles for white skin from Neanderthals (Beleza et al, 2012). European populations did not lose their dark skin immediately upon entering Europe, and although there was interbreeding between AMH and Neanderthals (Sankararaman et al, 2014), that interbreeding did not confer the advantageous white-skin alleles. So if interbreeding with Neanderthals didn't confer white skin on proto-Europeans, what did?
The answer: a few alleles that spread into Europe and reached fixation only a few thousand years ago. White skin is a relatively recent trait in Man (Beleza et al, 2012). People assume that white skin has been around for a long time and that the Europeans of 40,000 ya are the ancestors of Europeans alive today. That, however, is not true: modern-day European genetic history began about 6,500 ya, which is when the modern European phenotype, white skin included, arose.
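To get a feel for the timescale, consider a toy deterministic selection model. All parameter values below are assumptions for illustration only, not estimates from the pigmentation literature: a 5 percent fitness advantage, a starting frequency of 1 percent, and a 25-year generation time.

```python
# Toy haploid selection model: p' = p(1 + s) / (1 + s * p).
# Illustrative assumptions only: s = 0.05, p0 = 0.01, 25 yr/generation.
def generations_to_near_fixation(p=0.01, s=0.05, threshold=0.99):
    gens = 0
    while p < threshold:
        p = p * (1 + s) / (1 + s * p)  # allele frequency next generation
        gens += 1
    return gens

gens = generations_to_near_fixation()
print(f"{gens} generations, roughly {gens * 25} years")
# With these assumptions: ~190 generations, i.e. a few thousand years.
```

Under even a modest selective advantage, then, "a few thousand years" is ample time for a favored light-skin allele to approach fixation.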
Furthermore, Eurasians were still a single breeding population 40 kya and only diverged recently, about 25,000 to 40,000 ya (Tateno et al, 2014). The alleles that code for light skin evolved after the Eurasian divergence. Polymorphisms in the genes ASIP and OCA2 may contribute to dark and light skin throughout the world, whereas SLC24A5, MATP, and TYR have a predominant role in the evolution of light skin in Europeans but not East Asians, which suggests recent convergent evolution of a lighter pigmentation phenotype in European and East Asian populations (Norton et al, 2006). Since the derived light-skin variants of SLC24A5, MATP, and TYR are largely absent in East Asian populations, East Asians must have evolved light skin through different mechanisms than Europeans. So after East Asians and Europeans diverged from a single breeding population 25-40 kya, there was convergent evolution for light pigmentation in both populations under the same selection pressure (low UV).
Some populations, such as Arctic peoples, don't have the skin color one would predict from their environment. Their diets, however, are high in shellfish, which is rich in vitamin D, so they can afford to remain darker-skinned in low-UV areas; their dark skin also protects them from the UV rays that reflect off snow and ice in the summer.
Black-white differences in UV absorption
If white skin evolved to better synthesize vitamin D where UV rays are fewer and less intense, then those with darker skin would need to spend longer in UV light to synthesize the same amount of vitamin D. Indeed, skin pigmentation is negatively correlated with vitamin D synthesis (Libon, Cavalier, and Nikkels, 2013): black skin is less capable of synthesizing vitamin D. Blacks' skin color in low-UV areas thus creates an evolutionary mismatch, being associated with rickets (Holick, 2006) and with higher rates of prostate cancer via lower levels of vitamin D (Gupta et al, 2009; vitamin D supplements may also keep low-grade prostate cancer at bay).
Libon, Cavalier, and Nikkels (2013) compared subjects of different phototypes (skin colors): white subjects of phototypes II (n=19) and III (n=1), and black subjects of phototype VI (n=11). The phototypes are shown in the image below.
[Image: the six skin phototypes, I through VI.]
To avoid the influence of solar UVB exposure, the study was conducted in February. On day 0, both the black and white subjects were vitamin D deficient: the median level in the white subjects was 11.9 ng/ml and in the black subjects 8.6 ng/ml, a non-statistically-significant difference. By day 2, concentrations in the white subjects had risen from 11.9 to 13.3 ng/ml, a statistically significant increase, whereas the black cohort showed no statistically significant change. By day 6, levels in the white subjects had risen to 14.3 ng/ml versus 9.57 ng/ml in the black subjects, and at that point the difference in circulating vitamin D between the two groups was statistically significant.
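Using only the figures reported above, the absolute changes are easy to compute; a minimal Python sketch (the variable names are mine):

```python
# Median 25(OH)D levels (ng/ml) as reported in the text above.
white = {"day0": 11.9, "day2": 13.3, "day6": 14.3}
black = {"day0": 8.6, "day6": 9.57}

print("White increase, day 0 to 6:", round(white["day6"] - white["day0"], 2))  # 2.4
print("Black increase, day 0 to 6:", round(black["day6"] - black["day0"], 2))  # 0.97
```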
Different phototypes let through different amounts of UV and, therefore, people with different skin colors synthesize different amounts of vitamin D: more UV penetrates lighter skin than darker skin. This is consistent with the idea that white skin's primary function is to facilitate vitamin D synthesis.
UVB exposure increases vitamin D production in white skin but not in black skin: pigmented skin hinders the transformation of 7-dehydrocholesterol to vitamin D. This is why blacks have higher rates of prostate cancer; they are outside their ancestral environment, and evolutionary mismatches come with being outside one's ancestral environment. We have now spread throughout the world, and people with certain skin colors may not be adapted to their current environments. This is what we see with black Americans, as well as with white Americans who spend too much time in climes not ancestral to them. Nevertheless, different-colored skin does synthesize vitamin D differently, and knowledge of this will increase the quality of life for everyone.
Even the great Darwin wrote about differences in human skin color. He didn't touch human evolution in On the Origin of Species (Darwin, 1859), but he did in The Descent of Man (Darwin, 1871), where he discusses the effects of climate on skin color and hair:
It was formerly thought that the colour of the skin and the character of the hair were determined by light or heat; and although it can hardly be denied that some effect is thus produced, almost all observers now agree that the effect has been very small, even after exposure during many ages. (Darwin, 1871: 115-116)
Darwin, of course, championed sexual selection as the cause of human skin variation (Darwin, 1871: 241-250). Jared Diamond holds the same view, believing that hair loss, black skin, and white skin were not products of natural selection but of female mate preference and sexual selection (see Greaves, 2014).
Parental selection for white skin
Judith Rich Harris, author of The Nurture Assumption: Why Kids Turn Out the Way They Do (Harris, 2009), posits another hypothesis for the evolution of light skin in northern latitudes: parental selection (Harris, 2006). This hypothesis may be controversial to some, as it assumes that lighter skin was seen as more beautiful than darker skin.
Harris posits that selection for lighter skin was driven by sexual selection, but adds that parental selection further helped the alleles for white skin reach fixation in northern populations. She also reasons that Neanderthals must have been furry: they had no clothes, yet survived the cold Ice Age climate, so they must have had some other protection against it, namely fur.
Harris suggests that since lighter skin was seen as more beautiful than darker skin, a mother who birthed a darker or furrier babe would have been more likely to commit infanticide. Mothers who give birth at younger ages are more likely to commit infanticide, as they still have about twenty years in which to bear more children; infanticide rates decrease as a mother gets older, since it becomes harder to have children with age.
Harris states that Erectus may have been furry up until 2 mya; however, as I've shown, Erectus was furless and able to thermoregulate, something a hairy hominin could not do (Lieberman, 2015).
There is a preference for lighter-skinned females all throughout the world: in Africa (Coetzee et al, 2012), China and India (Naidoo et al, 2016; Dixson et al, 2007), and Latin America and the Philippines (Kiang and Takeuchi, 2009). Light skin is seen as attractive the world over. Since light skin allows better synthesis of vitamin D in colder climes with fewer UV rays, there would have been a myriad of selective pressures pushing it along, parental selection for lighter-skinned babes being one of them. This isn't talked about often, but infanticide and rape have both driven our evolution (more on both in the future).
Harris' parental selection hypothesis is plausible, and she uses the right dates for fur loss, which coincide with the endurance running of Erectus and his ability to thermoregulate thanks to fur loss and more sweat glands; this is when black skin began to evolve. With migration into more northerly climes, lighter-skinned people would then have had an advantage over darker-skinned people. Infanticide is practiced all over the world and is caused, in part, by a mother's unconscious preferences.
Skin color and attractiveness
Lighter skin is seen as attractive all throughout the world. College-aged black women find lighter skin more attractive (Stephens and Thomas, 2012). It is no surprise, then, that many black women lighten their skin with chemicals.
In a sample of black men, lighter-skinned blacks were more likely to perceive discrimination than their darker-skinned counterparts (Uzogara et al, 2014). Further, in appraising skin color’s effect on in-group discrimination, medium-skinned black men perceived less discrimination than lighter- and darker-skinned black men. Lastly—as is the case with most studies—this effect was particularly pronounced for those in lower SES brackets. Speaking of SES, lighter-skinned blacks with higher income had lower blood pressure than darker-skinned blacks with higher income (Sweet et al, 2007). The authors conclude that a variety of psychosocial stress due to discrimination must be part of the reason why darker-skinned blacks with a high SES have worse blood pressure—but I think there is something else at work here. Darker skin on its own is associated with high blood pressure (Mosley et al, 2000). I don’t deny that (perceived) discrimination can and does heighten blood pressure—but the first thing that needs to be looked at is skin color.
Lighter-skinned women are seen as more attractive (Stephen et al, 2009) because lighter skin signals fertility, femininity, and youth. It also signals the ability to carry a healthy child to term, since lighter skin in women is associated with better vitamin D synthesis, which is important for a growing babe.
Skin color and intelligence
There is a very strong negative correlation, about –.92, between skin darkness and national IQ (Templer and Arikawa, 2006). They used data from Lynn and Vanhanen's 2002 book IQ and the Wealth of Nations and found this extremely strong negative correlation between skin color and IQ across countries. However, IQ data weren't collected for all countries, and for half of the countries the IQs were 'estimated' from surrounding countries' scores.
Jensen (2006) states that the main limitation in the study design of Templer and Arikawa (2006) is that "correlations obtained from this type of analysis are completely non-informative regarding any causal or functional connection between individual differences in skin pigmentation and individual differences in IQ, nor are they informative regarding the causal basis of the correlation, e.g., simple genetic association due to cross-assortative mating for skin color and IQ versus a pleiotropic correlation in which both of the phenotypically distinct but correlated traits are manifested by one and the same gene."
Lynn (2002) purported to find a correlation of .14 between skin color and IQ in a representative sample of American blacks (n=430), concluding that the proportion of European genes in African Americans dictates how intelligent an individual black is. However, Hill (2002) showed that when childhood environmental factors such as SES are controlled for, the correlation disappears; therefore, genetic causality cannot be inferred from the data Lynn (2002) used.
Since Lynn found a .14 correlation between skin color and IQ in black Americans, only 1.96 percent of the variation in IQ among black American adults (.14 squared = .0196) can be "explained" by skin color. This is hardly anything, and it is worth keeping in mind when thinking about racial differences in IQ.
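To make the arithmetic explicit: the proportion of variance one variable statistically "explains" in another is the square of their correlation. A minimal Python sketch (the variable names are mine):

```python
# Variance "explained" is r squared.
r_cross_national = -0.92  # Templer and Arikawa (2006), between countries
r_within_group = 0.14     # Lynn (2002), skin color and IQ in black Americans

print(f"{r_cross_national ** 2:.1%}")  # 84.6% of cross-national variance
print(f"{r_within_group ** 2:.2%}")    # 1.96% of within-group variance
```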
However, other people have different ideas. They note that animal studies find that lighter animals are less sexually active, less aggressive, larger in body mass, and more stress-resistant, and that since this pattern is seen in over 40 species of vertebrate, some fish species, and over 30 bird species (Rushton and Templer, 2012), it should be a good predictor for human populations. Except it isn't. As one commentator on the genetics of pigmentation put it:
we know the genetic architecture of pigmentation. that is, we know all the genes (~10, usually less than 6 in pairwise between population comparisons). skin color varies via a small number of large effect trait loci. in contrast, I.Q. varies by a huge number of small effect loci. so logically the correlation is obviously just a correlation. to give you an example, SLC45A2 explains 25-40% of the variance between africans and europeans.
long story short: it’s stupid to keep repeating the correlation between skin color and I.Q. as if it’s a novel genetic story. it’s not. i hope don’t have to keep repeating this for too many years.
Finally, variation in skin color between human populations is primarily due to mutations in the genes MC1R, TYR, and MATP (Graf, Hodgson, and Daal, 2005), and SLC24A5 (see also López and Alonso, 2014, for a review of the genes that account for skin color), so human populations aren't "expected to consistently exhibit the associations between melanin-based coloration and the physiological and behavioural traits reported in our study" (Ducrest, Keller, and Roulin, 2008). Talking about bare correlations is useless until causality is established (if it ever is).
Conclusion
The evolution of human skin variation is complex and driven by more than one variable, some stronger than others. Black skin evolved, in part, as protection from skin cancer after we lost our fur. White skin evolved through sexual selection (a proximate cause) and to let in more UV for vitamin D synthesis in colder climes (the ultimate reason light skin is needed in cold climates). Eurasians split around 40 kya, and after this split both groups evolved light pigmentation independently. As I've shown, the alleles that code for skin color differences between blacks and whites don't account for differences in aggression, nor do they account for differences in IQ. The genes that control skin color (about a dozen) pale in comparison to the genes thought to influence intelligence (thousands of genes of small effect). Some other hypotheses for the evolution of white skin, such as mothers killing darker-skinned babies because they were seen as less beautiful than lighter-skinned babies, are about as controversial as the hypothesis that skin color and intelligence co-evolved.
The evolution of human skin variation is extremely interesting, with many competing hypotheses. But to draw strong conclusions about links between human skin color, intelligence, and aggression, you're going to need more evidence than correlations.
References
Bang, K. M., Halder, R. M., White, J. E., Sampson, C. C., & Wilson, J. (1987). Skin cancer in black Americans: A review of 126 cases. Journal of the National Medical Association, 79, 51–58.
Beleza, S., Santos, A. M., Mcevoy, B., Alves, I., Martinho, C., Cameron, E., . . . Rocha, J. (2012). The Timing of Pigmentation Lightening in Europeans. Molecular Biology and Evolution, 30(1), 24–35. doi:10.1093/molbev/mss207
Blum, H. F. (1961). Does the Melanin Pigment of Human Skin Have Adaptive Value?: An Essay in Human Ecology and the Evolution of Race. The Quarterly Review of Biology, 36(1), 50–63. doi:10.1086/403275
Coetzee, V., Faerber, S. J., Greeff, J. M., Lefevre, C. E., Re, D. E., et al. (2012). African perceptions of female attractiveness. PLOS ONE, 7, e48116.
Darwin, C. (1859). On the origin of species by means of natural selection, or, the preservation of favoured races in the struggle for life. London: J. Murray.
Darwin, C. (1871). The descent of man, and selection in relation to sex. London: John Murray.
Dixson, B. J., Dixson, A. F., Li, B., & Anderson, M. (2006). Studies of human physique and sexual attractiveness: Sexual preferences of men and women in China. American Journal of Human Biology, 19(1), 88–95. doi:10.1002/ajhb.20584
Ducrest, A., Keller, L., & Roulin, A. (2008). Pleiotropy in the melanocortin system, coloration and behavioural syndromes. Trends in Ecology & Evolution, 23(9), 502–510. doi:10.1016/j.tree.2008.06.001
Frost, P. (2007). Human skin-color sexual dimorphism: A test of the sexual selection hypothesis. American Journal of Physical Anthropology, 133(1), 779–780. doi:10.1002/ajpa.20555
Graf, J., Hodgson, R., & Daal, A. V. (2005). Single nucleotide polymorphisms in the MATP gene are associated with normal human pigmentation variation. Human Mutation, 25(3), 278–284. doi:10.1002/humu.20143
Greaves, M. (2014). Was skin cancer a selective force for black pigmentation in early hominin evolution? Proceedings of the Royal Society B: Biological Sciences, 281(1781), 20132955. doi:10.1098/rspb.2013.2955
Gupta, D., Lammersfeld, C. A., Trukova, K., & Lis, C. G. (2009). Vitamin D and prostate cancer risk: a review of the epidemiological literature. Prostate Cancer and Prostatic Diseases, 12(3), 215–226. doi:10.1038/pcan.2009.7
Guthrie, R. D. (1970). Evolution of human threat display organs. Evolutionary Biology, 4, 257–302.
Harris, J. R. (2006). Parental selection: A third selection process in the evolution of human hairlessness and skin color. Medical Hypotheses, 66(6), 1053–1059. doi:10.1016/j.mehy.2006.01.027
Harris, J. R. (2009). The nurture assumption: Why children turn out the way they do. New York: Free Press.
Hill, M. E. (2002). Skin color and intelligence in African Americans: A reanalysis of Lynn's data. Population and Environment, 24(2), 209–214.
Holick, M. F. (2006). Resurrection of vitamin D deficiency and rickets. Journal of Clinical Investigation, 116(8), 2062–2072. doi:10.1172/jci29449
Jablonski, N. G., & Chaplin, G. (2000). The evolution of human skin coloration. Journal of Human Evolution, 39(1), 57–106. doi:10.1006/jhev.2000.0403
Jablonski, N. G., & Chaplin, G. (2014). Skin cancer was not a potent selective force in the evolution of protective pigmentation in early hominins. Proceedings of the Royal Society B: Biological Sciences, 281(1789), 20140517. doi:10.1098/rspb.2014.0517
Jensen, A. R. (2006). Comments on correlations of IQ with skin color and geographic–demographic variables. Intelligence, 34(2), 128–131. doi:10.1016/j.intell.2005.04.003
Kiang, L., & Takeuchi, D. T. (2009). Phenotypic Bias and Ethnic Identity in Filipino Americans. Social Science Quarterly, 90(2), 428–445. doi:10.1111/j.1540-6237.2009.00625.x
Libon, F., Cavalier, E., & Nikkels, A. (2013). Skin Color Is Relevant to Vitamin D Synthesis. Dermatology, 227(3), 250–254. doi:10.1159/000354750
Lieberman, D. E. (2015). Human Locomotion and Heat Loss: An Evolutionary Perspective. Comprehensive Physiology, 99–117. doi:10.1002/cphy.c140011
Lieberman, D. E., Raichlen, D. A., Pontzer, H., Bramble, D. M., & Cutright-Smith, E. (2006). The human gluteus maximus and its role in running. Journal of Experimental Biology, 209(11), 2143–2155. doi:10.1242/jeb.02255
López, S., & Alonso, S. (2014). Evolution of Skin Pigmentation Differences in Humans. eLS. doi:10.1002/9780470015902.a0021001.pub2
Lynn, R. (2002). Skin color and intelligence in African Americans. Population and Environment, 23, 365–375.
Mosley, J. D., Appel, L. J., Ashour, Z., Coresh, J., Whelton, P. K., & Ibrahim, M. M. (2000). Relationship Between Skin Color and Blood Pressure in Egyptian Adults: Results From the National Hypertension Project. Hypertension, 36(2), 296–302. doi:10.1161/01.hyp.36.2.296
Naidoo, L., Khoza, N., & Dlova, N. C. (2016). A fairer face, a fairer tomorrow? A review of skin lighteners. Cosmetics, 3, 33.
Norton, H. L., Kittles, R. A., Parra, E., Mckeigue, P., Mao, X., Cheng, K., . . . Shriver, M. D. (2006). Genetic Evidence for the Convergent Evolution of Light Skin in Europeans and East Asians. Molecular Biology and Evolution, 24(3), 710–722. doi:10.1093/molbev/msl203
Rushton, J. P., & Templer, D. I. (2012). Do pigmentation and the melanocortin system modulate aggression and sexuality in humans as they do in other animals? Personality and Individual Differences, 53(1), 4–8. doi:10.1016/j.paid.2012.02.015
Sankararaman, S., Mallick, S., Dannemann, M., Prüfer, K., Kelso, J., Pääbo, S., . . . Reich, D. (2014). The genomic landscape of Neanderthal ancestry in present-day humans. Nature, 507(7492), 354–357. doi:10.1038/nature12961
Stephen, I. D., Smith, M. J., Stirrat, M. R., & Perrett, D. I. (2009). Facial Skin Coloration Affects Perceived Health of Human Faces. International Journal of Primatology, 30(6), 845–857. doi:10.1007/s10764-009-9380-z
Stephens, D., & Thomas, T. L. (2012). The Influence of Skin Color on Heterosexual Black College Women's Dating Beliefs. Journal of Feminist Family Therapy, 24(4), 291–315. doi:10.1080/08952833.2012.710815
Sweet, E., Mcdade, T. W., Kiefe, C. I., & Liu, K. (2007). Relationships Between Skin Color, Income, and Blood Pressure Among African Americans in the CARDIA Study. American Journal of Public Health, 97(12), 2253–2259. doi:10.2105/ajph.2006.088799
Tateno, Y., Komiyama, T., Katoh, T., Munkhbat, B., Oka, A., Haida, Y., . . . Inoko, H. (2014). Divergence of East Asians and Europeans Estimated Using Male- and Female-Specific Genetic Markers. Genome Biology and Evolution, 6(3), 466–473. doi:10.1093/gbe/evu027
Templer, D. I., & Arikawa, H. (2006). Temperature, skin color, per capita income, and IQ: An international perspective. Intelligence, 34(2), 121–139. doi:10.1016/j.intell.2005.04.002
Uzogara, E. E., Lee, H., Abdou, C. M., & Jackson, J. S. (2014). A comparison of skin tone discrimination among African American men: 1995 and 2003. Psychology of Men & Masculinity, 15(2), 201–212. doi:10.1037/a0033479
van den Berghe, P. L., & Frost, P. (1986). Skin color preference, sexual dimorphism and sexual selection: A case of gene-culture coevolution? Ethnic and Racial Studies, 9, 87–113.
Heritability, the Grandeur of Life, and My First Linkfest on Human Evolution and IQ
1100 words
Benjamin Steele finally replied to my critique of his 'strong evidence and argument' on race, IQ, and adoption. He throws baseless ad hominem attacks as well as appeals to motive (assuming my motivation for being a race realist; assuming that I'm a 'racist', whatever that means). When I address his 'criticisms' of my response this weekend, I will not address these idiotic attacks, as they are a waste of time. He does, however, say that I do not understand heritability. I understand that 'heritable' does not mean 'genetic': heritability is the proportion of phenotypic variance attributable to genetic variance, and I do not believe that heritability means a trait is X percent genetic. My claim is that 80 percent of the variation in the B-W IQ gap is explained by genetic factors and 20 percent by environmental effects; note, again, that I am not claiming that heritable means genetic. All that aside, half of his reply is full of idiotic, baseless, and untrue accusations which I will not respond to. So, Mr. Steele, if you do decide to reply to my response this weekend, please leave the idiocy at the door. Quick note for Mr. Steele (in case he reads this): if you don't believe me that the National Crime Victimization Survey shows that police arrest FEWER blacks than are reported by the NCVS, you can look it up yourself, ya know.
I'm beginning to understand why people become environmentalists. I've recently become obsessed with evolution, not only of Man but of all the species in the world. Thinking about the grandeur of life and what leads to its diversity got me wondering one day: it took billions of years to get to the point we are at today, so why should we continue to destroy environments, displacing species and eventually driving them to extinction? I'm not saying I fully hold this view yet; it's just been on my mind lately. Once a species is extinct, that's it, it's gone forever. So shouldn't we do all we possibly can to preserve the wonder of life that took so long to reach this point?
Some interesting articles to read:
Study: IQ of firstborns differ from siblings (This is some nice evidence for Lassek and Gaulin’s theory stating why first-born children have higher IQs than their siblings: they get first dibs on the gluteofemoral fat deposits that are loaded with n-3 fatty acids, aiding in brain size and IQ.)
Why attitude is more important than IQ (Psychologist Carol Dweck states that attitude is more important than IQ and that attitudes come in one of two types: a fixed mindset or a growth mindset. Those with a fixed mindset believe ‘you are who you are’ and nothing can change it while those with a growth mindset believe they can improve with effort. Interesting article, I will find the paper and comment on it when I read it.)
Positively Arguing IQ Determinism And Effect Of Education (Intelligent people search for intellectually stimulating things whereas less intelligent people do not. This, eventually, will lead to the construction of environments based on that genotype.)
A scientist's new theory: Religion was key to humans' social evolution (Nicholas Wade argues much the same in his book The Faith Instinct: How Religion Evolved and Why It Endures. It is interesting to note that archaeologists have discovered what looks to be the beginnings of religiosity around 10 kya, coinciding with the agricultural revolution. I will look into this in the future.)
Galápagos giant tortoises show that in evolution, slow and steady gets you places (Interesting read, on tortoise migration)
Will Mars Colonists Evolve Into This New Kind of Human? (Very interesting, and I hope to see more articles like this in the future. Because a colony would be a small population under different selection pressures, evolution would occur faster there; new mutations can also spread and fix more quickly in smaller populations. Will our skin turn a reddish tint? Bone density will decline, and an increased need for C-sections will lead to further brain size increases; this is also going on on Earth at the moment, as I have previously discussed. Differences in culture and technology will lead the colonizers down different paths, too. I hope I am alive to see the first colonies on Mars and the long-term effects on the evolution of Man on the Red Planet.)
Evolution debate: Are humans continuing to evolve? (Of course we are)
Did seaweed make us who we are today? (Seaweed has many important vitamins and minerals that are imperative for brain development and growth; most importantly, it has poly-unsaturated fatty acids (PUFAs) and B12. We can only acquire these fatty acids through our diet, as our bodies cannot synthesize them on their own. This is growing evidence for how important a good ratio of n-3 to n-6 is.)
Desert people evolve to drink water poisoned with deadly arsenic (More evidence for rapid evolution in human populations. Variants of AS3MT are known to improve arsenic metabolism in populations in Chile and Argentina. Clearly, those who can handle the water survive and breed while those who cannot succumb to the effects of arsenic poisoning; over time, the variant is selected for more and more, while those who cannot metabolize the arsenic do not pass on their genes. This is a great article to show to people who deny ongoing human evolution.)
Here Are the Weird Ways Humans Are Evolving Right Now (CRISPR and gene editing; the promotion of obesity through environmental factors (our animals have also gotten fatter, probably due to the feed we give them…); autism as an adaptation (though our definition of autism has relaxed in the past decade). Human evolution is ongoing and never stops, even for Africans. I've seen some people claim that since Africans never left the continent they are 'behind in evolution'; yet evolution is an ongoing process that never stops, with cultural 'evolution' (change) leading to further differences.)
‘Goldilocks’ genes that tell the tale of human evolution hold clues to variety of diseases (We really need to start looking at modern-day diseases through an evolutionary perspective, such as obesity, to better understand why these ailments inflict us and how to better treat our diseases of civilization.)
Understanding Human Evolution: Common Misconceptions About The Scientific Theory (Don’t make these misconceptions about evolution. Always keep up to date on the newest findings.)
Restore Western Civilization (Enough said. As usual, gold from Brett Stevens. Amerika.org should be one of the first sites you check every day.)
I guess this was my first linkfest (à la hbd chick). I will post one a week.
White People Not 100 Percent Human? Afrocentrist Debunked
850 words
I just came across this video on YouTube published yesterday called “White people are not 100% human (Race differences) (I.Q debunked)“, with, of course, outrageous claims (the usual from Afrocentrists). I already left a comment proving his nonsense incorrect, but I thought I’d further expound on it here.
His first piece of 'evidence' that whites aren't 100 percent human is that some individuals are born with tails. Outliers are meaningless, of course. The human tail results from unsuccessful inhibition of the Wnt-3a gene: when this gene fails to signal the cell death of the tail in early embryonic development, a person is born with a small vestigial tail. This proves nothing.
His next assertion is that since "94 percent of whites test positive for Rh blood type" and "as a result, they are born with a tail," whites must have interbred with rhesus monkeys in the past. This is ridiculous: the blood type was named in error. The book Blood Groups and Red Cell Antigens sums it up nicely:
The Rh blood group is one of the most complex blood groups known in humans. From its discovery 60 years ago where it was named (in error) after the Rhesus monkey, it has become second in importance only to the ABO blood group in the field of transfusion medicine. It has remained of primary importance in obstetrics, being the main cause of hemolytic disease of the newborn (HDN).
…
It was wrongly thought that the agglutinating antibodies produced in the mother's serum in response to her husband's RBCs were the same specificity as antibodies produced in various animals' serum in response to RBCs from the Rhesus monkey. In error, the paternal antigen was named the Rhesus factor. By the time it was discovered that the mother's antibodies were produced against a different antigen, the rhesus blood group terminology was being widely used. Therefore, instead of changing the name, it was abbreviated to the Rh blood group.
As you can see, this is another ridiculous and easily debunked claim. One only needs to do a bit of non-biased reading into something to get the truth, which some people are not capable of.
What he says next, I don’t really have a problem with. He just shows articles stating that Neanderthals had big brains to control their bodies and that they had a larger, elongated visual cortex. However, there is archeological evidence that our cognitive superiority over Neanderthals is a myth (Villa and Roebroeks, 2014). What he shows in this section is the truest thing he’ll say, though.
Then he shows that African immigrants to America have higher educational achievement than whites and immigrant East Asians. It's clear, however, that he's not heard of super-selection: the people with the means to leave will, and those with the means are most likely the more intelligent members of the group. We also can't forget 'preferential treatment', AKA affirmative action.
The concept of 'multiple intelligences' is then brought up. The originator of the theory, Howard Gardner, rejects general intelligence, dismisses factor analysis, and doesn't defend his theory with quantitative data, instead drawing on findings from anthropology to zoology; the theory is devoid of any psychometric or quantitative support (Herrnstein and Murray, 1994: 18). The Alternative Hypothesis also has a thorough debunking of this claim.
He then makes the claim that hereditarians assume environment and experience play no role in performance on IQ tests or in life success. But the hereditarian position is that individual heritability is about 80 percent genetic and 20 percent environmental, with the black-white gap split the same way (Rushton and Jensen, 2005: 279). Another easily refuted claim.
The term ‘inferior’ is brought up due to whites’ supposed ‘inferiority’, though we know that terms such as those have no basis in evolutionary biology.
He claims that a black man named Jesse Russell invented the cell phone, when in reality a white man named Martin Cooper did. He claims that Lewis Latimer invented the filament lightbulb, when Joseph Swan had demonstrated filament lightbulbs in the UK as early as 1860 and later patented them there. Of course, individual outliers are meaningless to group success, as they don't reflect the group average as a whole, so these discussions are moot.
He finally claims that the "black Moors civilized Europe." Europeans didn't need to "be civilized"; people seem not to understand that empires and kingdoms rise and fall, going through highs and lows. That doesn't stop people from pushing a narrative, though. Further, the Moors were not black. People love attempting to create a fantasy history in which their biases are reality.
I don't know why people make these idiotic and easily refuted videos: lies that push people further from the truth about racial differences, genetics, and history as a whole. Biases like these cloud people's minds, and when the truth is shown to them, refuting their twisting of history, genetics, and IQ, they see it as an attack on what they hold to be true, despite all the conflicting, unbiased evidence presented to them. Afrocentric loons need to be refuted, lest people believe their lies, misconceptions, and twistings of history.