Unless you’ve been living under a rock since the new year, you have heard of the “coup attempt” at the Capitol building on Wednesday, January 6th. Upset that the election was “stolen” from Trump, his supporters showed up at the building and rushed it, causing mass chaos. But why did they do this? Why the violence when they did not get their way in a fair election? Well, Michael Ryan, author of The Genetics of Political Behavior: How Evolutionary Psychology Explains Ideology (2020), has the answer—what he terms “rightists” and “leftists” evolved at two different times in our evolutionary history, which then explains the trait differences between the two political parties. This article will review part of the book—the evolutionary sections (chapters 1-3).
EP and ideology
Ryan’s goal is to explain why individuals who call themselves “rightists” and “leftists” behave differently from one another. He argues, at length, that the two parties have two different personality profiles. This, he claims, is because the ancestors of rightists and leftists evolved at two different times in human history, on what he calls “Trump Island” and “Obama Island”—apt names, especially in light of what occurred last week. What makes Trump different from, say, Obama, Ryan claims, is that Trump’s ancestors evolved in a different place at a different time than Obama’s ancestors did. Citing the Stanford Prison Experiment, he further claims that “we may not all be capable of becoming Nazis, after all. Just some, and conservatives especially so” (pg 12).
In the first chapter he begins with the usual adaptationism that Evolutionary Psychologists use. Reading between the lines of his implicit claims, he is arguing that “rightists and leftists” are natural kinds—that is, *two different kinds of people.* He describes some personality differences between rightists and leftists and then says that such trait differences are “rooted in biology and governed by genes” (pg 17). Ryan then makes a strong adaptationist claim—that traits are due to adaptation to the environment (pg 17). What makes you and me different from Trump, he claims, is that our ancestors and his evolved in different places at different times, where different traits were imperative to survival. So, over time, different traits were selected-for in the two populations; each environment led to the fixation of different adaptive traits, which, he claims, explains the trait differences we see today between the two parties.
Ryan then shifts from the evolution of personality differences to… the evolution of the beaks of Darwin’s finches and Tibetan adaptation to high-altitude living (pg 18), as if the evolution of physical traits were anything like the evolution of psychological traits. His folly is assuming that such physical traits can be likened to personality/mental traits. The ancestors of rightists and leftists, Ryan claims, evolved like Darwin’s finches on different islands at different moments of evolutionary time. They evolved different brains, and different adaptive behaviors on the basis of those different brains. Trump’s ancestors were authoritarian, and their island occurred early in human history, “which accounts for why Trump’s behavior seems so archaic at times” (pg 18).
The different traits that leftists show in comparison to rightists, Ryan says, are due to the fact that their island came at a later point in evolutionary time than the so-called archaic dominance behavior portrayed by Trump and other rightists. Obama Island was more crowded than Trump Island and, instead of scowling, its inhabitants smiled, which “forges links with others and fosters reciprocity” (pg 19). Due to environmental adversity, they lived on a more densely populated “island”—in this novel situation, compared to the more “archaic” earlier time, the small bands needed to cooperate, rather than fight with each other, to survive. This, according to Ryan, explains why studies show more smiling behavior in leftists compared to rightists.
Some of our ancestors evolved traits such as cooperativeness that aided the survival of all even though not everyone acquired the trait … Eventually a new genotype or subpopulation emerged. Leftist traits became a permanent feature of our genome—in some at least. (pg 19-20)
So the argument goes: the trait differences between rightists and leftists today show us that the two did not evolve at the same point in time. Different traits were adaptive at different points in time, some more archaic, some more modern. Since Trump Island came first in our evolutionary history, those whose ancestors evolved there show more archaic behavior; since Obama Island came later, its descendants show newer, more modern behaviors. Due to environmental uncertainty, those on Obama Island had to cooperate with each other. The trait differences between these two subpopulations were selected for in the environments in which they evolved, which is why the two differ today. This, Ryan says, led to the “arguing over the future direction of our species. This is the origin of human politics” (pg 20).
Models of evolution
Ryan then discusses four models of evolution: (1) the standard model, where “natural selection” is the main driver of evolutionary change; (2) epigenetic models like Jablonka and Lamb’s (2005) in Evolution in Four Dimensions; (3) models where behavioral changes change genes; and (4) models where organisms show phenotypic plasticity, a way for the organism to respond to sudden environmental changes. “Leftists and rightists”, writes Ryan, “are distinguished by their own versions of phenotypic plasticity. They change behavior more readily than rightists in response to changing environmental signals” (pg 29-30).
In perhaps the most outlandish part of the book, Ryan articulates one of my now-favorite just-so stories. The passage is worth quoting in full:
Our direct ancestor Homo erectus endured for two million years before going extinct 400,000 years ago when earth temperatures dropped far below the norm. Descendants of erectus survived till as recently as 14,000 years ago in Asia. The round head and shovel-shaped teeth of some Asians, including Vladimir Putin, are an erectile legacy. Archeologists believe erectus was a mix of Ted Bundy and Adolf Hitler. Surviving skulls point to a life of constant violence and routine killing. Erectile skulls are thick like a turtle’s, and the brows are ridged for protection from potentially fatal blows. Erectus’ life was precarious and violent. To survive, it had to evolve traits such as vigilant fearfulness, prejudice against outsiders, bonding with kin allies, callousness toward victims, and a penchant for inflexible habits of life that were known to guarantee safety. It had to be conservative. 34 Archeologists suggest that some of our most characteristic conservative emotions such as nationalism and xenophobia were forged at the time of Homo erectus. 35 (pg 33-34)
It is clear that Ryan is arguing that rightists have more erectus-like traits whereas leftists have more modern, Sapiens traits: “The contemporary coexistence of a population with more ‘modern’ traits and a population with more ‘archaic’ traits came into being” (pg 37). He is implicitly assuming that the two “populations” he discusses are natural kinds, and with his “modern”/“archaic” distinction (see Crisp and Cook 2005, who argue against a form of this distinction) he is also implying that there is a sort of “progress” to evolution.
Twin studies, it is claimed, show “one’s genetically informed psychological disposition” (Hatemi et al, 2014); they “suggest that leftists and rightists are born not made”, while a so-called “consensus has emerged amongst scientists: political behavior is genetically controlled and heritable” (pg 43). But Beckwith and Morris (2008), Charney (2008), and Joseph (2009; 2013) argue that twin studies can show no such thing, due to the violation of the equal environments assumption (EEA; Joseph, 2014; Joseph et al, 2015). Thus, Ryan’s claims about the “genetic origins” of political behavior rest on studies that can neither prove nor disprove “genetic causation” (Shultziner, 2017). Since the EEA is false, we must discount “genetic causation” for psychological traits—not least because it is impossible for genes to cause/influence psychological traits (see argument (iii)).
The arguments he provides are a form of inference to the best explanation (IBE) (Smith, 2016). However, this is how just-so stories are created: the conclusion is already in mind, and the story is then crafted using “natural selection” to explain how a trait came to fixation and why it exists today. The whole book is full of such adaptive stories: we have the traits we do, in the distributions we find them in across the “populations”, because those traits were adaptive at a certain point in our evolutionary history, which led the individuals bearing them to pass on more of their genes, eventually leading to trait fixation (see Fodor and Piattelli-Palmarini, 2010).
Ryan makes outlandish claims such as: “Rightists are more likely than leftists to keep their desks neat. If in the distant past you knew exactly where the weapons were, you could find them quickly and react to danger more effectively. 26” (pg 45). He talks about how “time-consuming and effort-demanding accuracy of perception [were] more characteristic of leftist cognition … leftist cognition is more reflective” while “rightist cognition is intuitive rather than reflective” (pg 47). Rightists being more likely to endorse the status quo, he claims, is “an adaptive trait when scarce resources made energy management essential to getting by” (pg 48). Rightist language, he argues, uses more nouns since they are “more concrete, and anxious personalities prefer concrete to abstract language because it favors categorial rigidity and guarantees greater certainty”, while leftists “use words that suggest anxiety, anger, threats, certainty, resistance to change, power, security, and conformity” (pg 49). There is “a connection between archaic physiology and rightist moral ideology” (pg 52). Certain traits that leftists have were “adaptive traits [that] were suited to later stage human evolution” (pg 53). Ryan simply cites studies that show differences between rightists and leftists and then, with great leaps and mental gymnastics, tries to mold the findings into products of evolution in the two different time periods he describes in chapter 1 (Trump and Obama Island).
I have not read one page in this book that does not have some kind of adaptive just-so story attempting to explain certain traits/behaviors between rightists and leftists in evolutionary terms. Ryan uses the same kind of “reasoning” that Evolutionary Psychologists use—have your conclusion in mind first and then craft an adaptive story to explain why the traits you see today are there. Ryan outright says that “[t]raits are the result of adaptation to the environment” (pg 17), which is a rare—strong adaptationist—claim to make.
His book ticks all of the usual EP boxes: strong adaptationism, just-so storytelling, and the claim that traits were selected-for due to their contribution to survival in certain environments at different points in time. The strong adaptationist claims appear, for example, where he says that erectus’ brows “are ridged for protection from potentially fatal blows” (pg 34). Such claims imply that Ryan believes all traits are the result of adaptation and that they are still here today because they all served a function in our evolutionary past. His arguments are, for the most part, all evolutionary and follow the same patterns that the usual EP arguments do (see Smith, 2016 for an explication of just-so stories and what constitutes them). Due to the problems with Evolutionary Psychology, his adaptive claims should be ignored.
The arguments that Ryan provides are not scientific; although they give off a veneer of science by invoking “natural selection” and adaptationism, they are anything but. The book is a long-winded explanation of how and why rightists and leftists—liberals and conservatives—are different and why they cannot change, since these differences are “encoded” into our genome. The implicit claim of the book, then—that rightists and leftists are two different, natural, kinds—rests on the false bed of EP, and, therefore, the arguments provided in the book will fail to sway anyone who does not already believe such fantastic storytelling masquerading as science. While he does discuss other evolutionary theories, such as epigenetic ones from Jablonka and Lamb (2005), the book is largely strongly adaptationist, using “natural selection” to explain why we still have the traits we do in different “populations” today.
‘Health inequalities are the systematic, avoidable and unfair differences in health outcomes that can be observed between populations, between social groups within the same population or as a gradient across a population ranked by social position.’ (McCartney et al, 2019)
Health inequities, however, are differences in health that are judged to be avoidable, unfair, and unjust. (Sadana and Blas, 2013)
Asking “Is X racist?” is the wrong question. If X is factual, then stating it cannot be racist, since facts themselves cannot be racist. But one can perform a racist action—consciously or subconsciously—on the basis of a fact; that is, one can use facts to be racist. One can hold a racist belief (X group is better than Y group at Z), and systemic racism would be the result (the outcome) of acting on said belief. (Some examples of systemic racism can be found in Gee and Ford, 2011.) Someone who holds the belief that, say, whites are more “intelligent” than blacks, or Jews more “intelligent” than whites, could be said to be racist—they hold a racist belief and are making an invalid inference from a fact (blacks score, on average, 15 points lower than whites on IQ tests, so blacks are less intelligent). Truth cannot be racist, but truth can be used to attempt to justify certain policies.
I have argued that we should ban IQ tests on the following basis: if we believe that the hereditarian hypothesis is true when it is in fact false, then we can enact policies on the basis of false information. If we enact policies on the basis of false information, then certain groups may be harmed. If certain groups may be harmed, then we should ban whatever led to the policy in question. If the policy in question is derived from IQ tests, then IQ tests must be banned. This is one example of how a fact (like the IQ gap between blacks and whites) can be used for a racist action (shuttling those who perform under a certain expectation into remedial classes because they score lower than some average value). Believing that X group has a higher quality of life, educational achievement, and better life outcomes on the basis of its IQ scores—or its genes—is a racist belief, and this racist belief can then be used to perform a racist action.
I have also discussed different definitions of “racism.” Each definition discussed can be construed as having a possible action attached to it. Racism is an action—something we perform on the basis of certain beliefs, motivated by what we take to be possible in the future. Beliefs can be racist; racism can be described as an ideology that one acts on, with real consequences for people. Truth can’t be racist, but people can use the truth to perform and justify certain actions. Racism can also be said to be a “cultural and structural system” that assigns value based on race; further, the actions and intent of individuals are not necessary for structural mechanisms of racism (e.g., Bonilla-Silva, 1997).
We can, furthermore, use facts about racial differences in health outcomes and note that certain rationalizations of those outcomes can be construed as racist. “It’s in the genes!” and similar statements could be construed as racist, since they imply that certain inequalities are “immutable” owing to a strong genetic determination of disease.
Racism is indeed a public health issue. Physicians, like the average person, can hold racial biases, and differences in healthcare between majority and minority populations can be said to be systemic in nature (Reschovsky and O’Malley, 2008). This needs to be talked about, since racism can be and is a determinant of health—as many places in the country are beginning to recognize. Racism is rightly noted as a public health crisis because it leads to disparate outcomes between whites and blacks based on certain assumptions about the ancestral background of both groups.
Quach et al (2012) showed that withholding referrals to a specialist is a form of medical discrimination to which Asians, along with blacks, are exposed. Such discrimination can also lead to accelerated cellular aging in black men and women (Geronimus et al, 2006; 2011; Schrock et al, 2017; Forrester et al, 2019), as measured by telomere length, where shorter telomeres indicate a biological age higher than one’s chronological age (Shammas et al, 2012). We understand the reasons why such discrimination on the basis of race happens, and we understand the mechanism by which it leads to adverse health outcomes between races: chronic elevation in allostatic load leading to higher-than-normal levels of certain stress hormones which, eventually, produce differences in health outcomes.
The idea that genes or behavior lead to differences in health outcomes is racist (Bassett and Graves, 2018). It can then lead to racist actions—for instance, the claims that a group’s genetic constitution impedes it from being “near-par” with whites, or that its behavior (sans context) is the cause of the health disparities. Valles (2018: 186) writes:
…racism is a cause with devastating health effects, but it manifests via many intermediary mechanisms ranging from physician implicit biases leading to over-treatment, under-treatment and other clinical errors (Chapman et al. 2013; Paradies et al. 2015) to exposing minority communities to waterborne contaminants because of racist political disenfranchisement and neglect of community infrastructure (e.g., the infamous Flint Water Crisis afflicting my Michigan neighbors) (Krieger 2016; Sherwin 2017; Michigan Civil Rights Commission 2017).
There is a distinction between “equity” and “equality.” To continue with the public health example, take public health equality and public health equity. Here, “equality” means giving everyone the same thing, whereas “equity” means giving individuals what they need to be the healthiest they can possibly be. “Strong equality of health” is “where every person or group has equal health”, while weak health equity “states that every person or group should have equal health except when: (a) health equality is only possible by making someone less healthy, or (b) there are technological limitations on further health improvement” (Norheim and Asada, 2009). But we should not attempt to “level down” people’s health to achieve equity; we should attempt to “level up” people’s health. That is, it is impossible to reach strong health equality (making all groups equal), but we should—and indeed have a moral responsibility to—lift up those who are worse-off. Poverty is what is objectionable, not inequality. It is impossible to achieve true equality between groups, but we can, and have a moral obligation to, lift up those who are in poverty, which is also a social determinant of health (Braveman and Gottlieb, 2014; Frankfurt, 2015; Islam, 2019).
We achieve health equity when all individuals have the same access to be the healthiest individuals they can be; we achieve health equality when all health outcomes are the same for all groups. Health equity is, further, the absence of avoidable differences between different groups (Evans, 2020). One of these is feasible, the other is not. But racism does not allow us to achieve health equity.
The moral foundation for public health thus rests on general obligations in beneficence to promote good health. (Powers and Faden, 2006: 24)
Social justice is not only a matter of how individuals fare, but also about how groups fare relative to one another whenever systemic racism is linked to group membership. (Powers and Faden, 2006: 103)
…inequalities in well-being associated with severe poverty are inequalities of the highest moral urgency. (Powers and Faden, 2006: 114)
Public health is directly a matter of social justice. If public health is directly a matter of social justice, and if health disparities due to discrimination are caused by social injustice, then we need to address the causes of such inequalities—for example, conscious or unconscious prejudice against certain groups.
Certain inequalities between groups are, therefore, due to systemic racism, which can be conscious or unconscious. But which inequalities matter most? In my view, the inequalities that matter most are those that impede an individual or a group from having a certain quality of life. Racism can and does lead to health inequalities, and by addressing the causes of such actions we can begin to ameliorate the causes of structural racism. This is more evidence that the social can indeed manifest in biology.
Holding certain beliefs can lead to actions that can be construed as racist and that negatively impact health outcomes for certain groups. By committing ourselves to a framework of social justice and health, we can attempt to ameliorate the inequities between social classes/races, etc. that have plagued us for decades. We should strive for equity in health, which is a goal of social justice. We should not believe that such differences are “innate” and that there is nothing we can do about group differences (some of which are no doubt caused by systemically racist policies). Health equity is something we should strive for, and we have a moral obligation to do so; health equality is not obligatory, and it is not even a feasible goal.
If we can avoid certain health outcomes for certain groups on the basis of beliefs that we hold, then we should do so.
The use of polygenic scores has caused much excitement in the field of socio-genomics. A polygenic score is derived from statistical gene associations found in what is known as a genome-wide association study (GWAS). Using genes that are associated with many traits, proponents claim, they will be able to unlock the genomic causes of diseases and socially-valued traits. The methods of GWA studies also assume that the ‘information’ that is ‘encoded’ in the DNA sequence is “causal in terms of cellular phenotype” (Baverstock, 2019).
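To make the “statistical gene associations” point concrete, here is a minimal sketch of how a polygenic score is typically computed: a weighted sum of an individual’s risk-allele counts, with the weights taken from GWAS association estimates (betas). All SNP IDs, effect sizes, and genotypes below are hypothetical illustrations, not real GWAS output.

```python
# Minimal sketch of a polygenic score (PGS) computed from GWAS summary
# statistics. Every number and SNP ID here is hypothetical.

# GWAS summary statistics: per-SNP effect size (beta) for the trait,
# estimated from association (correlation), not from any causal mechanism.
gwas_betas = {
    "rs0001": 0.12,
    "rs0002": -0.05,
    "rs0003": 0.08,
}

def polygenic_score(genotype, betas):
    """Weighted sum of risk-allele counts (0, 1, or 2 copies per SNP)."""
    return sum(betas[snp] * count for snp, count in genotype.items() if snp in betas)

# One individual's allele counts at the scored SNPs.
person = {"rs0001": 2, "rs0002": 1, "rs0003": 0}
print(round(polygenic_score(person, gwas_betas), 2))  # 0.12*2 - 0.05*1 + 0.08*0 = 0.19
```

Nothing in this arithmetic encodes a mechanism; the betas are association estimates, which is the sense in which a polygenic score inherits the correlational character of the GWAS it comes from.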
For instance, it is claimed by Robert Plomin that “predictions from polygenic scores have unique causal status. Usually correlations do not imply causation, but correlations involving polygenic scores imply causation in the sense that these correlations are not subject to reverse causation because nothing changes the inherited DNA sequence variation.”
Take the stronger claim from Plomin and Stumm (2018):
GPS are unique predictors in the behavioural sciences. They are an exception to the rule that correlations do not imply causation in the sense that there can be no backward causation when GPS are correlated with traits. That is, nothing in our brains, behaviour or environment changes inherited differences in DNA sequence. A related advantage of GPS as predictors is that they are exceptionally stable throughout the life span because they index inherited differences in DNA sequence. Although mutations can accrue in the cells used to obtain DNA, like any cells in the body these mutations would not be expected to change systematically the thousands of inherited SNPs that contribute to a GPS.
This is a strange claim for two reasons.
(1) They do not, in fact, imply causation, since the scores are derived from GWA studies, which are associational and therefore cannot show causes—GWA studies are pretty much giant correlational studies that scan the genomes of hundreds of thousands of people and look for gene variants that are more likely to appear in the sample population with the disease/“trait” in question. These studies are also heavily skewed toward European populations and, even if they were valid for European populations (which they are not), they would not be valid for non-European ethnic groups (Martin et al, 2017; Curtis, 2018; Haworth et al, 2018).
(2) The claim that “nothing changes inherited DNA sequence variation” is patently false; what one experiences throughout their lives can most definitely change their inherited DNA sequence variation (Baedke, 2018; Meloni, 2019).
But, as Turkheimer points out, Plomin and Stumm are assuming that no top-down causation exists (see, e.g., Ellis, Noble, and O’Connor, 2011). We know that both top-down (downward) and bottom-up (upward) causation exist (e.g., Noble, 2012; see Noble, 2017 for a review). Plomin, it seems, is coming from a very hardline view of genes and how they work. It is a view that, it seems to me, derives from the Darwinian view of genes and how they ‘work.’
Such work is also carried out under the assumption that ‘nature’ and ‘nurture’ are independent and can therefore be separated. Indeed, the title of Plomin’s 2018 book Blueprint implies that DNA is a blueprint; in the book he claims that DNA is a “fortune-teller” and that things like PGSs are “fortune-telling devices” (Plomin, 2018: 6). PGS studies are also premised on the assumption that the heritability estimates derived from twin/family/adoption studies tell us something about how “genetic” a trait is. But since the EEA is false (Joseph, 2014; Joseph et al, 2015), we should outright reject any and all genetic interpretations of these kinds of studies. If the main assumption is false, then the conclusions crumble.
Indeed, lifestyle factors are better indicators of one’s disease risk than polygenic scores: “a person with a ‘high’ gene score risk but a healthy lifestyle is at lower risk than a person with a ‘low’ gene score risk and an unhealthy lifestyle” (Joyner, 2019). Janssens (2019) argues that PRSs (polygenic risk scores) “do not ‘exist’ in the same way that blood pressure does … [nor do they] ‘exist’ in the same way clinical risk models do …” Janssens and Joyner (2019) also note that “Most [SNP] hits have no demonstrated mechanistic linkage to the biological property of interest.” Only by showing mechanistic relations between the proposed gene(s) and the disease phenotype would researchers be on their way to showing “causation” for PGS/PRS.
Moreover, Sexton et al (2018) note that “While research has shown that height is a polygenic trait heavily influenced by common SNPs [7–12], a polygenic score that quantifies common SNP effect is generally insufficient for successful individual phenotype prediction.” Smith-Woolley et al (2018) write that “… a genome-wide polygenic score … predicts up to 5% of the variance in each university success variable.” But think about the words “predicts up to”—this is a nearly meaningless phrase. Such language insinuates causation when neither they nor anyone else has shown that such scores are indeed causal (mechanistically).
What these studies index are not causal genetic variants for disease and other “traits”; they reflect the structure of the population sampled (Richardson, 2017; Richardson and Jones, 2019). Furthermore, the demographic history of the sample can also mediate the stratification in the population (Zaidi and Mathieson, 2020). Therefore, claims that PGSs are causal are unfounded—indeed, GWA studies cannot show causation. GWA studies survive on the correlational model, but, as many authors have shown, the studies yield spurious correlations, not the “genetics” of any studied “trait”, and they therefore do not show causation.
One further nail in the coffin for hereditarian claims about PGS/PRS and GWA studies is the fact that the larger the dataset (the larger the number of datapoints), the more spurious correlations will be found (Calude and Longo, 2017). For hereditarian claims, this is relevant to twin studies (e.g., Polderman et al, 2015) and GWA studies of “intelligence” (e.g., Sniekers et al, 2017). It is entirely possible, as Richardson and Jones (2019) argue, that the results from GWA studies “for intelligence” are entirely spurious, since the correlations may appear due to the size of the dataset, not the nature of it (Calude and Longo, 2017). Zhou and Zhao (2019) argue that “For complex polygenic traits, spurious correlation makes the separation of causal and null SNPs difficult, leading to a doomed failure of PRS.” This is troubling for hereditarian claims about “genes for” “intelligence” and other socially-valued traits.
How can hereditarians show PGS/PRS causation?
This is a hard question, but I think I have an answer. The hereditarian must:
(1) provide a valid deductive argument in which the conclusion is the phenomenon to be explained; (2) provide an explanans (the sentences adduced as the explanation of the phenomenon) that contains at least one lawlike generalization; and (3) show that the remaining premises, which state the antecedent conditions, have empirical content and are true.
An explanandum is a description of the event that needs explaining (in this case, the PGS/PRS correlation), while an explanans does the explaining—the sentences adduced as the explanation of the explanandum. Garson (2018: 30) gives the example of zebra stripes and flies. The explanans is Stripes deter flies, while the explanandum is Zebras have stripes. So we can then say that zebras have stripes because stripes deter flies.
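The three conditions above mirror Hempel’s deductive-nomological schema, which can be sketched compactly (with L standing for the lawlike generalizations, C for the antecedent conditions, and E for the explanandum):

```latex
\underbrace{L_1, \ldots, L_m}_{\text{lawlike generalizations}},\quad
\underbrace{C_1, \ldots, C_n}_{\text{antecedent conditions}}
\;\;\vdash\;\;
\underbrace{E}_{\text{explanandum}}
```

In the zebra example, the lawlike generalization is that stripes deter flies, the antecedent conditions describe zebras and their fly-ridden environment, and the explanandum is that zebras have stripes; the demand on the hereditarian is to fill in this schema for PGS/PRS with true, empirically contentful premises.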
Causation for PGSs would not be shown, for example, by showing that certain races/ethnies have higher PGSs for “intelligence”. The claim is that since Jews have higher PGSs for “intelligence”, it follows that PGSs can show causation (e.g., Dunkel et al, 2019; see Freese et al, 2019 for a response). But this just shows how ideology can and does color the conclusions one gleans from certain data. It is NOT sufficient to show causation for PGSs.
PGSs cannot, currently, show causation. The studies from which such scores are derived fall prey to the fact that spurious correlations are inevitable in large datasets, which is also a problem for other hereditarian claims (about twin studies and GWA studies of “intelligence”). Thus, PGSs do not show causation, and since large datasets breed spurious correlations, even increasing the number of subjects in a study would not elucidate “genetic causation.”
Ranking human worth on the basis of how well one compares in academic contests, with the effect that high ranks are associated with privilege, status, and power, does suggest that psychometry is best explored as a form of vertical classification and attending rankings of social value. (Garrison, 2009: 36)
Binet and Simon’s (1916) book The Development of Intelligence in Children is something of a Bible for IQ-ists. The book chronicles the methods Binet and Simon used to construct their tests for children, to identify those children who needed more help at school. In the book, they describe the anatomic measures they used. Indeed, before becoming a self-taught psychologist, Binet measured skulls and concluded that skull measurements did not correlate with teachers’ assessments of their students’ “intelligence” (Gould, 1995, chapter 5).
In any case, despite Binet’s protestations that Gould discusses, he wanted to use his tests to create what Binet and Simon (1916: 262) called an “ideal city.”
It now remains to explain the use of our measuring scale which we consider a standard of the child’s intelligence. Of what use is a measure of intelligence? Without doubt one could conceive many possible applications of the process, in dreaming of a future where the social sphere would be better organized than ours; where every one would work according to his own aptitudes in such a way that no particle of force should be lost for society. That would be the ideal city. It is indeed far from us. But we have to remain among the sterner and matter-of-fact realities of life, since we here deal with practical experiments which are the most commonplace realities.
Binet disregarded his skull measurements as a correlate of ‘intelligence’ since they did not agree with teachers’ ratings. But then Binet and Simon (1916: 309) discuss how teachers assessed students (and gave an example). This is how Binet made sure that the new psychological ‘measure’ he devised related to how teachers assessed their students. Binet and Simon’s “theory” grouped certain children as “superior” and others as “inferior” in ‘intelligence’ (whatever that is), but did not pinpoint biology as the cause of the differences between the children. These groupings, though, corresponded to the social class of the children.
Thus, in effect, what Binet and Simon wanted to do was to organize society along social class lines while using their ‘intelligence tests’ to place individuals where they “belonged” on the hierarchy on the basis of their “intelligence”—whether that “intelligence” was “innate” or “learned.” Indeed, Binet and Simon did originally develop their scales to distinguish children who needed more help in school than others. They assumed that individuals had certain (intellectual) properties which then related to their class position, and that by using their scales they could identify certain children and place them into certain classes for remedial help. But a closer reading of Binet and Simon shows two hereditarians who wanted to use their tests for reasons similar to those for which the tests were originally brought to America!
Binet and Simon’s test was created to “separate natural intelligence and instruction” since they attempted to ‘measure’ the “natural intelligence” (Mensh and Mensh, 1991). Mensh and Mensh (1991: 23) continue:
Although Binet’s original aim was to construct an instrument for classifying unsuccessful school performers inferior in intelligence, it was impossible for him to create one that would do only that, i.e., function at only one extreme. Because his test was a projection of the relationship between concepts of inferiority and superiority—each of which requires the other—it was intrinsically a device for universal ranking according to alleged mental worth.
This “ideal city” that Binet and Simon imagine would have individuals work to their “known aptitudes”—meaning that individuals would work where their social class dictated they would work. This was, in fact, eerily similar to the uses of the test that Goddard translated and the test—the Stanford-Binet—that Terman developed in 1916.
Binet and Simon (1916: 92) also discuss further uses for their tests, irrespective of job placement for individuals:
When the work, which is here only begun, shall have taken its definite character, it will doubtless permit the solution of many pending questions, since we are aiming at nothing less than the measure of intelligence; one will thus know how to compare the different intellectual levels not only according to age, but according to sex, social condition, and to race; applications of our method will be found useful to normal anthropology, and also to criminal anthropology, which touches closely upon the study of the subnormal, and will receive the principle conclusion of our study.
Binet, therefore, had views similar to Goddard’s and Terman’s regarding “tests of intelligence”, and he wanted to stratify society by ‘intelligence’ using his own tests (which were culturally biased against certain classes). Binet’s writings on the uses of his tests, ironically, mirrored what the creators of the Army Alpha and Beta tests believed. Binet believed that his tests could select individuals who were right for the roles to which they would be assigned. Binet, nevertheless, contradicted himself numerous times (Spring, 1972; Mensh and Mensh, 1991).
This dream of an “ideal city” was taken a step further when Binet’s test was translated and brought to America by Goddard and used for selecting military recruits (call it an “ideal country”). The tests were constructed to “ensure” that “the right” percentages of “the right” people would end up in the spots designated to them on the basis of their intelligence.
What Binet was attempting to do was to mark individual social value with his test. He claimed that we can use his (practical) test to select people for certain social roles. Thus, Binet’s dream for what his tests would do—further developed by Goddard, Yerkes, Terman, et al—is inherent in what the IQ-ists of today want to do. They believe that there are “IQ cutoffs”, meaning that people with an IQ above or below a certain threshold won’t be able to do job X. However, the causal efficacy of IQ is exactly what is in question, along with the fact that IQ-ists build their own biases into tests that they believe are ‘objective.’ But where Binet differed from the IQ-ists of today and from his contemporaries was in believing that ‘intelligence’ is relative to one’s social situation (Binet and Simon, 1916: 266-267).
It is ironic that Gould believed that we could use Binet’s test (along with contemporary tests constructed and ‘validated’—correlated—with Terman’s Stanford-Binet test) for ‘good’; this is what Binet thought would be done with it. But once the hereditarians had Binet’s test, they took Binet’s arguments to their logical conclusion. This also has to do with the fact that the test was constructed AND THEN they attempted to ‘see’ what was ‘measured’ with correlational studies. The ‘meaning’ of test scores, thus, is seen after the fact with—wait for it—correlations with other tests that were ‘validated’ with other (unvalidated) tests.
This comes back to the claim that the mental can be ‘measured’ at all. If physicalism is false—and there are dozens of (a priori) arguments that establish this—then the mental is irreducible to the physical, and psychological traits—and with them the mind—cannot be measured. Further, rankings are not measures (Nash, 1990: 63); therefore, ability and achievement tests cannot be ‘measures’ of any property of individuals or groups. The real object of measurement is the human being, and this was inherent both in Binet’s original conception of his test and in what the IQ-ists in America attempted with their restrictions on immigration in the early 1900s.
This speaks to the fatalism inherent in IQ-ism—inherent since the creation of the first standardized tests (of which IQ tests are one kind). These tests have, since their inception, attempted to measure human worth and the differences in value between persons. The IQ-ist claims that “IQ tests must measure something”, and that this ‘measurement’ is shown by the tests’ ‘predictive validity.’ But the claim that the tests measure a ‘property’ inherent in individuals and groups fails. The real ‘function’ of standardized testing is assessment, not measurement.
The “ideal city”, it seems, is just a city of IQ-ism—where one’s social role is delegated by where one scores on a test constructed to get the results its constructors want. Therefore, what Binet wanted his tests to do was (and, some may even argue, still is) to mark social worth (Garrison, 2004, 2009). Psychometry is therefore inherently political and not “value-free.” Psychologists/psychometricians do not have an ‘objective science’, as the object of study (the human) can reflexively change their behavior when they know they are being studied. Their field is inherently political, and they mark individuals and groups—whether they admit it or not. “Ideal cities” can lead to eugenic thinking, in any case, and striving for “ideality” can lead to social harms—even if the intentions are ‘good.’
Discussions about whiteness and privilege have become more and more common. Whites, it is argued, have a form of unearned societal privilege which then explains certain gaps between whites and non-whites. White privilege is the privilege that whites have in society—and this type of privilege is not unique to America; it can hold for groups viewed as ‘white’ in other countries. This, then, perpetuates social views of race: such people are realists about race in a social/political context and need not recognize race as biological (although race can become biologicized through social/cultural practices). This article will discuss (1) what white privilege is; (2) who has white privilege; (3) arguments against white privilege; and (4) if race doesn’t exist, why white privilege matters.
What is white privilege?
The concept of white privilege, like most concepts, evolves with the times and current social thought. It was originally created to account for whites’ (unearned) privileges and the conscious bias that went into creating and then maintaining them; it has since expanded to cover the unconscious favoritism/psychological advantages that whites confer on other whites (Bennett, 2012: 75). That is, white privilege is “an invisible package of unearned assets that I can count on cashing in each day, but about which I was “meant” to remain oblivious. White privilege is like an invisible weightless knapsack of special provisions, maps, passports, codebooks, visas, clothes, tools, and blank checks” (McIntosh, 1988).
More simply, we can say that white privilege is the privilege conferred, consciously or subconsciously, on one based on skin color—or, as Sullivan (2016, 2019) argues, what we should be talking about is class status ALONG WITH whiteness: white privilege with CLASS in between ‘white’ and ‘privilege’. In this sense, one’s class status AND one’s whiteness are explanatory, not the concept of whiteness (i.e., one’s socialrace) alone. The concept of whiteness—one’s skin color—as the privilege leaves out numerous intricacies in how whiteness confers advantage and upholds systemic discrimination. When we add the concept of ‘class’ into ‘white privilege’ we get what Sullivan terms ‘white class privilege’.
While, yes, one’s race is an important variable in whether or not one has certain privileges, such privileges largely accrue to middle- to upper-middle-class whites. Thus, numerous examples of ‘white privilege’ are better understood as examples of ‘white class privilege’, since lower-class whites don’t have the same kinds of privileges, outlooks, and social status as middle- and upper-middle-class whites. Of course, lower-class whites can still benefit from their whiteness—they definitely can. But the force of Sullivan’s concept of ‘white class privilege’ is this: white privilege is not monolithic across whites, and some non-whites are better-off (economically and in regard to health) than some whites. Thus, according to Sullivan, ‘white privilege’ should be amended to ‘white class privilege’.
Who has white privilege?
Lower-class whites could, in a way, be treated differently than middle- and upper-class whites—even though they are of the same race. In everyday thought, lower-class whites are seen to have ‘white privilege’, since most think of the privilege as coming down to skin color alone; yet there is an unspoken class dimension at play here, which can even give some blacks an advantage while upholding the privilege of upper-class whites.
Non-whites who are of a higher social class than whites would also receive different treatment. Sullivan states that the revised concept of ‘white class privilege’ must be used intersectionally—that is, privilege must be considered as interacting with class, gender, nationality, and other social experiences. Sure, lower-class whites may be treated differently than higher-class blacks in certain contexts, but this does not mean that the lower-class white has ‘more privilege’ than the upper-class black. This shows that we should not assume that lower-class whites have the same kinds of privilege conferred by society as middle- and upper-class whites. Upper-class blacks and ‘Hispanics’ may attempt to distinguish themselves from lower-class blacks and ‘Hispanics’, as Sullivan (2019: 18-19) explains:
Class privilege shows up as a feature of most if not all racial groups in which members with “more”—more money, education, or whatever else is valued in society—are treated better than those with “less.” For that reason, we might think that white class privilege actually is an intragroup pattern of advantage and disadvantage among whites, rather than an intergroup pattern that gives white people a leg up over non-white people. After all, many Black middle-class and upper-middle-class Americans also go to great lengths to make sure that they are not mistaken for the Black poor in public spaces: when they are shopping, working, walking, or driving in town, and so on (Lacy, 2007). A similar pattern can be found with middle-to-upper-class Hispanic/Latinx people in the United States, who can “protect” themselves from being seen as illegal immigrants by ensuring that they are not identified as poor (Masuoka and Junn, 2013).
Sullivan then goes on to state that these situations are not equivalent, since wealth, fame, and education do not protect upper-class blacks from racial discrimination. The particular privileges that upper-class whites have thus do not transfer to upper-class blacks. Further, middle- to upper-class whites distinguish themselves as ‘good whites’ who are not racist, while dumping all of the racism accusations on lower-class whites. “…the line between “good” and “bad” white people drawn by many (good) white people is heavily classed. Good white people tend to be middle-to-upper-class, and they often dump responsibility for racism onto lower-class white people” (Sullivan, 2019: 35). Even though lower-class whites get used as a ‘shield’, so to speak, by upper-class whites, they still have some semblance of white privilege, in that they are not assumed to be non-citizens of the US—something that ‘Hispanics’ do have to deal with (no matter their race).
While wealthy white people generally have more affordances than poor white people do, in a society that prizes whiteness all white people have some racial affordances, at least some of the time.
Paradoxically, whites are not the only ones who benefit from ‘white privilege’—even non-whites can benefit, though it ultimately helps upper-class whites. They can benefit by being brought up in a white home, around whites (for example, by being adopted, or by having one white parent and spending most of their childhood with their white family). Thus, white privilege can cross racial lines while still benefitting whites.
Sullivan (2019: chapter 2) discusses some blacks who benefit from white privilege. One of the people she discusses has a white parent. This is what gives her her lighter skin, but that is not where her privilege comes from (think colorism in the black community where lighter skin is more prized than darker skin). Her privilege came from “her implicit knowledge of white norms, sensibilities, and ways of doing things that came from living with and being accepted by white family members” (Sullivan, 2019: 26). This is what Sullivan calls “family familiarity” and is one of the ways that blacks can benefit from white privilege. Another way in which blacks can benefit from white privilege is due to “ancestral ties to whiteness.”
Colorism is the discrimination within the black community by skin color. Certain blacks may talk about “light-” and “dark-skinned” blacks and they may—ironically or not—discriminate on the basis of skin color. Such colorism is even somewhat instilled in the black community—where darker-skinned black sons and lighter-skinned black daughters report higher-quality parenting. Landor et al (2014) report that their “findings provide evidence that parents may have internalized this gendered colorism and as a result, either consciously or unconsciously, display higher quality of parenting to their lighter skin daughters and darker skin sons.” Thus, even certain blacks—in virtue of being ‘part white’—would benefit from white (skin) privilege within their own (black) community, which would therefore give them certain advantages.
Arguments against white privilege
Two recent articles with arguments against white privilege (Why White Privilege Is Wrong — Quillette and The Fallacy of White Privilege — and How It Is Corroding Society) erroneously conclude that since other minority groups rose quickly upon arrival in America, white privilege is a myth. These takes, though, are quite confused. It does not follow from the fact that other groups have risen upon entry into America, or that whites have worse outcomes on some (and not other) health measures, that the concept of white privilege is ‘fallacious’; we just need something more fine-grained.
For example, the claim that X minority group is over-represented compared to whites in America gets used as evidence that ‘white privilege’ does not exist (e.g., Avora’s article). Avora discusses the experiences and data of many black immigrants, proclaiming:
These facts challenge the prevailing progressive notion that America’s institutions are built to universally favor whites and “oppress” minorities or blacks. On the whole, whatever “systemic racism” exists appears to be incredibly ineffectual, or even nonexistent, given the multitude of groups who consistently eclipse whites.
How does that follow? How does the fact that, for example, Japanese Americans now outperform whites show that white privilege is a ‘fallacy’? I ask because Asian immigrants to America are hyper-selected (Noam, 2014; Zhou and Lee, 2017): what explains higher Asian academic achievement is academic effort (Hsin and Xie, 2014) along with that hyper-selection—meaning that they are far more likely than the populations they come from to hold higher degrees.
The educational credentials of these recent [Asian] arrivals are striking. More than six-in-ten (61%) adults ages 25 to 64 who have come from Asia in recent years have at least a bachelor’s degree. This is double the share among recent non-Asian arrivals, and almost surely makes the recent Asian arrivals the most highly educated cohort of immigrants in U.S. history.
Compared with the educational attainment of the population in their country of origin, recent Asian immigrants also stand out as a select group. For example, about 27% of adults ages 25 to 64 in South Korea and 25% in Japan have a bachelor’s degree or more. In contrast, nearly 70% of comparably aged recent immigrants from these two countries have at least a bachelor’s degree. (The Rise of Asian Americans)
Avora even discusses some African immigrants, namely Nigerians and Ghanaians. However, just like Asian immigrants to America, Nigerian and Ghanaian immigrants to America are more likely to hold advanced degrees, signifying that they are indeed hyper-selected in comparison to the populations they derive from (Duvivier, Burch, and Boulet, 2017). Thus, regarding the stats that Avora cites on the children of Nigerian immigrants: their parents already held higher degrees, signifying that they are indeed a hyper-selected group. This means that such ethnic groups cannot be used to argue against the explanatory force of white privilege.
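The arithmetic of hyper-selectivity is worth making explicit. The sketch below uses the Pew figures quoted above for the origin-country and immigrant degree rates; the host-country rate (32%) is an assumed placeholder for illustration, not a sourced statistic.

```python
# Sketch of "hyper-selectivity" arithmetic. Origin and immigrant rates
# follow the Pew figures quoted above; the host-country rate is an
# assumed placeholder, not a sourced statistic.
origin_degree_rate = 0.25      # adults 25-64 in the origin country (e.g., Japan)
immigrant_degree_rate = 0.70   # recent immigrants from that country to the US
host_degree_rate = 0.32        # assumed US rate, for illustration only

vs_origin = immigrant_degree_rate / origin_degree_rate
vs_host = immigrant_degree_rate / host_degree_rate
print(f"Immigrants are {vs_origin:.1f}x as likely to hold a degree "
      "as the population they left")
print(f"...and {vs_host:.1f}x as likely as the host population")
```

The point: comparing such a group’s outcomes to the host-country average says nothing about privilege, because the group was never a random draw from its origin population in the first place.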
While Avora does discuss “class” in his article, what he shows is that it’s not only ‘white privilege’ at work, but the class element that comes along with whiteness in America. He therefore unknowingly shows that once you add the ‘class’ factor and create the concept of ‘white class privilege’, this privilege can cross racial lines and benefit non-whites.
In the Harinam and Henderson Quillette article, they argue that since non-whites have more of some ‘good’ things than whites, the concept of ‘white privilege’ cannot explain the existence of disparities between ethnic groups in the US—some bad things happen to whites and some good things happen to non-whites. But this is an oversimplification. The fact of the matter is, whites who do receive privileges over other ethnic/racial groups do so not merely in virtue of their (white) skin privilege, but in virtue of their class privilege. This can be seen in the citations above on class being the explanatory variable regarding Asian academic success (showing how class values get reproduced in the new country, which then explains the academic success of Asians in America).
Both of these articles assume that showing some minority groups in America have more ‘good’ things than whites, or better outcomes on bad things (like suicide), refutes white privilege—but this misses the point. That whites kill themselves at higher rates than other American ethnic groups does not mean that whites lack privilege in America compared to other groups.
If race doesn’t exist, then why does white privilege matter?
Lastly, critics of the concept of white privilege may say that those who deny the existence of race—and therefore of ‘whites’—cannot coherently talk about white privilege at all. This is, of course, a ridiculous objection. One can indeed reject the claims of biological racial realists while believing that race exists as a socially constructed reality. Thus, one can reject the claim that there is a ‘biological’ European race while accepting that there is an ever-changing ‘white’ race, to which groups get added or from which they are subtracted based on current social thought (e.g., the Irish, Italians, Jews), changing with how society views certain groups.
It is perfectly possible for race to exist socially and not biologically. The social creation of races places the arbitrarily-created racial groups at certain positions on the hierarchy of races. Roberts (2011: 15) states that “Race is not a biological category that is politically charged. It is a political category that has been disguised as a biological one.” She argues that we are not biologically separated into races, we are politically separated into them, signifying race as a political construct. Most people believe that the claim “Race is a social construct” means that “Race does not exist.” But that does not follow. The social constructivist just believes that society divides people into races based on how we look (i.e., how we are born). Society takes the phenotype and creates races out of differences which then correlate with certain continents.
So, there is no contradiction in the claim that “Race does not exist” and the claim that “Whites have certain unearned privileges over other groups.” Being an antirealist about biological race does not mean that one is an antirealist about socialraces. Thus, one can believe that whites have certain privileges over other groups, all the while being antirealists about biological races (saying that “Races don’t exist biologically”).
In this article I have explained what white privilege is and who has it. I have also discussed arguments against white privilege, along with the claim that those who argue against race are hypocrites since they still talk about “whites” while denying that race exists. After showing the conceptual confusions that people have about white privilege, and addressing the groups that do better than whites in America (the groups that supposedly show white privilege to be “a fallacy”), I then forwarded Sullivan’s (2016, 2019) argument for white class privilege. This shows that whiteness is not the sole reason why such whites prosper—their whiteness along with their middle-to-upper-middle-class status explains why they prosper. It also shows that while lower-class whites do have some sort of white privilege, they do not have all of the affordances of white privilege, due to their class status. Blacks, too, can benefit from white privilege, whether through their proximity to whiteness or their ancestral heritage.
White privilege does exist, but to fully understand it, we must add in the nexus of class with it.
Summer vacation gives us a natural experiment to study the effects of vacation on IQ—and, unsurprisingly, the outcome is that one’s IQ is a function of what one is exposed to during the summer. We see the expected trajectories and outcomes in IQ based on the social class of the individual. A few studies dating back to the 1920s and 1960s have been carried out on what occurs during summer vacation—and small, but noticeable, decreases in IQ are found. This only serves to further strengthen the claim that “IQ tests” are middle-class knowledge tests and that IQ is an outcome—not a cause.
Why might we see an IQ decrease in the summer? Well, for one, students are thrown out of the “school rhythm” that they settle into over the nine months they are in school. Since they have three months off from their learning (say, June-September), they become less familiar with test-taking, causing a decrease in scores. If “IQ tests” are indeed tests of middle-class knowledge and skills, and if we think of an “IQ score” as a rough proxy for social class, then we would expect certain academic achievements related to IQ to rise or fall in certain contexts (i.e., with one’s age, race, gender, social class, etc). This is what we find.
For instance, Cooper et al (1996) meta-analyzed 13 studies (while reviewing 39). They found that, over summer vacation, students’ scores—tested before and after the break—dropped by the equivalent of about one month of grade-level learning. They also found that middle-class children showed an increase in reading, while lower-class children showed a decrease. This can be explained by, for example, the presence of books in the home and how it differs between social classes. We know that the presence of books in the home is an indicator of academic performance (Evans, Kelley, and Sikora, 2014). This is important, because children who reported that they had easier access to books read more books (Kim, 2004), while voluntary reading programs do increase reading test scores (Kim and White, 2008).
“Growing up in the scholarly culture provides important academic skills“, note Evans, Kelley, and Sikora (2014: 19), and this matters because such tests are constructed by certain people with certain assumptions about the nature of the tests in question (Richardson, 2000, 2002). Thus, what explains the finding is that those from higher-class families have more access to books, and so they avoid the decrease in reading skills during the summer. (Think of “summer reading” programs. I recall them from my youth—I remember reading The Hot Zone for a summer reading book once.) This replicates previous research from this team, which showed that children who grew up in homes with “many books” had three more years of schooling than children from “bookless homes”, independent of the education, occupation, and social class of the parents (Evans et al, 2010).
Cooper et al (1996) discuss Heyns’ (1978) book Summer Learning and the Effects of School, where Heyns shows that summer learning is more dependent on parental occupation than is learning during the school year (Cooper et al, 1996: 243). Heyns’ data showed that summer vacation widened the achievement gap between rich and poor (meaning high and low social class) and that it also widened the gap between blacks and whites. Cooper et al’s meta-analysis also showed that the gap in reading achievement between middle- and low-class learners grew by the equivalent of three months over the summer. While children in both classes show decreases in reading skills over the summer, lower-class students showed steeper declines than middle-class students. What this suggests is that class differences can—and do—increase inequalities between the two classes. A lower-class status would then translate into fewer learning opportunities (meaning fewer opportunities to prepare for what amounts to middle-class knowledge tests), which explains why the gap increases between the two social classes.
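The seasonal pattern these studies describe can be caricatured with a toy model: both groups learn at the same rate during the school year, but summer gains differ by class. All of the rates below are invented for illustration; they are not estimates from Heyns or Cooper et al.

```python
# Toy seasonal-learning model of the pattern described above: both classes
# gain equally during the 9-month school year, but over the 3-month summer
# the lower-class group loses ground while the middle-class group gains
# slightly. All rates are invented for illustration.
school_gain = 1.0          # grade-equivalents gained per school year (both groups)
summer_gain = {"middle-class": 0.1, "lower-class": -0.2}  # assumed summer rates

scores = {"middle-class": 0.0, "lower-class": 0.0}
for year in range(1, 6):   # five school years, each followed by a summer
    for group in scores:
        scores[group] += school_gain + summer_gain[group]
    gap = scores["middle-class"] - scores["lower-class"]
    print(f"After year {year}: gap = {gap:.1f} grade-equivalents")
```

Under these assumptions the gap grows by a fixed amount each summer while the school year adds nothing to it, which is the “schools as equalizers” reading of the seasonal-comparison studies.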
So, as Cooper et al (1996) show, summer vacation has an equal effect on math skills between middle- and lower-class children, while, when it comes to reading skills, lower-class students take a bigger hit (which can be explained by access to books in the home). To attempt to mitigate these disparities, we could, for example, mandate some type of summer math program for all classes, or reading instruction for lower-class children, since the analysis pointed to these two types of disparities. Of course, reading practice at home is more readily available than math practice—and unequally so—which would explain why the reading disparities appear between blacks and whites and between social classes.
Note that a decrease in mathematical skill was found by Paechter et al (2015) in a sample of Austrian children, who have a 9-week vacation. They write that “Losses or gains in a knowledge domain appear to depend on the degree of practice during the summer vacation“, and this is intuitive based on the nature of test-taking.
Entwisle and Alexander (1992; 1994) studied the “summer setback” in a random sample of black and white children in Baltimore, Maryland. In longitudinal fashion, they tested these children before they entered the first grade. Math test scores were used as a proxy for how ‘stimulating’ a home was when it came to knowledge acquisition during the summer. They found that the two most important factors for math skills during the summer were differences in family SES and how segregated the schools were. They also noted that school integration helps black students, while white students do just as well whether or not the school they attend is integrated. (Also see Johnson and Nazaryan, 2019, who show the same—they also show that, regardless of race, children who attended integrated schools had better life outcomes than children who did not.) The 1994 paper also showed that linguistic differences between integrated Baltimore schools could account for differences in reading skills. (Also see Patterson, 2015.)
Alexander, Entwisle, and Olson (2001) write (their emphasis):
When our study group started school their pre-reading and pre-math skills reflected their uneven family situations, and these initial differences were magnified across the primary grades because of summer setback despite the equalizing effect of their school experiences.
Class gaps grow in the summer, when “non-school influences dominate” (Condron, 2009), which, again, shows that these tests test certain types of knowledge found in certain classes over others, explaining the disparities between groups. It is established that higher-SES children learn more over the summer (Burkam et al, 2004), and this is due, again, to the types of content on the tests in question (since the tests are constructed by people from a narrow—higher—social class).
The lasting effects of the summer vacation learning gap are succinctly put by Alexander, Entwisle, and Olson (2007: 168):
(1) if the achievement gap by family SES during the elementary school years traces substantially to summer learning differences, and (2) if achievement scores are highly correlated across stages of young people’s schooling, and (3) if academic placements and attainments at the upper grades are selected on the basis of achievement scores, then (4) summer learning differences during the foundational early grades help explain achievement-dependent outcome differences across social lines in the upper grades, including the transition out of high school and, for some, into college.
Thus, summer vacation has a negative effect on all students, and this is particularly pronounced in differences between groups. So, we can either:
(1) Extend the school year. With a longer school year, we could monitor children better and mitigate the problem areas that arise during the time away from school;
(2) Mandate summer school. With mandated summer school, the academic achievement gap between classes would widen less, though such remedial classes have differing effects depending on context and the group studied (see McComb et al, 2001; Cooper et al, 2005); or
(3) Modify the school calendar. Since the hit to knowledge is not equal in all groups studied, it would behoove us to target at-risk groups throughout the school year and then, possibly, spread breaks across the year rather than taking an all-at-once three months off from school, so as to better foster the academic skills used for test-taking.
The heart of the problem of the 'summer slide' is less stimulating environments during the summer (which differ by race/social class); thus, what explains the differences in the amount of knowledge kept over the summer vacation is how well the household mimics the school environment, since the tests in question are tests of middle-class knowledge and skills. This squares nicely with the research that schooling is important for IQ—even that it is causally efficacious regarding IQ (Ceci, 1990; Ritchie and Tucker-Drob, 2018). Even then, the gap between blacks and whites in test scores grows much more slowly during the school year than during summer vacation, indicating that "schools are, indeed, the great equalizers" (Downey, von Hippel, and Broh, 2004: 633).
This type of research does, indeed, buttress my claim that IQ is an outcome and not a cause. The claim that one is more 'intelligent' or that 'one has a higher IQ than another' is a descriptive, not an explanatory, claim. We have at least three choices to think over when it comes to mitigating the problems that summer vacation brings to students—environments that are, relative to the school environment, 'duller' and that hamper learning and knowledge acquisition. Due to either lower levels of forgetting, or an advantage in continuing to learn over their less-advantaged peers, higher-SES children return to school with a subsequent advantage over lower-SES children; this is one way in which summer vacation widens inequality between groups.
It has come to my attention that near the end of 2007, Nike boasted about releasing a running shoe that specifically targeted Native American communities. Nike developed the shoe "to address the specific fit and width requirements for the Native American foot." Since Native Americans have a high rate of obesity and diabetes ("diabesity"), it seems that it would be a good thing to promote a shoe specifically for and to the population in question. But do such gestures translate to racist ideas, or do they translate to a corporation wanting to be seen as promoting health (while its ultimate goal is profit)? Nike, specializing in athletic clothing, surely would be a good organization to spearhead such a movement, right? On what research is this initiative based, and does it hold water?
Through such outreach programs, Nike hopes to be seen as making social and community impacts when it comes to health. As Welch (2019: 12) notes, the N7 initiative hopes "to further promote sport and physical activity in Native American communities." Such programs, and specific items that would catch the eye of the consumers in question and so heighten their physical activity and, subsequently, lessen their rates of fatal diseases, should be seen as a good thing, irrespective of the feelings of the groups in question who see such outreach as racist.
The shoe was developed by a podiatrist named Rodney Stapp who served the Native community for his whole life (b. 1961, d. 2016). This was the first—and since then, only—time that Nike developed a shoe for a specific racial group. Was it a good idea? Was it racist? Even if it could be construed as racist, wouldn’t it be negated by targeting a group that has some of the highest rates of diabesity in America, therefore leading to a more active population and mitigation of the diseases in question? (See Broussard et al, 1991; Narayan, 1996; Acton et al, 2002). Since exercise seems to be necessary in managing diabetes and its symptoms (Colberg et al, 2010; Kirwan, Sacks, and Nieuwoudt, 2018; Borhade and Singh, 2020), then it seems that, irrespective of whether or not such gestures are racist, that such outreach and initiatives are a net good for the population in question.
Stapp was a big-name figure in the outreach to Native groups in Texas, and was the podiatrist Nike consulted in developing its Nike Native American N7 shoe. Stapp was the one who contacted Nike to make such a shoe, since the patients he served did not like the black and bulky shoes that were specially developed for diabetics—the efficacy of such shoes, though, has been debated in the literature (e.g., Brunner, 2015), while others have noted that diabetics cite the style and appearance of diabetic shoes as the reason for such low compliance in wearing them (Macfarlane and Jensen, 2003). In any case, wouldn't marketing shoes toward specific demographics be a net-good, irrespective of the ultimate goals of the company, if it would promote healthier behaviors in the population in question?
Nike, though, has been criticized for the initiative, with Native rights groups claiming that Nike is using Native plight for profit (Cole, 2008; Sanders, Phillips, and Alexander, 2018). It has been criticized by such groups since the shoe is embroidered with feathers and sunsets, arrows, and other kinds of symbolism prevalent in Native cultures in the Americas. Here, I would not say that such things are racist on their face; it's just a marketing ploy to sell more shoes. While such marketing can be construed as racist in a way, I think that the good such a program and shoe would do in reaching at-risk populations outweighs any racist connotations that the shoes and the outreach program carry.
But most would have a problem with the claim that the shoe was developed specifically for "Native American feet". Stapp claimed that "Indians tend to have a wider foot, but their heels are about average", which would indicate slippage while running in a normal running shoe. Nike's press release on the shoe says that "A strong emphasis was placed on providing a performance product that would cater to the specific needs of Native American foot shapes and help provide motivation to Native Americans predisposed to, or suffering from, health issues that can be improved by leading physically active lifestyles", while also stating that "Research has engaged individuals from over 70 tribes as well as consulting podiatrists and members of Indian Health Services and the National Indian Health Board."
(I am unable to find the research in question; hopefully someone can point me in the right direction so that I can find it.)
There is a history of noting such differences in the appendages of North and South American Natives—North American Natives have longer and more slender feet than South American Natives (e.g., Kate, 1918). Nike stated that the reason it developed the shoe was to accommodate Native Americans' wider feet, along with combating the diabesity epidemic that affects them. In 2015, though, Stapp stated that he believed the introduction of the shoe dropped amputations from 5-6 per year to 0-1 per year. If it is indeed true that the shoes were instrumental in lowering the incidence of foot amputations in Native communities, then it would seem that the cause is that they are moving more and getting more blood to their lower extremities, which would then lead to lowered rates of amputation in these diabetic populations.
The claim that such shoes "racially profile" Natives is ridiculous. Stapp said that Nike asked him if there were differences in the feet of Native groups compared to others, to which he answered "Yes." Apparently, around the time of the marketing for the shoes, Nike was told that Native Americans had problems fitting into Nike's 'normal' running shoes due to the width of their feet (wider than average). Along with Natives supposedly having wider feet, diabetes causes inflammation of tissue that is concentrated in the feet—for instance, diabetic foot ulcers (Pendsey, 2010; Schoen and Norman, 2014; Tuttolomondo, Maida, and Pinto, 2015; Amin and Doupis, 2016)—so it would seem that the call for such shoes to be developed would be a net-good for the population.
Though I can see how the claim that the shoe is targeted at a specific racial group could be construed as racist, the net-good that such a shoe does in getting to certain populations would outweigh the negative connotations that the racism accusation brings on Nike. Indeed, some of the developers of the shoe were Native, worked with Natives, and developed it specifically to target and help Natives manage a debilitating disease that leads to many negative health outcomes—like foot amputation and, eventually, death. So if exercise is conducive to managing diabetes and diabetic foot, and the N7s would then target certain populations with different average foot morphology, then it seems that the shoe has been a net-good for the population since, according to Stapp, seven years after the introduction of the shoe diabetic foot amputations went from 5-7 to 0-1 per year. While he may have had financial incentives to say that, I don't think that undermines the fact that Nike's N7 program has had positive benefits—even if they could be construed in a negative way (i.e., claims of racism).
The answer to the question “Should we market shoes to specific demographics” is “Yes.” It would be a good idea to, for example, make more demographic-specific shoes with specific embroideries in order to attempt to target certain at-risk populations that are more likely to acquire certain diseases on the basis of physical inactivity—like the Nike N7 program and Nike Native American N7 shoe attempt to do. It is for these reasons, then (irrespective of whether or not such morphologic claims of the feet of Natives are true) that the initiative in question is a good thing. The moral “should” question on whether or not we “should” market things—in this example, shoes—to certain demographics seems to rest on whether or not the marketing would have a positive effect on the lifestyles of the groups in question. If it does have positive effects, then we should market such programs toward at-risk populations, irrespective of claims that such marketing is racist toward certain groups.
Read the original article here. The titular person is a blogger covering population genetics and fossils concerning Southeast Asia. The piece represents one of his latest syntheses of modern human origins. I believe it is mostly well done, in particular in regard to alluding to an Asian origin for the LCA of Sapiens, Neanderthals, and Denisovans, which he expanded upon here.
The focus for today, however, concerns issues in his representation of the geographical positioning of Sapiens: he alludes to an Asian origin, though the fossils he uses do not support it as firmly as he suggests.
Until not too long ago, fossil evidence did support this narrative; although fossils from the past 150,000 years were very rare or even absent in Africa, some older human skulls were forced to support it. It is different with East Asia, where we can find fossils with modern morphology that lived between 190-130 Kya (Zhirendong, Liujiang). Even signals of dental modernity have appeared since 296,000 years ago (Panxian Dadong), about 100,000 years preceding the modern teeth of Misliya Cave in the Levant (194-177 Kya). And modern face shapes have appeared since 900,000 years ago (Yunxian, Nanjing, Zhoukoudian), including Dali's face (550-260 Kya).
Regarding the evidence of "modern" tendencies, here is what the record shows:
Among these sites, Fuyan (Daoxian) Cave, Luna Cave, Zhirendong Cave, and Huanglong Cave are currently considered as the best evidence in support of the early presence of H. sapiens in China, based on a clearer chronostratigraphic context and a more diagnostic morphology. There are other sites such as Ganqian (Tubo; Shen et al. 2001), Tongtianyan (Liujiang; Shen et al. 2002; Yuan, Chen, and Gao 1986), Dingcun (Chen, Yuan, and Gao 1984; Pei 1985), and Jimuyan (Wei et al. 2011) that we consider of interest to assess the evolution of modern humans in China. However, because of the more ambiguous morphology of the fossils and/or uncertainties about their antiquity they are considered less unequivocal than those from Fuyan Cave, Luna Cave, Zhirendong Cave, and Huanglong Cave.
As for Liujiang, which the author of the blog post places in the same interval as Zhirendong based on a cave study: Liujiang was not from the same site, nor was its context firmly grounded. See here for references. As for the 130-190k interval for Zhiren Cave, it exceeds the direct date of 106-110k for the fossils themselves; the date he uses here is from a study of the cave itself.
For Panxian, I commented on this in another post (now deleted). The Panxian specimens were mainly archaic, while PH3 was found to be derived, but in no specific fashion.
The facial features he speaks of from 900k in China concern mainly the mid-face; fully modern faces didn't appear until Antecessor.
Dali, since the 2017 study, has been concluded to represent gene flow, due to its lack of conformity to local erectus fossils relative to African and European ones.
One of the easily distinguishable features of modern humans is the shape and morphology of the skull. Compared to its predecessors, the modern human skull is more gracile, the face is flatter vertically, the chin protrudes, and the braincase is more globular. If a skull has most of these features, then it is classified under our species, modern humans. The older skulls that have been suggested as part of our modern human ancestry are the Omo I and Omo II skulls from Omo Kibish, in southern Ethiopia (Leakey et al. 1967). The two Omo skulls are around 195,000 years old (initially estimated at only 130,000 years) and have a mixture of archaic and modern features, something that is not surprising if we view them as African archaic humans who probably met their modern human ancestors from elsewhere before evolving into modern humans. Because of this, they were both named Homo sapiens idaltu. Idaltu in the Afar language means 'older'.
A mixture of archaic features would also be expected if the specimens were early and transitional; somehow this point is lost.
Several skulls from East and South Africa tell about the same thing. Things are thought to have improved when three Herto skulls were found in 1997 around Afar, Ethiopia, aged 154,000-160,000 years, which also have mixed archaic and modern craniofacial features. The Herto skulls were found in the same layer containing Middle Stone Age (MSA) and Later Stone Age (LSA) artifacts. The location, artifacts, and age of the Herto man closely match the Out of Africa model, and convince many scientists that the Herto man could be the closest anatomical ancestor of modern humans (Out of Ethiopia).
Same as explained before; meanwhile, in terms of cranial features, China is lacking as far as skulls are concerned.
Some fossils are classified as part of Homo sapiens, such as Omo I and Herto (although they are substantially different, both still have primitive morphology, and some scientists consider Herto to be a subspecies of Homo sapiens). The status of the Jebel Irhoud human is being debated: some paleoanthropologists openly accept him as a close relative of Homo sapiens, while others do not, considering Jebel Irhoud to be part of archaic Africa, and perhaps even part of an evolutionary line different from that of Homo sapiens. Florisbad man, previously classified as Homo helmei, is not sufficient to represent the evolutionary line of Homo sapiens due to its primitive character and the absence of a braincase.
He doesn't provide citations or quotes to demonstrate who thinks this way, or explain how China solves this with its specimens on comparable levels. Outside of teeth that generally don't go beyond the 120k interval, China is lacking. While he mentions in the article that the Broken Hill skull fails as an ancestor, the mixed skulls he mentions are, however, morphologically expected, as the study notes.
Meanwhile, the 130k Singa braincase shows modern morphology.
Overall, his latest article does a better job but still presents issues. I'll summarize them here, from my now-deleted comment.
"Let's look at Africa. One of the oldest candidates for Homo ergaster, KNM-ER 3733, turns out to be 1.63 million years old, and all specimens from the Turkana Basin have their estimates between 1.6-1.43 million years. Nariokotome Boy or Turkana Boy (KNM-WT 15000; 1.5 million years), which has always been predicted as a representative of Homo erectus, turns out to be in the Homo ergaster evolutionary line, because it does not have a canine fossa."
Yet we know that Erectus is currently oldest in South Africa; I have even seen you post this research: https://www.smithsonianmag.com/science-nature/homo-erectus-australopithecus-saranthropus-south-africa-180974571/
"The Konso skull from Southern Ethiopia is about 1.4 million years old. Buia and Daka (about 1 million years old) best fit the transition between Homo ergaster and Homo rhodesiensis, as well as Gombore II (~780 Kya). Daka, which is hypothesized as Homo erectus, has more morphological similarities to KNM-ER 3733, so sharing one morphology with Homo antecessor or East Asian Homo erectus does not necessarily make it part of Homo erectus. The youngest Homo ergaster, OH 12, is 780 Kya, and has a skull capacity and facial shape similar to that of KNM-ER 3733. With an age difference of about 850,000 years, the morphological continuity is still very clear. Imagine if they were still classified as Homo erectus? Yet it is clear that their evolutionary path does not lead to the Homo sapiens line of evolution. In addition, some of the morphologies of OH 12 are also similar to KNM-ER 3883, D2282, and D2700."
Some references for the affinities?
"The Homo habilis specimens from Koobi Fora range from 1.75 to 1.65 million years old. If KNM-ER 1802 is classified as Homo habilis (we must first verify it based on the presence of a canine fossa, or it could be Homo rudolfensis, represented by KNM-ER 62000), then the origin of its appearance is about 2 million years ago. So far, specimens representing Homo habilis with the shallow canine fossa include OH 24, OH 62, OH 65, KNM-ER 1813, and KNM-ER 42703, with a time span of 1.86-1.44 million years (but this still needs to be investigated further due to limited references). Of course this is younger when compared to the Longgudong human teeth, more than 2.14 million years old. In fact, in several locations in the Ciscaucasus there are many traces of artifacts that are more than 2 million years old."
The problem here is that Koobi Fora isn't the oldest Habilis; the oldest is 2.3 million years old, from Afar. In fact, the oldest Homo (that is, broken away from Australopithecus morphologically) is 2.5 mya. Also, as far as artifacts go, unless you have evidence proving otherwise, both the Oldowan and Acheulean are oldest in Africa, at 2.6 and 1.7 mya respectively.
"Meanwhile, KNM-ER 2598, which was assumed to be a candidate for the Out of Africa I population (with an estimated initial age of 1.88-1.9 million years ago), was found on the surface and may have originated from a younger stratigraphic deposit. KNM-ER 1813 is also estimated to be 1.86 million years old, or to be at the Olduvai Subchron boundary (1.95-1.78 million years ago)."
See the above mention of the South African find. Also, what proof supports this suggestion?
"Alternatively, KNM-ER 1813 and other hominins in Area 123 could be younger than 1.65 million years ago."
Again, evidence?
"After 1.65 million years ago, Turkana Basin humans were dominated by Homo ergaster, who was contemporary with the Sangiran early humans (Sangiran 4 and S27), but younger than the early Bumiayu humans. So, the best candidates for the ancestors of the ancient Javanese could be among the Dmanisi early humans, or the ancient humans of Longgudong (>2.14 million years) and Yuanmou (1.7-1.72 million years)."
The most recent evidence rules out Longgudong, seeing how it is best defined as Habilis and distinct from Erectus in regard to the teeth. Likewise, studies in 2001 and 2002 place the latter specimen below 1 mya, so there is no consensus. By the way, Naledi does have a canine fossa.
Otherwise, I agree with the rest of the article.