‘Health inequalities are the systematic, avoidable and unfair differences in health outcomes that can be observed between populations, between social groups within the same population or as a gradient across a population ranked by social position.’ (McCartney et al, 2019)
Health inequities, however, are differences in health that are judged to be avoidable, unfair, and unjust. (Sudana and Blas, 2013)
Asking “Is X racist?” is the wrong question to ask. If X is factual, then merely stating it cannot be racist—facts themselves cannot be racist. But one can perform a racist action, consciously or subconsciously, on the basis of a fact. One can hold a belief, and the belief can be racist (X group is better than Y group at Z); systemic racism would then be the result (the outcome) of acting on that belief. (Some examples of systemic racism can be found in Gee and Ford, 2011.) Someone who holds the belief that, say, whites are more “intelligent” than blacks, or Jews more “intelligent” than whites, could be said to be racist—they hold a racist belief and are drawing an invalid inference from a fact (blacks score, on average, 15 points lower than whites on IQ tests, therefore blacks are less intelligent). Truth cannot be racist, but truth can be used to attempt to justify certain policies.
I have argued that we should ban IQ tests on the basis that, if we believe the hereditarian hypothesis is true and it is false, then we can enact policies on the basis of false information. If we enact policies on the basis of false information, then certain groups may be harmed. If certain groups may be harmed, then we should ban whatever led to the policy in question. If the policy in question is derived from IQ tests, then IQ tests must be banned. This is one example of how a fact (like the IQ gap between blacks and whites) can be used for a racist action (shuttling those who perform under a certain expectation into remedial classes because they score lower than some average value). Believing that X group has a higher quality of life, educational achievement, and better life outcomes on the basis of IQ scores—or their genes—is a racist belief, and this racist belief can then be used to perform a racist action.
I have also discussed different definitions of “racism.” Each definition discussed can be construed as having a possible action attached to it. Racism is an action—something we perform on the basis of certain beliefs, motivated by what could be possible in the future. Beliefs can be racist; we can say that racism is an ideology that one acts on, with real causes/consequences for people. Truth can’t be racist, but people can use the truth to perform and justify certain actions. Racism, though, can also be said to be a “cultural and structural system” that assigns value based on race; further, the actions and intent of individuals are not necessary for structural mechanisms of racism (e.g., Bonilla-Silva, 1997).
We can, furthermore, use facts about differences between races in health outcomes and say that certain rationalizations of certain outcomes can be construed as racist. “It’s in the genes!” and similar statements could be construed as racist, since they imply that certain inequalities are “immutable” on the basis of a strong genetic determination of disease.
Racism is indeed a public health issue. Physicians, for instance, can hold racial biases—just like the average person. Further, differences in healthcare between majority and minority populations can be said to be systemic in nature (Reschovsky and O’Malley, 2008). This needs to be talked about, since racism can be—and is—a determinant of health, as many places in the country are beginning to recognize. Racism is rightly noted as a public health crisis because it leads to disparate outcomes between whites and blacks based on certain assumptions about the ancestral background of both groups.
Quach et al (2012) showed that not receiving referrals to a specialist is discriminatory—Asians, too, were exposed to medical discrimination, along with blacks. Such discrimination can also lead to accelerated cellular aging in black men and women (on the basis of measured telomere lengths, where shorter telomeres indicate a higher biological than chronological age; Shammas et al, 2012) (Geronimus et al, 2006; 2011; Schrock et al, 2017; Forrester et al, 2019). We understand the reasons why such discrimination on the basis of race happens, and we understand the mechanism by which it leads to adverse health outcomes between races: chronic elevation in allostatic load leads to higher than normal levels of certain stress hormones which, eventually, produce differences in health outcomes.
The idea that genes or behavior lead to differences in health outcomes is racist (Bassett and Graves, 2018). This can then lead to racist actions—claiming that a group’s genetic constitution impedes them from being “near-par” with whites, or that their behavior (sans context) is the cause of the health disparities. Valles (2018: 186) writes:
…racism is a cause with devastating health effects, but it manifests via many intermediary mechanisms ranging from physician implicit biases leading to over-treatment, under-treatment and other clinical errors (Chapman et al. 2013; Paradies et al. 2015) to exposing minority communities to waterborne contaminants because of racist political disenfranchisement and neglect of community infrastructure (e.g., the infamous Flint Water Crisis afflicting my Michigan neighbors) (Krieger 2016; Sherwin 2017; Michigan Civil Rights Commission 2017).
There is a distinction between “equity” and “equality.” For instance, to continue with the public health example, take public health equality and public health equity. In this instance, “equality” means giving everyone the same thing, whereas “equity” means giving individuals what they need to be the healthiest they can possibly be. “Strong equality of health” is “where every person or group has equal health”, while weak health equity “states that every person or group should have equal health except when: (a) health equality is only possible by making someone less healthy, or (b) there are technological limitations on further health improvement” (Norheim and Asada, 2009). But we should not attempt to “level down” people’s health to achieve equity; we should attempt to “level up” people’s health. That is, it is impossible to reach strong health equality (making all groups equal), but we should—and indeed have a moral responsibility to—attempt to lift up those who are worse-off. Poverty is what is objectionable, not inequality. It is impossible to achieve true equality between groups, but we can—and indeed have a moral obligation to—lift up those who are in poverty, which is also a social determinant of health (Braveman and Gottlieb, 2014; Frankfurt, 2015; Islam, 2019).
We achieve health equity when all individuals have the same access to be the healthiest individuals they can be; we achieve health equality when all health outcomes are the same for all groups. Health equity is, further, the absence of avoidable differences between different groups (Evans, 2020). One of these is feasible, the other is not. But racism does not allow us to achieve health equity.
The moral foundation for public health thus rests on general obligations in beneficence to promote good health. (Powers and Faden, 2006: 24)
Social justice is not only a matter of how individuals fare, but also about how groups fare relative to one another whenever systemic racism is linked to group membership. (Powers and Faden, 2006: 103)
…inequalities in well-being associated with severe poverty are inequalities of the highest moral urgency. (Powers and Faden, 2006: 114)
Public health is directly a matter of social justice. If public health is directly a matter of social justice, and if health outcomes due to discrimination are caused by social injustice, then we need to address the causes of such inequalities—for example, conscious or unconscious prejudice against certain groups.
Certain inequalities between groups are, therefore, due to systemic racism, which is an action that can be conscious or unconscious. But which inequalities matter most? In my view, the inequalities that matter most are those that impede an individual or a group from having a certain quality of life. Racism can and does lead to health inequalities, and by addressing the causes of such actions we can then begin to ameliorate the causes of structural racism. This is more evidence that the social can indeed manifest in biology.
Holding certain beliefs can lead to certain actions that can be construed as racist and that negatively impact health outcomes for certain groups. By committing ourselves to a framework of social justice and health, we can then attempt to ameliorate the inequities between social classes/races, etc. that have plagued us for decades. We should strive for equity in health, which is a goal of social justice. We should not believe that such differences are “innate” and that there is nothing we can do about group differences (some of which are no doubt caused by systemically racist policies). Health equity is something we should strive for, and we have a moral obligation to do so; health equality is not obligatory, and it is not even a feasible idea.
If we can avoid certain health outcomes for certain groups that arise on the basis of beliefs that we hold, then we should do so.
The use of polygenic scores has caused much excitement in the field of socio-genomics. A polygenic score is derived from statistical gene associations found using what is known as a genome-wide association study (GWAS). Using genes that are associated with many traits, researchers propose, they will be able to unlock the genomic causes of diseases and socially valued traits. The methods of GWA studies also assume that the ‘information’ that is ‘encoded’ in the DNA sequence is “causal in terms of cellular phenotype” (Baverstock, 2019).
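To make the object under discussion concrete, here is a minimal sketch of how a polygenic score is typically computed: a weighted sum of a person’s allele counts, with the weights (“effect sizes”) taken from GWAS summary statistics. All numbers below are made-up illustrative values, not real effect sizes.

```python
# Minimal sketch: a polygenic score is a weighted sum of allele counts.
# Effect sizes (betas) come from GWAS summary statistics; genotypes are
# coded 0/1/2 (copies of the effect allele). All values here are made up.

def polygenic_score(genotypes, betas):
    """Sum of beta_i * allele_count_i over the scored SNPs."""
    assert len(genotypes) == len(betas)
    return sum(b * g for g, b in zip(genotypes, betas))

# Hypothetical effect sizes for three SNPs, and one person's genotypes.
betas = [0.12, -0.05, 0.30]
genotypes = [2, 1, 0]  # copies of each effect allele

score = polygenic_score(genotypes, betas)
print(score)  # 0.12*2 - 0.05*1 + 0.30*0 ≈ 0.19
```

Note that nothing in this arithmetic is causal: the betas are regression coefficients from an association study, which is precisely the point at issue below.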
For instance, it is claimed by Robert Plomin that “predictions from polygenic scores have unique causal status. Usually correlations do not imply causation, but correlations involving polygenic scores imply causation in the sense that these correlations are not subject to reverse causation because nothing changes the inherited DNA sequence variation.”
Take the stronger claim from Plomin and Stumm (2018):
GPS are unique predictors in the behavioural sciences. They are an exception to the rule that correlations do not imply causation in the sense that there can be no backward causation when GPS are correlated with traits. That is, nothing in our brains, behaviour or environment changes inherited differences in DNA sequence. A related advantage of GPS as predictors is that they are exceptionally stable throughout the life span because they index inherited differences in DNA sequence. Although mutations can accrue in the cells used to obtain DNA, like any cells in the body these mutations would not be expected to change systematically the thousands of inherited SNPs that contribute to a GPS.
This is a strange claim for two reasons.
(1) They do not, in fact, imply causation, since the scores are derived from GWA studies, which are associational and therefore cannot show causes—GWA studies are, in effect, giant correlational studies that scan the genomes of hundreds of thousands of people and look for gene variants that are more likely to appear in the sample population for the disease/“trait” in question. These studies are also heavily skewed toward European populations and, even if they were valid for European populations (which they are not), they would not be valid for non-European ethnic groups (Martin et al, 2017; Curtis, 2018; Haworth et al, 2018).
(2) The claim that “nothing changes inherited DNA sequence variation” is patently false; what one experiences throughout their lives can most definitely change their inherited DNA sequence variation (Baedke, 2018; Meloni, 2019).
But, as pointed out by Turkheimer, Plomin and Stumm are assuming that no top-down causation exists (see, e.g., Ellis, Noble, and O’Connor, 2011). We know that both top-down (downward) and bottom-up (upward) causation exist (e.g., Noble, 2012; see Noble, 2017 for a review). Plomin, it seems, is coming from a very hardline view of genes and how they work—a view that, it seems to me, derives from the gene-centered Darwinian picture of how genes ‘work.’
Such work is also carried out under the assumption that ‘nature’ and ‘nurture’ are independent and can therefore be separated. Indeed, the title of Plomin’s 2018 book Blueprint implies that DNA is a blueprint. In the book he claims that DNA is a “fortune-teller” and that things like PGSs are “fortune-telling devices” (Plomin, 2018: 6). PGS studies also rest on the assumption that the heritability estimates derived from twin/family/adoption studies tell us something about how “genetic” a trait is. But since the equal environments assumption (EEA) is false (Joseph, 2014; Joseph et al, 2015), we should outright reject any and all genetic interpretations of these kinds of studies. PGS studies are premised on the assumption that the aforementioned twin/adoption/family studies show the “genetic variation” in traits. But if the main assumptions are false, then their conclusions crumble.
Indeed, lifestyle factors are better indicators of one’s disease risk than polygenic scores: “This means that a person with a “high” gene score risk but a healthy lifestyle is at lower risk than a person with a “low” gene score risk and an unhealthy lifestyle” (Joyner, 2019). Janssens (2019) argues that PRSs (polygenic risk scores) “do not ‘exist’ in the same way that blood pressure does … [nor do they] ‘exist’ in the same way clinical risk models do …” Janssens and Joyner (2019) also note that “Most [SNP] hits have no demonstrated mechanistic linkage to the biological property of interest.” By showing mechanistic relations between the proposed gene(s) and the disease phenotype, researchers would then be on their way to showing “causation” for PGS/PRS.
Nevertheless, Sexton et al (2018) note that “While research has shown that height is a polygenic trait heavily influenced by common SNPs [7–12], a polygenic score that quantifies common SNP effect is generally insufficient for successful individual phenotype prediction.” Smith-Woolley et al (2018) write that “… a genome-wide polygenic score … predicts up to 5% of the variance in each university success variable.” But think about the words “predicts up to”—this is a meaningless phrase. Such language is, of course, causal, when neither they nor anyone else has shown that such scores are indeed causal (mechanistically).
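For a sense of scale, “predicts up to 5% of the variance” corresponds to a correlation of roughly 0.22 between polygenic score and outcome—a quick back-of-the-envelope conversion (r = √R²):

```python
# "Predicts up to 5% of the variance" means R^2 = 0.05, which corresponds
# to a correlation of r = sqrt(0.05) ≈ 0.22 between score and outcome.
r_squared = 0.05
r = r_squared ** 0.5
print(round(r, 2))  # 0.22
```

A correlation of that size leaves the overwhelming majority of the variance in the outcome unaccounted for.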
What these studies are indexing are not causal genetic variants for disease and other “traits”; they are picking up the population structure of the sampled population (Richardson, 2017; Richardson and Jones, 2019). Furthermore, the demographic history of the sample in question can also mediate the stratification in the population (Zaidi and Mathieson, 2020). Therefore, claims that PGSs are causal are unfounded—indeed, GWA studies cannot show causation. GWA studies survive on the correlational model—but, as many authors have shown, the studies show spurious correlations, not the “genetics” of any studied “trait”, and they therefore do not show causation.
One further nail in the coffin for hereditarian claims about PGS/PRS and GWA studies is the fact that the larger the dataset (the larger the number of datapoints), the more spurious correlations will be found (Calude and Longo, 2017). When it comes to hereditarian claims, this is relevant to twin studies (e.g., Polderman et al, 2015) and GWA studies of “intelligence” (e.g., Sniekers et al, 2017). It is entirely possible, as Richardson and Jones (2019) argue, that the results from GWA studies “for intelligence” are entirely spurious, since the correlations may appear due to the size of the dataset, not its nature (Calude and Longo, 2017). Zhou and Zhao (2019) argue that “For complex polygenic traits, spurious correlation makes the separation of causal and null SNPs difficult, leading to a doomed failure of PRS.” This is troubling for hereditarian claims about “genes for” “intelligence” and other socially valued traits.
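Calude and Longo’s point can be illustrated in miniature with a toy simulation (all data here are random and the threshold is arbitrary; this is a sketch of the statistical phenomenon, not of any real GWAS pipeline): generate a purely random “phenotype” and thousands of purely random “SNPs”, and chance alone produces a crop of apparent associations.

```python
# Toy sketch: with enough purely random "SNPs", some will correlate with a
# random "trait" by chance alone. No real genetic data involved.
import random

random.seed(42)

def corr(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

n_people, n_snps = 100, 5000
trait = [random.random() for _ in range(n_people)]        # random "phenotype"
snps = [[random.randint(0, 2) for _ in range(n_people)]   # random genotypes
        for _ in range(n_snps)]

# Count "hits": random SNPs whose correlation with the random trait
# exceeds an arbitrary threshold.
hits = sum(1 for s in snps if abs(corr(s, trait)) > 0.25)
print(hits)  # typically dozens of "associations" despite zero real signal
```

The number of such chance “hits” grows with the number of variables tested, which is the sense in which very large datasets guarantee spurious correlations.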
How can hereditarians show PGS/PRS causation?
This is a hard question to answer, but I think I have an answer. The hereditarian must:
(1) provide a valid deductive argument in which the conclusion is the phenomenon to be explained; (2) provide an explanans (the sentences adduced as the explanation of the phenomenon) that contains at least one lawlike generalization; and (3) show that the remaining premises, which state the antecedent conditions, have empirical content and are true.
An explanandum is a description of the event that needs explaining (in this case, the PGS/PRS correlations), while an explanans does the explaining—the explanans is the set of sentences adduced to explain the explanandum. Garson (2018: 30) gives the example of zebra stripes and flies. The explanans is Stripes deter flies, while the explanandum is Zebras have stripes. So we can then say that zebras have stripes because stripes deter flies.
Causation for PGS would not be shown, for example, by showing that certain races/ethnies have higher PGSs for “intelligence”. The claim is that since Jews have higher PGSs for “intelligence”, it follows that PGSs can show causation (e.g., Dunkel et al, 2019; see Freese et al, 2019 for a response). But this just shows how ideology can and does color the conclusions one gleans from certain data. That is NOT sufficient to show causation for PGS.
PGSs cannot, currently, show causation. The studies that such scores are derived from fall prey to the fact that spurious correlations are inevitable in large datasets, which is also a problem for other hereditarian claims (about twins and GWA studies of “intelligence”). Thus, PGSs do not show causation, and since large datasets lead to spurious correlations, even increasing the number of subjects in a study would not elucidate “genetic causation.”
Ranking human worth on the basis of how well one compares in academic contests, with the effect that high ranks are associated with privilege, status, and power, does suggest that psychometry is best explored as a form of vertical classification and attending rankings of social value. (Garrison, 2009: 36)
Binet and Simon’s (1916) book The Development of Intelligence in Children is somewhat of a Bible for IQ-ists. The book chronicles the methods Binet and Simon used to construct their tests for children, to identify those children who needed more help at school. In the book, they describe the anatomic measures they used. Indeed, before becoming a self-taught psychologist, Binet measured skulls and concluded that skull measurements did not correlate with teachers’ assessments of their students’ “intelligence” (Gould, 1995, chapter 5).
In any case, despite Binet’s protestations that Gould discusses, he wanted to use his tests to create what Binet and Simon (1916: 262) called an “ideal city.”
It now remains to explain the use of our measuring scale which we consider a standard of the child’s intelligence. Of what use is a measure of intelligence? Without doubt one could conceive many possible applications of the process, in dreaming of a future where the social sphere would be better organized than ours; where every one would work according to his own aptitudes in such a way that no particle force should be lost for society. That would be the ideal city. It is indeed far from us. But we have to remain among the sterner and matter-of-fact realities of life, since we here deal with practical experiments which are the most commonplace realities.
Binet disregarded his skull measurements as a correlate of ‘intelligence’ since they did not agree with teachers’ ratings. Binet and Simon (1916: 309) then discuss how teachers assessed students (and give an example). This is how Binet made sure that the new psychological ‘measure’ he devised related to how teachers assessed their students. Binet and Simon’s “theory” grouped certain children as “superior” and others as “inferior” in ‘intelligence’ (whatever that is), but did not pinpoint biology as the cause of the differences between the children. These groupings, though, corresponded to the social class of the children.
Thus, in effect, what Binet and Simon wanted to do was organize society along social class lines while using their ‘intelligence tests’ to place individuals where they “belonged” in the hierarchy on the basis of their “intelligence”—whether or not this “intelligence” was “innate” or “learned.” Indeed, Binet and Simon did originally develop their scales to distinguish children who needed more help in school than others. They assumed that individuals had certain (intellectual) properties which then related to their class position, and that by using their scales they could identify certain children and place them into certain classes for remedial help. But a closer reading of Binet and Simon shows two hereditarians who wanted to use their tests for reasons similar to those for which the tests were originally brought to America!
Binet and Simon’s test was created to “separate natural intelligence and instruction”, since they attempted to ‘measure’ “natural intelligence” (Mensh and Mensh, 1991). Mensh and Mensh (1991: 23) continue:
Although Binet’s original aim was to construct an instrument for classifying unsuccessful school performers inferior in intelligence, it was impossible for him to create one that would do only that, i.e., function at only one extreme. Because his test was a projection of the relationship between concepts of inferiority and superiority—each of which requires the other—it was intrinsically a device for universal ranking according to alleged mental worth.
This “ideal city” that Binet and Simon imagined would have individuals work according to their “known aptitudes”—meaning that individuals would work where their social class dictated. This was, in fact, eerily similar to the uses of the test that Goddard translated and of the test—the Stanford-Binet—that Terman developed in 1916.
Binet and Simon (1916: 92) also discuss further uses for their tests, irrespective of job placement for individuals:
When the work, which is here only begun, shall have taken its definite character, it will doubtless permit the solution of many pending questions, since we are aiming at nothing less than the measure of intelligence; one will thus know how to compare the different intellectual levels not only according to age, but according to sex, social condition, and to race; applications of our method will be found useful to normal anthropology, and also to criminal anthropology, which touches closely upon the study of the subnormal, and will receive the principle conclusion of our study.
Binet, therefore, had views similar to Goddard’s and Terman’s regarding “tests of intelligence”, and Binet wanted to stratify society by ‘intelligence’ using his own tests (which were culturally biased against certain classes). Binet’s writings on the uses of his tests, ironically, mirrored what the creators of the Army Alpha and Beta tests believed. Binet believed that his tests could select individuals who were right for the roles they would be designated to work in. Binet, nevertheless, contradicted himself numerous times (Spring, 1972; Mensh and Mensh, 1991).
This dream of an “ideal city” was taken a step further when Binet’s test was brought to America and translated by Goddard, and then used for selecting military recruits (call it an “ideal country”). The test constructors would build the test in order to “ensure” the right percentages of “the right” people in the spots designated to them on the basis of their intelligence.
What Binet was attempting to do was mark individual social value with his test. He claimed that we can use his (practical) test to select people for certain social roles. Thus, Binet’s dream for what his tests would do—further developed by Goddard, Yerkes, Terman, et al.—is inherent in what the IQ-ists of today want to do. They believe that there are “IQ cutoffs”, meaning that people with an IQ above or below a certain threshold won’t be able to do job X. However, the causal efficacy of IQ is precisely what is in question, along with the fact that IQ-ists build certain biases into tests that they believe are ‘objective.’ But where Binet differed from the IQ-ists of today and from his contemporaries was in believing that ‘intelligence’ is relative to one’s social situation (Binet and Simon, 1916: 266-267).
It is ironic that Gould believed we could use Binet’s test (along with contemporary tests constructed and ‘validated’—that is, correlated—with Terman’s Stanford-Binet) for ‘good’; this is what Binet thought would be done. But once the hereditarians had Binet’s test, they took Binet’s arguments to their logical conclusion. This also has to do with the fact that the test was constructed AND THEN they attempted to ‘see’ what was ‘measured’ with correlational studies. The ‘meaning’ of test scores, thus, is seen after the fact with—wait for it—correlations with other tests that were ‘validated’ against other (unvalidated) tests.
This comes back to the claim that the mental can be ‘measured’ at all. If physicalism is false—and there are dozens of (a priori) arguments that establish this—and the mental is therefore irreducible to the physical, then psychological traits—and with them the mind—cannot be measured. Further, rankings are not measures (Nash, 1990: 63); therefore, ability and achievement tests cannot be ‘measures’ of any property of individuals or groups. The object of measurement is the human being, and this was inherent in Binet’s original conception of his test, just as it was in the IQ-ism that American testers put to use in their restrictions on immigration in the early 1900s.
This speaks to the fatalism that is inherent in IQ-ism—and has been inherent since the creation of the first standardized tests (of which IQ tests are one kind). These tests are—and have been since their inception—attempts to measure human worth and the differences in value between persons. The IQ-ist claims that “IQ tests must measure something”, and this ‘measurement’, it is claimed, is evidenced by the fact that the tests have ‘predictive validity.’ But such claims that a ‘property’ inherent in individuals and groups is being measured fail. The real ‘function’ of standardized testing is assessment, not measurement.
The “ideal city”, it seems, is just a city of IQ-ism—where one’s social roles are delegated according to where one scores on a test constructed to get the results the constructors want. Therefore, what Binet wanted his tests to do was (and some may even argue still is) to mark social worth (Garrison, 2004, 2009). Psychometry is therefore a political enterprise. It is inherently political, not “value-free.” Psychologists/psychometricians do not have an ‘objective science’, as the object of study (the human) can reflexively change their behavior when they know they are being studied. Their field is inherently political, and they mark individuals and groups—whether they admit it or not. “Ideal cities” can lead to eugenic thinking, in any case, and striving for “ideality” can lead to social harms—even if the intentions are ‘good.’
Discussions about whiteness and privilege have become more and more common. Whites, it is argued, have a form of unearned societal privilege which then explains certain gaps between whites and non-whites. White privilege is the privilege that whites have in society—this type of privilege does not have to be in America; it can hold for groups that are viewed as ‘white’ in other countries. This view, then, perpetuates social views of race: those who hold it are realists about race in a social/political context and do not have to recognize race as biological (although race can become biologicized through social/cultural practices). This article will discuss (1) what white privilege is; (2) who has white privilege; (3) arguments against white privilege; and (4) if race doesn’t exist, why white privilege matters.
What is white privilege?
The concept of white privilege, like most concepts, evolves with the times and with current social thought. The concept was originally created to account for whites’ (unearned) privileges and the conscious bias that went into creating and then maintaining those privileges; it has since expanded to cover the unconscious favoritism/psychological advantages that whites give other whites (Bennett, 2012: 75). That is, white privilege is “an invisible package of unearned assets that I can count on cashing in each day, but about which I was “meant” to remain oblivious. White privilege is like an invisible weightless knapsack of special provisions, maps, passports, codebooks, visas, clothes, tools, and blank checks” (McIntosh, 1988).
More simply, we can say that white privilege is the privilege conferred, either consciously or subconsciously, on one based on one’s skin color—or, as Sullivan (2016, 2019) argues, what we should be talking about is one’s class status ALONG WITH one’s whiteness: white privilege with CLASS in between ‘white’ and ‘privilege’. In this sense, one’s class status AND one’s whiteness is explanatory, not the concept of whiteness (i.e., one’s socialrace) alone. The concept of whiteness—one’s skin color—as the privilege leaves out numerous intricacies in how whiteness confers advantage and upholds systemic discrimination. When we add the concept of ‘class’ to ‘white privilege’ we get what Sullivan terms ‘white class privilege’.
While, yes, one’s race is an important variable in whether or not one has certain privileges, such privileges are largely held by middle- to upper-middle-class whites. Thus, numerous examples of ‘white privilege’ are better understood as examples of ‘white class privilege’, since lower-class whites don’t have the same kinds of privileges, outlooks, and social status as middle- and upper-middle-class whites. Of course, lower-class whites can still benefit from their whiteness—they definitely can. But the force of Sullivan’s concept of ‘white class privilege’ is this: white privilege is not monolithic across whites, and some non-whites are better off (economically and in regard to health) than some whites. Thus, according to Sullivan, ‘white privilege’ should be amended to ‘white class privilege’.
Who has white privilege?
Lower-class whites can, in a way, be treated differently than middle- and upper-class whites—even though they are of the same race. Lower-class whites are seen to have ‘white privilege’ in everyday thought, since most people think of the privilege as coming down to skin color alone; yet there is an under-discussed class dimension at play here, one which can even give higher-class blacks an advantage over lower-class whites while upholding the privilege of upper-class whites.
Non-whites who are of a higher social class than whites would also receive different treatment. Sullivan states that the revised concept of ‘white class privilege’ must be used intersectionally—that is, privilege must be considered as interacting with class, gender, nationality, and other social experiences. Sure, lower-class whites may be treated differently than higher-class blacks in certain contexts, but this does not mean that the lower-class white has ‘more privilege’ than the upper-class black. This shows that we should not assume that lower-class whites have the same kinds of privilege conferred by society as middle- and upper-class whites. Upper-class blacks and ‘Hispanics’ may attempt to distinguish themselves from lower-class blacks and ‘Hispanics’, as Sullivan (2019: 18-19) explains:
Class privilege shows up as a feature of most if not all racial groups in which members with “more”—more money, education, or whatever else is valued in society—are treated better than those with “less.” For that reason, we might think that white class privilege actually is an intragroup pattern of advantage and disadvantage among whites, rather than an intergroup pattern that gives white people a leg up over non-white people. After all, many Black middle-class and upper-middle-class Americans also go to great lengths to make sure that they are not mistaken for the Black poor in public spaces: when they are shopping, working, walking, or driving in town, and so on (Lacy, 2007). A similar pattern can be found with middle-to-upper-class Hispanic/Latinx people in the United States, who can “protect” themselves from being seen as illegal immigrants by ensuring that they are not identified as poor (Masuoka and Junn, 2013).
Sullivan then goes on to state that these situations are not equivalent, since wealth, fame, and education do not protect upper-class blacks from racial discrimination. The particular privileges that upper-class whites have, thus, do not transfer to upper-class blacks. Further, middle- to upper-class whites distinguish themselves as ‘good whites’ who are not racist, while dumping all of the accusations of racism onto lower-class whites. “…the line between “good” and “bad” white people drawn by many (good) white people is heavily classed. Good white people tend to be middle-to-upper-class, and they often dump responsibility for racism onto lower-class white people” (Sullivan, 2019: 35). Even though lower-class whites get used as a ‘shield’, so to speak, by upper-class whites, they still have some semblance of white privilege, in that they are not assumed to be non-citizens of the US—something that ‘Hispanics’ do have to deal with, no matter their race.
While wealthy white people generally have more affordances than poor white people do, in a society that prizes whiteness all white people have some racial affordances, at least some of the time.
Paradoxically, whites are not the only ones who benefit from ‘white privilege’—even non-whites can benefit, though it ultimately helps upper-class whites. They can benefit by being brought up in a white home, around whites (for example, by being adopted, or by having one white parent and spending most of their childhood with their white family). Thus, white privilege can cross racial lines while still ultimately benefitting whites.
Sullivan (2019: chapter 2) discusses some blacks who benefit from white privilege. One of the people she discusses has a white parent. This is what gives her her lighter skin, but that is not where her privilege comes from (think colorism in the black community where lighter skin is more prized than darker skin). Her privilege came from “her implicit knowledge of white norms, sensibilities, and ways of doing things that came from living with and being accepted by white family members” (Sullivan, 2019: 26). This is what Sullivan calls “family familiarity” and is one of the ways that blacks can benefit from white privilege. Another way in which blacks can benefit from white privilege is due to “ancestral ties to whiteness.”
Colorism is discrimination by skin color within a racial community. Certain blacks may talk about “light-” and “dark-skinned” blacks and may—ironically or not—discriminate on the basis of skin color. Such colorism is even somewhat instilled within the black community—darker-skinned black sons and lighter-skinned black daughters report higher-quality parenting. Landor et al (2014) report that their “findings provide evidence that parents may have internalized this gendered colorism and as a result, either consciously or unconsciously, display higher quality of parenting to their lighter skin daughters and darker skin sons.” Thus, even certain blacks—in virtue of being ‘part white’—can benefit from white (skin) privilege within their own (black) community, which would therefore give them certain advantages.
Arguments against white privilege
Two recent articles arguing against white privilege (Why White Privilege Is Wrong — Quillette and The Fallacy of White Privilege — and How It Is Corroding Society) erroneously claim that since other minority groups rose quickly upon arrival in America, white privilege must be a myth. These takes, though, are quite confused. From the facts that other groups have risen upon entry into America, and that whites have worse outcomes on some (but not other) health measures, it does not follow that the concept of white privilege is ‘fallacious’; we just need something more fine-grained.
For example, the claim that X minority group is over-represented compared to whites in America gets used as evidence that ‘white privilege’ does not exist (e.g., Avora’s article). Avora discusses the experiences of and data on many black immigrants, proclaiming:
These facts challenge the prevailing progressive notion that America’s institutions are built to universally favor whites and “oppress” minorities or blacks. On the whole, whatever “systemic racism” exists appears to be incredibly ineffectual, or even nonexistent, given the multitude of groups who consistently eclipse whites.
How does that follow? How does the discussion of, for example, Japanese Americans now outperforming whites show that white privilege is a ‘fallacy’? I ask because Asian immigrants to America are hyper-selected (Noam, 2014; Zhou and Lee, 2017): what explains higher Asian academic achievement is academic effort (Hsin and Xie, 2014) and the fact that Asians are hyper-selected—meaning that they are more likely to already hold a higher degree.
The educational credentials of these recent [Asian] arrivals are striking. More than six-in-ten (61%) adults ages 25 to 64 who have come from Asia in recent years have at least a bachelor’s degree. This is double the share among recent non-Asian arrivals, and almost surely makes the recent Asian arrivals the most highly educated cohort of immigrants in U.S. history.
Compared with the educational attainment of the population in their country of origin, recent Asian immigrants also stand out as a select group. For example, about 27% of adults ages 25 to 64 in South Korea and 25% in Japan have a bachelor’s degree or more. In contrast, nearly 70% of comparably aged recent immigrants from these two countries have at least a bachelor’s degree. (The Rise of Asian Americans)
Avora even discusses some African immigrants, namely Nigerians and Ghanaians. However, just like Asian immigrants to America, Nigerian and Ghanaian immigrants to America are more likely to hold advanced degrees, signifying that they too are hyper-selected in comparison to the populations they derive from (Duvivier, Burch, and Boulet, 2017). So, regarding the stats Avora cites on the children of Nigerian immigrants: their parents already held higher degrees, signifying that they are indeed a hyper-selected group. This means that such ethnic groups cannot be used to show that white privilege does not exist.
While Avora does discuss “class” in his article, what he actually shows is that it is not only ‘white privilege’ at work, but the class element that comes along with whiteness in America. He therefore unknowingly shows that once you add the ‘class’ factor and arrive at the concept of ‘white class privilege’, this privilege can cross racial lines and benefit non-whites.
In the Quillette article, Harinam and Henderson argue that since non-whites have more of some things we call ‘good’ than whites do, the concept of ‘white privilege’ cannot explain the existence of disparities between ethnic groups in the US, since some bad things happen to whites and some good things happen to non-whites—but this is an oversimplification. The whites who do receive privileges over other ethnic/racial groups do so not merely in virtue of their (white) skin privilege, but in virtue of their class privilege. This can be seen in the citations above on class being the explanatory variable in Asian academic success (showing how class values get reproduced in the new country, which then explains the academic success of Asians in America).
Both of these articles assume that showing some minority groups in America have more ‘good’ things than whites, or better outcomes on bad things (like suicides), refutes white privilege, but this misses the point. That whites kill themselves more than other American ethnic groups does not mean that whites do not have privilege in America compared to other groups.
If race doesn’t exist, then why does white privilege matter?
Lastly, those who argue against the concept of white privilege may say that its proponents also hold that race (and therefore ‘whites’) does not exist; so, in effect, what are they talking about if ‘whites’ don’t exist because race does not exist? This is, of course, a confused objection. One can reject the claims of biological racial realists while still believing that race exists as a socially constructed reality. Thus, one can reject the claim that there is a ‘biological’ European race while accepting the claim that there is an ever-changing ‘white’ race, to which groups get added or from which they get subtracted based on current social thought (e.g., the Irish, Italians, Jews), changing with how society views certain groups.
It is perfectly possible for race to exist socially and not biologically. The social creation of races places the socially created racial groups at certain positions in a hierarchy of races. Roberts (2011: 15) states that “Race is not a biological category that is politically charged. It is a political category that has been disguised as a biological one.” She argues that we are not biologically separated into races; we are politically separated into them, signifying race as a political construct. Many people believe that the claim “Race is a social construct” means that “Race does not exist.” But that does not follow. The social constructivist just believes that society divides people into races on the basis of how we look (i.e., how we are born). So society takes the phenotype and creates races out of differences that then correlate with certain continents.
So there is no contradiction between the claim that “Race does not exist biologically” and the claim that “Whites have certain unearned privileges over other groups.” Being an antirealist about biological race does not mean that one is an antirealist about social races. Thus, one can believe that whites have certain privileges over other groups while being an antirealist about biological races (saying that “Races don’t exist biologically”).
In this article I have explained what white privilege is and who has it. I have also discussed arguments against white privilege, along with the claim that those who argue against race are hypocrites since they still talk about “whites” while denying that race exists. After showing the conceptual confusions that people have about white privilege, and that the groups that do better than whites in America (the groups that supposedly show white privilege is “a fallacy”) are hyper-selected, I then forwarded Sullivan’s (2016, 2019) argument on white class privilege. This shows that whiteness is not the sole reason why whites prosper: their whiteness along with their middle-to-upper-middle-class status explains why they prosper. It furthermore shows that while lower-class whites do have some sort of white privilege, they do not have all of the affordances of white privilege, due to their class status. Blacks, too, can benefit from white privilege, whether through their proximity to whiteness or their ancestral heritage.
White privilege does exist, but to fully understand it, we must add in the nexus of class with it.
Summer vacation gives us a natural experiment for studying the effects of schooling on IQ—and, unsurprisingly, the outcome is that one’s IQ score is a function of what one is exposed to during the summer. We see the expected trajectories and outcomes in IQ based on the social class of the individual. A handful of studies carried out since the 1920s have examined what occurs during summer vacation, and small but noticeable decreases in IQ are found. This only serves to further strengthen the claim that “IQ tests” are middle-class knowledge tests and that IQ is an outcome—not a cause.
Why might we see an IQ decrease over the summer? For one, students are thrown out of the “school rhythm” they get into during the nine months they are in school. Since they have three months off from their learning (say, June to September), when it comes to test-taking they become less familiarized with these kinds of tasks, causing a decrease in scores. If “IQ tests” are indeed tests of middle-class knowledge and skills, and if we think of an “IQ score” as a rough proxy for social class, then we would expect certain academic achievements related to IQ to rise or fall in certain contexts (i.e., with one’s age, race, gender, social class, etc.). This is what we find.
For instance, Cooper et al (1996) meta-analyzed 13 studies (while reviewing 39). They found that, due to summer vacation, students’ scores as tested before and after the summer showed a loss equivalent to about one month on a grade-level equivalent scale. They also found that middle-class children showed an increase in reading over the summer, while lower-class children showed a decrease. This can be explained by, for example, the presence of books in the home and how it differs between social classes. We know that the presence of books in the home is an indicator of academic performance (Evans, Kelley, and Sikora, 2014). This is important, because children who reported that they had easier access to books read more books (Kim, 2004), while voluntary reading programs do increase reading test scores (Kim and White, 2008).
“Growing up in the scholarly culture provides important academic skills”, note Evans, Kelley, and Sikora (2014: 19), and this is because such tests are constructed by certain people with certain assumptions about the nature of the tests in question (Richardson, 2000, 2002). Thus, what explains the finding is that those from higher-class families have more access to books, and so they avoid the decrease in reading skills during the summer. (Think of “summer reading” programs. I recall them from my youth; I remember reading The Hot Zone for a summer reading book once.) This replicates previous research from this team showing that children who grew up in homes with “many books” had three more years of schooling than children from “bookless homes”, independent of the occupation, education, and social class of the parents (Evans et al, 2010).
Cooper et al (1996) discuss Heyns’ (1978) book Summer Learning and the Effects of School, in which Heyns shows that summer learning is more dependent on parental occupation than is learning during the school year (Cooper et al, 1996: 243). Heyns’ data showed that summer vacation widened the gap in achievement between rich and poor (meaning high and low social class) and that it also widened the gap between blacks and whites. Cooper et al’s meta-analysis also showed that the gap in reading achievement between middle- and low-class learners over the summer was equivalent to a 3-month gap between them. While children in both classes show decreases in reading skills over the summer, lower-class students showed steeper declines than middle-class students. What this suggests is that class differences can—and do—in fact increase inequalities between the two classes. A lower-class status would then translate to being presented with fewer learning opportunities (meaning fewer opportunities to prepare for what amounts to middle-class knowledge tests), therefore explaining why the gap increases between the two social classes.
So, as Cooper et al (1996) show, summer vacation has an equal effect on math skills for middle- and lower-class children, while, when it comes to reading skills, lower-class students take a bigger hit (which can be explained by access to books in the home). To attempt to mitigate these disparities, we could, for example, mandate some type of summer math program for all classes, or reading instruction for lower-class children, since the analysis pointed to these two types of disparities. Of course, reading practice is more readily available than math practice, which would explain why the disparities differ between blacks and whites and between social classes.
Note that a decrease in mathematical skill was found by Paechter et al (2015) in a sample of Austrian children, who have a 9-week vacation. They write that “Losses or gains in a knowledge domain appear to depend on the degree of practice during the summer vacation“, and this is intuitive based on the nature of test-taking.
Entwisle and Alexander (1992; 1994) studied the “summer setback” in a random sample of black and white children in Baltimore, Maryland. In longitudinal fashion, they tested these black and white children before they entered the first grade. Math test scores were used as a proxy for how ‘stimulating’ a home was when it came to knowledge acquisition during the summer. They found that the two most important factors for math skills during the summer were differences in family SES and how segregated the schools were. They also noted that school integration helps black students, while white students do just as well whether or not the school they attend is integrated. (Also see Johnson and Nazaryan, 2019, who show the same—they also show that, regardless of race, children who attended integrated schools had better life outcomes than children who did not.) The 1994 paper also showed that linguistic differences between integrated Baltimore schools could account for differences in reading skills. (Also see Patterson, 2015.)
Alexander, Entwisle, and Olson (2001) write (their emphasis):
When our study group started school their pre-reading and pre-math skills reflected their uneven family situations, and these initial differences were magnified across the primary grades because of summer setback despite the equalizing effect of their school experiences.
Class gaps grow in the summer, when “non-school influences dominate” (Condron, 2009), which, again, shows that these tests test certain types of knowledge found in certain classes over others, which explains the disparities between groups. It is established that higher-SES children learn more over the summer (Burkam et al, 2004), and this is due, again, to the types of content on the tests in question (since the tests are constructed by people from a narrow—higher—social class).
The lasting effects of the summer vacation learning gap is succinctly put by Alexander, Entwisle, and Olson (2007: 168):
(1) if the achievement gap by family SES during the elementary school years traces substantially to summer learning differences, and (2) if achievement scores are highly correlated across stages of young people’s schooling, and (3) if academic placements and attainments at the upper grades are selected on the basis of achievement scores, then (4) summer learning differences during the foundational early grades help explain achievement-dependent outcome differences across social lines in the upper grades, including the transition out of high school and, for some, into college.
Thus, summer vacation has a negative effect on all students, and this is particularly pronounced in differences between groups. So, we can either:
(1) Extend the school year. With a longer school year, we could monitor children better and mitigate the problem areas that arise during time away from school;
(2) Mandate summer school. With mandated summer school, there would be less of an increase in the achievement gap between classes, though such remedial classes have differing effects depending on context and the group studied (see McComb et al, 2001; Cooper et al, 2005);
(3) Modify the school calendar. Since the hit to knowledge is not equal across all groups studied, it would behoove us to target at-risk groups through the school year and then, possibly, have longer but more frequent breaks, rather than three months off all at once, so as to better foster the academic skills used for test-taking.
The heart of the problem of the ‘summer slide’ is less stimulating environments during the summer (which differ by race/social class); thus, what explains the differences in how much knowledge is retained over summer vacation is how well the household mimics the school environment, since the tests in question are tests of middle-class knowledge and skills. This squares nicely with the research showing that schooling is important for IQ—even that it is causally efficacious regarding IQ (Ceci, 1990; Ritchie and Tucker-Drob, 2018). Even then, the gap between blacks and whites in test scores grows much more slowly during the school year than during summer vacation, indicating that “schools are, indeed, the great equalizers” (Downey, von Hippel, and Broh, 2004: 633).
This type of research does indeed buttress my claim that IQ is an outcome and not a cause. The claim that one is more ‘intelligent’ than another, or that ‘one has a higher IQ than another’, is a descriptive claim, not an explanatory one. We have at least three options to think over when it comes to mitigating the problems that summer vacation brings to students—namely, environments that are ‘duller’ relative to the school environment, which hampers learning and knowledge acquisition. Due either to lower levels of forgetting or to an advantage in continuing to learn over their less-advantaged peers, higher-SES children return to school with an advantage over lower-SES children. Summer vacation, therefore, widens inequality between groups.
It has come to my attention that near the end of 2007, Nike announced the release of a running shoe that specifically targeted Native American communities. Nike developed the shoe “to address the specific fit and width requirements for the Native American foot.” Since Native Americans have high rates of obesity and diabetes (“diabesity”), it seems that promoting a shoe specifically for and to the population in question would be a good thing. But do such gestures translate to racist ideas, or to a corporation wanting to be seen as promoting health (while its ultimate goal is profit)? Nike, specializing in athletic clothing, surely would be a good organization to spearhead such a movement, right? On what research is this initiative based, and does it hold water?
Through such outreach programs, Nike hopes to be seen as making social and community impacts when it comes to health. As Welch (2019: 12) notes, the N7 initiative hopes “to further promote sport and physical activity in Native American communities.” Such programs—and specific items that would catch the eye of the consumers in question, heighten their physical activity, and subsequently lessen their rates of fatal diseases—should be seen as a good thing, irrespective of the feelings of those who see such outreach as racist.
The shoe was developed by a podiatrist named Rodney Stapp who served the Native community for his whole life (b. 1961, d. 2016). This was the first—and since then, only—time that Nike developed a shoe for a specific racial group. Was it a good idea? Was it racist? Even if it could be construed as racist, wouldn’t it be negated by targeting a group that has some of the highest rates of diabesity in America, therefore leading to a more active population and mitigation of the diseases in question? (See Broussard et al, 1991; Narayan, 1996; Acton et al, 2002). Since exercise seems to be necessary in managing diabetes and its symptoms (Colberg et al, 2010; Kirwan, Sacks, and Nieuwoudt, 2018; Borhade and Singh, 2020), then it seems that, irrespective of whether or not such gestures are racist, that such outreach and initiatives are a net good for the population in question.
Stapp was a big-name figure in outreach to Native groups in Texas, and was the podiatrist whom Nike consulted in developing the Nike Native American N7 shoe. It was Stapp who contacted Nike to make such a shoe, since the patients he served did not like the black, bulky shoes specially developed for diabetics—the efficacy of such shoes, though, has been debated in the literature (e.g., Brunner, 2015), while others have noted that diabetics cite the style and appearance of diabetic shoes as the reason for low compliance in wearing them (Macfarlane and Jensen, 2003). In any case, wouldn’t marketing shoes toward specific demographics be a net good, irrespective of the ultimate goals of the company, if it promoted healthier behaviors in the population in question?
Nike, though, has been criticized for the initiative, with Native rights groups claiming that Nike is using Native plight for profit (Cole, 2008; Sanders, Phillips, and Alexander, 2018). The criticism arises because the shoe is embroidered with feathers, sunsets, arrows, and other kinds of symbolism prevalent in Native cultures of the Americas. Here, I would not say that such things are racist on their face; it is just a marketing ploy to sell more shoes. While such marketing can be construed as racist in a way, I think the good that such a program and shoe would do in reaching at-risk populations outweighs any racist connotations the shoes and the outreach program carry.
But most would have a problem with the claim that the shoe was developed specifically for “Native American feet”. Stapp claimed that “Indians tend to have a wider foot, but their heels are about average”, which would mean slippage while running in a normal running shoe. Nike’s press release on the shoe says that “A strong emphasis was placed on providing a performance product that would cater to the specific needs of Native American foot shapes and help provide motivation to Native Americans predisposed to, or suffering from, health issues that can be improved by leading physically active lifestyles”, while also stating that “Research has engaged individuals from over 70 tribes as well as consulting podiatrists and members of Indian Health Services and the National Indian Health Board.”
(I am unable to find the research in question; hopefully someone can point me in the right direction so that I can find it.)
There is a history of reports of such differences in the feet of North and South American Natives—with North Americans having longer and more slender feet than South Americans (e.g., Kate, 1918). Nike stated that it developed the shoe to accommodate Native Americans’ wider feet, along with combating the diabesity epidemic that affects them. In 2015, Stapp stated that he believed the introduction of the shoe dropped amputations from 5-6 per year to 0-1 per year. If it is indeed true that the shoes were related to lowering the incidence of foot amputations in Native communities, the cause would seem to be that people are moving more and getting more blood to their lower extremities, which would then lead to lowered rates of amputation in these diabetic populations.
The claim that such shoes “racially profile” Natives is ridiculous. Stapp said that Nike asked him whether there were differences between the feet of Native groups and others, to which he answered “Yes.” Apparently, around the time of the marketing of the shoes, Nike was told that Native Americans had problems fitting into Nike’s ‘normal’ running shoes due to the width of their feet (wider than average). Along with Natives supposedly having wider feet, diabetes causes inflammation of tissue that is concentrated in the feet—for instance, diabetic foot ulcers (Pendsey, 2010; Schoen and Norman, 2014; Tuttolomondo, Maida, and Pinto, 2015; Amin and Doupis, 2016)—so it would seem that the call for such shoes to be developed would be a net good for the population.
Though I can see how a shoe targeted at a specific racial group could be construed as racist, the net good that such a shoe does in reaching certain populations outweighs the negative connotations that the accusation of racism brings on Nike. Indeed, some of the developers of the shoe were Native, worked with Natives, and developed it specifically to help Natives manage a debilitating disease that leads to many negative health outcomes—like foot amputation and, eventually, death. So if exercise is conducive to managing diabetes and diabetic foot, and the N7s target certain populations with different average foot morphology, then it seems that the shoe has been a net good for the population, since, according to Stapp, seven years after the introduction of the shoe diabetic foot amputations went from 5-7 to 0-1 per year. While he may have had financial incentives to say that, I don’t think this undermines the fact that Nike’s N7 program had positive benefits—even if they can be construed in a negative way (i.e., claims of racism).
The answer to the question “Should we market shoes to specific demographics?” is “Yes.” It would be a good idea to, for example, make more demographic-specific shoes with specific embroideries in order to target at-risk populations that are more likely to acquire certain diseases on the basis of physical inactivity—as the Nike N7 program and Nike Native American N7 shoe attempt to do. It is for these reasons (irrespective of whether such morphological claims about the feet of Natives are true) that the initiative in question is a good thing. The moral question of whether we “should” market things—in this example, shoes—to certain demographics seems to rest on whether the marketing would have a positive effect on the lifestyles of the groups in question. If it does, then we should market such programs toward at-risk populations, irrespective of claims that such marketing is racist toward certain groups.
Read the original article here. The titular person is a blogger who covers population genetics and fossils concerning Southeast Asia. The article represents one of his latest syntheses of modern human origins. I believe it is mostly well done, particularly in alluding to an Asian origin for the LCA of Sapiens, Neanderthals, and Denisovans, which he expanded upon here.
The focus for today, however, concerns issues in representing the geographical positioning of Sapiens, for which he alludes to an Asian origin, though the fossils he uses do not support it as firmly as he suggests.
For a long time, the fossil evidence did support this narrative; although fossils from the past 150,000 years were very rare or even absent in Africa, some older human skulls were pressed into supporting it. It is different with East Asia: we can find fossils with modern morphology that lived between 190-130 Kya (Zhirendong, Liujiang). Even signals of dental modernity have appeared since 296,000 years ago (Panxian Dadong), about 100,000 years preceding the modern teeth of Misliya Cave in the Levant (194-177 Kya). And modern face shapes have appeared since 900,000 years ago (Yunxian, Nanjing, Zhoukoudian). This includes the Dali face (550-260 Kya).
Regarding the evidence of “modern” tendencies, here is what the record shows:
Among these sites, Fuyan (Daoxian) Cave, Luna Cave, Zhirendong Cave, and Huanglong Cave are currently considered as the best evidence in support of the early presence of H. sapiens in China, based on a clearer chronostratigraphic context and a more diagnostic morphology. There are other sites such as Ganqian (Tubo; Shen et al. 2001), Tongtianyan (Liujiang; Shen et al. 2002; Yuan, Chen, and Gao 1986), Dingcun (Chen, Yuan, and Gao 1984; Pei 1985), and Jimuyan (Wei et al. 2011) that we consider of interest to assess the evolution of modern humans in China. However, because of the more ambiguous morphology of the fossils and/or uncertainties about their antiquity they are considered less unequivocal than those from Fuyan Cave, Luna Cave, Zhirendong Cave, and Huanglong Cave.
Liujiang, which the author of the blog post associates with the same interval as Zhirendong on the basis of a cave study, was not from the same site, nor was its context firmly grounded. See here for references. As for the 130-190 kya interval for Zhiren Cave, that exceeds the direct date of 106-110 kya for the fossils themselves; the date he uses is from a study of the cave itself.
For Panxian, I commented on this in another post (now deleted).
The Panxian specimens were mainly archaic, while PH3 was found to be derived, but in no specific fashion.
The 900-kya facial features from China that he speaks of concern mainly the mid-face; fully modern faces didn’t appear until Antecessor.
Dali, as of the 2017 study, was concluded to represent gene flow, given its lack of conformity to local erectus fossils relative to African and European ones.
One of the easily distinguishable features of modern humans is the shape and morphology of the skull. Compared to its predecessors, the modern human skull is more gracile, the face is flatter vertically, the chin protrudes, and the braincase is more globular. If a skull has most of these features, it is classified under our species, modern humans. The older skulls that have been suggested as part of our modern human ancestry are the Omo I and Omo II skulls from Omo Kibish, in southern Ethiopia (Leakey et al. 1967). The two Omo skulls are around 195,000 years old (initially dated to only 130,000 years) and have a mixture of archaic and modern features, something that is not surprising if we view them as African archaic humans who probably met their modern human ancestors from elsewhere before evolving into modern humans. (Note that the name Homo sapiens idaltu, idaltu meaning ‘elder’ in the Afar language, was in fact given to the later Herto skulls, not to the Omo specimens.)
A mixture of archaic features would also be expected if these specimens were early and transitional; somehow this point is lost.
Several skulls from East and South Africa tell much the same story. Things were thought to have improved when three Herto skulls were found in 1997 around Afar, Ethiopia, dated to 154,000-160,000 years, which also show mixed archaic and modern craniofacial features. The Herto skulls were found in the same layer as Middle Stone Age (MSA) and Later Stone Age (LSA) artifacts. The location, artifacts, and age of the Herto remains closely match the Out of Africa model, convincing many scientists that Herto man could be the closest anatomical ancestor of modern humans (Out of Ethiopia).
Same as explained before; meanwhile, in terms of cranial features, China is lacking as far as skulls are concerned.
Some fossils are classified as part of Homo sapiens, such as Omo I and Herto (although they are substantially different, both still have primitive morphology, and some scientists consider Herto a subspecies of Homo sapiens). Jebel Irhoud’s status is debated: some paleoanthropologists openly accept it as a close relative of Homo sapiens, while others do not, considering Jebel Irhoud part of archaic Africa and perhaps even part of a different evolutionary line from that of Homo sapiens. Florisbad man, previously classified as Homo helmei, is not sufficient to represent the evolutionary line of Homo sapiens due to its primitive character and the absence of a braincase.
He doesn’t provide citations or quotes to demonstrate who thinks this way, or explain how China solves this problem with specimens of comparable quality. Outside of teeth, which generally don’t go beyond the 120-kya interval, China is lacking. While he mentions in the article that the Broken Hill skull fails as an ancestor, the mixed skulls he mentions are, however, morphologically expected, as the study notes.
Meanwhile, the 130-kya braincase from Singa shows modern morphology.
Overall, his latest article does a better job but still presents issues, which I’ll summarize here from a comment of mine that was since deleted.
“Let’s look at Africa. One of the oldest candidates for Homo ergaster, KNM-ER 3733, turns out to be 1.63 million years old, and all specimens from the Turkana Basin have estimates between 1.6-1.43 million years. Nariokotome Boy, or Turkana Boy (KNM-WT 15000; 1.5 million years), which has always been presented as a representative of Homo erectus, turns out to be on the Homo ergaster evolutionary line, because it does not have a canine fossa.”

Yet we know that Erectus is currently oldest in South Africa; I have even seen you post this research: https://www.smithsonianmag.com/science-nature/homo-erectus-australopithecus-saranthropus-south-africa-180974571/

“The Konso skull from southern Ethiopia is about 1.4 million years old. Buia and Daka (about 1 million years old) best fit the transition from Homo ergaster to Homo rhodesiensis, as does Gombore II (~780 kya). Daka, which is hypothesized as Homo erectus, has more morphological similarities to KNM-ER 3733, so sharing one morphology with Homo antecessor or East Asian Homo erectus does not necessarily make it part of Homo erectus. The youngest Homo ergaster, OH 12, is 780 kya and has a cranial capacity and facial shape similar to those of KNM-ER 3733. With an age difference of about 850,000 years, the morphological continuity is still very clear. Imagine if they were still classified as Homo erectus? Yet it is clear that their evolutionary path does not lead to the Homo sapiens line of evolution. In addition, some of the morphology of OH 12 is also similar to KNM-ER 3883, D2282, and D2700.”

Some references for these affinities?

“The Homo habilis specimens from Koobi Fora range from 1.75 to 1.65 million years old. If KNM-ER 1802 is classified as Homo habilis (we must first verify this based on the presence of a canine fossa, or it could be Homo rudolfensis, represented by KNM-ER 62000), then the origin of its appearance is about 2 million years ago. So far, the specimens representing Homo habilis that show the shallow canine fossa include OH 24, OH 62, OH 65, KNM-ER 1813, and KNM-ER 42703, with a time span of 1.86-1.44 million years (though this still needs further investigation due to limited references). Of course, this is younger than the Longgudong human teeth, at more than 2.14 million years old. In fact, at several locations in the Ciscaucasus there are many traces of artifacts more than 2 million years old.”

The problem here is that Koobi Fora isn’t the oldest Habilis; the oldest is 2.3 million years old, in Afar. In fact, the oldest Homo (morphologically distinct from Australopithecus) is 2.5 million years old. Also, as far as artifacts go, unless you have evidence proving otherwise, both the Oldowan and the Acheulean are oldest in Africa, at 2.6 and 1.7 million years ago respectively.

“Meanwhile, KNM-ER 2598, which was assumed to be a candidate for the Out of Africa I population (with an estimated initial age of 1.88-1.9 million years ago), was found on the surface and may have originated from a younger stratigraphic deposit. KNM-ER 1813 is also estimated to be 1.86 million years old, or at the Olduvai Subchron boundary (1.95-1.78 million years ago).”

See the above mention of the South African find. Also, what proof supports this suggestion?

“Alternatively, KNM-ER 1813 and the other hominins in Area 123 could be younger than 1.65 million years.”

Again, evidence?

“After 1.65 million years ago, Turkana Basin humans were dominated by Homo ergaster, who was contemporary with the early Sangiran humans (Sangiran 4 and S27) but younger than the early Bumiayu humans. So the best candidates for the ancestors of the ancient Javanese could be among the Dmanisi humans, or the ancient humans of Longgudong (>2.14 million years) and Yuanmou (1.7-1.72 million years).”

The most recent evidence rules out Longgudong, seeing as it is best defined as Habilis and distinct from Erectus with regard to the teeth. Likewise, studies in 2001 and 2002 place the latter specimen below 1 million years, so there is no consensus. By the way, Naledi does have a canine fossa. Otherwise, I agree with the rest of the article.
Hereditarianism is the theory that differences in psychology between individuals and groups have a ‘genetic’ or ‘innate’ (to capture the thought before the ‘gene’ was conceptualized) cause to them—which therefore would explain the hows and whys of, for example, the current social hierarchy. The term ‘racism’ has many referents—and using one of the many definitions of ‘racism’, one could say that the hereditarian theory is racist since it attempts to justify and naturalize the current social hierarchy.
In what I hope is my last word on the IQ/hereditarian debate, I will provide four conceptual arguments against hereditarianism: (1) psychologists don’t ‘measure’ any’thing’ with their psychological tests, since there is no specified measured object, no object of measurement, and no measurement unit for any specific trait; only physical things can be measured, and psychological ‘traits’ are not physical, so they cannot be measured (Berka, 1983; Nash, 1990; Garrison, 2009); (2) there is no theory or definition of “intelligence” (Lanz, 2000; Richardson, 2002; Richardson and Norgate, 2015; Richardson, 2017), so there can be no ‘measure’ of it; the example of temperature and thermometers will be briefly mentioned; (3) the logical impossibility of psychophysical reduction entails that mental abilities/psychological traits cannot be genetically inherited/transmitted; and (4) psychological theories are influenced by the current problems and goings-on in society, just as society is influenced by psychological theories. These four objections are lethal to hereditarianism, the final one showing that psychology is not an ‘objective science.’
(i) The Berka/Nash measurement objection
The Berka/Nash measurement objection is simply this: if there is no specified measured object, object of measurement, or measuring unit for the ‘trait’, then no ‘thing’ is truly being ‘measured’, as only physical things can be measured. Nash gives the example of a stick: the stick is the measured object, the length of the stick is the object of measurement (the property being measured), and inches, centimeters, etc., are the measuring units. Because the stick exists in physical space, its property, length, can be measured. Since psychological traits are not physical (this will come into play for (ii) as well) and have no physical basis, there can be no ‘measuring’ of psychological traits. Instead, scaling is accepted by fiat as a ‘measure’ of something; this, though, leads to confusion, especially among psychologists.
The most obvious problem with the theory of IQ measurement is that although a scale of items held to test ‘intelligence’ can be constructed, there are no fixed points of reference. If the ice point of water at one atmosphere fixes 273.16 K, what fixes 140 points of IQ? Fellows of the Royal Society? Ordinal scales are perfectly adequate for certain measurements. Mohs’ scale of scratch hardness consists of ten fixed points from talc to diamond, and is good enough for certain practical purposes. IQ scales (like attainment test scales) are ordinal scales, but this is not really to the point, for whatever the nature of the scale it could not provide evidence for the property IQ or, therefore, that IQ has been measured. (Nash, 1990: 131)
In first constructing its scales and only then proceeding to induce what they ‘measure’ from correlational studies, psychometry has got into the habit of trying to do what cannot be done and doing it the wrong way round anyway. (Nash, 1990: 133)
The fact of the matter is that IQ tests don’t even meet a minimal theory of measurement, since there is no non-circular definition of what this ‘general cognitive ability’ even is.
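Nash’s point about ordinal scales can be made concrete with a short sketch (my own toy example, not from Nash; the hardness figures are approximate and purely illustrative):

```python
# A toy example (mine, not Nash's) of why an ordinal scale licenses
# comparisons of order but not arithmetic on scores. Mohs hardness has
# ten fixed points, but equal rank gaps do not correspond to equal
# physical gaps. The "absolute hardness" figures are approximate
# sclerometer values, included purely for illustration.
mohs_rank = {"talc": 1, "gypsum": 2, "corundum": 9, "diamond": 10}
absolute_hardness = {"talc": 1, "gypsum": 3, "corundum": 400, "diamond": 1500}

def rank_gap(a, b):
    """Difference in Mohs rank: well defined, but only order is meaningful."""
    return abs(mohs_rank[a] - mohs_rank[b])

def physical_gap(a, b):
    """Difference in the underlying physical quantity the ranks merely order."""
    return abs(absolute_hardness[a] - absolute_hardness[b])

# Equal one-step ordinal gaps...
print(rank_gap("talc", "gypsum"), rank_gap("corundum", "diamond"))  # 1 1
# ...conceal wildly unequal physical gaps:
print(physical_gap("talc", "gypsum"), physical_gap("corundum", "diamond"))  # 2 1100
```

An IQ scale is at best ordinal in this sense: even granting the ranking, nothing fixes what a “15-point gap” is a quantity of.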
(ii) No theory or definition of intelligence
This also goes back to Nash’s critique of IQ (since there can be no non-circular definition of what “IQ tests” purport to measure): there is no theory or definition of intelligence, therefore there can be no ‘measure’ of it. Imagine claiming to have measured temperature without a theory behind it. Indeed, I have explained in another article that although IQ-ists like Jensen and Eysenck emphatically state that the ‘measuring’ of ‘intelligence’ with “IQ tests” is “just like” the measuring of temperature with thermometers, this claim fails, as there is no physical basis to psychological traits/mental abilities, so they cannot be measured. If “intelligence” is not like height or weight, then “intelligence” cannot be measured. “Intelligence” is not like height or weight. Therefore, “intelligence” cannot be measured.
We had a theory and definition of temperature, and only then was the measuring tool constructed to measure the construct. The construct of temperature was then verified independently of the instrument originally used to measure it, via the thermoscope, which in turn was checked against human sensation. Thus, temperature was verified in a non-circular way. “Intelligence tests”, on the other hand, are “validated” circularly: if a new test correlates highly with older tests (like Terman’s Stanford-Binet), it is held to ‘measure’ the construct of ‘intelligence’, even though none of the previous tests were themselves validated!
This, too, is therefore a problem for IQ-ists: their scale was constructed first (to agree with the social hierarchy, no less; Mensh and Mensh, 1991), and only then did they set about seeing what their scales ‘measure’ with correlational studies. But the fact that two things are correlated does not mean that one causes the other; there could be some unknown third variable driving the relationship, or the relationship could be spurious. In any case, this conceptual problem, too, afflicts the IQ-ist. IQ is nothing like temperature, since temperature is an actual physical measure that was verified independently of the instrument constructed to measure it in the first place.
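The circularity of correlational “validation” can be illustrated with a small simulation (my own sketch, not from any source cited here): two tests driven by the same confound correlate highly even though neither has been shown to measure “intelligence.”

```python
# A toy simulation (mine, not from the sources cited): two "tests"
# that both track the same confound (say, schooling exposure) will
# correlate highly with each other, yet the high correlation says
# nothing about whether either measures "intelligence".
import random

random.seed(0)

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

confound = [random.gauss(0, 1) for _ in range(5000)]
old_test = [c + random.gauss(0, 0.5) for c in confound]  # the "established" test
new_test = [c + random.gauss(0, 0.5) for c in confound]  # the "new" battery

r = pearson(old_test, new_test)
print(round(r, 2))  # high (around 0.8), yet neither test was validated
```

The same structure holds however many generations of tests are chained together: correlation with an unvalidated predecessor cannot confer validity.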
Claims that individuals are ‘intelligent’ (whatever that means) or not are descriptive, not explanatory: they reflect one’s current “ability” (used loosely) relative to one’s current age norms (Anastasi; Howe, 1997).
(iii) The logical impossibility of psychophysical reduction
I will start this section off with two (a priori) arguments:
Anything that cannot be described in material terms using words that only refer to material properties is immaterial.
The mind cannot be described in material terms using words that only refer to material properties.
Therefore the mind is immaterial; materialism is false.
If physicalism is true then all facts can be stated using a physical vocabulary.
But facts about the mind cannot be stated using a physical vocabulary.
So physicalism is false.
(Note that the arguments provided are valid, and I hold them to be sound; an objector would therefore need to identify and refute a premise.)
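For the formally inclined, the validity claim can be checked mechanically. Here is a minimal Lean 4 sketch (my own formalization, not in the original) showing that the first argument is an instance of modus ponens and the second of modus tollens, so only the truth of the premises, not the logic, is open to dispute:

```lean
-- Argument 1: anything not describable in material terms is immaterial;
-- the mind is not so describable; hence the mind is immaterial.
theorem argument1 (Describable Immaterial : Prop)
    (p1 : ¬Describable → Immaterial) (p2 : ¬Describable) : Immaterial :=
  p1 p2

-- Argument 2: if physicalism is true, all facts are statable in a
-- physical vocabulary; facts about the mind are not; so physicalism
-- is false (modus tollens).
theorem argument2 (Physicalism AllFactsPhysicallyStatable : Prop)
    (p1 : Physicalism → AllFactsPhysicallyStatable)
    (p2 : ¬AllFactsPhysicallyStatable) : ¬Physicalism :=
  fun h => p2 (p1 h)
```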
Therefore, if not all facts can be stated using a physical vocabulary, and if the mind cannot be described in material terms using words that refer only to material properties, then there can, logically, be no such thing as ‘mental measurement’, no matter what IQ-ists try to tell you.
Different physical systems can give rise to the same mental phenomena, which is the point of the argument from multiple realizability. Since psychological traits/mental states are multiply realizable, it is impossible for mental kinds to reduce to physical kinds: a single mental kind can be realized by many different physical states. Psychological states are either multiply realizable or they are type-identical to the physio-chemical states of the brain. The latter is a kind of mind-brain identity thesis: the claim that the mind is identical to states of the brain. Although mind and brain are correlated, this does not mean that the mind is the brain or that the mind reduces to physio-chemical states, as Putnam’s argument from multiple realizability concludes. If type-physicalism is true, then every mental property must be realized in exactly the same way. But, empirically, it is highly plausible that mental properties can be realized in multiple ways. Therefore, type-identity theory is false.
Psychophysical laws are laws connecting mental abilities/psychological traits with physical states. But, as Davidson famously argued in his defense of anomalous monism, there can be no such laws linking mental and physical events. There are no psychophysical laws, therefore there can be no scientific theory of mental states. Science studies the physical; the mental is not physical; thus, science cannot study the mental. Indeed, since there are no bridge laws linking the mental and the physical, and the mental is irreducible to and underdetermined by the physical, it follows that science cannot study the mental. Therefore, a science of the mind is impossible.
Further note that the claim “IQ is heritable” reduces to “thinking is heritable”, since the main aspect of test-taking is thinking. Thinking is a mental activity that results in a thought. What, then, is a thought? A thought is a mental state of considering a particular idea or answer to a question, or of committing oneself to an idea or an answer. These mental states are, or are related to, beliefs. When one considers a particular answer to a question, one is paving the way to holding a certain belief; when one commits to an answer, one has committed to a new belief. Since beliefs are propositional attitudes, believing p means adopting the belief attitude that p. Cognition, then, is thinking: a mental process that results in the formation of a propositional belief. Since thinking is bound up with beliefs and desires (without beliefs and desires we would not be able to think), thinking (cognition) is irreducible to physical/functional states. This means that the main aspect of test-taking, thinking, is irreducible to the physical; physical states do not explain thinking, and so the main aspect of (IQ) test-taking is irreducible to the physical.
(iv) Reflexivity in psychology
In this last section, I will discuss the reflexivity, or circularity, problem for psychology. This is important for psychological theorizing since, to its practitioners, psychology is seen as an ‘objective science.’ If you think about how psychology (and science generally) is practiced, it investigates third-personal, not first-personal, states. Thus, there can be no science of the mind (which is what psychology purports to be), and psychology cannot, therefore, be an ‘objective science’ as the hard sciences are. The ‘knowledge’ we attain from psychology comes, obviously, from the study of people. As Wade (2010: 5) notes, the knowledge that people and society are the objects of study “creates a reflexivity, or circular process of cause and effect, whereby the ‘objects’ of study can and do change their behavior and ideas according to the conclusions that their observers draw about their behavior and ideas.”
It is quite clear that such academic concepts do not arise independently—in the history of psychology, it has been used in an attempt to justify the current social hierarchy of the time (as seen in 1900s America, Germany, and Britain). Psychological theories are influenced by current social goings-on. Thus, it is influenced by the bias of the psychologists in question. “The views, attitudes, and values of psychologists influence the claims they make” (Jones, Elcock, and Tyson, 2011: 29).
… scientific ideas did not develop in a vacuum but rather reflected underlying political or economic trends.
The current social context influences the psychological discourse, and the psychological discourse influences the current social context. The a priori beliefs one holds will influence what one chooses to study. An obvious example: hereditarian psychologists who believe there are innate differences in ‘IQ’ (they use ‘IQ’ and ‘intelligence’ interchangeably, as if there were an identity relation) will undertake certain studies in order to ‘prove’ that the relationship they believe in holds, and that there is indeed a biological cause of mental abilities within and between groups and individuals. Note, however, that we have the data (blacks score lower on IQ tests) and one must then make an interpretation. So we have three possible scenarios: (1) differences in biology cause differences in IQ; (2) differences in experience cause differences in IQ; or (3) the tests are constructed to get the results the IQ-ists want in order to justify the current social hierarchy. Mensh and Mensh (1991) have succinctly argued for (3), while hereditarians argue for (1) and environmentalists argue for (2). While it is indeed true that one’s life experiences can influence one’s IQ scores, we have seen that it is logically impossible for genes to influence or cause mental abilities/psychological traits.
The only tenable answer is (3). Such relationships, as noted by Mensh and Mensh (1991), Gould (1996), and Garrison (2009), between test scores and the social hierarchy are interpreted by the hereditarian psychologist thusly: (1) our tests measure an innate mental ability; (2) if our tests measure an innate mental ability, then differences in the social hierarchy are due to biology, not environment; (3) thus, environmental differences cannot account for what is innate between individuals so our tests measure innate biological potential for intelligence.
The [IQ] tests do what their construction dictates; they correlate a group’s mental worth with its place in the social hierarchy. (Mensh and Mensh, 1991)
Richards (1997), in his book on racism in the history of psychology, identified 331 psychology articles published between 1909 (the year the ‘gene’ was first conceptualized, no less) and 1940 that argued for biology as a difference-maker for psychological traits, while noting that 176 articles for the ‘environment’ side were published in the same period.
Note that the racist views of the psychologists in question more than likely influenced their research interests: they set out to ‘prove’ their a priori biases. Indeed, they even modeled their tests after such biases. Tests that agreed with their presuppositions about who was or was not intelligent were kept, whereas those that did not were thrown out (as noted by Hilliard, 2012). This is just as Jones, Elcock, and Tyson (2011: 67) note with the ‘positive manifold’ (‘general intelligence’):
Subtests within a battery of intelligence tests are included on the basis of them showing a substantial correlation with the test as a whole, and tests which do not show such correlations are excluded.
From this, it directly follows that psychometry (and psychology) is not a science and does not ‘measure’ anything (returning to (i) above). What psychometrics (and psychology) does is use its biased tests to sort individuals into where they ‘belong’ in the social hierarchy. Standardized testing (IQ tests were among the first standardized tests, along with the SAT), and by proxy psychometrics, is NOT a form of measurement. The hierarchy the tests ‘find’ is presupposed to exist and then constructed into existence, the test being used to ‘prove’ the testers’ biases ‘right.’
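The item-selection point can be simulated (a toy model of my own, not from Mensh and Mensh or Jones, Elcock, and Tyson): if subtests are retained only when they correlate with the battery total, whatever dominates the initial pool defines the “construct,” and everything else is excluded by fiat.

```python
# A toy model (mine, not from the sources cited) of the selection rule
# quoted above: keep a subtest only if it correlates substantially with
# the battery total. Items tapping whatever dominates the initial pool
# survive; items tapping an independent ability are excluded by fiat.
import random

random.seed(1)

N = 2000
ability_a = [random.gauss(0, 1) for _ in range(N)]
ability_b = [random.gauss(0, 1) for _ in range(N)]  # independent of ability_a

def noisy(xs):
    """An 'item' = the ability plus measurement noise."""
    return [x + random.gauss(0, 0.7) for x in xs]

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Candidate battery: six items tap ability A, three tap ability B.
items = [noisy(ability_a) for _ in range(6)] + [noisy(ability_b) for _ in range(3)]
total = [sum(vals) for vals in zip(*items)]

# The quoted selection rule: keep items that correlate with the total.
kept = [it for it in items if pearson(it, total) > 0.5]

# The A-items dominate the total (six of nine), so only they pass;
# the B-items are defined out of the "construct" by the rule itself.
print(len(kept))  # 6
```

On this toy model, the “internal consistency” of the final battery is a consequence of the selection rule, not evidence of a unitary underlying ability.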
Indeed, Hilliard (2012) noted that in South Africa in the 1950s there was a 15-point IQ difference between two white cultural groups. Rather than fan the flames of political tension between the groups, the testers changed the test to eliminate the difference. The same, she notes, was the case for IQ differences between men and women: Terman eliminated such differences by picking certain items that favored one group or the other and balancing them so that the groups scored near-equally. These are two great examples from the 20th century of reflexivity in psychology: one’s a priori biases influence what one studies and the conclusions one draws from data.
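The Terman example can be sketched numerically (the item statistics below are hypothetical, invented purely for illustration): when individual items show gaps in both directions, the test assembler can tune the aggregate gap at will.

```python
# Hypothetical item statistics (invented for illustration; not
# Terman's actual data). Each pair is (group 1 mean, group 2 mean)
# for one candidate test item.
favors_g1 = [(0.72, 0.60), (0.68, 0.55), (0.80, 0.71)]  # items favoring group 1
favors_g2 = [(0.58, 0.70), (0.63, 0.74), (0.66, 0.79)]  # items favoring group 2

def gap(items):
    """Aggregate group-1-minus-group-2 gap over a set of items."""
    return sum(a - b for a, b in items)

# Keep only items favoring group 1 and the test "shows" a group difference:
print(round(gap(favors_g1), 2))
# Balance the two pools and the same item universe "shows" near-parity:
print(round(gap(favors_g1 + favors_g2), 2))
```

Nothing in the item pool dictates either outcome; the gap, or its absence, is an artifact of assembly choices.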
Psychology, at least when it comes to racial differences in ‘IQ’, is being used to confirm pre-existing prejudices, not to find ‘new objective facts.’ “… psychology [puts] a scientific gloss on the accepted social wisdom of the day” (Christian, 2008: 5). This can be seen by reading into the history of “IQ tests” themselves. The point is that psychology and society influence each other in a reflexive, circular manner. Thus, psychology is not and cannot be an ‘objective science’, and when it comes to ‘IQ’, the biases that brought the tests to America, and with them social policy, are still believed today, albeit implicitly.
Psychology originally developed in the US in the 19th century as an attempt to fix societal problems: there needed to be a science of the mind, and psychology purported to be just that. A science of ‘human nature’ was needed, and it was for this reason that psychology developed in the US. The first US psychologists were trained in Germany and then returned to the US to develop an American psychology. Note, though, that in Germany psychology was seen as the science of the mind, while in America it would turn out to be the science of behavior (Jones, Elcock, and Tyson, 2011). This also speaks to the eugenic views held by certain IQ-ists in the 20th and into the 21st century.
In Nazi Germany, Jewish psychologists were purged since their views did not line up with those of the Nazi regime.
Psychology appealed to the Nazi Party for two reasons: because psychological theory could be used to support Nazi ideology, and because psychology could be applied in service to the state apparatus. Those psychologists who remained adapted their theories to suit Nazi ideology, and developed theories that demonstrated the necessary inferiority of non-Aryan groups (Jones and Elcock, 2001). These helped to justify actions by the state in discriminating against, and ultimately attempting to eradicate, these other groups. (Jones, Elcock, and Tyson, 2011: 38-39)
These examples show that psychology influences society but also that society influences psychological theorizing. Clearly, what psychologists choose to study, since society influences psychology, is a reflection of a society’s social concerns. In the case of IQ, crime, etc., the psychologist attempts to naturalize and biologicize such differences in order to explain them as ‘innate’ or ‘genetic’. The rise of IQ tests in America also coincided with the worry that ‘national intelligence’ was declining, and so the IQ test would be used to ‘screen’ prospective immigrants. (See Richardson, 2011 for an in-depth consideration of the tests and the conditions the testees were exposed to on Ellis Island; see also Gould, 1996.)
(i) The Berka/Nash measurement objection is one of the most lethal arguments against the IQ-ist position. If IQ-ists cannot state the specified measured object, the object of measurement, and the measuring unit for IQ, then they cannot say that any’thing’ is being ‘measured’ by ‘IQ tests.’ This brings us to (ii): since there is no theory or definition of what is being ‘measured’, and since the tests were constructed before any theory, there is necessarily a built-in bias in what is being ‘measured’ (namely, so-called ‘innate mental potential’). (iii) Since it is logically impossible for the psychological to reduce to physical structure, and since not all facts can be stated using a physical vocabulary, nor can the mind be described using terms that refer only to material properties, this is another blow to the claim that psychology is an ‘objective science’ and that some’thing’ is being ‘measured’ by its tests (constructed to agree with a priori biases). And (iv): the bias inherent in psychology (on both the right and the left) influences practitioners’ theorizing and how they interpret data. Society has influenced psychology (and psychology society), and we need only look at America and Nazi Germany in the 20th century to see that this holds.
The relationship between psychology and society is inseparable: it is a truism that what psychologists choose to study, and how and why they formulate their conclusions, will be influenced by the biases they already hold about society and why it is the way it is. For these reasons, psychology/psychometry are not ‘sciences’ and hereditarianism is not a logically sound position. Hereditarianism, then, remains what it was when it was formulated: a racist theory that attempts to biologicize and justify the current social hierarchy. Thus, one should not accept that psychologists ‘measure’ any’thing’ with their tests; one should not accept the claim that mental abilities can be genetically transmitted/inherited; and one should not accept the claim that psychology is an objective ‘science’, given the reflexive relationship between psychology and society.
The arguments given show why hereditarianism should be abandoned: it is not a scientific theory; it merely attempts to naturalize social inequalities between individuals and groups as biological (Mensh and Mensh, 1991; Gould, 1996; Garrison, 2009). Psychometrics (which hereditarians use to attempt to justify their claims) is, then, nothing more than a political enterprise.