One of the weaknesses of HBD, in my opinion, is its focus on the Paleolithic and modern eras while glossing over the major developments in between. For instance, the links made between Paleolithic Western Europe's Cro-Magnon art and modern Western Europe's prowess (note the geographical/genetic discontinuity there, for those actually informed on such matters).
Africa, having a worse archaeological record due to ideological histories and modern problems, is left rather vulnerable to reliance on the outdated sources already discussed on this blog. This lack of coverage, however, isn't strict.
Updated material will eventually be presented in a future outline of Neolithic to Middle Ages development in West Africa.
A recent example of an erroneous comparison is found in Heiner Rindermann's Cognitive Capitalism, pages 129-130. He makes multiple claims about precolonial African development to explain prolonged investment in magical thinking.
- Metallurgy not developed independently.
- No wheel.
- The Dinka did not properly use their cattle, leaving large, uneaten portions of the herd castrated.
- No domesticated animals of indigenous origin despite European animals being just as dangerous, contra Diamond (lists African dogs, cats, antelope, gazelle, and zebras as potential specimens; mentions European foxes as an example of a "dangerous" animal recently domesticated, along with African antelopes in Ukraine).
- A late, diffused Neolithic Revolution, 7000 years after that of the Middle East.
- Less complex Middle Age structures.
- Less complex cave structures.
Now, technically, much of this falls outside of what would be considered "Neolithic," even in Africa's case. However, understanding the context of Neolithic development in Africa provides context for each of these points and periods by virtue of causality. Thus, they will be addressed in archaeological sequence.
Dog domestication, Foxes, and human interaction.
The domestication of dogs occurred when Eurasian hunter-gatherers intensified megafauna hunting, attracting less aggressive wild dogs to tame around 23,000-25,000 years ago. Rindermann's mention of the fox experiment replicates this idea. Domestication isn't a matter of breaking the most difficult of animals; it's using the easiest ones to your advantage.
Within this same scope, Africa's case needs to be compared. In regards to behavior, African wild dogs are rarely solitary, so attracting lone individuals is already impractical. The species likewise developed under a different level of competition.
They were probably under as much competition from these predators as the ancestral African wild dogs were under from the guild of super predators on their continent.
What was different, though, is that the ancestral wolves never evolved in an environment in which scavenging from various human species was a constant threat, so they could develop behaviors towards humans that were not always characterized by extreme caution and fear.
Europe in particular had a lower carnivore density, which was advantageous to hominins.
Consequently, the first Homo populations that arrived in Europe at the end of the late Early Pleistocene found mammal communities consisting of a low number of prey species, which accounted for a moderate herbivore biomass, as well as a diverse but not very abundant carnivore guild. This relatively low carnivoran density implies that the hominin-carnivore encounter rate was lower in the European ecosystems than in the coeval East African environments, suggesting that an opportunistic omnivorous hominin would have benefited from a reduced interference from the carnivore guild.
This pattern is also borne out by megafaunal extinction data.
The first hints of abnormal rates of megafaunal loss appear earlier, in the Early Pleistocene in Africa around 1 Mya, where there was a pronounced reduction in African proboscidean diversity (11) and the loss of several carnivore lineages, including sabertooth cats (34), which continued to flourish on other continents. Their extirpation in Africa is likely related to Homo erectus evolution into the carnivore niche space (34, 35), with increased use of fire and an increased component of meat in human diets, possibly associated with the metabolic demands of expanding brain size (36). Although remarkable, these early megafauna extinctions were moderate in strength and speed relative to later extinctions experienced on all other continents and islands, probably because of a longer history in Africa and southern Eurasia of gradual hominid coevolution with other animals.
This fundamental difference in adaptation to human presence and subsequent response is obviously a major detail in in-situ animal domestication.
Another example is the failure of even colonists to tame the zebra.
This leads me to my next point: what's the pay-off?
Pastoralism and Utility
A decent test of what fauna in Africa could be utilized would be the "experiments" of the Ancient Egyptians, who are seen as the Eurasian "exception" among African civilizations. Hyenas and antelope, from what I've read, were kept in custody, but over time this didn't result in selected traits. The only animal domesticated in this region was the donkey, a close relative of the zebra.
This brings another perspective to the Russian fox experiments: why were pet foxes not a trend for Eurasians prior to the 20th century? It can be assumed that attempts at animal domestication simply were not worth the investment in the wake of already-domesticated animals, even for one who grew up in a society/genetic culture that harnessed the necessary skills.
For instance, a slow herd of eland can be corralled and domesticated, but will it pay off compared to the gains from adapting already-diffused animals to a new environment? (This too will be expanded upon in the future.)
Elephants are useful for large colonial projects, but local diseases that discourage herding and disrupt population density again affect the utility of large-bodied animals. Investing in agriculture and iron proved more successful.
Cats effectively domesticated themselves and lacked any real utility prior to feasting on urban pests. In Africa, with its highly mobile groups (as will be explained later), investment in cats wasn't going to change much. Wild guineafowl, however, were useful to tame in West Africa and were used to eat insects.
As can be seen here, pastoralism, diffused from the Middle East, is roughly as old in Africa as in Europe. Both regions lacked independently domesticated species prior to it and made few innovations with in situ beasts beyond that foundation. (Advancement in plant management preceding developed agriculture, a skill that would parallel dog domestication for husbandry, will be discussed in a future article.)
And given how advanced Mesoamericans became without draft animals, as mentioned before, their importance seems overplayed from a purely "indigenous" perspective. Their role in invention itself ought to be questioned as well, in terms of what we can actually infer.
Borrowed, so what?
As a thought experiment, let's consider some key details of diffusion. The independent invention of animal domestication or metallurgy is by no means something to be glossed over. Over-fixating on it, however, in turn glosses over other details of successful diffusion.
Why would a presumably less apt population adopt a cognitively demanding skill and reorient its whole society around it, without us attributing this change to an internal change of character? Living in a new type of economic system is, as a trend, bound to produce a new population in regards to using cognition to exploit resources. This would require contributions of their own to the process.
This applies to African domesticated breeds,
Viewing domestication as an invention also produces a profound lack of curiosity about evolutionary changes in domestic species after their documented first appearances. [……] African domesticates, whether or not from foreign ancestors, have adapted to disease and forage challenges throughout their ranges, reflecting local selective pressures under human management. Adaptations include dwarfing and an associated increase in fecundity, tick resistance, and resistance to the most deleterious effects of several mortal infectious diseases. While the genetics of these traits are not yet fully explored, they reflect the animal side of the close co-evolution between humans and domestic animals in Africa. To fixate upon whether or not cattle were independently domesticated from wild African ancestors, or to dismiss chickens’ swift spread through diverse African environments because they were of Asian origin, ignores the more relevant question of how domestic species adapted to the demands of African environments, and how African people integrated them into their lives.
The same can be said for Metallurgy,
We do not yet know whether the seventh/sixth century Phoenician smelting furnace from Toscanos, Spain (illustrated by Niemeyer in MA, p.87, Figure 3) is typical, but it is clearly very different from the oldest known iron smelting technology in sub-Saharan Africa. Almost all published iron smelting furnaces of the first millennium cal BC from Rwanda/Burundi, Buhaya, Nigeria, Niger, Cameroon, Congo, Central African Republic and Gabon are slag-pit furnaces, which are so far unknown from this or earlier periods in the Middle East or North Africa. Early Phoenician tuyères, which have square profiles enclosing two parallel (early) or converging (later) narrow bores are also quite unlike those described for early sites in sub-Saharan Africa, which are cylindrical with a single and larger bore.
African ironworkers adapted bloomery furnaces to an extraordinary range of iron ores, some of which cannot be used by modern blast furnaces. In both northern South Africa (Killick & Miller 2014) and in the Pare mountains of northern Tanzania (Louise Iles pers. comm., 2013) magnetite-ilmenite ores containing up to 25 per cent TiO2 (by mass) were smelted. The upper limit for TiO2 in iron ore for modern blast furnaces is only 2 per cent by mass (McGannon 1971). High-titanium iron ores can be smelted in bloomery furnaces because these operate at lower temperatures and have less-reducing furnace atmospheres than blast furnaces. In the blast furnace titanium oxide is partially reduced and makes the slag viscous and hard to drain, but in bloomery furnaces it is not reduced and combines with iron and silicon oxide to make a fluid slag (Killick & Miller 2014). Blast furnace operators also avoid ores containing more than a few tenths of a percent of phosphorus or arsenic, because when these elements are dissolved in the molten iron, they segregate to grain boundaries on crystallization, making the solid iron brittle on impact.
As for the Dinka's supposedly improper use of cattle, the castration in question follows deliberate logic,
Bulls (and rams) are often, but not necessarily, castrated at a fairly advanced age, probably in part to allow the conformation and characteristics of the animal to become evident before the decision is made. A castrated steer is called muor buoc, an entire bull thon (men in general are likened to muor which are usually handsome animals greatly admired on that account; an unusually brave, strong or successful man may be called thon, that is, "bull with testicles"). Dinka do not keep an excess of thon, usually one per 10 to 40 cows. Stated reasons for the castration of others are for important esthetic and cultural reasons, to reduce fighting, for easier control, and to prevent indiscriminant or repeat breeding of cows in heat (the latter regarded as detrimental to pregnancy and accurate
Since then, pearl millet, rice, yams, and cowpeas have been confirmed as crops indigenous to the area, contrary to others' hypotheses. Multiple studies show a late expansion southwards, likely linking them to Niger-Congo speakers. Modern SSA genetics likewise reveals farmer-population expansion signals similar to those of Neolithic ancestry in Europeans, matching the region's own late date of agriculture.
Rindermann also made multiple remarks on Africa's "exemplars," trying to construct a sort of perpetual gap since the Paleolithic by citing Renfrew's Neuroscience, evolution and the sapient paradox: the factuality of value and of the sacred. However, Renfrew doesn't quite support the comparisons made and is driving at a whole different point.
The discovery of clearly intentional patterning on fragments of red ochre from the Blombos Cave (at ca 70 000 BP) is interesting when discussing the origins of symbolic expression. But it is entirely different in character, and very much simpler than the cave paintings and the small carved sculptures which accompany the Upper Palaeolithic of France and Spain (and further east in Europe) after 40 000 BP.[….]
It is important to remember that what is often termed cave art—the painted caves, the beautifully carved ‘Venus’ figurines—was during the Palaeolithic (i.e. the Pleistocene climatic period) effectively restricted to one developmental trajectory, localized in western Europe. It is true that there are just a few depictions of animals in Africa from that time, and in Australia also. But Pleistocene art was effectively restricted to Franco-Cantabria and its outliers.
It was not until towards the end of the Pleistocene period that, in several parts of the world, major changes are seen (but see Gamble (2007) for a more nuanced view, placing more emphasis upon developments in the Late Palaeolithic). They are associated with the development of sedentism and then of agriculture and sometimes stock rearing. At the risk of falling into the familiar ‘revolutionary’ cliché, it may be appropriate to speak of the Sedentary Revolution (Wilson 1988; Renfrew 2007a, ch. 7).[….] Although the details are different in each area, we see a kind of sedentary revolution taking place in western Asia, in southern China, in the Yellow River area of northern China, in Mesoamerica, and coastal Peru, in New Guinea, and in a different way in Japan (Scarre 2005).
Weil (2014) paints a picture of African development in 1500, both relative to the rest of the world and heterogeneity within the continent itself, using as his indicators population density, urbanization, technological advancement, and political development. Ignoring North Africa, which was generally part of the Mediterranean world, the highest levels of development by many indicators are found in Ethiopia and in the broad swathe of West African countries running from Cameroon and Nigeria eastward along the coast and the Niger river. In this latter region, the available measures show a level of development just below or sometimes equal to that in the belt of Eurasia running from Japan and China, through South Asia and the Middle East, into Europe. Depending on the index used, West Africa was above or below the level of development in the Northern Andes and Mexico. Much of the rest of Africa was at a significantly lower level of development, although still more advanced than the bulk of the Americas or Australia.
This is a topic I've been wanting to cover for a while. It can be said that many scientists who investigate these topics receive public outcry, accused of returning racial segregationist ideology to academia, to an unfair extent. It would be odd, however, to extend the same defense to Richard Fuerle, and not in any ironic way. He basically peddled the Carleton Coon multiregional theory that not even multiregionalists would buy, though a quick Google search will lead you to those who would (not the most unbiased group).
The intent of this article is to show that a decent chunk of Fuerle's arguments are indeed outdated and don't jibe with current evidence. While not a review of the whole book, this post will demonstrate enough basic facts to discourage reliance on his arguments.
First page (the hardest in my opinion) and none in biology. For reference, I encourage commenters to cite from the book if they take issue with my criticisms, as I'm only paraphrasing from this point forward.
Quick and simple (and somewhat setting a pattern): this is a trait that RR has covered in the past, with others still getting it wrong. Rather than a reduced or adaptive specialization, the bone density of modern Europeans came about as a result of sedentary behavior from the Neolithic onward.
Sedentary living among Sub-Saharans is far more recent; even with crops going back several millennia B.C.E., intensification wasn't common until plantations arose during the slave trade. Shifting cultivation, though variable, was the norm. I'll touch on this in a future article on the African Neolithic.
Another of his pitfalls was the implications of shovel teeth in modern populations.
- The high rate of the trait is indicative of modern phylogenetic ancestry, supporting his case for Asians.
- The trait in Asians derives from Peking Man.
Both are pretty much refuted by the archaic and modern variants being different. And contrary to his estimates of human divergences being millions of years old, Europeans are closer to modern Africans than to Neanderthals in dentition. This also refutes his assertion of the primitive nature of Africans relative to other humans in the case of phylogenetics. On the particular features, it's another story.
In this case there's no need to look any further than the works of Joel Irish, who I'm willing to bet is unparalleled on this topic in modern research.
Retention of primitive features goes back to the African migrants into Eurasia, Homo sapiens both recent and past having long retained archaic traits.
We recently examined whether or not a universal criterion for dental modernity could be defined (Bailey and Hublin 2013). Like cranial morphology, dental morphology shows a marked range of variation; so much that multiple geographic dental patterns (e.g., Mongoloid, Proto-Sundadont, Indodont, Sub-Saharan African, Afridont, Caucasoid, Eurodont, Sundadont, Sinodont) have been identified in recent humans (Hanihara 1969, 1992; Mayhall et al. 1982; Turner 1990; Hawkey 1998; Irish 1998, 2013; Scott et al. 2013). Our analysis confirmed that, while some populations retain higher frequencies of ancestral (i.e., primitive) dental traits [e.g., Dryopithecus molar, moderate incisor shoveling (Irish 1997)] and others show higher frequencies of recently evolved (i.e., derived) dental traits [e.g., double shoveling, four-cusped lower molars (Turner 1983; Irish and Guatelli-Steinberg 2003)], all recent humans show some combination of both primitive and derived traits (Bailey and Hublin 2013).
Africans tend to have higher frequencies of retained features, but in the context of recent Eurasian variants this is to be expected, and Irish has actually used this data to support an African dispersal.
Assuming that phenetic expression approximates genetic variation, previous dental morphological analyses of Sub-Saharan Africans by the author show they are unique among the world’s modern populations. Numerically-derived affinities, using the multivariate Mean Measure of Divergence statistic, revealed significant differences between the Sub-Saharan folk and samples from North Africa, Europe, Southeast Asia, Northeast Asia and the New World, Australia/Tasmania, and Melanesia. Sub-Saharan Africans are characterized by a collection of unique, mass-additive crown and root traits relative to these other world groups. Recent work found that the most ubiquitous of these traits are also present in dentitions of earlier hominids, as well as extinct and extant non-human primates; other ancestral dental features are also common in these forms. The present investigation is primarily concerned with this latter finding. Qualitative and quantitative comparative analyses of Plio-Pleistocene through recent samples suggest that, of all modern populations, Sub-Saharan Africans are the least derived dentally from an ancestral hominid state; this conclusion, together with data on intra- and inter-population variability and divergence, may help provide new evidence in the search for modern human origins.
The same was done by his colleague, who had first posited a West Asian origin as Fuerle did (undoubtedly on much firmer grounds) and has recently integrated it into the modern OOA model.
To date, the earliest modern human fossils found outside of Africa are dated to around 90,000 to 120,000 years ago at the Levantine sites of Skhul and Qafzeh. A maxilla and associated dentition recently discovered at Misliya Cave, Israel, was dated to 177,000 to 194,000 years ago, suggesting that members of the Homo sapiens clade left Africa earlier than previously thought. This finding changes our view on modern human dispersal and is consistent with recent genetic studies, which have posited the possibility of an earlier dispersal of Homo sapiens around 220,000 years ago. The Misliya maxilla is associated with full-fledged Levallois technology in the Levant, suggesting that the emergence of this technology is linked to the appearance of Homo sapiens in the region, as has been documented in Africa.
This then smoothly glides into the next topic.
Thus we also find that the basis of modern diversification is recent, below 50,000 years in age.
On the appearance of modern East Asian and Native American traits,
Our results show strong morphological affinities among the early series irrespective of geographical origin, which together with the matrix analyses results favor the scenario of a late morphological differentiation of modern humans. We conclude that the geographic differentiation of modern human morphology is a late phenomenon that occurred after the initial settlement of the Americas.
On the features of earlier Paleoamericans.
During the last two decades, the idea held by some late 19th and early 20th century scholars (e.g., Lacerda and Peixoto, 1876; Rivet, 1908) that early American populations presented a distinct morphological pattern from the one observed among recent Native Americans, has been largely corroborated. Studies assessing the morphological affinities of early American crania have shown that crania dating to over seven thousand years BP generally show a distinct morphology from that observed in later populations. This observation is better supported in South America, where larger samples of early specimens are available: population samples from central Brazil (Lagoa Santa; Neves and Hubbe, 2005; Neves et al., 2007a) and Colombia (Bogotá Savannah; Neves et al., 2007b) as well as in isolated specimens from Southeast Brazil (Capelinha; Neves et al., 2005), Northeast Brazil (Toca dos Coqueiros; Hubbe et al., 2007) and Southern Chile (Palli Aike; Neves et al., 1999). Distinct cranial morphology has also been observed in early skulls from Meso-America (Mexico; González-José et al., 2005) and North America (Jantz and Owsley, 2001; Powell, 2005). This evidence has recently demonstrated that the observed high levels of morphological diversity within the Americas cannot simply be attributed to bias resulting from the small available samples of early crania, as was previously suggested (Van Vark et al., 2003).
Recent Native American cranial morphology varies around a central tendency characterized by short and wide neurocrania, high and retracted faces, and high orbits and nasal apertures. In contrast, the early South and Meso-American (hereafter Paleoamerican) crania tend to vary around a completely different morphology: long and narrow crania, low and projecting faces, and low orbits and nasal apertures (Neves and Hubbe, 2005). These differences are not subtle, being of roughly the same magnitude as the difference observed between recent Australian aborigines and recent East Asians (Neves and Hubbe, 2005; Neves et al., 2007a,b; but see González-José et al., 2008 for a different opinion). When assessed within the comparative framework of worldwide craniometric human variation, Paleoamerican groups show morphological affinities with some Australo-Melanesian and African samples, while Amerindian groups
Earlier waves of Native Americans were replaced by later waves of migrants from Asia bearing later specializations.
The same can be demonstrated in Africa.
For the second half of the Late Pleistocene and the period preceding the Last Glacial Maximum (LGM) (i.e., MIS 3), the only two sites with well preserved and securely dated human remains are Nazlet Khater 2 (38 ± 6 Ky, Egypt; Crevecoeur, 2008) and Hofmeyr (36.2 ± 3.3 Ky, South Africa; Grine et al., 2007). These fossils represent additional evidence for Late Pleistocene phenotypic variability of African sub-groups. The Hofmeyr specimen exhibits the greatest overall similarities to early modern human specimens from Europe rather than to Holocene San populations from the same region (Grine et al., 2007). Moreover, the Nazlet Khater 2 specimen preserves archaic features on the cranium and the mandible more comparable to those of Late Middle Pleistocene and early Late Pleistocene fossils than to chronologically closer recent African populations (Crevecoeur, 2012). These specimens represent aspects of modern human phenotypic variation not found in current populations. This situation seems to have lasted until the beginning of the Holocene in the African fossil record, not only in the northeastern part of the continent (Crevecoeur et al., 2009) but also in the west central (Iwo Eleru, Nigeria, Harvati et al., 2011; Stojanowski, 2014) and eastern regions (Lukenya Hill, Kenya, Tryon et al., 2015). During the Holocene, an increased homogenization of cranio-morphological features is documented, particularly within sub-Saharan Africa, with its peak during and after the Bantu expansion from 6 Ky ago (Ribot, 2011).
Without ambiguity, the EUP-like Hofmeyr skull was found to be archaic relative to recent SSA crania.
Although the supraorbital torus is comparable in thickness to that in UP crania, its continuous nature represents a more archaic morphology (26). In this regard, Hofmeyr is more primitive than later sub-Saharan LSA and North African UP specimens (such as Lukenya Hill and Wadi Kubbaniya), even though they may have a somewhat thicker medial supraorbital eminence. Despite its glabellar prominence and capacious maxillary sinuses, Hofmeyr exhibits only incipient frontal sinus development, a condition that is uncommon among European UP crania (27). The mandibular ramus has a well-developed gonial angle, and the slender coronoid process is equivalent in height to the condyle. The mandibular (sigmoid) notch is deep and symmetrical, and its crest intersects the lateral third of the condyle. The anterior margin of the ramus is damaged, but it is clear that there was no retromolar gap. The Hofmeyr molars are large. The buccolingual diameter of M2 exceeds recent African and Eurasian UP sample means by more than 2 SD (table S3). Radiographs reveal cynodont molars, although pulp chamber height is likely to have been affected by the deposition of secondary dentine in these heavily worn teeth. Thus, Hofmeyr is seemingly primitive in comparison to recent African crania in a number of features, including a prominent glabella; moderately thick, continuous supraorbital tori; a tall, flat, and straight malar; a broad frontal process of the maxilla; and comparatively large molar crowns.
One of the traits unique to modern Eurasians is a measurable increase in cranial index.
Craniometric data have been collected from published and unpublished reports of numerous authors on 961 male and 439 female crania from various sites in Subsaharan Africa spanning the last 100 ka. All data available in the literature, irrespective of their "racial" affinities, were used to cover the prehistoric and early historic times (up to 400 a BP). Samples covering the last 400 years do not include European colonists and consist of skeletons excavated at archeological sites, collected by early European travelers and derived from anatomical collections. Cranial capacity, depending on the mode of its calculation, has decreased by 95–165 cm3 among males and by 74–106 cm3 among females between the Late Stone Age (30-2 ka BP) and modern times (last 200 years). Values of the cranial index did not show any trend over time and their averages remained in the dolichocephalic category. The decrease in cranial capacity in Subsaharan Africa is similar to that previously found in Europe, West Asia, and North Africa, but, unlike the latter, it is not accompanied by brachycephalization.
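For readers unfamiliar with the metric in that abstract, the cranial index is simply maximum skull breadth over maximum length, expressed as a percentage. A minimal sketch, using the conventional textbook category cutoffs rather than values taken from the paper, with hypothetical measurements:

```python
def cranial_index(max_breadth_mm: float, max_length_mm: float) -> float:
    """Cranial (cephalic) index: maximum breadth / maximum length * 100."""
    return max_breadth_mm / max_length_mm * 100.0

def classify(index: float) -> str:
    """Conventional cutoffs: <75 dolichocephalic, 75-80 mesocephalic, >=80 brachycephalic."""
    if index < 75.0:
        return "dolichocephalic"   # long-headed
    elif index < 80.0:
        return "mesocephalic"      # intermediate
    return "brachycephalic"        # broad-headed

# A long, narrow skull (hypothetical measurements, index ~72.3):
print(classify(cranial_index(136.0, 188.0)))  # dolichocephalic
```

The abstract's point in these terms: Subsaharan African averages stayed in the dolichocephalic range throughout, whereas Europe, West Asia, and North Africa trended upward into brachycephalization.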
It's worth noting that even in Fuerle's data, despite his emphasizing this trait in a single black example, Caucasians have a larger browridge; blacks were described as small in comparison for this trait. Likewise, the data indicate that the skulls were generally smoother and rounder, with more receded cheekbones.
For a comprehensive look at these differences, this paper seems sufficient.
Morphological characteristics of the orbit that are most variable among the African, Asian, and European samples include orbital volume (obv), orbital depth (obd), basion-superior orbit (bso), and orbital breadth (obb), and are also those that contribute most to group separation in the multivariate analyses. Interorbital breadth (dkb), biorbital breadth (ekb), and basion-orbitale (bio) were not found to be statistically different among these samples, however the low significance value for basion-orbitale in a one-way analysis of variance (p = 0.055) indicates that some degree of divergence exists among them. Additionally, while a significance test was not carried out for "shape" of the orbital margins, it is clear that general differences exist among groups. The most notable difference is between the Asian and African samples, in which the former possesses high and narrow orbits (a more rounded shape), and the latter is characterized by lower and wider orbital margins (a more rectangular shape).
This current investigation reveals that the orbital margins vary in association with these long-term evolutionary changes, becoming vertically shorter, horizontally elongated, more frontated, and retracted relative to basion, with a greater degree of reduction in the inferior orbital margins.
In other words, the rectangular orbit shape of "Negroids" is a retention, but one toward a baseline sapiens trend.
The wide rectangular shape of the orbital margins resulting from a shift in relative size of orbital height and orbital breadth is highly characteristic of anatomically modern humans from the Upper Paleolithic in Europe and Asia (chapter 5), and extant groups from Sub-Saharan Africa (chapter 3). Following the Upper Paleolithic however, the trend toward superoinferiorly shorter and more elongated orbits associated with a grade shift in craniofacial form began to reverse, and the orbital margins become taller and narrower, taking on a more rounded shape. This more recent trend has also been documented among East Asian groups dating to the Holocene (Brown & Maeda, 2004; Wu et al. 2007), and is investigated as part of a larger examination of orbital change through the European Upper Paleolithic in chapter 5 of this thesis.
On the specifics for Eurasians,
In looking at size and shape of the orbital margins it can be seen that orbital breadth does not vary in relation to cranial shape, but does decrease as the upper facial index increases, with the same being true of biorbital breadth. In contrast, orbital height is positively correlated with both shape features, which one might expect particularly in relation to the upper facial index, in which a vertical increase in facial height and decrease in facial width would be assumed to affect in a similar way these same dimensions of the orbit. However, Brown & Maeda (2004) found that throughout the Neolithic in China, orbital height increases substantially even while facial height is reduced in that region.
In nearly every case, orbital variables are more highly correlated with shape of the
face than with shape of the head, which is understandable given their inclusion in the facial framework. However, the relationship between basion-orbitale and basion-superior orbit is negatively correlated with both cranial and facial shape variables and to approximately the same degree. This is of particular interest given that the upper facial index comprises two variables that indicate the relationship between height and width of the face in the coronal plane, though measures of basion-orbitale and basion-superior orbit lie in the parasagittal plane. Orbital depth also decreases in association with increased facial height and decreased facial breadth, but is not statistically related to change in cranial shape. This too is surprising given that orbital depth might be expected to decrease more as a result of anterior-posterior shortening of the skull rather than in relation to a narrowing and elongation of the face.

Although the direction and magnitude of the relationship between orbital morphology and craniofacial shape largely mimics observed changes in orbital features during the last 30,000 years in Western Europe (section 5.4 above), orbital size deviates slightly from this pattern. Both orbital volume and the geometric mean of orbital height, breadth, and depth remained relatively unchanged since the Upper Paleolithic; however, both show a statistically significant negative relationship to the upper facial index, meaning that as the face becomes taller and narrower, space within the orbits is diminished.
Brown and Maeda (2004) show that among skulls of Australian Aborigines and
Tohoku Japanese, which represent changing craniofacial form since the end of the
Pleistocene, orbital volume is highly correlated with supraorbital breadth, lower facial prognathism, and shape of the orbital margins. Among these crania a broader
supraorbital region, more projecting facial skeleton and lower orbital index (more
rectangular shape) are associated with a larger orbital volume. Change in these features, including a strong trend toward higher and narrower orbits, is considered to reflect a decrease in orbital volume that occurred throughout the Holocene in China (Brown & Maeda, 2004).
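The “rectangular” (low, wide) versus “rounded” (high, narrow) contrast running through these passages is conventionally quantified with the orbital index, the ratio of orbital height to orbital breadth. A minimal sketch (the category cut-offs are commonly cited craniometric conventions that vary somewhat between authors, and the sample measurements are invented purely for illustration):

```python
# Orbital index: (orbital height / orbital breadth) * 100.
# Low values mean low, wide ("rectangular") orbits; high values mean
# high, narrow ("rounded") orbits. Cut-offs below follow commonly cited
# craniometric conventions, which vary somewhat between authors.

def orbital_index(height_mm: float, breadth_mm: float) -> float:
    """Return the orbital index from orbital height and breadth in mm."""
    return height_mm / breadth_mm * 100.0

def classify(index: float) -> str:
    """Bucket an orbital index into the three conventional categories."""
    if index < 76.0:
        return "chamaeconch (low/wide, 'rectangular')"
    if index < 85.0:
        return "mesoconch (intermediate)"
    return "hypsiconch (high/narrow, 'rounded')"

# Invented example measurements, for illustration only:
print(classify(orbital_index(31.0, 42.0)))  # chamaeconch (low/wide, 'rectangular')
print(classify(orbital_index(36.0, 38.0)))  # hypsiconch (high/narrow, 'rounded')
```

So a “more rectangular” orbit in these quotations is simply one whose index falls toward the low end of the scale, regardless of absolute orbital size.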
Africans’ prognathism and interorbital breadth can be accounted for here. Page 13 explains an association between interorbital breadth and prognathism. Within South Africans, however, wide interorbital breadth compensates for a less prognathic profile (pages 229-230). In Africans, compared to African Americans, it is more variable. Page 216 notes that robust craniofacial features do not correlate with browridge size; uncorrelated features can be explained by geography, for instance.
Richard Fuerle noted the particularly archaic nature of the 100-300k-year-old Kabwe/Broken Hill skull in contrast to modern humans in Ethiopia. Taking this together with modern “retentions,” he asserted that African peculiarities were long-standing and postulated that the Middle East was the actual home of human origins.
Some problems with this logic are posed by similar findings in Europe and Asia. Despite being contemporary with Neanderthals by context, the morphology of the Ceprano skull is closer to the last common ancestor (LCA) with sapiens.
Our analysis suggests two plausible explanations for the morphology sampled at Longlin Cave and Maludong. First, it may represent a late-surviving archaic population, perhaps paralleling the situation seen in North Africa as indicated by remains from Dar-es-Soltane and Temara, and maybe also in southern China at Zhirendong. Alternatively, East Asia may have been colonised during multiple waves during the Pleistocene, with the Longlin-Maludong morphology possibly reflecting deep population substructure in Africa prior to modern humans dispersing into Eurasia.
The number of Late Pleistocene hominin species and the timing of their extinction are issues receiving renewed attention following genomic evidence for interbreeding between the ancestors of some living humans and archaic taxa. Yet, major gaps in the fossil record and uncertainties surrounding the age of key fossils have meant that these questions remain poorly understood. Here we describe and compare a highly unusual femur from Late Pleistocene sediments at Maludong (Yunnan), Southwest China, recovered along with cranial remains that exhibit a mixture of anatomically modern human and archaic traits. Our studies show that the Maludong femur has affinities to archaic hominins, especially Lower Pleistocene femora. However, the scarcity of later Middle and Late Pleistocene archaic remains in East Asia makes an assessment of systematically relevant character states difficult, warranting caution in assigning the specimen to a species at this time. The Maludong fossil probably samples an archaic population that survived until around 14,000 years ago in the biogeographically complex region of Southwest China.
Our results indicate that the Hexian teeth are metrically and morphologically primitive and overlap with H. ergaster and East Asian Early and mid-Middle Pleistocene hominins in their large dimensions and occlusal complexities. However, the Hexian teeth differ from H. ergaster in features such as conspicuous vertical grooves on the labial/buccal surfaces of the central incisor and the upper premolar, the crown outline shapes of upper and lower molars and the numbers, shapes, and divergences of the roots. Despite their close geological ages, the Hexian teeth are also more primitive than Zhoukoudian specimens, and resemble Sangiran Early Pleistocene teeth. In addition, no typical Neanderthal features have been identified in the Hexian sample. Our study highlights the metrical and morphological primitive status of the Hexian sample in comparison to contemporaneous or even earlier populations of Asia. Based on this finding, we suggest that the primitive-derived gradients of the Asian hominins cannot be satisfactorily fitted along a chronological sequence, suggesting complex evolutionary scenarios with the coexistence and/or survival of different lineages in Eurasia. Hexian could represent the persistence in time of a H. erectus group that would have retained primitive features that were lost in other Asian populations such as Zhoukoudian or Panxian Dadong. Our study expands the metrical and morphological variations known for the East Asian hominins before the mid-Middle Pleistocene and warns about the possibility that the Asian hominin variability may have been taxonomically oversimplified.
Mandibular and dental features indicate that the Hexian mandible and teeth differ from northern Chinese H. erectus and European Middle Pleistocene hominins, but show some affinities with the Early Pleistocene specimens from Africa (Homo ergaster) and Java (H. erectus), as well as the Middle-Late Pleistocene mandible from Penghu, Taiwan. Compared to contemporaneous continental Asian hominin populations, the Hexian fossils may represent the survival of a primitive hominin, with more primitive morphologies than other contemporaneous or some chronologically older Asian hominin specimens.
Our dental study reveals a mosaic of primitive and derived dental features for the Xujiayao hominins that can be summarized as follows: i) they are different from archaic and recent modern humans, ii) they present some features that are common but not exclusive to the Neanderthal lineage, and iii) they retain some primitive conformations classically found in East Asian Early and Middle Pleistocene hominins despite their young geological age.
Middle to Late Pleistocene human evolution in East Asia has remained controversial regarding the extent of morphological continuity through archaic humans and to modern humans. Newly found ∼300,000-y-old human remains from Hualongdong (HLD), China, including a largely complete skull (HLD 6), share East Asian Middle Pleistocene (MPl) human traits of a low vault with a frontal keel (but no parietal sagittal keel or angular torus), a low and wide nasal aperture, a pronounced supraorbital torus (especially medially), a nonlevel nasal floor, and small or absent third molars. It lacks a malar incisure but has a large superior medial pterygoid tubercle. HLD 6 also exhibits a relatively flat superior face, a more vertical mandibular symphysis, a pronounced mental trigone, and simple occlusal morphology, foreshadowing modern human morphology. The HLD human fossils thus variably resemble other later MPl East Asian remains, but add to the overall variation in the sample. Their configurations, with those of other Middle and early Late Pleistocene East Asian remains, support archaic human regional continuity and provide a background to the subsequent archaic-to-modern human transition in the region.
The HLD human sample, primarily the HLD 6 skull but including the isolated cranial, dental, and femoral remains, provides a suite of morphological features that place it comfortably within the previously known Middle to early Late Pleistocene East Asian human variation and trends. These Middle-to-Late Pleistocene archaic human remains from East Asia can be grouped into four chronological groups, from the earlier Lantian–Chenjiawo, Yunxian, and Zhoukoudian; to Hexian and Nanjing; then Chaoxian, Dali, HLD, Jinniushan, and Panxian Dadong; and ending with Changyang, Xuchang, and Xujiayao. They are followed in the early Late Pleistocene by Huanglong, Luna, Fuyan, and Zhiren, which together combine archaic and modern features.
There is nonetheless substantial variation across the available East Asian sample within and across these chronological groups and especially in terms of individual traits and their combinations within specimens (SI Appendix, Figs. S16 and S17 and Tables S10, S12, and S13). However, similar variation within regions and within site samples is evident elsewhere during the MPl (as reflected in the persistent absence of taxonomic consensus regarding MPl humans; see refs. 19, 23, 41, and 42), and it need not imply more than normal variation among these fluctuating forager populations. The growing human fossil sample from mainland East Asia, enhanced by the HLD remains, therefore provides evidence of continuity through later archaic humans, albeit with some degree of variation within chronological groups. As such, the sample follows the same pattern as the accumulating fossil evidence for MPl (variably into the Late Pleistocene) morphological continuity within regional archaic human groups in Europe (e.g., ref. 43), Northwest Africa (e.g., ref. 44), and insular Southeast Asia (e.g., refs. 21 and 24), as well as into early modern humans in East Africa (e.g., ref. 45). Several divergent peripheral samples [Denisova, Dinaledi, and Liang Bua (46–48)] do not follow this pattern, but they are best seen as interesting human evolutionary experiments (49) and not representative of Middle to Late Pleistocene human evolution.
It is the core continental regions that provide the overall pattern of human evolution during this time period and form the background for the emergence of modern humans. Although there is considerable interregional diversity across these Old World subcontinental samples, primarily in details of craniofacial morphology, these fossil samples exhibit similar trends in primary biological aspects (e.g., encephalization, craniofacial gracilization). Moreover, all of these regional groups of Middle to Late Pleistocene human remains reinforce that the dominant pattern through archaic humans [and variably into early modern humans through continuity or admixture (16, 50, 51)] was one of regional population consistency combined with global chronological trends.
Fuerle has recently attempted to build a case for the existence of multiple biological species of humans from a molecular perspective. Fuerle used comparative genetic distance data involving various DNA types obtained from a variety of sources for a range of biological species and subspecies. The results of his review are summarized in the following table. Additional data involving non-mtDNA based estimates of the genetic distance between the gorilla species and the chimpanzees and bonobos have been included.
Table 4 would seem to suggest that the Sub-Saharan African (Bantu) and Australopapuan (Aborigine) genetic difference as measured by SNPs is greater than the genetic distance between the two species of gorilla (Gorilla gorilla and Gorilla beringei), and greater than the distance between the common chimpanzee and the bonobo as measured by mtDNA.
On the basis of this Fuerle suggests that there are only two
consistent courses of action to take regarding re-classification –
splitting or lumping. Either H. sapiens could be split into two species
– Homo africanus which would encompass modern African
populations and Homo eurasianensis which would encompass Eurasian
populations; making the genus Homo consistent in his view,
species-wise with respect to other genera in which the differences
between species are expressed in terms of much smaller genetic
distances; or alternatively the genetic variability within the human
species could be used to typologically define the absolute limits of
what constitutes a vertebrate species, which could then be employed
as a taxonomic baseline in the classification of other species.
This would mean lumping the two gorilla species and the
chimpanzee and the bonobo as single species.
FST reflects the relative amount of total genetic differentiation between populations; however, different measures of genetic distance involving mtDNA and autosomal loci are simply inappropriate for the purposes of inter-specific comparison, as the different genes involved will have been subject to markedly different selection pressures and are therefore not likely to have diverged at the same time. To illustrate this point, this author listed alternative estimates of the distance between the gorilla species and the common chimpanzee and bonobo, based on various nuclear loci and autosomal DNA. The much higher numbers reflect the extreme variation that can be expected when different genes are considered.
Fuerle’s presentation of the data is also problematic for another reason, namely that he makes no mention of the current debates surrounding gorilla and chimpanzee/bonobo taxonomy. New research on these taxa regularly generates novel and in some cases wildly variable estimates of genetic distance between these primates, and there is even some debate over whether the eastern and western gorillas are separate species.
Curnoe and Thorne have estimated that periods of around two million years were required for the production of sufficient genetic distances to represent speciation within the human ancestral lineage. This indicates that the genetic distances between the races are too small to warrant differentiation at the level of biological species, as the evolution of racial variation within H. sapiens started to occur only 60,000 years ago, when the ancestors of modern humans first left Africa.
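A concrete way to see why the single-marker distance comparisons above mislead: Wright's FST for a biallelic locus depends entirely on how far allele frequencies have diverged between the two populations, so loci under different selection pressures yield wildly different values. A minimal sketch (the allele frequencies are invented for illustration; real inter-specific comparisons use many loci and more sophisticated estimators):

```python
# Wright's F_ST for two populations at a single biallelic locus:
#   F_ST = (H_T - H_S) / H_T
# where H_T is the expected heterozygosity of the pooled population and
# H_S is the mean within-population expected heterozygosity.

def fst_biallelic(p1: float, p2: float) -> float:
    """F_ST from the frequency of one allele in each of two populations."""
    p_bar = (p1 + p2) / 2.0
    h_t = 2.0 * p_bar * (1.0 - p_bar)  # pooled heterozygosity
    h_s = (2.0 * p1 * (1.0 - p1) + 2.0 * p2 * (1.0 - p2)) / 2.0
    return (h_t - h_s) / h_t

# Invented allele frequencies, for illustration only: a locus whose
# frequencies have barely diverged versus one that has drifted far apart.
print(round(fst_biallelic(0.50, 0.55), 3))  # 0.003
print(round(fst_biallelic(0.10, 0.80), 3))  # 0.495
```

Two loci from the same pair of populations can thus differ by two orders of magnitude in apparent differentiation, which is exactly why comparing an mtDNA-based distance for one species pair against an SNP-based distance for another tells us nothing about species status.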
The cold winter theory (CWT) is a theory that purports to explain why those whose ancestors evolved in colder climes are more “intelligent” than those whose ancestors evolved in warmer climes. Popularized by Rushton (1997), Lynn (2006), and Kanazawa (2012), the theory—supposedly—accounts for the “haves” and the “have-nots” in regard to intelligence. However, the theory is a just-so story, that is, it explains what it purports to explain without generating previously unknown facts not used in the construction of the theory. PumpkinPerson is irritated by people who do not believe the just-so story of the CWT, writing (citing the same old “challenges” as Lynn, which were dispatched by McGreal):
The cold winter theory is extremely important to HBD. In fact I don’t even understand how one can believe in racially genetic differences in IQ without also believing that cold winters select for higher intelligence because of the survival challenges of keeping warm, building shelter, and hunting large game.
The CWT is “extremely important to HBD“, as PP claims, since there needs to be an evolutionary basis for population differences in “intelligence” (IQ). Without the just-so story, the claim that racial differences in “intelligence” are “genetically” based crumbles.
Well, here is the biggest “challenge” (all other refutations of it aside) to the CWT. Notions of which population are or are not “intelligent” change with the times. The best example is what the Greeks—specifically Aristotle—wrote about the intelligence of those who lived in the north. Maurizio Meloni, in his 2019 book Impressionable Biologies: From the Archaeology of Plasticity to the Sociology of Epigenetics captures this point (pg 41-42; emphasis his):
Aristotle’s Politics is a compendium of all these ideas [Orientals being seen as “softer, more delicate and unwarlike” along with the structure of militaries], with people living in temperate (mediocriter) places presented as the most capable of producing the best political systems:
“The nations inhabiting the cold places and those of Europe are full of spirit but somewhat deficient in intelligence and skill, so that they continue comparatively free, but lacking in political organization and the capacity to rule their neighbors. The peoples of Asia on the other hand are intelligent and skillful in temperament, but lack spirit, so that they are in continuous subjection and slavery. But the Greek race participates in both characters, just as it occupies the middle position geographically, for it is both spirited and intelligent; hence it continues to be free and to have very good political institutions, and to be capable of ruling all mankind if it attains constitutional unity.” (Pol. 1327b23-33, my italics)
Views of direct environmental influence and the porosity of bodies to these effects also entered the military machines of ancient empires, like that of the Romans. Offices such as Vegetius (De re militari, I/2) suggested avoiding recruiting troops from cold climates as they had too much blood and, hence, inadequate intelligence. Instead, he argued, troops from temperate climates be recruited, as they possess the right amount of blood, ensuring their fitness for camp discipline (Irby, 2016). Delicate and effeminizing land was also to be abandoned as soon as possible, according to Manilius and Caesar (ibid). Probably the most famous geopolitical dictum of antiquity reflects exactly this plastic power of places: “soft lands breed soft men”, according to the claim that Herodotus attributed to Cyrus.
Isn’t that weird, how things change? Quite obviously, which population is or is not “intelligent” is based on the time and place of the observation. Those in northern Europe, who are purported to be more intelligent than those who live in temperate, hotter climes—back in antiquity—were seen to be less intelligent in comparison to those who lived in more temperate, hotter climes. Imagine stating what Aristotle said thousands of years ago in the present day—those who push the CWT just-so story would look at you like you’re crazy because, supposedly, those who live in and evolved in colder climes had to plan ahead and faced a tougher environment in comparison to those who lived closer to the equator.
Imagine we could transport Aristotle to the present day. What would he say about our perspectives on which population is or is not intelligent? Surely he would think it ridiculous that the Greeks today are less “intelligent” than those from northern Europe. But that only speaks to how things change and how people’s perspectives on things change with the times and who is or is not a dominant group. Now imagine that we can transport someone (preferably an “IQ” researcher) to antiquity when the Greeks were at the height of their power. They would then create a just-so story to justify their observations about the intelligence of populations based on their evolutionary history.
Anatoly Karlin cites Galton, who claims that ancient Greek IQ was 125, while Karlin himself claims IQ 90. I cite Karlin’s article not to contest his “IQ estimates”—nor Galton’s—I cite it to show the disparate “estimates” of the intelligence of the ancient Greeks. Because, according to the Greeks, they occupied the middle position geographically, and so they were both spirited and intelligent compared to Asians and northern Europeans.
This is similar to Wicherts, Borsboom, and Dolan (2010) who responded to Rushton, Lynn, and Templer. They state that the socio-cultural achievements of Mesopotamia and Egypt stand in “stark contrast to the current low level of national IQ of peoples of Iraq and Egypt and that these ancient achievements appear to contradict evolutionary accounts of differences in national IQ.” One can make a similar observation about the Maya. Their cultural achievements stand in stark contrast to their “evolutionary history” in warm climes. The Maya were geographically isolated from other populations and they still created a writing system (independently) along with other cultural achievements that show that “national IQs” are irrelevant to what the population achieved. I’m sure an IQ-ist can create a just-so story to explain this away, but that’s not the point.
Going back to what Karlin and Galton stated about Greek IQ: their IQ is irrelevant to their achievements. Whether their IQ was 120-125 or 90 is irrelevant to what they achieved. The Mesopotamians and Egyptians, for their part, would have seen themselves as more intelligent than those from northern climes. They would, obviously, think that based on their own achievements and the lack of achievements in the north. The achievements of peoples in antiquity would paint a whole different picture in regard to an evolutionary theory of human intelligence—and its distribution in human populations.
So which just-so story (ad hoc hypothesis) should we accept? Or should we just accept that which population is or is not “intelligent” and capable of constructing militaries is contingent based on the time and the place of the observation? Looking at “national IQs” of peoples in antiquity would show a huge difference in comparison to what we observe today about the “national IQs” (supposedly ‘intelligence’) of populations around the world. In antiquity, those who lived in temperate and even hotter climes had greater achievements than others. Greeks and Romans argued that peoples from northern climes should not be enlisted in the military due to where they were from.
These observations from the Greeks and Romans about whom to enlist in the military, along with their thoughts on northern Europeans, prove that perspectives on which population is or is not “intelligent” are contingent on the time and place. This is why “national IQs” should not be accepted, even before accounting for the problems with the data (Richardson, 2004; see also Morse, 2008; and The Ethics of Development: An Introduction by Ingram and Derdak, 2018). Seeing the development of countries/populations in antiquity would lead to a whole different evolutionary theory of the intelligence of populations, proving the contingency of the observations.
Different groups of people eat different things. Different groups of people also differ genetically. What one eats is part of their environment. So there is a G and E (genes and environment) interaction between races/ethnies in regard to the shape of their teeth. Yes, one can have a differently shaped dentition, on average, compared to one’s co-ethnics by eating different things, since diet is one of the things that shapes the development of teeth.
It is very difficult to ascertain the race of an individual through their dentition, but there are certain dental characters which can lead to the identification of race. Rawlani et al (2017) show that there are differences in the dentition of Caucasians, Negroids, Mongoloids and Australoids.
One distinct difference is that Mongoloid teeth have a “shovel” or “scoop” appearance. They also have larger incisors than Caucasoids, while having shorter anatomic roots with better-developed trunks. Caucasoids had a “v” shape to their dental arch, while their anterior teeth were “chisel shaped”; 37 percent of Caucasoids had a Carabelli cusp. Rawlani et al (2017) also note that one study found that 94 percent of Anglo-Saxons had four cusps compared to five for other races. Australoids had a larger arch size (but relatively smaller anterior teeth), which accommodates larger teeth. They have the biggest molars of any race; the mesiodistal diameter of the first molar is 10 percent larger than in white Americans and Norwegian Lapps. Negroids had smaller teeth with more spacing; they are also less likely to have the Carabelli cusp and shovel incisors. They are more likely to have class III malocclusion (imperfect positioning of the teeth when the jaw is closed) and open bite. Blacks are more likely to have bimaxillary protrusion, though Asians also get orthodontic surgery for it (Yong-Ming et al, 2009).
Rawlani et al’s (2017) review shows that there are morphologic differences in teeth between racial groups that can be used for identification.
When it comes to the emergence of teeth, American Indians (specifically Northern Plains Indians) had an earlier emergence of teeth compared to whites and blacks. American Indian children had a higher rate of dental caries, and so, since their teeth appear at an earlier age compared to whites and blacks, they had more of a chance for their teeth to be exposed to diets high in sugar and processed foods along with lack of oral hygiene (Warren et al, 2016).
Older blacks had more decayed teeth than whites in one study (Hybels et al, 2016). Furthermore, older blacks were more likely than older whites to self-report worse oral hygiene; blacks had a lower number of teeth than whites in this study—which was replicated in other studies—though differences in number of teeth may come down to differences in access to dental care along with dental visits (Huang and Park, 2016). One study even showed that there was unconscious racial bias in regard to root canal treatments: whites were more likely to get root canals (i.e., they showed a bias in decision-making favoring whites), whereas blacks were more likely to get the tooth pulled (Patel et al, 2018).
Kressin et al (2003) also show that blacks are less likely to receive root canals than whites, while Asians were more likely, which lends further credence to the claim of unconscious racial bias. So just like unconscious bias affects patients in regard to other kinds of medical treatment, the same is true for dentists: they have a racial bias which then affects the care they give their patients. Gilbert, Shewchuk, and Litaker (2006) also show that blacks are more likely to have tooth extractions when compared to other races, but people who went to a practice that had a higher percentage of black Americans were more likely to have a tooth extraction, regardless of the individual’s race. This says to me that, given the unconscious bias in choosing extraction over root canal, the more black patients a practice sees, the more likely its dentists are to extract a tooth rather than treat it, regardless of the individual patient’s race.
Otuyemi and Noar (1996) showed that Nigerian children had larger mesio-distal crown diameters compared to British children. American blacks are more likely to have hyperdontia (extra teeth in the mouth) compared to whites, and are also more likely to have fourth molars and extra premolars (Harris and Clark, 2008). Blacks have slightly larger teeth than whites (Parciak, 2015).
Dung et al (2019) also note ethnic differences in teeth looking at four ethnic groups in Vietnam:
Our study of 4565 Vietnamese children of four ethnic groups (Kinh, Tay, Thai and Muong) showed that most dental arch indicators in males were statistically significantly higher than those in females.
In comparison to other ethnic groups, 12-year-old Vietnamese children had similar dimensions of the upper and lower intercanine and intermolar width to children in the same age group in South China. However, the average upper posterior length 1 and lower posterior length 1 were shorter than those in Africans (Kenyan) and Caucasian (American blacks aged 12). The 12-year-old Vietnamese have a narrower and shorter dental arch than Caucasian children, especially the maxillary, and they need earlier orthodontic intervention.
The size of the mandible reflects the type of energy ingested: decreases “in masticatory stress among agriculturalists causes the mandible to grow and develop differently” (Cramon-Taubadel, 2011). This effect would not only be seen in an evolutionary context. Cramon-Taubadel (2011) writes:
The results demonstrate that global patterns of human mandibular shape reflect differences in subsistence economy rather than neutral population history. This suggests that as human populations transitioned from a hunter-gatherer lifestyle to an agricultural one, mandibular shape changed accordingly, effectively erasing the signal of genetic relationships among populations.
So it seems like the change from a hunter-gatherer lifestyle to one based on plant/animal domestication had a significant effect on the mandible—and therefore teeth—of a population.
Teeth are anchored in bone, and bone adapts. When an individual is young, the way their teeth, and subsequently jaw, develop can be altered by diet. Eating hard or soft foods during adolescence can radically change the position and shape of the teeth (Lieberman, 2013). The harder the food one has to chew, the more it alters facial morphology (i.e., the jaw and cheekbones) and, in turn, the teeth, because mechanical stress remodels the bone that supports them. This, of course, speaks to the interaction of G and E (genes and environment). There are genes that contribute to differences in dental morphology between populations, and they impact the difference between ethnic/racial groups.
Further shaping the differences between these groups is what they eat: the hardness or softness of the food eaten in childhood and adolescence can and will dictate the strength of one’s jaw and the shape and strength of their teeth in adulthood, though racial/ethnic identification would still be possible.
Racial differences in dentition come down to evolution (development) and what and how much of the population in question eats. The differences in dentition between these populations are, in a way, dictated by what they eat in the beginning years of life. This critical period may dictate whether or not one has a strong or weak jaw. These differences come down to, like everything else, an interaction between G and E (genes and environment), such as the food one eats as an adolescent/baby which would then affect the formation of teeth in that individual. Of course, in countries that have a super-majority of one ethnic group over another, we can see what diet does to an individual in an ethnic group’s teeth.
There are quite striking differences in dentition between races/ethnic groups, and this can and will (along with other variables) lead to correctly identifying the race of an individual in question.
I have written a few response articles to some of what Thompson has written over the past few years. He is most ridiculous when he begins to talk about nutrition (see my response to one of his articles on diet: Is Diet An IQ Test?). Now, in a review of Angela Saini’s (2019) new book Superior: The Return of Race Science, titled Superior Ideology, Thompson, yet again, makes more ridiculous assertions—this time about bone density as an adaptation. (I don’t care about what he says about race; though I should note that the debate will be settled with philosophy, not biology. Nor do I care about whatever else he says, I’m only concerned with his awful take on anatomy and physiology.)
The intellectually curious would ask: are there other adaptations which are not superficial? How about bone density?
Just-so story incoming.
I’m very familiar with these two papers. Let’s look at them both in turn.
The first study is Racial Differences in Bone Density between Young Adult Black and White Subjects Persist after Adjustment for Anthropometric, Lifestyle, and Biochemical Differences (Ettinger et al, 1997). Now, I did reference this article in my own piece on racial differences in drowning, though only to drive home the point that there are racial differences in bone density. Thompson is outright using this article as “evidence” that it is an adaptation.
In any case, Ettinger et al (1997) state that greater bone density in blacks may be due to differences in calciotropic hormones—hormones that play a major role in bone growth and bone remodeling. When compared with whites, “black persons have lower urinary calcium excretion, higher 1,25-dihydroxyvitamin D (1, 25D) level, and lower 25-hydroxyvitamin D (25D) and osteocalcin level (9)” (Ettinger et al, 1997). They also state that bone density can be affected by calcium intake and physical activity, and that testosterone (an androgen) may account for racial and gender differences in bone density, writing “Two studies have demonstrated statistically significantly higher serum testosterone level in young adult black men (22) and women (23).”
Oh, wow. What are refs (22) and (23)? Ref (22) is one of my favorites—Ross et al (1986) (read my response). To be brief, the main problems with Ross et al are that the assay times were all over the place, along with it being a small convenience sample of 50 blacks and 50 whites. LabTests Online writes that it is preferred to assay in the morning while in a fasted state. In Ross et al, the assay times were between 10 am and 3 pm, which was a “convenient time” for the students. Given the inconsistent assay times and the small sample, this study should not be taken seriously regarding racial differences in testosterone, and, thus, racial differences in bone density.
Now what about ref (23)? This is another favorite of mine—Henderson et al (1988; of which Ross was a part). Mazur (2016) shows that black women do not have higher levels of testosterone than white women. Furthermore, this is just like Ellis' (2017) claim that there is a difference in prenatal androgen exposure, but that claim, too, fails. In any case, testosterone can't explain differences in bone density between races.
Ettinger et al (1997) showed that blacks had higher levels of bone density than whites in all of the sites they looked at. (Though they also used skin-fold testing, which is notoriously bad at measuring body composition in blacks; see Vickery, Cureton, and Collins, 1988.) However, Ettinger et al (1997) did not claim, nor did they imply, that bone density is an adaptation.
Now, getting to the second citation, Hochberg (2007). Hochberg (2007) is a review of differences in bone mineral density (BMD) between blacks and whites. Unfortunately, there is no evidence in this paper, either, that BMD is an adaptation. Hochberg (2007) gives numerous reasons why blacks would have stronger skeletons than whites, and none of them is that it is an “adaptation”:
Higher bone strength in blacks could be due to several factors including development of a stronger skeleton during childhood and adolescence, slower loss of bone during adulthood due to reduced rates of bone turnover and greater ability to replace lost bone due to better bone formation. Bell and colleagues reported that black children had higher bone mass than white children and that this difference persisted into young adulthood, at least in men (23,24). Development of a stronger skeleton during childhood and adolescence is dependent on the interaction of genetic and environmental factors, including nutrition and lifestyle factors (25).
Genetic, nutritional, lifestyle and hormonal factors may contribute to differences in rates of bone turnover during adulthood
There are numerous papers in the literature that show that blacks have higher BMD than whites and that there are racial differences in this variable. However, the papers that Thompson has cited are not evidence that the difference is an adaptation. That trait T exists and there is a difference in trait T between G1 and G2 does not license the claim that the difference in trait T between G1 and G2 is “genetic.”
Thompson then writes:
Equally, how about differences in glomerular function, a measure of kidney health, for which the scores are adjusted for those of Black African descent, to account for their higher muscle mass? Muscle mass and bone density are not superficial characteristics. In conflicts it would be a considerable advantage to have strong warriors, favouring “hard survival”.
Here’s the just-so story.
Race adjustment for estimating glomerular filtration rate (GFR) is not always needed (Zanocco et al, 2012). Renal function is measured by GFR. Renal function is an indication of the kidney’s functioning. Racial differences in kidney function exist, even in cases where the patients do not have CKD (chronic kidney disease) (Peralta et al, 2011). Black Americans also constitute 35 percent of all patients in America receiving kidney dialysis, despite being only 13 percent of the US population. Blacks do generate higher levels of creatinine compared to whites, and this is due to higher average muscle mass when compared with whites.
There are differences in BMD and muscle mass between blacks and whites which are established by young adulthood (Popp et al, 2017), but the claim that trait T is an adaptation because trait T exists and there is a difference between G1 and G2 is unfounded. It's simply a just-so story, using the old EP reverse engineering. The two papers referenced by Thompson are not evidence that BMD is an adaptation; they only show that there are racial differences in the trait. That there are racial differences in the two traits does not license the claim that the traits in question are adaptations, as Thompson seems to be claiming. The papers he refers to only note a difference between the two groups; they do not discuss the ultimate etiology of the difference between the groups, which Thompson does with his just-so story.
In the past, I have written on the subject of HBD and sports (it is a main subject of this blog). I have covered baseball, football, running, bodybuilding, and strength over many articles. Though, I have not covered basketball yet. Black Americans comprised 74.4 percent of the NBA, compared to 19.1 percent for whites (TIDES, 2017). Why do blacks dominate the racial composition of basketball? Height is strongly related to success in basketball, though whites and blacks are around the same height, with blacks being slightly shorter (69.4 inches compared to 69.8 inches for whites; CDC, 2012). So, why do blacks dominate basketball?
Basketball success isn't predicated so much on height; rather, limb length plays more of a factor in basketball success. Blacks have longer limbs than whites (Wagner and Heyward, 2000; Bejan, Jones, and Charles, 2010). The average adult man has an arm span about 2.1 inches greater than his height (Nwosu and Lee, 2008), while Monson, Brasil, and Hlusko (2018) state that basketball players with a greater wingspan-to-height ratio were more successful. The Bleacher Report reports that:
The average NBA Player’s wingspan differential came out at 4.3 percent, so anything above that is going to be reasonably advantageous.
So, more successful basketball players have a longer arm span compared to their height, which makes them more successful in the sport. Blacks have longer limbs than whites, even though they are on average the same height. Thus, one reason why blacks are more successful than whites at basketball is due to their somatotype—their long limbs, specifically.
David Epstein (2014: 129) writes in The Sports Gene:
Based on data from the NBA and NBA predraft combines (using only true, shoes-off measurements of players), the Census Bureau, and the Centers for Disease Control’s National Center for Health Statistics, there is such a premium on extra height for the NBA that the probability of an American man between the ages of twenty and forty being a current NBA player rises nearly a full order of magnitude with every two-inch increase in height starting at six feet. For a man between six feet and 6’2”, the chance of his currently being in the NBA is five in a million. At 6’2” to 6’4”, that increases to twenty in a million. For a man between 6’10” and seven feet tall, it rises to thirty-two thousand in a million, or 3.2 percent. An American man who is seven feet tall is such a rarity that the CDC does not even list a height percentile at that stature. But the NBA measurements combined with the curve formed by the CDC’s data suggest that of American men ages twenty to forty who stand seven feet tall, a startling 17 percent of them are in the NBA right now.* Find six honest seven-footers, and one will be in the NBA.
* Many of the men who NBA rosters claim are seven feet tall prove to be an inch or even two inches shorter when measured at the combine with their shoes off. Shaquille O’Neal, however, is a true 7’1” with his shoes off.
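The growth rate in the passage above can be checked with quick arithmetic. Epstein's two endpoints (5 per million for the 6'0"–6'2" bracket and 32,000 per million for the 6'10"–7'0" bracket, five two-inch steps apart) imply a multiplier of roughly six per bracket—close to, though short of, a literal order of magnitude:

```python
# Implied multiplier per two-inch height bracket from Epstein's figures:
# 5 per million at 6'0"-6'2" rising to 32,000 per million at 6'10"-7'0",
# five two-inch steps later.
low, high, steps = 5, 32_000, 5
multiplier = (high / low) ** (1 / steps)
print(round(multiplier, 1))  # ≈ 5.8x per two inches of height
```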
And on page 132 he writes:
The average arm-span-to-height ratio of an NBA player is 1.063. (For medical context, a ratio greater than 1.05 is one of the traditional diagnostic criteria for Marfan syndrome, the disorder of the body’s connective tissues that results in elongated limbs.) An average-height NBA player, one who is about 6’7”, has a wingspan of seven feet.
So we can clearly see that NBA players, on average, are freaks of nature when it comes to limb length, having freakish arm proportions which are conducive to success in basketball.
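Those two figures from Epstein are consistent with each other, as a quick sanity check shows (the 6'7" height and the 1.063 ratio are taken from the passage quoted above):

```python
# A 6'7" (79-inch) player with the average NBA arm-span-to-height
# ratio of 1.063 should have a wingspan of about seven feet (84 inches).
height_in = 6 * 12 + 7            # 79 inches
wingspan_in = height_in * 1.063   # ≈ 84 inches
print(round(wingspan_in, 1))      # 84.0 — i.e., seven feet
```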
Why are long limbs so conducive to basketball success? I can think of a few reasons.
(1) The taller one is and the longer one's limbs are, the less likely one is to have a shot blocked.
(2) The taller one is and the longer one's limbs are, the easier it is to perform a lay-up.
(3) The taller one is and the longer one's limbs are, the better one can battle for rebounds against a shorter man with shorter limbs.
Epstein (2014: 136) also states that the predraft data shows that the average white NBA player is 6’7.5” with a wingspan of 6’10” while the average black NBA player is 6’5.5” with an average wingspan of 6’11”—meaning that blacks were shorter but “longer.” What this means is that blacks don’t play at “their height”—they play as if they were taller due to their wingspan.
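Plugging those predraft averages into the wingspan-differential measure mentioned earlier gives a quick arithmetic sketch of just how much "longer" the average black player is:

```python
# Wingspan differential = (wingspan - height) / height, as a percentage,
# using Epstein's predraft averages for white and black NBA players.
def wingspan_differential(height_in, wingspan_in):
    return (wingspan_in - height_in) / height_in * 100

white = wingspan_differential(6 * 12 + 7.5, 6 * 12 + 10)  # 79.5" tall, 82" wingspan
black = wingspan_differential(6 * 12 + 5.5, 6 * 12 + 11)  # 77.5" tall, 83" wingspan
print(round(white, 1), round(black, 1))  # ≈ 3.1 vs ≈ 7.1 percent
```

Note that the black average (≈7.1 percent) sits well above the Bleacher Report's 4.3 percent league average, while the white average (≈3.1 percent) sits below it.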
Such limb length differences are a function of climate. Shorter, stockier bodies (i.e., an endomorphic somatotype) are conducive to life in colder climes, whereas longer, more narrow bodies (ecto-meso) are conducive to life in the tropics. Endomorphic somas are conducive to life in colder climes because there is less surface area to keep warm—and this is seen by looking at those whose ancestors evolved in cold climes (Asians, Inuits)—shorter, more compact bodies retain more heat. Conversely, ecto-meso somas are conducive to life in hotter, more tropical climes since this type of body dissipates heat more efficiently than endo somas (Lieberman, 2015). So, blacks are more likely to have the soma conducive to basketball success due to where their ancestors evolved.
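The surface-area argument can be illustrated with a toy geometric model (a sketch with made-up dimensions, not anthropometric data): compare two cylinders of roughly equal volume, one short and wide, one tall and narrow.

```python
import math

# Surface-area-to-volume ratio of a cylinder (a crude stand-in for a torso).
def sa_to_vol(radius_m, height_m):
    surface = 2 * math.pi * radius_m * (radius_m + height_m)
    volume = math.pi * radius_m ** 2 * height_m
    return surface / volume

stocky = sa_to_vol(0.18, 0.80)  # shorter, wider "endomorphic" build
linear = sa_to_vol(0.14, 1.32)  # taller, narrower build of ~equal volume
print(stocky < linear)  # True: the linear build exposes more surface
                        # per unit volume, so it sheds heat faster
```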
So, now we have discussed the facts that height and limb length are conducive to success in basketball. Although blacks and whites in America are the same height, they have vastly different average limb lengths, as numerous studies attest to. These average differences in limb length are how and why blacks succeed far better than whites in the NBA.
Athleticism is irreducible to biology (Lewis, 2004), as has been argued in the past. However, that does not mean that there are NOT traits that are conducive to success in basketball and other sports. Both height and limb length are related: more than likely, the taller one is, the longer their limbs are relative to their height. This is what we see in elite NBA players. Height, will, altitude, and myriad other factors combine to create the elite NBA phenotype; height seems to be a necessary—not sufficient—condition for basketball success (since one can be successful at basketball without the freakish heights of the average player). Though, as Epstein wrote in his book, both height and limb length are conducive to success in basketball, and it just so happens that blacks have longer limbs than whites which of course translates over to their domination in basketball.
Contrary to popular belief, though, players coming from broken homes and an impoverished life are not the norm. As Dubrow and Adams (2010) write:
We find that, after accounting for methodological problems common in newspaper data, most NBA players come from relatively advantaged social origins and African Americans from disadvantaged social origins have lower odds of being in the NBA than African American and white players from relatively advantaged origins.
Sports writer Peter Keating writes that:
[Dubrow and Adams] found that among African-Americans, a child from a low-income family has 37 percent lower odds of making the NBA than a child from a middle- or upper-income family. Poor white athletes are 75 percent less likely to become NBA players than middle-class or well-off whites. Further, a black athlete from a family without two parents is 18 percent less likely to play in the NBA than a black athlete raised by two parents, while a white athlete from a non-two-parent family has 33 percent lower odds of making the pros. As Dubrow and Adams put it, “The intersection of race, class and family structure background presents unequal pathways into the league.”
(McSweeney, 2008 also has a nice review of the matter.)
Turner et al (2015) state that black males were more likely to play basketball than white males. Higher-income boys were more likely to play baseball, whereas lower-income boys were more likely to play basketball. Though, it seems that when it comes to elite basketball success, players tend to come from higher-income homes.
Therefore, to succeed in basketball, one needs height and long limbs. Contrary to popular belief, it is less likely for an NBA player to come from a low-income family—most come from middle-class families. Indeed, those who come from lower-income families, even if they have the skill, most likely won't have the money to develop the talent they have. Though there are some analyses which point to basketball being played by lower-income children—and I have no reason to disagree with them—when it comes to professional play, both blacks and whites are less likely to become NBA players if they grew up in poverty.
The limb length differences between blacks and whites which are conducive to sport success are a function of the climate that their ancestors evolved in. Now, although athleticism is irreducible to biology (because biological and cultural factors interact to create the elite athletic phenotype), that does not mean that there are no traits conducive to sporting success. Quite the opposite: A taller player would more often than not beat a shorter player; when it comes to players with the same height and different limb lengths, the one with the longer limbs will stand a better chance at beating the one with shorter limbs. Blacks and whites have different limb lengths, and this explains how and why blacks are more successful at basketball than whites. Cultural and biological factors combine in order to cause what one is good at.
Basketball is huge in the black community (due in part to people gravitating toward what they are good at), and since blacks have an advantage right out of the gate, they will gravitate more toward the sport; therefore, height and limb length are a huge reason why blacks dominate at this sport.
Hoffman et al (2016) gave laypeople and medical students and residents a 15-question questionnaire regarding different beliefs people have about racial differences. The point of the questionnaire was to ascertain how people are biased in regard to racial differences in pain and how that bias affects the treatment individuals of a given racial group receive. Only two of the questions had anything to do with pain. In this article, I will answer the questions one by one.
1. On average, Blacks age more slowly than Whites.
This one is true (though they rate this question as false). I don’t know why, though, because there are differences between black and white skin and these differences affect the rate of aging between races.
Campiche et al (2019) found that there is a difference in skin aging between different ethnies (the cohorts were French and Mauritanian). The average age was 46 for the French and 56 for the Mauritanians, and the Mauritanians still looked younger! Campiche et al (2019) write:
The difference in age between our Caucasian and Black African cohorts (median age 46 years vs 56 years) could bring into question the comparisons of the two cohorts. Nevertheless, we mostly found that Caucasians displayed more severe signs of aging than Black Africans which is in line with the common understanding that the onset of aging in fair skin starts earlier than in darkly pigmented skin and that there were differences in the appearance of lip lines and facial pores.
This question is true, contrary to the claims of Hoffman et al (2016).
2. Black people’s nerve-endings are less sensitive than White people’s nerve-endings.
I can find no literature on this matter—the only search results point me back to Hoffman et al (2016) and articles discussing it. I accept the claim as false.
3. Black people’s blood coagulates more quickly–because of that, Blacks have a lower rate of hemophilia than Whites.
Blacks' blood does clot faster than whites', and part of the cause is differences in the PAR4 gene family (Bray et al, 2013). The reason that blacks' blood clots faster than whites' is due to the effects of thrombin, an enzyme that activates the molecule responsible for blood clotting. Blacks do have a lower rate of hemophilia than whites, though not by much (13.2 cases per 100,000 for whites compared to 11 for blacks) (Soucie, Evatt, and Jackson, 1998). The question is true, contra Hoffman et al (2016).
4. Whites, on average, have larger brains than Blacks.
They stated that this question is false, which is bizarre. I am aware of no literature that attests to the claim that whites do not have larger brains than blacks. Many analyses back the claim that whites have larger brains than blacks (though Nisbett disagrees and states that there are studies that show the contrary but does not leave a citation) (Rushton, 1997). (Though see Race and Brain Size: Blacks Have Bigger Brains for an alternate view.)
5. Whites are less susceptible to heart disease like hypertension than Blacks.
They say this claim is true. And it is. Hypertension (high blood pressure) is a physiological variable, which means that the social environment can greatly affect it (Williams, 1992). Higher rates of obesity drive this association as well. American blacks have a lower rate of CHD than whites (7.2 compared to 7.8), but this is reversed for women (7.0 compared to 4.6) (Leigh, Alvarez, and Rodriguez, 2016). The CDC, though, says that the rate of heart disease is roughly the same between blacks and whites, at 23.8 percent (slightly higher than the 23.5 percent average).
6. Blacks are less likely to contract spinal cord diseases like multiple sclerosis.
7. Whites have a better sense of hearing compared with Blacks.
They state that this claim is false. Pratt et al (2009) state that hearing loss is more likely to occur in white than in black elderly patients.
8. Black people’s skin has more collagen (i.e., it’s thicker) than White people’s skin.
They state that this claim is false, and it is. That there is no difference in skin thickness between blacks and whites is irrelevant, though. Black skin is more compact, with greater intercellular cohesion (LaRuche and Cesarini, 1992; Rawlings, 2006).
9. Blacks, on average, have denser, stronger bones than Whites.
10. Blacks have a more sensitive sense of smell than Whites; they can differentiate odors and detect faint smells better than Whites.
This claim is false, according to Hoffman et al. I can find nothing in the literature on the matter, so I will accept their claim.
11. Whites have more efficient respiratory systems than Blacks.
They state that this claim is false. However, Schwartz et al (1988) state that “Controlling for sex, age, standing height, and body mass index, blacks had consistently lower levels of lung function for most measures.” This claim seems to be true.
12. Black couples are significantly more fertile than White couples.
They state this claim is false. Wellons et al (2008) state that “black women were more likely to have experienced infertility.” So the evidence runs in the opposite direction of the claim Hoffman et al pose.
13. Whites are less likely to have a stroke than Blacks.
They state that this claim is true, and it is. Minorities are more likely to have a stroke than whites. Brevata et al (2005) write that blacks are more likely to have severe strokes than whites. The claim is true.
14. Blacks are better at detecting movement than Whites.
This seems like a bizarre claim. They state that it is false and I will accept it as false since I can find no literature on the matter.
15. Blacks have stronger immune systems than Whites and are less likely to contract colds.
Europeans and Africans have different immune systems. The immune system of black Americans is stronger than that of whites. Twenty-four hours after being infected with salmonella and listeria bacteria, the white blood cells from black Americans responded quicker than those from white Americans, clearing the infection about three times faster. Hoffman et al stated that this claim is false, but it appears to be true.
So, by my count, out of the 15 questions asked, 8 of them have a factual basis (with some in the opposite direction), compared to Hoffman et al’s (2016) assertion that only 4 of them are true. In any case, there are a lot of myths about racial differences out there, and some of these questions by Hoffman et al are myths. Though some of them do have a factual basis. I wonder what kind of literature they referred to when asking these questions, because the literature that I am aware of when it comes to some of these matters is different compared to what Hoffman et al (2016) claim. Racial/ethnic differences do, obviously, exist but there are many myths involved with them.
There are many superficial physical differences between the races. But differences in pain sensitivity would be one that is not really “superficial”, as you can’t really see it (you can see someone’s reaction to pain, but not see it). “Pain” is defined as physical discomfort caused by injury. There are some myths about pain differences between racial groups, that still persist today. And these myths have bad consequences.
For example, Hoffman et al (2016) state that “people assume a priori that blacks feel less pain than do whites.” Hoffman et al (2016) carried out two studies: (1) using a between-participants design, laymen were asked to assess the pain of white and black subjects and (2) again using a between-participants design, they asked students and medical doctors to assess pain between blacks and whites. In (2) they asked these 15 questions:
1. On average, Blacks age more slowly than Whites.
2. Black people’s nerve-endings are less sensitive than White people’s nerve-endings.
3. Black people’s blood coagulates more quickly–because of that, Blacks have a lower rate of hemophilia than Whites.
4. Whites, on average, have larger brains than Blacks.
5. Whites are less susceptible to heart disease like hypertension than Blacks.
6. Blacks are less likely to contract spinal cord diseases like multiple sclerosis.
7. Whites have a better sense of hearing compared with Blacks.
8. Black people’s skin has more collagen (i.e., it’s thicker) than White people’s skin.
9. Blacks, on average, have denser, stronger bones than Whites.
10. Blacks have a more sensitive sense of smell than Whites; they can differentiate odors and detect faint smells better than Whites.
11. Whites have more efficient respiratory systems than Blacks.
12. Black couples are significantly more fertile than White couples.
13. Whites are less likely to have a stroke than Blacks.
14. Blacks are better at detecting movement than Whites.
15. Blacks have stronger immune systems than Whites and are less likely to contract colds.
(I’ll cover these questions in a future article.)
Here is the table showing the respondents’ answers to the questions:
So they established that whites with no medical training hold false beliefs about black-white differences that then carry over to pain management. They showed in study 2 that medical students’ and residents’ apparently false beliefs about racial differences in the questions they answered showed bias in the accuracy of the recommended pain treatments. Hoffman et al (2016) conclude that:
The present work sheds light on a heretofore unexplored source of racial bias in pain assessment and treatment recommendations within a relevant population (i.e., medical students and residents), in a context where racial disparities are well documented (i.e., pain management). It demonstrates that beliefs about biological differences between blacks and whites—beliefs dating back to slavery—are associated with the perception that black people feel less pain than do white people and with inadequate treatment recommendations for black patients’ pain.
(See also the Psychology Today article on the matter.)
Similarly, Hollingshead et al (2016) reported that subjects, regardless of race, rated the white person more sensitive to pain and more likely to report pain than the black person. Whites reported that they were less pain sensitive and less likely to report pain than their peers. Blacks reported that they were more sensitive to pain while reporting more pain than their peers.
Interestingly, Trawalter, Hoffman, and Waytz (2012) state that black NFL players are more likely to play in a subsequent game than whites when injured, and that, as found in many other studies, people assume blacks feel less pain than whites. However, what the literature really shows is the opposite: blacks are more likely to feel pain than whites.
Kim et al (2017) showed that blacks, “Hispanics,” and Asians had lower pain tolerance, higher pain ratings, and greater temporal summation of pain. They also showed that blacks, specifically, had lower pain tolerance and higher pain ratings but no differences in pain threshold.
Blacks report greater pain regarding AIDS, glaucoma, migraine, headache, jaw pain, postoperative pain, joint pain, and many other types of pain compared to whites (Green et al, 2003; Klonoff, 2009). Riley III et al’s (2002) results indicate that blacks show a stronger link between pain and emotions than whites. Obana and Davis (2016) showed that Native Hawaiian/Pacific Islander males and females reported higher pain scores than whites when it came to joint pain (though the differences were not significant). Bolen et al (2010) showed that work limitation, severe joint pain, and arthritis-attributable activity limitation were higher for non-“Hispanic” blacks, “Hispanics,” and multiracial people compared to non-“Hispanic” whites. Even American Indians, Alaskan Natives, and Aboriginal Canadians had a higher prevalence of pain and pain symptoms than the general American population (Jimenez et al, 2011).
Chan et al (2011) surveyed older Singaporeans. They found that Malay people had lower pain sensitivity compared to Chinese people, and that Indians reported greater pain sensitivity when compared with Malay and Chinese people. Australian women rated menstrual pain higher and lasting 36 percent longer than Chinese women (Zhu et al, 2010).
When it comes to potential mechanisms, physiological mechanisms are hypothesized by Campbell and Edwards (2012) who write:
For example, in comparison to non-Hispanic whites, African–Americans have reduced nociceptive flexion reflex thresholds ; the nociceptive flexion reflex is an electrophysiological, spinally mediated reflex, which is not amenable to voluntary control or subject to issues of response bias that plague self-report of pain experiences. This finding suggests that the observed ethnic differences in pain are unlikely to be fully explainable by sociocultural influences and hints that neurobiological processes may contribute to such differences.
Mossey (2011) shows that “Racial/ethnic minorities consistently receive less adequate treatment for acute and chronic pain than non-Hispanic whites, even after controlling for age, gender, and pain intensity.” Martinez et al (2014) showed that when it comes to colorectal and lung cancer, mixed-race individuals and blacks are more likely to report higher pain severity than whites. (Also see Shavers, Bakos, and Sheppard, 2010.)
All of the literature points in the opposite direction of the myths about pain sensitivity in regard to race: blacks feel more pain than whites and are more likely to have a lower pain tolerance. So the myths people hold about differences in pain between racial groups (mostly blacks and whites) are false. Pain is a subjective experience. And there will be differences in pain thresholds between individuals and racial groups and the causes may be both sociocultural and physiological in nature. However, this bias (in the wrong direction) speaks to what I wrote about last night: physician bias when it comes to blacks and other minorities.
Barr (2014: 183-184) writes:
Based to a certain extent on the attention given to his earlier publication, Todd moved to a faculty position with the Emory University School of Medicine, in Atlanta, Georgia. There he was able to essentially repeat his earlier study, this time examining persons coming to the emergency room of a large, inner-city community hospital in Atlanta that was affiliated with Emory (Todd et al, 2000). He evaluated the medical records of 217 individuals coming to the emergency room over a 40-month period for treatment of an isolated long-bone fracture. Given the racial makeup of Atlanta, these included 127 blacks and 90 whites. They found that
- 54 of the blacks (43 percent) received no medication for pain during their treatment
- 23 of the whites (26 percent) received no medication for pain during their treatment
As with the earlier study in Los Angeles involving whites and Hispanics, in this study, the blacks were nearly twice as likely to receive no pain medication while in the emergency room. With this study, the authors were keenly aware of the importance of documenting the extent to which the patients expressed painful symptoms. By thoroughly reviewing the medical records of these patients, they found that 54 percent of blacks and 59 percent of whites had a notation in their medical record that they had expressed painful symptoms. The nearly twofold difference in withholding pain medication in blacks and whites was because the doctor didn’t order the medication, not because the patient didn’t want the medication.
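The percentages Barr reports follow directly from the raw counts, as a quick check of the arithmetic in the passage above shows:

```python
# Todd et al (2000), as quoted by Barr: 54 of 127 black patients and
# 23 of 90 white patients received no pain medication.
black_pct = 54 / 127 * 100        # ≈ 42.5, reported as 43 percent
white_pct = 23 / 90 * 100         # ≈ 25.6, reported as 26 percent
relative = black_pct / white_pct  # ≈ 1.66 — the "nearly twofold" gap
print(round(black_pct), round(white_pct), round(relative, 2))
```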
This, again, speaks to physician bias when it comes to race in a medical context. Race is a useful tool in medicine, but to hold biases in the complete opposite direction that they exist in is wrong. This study—and many others—speak to the type of bias that physicians have against minorities in a medical context. Understanding that the differences in pain are actually the opposite from what is commonly believed by both laypeople and medical doctors is important: if blacks feel more pain than whites regarding the same injuries and they are not getting the care needed, then this speaks to physician bias. What Barr showed was that blacks were treated at the emergency room based on their ethnicity. This is wrong. Race/ethnicity is a useful tool in medicine, but to outright use it as an assumption for numerous factors makes no sense and could cause more harm than good.
Using race in a medical context is a good thing. But using race in a medical context through the lens of essentialist, outdated views about race is wrong and can lead to many horrible outcomes. Of course, using race in this context can and does lead to certain things being discovered over others. For instance, if one’s race is assumed to be “driving” one’s illness (i.e., that one has a disease which that race/ethny is more likely to have), then race can be, and is, a good marker to use—specifically geographic ancestry. However, when it comes to things like pain management, this obviously leads to false ideas about how different groups manage and feel pain.
Views about racial differences in pain affect both laypeople and medical doctors. These views can be, and are, harmful. The literature points to the case being the opposite of what is commonly believed: blacks have lower pain tolerance and higher pain ratings than whites. These types of differences are also found between many other racial and ethnic groups. The causes could be both sociocultural and physiological. A person’s response to pain depends on their unique physiology, life experiences, ethnicity and other factors. Understanding how and why physicians are biased about how blacks feel pain is important, along with addressing the other biases they have about other minorities in a medical context. Race and ethnicity are important tools for medicine, but these are some of the ways the concepts can be used with nothing good coming of it.
I’m currently reading Health Disparities in the United States: Social Class, Race, Ethnicity and Health by medical doctor and sociologist Donald Barr. In the book, he chronicles differences in health between races and ethnies, discusses the concepts of race used, cites studies that will be well-known to readers of this blog, and shows that doctors are—whether consciously or not—biased against minorities in certain medical contexts.
In Chapter 1, he discusses the fact that, although Americans spend the most money on health care, Americans have a lower life expectancy and a higher infant mortality rate than all other developed countries, showing the association between social inequality and health across all income and education levels. In Chapter 2, he asks the question “What is health?”, discussing many concepts of what “health” is. In Chapter 3, he defines “socioeconomic status” and shows the link between poor health and low SES. In Chapter 4, he discusses the link between inequality and poor health, introducing the concept of “allostatic load”, which is the physiologic response to being in a position of social disadvantage.
In Chapter 5, he looks at different race concepts, since that is a main premise of the book. In Chapter 6, he shows that minorities are more likely to be in a position of low SES. He asks: if minorities of the same SES as whites are consistently found to be in poorer health than whites of the same SES, is it because those with poor health tend to be minorities, because minorities tend to have lower SES, or both? In Chapter 7, he asks the same questions while focusing on children. In Chapter 8, he examines disparities in access to healthcare, showing that even when minorities have the same insurance and doctors, they still face worse health outcomes (he shows that they either do not receive appropriate healthcare or receive lower-quality care). In Chapter 9, he shows that physicians treat blacks and other minorities differently, albeit unconsciously. In Chapter 10, he discusses when—if ever—a physician would be justified in using racial/ethnic categories. And in Chapter 11, he states that not all of these disparities need to be eliminated.
In Chapter 2, Barr (2014: 45) presents this table, showing rates of illness and selected rates of death between States in America. The one that obviously stands out from the others is Mississippi. Mississippi is 37.5% black.
Wow, I wonder why Mississippi has such a high rate of obesity, diabetes, and hypertension (high blood pressure). Must be all of those obesity, diabetes and hypertension genes (HBDer).
Obesity and diabetes
The first thing to look at is median income. It is substantially lower in Mississippi than in California, Iowa, and New York. About 23 million people in America live more than a mile from a supermarket; black Americans are about half as likely to have access to supermarkets, while “Hispanics” are about a third as likely to have access to them (New York Law School Racial Justice Project, 2012). Those who have to travel more than a mile for fresh fruit and vegetables have poorer health (Stack, 2015). Combine lower median income with food deserts and one can start to see how minorities have poorer health due in part to their SES. In short, living in a food desert can affect public health.
Blacks are the most obese ethnic group in America, and this relationship is largely driven by black women. Now, it’s not strange that women have higher levels of body fat than men, since women need it for physiological functioning. There is, though, something strange here: black American men with more African ancestry are less likely to be obese (Klimentidis et al, 2016). Since black women and black men in America are in the same economic bracket, there must be something in West African male physiology that “protects” them against central adiposity, though variation in social, environmental and cultural factors may play a role as well. In any case, the more West African ancestry black American men have, the less likely they are to be obese. Klimentidis et al’s (2016) study “suggests that there are specific genetic variants and physiological mechanism(s) that differ among West African and European populations.”
Obesity affects some ethnies in America more than others: non-Hispanic blacks and “Hispanics” are more likely to be obese than non-Hispanic whites and Asians (Hales et al, 2017). This could be due, in part, to the variation in supermarket access and access to good foods—the concept of food deserts. Look at any low-income area near you. You’ll see a majority of corner stores with cheap, garbage food. The lack of ability to buy good food (along with the education to know what to buy and when to buy it) can explain differences in obesity rates—obviously not all of them. Obesity is related to diabetes, and since the relationship is so strong, the term “diabesity” was coined.
Eating cheap, processed carbohydrates spikes insulin. Repeated insulin spikes over time lead to type II diabetes and, eventually, obesity too. One can be skinny and have diabetes (a phenomenon known as thin on the outside, fat on the inside, or “TOFI”). However, since the two diseases are co-morbid, we need to look at them in similar contexts. The higher rates of obesity can help to explain the higher rates of diabetes and hypertension—since those who are obese have higher blood pressure (Aronow, 2017).
Minorities are more likely to develop type II diabetes (Tuchman, 2011), and part of the cause is a lack of access to high-quality foods. But racial differences in obesity and SES do not fully explain the higher rates of type II diabetes in black Americans; being a black American is a strong, independent risk factor for developing type II diabetes, and this is compounded by low SES (Brancati et al, 1996). Zizi et al (2016) showed that both long- and short-sleeping blacks have an increased risk of developing type II diabetes. There are racial differences in sleep, with blacks having higher rates of both long and short sleep durations compared to whites (Adenekan et al, 2013).
Now let’s look at hypertension (high blood pressure). Blood pressure is a physiological variable, and as such it can and does respond to social/environmental contexts. So blood pressure can be affected by social contexts, too. For example, Williams (1992) cites stress, socioecologic stress, social support, coping patterns, health behavior, sodium and more as reasons why blacks have higher BP than whites. Dressler (1991) shows that the struggle to maintain a middle-class lifestyle is related to higher levels of BP. Similarly, Keith and Herring (1991) show that skin color is a strong predictor of occupational status and that darker-skinned blacks in America are twice as likely to experience racial discrimination as lighter-skinned blacks. This, too, can help to account for the difference in BP between the races. In any case, Williams (1992) shows, definitively, that the causes of black-white differences in BP lie in the social environment.
Similarly, Non, Gravlee, and Mulligan (2012) show that racial disparities in BP are explained by education, and not genetic ancestry. They show that the association between BP and education was much stronger for blacks than for whites. Their results also support “the minority poverty hypothesis because the worst blood pressures were predicted for people who faced the double burden of being less educated and identifying as African American.” People who face discrimination could, and do, have higher levels of BP due to the stress they feel due to the discrimination. (Note that I take no sides on whether the discrimination is real or imagined, because even if it were imagined, it still leads to real physiologic consequences.)
Do note that there is a just-so story to explain how and why blacks have higher levels of blood pressure than whites: the Slavery Hypertension Hypothesis (Lujan and Dicarlo, 2018). This has all of the hallmarks of a just-so story posited by evolutionary psychologists. The story goes like this: black slaves brought to America in the Middle Passage had genes that favored better salt retention. Proponents note that black Americans have higher rates of high BP than whites, then work backward and attempt to posit the best story possible to explain the current-day observation. This is the usual method evolutionary psychologists use—the method of reverse engineering, the inference from function to cause. So (1) note that blacks have higher levels of BP than whites; (2) infer from function to cause (blacks with genes that favored salt retention were more likely to survive the Middle Passage); and (3) conclude that this is why blacks have higher rates of high BP than whites. The explanation fails, though, since education, and not genetic ancestry, explains the difference in BP between blacks and whites (Non, Gravlee, and Mulligan, 2012). One only needs to understand the intricacies of physiology and how our physiological systems respond to what occurs in the greater environment.
So, obesity can explain both the higher rates of diabetes and higher rates of blood pressure, with differences in the immediate social environment explaining the rest of the differences in blood pressure between blacks and whites. (Note that heart disease deaths are directly related to hypertension. Heart disease affects blacks more than whites.)
In Race, Medicine, and Epigenetics: How the Social Becomes Biological, I briefly discussed breast cancer in black women:
Black women are more likely to die from breast cancer, for example, and racism seems like it can explain a lot of it. They have less access to screening, treatment, care, they receive delays in diagnoses, along with lower-quality treatment than white women. But “implicit racial bias and institutional racism probably play an important role in the explanation of this difficult treatment” (Hardimon, 2017: 166). Furthermore, black women are more than twice as likely to acquire a type of breast cancer called “triple negative” breast cancer (Stark et al, 2010; Howlader et al, 2014; Kohler et al, 2015; DeSantis et al, 2019). Of course, this could be a relevant race-related genetic difference in disease.
Now note the infant mortality rates between the States: the infant mortality rate in Mississippi is 9.7 per 1,000 live births. Smith et al (2018) show that black women are at higher risk of having their infant die at birth. Pre-term births are related to low birth weights, and both are related to infant mortality. Matoba and Collins (2017) write:
In the United States, African-American infants have significantly worse infant mortality than white infants. Individual risk factors alone do not explain this persistent gap, just as they did not explain the disparity in preterm birth and low birth weight. Recent studies in social determinants provide insight into the contribution of community and environmental factors to the racial disparity. Select community-level factors are potential, but partial, determinants of the racial disparity. Interpersonal and institutionalized racism is an important, and increasingly recognized, stressor for African-American women with damaging consequences to maternal and child health.
The Guardian ran a recent story on infant mortality and race, positing racism as a cause of the disparity. In any case, the social environment can and does play a part in everything discussed here today, since the social can and does become biological. Part of the reason why Mississippi has a far higher number of years of potential life lost (10,214, compared to 5,500-5,900 for Iowa, New York, and California) is that its rate of infant mortality is higher. Say the median age of death is 75. If an infant dies at one year of age, that is 74 years of potential life lost. It is therefore not surprising that the State with the highest level of infant mortality has a higher number of years of potential life lost. Further, one 2017 review found that segregation was associated with increased risk of preterm birth and low birth weight for blacks (Mehra, Boyd, and Ickovics, 2017).
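The years-of-potential-life-lost arithmetic above can be sketched as follows. This is a minimal illustration assuming the reference age of 75 used in the text; published YPLL figures sum this quantity over every death in a population:

```python
REFERENCE_AGE = 75  # median age of death used in the text

def ypll(age_at_death, reference_age=REFERENCE_AGE):
    """Years of potential life lost for a single death; deaths at or
    past the reference age contribute nothing."""
    return max(0, reference_age - age_at_death)

print(ypll(1))   # infant death at age one -> 74
print(ypll(80))  # death past the reference age -> 0
```

This is why a high infant mortality rate inflates a State’s YPLL so sharply: each infant death contributes nearly the full reference age to the total.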
Note how Mississippi has a lower rate of asthma. This is explained by the fact that Mississippi is more rural than, say, New York. Rates of asthma are associated with living in a metropolitan area (Frazier et al, 2012; Malik, Kumar, and Frieri, 2012). (Note that blacks and some other minority groups have higher rates of asthma than whites.)
The lower one’s position on the social hierarchy, the lower one’s probability of staying healthy and having a high life expectancy; and even when people have the same type of health insurance and are treated for the same disease in the same hospital by the same doctor, minority groups get worse health care, either not receiving it or receiving a lower standard of quality in care. What could account for such disparities? I asked PumpkinPerson the question, and he said:
1) EGI: Doctors put more effort into saving coethnics: she looks like my italian grandma. I’ll make sure she gets the best medicine.
2) IQ: low IQ populations don’t understand the doctor’s advice and damage their health
3) r/K: some populations have faster life history so don’t live as long, even with good medical care
If (1), then the doctors need to be named, shamed, and have their medical licenses revoked. If (2), then they need better education (since IQ is just an index of middle-class knowledge). (3) is completely irrelevant, since it doesn’t make sense for humans and the concept is long-dead in ecology. In any case, PumpkinPerson danced around the true cause: differences in healthcare brought about by unconscious bias (of which (1) may be one instance). But positing (1) as the cause completely misses the point; it’s the usual HBD/Rushtonian reductionism, where genes cause most, if not all, things. That’s all the HBD worldview reduces to: genes/IQ.
In any case, Reschovsky and O’Malley (2008: 229, 230) write:
Our results indicate that the minority makeup of physicians’ patient panels is associated with greater reports from physicians of difficulties providing high-quality care. At least some of this relationship appears to be explained by the lower resources flowing to high-minority practices.
The results of this study suggest that racial and ethnic disparities in primary health care are in part systemic in nature, and the lower resources flowing to physicians treating more minority patients are a contributing factor.
Thus, bias—whether conscious or unconscious—by physicians can explain how and why there are differences in health outcomes between people who have the same health insurance and doctor. Barr (2014: 168) states that “for black Americans, where a person lives seems to be associated with access to primary care, the quality of available hospital care, and the quality of available home care.” Barr shows that blacks receive a different level of care for a wide range of diseases and illnesses compared to whites. For instance, Smedley et al (2003) write that “some evidence suggests that bias, prejudice, and stereotyping on the part of healthcare providers may contribute to differences in care.” Quite clearly, there is racial bias against minorities and it does seem to affect healthcare, whether intended or unintended, conscious or unconscious (Williams and Rucker, 2000). Bird and Clinton (2001: 255) write:
Race and class-based structuring of the U.S. health delivery system has combined with other factors, including physicians’ attitudes—perhaps legacies conditioned by their participation in slavery and creation of the scientific myth of black biological and intellectual inferiority—to create a medical-social, health system cultural, and health delivery environment which contributes to the propagation of racial health disparities, and, ultimately, the health system’s race and class dilemma.
Blacks are more likely to take the advice of physicians and to use needed services, such as preventative care, and are less likely to delay seeking care, when the physician is of their own race (Saha et al, 2000; LaVeist, Nuru-Jeter, and Jones, 2008).
Blacks are more likely to perceive racism in healthcare, and when they are able to choose their own doctors, they are more satisfied with their level of care (Chen et al, 2005). Chapman, Kaatz, and Carnes (2013) show that increasing awareness of implicit bias in healthcare can lower such disparities, stating that having more black doctors will alleviate such problems since they are less likely to be biased. Having a black doctor leads to more effective care for black men. Quite clearly, the race of the doctor matters for implicit biases, and minority doctors lead to more effective healthcare for minorities, since they are less likely to be affected by racial biases. Minorities trust the healthcare system less than whites do (Boulware et al, 2003). Black and white physicians even agree that race is a medically relevant data point, but do not agree on why (Bonham et al, 2009).
The table presented by Barr is telling. It shows that on certain indices of health, certain States fare worse than others. Rates of illness and rates of death between different States (with differing ethnic compositions) were compared. Using national data, he showed that Mississippi has the highest rates of death and illness (asthma aside). Social factors can and do account for the differences in hypertension between blacks and whites (and States); food deserts (lack of access to good food) can explain the higher rates of obesity and diabetes, and also the higher rates of high blood pressure, between the races (and between States with higher percentages of certain racial/ethnic groups). Of course, physiological variables are affected by the social environment, so we have to look at differences in the social environment between groups to see how and why there are differences in any physiological variable we look at.
Doctors, whether consciously or not, treat minority patients differently, and there is evidence that this leads to differences in health outcomes between ethnic groups in America. PP’s hypotheses don’t cut it (the only one that comes close is his “EGIs”, but that explanation fails; the cause is bias by the doctors, and “EGIs” have nothing to do with the bias). In any case, there are social and cultural reasons why there are such health disparities between States and races/ethnies. Understanding the causes behind them can and will lead to closing the gap between them. The social can and does become biological, and this is the perfect way to show it. There are ways to lower the disparities in a medical context, and education seems to be one of them—for both patient and doctor.
Some states are healthier than others based on objective measures of health and mortality, and understanding the reasons why can and will decrease these differences.
The following will not all be anthropologists by trade or certification, but each has carved out his own little niche that distorts research. I will address them in a way that reflects the weird way they are all connected.
Bruce Fenton: So thanks to RR, I’ve learned how he once peddled crap such as “giants“, among other things. Needless to say, that answered a lot of the questions I had after taking apart his article on his book a while back.
Jeffrey Schwartz and John Grehan: Now, to be fair, these guys deserve somewhat more credit for their premise, that is, the morphological links between orangutans and humans. It has some basis in how humans evolved, being originally arboreal and not knuckle-walking. However, their approach of preferring inheritance based on external morphology over coding DNA and overall genetics has been criticized again, and again, and again, and again.
For clarification, I found them via an Indian study on hominid development in Fenton’s sources, which focused on bipedalism. It noted an “Asian hypothesis” of human origins. Technically, Grehan proposed an African-Asian distribution, and as of now Schwartz’s actual hominid data works with OOA as a base model while adjusting it. So it doesn’t support Fenton even if orangutans were the actual ancestors of humans.
It’s worth mentioning that the commenter “Marc” in the Grehan link is a crank as well, but that’s a topic for another day.
Shi Huang: Shi Huang is the only other major researcher I know of who has actually produced a notably different finding on the human-chimp clade, on top of rejecting OOA.
The implications, however, stray further from the Cann study than Fenton’s do. In fact, if you read the link on phenotypic associations of human populations, it rather demonstrates the futility of using external phenotypes.
Overall, his theory of Africans being Denisovan- and Neanderthal-admixed humans doesn’t align with several things discussed previously on this blog: the haplogroup associations touched upon in my Fenton article linked to Dienekes; the nature of the East African cluster mentioned in my article on modern Africans, with A and B Y-chromosomes making up the majority of Eurasian-affiliated Nilotics; and my previous article on the post-Neanderthal substructure making up the majority of African genetics.
Now compare all of these inferences to this:
Fossils or traits indicating AMH migration from East Asia into Africa or Europe have been noted before. First, native Africans such as Khoisans are well known to have certain East Asian features such as shoveling teeth, epicanthic fold, and lighter skins. Mbuti pygmies look very much like the Andamanese. The much lower frequency of shoveling teeth in African fossils and Khoisan relative to ancient and modern Chinese suggests that this type of teeth could only originate in China with its African presence due to migration. The type of shoveling teeth found in Neanderthals and Pleistocene Homo from Atapuerca-Sima de los Huesos may either be a different type from that of Asians and Africans or come from early disposal of Homo from Asia to Europe (Martinon-Torres et al., 2007; Wolpoff, 1996). Second, a combination of three features has been noted to be region-specific to China fossils with lower frequency also found in North Africa: a non-depressed nasal root, non-projecting perpendicularly oriented nasal bones and facial flatness (Brauer and Stringer, 1997). Third, Dali man of China (~250,000 years ago) had lower upper facial index and flat nasomolar angle, but these two modern features only first appeared in Europe in Cro Magnons (Xinzhi Wu, personal communication).
I’ll admit I’m no expert in genomics, but having at least looked over the Dali paper and compared it to this, I think anyone else with sense who had done the same would come to the same conclusion I did and dismiss this paper, as the leaps and assertions it makes are vast and at times amateur. Wu Xinzhi apparently read this paper himself, but going by his own paper and work he was much more cautious and more involved with this kind of data. As particular as his theory was, it never devolved into statements like this:
That humans have been a single species for more than ~2 myr is consistent with the unique feature of being human, i.e., creativity, which could be defined as constant creation of novelty. Intentionally made and constantly improved knife type stone tools, first appeared 2.3-2.8 myr ago, may be beyond the capabilities of non-humans and mark the first appearance of creativity in life on Earth.

The appearance of modern humans should be accompanied by new technologies just as the knife type stone tools were associated with the first appearance of the genus Homo. A technology just one step more advanced than stone tools is pottery making. Consistent with our model, the earliest pottery making intended for practical usage was found in Hunan and the neighboring Jiangxi in South China at 18,000-20,000 years ago (Boaretto et al., 2009; Wu et al., 2012). While future investigations could extend the time even earlier, one should not expect a new technology to appear simultaneously with the first appearance of AMH since it would take time for the first modern humans to grow into a large enough population to be able to invent new cultures. It is also remarkable to note that the next new invention after pottery, rice or agriculture, also likely came from Hunan (Zhang and Yuan, 1998).

Both the link to his blog post on OOA and this slightly dismissive post on his work show an ardent defender of his, one who should be very familiar.
German Dziebel: It only takes a short cross-reference to see his BS. Basically, he’s pushing a hypothesis that undermines the divergence of Pygmies and Bantu farmers. This basically means ignoring the conclusions from his own sources on genetics, here, and here. He confuses the pygmy phenotype, which is shown to have arisen independently, with the pygmy genetic cluster. This is disingenuous to anyone familiar with the topic. He is correct that the groups are not genetically unrelated, owing to gene flow, which also accounts for the language situation, but his proposal to explain this primarily through recent splits is contradicted by full genetic research tackling the matter.
On the Shi Huang paper, he says that the Chinese lack the “bias” Americans feel to support OOA out of guilt over African American discrimination. Perhaps, yet that doesn’t explain Manzi on the Ceprano skull, nor does it explain this paper showing that the Chinese lack the substructure expected from the more popular idea of regional continuity—which actually shows bias on the part of the Chinese in pushing a theory. Likewise, Wu Xinzhi, who proposed the hypothesis, even stated it wasn’t mutually exclusive with OOA.
I’ve been saving this as an article concept because of how remarkable it was to come across each of them twice after what started as a simple cross-reference. This shouldn’t, however, be read as coming from someone who is against changes to the mainstream, but from someone who applies actual scrutiny to scientific stances. I can accept the potential relevance of orangutans, Australians, and Chinese archaic humans to modern human origins.