Two dominant theories address development: the “gene’s eye view”—gene selectionism (GS)—and the developmental view—developmental systems theory (DST). GS proposes that there are two fundamental processes in evolution: replication and interaction. Replicators (the term was coined by Dawkins) are anything that is copied into the next generation, whereas interactors (vehicles) are things that exist only to ensure the replicators’ survival. Thus, Dawkins (1976) proposes a distinction between the “vehicle” (organism) and its “riders”/replicators (the genes).
Gene selectionists propose a simple hypothesis: evolution through the differential survival of genes, its main premise being that the “gene” is “the ultimate, fundamental unit of natural selection.” Dusek (1999: 156) writes that “Gene selection claims that genes, not organisms, groups of organisms or species, are selected. The gene is considered to be the unit of selection.” The gene selectionist view is best—and most popularly—put in Richard Dawkins’ seminal book The Selfish Gene (1976), in which he posits that genes “compete” with each other, and that our “selfish actions” are the result of our genes attempting to replicate into the next generation, relegating our bodies to disposable “vehicles” that only house the “replicators” (or “drivers”).
Being a gene selectionist, though, does not necessarily make one a genetic determinist (both views will be argued below). Gene selectionists are committed to the view that genes make a distinctive contribution toward building interactors. Dawkins (1982) claims that genetic determinism is not a problem in regard to gene selectionism. Replicators (genes) have a special status for gene selectionists. Gene selectionists argue that adaptive evolution occurs only through cumulative selection, while only the replicators persist through the generations. Gene selectionists do not see organisms as replicators, since on their view it is genes—and not organisms—that are replicated.
The gene selectionist view (Dawkins’ 1976 view) can also be said to apply what Okasha (2018) terms “agential thinking”. “Agential thinking” is “treating an evolved organism as if it were an agent pursuing a goal, such as survival or reproduction, and treating its phenotypic traits, including its behaviour, as strategies for achieving that goal, or furthering its biological interests” (Okasha, 2018: 12). Dawkins—and other gene selectionists—treat genes as if they have agency, speaking of “intra-genomic conflict”, as if genes are competing with each other (sure, it’s “just a metaphor”, see below).
Okasha (2018: 71) writes:
To see how this distinction relates to agential thinking, note that every gene is necessarily playing a zero-sum game against other alleles at the same locus: it can only spread in the population if they decline. Therefore every gene, including outlaws, can be thought of as ‘trying’ to outcompete alleles, or having this as its ultimate goal.
Selfish genes also have an intermediate goal, which is to maximize fitness; this is done through expression in the organismic phenotype.
Thus, according to Okasha (2018: 73), “… selfish genetic elements have phenotypic effects which can be regarded as adaptations, but only if we apply the notions of agent, benefit, and goal to genes themselves”, though “… only in an evolutionary context [does] it [make] sense to treat genes as agent-like and credit them with goals and interests.” It does not, however, make sense to treat genes as even “agent-like” and credit them with goals and interests, since goals and interests can only be attributed to humans.
Other genes have as their intermediate goal to enhance the fitness of their host organism’s relatives, by causing altruistic behaviour [genes can’t cause altruistic behavior; it is an action]. However, a small handful of genes have a different intermediate goal, namely to increase their own transmission in their host organism’s gametes, for example, by biasing segregation in their favour, or distorting the sex-ratio, or transposing to new sites in the genome. These are outlaws, or selfish genetic elements. If outlaws are absent or are effectively suppressed, then the genes within a single organism have a common (intermediate) goal, so will cooperate: each gene can only benefit itself by benefiting the whole organism. Agential thinking then can be applied to the organism itself. The organism’s goal—maximizing its fitness—then equates to the intermediate goal of each of the genes within it. (Okasha, 2018: 72)
Attributing agential thinking to anything other than humans is erroneous, since genes are not “selfish.”
The selfish gene is one of the main theories that define the neo-Darwinian paradigm, and it is flat-out wrong. Genes are not ultimate causes, as the crafters of the neo-Darwinian Modern Synthesis (MS) propose; genes are resources in a dynamic system and can thus be seen as causes only in a passive, not active, sense (Noble, 2011).
The alternative to the gene-centric view of evolution is that of developmental systems theory (DST), first proposed by Oyama (1985).
The argument for DST is simple:
(1) Organisms obviously inherit more than DNA from their parents. Since organisms can behave in ways that alter the environment, environments are also passed on to offspring. Thus, it can be said that it is not only genes that are inherited, but a whole developmental matrix.
(2) Genes, according to the orthodox view of the MS, interact with many other factors for development to occur, and so genes are not the only things that help ‘build’ the organism. Genes could still play some “privileged” role in development, in that they “control”, “direct”, or “organize” everything else, but this is up to gene selectionists to prove. (See Noble, 2012.)
(3) The common claim that genes contain “information” (that is, context-independent information) is untenable, since every reconstruction of the sense in which genes contain information about development applies equally to other developmental factors. Genes cannot be singled out as privileged causes in development.
(4) Other attempts to privilege genes—such as the claim that genes are copied more “directly”—are also mistaken: they try to draw a distinction between genes and other developmental factors, but fail.
(5) Genes, then, cannot be privileged in development, and are no different than any other developmental factor. Genes, in fact, are just passive templates for protein construction, waiting to be used by the system in a context-dependent fashion (see Moore, 2002; Schneider, 2007). The entire developmental system reconstructs itself “through numerous independent causal pathways” (Sterelny and Griffiths, 1999: 109).
DNA is not the only thing inherited, and the so-called “famed immortality of DNA is actually a property of cells [since] [o]nly cells have the machinery to correct frequent faults that occur in DNA replication.” The thing about replication, though, is that “DNA and the cell must replicate together” (Noble, 2017: 238). A whole slew of developmental tools is inherited, and that is what constructs the organism; organisms are, quite obviously, not constructed by genes alone.
A developmental system, as described by Oyama (1985: 49), does not “have a final form, encoded before its starting point and realized at maturity. It has, if one focuses finely enough, as many forms as time has segments.” Oyama (1985: 61) further writes that “The function of the gene or any other influence can be understood only in relation to the system in which they are involved. The biological relevance of any influence, and therefore the very “information” it conveys, is jointly determined, frequently in a statistically interactive, not additive, manner, by that influence and the system state it influences.”
DNA is, of course, important, for without it there would be nothing for the cell to read (recall that the genome is an organ of the cell) and so no development would occur. But DNA is “information” about an organism only in the process of cellular functioning.
The simple fact of the matter is this: the development of organs and tissues is not directly “controlled” by genes, but by the exchange of signals between cells. “Details notwithstanding, what is important to note is that whatever kinds of signals [a cell] sends out depends on the kind of signals it receives from its immediate environment. Therefore, neighboring cells are interdependent, and it is local interactions among cells that drive the developmental processes” (Kampourakis, 2017: 173).
The fact of the matter is that whether or not a trait is realized depends on the developmental processes (and the physiologic system itself) and the environment. Kampourakis, just like Noble (2006, 2012, 2017), pushes a holistic view of development and the system. Kampourakis (2017: 184) writes:
What genetics research consistently shows is that biological phenomena should be approached holistically, at various levels. For example, as genes are expressed and produce proteins, and some of these proteins regulate or affect gene expression, there is absolutely no reason to privilege genes over proteins. This is why it is important to consider developmental processes in order to understand how characters and disease arise. Genes cannot be considered alone but only in the broader context (cellular, organismal, environmental) in which they exist. And both characters and disease in fact develop; they are not just produced. Therefore, reductionism, the idea that genes provide the ultimate explanation for characters and disease, is also wrong. In order to understand such phenomena, we need to consider influences at various levels of organization, both bottom-up and top-down. This is why current research has adopted a systems biology approach (see Noble, 2006; Voit, 2016 for accessible introductions).
All this shows that developmental processes and interactions play a major role in shaping characters. Organisms can respond to changing environments through changes in their development and eventually their phenotypes. Most interestingly, plastic responses of this kind can become stable and inherited by their offspring. Therefore, genes do not predetermine phenotypes; genes are implicated in the development of phenotypes only through their products, which depend on what else is going on within and outside cells (Jablonka, 2013). It is therefore necessary to replace the common representation of gene function presented in Figure 9.6a, which we usually find in the public sphere, with others that consider development, such as the one in Figure 9.6b. Genes do not determine characters, but they are implicated in their development. Genes are resources that provide cells with a generative plan about the development of the organism, and have a major role in this process through their products. This plan is the resource for the production of robust developmental outcomes that are at the same time plastic enough to accommodate changes stemming from environmental signals.
Figure 9.6 (a) The common representation of gene function: a single gene determines a single phenotype. It should be clear from what has been presented in the book so far that this is not accurate. (b) A more accurate representation of gene function that takes development and environment into account. In this case, a phenotype is produced in a particular environment by developmental processes in which genes are implicated. In a different environment the same genes might contribute to the development of a different phenotype. Note the “black box” of development.
[Kampourakis also writes on page 188, note 3]
In the original analogy, Wolpert (2011, p. 11) actually uses the term “program.” However, I consider the term “plan” as more accurate and thus more appropriate. In my view, the term “program” implies instructions and their implementation, whereas the term “plan” is about instructions only. The notion of a genetic program can be very misleading because it implies that, if it were technically feasible, it would be possible to compute an organism by reading the DNA sequence alone (see Lewontin, 2000, pp. 17-18).
Kampourakis is obviously speaking of a “plan” in a context-dependent manner since that is the only way that genes/DNA contain “information” (Moore, 2002; Schneider, 2007). The whole point is that genes, to use Noble’s terminology, are “slaves” to the system, since they are used by and for the (physiological) system. Developmental systems theory is a “wholeheartedly epigenetic approach to development, inheritance and evolution” (Hochman and Griffiths, 2015).
This point is driven home by Richardson (2017: 111):
And how did genes eventually become established? Probably not at all as the original recipes, designers, and controllers of life. Instead they arose as templates for molecular components used repeatedly in the life of the cell and the organism: a kind of facility for just-in-time production of parts needed on a recurring basis. Over time, of course, the role of these parts themselves evolved to become key players in the metabolism of the cell—but as part of a team, not the boss.
It is not surprising, then, that we find that variation in form and function has, for most traits, only a tenuous relationship with variation in genes.
[And also writes on page 133]:
There is no direct command line between environments and genes or between genes and phenotypes. Predictions and decisions about form and variation are made through a highly evolved dynamical system. That is why ostensibly the same environment, such as a hormonal signal, can initiate a variety of responses like growth, cell division, differentiation, and migration, depending on deeper context. This reflects more than fixed responses from fixed information in genes, something fatally overlooked in the nature-nurture debate.
(Also read Richardson’s article So what is a gene?)
The gene-selectionist point of view entails too many (false) assumptions. The DST point of view, on the other hand, does not fall prey to the pitfalls of the gene-selectionist POV: developmental systems theorists look at genes not as the ultimate causes of development—with only changes in gene frequency driving evolutionary change—but only as products to be used by and for the system. Genes can only be looked at in terms of development, and in no other way (Kampourakis, 2017; Noble, 2017). Thus, the gene selectionists are wrong: the main tenet of the neo-Darwinian Modern Synthesis, gene selectionism—the selfish gene—has been refuted (Jablonka and Lamb, 2005; Noble, 2006, 2011). Since the main tenets of the Modern Synthesis have been refuted, it is now time to replace it with a new view of evolution: one that includes the roles of genes, development, and epigenetics in the developmental system. The gene-selectionist view champions an untenable view of the gene, namely that the gene is privileged above all other developmental variables, but Noble and Kampourakis show that this is not the case, since DNA is inherited with the cell; the cell, to use Dawkins’ language, is what is “immortal”—not DNA itself.
A priori, there is no privileged level of causation, and this includes the gene, which so many place at the top of the hierarchy (Noble, 2012).
Black-white differences get talked about more than Asian-white differences. (For the purposes of this article, “Asian” refers to Koreans, Chinese, Japanese, and Filipinos, whereas “white” refers to those of European descent.) One interesting racial difference is that of body fatness between ethnies/races. Blacks have thinner skinfolds and lower percent body fat than whites at the same height/BMI, while Asians have higher body fat and larger skinfolds than whites. The interesting thing about this Asian-white difference is that, at the same BMI, Asians have more upper-body (trunk) fat than whites. Two good studies look at these differences between Asians and whites: one comparing the aforementioned “Asians” and whites in the NYC area, and another comparing whites and Chinese living in China.
Wang et al (1994) studied 687 healthy volunteers (445 whites and 242 Asians, ranging from 18 to 94 years of age, with BMIs in the range 15-38). They defined ethnicity by the birthplace of one’s grandparents. The “Asian” category included 225 Chinese, 9 Japanese, 6 Koreans, and 2 Filipinos; 97 percent of this sample was born in Asia. Then, after an overnight fast to better assess body fat differences and skinfold measures, they were weighed and measured, with their back, butt, and feet firmly against the wall.
They measured skinfold thickness at the midarm as the average of the triceps and biceps, while trunk skinfold thickness was the average of the chest, subscapular, umbilicus, abdomen, and suprailiac sites. The circumference of the arm was measured at the midarm, while the circumference of the trunk was the average circumference of the upper chest, waist, iliac crest, and chest.
For lean and normal BMIs, Asians were fatter than whites in both sexes, but the difference in estimated fat% between whites and Asians varied with BMI in different directions for males and females: the difference increased with BMI for males but decreased with BMI for females.
White males had significantly larger circumferences in the measured appendages than Asians, while in females the circumferences of the arms and waist did not differ, but the other circumferences did, favoring whites. Asians had significantly higher levels of subcutaneous trunk fat (upper-body fat) than whites, while white females had more lower-body (thigh) fat than Asians. In both sexes, Asians had thicker biceps, subscapular, abdomen, and suprailiac skinfolds than whites, and white women had higher levels of subcutaneous fat in their thighs. In regard to skinfold area, Asian men had a larger trunk skinfold area whereas white men had a larger thigh skinfold area, the races being similar at the midarm; arm fatness did not differ between the races. Women of both races had larger skinfold areas than men except in the trunk; for whites, there were no sex differences there. In both sexes, Asians had higher values of subcutaneous fat (at the midarm, trunk, and midthigh), though white women had a higher value at the thigh than Asian women.
Wang et al (1994) show that there are significant differences in body fatness at different sites of the body; since most (if not all) BMI equations are based on white populations, these equations will not work for Asians and will result in substantial error.
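To make the source of that error concrete, here is a minimal sketch. It uses a Deurenberg-style fat%-from-BMI regression of the published form (coefficients from Deurenberg et al, 1991), but the three-point Asian offset is a purely illustrative assumption of this sketch, not a value from Wang et al:

```python
# Sketch: why a fat%-from-BMI equation calibrated on one population
# misestimates fat% in another. Coefficients follow the Deurenberg-style
# form %fat = 1.2*BMI + 0.23*age - 10.8*sex - 5.4 fit on a mostly white
# sample; the ASIAN_OFFSET is hypothetical, for illustration only.

def percent_fat_white(bmi, age, male):
    """Predicted body fat % from a white-calibrated regression."""
    return 1.2 * bmi + 0.23 * age - 10.8 * (1 if male else 0) - 5.4

ASIAN_OFFSET = 3.0  # hypothetical: ~3 points more fat at the same BMI

bmi, age = 23.0, 40
pred = percent_fat_white(bmi, age, male=True)
actual_asian = pred + ASIAN_OFFSET
print(f"predicted fat%: {pred:.1f}, actual Asian fat%: {actual_asian:.1f}")
# The white-calibrated equation underestimates Asian fat% by the full offset,
# at every BMI, because the offset never enters the model.
```

The point is structural: whatever the true offset is, a single-population regression has no term for it, so the bias is systematic rather than random.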
Wang et al (2011) studied differences in body composition between Chinese and white males living in Shenzhen, Guangdong Province, China. They studied 115 Chinese and 114 white males. In this sample, Chinese males were younger, shorter, and had lower body weight and lower BMI than the white sample. Whites had higher fat mass, fat-free mass, and bioelectrical impedance (bioelectrical impedance analysis assesses body composition by measuring lean mass in relation to fat mass, but it can be skewed by how much water one has or has not drunk, so the DXA scan and hydrostatic weighing are, in my opinion, superior assessors). After adjustment for age and BMI, the percentage of fat mass in the trunk and arm was higher in Chinese than in white males. Further, Chinese men had higher diastolic blood pressure (DBP), fasting glucose (FG), and triglycerides (TG), while whites had higher fasting total plasma cholesterol (TC) and high-density lipoprotein (HDL). The only statistically significant differences were in FG and HDL. Even after adjustment, Chinese men had 3.0 mmHg higher DBP than whites.
Chinese men had higher percent body fat than whites and more fat stored around their trunks than whites at the same BMI. Chinese men had higher fasting glucose levels (a risk-factor for obesity) but lower HDL levels at the same BMI as whites. Wang et al (2011) write:
In addition, comparing the two nationally representative studies, NHANES III and China National Nutrition and Health Survey 2002 (CNNHS 2002), Chinese men held a relatively 15.0% lower mean value of BMI than that for American white men. While comparison results from two large-scale epidemiological studies, the Shanghai Diabetes Studies (SHDS) and the NHANES III, show that the mean value of PBF for American men is relatively 7.4% higher than that for Chinese men. The relative difference of PBF between American and Chinese males is much less than the difference of BMI, implying that the PBF among American men should be lower than that of Chinese men with the same BMI level.
What this implies is that the proportion of overweight/obese Chinese men is severely underestimated since, as noted earlier, most—if not all—BMI equations are created using strictly white populations. This study also provides more evidence that Chinese men have more central (trunk) adiposity than whites (Britons, in this study; Eston, Evans, and Fu, 1994). Central adiposity and risk for type II diabetes and cardiovascular disease are heightened in those of Chinese descent (Weng et al, 2006). It should also be noted that, in a sample of 129 Pacific Islanders, 120 Asians, 91 Maoris, and 91 Europeans aged 12-91, the relationship between bioelectrical impedance analysis (BIA) and fatness was ethnicity-dependent: equations developed for each ethnicity estimated fatness more accurately than the equation recommended by the manufacturer (Sluyter et al, 2010). Cheng (2011) showed that central adiposity was more predictive of cardiovascular disease in the Chinese population than was BMI, while Hu et al (2007) showed that central obesity was more related to diabetes mellitus and impaired fasting glucose than was overall obesity in the Chinese population.
So, clearly, obesity-related risk factors appear at lower BMIs for Asians than for Europeans (e.g., Huxley et al, 2008). Pan et al (2004) showed that for most BMI values, incidences of hypertension, diabetes, and hyperuricemia were higher in the Taiwanese sample than in the white and black samples. As BMI rose, the risk for hypertriglyceridemia and hypertension increased. They showed that BMIs of 22.6, 26, and 27.5 were the cutoffs with the best predictability in regard to negative and positive variables for Taiwanese, white, and black men, respectively. Pan et al (2004: 31) write:
For BMIs ≥ 27, 85% of Taiwanese, 66% of whites, and 55% of blacks had at least one of the studied comorbidities. However, a cutoff close to the median of the studied population was often found by maximizing sensitivity and specificity. Reducing BMI from 25 to 25 in persons in the United States could eliminate 13% of the obesity comorbidity studied. The corresponding cutoff in Taiwan is slightly lower, 24.
Pan et al (2004) conclude that the Taiwanese (Asians) in their study should have a lower BMI cutoff than whites and blacks, though it is tough to ascertain exactly where that cutoff should be.
Bell, Adair, and Popkin (2002) show that “at BMI levels less than 25, prevalence difference figures suggested a stronger association between BMI and hypertension in Chinese men and women but not in Filipino women, compared with non-Hispanic Whites” while “[n]on-Hispanic Blacks and Filipino women had a higher prevalence of hypertension at every level of BMI compared with non-Hispanic Whites and Mexican Americans.”
Since Asians have a higher risk of hypertension than whites after controlling for BMI, the effects of obesity are not as important as other factors, be they genetic or environmental (or both, which it obviously is). The higher incidence of obesity-related risk factors in Asian populations at lower BMIs has been attributed to GxE interactions, which, of course, have been intensified by the introduction of the Western diet (AKA the SAD, the Standard American Diet). This can be seen most notably in the explosion of childhood obesity in China: the number of obese people in China recently surpassed that of the US, and China is on its way to having the most obese children in the world. The surging obesity epidemic in China is due to lifestyles increasingly similar to ours (sedentary populations; highly processed, high-fat, high-carbohydrate foodstuffs).
Since the findings in the reviewed studies suggest that Asians are more susceptible to obesity-related risk factors at a lower BMI, BMI standards must be lowered for Asian populations: BMI 24 for overweight and BMI 27 for obese, as recommended by the Chinese Ministry of Health (Wang et al, 2010). Cheung et al (2018) show that diet quality is inversely associated with obesity in Chinese adults who have type II diabetes.
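A population-specific standard is easy to express in code. This minimal sketch uses the standard WHO cutoffs (25 for overweight, 30 for obese) and the Chinese cutoffs cited above (24/27, per Wang et al, 2010); the category names are my own labels:

```python
# Population-specific BMI classification. WHO cutoffs are the standard
# adult values; the Chinese cutoffs follow the Ministry of Health
# recommendation cited in the text (Wang et al, 2010).

CUTOFFS = {
    "who": (25.0, 30.0),      # (overweight threshold, obese threshold)
    "chinese": (24.0, 27.0),
}

def classify_bmi(bmi, population="who"):
    overweight, obese = CUTOFFS[population]
    if bmi >= obese:
        return "obese"
    if bmi >= overweight:
        return "overweight"
    return "normal/under"

# The same BMI lands in different categories under the two standards:
print(classify_bmi(24.5, "who"))      # normal/under
print(classify_bmi(24.5, "chinese"))  # overweight
```

The table-of-cutoffs design makes the point of the preceding paragraphs directly: the classification function is the same everywhere; only the population-specific thresholds change.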
In conclusion, Asians at the same BMI have a higher body fat percentage than whites, and they also have more obesity-related risk factors than whites at a lower BMI (Pan et al, 2004; WHO expert consultation, 2004; Wang et al, 2010; Hsu et al, 2015), which implies that they need different BMI scales, just as blacks need different scales in comparison with whites. A good example is two people with the same BMI (22.3) but very different DXA results.
This, of course, shows the strong limitations of BMI standards calculated in one ethny and used for another. So, just as blacks at the same BMI have lower body fat and thinner skinfolds than whites (Vickery, Cureton, and Collins, 1988; Wagner and Heyward, 2000; Flegal et al, 2010), Asians at the same BMI as whites have higher body fat and thicker skinfolds (Wang et al, 1994; WHO expert consultation, 2004; Wang et al, 2011).
Is there one diet (or one with slight modifications) that all humans should be eating? I’m skeptical of such claims. Both vegans (those who do not eat or use animal products) and carnivores (those who eat only animal products), in my opinion, have some warped views on diet and human evolution. Both are extreme views with little to no support; both have wrong ideas about diet throughout our evolution; both get some things right. While it is hard to pinpoint what the “human diet” is, there were clearly certain things we ate throughout our evolutionary niches in our ancestral Africa that we “should” be eating today (in good quantities).
Although it is difficult to reconstruct the diet of early hominids due to lack of specimens (Luca, Perry, and Rienzo, 2014), by studying the eating behavior of our closest evolutionary relatives—the chimpanzees—we can get an idea of what our LCA ate and how it ate (Ulijaszek, Mann, and Elton, 2013). Humans have occupied nearly every niche we possibly could and have therefore come across the most common foods in each ecology. If animal A is in ecosystem E with foods X, Y, and Z, then animal A eats foods X, Y, and Z, since animals consume what is in their ecosystem. Knowing this much, the niches our ancestors lived in had to have a mix of both game and plants, and therefore that was our diet (in differing amounts, obviously). But it is more complicated than that.
So, knowing this, according to Ulijaszek, Mann, and Elton (2013: 35), “Mammalian comparisons may be more useful than ‘Stone Age’ perspectives, as many of the attributes of hominin diets and the behaviour associated with obtaining them were probably established well before the Pleistocene (Foley 1995; Ulijaszek 2002; Elton 2008a)”—the Pleistocene being the time the stone agers were around. Humans eat monocots (flowering plants with a single seed leaf), which is not common in our order. The advent of farming expanded our dietary niche: “the widespread adoption of agriculture [was] an obvious point of transition to a ‘monocot world’” (Ulijaszek, Mann, and Elton, 2013). Although these foodstuffs dominate our diet, there is seasonality in which of these foods we consume.
Since humans tend not to pick at things to eat but have discrete meals (it is worth noting that the idea that one should have “three square meals a day” is a myth; see Mattson et al, 2014), we need to eat a lot in the times we do eat. Therefore, since we are large-bodied primates and our energy needs are much higher (our large brains consume 20 percent of our daily caloric intake), we need higher-quality energy. The overall quality and energy density of our diets are due to meat-eating; folivorous/frugivorous primates do not consume meat. We have a shorter gut tract, which is “often attributed to the greater reliance on faunivory in humans”, though “humans are not confined to ‘browse’ vegetation … and make extensive use of grasses and their animal consumers” (Ulijaszek, Mann, and Elton, 2013: 58). Due to this, we show amazing dietary flexibility and adaptability, with the ability to eat a wide range of foodstuffs in most any environment we find ourselves in.
So “It is difficult to pinpoint what the human diet actually is … Nonetheless, humans are frequently described as omnivores” (Ulijaszek, Mann, and Elton, 2013: 59). Omnivores normally feed at two or more trophic levels, though others define omnivory as simply consuming both plants and animals (Chubaty et al, 2014). Trophic level one is taken up by plants; level two by herbivores—primary consumers; level three by predators, who feed on the herbivores; levels four and five by apex predators and carnivores; the last level is also occupied by detritivores—those who feed on waste. Though, of course, “omnivory” is a continuum, not a category in and of itself. Humans eat primary producers (plants) and primary consumers (herbivores) and some secondary consumers (like fish), “although human omnivory may only be possible because of technological processing” (Ulijaszek, Mann, and Elton, 2013: 59). Other animals described as “omnivorous” eat foods from only one trophic level and consume food from another level only when needed.
Humans—as a species—rely on meat consumption. Fonseca-Azevedo and Herculano-Houzel (2012) showed that the energetic cost of a brain is directly related to the number of neurons in the brain. So, there were metabolic limitations in regard to brain and body size. The number of hours available to feed along with the low caloric yield of plant foods explains why great apes have such large bodies and small brains—which was probably overcome by erectus, who probably started cooking food around 1.5 mya. If we consumed only a diet of raw foods, then it would have taken us around 9 h/day to consume the calories we would need to power our brains—which is just not feasible. So it is unlikely that erectus—who was the first to have the human body plan and therefore the ability to run, which implies he would have needed higher quality energy—would have survived on a diet of raw plant foods since it would take so many hours to consume enough food to power their growing brains.
We can see that we are adapted to eating meat by looking at our intestines. Our small intestines are relatively long, whereas our large intestines are relatively short, which indicates that we became adapted to eating meat. Our “ability to eat significant quantities of meat and fish is a significant departure from the dietary norm of the haplorhine primates, especially for animals in the larger size classes.” Though “Humans share many features of their gut morphology with other primates, particularly the great apes, and have a gut structure that reflects their evolutionary heritage as plant, specifically ripe fruit, eaters” (Ulijaszek, Mann, and Elton, 2013: 63). Chimpanzees are not physiologically adapted to meat eating, which can be seen in their development of hypercholesterolemia along with vascular disease, even when fed controlled diets in captivity (Ford and Stanford, 2004).
When consuming a lot of protein, though, “rabbit starvation” needs to be kept in mind. Rabbit starvation is a type of malnutrition that arises from eating little to no fat along with high amounts of protein. Since protein intake is physiologically demanding (it takes the most energy to process of the three macronutrients), Ben-Dor et al (2011) suggest a ceiling of about 35 percent of kcal coming from protein. So erectus’ protein ceiling was 3.9 g/bw per day whereas for Homo sapiens it is 4.0 g/bw per day. Ben-Dor et al (2011) show that erectus’ DEE (daily energy expenditure) was about 2704 kcal, with “a maximum long-term plant protein ceiling of 1014 calories”, implying that erectus was, indeed, an omnivore. The consumption of protein and of raw plants, then, is physiologically limited. Since erectus’ ceiling on protein intake was 947 kcal and his ceiling on raw plant intake was 1014 kcal, then, according to the model proposed by Ben-Dor et al (2011), erectus would have needed to consume about 744 kcal from fat—about 27 percent of his overall caloric intake and 44 percent of his animal-product intake.
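The arithmetic of the Ben-Dor et al (2011) energy budget can be checked with a short script. This is only a sketch of the numbers quoted above—the kcal figures come from the text, and any small discrepancies are rounding on my part:

```python
# Sketch of the macronutrient budget Ben-Dor et al (2011) propose for erectus.
# Figures are the ones quoted in the text; rounding differences are mine.

dee = 2704             # daily energy expenditure (kcal)
protein_ceiling = 947  # maximum long-term protein intake (kcal)
plant_ceiling = 1014   # maximum long-term raw-plant intake (kcal)

# Whatever the protein and plant ceilings cannot cover must come from fat
# (743 kcal here; the text's "about 744" reflects rounding in the paper).
fat = dee - protein_ceiling - plant_ceiling
print(fat)  # 743

# Fat as a share of total intake (~27 percent, as stated above).
print(round(100 * fat / dee))  # 27

# Fat as a share of animal-product intake, i.e. fat + protein (~44 percent).
print(round(100 * fat / (fat + protein_ceiling)))  # 44
```

The shares reproduce the 27 percent and 44 percent figures in the text, so the model’s numbers are internally consistent.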
Neanderthals would have consumed between 74 and 85 percent of their daily caloric energy during glacial winters from fat, with the rest coming from protein (Ben-Dor, Gopher, and Barkai, 2016), while consuming between 3,360 and 4,480 kcal per day (Steegman, Cerny, and Holliday, 2002). Neanderthals consumed a large amount of protein, about 292 grams per day (Ben-Dor, Gopher, and Barkai, 2016: 370). Since our close evolutionary cousins (Neanderthals and erectus) ate large amounts of protein and fat, they were well-acclimated, physiologically speaking, to high-protein diets. Their diets were not so high in protein that rabbit starvation would occur—the animals Neanderthals hunted and killed supplied fat in sufficient amounts, so rabbit starvation was not a problem for them. But since rabbit starvation is a real problem for our species, “It is therefore unlikely that humans could be true carnivores in the way felids are” (Ulijaszek, Mann, and Elton, 2013: 66).
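These Neanderthal figures are roughly self-consistent, which a quick calculation shows. This is an illustration only, assuming the standard conversion of about 4 kcal per gram of protein:

```python
# Consistency check on the Neanderthal figures quoted above.
# Assumes the standard ~4 kcal per gram of protein.

protein_g = 292                # daily protein (Ben-Dor, Gopher, and Barkai, 2016)
protein_kcal = protein_g * 4
print(protein_kcal)  # 1168 kcal

# At the upper intake estimate of 4,480 kcal/day, protein supplies ~26 percent
# of energy, leaving ~74 percent for fat -- the lower bound of the 74-85
# percent range cited above (lower total intakes push the fat share down).
intake = 4480
print(round(100 * protein_kcal / intake))        # 26
print(round(100 * (1 - protein_kcal / intake)))  # 74
```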
We consume a diet that is both omnivorous and eclectic, one determined by our phylogeny through the form of our guts; nutritional diversity runs throughout our evolutionary history. We needed to colonize new lands and, since animals can only consume what is in their ecosystem, the foods edible in a given ecosystem are what its animals will eat. Being eclectic feeders is what made the migration out of Africa possible.
But humans are not true carnivores, contrary to some claims. “Meat-eating has allowed humans to colonize high latitudes and very open landscapes. However, bearing in mind the phylogenetic constraints that prevent humans from being true carnivores, such expansion was probably not accomplished through meat-eating alone. Instead, humans have used their ability to technologically harvest, produce, and consume a very wide range of foods to help exploit all major biomes” (Ulijaszek, Mann, and Elton, 2013: 67).
Humans, though, lack the gut specialization and dentition to process grasses efficiently. This means that our ancestors ate animals that ate these plants, and the C4 signature of that prey is what elevated the carbon isotope levels in the fossils we have discovered. Information like this implies that our ancestors ate across a wide variety of trophic levels and had substantial dietary diversity throughout their evolutionary history.
“Hominins lack the specialized dentition found in carnivorans (the group of animals that includes the cat and dog families) and other habitual meat and bone eaters, so must have pre-processed at least some of the meat in their diet” (Ulijaszek, Mann, and Elton, 2013: 81). This is where stone tools come into play (Zink and Lieberman, 2016). “Processing” food can be anything from extracting nutrients to changing how the food looks. We can look at “food processing” as a form of pre-digestion before consumption. The use of stone tools, and cooking, was imperative for us to begin the processing of meat and other foods. This gave us the ability to “pre-digest” our food before consumption, which increases the available energy in any food that is cooked/processed. For example, cooking denatures protein strands and breaks down cell walls, which gelatinizes the collagen in meat and allows for easier chewing and digestion. Carmody et al (2016) showed that adaptation to a cooked diet began around 275 kya.
In his book Catching Fire, Wrangham (2009: 17-18) writes:
Raw-foodists are dedicated to eating 100 percent of their diets raw, or as close to 100 percent as they can manage. There are only three studies of their body weight, and all find that people who eat raw tend to be thin. The most extensive is the Giessen Raw Food study, conducted by nutritionist Corinna Koebnick and her colleagues in Germany, who used questionnaires to study 513 raw-foodists who ate from 70 to 100 percent of their diet raw. They chose to eat raw to be healthy, to prevent illness, to have a long life, or to live naturally. Raw food included not only uncooked vegetables and occasional meat, but also cold-pressed oil and honey, and some items were lightly heated such as dried fruits, dried meat, and dried fish. Body mass index (BMI), which measures weight in relation to the square of the height, was used as a measure of fatness. As the proportion of food eaten raw rose, BMI fell. The average weight loss when shifting from a cooked to a raw food diet was 26.5 pounds (12 kilograms) for women and 21.8 pounds (9.9 kilograms) for men. Among those eating a purely raw diet (31 percent), the body weights of almost a third indicated chronic energy deficiency. The scientists’ conclusion was unambiguous: “a strict raw food diet cannot guarantee an adequate energy supply.”
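BMI, as the quote notes, is weight in relation to the square of height. A minimal sketch of the measure—the example weight and height below are hypothetical values of my own, not from the Giessen study:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by the square of height (m)."""
    return weight_kg / height_m ** 2

# Hypothetical example: losing the ~12 kg the Giessen study reports for
# women would move a 62 kg, 1.65 m woman from the middle of the "normal"
# range to the edge of underweight (the WHO underweight cutoff is BMI < 18.5).
print(round(bmi(62, 1.65), 1))       # 22.8
print(round(bmi(62 - 12, 1.65), 1))  # 18.4
```

This is why the study could read low BMI among strict raw-foodists as chronic energy deficiency.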
Also, vegetarians and meat-eaters who cooked their food had similar body weights. This implies that cooking food—no matter the type—makes more caloric energy available to the body, and that raw-foodists are fighting a losing battle with biology, consuming raw foods in quantities our guts are not built to handle. As the Fonseca-Azevedo and Herculano-Houzel (2012) findings cited above show, great apes that eat nothing but raw food have the large guts and bodies needed to process the raw plant foods they consume; we cannot thrive on such a diet because it is neither calorically nor nutritionally viable for us—most importantly because of the size of our brains and their caloric requirements.
Carmody, Weintraub, and Wrangham (2011) show that modern raw-foodists who subsist on raw meat and plants have nutrient and chronic energy deficiencies—even though they process their foods in various ways (cooking is a form of processing, as are cutting, mashing, and pounding)—while the women among them experience low fecundity. Thus, the cooking of food seems to be needed for normal biological functioning; we have clearly evolved past consuming all raw foods. So it is clear that cooking—along with meat-eating—was imperative to our evolution. (Which does not mean that humans only ate meat or that eating meat and only meat is part of our evolutionary history.) Cooking gelatinized collagen and denatured protein, making food softer and easier to masticate, so less bite force was required. Over time this led to smaller teeth, which can be seen in erectus (Zink and Lieberman, 2016). Tool-use contributed as well: processing with tools produced smaller food particles, requiring less force per bite, which eventually led to smaller teeth in our lineage.
Finally, humans are sometimes said to be “facultative carnivores.” A facultative carnivore is an animal that does best on a carnivorous diet but can survive—not thrive—on other foodstuffs when meat is not available. This, though, doesn’t make sense. Humans are eclectic feeders—omnivorous in nature. Yes, we began cooking about 1.5 mya; yes, meat-eating (and the cooking of said meat) loomed large in the evolution of our species; yes, without meat and cooking we would not have met the energy requirements needed to diverge from the chimpanzees/great apes. But this does not mean that we do “best” on a carnivorous diet. There are about 7,105 ethnic groups in the world (Spencer, 2014: 1029), and to say that all of these ethnies would respond the same or similarly, physiologically speaking, to an all-meat diet is crazy talk. The claim that we subsisted on one type of food over all others throughout our evolutionary history is a bold one—with no basis in evolutionary history.
Marlene Zuk (2013: 103-104), author of Paleofantasy writes:
Another implication of the importance Marlowe attaches to bow hunting is that, rather than starting out as exclusively carnivorous and then adding starches and other plant material to the diet, ancient humans have been able to increase the proportion of meat only after newer technology had come about, a mere 30,000 years ago. Other anthropologists concur that the amount of meat in the human diet grew as we diverged from our other primate ancestors. All of this means that, first, contrary to the claims of many paleo-diet proponents, the earliest humans did not have an exclusively meat-based diet that we are best adapted to eat; and second, our ancestors’ diets clearly changed dramatically and repeatedly over the last tens, not to mention hundreds, of thousands of years, even before the advent of agriculture.
The assumption that we were fully (or even mostly) carnivorous and then added plant foods/carbs is clearly false. “Fantasies” like this are “just-so stories”: nice-sounding stories, but reality is clearly more nuanced than people’s evolutionary and Stone Age imaginations allow. This makes sense, though, since we diverged from an LCA (last common ancestor) with chimpanzees some 6.3 mya (Patterson et al, 2006). Why would we then, ultimately, have come to subsist on an all-meat diet, if our LCA with chimpanzees was most likely a forager who lived in the trees (Lieberman, 2013)?
One thing, though, I’m sure everyone agrees with is that the environments we have constructed for ourselves in the first world are maladaptive—what is termed an “evolutionary mismatch” (Lieberman, 2013; Genne-Bacon, 2014). The mismatch arises from the high-carb food environments we have constructed, with cheap foodstuffs loaded with sugar, salt, and fat—a combination far more addictive than any one of them alone (see Kessler, 2010). This makes food more palatable, and people then want to eat more of it. Foods like this, obviously, were not in our OEE (original evolutionary environment), and therefore cause us huge problems in our modern-day environments. Evolutionary mismatches occur when technological advancement outpaces the genome’s ability to adapt. This can clearly be seen in our societies and in the explosion of obesity over the past few decades (Fung, 2016, 2018).
We did not evolve eating highly processed carbohydrates loaded with salt and sugar. That much everyone can agree on.
It is clear that the claims from both vegans/vegetarians and carnivores are false: there is no one “human diet” that we “should” be eating. Individual variation in different physiologic processes implies that there is no one “human diet”, no matter what type of food is being pushed as “what we should be” eating. Humans are eclectic feeders; we will eat anything, since “Humans show remarkable dietary flexibility and adaptability”. Furthermore, we also “have a relatively unspecialized gut, with a colon that is shorter relative to overall size than in other apes; this is often attributed to the greater reliance on faunivory in humans (Chivers and Langer 1994)” (Ulijaszek, Mann, and Elton, 2013: 58). Our dietary eclecticism can be traced back to our Australopithecine ancestors. The claim that we were either “vegetarian/vegan or carnivore” throughout our evolution is false.
Humans aren’t “natural carnivores” or “natural vegans/vegetarians.” Humans are eclectic feeders. Animals eat whatever is in their ecosystem; ergo, humans are omnivores. Though we can’t pinpoint what the “human diet” is, since there is great variability in it due to culture/ecology, we do know one thing: we did not subsist mainly on any one food; we had a large variety of foods, especially fallback foods, to consume throughout our evolutionary history. So claims that we evolved to eat one certain way (as vegans/vegetarians and carnivores claim) are false. (Note that I am not saying high-carb diets are good; I’ve railed hard on them.)
In 2012, biologist Hippokratis Kiaris published a book titled Genes, Polymorphisms, and the Making of Societies: How Genetic Behavioral Traits Influence Human Cultures. His main point is that “the presence of different genes in the corresponding people has actually dictated the acquisition of these distinct cultural and historical lines, and that an alternative outcome might be unlikely” (Kiaris, 2012: 9). This is a book I have not seen discussed on any HBD blog, and based on its premise (it purports to explain behavioral/societal differences between Eastern and Western society) you would think it would be. The book is short, and he uses a lot of determinist language. (It’s worth noting he does not discuss IQ at all.)
In the book, he discusses how genes “affect” and “dictate” behavior which then affects “collective decisions and actions” while also stating that it is “conceivable” that history, and what affects human decision-making and reactions, are also “affected by the genetic identity of the people involved” (Kiaris, 2012: 11). Kiaris argues that genetic differences between Easterners and Westerners are driven by “specific environmental conditions that apparently drove the selection of specific alleles in certain populations, which in turn developed particular cultural attitudes and norms” (Kiaris, 2012: 91).
Kiaris attempts to explain the societal differences between the peoples who adopted Platonic thought and those who adopted Confucian thought. He argues that differences between Eastern and Western societies “are not random and stochastic” but are “dictated—or if this is too strong an argument, they are influenced considerably—by the genes that these people carry.” So, Kiaris says, “what we view as a choice is rather the complex and collective outcome of the influence of people’s specific genes combined with the effects of their specific environment … [which] makes the probability for rendering a certain choice distinct between different populations” (Kiaris, 2012: 50).
The first thing that Kiaris discusses (behavior-wise) is DRD4. Alleles of this gene have been associated with miles migrated from Africa (with a correlation of .85) along with novelty-seeking and hyperactivity, which may drive the association between DRD4 frequency and miles migrated from Africa (Chen et al, 1999). Kiaris notes, of course, that DRD4 alleles are unevenly distributed across the globe, with people who migrated further from Africa having a higher frequency of these alleles. Europeans were more likely than Asian populations to carry the “novelty-seeking” 7-repeat DRD4 allele (Chang et al, 1996). But, Kiaris (2012: 68) wisely writes (emphasis mine):
Whether these differences [in DRD alleles] represent the collective and cumulative result of selective pressure or they are due to founder effects related to the genetic composition of the early populations that inhabited the corresponding areas remains elusive and is actually impossible to prove or disprove with certainty.
Kiaris then discusses differences between Eastern and Western societies and how we might understand these differences between societies as regards novelty-seeking and the DRD4-7 distribution across the globe. Westerners are more individualistic and this concept of individuality is actually a cornerstone of Western civilization. The “increased excitability and attraction to extravagance” of Westerners, according to Kiaris, is linked to this novelty-seeking behavior which is also related to individualism “and the tendency to constantly seek for means to obtain satisfaction” (Kiaris, 2012: 68). We know that Westerners do not shy away from exploration; after all, the West discovered the East and not vice versa.
Easterners, on the other hand, are more passive and have “an attitude that reflects a certain degree of stoicism and makes life within larger—and likely collectivistic—groups of people more convenient“. Easterners, compared to Westerners, take things “the way they are” which “probably reflects their belief that there is not much one can or should do to change them. This is probably the reason that these people appear rigid against life and loyal, a fact that is also reflected historically in their relatively high political stability” (Kiaris, 2012: 68-69).
Kiaris describes DRD4 as a “prototype Westerner’s gene” (Kiaris, 2012: 83), stating that the 7R allele of this gene is found more frequently in Europeans compared to Asians. The gene has been associated with increased novelty-seeking, exploratory activity, and human migrations, along with liberal ideology. These, of course, are cornerstones of Western civilization and thought, and so Kiaris argues that the higher frequency of this allele in Europeans—in part—explains certain societal differences between the East and West. Kiaris (2012: 83) then makes a bold claim:
All these features [novelty-seeking, exploratory activity and migration] indeed tend to characterize Westerners and the cultural norms they developed, posing the intriguing possibility that DRD4 can actually represent a single gene that can “predispose” for what we understand as the stereotypic Western-type behavior. Thus, we could imagine that an individual bearing the 7-repeat allele functions more efficiently in Western society while the one without this allele would probably be better suited to a society with Eastern-like structure. Alternatively, we could propose that a society with more individuals bearing the 7-repeat allele is more likely to have followed historical lines and choices more typical of a Western society, while a population with a lower number (or deficient as it is the actual case with Easterners) of individuals with the 7-repeat allele would more likely attend to the collective historical outcome of Easterners.
Kiaris (2012: 84) is, importantly, skeptical that having a high number of “novelty-seekers” and “explorers” would lead to higher scientific achievement. This is because “attempts to extrapolate from individual characteristics to those of a group of people and societies possess certain dangers and conceptual limitations.”
Kiaris (2012: 86) says that “collectivistic behavior … is related to the activity of serotonin.” He then goes on to cite several other polymorphisms associated with collectivistic behavior as well. Goldman et al (2010) show ethnic differences in the l and s alleles of 5-HTT, which Kiaris (2012: 86) reproduces.
It should also be noted that populations (Easterners) that had a higher frequency of the s allele had a lower prevalence of depression than Westerners. So Western societies are more likely to “suffer more frequently from various manifestations of depression and general mood disorders than those of Eastern cultures (Chiao & Blizinsky, 2010)” (Kiaris, 2012: 89).
As can be seen from the Goldman et al (2010) data, Westerners are more likely to have the l allele than Easterners, which should subsequently predict higher levels of happiness in Western compared to Eastern populations. However, “happiness” is, in many ways, subjective; so how would one find an objective way to measure “happiness” cross-culturally? Yet Kiaris (2012: 94) writes: “Intuitively speaking, though, I have to admit that I would rather expect Asians to be happier, in general, than Westerners. I cannot support this by specific arguments, but I think the reason for that is related to the individualistic approach of life that the people possess in Western societies: By operating under individualistic norms, it is unavoidably stressful, a condition that operates at the expense of the perception of individuals’ happiness.”
Kiaris then discusses catechol-O-methyltransferase (COMT), an enzyme responsible for the inactivation of catecholamines—the hormones dopamine, adrenaline, and noradrenaline, which regulate the “fight or flight” response (Goldstein, 2011). Since catecholamines play a regulatory role in the “fight or flight” mechanism, increased COMT activity results in lower dopamine levels, which is in turn associated with better performance under stress.
“Warriors” and “worriers” are intrinsically linked to the “fight or flight” mechanism. A “warrior” is someone who performs better under stress, achieves maximal performance despite threat and pain, and is more likely to act efficiently in a threatening environment. A “worrier” is “someone that has an advantage in memory and attention tasks, is more exploratory and efficient in complex environments, but who exhibits worse performance under stressful conditions (Stein et al., 2006)” (Kiaris, 2012: 102).
Kiaris (2012: 107) states that “at the level of society, it can be argued that the specific Met-bearing COMT allele contributes to the buildup of Western individualism. Opposed to this, Easterners’ increased frequency of the Val-bearing “altruistic” allele fits quite well with the construction of a collectivistic society: You have to be an altruist at some degree in order to understand the benefits of collectivism. By being a pure individualist, you only understand “good” as defined and reflected by your sole existence.”
So, Kiaris’ whole point is this: there are differences in polymorphic genes between Easterners and Westerners (the alleles are unevenly distributed), and differences in these polymorphisms (DRD4, HTT, MAOA, and COMT) explain behavioral differences between Eastern and Western societies. The genetic polymorphism associated with “Western behavior” (DRD4) is associated with increased novelty-seeking, a tendency for financial risk-taking, distance of OoA migration, and liberal ideology. Numerous MAOA and 5-HTT polymorphisms are associated with collectivism (e.g., Way and Lieberman, 2006 for MAOA and collectivism). The COMT polymorphism more likely to be found in Westerners predisposes for “worrier” behavior. Furthermore, certain polymorphisms of the CHRNB3 gene are more common in all of the populations that migrated out of Africa, which supposedly predisposed for leader—and not follower—behavior.
| Trait | Gene | Finding |
|---|---|---|
| Novelty seeking | DRD4 | 7-repeat novelty-seeking allele more common in the West |
| Migration | DRD4 | 7-repeat allele is associated with distance of migration from Africa |
| Nomads/settlers | DRD4 | 7-repeat allele is associated with nomadic life |
| Political ideology | DRD4 | 7-repeat allele is more common in liberals |
| Financial risk taking | DRD4 | 7-repeat allele is more common in risk takers |
| Individualism/Collectivism | HTT | s allele (collectivistic) of 5-HTT is more common in the East |
| Happiness | HTT | l allele has higher prevalence in individuals happy with their life |
| Individualism/Collectivism | MAOA | 3-repeat allele (collectivistic) more common in the East |
| Warrior/Worrier | COMT | A allele (worrier) more common in the West |
| Altruism | COMT | G allele (warrior) associated with altruism |
| Leader/Follower | CHRNB3 | A allele (leader) more common in populations Out-of-Africa |
The table above is from Kiaris (2012: 117) who lays out the genes/polymorphisms discussed in his book—what supposedly shows how and why Eastern and Western societies are so different.
Kiaris (2012: 141) then makes a bold claim: “Since we know now that at least a fraction (and likely more than that) of our behavior is due to our genes”. But, actually, “we” don’t “know” this “now”.
The takeaways from the book are: (1) populations differ genetically; (2) since populations differ genetically, then genetic differences correlated with behavior should show frequency differences between populations; (3) since these populations show both behavioral/societal differences and they also differ in genetic polymorphisms which are then associated with that behavior, then those polymorphisms are, in part, a cause of that society and the behavior found in it; (4) therefore, differences in Eastern and Western societies are explained by (some) of these polymorphisms discussed.
Now for a simple rebuttal of the book:
Consider two claims: “B iff G” (behavior B is possible if and only if a specific genotype G is instantiated) and “if G, then necessarily B” (genotype G is a sufficient cause for behavior B). Both claims are false; genes are neither a sufficient nor a necessary cause of any behavior. Genes are, of course, a necessary pre-condition for behavior, but they are not needed for any specific behavior to be instantiated; at most, genes can be said to be difference makers (Sterelny and Kitcher, 1988) (but see Godfrey-Smith and Lewontin, 1993 for a response). Since these claims cannot be substantiated, “if G, then necessarily B” and “B iff G” are false; it cannot be shown that genes are difference makers in regard to behavior, nor that particular genes cause particular behaviors.
I’m surprised that I have not come across a book like this sooner; you would expect a lot more to have been written on this. The book is short and discusses some good resources, but the conclusions Kiaris draws, in my opinion, do not hold, because genes are neither a necessary nor a sufficient cause of any type of behavior, nor can it be shown that genes are causes of any behavior B. Behavioral differences between Eastern and Western societies, logically, cannot come down to differences in genes, since genes are neither necessary nor sufficient causes of behavior (genes are necessary pre-conditions for behavior, since without genes there is no organism, but genes cannot explain behavior).
Kiaris attempts to show how and why Eastern and Western societies became so different: how and why Western societies are dominated by “Aristotle’s reason and logic”, while Eastern thought “has been dominated by Confucius’s harmony, collectivism, and context dependency” (Kiaris, 2012: 9). While the book is well-written and researched (he covers nothing new if you’re familiar with the literature), Kiaris fails to prove his ultimate point: that differences in genetic polymorphisms between individuals in different societies explain how and why the societies in question are so different. It is not logically possible for genes to be a necessary or sufficient cause of any behavior. Kiaris talks like a determinist, since he says that “the presence of different genes in the corresponding people has actually dictated the acquisition of these distinct cultural and historical lines, and that an alternative outcome might be unlikely” (Kiaris, 2012: 9), but that is just wishful thinking: if we were able to start history over again, things would occur differently, “the presence of different genes in the corresponding people” be damned, since genes do not cause behavior.
Assessing physical functioning is important. Simple tests—such as walk, stand, and sit tests—can predict numerous things. “Testing” here means assessing one’s abilities after being given a set of instructions. Racial differences exist and, of course, both genetic and environmental factors play a part in health disparities between ethnies in America. Minorities report lower levels of physical activity (PA) than whites; this—most often—leads to negative outcomes, but due to their (average) physiology, they can get away with doing “less” than other ethnies. In this article, I will look at studies on racial differences in physical functioning, what they mean, and what can be done about them.
Racial differences in physical functioning
Racial differences in self-rated health at similar levels of health exist (Spencer et al, 2009). Does being optimistic or pessimistic about health affect one’s outcomes? Using 2,929 HABC (Health, Aging, and Body Composition) participants, Spencer et al (2009) examined the relationship between self-rated health (SRH) and race, while controlling for demographic, psychosocial, and physical health factors. They found that whites were 3.7 times more likely than blacks to report good SRH.
Elderly blacks were more likely to be less educated, reported lower satisfaction with social support, and had lower scores on a physical battery test than whites. Further, black men and women were less likely to report that walking a quarter mile was “easy”, implying (1) lower endurance and (2) weaker leg muscles.
Blacks were also more likely to report higher personal mastery:
Participants were asked whether they agreed or disagreed with the following statements: “I often feel helpless in dealing with the problems of life” and “I can do just about anything I really set my mind to do,” with response categories of disagree strongly, disagree somewhat, agree somewhat, and agree strongly. (Spencer et al, 2009: 90)
Blacks were also more likely to report higher BMI and more chronic health conditions than whites. White men, though, were more likely to report higher global pain, but were older than black men in the sample. When whites and blacks of similar physical functioning were compared, whites were more likely to report higher SRH. Health pessimists were found to be at higher risk of poor health.
Vazquez et al (2018) showed that ‘Hispanics’ were less likely to report having mobility limitations than whites and blacks, even after adjustment for age, gender, and education. Blacks, compared to non-‘Hispanic’ whites, were more likely to have limitations on activities of daily living (ADL) and instrumental activities of daily living (IADL). ADL limitations were assessed with questions like “Does participant receive help or supervision with personal care such as bathing, dressing, or getting around the house because of an impairment or a physical or mental health problem?” and IADL limitations with “Does participant receive help or supervision using the telephone, paying bills, taking medications, preparing light meals, doing laundry, or going shopping?” (Vazquez et al, 2018: 4). They also discuss the so-called “Hispanic paradox” (which I have discussed) but could not come to a conclusion on the data they acquired. Nonetheless, ‘Hispanic’ participants were less likely to report mobility issues, while blacks were more likely than whites to report significant difficulties with normal activities of daily living.
Araujo et al (2010) devised a lower-extremities chair test: how quickly one can stand and sit in a chair; along with a walking test: the time it takes to walk 50 feet. Those who could not complete the chair test were given a score of ‘0’. Overall, the composite physical function (CPF) score for blacks was 3.45, for ‘Hispanics’ it was 3.66, and for whites, it was 4.30. This shows that older whites were stronger—in the devised tests—and that into older age whites are more likely to not need assistance for everyday activities.
This is important because differences in physical functioning between blacks and whites can explain differences in outcomes one year after having a stroke (Roth et al, 2018). This makes sense, knowing what we know about stroke, cognitive ability and exercise into old age.
Shih et al (2005) conclude:
… a nationally representative study of the US population, indicate that among older adults with arthritis: (1) racial disparities found in rates of onset of ADL [activities of daily living] limitations are explained by differences in health needs, health behaviors, and economic resources; (2) there are race-specific differences in risk factors for the onset of ADL limitations; and (3) physical limitations are the most important risk factor for onset of ADL limitations in all racial and ethnic groups.
Safo (2012) showed that, out of whites, blacks, and “Hispanics”, blacks reported the most (low back) pain and the worst role-functioning and overall physical-functioning scores. Lavernia et al (2011) also found that racial/ethnic minorities were more likely to report pain and have lower physical functioning after having a total knee arthroplasty (TKA) or total hip arthroplasty (THA). They found that blacks and ‘Hispanics’ were more likely to report pain, decreased well-being, and a lower physical-functioning score, effects which were magnified specifically in blacks. Blacks were more likely to report higher levels of pain than whites (Edwards et al, 2001; Campbell and Edwards, 2013), while Kim et al (2017) showed that blacks had lower pain tolerance and higher pain ratings. (Read Pain and Ethnicity by Ronald Wyatt.)
Sarcopenia is the loss of muscle tissue that occurs as a natural part of the aging process. Sarcopenia—and sarcopenic obesity (the co-occurrence of obesity and age-related muscle loss)—shows racial/ethnic/gender differences, too. “Hispanics” were the most likely to have sarcopenia and sarcopenic obesity, and blacks were the least likely to acquire those two maladies (Du et al, 2018). They explain why sarcopenic obesity may be higher in ‘Hispanic’ populations:
One possibility to explain the higher rates of sarcopenia and SO in the Hispanic population could be the higher prevalence of poorly controlled chronic disease, particularly diabetes, and other health conditions.
We were surprised to find that Hispanic adults had higher rates of sarcopenia and SO [sarcopenic obesity]. One possible explanation could be the disparity in mortality rates among ethnic populations. Populations that have greater survival rates may live longer even with poorer health and thus have greater chance of developing sarcopenia. Alternatively, populations which have lower survival rates may not live long enough to develop sarcopenia and thus may identify with lower prevalence of sarcopenia. This explanation appears to be supported by the results of our study and current mortality statistics; NH Blacks have the highest mortality rate, followed by NH Whites, and lastly Hispanics.
Differences in physical activity could, of course, lead to differences in sarcopenic obesity. Physical activity leads to an increase in testosterone in lifelong sedentary men (Hayes et al, 2017), and men with high physical activity were more likely to have high testosterone than men with low physical activity—a difference not observed between the groups that were on a calorie-restricted diet (Kumagai et al, 2016). Kumagai et al (2018) also showed that vigorous physical exercise leads to increases in testosterone in obese men:
We demonstrated that a 12-week aerobic exercise intervention increased serum total testosterone, free testosterone, and bioavailable testosterone levels in overweight/obese men. We suggest that an increase in vigorous physical activity increased circulating testosterone levels in overweight/obese men.
(Though see Hawkins et al, 2008, who show that only SHBG and DHT increased, with no increase in testosterone.)
So the chain of reasoning is clear: exercise can increase testosterone levels in obese subjects, and higher levels of testosterone are associated with lower levels of adipose tissue, since adequate levels of steroid hormones are needed to keep adipose tissue low (Mammi et al, 2012). Therefore, if physical activity is increased, then levels of obesity and sarcopenic obesity should decrease in those populations.
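The inference here has the form of a simple hypothetical syllogism, which can be sketched as follows (the letters are placeholders of my own choosing, purely to display the shape of the argument):

```latex
% E = physical activity increases, T = testosterone rises,
% A = adipose tissue (and hence obesity/sarcopenic obesity) falls
\begin{align*}
  &\text{P1: } E \rightarrow T \\
  &\text{P2: } T \rightarrow A \\
  &\text{C: }\; E \rightarrow A
\end{align*}
```

The conclusion follows by chaining the two conditionals; the empirical weight of the argument rests entirely on whether P1 and P2 hold in the cited studies.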
Racial differences in physical functioning exist, and these differences have grave consequences for certain events, especially after a stroke. Differences in physical functioning/activity cause differences in sarcopenia/sarcopenic obesity in different ethnies. This can be ameliorated by targeting at-risk groups with outreach. This type of research shows how differences in lifestyle between ethnies cause differences in physical activity—and thus physical functioning—as the years progress.
(Also read Evolving Human Nutrition: Implications for Public Health, specifically Chapter 8 on socioeconomic status and health disparities for more information on how and why differences like this persist between ethnies in America.)
Evolutionary Psychology (EP) is a discipline which purports to explain mental and psychological traits as adaptations—functional products of “natural selection”—which are genetically inherited/transmitted. Its main premises are that the human mind can be explained by evolution through natural selection and that the mind is “modular”—the so-called “massive modularity hypothesis” (see Pinker, 1997). EP purports that the mind is “a cluster of evolved information-processing mechanisms” with its main goal being “to characterize these Darwinian algorithms” (Sterelny and Griffiths, 1999: 336). The problem with EP, though, is that many of its “theories/hypotheses” are just speculation—what are termed “just-so stories” (Gould and Lewontin, 1979; Richardson, 2007; Nielsen, 2009; Fodor and Piattelli-Palmarini, 2010). In this article, I will discuss the massive modularity hypothesis, adaptationism, and the promises that EP makes as a whole.
The massive modularity hypothesis (MMH) proposes that the modules “for” mental processing evolved in response to “natural selection” (Samuels, 1998). To evolutionary psychologists, the mind is made up of different modules that were “selected for” different mental abilities. So, to evolutionary psychologists like Tooby and Cosmides, Pinker, et al, much of human psychology is rooted in the Pleistocene (i.e., Tooby and Cosmides’ 5th Principle that “our modern skulls house a stone age mind”). Evolutionary psychologists propose that the mind is made up of different, genetically influenced modules which were specifically selected to help our ancestors solve domain-specific problems.
Two principal arguments exist for the MMH. Argument (1)—called the optimality argument—is:
- There are adaptive problems in every environment; different adaptive problems in different environments require different solutions, and different solutions can be implemented by functionally distinct modules.
- Adaptive problems are selective pressures; for each unique pressure faced in the original evolutionary environment (OEE), there is a unique module which was selected to solve those—and only those—specific adaptive problems.
- Selective mechanisms can produce highly specialized cognitive modules.
- Therefore, since different adaptive problems require different solutions and different solutions can be implemented by functionally distinct modules, then there must exist differing modules in the human mind which were selected for in virtue of their contribution to fitness.
Or the argument could be:
- Domain-specific processes exist.
- These processes arose due to evolution.
- Therefore, these domain-specific processes that arose due to evolution have a genetic basis.
Tooby and Cosmides claim that distinct modules for certain adaptive problems in distinct environments are superior to a general-purpose cognitive mechanism at solving those problems. They argue that selection can produce different modules in the mind “for” different adaptive problems. Tooby and Cosmides put their argument in their own words as:
(1) As a rule, when two adaptive problems have solutions that are incompatible or simply different, a single solution will be inferior to two specialized solutions
(2) . . . domain-specific cognitive mechanisms . . . can be expected to systematically outperform (and hence preclude or replace) more general mechanisms
(3) Simply to survive and reproduce, our Pleistocene ancestors had to be good at solving an enormously broad array of adaptive problems—problems that would defeat any modern artificial intelligence system. A small sampling include foraging for food, navigating, selecting a mate, parenting, engaging in social exchange, dealing with aggressive threat, avoiding predators, avoiding pathogenic contamination, avoiding naturally occurring plant toxins, avoiding incest and so on
(4) [Therefore] The human mind can be expected to include a large number of distinct, domain-specific mechanisms (quoted from Samuels, 1998: 585-586)
Clearly, the assumption from Tooby and Cosmides is that specific modules for certain adaptive problems in the OEE are superior to general-purpose modules. Samuels (1998: 586) writes:
In the case of psychological traits, in order to use optimality considerations with any confidence one needs to know (a) what features were being optimized by the evolutionary process and (b) what range of phenotypes were available to natural selection. As a matter of fact, however, we have little knowledge about either of these matters.
The key point of the MMH is that, according to Tooby and Cosmides, we should expect the mind to consist of different modules which are “designed” to solve domain-specific problems. If we know what type of adaptive situations our ancestors faced, then we should be able to reconstruct the evolution of a trait by knowing its current functional use and “working backwards”—what is termed “reverse engineering”: inferring “function” from “cause” (see Richardson, 2007: chapter 2), inferring effect from relevant causes (see Richardson, 2007: chapter 3), and disentangling historical ancestry from history and structure (see Richardson, 2007: chapter 4).
As for their second argument:
- It is impossible for human psychology—that contains nothing but general-purpose mechanisms—to have evolved since such a system cannot be adaptive.
- Such a system cannot possibly have solved the adaptive problems faced by our ancestors in the evolutionary past.
- Therefore, the mind cannot possibly have evolved general-purpose mechanisms and had to have evolved different mental modules in order to carry out different tasks.
They defend the argument by stating that the domain-dependence of different errors is a cause of the evolution of different modules of the mind; that information crucial for adaptive behavior cannot be learned by using only domain-general systems; and that many adaptive problems are highly complex and unlikely to have been solved by general-purpose mechanisms. Therefore, the mind must be modular, since modularity can account for domain-specific problems—while, according to Tooby and Cosmides, general-purpose mechanisms cannot. The argument, though, fails to provide us with any reason to accept the claim that the mind is mostly—or entirely—made up of Darwinian modules which were kept around since they were targets of selection.
Clearly, evidence for the modularity of mind is lacking—as is the evidence that reverse engineering “works” for the purpose intended.
Lloyd (1999: 224) writes:
Given these difficulties – well-known especially since Konrad Lorenz and Nico Tinbergen’s pioneering experiments on animal behavior – it is not scientifically acceptable within evolutionary biology to conclude that, because a given pattern of responses contributes to evolutionary success, then there is some ‘organ’ (or part of the brain) producing such a pattern, that is therefore an adaptation (see Williams 1966). This is because the ‘organ’ or ‘module’ may not actually exist as a biologically real trait, and even if it does, its current function may or may not be the same as the past function(s).
Sterelny and Griffiths (1999: 342) write that “… evolutionary psychology has bought into an oversimplified view of the relationship between an evolving population and its environment, and has prematurely accepted a modular conception of the mind.”
Tooby and Cosmides (1992) coined the phrase “Standard Social Science Model” (SSSM) in order to differentiate their EP model (the “integrated causal model”; ICM) from it. According to Tooby and Cosmides (1992), the basis of the SSSM is to posit completely general-purpose cognitive mechanisms and to deny any type of nativist modules whatsoever. Therefore, according to Tooby and Cosmides’ characterization of their so-called “opposition”, interesting differences between groups—and, of course, individuals—are due completely to cultural conditioning, with absolutely no nativist elements, since there are only general-purpose mechanisms. Differences between individuals, according to the SSSM, are cultural products—differences in socialization cause individual differences.
Richardson (2007: 179) writes:
Tooby and Cosmides’ portrayal [of the SSSM] is very effective. It is also a piece of sophistry, offering a false dichotomy between a manifestly untenable view and their own. The alternative is one that sees no differences between individuals and no biological contribution to individual or social development. I think no serious figure embraces that view, since, perhaps, John Watson in the early twentieth century.
Tooby and Cosmides also say that “There is no small irony in the fact that [the] Standard Social Science Model’s hostility to adaptationist approaches is often justified through the accusation that adaptationist approaches purportedly attribute important differences between individuals, races and classes to genetic differences. In actuality, adaptationist approaches offer the explanation for why the psychic unity of humankind is genuine and not just an ideological fiction” (1992, 79).
Furthermore, David Buss claims that “Natural selection is the only prospect for explaining human nature” (Richardson, 2007: 182). (Whatever “human nature” is. See Nagel’s 2012 Mind and Cosmos for arguments that the mind cannot possibly have been naturally selected since evolutionary biology is a physical theory, and Fodor and Piattelli-Palmarini’s 2010 book What Darwin Got Wrong for the argument against natural selection as an explanatory mechanism in regard to trait fixation.)
Problems with the adaptationist paradigm
Adaptationism is a research programme in which, according to the Stanford Encyclopedia of Philosophy, “‘adaptationists’ view natural selection among individuals within a population as the only important cause of the evolution of a trait; they also typically believe that the construction of explanations based solely on natural selection to be the most fruitful way of making progress in evolutionary biology and that this endeavor addresses the most important goal of evolutionary biology, which is to understand the evolution of adaptations.”
Numerous problems exist with this programme, though—not least the claim that most (or all) important phenotypic traits are the product of evolution by natural selection. In their book Sex and Death: An Introduction to Philosophy of Biology, Sterelny and Griffiths (1999: 351) write:
Adaptive explanation is an inference from the current phenotype of an organism to the problems that organism faced in its evolutionary past. Obviously, that inference will be problematic if we do not have an accurate description of the current phenotype and its adaptive significance—of the solution that evolution actually produced. The inference from current adaptive importance to adaptation is problematic enough even when the adaptive and phenotypic claims on which it is based are uncontroversial (13.1). The inference is still more problematic when the nature of the phenotype and its adaptive importance are yet to be established.
This is not the main problem with the paradigm, though. The main problem is that all of these theories/hypotheses are “just-so stories”—“… an adaptive scenario, a hypothesis about what a trait’s selective history might have been and hence what its function may be” (Sterelny and Griffiths, 1999: 61). I’d also add that just-so stories are stories that cannot be verified independently of the data that they purport to explain—that is, there is no observation that can disconfirm the adaptationist “hypothesis”, and the only data that “proves” the hypothesis is the data it purports to explain. EP hypotheses are not testable; therefore, EP hypotheses are just-so stories.
Sterelny and Griffiths (1999: 338) “… agree with the central idea of evolutionary psychology, namely, that we should look for the effects of natural selection on the psychological mechanisms that explain our behaviors, rather than on those behaviors themselves.” I disagree, since it is not possible that “psychological mechanisms” can be selected.
What is the relationship between environment and adaptation? First, we need to think of some “problems” that exist in the environment. One example is mate choice: Should one be faithful to one’s partner? When should one abandon an old partner? When should one help one’s kin find partners? When and how should one punish infidelity? This problem, pretty obviously, is evidence against the idea that adaptations are explained by the problem to which the adapted trait is the solution (see David Buller’s 2005 book Adapting Minds for strong critiques of “reverse engineering”). If—and only if—a single cognitive device exists that guides a creature’s behavior with respect to the issues of mate choice is the issue a single-domain, not multi-domain, problem, even though there are different aspects of the same problem (see the questions above). The existence of said module is what would explain why we think of mate choice as a single problem.
Sterelny and Griffiths (1999: 342) are hopeful in EP’s quest to discover our shared “human nature”, “But both the objective and subjective obstacles to carrying out this program remain serious.” The adaptationist programme, however, is unfalsifiable. “Particular adaptive stories can be tested, as we discuss below, but Gould and Lewontin argue that this does not test the idea of adaptationism itself. Whenever a particular adaptive story is discredited, the adaptationist makes up a new story, or just promises to look for one. The possibility that the trait is not an adaptation is never considered” (Sterelny and Griffiths, 1999: 237).
Adaptationist explanations (EP is, mostly, nothing but adaptationist explanation) are not scientific since they cannot be falsified—EP hypotheses are not falsifiable, nor do they generate testable predictions. They only explain the data they purport to explain—meaning that all EP adaptationist explanations are just-so stories. (Also see Kaplan, 2002 for arguments against the adaptationist paradigm.)
Even those who are sympathetic to the EP research programme rightly point out its glaring flaws. These flaws—in my opinion—cannot be overcome. EP will always be “plausible and speculative” just-so stories that purport to explain the evolution of what, supposedly, are traits that were “selected for” in virtue of their contribution to fitness in the OEE. However, we do not (and cannot) know what the OEE was like—we would need a time machine. It is not possible for us to know the selective pressures that acted on our ancestors in the OEE. We do know that increased reproductive efficiency in the current day is not evidence that said trait was adaptive and selected in the OEE.
The mind is not modular; Tooby and Cosmides proposed a false dichotomy (their ICM vs SSSM) which is not valid (no one is a “blank slatist”, whatever that is); and the adaptationist paradigm is nothing but speculative just-so stories.
- For EP to be a valid research programme, EP hypotheses must generate testable, falsifiable predictions.
- EP cannot generate testable, falsifiable predictions (the hypotheses are inherently ad hoc).
- Therefore, EP is not a valid research programme.
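The argument above is an instance of modus tollens, and its validity can be displayed schematically (the letters are placeholders of my own, introduced only to show the form):

```latex
% V = "EP is a valid research programme"
% F = "EP hypotheses generate testable, falsifiable predictions"
\begin{align*}
  &\text{P1: } V \rightarrow F \\
  &\text{P2: } \neg F \\
  &\text{C: }\; \neg V
\end{align*}
```

Since the form is valid, anyone who rejects the conclusion must reject P1 (deny that falsifiability is required for a valid research programme) or P2 (show that EP hypotheses do generate testable, falsifiable predictions).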
There is no reason at all to accept any just-so story, since these adaptive explanations cannot produce evidence that the trait was not a byproduct, was not due to genetic drift, etc. Therefore, EP is not a scientific enterprise; it only tells “plausible”, speculative just-so stories: “… I view evolutionary psychology as more speculation than science. The conclusion I urge is, accordingly, skeptical. Speculation is just that: speculation. We should regard it as such. Evolutionary psychology as currently practiced is often speculation disguised as results. We should regard it as such” (Richardson, 2007: 25). This is the view that should be accepted in the mainstream, since there can be no evidence for the speculative stories of EP.
At least three arguments establish the existence and reality of biological race:
Argument (1) from Michael Hardimon’s (2017) book “Rethinking Race: The Case for Deflationary Realism” (The Argument for the Existence of Minimalist Races, see Chapters 2, 3, and 4):
The conditions of minimalist racehood are as follows:
(C1) … a group, is distinguished from other groups of human beings by patterns of visible physical features
(C2) [the] members are linked by a common ancestry peculiar to members of that group, and
(C3) [they] originate from a distinctive geographic location (Hardimon, 2017: 31).
This is the argument to prove the existence of minimalist races:
P1) There are differences in patterns of visible physical features which correspond to geographic ancestry
P2) These patterns are exhibited between real groups, existing groups (i.e., individuals who share common ancestry)
P3) These real, existing groups that exhibit these physical patterns by geographic ancestry satisfy the conditions of minimalist race
C) Therefore race exists and is a biological reality
Argument (2) from Michael Hardimon’s (2017) book “Rethinking Race: The Case for Deflationary Realism” (The Argument for the Existence of Populationist Races, see Chapters 5 and 6):
P1) The five populations demarcated by Rosenberg et al (2002) are populationist races; K = 5 demarcates populationist races.
P2) Populationist race=minimalist race.
P3) If populationist race=minimalist race, then everything from showing that minimalist races are a biological reality carries over to populationist races.
P4) Populationist races capture differences in genetic variation between continents, and this genetic variation is responsible for the distinctive patterns of visible physical features—corresponding to geographic ancestry—of populations which belong to biological lines of descent initiated by geographically isolated founding populations.
C) Therefore, since populationist races=minimalist races, and visible physical features which correspond to geographic ancestry are genetically transmitted by populations who belong to biological lines of descent initiated by reproductively isolated founding populations, populationist races exist and are biologically real.
Argument (3) from Quayshawn Spencer’s (2014) paper “A Radical Solution to the Race Problem” (The argument for the Existence of Blumenbachian Populations):
P1) The term “race” in America refers to biologically real entities; when speaking of race in America, Americans defer to the US Census Bureau who defers to the Office of Management and Budget (OMB).
P2) The OMB refers to race as “sets of” categories, while considering “races” to have 5 members, which correspond to the five major geographic regions.
P3) Rosenberg et al show that, at K = 5, meaningful, though small (~4.3 percent), genetic variation exists between continental populations.
C) Since Americans defer to the US Census Bureau who defers to the OMB, and the OMB refers to race as “sets of” categories which then correspond to five clusters found by Rosenberg et al’s (2002) analysis, race (what Spencer, 2014 terms “Blumenbachian populations”) must exist, though “race” is both socially constructed and biologically real.
Put another way, Spencer’s (2014) argument could also be:
P1) The US meaning of “race” is a referent, which refers to the discourse used by the US Census Bureau; the US Census Bureau refers to the discourse used by the Office of Management and Budget (OMB).
P2) The referent of “race”, in US ontology, refers to a set of human population groups, not a biological kind (sets of human population groups as denoted by the OMB), which refer to “Africans”, “Caucasians”, “East Asians”, “Native Americans”, and Pacific Islanders/Oceanians.
P3) The US meaning of race is both biologically real and socially constructed; Americans refer to real, existing groups when they talk about race.
C) If the US meaning of race is a referent which refers to the discourse used by the US Census Bureau and they refer to the OMB who discuss “sets of” population groups, then when Americans talk about race they talk about Blumenbachian partitions, since race is both biologically real and socially constructed.
The claim “Race exists” is now established. Note how Argument (1) establishes the claim that “races” are real, existing groups which are phenotypically distinct populations with differing geographic ancestry. Note how Argument (2) establishes the claim that populationist race = minimalist race and that “races” are groups of populations that exhibit a distinctive pattern of genetically transmitted phenotypic characters which correspond to that group’s geographic ancestry, belonging to a biological line of descent which was initiated by a geographically separated and reproductively isolated founding population. (This definition of “race”—a subdivision of Homo sapiens—is the best I’ve come across so far.) Finally, note how Argument (3) establishes the claim that race, in the American sense, is both biologically real and socially constructed. All three arguments are sound and logically valid.
Now, which groups fall into which of the five racial categories?
Caucasians denote a wide range of groups: Europeans, MENA (Middle Eastern/North African) peoples, and Indians. Indians are a very diverse group, racially speaking, with “Caucasoids”, “Mongoloids” and “Australoids” (Australoid here meaning Pacific Islander/Oceanian) (see Kashyap et al, 2006 for an overview of the ethnic, linguistic and geographic affiliations of the Indians in the study). Ashkenazi Jews are taken to be a specific race in today’s modern racial ontology; however, Ashkenazi Jews do not exhibit a distinctive pattern of genetically transmitted phenotypic characters which corresponds to their geographic ancestry. They do represent a “geographically isolated and reproductively isolated founding population”, but the fact that they do not exhibit a distinctive pattern of genetically transmitted phenotypic characters means they are not a race, according to Arguments (1) and (2). Ashkenazi Jews are Caucasian, not their own race. Of course, skin color does not denote race; it is only one marker used to infer which groups are races.
Africans comprise all of Sub-Saharan Africa. Africa has the most genetic diversity in the human species (see Campbell and Tishkoff, 2010). Africans, in general, have long, slim bodies with a broad nose, dark skin, and kinky hair (lip size differs based on the ethny in question). There are over 3,000 different ethnic groups in Africa, all of whom comprise the same race. The fact that Africans have the most genetic diversity does not necessarily mean that they are so phenotypically distinct that there are tens, hundreds, or thousands of races on the continent. One only needs to refer back to Arguments (1) and (2) to see that brash claims that “all Xs are Ys” don’t make any sense—especially with the arguments laid out above.
East Asians denote a minimalist and populationist race (Arguments (1) and (2)) and a Blumenbachian partition (Argument (3)). East Asians denote, obviously, those that derive from East Asia (Chinese, Japanese, Koreans, Vietnamese). These peoples are relatively short, on average, and have a distinct yellow-ish tint to their skin (which is why they are sometimes called “yellow”), epicanthic folds, and shorter limbs (being more likely to have the endomorphic phenotype).
Native Americans are derived from a Siberian population that crossed the Bering Land Bridge about 14kya. They then spread throughout the Americas, becoming the “Natives” we know today. They are what used to be termed “red” people, due to their skin color. Native Americans are derived from Siberians, who share affinities with East Asians. (This will be discussed in further depth below.) They have black hair and dark-ish skin. Populations that lived in the Americas before the post-1492 expansion are part of the Native American racial grouping.
The last racial grouping is Pacific Islanders. Spencer (2014: 1032) writes that we can define Oceanians (Pacific Islanders):
as the most inclusive human population born from East Asians in Oceania (Sahul and the Pacific Islands) and from the original human inhabitants of Oceania. Since Sahul was a single landmass composed of present-day Australia, New Guinea, and Tasmania 50,000–60,000 years ago, when humans first inhabited it, and since we know that the original human inhabitants of Oceania interbred to create modern Oceanians, and since temporal parts of populations are genealogically connected, it should be the case that most Oceanians have genealogical connections to the original peoples of some Pacific island. The only Oceanians who will not will be individuals who became Oceanian from interbreeding alone and Oceanians descended from indigenous peoples of Sahul but not indigenous peoples of a Pacific island (e.g., Aboriginal Australians). The final source of evidence comes from counterfactual cases. [Pacific Islanders and Australian Aborigines share a deep ancestry, see McEvoy et al, 2010.]
A group is in race X if and only if they share a pattern of visible physical features and common geographic ancestry. If they do not share a pattern of visible physical features which corresponds to common geographic ancestry, then they do not constitute a race. Keep this in mind for the next two sections.
Are Oceanians black?
One claim that gets tossed around a lot (by black nationalists) is the claim that Oceanians are black due to their skin color and certain phenotypic traits. But these traits could just as easily be explained by convergent evolution; they do not show that the two groups are the same race. If the claim were true, then Australian Aborigines would be black by proxy, since Australian Aborigines and Oceanians are the same race. The claim, though, holds no water. Just because two groups “look similar” (which I do not see), it does not follow that they are the same race, since other conditions need to be met in order to establish the claim that two separate groups belong to the same race.
Are Native Americans Mongoloid?
Last is the claim that Native Americans do not denote an actual racial grouping—that they are either Mongoloid or a sub-race of Mongoloids.
Many authors throughout history have presumed that Native Americans were Mongoloid. Franz Boas, for example, said that the Maya Indians were Mongoloid and that American populations had features most similar to Mongoloids, and were thus Mongoloid. Wikipedia has a great overview of the history of the “Mongoloid” terminology, with examples from authors throughout history. But that is irrelevant. Native Americans genetically transmit heritable phenotypic characters which correspond with their geographic ancestry, and they are genetically and geographically isolated population groups.
Although the claim that “Native Americans are Mongoloid” has been echoed for hundreds of years, a simple argument can be erected to take care of the claim:
P1) If Native Americans were East Asian/Mongoloid, then they would look East Asian/Mongoloid.
P2) Native Americans don’t look East Asian/Mongoloid, they have a distinct phenotype which corresponds to their geographic ancestry (See Hardimon’s minimalist/populationist race concepts).
C) Therefore, Native Americans are not East Asian/Mongoloid.
Establishing the claim that Native Americans are not East Asian/Mongoloid is simple. Some authors may claim that, since the two groups look similar (whatever that means; they don’t look similar to me), they are the same race. But according to Arguments (1) and (2), Native Americans are a separate race and not a sub-race of East Asians/Mongoloids, since they have a distinct phenotype. Argument (3) further establishes that they are a separate race on the basis that they form a distinct cluster in clustering analyses (Rosenberg et al, 2002); and since Americans defer to the US Census Bureau, and the US Census Bureau defers to the OMB, which discusses sets of populations, when Americans talk about race they talk about Native Americans as separate from East Asians/Mongoloids.
Generally, they have distinct skin colors (of course, skin color does not equal race, but it is a big tell), though both groups have similar straight black hair. They are, in my opinion, simply too phenotypically distinct to be called the same race as Mongoloids/East Asians. For the claim “Native Americans are Mongoloid/East Asian” to be true, Native Americans would need to satisfy P1 in Argument (1) and P4 in Argument (2). They satisfy neither. Therefore, Native Americans are not Mongoloid/East Asian.
The claim “Race exists and is a biological reality” is clearly established by three sound, valid arguments: two from Hardimon (2017, chapters 2-6) and one from Spencer (2014). These arguments show, using the latest genetic clustering studies, that races, as classically defined, do indeed exist and that our old views of race from hundreds of years ago were largely correct. These arguments establish the existence of the old folk-racial categories. Races have distinct phenotypes which are genetically transmitted and correlated with geographic ancestry. Some may claim that “Oceanians are black” or that “Native Americans are Mongoloid”, but these claims do not hold. The two groups in question are phenotypically distinct and come from unique geographic locations; therefore they are not part of the races that some purport them to be.
Blacks vs. whites: which race is stronger? Unfortunately, actual studies of actual strength lifts between races are few and far between (and when races are compared, it is usually on an ancillary movement such as the quadriceps extension; see, for example, Goodpaster et al, 2006). There is, though, one study comparing whites and blacks on a major lift: the bench press. The paper has numerous problems, most of which prevent me from taking it seriously. I’ve written about this paper in the past, but this article will be much more comprehensive.
When searching for racial comparisons in physical strength, Boyce et al (2014) is one of the only papers one will come across that looks at racial differences in strength on one specific (main) lift. The authors used a longitudinal design: they had the police officers’ bench press numbers during their first week on the force and then had data from 12.5 years later to assess racial and gender differences in the bench press. They assessed strength scores on the bench press, bench press/lean mass, and bench press/body mass.
They assessed 327 police officers: 30 women (13 black, 17 white) and 297 men (41 black and 238 white). Officers who could not be classified as either white or black were removed from the sample. The average number of years between the test at recruitment and the follow-up was 12.5 +/- 2 years, while the mean age at initial recruitment was 26.4 +/- 3.4 years. The officers were mostly college-educated and were low-middle to upper-middle class.
When the recruits were considered for the job, they had their body mass and percent body fat measured (with skinfold calipers, which will be covered below) and did a 1RM (one-rep max) on the bench press to assess physical strength. The physical fitness battery for incoming officers was as follows (in order): body fat, sit-and-reach flexibility (to assess lower-body/hamstring flexibility), 1RM bench press, and finally a 1.5-mile run. For in-service officers, the battery was body fat, a treadmill test, the bench press, and muscular endurance (either curls or sit-ups; curls are contraindicated when performed with a straight bar, while sit-ups are contraindicated since they put unneeded strain on the lower back).
All groups, when combined after 12.5 years, had significant mean gains in strength which were accompanied by gains in body mass, percent fat mass, and lean mass. Black men, at initial recruitment, weighed 187 pounds and benched 210 pounds; white men weighed 180 pounds and benched 185 pounds. Black men benched 1.12 times their body weight whereas white men benched 1.027 times their body weight. At the follow-up assessment, black males weighed 223 pounds and benched 240 pounds. White males weighed 205 pounds and benched 215 pounds. Black males benched 1.07 times their body weight whereas white males benched 1.05 times their body weight. (To find out what percentage of body weight one lifts, take weight lifted and divide it by the weight of the individual.)
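The pound-for-pound figures above follow directly from that division; a quick sketch (using the male numbers reported in the study) confirms them:

```python
def relative_strength(bench_lbs: float, body_weight_lbs: float) -> float:
    """Pound-for-pound strength: weight lifted divided by body weight."""
    return bench_lbs / body_weight_lbs

# Males, at initial recruitment and at the 12.5-year follow-up
print(round(relative_strength(210, 187), 3))  # black men, recruitment: 1.123
print(round(relative_strength(185, 180), 3))  # white men, recruitment: 1.028
print(round(relative_strength(240, 223), 3))  # black men, follow-up:   1.076
print(round(relative_strength(215, 205), 3))  # white men, follow-up:   1.049
```

(The study reports these as 1.12, 1.027, 1.07, and 1.05; the small discrepancies are just rounding.)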
Black women, on the other hand, weighed 130 pounds and benched 85 pounds at initial recruitment, whereas white women weighed 127 pounds and benched 82 pounds. Black women benched .653 times their body weight whereas white women benched .645 times their body weight. At the follow-up assessment, black women weighed 151 pounds and benched 98 pounds whereas white women weighed 141 pounds and benched 90 pounds. At the follow-up, black women benched .649 times their body weight whereas white women benched .638 times their body weight.
So there was no real difference between black and white female recruits/officers, but there was a difference between black and white male recruits/officers. Of course, due to higher levels of testosterone and other hormones compared to women, the males changed much more significantly from the initial recruitment. Figures 1 and 2 from Boyce et al (2014: 146) are important too.
Since the study ran over 12.5 years and the individuals were in their mid-20s at initial recruitment, this study is notable in that the individuals gained strength and lean mass well into their 30s. Testosterone is said to begin declining around age 25, though age-related declines in testosterone are not inevitable. It just seems to be the case because, around one’s mid-20s, life changes happen (marriage, kids) which may then cause lifestyle changes (weight gain and the onset of depression). Most of the testosterone decline can be explained by smoking, obesity, depression, and other health factors (Shi et al, 2013). So if these officers had good nutrition (and I do not see why they would not), they could, in effect, bypass most of the so-called age-related declines in testosterone (which are also related to marriage, having children, and obesity; Gray, 2011; Pollet, Cobey, and van der Meij, 2013; Mazur et al, 2013).
Gains in lean mass are related to gains in strength, and so, since there was considerable body mass gain and, consequently, lean mass gain, overall bench-press strength should have increased substantially. The strength gains, though, were negated when the weight lifted was divided by the weight of the lifters (pound-for-pound strength, which is what matters more). Black males had the greatest body mass gain over the 12.5-year period, which corresponded to a decrease in relative strength on the bench press.
Over the 12.5-year period, black males gained 36 pounds whereas white males gained 25 pounds. Although black males gained more weight over the period (and lost ground on bench press/body mass), blacks were still very slightly stronger than whites (1.07 compared to 1.05), which is not significant. Black and white females, on the other hand, showed no real differences on any of the scores that Boyce et al (2014) computed. Furthermore, black and white women in this study had similar strength gains and body mass/lean mass gains. Of course, this is a self-selected sample: black women are consistently noted to carry more fat mass than white women (see Rahman et al, 2009).
Both black and white males increased their bench press more over the 12.5-year period than black and white females did, which is due to their being men (higher levels of testosterone and other hormones, larger muscle cross-sectional area; Miller et al, 1993). The groups who were strongest when hired remained the strongest at the follow-up. So, Boyce et al (2014) conclude, the bench press can be used as an assessor of upper-body strength: since blacks, according to their study, are stronger than whites, “an upper body strength test such as a bench press will have little adverse impact on blacks, a racially protected group” (Boyce et al, 2014: 148). Though, for women, since they are not as physically strong as men, such a test will have an “adverse impact on female recruit and incumbent officers, a gender protected group, no matter if they are black or white” (Boyce et al, 2014: 148).
Now time for a few pitfalls (one of which is not the fault of the investigators): height was not assessed; skin-fold calipers were used to assess body fat; and a Smith machine bench press was used.
Height: Due to an ongoing investigation into the Charlotte Mecklenburg Police Department (for apparently discriminating against certain recruits on the basis of height), the heights of the officers at recruitment and at the follow-up were not noted. This is a problem. If height were known (along with a few more anthropometric variables), we could infer who had the somatotype conducive to bench pressing. Without height data, we cannot infer this. It is easier for people with shorter limbs to bench press, since the bar has a shorter path to travel, compared to people with longer arms, who have to move the weight a greater distance. Blacks are more likely to have longer arms and, as I have argued, this would impede them on the bench press and overhead press but would help in the deadlift: with longer arms and a shorter torso, it is easier to deadlift due to those body proportions.
Skin-fold calipers: Body fat was assessed using skin-fold calipers. Blacks have thinner skin folds than whites, and since the formulas for assessing body fat from skin folds are based on whites, it has been argued that black males need their own scale, given their thinner skin folds and differing levels of fat-free body mass (Vickery, Cureton, and Collins, 1988; Wagner and Heyward, 2000). So, since the body fat levels were off for blacks, Boyce et al’s (2014) bench press/lean mass measure is useless, because body fat was not assessed correctly. (I know that using calipers is cheaper and easier than sending everyone for a DXA scan, but the difference in body composition between blacks and whites should be known so that, at least when it comes to fat comparisons between races, bad methods don’t get used and parroted.) Since there are well-known black-white differences in how skinfolds relate to body fat percentage/lean mass, it is, again, safe to toss out that part of the study (weight moved/lean mass): there are huge flaws in assessing body fat in blacks compared to whites through conventional measures.
Smith machine bench press: This is, perhaps, the most important pitfall. I can think of a few reasons why this machine was used: (1) you don’t really need to teach someone how to get into proper position to grab the bar, and (2) the bar is on a set path (though people with different anthropometric measures may find the machine uncomfortable, which would then affect overall strength). But the choice of machine matters. Saeterbakken, Tillaar, and Fimland (2011) assessed three different chest press exercises: the Smith machine bench press, the barbell bench press, and the dumbbell bench press. Those in the study had the highest 1RM on the barbell bench press, followed by the Smith machine, and were weakest on the dumbbell bench press. The biceps brachii is used for stabilization, and they found that EMG activity in the biceps brachii increased with stability requirements (dumbbell > barbell > Smith). This is because, on the two exercises that are not machine-assisted, the muscles need to stabilize the free weight; since more muscles are recruited to stabilize the weight, one can, theoretically, move more weight. EMG activity was the same in the pectoralis major and anterior deltoids but differed in the triceps and biceps brachii. The prime movers (agonists) of the bench press are the anterior deltoids, triceps, and pecs/serratus; the antagonists are the posterior deltoids, the biceps, and the lats/rhomboids.
The load lifted on the barbell bench press was about three percent higher than on the Smith machine. Since stability requirements are low with the Smith, and the bar is on a set path that cannot be changed, one might expect to be able to move more weight on the Smith machine (which is my personal experience and the experience of many people I have worked with) than on the dumbbell and regular bench press; yet the barbell bench press still came out about three percent higher. The activity of the pectoralis and the anterior deltoid was lower during the eccentric phase (think the negative portion of the lift) on the Smith machine compared to dumbbell and barbell pressing, which is, again, explained by the lack of stabilizing muscles used on the Smith machine bench press. So this small study (n=12) shows that there are differences in 1RM between the three lifts studied and differences in the neuromuscular activity of the flexor/extensor muscles of the arm, but no difference in the prime movers of the lift (such as the pectoralis major). In the descending phase of the lift there was less EMG activation, which indicates that the eccentric phase requires less neural drive.
The Smith machine used in Boyce et al (2014) was an Atlantis Angled Smith Machine E-155, which I have used personally (and have had hundreds of people use). In my personal experience, weight moved on the Smith is considerably different when compared to a regular bench press, due to the fact that one does not have to stabilize the bar with certain muscles. I will keep an eye out for more Smith machine/bench press/dumbbell press studies in the future, but, from personal experience, one is able to lift more on a Smith than a regular bench.
This is one of the only studies of its kind: assessing racial differences in strength on a major lift. Boyce et al (2014) found that, although blacks had a sharper decrease in pound-for-pound strength, they still kept their slight strength advantage over the 12.5 year period. Both black and white women had similar strength levels and gains in strength and lean mass over the 12.5 year period. Lean mass gain over the period was related to strength gains on the bench; but, as pointed out, they did not correctly assess lean mass for blacks, since calipers (used to test skin folds) are not useful for blacks in the case of figuring out lean mass/body fat levels (Vickery et al, 1988; Wagner and Heyward, 2000).
This study is useless to me. Blacks are not stronger than whites; anthropometric variables play a huge role in strength differences and, due to these differences, blacks are not stronger than whites on certain lifts, as I have documented.
Racial differences in body fat are clear to the naked eye: black women are more likely to carry more body fat than white women, and Mexican American women are also more likely to carry more body fat than white women. Different races/ethnies (and the genders within them) have different formulas for assessing body fat through the use of skin-folds. The sites at which the skin is grasped differ by gender and race.
Body mass index (BMI) and waist circumference are overestimated in blacks, which means that they need different formulas to assess their BMI and adiposity/lean mass. Race-specific formulas/methods are needed to assess body fat and, along with it, disease risk, since blacks are more likely to be obese (black women, at least; it’s different with black American men with more African ancestry, see below). The fact of the matter is, when matched on a slew of variables, blacks had lower total and abdominal fat mass than whites.
This is even noted in Asian, black, and white prepubertal children. He et al (2002) show that sex differences in body fat distribution are present in children who have yet to reach puberty, and that body fat in Asian children differs from that in black and white children, varying by sex as well. Asian girls had greater gynoid fat by DXA scan only, with girls having greater gynoid fat than boys. Asian girls had lower adjusted extremity fat and gynoid fat compared to white and black girls. Asian boys, though, had lower adjusted extremity fat as shown by DXA (a gold standard in body fat measurement) compared to whites, but greater gynoid fat than whites and blacks.
Vickery, Cureton, and Collins (1988), Wagner and Heyward (2000), and Robson, Bazin, and Soderstrom (1971) show that there are considerable body composition differences between blacks and whites. These differences come down to diet, of course, but there is a genetic/physiologic component as well. Add to this the known fact that skin-fold testing does not yield good estimates in blacks, and that black American men with more African ancestry are less likely to be obese (see below).
Vickery, Cureton, and Collins (1988) argue that, if accurate estimates of body fat percentages are to be obtained, race-specific formulas need to be developed and used as independent variables to assess racial differences in body fat percentage. Differences in muscularity don’t seem to account for these skinfold differences, nor does greater mesomorphy. One possible explanation for differences in skinfold thickness is that blacks may store most of their body fat subcutaneously. (See Wagner and Heyward, 2000 for a review on fat patterning and body composition in blacks and whites.)
The often-used Durnin-Womersley formula predicts body fat just from skin folds. However, “The 1974 DW equations did not predict %BF(DXA) uniformly in all races or ethnicities” (Davidson et al, 2011). Truesdale et al (2016) even show that numerous formulas used to estimate percent body fat are flawed, including some formulas used on different races. Most of the equations tested led to starkly different conclusions. But this is based on NHANES data, and the only skin-fold data NHANES provides are the tricep and subscapular skinfolds, so there may still be more problems with all of the equations used to assess body fat percentage between races. (Also see Cooper, 2010.)
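To make the critique concrete, here is a minimal sketch of how a generic skinfold pipeline works: skinfold sum → body density (Durnin-Womersley form) → percent fat (Siri equation). The coefficients below are illustrative values for a single age/sex band, an assumption on my part; the published 1974 tables list different coefficients per age group and sex. The point the papers above make is that applying one such equation across races yields biased estimates.

```python
import math

def durnin_womersley_density(sum4_mm: float,
                             c: float = 1.1631, m: float = 0.0632) -> float:
    """Body density from the sum of four skinfolds (biceps, triceps,
    subscapular, suprailiac): D = c - m * log10(sum).
    c and m here are illustrative coefficients for one age/sex band;
    the 1974 tables specify them per age group and sex."""
    return c - m * math.log10(sum4_mm)

def siri_percent_fat(density: float) -> float:
    """Siri equation: %BF = 495 / D - 450."""
    return 495.0 / density - 450.0

# Two subjects with the same skinfold sum get the same %BF estimate,
# even if the skinfold-to-fat relationship differs between their races.
density = durnin_womersley_density(45.0)  # 45 mm summed across the four sites
print(round(siri_percent_fat(density), 1))
```

Because the same coefficients are applied to everyone, any systematic racial difference in subcutaneous fat storage feeds straight through to a biased percent-fat estimate.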
Klimentidis et al (2016) show that black men—but not black women—seem to be protected against obesity and central adiposity (fat gain around the midsection) and that race negatively correlated with adiposity. The combo of male gender and West African ancestry predicted low levels of adiposity compared to black Americans with less African ancestry. Furthermore, since black men and women have—theoretically—the same SES, then cultural/social factors would not play as large a role as genetic factors in explaining the differences in adiposity between black men and black women. Black men with more African ancestry had a lower WHR and less central adiposity than black men with less African ancestry. If we assume that they had similar levels of SES and lived in similar neighborhoods, there is only one reason why this would be the case.
Klimentidis et al (2016) write:
One interpretation is that AAs are exposed to environmental and/or cultural factors that predispose them to greater obesity than EAs. Possibly, some of the genes that are inherited as part of their West-African ancestry are protective against obesity, thereby “canceling out” the obesifying effects of environment/culture, but only in men. Another interpretation is that genetic protection is afforded to all individuals of African descent, but this protection is overwhelmed by cultural and/or other factors in women.
Black men do, as is popularly believed, prefer bigger women over smaller women. For example, Freedman et al (2004) showed that black American men were more likely to prefer bigger women. Black American men “are more willing to idealize a woman of a heavier body size, with more curves, than do their White American counterparts” (Freedman et al, 2004: 197). It is then hypothesized that black American men find these figures attractive (figures with “more curves” (Freedman et al, 2004: 197)) to protect against eating pathologies, such as anorexia and bulimia. So, it has been established that black men have thinner skin folds than whites, which leads to skewed lean mass/body fat readings, and that black men with more African ancestry are less likely to be obese. These average differences between races, of course, contribute to differing disease acquisition.
I have covered differences in body fat in a few Asian ethnies and have come to the obvious conclusion: Asians at the same height, weight, etc. as whites and blacks will have more adipose tissue on their bodies. They, too, like blacks and whites, have different areas that need to be assessed via skin folds to estimate body fat.
Henriques (2016: 29) has a table of equations for calculating estimated body density from skin-fold measures in various populations. Of interest are the ones for blacks or ‘Hispanics’, blacks or athletes, and blacks and whites. (The table is reproduced from NSCA, 2008, so the references are not in the back of the text.)
For black and ‘Hispanic’ women aged 18-55 years, the skin-fold sites to use are the chest, abdomen, triceps, subscapular, suprailiac, midaxillary, and thigh. For blacks or athletes aged 18-61 years, the sites are the same as before (but a different equation is used for body fat estimation). For white or anorexic women aged 18-55, only the triceps, suprailiac, and thigh are used. For black and white boys aged 6-17, only the triceps and the calf are used. It is the same for black and white girls, but, again, a different formula is used to assess body fat (Henriques, 2016: 29).
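The site selections in that table can be captured as a simple lookup. This is a sketch transcribing only the sites listed above; the per-group regression coefficients from Henriques/NSCA are not reproduced here, and the group labels are my own shorthand:

```python
# Skinfold measurement sites per demographic group, per the table in
# Henriques (2016: 29). Each group also has its own body-density
# equation (coefficients not reproduced here).
SKINFOLD_SITES = {
    "black or 'Hispanic' women, 18-55": [
        "chest", "abdomen", "triceps", "subscapular",
        "suprailiac", "midaxillary", "thigh",
    ],
    "blacks or athletes, 18-61": [  # same sites, different equation
        "chest", "abdomen", "triceps", "subscapular",
        "suprailiac", "midaxillary", "thigh",
    ],
    "white or anorexic women, 18-55": ["triceps", "suprailiac", "thigh"],
    "black/white boys, 6-17": ["triceps", "calf"],
    "black/white girls, 6-17": ["triceps", "calf"],  # same sites, different formula
}

def sites_for(group: str) -> list[str]:
    """Look up the skinfold sites to measure for a given demographic group."""
    return SKINFOLD_SITES[group]

print(len(sites_for("black or 'Hispanic' women, 18-55")))  # 7 sites
```

The structure makes the article’s point visible: the measurement protocol itself, not just the final equation, is population-specific.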
Morrison et al (2012) showed that white girls had a higher percent body fat than black girls at ages 9-12, but at every age after, black girls had a higher percent body fat (which is related to earlier menarche in black girls, since higher levels of body fat mean earlier puberty; Kaplowitz, 2008). Black girls, though, had higher levels of fat in their subscapular skin folds than white girls at all ages.
So, it seems, there are population-/race-specific formulas that need to be created to better assess body fat percentage in different races/ethnies and not assume that one formula/way of assessing body fat should be used for all racial/ethnic groups. According to the literature (some reviewed here and in Wagner and Heyward, 2000), these types of formulas are sorely needed to better assess health markers in certain populations. These differences in body fat percentage and distribution then have real health consequences for the races/ethnies in question.
FOXP2 is a so-called “gene for” language. The gene is a transcription factor, meaning that it controls the activity of other genes. Thus, changes to FOXP2 will change the activity of other genes as well, and the evolution of language in humans was thought to have hinged on mutations in FOXP2. Humans who have a single-point mutation in FOXP2 “have impaired speech and grammar, but not impaired language comprehension” (Mason et al, 2018: 403). The gene is found in numerous mammals (e.g., chimpanzees, gorillas, orangutans, rhesus macaques, and mice), but none of those mammals speak. The gene is expressed in the areas of the brain that affect motor functioning, including the coordination needed to create words.
Humans and mice differ by only three amino acids in FOXP2. Gorillas, chimps, and macaques have identical FOXP2 amino acid sequences, and mice differ from that shared sequence by only one amino acid. Two further amino acids differ between humans and the sequence shared by chimpanzees, gorillas, and macaques. Thus, this difference of two amino acids between humans and the other primates appears to have made it possible for language to evolve. Evidence was claimed for strong selective pressure on the two FOXP2 mutations, which allow the brain, larynx, and mouth to coordinate to produce speech. These two altered amino acids may change the ability of the FOXP2 transcription factor to be phosphorylated (proteins are either activated by phosphorylation and deactivated by dephosphorylation, or the reverse).
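The counts in the preceding paragraph can be sanity-checked with toy sequences. The six-residue strings below are made-up stand-ins, not real FOXP2 residues; only the pairwise difference counts mirror the text (chimp = gorilla = macaque; mouse differs from that shared sequence by one amino acid; human differs from it by two and from mouse by three):

```python
def aa_differences(a: str, b: str) -> int:
    """Count positions at which two equal-length sequences differ."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

# Toy sequences: only the pairwise difference counts are meaningful.
seqs = {
    "chimp":   "TNSNQA",
    "gorilla": "TNSNQA",
    "macaque": "TNSNQA",
    "mouse":   "TDSNQA",  # 1 change vs. the shared primate sequence
    "human":   "TNSSQN",  # 2 changes vs. the shared primate sequence
}

print(aa_differences(seqs["chimp"], seqs["macaque"]))  # 0
print(aa_differences(seqs["mouse"], seqs["chimp"]))    # 1
print(aa_differences(seqs["human"], seqs["chimp"]))    # 2
print(aa_differences(seqs["human"], seqs["mouse"]))    # 3
```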
Mason et al (2018: 403) write:
Comparative genomics efforts are now extending beyond primates. A role for FOXP2 in songbird singing and vocal learning has been proposed. Mice communicate via squeaks, with lost young mice emitting high-pitched squeaks, FOXP2 mutations leave mice squeakless. For mice and songbirds, it is a stretch to claim that FOXP2 is a language gene—but it is likely needed in the neuromuscular pathway to make sounds.
Above is Figure 18.17 from Mason et al (2018: 403). They write:
Comparisons of synonymous and nonsynonymous changes in mouse and primate FOXP2 genes indicate that changing two amino acids in the gene corresponds to the emergence of human language. Black bars represent synonymous changes; gray bars represent nonsynonymous changes.
But is that the whole story? Is FOXP2 really a “gene for” language? New results call this hypothesis into question.
In their paper No Evidence for Recent Selection at FOXP2 among Diverse Human Populations, Atkinson et al (2018) did not find evidence for recent positive or balancing selection. Atkinson et al (2018) conclude that they:
do not find evidence that the FOXP2 locus or any previously implicated site within FOXP2 is associated with recent positive selection in humans. Specifically, we demonstrate that there is no evidence that the original two amino-acid substitutions were targeted by a recent sweep limited to modern humans <200 kya as suggested by Enard et al. (2002) … Any modified function of the ROI does not appear to be related to language, however, as modern southern African populations tolerate high minor allele frequencies with no apparent consequences to language faculty. We do not dispute the extensive functional evidence supporting FOXP2’s important role in the neurological processes related to language production (Lai et al., 2001, MacDermot et al., 2005, Torres-Ruiz et al., 2016). However, we show that recent natural selection in the ancestral Homo sapiens population cannot be attributed to the FOXP2 locus and thus Homo sapiens’ development of spoken language.
So the two mutations in exon 7 of FOXP2 weren’t selected and are not responsible for human language. Most likely the accelerated rate is due to loss of function (LoF) (null allele).
The gene was originally discovered in a family with a history of speech and language disorders (Lai et al, 2001). This “speech gene” was also found in Neanderthals in 2007 (see Krause et al, 2007). Thus, the modifications to FOXP2 occurred before humans and Neanderthals diverged.
So Atkinson et al (2018) found that the so-called sweep on FOXP2 within the last 200 kya was a statistical artifact caused by lumping Africans together with Caucasians and other populations. Of course, language is complicated, and no single gene will explain the emergence of human language.
This is a just-so story—that is, an ad hoc hypothesis. Humans had X, others didn’t have X or had a different form of X; therefore X explains human language faculties.
Atkinson et al’s (2018) “results represent a substantial revision to the adaptive history of FOXP2, a gene regarded as vital to human evolution.”
High evolutionary constraint among taxa but variability within Homo sapiens is compatible with a modified functional role for this locus in humans, such as a recent loss of function.
Therefore, this SNP must not be necessary for language function as both alleles persist at high frequency in modern human populations. Though perhaps obvious, it is important to note that there is no evidence of differences in language ability across human populations. (Atkinson et al, 2018)
This is another just-so story (Gould and Lewontin, 1979; Lloyd, 1999; Richardson, 2007; Nielsen, 2009) that seems to have bitten the dust. Of course, the functionality of FOXP2 and its role in the neurologic processes related to language are not in dispute; what is disputed (and refuted) is the selectionist just-so story. Selectionist explanations are necessarily ad hoc. Thus, recent natural selection in our species cannot be attributed to FOXP2, nor, along with it, to our language capabilities.
There is a similar objection, not for FOXP2 and selectionist hypotheses, but for the Lactase gene. Nielsen (2009) puts it succinctly:
The difference in lactose intolerance among human geographic groups, is caused by a difference in allele frequencies in and around the lactase gene (Harvey et al. 1998; Hollox et al. 2001; Enattah et al. 2002; Poulter et al. 2003). … This argument is not erected to dispute the adaptive story regarding the lactase gene, the total evidence in favor of adaptation and selection related to lactose tolerance is overwhelming in this case, but rather to argue that the combination of a functional effect and selection does not demonstrate that selection acted on the specific trait in question. … Although the presence of selection acting on genes underlying a phenotypic trait of interest does help support adaptive stories, it does not establish that selection acted directly on the specific trait of interest.
Even if there were evidence of positive selection at the FOXP2 locus, we could not logically conclude that selection acted on language itself; a functional effect combined with selection does not demonstrate that selection acted on the specific trait in question. Just-so stories (ad hoc hypotheses) “sound good”, but that’s only because they are necessarily true: one can have all the data one wants, then think up any adaptive story to explain the data, and the story will be necessarily true. Therefore, selectionist hypotheses are inherently ad hoc.
In conclusion, another selectionist hypothesis bites the dust. Never mind the fact that, even if FOXP2 had supposedly been “selected-for”, there would still be the problem of free riders (Fodor and Piattelli-Palmarini, 2010). That is, “selection” cannot “select-for” fitness-enhancing traits if/when they are coextensive with other traits; there is no way for selection to distinguish between coextensive traits, and thus it does not explain trait fixation (in this case, the fixation of FOXP2). Ad hoc hypotheses are necessarily true; that is, they explain the data they purport to explain and only that data. These new results show that there is no support for positive selection at the FOXP2 locus.