NotPoliticallyCorrect


Category Archives: Evolution


Gene-Selectionism vs. Developmental Systems Theory

2300 words

Two dominant theories exist in regard to development: the “gene’s eye view”—gene selectionism (GS)—and the developmental view—developmental systems theory (DST). GS proposes that there are two fundamental processes in evolution: replication and interaction. Replicators (a term coined by Dawkins) are anything that is copied into the next generation, whereas interactors (vehicles) are things that exist only to ensure the replicators’ survival. Thus, Dawkins (1976) proposes a distinction between the “vehicle” (the organism) and its “riders”/replicators (the genes).

Gene selectionism

Gene selectionists propose a simple hypothesis: evolution occurs through the differential survival of genes, the main premise being that the “gene” is “the ultimate, fundamental unit of natural selection.” Dusek (1999: 156) writes that “Gene selection claims that genes, not organisms, groups of organisms or species, are selected. The gene is considered to be the unit of selection.” The gene selectionist view is best—and most popularly—put in Richard Dawkins’ seminal book The Selfish Gene (1976), in which he posits that genes “compete” with each other, and that our “selfish actions” are the result of our genes attempting to replicate into the next generation, relegating our bodies to disposable “vehicles” that only house the “replicators” (or “drivers”).

Being a gene selectionist, though, does not necessarily make one a genetic determinist (both views will be discussed below). Gene selectionists are committed to the view that genes make a distinctive contribution toward building interactors. Dawkins (1982) claims that genetic determinism is not a problem in regard to gene selectionism. Replicators (genes) have a special status for gene selectionists. Gene selectionists argue that adaptive evolution occurs only through cumulative selection, and that only the replicators persist through the generations. Gene selectionists do not see organisms as replicators, since genes—not organisms—are what is replicated, on this view.

The gene selectionist view (Dawkins’ 1976 view) can also be said to apply what Okasha (2018) terms “agential thinking”. “Agential thinking” is “treating an evolved organism as if it were an agent pursuing a goal, such as survival or reproduction, and treating its phenotypic traits, including its behaviour, as strategies for achieving that goal, or furthering its biological interests” (Okasha, 2018: 12). Dawkins—and other gene selectionists—treat genes as if they have agency, speaking of “intra-genomic conflict”, as if genes are competing with each other (sure, it’s “just a metaphor”, see below).

Okasha (2018: 71) writes:

To see how this distinction relates to agential thinking, note that every gene is necessarily playing a zero-sum game against other alleles at the same locus: it can only spread in the population if they decline. Therefore every gene, including outlaws, can be thought of as ‘trying’ to outcompete alleles, or having this as its ultimate goal.

Selfish genes also have an intermediate goal—maximizing fitness—which they pursue through expression in the organismic phenotype.
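To make the zero-sum point concrete, here is a minimal sketch (my illustration, not Okasha’s model) of selection at one locus with two alleles: because the frequencies must sum to 1, any gain for one allele is exactly the other’s loss.

```python
# Minimal one-locus, two-allele (haploid) selection sketch. The fitness
# values are hypothetical; the point is only that p(A) + p(B) = 1, so
# allele A can spread only if allele B declines -- a zero-sum game.

def step(p, w_a=1.10, w_b=1.00):
    """One generation of selection on allele A at frequency p."""
    mean_w = p * w_a + (1 - p) * w_b
    return p * w_a / mean_w  # new frequency of allele A

p = 0.01  # starting frequency of allele A
for _ in range(100):
    p = step(p)

# q = 1 - p at every step: every gain for A is exactly B's loss.
print(f"after 100 generations: p(A) = {p:.3f}, p(B) = {1 - p:.3f}")
```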

Thus, according to Okasha (2018: 73), “… selfish genetic elements have phenotypic effects which can be regarded as adaptations, but only if we apply the notions of agent, benefit, and goal to genes themselves”, though “… only in an evolutionary context [does] it [make] sense to treat genes as agent-like and credit them with goals and interests.” It does not, however, make sense to treat genes as even “agent-like” and credit them with goals and interests, since goals and interests can only be attributed to humans.

Other genes have as their intermediate goal to enhance the fitness of their host organism’s relatives, by causing altruistic behaviour [genes can’t cause altruistic behavior; it is an action]. However, a small handful of genes have a different intermediate goal, namely to increase their own transmission in their host organism’s gametes, for example, by biasing segregation in their favour, or distorting the sex-ratio, or transposing to new sites in the genome. These are outlaws, or selfish genetic elements. If outlaws are absent or are effectively suppressed, then the genes within a single organism have a common (intermediate) goal, so will cooperate: each gene can only benefit itself by benefiting the whole organism. Agential thinking then can be applied to the organism itself. The organism’s goal—maximizing its fitness—then equates to the intermediate goal of each of the genes within it. (Okasha, 2018: 72)

Attributing agential thinking to anything other than humans is erroneous, since genes are not “selfish.”

The selfish gene is one of the main theories that define the neo-Darwinian paradigm, and it is flat-out wrong. Genes are not ultimate causes, as the crafters of the neo-Darwinian Modern Synthesis (MS) propose; genes are resources in a dynamic system and can thus only be seen as causes in a passive, not an active, sense (Noble, 2011).

Developmental systems

The alternative to the gene-centric view of evolution is that of developmental systems theory (DST), first proposed by Oyama (1985).

The argument for DST is simple:

(1) Organisms obviously inherit more than DNA from their parents. Since organisms can behave in ways that alter the environment, environments are also passed on to offspring. Thus, it can be said that genes are not the only things inherited; a whole developmental matrix is.

(2) Genes, according to the orthodox view of the MS, interact with many other factors for development to occur, and so genes are not the only things that help ‘build’ the organism. Genes could still play some “privileged” role in development, in that they “control”, “direct”, or “organize” everything else, but this is for gene selectionists to prove. (See Noble, 2012.)

(3) The common claim that genes contain “information” (that is, context-independent information) is untenable, since every account of how genes carry information about developmental outcomes applies equally to the other developmental factors. Genes therefore cannot be singled out as privileged causes in development.

(4) Other attempts to privilege genes—such as the claim that genes are copied more “directly”—are also mistaken: they draw a distinction between genes and other developmental factors, but the distinction fails.

(5) Genes, then, cannot be privileged in development, and are no different than any other developmental factor. Genes, in fact, are just passive templates for protein construction, waiting to be used by the system in a context-dependent fashion (see Moore, 2002; Schneider, 2007). The entire developmental system reconstructs itself “through numerous independent causal pathways” (Sterelny and Griffiths, 1999: 109).

DNA is not the only thing inherited, and the so-called “famed immortality of DNA is actually a property of cells [since] [o]nly cells have the machinery to correct frequent faults that occur in DNA replication.” The thing about replication, though, is that “DNA and the cell must replicate together” (Noble, 2017: 238). A whole slew of developmental tools are inherited and that is what constructs the organism; organisms are, quite obviously, constructed not by genes alone.

Developmental systems, as described by Oyama (1985: 49), do not “have a final form, encoded before its starting point and realized at maturity. It has, if one focuses finely enough, as many forms as time has segments.” Oyama (1985: 61) further writes that “The function of the gene or any other influence can be understood only in relation to the system in which they are involved. The biological relevance of any influence, and therefore the very ‘information’ it conveys, is jointly determined, frequently in a statistically interactive, not additive, manner, by that influence and the system state it influences.”

DNA is, of course, important, for without it there would be nothing for the cell to read (recall that the genome is an organ of the cell) and so no development would occur. But DNA is “information” about an organism only in the process of cellular functioning.

The simple fact of the matter is this: the development of organs and tissues is not directly “controlled” by genes but by the exchange of signals between cells. “Details notwithstanding, what is important to note is that whatever kinds of signals [a cell] sends out depends on the kind of signals it receives from its immediate environment. Therefore, neighboring cells are interdependent, and it is local interactions among cells that drive the developmental processes” (Kampourakis, 2017: 173).

The fact of the matter is that whether or not a trait is realized depends on the developmental processes (and the physiologic system itself) and the environment. Kampourakis, just like Noble (2006, 2012, 2017), pushes a holistic view of development and the system. Kampourakis (2017: 184) writes:

What genetics research consistently shows is that biological phenomena should be approached holistically, at various levels. For example, as genes are expressed and produce proteins, and some of these proteins regulate or affect gene expression, there is absolutely no reason to privilege genes over proteins. This is why it is important to consider developmental processes in order to understand how characters and disease arise. Genes cannot be considered alone but only in the broader context (cellular, organismal, environmental) in which they exist. And both characters and disease in fact develop; they are not just produced. Therefore, reductionism, the idea that genes provide the ultimate explanation for characters and disease, is also wrong. In order to understand such phenomena, we need to consider influence at various levels of organization, both bottom-up and top-down. This is why current research has adopted a systems biology approach (see Noble, 2006; Voit, 2016 for accessible introductions).

All this shows that developmental processes and interactions play a major role in shaping characters. Organisms can respond to changing environments through changes in their development and eventually their phenotypes. Most interestingly, plastic responses of this kind can become stable and inherited by their offspring. Therefore, genes do not predetermine phenotypes; genes are implicated in the development of phenotypes only through their products, which depends on what else is going on within and outside cells (Jablonka, 2013). It is therefore necessary to replace the common representation of gene function presented in Figure 9.6a, which we usually find in the public sphere, with others that consider development, such as the one in Figure 9.6b. Genes do not determine characters, but they are implicated in their development. Genes are resources that provide cells with a generative plan about the development of the organism, and have a major role in this process through their products. This plan is the resource for the production of robust developmental outcomes that are at the same time plastic enough to accommodate changes stemming from environmental signals.


Figure 9.6 (a) The common representation of gene function: a single gene determines a single phenotype. It should be clear from what has been presented in the book so far that this is not accurate. (b) A more accurate representation of gene function that takes development and environment into account. In this case, a phenotype is produced in a particular environment by developmental processes in which genes are implicated. In a different environment the same genes might contribute to the development of a different phenotype. Note the “black box” of development.

[Kampourakis also writes on page 188, note 3]

In the original analogy, Wolpert (2011, p. 11) actually uses the term “program.” However, I consider the term “plan” as more accurate and thus more appropriate. In my view, the term “program” implies instructions and their implementation, whereas the term “plan” is about instructions only. The notion of a genetic program can be very misleading because it implies that, if it were technically feasible, it would be possible to compute an organism by reading the DNA sequence alone (see Lewontin, 2000, pp. 17-18).

Kampourakis is obviously speaking of a “plan” in a context-dependent manner since that is the only way that genes/DNA contain “information” (Moore, 2002; Schneider, 2007). The whole point is that genes, to use Noble’s terminology, are “slaves” to the system, since they are used by and for the (physiological) system. Developmental systems theory is a “wholeheartedly epigenetic approach to development, inheritance and evolution” (Hochman and Griffiths, 2015).

This point is driven home by Richardson (2017: 111):

And how did genes eventually become established? Probably not at all as the original recipes, designers, and controllers of life. Instead they arose as templates for molecular components used repeatedly in the life of the cell and the organism: a kind of facility for just-in-time production of parts needed on a recurring basis. Over time, of course, the role of these parts themselves evolved to become key players in the metabolism of the cell—but as part of a team, not the boss.

[…]

It is not surprising, then, that we find that variation in form and function has, for most traits, only a tenuous relationship with variation in genes.

[And he also writes on page 133]:

There is no direct command line between environments and genes or between genes and phenotypes. Predictions and decisions about form and variation are made through a highly evolved dynamical system. That is why ostensibly the same environment, such as a hormonal signal, can initiate a variety of responses like growth, cell division, differentiation, and migration, depending on deeper context. This reflects more than fixed responses from fixed information in genes, something fatally overlooked in the nature-nurture debate.

(Also read Richardson’s article So what is a gene?)

Conclusion

The gene-selectionist point of view entails too many (false) assumptions. The DST point of view, on the other hand, does not fall prey to these pitfalls: developmental systems theorists look at the gene not as the ultimate cause of development—and, along with that, not at changes in gene frequency as the only driver of evolutionary change—but only as a product to be used by and for the system. Genes can only be looked at in terms of development, and in no other way (Kampourakis, 2017; Noble, 2017). Thus, the gene selectionists are wrong; the main tenet of the neo-Darwinian Modern Synthesis, gene selectionism (the selfish gene), has been refuted (Jablonka and Lamb, 2005; Noble, 2006, 2011). It is now time to replace the Modern Synthesis with a new view of evolution: one that includes the role of genes and development and the role of epigenetics in the developmental system. The gene-selectionist view champions an untenable view of the gene: that the gene is privileged above all other developmental variables. But Noble and Kampourakis show that this is not the case, since DNA is inherited with the cell; the cell is what is “immortal”, to use Dawkins’ language—not DNA itself.

A priori, there is no privileged level of causation, and this includes the gene, which so many place at the top of the hierarchy (Noble, 2012).


What Is the “Human Diet”?

3000 words

Is there one diet (or one with slight modifications) that all humans should be eating? I’m skeptical of such claims. Both vegans (those who do not eat or use animal products) and carnivores (those who eat only animal products) have, in my opinion, warped views of diet and human evolution. Both are extreme views with little to no support; both have wrong ideas about diet throughout our evolution; both get some things right. While it is hard to pinpoint what the “human diet” is, clearly there were certain things we ate in our evolutionary niches in ancestral Africa that we “should” be eating today (in good quantities).

Although it is difficult to reconstruct the diet of early hominids due to a lack of specimens (Luca, Perry, and Rienzo, 2014), by studying the eating behavior of our closest evolutionary relatives—the chimpanzees—we can get an idea of what our LCA ate and how it ate (Ulijaszek, Mann, and Elton, 2013). Humans have lived in nearly every niche we could possibly live in and have therefore come across the most common foods in each ecology. If animal A is in ecosystem E with foods X, Y, and Z, then animal A eats foods X, Y, and Z, since animals consume what is in their ecosystem. Knowing this much, the niches our ancestors lived in had to have a mix of both game and plants, and therefore that mix was our diet (in differing amounts, obviously). But it is more complicated than that.

So, knowing this, according to Ulijaszek, Mann, and Elton (2013: 35), “Mammalian comparisons may be more useful than ‘Stone Age’ perspectives, as many of the attributes of hominin diets and the behaviour associated with obtaining them were probably established well before the Pleistocene (Foley 1995; Ulijaszek 2002; Elton 2008a)” (the Pleistocene being the time the stone agers were around). Humans eat monocots (various flowering plants with a single seed leaf), which is not common in our order. The advent of farming was us “expanding our dietary niche”: “the widespread adoption of agriculture [was] an obvious point of transition to a ‘monocot world’” (Ulijaszek, Mann, and Elton, 2013). Although these foodstuffs dominate our diet, there is seasonality in which of these foods we consume.

So, since humans tend not to pick at things to eat but have discrete meals (it is worth noting that the idea that one should have “three square meals a day” is a myth; see Mattson et al, 2014), we need to eat a lot in the times we do eat. Therefore, since we are large-bodied primates and our energy needs are much higher (our large brains consume 20 percent of our daily caloric intake), we need higher-quality energy. The overall quality and energy density of our diets are due to meat-eating—meat being something that folivorous/frugivorous primates do not consume. We have a shorter gut tract, which is “often attributed to the greater reliance on faunivory in humans”, though “humans are not confined to ‘browse’ vegetation … and make extensive use of grasses and their animal consumers” (Ulijaszek, Mann, and Elton, 2013: 58). Due to this, we show amazing dietary flexibility and adaptability, with the ability to eat a wide range of foodstuffs in almost any environment we find ourselves in.

So “It is difficult to pinpoint what the human diet actually is … Nonetheless, humans are frequently described as omnivores” (Ulijaszek, Mann, and Elton, 2013: 59). Omnivores normally feed at two or more trophic levels, though others define omnivory as simply consuming both plants and animals (Chubaty et al, 2014). Trophic level one is taken up by plants (primary producers); level two by herbivores (primary consumers); level three by predators, who feed on the herbivores; levels four and five by apex predators and carnivores; detritivores—those who feed on waste—occupy the last level. Though, of course, “omnivory” is a continuum and not a category in and of itself. Humans eat primary producers (plants), primary consumers (herbivores), and some secondary consumers (like fish), “although human omnivory may only be possible because of technological processing” (Ulijaszek, Mann, and Elton, 2013: 59). Other animals described as “omnivorous” eat foods mainly from one trophic level and only consume food from another level when needed.
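As a toy illustration of that definition (my encoding, not anything from Ulijaszek, Mann, and Elton), an omnivore is simply a feeder whose diet spans two or more trophic levels:

```python
# Hypothetical trophic-level assignments for a few food types, used to
# implement the "feeds at two or more trophic levels" definition above.

trophic_levels = {
    "plants": 1,      # primary producers
    "herbivores": 2,  # primary consumers
    "fish": 3,        # secondary consumers (as eaten by humans)
}

def is_omnivore(diet: list[str]) -> bool:
    """True if the diet spans at least two distinct trophic levels."""
    return len({trophic_levels[food] for food in diet}) >= 2

print(is_omnivore(["plants", "herbivores", "fish"]))  # True -- the human case
print(is_omnivore(["herbivores"]))                    # False -- a strict carnivore
```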

Humans—as a species—rely on meat consumption. Fonseca-Azevedo and Herculano-Houzel (2012) showed that the energetic cost of a brain is directly related to the number of neurons in the brain, so there were metabolic limitations in regard to brain and body size. The number of hours available for feeding, along with the low caloric yield of plant foods, explains why great apes have such large bodies and small brains—a limitation probably overcome by erectus, who likely started cooking food around 1.5 mya. If we consumed only a diet of raw foods, it would take around 9 h/day of feeding to consume the calories we need to power our brains—which is just not feasible. So it is unlikely that erectus—who was the first to have the human body plan and therefore the ability to run, which implies he would have needed higher-quality energy—could have survived on a diet of raw plant foods, since it would take so many hours to consume enough food to power his growing brain.
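A rough back-of-the-envelope check of the 20 percent figure, assuming Herculano-Houzel’s commonly cited approximation of about 6 kcal per billion neurons per day (the intake figure below is likewise a hypothetical round number):

```python
# Brain energy cost scales with neuron number (Fonseca-Azevedo &
# Herculano-Houzel 2012); ~6 kcal per billion neurons per day is an
# assumed approximation, not a figure quoted in the post.

KCAL_PER_BILLION_NEURONS = 6  # assumed approximation
HUMAN_NEURONS_BILLIONS = 86   # ~86 billion neurons in the human brain
daily_intake_kcal = 2600      # hypothetical adult daily intake

brain_cost = KCAL_PER_BILLION_NEURONS * HUMAN_NEURONS_BILLIONS  # ~516 kcal
share = brain_cost / daily_intake_kcal

print(f"brain cost ~{brain_cost} kcal/day, ~{share:.0%} of intake")  # ~20%
```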

We can see that we are adapted to eating meat by looking at our intestines. Our small intestines are relatively long, whereas our large intestines are relatively short, which indicates that we became adapted to eating meat. Our “ability to eat significant quantities of meat and fish is a significant departure from the dietary norm of the haplorhine primates, especially for animals in the larger size classes.” Though “Humans share many features of their gut morphology with other primates, particularly the great apes, and have a gut structure that reflects their evolutionary heritage as plant, specifically ripe fruit, eaters” (Ulijaszek, Mann, and Elton, 2013: 63). Chimpanzees are not physiologically adapted to meat-eating, which can be seen in their development of hypercholesterolemia and vascular disease, even when fed controlled diets in captivity (Ford and Stanford, 2004).

When consuming a lot of protein, though, “rabbit starvation” needs to be kept in mind. Rabbit starvation is a type of malnutrition that arises from eating little to no fat and high amounts of protein. Since protein intake is physiologically demanding (it takes the most energy of the three macros to process), Ben-Dor et al (2011) suggest a ceiling of about 35 percent of kcal coming from protein. On this model, erectus’ protein ceiling was 3.9 g per kg of body weight per day, whereas for Homo sapiens it is 4.0 g per kg per day. Ben-Dor et al (2011) estimate erectus’ DEE (daily energy expenditure) at about 2704 kcal, with “a maximum long-term plant protein ceiling of 1014 calories”, implying that erectus was, indeed, an omnivore. The consumption of protein and raw plants is thus physiologically limited: since erectus’ ceiling on protein intake was 947 kcal and his ceiling on raw plant intake was 1014 kcal, the model proposed by Ben-Dor et al (2011) implies that erectus would have needed to consume about 744 kcal from fat, which is about 27 percent of his overall caloric intake and 44 percent of his animal-product intake.
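The arithmetic of that last step can be reconstructed in a few lines (a sketch of the quoted numbers only; the paper’s actual model is more detailed, and the one-kcal discrepancy with the 744 figure is just rounding):

```python
# Reconstructing the energy budget quoted above from Ben-Dor et al (2011):
# whatever the protein and raw-plant ceilings cannot cover must come from fat.

dee_kcal = 2704        # estimated daily energy expenditure of erectus
protein_ceiling = 947  # max kcal from animal protein
plant_ceiling = 1014   # max kcal from raw plant foods

fat_needed = dee_kcal - protein_ceiling - plant_ceiling  # 743 (~744) kcal
animal_kcal = protein_ceiling + fat_needed               # kcal from animal foods

print(f"fat needed: {fat_needed} kcal")
print(f"share of total intake:  {fat_needed / dee_kcal:.0%}")     # ~27%
print(f"share of animal intake: {fat_needed / animal_kcal:.0%}")  # ~44%
```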

Neanderthals would have consumed between 74 and 85 percent of their daily caloric energy during glacial winters from fat, with the rest coming from protein (Ben-Dor, Gopher, and Barkai, 2016), while consuming between 3,360 and 4,480 kcal per day (Steegman, Cerny, and Holliday, 2002). (See more on Neanderthal diet here.) Neanderthals consumed a large amount of protein, about 292 grams per day (Ben-Dor, Gopher, and Barkai, 2016: 370). Since our close evolutionary cousins (Neanderthals and erectus) ate large amounts of protein and fat, they were well-acclimated, physiologically speaking, to their high-protein diets. Their diets were not so high in protein that rabbit starvation would occur—fat was present in sufficient amounts in the animals that Neanderthals hunted and killed, so rabbit starvation was not a problem for them. But since rabbit starvation is a huge problem for our species, “It is therefore unlikely that humans could be true carnivores in the way felids are” (Ulijaszek, Mann, and Elton, 2013: 66).
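Those figures can be sanity-checked against the roughly 35 percent protein ceiling mentioned earlier, assuming the standard Atwater factor of about 4 kcal per gram of protein (my assumption, not a figure from these sources):

```python
# 292 g/day of protein (Ben-Dor, Gopher, and Barkai 2016) expressed as a
# share of the quoted 3,360-4,480 kcal/day intake range.

protein_g = 292
protein_kcal = protein_g * 4  # ~1168 kcal, assuming ~4 kcal per gram

for intake in (3360, 4480):   # quoted daily kcal range
    print(f"at {intake} kcal/day: protein = {protein_kcal / intake:.0%}")
# ~35% at the low end and ~26% at the high end: at or below the ~35%
# physiological ceiling, with fat supplying most of the remainder.
```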

We consume a diet that is both omnivorous and eclectic, a diet determined by our phylogeny through the form of our guts; we have nutritional diversity in our evolutionary history. We needed to colonize new lands, and since animals can only consume what is in their ecosystem, the edible foods in a given ecosystem are what its animals will consume. Being eclectic feeders made the migration out of Africa possible.

But humans are not true carnivores, contrary to some claims. “Meat-eating has allowed humans to colonize high latitudes and very open landscapes. However, bearing in mind the phylogenetic constraints that prevent humans from being true carnivores, such expansion was probably not accomplished through meat-eating alone. Instead, humans have used their ability to technologically harvest, produce, and consume a very wide range of foods to help exploit all major biomes” (Ulijaszek, Mann, and Elton, 2013: 67).

Humans, though, lack the gut specialization and dentition to process grasses efficiently. This means that our ancestors ate animals that ate these grasses, and the C4 plants those animals consumed elevated the carbon isotope levels in the fossils we have discovered. Information like this implies that our ancestors ate across a wide variety of trophic levels and had substantial dietary diversity throughout evolutionary history.

“Hominins lack the specialized dentition found in carnivorans (the group of animals that includes the cat and dog families) and other habitual meat and bone eaters, so must have pre-processed at least some of the meat in their diet” (Ulijaszek, Mann, and Elton, 2013: 81). This is where stone tools come into play (Zink and Lieberman, 2016). “Processing” food can be anything from extracting nutrients to changing how the food looks; we can look at food processing as a form of pre-digestion before consumption. The use of stone tools, and cooking, was imperative for us to begin processing meat and other foods. This gave us the ability to “pre-digest” our food before consumption, which increases the available energy in any food that is cooked or processed. For example, cooking denatures protein strands and breaks down cell walls, which gelatinizes the collagen in meat and allows for easier chewing and digestion. Carmody et al (2016) showed that adaptation to a cooked diet began around 275 kya.

In his book Catching Fire, Wrangham (2009: 17-18) writes:

Raw-foodists are dedicated to eating 100 percent of their diets raw, or as close to 100 percent as they can manage. There are only three studies of their body weight, and all find that people who eat raw tend to be thin. The most extensive is the Giessen Raw Food study, conducted by nutritionist Corinna Koebnick and her colleagues in Germany, who used questionnaires to study 513 raw-foodists who ate from 70 to 100 percent of their diet raw. They chose to eat raw to be healthy, to prevent illness, to have a long life, or to live naturally. Raw food included not only uncooked vegetables and occasional meat, but also cold-pressed oil and honey, and some items were lightly heated such as dried fruits, dried meat, and dried fish. Body mass index (BMI), which measures weight in relation to the square of the height, was used as a measure of fatness. As the proportion of food eaten raw rose, BMI fell. The average weight loss when shifting from a cooked to a raw food diet was 26.5 pounds (12 kilograms) for women and 21.8 pounds (9.9 kilograms) for men. Among those eating a purely raw diet (31 percent), the body weights of almost a third indicated chronic energy deficiency. The scientists’ conclusion was unambiguous: “a strict raw food diet cannot guarantee an adequate energy supply.”
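BMI, as used in the Giessen study, is just weight in kilograms divided by the square of height in meters; a quick illustration with hypothetical numbers:

```python
# BMI = weight (kg) / height (m)^2. Values below ~18.5 are conventionally
# read as underweight / chronic energy deficiency. Numbers are illustrative.

def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m ** 2

print(round(bmi(70, 1.75), 1))       # 22.9 -- normal range
print(round(bmi(70 - 12, 1.75), 1))  # 18.9 -- after the ~12 kg loss reported
```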

Also, vegetarians and meat-eaters who cooked their food have similar body weights. This implies that cooking food—no matter the type—yields more caloric energy for the body, and that raw-foodists are fighting a losing battle with biology, consuming raw foods in quantities that our guts are not built for. As can be seen in the citation from Fonseca-Azevedo and Herculano-Houzel (2012) above, great apes who eat nothing but raw food have the large guts and bodies needed to process the raw plant foods they eat; we cannot thrive on such a diet because it is not calorically or nutritionally viable for us—most importantly due to the size of our brains and their caloric requirements.

Carmody, Weintraub, and Wrangham (2011) show that modern raw-foodists who subsist on raw meat and plants have nutrient deficiencies and chronic energy deficiencies, even though they process their foods in different manners (cooking is a form of processing, as are cutting, mashing, pounding, etc.), while the females experience low fecundity. Thus, the cooking of food seems to be needed for normal biological functioning; we have clearly evolved past consuming all raw foods. So it is clear that cooking—along with meat-eating—was imperative to our evolution. (Which does not mean that humans ate only meat, or that eating meat and only meat is part of our evolutionary history.) Cooking gelatinizes food and denatures its proteins, leading to easier mastication, since the food requires less bite force once cooked. Over time this led to smaller teeth, as seen in erectus (Zink and Lieberman, 2016). This was due to cooking along with tool use: tool use produced smaller food particles, which meant less force per bite, which eventually led to smaller teeth in our lineage.

Finally, humans are said by some to be “facultative carnivores.” A facultative carnivore is an animal that does best on a carnivorous diet but can survive—not thrive—on other foodstuffs when meat is not available. This, though, doesn’t make sense for humans. Humans are eclectic feeders—omnivorous in nature. Yes, we began cooking about 1.5 mya; yes, meat-eating (and the cooking of said meat) looms large in the evolution of our species; yes, without meat and cooking we would not have met the energy requirements needed to diverge from the chimpanzees/great apes. But this does not mean that we do “best” on a carnivorous diet. There are about 7,105 ethnic groups in the world (Spencer, 2014: 1029), and to say that all of these ethnies would respond the same or similarly, physiologically speaking, to an all-meat diet is crazy talk. The claim that we subsisted on one type of food over all others throughout our evolutionary history is a bold one—with no basis in evolutionary history.

Marlene Zuk (2013: 103-104), author of Paleofantasy, writes:

Another implication of the importance Marlowe attaches to bow hunting is that, rather than starting out as exclusively carnivorous and then adding starches and other plant material to the diet, ancient humans have been able to increase the proportion of meat only after newer technology had come about, a mere 30,000 years ago. Other anthropologists concur that the amount of meat in the human diet grew as we diverged from our other primate ancestors. All of this means that, first, contrary to the claims of many paleo-diet proponents, the earliest humans did not have an exclusively meat-based diet that we are best adapted to eat; and second, our ancestors’ diets clearly changed dramatically and repeatedly over the last tens, not to mention hundreds, of thousands of years, even before the advent of agriculture.

The assumption that we were fully (or even mostly) carnivorous and then added plant foods/carbs is clearly false. “Fantasies” like this are “just-so stories”: they are nice-sounding stories, but reality is clearly more nuanced than people’s evolutionary and Stone Age imaginations. This makes sense, though, since we evolved from an LCA (last common ancestor) with chimpanzees some 6.3 mya (Patterson et al, 2006). Why would it make sense for us to have ultimately subsisted on an all-meat diet, if our LCA with chimpanzees was most likely a forager who lived in the trees (Lieberman, 2013)?

One thing, though, that I’m sure everyone agrees with is that the environments we have constructed for ourselves in the first world are maladaptive—what is termed an “evolutionary mismatch” (Lieberman, 2013; Genne-Bacon, 2014). The mismatch arises from the high-carb food environments we have constructed, with cheap foodstuffs loaded with sugar, salt, and fat, a combination much more addictive than any of them on their own (see Kessler, 2010). This makes food more palatable, and people then want to eat it more. Foods like this, obviously, were not in our OEE (original evolutionary environment), and they therefore cause us huge problems in our modern-day environments. Evolutionary mismatches occur when technological advancement outpaces the genome’s ability to adapt. This can clearly be seen in our societies in the explosion of obesity over the past few decades (Fung, 2016, 2018).

We did not evolve eating highly processed carbohydrates loaded with salt and sugar. That much everyone can agree on.

Conclusion

It is clear that the claims from both vegans/vegetarians and carnivores are false: there is no one “human diet” that we “should” be eating. Individual variation in different physiologic processes implies that there is no one “human diet”, no matter what type of food is being pushed as what we “should be” eating. Humans are eclectic feeders; we will eat anything, since “Humans show remarkable dietary flexibility and adaptability”. Furthermore, we also “have a relatively unspecialized gut, with a colon that is shorter relative to overall size than in other apes; this is often attributed to the greater reliance on faunivory in humans (Chivers and Langer 1994)” (Ulijaszek, Mann, and Elton, 2013: 58). Our dietary eclecticism can be traced back to our australopithecine ancestors. The claim that we were either “vegetarian/vegan or carnivore” throughout our evolution is false.

Humans aren’t “natural carnivores” or “natural vegans/vegetarians”; humans are eclectic feeders. Animals eat whatever is in their ecosystem; ergo, humans are omnivores. Though we can’t pinpoint what the “human diet” is, since there is great variability in it due to culture and ecology, we know one thing: we did not subsist mainly on one food; we had a large variety of foods—especially fallback foods—to consume throughout our evolutionary history. So claims that we evolved to eat one certain way (as vegans/vegetarians and carnivores claim) are false. (Note that I am not saying high-carb diets are good; I’ve railed hard against them.)

Just-so Stories: FOXP2

1200 words

FOXP2 is a so-called “gene for” language. The gene is a transcription factor—meaning that it controls the activity of other genes—so changes to FOXP2 bring changes to other genes as well. The evolution of language in humans was therefore thought to have hinged on mutations in the FOXP2 gene. Humans with a single-point mutation in FOXP2 “have impaired speech and grammar, but not impaired language comprehension” (Mason et al, 2018: 403). The gene is found in numerous mammals (e.g., chimpanzees, gorillas, orangutans, rhesus macaques, and mice), but none of those mammals speak. The gene is expressed in areas of the brain that affect motor functioning, including the coordination needed to create words.

Humans and mice differ by only 3 amino acids at FOXP2. Gorillas, chimps, and macaques all have identical amino acid sequences at FOXP2 and differ from mice by only one amino acid; two further amino acid differences separate humans from the sequence shared by chimpanzees, gorillas, and macaques. Thus, a difference of two amino acids between humans and the other primates appears to have made it possible for language to evolve. Evidence was claimed for strong selective pressure on the two FOXP2 mutations, which allow the brain, larynx, and mouth to coordinate to produce speech. These two altered amino acids may change the ability of the FOXP2 transcription factor to be phosphorylated (proteins are activated by phosphorylation and deactivated by dephosphorylation, or the reverse).
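The comparison being described is just a count of substitutions between aligned protein sequences. A toy version (the ten-residue fragments below are hypothetical, not real FOXP2 sequence) reproduces the quoted pattern:

```python
# Count positions at which two aligned protein sequences differ.

def aa_differences(seq1: str, seq2: str) -> int:
    """Number of positions where the aligned sequences differ."""
    return sum(a != b for a, b in zip(seq1, seq2))

mouse = "ATHNESSKQL"  # hypothetical fragment, not real FOXP2 sequence
chimp = "ATHNESSKQI"  # 1 difference from mouse
human = "ANHSESSKQI"  # 2 further differences from chimp

print(aa_differences(mouse, chimp))  # 1 -- primates vs. mouse
print(aa_differences(chimp, human))  # 2 -- the human-specific changes
print(aa_differences(mouse, human))  # 3 -- the mouse-human total quoted above
```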

Mason et al (2018: 403) write:

Comparative genomics efforts are now extending beyond primates. A role for FOXP2 in songbird singing and vocal learning has been proposed. Mice communicate via squeaks, with lost young mice emitting high-pitched squeaks; FOXP2 mutations leave mice squeakless. For mice and songbirds, it is a stretch to claim that FOXP2 is a language gene—but it is likely needed in the neuromuscular pathway to make sounds.


Above is Figure 18.17 from Mason et al (2018: 403). They write:

Comparisons of synonymous and nonsynonymous changes in mouse and primate FOXP2 genes indicate that changing two amino acids in the gene corresponds to the emergence of human language. Black bars represent synonymous changes; gray bars represent nonsynonymous changes.

But is that the whole story? Is FOXP2 really a “gene for” language? New results call this hypothesis into question.

In their paper No Evidence for Recent Selection at FOXP2 among Diverse Human Populations, Atkinson et al (2018) did not find evidence for recent positive or balancing selection. Atkinson et al (2018) conclude that they:

do not find evidence that the FOXP2 locus or any previously implicated site within FOXP2 is associated with recent positive selection in humans. Specifically, we demonstrate that there is no evidence that the original two amino-acid substitutions were targeted by a recent sweep limited to modern humans <200 kya as suggested by Enard et al. (2002) … Any modified function of the ROI does not appear to be related to language, however, as modern southern African populations tolerate high minor allele frequencies with no apparent consequences to language faculty. We do not dispute the extensive functional evidence supporting FOXP2’s important role in the neurological processes related to language production (Lai et al., 2001, MacDermot et al., 2005, Torres-Ruiz et al., 2016). However, we show that recent natural selection in the ancestral Homo sapiens population cannot be attributed to the FOXP2 locus and thus Homo sapiens’ development of spoken language.

So the two mutations in exon 7 of FOXP2 were not selected and are not responsible for human language. Most likely the accelerated rate of evolution at the locus is due to a loss of function (LoF), i.e., a null allele.

The gene was originally discovered in a family that had a history of speech and language disorders (Lai et al, 2001). This “speech gene” was also found in Neanderthals in 2007 (see Krause et al, 2007). Thus, the modifications to FOXP2 occurred before humans and Neanderthals diverged.

So Atkinson et al (2018) found that the so-called sweep on FOXP2 within the last 200,000 years was a statistical artifact caused by lumping Africans together with Caucasians and other populations. Of course, language is complicated, and no single gene will explain the emergence of human language.

This is a just-so story—that is, an ad hoc hypothesis. Humans had X, others didn’t have X or had a different form of X; therefore X explains human language faculties.

Atkinson et al’s (2018) results represent a substantial revision to the adaptive history of FOXP2, a gene regarded as vital to human evolution.

High evolutionary constraint among taxa but variability within Homo sapiens is compatible with a modified functional role for this locus in humans, such as a recent loss of function.

Therefore, this SNP must not be necessary for language function as both alleles persist at high frequency in modern human populations. Though perhaps obvious, it is important to note that there is no evidence of differences in language ability across human populations. (Atkinson et al, 2018)

This is another just-so story (Gould and Lewontin, 1976; Lloyd, 1999; Richardson, 2007; Nielsen, 2009) that seems to have bitten the dust. Of course, the functionality of FOXP2 and its role in the neurologic processes related to language are not in dispute; what is disputed (and refuted) is the selectionist just-so story. Selectionist explanations are necessarily ad hoc. Thus, recent natural selection in our species cannot be attributed to FOXP2, nor, along with it, can our language capabilities.

There is a similar objection, not for FOXP2 and selectionist hypotheses, but for the Lactase gene. Nielsen (2009) puts it succinctly:

The difference in lactose intolerance among human geographic groups, is caused by a difference in allele frequencies in and around the lactase gene (Harvey et al. 1998; Hollox et al. 2001; Enattah et al. 2002; Poulter et al. 2003). … This argument is not erected to dispute the adaptive story regarding the lactase gene, the total evidence in favor of adaptation and selection related to lactose tolerance is overwhelming in this case, but rather to argue that the combination of a functional effect and selection does not demonstrate that selection acted on the specific trait in question. … Although the presence of selection acting on genes underlying a phenotypic trait of interest does help support adaptive stories, it does not establish that selection acted directly on the specific trait of interest.

Even if there were evidence of positive selection on FOXP2 in humans, we could not logically state that selection acted on the FOXP2 locus; functional effects plus selection do not demonstrate that “selection” acted on that trait. Just-so stories (ad hoc hypotheses) “sound good”, but only because they are necessarily true—one can have all the data one wants, think up any adaptive story to explain the data, and the story will be necessarily true. Therefore, selectionist hypotheses are inherently ad hoc.

In conclusion, another selectionist hypothesis bites the dust. Never mind the fact that, if FOXP2 were supposedly “selected-for”, there would still be the problem of free-riders (Fodor and Piattelli-Palmarini, 2010). That is, “selection” cannot “select-for” fitness-enhancing traits if/when they are coextensive with other traits—there is no way for selection to distinguish between coextensive traits, and thus it does not explain trait fixation (in this case, the fixation of FOXP2). Ad hoc hypotheses are necessarily true—that is, they explain the data they purport to explain, and only that data. These new results show that there is no support for positive selection at the FOXP2 locus.

Natural Selection is not an Explanatory Mechanism

2450 words

Darwin proposed, back in 1859, that species arose due to natural selection—the pruning of deleterious variations in a population—which led to the thinking that the “inherent design” in nature, formerly attributed to a designer (“God”), was due to a force Darwin called “natural selection” (NS). The line of reasoning is this: (1) two individuals of the same population are mostly the same genetically/phenotypically but have small differences between them, one of which is a difference in a trait needed for survival. (2) But if two coextensive traits can both contribute to fitness, how does NS ‘know’ to select for one rather than the other? Think about two traits, trait T and trait T’: what would explain the fixation of either trait in the population we are discussing? NS is not—and cannot be—the mechanism of evolution.

In 2010, philosopher Jerry Fodor and cognitive scientist Massimo Piattelli-Palmarini wrote a book titled What Darwin Got Wrong, which argued that NS is not a causal mechanism in regard to the formation of new species. Their argument is (pg 114):

  1. Selection-for is a causal process.
  2. Actual causal relations aren’t sensitive to counterfactual states of affairs: if it wasn’t the case that A, then the fact that it’s being A would have caused its being B doesn’t explain its being the case that B.
  3. But the distinction between traits that are selected-for and their free-riders turns on the truth (or falsity) of relevant counterfactuals.
  4. So if T and T’ are coextensive, selection cannot distinguish the case in which T free-rides on T’ from the case that T’ free-rides on T.
  5. So the claim that selection is the mechanism of evolution cannot be true.

This argument is incredibly strong. If it is sound, then NS cannot be the mechanism by which evolution occurs. Regarding two traits that are coextensive with each other, it is not possible to ascertain which trait was selected-for and which trait was the free-rider. NS cannot distinguish between two locally coextensive traits; therefore, it is not an explanatory mechanism and does not explain the evolution of species, contra Darwin. It cannot be the mechanism that connects phenotypic variation with fitness variation.

The general adaptationist argument rests on two claims: “(1) the claim that evolution is a process in which creatures with adaptive traits are selected and (2) the claim that evolution is a process in which creatures are selected for their adaptive traits” (Fodor and Piattelli-Palmarini, 2010: 13). Darwinists are committed to inferring (2) from (1), but the inference is fallacious; it is known as the intensional fallacy.

“Due to the intensionality of ‘select-for’ and ‘trait’, one cannot infer from ‘Xs have trait t and Xs were selected’ to ‘Xs were selected for having trait t’” (Fodor and Piattelli-Palmarini, 2010: 139). How does one distinguish a trait that was selected-for from a free-rider that hitched a ride on the truly adaptive trait? The argument provided above shows that it is not possible. “Darwinists have a crux about free-riding because they haven’t noticed the intensionality of selection-for and the like; and when it is brought to their attention, they haven’t the slightest idea what to do about it” (Fodor and Piattelli-Palmarini, 2010: 16).

No observation can show whether trait T or trait T’ was selected-for in virtue of its contribution to fitness in a given population; favoring one story over another in regard to the adaptation of the trait in question therefore makes no logical sense, due to the problem of free-riders (and favoring one story over another merely reflects a liking for that specific adaptive just-so story). For if two traits are coextensive—meaning that the traits coincide with one another—then how can NS, which does not have a mind, ‘know’ to “select-for” whichever trait contributes to fitness in the population in question? Breeders are the perfect example.

Breeders have minds and can therefore select for certain traits and against undesirable ones; NS, having no mind, cannot do this with so-called naturally selected traits. NS cannot explain the distribution of phenotypic traits throughout the world; there is no agent of NS, nor are there ‘laws of selection’, and therefore NS is not an explanatory mechanism. Explanations that invoke NS rest only on correlations between traits and fitness, not on causes themselves (this critique can be extended to numerous other fields, too). The problem with relying only on correlations between traits and fitness is two-fold: (1) the trait in question can be irrelevant to fitness, and (2) the trait in question can be a free-rider.

Creatures have traits that increase fitness because those traits were selected-for, the story goes. NS explains why the creature in question has trait T, which increases fitness in environment E. One can then also claim that the selection of the trait in question was due to the increased fitness it gave the creature. However, if this claim is made, “then the theory of natural selection would reduce to a trait’s being a cause of reproductive success [which then] explains its being a cause of reproductive success”—which explains nothing (and isn’t true).

So, since genetically linked traits are coextensive with an infinitude of different possible outcomes, the hypothesis that trait X is an adaptation is underdetermined by all possible observations. NS therefore cannot explain how and why organisms have the traits they do, since it cannot distinguish between two coextensive traits, lacking as it does a mind and agency.

NS could be said to be an explanation if and only if two conditions were met: (1) NS can be understood as acting on counterfactuals, and (2) NS can be said to act according to physical evolutionary laws.

(1) A counterfactual is an “if-clause” that runs contrary to fact—a statement whose antecedent does not actually obtain—for example, “If I had no ears, I could not hear” or “If I had no eyes, I could not see.” Thus, if NS were to explain the continuance of a specific trait that is linked to other traits (that is, coextensive with them) in a given population, it would necessarily need to support a counterfactual: the trait in question would still have to be selected for in the absence of its free-riders. As an example from Fodor and Piattelli-Palmarini (2010: 103), a heart pumps blood (what it was selected-for) and makes pumping sounds (its linked free-rider). Thus, if the pumping of blood and the sound that blood-pumping makes were not coextensive, then the pumping, not the pumping sounds, would get selected for.

There is a huge problem, though. Counterfactuals are intentional statements; they refer to concepts found in our minds, not to any physical things. NS does not have a mind and thus lacks the ability to “select-for”, since “selecting-for” is intentional. Therefore NS does not act on counterfactuals; it is blind to counterfactuals, since it does not have a mind.

(2) It does not seem likely that there are “laws of selection”. Clearly, the adaptive value of any phenotype depends on the environment that the organism is in. Fodor and Piattelli-Palmarini (2010: 149) write (emphasis theirs):

The problem is that it’s unlikely that there are laws of selection. Suppose that P1 and P2 are coextensive but that, whereas the former is a property that affects fitness, the latter is merely a correlate of a property that does. The suggestion is that all this comes out right if the relation between P1 and fitness is lawful, and the relation between P2 and fitness is not. …it’s just not plausible that there are laws that relate phenotypic traits per se to fitness. What (if any) effect a trait has on fitness depends on what kind of phenotype it is embedded in, and what ecology the creature that has the trait inhabits. This is to say that, if you wish to explain the effects that a phenotypic trait has on a creature’s fitness, what you need is not its history of selection but its natural history. And natural history offers not laws of selection but narrative accounts of causal chains that lead to the fixation of phenotypic traits. Although laws support counterfactuals, natural histories do not; and, as we’ve repeatedly remarked, it’s counterfactual support on which distinguishing the arches from the spandrels depends.

There is, too, a simple example regarding coextensive traits and selection: the lactase gene. It is well known that we humans are adapted to drinking milk, and the cause is gene-culture coevolution that occurred around the time of cattle domestication (Beja-Pereira et al, 2003; Gerbault et al, 2011). No one disputes that gene-culture coevolution is how and why we can drink milk. What people do dispute is the adaptive just-so story (Gould and Lewontin, 1976; Lloyd, 1999; Richardson, 2007) made to explain how and why the trait went to fixation in certain human populations. Nielsen (2009) writes (emphasis mine):

The difference in lactose intolerance among human geographic groups, is caused by a difference in allele frequencies in and around the lactase gene (Harvey et al. 1998; Hollox et al. 2001; Enattah et al. 2002; Poulter et al. 2003). The cause for the difference in allele frequencies is primarily natural selection emerging about the same time as dairy farming evolved culturally (Bersaglieri et al. 2004). Together, these observations lead to a compelling adaptive story of natural selection favoring alleles causing lactose tolerance. But even in this case we have not directly shown that the cause for the selection is differential survival due to an ability/inability to digest lactose. We must acknowledge that there could have been other factors, unknown to us, causing the selection acting on the region around the Lactase gene. Even if we can argue that selection acted on a specific mutation, and functionally that this mutation has a certain effect on the ability to digest lactose, we cannot, strictly speaking, exclude the possibility that selection acted on some other pleiotropic effect of the mutation. This argument is not erected to dispute the adaptive story regarding the lactase gene, the total evidence in favor of adaptation and selection related to lactose tolerance is overwhelming in this case, but rather to argue that the combination of a functional effect and selection does not demonstrate that selection acted on the specific trait in question.

Selection could have acted on a free-rider coextensive with the lactase gene. And just because “the story fits the data” well (that is a necessary truth: any story can be formulated to fit any data) does not mean that it is true, that the reason for trait T is reason R, just because they “fit the data so well.”

Of course, this holds for EP (evolutionary psychology), for evolutionary anthropology, and for my favorite theory of the evolution of human skin color, the vitamin D hypothesis. I do not, of course, deny that light skin is needed in order to synthesize vitamin D in climates with low UVB; that is a truism. What is denied is the claim that selection acted on light skin (and its associated/causal genes); what is denied is the combination of functional effect and selection. Just-so stories are necessarily true; they fit any data, because one can formulate a story to fit any data points one has. Thus, Darwinists are just storytellers with a bunch of data; there is no way to distinguish between the selection of a trait because it increased fitness and the selection of a free-rider that is “just there” and does not increase fitness, where the thing that increases fitness is what the free-rider “rode in on.”

NS is not and cannot be an explanatory mechanism. Darwinism has already been falsified (Jablonka and Lamb, 2005; Noble, 2011, 2012, 2017), and so this is yet another nail in the coffin. When traits are coextensive, NS would have to “know” which trait to act on; NS cannot “know” this (because it has no mind), and NS cannot be a general mechanism that connects phenotypic variation to variation in fitness. NS does not explain the evolution of species, nor can it distinguish between two locally coextensive traits—traits T and T’—because it has no agency and no mind. Therefore NS is not an explanatory mechanism. Invoking NS to explain the continuance of any trait fails to explain the survival of that trait because, as long as the traits in question are coextensive, NS cannot distinguish between traits that enhance an organism’s fitness and free-riders that are irrelevant to survival but coextensive with the selected-for trait.

P1) If there is selection for T but not T’, various counterfactuals must be true.
P2) If the counterfactuals are true, then NS must be an intentional-agent, or there must be laws about “selection-for”.
P3) NS is mindless.
P4) There are no laws for “selection-for”.
∴ It is false that selection for T but not T’ occurs in a population.
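The skeleton of this argument is a simple modus tollens; a minimal propositional rendering (my formalization, not the original authors’):

```latex
% S: there is selection for T but not T'
% C: the relevant counterfactuals are true
% M: NS is an intentional agent
% L: there are laws of selection-for
\begin{align*}
  &\text{P1: } S \rightarrow C \\
  &\text{P2: } C \rightarrow (M \lor L) \\
  &\text{P3: } \neg M \qquad \text{P4: } \neg L \\
  &\therefore \neg C \ \text{(from P2, P3, P4)}, \quad
   \therefore \neg S \ \text{(from P1)}
\end{align*}
```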

One then has two choices:

(1) Argue that NS has a mind and therefore that it can “select for” certain traits that are adaptive in a given population of organisms in the environment in question. But “select-for” implies intention; intentional acts occur only in organisms with minds, and intentional states are only possible for something that has a mind. Humans are the only organisms with minds, and so the only organisms that can act intentionally. NS does not have a mind. (Animal breeders can select for desirable traits and against undesirable ones precisely because breeders are humans, and humans can act intentionally.) Therefore NS does not act intentionally, since it does not have a mind. I don’t think anyone would argue that NS has a mind and acts intentionally as an agent; therefore P3 is true.

(2) Argue that there are laws of “selection-for” phenotypic traits related to fitness. But it is not plausible that there are laws relating the selection of a phenotype, per se, to fitness in a given population. The effect of a trait depends on the ecology of the organism in question as well as its natural history. Therefore, to understand the effects of a phenotypic trait on an organism’s fitness, we must look to its natural history, not its (so-called) selection history. Therefore P4 is true.

There are no laws for "selection-for", nor does NS have a mind that can select a trait that lends to an organism's fitness rather than a trait that is merely correlated with it.

DNA is not a “Blueprint”

2200 words

Leading behavior geneticist Robert Plomin is publishing Blueprint: How DNA Makes Us Who We Are in October of 2018. I, of course, have not read the book yet. But if the main thesis of the book is that DNA is a "code", "recipe", or "blueprint", then it is already wrong, because presuming that DNA is any of those three things marries one to certain ideas, even if they are never explicitly stated. Nevertheless, Robert Plomin is what one would term a "hereditarian", meaning that he believes that genes—more than environment—shape an individual's psychological and other traits. (That's a false dichotomy, though.) In the preview for the book at MIT Press, they write:

In Blueprint, behavioral geneticist Robert Plomin describes how the DNA revolution has made DNA personal by giving us the power to predict our psychological strengths and weaknesses from birth. A century of genetic research shows that DNA differences inherited from our parents are the consistent life-long sources of our psychological individuality—the blueprint that makes us who we are. This, says Plomin, is a game-changer. It calls for a radical rethinking of what makes us who we are.

Genetics accounts for fifty percent of psychological differences—not just mental health and school achievement, but all psychological traits, from personality to intellectual abilities. Nature defeats nurture by a landslide.

Plomin explores the implications of this, drawing some provocative conclusions—among them that parenting styles don't really affect children's outcomes once genetics is taken into account. Neither tiger mothers nor attachment parenting affects children's ability to get into Harvard. After describing why DNA matters, Plomin explains what DNA does, offering readers a unique insider's view of the exciting synergies that came from combining genetics and psychology.

I won’t get into most of these things today (I will wait until I read the book for that), but this will be just an article showing that DNA is, in fact, not a blueprint, and DNA is not a “code” or “recipe” for the organism.

It's funny that the little blurb says that "Nature defeats nurture by a landslide", because, as I have argued at length, nature vs. nurture is a false dichotomy (see Oyama, 1985, 1999, 2000; Moore, 2002; Schneider, 2007; Moore, 2017). Nature vs. nurture is the battleground on which the false dichotomy of genes vs. environment is fought. However, it makes no sense to partition heritability estimates if it is indeed true that genes interact with environment—that is, if nature interacts with nurture.
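To see why interaction spoils the partition, recall the textbook decomposition of phenotypic variance (a standard quantitative-genetics identity, not anything specific to Plomin):

\[
V_P = V_G + V_E + 2\,\mathrm{Cov}(G,E) + V_{G \times E}, \qquad h^2 = \frac{V_G}{V_P}
\]

The heritability estimate \(h^2\) treats \(V_G\) as a separable slice of \(V_P\); but when the covariance and interaction terms are non-negligible, there is no principled way to assign \(V_{G \times E}\) to "nature" or to "nurture", which is exactly the point of calling the dichotomy false.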

DNA is also called "the book of life". For example, in her book The Epigenetics Revolution: How Modern Biology Is Rewriting Our Understanding of Genetics, Disease, and Inheritance, Nessa Carey writes that "There's no debate that the DNA blueprint is a starting point" (pg 16). This, though, can be contested: "But the promise of a peep into the 'book of life' leading to a cure for all diseases was a mistake" (Noble, 2017: 161).

Developmental psychologist and cognitive scientist David S. Moore concurs. In his book The Developing Genome: An Introduction to Behavioral Epigenetics, he writes (pg 45):

So, although I will talk about genes repeatedly in this book, it is only because there is no other convenient way to communicate about contemporary ideas in molecular biology. And when I refer to a gene, I will be talking about a segment or segments of DNA containing sequence information that is used to help construct a protein (or some other product that performs a biological function). But it is worth remembering that contemporary biologists do not mean any one thing when they talk about "genes"; the gene remains a fundamentally hypothetical concept to this day. The common belief that there are things inside of us that constitute a set of instructions for building bodies and minds—things that are analogous to "blueprints" or "recipes"—is undoubtedly false. Instead, DNA segments often contain information that is ambiguous, and that must be edited or arranged in context-dependent ways before it can be used.

Still, others may use terms like "genes for" trait T. This, too, is incorrect. In his outstanding book Making Sense of Genes, Kostas Kampourakis writes (pg 19):

I also explain why the notion of "genes for," in the vernacular sense, is not only misleading but also entirely inaccurate and scientifically illegitimate.

[…]

First, I show that genes "operate" in the context of development only. This means that genes are implicated in the development of characters but do not determine them. Second, I explain why single genes do not alone produce characters or disease but contribute to their variation. This means that genes can account for variation in characters but cannot alone explain their origin. Third, I show that genes are not the masters of the game but are subject to complex regulatory processes.

Genes can only be seen as passive templates, not ultimate causes (Noble, 2011); they cannot explain the origin of different characters, though they can account for variation in physical characters. Genes only "do" something in the context of development; they are inert molecules and thus cannot "cause" anything on their own.

Genes are not 'for' traits, but they are difference-makers for traits. Sterelny and Griffiths (1999: 102), in their book Sex and Death: An Introduction to Philosophy of Biology, write:

Sterelny and Griffiths (1988) responded to the idea that genes are invisible to selection by treating genes as difference makers, and as visible to selection by virtue of the differences they make. In doing so, they provided a formal reconstruction of the "gene for" locution. The details are complex, but the basic intent of the reconstruction is simple. A certain allele in humans is an "allele for brown eyes" because, in standard environments, having that allele rather than alternatives typically available in the population means that your eyes will be brown rather than blue. This is the concept of a gene as a difference maker. It is very important to note, however, that genes are context-sensitive difference makers. Their effects depend on the genetic, cellular, and other features of their environment.

(Genes can be difference makers for physical traits, but not for psychological traits because no psychophysical laws exist, but I’ll get to that in the future.)

Note how the terms "context-sensitive" and "context-dependent" continue to appear. The DNA-as-blueprint claim presumes that DNA is context-independent, but we cannot divorce genes—whatever they are—from their context, since genes and environment, nature and nurture, are intertwined. (It is even questioned whether 'genes' are truly units of inheritance; see Fogle, 1990. Fogle, 2000 argues that we should dispense with the concept of "gene" altogether and that biologists should instead use terms like intron, promoter region, and exon. Nevertheless, there is a huge disconnect between the term "gene" in molecular biology and in classical genetics. Keller, 2000 argues that there are still uses for the term "gene" and that we should not dispense with it. I believe we should.)

Susan Oyama (2000: 77) writes in her book The Ontogeny of Information:

Though a plan implies action, it does not itself act, so if the genes are a blueprint, something else is the constructor-construction worker. Though blueprints are usually contrasted with building materials, the genes are quite easily conceptualized as templates for building tools and materials; once so utilized, of course, they enter the developmental process and influence its course. The point of the blueprint analogy, though, does not seem to be to illuminate developmental processes, but rather to assume them and, in celebrating their regularity, to impute cognitive functions to genes. How these functions are exercised is left unclear in this type of metaphor, except that the genetic plan is seen in some peculiar way to carry itself out, generating all the necessary steps in the necessary sequence. No light is shed on multiple developmental possibilities, species-typical or atypical.

The Modern Synthesis is one of the causes of the genes-as-blueprints thinking; the Modern Synthesis gets causation in biology wrong. Genes are not active causes but passive templates, as argued by many authors; thus, they cannot "cause" anything on their own.

In his 2017 book Dance to the Tune of Life: Biological Relativity, Denis Noble writes (pg 157):

As we saw earlier in this chapter, these triplet sequences are formed from any combination of the four bases U, C, A and G in RNA and T, C, A and G in DNA. They are often described as a genetic ‘code’, but it is important to understand that this usage of the word ‘code’ carries overtones that can be confusing.

A code was originally an intentional encryption used by humans to communicate. The genetic 'code' is not intentional in that sense. The word 'code' has unfortunately reinforced the idea that genes are active and even complete causes, in much the same way as a computer is caused to follow the instructions of a computer program. The more neutral word 'template' would be better. Templates are used only when required (activated); they are not themselves active causes. The active causes lie within the cells themselves since they determine the expression patterns for the different cell types and states. These patterns are communicated to the DNA by transcription factors, by methylation patterns and by binding to the tails of histones, all of which influence the pattern and speed of transcription of different parts of the genome. If the word 'instruction' is useful here at all, it is rather that the cell instructs the genome. As Barbara McClintock wrote in 1984 after receiving her Nobel Prize, the genome is an 'organ of the cell', not the other way around.

Realising that DNA is under the control of the system has been reinforced by the discovery that cells use different start, stop and splice sites for producing different messenger RNAs from a single DNA sequence. This enables the same sequence to code different proteins in different cell types and under different conditions [here’s where context-dependency comes into play again].

Representing the direction of causality in biology the wrong way round is therefore confusing and has far-reaching consequences. The causality is circular, acting both ways: passive causality by DNA sequences acting as otherwise inert templates, and active causality by the functional networks of interactions that determine how the genome is activated.

This takes care of the idea that DNA is a 'code'. But what about DNA being a 'blueprint'—the idea that all of the information is contained in the DNA of the organism before conception? DNA is clearly not a 'program' in the sense that all of the information to construct the organism already exists in DNA. The complete cell is also needed, and its "complex structures are inherited by self-templating" (Noble, 2017: 161). Thus, the "blueprint" is the whole cell, not just the genome itself (remember that the genome is an organ of the cell).

Lastly, GWA studies have been all the rage recently. However, there is only so much we can learn from association studies before we need to turn to the physiological sciences for functional analyses. Indeed, Denis Noble (2018) writes in a new editorial:

As with the results of GWAS (genome-wide association studies) generally, the associations at the genome sequence level are remarkably weak and, with the exception of certain rare genetic diseases, may even be meaningless (13, 21). The reason is that if you gather a sufficiently large data set, it is a mathematical necessity that you will find correlations, even if the data set was generated randomly so that the correlations must be spurious. The bigger the data set, the more spurious correlations will be found (3).

[…]

The results of GWAS do not reveal the secrets of life, nor have they delivered the many cures for complex diseases that society badly needs. The reason is that association studies do not reveal biological mechanisms. Physiology does. Worse still, “the more data, the more arbitrary, meaningless and useless (for future action) correlations will be found in them” is a necessary mathematical statement (3).

Nor does applying a highly restricted DNA sequence-based interpretation of evolutionary biology, and its latest manifestation in GWAS, to the social sciences augur well for society.
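Noble's mathematical point is easy to demonstrate. Here is a minimal simulation (my own illustration, not from Noble's editorial): a trait and a large panel of "markers" are all generated independently, so every association found is spurious by construction, yet the raw count of "significant" hits grows in proportion to the number of markers tested.

```python
import numpy as np

rng = np.random.default_rng(0)

n_subjects = 1_000   # sample size
n_markers = 10_000   # independent random "markers" tested

# Trait and markers are generated independently of one another,
# so any association between them is spurious by construction.
trait = rng.standard_normal(n_subjects)
markers = rng.standard_normal((n_subjects, n_markers))

# Pearson correlation of each marker with the trait.
trait_z = (trait - trait.mean()) / trait.std()
markers_z = (markers - markers.mean(axis=0)) / markers.std(axis=0)
r = trait_z @ markers_z / n_subjects

# Count hits "significant" at p < .05 (two-sided |r| cutoff of about 1.96/sqrt(n)).
cutoff = 1.96 / np.sqrt(n_subjects)
print(f"spurious hits: {int((np.abs(r) > cutoff).sum())} of {n_markers}")
# Expect roughly 5% (~500) purely by chance.
```

Roughly five percent of the purely random markers clear the conventional threshold; double the number of markers and the number of spurious hits doubles with them, which is exactly the "bigger the data set, the more spurious correlations" point.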

It is further worth noting that there is no privileged level of causation in biological systems (Noble, 2012). A priori, there is no justification for privileging one level over another in regard to causation, so saying that one level of the organism is "higher" than another (for instance, that genes are, and should be, privileged over the environment or any other system in the organism) is clearly false, since causation runs both upwards and downwards, influencing all levels of the system.

In sum, it is highly misleading to refer to DNA as a "blueprint", a "code", or a "recipe." Referring to DNA in this way presumes that DNA can be divorced from its context—that it does not work together with the environment. As I have argued in the past, association studies will not elucidate genetic mechanisms, nor will heritability estimates (Richardson, 2012). We need physiological testing for these functional analyses, and association studies like GWAS—and even heritability estimates—do not give us this type of information (Panofsky, 2014). So, it seems, what Plomin et al are looking for, what they assume is "in the genes", is not there, because they use a false model of the gene (Burt, 2015; Richardson, 2017). Genes are resources—templates to be used by and for the system—not causes of traits and development. They can account for differences in variation, but they cannot be said to be the origin of trait differences. Genes can be said to be difference makers, but whether they are difference makers for behavior, in my opinion, cannot be known.

(For further information on genes and what they do, read Chapters Four and Five of Ken Richardson's book Genes, Brains, and Human Potential: The Science and Ideology of Intelligence. Plomin himself seems to be a reductionist, and Richardson took care of that paradigm in his book. Lickliter (2018) has a good review of the book, along with critiques of the reductionist paradigm that Plomin et al follow.)

Otzi Man’s Last Meal and the Diet of Neanderthals

1100 words

The debate over what type of diet—in regard to macronutrient composition—rages on. Should we eat high carb, low fat (HCLF)? Low carb, high fat (LCHF)? Or something in between? The answer rests, of course, on the type of diets our ancestors ate, both immediate and in the distant past. In the 1990s, a frozen human was discovered in the Ötztal Alps, which gave him the name "Otzi man." He was frozen in the mountains about 5,300 years ago. The contents of his stomach have been studied in the 27 years since his discovery, but an in-depth analysis was not possible until now.

A new paper was published recently which analyzed the stomach contents of Otzi man (Maixner et al, 2018). There is one reason why it took so long to analyze the contents of his stomach: the authors state that, due to mummification, his stomach had moved high up into his rib cage. The Iceman was "omnivorous, with a diet consisting both of wild animal and plant material" (Maixner et al, 2018: 2). His stomach had a really high fat content, with "the presence of ibex and red deer" (pg 3). He also "consumed either fresh or dried wild meat", while "a slow drying or smoking of the meat over the fire would explain the charcoal particles detected previously in the lower intestine content" (pg 5).

The extreme alpine environment in which the Iceman lived and where he has been found (3,210 m above sea level) is particularly challenging for the human physiology and requires optimal nutrient supply to avoid rapid starvation and energy loss [31]. Therefore, the Iceman seemed to have been fully aware that fat displays an excellent energy source. On the other hand, the intake of animal adipose tissue fat has a strong correlation with increased risk of coronary artery disease [32]. A high saturated fats diet raises cholesterol levels in the blood, which in turn can lead to atherosclerosis. Importantly, computed tomography scans of the Iceman showed major calcifications in arteria and the aorta indicating an already advanced atherosclerotic disease state [33]. Both his high-fat diet and his genetic predisposition for cardiovascular disease [34] could have significantly contributed to the development of the arterial calcifications. Finally, we could show that the Iceman either consumed fresh or dried meat. Drying meat by smoking or in the open air are simple but highly effective methods for meat preservation that would have allowed the Iceman to store meat long term on journeys or in periods of food scarcity. In summary, the Iceman's last meal was a well-balanced mix of carbohydrates, proteins, and lipids, perfectly adjusted to the energetic requirements of his high-altitude trekking. (Maixner et al, 2018: 5)

They claim that "the intake of animal adipose tissue fat has a strong correlation with increased risk of coronary artery disease", citing, of course, a paper the AHA is involved in (Sacks et al, 2017), which says that "Randomized clinical trials showed that polyunsaturated fat from vegetable oils replacing saturated fats from dairy and meat lowers CVD." This is nonsense, because the dietary fat guidelines were never backed by evidence (Harcombe et al, 2016; Harcombe, Baker, and Davies, 2016; Harcombe, 2017). Saturated fat consumption is not even associated with all-cause mortality, type II diabetes, ischemic stroke, CVD (cardiovascular disease), or CHD (coronary heart disease) (de Souza et al, 2015).

Thus, if anything, what contributed to Otzi man's arterial calcification seems to be grains/carbohydrates (see DiNicolantonio et al, 2017), not animal fat. Fat, at 9 kcal per gram, was the better choice for Otzi, as he got more kcal for his buck; eating a similar portion of carbohydrates, for example, would have meant that Otzi had to spend more time eating (since carbohydrates carry less than half the energy per gram that animal fat does). Since his stomach contained ibex (a type of goat) and red deer, it's safe to say that many of his meals consisted mainly of animal fat and protein, with some cereals and plants thrown in (he was an omnivore).
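The arithmetic behind "more kcal for his buck" is simple (using the standard Atwater factors of 9 kcal/g for fat and 4 kcal/g for carbohydrate):

\[
\frac{9\ \text{kcal/g (fat)}}{4\ \text{kcal/g (carbohydrate)}} = 2.25
\]

so matching the energy in a given mass of animal fat would have required Otzi to eat roughly 2.25 times that mass in carbohydrate.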

We can then contrast the findings on Otzi's diet with the diet of Neanderthals. It has been estimated that, during glacial winters, Neanderthals would have consumed around 74-85 percent of their energy from animal fat when there were no carbohydrates around, with the rest coming from protein (Ben-Dor, Gopher, and Barkai, 2016). Furthermore, based on contemporary data from polar peoples, it is estimated that Neanderthals required around 3,360 to 4,480 kcal per day for winter foraging and cold resistance (Steegmann, Cerny, and Holliday, 2002). The upper limit for protein intake for Homo sapiens is 4.0 g per kg of body weight per day, while for erectus it is 3.9 g per kg per day (Ben-Dor et al, 2011), which shows that Neanderthals, given their large body size, consumed close to the theoretical maximum of protein. So we can assume that Neanderthals consumed somewhere near 3,800 kcal per day. The average Neanderthal is said to have consumed about 292 grams of protein per day, or roughly 1,170 kcal (with a lower end of 985 kcal) (Ben-Dor, Gopher, and Barkai, 2016: 370).

If we further assume that Neanderthals consumed no carbohydrates during glacial winters, since plant foods were not around, that leaves animal fat and protein as the only sources of energy, with fat necessarily supplying the bulk. Thus, Neanderthals would have consumed between 2,812 and 3,230 kcal from animal fat (74-85 percent of 3,800 kcal), with the rest coming from protein. We can put this into perspective: the average American man consumes about 100 grams of protein per day while consuming 2,195 kcal per day (Ford and Dietz, 2013). For these reasons, and more, I argued that Neanderthals were significantly stronger than Homo sapiens, and this does have implications for racial differences in athletic ability.
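The figures are easy to verify (taking the standard 4 kcal/g for protein):

\[
0.74 \times 3800 \approx 2812\ \text{kcal}, \qquad 0.85 \times 3800 = 3230\ \text{kcal}
\]
\[
292\ \text{g protein} \times 4\ \text{kcal/g} \approx 1168 \approx 1170\ \text{kcal}
\]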

In sum, the last meal of Otzi man is now known. Of course, this is a case of n = 1, so we should not draw too large a conclusion from it, but it is interesting, and I see no reason why the diets of Otzi's relatives, or his own everyday diet, would have been composed any differently. Like the Neanderthals, he ate a diet high in animal fat; unlike the Neanderthals, he also ate cereals, which may have contributed to his CVD and arterial calcification. We can learn a lot about ourselves and our ancestors through the analysis of their stomach contents (where possible), teeth (where possible), and maybe even genomes (Berens, Cooper, and Lachance, 2017), because if we learn what they ate, we can begin to shift dietary advice toward a more 'natural' way of eating and avoid diseases of civilization. We have not had time to adapt to the new obesogenic environments we have constructed for ourselves; it is partly due to this that we have an obesity epidemic, and by studying the diets of our ancestors we can begin to remedy our obesity and other health problems.

Mini-Review of "J. Philippe Rushton: A Life History Perspective" by Edward Dutton

1500 words

JP Rushton was a highly controversial psychology professor who taught at the University of Western Ontario for his entire career. In the mid-1980s, he proposed that evolution was "progressive" and that there was a sort of "hierarchy" between the three races that he termed "Mongoloid, Caucasoid, and Negroid" (Rushton, 1985). His theory was then strongly criticized by scientists from numerous disciplines (Lynn, 1989; Cain, 1990; Weizmann et al, 1990; Anderson, 1991; Graves, 2002). Rushton responded to these criticisms (Rushton, 1989; Rushton, 1991; Rushton, 1997; though it's worth noting that Rushton never responded to Graves' 2002 critiques). (Also see Rushton's and Graves' debate.) Copping, Campbell, and Muncer (2014) write that "high K scores were related to earlier sexual debut and unrelated to either pubertal onset or number of sexual partners. This suggests that the HKSS does not reflect an underlying 'K dimension,'" which directly contradicts Rushton's racial r/K proposal.

There is now a new critique of Rushton's theory, by Edward Dutton, an English anthropologist with a doctorate in religious studies, published at the end of last month (Dutton, 2018). I ordered the book the day after publication, and it took three weeks to reach my residence since it came from the UK; I finally received it on Friday. It's a small book, 143 pages sans acknowledgments, references, and the index, and it seems well-written and well-researched from what I've read so far.

Here is the plan of the book:

Accordingly, in this chapter [Chapter One], we will begin by getting to grips with the key concepts of intelligence and personality. This part is primarily aimed at non-specialist readers or those who are sceptical of the two concepts [it’s really barebones; I’m more than ‘sceptical’ and it did absolutely nothing for me]. In Chapter Two, we will explore Rushton’s theory in depth. Readers who are familiar with Life History Theory may wish to fast forward through to the section on the criticisms of Rushton’s model. I intend to be as fair to his theory as possible, in a way so few of the reviewers were when he presented it. I will respond to the many fallacious criticisms of it, all of which indicate non-scientific motives [what about Rushton? Did he have any non-scientific motives?]. However, I will show that Rushton is just as guilty of these kinds of techniques as his opponents. I will also highlight serious problems with his work, including cherry picking, confirmation bias, and simply misleading other researchers. In Chapter Three, we will explore the concept of ‘race’ and show that although Rushton’s critics were wrong to question the concept’s scientific validity, Rushton effectively misuses the concept, cherry-picking such that his concept works. In Chapter Four, we will explore the research that has verified Rushton’s model, including new measures which he didn’t examine. We will then, in Chapter Five, examine the concept of genius and look at how scientific geniuses tend to be highly intelligent r-strategists, though we will see that Rushton differed from accepted scientific geniuses in key ways.

In Chapter Six, we will find that Rushton’s theory itself is problematic, though not in the ways raised by his more prominent critics. It doesn’t work when it comes to a key measure of mental stability as well as to many other measures, specifically preference for oral sex, the desire to adopt non-related children, the desire to have pets, and positive attitudes to the genetically distant. It also doesn’t work if you try to extend it to other races, beyond the three large groups he examined [because more races exist than Rushton allows]. In Chapter Seven, with all the background, we will scrutinize Rushton’s life up until about the age of 30, while in Chapter Eight, we will follow Rushton from the age of 30 until his death. I will demonstrate the extent to which he was a highly intelligent r-strategist and a Narcissist and we will see that Rushton seemingly came from a line of highly intelligent r-strategists. In Chapter Nine, I will argue that for the good of civilization those who strongly disagree with Rushton must learn to tolerate people like Rushton. (Dutton, 2018: 12-13).

On the back of the book, he writes that Rushton had “two illegitimate children including one by a married black woman.” This is intriguing. Could this be part of Rushton’s motivation to formulate his theory (his theory has already been rebutted by numerous people, so speculating on motivations in lieu of new information seems apt)?

Some people, such as PumpkinPerson, may wonder why Dutton is attacking someone "on his team", but he addresses people who would ask such questions, writing (pg. 15):

“But on this basis, it could be argued that my critique of Rushton simply gives ammunition to emotionally-driven scientists and their friends in the media. However, it could be countered that my critique only goes to show that it is those who are genuinely motivated by the understanding of the world — those who accept empirical evidence, such as with regard to intelligence and race — who are prepared to critique those regarded as being ‘on their side.’ And this is precisely because they are unbiased and thus do not think in terms of ‘teams.’”

Dutton argues that "many of the criticisms leveled against Rushton's work by mainstream scientists were actually correct" (pg 13). This is a truism. One need only read the replies to Rushton, especially Anderson (1991), to see that he completely mixed up the theory. He stated that 'Negroids' were r-strategists and 'Mongoloids' were K-strategists, but this reasoning shows that he did not understand the theory—or, if anything, that he knowingly obfuscated the theory in order to lend stronger credence to his own theory (and personal biases).

The fatal flaw in Rushton's theory is that, if r/K selection theory did apply to human races, 'Mongoloids' would be r-strategists while 'Negroids' would be K-strategists. This is because "Rushton's own suggested agents of natural selection on African populations imply that African populations have had a strong history of K-selection, as well as the r-selection implied by "droughts"" (Anderson, 1991: 59). As for Mongoloids, "Rushton lists many traits of Mongoloid peoples that are thought to represent adaptation to cold. Cold weather acts in a density-independent fashion (adaptations to cold improve survival in cold weather regardless of population density); cold weather is normally an agent of r-selection" (Anderson, 1991: 59). Rushton's own arguments imply that 'Negroids' would have had more time to approach their environment's carrying capacity and experience 'K-selecting' pressures.

Thus, Rushton's claim about the empirical ordering of life history and behavioural traits in the racial groups exactly contradicts general predictions that follow from his own claims about their ancestral ecology and the r/K model (Boyce, 1984; MacArthur, 1972; MacArthur & Wilson, 1967; Pianka, 1970; Ricklefs, 1990, p. 577). (Specific predictions from the model could be made only about individual populations after careful study in their historical habitat, as I have pointed out above). (Anderson, 1991: 59) [And such study is not possible, because the populations in question would need to be living in the environment in which the selection is hypothesized to have occurred. That, of course, is not possible today.]

Near the end of the book, though, Dutton writes (pg 148) that "Rushton was not a scientific genius. As we have discussed, unlike a scientific genius, his models had clear deficiencies, he cherry-picked data to fit his model, and he was biased in favor of his model. However, Rushton was a highly original scientist who developed an extremely original and daring theory: a kind of artistic-scientist genius combination."

The final paragraph of the book, though, sums the whole book up well. Dutton describes Jared Taylor introducing Rushton at one of his American Renaissance conferences (February 25th, 2006):

‘Well, thank you very much and . . . eh . . . and thank you Jared for . . . erm . . . putting on another wonderful conference.’ Rushton was reserved, yet friendly and avuncular. ‘Eh . . . it’s a great honor to be the after dinner speaker; to be elevated up like this.’ He was certainly elevated up. Taylor had even remarked that ‘in a sane and civilized world’ Rushton’s work would have ‘worldwide acclaim.’ Rushton’s audience admired him, trusted him . . . They weren’t familiar with him at all.

All in all, to conclude this little mini-review, I would recommend picking up this book, as it's a great look into Rushton's life and the pitfalls of his theory (and into the new work and other variables that Dutton shows bear on Rushton's M>C>N 'hierarchy'). Rushton's work, while politically daring, did not hold up to scientific scrutiny: the r/K model was already being abandoned in the late 70s (Graves, 2002), with most scientists completely dismissing it by the early 90s. Commenting on r/K selection, Stearns (1992: 206) writes that "This explanation was suggestive and influential but incorrect" (quoted in Reznick et al, 2002), while Reznick et al (2002: 1518) write that "The r- and K-selection paradigm was replaced by new paradigm that focused on age-specific mortality (Stearns 1976, Charlesworth 1980)." Rushton's model, while it 'made sense with the data', was highly flawed. And even then, it doesn't matter that it 'made sense' with the data, since Rushton's theory is one large just-so story (Gould and Lewontin, 1979; Lloyd, 1999; Richardson, 2007; Nielsen, 2009; see also Pigliucci and Kaplan, 2000 and Kaplan, 2002).

Black-White Differences in Anatomy and Physiology: Black Athletic Superiority

3000 words

Due to evolving in different climates, the races of Man differ in anatomy and physiology. This, in turn, leads to differences in sports performance—certain races do better than others in certain sports—and this is due in large part to heritable biological/physical differences between blacks and whites. Some of these are differences in somatotype, which bring a considerable advantage in, say, running (an ecto-meso, for instance, would do very well in sprinting or distance running, depending on fiber typing). This article will discuss differences in racial anatomy and physiology (again) and how they lead to disparities in certain sports.

Kerr (2010) argues that racial superiority in sport is a myth. (Read my rebuttal here.) In his article, Kerr (2010) attempts to rebut Entine's (2000) book Taboo: Why Black Athletes Dominate Sports and Why We're Afraid to Talk About It. In a nutshell, Kerr (2010) argues that race is not a valid category and that other, nongenetic factors play a role besides genetics (I don't know of anyone who has argued that it is just genetics). Race is a legitimate biological category, contrary to Kerr's assertions. Kerr, in my view, strawmans Entine (2000) by calling him a "genetic determinist"; while Entine does discuss biological/genetic factors more than environmental ones, he is in no way a genetic determinist (at least that's what I get from my reading of his book; other opinions may differ). Average physical differences between races are enough to delineate racial categories, and it is then only logical to infer that these average physical/physiological differences between the races (reviewed below) would confer an advantage in certain sports over others, the ultimate cause being the environment in which each race's ancestors evolved (causing differences in somatotype and physiology).

Black athletic superiority has been discussed for decades, the proposed reasons are numerous, and it has, of course, been noticed by the general public. In 1991, half of the respondents to a poll on blacks vs. whites in sports "agreed with the idea that 'blacks have more natural physical ability'" (Hoberman, 1997: 207). Hoberman (1997), of course, denies that there is any evidence that blacks have an advantage over whites in certain sports that comes down to heritable biological factors (which he spends the whole book arguing). However, many blacks and whites do, in fact, believe in black athletic superiority and that physiologic and anatomic differences between the races cause racial differences in sporting performance (Wiggins, 1989). Though Wiggins (1989: 184) writes:

The anthropometric differences found between racial groups are usually nothing more than central tendencies and, in addition, do not take into account wide variations within these groups or the overlap among members of different races. This fact not only negates any reliable physiological comparisons of athletes along racial lines, but makes the whole notion of racially distinctive physiological abilities a moot point.

This is horribly wrong, as will be seen throughout this article.

The different races have, on average, differing somatotypes, which means that they have different anatomic proportions (Malina, 1969):

Data from Malina (1969: 438):

Group    n     Mesomorph   Ectomorph   Endomorph
Blacks   65    5.14        2.99        2.92
Whites   199   4.29        2.89        3.86

Data from Malina (1969: 438):

Category (%)                  Blacks   Whites
Thin-build body type          8.93     5.90
Submedium fatty development   48.31    29.39
Medium fleshiness             33.69    43.63
Fat and very fat categories   9.09     21.06

This was in blacks and whites aged 6 to 11. Even at these young ages, it is clear that there are considerable anatomic differences between blacks and whites, which then lead to differences in sports performance, contra Wiggins (1989). A basic understanding of anatomy and of how the human body works is needed in order to understand how and why blacks dominate certain sports over whites (and vice versa). Somatotype is, of course, predicated on lean mass, fat mass, bone density, stature, etc.—heritable biological traits—and thus, contrary to popular belief, somatotyping still holds explanatory power in sports today (see Hilliard, 2012).

One variable that makes up somatotype is fat-free body mass. There are, of course, racial differences in fat mass, too (Vickery, Cureton, and Collins, 1988; Wagner and Heyward, 2000). Lower fat mass would, of course, impede black excellence in swimming, and this is what we see (Rushton, 1997; Entine, 2000). Wagner and Heyward (2000) write:

Our review unequivocally shows that the FFB of blacks and whites differs significantly. It has been shown from cadaver and in vivo analyses that blacks have a greater BMC and BMD than do whites. These racial differences could substantially affect measures of body density and %BF. According to Lohman (63), a 2% change in the BMC of the body at a given body density could, theoretically, result in an 8% error in the estimation of %BF. Thus, the BMC and BMD of blacks must be considered when %BF is estimated.
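To see why bone mineral density matters so much here, consider the two-component Siri (1961) equation that converts body density \(D_b\) into percent body fat (my own illustration of the sensitivity Lohman describes, not a calculation from Wagner and Heyward):

\[
\%BF = \left(\frac{4.95}{D_b} - 4.50\right) \times 100
\]

At a typical \(D_b\) of 1.05 g/cc this gives about 21% BF; an error of just 0.01 g/cc in \(D_b\), which denser bones can easily produce since the equation assumes a fixed fat-free-body density, shifts the estimate by roughly 4-5 percentage points.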

Vickery, Cureton, and Collins (1988) found that blacks had thinner skinfolds than whites, though in their sample somatotype did not explain racial differences in bone density. Like other studies (Malina, 1969), however, Vickery, Cureton, and Collins (1988) found that blacks were more likely to be mesomorphic (which would then express itself in racial differences in sports).

Hallinan (1994) surveyed 32 textbooks in sports science, exercise physiology, biomechanics, motor development, motor learning, and measurement evaluation to see what they said about racial differences in sporting performance and how they explained them. Of these 32 textbooks, according to Wikipedia, the survey "found that seven [textbooks] suggested that there are biophysical differences due to race that might explain differences in sports performance, one [textbook] expressed caution with the idea, and the other 24 [textbooks] did not mention the issue." Furthermore, Štrkalj and Solyali (2010), in their paper "Human Biological Variation in Anatomy Textbooks: The Role of Ancestry", write that their "results suggest that this type of human variation is either not accounted for or approached only superficially and in an outdated manner."

It's patently ridiculous that most textbooks on the anatomy and physiology of the human body do not discuss the anatomic and physiologic differences between racial and ethnic groups. Hoberman (1997), for his part, argues that there is no evidence to confirm the existence of black athletic superiority. Of course, many hypotheses have been proposed to explain how and why blacks are at an inherent advantage in sport. Hoberman (1997: 269) discusses one, quoting Lee Evans, world-record Olympian in the 400-meter dash:

“We were bred for it [athletic dominance] … Certainly the black people who survived in the slave ships must have contained the highest proportion of the strongest. Then, on the plantations, a strong black man was mated with a strong black woman. We were simply bred for physical qualities.”

While Hoberman (1997: 270-1) also notes:

Finally, by arguing for a cultural rather than a biological interpretation of "race," Edwards proposed that black athletic superiority results from "a complex of societal conditions" that channels a disproportionate number of talented blacks into athletic careers.

The claim that blacks were "bred for" athletic dominance is brought up often but has little (if any) empirical support (aside from just-so stories about white slavemasters breeding their biggest and strongest black slaves). The notion that "a complex of societal conditions" (Edwards, 1971: 39) explains black dominance in sports has some explanatory power in regard to how well blacks do in sporting competition, but it does not tell the whole story. Edwards (1978: 39) argues that these complex societal conditions "instill a heightened motivation among black male youths to achieve success in sports; thus, they channel a proportionately greater number of talented black people than whites into sports participation." While this may in fact be true, it does nothing to rebut the point that anatomic and physiologic factors are a driving force in racial differences in sporting performance. These environmental/sociological arguments do show us why blacks are over-represented in some sports (motivation to do well in one's chosen sport does, of course, matter), but they do not even address the differences in anatomy and physiology that would also be affecting the relationship.

For example, one can have all of the athletic gifts in the world, endowed with the best body type and physiology for any sport imaginable; without a strong mind, however, one will not succeed. Lippi, Favaloro, and Guidi (2008) write:

An advantageous physical genotype is not enough to build a top-class athlete, a champion capable of breaking Olympic records, if endurance elite performances (maximal rate of oxygen uptake, economy of movement, lactate/ventilatory threshold and, potentially, oxygen uptake kinetics) (Williams & Folland, 2008) are not supported by a strong mental background.

Any athlete—no matter their race—needs a strong mental background; without one, an athlete can have all of the physical gifts in the world and still not become top-tier in the sport of their choice. Advantageous physical factors are imperative for success in differing sports, but myriad variables work in concert to produce the desired effect, so you cannot have one without the other. On the other side, one can have a strong mental background without the requisite anatomy or physiology to succeed in the sport in question; even so, an athlete with a stronger mind than an opponent with the requisite morphology will probably win a head-to-head competition. Either way, a strong mind is needed for strong performance in anything we do in life, and sport is no different.

Echoing Hoberman's (1997) claim that "racist" beliefs in black physical superiority partly drive black success in sport, Sheldon, Jayaratne, and Petty (2007) predicted that white Americans' belief in black athletic superiority would coincide with prejudice and negative stereotyping of blacks' "intelligence" and work ethic. They studied 600 white men and women to ascertain their beliefs about black athletic superiority and its causes. Sheldon, Jayaratne, and Petty (2007: 45) discuss the widely held "perceived inverse relationship between athleticism and intelligence (and hard work)." (JP Rushton was a big proponent of this hypothesis; see Rushton, 1997. It should also be noted that both Rushton, 1997 and Entine, 2000 believe that blacks' higher levels of testosterone—3 to 15 percent higher—[Ross et al, 1986; Ellis and Nyborg, 1992; see rebuttal of both papers] cause their superior athletic performance. I have convincingly shown that blacks do not have higher levels of testosterone than other races, and if they do, the difference is negligible.) However, in his book The Sports Gene: Inside the Science of Extraordinary Athletic Performance, Epstein (2014) writes:

With that stigma in mind [that there is an inverse relationship between "intelligence" and athletic performance], perhaps the most important writing Cooper did in Black Superman was his methodological evisceration of any supposed inverse link between physical and mental prowess. "The concept that physical superiority could somehow be a symptom of intellectual inferiority became associated with African Americans … That association did not begin until about 1936."

What Cooper (2004) implied is that there was no "inverse relationship" between intelligence and athletic ability until Jesse Owens blew away the competition at the 1936 Olympics in Berlin, Germany. In fact, the relationship between "intelligence" and athletic ability is positive (Heppe et al, 2016). Cooper is also a co-author, with Morrison, of the paper "Some Bio-Medical Mechanisms in Athletic Prowess" (Morrison and Cooper, 2006), in which they argue—convincingly—that the "mutation appears to have triggered a series of physiological adjustments, which have had favourable athletic consequences."

Thus, the hypothesis claims that differences in glucose conversion rates between West African blacks and their descendants began, but did not end, with the sickling of the hemoglobin molecule, in which valine is substituted for glutamic acid at the sixth amino acid of the beta chain. Marlin et al (2007: 624) showed that male athletes who carried the sickle cell trait (SCT) "are able to perform sprints and brief exercises at the highest levels." This is more evidence for Morrison and Cooper's (2006) hypothesis on the evolution of muscle fiber typing in West African blacks.

Bejan, Jones, and Charles (2010) explain that the phenomenon of whites being faster swimmers and blacks being faster runners can be accounted for by physics. Since locomotion is a "falling-forward cycle"—body mass falls forward and then rises again—mass that falls from a higher altitude falls faster and farther forward. For running, the altitude is set by the position of the center of mass above the ground; for swimming, it is set by how far the body rises out of the water. Blacks have a center of gravity that is about 3 percent higher than whites', which implies that blacks have a 1.5 percent speed advantage in running whereas whites have a 1.5 percent speed advantage in swimming. When all races were matched for height, Asians fared even better than whites in swimming, but they do not set world records because they are not as tall as whites (Bejan, Jones, and Charles, 2010).
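The 3-percent-to-1.5-percent conversion follows from simple falling-body scaling (a first-order approximation consistent with the falling-forward analysis, not a quotation from the paper): if speed scales as \(v \propto \sqrt{gh}\), where \(h\) is the height from which the mass falls, then

\[
\frac{\Delta v}{v} \approx \frac{1}{2}\,\frac{\Delta h}{h} = \frac{1}{2} \times 3\% = 1.5\%
\]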

It has been proposed that stereotype threat is part of the reason for East African running success (Baker and Horton, 2003). They note that many theories have been proposed to explain black African running success, from genetic theories to environmental determinism (the notion that physiologic adaptations to climate also drive differences in sporting competition). Baker and Horton (2003) note "that young athletes have internalised these stereotypes and are choosing sport participation accordingly. He speculates that this is the reason why white running times in certain events have actually decreased over the past few years; whites are opting out of some sports based on perceived genetic inferiority." While this may be true, it wouldn't matter, as people gravitate toward what they are naturally good at, and what dictates that is their mind, anatomy, and physiology. They pretty much argue that stereotype threat is a cause of East African running performance on the basis of two assertions: (1) that East African runners are so good that it seems pointless to attempt to win if you are not East African, and (2) that since East Africans are so good, fewer people will try out, continuing the illusion that East Africans would always dominate middle- and long-distance running. However, while this view is plausible, there is little data to back the arguments.

To explain African running success, we must take a systems view, not one of reductionism (i.e., gene-finding). We need to see how the systems in question interact with all of their parts. So while Jamaicans, Kenyans, and Ethiopians (and American blacks) do dominate running competitions, attempting to "find genes" that account for success in these sports seems like a moot point, since the whole system is what matters, not what we can reduce the system to.

However, there are some competitions in which blacks do not do so well, and this is hardly discussed, if at all, by any author I have read on this matter. Blacks are highly under-represented in strength sports and strongman competitions. Why? My explanation is simple: the traits behind their superiority in sprinting and distance running (along with what makes them successful at baseball, football, and basketball) impede them from doing well in strength and strongman competitions. It's worth noting that no black man has ever won the World's Strongest Man competition (indeed, the only African country to even place, Rhodesia, was represented by a white man), and the causes of these disparities come down to racial differences in anatomy and physiology.

I discussed racial differences in the big four lifts and how racial differences in anatomy and physiology would contribute to how well each race performs on the lift in question. I concluded that Europeans and Asians have an advantage over blacks in these lifts, for reasons of inherent differences in anatomy and physiology. One major cause is the differing distribution of muscle fiber types between the races (Ama et al, 1986; Tanner et al, 2002; Caesar and Henry, 2015), and blacks' fiber typing helps them in short-distance sprinting (Zierath and Hawley, 2003). Muscle fiber typing is a huge cause of black athletic dominance (and non-dominance). Blacks are not stronger than whites, contrary to popular belief.

I also argued that Neanderthals were stronger than Homo sapiens, which has implications for racial differences in strength (and sports). Neanderthals had a wider pelvis than our species since they evolved in colder climes (at the time) (Gruss and Schmidt, 2016). With a wider pelvis and shorter body than Homo sapiens, they were able to generate more power. I then implied that the current differences in strength and running we see between blacks and whites map onto the differences between Neanderthals and Homo sapiens; thus, evolution in differing climates led to differences in somatotype, which eventually led to differences in sporting competition (what Baker and Horton, 2003 term "environmental determinism", which I will discuss in the context of racial differences in sports in the future).

Finally, blacks dominate the sport of bodybuilding, with Phil Heath having dominated the competition for the past 7 years. Blacks dominate bodybuilding because, as noted above, blacks have thinner skinfolds than whites, so the striations in their muscles are more visible, on average, at the same exact %BF. Bodybuilders and weightlifters have been found similar in mesomorphy, but the bodybuilders showed more musculature than the weightlifters, whereas the weightlifters showed higher levels of body fat, with a significant difference observed between bodybuilders and weightlifters in regard to endomorphy and ectomorphy (weightlifters skewing endo, bodybuilders skewing ecto, as I have argued in the past; Imran et al, 2011).

To conclude, blacks do dominate American sporting competition, and while much ink has been spilled arguing that cultural and social—not genetic or biologic—factors explain black athletic superiority, these factors clearly work in concert with a strong mind to produce the athletic phenotype; no one factor has prominence over the others, though, above all, without the right mindset for the sport in question, an athlete will not succeed. A complex array of factors—muscle fiber type, mindset, anatomy, overall physiology, and fat mass, among other variables—explains the hows and whys of black athletic superiority. Cultural and social explanations on their own do not tell the whole story, just as genetic/biologic explanations on their own do not either. Every aspect, including the historical, needs to be examined when discussing dominance (or the lack thereof) in certain sports, along with genetic and nongenetic factors, to see how and why certain races and ethnies excel in certain sports.

Vitamin D, Physiology, and the Cold

1200 words

I've been chronicling the vitamin D hypothesis (VDH) recently, since it has great explanatory—and predictive—power. Light skin is a clear adaptation to low UVR, while dark skin is a clear adaptation to high UVR. Dark, highly melanized skin confers advantages in high-UVR environments, such as protection against DNA damage, and it absorbs sufficient UV for vitamin D production while also protecting against folate depletion. However, when our ancestors migrated out of Africa, dark skin would not cut it in temperate environments with highly variable UV rays. This is where our highly adaptive physiology came into play, ensuring that we survived in highly variable environments. Light skin was important in low-UVR environments in order to synthesize ample vitamin D; that synthesized vitamin D then conferred numerous other physiological advantages in the cold.

Eighty to ninety percent of the vitamin D humans require comes from the sun, whereas ten to twenty percent comes from the diet: fatty fish, eggs, and dairy products (fortified with vitamin D, of course) (Ajabshir, Asif, and Nayer, 2014). Humans, Arctic peoples aside, need to rely on high amounts of UV rays for vitamin D synthesis (Carlberg, 2014). Since dark skin does not synthesize vitamin D as well as light skin, skin gradually lightened as our ancestors migrated out of Africa (Juzeniene et al, 2009). This was imperative for the physiologic adaptations that then occurred as our physiology adapted to novel, colder environments with fewer UV rays.

Sufficient amounts of vitamin D are highly important for the human musculoskeletal system (Wintermeyer et al, 2016), which is extremely important for birthing mothers. Along with the increased vitamin D synthesis in low-UV environments, the heightened production of vitamin D conferred numerous other physiologic benefits which then helped humans adapt to colder environments with more variable UVR.

Vasoconstriction occurs when the blood vessels constrict, which raises blood pressure, whereas vasodilation is the dilation of blood vessels, which lowers blood pressure. So, evolutionarily speaking, we had to have adaptive physiology in order to "switch" back and forth between vasoconstriction and vasodilation, depending on what the current environment demanded. Vasoconstriction, though, most likely had no advantage in high-UV environments and thus must have been an advantage in low-UV environments, which were more likely to be cold: when the blood vessels constrict, blood pressure increases and heat loss is considerably slowed.

The races also differ, along with many other physiologic traits, in nitric oxide-mediated vasodilation. Mata-Greenwood and Chen (2008) reviewed the relevant literature on black/white differences in nitric oxide-dependent vasorelaxation and concluded that nitric oxide-mediated vasodilation is reduced in darker-skinned populations. We can thus infer that nitric oxide-mediated vasodilation is increased in lighter-skinned populations, which would have conferred a great physiological advantage when it came to colonizing environments with lower UV rays.

VDR and vitamin D-metabolizing enzymes are present in adipose tissue. Tetrahydrobiopterin acts as a cofactor in the synthesis of nitric oxide, whose primary function in the blood vessels is as a vasodilator (Chalupsky and Cai, 2005). Since vasodilation is the body's primary response to heat stress (blood flow to the skin increases, allowing heat to leave the body), the human body's vasodilation and vasoconstriction mechanisms were important in surviving areas with varying UVR.

One function of our adipose tissue is the storage of vitamin D, and vitamin D-metabolizing enzymes and VDR are also expressed in the adipocyte (Abbas, 2017). Given these known actions of vitamin D on adipose tissue, we can speculate that, since vitamin D and the VDR are expressed in adipose tissue, vitamin D may have played a role there that was important for surviving in cold, low-UV environments (see below).

Furthermore, since these mechanisms respond to short-term changes, we can infer that they would hardly be of any use in uniformly high-UVR environments but would be critical in temperate environments where UVR varies. It is also likely that vitamin D influences vasodilation by influencing nitric oxide synthesis (see Andrukhova et al, 2014) and influences vasoconstriction via the renin-angiotensin system (Ajabshir, Asif, and Nayer, 2014).

This would have conferred great benefit to our ancestors as they migrated into more temperate and colder climates. You can read this for information on how adaptive our physiology is and why it is like that. We moved into numerous new environments faster than natural selection could act; this is why the human body's physiology is extremely adaptive.

What this suggests is that as skin lightened and adapted to low UV, the increased synthesis of vitamin D influenced vasodilation (through its strong influence on nitric oxide synthase) along with vasoconstriction. This implies that surviving in novel environments was made easier by adaptive physiology and skin color, together with body fat reserves and the physiologic effects of vitamin D on adipose tissue. These physiologic adaptations would have been of little to no use in Africa. Thus, they must have become useful after we migrated out of Africa and experienced wildly varying environments, which is the whole reason why our physiology evolved (Richardson, 2017: chapter 5).

When the human body is exposed to cold, a few things occur: cutaneous vasoconstriction and shivering (Castellani and Young, 2016) along with “behavioral thermoregulation” (Young, Sawka, and Pandolf, 1996), while the body can also adapt physiologically to the cold (Young, 1994). Given the physiologic functions that vitamin D and folate serve in regard to vasodilation and vasoconstriction, there is a great chance that these effects were important in maintaining energy homeostasis in colder climates.

In sum, the evolution of light skin conferred a great survival advantage on our ancestors. It increased vitamin D synthesis in the body, which was then of utmost importance for adapting human physiology to colder, lower-UV environments. Without our adaptive physiological systems, we would not have been able to leave Africa for novel environments. We need both behavioral thermoregulation and adaptive physiology to survive in novel environments. Thus, the importance of skin lightening in our evolution becomes clearer:

As humans migrated out of Africa, lighter skin was needed to synthesize vitamin D. This was especially important to women, who needed higher amounts of vitamin D in order to produce enough calcium for lactation and pregnancy, so the babe had enough calcium to grow its skeleton in the womb. The uptick in vitamin D synthesis then permitted the adaptive physiologic changes brought on by the cold; only through vasodilation and vasoconstriction, along with shivering and adapting behaviorally to the new environments, were our ancestors able to survive. Dark skin cannot synthesize vitamin D as well as light skin can in low UV environments; this can also be seen in the lowered nitric oxide-dependent vasodilation in dark-skinned populations. Thus, vasoconstriction conferred no physiologic benefit in high UV environments, but almost certainly conferred a physiologic benefit in low UV environments.

Why Are Women Lighter than Men? Skin Color and Sexual Selection

1550 words

Skin color differences between the sexes are always discussed in terms of women being lighter than men, never in terms of men being darker than women. The same pattern is seen in numerous animal studies (some reviewed by Rushton and Templer, 2012; read rebuttal here; also see Ducrest, Keller, and Roulin, 2008). The colors that evolve on an animal’s fur or feathers through whatever mate choices, though, are irrelevant to the survival capabilities that the fur, feathers, etc. give the organism in question. When we look at humans, we lost our protective body hair millions of years ago (Lieberman, 2015), and with that, we could then sweat. So once furlessness evolved in the lineage Homo, there was little flexibility in how environmental pressures could act on skin color in Africa. It should be further noted that, as Nina Jablonski writes in her book Living Color: The Biological and Social Meaning of Skin Color (2012, pg 74):

No researchers, by the way, have explored the opposite possibility, that women deliberately selected darker men!

One hypothesis proposes that lighter skin in women first arose as a byproduct of the differing levels of hormones in the sexes, with men, who obviously have higher levels of testosterone, being darker than women. According to this hypothesis, light-skinned women evolved because men could distinguish high-quality from low-quality mates and gauge hormonal status and childbearing potential, which was much easier to do with lighter-skinned than darker-skinned women.

Another hypothesis holds that, further from the equator, sexual competition between women for mates would have increased because available mates were depleted, and so light skin evolved because men found it more beautiful. On this view, women living at higher latitudes were lighter than women living at lower latitudes because men had to travel further to hunt, and so were more likely to die, which caused even greater competition between females and lightened their skin even more. A related argument proposes that light skin in women evolved as part of a complex of childlike traits, including a higher voice, smoother skin, and childlike facial features, which then reduced male competition and aggressiveness. But women did not stay around waiting to be provisioned; they got out and gathered, and sometimes hunted, too.

Harris (2005) proposes that light skin evolved through parental selection, with mothers choosing the lightest daughters to survive and killing off the darker ones. But all babies are born pale, or at least lacking the amount of pigment they have later in life. So how could parental (mostly maternal) selection for lighter-skinned girls have worked, as Harris (2005) proposes? It would have been a pretty large guessing game.

The role of sexual selection in regard to human skin color, though, has been tested and falsified. Madrigal and Kelly (2007a) tested the hypothesis that sexual dimorphism in skin reflectance should be positively correlated with distance from the equator. It had been proposed by other authors that as our ancestors migrated out of Africa, environmental selection relaxed and sexual selection took over. Their data did not support the hypothesis and so falsified it.

Madrigal and Kelly (2007a: 475) write (emphasis mine):

We tested the hypothesis that human sexual dimorphism in skin color should be positively correlated with distance from the equator, a proposal generated by the sexual selection hypothesis. We found no support for that proposition. Before this paper was written, the sexual selection hypothesis was based on stated male preference data in a number of human groups. Here, we focused on the actual pattern of sexual dimorphism. We report that the distribution of human sexual dimorphism in relation to latitude is not that which is predicted by the sexual selection hypothesis. According to that hypothesis, in areas of low solar radiation, there should be greater sexual dimorphism, because sexual selection for lighter females is not counterbalanced by natural selection for dark skin. Our data analysis does not support this prediction. 
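
To make the shape of this test concrete, here is a minimal sketch in Python of how one could check whether sexual dimorphism in skin reflectance rises with distance from the equator. The numbers are invented for illustration; they are not Madrigal and Kelly’s data.

```python
# Minimal sketch of a Madrigal and Kelly (2007a)-style test: does sexual
# dimorphism in skin reflectance increase with distance from the equator?
# The sample values below are invented for illustration, NOT published data.
from scipy.stats import pearsonr

# (absolute latitude in degrees, male reflectance %, female reflectance %)
samples = [
    (2, 30.1, 31.0),
    (10, 33.5, 34.2),
    (21, 40.2, 41.5),
    (35, 55.0, 56.1),
    (48, 62.3, 63.0),
    (60, 66.8, 67.4),
]

latitudes = [lat for lat, _, _ in samples]
# Dimorphism = female minus male reflectance (women lighter -> positive)
dimorphism = [f - m for _, m, f in samples]

r, p = pearsonr(latitudes, dimorphism)
print(f"r = {r:.3f}, p = {p:.3f}")
# The sexual selection hypothesis predicts r > 0 (greater dimorphism at
# higher latitudes); Madrigal and Kelly report no such pattern in real data.
```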

Frost (2007) replied, though, stating that Madrigal and Kelly (2007a) presumed sexual selection was equal in all areas. Madrigal and Kelly (2007b) responded that they tested one specific hypothesis regarding sexual selection and found it to be false. Frost (2007) proposed two hypotheses in order to test his version, but, again, no one has proposed that women select darker men, which could just as well produce lighter-skinned women (though sexual selection does not, and cannot, explain the observed gradation in skin color between men and women).

Skin color differences between men and women first arose to ensure that women had enough calcium for lactation and pregnancies. Since skin pigmentation must protect against UVR while still permitting vitamin D synthesis, it must be light or dark enough to ensure ample vitamin D production in a given climate while also protecting against the UVR there. Women needed sufficient vitamin D, and therefore sufficient calcium, to ensure a strong skeleton for the fetus, for breastfeeding, and for the mother’s own overall health.

However, breastfeeding new babes is demanding on the mother’s body (calcium reserves are depleted four times quicker), and the calcium the babe needs to grow its skeleton comes directly from the mother’s bones. Even a mother deficient in vitamin D will still give calcium to the babe at the expense of her own health. But she then needs to increase her reserves of calcium in order to ensure future pregnancies aren’t fatal for her or her offspring.

Though, at the moment, to the best of my knowledge, there are no studies on calcium absorption, vitamin D levels, and the recovery of the female skeleton after breastfeeding. (N-3 fatty acids are paramount as well, so a mother must have sufficient fat stores; see Lassek and Gaulin, 2008.) Thus, light-skinned women are most likely at an advantage when it comes to vitamin D production: the lighter they are, the more vitamin D they can synthesize and the more calcium they can absorb for further pregnancies. Since light skin synthesizes vitamin D more efficiently, the body can then absorb and use calcium more efficiently; the body cannot absorb and use calcium unless vitamin D is present. And since the fetus takes calcium from the mother’s skeleton, ample vitamin D must be present, which requires skin light enough to ensure the vitamin D synthesis needed for calcium absorption (Cashman, 2007; Gallagher, Yalamanchili, and Smith, 2012; Aloia et al, 2013).

Nina Jablonski writes in her book (2012, 77):

Women who are chronically deficient in vitamin D because of successive pregnancies and periods of breastfeeding experience a form of bone degeneration called osteomalacia. This has serious consequences for infants born of later pregnancies and for mothers themselves, who are at greater risk of breaking bones. It makes sense that protection of female health during the reproductive years would be a top evolutionary priority, so we are now investigating whether, in fact, slightly lighter skin in women might be a fairly simple way of ensuring that women get enough vitamin D after pregnancy and breastfeeding to enable their bodies to recover quickly. The need for maintaining strong female skeletons through multiple pregnancies may have been the ultimate evolutionary reason for the origin of differences in skin color between men and women.

While Jablonski and Chaplin (2000: 78) write:

We suggest that lighter pigmentation in human females began as a trait directly tied to increased fitness and was subsequently reinforced and enhanced in many human populations by sexual selection.

It is obvious that skin color in women represents a complex balancing act between giving the body the ability to synthesize ample vitamin D and protecting it from UVR. Skin coloration in humans is very clearly adaptive to UVR, and so, with differing average levels of UVR across geographic locales, skin color evolved to accommodate the human body to whichever climate it found itself in, because human physiology is perhaps the ultimate adaptation.
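
As a purely illustrative toy model (my own construction, not anything from Jablonski and Chaplin), the balancing act can be pictured as minimizing two opposing costs: UVR damage, which falls as melanin rises, and vitamin D shortfall, which rises with melanin and falls with ambient UVR.

```python
# Purely illustrative toy model of the pigmentation "balancing act"
# described above. The cost functions and their quadratic form are
# invented, chosen only so the optimum varies smoothly with UVR; they
# are not from the cited literature.

def total_cost(melanin: float, uvr: float) -> float:
    """Combined cost at a melanin level (0 = lightest, 1 = darkest).

    UVR damage (e.g., folate photolysis) falls as melanin rises;
    vitamin D shortfall rises with melanin and falls with ambient UVR.
    """
    uvr_damage = uvr * (1.0 - melanin) ** 2
    vitamin_d_shortfall = melanin ** 2 / uvr
    return uvr_damage + vitamin_d_shortfall

def optimal_melanin(uvr: float, steps: int = 1000) -> float:
    """Grid-search the melanin level that minimizes total cost."""
    grid = [i / steps for i in range(steps + 1)]
    return min(grid, key=lambda m: total_cost(m, uvr))

# Higher ambient UVR (tropics) favors darker skin; lower UVR (higher
# latitudes) favors lighter skin, echoing the observed latitude gradient.
for uvr in (0.25, 0.5, 1.0, 2.0, 4.0):
    print(f"UVR = {uvr:>4}: optimal melanin ~ {optimal_melanin(uvr):.2f}")
```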

Sexual selection for skin color played a secondary, not primary, role (Jablonski, 2004: 609) in the evolution of skin color differences between men and women. There is a delicate balancing act between skin color, vitamin D synthesis, and UVR protection. Women need to produce enough vitamin D to ensure enough calcium, and its absorption, for the baby, and then to replace what the baby took while in the womb so that future pregnancies are successful. Sexual selection cannot explain the observed gradation in skin color between the races and ethnies of the human race. In my opinion, the only explanation for the observed gradation is that skin color evolved due to climatic demands, and independent justification exists for the hypothesis as a whole (Jablonski and Chaplin, 2010).

I don’t see any way that sexual selection can explain the observed gradation in skin color around the world. Skin color is very clearly an adaptation to climate, though, of course, cultural customs could widen the skin color differences between the sexes and make women lighter over time. Nevertheless, what explains the observed skin gradation is adaptation to climate to ensure vitamin D synthesis, among a slew of other factors (Jones et al, 2018). Sexual selection, while it may explain small differences between the sexes, cannot explain the differences noted between the native human races.