NotPoliticallyCorrect

Category Archives: Race Realism

Is Obesity Caused by a Virus?

2150 words

I’ve recently taken a large interest in the human microbiome and parasites and their relationship with how we behave. Certain parasites can and do have an effect on human behavior, and they also reduce or increase certain microbes, some of which are important for normal functioning. What I’m going to write may seem weird and counter-intuitive to the CI/CO (calories in/calories out) model, but once you understand how diversity in the human microbiome matters for energy acquisition, you’ll begin to understand how the microbiome contributes to the exploding obesity rate in the first world.

One of the books I’ve been reading about the human microbiome is 10% Human: How Your Body’s Microbes Hold the Key to Health and Happiness. Alanna Collen, who holds a Ph.D. in evolutionary biology, outlines how the microbiome affects our health and how we behave. One of the most intriguing things I’ve read in the book so far is the relationship between microbiome diversity, obesity, and a virus.

Collen (2014: 69) writes:

But before we get too excited about the potential for a cure for obesity, we need to know how it all works. What are these microbes doing that make us fat? Just as before, the microbiotas in Turnbaugh’s obese mice contained more Firmicutes and fewer Bacteroidetes, and they somehow seemed to enable the mice to extract more energy from their food. This detail undermines one of the core tenets of the obesity equation. Counting ‘calories-in’ is not as simple as keeping track of what a person eats. More accurately, it is the energy content of what a person absorbs. Turnbaugh calculated that the mice with the obese microbiota were collecting 2 per cent more calories from their food. For every 100 calories the lean mice extracted, the obese mice squeezed out 102.

Not much, perhaps, but over the course of a year or more, it adds up. Let’s take a woman of average height, 5 foot 4 inches, who weighs 62 kg (9 st 11 lb) and has a healthy Body Mass Index (BMI: weight (kg) / height (m)^2) of 23.5. She consumes 2000 calories per day, but with an ‘obese’ microbiota, her extra 2 per cent calorie extraction adds 40 more calories each day. Without expending extra energy, those further 40 calories per day should translate, in theory at least, to a 1.9 kg weight gain over a year. In ten years, that’s 19 kg, taking her weight to 81 kg (12 st 11 lb) and her BMI to an obese 30.7. All because of just 2 per cent extra calories extracted from her food by her gut bacteria.
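
Collen’s arithmetic here is easy to reproduce. Below is a minimal Python sketch of the same back-of-the-envelope calculation; the roughly 7,700 kcal per kg of body fat conversion is the standard rule of thumb, an assumption on my part rather than a figure from the book:

```python
# A minimal sketch of Collen's back-of-the-envelope calculation.
# Assumption: ~7,700 kcal per kg of body fat (the standard rule of
# thumb), not a figure taken from the book itself.

KCAL_PER_KG_FAT = 7700

def bmi(weight_kg: float, height_m: float) -> float:
    """BMI = weight (kg) / height (m)^2."""
    return weight_kg / height_m ** 2

height_m = 1.625          # 5 ft 4 in
weight_kg = 62.0
daily_intake = 2000       # kcal consumed per day
extra_fraction = 0.02     # 2% extra extraction by the 'obese' microbiota

extra_kcal_per_day = daily_intake * extra_fraction              # 40 kcal/day
kg_gained_per_year = extra_kcal_per_day * 365 / KCAL_PER_KG_FAT

print(f"Starting BMI: {bmi(weight_kg, height_m):.1f}")          # ~23.5
print(f"Gain per year: {kg_gained_per_year:.1f} kg")            # ~1.9 kg
ten_year_weight = weight_kg + 10 * kg_gained_per_year           # ~81 kg
print(f"BMI after ten years: {bmi(ten_year_weight, height_m):.1f}")  # ~30.7, obese
```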

Turnbaugh et al (2006) showed that differing microbiota contribute to differing amounts of weight gain. The obese microbiome does have a greater capacity to extract more energy out of the same amount of food in comparison to the lean microbiome. This implies that obese people would extract more energy eating the same food as a lean person—even if the so-called true caloric value on the package, derived from a calorimeter, says otherwise. How much energy we absorb from the food we consume comes down to genes, but not just the genes you get from your parents; it matters which genes are turned on or off. Our microbes also control some of our genes to suit their own needs—driving us to do things that would benefit them.

Gut microbiota does influence gene expression (Krautkramer et al, 2016). This is something that behavioral geneticists and psychologists need to look into when attempting to explain human behavior, but that’s for another day. The fact of the matter is, where the energy that’s broken down from food by the microbiome goes is dictated by genes, the expression of which is controlled by the microbiome. Certain microbiota have the ability to turn up production in certain genes that encourage more energy to be stored inside the adipocyte (Collen, 2014: 72). So the ‘obese’ microbiota, mentioned previously, has the ability to upregulate genes that control fat storage, forcing the body to extract more energy out of what is eaten.

Indian doctor Nikhil Dhurandhar set out to find out why he couldn’t cure his patients of obesity; they kept coming back to him again and again, uncured. At the time, an infectious virus was wiping out chickens in India. Dhurandhar had family and friends who were veterinarians, who told him that the infected chickens were fat—with enlarged livers, shrunken thymus glands, and a lot of fat. Dhurandhar then injected chickens with the virus that supposedly induced the weight gain, and discovered that the injected chickens were fatter than the chickens who were not injected with it (Collen, 2014: 56).

Dhurandhar, though, couldn’t continue his research into other causes of obesity in India, so he decided to relocate his family to America and study the underlying science behind obesity. He couldn’t find work in any labs in order to test his hypothesis that a virus was responsible for obesity, but right before he was about to give up and go back home, nutritional scientist Richard Atkinson offered him a job in his lab. Though, of course, they were not allowed to ship the chicken virus to America “since it might cause obesity after all” (Collen, 2014: 75), so they had to experiment with another virus: adenovirus 36, or Ad-36 (Dhurandhar et al, 1997; Atkinson et al, 2005; Pasarica et al, 2006; Gabbert et al, 2010; Vander Wal et al, 2013; Berger et al, 2014; Pontiero and Gnessi, 2015; Zamrazilova et al, 2015).

Atkinson and Dhurandhar injected one group of chickens with the virus and kept one control group. The infected chickens did indeed grow fatter than the uninfected ones. However, there was a problem: Atkinson and Dhurandhar could not outright infect humans with Ad-36 and test them, so they did the next best thing and tested their blood for Ad-36 antibodies. Thirty percent of obese subjects ended up having Ad-36 antibodies, whereas only 11 percent of the lean subjects did (Collen, 2014: 77).

So, clearly, Ad-36 meddles with the body’s energy storage system, though we currently don’t know how much this virus contributes to the epidemic. This throws the CI/CO theory of obesity into disarray, showing that calling obesity a ‘lifestyle disease’ is extremely reductionist and that other factors strongly influence the disease.

On the mechanism by which Ad-36 influences obesity:

The mechanism in which Ad-36 induces obesity is understood to be due to the viral gene, E4orf1, which infects the nucleus of host cells. E4orf1 turns on lipogenic (fat producing) enzymes and differentiation factors that cause increased triglyceride storage and differentiation of new adipocytes (fat cells) from pre-existing stem cells in fat tissue.

We can see that there is large variation in how much energy is absorbed by looking at one overfeeding study. Bouchard et al (1990) fed 12 pairs of identical twins 1000 kcal a day over their total daily energy expenditure (TDEE), 6 days per week for 100 days. Each man ate about 84,000 kcal more than his body needed to maintain his previous weight. On the standard assumption of 3,500 kcal per pound of fat, this should have translated to exactly 24 pounds gained for each individual man in the study, but this did not turn out to be the case. Quoting Collen (2014: 78):

For starters, even the average amount the men gained was far less than maths dictates that it should have been, at 18 lb. But the individual gains betray the real failings of applying a mathematical rule to weight loss. The man who gained the least managed only 9 lb — just over a third of the predicted amount. And the twin who gained the most put on 29 lb — even more than expected. These values aren’t ’24 lb, more or less’, they are so far wide of the mark that using it even as a guide is purposeless.

This shows that, obviously, the composition of the individual microbiome contributes to how much energy is broken down in the food after it is consumed.
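
To see how far the individual results strayed from that prediction, here is a minimal Python sketch; the 3,500 kcal per pound conversion is the standard rule of thumb the CI/CO model leans on, not a figure from Bouchard et al:

```python
# Expected vs. observed weight gain in the Bouchard et al (1990)
# overfeeding study. The 3,500 kcal/lb figure is the conventional
# rule of thumb, not data from the study itself.

KCAL_PER_LB = 3500

overfed_days = 84                    # 6 days/week over the 100-day protocol
surplus_kcal = 1000 * overfed_days   # ~84,000 kcal above maintenance
expected_gain_lb = surplus_kcal / KCAL_PER_LB

print(f"Surplus: {surplus_kcal} kcal, predicted gain: {expected_gain_lb:.0f} lb")  # 24 lb

# Reported outcomes (Collen, 2014: 78): the spread the rule cannot explain.
observed_lb = {"average": 18, "smallest gain": 9, "largest gain": 29}
for label, lb in observed_lb.items():
    print(f"{label}: {lb} lb ({lb / expected_gain_lb:.0%} of prediction)")
```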

One of the most prominent microbes that shows a lean/obese difference is one called Akkermansia muciniphila. The less Akkermansia one has, the more likely they are to be obese. Akkermansia comprises about 4 percent of the whole microbiome in lean people, but it is almost nowhere to be found in obese people. Akkermansia lives on the mucus lining of the gut, which keeps it from crossing over into the blood. Further, people with a low amount of this bacterium are also more likely to have a thinner mucus layer in the gut and more lipopolysaccharides (LPS) in the blood (Schneeberger et al, 2015). This one species of microbiota is responsible for dialing up gene activity which prevents LPS from crossing into the blood and which produces more mucus for it to live on. This is one example of how the trillions of bacteria in our microbiome can upregulate the expression of genes for their own benefit.

Everard et al (2013) showed that supplementing the diets of a group of mice with Akkermansia lowered their LPS levels, spurred their fat tissue to create new cells, and dropped their weight. They concluded that the weight gain in the mice was due to increased LPS production, which forced the fat cells to take in more energy and not use it.

There is evidence that obesity spreads in the same way an epidemic does. Christakis and Fowler (2007) followed over 12,000 people from 1971 to 2003. Their main conclusion was that the best predictor of weight gain for an individual was whether their closest loved one had become obese. One’s chance of becoming obese increased by a staggering 171 percent if a close friend had become obese in the 32-year time period, whereas if one twin became obese there was a 40 percent chance that the co-twin would become obese, and if one spouse became obese, the chance the other would become obese was 37 percent. This effect did not hold for neighbors, so something else must be going on (i.e., it’s not the quality of the food in the neighborhood). Of course, when obesogenic environments are spoken of, the main culprits are the spread of fast food restaurants and the like. But in regards to this study, that doesn’t seem to explain the shockingly high chance people have of becoming obese if their closest loved ones did. What does?

There are, of course, the same old explanations, such as sharing food, but looking at it from a microbiome point of view, it can be seen that the microbiome can and does contribute to adult obesity—due in part to different viruses’ effects on our energy storage system, as described above. But I believe that the hypothesis that we share microbes with each other, which also drive obesity, should be introduced as an alternate or complementary explanation.

As you can see, the closer one is to another person who becomes obese, the higher one’s own chance of becoming obese. Close friends (and obviously couples) spend a lot of time around each other, in the same house, eating the same foods, using the same bathrooms, etc. Is it really ‘out there’ to suggest that something like this may also contribute to the obesity epidemic? Taking into account some of the evidence reviewed here, I don’t think such a hypothesis should be so easily discarded.

In sum, reducing obesity just to CI/CO is clearly erroneous, as it leaves out a whole slew of other explanatory theories/factors. Clearly, our microbiome has an effect on how much energy we extract from our food after we consume it. Certain viruses—such as Ad-36, an avian virus—influence the body’s energy storage, turning on fat-producing enzymes and driving the differentiation of new fat cells, as described above. That viruses and our diet can influence our microbiome—along with our microbiome influencing our diet—definitely needs to be studied more.

One good correlate of the microbiome’s/viruses’ role in human obesity is that the closer one is to someone who becomes obese, the more likely one is to become obese as well. And since the chance increases with closeness, the explanation from gut microbes and how they break down our food and store energy becomes even more relevant. The trillions of bacteria in our guts may control our appetites (Norris, Molina, and Gewirtz, 2013; Alcock, Maley, and Aktipis, 2014), and do control our social behaviors (Foster, 2013; Galland, 2014).

So, clearly, to understand human behavior we must understand the gut microbiome, how it interacts with the brain and our behaviors, and how and why it leads to obesity. Ad-36 is a great start, with quite a bit of research into it; I await more research into how our microbiome and parasites/viruses control our behavior, because the study of human behavior should now include the microbiome and parasites/viruses, since they have such a huge effect on each other and on us—their hosts—as a whole.

Racial Differences in Jock Behavior: Implications for STI Prevalence and Deviance

1350 words

The Merriam-Webster dictionary defines jock as “a school or college athlete” and “a person devoted to a single pursuit or interest“. This term, as I previously wrote about, holds a lot of predictive power in terms of life success. What kind of racial differences can be found here? As with a lot of life outcomes/predictors, there are racial differences, and they are robust.

Male jocks get more sex, after controlling for age, race, SES, and family cohesion. Being involved in sports is known to decrease sexual promiscuity; however, this effect did not hold for black American jocks, with the jock label being associated with higher levels of sexual promiscuity (Miller et al, 2005). Black American jocks reported significantly higher levels of sexual activity than non-black jocks, but the authors did not find that white jocks took fewer risks than their non-jock counterparts.

Black Americans do have a higher rate of STDs compared to the average population (Laumann et al, 1999; Cavanaugh et al, 2010; CDC, 2015). Black females who are enrolled in, or have graduated from, college had a higher STI (sexually transmitted infection) rate (12.4 percent self-reported; 13.4 percent assayed) than white women with less than a high school diploma (6.4 percent self-reported; 2.3 percent assayed) (Annang et al, 2010). I would assume that these black women would be more attracted to black male jocks and thusly would be more likely to acquire STIs, since black males who self-identify as jocks are more sexually promiscuous. It seems that since black male jocks—both in high school and college—are more likely to be sexually promiscuous, this has an effect even on college-educated black females, despite higher educational status normally making one less likely to acquire STIs.

Whites use the ‘jock identity’ in a sports context, whereas blacks use the identity in terms of the body. Black jocks are more promiscuous and have more sex than white jocks, and I’d bet that black jocks also have more STDs than white jocks, since they are more likely to have sex. Jock identity—but not athletic activity or school athlete status—was a better predictor of juvenile delinquency in a sample of 600 Western New York students, which was robust across gender and race (Miller et al, 2007a). Though, surprisingly, the ‘jock effect’ on crime was not as you would expect: “The hypothesis that effects would be stronger for black adolescents than for their white counterparts, derived from the work of Stark et al. 1987 and Hughes and Coakley (1991), was not supported. In fact, the only clear race difference that did emerge showed a stronger effect of jock identity on major deviance for whites than for blacks” (Miller et al, 2007a).

Miller et al (2007b) found that the term jock means something different to black and white athletes. For whites, the term was associated with athletic ability and competition, whereas for blacks the term was associated with physical qualities. Whites, though, were more likely to self-identify with the label of jock than blacks (37 percent and 22 percent respectively). They also found that binge drinking predicted violence amongst family members, but in non-jocks only. The jock identity, for whites and not blacks, was also associated with more non-family violence while whites were more likely to use the aggression from sports in a non-sport context in comparison to blacks.

For black American boys, the jock label was a predictor of promiscuity but not of dating. For white American jocks, dating meant more than the jock label. Miller et al (2005) write:

We suggest that White male jocks may be more likely to be involved in a range of extracurricular status-building activities that translate into greater popularity overall, as indicated by more frequent dating; whereas African American male jocks may be “jocks” in a more narrow sense that does not translate as directly into overall dating popularity. Furthermore, it may be that White teens interpret being a “jock” in a sport context, whereas African American teens see it more in terms of relation to body (being strong, fit, or able to handle oneself physically). If so, then for Whites, being a jock would involve a degree of commitment to the “jock” risk-taking ethos, but also a degree of commitment to the conventionally approved norms with sanctioned sports involvement; whereas for African Americans, the latter commitment need not be adjunct to a jock identity.

It’s interesting to speculate on why whites would be more prone to risk-taking behavior than blacks. I would guess that it has something to do with their perception of themselves as athletes, leading to more aggressive behavior. Then again, certain personalities would be more likely to be athletic and thusly to refer to themselves as jocks; the same would hold true for somatotype as well.

So the term jock seems to mean different things for whites and blacks, and for whites, leads to more aggressive behavior in a non-sport context.

Blacks and females who self-identified as jocks reported lower grades, whereas white females who self-identified as jocks reported higher grades than white females who did not (Miller et al, 2006). Jocks also reported more misconduct, such as skipping school, cutting class, being sent to the principal’s office, and parents having to go to the school for a disciplinary matter, compared to non-jocks. Boys were more likely than girls to engage in actions that required disciplinary intervention, and boys were also more likely to skip school, have someone called from home, and be sent to the principal’s office. Blacks, of course, reported lower grades than whites, but there was no significant difference in misconduct by race. However, blacks reported fewer absences but more disciplinary action than whites, and blacks were less likely to cut class but more likely to have someone called from home and slightly more likely to be sent to the principal’s office (Miller et al, 2006).

This study shows that the relationship between athletic ability and good outcomes is not as robust as believed. Athletes and jocks are also different: athletes are held in high regard in the eyes of the general public, while jocks are seen as dumb and slow, good only at a particular sport and nothing else. Miller et al (2006) also state that this so-called ‘toxic jock effect‘ (Miller, 2009; Miller, 2011) is strongest for white boys. Some of these ‘effects’ are binge drinking and heavy drinking, bullying and violence, and sexual risk-taking. Though Miller et al (2006) say that, for this sample at least, “It may be that where academic performance is concerned, the jock label constitutes less of a departure from the norm for white boys than it does for female or black adolescents, thus weakening its negative impact on their educational outcomes.”

The correlation between athletic ability and jock identity was only .31, and significant for whites but not blacks (Miller et al, 2007b). They also found, contrary to other studies, that involvement in athletic programs did not deter minor and major adolescent crime. They also falsified the hypothesis that the ‘toxic jock effect’ (Miller, 2009; Miller, 2011) would be stronger for blacks than whites, since whites who self-identified as jocks were more likely to engage in delinquent behavior.

In sum, there are racial differences in ‘jock’ behavior, with blacks being more likely to be promiscuous while whites are more likely to engage in deviant behavior. Black women are more likely to have higher rates of STIs, and part of the reason may be sexual activity with black males who self-identify as jocks, as they are more promiscuous than non-jocks. This could explain part of the difference in STI acquisition between blacks and whites. Miller et al argue for discontinuing the use of the term ‘jock’, believing that if this occurs, deviant behavior will be curbed in white male populations that refer to themselves as ‘jocks’. I don’t know if that would be the case, but I don’t think there should be ‘word policing’, since people will end up using the term more anyway. Nevertheless, there are differences between races in terms of those who self-identify as jocks, which will be explored more in the future.

Nerds vs. Jocks: Different Life History Strategies?

1150 words

I was alerted to a NEEPS (Northeastern Evolutionary Psychology Society) conference paper, and one of the short abstracts of a talk had a bit about ‘nerds’, ‘jocks’, and differing life history strategies. Surprisingly, the results did not line up with current stereotypes about life outcomes for the two groups.

The Life History of the Nerd and Jock: Reproductive Implications of High School Labels

The present research sought to explore whether labels such as “nerd” and “jock” represent different life history strategies. We hypothesized that self-identified nerds would seek to maximize future reproductive success while the jock strategy would be aimed at maximizing current reproductive success. We also empirically tested Belsky’s (1997) theory of attachment style and life history. A mixed student/community sample was used (n=312, average age = 31) and completed multiple questionnaires on Survey Monkey. Dispelling stereotypes, nerds in high school had a lower income and did not demonstrate a future orientation in regards to reproductive success, although they did have less offspring. Being a jock in high school was related to a more secure attachment style, higher income, and higher perceived dominance. (NEEPS, 2017: 11)

This goes against all conventional wisdom; how could ‘jocks’ have better life outcomes than ‘nerds’ if the stereotype of the blubbering idiot jock were true?

Future orientation is “the degree to which a collectivity encourages and rewards future-oriented behaviors such as planning and delaying gratification” (House et al, 2004, p. 282). So the fact that self-reported nerds did not show future orientation in regards to reproductive success is a blow to some hypotheses, yet they did have fewer children.

However, there are other possibilities that could explain why so-called nerds have fewer children: for instance, they could be seen as less attractive and desirable; they could be seen as anti-social due to being, more often than not, introverted; or they could just be focusing on other things, not worrying about procreating or talking to women, and so end up having fewer children as a result. Nevertheless, the fact that nerds ended up with lower incomes than jocks is pretty telling (and obvious).

There are, of course, numerous reasons why a student should join a sport. One of the biggest is that the skills that are taught in team sports are most definitely translatable to the real world. Most notably, one who plays sports in high school may be a better leader and command attention in a room, and this would then translate over to success in the post-college/high school world. The results of this aren’t too shocking—to people who don’t have any biases, anyway.

Why might nerds in high school have had lower income in adulthood? One reason could be that their social awkwardness did not translate into dollar signs after high school/college graduation; perhaps they chose a bad major, or just didn’t know how to translate their thoughts into real-world success. Athletes, on the other hand, have the confidence that comes from playing sports, and they know how to work together with others as a cohesive unit, in comparison to nerds, who are more introverted and shy away from being around a lot of people.

Nevertheless, this flies in the face of the stereotype that nerds have greater success after college while the jocks, who (supposedly) have nothing beyond their so-called ‘primitive’ athletic ability, flounder; here the jocks had greater success and more money. It also bears on what others have written in the past about nerds’ success relative to the average population; this new presentation says otherwise. Thinking about the traits that jocks have in comparison to nerds, it doesn’t seem so weird that jocks would have better life outcomes.

Self-reported nerds, clearly, don’t have the confidence to make the stratospheric amounts of cash that people assume they should make because they are knowledgeable in a few areas; quite the contrary. Those who could use their bodies’ athletic ability had more children and greater life success than nerds, which, of course, flew in the face of stereotypes. Certain stereotypes need to go, because sometimes stereotypes do not tell the truth about things; it’s just what people believe ‘sounds good’ in their heads.

If you think about what it would take, on average, to make more money and have great success in life after high school and college, you’ll need to know how to talk to people and how to network, which the jocks would know how to do. Nerds, on the other hand, who are more ‘socially isolated’ due to their introverted personalities, would not know much about how to network or how to work with a team as a cohesive unit. This, in my opinion, is one reason why this pattern appeared in this sample. You need to know how to talk to people in social settings, and nerds, relative to jocks anyway, wouldn’t have that ability.

Jocks, of course, would have higher perceived dominance, since athletes have higher levels of testosterone both at rest and at exhaustion (Cinar et al, 2009). Athletes would have higher levels of testosterone since 1) testosterone levels rise during conflict (which is all sports really are: simulated conflict) and 2) dominant behavior increases testosterone levels (Booth et al, 2006). So it’s not out of the ordinary that jocks were seen as more dominant than their meek counterparts. In these types of situations, higher levels of testosterone are needed to help prime the body for what it believes is going to occur: competition. Coupled with the fact that jocks are constantly in situations where dominance is required, engage in more physical activity than the average person, and need to keep their diet on point to maximize athletic performance, it’s no surprise that jocks showed higher dominance, as they do everything right to keep testosterone levels as high as possible for as long as possible.

I hope there are videos of these presentations because they all seem pretty interesting, but I’m most interested in locating the video for this specific one. I will update on this if/when I find a video for this (and the other presentations listed). It seems that these labels do have ‘differing life history strategies’, and, despite what others have argued in the past about nerds having greater success than jocks, the nerds get the short end of the stick.

Human Physiological Adaptations to Climate

1750 words

Humans are adapted to numerous ecosystems on earth. This is only possible because of how our physiological systems interact with the environment in a homeodynamic way. This allowed us to spread across the globe, far away from our ancestral home of Africa, and thusly certain adaptations evolved in those populations, driven by our intelligent physiology. I will touch on human cold and heat adaptations, how physiology adapts to the two climates, and what this means for the populations that make up Mankind.

Physiological adaptations to Arctic climates

The human body is one of the most amazing and complex biological systems on earth. The human body lives and dies on its physiology and how it can adapt to novel environments. When Man first trekked out of Africa into novel environments, our physiology adapted so we could survive in these new conditions. Over time, our phenotypes adapted to our new climates, and humans began looking different from one another due to the climatic differences in their environments.

There is a large body of work on human cold adaptation. Thermal balance in humans is maintained by “vasodilation/vasoconstriction of the skin and peripheral tissues within the so-called thermo-neutral zone” (Daanen and Lichtenbelt, 2016). Two other adaptations occur in the cold, shivering thermogenesis (ST) and non-shivering thermogenesis (NST), and one in the heat (the evaporation of sweat). Humans are not Arctic animals by nature, so venturing into novel environments would incur new physiological adaptations to better deal with the cold.

Heat is generated by the body in cold climates by shivering (Tikuisis, Bell, and Jacobs, 1991; Daanen and Lichtenbelt, 2016). People in colder climates will therefore have higher metabolisms than people in tropical environments, to generate more body heat for vital functioning. People living in Arctic environments have fewer sweat glands than people who live in the tropics: sweating removes heat from the body, so having more sweat glands in colder climates would not be conducive to survival.

People who evolved in Arctic climates would also be shorter and have wider pelves than people who evolved in the tropics. This is seen in Neanderthals and is an example of cold adaptation. Cold adaptations also show up in the Greenlandic Inuit, due to introgression from extinct hominins like the Denisovans (Fumagalli et al, 2015).

We can see natural selection at work in the Inuit, due to adaptation to Arctic climates (Galloway, Young, and Bjerregaard, 2012; Cardona et al, 2014; Ford, McDowell, and Pierce, 2015; NIH, 2015; Harper, 2015; Tishkoff, 2015). Climate change is troubling to some researchers, with many suggesting that global warming will have negative effects on the health and food security of the Inuit (WHO, 2003; Furgal and Seguin, 2006; Wesche, 2010; Ford, 2009, 2012; Ford et al, 2014, 2016; McClymont and Myers, 2012; Petrasek, 2014; Petrasek et al, 2015; Rosol, Powell-Hellyer, and Chan, 2016). The Inuit are the perfect people to look to to see how humans adapt to novel climates—especially colder ones. They have higher BMIs, which is better for heat retention, and larger brains, with wider pelves and a shorter stature.

Metabolic adaptations also occur due to BMI, which in turn reflects diet and body composition. Daanen and Lichtenbelt (2016) write:

Bakker et al.,48 however, showed that Asians living in Europe had lower BAT prevalence and exhibited a poorer shivering and non-shivering response to cold than Caucasians of similar age and BMI. On the other hand, subjects living in polar regions have higher BMI, and likely more white fat for body energy reserves and insulation.49 This cannot be explained by less exercise,50 but by body composition51 and food intake.49

Basal metabolic rate (BMR) also varies by race. Resting metabolic rate (RMR) is 5 percent higher in white women when compared to black women (Sharp et al, 2002), though low cardiovascular fitness explains 25 percent of the variance in RMR differences between black and white women (Shook et al, 2014). People in Arctic regions have a 3-19 percent higher BMR than predicted on the basis of the polar climates they live in (Daanen and Lichtenbelt, 2016). Further, whites had a higher BMR than Asians living in Europe, and Nigerian men were seen to have a lower BMR than African-American men (Sharp et al, 2002). So whites in circumpolar locales have a higher BMR than peoples who live closer to the equator. This has to do with physiologic and metabolic adaptations.

Blacks also show slower and lower cold-induced vasodilation (CIVD) than whites. A quicker CIVD response in polar climates would be a lifesaver.

However, our physiologic mechanisms alone aren’t enough to weather the cold. Our ingenuity in making clothes and fire, and in finding and hunting food, is arguably more important than our body’s physiologic ability to adapt to its present environment. Our behavioral plasticity (the ability to change our behavior to better survive in the environment) was another major factor in our adaptation to the cold. Cultural changes, themselves prompted by the cold climate, would then lead to further genetic change, an indirect effect of the climate. The same, obviously, holds everywhere in the world that Man finds himself.

Physiologic changes to tropical climates

Physiologic changes in tropical climates are very important to us as humans. We needed to be endurance runners millions of years ago, and so our bodies became adapted for that way of life through numerous musculoskeletal and physiologic changes (Lieberman, 2015). One of the most important is sweating.

Sweating is how the body cools itself and maintains its temperature. When the skin becomes too hot, the brain, through the hypothalamus, reacts by releasing sweat through millions of eccrine glands. As I have covered in my article on the evolution of human skin variation, the loss of fur (Harris, 2009) in our evolutionary history made it possible for sweat to eventually cool the body. Improved sweating ability then led to higher melanin content and selection against fur. Another hypothesis is that when we became bipedal, our bodies were exposed to less solar radiation, selecting against the need for fur. Yet another hypothesis is that trekking/endurance running led to selection for furlessness, which in turn selected for sweating and more eccrine glands (Lieberman, 2015).

Anatomic changes include long and thin bodies with longer limbs, as heat dissipation is more efficient with more surface area relative to mass. People who live in tropical environments have longer limbs than people who live in polar environments. These tall and slender bodies are useful in that environment, while people with long, slender bodies are disadvantaged in the cold. Further, longer, slender bodies are better for endurance running and sprinting. Tropical peoples also have narrower hips, which helps with heat dissipation and running, and which means they would have smaller heads than people in more northerly climes. Most adaptations and traits were once useful in whichever environment an organism evolved in tens of thousands of years ago, and certain adaptations from our evolutionary past are still evident today.
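
The heat-dissipation claim is easy to illustrate with a toy calculation. The sketch below models a body as a simple cylinder of fixed volume (purely an illustrative assumption, not anthropometry) and shows that the taller, slimmer shape has the larger surface-area-to-volume ratio and thus sheds heat more readily:

```python
import math

# Toy illustration of why long, slender bodies shed heat better:
# model a body as a cylinder of fixed volume and compare the
# surface-area-to-volume ratio as the shape elongates. The cylinder
# model is an illustrative assumption, not real anthropometry.

def cylinder_sa_to_vol(volume_m3: float, height_m: float) -> float:
    radius = math.sqrt(volume_m3 / (math.pi * height_m))    # V = pi * r^2 * h
    surface = 2 * math.pi * radius * (radius + height_m)    # lateral area + both ends
    return surface / volume_m3

VOLUME = 0.065  # ~65 liters, roughly the volume of a 65 kg body

for height in (1.55, 1.70, 1.85):
    ratio = cylinder_sa_to_vol(VOLUME, height)
    print(f"height {height} m -> SA/V = {ratio:.1f} per m")
# Output rises with height: the same volume stretched longer and
# thinner exposes more surface per unit of heat-producing mass.
```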

Since tropical people have lower BMRs than people at more northerly climes, this could also explain why, for instance, black American women have higher rates of obesity than women of other races. They have a lower BMR, and if they are also sedentary and eat lower-quality food, food insecurity would have more of an effect on that phenotype. Africans wouldn’t have fast metabolisms, since a faster metabolism would generate more heat.

Physiologic changes due to altitude

The last adaptation I will talk about is how our bodies can adapt to high altitudes and how that’s beneficial. Many human populations have adapted to the chronic hypoxia of high altitudes (Bigham and Lee, 2014), which, of course, has a genetic basis. Adaptation to high altitudes also occurred due to the introgression of genes from extinct hominins into modern humans.

Furthermore, people in the Andean mountains, people living in the highlands of Kenya, and people living on the Tibetan plateau show that these three populations adapted to the same stress in different ways. Andeans, for instance, breathe the same way as people at lower altitudes, but their red blood cells carry more oxygen per cell, which protects them from the effects of hypoxia. They also have higher amounts of hemoglobin in their blood in comparison to people who live at sea level, which also aids in counterbalancing hypoxia.

Tibetans, on the other hand, have respiratory rather than hematological adaptations. Tibetans also have another adaptation which expands their blood vessels, allowing the body to deliver oxygen more efficiently to its different parts. Further, Ethiopians don’t have higher hemoglobin counts than people who live at sea level, so “Right now we have no clue how they do it [live at high altitudes without hematologic differences in comparison to people who live at sea level]”.

Kenyans, though, do have genetic adaptations for living in the highlands (Scheinfeldt et al, 2012), and these adaptations arose independently in Kenyan highlanders. The selective force, of course, is hypoxia—the same selective force that caused these physiologic changes in Andeans and Tibetans.

Conclusion

The human body is amazing. It can adapt both physiologically and physically to the environment and in turn heighten prospects for survival in almost any environment on earth. These physiologic changes, of course, have followed us into the modern day and have health implications for the populations that possess them. The Inuit, for instance, are cold-adapted while the climate is changing (which it constantly does). So, over time, when the ice caps do melt, the Arctic peoples will face a crisis, since they are adapted to a certain climate and diet.

People in colder climates need shorter bodies, higher body fat, lower limb ratios, larger brains, etc., to better survive in the cold. A whole slew of physiologic processes aids peoples’ survival in the Arctic, but our ability to make clothes, houses, and fire, in conjunction with our physiological dynamism, is why we have survived in colder climates. Tropical people need long, slender bodies to better dissipate heat, sweat, and run. People who evolved at higher altitudes also have hematologic and respiratory adaptations to better deal with the hypoxia and lower oxygen of higher elevations.

These adaptations have affected us physiologically and genetically, which leads to changes in our phenotype, and are therefore the cause of how and why we look different today. Human biological diversity is grand, and there are a wide variety of adaptations to differing climates. The study of these differences is what makes the study of Man, and the genotypic/phenotypic diversity we have, one of the most interesting sciences today, in my opinion. We are learning what shaped each population through its evolutionary history and how and why certain physical and physiologic adaptations occurred.

Did we come from Australasia?

by Phil78

3179 words

In a recent response to the finding of MUC7 archaic admixture in sub-Saharan Africans, it has been argued that if this entered the sub-Saharan genome at 145 kya, every population, by OOA standards, should have it.

Not necessarily, as the study noted that its findings conform to recent results that actually ground African origins.

Our finding agrees with recent reports of such an introgression in sub-Saharan African populations (Hammer et al. 2011; Hsieh et al. 2016), as well as the unexpectedly old human remains (Hublin et al. 2017) and lineages (Schlebusch et al. 2017).

In other words, what I’m thinking is that this connects somewhere with the Basal human component model for West Africans and some LSA finds, though that is for another day.

Now, as for the alternative model that I’ve seen advertised by the site RedIce, we come to a recent newcomer, Bruce Fenton.

Now, before I begin my criticism of his premise of a new “paradigm”, I’d like to say that, judging from the reviews I’ve seen (on Amazon), he certainly seems to have talent as a writer. However, after reading this article and other summaries of his model, I must say I’m not tempted to buy his book based on his confidence that his basic model “fills in holes” in OOA and that OOA is debunked, especially when his sources can all more or less be conformed to OOA 2.

First, let us go into how he rules out both Africa and Europe due to recent DNA evidence from Neanderthals in Spain.

Research by the geneticists Benoit Nabholz, Sylvain GlĂ©min, and Nicolas Galtier has revealed significant problems with scientific studies that rely heavily on genetic material alone, divorced from the physical examination of fossils (especially in the accuracy of dating by molecular clocks). We are however fortunate to have a 2013 research project from Indiana University, headed by well-respected evolutionary biologist Aida GĂłmez-Robles at our disposal: a comparative analysis of European hominin fossil teeth and jawbones. The Indiana University project concluded that all the fossil hominins in Europe were either Neanderthals or directly ancestral to Neanderthals – not ancestors of Homo sapiens. We must understand that while respective groups in Africa match European hominin populations, this revelation discounted all known African hominins as being ancestors of modern humans. The morphological research also provided further shock – the divergence between Homo sapiens and Neanderthals had apparently begun as early as one million years before present.

Odd how he made that leap when the researcher he cites actually says otherwise on Africa as a candidate.

From the new study’s results, GĂłmez-Robles says that “we think that candidates have to be looked for in Africa.” At present, million-year-old fossils attributed to the prehistoric humans H. rhodesiensis and H. erectus look promising.

Fenton then further mentions the Denisovan divergence, dated using DNA to around 800 kya, and places the ancestor of all three lineages between 700 and 900 kya.

His response? This finding from China.

The first possible answer to this ‘where to look’ question came in July 2016 with scientist Professor Zhao Lingxia, whose research group announced they had identified modern human fossil remains at the Bijie archaeological site ranging up to 180,000 years old. Not only were they digging up fragments of modern humans, but also evidence of other mysterious hominin forms. The Chinese paleoanthropologists suspected that some of the recovered fossils might even be from the mysterious Denisovans, previously identified in Siberia. Could modern humans have first emerged in East Asia? It has certainly begun to look like this might be the case. My independent investigative research carried out over the last several years, however, disagrees: my work places the first Homo sapiens in Australasia.

For context on how this can still conform to OOA: the actual range was 112k to 178k, and while this muddies the typical 50k to 80k migration, it can still fit the 90k to 130k migration into the Levant, which was presumed to have been entirely wiped out.

Back in 1982, two of the most renowned evolutionary scientists of the modern age, Professor Alan Wilson and his understudy, Rebecca Cann, discovered compelling evidence for an Australasian genesis for modern humans. These controversial findings never emerged in any of their academic papers; in fact, they only appear in a short transcript included in a book published in the same year by two British research scientists, The Monkey Puzzle: A Family Tree. Silence does not change facts, and the fact remains that there is compelling DNA evidence pointing towards Australasia as the first home of Homo sapiens. Indeed, so much data exists that it eventually led to my controversial new book, The Forgotten Exodus: The Into Africa Theory of Human Evolution. My research colleagues and myself have uncovered overwhelming evidence that places the first modern humans in Australasia, and with them several other advanced hominin forms.

There might be some temptation to dismiss this matter out of hand, as it can be difficult accepting that leading academics have got it so wrong. It is, however, important to understand that in every case the opposing arguments against the current consensus position are based on, or supported by, peer-reviewed studies or statements given by consensus academics. Could it be that the year 2016 will one day be known as the year that the Out of Africa paradigm died?

If 2016 becomes associated with the end of one scientific paradigm, then 2017 may become related to the emergence of a new model for human origins, one that I am proposing and have termed ‘Into Africa’. My Into Africa theory is closely related to the ‘Out of Australia’ theory formulated by two of my Australian collaborators, Steven and Evan Strong, but goes significantly further down the rabbit hole of our evolutionary story.

I wish he had supported this unreplicated (as far as I know) genetic study with actual archaeological continuity in Australasia, because so far the pre-sapiens people there are generally Erectus-like, with his own sources on the matter supporting that view.

He summarizes both Multiregional and OOA theory (single recent origin), then proceeds to his own.

[UPDATE: Something I pondered was exactly what pattern of migration Cann’s data produced. Well, two articles by Steve Strong, who I believe is an associate of Fenton, show that my suspicions were correct.

The pattern found was Australoids, Mongoloids, Caucasians, then Negroids/SSA, the opposite of Fenton’s framework. I figured that, regardless of where Australians fit, the affinities of the groups wouldn’t change. Strong has another article in which he uses a paper linking origins to Australia (which was covered on this blog here), as well as covering Denisovans, which, as I showed in this post, fits fine within OOA 2 aside from some complications in precisely mapping the nature of the smaller migrations into SE Asia.

Regarding Cann’s findings as a whole, the study’s sample size was one among many that were small and covered a weak range of the native populations in general, as discussed and somewhat ameliorated here.

With that realized, study after study after study places them in a 50k-55k time frame, more or less consistent with the archaeological dates, whether LM3 (Mungo Man) is 40k or 60k years old. It must also be kept in mind that Cann’s findings predate the knowledge of Denisovan admixture, which possibly could’ve skewed divergence dates, as explained by Dienekes. This gives a good reason for Cann’s findings to be seen as erroneous. In regards to his citing of Vanderburg, it says much about his specialty in this sort of work if ‘unique haplotypes’ aren’t understood to be a natural result of human differentiation.

Regarding the archaeology from both articles, Strong makes the point of even earlier findings in Australia that are not popularly reported, ranging from 60-135k for fossils, and older for tools and scorching. Not only are these younger than the currently oldest Sapiens in Africa, but they are also within the time frame of a currently known exodus into SE Asia discussed in that post, even if they were legitimate, as I’ll now detail.

Certain referenced sites with >100k estimates have been shown to be much more recent, the original dates having been confounded by less accurate techniques. The same could apply to the cremated bones listed as well. This leaves the mysterious “Lake Eyre Skullcap” reported by Steve Webb which, as far as I can tell, has been only scarcely covered. However, only in that source is it reported as that old; newspaper and scientific newsletter reports at the time gave it as 60-80k years old using fluorine dating, referring specifically to megafauna believed to have existed 30k-40k years ago that it may have coexisted with.

Webb wrongly compares the fluorine dates relative to the values of the Mungo remains, when this type of dating works best for relative ages of specimens from the same site or comparable conditions, of similar density (he describes the remains as more robust than the Mungo remains) and similar size (he uses large and small animals, but logically this would also apply to mere fragments versus more whole remains); for humans in particular, ribs or cortical bone layers should be compared.

But an even odder argument of his is that the earliest tools in Australia, being less advanced than other tools of the same time frame, mean that people sailed from Australia. What this more likely means is that they were “simplified” based on lifestyle, as covered in a previous blog post on expertise, brain size, and tool complexity.]

In my model, I offer compelling evidence for three key migrations of Homo sapiens heading out of Australasia. The first migrations began around 200,000 years ago, during a period of intense climatic problems and low population numbers, with a small group making their way to East Africa. The remains of some of these first Africans have been discovered close (400 km) to one key entry point in the east of the continent, known as the Bab-el-Mandeb straits.

I then identify a second migration event 74,000 years ago, following the eruption of the Lake Toba supervolcano. Small groups of survivors to the north of Lake Toba, finding themselves unable to move south to safety, were then forced to head west to escape the devastating nuclear winter and toxic clouds that followed the disaster. The lucky few that could move fast enough eventually made their way into Africa and found safety in the south of the continent. I suggest that some of these few moved along the coasts of Asia, and others sailed the open ocean to Madagascar and hit the coast of South Africa – I associate these refugees with cave sites including Border Cave, the Klasies River Caves and Blombos Cave.

The problem with this is the previously mentioned finds in Morocco, which make Sapiens much older in Africa, and further west. The climatic conditions, by the way, based on his own link, provide no reason for this to be centered on Australasia, as they were described as affecting Africa’s interior.

Second, the South African caves he describes contain specimens, likely to have contributed to modern South Africans, that show deeper genetic roots than his suggested divergence would allow.

But the most glaring problem is that none of his sources show Sapiens skeletons or activity prior to those in Africa; Indonesia clearly does not have a confounding enough preservation problem, given its Erectus sites.

The third migration event identified in my research is arguably of greatest interest because it involved the direct ancestors of all non-African people alive today. As the global environment recovered from the Lake Toba eruption 60,000 years ago, a trickle of modern humans (calculated to be just under 200 individuals) moved out of Australasia into Southeast Asia, slowly colonising the Eurasian continent. These adventurous men and women were the forebears of every non-African and non-Australian person living on Earth today. This Australasian colonisation of the world is very well supported by the study of both mitochondrial and Y-chromosomal haplogroups, and given further credence by the location and dating of several fossils.

This, oddly enough, goes against what was shown with the “180k” teeth of a modern human in China, which are not accounted for in his sequence of African-Eurasian dispersal from Australasia.

He also goes against an earlier point he made about “relying on genetic material”, as he himself has yet to provide evidence of H. sapiens being present in the area.

The model I offer represents a radical revision to the current evolutionary narrative, and is perhaps revolutionary. It will not be easy for academics to accept such bold claims from someone who is neither a paleoanthropologist nor an evolutionary biologist. Why, then, should one take this work seriously?

The Into Africa theory is firmly based on real-world evidence, data that anyone can freely access and examine for themselves. My argument incorporates a great wealth of peer reviewed academic papers, well accepted genetic studies, and opinions offered by the most respected scientific researchers. Indeed, rather ironically, many of my key sources derive from scientists that stand opposed to this model (being vocal supporters of the Out of Africa theories).

Well, the irony doesn’t come off very strong when you don’t argue in this article why those findings contradict their views, nor do the sources you’ve provided so far firmly ground your theory by placing human origins in Australasia; the two that do are an unreplicated study and a volcanic incident in a vicinity whose early hominids show little fossil continuity with modern humans.

Recent scientific studies have begun to change the landscape of paleoanthropological research. Examination of the recent conclusions associated with the analysis of Homo erectus skulls in the Georgian Republic confirms that several species of hominins in Africa are in fact nothing more than expected variance within the greater H. erectus population.

That source talks about the origin of the Flores hobbits, not the Georgian Erectus or African hominid classification.

Elsewhere in Southeast Asia, there is growing suspicion among scientists that Homo floresiensis evolved from a lineage of hominins that lived much earlier than the immediate ancestors of Homo sapiens. Detailed analysis of Neanderthal and Denisovan ancestry convincingly places their founder populations in Southeast Asia and Australasia. There seems little about the currently accepted academic narrative that has not yet come under fire.

He in turn uses a source that supports his later claim of early humans (Homo) in India by 3 million years ago (actually 2.6 million based on the source; I believe I’m seeing a trend here), though the claim he refers to shows continuity with ancestral populations in Africa and, as it currently stands, has hardly anything to do with OOA, hence why there was “no fire”.

Fenton, furthermore, provides no evidence for his claims of Denisovan-Neanderthal origins in Australasia.

As of 2016, we have finds that place early humans in India 3 million years ago (Masol), and Homo erectus populations ranging from Indonesia to the Georgian Republic 2 million years ago (Dmanisi). On the Australasian island of New Guinea, we find the only signature for interbreeding between Denisovans and modern humans dating to 44,000 years ago. This interbreeding occurred long after Australia’s supposed isolation, as claimed by the consensus narrative. How do entirely isolated populations interbreed with other human groups?

See here.

We computed pD(X) for a range of non-African populations and found that for mainland East Asians, western Negritos (Jehai and Onge), or western Indonesians, pD(X) is within two standard errors of zero when a standard error is computed from a block jackknife (Table 1 and Figure 1). Thus, there is no significant evidence of Denisova genetic material in these populations. However, there is strong evidence of Denisovan genetic material in Australians (1.03 ± 0.06 times the New Guinean proportion; one standard error), Fijians (0.56 ± 0.03), Nusa Tenggaras islanders of southeastern Indonesia (0.40 ± 0.03), Moluccas islanders of eastern Indonesia (0.35 ± 0.04), Polynesians (0.020 ± 0.04), Philippine Mamanwa, who are classified as a “Negrito” group (0.49 ± 0.05), and Philippine Manobo (0.13 ± 0.03) (Table 1 and Figure 1). The New Guineans and Australians are estimated to have indistinguishable proportions of Denisovan ancestry (within the statistical error), suggesting Denisova gene flow into the common ancestors of Australians and New Guineans prior to their entry into Sahul (Pleistocene New Guinea and Australia), that is, at least 44,000 years ago.24,25 These results are consistent with the Common Origin model of present-day New Guineans and Australians.26,27 We further confirmed the consistency of the Common Origin model with our data by testing for a correlation in the allele frequency difference of two populations used as outgroups (Yoruba and Han) and the two tested populations (New Guinean and Australian). The f4 statistic that measures their correlation is only |Z| = 0.8 standard errors from zero, as expected if New Guineans and Australians descend from a common ancestral population after they split from East Asians, without any evidence of a closer relationship of one group or the other to East Asians. Two alternative histories, in which either New Guineans or Australians have a common origin with East Asians, are inconsistent with the data (both |Z| > 52).

Here we analyze genome-wide single nucleotide polymorphism data from 2,493 individuals from 221 worldwide populations, and show that there is a widespread signal of a very low level of Denisovan ancestry across Eastern Eurasian and Native American (EE/NA) populations. We also verify a higher level of Denisovan ancestry in Oceania than that in EE/NA; the Denisovan ancestry in Oceania is correlated with the amount of New Guinea ancestry, but not the amount of Australian ancestry, indicating that recent gene flow from New Guinea likely accounts for signals of Denisovan ancestry across Oceania. However, Denisovan ancestry in EE/NA populations is equally correlated with their New Guinea or their Australian ancestry, suggesting a common source for the Denisovan ancestry in EE/NA and Oceanian populations. Our results suggest that Denisovan ancestry in EE/NA is derived either from common ancestry with, or gene flow from, the common ancestor of New Guineans and Australians, indicating a more complex history involving East Eurasians and Oceanians than previously suspected.
So the Denisovan signal in Australasia is accounted for by other genetic research.
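For readers unfamiliar with the statistics in the quoted passages, here is a minimal sketch of the f4 test described there. The frequencies below are random placeholders and the population names are just labels; nothing here reproduces the study's data or pipeline.

```python
import numpy as np

# f4(A,B;C,D) is the average over SNPs of (pA - pB) * (pC - pD), where p* are
# allele frequencies. If the tree ((A,B),(C,D)) fits, the expectation is zero;
# |Z| = |f4| / SE measures how many standard errors it sits from zero.
rng = np.random.default_rng(0)
n_snps = 10_000
p_yoruba, p_han, p_png, p_aus = rng.uniform(0.05, 0.95, (4, n_snps))

per_snp = (p_yoruba - p_han) * (p_png - p_aus)
f4 = per_snp.mean()

# Crude delete-one block jackknife over contiguous SNP blocks for the standard
# error, mirroring the block jackknife mentioned in the quote.
blocks = np.array_split(per_snp, 100)
estimates = np.array([
    np.concatenate(blocks[:i] + blocks[i + 1:]).mean() for i in range(len(blocks))
])
se = np.sqrt((len(blocks) - 1) * estimates.var())

print(f"f4 = {f4:.5f}, Z = {f4 / se:.2f}")  # |Z| < ~2 is consistent with zero
```

With random frequencies the statistic hovers near zero; in the study, |Z| > 52 for the alternative trees is what rules them out.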
We are finding anomalies in all areas of evolutionary studies, whether we look at the mitochondrial and Y-chromosomal data, the datings associated with human archaeological sites, or analysis of hominin morphology. Rather than continuing with the attempt to fit square pegs into a round hole, it is time to face the fact that holes are round and that our story of human origins has been significantly wrong.

Well, studies such as the ones above have reworked hypotheses on migration; the paper you cite on Denisovan admixture covers one of the many smaller-scale migrations already being debated and refined, as my second link mentions. So while rethinking ideas in light of evidence is a good thing, there should be clear limits on what to discredit.

Overall, I wish I could like the idea as a competitor to OOA. But if this paper is meant to give any impression of the book, citing various studies on hominids and human genetics at different scales that show no clear pattern centering on Southeast Asia in either archaeology or genetics, with only the enthusiasm for creating a new idea to fill holes, then I'm disappointed.

With that said, if anyone can provide better knowledge and citations from the book (Fenton mentions research from close colleagues of his), then I may be more inclined to accept new finds if they favor shifting human origins from Africa to Australasia.

Origins and the Relationship between West Africans and Hunter-Gatherer Populations

By Phil78, 1802 words

Many casual members of HBD may not be completely aware of the population history of West Africans and hunter-gatherers like Pygmies beyond, say, the Bantu Migration.

Those who frequent articles by population genetics bloggers such as Dienekes or Razib Khan ought to be aware that, in the sense of macro-races, the two clusters are distinct despite their relatively close association in a human cladistic sense, to the confusion of others.

Fortunately enough, two recent finds this year, in both genes and fossils, not only paint the history of these two groups but also that of humanity as a whole, with the evolutionary timeline of Sapiens being pushed back to 300k years, possibly further according to Chris Stringer.

Hublin–one of the study’s coauthors–notes that between 330,000 and 300,000 years ago, the Sahara was green and animals could range freely across it.

While the Moroccan fossils do look like modern H sapiens, they also still look a lot like pre-sapiens, and the matter is still up for debate. Paleoanthropologist Chris Stringer suggests that we should consider all of our ancestors after the Neanderthals split off to be Homo sapiens, which would make our species 500,000 years old. Others would undoubtedly prefer to use a more recent date, arguing that the physical and cultural differences between 500,000 year old humans and today’s people are too large to consider them one species.

(The morphological characteristics of these hominids will come into play later.)

Taking this information in, we now ought to have a better context in which to place the divergence of HG populations from the rest of mankind (including West Africans) at more than 260,000 years ago.

If this is the case, then why do they range so close to modern West Africans? The reason, for the most part, is that this finding technically refers to their ancient primary cluster and not their modern composition as a whole.

So, in terms of proportions, about 30% of the composition of West African people comes from the population ancestral to the Ballito Boy, a specimen believed in turn to represent the ancestors of modern Khoisan people without the genetic admixture of either Bantu or East African pastoralists (admixture which the study finds to range from 9% to 22% in all modern Khoisan groups).

[*Edit: When I say HG peoples' association with West Africans, I'm referring to the relative position the two populations have compared to the actual East African cluster, exemplified most by Nilotic people, not "Horners". I realized this confusion when I looked at genetic distance tests and found Bushmen populations ranking closest to Ethiopians; this is probably just a result of their admixture with Ethiopians, as later studies accounted for the confound and gave more accurate results, West Africans' closest clusters being both Nilotics and Ethiopians, with Pygmies and San clustering closer than before, though Mbuti are intermediate with San rather than sharing a branch.

This seems to dampen earlier but plausible suggestions that highland Ethiopians' having a San-like African profile explains their affinity to each other, but it helps illustrate the nature of the cline in affinity of native African clusters, as Razib covered later that year, both between each other and relative to non-African populations. Here we see that the San have closer affinities to West Africans than to Nilotics, though the Pygmy groups have an odd relationship: not only are Biaka people closer to West Africans than Mbuti are, but they are in general closer to West Africans than to Mbuti. This is likely connected to these findings.

The findings also oddly illustrate, to my knowledge at least, that though the San and Mbuti share similarly deep splits from non-Africans, both have their smallest distances with the Biaka rather than with each other, despite the Biaka being closest to West Africans, who are as distant from the San as the Mbuti are. Clearly, imagining Pygmies merely as a cline between San and Bantus doesn't work without either considering them their own cluster altogether or appealing to isolation. I lean toward the former idea for the most part.

Mainly because the Mbuti are the smallest Pygmies, with height in that region being correlated with Pygmy versus Bantu ancestry, and Biaka Pygmies have been shown to be only 18.5% to 30% "Pygmy", assuming the Mbuti are pure and Bantus are the outside group (a toy model after this note illustrates how such admixture pulls a group toward West Africans). Though the latter point explains the overall association they have with West Africans, there is still the remote position of the Mbuti. According to Cavalli-Sforza, the Pygmies and the San don't necessarily share a particularly close relationship, but both their lifestyles and their apparent divergence from Bantus make the idea rather convincing.

Based on the new study, though it is scant on Pygmies, comparing Mbuti dates to the San shows dates similar to the high range of the previously estimated 90k-195k split, if not a little higher. This is also consistent with the Pygmies (in pure form) being intermediate between West African ancestors and the San in the 1994 study, and with the new date for the Khoisan. Their more or less complete genetic isolation may have played a role as well in their similar position relative to other Africans, due to limited gene flow compared to the Khoisan and Biaka, who are connected through the Bantu Expansion, undermining the affinity the Khoisan and Mbuti share through common ancestry. It is basically similar to the relationship between Sardinians and mainland Italians, which reflects a similar pattern of distances.]
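To make the admixture logic in the note above concrete, here is a toy sketch. The 18.5% and 30% values are the Biaka estimates cited above; everything else (the frequencies, the simple mean-squared "distance") is a placeholder of my own, not real data or a real Fst calculation.

```python
import numpy as np

# Toy admixture model: an admixed group's allele frequencies are a weighted
# average of its source populations. alpha is the 'Pygmy' fraction.
rng = np.random.default_rng(1)
n_snps = 5_000

p_west = rng.uniform(0.1, 0.9, n_snps)                          # West Africans
p_bantu = np.clip(p_west + rng.normal(0, 0.05, n_snps), 0, 1)   # a close West African offshoot
p_mbuti = rng.uniform(0.1, 0.9, n_snps)                         # a deeply diverged HG lineage

def distance(p, q):
    """Mean squared allele-frequency difference, a crude stand-in for Fst."""
    return np.mean((p - q) ** 2)

for alpha in (1.0, 0.30, 0.185):  # 'pure Pygmy', then the two cited Biaka estimates
    p_biaka = alpha * p_mbuti + (1 - alpha) * p_bantu
    print(f"alpha={alpha}: distance to West Africans = {distance(p_biaka, p_west):.4f}")
```

The distance shrinks as the Bantu fraction grows, which is the sense in which Bantu admixture, rather than recent common ancestry, can explain the Biaka's affinity to West Africans.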

Now, the first thought crime most are pondering at this point is whether or not the Khoisan are fully human in the context of genetics and recent anthropology. John Hawks has already discussed the position of the ancient Moroccans in our evolutionary tree and relays the authors' comments on how, though their archaic features keep them from being clearly modern, they and similar finds play the role of a founding lineage contributing to modern Sapiens.

In Hublin and colleagues’ “pan-African” hypothesis, every African fossil that had parted ways with Neanderthals is part of a single lineage, a stem population for modern humans. They connect the evolution of these early H. sapiens people to a new form of technology, the Middle Stone Age, which was found in various regions of Africa by 300,000 years ago.

So how many other archaic groups were in Africa? Under the Hublin model, there may have been none. Every fossil sharing some modern human traits may have a place within the “pan-African” evolutionary pattern. These were not river channels flowing into the desert, every channel was part of the mainstream.

But there may be a problem. Geneticists think there were others.

To which he alludes to the Iwo Eleru findings, found to be closest to early AMH (the 120k Levant specimens).

Now, as for cranial representatives that the Ballito Boy is likely associated with, there have indeed been quite a few earlier skulls fitting the profile that were classed separately from more archaic types of similar geography like Florisbad, and thus would be classed away from the Moroccans as well.

An example is this rather interesting analysis of the Border Cave skull (which I believe is 50k years old, from a site at which Bushman-like tools date to 46k).

When all (six rather than just three) discriminants are considered, Border Cave in fact lies closest to the Hottentot centroid and is contained within the .05 limits of this distribution. The fossil also approaches the Venda and Bushman male centroids but falls beyond the .05 limits of these groups. This is new information, not principally because of the Hottentot identification, which is dubious, but because Border Cave is shown emphatically to be well within the range of modern African variation for the measurements used. The cranium is heavily constructed, but it is hardly archaic in the fashion of Florisbad or Broken Hill.

Border Cave skull, front view (yooniqimages.com)

Border Cave skull, lateral view (yooniqimages.com)

Hottentot (Khoi pastoralist) skull, lateral and front views (for comparison)

And here's the best primary reference of a "Bushman skull" I could find to display its similarities to and differences from more admixed pastoralists. One listed trait that's notable is the less prominent occipital protrusion of the Bushman skull despite its being measured as more dolichocephalic, probably due to a narrower relative breadth, though that cannot be seen here.

However, the only one I know of that is linked with the Khoisan by modern research is the similar Fish Hoek specimen.

Humanitec-Anatomy of an intellectual triad

Primitive Man of the Peninsula. Cape Times (South Africa), October 26, 1927 

One Hundred Skulls: OEC and Exploring Human Origins

In comparison to Jebel Irhoud 1

Zetaboards (Anthroscape), unknown source

Jebel Irhoud, front and lateral views (yooniqimages.com)

And Florisbad

Ira Block Photography

And comments from the study mentioned, which links Fish Hoek with the modern Khoisan, comparing morphological differences among Stone Age African skulls, Bushmen, Pygmies, and Bantu farmers.

To summarize, therefore, the Pleistocene skulls from across Africa tend to be broad, long, with a broad face and broad, short orbits.

By contrast, the skulls of the Khoisan (“Bushman”) population are relatively short, low, broad, narrow, with a comparatively intermediate nose.

Pygmies are characterized by great variability, but they usually have small-sized round skulls, and a balanced face. Their degree of dispersion, however, contradicts the findings of other studies, which have detected a strong homogeneity among Pygmy populations, even if these support the hypothesis that the typical features of these populations, including their short stature, took place after their geographical separation through convergent evolution. As is suggested by other more recent studies (Ramírez Rossi y Sardi, 2010; Anagnostou, 2010; Vigilant, 1989).

The Bantu-speaking populations are mostly at the center of the graph, which represents a common morphological tendency, but with a strong variability, whether they come from Southern, Eastern or Central Africa. This supports the idea of a common, more recent ancestor than that for the Pygmy and Khoisan groups, as well as a similar way of life founded on cattle breeding and farming, independent of their surrounding environment.

The Late Peopling of Africa According to Craniometric Data. A Comparison of Genetic and Linguistic Models

For context, the specimen in the sample that exemplifies the traits associated with the Pleistocene group the most would be the Herto Skull, which is comparatively closer to modern humans than the Jebel Irhoud findings.

160,000-year-old skulls fill crucial gap in evolution- Telegraph

So despite their divergence being closer to the age of more archaic specimens, why do their likely less admixed ancestors and modern populations contrast so clearly in phenotypical traits? This would, by my amateur speculation, leave two options. Either gracilization took place in convergence with other populations, or, the more plausible route, the archaeological finds don't precisely place the specimens' actual divergence; thus the more archaic forms likely have older splits than their fossil ages suggest, and the clear traits of the "modern human" phenotype are possibly older as well.

Your brain on poverty.

By Afrosapiens, 1163 words.

 

Poverty has long been associated with educational under-achievement and various behavioral issues. Although the underlying causes of these differences have been at the center of a nature vs. nurture debate for decades, it's only recently that insights from neuroscience have allowed a better understanding of how poverty affects the brain. Observations from MRI scans show slower brain growth in children growing up in low-SES households (poor and near-poor), which results in reduced volume and grey-matter thickness in the frontal and parietal cortices as well as lower amygdala and hippocampus size. All of those affected brain areas are crucial to learning and social functioning, as they govern cognitive and executive functions such as language, working and long-term memory, attention, impulse control, emotional management and information processing.


Although research using animal experiments indicates that the relationship between poverty and altered brain development is causal, it is not yet clear which aspect of poverty impacts which function the most. The most cited factors are stress, trauma, low stimulation, poor child-parent relationships, poor nutrition and poor health. Although it is also possible that genetics plays a role in individual susceptibility to these factors, the idea that genetic background causes people to be poor in the first place and then have their brains damaged by environmental factors is not supported by science and belongs to pseudo-Darwinian creationism, especially since such deficits appear to be reversible to a substantial degree thanks to brain plasticity.

Various interventions to improve, or prevent decline in, cognitive and executive function have shown good and lasting results in reducing behavioral issues and increasing school performance and job market participation. Interventions can take various forms. First of all, since poverty is a lack of financial resources, income support to families with children is an obvious means of limiting children's exposure to poverty-related adversity. Although this is absolute common sense, conservative ideologues have managed to convince a large part of the public that pro-poor policies would in fact be harmful to the needy whereas pro-rich ones would mysteriously benefit them.

Besides redistribution, executive function coaching in the form of computer or non-computer games, aerobic exercise and sports, music, martial arts and mindfulness practices as well as improvements in school curricula and teaching methods have been shown to improve social and educational outcomes. One last type of intervention that yielded good results is nurse home visits to low-income mothers of young children which had the effect of improving developmental outcomes of children by teaching mothers parenting skills and healthy practices.

These interventions aren't to be confused with efforts at increasing IQ, which produced little improvement beyond temporarily raising IQ scores, something that has no relevance in terms of life outcomes. IQ can probably benefit from increased language skills and executive function, but it isn't the target of remedial intervention on those underlying abilities, of which IQ test performance would only be a byproduct.

Now you might wonder how big a problem child poverty and its neurological consequences are in contemporary societies. Although the most extreme and widespread child poverty is seen in developing countries, industrialized countries like the USA, Israel, Turkey, Chile and Spain have rates of prevalence above 20%, whereas countries in Western Europe tend to maintain rates around or below 10%.

While informative, reported child poverty rates only include those who live below an arbitrarily defined poverty threshold in a given year; the effects of poverty likely extend to those living only slightly above the poverty line whose developmental needs are not met, and to those who experienced poverty in the past but were living above the threshold when the figures were reported.

Within the United States, significant differences in the prevalence and the nature of child poverty exist between ethnic groups, with 34% of Native Americans, 13% of Asians/Pacific Islanders, 36% of African-Americans, 31% of Hispanics and 12% of European Americans living under the poverty line in 2015.

Comparing African-Americans and European Americans, the nature of poverty differed markedly, with 77% of African Americans experiencing poverty at least once in their childhood and 37% living in poverty for more than 9 years. In comparison, only 30% of European American children experienced poverty while growing up, including 5% for more than 9 years. 40% of black children and 8% of white children were poor at birth. Among those born poor, 60% of African Americans and 25% of European Americans were still poor at age 17; among those not born in poverty, 20% of black children and 5% of whites were poor at age 17.

With the effects of poverty felt worst at younger ages and over long periods of time, such interracial differences in the prevalence and persistence of child poverty are one plausible large contributor to the observed gaps in educational and behavioral outcomes between the two groups.


Do Physiologists Study General Intelligence?

1100 words

The general factor of intelligence (g) is said to be physiological. Jensen (1998: xii) states that “Students in all branches of the behavioral and social sciences, as well as students of human biology and evolution, need to grasp the essential psychometric meaning of g, its basis in genetics and brain physiology, and its broad social significance.” There are, furthermore, “a number of suggestive neurological correlates of g, but as yet these have not been integrated into a coherent neurophysiological theory of g” (Jensen, 1998: 257). I personally don't care for correlations too much anymore; I'm interested in actual causes. Jensen (1998: 578) also states “Although correlated with g [size of the brain, metabolic rate, nerve conduction velocity, and latency and amplitude of evoked electrical potentials], these physiological variables have not yet provided an integrated explanatory theory.”

This seems suspiciously like Deary's (2001: 14) statement that there “is no such thing as a theory of human intelligence differences – not in the way that grown-up sciences like physics or chemistry have theories.” If g is physiological, then where is the explanatory theory? On that same matter, where is the explanatory theory for individual intelligence differences? That's one thing that needs to be explained, in my opinion. I could muster something up off the top of my head, such as individual differences in glucose metabolism in the brain, comparing both high- and low-IQ people (Cochran et al, 2006; Jensen, 1998: 137); however, that is still not good enough.

In physiology there is sliding filament theory which explains the mechanism of muscle contraction (Cooke, 2004). Why is there no such theory of why individuals differ in intelligence and why have these “suggestive neurological correlates of g” not been formulated into a coherent neurophysiological theory? There are numerous theories in physiology, but a theory of g or why individuals differ in intelligence is not one of them.

It's like Darwin only saying “Species change”, and that's it; no theory of how or why. He'd just be stating something obvious. Similarly, saying “Person A is smarter or has a higher IQ than person B” is just an observation; there is no theory of how or why individuals differ in intelligence. There are theories for group differences (the garbage cold winter theory), but none for individual differences in intelligence? Hmmm. Sure, it'd be a fact that 'species change over time', but without a theory of how or why, how useful is that observation? Similarly, it is true that some people are more intelligent than others (score higher on IQ tests), yet there is no explanatory theory as to why. I believe this ties back to the physiological basis for g: are physiologists studying it, and if not, why?

Reaction time (RT) is one of the most talked about physiological correlates of IQ. However, as a fitness professional, I know that exercise can improve reaction time, especially in those with intellectual disabilities (Yildirim et al, 2001). I am now rethinking the correlation between reaction time and IQ, since RT can be trained in children, especially those with intellectual disabilities. Clearly, RT can be trained by exercise, participating in sports, and even by playing video games (Green, 2008). So since RT can be trained, I don't think it's a good physiological measure of g.

Individuals do differ in physiology; however, I have never heard of a physiologist attempting to rank individuals on different traits, never mind attempting to say that a higher level of one variable, say blood pressure or metabolic rate, is better than a lower one. In fact, individuals with very high blood pressure or metabolic rates would need immediate medical attention.

There are also wide variations in how immune systems act when faced with pathogens, bacteria and viruses. Though, “no one dreams of ranking individual differences on a general scale of immunocompetence” (Richardson, 2017: 166). So if g is physiological then why don’t other physiological traits get placed on a rank order, with physiologists praising certain physiological functions as “better”?

Richardson (2017: 166-167) writes:

In sum, no physiologist would suggest the following:

(a) that within the normal range of physiological differences, a higher level is better than any others (as is supposed in the construction of IQ tests);

(b) that there is a general index or “quotient” (a la IQ) that could meaningfully describe levels of physiological sufficiency or ability and individual differences in it;

(c) that “normal” variation is associated with genetic variation (except in rare deleterious conditions); and

(d) the genetic causation of such variation can be meaningfully separated from the environmental causes of the variation.

A preoccupation with ranking variations, assuming normal distributions, and estimating their heritabilities simply does not figure in the field of physiology in the way that it does in the field of human intelligence. This is in stark contrast with the intensity of the nature-nurture debate in the human cognitive domain. But perhaps ideology has not infiltrated the subject of physiology as much as it has that of human intelligence.

This is all true. I know of no physiologist who would suggest such a thing. So does it make sense to compare g with physiological variables—even when classic physiological variables do not have some kind of rank order? Heritabilities for BMR are between .4 and .8, which is in the same range as the heritability of IQ. Can you imagine any physiologist on earth suggesting a rank order for physiological traits such as BMR or stroke volume? I can’t, and if you knew anything about physiological variables then you wouldn’t either.

In sum, I believe that conflating g with physiology is erroneous, mostly because physiologists don't rank physiological traits in the same ways that human intelligence researchers do. Our physiology is intelligent in and of itself, and this process begins in the cell—the intelligent cell. Our physiological systems are intelligent; our bodies are dynamic systems that keenly respond to whatever is going on in the environment (think of how the body always attempts to maintain homeostasis). Physiology deals with the study of living organisms—more to the point, how the systems that run the organisms work.

Looking at physiological variables and attempting to detangle environmental and genetic effects is a daunting task—especially the way our physiological systems run (responding to cues from the environment, attempting to maintain homeostasis). So if general intelligence—g—had a true biological underpinning in the body, and if physiologists did study it, then they would not have a rank ordering for g like psychologists do; it’d just be another human trait to study.

So the answer to the question “Do physiologists study g?” is no, and if they did they would not have the variable on a rank order because physiologists don’t study traits in that manner—if a true biological underpinning for g exists. Physiology is an intelligent and dynamic system in and of itself, and the process begins in the intelligent cell, except it is on a larger scale, with numerous physiological variables working in concert, constantly attempting to stay in homeostasis.

Homo Neanderthalensis vs. Homo Sapiens Sapiens: Who is Stronger? Implications for Racial Strength Differences

1300 words

Unfortunately, soft tissue does not fossilize (which is a problem for facial reconstructions of hominins; Stephan and Henneberg, 2001; I will cover the recent ‘reconstructions’ of Neanderthals and Nariokotome boy soon). So saying that Neanderthals had X percent of Y fiber type is only conjecture. However, to make inferences about who was stronger, I do not need such data. I only need to look at the morphology of the Neanderthals and Homo sapiens, and from there, inferences can be made as to who was stronger. I will argue that Neanderthals were stronger, which is, of course, backed by solid data.

Neanderthals had wider pelves than Homo sapiens; wider pelves in colder climes are cold-weather adaptations. Although Neanderthals had wider pelves than we do, they had infants around the same size as Homo sapiens, which implies that Neanderthals had the same obstetric difficulties that we do. Neanderthals also had a pelvis that was similar to Heidelbergensis; however, most of the Neanderthal pelvic traits that were thought to be derived are, in fact, ancestral—except for the cross-sectional shape of the pubic ramus (Gruss and Schmidt, 2015). Since Neanderthals had wider pelves and most of their pelvic traits were ancestral, wide pelves may have been a trait of ancestral Homo (Trinkaus, Holliday, and Aurbach, 2014).

Hominins do need wider pelves in colder climates, however, as they are good for heat retention (see East Asians and Northern Europeans). Also keep in mind that Neanderthals were shorter than us—with the men averaging around 5 feet 5 inches and the women about 5 feet, roughly 5.1 inches shorter than post-WW II Europeans (Helmuth, 1998).

So what does a wider pelvis mean? Since the Neanderthals were shorter than us and also had wider pelves, they had a lower center of gravity in comparison to us. Homo sapiens, who came out of Africa, had narrower pelves, since narrow pelves are better at dissipating heat (Gruss and Schmidt, 2015). Homo sapiens would have been better adapted to endurance running and athleticism in comparison to the wide-pelved Neanderthals.

People from tropical climates have longer limbs and are tall and narrow (which is also good for endurance running/sprinting), while people from colder climates are shorter and more ‘compact’ (Lieberman, 2015: 113-114), with a wide pelvis for heat retention (Gruss and Schmidt, 2015). So, clearly, due to the differences in pelvic anatomy between Homo sapiens and Neanderthals, the former were built more for endurance and the latter more for power.

Furthermore, it was once thought that Neanderthals had long clavicles, which would have impeded strength. However, when the clavicles were reanalyzed and adjusted for the body size of Neanderthals—rather than compared with humeral lengths—Neanderthals turned out to have a clavicular length, and by implication a shoulder breadth, similar to that of Homo sapiens (Trinkaus, Holliday, and Aurbach, 2014). This is another clue that Neanderthals were stronger.

Yet more evidence comes from comparing the bone density of Neanderthal bones to that of Homo sapiens. Denser bones imply that the body can handle a heavier load and thus generate more power. In adolescent humans, muscle power predicts bone strength (Janz et al, 2016). So if the same holds true for Neanderthals—and I don't see why not—then their denser, stronger bones are a marker of higher muscle power.

Given the “heavy musculature” of Neanderthals, along with their high bone robusticity, they must have had denser bones than Homo sapiens (Friedlander and Jordan, 1994). So since Neanderthals had denser bones, they had higher muscle power; they had a lower center of gravity due to having wider pelves and being shorter than Homo sapiens, whose bodies were heat-adapted. Putting this all together, the picture becomes clearer that Neanderthals were, in fact, far stronger than Homo sapiens.

Another cause for these anatomical differences between Neanderthals and Homo sapiens is completely independent of cold weather. Neanderthals had an enlarged thorax (rib cage), which evolved to hold an enlarged liver, the organ responsible for metabolizing large amounts of protein. Since protein has the highest thermic effect of food (TEF), they would have had a higher metabolism due to a higher-protein diet, which would also have resulted in enlarged kidneys and bladder, necessary to remove urea, and possibly contributed to the wider Neanderthal pelvis (Ben-Dor, Gopher, and Barkai, 2016).

During glacial winters, Neanderthals would have consumed 74-85 percent of their calories from fat, with the rest coming from protein (Ben-Dor, Gopher, and Barkai, 2016). Neanderthals also consumed around 3,360-4,480 kcal per day (Steegman, Cerny, and Holliday, 2002). Let's assume that Neanderthals averaged 3,800 kcal per day. Since the upper limit of protein intake is 3.9 g/bw/day for erectus and 4.0 g/bw/day for Homo sapiens (Ben-Dor et al, 2011), Neanderthals would have had a theoretically higher upper limit due to having larger organs, which are useful in processing large amounts of protein. The protein intake of a Neanderthal male was estimated to be between 985 kcal (low end) and 1,170 kcal (high end); that is, Neanderthal males were estimated to eat about 292 grams of protein per day, or 1,170 kcal (Ben-Dor, Gopher, and Barkai, 2016: 370).

Assuming that Neanderthals did not eat carbohydrates during glacial winters (and even if a small amount were eaten, the model would not be affected) and an upper limit of protein intake of 300 grams per day for Neanderthal males, this implies that 74-85 percent of their diet came from animal fat—the rest being protein. Protein is the most thermogenic macro (Corvelli et al, 1997; Eisenstein et al, 2002; Buchholz and Schoeller, 2004; Halton and Hu, 2004; Gillingham et al, 2007; Binns, Grey, and Di Brezzo, 2014). So since Neanderthals ate a large amount of protein, along with their daily activities, they had to have had a high metabolic rate.

To put into perspective how much protein Neanderthals ate, the average American man eats about 100 grams of protein per day. In an analysis of the protein intake of Americans from 2003-2004, it was found that young children ate about 56 grams of protein per day, adults aged 19-30 ate about 91 grams per day, and the elderly ate about 56 grams per day (Fulgoni, 2008). Neanderthals ate about 3 times the protein we do, which would lead to organ enlargement, since larger organs are needed to metabolize that much protein. Another factor in the increase of Neanderthal metabolism was that their environment was, largely, extremely cold. Shivering increases metabolism (Tikuisis, Bell, and Jacobs, 1985; van Ooijen et al, 2005). So the Neanderthal metabolism would have been revved up close to a theoretical maximum capacity.
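As a quick sanity check on these figures, here is a minimal arithmetic sketch; the only number below not taken from the text is the standard 4 kcal per gram of protein.

```python
# Protein figures cited above: ~292 g/day for a Neanderthal male (Ben-Dor,
# Gopher, and Barkai, 2016) vs. ~91 g/day for American adults aged 19-30
# (Fulgoni, 2008). Protein supplies roughly 4 kcal per gram.
KCAL_PER_G_PROTEIN = 4

neanderthal_g = 292
american_adult_g = 91

print(neanderthal_g * KCAL_PER_G_PROTEIN)          # 1168 -- the ~1,170 kcal cited
print(round(neanderthal_g / american_adult_g, 1))  # 3.2 -- roughly 3x our intake
```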

The high protein intake of Neanderthals is important because high amounts of protein are needed to build muscle. Neanderthals consumed sufficient calories, along with around 300 grams of protein per day for the average male, which would have given them yet another strength advantage.

I am also assuming that Neanderthals had predominantly slow-twitch muscle fibers. Since they had wider pelves and evolved at higher latitudes (see Kenyan, East Asian, and European muscle fiber distributions), they would have had an abundance of slow-twitch fibers relative to fast-twitch fibers, as Europeans do, while African-Americans (of West African descent) have a higher proportion of fast-twitch fibers (Caesar and Henry, 2015). So now, thinking of everything explained above and replacing Neanderthals with Europeans and Homo sapiens with Africans, who do you think would be stronger? Clearly Europeans, which is what I have argued for extensively. African morphology (tall, lanky, high limb ratio) is not conducive to strength, whereas European morphology (wide pelvis, low limb ratio, an abundance of slow-twitch fibers) is.

The implications of these anatomic differences between Neanderthals and Homo sapiens, and how they translate into racial differences, will be explored more in the future. This was just to lay the anatomic and morphologic groundwork in regards to strength and cold weather adaptations. Nevertheless, the evidence that Neanderthals were stronger/more powerful than Homo sapiens stands on solid ground, and the same holds for the differences in strength between Africans and Europeans. The evolution of racial pelvic variation is extremely important to understand if you want to understand racial differences in sports.

r/K Selection Theory: A Response to Anonymous Conservative

2800 words

I knew the article about r/K selection would stir a bit of debate. Anonymous Conservative has replied to both articles that were published the other day. However, he seems confused. He doesn't talk about r/K selection theory in terms of density-dependence/independence. That's what r/K theory was based on before it was discredited in favor of demographic models based on age-specific mortality (Reznick et al, 2002). The theory was discredited decades ago. This article will be a response to him. How can you defend a theory that was replaced by models of age-specific mortality?


Combining all African and all European populations probably dulls the degree to which certain populations are r and K.

Combining the ethnies of all three populations makes no sense if you’re attempting to infer how behavior X evolved in ecosystem Y using r/K selection theory. To conduct such a study, you would need to study the races in the ecosystem that the selection was hypothesized to have occurred. r/K selection is—as I’ve already brought up—proven false. I will get to that below.

If r/K selection did apply to humans, then since Africans have been in their habitat—according to Rushton—for 140ky and Mongoloids have been in theirs for 40ky, Africans would have had more opportunity to approach the environmental carrying capacity, while Mongoloids, who migrated into novel environments (cold weather, as mentioned above), would experience r-selected pressures, since they were in a novel environment (an r pressure) and facing cold weather (another r pressure). Per Rushton's own arguments—along with how r/K theory was really used—Africans are K and Mongoloids are r.

Take the most r populations in Africa and you would also see highly obvious differences deviating from normal human behavior.

Which populations in Africa are ‘the most r’? What is ‘normal human behavior’?

Goal number one should be to get people forced to acknowledge that some humans are exhibiting the r-strategy compared to others.

If this were the case, then Mongoloids would be r while Africans would be K—if r/K selection theory weren't discredited and if human races qualified as local populations. This, of course, comes from Rushton's own words; he asserts that Mongoloids have cold-weather adaptations. So if Mongoloids have cold-weather adaptations and cold weather is an agent of r-selection, as described previously, then Mongoloids are r-selected. This argument comes straight from Rushton's own theory. Furthermore, Africans would be K-selected, since endemic disease is an agent of K-selection. This is simple enough to understand, especially if you read a few papers on r/K selection.

I get the impression the author is a pot-stirrer ginning up debate, which I can respect. But I would counter that I think this argument requires a slightly more complex view on a few points, and it seeks to cite the established literature on r/K a little too much.

Citing papers is what’s needed when discussing scientific matters. If your arguments are not backed by scientific papers then your argument is pretty much moot.

Most of the literature on r/K is incredibly shallow in its analyses. I suspect nobody really cared about the theory on an emotional level, so nobody really bothered to look too closely at it, or tried to understand why some arguments would seemingly violate simple common sense. One person would assert things that would make no sense in certain contexts, and nobody would ever try to highlight the complexity required for a fuller understanding of the issue. It is either that, or the more powerful minds gravitated somewhere else in the sciences with more practical application.

[Table of agents of r- and K-selection, from Anderson (1991), referenced below]

This looks pretty clear-cut to me. r/K selection theory has been extensively tested and falsified. Of course people cared about it; it dominated the biology and ecology literature for about twenty years after Pianka's (1970) paper, where he proposed his now-debunked ‘r/K continuum’. As I have said, Pianka gave no experimental rationale for why he chose the traits he did for the continuum (Graves, 2002: 135). This is simple enough to understand on its own.

As an example, the author cites papers that say drought is an r-selective pressure. Drought can be r or K, depending on the abilities of the organisms confronted with it. Mice will die in a drought, and have short enough life cycles to reproduce in the wet periods following it. So with mice, after the drought, there will be free resources and that makes drought a huge r-selection pressure.

But suppose you have an organism with the intelligence to envision how to survive the drought, and which thinks in terms of long time frames. Now that drought will cull the relatively r-selected individuals who are designed to exploit a glut with no thought of the future, while favoring those who planned for the drought and stockpiled water, or organized a way to acquire it. Is the drought still an r-selective pressure? Being human, with a high IQ and an ability to plan for the future changes a lot of these rules.

Drought is an agent of r-selection. How about earthquakes and volcanic eruptions? Are those agents of K-selection as well if you can ‘plan for the future changes’? Provide references for your assertion or your claim is unfounded.

On the issue of colder climates being K, the author cites research which makes the case that cold climates kill back the population in the winter, and then allow explosive growth in the summer, and thus are r-selecting.

This will be true in things like insects with short lifespans and no ability to plan for the winter. But in humans, this will favor those who can defer pleasures in the summer, looking forward to the winter and sacrificing by setting aside resources to get themselves through the colder period. It will also favor groups which can work together in pursuit of common goals.

You don’t get it. Mongoloids being r-selected is straight from Rushton. He asserts that they have cold-adaptations. Cold adaptations are due to cold weather. Cold weather is an agent of r-selection (temperature extreme). If cold weather is an agent of r-selection and Mongoloids further migrated into a novel environment (another agent of r-selection), then, per Rushton’s own words, Mongoloids are r-selected. Conversely, Rushton describes endemic disease and drought in Africa (without references), but let’s assume it’s true. As described above, drought is an agent of r (see the table from Anderson above) while endemic disease is an agent of K-selection.

Endemic (native) disease is an agent of K-selection. Since the disease is constant, the population under that agent of K-selection can prepare for it ahead of time. Indeed, in Africa, measures can be taken to reduce the number of those infected with malaria, from mothers shielding their babies from mosquitoes to herbal remedies that have been in use for thousands of years (Wilcox and Bodecker, 2004). If endemic disease is constant (and it is) and Africans are under that constant pressure, then they will be K-selected.

Do groups not work together in Africa to reach common goals? In the Pleistocene as well? Citations? Think before you write (and cite), because hunting bands in our species began with Homo erectus. The capacity for endurance running evolved in erectus, which can be seen in the beginnings of our modern pelvis as well as the evolution of the gluteus maximus (Lieberman et al, 2006). So how can you assert that working together to reach common goals only occurred where it was cold—as if tropical environments don't have their own challenges which require foresight and planning? Think about human evolution and how modern human cognition evolved in Africa.

This will be true of most hardships to some degree. Where they kill back the population massively and randomly, and then allow explosive regrowth, they are r-pressures. But where they are challenges that select for those who can prepare and overcome them, they will tend to favor K, even if they may, strictly by the numbers, appear to be r.

How can you prepare for and overcome a violent winter storm, volcanic eruption, earthquake, or drought (all of which vary wildly)? At a certain point, you can be the smartest one around and still succumb to the elements.

He also speaks of aggression. There the question is, is aggression borne of a competitive psychology that embraces risk innately because it evolved to embrace risk in a competitive environment where resources are scarce, or is aggression an opportunistic seizure of free resources from the weak and helpless.

A criminal who sees an old lady and pushes her to the ground to steal her purse is not the same as a Marine who proceeds to selflessly storm enemy lines and kill fifteen men with his bare hands simply to try and save his fellow Marines in battle. The criminal will seek out the weak and vulnerable to victimize safely for personal gain, while the Marine would find that in conflict with his nature. The Marine will sacrifice himself for his group and nothing more, while the criminal would view that as pointless and stupid. Those are two vastly different forms of aggression.

Aggression and violence can be principled and daring, or opportunistic and cowardly. Each is driven by a different psychology, and you can see this difference extend to sexual drive, promiscuity, and even rearing investments. I think there needs to be a difference cited there. One aggressive psychology is r and one is K. One is designed to take free resources in a world with no consequences, while the other is programmed to fight with anyone to try and get a share of scarce resources, because if they didn’t they would starve.

I speak of aggression in regards to testosterone and Richard Lynn's claims that gonadotropin levels and testosterone lend further support to Rushton's theory. However, I've falsified Ross et al (1986) numerous times. Further, the correlation between testosterone and physical aggression is a pitiful .08 (Archer, Graham-Kevan, and Lowe, 2005). The point is that testosterone is related to neither aggression nor crime. Furthermore, the times of day at which crime is committed at the highest rates by teens (3 pm) and adults (10 pm) discredit the testosterone-causing-crime theory, since testosterone levels are highest at 8 am and lower at 8 pm. You did not address my arguments on testosterone—try again.

Then there is disease. Disease can be r or K, depending on epidemiology. If a disease is sexually transmitted, it is going to take out those with a high sex drive, promiscuity, and reduced disgust. That doesn’t means the disease is K-selecting, so much as it preferentially kills those with an r-selected psychology, and fosters the rise of K.

What about a disease that is endemic? Endemic disease (Rushton's assertion) is an agent of K; this is not up for discussion. Endemic disease reduces the carrying capacity and is thus an agent of K-selection.

This is simple enough to understand, especially if you understand r/K selection theory.

On the other hand, if a disease infects and kills randomly, such as one transmitted by mosquito, then it will open up free resources by killing the population back below the carrying capacity. That will favor the rise of the r-selected psychologies.

Nope.

I have found the vast majority are written by individuals looking to create quick rules of thumb for much more complex variables that can only be looked at in the context of the mechanisms they are a part of. In many cases, I see authors claiming something is always r or K, when the truth is they are more often the opposite for reasons which the authors seem strangely blind to.

The vast majority of what was written about r/K in its heyday was written by biologists and ecologists. Why reduce a complex biological system interacting with its almost equally complex environment down to a discredited theory? It doesn't make sense to reduce what organisms do to some ‘simple model’ when the real world—and by extension ecological theory—is much more complex than a ‘simple model’.

r and K are simple adaptation to either free or limited resource availabilities. To understand how the environment affects the evolution of r and K psychologies, you have to understand that those adaptations to free or limited resources imbue certain psychological predispositions. Once imbued, all other selective pressures have to be examined with an eye to how they either confer advantage or disadvantage on those who express those psychological traits.

r/K selection theory is based on density-dependence and density-independence. As a matter of fact, searching your site for ‘density-dependent‘ brings up no hits, and for ‘density-independent‘ the only hit is your response to my article, which makes me believe that you don't understand r/K selection theory, since it's based on density-dependence and density-independence. It's also impossible to predict which life history traits will be favored by selection unless you know which particular ecological factors influence life history traits and have a model of how they function (Anderson, 1991). Rushton did neither, and so he was wrong in his application of r/K to human races.

A sexually transmitted disease that savages a population will open up resource availability and reduce the population well below the carrying capacity, and thus could be mistaken for an r-selecting pressure. But if it wipes out every promiscuous r-strategist, and leaves behind only the monogamous K-strategists, then it is not an r-selective pressure at all. It is favoring the K-psychology, even as from a raw numerical standpoint it would appear an r-pressure.

Which STD? Which population(s)? Source? Even then, STDs such as chancroid (in the US and Europe) were endemic in the early 20th century (Aral, Fenton, and Holmes, 2007). Which populations are you describing? An event like that would be part of the density-dependence aspect of what r/K described: the population would dip and then go right back to the environmental carrying capacity (K).

It is necessary—for a K-selected history—to have some sort of density-dependent pressure, such as endemic disease in Africa, since density-dependent natural selection occurs at or close to the environmental carrying capacity (Anderson, 1991: 58). If you truly understood r/K selection theory, you'd understand that it's based on density dependence; you'd understand that ‘r’ and ‘K’ are not adjectives.

(Indeed, I suspect a golden age in the context of human history will be found to often be such an unusual circumstance, where a population is K-ified, even as it is placed in an r-selected environment of free resource availability. The opposite, an r-ified population placed in a grossly overpopulated environment of shortage will be found to reliably be Hell on earth. Guess which one we have coming.)

You should learn about what r/K selection really is (it is density-dependent selection).
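To make the density-dependence point concrete, here is a minimal sketch of the logistic growth model that the original r/K formulation was built on; the numbers are illustrative placeholders of my own, not from any study.

```python
# Logistic growth: dN/dt = r * N * (1 - N / K), where r is the intrinsic
# growth rate and K is the environmental carrying capacity.
def logistic_step(n, r, k, dt=1.0):
    """Advance the population one Euler step of logistic growth."""
    return n + r * n * (1 - n / k) * dt

n = 10.0            # small founding population
r, k = 0.5, 1000.0  # illustrative growth rate and carrying capacity

for _ in range(40):
    n = logistic_step(n, r, k)

print(round(n))  # ~1000: the population settles at the carrying capacity
```

Far below K, growth is nearly exponential and density-independent (the 'r' regime); as the population approaches K, growth is choked off by density-dependent competition (the 'K' regime). That is the sense in which r- and K-selection are defined by population density relative to carrying capacity, not by a checklist of traits.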

The complete absence of that type of detailed understanding of the effects of selective pressures in the literature about r/K Selection Theory is why I don’t waste extensive time here quoting the source texts on the subject. Most seem strangely shallow in their analyses.

It is detailed; see the table above. Where does alpha-selection fit into your theory? Are conservatives alpha-selected? Not speaking about alpha-selection throws a wrench into the theory. The r/K continuum doesn't even exist!

I am amused to see the author mention r/K Selection Theory has been linked to ideology, without any mention of where. My greatest hope has always been that r/K Theory would become so ever present in the dialog that nobody would remember where it first arose. When that happens, r/K will be everywhere, and nobody will have any idea who to blame.

Well, the ‘ones to blame’ would be the originators of the theory, MacArthur and Wilson. But r/K selection is a dead concept in biology and population ecology. Don't worry, r/K selection is dead and isn't coming back; I've shown how it's a discredited model.

In regards to r/K being falsified, when the theory was tested, key life history variables did not conform to the predictions of the theory (Graves, 2002: 137). People should stop pushing discredited theories.

By the way, in regards to the one comment that was left: why break down complex biological interactions with the environment into something so simple? Can you explain to me how and why complex biological systems interacting with their environment can be broken down ‘simply’? You, as well, have no idea what r/K selection is.


Anonymous Conservative should try to be aware of his political biases. That much is clear. Although, now I know what will happen: we will see a case of the backfire effect, where these corrections will increase his misconceptions of r/K selection theory (Nyhan and Reifler, 2012). Everyone should try to keep this quote in mind at all times:

When you are studying any matter, or considering any philosophy, ask yourself only what are the facts and what is the truth that the facts bear out. Never let yourself be diverted either by what you wish to believe, or by what you think would have beneficent social effects if it were believed. But look only, and solely, at what are the facts. That is the intellectual thing that I should wish to say. —Bertrand Russell, 1959