
Is Obesity Caused by a Virus?

2150 words

I’ve recently taken a large interest in the human microbiome and parasites and their relationship with how we behave. Certain parasites can and do have an effect on human behavior, and they also reduce or increase certain microbes, some of which are important for normal functioning. What I’m going to write may seem weird and counter-intuitive to the CI/CO (calories in/calories out) model, but once you understand how diversity in the human microbiome matters for energy acquisition, you’ll begin to understand how the microbiome contributes to the exploding obesity rate in the first world.

One of the books I’ve been reading about the human microbiome is 10% Human: How Your Body’s Microbes Hold the Key to Health and Happiness, in which Alanna Collen, who holds a PhD in evolutionary biology, outlines how the microbiome affects our health and how we behave. One of the most intriguing things I’ve read in the book so far is the relationship between microbiome diversity, obesity, and a virus.

Collen (2014: 69) writes:

But before we get too excited about the potential for a cure for obesity, we need to know how it all works. What are these microbes doing that make us fat? Just as before, the microbiotas in Turnbaugh’s obese mice contained more Firmicutes and fewer Bacteroidetes, and they somehow seemed to enable the mice to extract more energy from their food. This detail undermines one of the core tenets of the obesity equation. Counting ‘calories-in’ is not as simple as keeping track of what a person eats. More accurately, it is the energy content of what a person absorbs. Turnbaugh calculated that the mice with the obese microbiota were collecting 2 per cent more calories from their food. For every 100 calories the lean mice extracted, the obese mice squeezed out 102.

Not much, perhaps, but over the course of a year or more, it adds up. Let’s take a woman of average height, 5 foot 4 inches, who weighs 62 kg (9 st 11 lb) and has a healthy Body Mass Index (BMI: weight (kg) / height (m)^2) of 23.5. She consumes 2,000 calories per day, but with an ‘obese’ microbiota, her extra 2 per cent calorie extraction adds 40 more calories each day. Without expending extra energy, those further 40 calories per day should translate, in theory at least, to a 1.9 kg weight gain over a year. In ten years, that’s 19 kg, taking her weight to 81 kg (12 st 11 lb) and her BMI to an obese 30.7. All because of just 2 per cent extra calories extracted from her food by her gut bacteria.
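Collen’s arithmetic is easy to check. A minimal sketch, assuming the common ~7,700 kcal-per-kilogram-of-body-fat rule of thumb (the book doesn’t state which conversion it uses):

```python
# Back-of-the-envelope check of the Collen (2014) example above.
# The 7,700 kcal/kg figure is an assumed rule of thumb, not from the book.
KCAL_PER_KG_FAT = 7700

def bmi(weight_kg, height_m):
    """BMI = weight (kg) / height (m)^2."""
    return weight_kg / height_m ** 2

height_m = 64 * 0.0254               # 5 ft 4 in converted to metres
extra_kcal_day = 2000 * 0.02         # 2% extra extraction of 2,000 kcal = 40 kcal/day
gain_kg_year = extra_kcal_day * 365 / KCAL_PER_KG_FAT

print(round(bmi(62, height_m), 1))   # starting BMI: 23.5
print(round(gain_kg_year, 1))        # yearly gain: 1.9 kg
print(round(bmi(62 + 10 * round(gain_kg_year, 1), height_m), 1))  # after ten years: 30.7
```

Under that assumption the numbers in the quote all reproduce, so the 2 per cent figure is doing exactly the work Collen claims.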

Turnbaugh et al (2006) showed that differing microbiota contribute to differing amounts of weight gain. The obese microbiome does have a greater capacity to extract energy from the same amount of food than the lean microbiome. This implies that an obese person would extract more energy eating the same food as a lean person—even if the so-called true caloric value on the package, measured with a calorimeter, says otherwise. How much energy we absorb from the food we consume comes down to genes, but not just the genes you get from your parents; it matters which genes are turned on or off. Our microbes also control some of our genes to suit their own needs, driving us to do things that would benefit them.

Gut microbiota do influence gene expression (Krautkramer et al, 2016). This is something that behavioral geneticists and psychologists need to look into when attempting to explain human behavior, but that’s for another day. The fact of the matter is, where the energy broken down from food by the microbiome goes is dictated by genes, the expression of which is controlled by the microbiome. Certain microbiota have the ability to turn up production in certain genes that encourage more energy to be stored inside the adipocyte (Collen, 2014: 72). So the ‘obese’ microbiota, mentioned previously, can upregulate genes that control fat storage, forcing the body to extract more energy from what is eaten.

The Indian doctor Nikhil Dhurandhar set out to find out why he couldn’t cure his patients of obesity; they kept coming back to him again and again, uncured. At the time, an infectious virus was wiping out chickens in India. Dhurandhar had family and friends who were veterinarians, and they told him that the infected chickens were fat, with enlarged livers, shrunken thymus glands and a lot of body fat. Dhurandhar then injected chickens with the virus that supposedly induced the weight gain, and discovered that the injected chickens grew fatter than the chickens who were not injected with it (Collen, 2014: 56).

Dhurandhar, though, couldn’t continue his research into other causes of obesity in India, so he decided to relocate his family to America and study the underlying science behind obesity. He couldn’t find work in any lab to test his hypothesis that a virus was responsible for obesity, but right before he was about to give up and go back home, the nutritional scientist Richard Atkinson offered him a job in his lab. They were not, of course, allowed to ship the chicken virus to America “since it might cause obesity after all” (Collen, 2014: 75), so they had to experiment with another virus, adenovirus 36—Ad-36 (Dhurandhar et al, 1997; Atkinson et al, 2005; Pasarica et al, 2006; Gabbert et al, 2010; Vander Wal et al, 2013; Berger et al, 2014; Pontiero and Gnessi, 2015; Zamrazilova et al, 2015).

Atkinson and Dhurandhar injected one group of chickens with the virus and kept one uninfected control group. The infected chickens did indeed grow fatter than the ones who were not infected. However, there was a problem: Atkinson and Dhurandhar could not outright infect humans with Ad-36, so they did the next best thing and tested people’s blood for Ad-36 antibodies. Thirty percent of the obese subjects ended up having Ad-36 antibodies, whereas only 11 percent of the lean subjects did (Collen, 2014: 77).

So, clearly, Ad-36 meddles with the body’s energy storage system, though we currently don’t know how much this virus contributes to the epidemic. This throws the CI/CO theory of obesity into disarray, showing that calling obesity a ‘lifestyle disease’ is extremely reductionist and that other factors strongly influence the disease.

On the mechanisms of exactly how Ad-36 influences obesity:

The mechanism in which Ad-36 induces obesity is understood to be due to the viral gene, E4orf1, which infects the nucleus of host cells. E4orf1 turns on lipogenic (fat producing) enzymes and differentiation factors that cause increased triglyceride storage and differentiation of new adipocytes (fat cells) from pre-existing stem cells in fat tissue.

We can see that there is large variation in how much energy is absorbed by looking at one overfeeding study. Bouchard et al (1990) fed 12 pairs of identical twins 1,000 kcal a day over their TDEE, 6 days per week, for 100 days. Each man ate about 84,000 kcal more than his body needed to maintain his previous weight. At the oft-cited rule of 3,500 kcal per pound of fat, this should have translated to exactly 24 pounds gained by each man in the study, but this did not turn out to be the case. Quoting Collen (2014: 78):

For starters, even the average amount the men gained was far less than maths dictates that it should have been, at 18 lb. But the individual gains betray the real failings of applying a mathematical rule to weight loss. The man who gained the least managed only 9 lb — just over a third of the predicted amount. And the twin who gained the most put on 29 lb — even more than expected. These values aren’t ’24 lb, more or less’, they are so far wide of the mark that using it even as a guide is purposeless.

This shows that, obviously, the composition of the individual microbiome contributes to how much energy is broken down in the food after it is consumed.
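The gap between prediction and outcome is simple arithmetic to verify. A minimal sketch, using the 3,500 kcal-per-pound rule and the gains reported in the quote:

```python
# Expected gain under the simple calories-in model that Bouchard's data undermines.
KCAL_PER_LB_FAT = 3500           # rough rule of thumb for one pound of body fat

surplus_kcal = 1000 * 6 * 14     # 1,000 kcal/day, 6 days/week, ~14 weeks = 84,000 kcal
expected_lb = surplus_kcal / KCAL_PER_LB_FAT
print(expected_lb)               # 24.0 lb predicted for every single man

# Reported gains (Collen, 2014: 78): nowhere near a uniform 24 lb.
actual_lb = {"mean": 18, "min": 9, "max": 29}
print(actual_lb["min"], "to", actual_lb["max"], "lb on identical surpluses")
```

A 9 to 29 lb spread on identical surpluses is the variation the text attributes to differences in microbiome energy extraction.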

One of the most prominent microbes showing a lean/obese difference is one called Akkermansia muciniphila. The less Akkermansia one has, the more likely one is to be obese. Akkermansia comprises about 4 percent of the whole microbiome in lean people, but it is almost nowhere to be found in obese people. Akkermansia lives on the mucus lining of the gut, which keeps the bacteria from crossing over into the blood. Further, people with a low amount of this bacterium are also more likely to have a thinner mucus layer in the gut and more lipopolysaccharides (LPS) in the blood (Schneeberger et al, 2015). This one species of microbiota is responsible for dialing up gene activity that thickens the mucus layer, giving it more mucus to live on and preventing LPS from crossing into the blood. This is one example of the ability of the trillions of bacteria in our microbiome to upregulate the expression of genes for their own benefit.

Everard et al (2013) showed that when the diets of a group of mice were supplemented with Akkermansia, LPS levels dropped, their fat tissue began creating new cells, and their weight dropped. They concluded that the cause of the weight gain in the mice was increased LPS production, which forced the fat cells to take in more energy and not use it.

There is evidence that obesity spreads in the same way an epidemic does. Christakis and Fowler (2007) followed over 12,000 people from 1971 to 2003. Their main conclusion was that the best predictor of weight gain for an individual was whether their closest loved one had become obese. One’s chance of becoming obese increased by a staggering 171 percent if a close friend had become obese over the 32-year period, whereas among twins, if one twin became obese there was a 40 percent chance that the co-twin would become obese, and if one spouse became obese, the chance the other would was 37 percent. The effect did not hold for neighbors, so something else must be going on (i.e., it’s not the quality of the food in the neighborhood). Of course, when obesogenic environments are spoken of, the main culprits are the spread of fast-food restaurants and the like. But that doesn’t seem to explain the shockingly high chance of becoming obese when one’s closest loved ones do. What does?

There are, of course, the same old explanations, such as sharing food, but looking at it from a microbiome point of view, the microbiome can and does contribute to adult obesity—due in part to different viruses’ effects on our energy storage system, as described above. But I believe that the hypothesis that we share microbes with each other, which also drive obesity, should be introduced as an alternate or complementary explanation.

As you can see, the closer one is to a person who becomes obese, the higher one’s chance of also becoming obese. Close friends (and obviously couples) spend a lot of time around each other, in the same house, eating the same foods, using the same bathrooms, etc. Is it really so ‘out there’ to suggest that something like this may also contribute to the obesity epidemic? Taking into account some of the evidence reviewed here, I don’t think such a hypothesis should be so easily discarded.

In sum, reducing obesity just to CI/CO is clearly erroneous, as it leaves out a whole slew of other explanatory theories/factors. Clearly, our microbiome affects how much energy we extract from our food after we consume it. Certain viruses—such as Ad-36, an avian virus—influence the body’s energy storage, driving the creation of new fat cells and packing existing fat cells with more fat. That viruses and our diet can influence our microbiome—along with our microbiome influencing our diet—definitely needs to be studied more.

One good correlate of the microbiome’s/viruses’ role in human obesity is that the closer one is to someone who becomes obese, the more likely it is that one will become obese as well. And since the chance increases with closeness, the explanation involving gut microbes and how they break down our food and store energy becomes even more relevant. The trillions of bacteria in our guts may control our appetites (Norris, Molina, and Gewirtz, 2013; Alcock, Maley, and Atkipis, 2014), and do control our social behaviors (Foster, 2013; Galland, 2014).

So, clearly, to understand human behavior we must understand the gut microbiome, how it interacts with the brain and our behavior, and how and why it leads to obesity. Ad-36 is a great start, with quite a bit of research into it; I await more research into how our microbiome and parasites/viruses control our behavior, because the study of human behavior should now include the microbiome and parasites/viruses, since they have such a huge effect on each other and on us—their hosts—as a whole.

Racial Differences in Jock Behavior: Implications for STI Prevalence and Deviance

1350 words

The Merriam-Webster dictionary defines jock as “a school or college athlete” and “a person devoted to a single pursuit or interest”. This term, as I previously wrote about, holds a lot of predictive power in terms of life success. What kind of racial differences can be found here? As with a lot of life outcomes/predictors, there are racial differences, and they are robust.

Male jocks get more sex, after controlling for age, race, SES, and family cohesion. Being involved in sports is known to decrease sexual promiscuity; however, this effect did not hold for black American jocks, with the jock label being associated with higher levels of sexual promiscuity (Miller et al, 2005). Black American jocks reported significantly higher levels of sexual activity than non-black jocks, but the study did not find that white jocks took fewer risks than their non-jock counterparts.

Black Americans do have a higher rate of STDs compared to the average population (Laumann et al, 1999; Cavanaugh et al, 2010; CDC, 2015). Black females who were enrolled in, or had graduated from, college had a higher STI (sexually transmitted infection) rate (12.4 percent self-reported; 13.4 percent assayed) than white women with less than a high school diploma (6.4 percent self-reported; 2.3 percent assayed) (Annang et al, 2010). I would assume that these black women would be more attracted to black male jocks and thus would be more likely to acquire STIs, since black males who self-identify as jocks are more sexually promiscuous. It seems that since black male jocks—both in high school and college—are more likely to be sexually promiscuous, this has an effect on even college-educated black females, even though higher educational status normally makes one less likely to acquire STIs.

Whites use the ‘jock identity’ in a sports context whereas blacks use the identity in terms of the body. Black jocks are more promiscuous and have more sex than white jocks, and I’d bet that black jocks have more STDs than white jocks since they are more likely to have sex. Jock identity—but not athletic activity or school-athlete status—was a better predictor of juvenile delinquency in a sample of 600 Western New York students, a finding robust across gender and race (Miller et al, 2007a). Though, surprisingly, the ‘jock effect’ on crime was not what you would expect: “The hypothesis that effects would be stronger for black adolescents than for their white counterparts, derived from the work of Stark et al. 1987 and Hughes and Coakley (1991), was not supported. In fact, the only clear race difference that did emerge showed a stronger effect of jock identity on major deviance for whites than for blacks” (Miller et al, 2007a).

Miller et al (2007b) found that the term jock means something different to black and white athletes. For whites, the term was associated with athletic ability and competition, whereas for blacks the term was associated with physical qualities. Whites, though, were more likely to self-identify with the label of jock than blacks (37 percent and 22 percent respectively). They also found that binge drinking predicted violence amongst family members, but in non-jocks only. The jock identity, for whites and not blacks, was also associated with more non-family violence while whites were more likely to use the aggression from sports in a non-sport context in comparison to blacks.

For black American boys, the jock label was a predictor of promiscuity but not of dating. For white American jocks, dating meant more than the jock label. Miller et al (2005) write:

We suggest that White male jocks may be more likely to be involved in a range of extracurricular status-building activities that translate into greater popularity overall, as indicated by more frequent dating; whereas African American male jocks may be “jocks” in a more narrow sense that does not translate as directly into overall dating popularity. Furthermore, it may be that White teens interpret being a “jock” in a sport context, whereas African American teens see it more in terms of relation to body (being strong, fit, or able to handle oneself physically). If so, then for Whites, being a jock would involve a degree of commitment to the “jock” risk-taking ethos, but also a degree of commitment to the conventionally approved norms with sanctioned sports involvement; whereas for African Americans, the latter commitment need not be adjunct to a jock identity.

It’s interesting to speculate on why whites would be more prone to risk-taking behavior than blacks. I would guess that it has something to do with their perception of themselves as athletes, leading to more aggressive behavior. Then again, certain personalities would be more likely to be athletic and thus to self-identify as jocks; the same would hold true for somatotype as well.

So the term jock seems to mean different things for whites and blacks, and for whites, leads to more aggressive behavior in a non-sport context.

Black females who self-identified as jocks reported lower grades, whereas white females who self-identified as jocks reported higher grades than white females who did not (Miller et al, 2006). Jocks also reported more misconduct, such as skipping school, cutting class, being sent to the principal’s office, and parents having to go to the school for a disciplinary matter, compared to non-jocks. Boys were more likely than girls to engage in actions that required disciplinary intervention, and were also more likely to skip school, have someone called from home, and be sent to the principal’s office. Blacks, of course, reported lower grades than whites, but there was no significant difference in misconduct by race. However, blacks reported fewer absences but more disciplinary action than whites; blacks were less likely to cut class, but more likely to have someone called from home and slightly more likely to be sent to the principal’s office (Miller et al, 2006).

This study shows that the relationship between athletic ability and good outcomes is not as robust as believed. Athletes and jocks are also different: athletes are held in high regard in the eyes of the general public, while jocks are seen as dumb and slow, good only at their particular sport and nothing else. Miller et al (2006) also state that this so-called ‘toxic jock effect‘ (Miller, 2009; Miller, 2011) is strongest for white boys. Some of these ‘effects’ are binge and heavy drinking, bullying and violence, and sexual risk-taking. Though Miller et al (2006) say that, for this sample at least, “It may be that where academic performance is concerned, the jock label constitutes less of a departure from the norm for white boys than it does for female or black adolescents, thus weakening its negative impact on their educational outcomes.”

The correlation between athletic ability and jock identity was only .31, and significant for whites but not blacks (Miller et al, 2007b). They also found, contrary to other studies, that involvement in athletic programs did not deter minor and major adolescent crime. And they falsified the hypothesis that the ‘toxic jock effect’ (Miller, 2009; Miller, 2011) would be stronger for blacks than whites, since whites who self-identified as jocks were more likely to engage in delinquent behavior.

In sum, there are racial differences in ‘jock’ behavior, with blacks being more likely to be promiscuous while whites are more likely to engage in deviant behavior. Black women are more likely to have higher rates of STIs, and part of the reason is sexual activity with black males who self-identify as jocks, as they are more promiscuous than non-jocks. This could explain part of the difference in STI acquisition between blacks and whites. Miller et al argue for discontinuing the use of the term ‘jock’, believing that if this occurs, deviant behavior will be curbed in white male populations that self-identify as ‘jocks’. I don’t know if that will be the case, but I don’t think there should be ‘word policing’, since people will end up using the term more anyway. Nevertheless, there are differences between races among those who self-identify as jocks, which will be explored more in the future.

Nerds vs. Jocks: Different Life History Strategies?

1150 words

I was alerted to a NEEPS (Northeastern Evolutionary Psychology Society) conference paper, and one of the short abstracts of a talk had a bit about ‘nerds’, ‘jocks’, and differing life history strategies. Surprisingly, the results did not line up with current stereotypes about life outcomes for the two groups.

The Life History of the Nerd and Jock: Reproductive Implications of High School Labels

The present research sought to explore whether labels such as “nerd” and “jock” represent different life history strategies. We hypothesized that self-identified nerds would seek to maximize future reproductive success while the jock strategy would be aimed at maximizing current reproductive success. We also empirically tested Belsky’s (1997) theory of attachment style and life history. A mixed student/community sample was used (n=312, average age = 31) and completed multiple questionnaires on Survey Monkey. Dispelling stereotypes, nerds in high school had a lower income and did not demonstrate a future orientation in regards to reproductive success, although they did have less offspring. Being a jock in high school was related to a more secure attachment style, higher income, and higher perceived dominance. (NEEPS, 2017: 11)

This goes against all conventional wisdom; how could ‘jocks’ have better life outcomes than ‘nerds’, if the stereotype about the blubbering idiot jock is supposedly true?

Future orientation is “the degree to which a collectivity encourages and rewards future-oriented behaviors such as planning and delaying gratification” (House et al, 2004, p. 282). So the fact that self-reported nerds did not show future orientation in regards to reproductive success is a blow to some hypotheses, yet they did have fewer children.

However, there are other possibilities that could explain why so-called nerds have fewer children: they could be seen as less attractive and desirable; they could be seen as anti-social due to being, more often than not, introverted; or they could just be focusing on other things, not worrying about procreating/talking to women, and so end up having fewer children as a result. Nevertheless, the fact that nerds ended up having lower income than jocks is pretty telling (and obvious).

There are, of course, numerous reasons why a student should join a sport. One of the biggest is that the skills that are taught in team sports are most definitely translatable to the real world. Most notably, one who plays sports in high school may be a better leader and command attention in a room, and this would then translate over to success in the post-college/high school world. The results of this aren’t too shocking—to people who don’t have any biases, anyway.

Why might nerds in high school have had lower income in adulthood? One reason could be that their social awkwardness did not translate into dollar signs after high school/college graduation; they may have chosen bad majors, or just not known how to translate their thoughts into real-world success. Athletes, on the other hand, have the confidence that comes from playing sports, and they know how to work together with others as a cohesive unit, in comparison to nerds, who are more introverted and shy away from being around a lot of people.

Nevertheless, this flew in the face of stereotypes: the jocks—who (supposedly) have nothing beyond their so-called ‘primitive’ athletic ability—had greater success and more money, contrary to what others have written in the past about nerds having greater success relative to the average population. Thinking about the traits that jocks have in comparison to nerds, it doesn’t seem so weird that jocks would have better life outcomes.

Self-reported nerds, clearly, don’t have the confidence to make the stratospheric amounts of cash that people assume they should make because they are knowledgeable in a few areas; quite the contrary. Those who could use their body’s athletic ability had more children as well as greater life success than nerds, which of course flew in the face of stereotypes. Certain stereotypes need to go, because sometimes stereotypes do not tell the truth about things; they are just what ‘sounds good’ in people’s heads.

If you think about what it would take, on average, to make more money and have great success in life after high school and college, you’ll need to know how to talk to people and how to network, which the jocks would know how to do. Nerds, on the other hand, who are more ‘socially isolated’ due to their introverted personality, would not know too much about how to network and how to work together with a team as a cohesive unit. This, in my opinion, is one reason why this was noticed in this sample. You need to know how to talk to people in social settings and nerds wouldn’t have that ability—relative to jocks anyway.

Jocks, of course, would have higher perceived dominance since athletes have higher levels of testosterone both at rest and exhaustion (Cinar et al, 2009). Athletes, of course, would have higher levels of testosterone since 1) testosterone levels rise during conflict (which is all sports really are, simulated conflict) and 2) dominant behavior increases testosterone levels (Booth et al, 2006). So it’s not out of the ordinary that jocks were seen as more dominant than their meek counterparts. In these types of situations, higher levels of testosterone are needed to help prime the body for what it believes is going to occur—competition. Coupled with the fact that jocks are constantly in situations where dominance is required; engage in more physical activity than the average person; and need to keep their diet on point in order to maximize athletic performance, it’s no surprise that jocks showed higher dominance, as they do everything right to keep testosterone levels as high as possible for as long as possible.

I hope there are videos of these presentations because they all seem pretty interesting, but I’m most interested in locating the video for this specific one. I will update on this if/when I find a video for this (and the other presentations listed). It seems that these labels do have ‘differing life history strategies’, and, despite what others have argued in the past about nerds having greater success than jocks, the nerds get the short end of the stick.

Why Are People Afraid of Testosterone?

1100 words

The answer to the question of why people are afraid of testosterone is very simple: they do not understand the hormone. People complain about birth rates and spermatogenesis, yet they believe that having high testosterone makes one a ‘savage’ who ‘cannot control their impulses’. However, if you knew anything about the hormone and how it’s vital to normal functioning then you would not say that.

I’ve covered why testosterone does not cause crime by looking at the diurnal variation in the hormone: testosterone levels are highest at 8 am and lowest at 8 pm, while children commit the most crimes at 3 pm and adults at 10 pm. The diurnal variation is key: if testosterone truly did cause crime, then crime rates would be higher in the morning for both children and adults; yet, as can be seen with children, violence spikes when they enter school, go to recess, and exit school, which is why those times, and not the hormone’s morning peak, line up with the spikes in crime in children.

I wrote a previous article citing a paper by Book et al (2001), in which they meta-analyzed testosterone studies and found that the correlation between testosterone and aggression was .14. However, that estimate is too high, since they included 15 studies that should not have been included in the analysis; the true correlation is .08 (Archer, Graham-Kevan, and Davies, 2004). So, along with the fact that the diurnal variation in testosterone does not correlate with crime spikes, this shows that testosterone does not cause crime; it’s just always at the scene, because it prepares the body to deal with a threat.
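To put those correlations in perspective: squaring r gives the share of variance in aggression that testosterone could statistically account for at most, and both figures are tiny.

```python
# Variance in aggression accounted for by testosterone is r squared.
r_revised = 0.08   # Archer, Graham-Kevan, and Davies (2004) re-analysis
r_book = 0.14      # Book et al (2001), inflated by 15 ineligible studies

print(round(r_revised ** 2, 4))  # 0.0064 -> well under 1% of the variance
print(round(r_book ** 2, 4))     # 0.0196 -> still under 2%, even inflated
```

Even taking the inflated .14 at face value, roughly 98 percent of the variance in aggression is left unexplained by the hormone.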

One main reason people fear testosterone and believe that it causes crime, and by extension aggressive behavior, is racial crime disparities. According to the FBI, black Americans commit a disproportionate share of crime despite being 13 percent of the US population. And since it has been reported that blacks have higher levels of testosterone (Ross et al, 1986; Lynn, 1992; Rushton, 1997; Ellis, 2017), people believe that the supposedly higher levels of testosterone that blacks, on average, have circulating in their blood are the ultimate cause of the crime disparities between races in America. See above for why this is not the case.

Blacks, contrary to popular belief, don’t have higher levels of testosterone (Gasper et al, 2006; Rohrmann et al, 2007; Lopez et al, 2013; Richard et al, 2014). Even if they did have higher levels, say the 13 percent difference that is often cited, it would not be the cause of higher rates of crime, nor the cause of higher rates of prostate cancer in blacks compared to whites. What does cause part of the crime differential, in my opinion, is honor culture (Mazur, 2016). The blacks-have-higher-testosterone canard was pushed by Rushton and Lynn to explain both the higher rates of prostate cancer and of crime in black Americans; however, I have shown that high levels of testosterone do not cause prostate cancer (Stattin et al, 2003; Michaud, Billups, and Partin, 2015). Looking to testosterone as a ‘master switch’, as Rushton called it, is the wrong research direction because, clearly, the theories of Lynn, Rushton, and Ellis have been rebutted.

People are scared of testosterone because they do not understand the hormone. Indeed, people complain about lower birth rates and lower sperm counts, yet believe that having high testosterone will make one a high-T savage. This is seen in the misconception that injecting anabolic steroids causes higher levels of aggression. One study, Lundholm et al (2014), looked at the criminal histories of men who self-reported drug and steroid use, and concluded: “We found a strong association between self-reported lifetime AAS use and violent offending in a population-based sample of more than 10,000 men aged 20-47 years. However, the association decreased substantially and lost statistical significance after adjusting for other substance abuse. This supports the notion that AAS use in the general population occurs as a component of polysubstance abuse, but argues against its purported role as a primary risk factor for interpersonal violence. Further, adjusting for potential individual-level confounders initially attenuated the association, but did not contribute to any substantial change after controlling for polysubstance abuse.”

The National Institutes of Health (NIH) writes: “In summary, the extent to which steroid abuse contributes to violence and behavioral disorders is unknown. As with the health complications of steroid abuse, the prevalence of extreme cases of violence and behavioral disorders seems to be low, but it may be underreported or underrecognized.” We don’t know whether steroids cause aggression or whether more aggressive athletes are more likely to use the substance (Freberg, 2009: 424). Clearly, the claims that steroids cause aggressive behavior and crime are overblown, and there is not yet a scientific consensus on the matter. A great documentary on the subject is Bigger, Stronger, Faster, which goes through the myths of testosterone while chronicling the use of illicit drugs in bodybuilding and powerlifting.

People are scared of the hormone testosterone—and by extension anabolic steroids—because they believe the myths of the hulking, high-T aggressive man who will fight at the drop of a hat. However, reality is much more nuanced than this simple view, and psychosocial factors must also be taken into account. Testosterone is not the ‘master switch’ for crime, nor for prostate cancer. This is plainly seen in the diurnal variation of the hormone as well as the peak hours for crime in adolescent and adult populations. The extremely low correlation between aggression and testosterone (.08) shows that aggression is mediated by numerous variables other than testosterone, and that testosterone alone does not cause aggression, and by extension crime.

People fear things they don’t understand, and if people truly understood the hormone, I’m sure these myths pushed by people who are scared of it would no longer persist. Low levels of testosterone are part of the cause of our fertility problems in the West. So does it seem logical to imply that high testosterone is for ‘savages’ when, clearly, high levels of testosterone are needed for spermatogenesis, which, in turn, would mean a higher birth rate? Anyone who believes that testosterone causes aggression and crime and that injecting anabolic steroids causes ‘roid rage’ should do some reading on how the hormone is produced in the body, as well as the literature on anabolic steroids. If one wants birth rates to increase in the West, then one must also want testosterone levels to increase, since the two are intimately linked.

Testosterone does not cause crime and there is no reason to fear the hormone.

Diet and Exercise: Don’t Do It? Part II

2300 words

In part II, we will look at the mental gymnastics of someone who is clueless about the data and will use any contortion possible to deny it. Well, shit doesn’t work like that, JayMan. I will review yet more studies on sitting, walking and dieting in relation to mortality, as well as behavioral therapy (BT) in regards to obesity. JayMan has removed two of my comments, so I assume the discussion is over. Good thing I have a blog so I can respond here; censorship is never cool. JayMan pushes very dangerous things, and they need to be nipped in the bud before someone takes this ‘advice’ who could really benefit from lifestyle alterations. Stop giving nutrition advice without credentials! It’s that simple.

JayMan published a new article on ‘The Five Laws of Behavioral Genetics‘ with this little blip:

Indeed, we see this with health and lifestyle: people who exercise more have fewer/later health problems and live longer, so naturally conventional wisdom interprets this to mean that exercise leads to health and longer life, when in reality healthy people are driven to exercise and have better health due to their genes.

So, in JayMan’s world diet and exercise have no substantial impact on health, quality of life and longevity? Too bad the data says otherwise. Take this example:

Take two twins. Lock both of them in a metabolic chamber. Monitor them over their lives and they do not leave the chamber. They are fed different diets (one has a high-carb diet full of processed foods, the other a healthy diet for whatever activity he does); one exercises vigorously/strength trains (not on the same day though!) while the other does nothing and the twin who exercises and eats well doesn’t sit as often as the twin who eats a garbage diet and doesn’t exercise. What will happen?

JayMan then shows me Bouchard et al (1990), in which a dozen pairs of twins were overfed for three months, with each set of twins showing different gains in weight despite being fed the same number of kcal. He also links to Bouchard et al, 1996 (I can’t find the paper; the link on his site is dead), which shows that the twins returned to their pre-experiment weight almost effortlessly. This, of course, I do not deny.

This actually replicates a study done on prisoners in a Vermont prison (Salans, Horton, and Sims, 1971). “The astonishing overeating paradox” is something that’s well worth a look. Salans et al had prisoners overeat while limiting their physical activity. They started eating 4,000 kcal per day, and by the end of the study they were eating about 10,000 kcal per day. But something weird happened: their metabolisms revved up by 50 percent in an attempt to get rid of the excess weight. After the study, the prisoners effortlessly returned to their pre-experiment weight—just like the twins in Bouchard et al’s studies.
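The CI/CO arithmetic shows why this metabolic adaptation matters so much. Here is a toy sketch; every number in it is invented for illustration and none is taken from the studies above:

```python
# Toy energy-balance model illustrating metabolic adaptation to overfeeding.
# All numbers are illustrative, not taken from Salans et al (1971).

def weight_change_kg(intake_kcal, expenditure_kcal, days, kcal_per_kg=7700):
    """Naive CI/CO estimate: surplus energy stored as tissue at ~7700 kcal/kg."""
    surplus = (intake_kcal - expenditure_kcal) * days
    return surplus / kcal_per_kg

baseline_expenditure = 2500  # hypothetical baseline kcal/day

# Static CI/CO: expenditure never changes despite overfeeding.
naive = weight_change_kg(4000, baseline_expenditure, days=90)

# With the ~50 percent metabolic rev-up reported in the prison study.
adapted = weight_change_kg(4000, baseline_expenditure * 1.5, days=90)

print(round(naive, 1))    # 17.5 kg predicted by static CI/CO
print(round(adapted, 1))  # 2.9 kg once expenditure adapts upward
```

The point is only qualitative: once expenditure rises in response to intake, the ‘calories-in minus calories-out’ surplus mostly evaporates, which is why the subjects drifted back to their starting weight.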

The finding is nothing new, but it’s nice to have replication (on top of the replication it already had)—and yet that’s not what I was talking about. Of course, being sedentary, eating like shit and not exercising will lead to deleterious health outcomes. The fact of the matter is, the twin in my thought experiment who did not exercise, sat around all day and ate whatever would die way sooner, have a lower quality of life, and suffer more disease due to the shitty diet, while his co-twin would suffer less since he ate right, exercised and spent less time sitting.

JayMan says, in regards to studies showing that obese people who do even light physical activity have lower all-cause mortality, that “That’s not what large RCTs show.” I know the study he’s speaking of—the Look AHEAD (Action for Health in Diabetes) study (The Look AHEAD Research Group, 2009). The research group studied the effects of lifestyle interventions in type II diabetics. One group was given intensive diet and exercise information; the other was given only the standard advice. However, the study ended early, at 9.3 years, because there was no difference between the two groups (Pi-Sunyer, 2015). JayMan uses this study as evidence that diet and exercise have no effect on the mortality of type II diabetics; in actuality, the results are much more nuanced.

Annuzzi et al (2014) write in their article The results of Look AHEAD do not row against the implementation of lifestyle changes in patients with type 2 diabetes:

The intervention aimed at weight loss by reducing fat calories, and using meal replacements and, eventually, orlistat, likely underemphasizing dietary composition. There is suggestive evidence, in fact, that qualitative changes in dietary composition aiming at higher consumption of foods rich in fiber and with a high vegetable/animal fat ratio favorably influence CV risk in T2D patients.

In conclusion, the Look AHEAD showed substantial health benefits of lifestyle modifications. Prevention of CV events may need higher attention to dietary composition, contributing to stricter control of CV risk factors. As a better health-related quality of life in people with diabetes is an important driver of our clinical decisions, efforts on early implementation of behavioral changes through a multifactorial approach are strongly justified.

They reduced fat calories and used meal replacements. This is the trial JayMan is basing his assertion on. Type II diabetics need a higher-fat diet and don’t need the carbs, as carbs will spike their insulin. Eating a higher-fat diet will also lower the rate of CVD. This trial wasn’t too rigorous in terms of macronutrient composition, which is one of many reasons why type II diabetics shouldn’t discard dieting and exercise just yet.

Even modest weight loss of 5 to 10 percent is associated with significant improvements in cardiovascular disease (CVD) after one year, with larger weight loss showing better improvement (Wing et al, 2011). (Also read the article The Spinning of Look AHEAD.)

Telling diabetics not to eat right and exercise is, clearly, a recipe for disaster. This canard that dieting/exercise doesn’t work to decrease all-cause mortality—especially for diabetics and others who need the lifestyle interventions—is dangerous.

Intentional weight loss needs to be separated from unintentional weight loss so as to better study the effects of both variables. Kritchevsky et al (2015) meta-analyzed 15 RCTs that “reported mortality data either as an endpoint or as an adverse event, including study designs where participants were randomized to weight loss or non-weight loss, or weight loss plus a co-intervention (e.g. weight loss plus exercise) or the weight stable co-intervention (i.e. exercise alone).” They conclude that the risk of all-cause mortality in obese people who intentionally lose weight is 15 percent lower than in people not assigned to lose weight.

This study replicates a meta-analysis by Harrington, Gibson, and Cottrell (2009) on the benefits of weight loss for all-cause mortality. They noted that in unhealthy adults, weight loss accounted for a 13 percent decrease in all-cause mortality, while in the obese it accounted for a 16 percent decrease. Of course, since the weights were self-reported, and there are problems with self-reports of weight (Mann et al, 2007), a skeptic can rightfully bring that up. However, it would not undermine the point, since it would imply that the subjects weighed the same or more yet still showed a decrease in all-cause mortality.

Even light physical activity is associated with a decrease in all-cause mortality. People who go from no activity to 2.5 hours a week of moderate-intensity activity show a 19 percent decrease in all-cause mortality, while people who do 7 hours a week of moderate activity show a 24 percent decrease (Woodcock et al, 2011). Even something as simple as walking is associated with a lower incidence of all-cause mortality, with the largest effect seen in individuals who went from no activity to light walking. Walking is inversely associated with disease incidence (Hamer and Chida, 2008), though their analysis indicated publication bias, so further study is needed. Nevertheless, the results line up with what is already known—that low-to-moderate exercise is associated with lower all-cause mortality (as seen in Woodcock et al, 2011).
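For readers who want the percent-decrease figures above in relative-risk form, the conversion is simple arithmetic (this uses only the numbers already cited; it is not a re-analysis of the studies):

```python
# Converting "X percent decrease in all-cause mortality" into relative risks.
# Simple arithmetic on the figures cited above, nothing more.

def relative_risk(percent_decrease):
    """A p percent decrease in mortality corresponds to a relative risk of 1 - p/100."""
    return round(1 - percent_decrease / 100, 2)

print(relative_risk(19))  # 0.81 -> 2.5 h/week of moderate activity vs none (Woodcock et al, 2011)
print(relative_risk(24))  # 0.76 -> 7 h/week of moderate activity (Woodcock et al, 2011)
print(relative_risk(15))  # 0.85 -> intentional weight loss in the obese (Kritchevsky et al, 2015)
```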

What is needed to change habits/behavior is behavioral therapy (BT) (Jacob and Isaac, 2012; Butryn, Webb, and Wadden, 2012; Wilfley, Kolko, and Kass, 2012). BT can also be used to increase adherence to exercise (Grave et al, 2011). BT has been shown to improve the behaviors of obese people, and even if no weight loss, or only 5-10 percent weight loss, is seen (from Wing and Hill, 2001), better habits can be developed; along with ‘training’ hunger hormones through lifestyle changes such as fasting, people can achieve better health and longevity—despite what naysayers may say. I am aware that outside of clinics/facilities BT does not have a good track record (Foster, Makris, and Bailer, 2005). However, BT is the most studied and effective intervention for managing obesity at present (Levy et al, 2007). This is why people need to join gyms and exercise around people—they will get encouragement and can talk to others about their difficulties. Though, people like JayMan who have no personal experience doing this would not understand.

In regards to dieting, the effect of macronutrient composition on blood markers is well known. Type II diabetics need to eat a certain diet to manage their insulin/blood sugar, and doing the opposite of those recommendations will lead to disaster.

Low-carb ketogenic diets are best for type II diabetics. The benefits of having ketones circulating in the blood include (but are not limited to): weight loss, improved HbA1c levels, a reduced rate of kidney disease/damage, cardiac benefits, reversal of non-alcoholic fatty liver disease, reduced insulin levels, and improved blood cholesterol profiles (Westman et al, 2008; Azar, Beydoun, and Albadri, 2016; Noakes and Windt, 2016; Saslow et al, 2017). These benefits, of course, carry over to the general non-diabetic population as well.

Of course, JayMan has reservations about these studies, wanting to see follow-ups—but the fact of the matter is this: dieting and eating right are associated with good blood markers, exactly what type II diabetics want. In regards to food cravings, read this relevant article by Dr. Jason Fung: Food Cravings. Contrary to JayMan’s beliefs, it’s 100 percent possible to manage food cravings and hunger. The hormone ghrelin mediates hunger, and ghrelin varies over the course of the day (Natalucci et al, 2005), so if you’re feeling hungry, wait a bit and it will pass. This lines up with most people’s personal experience of hunger. One would have to understand how the brain regulates appetite to know this, though.

JayMan also cannot answer simple yes or no questions such as: are you saying that people should not watch what they eat and should not make an effort to eat higher-quality foods? I don’t know why he is so anti-physical-activity. As if it’s so bad to get up, stop sitting so much and do some exercise! People with more muscle mass and higher strength levels live longer (Ruiz et al, 2008). This anti-physical-activity crusade makes absolutely no sense given the data. If I were to stop eating well and strength training, along with becoming a couch potato, would my chance of dying early from a slew of maladies increase? Anyone who uses basic logic would infer that the answer is yes.

I also need to address JayMan’s last comment to me which he censored:

No intervention shows that lifestyle changes extend life – or even improve health. Even if they did, their generalizability would depend on their actual prescription. In any case, the point is moot, since they don’t even show such improvements in the first place.

You’re only saying that because you’re literally hand waving away data. It’s clear that going from no exercise to some exercise will decrease all-cause mortality. I’m sorry that you have a problem reading and understanding things that you don’t agree with, but this is reality. You don’t get to construct your own reality using cherry-picked studies that don’t mean what you think they mean (like Look AHEAD; Dr. Sharma states that we may never know if weight reduction can save lives in type II diabetics, however the three studies on low-carb diets cited above lend credence to the idea that we can).

Please see my previously linked Obesity Facts page for more. Once you’ve read that, get back to me. Until then, I’m putting the brakes on this discussion.

Of course, you’re putting the brakes on this discussion, you have substantial replies other than your one-liners. You need to censor people when you have no substantial response, that’s not intellectually honest.

All in all, JayMan is giving very dangerous ‘advice’ when the literature says otherwise in regards to lifestyle interventions and all-cause mortality. You can talk about genes for this or that all you want; you’re just appealing to genes. Even light physical exercise decreases mortality risk, and that’s not too hard for most people.

I know JayMan talks about genes for this and that, yet he does not understand that obesogenic environments drive this epidemic (Lake and Townshend, 2006; Powell, Spears, and Rebori, 2011; Fisberg et al, 2016). He doesn’t seem to know about the food reward hypothesis of obesity either. Think about obesogenic environments and food reward and how our brains change when we eat sugar and then things will begin to become clearer.

JayMan is giving out deadly ‘advice’, again, without the correct credentials. Clearly, as seen in both of my responses to him, taking that ‘advice’ will lead to lower quality of life and lower life expectancy. But I’m sure my readers are smart enough to not listen to such ‘advice’.

(Note: Diet and exercise under Doctor’s supervision only)

Human Physiological Adaptations to Climate

1750 words

Humans are adapted to numerous ecosystems on earth. This is only possible because our physiological systems interact with the environment in a homeodynamic way. This allowed us to spread across the globe, far from our ancestral home of Africa, and thus certain adaptations evolved in those populations—driven by our intelligent physiology. I will touch on human cold and heat adaptations, how physiology adapts to the two climates, and what this means for the populations that make up Mankind.

Physiological adaptations to Arctic climates

The human body is one of the most amazing and complex biological systems on earth. The human body lives and dies on its physiology and how it can adapt to novel environments. When Man first trekked out of Africa into novel environments, our physiology adapted so we could survive under new conditions. Over time, our phenotypes adjusted to the new climates, and humans began looking different from one another due to the climatic differences between their environments.

There is a large body of work on human cold adaptation. Thermal balance in humans is maintained by “vasodilation/vasoconstriction of the skin and peripheral tissues within the so-called thermo-neutral zone” (Daanen and Lichtenbelt, 2016). Two further adaptations occur in the cold, shivering thermogenesis (ST) and non-shivering thermogenesis (NST), and one in the heat (the evaporation of sweat). Humans are not Arctic animals by nature, so venturing into novel environments incurred new physiological adaptations to better deal with the cold.

Heat is generated by the body in cold climates by shivering (Tikuisis, Bell, and Jacobs, 1991; Daanen and Lichtenbelt, 2016). Therefore, people in colder climates will have higher metabolisms than people in tropical environments, generating more body heat for vital functioning. People living in Arctic environments have fewer sweat glands than people who live in the tropics. Sweating removes heat from the body, so having more sweat glands in colder climates would not be conducive to survival.

People who evolved in Arctic climates would also be shorter and have wider pelves than people who evolved in the tropics. This is seen in Neanderthals and is an example of cold-adapted body proportions. Cold adaptations also show up in the Greenlandic Inuit, due to introgression from extinct hominins like the Denisovans (Fumagalli et al, 2015).

We can see natural selection at work in the Inuit, due to adaptation to Arctic climates (Galloway, Young, and Bjerregaard, 2012; Cardona et al, 2014; Ford, McDowell, and Pierce, 2015; NIH, 2015; Harper, 2015; Tishkoff, 2015). Climate change is troubling to some researchers, with many suggesting that global warming will have negative effects on the health and food security of the Inuit (WHO, 2003; Furgal and Seguin, 2006; Wesche, 2010; Ford, 2009, 2012; Ford et al, 2014, 2016; McClymont and Myers, 2012; Petrasek, 2014; Petrasek et al, 2015; Rosol, Powell-Hellyer, and Chan, 2016). The Inuit are the perfect people to look to to see how humans adapt to novel climates—especially colder ones. They have higher BMIs, which is better for heat retention, along with larger brains, wider pelves and shorter statures.

Metabolic adaptations also track BMI, which in turn reflects diet and body composition. Daanen and Lichtenbelt (2016) write:

Bakker et al.,48 however, showed that Asians living in Europe had lower BAT prevalence and exhibited a poorer shivering and non-shivering response to cold than Caucasians of similar age and BMI. On the other hand, subjects living in polar regions have higher BMI, and likely more white fat for body energy reserves and insulation.49 This cannot be explained by less exercise,50 but by body composition51 and food intake.49

Basal metabolic rate (BMR) also varies by race. Resting metabolic rate is 5 percent higher in white women than in black women (Sharp et al, 2002), though low cardiovascular fitness explains 25 percent of the variance in RMR differences between black and white women (Shook et al, 2014). People in Arctic regions have a 3-19 percent higher BMR than predicted on the basis of the polar climates they live in (Daanen and Lichtenbelt, 2016). Further, whites had a higher BMR than Asians living in Europe, and Nigerian men were seen to have a lower BMR than African-American men (Sharp et al, 2002). So, whites in circumpolar locales have a higher BMR than peoples who live closer to the equator. This has to do with physiologic and metabolic adaptations.
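For intuition about what these BMR figures mean in kcal/day, here is the Mifflin-St Jeor prediction equation sketched in Python. This is a standard estimation formula, not the method used by the studies cited above, and the 10 percent adjustment at the end is only an illustration of the kind of population-level elevation reported for Arctic groups:

```python
# Mifflin-St Jeor equation for resting/basal metabolic rate (kcal/day).
# A standard prediction formula, not the measurement method of the cited studies.

def mifflin_st_jeor(weight_kg, height_cm, age_yr, male=True):
    base = 10 * weight_kg + 6.25 * height_cm - 5 * age_yr
    return base + 5 if male else base - 161

bmr = mifflin_st_jeor(70, 175, 30)
print(bmr)         # 1648.75 kcal/day for a hypothetical 70 kg, 175 cm, 30-year-old man
print(bmr * 1.10)  # with an illustrative 10% elevation, as reported for some Arctic groups
```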

Blacks also show a slower and weaker cold-induced vasodilation (CIVD) response than whites. A quicker CIVD response in polar climates would be a lifesaver.

However, our physiologic mechanisms alone aren’t enough to weather the cold. Our ingenuity in making clothes and fire and in finding and hunting food is arguably more important than our body’s physiologic ability to adapt to its environment. Our behavioral plasticity (the ability to change our behavior to better survive in an environment) was another major factor in our adaptation to the cold. Cultural changes would then lead to genetic changes, and those cultural changes—which were due to the cold climates—would lead to yet more genetic change, an indirect effect of the climate. The same, obviously, holds for everywhere in the world that Man finds himself.

Physiologic changes to tropical climates

Physiologic changes in tropical climates are very important to us as humans. We needed to be endurance runners millions of years ago, and so our bodies became adapted for that way of life through numerous musculoskeletal and physiologic changes (Lieberman, 2015). One of the most important is sweating.

Sweating is how our body cools itself and maintains its body temperature. When the skin becomes too hot, your brain, through the hypothalamus, reacts by releasing sweat through tens of millions of eccrine glands. As I have covered in my article on the evolution of human skin variation, our loss of fur (Harris, 2009) in our evolutionary history made it possible for sweat to eventually cool our body. Improved sweating ability then led to higher melanin content and selection against fur. Another hypothesis is that when we became bipedal, our bodies were exposed to less solar radiation, selecting against the need for fur. Yet another hypothesis is that trekking/endurance running led to selection for furlessness, selecting for sweating and more eccrine glands (Lieberman, 2015).

Anatomic changes include long and thin bodies with longer limbs, as heat dissipation is more efficient that way. People who live in tropical environments have longer limbs than people who live in polar environments. These tall and slender bodies are useful in that environment; people with long, slender bodies are disadvantaged in the cold. Further, longer, slender bodies are better for endurance running and sprinting. Tropical peoples also have narrower hips, which helps with heat dissipation and running; since narrower hips constrain the birth canal, they would also have smaller heads than people in more northerly climes. Most adaptations and traits were once useful in whichever environment an organism evolved in tens of thousands of years ago, and certain adaptations from our evolutionary past are still evident today.

Since tropical people have lower BMRs than people at more northerly climes, this could also explain why, for instance, black American women have higher rates of obesity than women of other races. They have a lower BMR, are more sedentary and eat lower-quality food, so food insecurity would have more of an effect on that phenotype. Africans wouldn’t have fast metabolisms, since a faster metabolism would generate more heat.

Physiologic changes due to altitude

The last adaptation I will talk about is how our bodies can adapt to high altitudes and how that is beneficial. Many human populations have adapted to the chronic hypoxia of high altitudes (Bigham and Lee, 2014), which, of course, has a genetic basis. Adaptation to high altitudes also occurred through the introgression of extinct hominin genes into modern humans.

Furthermore, the peoples of the Andean mountains, the Kenyan highlands and the Tibetan plateau show that three populations adapted to the same stress in different ways. Andeans, for instance, breathe the same way as people at lower altitudes, but their red blood cells carry more oxygen per cell, which protects them from the effects of hypoxia. They also have higher amounts of hemoglobin in their blood compared to people who live at sea level, which also helps counterbalance hypoxia.

Tibetans, on the other hand, have respiratory rather than hematological adaptations. Tibetans also have another adaptation which expands their blood vessels, allowing the body to deliver oxygen more efficiently to its different parts. Further, Ethiopians don’t have higher hemoglobin counts than people who live at sea level, so “Right now we have no clue how they do it [live at high altitudes without hematologic differences compared to people who live at sea level]”.

Though Kenyans do have genetic adaptations to live in the highlands (Scheinfeldt et al, 2012). These genetic adaptations have arisen independently in Kenyan highlanders. The selective force, of course, is hypoxia—the same selective force that caused these physiologic changes in Andeans and Tibetans.


The human body is amazing. It can adapt both physiologically and physically to the environment and in turn heighten prospects for survival in most any environment on earth. These physiologic changes, of course, have followed us into the modern day and have health implications for the populations that possess these changes. Inuits, for instance, are cold-adapted while the climate is changing (which it constantly does). So, over time, when the ice caps do melt the Arctic peoples will be facing a crisis since they are adapted to a certain climate and diet.

People in colder climates need shorter bodies, higher body fat, shorter limbs, larger brains, etc., to better survive the cold. A whole slew of physiologic processes aids peoples’ survival in the Arctic, but our ability to make clothes, houses and fire, in conjunction with our physiological adaptability, is why we have survived in colder climates. Tropical people need long, slender bodies to better dissipate heat, sweat and run. People who evolved at higher altitudes have hematologic and respiratory adaptations to better deal with the hypoxia of living at higher elevations.

These adaptations have affected us physiologically and genetically, which leads to changes in our phenotypes and is, therefore, the cause of how and why we look different today. Human biological diversity is grand, and there is a wide variety of adaptations to differing climates. The study of these differences is what makes the study of Man and our genotypic/phenotypic diversity one of the most interesting sciences we have today, in my opinion. We are learning what shaped each population through its evolutionary history and how and why certain physical and physiologic adaptations occurred.

Diet and Exercise: Don’t Do It?

1800 words

On Twitter, JayMan linked to a video about a time-traveling dietitian who travels back to the 70s to give nutritional advice to a couple. He kept going back on what he said regarding eggs and cholesterol, the Paleo diet, etc. Then at the end of the video, the ‘time-traveling dietitian’ says: “It turns out it’s genetic. It doesn’t matter whether you exercise or what you eat.”

I then asked JayMan if he was advising people not to diet or exercise—and if so, what credentials he has to give such advice. “Appeal to authority!” So if some random guy gave me legal advice and I asked for his credentials, is that an appeal to authority? Similarly, if someone tried to give me medical advice, would asking where he got his medical license be an appeal to authority? The thing is, people have specialties for a reason. I wouldn’t take diet and exercise advice from some anonymous blogger with no credentials, just as I wouldn’t take legal advice from a biologist. Anyway, I’ll review some studies on exercise, dieting and sitting in regards to all-cause mortality.

Sitting and all-cause mortality

Listening to such advice—like not dieting or exercising—will lower your quality of life and life expectancy. The longer you sit, the more likely you are to have rolled shoulders, among other postural imbalances. More to the point, sitting time itself is related to all-cause mortality (Chau et al, 2013; Biddle et al, 2016). So listening to this shitty advice to ‘not exercise’ will leave an individual with a lower QoL and lower life expectancy.

Sitting is associated with all-cause mortality because if, say, one sits at a desk for 8 hours per day and then goes home and sits for the rest of the day, circulation will not get to the lower extremities. Even mild-to-moderate exercise attenuates the risk (Chau et al, 2013). Further, reducing sedentary behavior (and, of course, watching less TV) can possibly raise life expectancy in the US (Katzmarzyk and Lee, 2012). They found that cutting daily sitting time to less than three hours could increase life expectancy by two years (and, of course, quality of life). There is a large body of research on sitting and all-cause mortality (Stamatakis et al, 2013). It’s also worth noting that too much sitting decreases life expectancy—even with exercise. So JayMan’s (unprofessional) advice will lead to a shitty quality of life and lower life expectancy.

Dieting, and all-cause mortality

This is a bit trickier. I know that dieting for weight loss doesn’t work (Aamodt, 2016; Fung, 2016)—that is, traditional dieting (high-carb diets). The traditional advice is to eat high-carb, low-fat and moderate protein—a legacy of the 70s, when fat was demonized and carbs were championed. This, clearly, is wrong, and it has contributed to the obesity epidemic, along with our evolutionarily novel environments: we have constructed environments for ourselves that are novel, and we’ve not had enough time to adapt to what we eat and how we live in our modernized world.

Indeed, hunter-gatherers don’t have the disease rates that we have—having low to no cases of our diseases of civilization (see Taubes, 2007 for a review). Why is this? It’s because they are physically active and do not eat the processed carbohydrates that we in first-world societies do.

In regards to exercise and all-cause mortality, people who exercise more often have a lower chance of dying from all causes than more sedentary people (Oja et al, 2016; O’Donovan et al, 2017). So it’s becoming clear that JayMan is just talking out of his ass here. I’d love to hear any MD say to a patient, “Don’t diet, don’t exercise. Don’t eat well. It doesn’t work.” Because that MD would be a shill for Big Food.

Further, when I say ‘diet’, I don’t mean eating below the BMR. Your ‘diet’ is what you eat, and by changing your diet, you’re changing to healthier habits and eating higher-quality foods. People like JayMan make it seem like you should eat whatever you want and not exercise. Following this advice, however, will lead to deleterious consequences.

It DOES matter what you put into your body; it DOES matter whether you exercise. If you do not, you will have a lower life expectancy than someone who exercises and eats well.

On a side note, I know that traditional dieting does not work for weight loss. Dr. Jason Fung, a world-renowned obesity, diabetes, and intermittent-fasting expert, helps people lose their weight and keep it off. He actually understands what causes obesity: insulin. Higher insulin levels are also tied to the obesity pathway through a lack of glucagon receptors (Lee et al, 2014). Why is this important? First, we have to understand what insulin does in the body; once you understand that, you will see why JayMan is wrong.

Insulin inhibits the breakdown of fat in adipose tissue by inhibiting the lipase that hydrolyzes (chemically breaks down through a reaction with water) the fat out of the cell. Since insulin facilitates the entry of glucose into the cell, that glucose is synthesized into glycerol, which, along with the fatty acids in the liver, is synthesized into triglycerides. Through these mechanisms, insulin is directly involved in shuttling more fat into the adipocyte. Since insulin has this effect on fat metabolism, it has a fat-sparing effect: it drives most cells to prefer carbohydrates for energy. Putting this all together, insulin indirectly stimulates the accumulation of fat in adipose tissue.

Does this physiologic process sound like you can ‘eat whatever you want’? Or does it tell you that you should lower your carb intake so as not to induce the blood glucose spikes that lead to an increase in insulin? Over time, these constant glucose/insulin spikes lead to insulin resistance, which drives the body to produce even more insulin, resulting in a vicious cycle.

So, it seems that in order to have a higher QoL and life expectancy, one must consume processed carbs very sparingly.

These behaviors of overconsuming processed carbohydrates come down to the environments we have constructed for ourselves—obesogenic environments. An obesogenic environment “refers to an environment that helps, or contributes to, obesity” (Powell, Spears, and Rebori, 2010).

Our current obesogenic environment also contributes to dementia and cognitive impairment. What makes environments ‘obesogenic’ “is the increased presence of food cues and the increased consumption of a diet which compromises our ability to resist those cues” (Martin and Davidson, 2015). So if our obesogenic environments change, then we should see a reduction in the number of overweight/obese people.

Diet is very important for Type II diabetics. For instance, TII diabetics can manage, and even reverse, their disease with a low-carb ketogenic diet (LCKD), lowering their HbA1c, improving their lipid profile, gaining cardiac benefits, losing weight, etc. (Westman et al, 2008; Azar, Beydoun, and Albadri, 2016; Noakes and Windt, 2016; Saslow et al, 2017). I wonder if JayMan would tell TII diabetics not to diet or exercise…. That’d be a recipe for disaster. TII diabetics need to keep their insulin down, and eating an LCKD will do that; taking JayMan’s ‘advice’ not to diet or exercise will quickly lead to more weight gain, an exacerbation of problems and, eventually, death due to complications from not correctly managing the disease. JayMan needs to learn the literature and understand these papers to truly understand why he is wrong.

Exercise and all-cause mortality

The relationship between vigorous exercise and all-cause mortality is well studied. Gebel et al (2015) conclude that “Independent of the total amount of physical activity, engaging in some vigorous activity was protective against all-cause mortality. This finding applied to both sexes, all age categories, people with different weight status, and people with or without cardiometabolic disease.” Reduced exercise capacity also causes higher all-cause mortality rates (McAuley et al, 2016).

Unfit thin people had a mortality rate twice that of normal-weight fit people. Further, overweight and obese fit people had mortality rates similar to normal-weight fit people (Barry et al, 2013). Clearly, physical activity needs to be heightened if one wants to live a longer, higher-quality life. This runs completely opposite to what JayMan is implying.

Exercise into old age is also related to higher cognition and a lower mortality rate when compared to individuals who do not exercise. Exercise also protects against cognitive degeneration in the elderly (Bherer, Erickson, and Liu-Ambrose, 2013; Carvalho et al, 2014; Paillard, 2015). If you want to keep your cognition into old age and live longer, it seems your best bet is to exercise from a young age in order to stave off cognitive degeneration.

Strength and mortality

Finally, one last thing I need to touch on is strength and mortality. Strength is, obviously, increased through exercise. Stronger men live longer, and are protected from more diseases such as cancer, than weaker men, even when controlling for cardiorespiratory fitness and other confounds (Ruiz et al, 2008).

As I have covered in the past, differences in grip strength account for differences in mortality in men—which also has a racial component (Araujo et al, 2010; Volaklis, Halle, and Meisinger, 2015). The stronger you are, the lower your chance of acquiring cancer and other maladies. Does the advice of ‘don’t exercise’ sound good now? It doesn’t, and I don’t know why anyone would seriously imply that dieting and exercise don’t work.


Dieting (meaning eating a higher quality diet, not attempting to lose weight) and exercise do work to increase life expectancy. The advice of “don’t do anything, it’s genetic” makes no sense at all after one sees the amount of literature there is on eating mindfully and exercising. I know that exercise does not induce weight loss, but it does contribute to living longer and staving off disease.

People should stay in their lane and leave things to the professionals—the people who are actually working with individuals every day and know and understand what they are going through. The canard of ‘eat whatever, don’t exercise, it’s genetic’ is very dangerous, especially today when obesity rates are skyrocketing. JayMan needs to learn the literature and how and why exercise and eating right leads to a higher quality of life and life expectancy. Thankfully, people like JayMan who say not to diet or exercise have no pull in the real world.

Clearly, to live longer, eat right, don’t sit for too long (because even if you exercise, sitting too long will lower your life expectancy) and exercise into old age and your chance of acquiring a whole slew of deleterious diseases will be lessened.

Did we come from Australasia?

by Phil78 3179 words

In a recent response to the MUC7 archaic admixture finding, it has been argued that if this introgression entered the sub-Saharan genome at 145 kya, then by OOA standards every population should have it.

Not necessarily, as the study noted how its findings conform to recent results that actually support African origins.

Our finding agrees with recent reports of such an introgression in sub Saharan African populations (Hammer et al. 2011; Hsieh et al. 2016), as well as the unexpectedly old human remains (Hublin et al. 2017) and lineages (Schlebusch et al. 2017).

In other words, what I’m thinking is that this connects somewhere with the Basal human component model for West Africans and some LSA finds, though that is for another day.

Now, as for the alternative model I’ve seen advertised by the site RedIce, we come to a recent newcomer, Bruce Fenton.

Now, before I begin my criticism of his premise of a new “paradigm”, I’d like to say that, based on the reviews I’ve seen on Amazon, he certainly seems to have talent as a writer. However, after reading this article and other summaries of his model, I must say I’m not tempted to buy his book based on his confidence that his basic model “fills in holes” in OOA and treats it as debunked, especially when his sources can all more or less be accommodated within OOA 2.

First, let us go into how he rules out both Africa and Europe due to recent DNA evidence from Neanderthals in Spain.

Research by the geneticists Benoit Nabholz, Sylvain GlĂ©min, and Nicolas Galtier has revealed significant problems with scientific studies that rely heavily on genetic material alone, divorced from the physical examination of fossils (especially in the accuracy of dating by molecular clocks).[i] We are however fortunate to have a 2013 research project from Indiana University, headed by well-respected evolutionary biologist Aida GĂłmez-Robles at our disposal: a comparative analysis of European hominin fossil teeth and jawbones. The Indiana University project concluded that all the fossil hominins in Europe were either Neanderthals or directly ancestral to Neanderthals – not ancestors of Homo sapiens. We must understand that while respective groups in Africa match European hominin populations, this revelation discounted all known African hominins as being ancestors of modern humans. The morphological research also provided further shock – the divergence between Homo sapiens and Neanderthals had apparently begun as early as one million years before present.

Odd how he made that leap when the researcher he cites actually says otherwise on Africa as a candidate.

From the new study’s results, GĂłmez-Robles says that “we think that candidates have to be looked for in Africa.” At present, million-year-old fossils attributed to the prehistoric humans H. rhodesiensis and H. erectus look promising.

Fenton then further mentions the Denisovan divergence, dated from DNA at about 800k years, and places the common ancestor of all three lineages between 700k and 900k years.

His Response? This finding from China.

The first possible answer to this ‘where to look’ question came in July 2016 with scientist Professor Zhao Lingxiain, whose research group announced they had identified modern human fossil remains at the Bijie archaeological site ranging up to 180,000 years old.[i] Not only were they digging up fragments of modern humans, but also evidence of other mysterious hominin forms. The Chinese paleoanthropologists suspected that some of the recovered fossils might even be from the mysterious Denisovans, previously identified in Siberia.[i] Could modern humans have first emerged in East Asia? It has certainly begun to look like this might be the case. My independent investigative research carried out over the last several years, however, disagrees: my work places the first Homo sapiens in Australasia.

For context on how this can still conform to OOA: the actual range was 112k to 178k, and while this muddies the typical 50k to 80k migration, it can still fit with the 90k to 130k migration into the Levant that was presumed to have been entirely wiped out.

Back in 1982, two of the most renowned evolutionary scientists of the modern age, Professor Alan Wilson and his understudy, Rebecca Cann, discovered compelling evidence for an Australasian genesis for modern humans. These controversial findings never emerged in any of their academic papers; in fact, they only appear in a short transcript included in a book published in the same year by two British research scientists, The Monkey Puzzle: A Family Tree. Silence does not change facts, and the fact remains that there is compelling DNA evidence pointing towards Australasia as the first home of Homo sapiens. Indeed, so much data exists that it eventually led to my controversial new book, The Forgotten Exodus: The Into Africa Theory of Human Evolution. My research colleagues and myself have uncovered overwhelming evidence that places the first modern humans in Australasia, and with them several other advanced hominin forms.

There might be some temptation to dismiss this matter out of hand, as it can be difficult accepting that leading academics have got it so wrong. It is, however, important to understand that in every case the opposing arguments against the current consensus position are based on, or supported by, peer-reviewed studies or statements given by consensus academics. Could it be that the year 2016 will one day be known as the year that the Out of Africa paradigm died?

If 2016 becomes associated with the end of one scientific paradigm, then 2017 may become related to the emergence of a new model for human origins, one that I am proposing and have termed ‘Into Africa’. My Into Africa theory is closely related to the ‘Out of Australia’ theory formulated by two of my Australian collaborators, Steven and Evan Strong, but goes significantly further down the rabbit hole of our evolutionary story.

I wish he had supported this unreplicated genetic study (as far as I know) with actual archaeological continuity in Australasia, because so far the pre-sapiens people there are generally Erectus-like, his own sources on the matter supporting that view.

He summarizes both Multiregional and OOA theory (single recent origin), then proceeds to his own.

[UPDATE– Something I pondered was exactly what pattern of migration Cann’s data produced. Well, two articles by Steve Strong, who I believe is an associate of Fenton, show that my suspicions were correct.

The pattern found was Australoids, Mongoloids, Caucasians, then Negroids/SSA: the opposite of Fenton’s framework. I figured that, regardless of where Australians fit, the affinity of groups wouldn’t change. Strong has another article in which he uses a paper linking origins to Australia, which was covered on this blog here, as well as covering Denisovans, which, as I showed in this post, fit fine in OOA 2 aside from some complications in mapping precisely the nature of smaller migrations into SE Asia.

Regarding Cann’s findings as a whole, the study’s sample size was one among many that were small and covered a weak range of the native populations in general, as discussed and somewhat ameliorated here.

With that realized, study after study after study places them in a 50k-55k time frame, more or less consistent with the archaeological dates, whether LM3 (Mungo Man) is 40k or 60k. It must also be kept in mind that Cann’s findings existed prior to the knowledge of Denisovan admixture, which could have skewed divergence dates, as explained by Dienekes. This gives good reason to regard Cann’s findings as erroneous. As for his citing of Vanderburg, it says much about his expertise in this sort of work if “unique haplotypes” aren’t understood to be a natural result of human differentiation.

Regarding the archaeology from both articles, Strong makes the point of even earlier findings not popularly reported in Australia, ranging from 60-135k for fossils, and older for tools and scorching. Not only are these younger than the currently oldest Sapiens in Africa, they are also within the time frame of a currently known exodus into SE Asia discussed in the post, even if they were legit, as I’ll detail.

References to certain sites with >100k estimates have been shown to be much more recent, the original dates being confounded by less accurate techniques. The same could apply to the cremated bones listed as well. This leaves the mysterious “Lake Eyre Skullcap” described by Steve Webb which, as far as I can tell, has been only scarcely covered. However, only in that source is it reported as that old; both newspapers and scientific newsletters at the time reported it as 60-80k years old using fluorine dating, referring specifically to megafauna, believed to have existed 30k-40k years ago, that it may have coexisted with.

Webb wrongly compares the fluorine dates to the values of the Mungo remains, when this type of dating works best for relative ages of specimens from the same site or comparable conditions, of similar density (he describes them as more robust than the Mungo remains) and similar size (he uses large and small animals, but logically this would also apply to mere fragments versus more whole remains); for humans in particular, ribs or cortical bone layers should be compared.

But an even odder argument of his is that the earliest tools in Australia being less advanced than other tools of the same time frame means people sailed from Australia. What this more likely means is that they were “simplified” based on lifestyle, as covered in a previous blog post on expertise, brain size, and tool complexity.]

In my model, I offer compelling evidence for three key migrations of Homo sapiens heading out of Australasia. The first migrations began around 200,000 years ago, during a period of intense climatic problems and low population numbers, with a small group making their way to East Africa.[i] The remains of some of these first Africans have been discovered close to one key entry point in the east of the continent (400km), known as the Bab-el-Mandeb straights.[i]

I then identify a second migration event 74,000 years ago, following the eruption of the Lake Toba super volcano.[i] Small groups of survivors to the north of Lake Toba, finding themselves unable to move south to safety, were then forced to head west to escape the devastating nuclear winter and toxic clouds that followed the disaster. The lucky few that could move fast enough eventually made their way into Africa and found safety in the south of the continent. I suggest that some of these few moved along the coasts of Asia, and others sailed the open ocean to Madagascar and hit the coast of South Africa – I associate these refugees with cave sites including Borders Cave, Klasies River Caves and the Blombos Cave.[i]

The problem with this is the previously mentioned finds in Morocco, which make Sapiens much older in Africa and further west. The climate conditions, by the way, based on his link, provide no reason for the origin to be centered on Australasia, as they were described as affecting Africa’s interior.

Second, the South African caves he describes contain specimens, likely to have contributed to modern South Africans, that show deeper genetic roots than his suggested divergence would allow.

But the most glaring problem is that none of his sources show Sapiens skeletons or activity prior to those in Africa, and Indonesia clearly does not have a confounding enough preservation problem, given its Erectus sites.

The third migration event identified in my research is arguably of greatest interest because it involved the direct ancestors of all non-African people alive today. As the global environment recovered from the Lake Toba eruption 60,000 years ago, a trickle of modern humans (calculated to be just under 200 individuals) moved out of Australasia into Southeast Asia, slowly colonising the Eurasian continent.[i] These adventurous men and women were the forebears of every non-African and non-Australian person living on Earth today. This Australasian colonisation of the world is very well supported by the study of both mitochondrial and Y-chromosomal haplogroups, and given further credence by the location and dating of several fossils.

This, oddly enough, goes against the “180k” teeth of a modern human in China, which are not accounted for in his sequence of African-Eurasian dispersal from Australasia.

He also goes against an earlier point he made about “relying on genetic material”, as he himself has yet to provide evidence of H. sapiens being present in the area.

The model I offer represents a radical revision to the current evolutionary narrative, and is perhaps revolutionary. It will not be easy for academics to accept such bold claims from someone whom is neither a paleoanthropologist or an evolutionary biologist. Why, then, should one take this work seriously?

The Into Africa theory is firmly based on real-world evidence, data that anyone can freely access and examine for themselves. My argument incorporates a great wealth of peer reviewed academic papers, well accepted genetic studies, and opinions offered by the most respected scientific researchers. Indeed, rather ironically, many of my key sources derive from scientists that stand opposed to this model (being vocal supporters of the Out of Africa theories).

Well, the irony doesn’t come off strongly when you don’t argue in this article why the findings contradict their views, nor have the sources you’ve provided so far firmly grounded your theory by placing human origins in Australasia, the two that do being an unreplicated study and a volcanic eruption in a vicinity whose early hominids show little fossil continuity with modern humans.

Recent scientific studies have begun to change the landscape of paleoanthropological research. Examination of the recent conclusions associated with the analysis of Homo erectus skulls in the Georgian Republic confirms that several species of hominins in Africa are in fact nothing more than expected variance within the greater H. erectus population.[i]

That source talks about the origin of the Flores hobbits, not the Georgian Erectus or African hominid classification.

Elsewhere in Southeast Asia, there is growing suspicion among scientists that Homo floresiensis evolved from a lineage of hominins that lived much earlier than the immediate ancestors of Homo sapiens.[i] Detailed analysis of Neanderthal and Denisovan ancestry convincingly places their founder populations in Southeast Asia and Australasia. There seems little about the currently accepted academic narrative that has not yet come under fire.

He in turn uses a source that supports his later claim of early humans (Homo) in India by 3 million years ago (actually 2.6 million based on the source; I believe I’m seeing a trend here), though the claim he refers to shows continuity with ancestral populations in Africa and has hardly anything to do with OOA in its current form, hence why there was “no fire”.

Fenton, furthermore, provides no evidence for his claims of a Denisovan-Neanderthal origin in Australasia.

 As of 2016, we have finds that place early humans in India 3 million years ago (Masol), and Homo erectus populations ranging from Indonesia to the Georgian Republic 2 million years ago (Dmanisi).[i] On the Australasian island of Guinea, we find the only signature for interbreeding between Denisovans and modern humans dating to 44,000 years ago. This interbreeding occurred long after Australia’s supposed isolation, as claimed by the consensus narrative.[i] How do entirely isolated populations interbreed with other human groups?

See here.

We computed pD(X) for a range of non-African populations and found that for mainland East Asians, western Negritos (Jehai and Onge), or western Indonesians, pD(X) is within two standard errors of zero when a standard error is computed from a block jackknife (Table 1 and Figure 1). Thus, there is no significant evidence of Denisova genetic material in these populations. However, there is strong evidence of Denisovan genetic material in Australians (1.03 ± 0.06 times the New Guinean proportion; one standard error), Fijians (0.56 ± 0.03), Nusa Tenggaras islanders of southeastern Indonesia (0.40 ± 0.03), Moluccas islanders of eastern Indonesia (0.35 ± 0.04), Polynesians (0.020 ± 0.04), Philippine Mamanwa, who are classified as a “Negrito” group (0.49 ± 0.05), and Philippine Manobo (0.13 ± 0.03) (Table 1 and Figure 1). The New Guineans and Australians are estimated to have indistinguishable proportions of Denisovan ancestry (within the statistical error), suggesting Denisova gene flow into the common ancestors of Australians and New Guineans prior to their entry into Sahul (Pleistocene New Guinea and Australia), that is, at least 44,000 years ago.24,25 These results are consistent with the Common Origin model of present-day New Guineans and Australians.26,27 We further confirmed the consistency of the Common Origin model with our data by testing for a correlation in the allele frequency difference of two populations used as outgroups (Yoruba and Han) and the two tested populations (New Guinean and Australian). The f4 statistic that measures their correlation is only |Z| = 0.8 standard errors from zero, as expected if New Guineans and Australians descend from a common ancestral population after they split from East Asians, without any evidence of a closer relationship of one group or the other to East Asians. Two alternative histories, in which either New Guineans or Australians have a common origin with East Asians, are inconsistent with the data (both |Z| > 52).
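For readers unfamiliar with the statistic the quoted study leans on: an f4 statistic is just the average, over SNPs, of the product of allele-frequency differences between two pairs of populations, with a block-jackknife standard error. The sketch below is purely illustrative — the population names and frequencies are invented, and the study’s real pipeline is far more involved.

```python
# Toy illustration of an f4 statistic, f4(A, B; C, D) = mean over SNPs of
# (pA - pB) * (pC - pD), plus a delete-one-block jackknife standard error.
# All allele frequencies below are invented for illustration only.

def f4(pa, pb, pc, pd):
    """Mean over SNPs of (pA - pB) * (pC - pD)."""
    terms = [(a - b) * (c - d) for a, b, c, d in zip(pa, pb, pc, pd)]
    return sum(terms) / len(terms)

def jackknife_se(pa, pb, pc, pd, n_blocks):
    """Block-jackknife standard error of the f4 estimate."""
    n = len(pa)
    size = n // n_blocks
    estimates = []
    for i in range(n_blocks):
        keep = [j for j in range(n) if not (i * size <= j < (i + 1) * size)]
        estimates.append(f4([pa[j] for j in keep], [pb[j] for j in keep],
                            [pc[j] for j in keep], [pd[j] for j in keep]))
    mean = sum(estimates) / n_blocks
    var = (n_blocks - 1) / n_blocks * sum((e - mean) ** 2 for e in estimates)
    return var ** 0.5

# Invented frequencies at 8 SNPs for four hypothetical populations.
p_yoruba    = [0.1, 0.2, 0.8, 0.5, 0.3, 0.6, 0.4, 0.7]
p_han       = [0.2, 0.1, 0.7, 0.6, 0.4, 0.5, 0.5, 0.6]
p_newguinea = [0.3, 0.4, 0.6, 0.7, 0.5, 0.4, 0.6, 0.5]
p_australia = [0.3, 0.4, 0.6, 0.7, 0.5, 0.4, 0.6, 0.5]

# Identical "New Guinean" and "Australian" frequencies force every term,
# and hence the statistic, to zero -- the pattern the study reports.
stat = f4(p_yoruba, p_han, p_newguinea, p_australia)
print(stat)  # 0.0
```

This is why the quoted |Z| = 0.8 result supports a common origin: when the two tested populations are symmetric with respect to the outgroups, the statistic is consistent with zero.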

Here we analyze genome-wide single nucleotide polymorphism data from 2,493 individuals from 221 worldwide populations, and show that there is a widespread signal of a very low level of Denisovan ancestry across Eastern Eurasian and Native American (EE/NA) populations. We also verify a higher level of Denisovan ancestry in Oceania than that in EE/NA; the Denisovan ancestry in Oceania is correlated with the amount of New Guinea ancestry, but not the amount of Australian ancestry, indicating that recent gene flow from New Guinea likely accounts for signals of Denisovan ancestry across Oceania. However, Denisovan ancestry in EE/NA populations is equally correlated with their New Guinea or their Australian ancestry, suggesting a common source for the Denisovan ancestry in EE/NA and Oceanian populations. Our results suggest that Denisovan ancestry in EE/NA is derived either from common ancestry with, or gene flow from, the common ancestor of New Guineans and Australians, indicating a more complex history involving East Eurasians and Oceanians than previously suspected.

So it is accounted for by other genetic research.

We are finding anomalies in all areas of evolutionary studies, whether we look at the mitochondrial and Y-chromosonal data, the datings associated with human archaeological sites, or analysis of hominin morphology. Rather than continuing with the attempt to fit square pegs into a round hole, it is time to face the fact that holes are round and that our story of human origins has been significantly wrong.

Well, studies such as the ones above have reworked hypotheses on migration; the paper you cite on Denisovan admixture concerns one of the many smaller-scale migrations already being debated and revised, as my second link mentions. So while rethinking ideas in light of evidence is a good thing, there should be clear limits on what to discredit.

Overall, I wish I could like this as a competing idea to OOA. But if this article is any indication of the book, using various studies on hominids and human genetics at different scales that show no clear pattern centered on Southeast Asia in either archaeology or genetics, with nothing but the enthusiasm of creating a new idea to “fill holes”, then I’m disappointed.

With that said, if anyone with better knowledge can supply citations from the book (Fenton mentions research from close colleagues of his), then I may be more inclined to accept new finds if they favor shifting human origins from Africa to Australasia.

Origins and the Relationship between West Africans and Hunter-Gatherer Populations

by Phil78 1802 words

Many casual members of HBD may not be completely aware of the population history of West Africans and hunter-gatherers like the Pygmies beyond, say, the Bantu migration.

Those who frequent articles by population genetics bloggers such as Dienekes or Razib Khan ought to be aware that, in the sense of macro-races, the two clusters are distinct despite their relatively close association in a human cladistic sense, to the confusion of others.

Fortunately enough, two recent finds this year, in both genes and fossils, not only paint the history of these two groups but of humanity as a whole, with the evolutionary timeline of Sapiens being pushed back to 300k, possibly further according to Chris Stringer.

Hublin–one of the study’s coauthors–notes that between 330,000 and 300,000 years ago, the Sahara was green and animals could range freely across it.

While the Moroccan fossils do look like modern H sapiens, they also still look a lot like pre-sapiens, and the matter is still up for debate. Paleoanthropologist Chris Stringer suggests that we should consider all of our ancestors after the Neanderthals split off to be Homo sapiens, which would make our species 500,000 years old. Others would undoubtedly prefer to use a more recent date, arguing that the physical and cultural differences between 500,000 year old humans and today’s people are too large to consider them one species.

(The morphological characteristics of these hominids will come into play later.)

Taking this information in, we now ought to have a better context in which to place the divergence of the HG populations from the rest of mankind (including West Africans), more than 260,000 years ago.

If this is the case, then why do they range so close to modern West Africans? The reason, for the most part, is that this finding technically refers to their ancient primary cluster and not their modern composition as a whole.

So in terms of proportions, 30% of the composition of West African people derives from the human ancestors of the Ballito Boy, a specimen believed in turn to represent the ancestors of modern Khoisan people without the genetic admixture of either Bantu or East African pastoralists (which this study finds to range from 9% to 22% in all modern Khoisan groups).

[*Edit- When I speak of the HG populations’ association with West Africans, I’m referring to the relative position the two populations have compared to the actual East African cluster, exemplified most by Nilotic peoples, not “Horners”. I realized this confusion when I looked at genetic distance tests and found Bushmen populations ranking closest to Ethiopians; this is probably just a result of their admixture with Ethiopians, as later studies accounted for the confound and gave more accurate results, West Africans’ closest clusters being both Nilotics and Ethiopians, with Pygmies and San clustering closer than before, though the Mbuti are rather intermediate with the San than sharing a branch.

This seems to dampen earlier but plausible suggestions that highland Ethiopians have a San-like African profile explaining their affinity to each other, but it helps illustrate the nature of the cline in affinity of native African clusters, as Razib covered later that year, both between each other and relative to non-African populations. Here, we see that the San have closer affinities to West Africans than to Nilotics, though the Pygmy groups have an odd relationship. Not only are the Biaka closer to West Africans than the Mbuti are, but they are in general closer to West Africans than to the Mbuti. This is likely connected to these findings.

The findings also illustrate, oddly (to my knowledge at least), how though the San and Mbuti share similarly deep splits from non-Africans, both of their smallest distances are with the Biaka rather than with each other, despite the Biaka being closest to West Africans, who are as distant from the Mbuti as the Mbuti are from the San. Clearly, imagining the Pygmies merely as a cline between the San and Bantus doesn’t work without either considering them their own cluster altogether or appealing to isolation. I’m considering the former idea for the most part.

Mainly because the Mbuti are the smallest Pygmies, with height in that region being correlated with Pygmy versus Bantu ancestry, and the Biaka have been shown to be only 18.5% to 30% “pygmy”, if one assumes the Mbuti are pure and Bantus are the outside group. Though the latter point explains the overall association they have with West Africans, there is still the remote position of the Mbuti. According to Cavalli-Sforza, the Pygmies and Bushmen don’t necessarily share a particularly close relationship, but both their lifestyles and apparent divergence from Bantus make the idea rather convincing.
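An “18.5% to 30% pygmy” figure of this kind rests on a simple two-way admixture model: treat the admixed population’s allele frequencies as a mixture m·(source) + (1−m)·(outgroup) and solve for m by least squares. The sketch below is my own illustration, not the method of any study cited here, and every frequency in it is invented.

```python
# Least-squares estimate of a two-way admixture fraction m, assuming
#   p_mix ~= m * p_source + (1 - m) * p_outgroup  at each marker.
# Population labels and frequencies are hypothetical, for illustration.

def admixture_fraction(p_mix, p_source, p_outgroup):
    """Solve p_mix = m*p_source + (1-m)*p_outgroup for m by least squares."""
    num = sum((pm - po) * (ps - po)
              for pm, ps, po in zip(p_mix, p_source, p_outgroup))
    den = sum((ps - po) ** 2 for ps, po in zip(p_source, p_outgroup))
    return num / den

# Invented frequencies: "Mbuti" as the pure source, "Bantu" as the outgroup.
p_mbuti = [0.9, 0.1, 0.8, 0.3, 0.6]
p_bantu = [0.2, 0.7, 0.1, 0.8, 0.4]

# Construct a "Biaka" that is exactly 25% Mbuti-derived, then recover m.
m_true = 0.25
p_biaka = [m_true * a + (1 - m_true) * b for a, b in zip(p_mbuti, p_bantu)]

m_hat = admixture_fraction(p_biaka, p_mbuti, p_bantu)
print(round(m_hat, 3))  # 0.25
```

The estimate is only as good as the assumption that the Mbuti are an unadmixed proxy for the Pygmy source, which is exactly the caveat noted above.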

Based on the new study, though it is scant on pygmies, comparing Mbuti dates to San dates shows figures similar to the high range of the previous 90k-195k split estimate, if not a little higher. This is also consistent with the pygmies (in pure form) being intermediate between West African ancestors and the San in the 1994 study, and with the new date for the Khoisan. Their more or less complete genetic isolation may have played a role in the Mbuti’s similar position with respect to other Africans, limited gene flow undermining the affinity the Khoisan and Mbuti have through common ancestry, whereas the Khoisan and Biaka were connected through the Bantu expansion. It is basically similar to the relationship of Sardinians and mainland Italians.]

Now, the first thought crime most are pondering at this point is whether the Khoisan are fully "human" in the context of genetics and recent anthropology. John Hawks has already discussed the position of the ancient Moroccans in our evolutionary tree, and he relays the authors' comments on how, despite archaic features that keep them from being clearly modern, they and similar finds play the role of a founding lineage contributing to modern sapiens.

In Hublin and colleagues’ “pan-African” hypothesis, every African fossil that had parted ways with Neanderthals is part of a single lineage, a stem population for modern humans. They connect the evolution of these early H. sapiens people to a new form of technology, the Middle Stone Age, which was found in various regions of Africa by 300,000 years ago.

So how many other archaic groups were in Africa? Under the Hublin model, there may have been none. Every fossil sharing some modern human traits may have a place within the “pan-African” evolutionary pattern. These were not river channels flowing into the desert, every channel was part of the mainstream.

But there may be a problem. Geneticists think there were others.

He then alludes to the Iwo Eleru findings, found to be closest to early AMH (the 120k Levant specimens).

Now, as for the cranial representatives that the Ballito Bay boy is likely associated with, there have indeed been quite a few earlier skulls fitting the profile that were classed separately from more archaic types of similar geography, like Florisbad, and thus would be classed away from the Moroccans as well.

An example is this rather interesting analysis of the Border Cave skull (which I believe is 50k, from a site with Bushman-like tools dated to 46k).

When all (six rather than just three) discriminants are considered, Border Cave in fact lies closest to the Hottentot centroid and is contained within the .05 limits of this distribution. The fossil also approaches the Venda and Bushman male centroids but falls beyond the .05 limits of these groups. This is new information, not principally because of the Hottentot identification, which is dubious, but because Border Cave is shown emphatically to be well within the range of modern African variation for the measurements used. The cranium is heavily constructed, but it is hardly archaic in the fashion of Florisbad or Broken Hill.

Border Cave skull, front view

Border Cave skull, lateral view

Hottentot (Khoi pastoralist) skull, lateral and front views (for comparison)

And here's the best primary reference for a "Bushman skull" I could find, to display its similarities to and differences from the more admixed pastoralists. One notable listed trait is the less prominent occipital protrusion of the Bushman skull despite it measuring as more dolichocephalic, probably due to a narrower relative breadth, which cannot be seen here.

However, the only specimen I know of that is linked to the Khoisan by modern research is the similar Fish Hoek specimen.

Humanitec-Anatomy of an intellectual triad

Primitive Man of the Peninsula. Cape Times (South Africa), October 26, 1927 

One Hundred Skulls: OEC and Exploring Human Origins

In comparison to Jebel Irhoud 1

Zetaboards (Anthroscape), unknown source

Jebel Irhoud 1 skull, front and lateral views

And Florisbad

Ira Block Photography

And here are comments from the study mentioned, which links Fish Hoek with the modern Khoisan, comparing morphological differences among Stone Age African skulls, Bushmen, Pygmies, and Bantu farmers.

To summarize, therefore, the Pleistocene skulls from across Africa tend to be broad, long, with a broad face and broad, short orbits.

By contrast, the skulls of the Khoisan ("Bushman") population are relatively short, low, broad, narrow, with a comparatively intermediate nose.

Pygmies are characterized by great variability, but they usually have small-sized round skulls, and a balanced face. Their degree of dispersion, however, contradicts the findings of other studies, which have detected a strong homogeneity among Pygmy populations, even if these support the hypothesis that the typical features of these populations, including their short stature, took place after their geographical separation through convergent evolution, as is suggested by other more recent studies (Ramírez Rossi and Sardi, 2010; Anagnostou, 2010; Vigilant, 1989).

The Bantu-speaking populations are mostly at the center of the graph, which represents a common morphological tendency, but with a strong variability, whether they come from Southern, Eastern or Central Africa. This supports the idea of a common, more recent ancestor than that for the Pygmy and Khoisan groups, as well as a similar way of life founded on cattle breeding and farming, independent of their surrounding environment.

The Late Peopling of Africa According to Craniometric Data. A Comparison of Genetic and Linguistic Models

For context, the specimen in the sample that exemplifies the traits associated with the Pleistocene group the most would be the Herto Skull, which is comparatively closer to modern humans than the Jebel Irhoud findings.

160,000-year-old skulls fill crucial gap in evolution- Telegraph

So, despite their divergence being closer in age to the more archaic specimens, why do their likely less admixed ancestors and modern populations contrast so clearly in phenotypic traits? By my amateur speculation, this leaves two options: either gracilization took place in convergence with other populations, or, more plausibly, the archaeological finds don't precisely mark the specimens' actual divergence, in which case the more archaic forms likely have older splits than their fossil ages suggest, and the clear traits of the "modern human" phenotype may be older as well.

The West’s Sperm Decline: Is It True?

2200 words

Another day, another slew of articles full of fear mongering, this time on sperm decline in the West. Is it true? I recently covered on this blog that, as of July 17th, 2017, the testosterone range for men decreased (more on that when I get access to the paper). I have also covered the obesity epidemic a bit, which also factors into lowered testosterone and, of course, low spermatozoa count. From these environmental factors we can logically deduce that sperm counts have fallen as well. However, as I will cover, it may not be so cut and dried, since the analyses pool numerous studies with different counting methodologies, among numerous other confounds addressed below. First I will cover the physiology of sperm production and what may cause decreases in it. Next, I will cover the new study that is being passed around. Finally, I will talk about why you should worry about this.

Physiology of sperm production

The accumulation of testosterone by androgen-binding protein (ABP) leads to the onset and rising rate of sperm production. So if testosterone production ceases or decreases, subsequent decreases in sperm count and spermatogenesis should follow; if the change is drastic, infertility will soon follow. The process of sperm production is called spermatogenesis. It occurs in the seminiferous tubules and involves three main events: 1) remodeling relatively large germ cells into smaller mobile cells with flagella, 2) reducing the chromosome number by half, and 3) shuffling the genes so that each chromosome in the sperm carries novel gene combinations that differ from the parents'. This ensures that a child will differ from its parents while still being similar to them. The process by which this occurs is called meiosis, which produces four daughter cells that subsequently differentiate into sperm (Saladin, 2010: 1063).

After the conclusion of meiosis I, each chromosome is still double-stranded, but each daughter cell has only 23 chromosomes, making it haploid; at the end of meiosis II, there are four haploid cells with 23 single-stranded chromosomes. Fertilization then combines the 23 chromosomes from the father with the 23 from the mother, which "reestablishes the diploid number of 46 chromosomes in the zygote" (Saladin, 2010: 1063-1064).

Spermatogonia divide by mitosis and then enlarge to become primary spermatocytes. Each cell is then protected from the immune system, since it is going to become genetically different from the rest of the cells in the body. Thus guarded, the primary spermatocyte undergoes meiosis I, giving rise to two equally sized, haploid, genetically unique secondary spermatocytes. Each secondary spermatocyte then undergoes meiosis II, dividing into two spermatids, for four spermatids in total. Lastly, the spermatids undergo no further division but undergo spermiogenesis, in which each differentiates into a single spermatozoon (Saladin, 2010: 1065-1066). Young men produce about 300,000 sperm per minute, about 400 million per day.
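As a quick back-of-the-envelope check on those last figures (my own arithmetic, not from Saladin):

```python
# Scale the quoted per-minute production rate up to a daily figure.
per_minute = 300_000
per_day = per_minute * 60 * 24
print(f"{per_day:,}")  # 432,000,000 -- on the order of the ~400 million/day quoted
```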

Sperm decrease?

The new study was published on July 25, 2017, in the journal Human Reproduction Update, titled Temporal trends in sperm count: a systematic review and meta-regression analysis. Levine et al (2017) pooled 185 studies (n = 42,935) and showed a sperm count (SC) decline of 0.75 percent per year, coming out to a 28.5 percent decrease between 1973 and 2011. Similar declines were seen in total sperm count (TSC), while 156 estimates of semen volume showed little change.
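The headline numbers hang together: a linear loss of 0.75 percent of the baseline per year over the 38-year window adds up to the reported total. A quick sketch (my arithmetic, simplifying the study's actual meta-regression):

```python
# Sanity check: 0.75% of the 1973 baseline lost per year, 1973-2011 (linear, not compounding).
years = 2011 - 1973         # 38 years
annual_drop = 0.0075        # 0.75% of baseline per year
total_drop = annual_drop * years
print(f"{total_drop:.1%}")  # 28.5%
```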


Figure 2a shows the mean sperm concentration between the years 1973 and 2011. Figure 2b shows the mean total sperm count between those same years.


Figure 3a shows sperm concentration for the West (North America, Australia, Europe, and New Zealand) versus Other (South America, Asia, and Africa), adjusted for potential confounders such as BMI and smoking. Figure 3b shows total sperm count by fertility status for West and Other. Fertile Other shows a sharp increase, but this may be due to limited statistical power and a lack of studies of unselected men from those countries before 1985; in any case, the data do not support as sharp a decline in Other as was observed in Western countries.

If this is true, why is it happening? Factors that decrease spermatogenesis include (but are not limited to) obesity, smoking, and exposure to traffic exhaust fumes and combustion products, though there are no human data (only animal models) lending credence to the idea that pesticides, food additives, etc., decrease spermatogenesis (Sharpe, 2010). Other factors known to lower SC include maternal smoking, alcohol, stress, endocrine disruptors, persistent and nonpersistent chemicals, and, perhaps most importantly today, the use of mobile phones and wireless Internet (Virtanen, Jorgensen, and Toppari, 2017). Radiation exposure from constant mobile phone use may cause DNA fragmentation and decreased sperm motility (Gorpinchenko et al, 2014). Clearly, most of this decrease can largely be ameliorated. Exercising, eating right, and not smoking seem to be the most immediate changes that can and will increase SC in Western men; they will also increase testosterone levels. The cause is largely immobility due to the comfortable lifestyles we have in the West, so by becoming more active and putting down our smartphones we can begin to reverse this downward trend.

Saladin (2010: 1067) also states that pollution has deleterious effects on reproduction, and by proxy sperm production. He states that evidence is mounting for declining fertility and "anatomical abnormalities" linked to pollutants found in water, meat, vegetables, breast milk and the uterus. He cites a 1990 analysis of 15,000 men in which sperm counts had decreased from 113 million/ml in 1940 to 66 million/ml in 1990. Sperm production decreased even more than those numbers suggest, he says, since "the average volume of semen per ejaculate has dropped 19% over this period" (Saladin, 2010: 1067).

Saladin (2010: 1067) further writes:

The pollutants implicated in this trend include a wide array of common herbicides, insecticides, industrial chemicals, and breakdown products of materials ranging from plastics to dishwashing detergents. Some authorities think these chemicals act by mimicking estrogens or by blocking the action of testosterone by binding to its receptors. Other scientists, however, question the data and feel the issue may be overstated. While the debate continues, the U.S. Environmental Protection Agency is screening thousands of industrial chemicals for endocrine effects.

Is it really true?

As seen above, the EPA is investigating whether thousands of industrial chemicals have effects on our endocrine system. If they do, it occurs through these chemicals binding to androgen receptors, blocking the action of testosterone and thus sperm production. However, some commentators have contested the results of studies that purport to show a decrease in SC over the decades.

Sherins and Delbes are critical of such studies. They rightly state that most of these studies have numerous confounds such as:

1) lack of standardized counting measures, 2) bias introduced by using different counting methodologies, 3) inadequate within-individual semen sampling in the analysis, 4) failure to account for variable abstinence intervals and ejaculatory frequency, 5) failure to assess total sperm output rather than concentration, 6) failure to assess semen parameters other than the number of sperm, 7) failure to account for age of subject, 8) subject selection bias among comparative studies, 9) inappropriate statistical analysis, 10) ignoring major geographic differences in sperm counts, and 11) the causal equating of male fertility with sperm count per se.

Levine et al (2017) write:

We controlled for a pre-determined set of potential confounders: fertility group, geographic group, age, abstinence time, whether semen collection and counting methods were reported, number of samples per man and indicators for exclusion criteria (Supplementary Table S1).

So they covered points 1, 2, 4, 5, 6, 7, 8, 9, and 10; the study is very robust. Levine et al (2017) replicate numerous other studies showing that sperm count has decreased in Western men (Centola et al, 2015; Sengupta et al, 2017; Virtanen, Jorgensen, and Toppari, 2017). Men in Southern Spain show normal levels (Fernandez et al, 2010), while Southern Spanish university students showed a decrease (Mendiola et al, 2013). The same SC decrease has been noted in Brazil over the last ten years (Borges Jr. et al, 2015).

However, te Velde and Bonde (2013), in their paper Misconceptions about falling sperm counts and fertility in Europe, contest the results of studies arguing that SC has decreased within the last 50 years, stating that, for instance, in Denmark the median values remained between 40-45 million sperm per ml over the 15 years analyzed. They also state that declining birth rates can be explained by cultural and social factors, such as contraception, female emancipation, and the second demographic transition. Clearly, fertility rates are correlated with the human development index (HDI), meaning that more developed countries have lower birth rates than less developed countries. I believe that part of the reason we in the West have lower birth rates is that there are too many things for us to do to occupy our time, time that could be used to have children, from going to school to pursue Master's degrees and PhDs to just wanting more 'me time'.

Te Velde and Bonde (2013) conclude:

'Whether the sperm concentration and human fecundity have declined during the past 50 years is a question we will probably never be able to answer'. This statement by Olsen and Rachootin in 2003 still holds for sperm concentration despite the report in 1992. In the meantime, we know that the results of oft-repeated studies from Copenhagen and Malmö do not indicate any notable change in sperm count during the last 10–15 years. Moreover, none of the available evidence points to a decline in couple fecundity during the last 30–40 years, including Denmark. Moreover, birth rates and TFRs instead of declining are on the increase in many EU countries, including the spectacular rise in Denmark.

Echoing the same sentiments, Cocuzza and Esteves (2014) conclude "that there is not enough evidence to confirm a worldwide decline in sperm counts or other semen parameters. Also, there is no scientific truth of a causative role for endocrine disruptors in the temporal decline of sperm production as observed in some studies. We conjecture that a definite conclusion would only be achieved if good quality collaborative long-term research was carried out, including aspects such as semen quality, reproductive hormones, and xenobiotics, as well as a strict definition of fecundity." Merzenich, Zeeb, and Blettner (2010) also caution that "The observed time trend in semen quality might be an artefact, since the methodological differences between studies might be time dependent as well. Intensive research will be necessary in both clinical and epidemiological domains. More studies are needed with strict methodological standards that investigate semen quality obtained from large samples of healthy men representative for the normal male population."

Clearly, this debate is long and ongoing, and I doubt that even Levine et al (2017) will be good enough for some researchers.


There are various papers for and against a decrease in sperm production in the West, just as with testosterone. However, there are ways we can infer that SC has fallen in the West: we have definitive data that testosterone levels have decreased, which would lead to a decrease in sperm production and then in fecundity and the number of children conceived by couples. Of course, sociocultural factors are involved, as well as immediate environmental ones that are changeable. Even if there is no scientific consensus on industrial chemicals' effects on the endocrine system, you should stay away from them too. One major contributor to the decrease in sperm production, if the decrease is real, is mobile phone usage, which has increased over the same period and so would lower SC over time.

Whether or not the decrease in SC is real, every man should take steps to lead a healthier lifestyle without his cell phone. Because if the decrease is true (and Other doesn't show one as well), then it would be due to the effects of our First World societies, which would mean we need to change how we live to get back on the right track. Clearly, we must change our diets and our lifestyles. I've written numerous articles about how testosterone is strongly mediated by the environment, and how testosterone production in men has decreased as Western men have, in a way, been feminized and become less dominant. This can and does decrease testosterone production, which would in turn decrease sperm production and fertility rates.

Nevertheless, taking steps to lead a healthier lifestyle will ameliorate many of the problems we have in the West, which stem largely from low birth rates, and by ameliorating these problems, quality of life in the West will increase. I am skeptical of the decrease for the reasons brought up above, but I nevertheless assume it is true and hope my readers do too, if only to light some fire under you to lead a healthier lifestyle if you do not already, so as to prevent these problems before they lead to serious deleterious health consequences.

(I am undecided, leaning towards yes. Too many behaviors linked to lower SC are common among Western men. There are numerous confounds that may not have been controlled for; however, knowing the main causes of lower sperm counts and the increased prevalence of these behaviors, we can logically deduce that sperm count has fallen too. Look to the testosterone decrease, which causes both lower sperm count and lower fertility.)
