NotPoliticallyCorrect


Muscle Fiber Typing and Race: Redux

I recently blogged on Muscle Fiber Typing, HBD, and Sports. I showed that differences in which race wins which competition come down to ancestry, which in turn correlates with muscle fiber typing. Today I came across the paper Black and White race differences in aerobic capacity, muscle fiber type, and their influence on metabolic processes, which, of course, supports my point on muscle fiber typing.

The authors say that obesity is a known risk factor for cardiometabolic disease (though Blüher 2012 says that up to 30 percent of obese patients are metabolically healthy, with insulin sensitivity on the same level as thin individuals) and that cardio can reduce excess adipose tissue (this isn’t true either), maintain weight (maybe), and reduce the risk of obesity (it doesn’t) and cardiometabolic disease (this is true). The two major determinants of aerobic capacity are muscle fiber typing and “the capacity of the cardiorespiratory system to deliver nutrient-rich content to the muscle”. As I said in my previous article on muscle fiber typing, which fibers an individual has determines whether they are predisposed to being good at endurance sports (Type I fibers) or explosive sports (Type II fibers). Recent research has shown that blacks’ fiber typing predisposes them to a lower overall VO2 max.

VO2 max comes down to a strong oxygen support system and the capacity to contract a large number of muscle fibers at once, both of which are largely genetic. Lactic acid makes us tired, so the best way to train is to minimize lactic acid production and maximize lactic acid removal during exercise; High-Intensity Interval Training, or HIIT, achieves this. The more O2 consumed during exercise, the less reliance there will be on the anaerobic breakdown of CHO (carbohydrate) to lactic acid.

Along with inadequate exercise, these variables place blacks at an increased risk for obesity as well as other negative metabolic outcomes in comparison to other racial/ethnic groups. The authors’ purpose in the review was to show how skeletal muscle fiber typing contributes to obesity in non-“Hispanic” black populations.

The review indicates that the metabolic properties of Type II fibers (reduced oxidative capacity and lower capillary density) are related to various cardiometabolic diseases. Capillary density is a physiological measure obtained by taking a cross-section of muscle and counting the number of blood vessels within it; it can be considered an indicator of physical health and is also related to the ability to do strenuous activity.

Since non-“Hispanic” blacks have more Type II fibers on average, they have a lower maximal aerobic capacity. Combined with a low Resting Energy Expenditure (REE) and a reduced hemoglobin concentration (hemoglobin is a protein in red blood cells that shuttles oxygen to your tissues and organs and transports carbon dioxide from your organs and tissues back to your lungs), non-“Hispanic” blacks may be predisposed (they are, when you look at what the differing skeletal muscle fibers do in the body and have a basic understanding of physiology) to a lower maximal aerobic capacity, which contributes to obesity and metabolic disease in the non-“Hispanic” black population.

I wrote on ethnicity and obesity last year. In the two racial groups that were tested, American non-“Hispanic” whites and American non-“Hispanic” blacks, what the researchers say holds true.

On the other hand, Kenyans have an average BMI of 21.5. Since a high VO2 max and a low BMI are correlated, this is part of why Kenyans succeed in distance running (along with VO2 max training, which only enhances the genetic effects already there).

Moreover, I wrote an article on how Black American Men with More African Ancestry Less Likely to Be Obese. How do we reconcile this with the data I have just written about?

Simple. The population in the study I’m discussing in this article must have had more non-African ancestry than the population in the study showing that black American men with more African ancestry are less likely to be obese. The researchers in that study looked at 3,314 genetic markers. They then tested whether sex modifies the association of West African genetic ancestry with body mass index, waist circumference, and waist-to-hip ratio. They also adjusted for income and education and examined associations of ancestry with the phenotypes of males and females separately. They conclude that their results suggest that the combination of male sex and West African ancestry is correlated with protection against central obesity, and that a portion of the difference in obesity (a 13.2 percent difference) may be due, in part, to genetic factors. The study also suggests that there are specific genetic and physiologic differences between African and European Americans (take that, race-denialists =^) ).

Since both black men and women in America share the same environment, some genetic factors are at play in the differences in obesity rates between the two sexes, with the greater African ancestry of black American men being the main reason.

Finally, I wrote an article on BPA consumption and obesity. The sample was blacks and Dominicans (they’re black as well) in NYC. It was discovered that babies who were exposed to more BPA in childhood and in the womb had higher chances of being obese. This goes with what the authors of the study I’m citing in this article say. There are numerous environmental factors that pertain to obesity beyond kcal in/out (the First Law of Thermodynamics is irrelevant to human physiology). BPA consumption is one of them (as well as a cause of the current and ongoing cucking of Europe). Whites of all age groups drink more tap water, while blacks and ‘Hispanics’ were pretty much even in consumption of bottled water. Bottled water has BPA in the plastic, and since blacks and ‘Hispanics’ drink more bottled water, they run the risk of their children being more prone to obesity due to the negative effects of BPA on the human body.

In sum, blacks are more likely to be faster due to their fiber typing, but are also more likely to be obese (in this sample, anyway, which I assume was a mix of men and women; I will update this article when I find and read the full paper). They also run a higher risk of having related diseases, most notably due to a lower REE (showing they don’t walk around as much, since too much sitting increases mortality EVEN WITH EXERCISE; so if you have a desk job, don’t do any other physical activity, and enjoy living, do more LISS, low-intensity steady-state cardio). These factors also, in part, explain why blacks have higher rates of hypertension (with Sickle Cell Anemia being another cause, since sickle-shaped blood cells crowd the blood vessels, causing blockages in the veins, which leads to strokes and other diseases). The more the genetic factors that predispose people to obesity are understood (let’s be real here, there ARE genetic correlates of obesity), the better we can help those who suffer from the condition.

Myopia, IQ, and Race

1200 words

We’ve all heard of the nerd stereotype. One of the main ones is that nerds wear glasses. However, as most of my readers may know, stereotypes are based on fact more often than not. From the black criminal and sprinter, to the hyper-intelligent East Asian, to the intelligent and creative Europeans, we see that these so-called ‘stereotypes’ arise because stereotypes are actually average traits. Therefore, this ‘nerd stereotype’ that they always wear glasses is based on averages, so there must be a genetic component behind it. In this article I will talk about the genetics of myopia, reasons why researchers believe it arises, and racial differences in the prevalence of myopia.

Myopia, better known as nearsightedness, has a pleiotropic relationship with intelligence. Pleiotropy is when a single gene or set of genes controls multiple, possibly unrelated, phenotypic traits. So if the two traits are correlated, then there is a good chance that someone who wears glasses has higher-than-average intelligence.

Rosner and Belkin (1987) found that the prevalence of myopia was higher in more intelligent and more educated groups, with a strong association between the rate of myopia, years of schooling, and intelligence level. Schooling and intelligence were each independently associated with myopia, showing that those who are myopic tend to stay in school longer and are more intelligent than average.

Saw et al (2004) show that there may be shared genes associated with eye growth or size (myopia) and neocortical size (*possibly* correlated with IQ; we know it is). This is exactly what Cohn, Cohn, and Jensen found in 1987: a pleiotropic relationship between IQ and myopia, one set of genes controlling multiple phenotypic traits. They also say that nonverbal IQ is correlated with myopia in the Singaporean cohort independent of near work done by the children (such as reading); nonverbal IQ may be a risk factor for myopia independent of books read per week. They conclude that more research needs to be carried out to untangle the cause and effect in the myopia/intelligence/reading relationship.

Mirashi et al (2014) show, in a sample of 4,600 myopic Germans between the ages of 35 and 74, that about 53 percent of the myopic subjects had graduated from college, compared to 24 percent who didn’t go to school past high school. They, too, conclude that myopia is associated with higher educational achievement and post-school professional achievement: those who were myopic had higher levels of educational achievement than those in the sample who weren’t.

More recently, Verma and Verma (2015) state that there is evidence that both genetic and environmental factors play a role in the prevalence of myopia. Moreover, Czepida, Lodykowska, and Czepita (2008) come to the same conclusion: children with myopia have higher IQs, a finding verified in other countries (the USA, the Czech Republic, Denmark, Israel, and New Zealand).

The correlation between myopia and IQ is between .2 and .25 (Jensen, 1998 b; 149). Jensen writes on page 150:

. . . the degree of myopia was measured as a continuous variable (refraction error) by means of optical techniques in a group of sixty adolescents selected only for high IQs (Raven matrices) and their less gifted full siblings, who averaged fourteen IQ points lower, a difference equivalent to 0.92σ. The high-IQ subjects differed significantly from their lower-IQ siblings in myopia by an average of 0.39σ on the measure of refraction error. In other words, since there is a within-families correlation between myopia and IQ, the relationship is intrinsic. However, it is hard to think of any directly functional relationship between myopia and IQ. The data are most consistent with there being a pleiotropic relationship. The causal pathway through which the genetic factor that causes myopia also to some extent elevates g (or vice versa) is unknown. Because the within-family relationship of myopia and IQ was found with Raven’s matrices, which in factor analyses is found to have nearly all of its common factor variance on g, it leaves virtually no doubt that the IQ score in this case represents g almost exclusively. (emphasis his)

Therefore, as noted earlier, we would expect a slight association in the general population between having a high IQ and being myopic.

Jensen also talks about race and myopia. He says that Asians have the highest rates of myopia, while blacks have the lowest rate and whites have a rate slightly higher than blacks.

In a tribute to Arthur Jensen edited by Helmuth Nyborg, it is stated that East Asians have the highest rates of myopia, with blacks having the lowest rate and whites being intermediate (Rushton’s Rule of Three). Ashkenazi Jews have a rate of myopia two times higher than that of gentiles, on par with East Asians. These are yet more biological correlates of the g factor that lend credence to the hereditarian hypothesis.

Glassescrafter.com also shows that East Asians have a higher rate of myopia with blacks having a lower rate:

Certain types of visual disturbances affect some races more frequently. Asian-Americans, for example, are more likely to be near-sighted than Caucasians or African-Americans. African-Americans have the lowest incidence of near-sightedness, but are more prone to cataracts and some other eye diseases. Eye problems, including the need to wear glasses, also can run in families.

Of course, if myopia is a pleiotropic trait (there is good evidence that it is), and wearing glasses runs in families as does high intelligence, it can safely be hypothesized that the two do indeed have a relationship with each other. The biological correlates show that these traits, too, follow Rushton’s Rule.

Finally, Au, Tay, and Lim (1993) present data showing that the prevalence and severity of myopia are associated with higher education. They also report data from Rosner and Belkin: the prevalence of myopia in males with an IQ of 80 or less was 8 percent, while the rate increased to 27.3 percent among those with an IQ of 128 or higher. Reported separately, it was concluded that the myopia rates in the cohort of 110,236 young Singaporean males correlated with race (Au, Tay, and Lim, 1993). The myopia rate for the Chinese was 48.5 percent (IQ 105), for Eurasians it was 34.7 percent, for Indians it was 30.4 percent (IQ 82), and for Malays it was 24.5 percent (IQ 92). It’s worth noting that India’s IQ is depressed by disease and bad nutrition, and if this were rectified their IQ would be around 94. So this, again, shows the biological correlation between IQ and myopia as seen across these groups.
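To make the group-level gradient concrete, here is a quick sketch (illustrative only: the Eurasian group is left out since no IQ was quoted for it, and three data points prove nothing on their own) that computes the correlation between the quoted group IQs and myopia rates:

```python
from math import sqrt

# Group-level figures quoted above (Au, Tay, and Lim, 1993; IQs as cited):
# Chinese, Indians, and Malays. Eurasians omitted since no IQ was given.
iq     = [105, 82, 92]
myopia = [48.5, 30.4, 24.5]  # myopia rate, percent

def pearson_r(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson_r(iq, myopia)  # roughly 0.77 for these three groups
```

With only three groups this merely illustrates the gradient; it is not a statistical test of it.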

The association between myopia and intelligence isn’t definite yet; however, with more studies looking into the relationship between these variables, I believe it will become concrete that those who are more myopic tend to have higher IQs due to the pleiotropic nature of IQ and myopia. Since reading is heritable, those with higher IQs as children tend to read more as adults, and since the racial gradient is noticed in children, it’s pretty safe to say that myopia and IQ are linked pleiotropically, giving more credence to the hereditarian hypothesis. Most studies find a statistically positive correlation between myopia and intelligence. Along with the racial disparities in myopia as well as intelligence, it’s pretty safe to say that the relationship is genetic and pleiotropic in nature, since the races also differ in these variables.

Muscle Fiber Typing, HBD, and Sports

1850 words

With the Olympics currently happening, I figured I’d talk about muscle fiber typing and how it plays a factor in who wins what competition. First I’ll go through both fiber typings and what they mean for each sport. Then I will go through some of the most well-known sports and show how and why certain races dominate in different sports.

Muscle fiber typing

There are two types of muscle fibers: Type I fibers (slow twitch) and Type II fibers (fast twitch). Each fiber type fires through a different pathway, whether anaerobic or aerobic. The body uses two types of energy systems, aerobic and anaerobic, which generate Adenosine Triphosphate, better known as ATP, which causes the muscles to contract. The type of fibers an individual has dictates which pathway the muscles use to contract, which then, ultimately, dictates whether there is high muscular endurance or whether the fibers fire faster for more speed.

Type I fibers lead to more strength and muscular endurance as they are slow to fire off, while Type II fibers fire quicker and tire faster. Slow twitch fibers use oxygen more efficiently, while fast twitch fibers do not burn oxygen to create energy. Slow twitch muscles delay firing, which is why endurance is so high in individuals with these fibers, whereas those with fast twitch fibers have muscles that fire more explosively. Slow twitch fibers don’t tire as easily, while fast twitch fibers tire quickly. This is why West African blacks and their descendants dominate in sprinting and other competitions where fast twitch muscle fibers are the advantage over slow twitch.

Usain Bolt, who just won the 100m dash the other day, has fast twitch (Type II) fibers due to a variant of the ACTN3 gene, which is associated with elite athletic performance. West African blacks and their descendants disproportionately carry this variant; for example, 70 percent of Jamaicans have it, and this gene is part of why Usain Bolt is the world’s fastest man.

At the same time, though, West Africans and their descendants suffer in competitions where muscular endurance is needed (swimming is one of them). Caucasians, Asians, and East Africans have more slow twitch (Type I) fibers, which allows them to dominate in competitions where endurance is needed (weightlifting, Strongman, distance running, swimming).

There are physiological differences found in the winners of these competitions, and like most things, there is a racial basis to them.

Sports

As noted above, West Africans and their descendants dominate competitions in which their muscle fibers are best put to use (sprinting, football, basketball, etc.), while they suffer in the competitions where muscular endurance is needed and Caucasians and Asians dominate (weightlifting, powerlifting, distance running).

World’s Strongest Man

Muscle fiber typing plays a major part in who wins these competitions, as does limb length. Generally, the winners of the World’s Strongest Man (WSM) are stockier and have shorter limbs, which translates into more power generated since the weight travels a shorter distance.

A white man has won the WSM competition every year since its inception; it’s always a Northern or Eastern European who wins. The Russians and Slavs are known for their crazy squat programs, and muscle fiber typing is the reason why. They are able to generate more power than those with fast twitch fibers, which translates into domination in strength-based competitions.

The same thing is noticed in powerlifting: Caucasians and Asians dominate. I’ve seen some incredibly strong East Asian powerlifters, and the reason is that they are shorter and stockier with shorter limbs. More power can be generated over the shorter distance, and their fiber typing allows these populations to excel in these types of competitions.

I hypothesize that just like West Africans and their descendants consistently win sprinting competitions due to their genes and fiber typing, this is the same reason why Europeans consistently win WSM. Though, PumpkinPerson thinks differently about this.

PP believes that since Africans have higher testosterone, they should therefore dominate in these types of competitions. His reasoning is based on Rushton’s Rule of Three, which, although it holds well for a wide variety of variables, doesn’t hold for more complex traits such as muscle fiber typing.

PP cites a study stating that blacks out-benched whites at the beginning and end of the study. However, this seems anomalous. The researchers say it is the only study looking at this, and from what I can tell, they didn’t ask about dietary or exercise habits. They also say that blacks had a higher BMI at the onset, but not at the follow-up.

I’d like to see another study like this before any conclusions are drawn, because in actual powerlifting competitions, where everyone is using and people go above and beyond their genetic potential, Caucasians (whites, MENA people) and East Asians are consistently stronger than blacks. Africans are really nowhere to be found. In fact, Kenya is the only sub-Saharan African country to place in the top 3 in the WSM, which strengthens my theory on muscle fiber typing and strength-based competitions, since Kenyans have slow twitch fibers.

PP then writes another article saying that from 1938 to 1953 the world’s strongest man was a black man named John Henry Davis, who was known as such during those years. But as we know, exceptions don’t disprove rules.

Mark Henry is a better example: a genetic freak of nature. He held a world-record total in the squat, bench, and deadlift; he was squatting 600 pounds as a freshman; as a teenager, he had the 8th best total regardless of age group. He’s way stronger than the man PP cited and one of the strongest people to ever live. He is a freak of nature; I can’t emphasize that enough.

Sprinting

West Africans and their descendants excel at sports where their muscle fiber typing is put to good use. The ACTN3 gene, as noted above, has a lot to do with their success in these competitions, but it doesn’t tell the whole story. Sprinters have long limbs, which allow them to cover a greater distance with each stride in comparison to someone with shorter limbs. Sprinters also have lower levels of body fat, which translates to more speed. While these lower levels of body fat make them suffer in swimming competitions (since fat floats), they help in sprinting competitions due to less fat mass.

Swimming

For those of you keeping up with the Olympics, you may have heard of Robel Kiros Habte. He finished with the worst time of the 59 contestants and was only there due to an invitation extended to him by the International Swimming Federation, which chooses people from countries that are underrepresented in the Games. This invitation shows that even the ‘best’ in their country is nowhere near good enough against the best in the world.

On the other hand, swimmer Simone Manuel just became the first black American to win gold in the 100m freestyle. There’s a first time for everything, and exceptions don’t disprove rules.

Of course, Michael Phelps speaks for himself, with his 23rd gold medal win, which broke a record that had stood for 2,168 years.

Bodybuilding

Blacks dominate in American bodybuilding. This is due to them having a lower body fat percentage on average and being more mesomorphic.

The winner of Mr. Olympia five years in a row is Phil Heath (who will win a sixth title next month during the Olympia). Blacks have consistently been in the top running in the IFBB (International Federation of Bodybuilding). This is due to their muscle insertions and lower average body fat, which allow a high percentage of blacks to compete. Moreover, I’d say that, genetically speaking, blacks have a better chance to win over whites since they naturally have a more sculpted physique, which comes down to evolutionary selection.

Some people may say that the above sports are tainted due to performance-enhancing drug (PED) use. What they fail to realize, though, is that drugs take you above and beyond your genetic limit. These people are already genetic freaks of nature, and taking drugs just makes them that much better. You can’t take someone with garbage genetics, have him shoot up for years and bust his ass in the gym, and expect him to be Mr. Olympia. Just like you can’t take someone with garbage genetics and the wrong proportions, inject them with PEDs, and expect them to do well in powerlifting and Strongman competitions. The genetic potential is already there in these athletes, and PEDs take them above and beyond what is naturally possible.

Strength and Mortality

Finally, to round this out, there is a correlation between strength and mortality. In a sample of 8,762 men between the ages of 20 and 80, it was found that muscular strength was inversely and independently associated with death from all causes and cancer, even after adjusting for cardiorespiratory fitness and other possible confounders. From the discussion of the paper:

The analysis on the combined effects of muscular strength and cardiorespiratory fitness with all cause mortality showed that the age adjusted death rate in men with high levels of both muscular strength and cardiorespiratory fitness was 60% lower (P<0.001) than the death rate in the group of unfit men with the lowest levels of muscular strength. These results highlight the importance of having at least moderate levels of both muscular strength and cardiorespiratory fitness to reduce risk of death from all causes and cancer in this population of men.

The point of bringing this paper up is that Caucasians and Asians are stronger than blacks and also live longer, just like the correlation between IQ and life expectancy. Since men with higher levels of strength live longer than men with lower levels of strength, this strengthens my hypothesis about strength-based competitions and their racial mix. Caucasians and East Asians, who have higher IQs than blacks, are also stronger than them on average, which also correlates with life expectancy.

(For more information see Steve Sailer’s post on West African and East Africans in sprinting and distance running as well as Razib Khan’s post on West Africans and their domination of sprinting competitions.)

Conclusion

HBD is evident in all of our lives. Though many of us don’t bring it up, it’s evident everywhere, from the sports we watch to everyday life. The reason there are racial disparities in the upper echelons of professional sports has to do with muscle fiber typing and who is genetically predisposed to do well in these competitions. West Africans dominate in sprinting and other competitions where they are able to use their longer limbs and fast twitch fibers, whereas Caucasians and Asians dominate in strength sports due to their limb length and slow twitch fibers. Professional sports prove what is evident in our everyday lives, and, at least subconsciously, the average person sees this.

The Concept of “More Evolved”: A Reply to Pumpkin Person

1000 words

Recently, PumpkinPerson has been stating that one population can be ‘more evolved‘ than another, which doesn’t make any biological sense. PP’s basic thesis is that since we are the last branch on the tree compared to the lifeforms that came before Homo sapiens, we are ‘more evolved’ than the other organisms on the planet. I get where he’s coming from; he’s just extremely wrong.

Organisms evolve to better adapt to their environment through Natural Selection (NS). NS does select for positive traits; however, evolution is not a linear process. PP also claims that “evolution is progressive“. That couldn’t be further from the truth. Stating that evolution is “progressive” implies that evolution through NS is progressing toward an “endgame”. But we know there is no “endgame” in evolution; evolution just happens.

Evolution is not progressive. NS may even select for traits not suitable for an environment, as NS is “not all-powerful”. Selecting for one advantageous trait may change another trait for the worse. (See “Misconceptions on Evolutionary Trees“ from Berkeley, which describes the mistake PP made.)

PP asks “Who is most evolved?”

No organism is “more evolved” than another. NS selects for traits that are advantageous in the current environment (and it can select for negative traits as well). Due to this, the word “superior” and the phrase “more evolved” are meaningless, whether comparing human races to one another or humanity as a whole to the other lifeforms on the planet.

PP quotes Rushton as saying

“One theoretical possibility is that evolution is progressive, and that some populations are more advanced than others.” – J.P. Rushton, 1989

We know that evolution is not progressive, so it follows that some populations are not more advanced than others. Genetic superiority can be measured subjectively, but not objectively, as each organism has different strengths and weaknesses due to its environment.

PP then implies that bacteria are “less evolved” than we are. However, with recent breakthroughs in the HMP (Human Microbiome Project), we see the huge role that gut microbiota play in communicating with the brain, how antibiotics that kill gut microbiota also stop the growth of new brain cells, and how altered gut microbiota cause obesity. With the amazing uses and benefits we keep finding involving gut microbiota and human health, can we really say that we’re “more evolved” than these organisms when they account for such a huge amount of positive benefits for us as a whole?

For another example, cows, using their own genes alone, wouldn’t be able to extract the nutrients from the fiber in the food they eat. They would need special enzymes to break down the cell walls. Evolving the genes to do this, though, would take an extremely long time. This is where gut microbiota come in. Trillions of microbes live in the cow’s four stomach compartments. The food is passed back and forth between the mechanical grinding of the cow’s mouth and the microbes in its gut, and the nutrients are extracted by the microbes that way.

In this instance, is a cow superior to its gut microbes if those microbes make it possible for the cow to digest its food?

PP then asks “Does more evolved mean superior?”

No, it doesn’t. There is no way to quantify this, as evolution is not progressive. Furthermore, saying that one organism is “more evolved” than another doesn’t make any sense since, as noted earlier in this article, each organism is suited to the environment it evolved in through NS.

PP then says that he prefers a 3-race model, when a 5-race model makes more sense. These populations are “Africa, Europe, Asia, Melanesia and the Americas.”

I assume he would put ‘Natives’ with Asian Mongoloids, but ‘Natives’ have been genetically isolated in the Americas for so long that they formed their own distinct clade, with no introgression from other populations, while other populations have admixture from other parts of the world:

Significant genetic input from outside is not noticed in Meso and South American Amerindians according to the phylogenetic analyses; while all world populations (including Africans, Europeans, Asians, Australians, Polynesians, North American Na-Dene Indians and Eskimos) are genetically related. Meso and South American Amerindians tend to remain isolated in the Neighbor-Joining, correspondence and plane genetic distance analyses.

Hence, a 5-race model makes more sense, as these populations show genetic differentiation from one another.

Still, others may take the concept of “more evolved” and believe that one race is “more evolved” than another. That’s another wrong statement.

The assumption here is that evolution “stopped” for populations that evolved closer to the equator due to “ease of lifestyle” (life is easy nowhere). That makes no evolutionary sense either. If it were so, how did Africans evolve the sickle cell trait? Evolution is a constant, ongoing process and does not ‘speed up or slow down’ based on the environments in which ancestral evolution occurred.

Moreover, r/K selection theory does dictate fast and slow life history strategies, but it has nothing to do with ‘fast or slow evolution within human populations’.

To state that evolution ‘is faster or slower’ in certain populations of humans is like saying ‘evolution has slowed for man since 50kya’ as anti-human-evolutionists have said:

“Something must have happened to weaken the selective pressure drastically. We cannot escape the conclusion that man’s evolution towards manness has suddenly come to a halt.” – Ernst Mayr

“There’s been no biological change in humans in 40,000 or 50,000 years. Everything we call culture and civilization we’ve built with the same body and brain.” – Stephen Jay Gould

Stating that evolution occurs faster in certain populations is the complete opposite of the “evolution stopped for humans 50kya” camp; we know the latter is not true, as evolution has in fact sped up in the last 10kya.

To say that one organism, or population for that matter, is more evolved than another makes no biological sense. Each organism is suited to its own environment and where it evolved. Even then, different organisms evolve different traits depending on what they have to do in that ecosystem to survive. Darwin’s finches are a perfect example of that.

Misconceptions on Calories In and Calories Out

1750 words

“Eat less and move more!!! That’s how you lose weight!” What people don’t understand about human metabolism and homeostasis is that when caloric reduction occurs, the body drops its metabolism to match the amount of kilocalories (kcal) it is receiving. Thus, weight will plateau and you will need to further decrease caloric consumption to lose more weight. In this article, I will go through what a calorie is, common misconceptions about Calories In and Calories Out, the reasons for metabolic slowdown, the thermodynamics that people who don’t understand this research invoke whenever it’s discussed, and finally the starvation experiments that prove metabolic slowdown occurs during a decrease in caloric intake and that this slowdown persists after the diet is over.

A kilocalorie is the heat required to raise 1 kilogram of water by 1 degree Celsius. This is the definition in use whenever people say ‘Calorie’.
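To make the unit concrete, here is a minimal conversion sketch (my own illustration, not from any source cited here; the factor of roughly 4184 joules per kilocalorie is the standard thermochemical one):

```python
# 1 kcal is the heat needed to raise 1 kg of water by 1 degree Celsius,
# which is approximately 4184 joules.
KCAL_TO_JOULES = 4184

def kcal_to_kilojoules(kcal):
    """Convert dietary Calories (kcal) to kilojoules."""
    return kcal * KCAL_TO_JOULES / 1000

# A 2000 kcal daily intake expressed in kilojoules:
print(kcal_to_kilojoules(2000))  # 8368.0
```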

Misconceptions on kcal in/kcal out

  1. One of the biggest misconceptions people have about Calories In/Calories Out is that these variables are independent of each other. However, they are extremely dependent variables. When you decrease Calories In, your body decreases Calories Out. Basically, a 20 percent reduction in kcal will result in a 20 percent reduction in metabolism, with the end result being minimal weight loss.
  2. The next big assumption people have about Calories In and Calories Out is that the Basal Metabolic Rate (BMR) remains stable. Measuring caloric intake is simple; measuring caloric output is a much more complicated process. Whenever Total Daily Energy Expenditure (TDEE) is spoken of, that involves the BMR, the thermic effect of food, non-exercise activity thermogenesis (the energy expenditure of all activities sans sports), excess post-exercise oxygen consumption (EPOC, a measurably increased rate of oxygen intake following increased oxygen depletion), as well as exercise. The TDEE can increase or decrease by as much as 50 percent depending on caloric intake as well as the aforementioned variables.
  3. The third misconception is that we have conscious control over what we eat. We decide to eat when we are hungry (obviously), but numerous hormonal factors dictate when to eat and when to stop. We stop eating when we are full, which is hormonally mediated. Like breathing, the regulation of body fat is under automatic control. Just as we don’t have to remind ourselves to breathe or remind our heart to beat, we don’t need to remind ourselves to eat. Thus, since hormones control both Calories In and Calories Out, obesity is a hormonal, not a caloric, disorder.
  4. The fourth misconception is that fat stores are essentially unregulated. However, every single system in the body is regulated. Height increases come from growth hormone; blood sugar is regulated by insulin, glucagon, and numerous other hormones; sexual maturation is regulated by testosterone and estrogen (as well as the hormone leptin, which I will return to later); body temperature is mediated by thyroid-stimulating hormone, among numerous other biologic factors. Yet we are told that fat storage is unregulated. This is false. The best-researched hormone regulating fat storage that we know of is leptin, discovered in 1994. So if hormones dictate fat gain, obesity is a hormonal, not a caloric, disorder.
  5. And the final misconception is that a calorie is a calorie. This implies that the only important variable in weight gain is caloric intake, and thus all foods can be reduced to how much caloric energy they contain. But a calorie of potatoes doesn’t have the same effect on the body as a calorie of olive oil. The potatoes will increase the blood glucose level, provoking a response from the pancreas, whereas olive oil will not. Olive oil is transported directly to the liver and has no chance to induce an insulin response, so there is no increase in insulin or glucose.

All five of these assumptions have been proven false.
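As a toy illustration of the first two misconceptions (my own sketch, not a model from any study cited here), assume expenditure adapts in proportion to the cut in intake, which is the pattern the starvation experiments report:

```python
def adaptive_tdee(baseline_tdee, intake, adaptation=1.0):
    """Total Daily Energy Expenditure when Calories Out tracks Calories In.

    adaptation=1.0 means a 20 percent cut in intake produces a
    20 percent drop in expenditure; adaptation=0.0 recovers the
    naive model in which expenditure stays fixed.
    """
    cut = max(baseline_tdee - intake, 0) / baseline_tdee
    return baseline_tdee * (1 - adaptation * cut)

baseline = 3000   # kcal/day before the diet
intake = 2400     # a 20 percent reduction

naive_deficit = baseline - intake                        # 600 kcal/day
real_deficit = adaptive_tdee(baseline, intake) - intake  # 0 kcal/day
print(naive_deficit, real_deficit)
```

Under the naive independent-variables model the dieter runs a 600 kcal/day deficit forever; once expenditure adapts, the deficit collapses and weight plateaus.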

The relationship between weight gain and caloric consumption has recently been examined. Ladabaum et al. (2014) studied trends in obesity, abdominal obesity, physical activity, and caloric consumption in US adults from 1988 to 2010. They discovered that the obesity rate increased at 0.37 percent per year while caloric intake remained virtually the same.

The Law of Thermodynamics

The first law of thermodynamics states that energy cannot be created or destroyed in an isolated system (this is important). People often invoke this law to support the Calories In and Calories Out model. Dr. Jules Hirsch says in this NYT article:

There is an inflexible law of physics – energy taken in must exactly equal the number of calories leaving the system when fat storage is unchanged. Calories leave the system when food is used to fuel the body. To lower fat content – reduce obesity – one must reduce calories taken in, or increase output by increasing activity, or both. This is true whether the calories come from pumpkins or peanuts or pâtés de foie gras.

To quote MD Jason Fung, author of The Obesity Code:

But thermodynamics, a law of physics, has minimal relevance to human biology for the simple reason that the human body is not an isolated system. Energy is constantly entering and leaving. In fact, the very act we are most concerned about-eating-puts energy into the system. Food energy is also excreted from the system in the form of stool. Having studied a full year of thermodynamics in university, I can assure you that neither calories nor weight gain were mentioned even a single time. (Fung, 2016: 33)

We assume with the calorie-balancing-scale model that fat gain or fat loss is unregulated; however, no system in the body is unregulated like that. Hormones tightly regulate all bodily functions, and body fat is no exception. The body actually has numerous ways to control body fat. The problem with fat accumulation is the distribution of energy: too much energy is diverted to fat creation as opposed to body-heat production. Most of this is under automatic control, except exercise (and even then, there is a genetic basis for motivated exercise). We can’t decide whether to allocate calories to nail production or to increased stroke volume. These metabolic processes are almost impossible to measure, and thus most assume they are relatively constant. In particular, Calories Out is assumed not to change in response to Calories In; we treat them as independent variables. Reducing Calories In only works if Calories Out remains constant. What we find instead is that a sudden reduction in Calories In leads to a similar reduction in Calories Out, and no weight is lost as the body balances its energy budget.

Starvation experiments

In 1919, a landmark study was carried out by Francis Benedict. The volunteers agreed to a semi-starvation diet ranging from 1400 to 2100 kcal, approximately 30 percent below their usual intake. The question was whether decreased caloric intake leads to a decrease in metabolism. The results were shocking.

The subjects experienced a 30 percent reduction in metabolism, their caloric expenditure dropping from an initial 3000 kcal to 1950 kcal. A 30 percent reduction in kcal consumed resulted in a 30 percent decrease in metabolism. The First Law of Thermodynamics is not broken.

Towards the end of WWII, Dr. Ancel Keys wanted to improve the understanding of starvation to better help Europe after the war. His volunteers were normal men, with an average height of 5 feet 10 inches and an average weight of 153 pounds; Keys wanted to see the effects of a semi-starvation diet on men of normal weight. For the first three months of the study, they were given slightly over 3000 kcal. Over the next six months, they were given 1570 kcal, and eventually some men were decreased to less than 1000 kcal a day. They were given a diet high in carbs with little to no animal meat, as that was the condition in Europe at the time. Moreover, they also had to walk 22 miles a week as exercise. Again, the results were shocking.

Dr. Keys showed that they had a 40 percent decrease in metabolic rate. The body decreased its metabolism to match the amount of calories consumed. The men showed a 20 percent decrease in strength, a significant decrease in heart rate (55 to 35 beats per minute), a 20 percent decrease in stroke volume, a drop in body temperature to 95.8 degrees Fahrenheit (which makes sense, since less caloric consumption means less energy for the body to convert into heat), a halving of physical endurance, and a drop in blood pressure; they became tired and dizzy, and their hair and nails grew extremely brittle. They couldn’t stop thinking about food. Some of them wrote cookbooks; others dreamed about food. They became obsessed with eating. All of these effects trace directly back to the decrease in caloric consumption, as the amount of heat produced by the body dropped along with caloric intake. In sum, the body responds to a decrease in caloric intake by dropping its metabolism.

Metabolic slow down

Recent data on decreased energy expenditure due to dieting has come from contestants on the show The Biggest Loser, who were followed for six years after the show ended. Fothergill et al. (2016) showed that after six years, most contestants had regained the weight they originally lost, but their metabolism was still suppressed by roughly 600 kcal per day.

The mean metabolic adaptation had increased to 500 kcal per day, which explains why resting metabolic rate (RMR) remained 700 kcal per day below the baseline level despite a 90 lb body-weight regain. The researchers even noted that this large metabolic difference couldn’t be explained by the different calorimeter used at the end of the six-year period.

Substantial weight loss induces biological changes that promote weight gain.

Moreover, after a period of dieting, your brain panics and thinks it’s starving. During this time, the production of the hunger hormone ghrelin increases. Levels of this hormone rise right before a meal and steadily decrease after. Ghrelin is one of the many hormones that control when we’re hungry, and this is one of the many reasons why diets fail and do not work long term.

Our bodies have homeostatic mechanisms that push weight back toward baseline whenever caloric consumption is increased or decreased. The main cause is the body-weight set-point, which I will cover in a future article.

And a quote from Sandra Aamodt’s book “Why Diets Make Us Fat“:

“Leibel finds that metabolic suppression persists in dieters who have kept weight off for one to six years, so he scoffs at claims that the successful weight loss story disproves his ideas. “If you talk to people who’ve done it – not the studies, but people who actually manage to lose weight and keep it off – they’ll tell you what I’m telling you,” he says: that the only way to achieve this goal was to allow themselves to be hungry all the time while increasing their physical activity substantially. Indeed, his point is supported by data on the eating and exercise habits of people listed in the National Weight Control Registry, who have lost at least thirty pounds and kept it off for one year. A calorie calculator says that Dennis Asbury should have needed 2,100 calories to maintain his weight at 150 pounds, but instead he found that he needed to eat 400 to 500 calories less than that. Such metabolic suppression is the difference between being within the defended range and being below it. Many people blame others for eating too much or exercising too little, assuming incorrectly that both are under voluntary control, but it’s much harder to justify holding people responsible for diet-induced changes in the way the body burns energy.” (Aamodt, 2016, pg. 68)

Conclusion

The fact of the matter is, kcal in and out is completely misunderstood due to a misunderstanding of human metabolism. As we decrease our caloric intake, our body adjusts its metabolism down to match the amount of kcal we are currently consuming. This is why Calories In and Calories Out does not tell the whole story. Our body constantly fights to maintain what is normal: its set-point. When thrown out of what the brain considers ‘normal’, the brain, through the hypothalamus, does whatever it can to get us back to the set-point. Thus, obesity is a hormonal, not a caloric, disorder.

The Evolution of Violence: A Look at Infanticide and Rape

1700 words

The pioneer of criminology was a man named Cesare Lombroso, an Italian Jew (a leftover remnant from the Roman days), who had two central theories: 1) that criminal behavior originated in the brain and 2) that criminals were an evolutionary throwback, a more primitive type of human. Lombroso felt strongly about the rehabilitation of criminals while at the same time believing in the death penalty for “born criminals”. With new advances in criminology and new insights into the brain, it looks like Lombroso was right with his theory of born criminals.

Why are you 100 times more likely to be killed on your birthday? Why are children 50 times more likely to be murdered by a stepfather than by their biological father? Why do some parents kill their children? Finally, why do men rape not only strangers, but also their wives? All of these questions can be answered with evolutionary psychology.

Evolutionarily speaking, antisocial and violent behavior wasn’t a random occurrence. When these actions occurred tens of thousands of years ago, it was because resources were acquired through them. Thus, we can see some modern criminal acts as resource competition. The more resources one has, the easier it is for him to pass his genes on to the next generation (a big driver of violence). In turn, women are more attracted to males who can provide resources and protection (those who were more antisocial and violent). This also explains prison romances, in which women get into romances with murderous criminals, since they are attracted to the violence (protection) and resources (theft).

The mugger who robs for a small amount of money is increasing his odds of resource acquisition. Drive-by shootings in violent neighborhoods increase the status of those who survive the shootout. What looks like a simple brawl over nothing may be one man attempting to increase his social dominance. All of these actions have evolutionary causes. What drives these actions are our ‘Selfish Genes’.

The more successful genes are more ruthlessly selfish in their struggle for survival, which then drives individual behavior. The individual behaviors that occur due to our selfish genes may be antisocial and violent in nature, which modern society frowns upon. The name of the game is ‘fitness’: the number of children you can have in your time allotted on Earth. This is all that matters to our genes. Even the accomplishments you think of, such as completing college or amassing capital, fall back on fitness; through them, your fitness, and the chance that your genetic lineage passes on to the next generation, is greatly enhanced.

Biological fitness can be enhanced in one of two ways. You can have as many children as possible, giving little parental care to each, or you can have fewer children but show more attention and care to them. This is known as r/K Selection Theory. Rushton’s r/K Selection Theory complements Dawkins’ Selfish Gene theory in that the r-strategist maximizes his fitness by having as many children as possible, while the K-strategist increases his fitness by having fewer children than the r-strategist but showing more parental attention. There are, however, instances in which humans kill children, whether it’s a mother killing a newborn baby or a stepfather killing a child. What are the reasons for this?

Killing Kids

The risk of being a homicide victim is highest in the first year of life. Why? Canadian psychologists Daly and Wilson demonstrated an inverse relationship between degree of genetic relatedness and the likelihood of being a victim of homicide. They discovered that the offender and victim are genetically related in only 1.8 percent of all homicides. Therefore, around 98 percent of all murders are killings of people who do not share the killer’s genes.

Many stories have been told about ‘wicked stepparents’ in numerous myths and fairytales. But, as we know, a lot of stories have some basis in reality. Children of stepparents are 40 times more likely to suffer abuse at the hands of a stepparent. Unrelated people living together are more likely to kill one another. Even adoptions are more successful when the adopting parents view the child as genetically similar to themselves.

In this study carried out by Maillart et al., it was discovered that the average age at offense for maternal filicide was 29.5 years for the mother and 3.5 years for the child. Bourget, Grace, and Whitehurst (2007) showed that a risk factor for infanticide was a second child born to a mother under 20 years of age. The reasoning is simple: at a younger age the mother is more fertile and thus more attractive to potential mates, while the older the woman is, the more sense it makes to hold on to the genetic investment, since it’s harder to make up for the loss late in her reproductive life.

Genetic relatedness, fitness, and parental investment show, in part, why filicides and infanticides occur.

Raping Your Wife

There are evolutionary reasons for rape as well. The rape of a non-relative can be looked at as the ultimate form of ‘cheating’ in this selfish game of life. One who rapes doesn’t have to acquire resources in order to attract a mate; he can just go and ‘take what he wants’ and attempt to spread his genes to the next generation through non-consensual sex. It’s known that rape victims have a higher chance of getting pregnant, with 7.98 percent of rape victims becoming pregnant (news article). One explanation is that the rapist may be able to detect how fertile a woman is. Moreover, rapists are more likely to target fertile rather than infertile women.

One rapist whom Adrian Raine, author of The Anatomy of Violence, interviewed said that he specifically chose ugly women to rape (Raine, 2013: 28). He says that he’s giving ugly women ‘what they want’, which is sex. There is a belief that women actually enjoy sex, and even orgasm, during the rape, even though they strongly resist and fight back during the attack. Reports of orgasm during rape are around 5 to 6 percent (Raine, 2013: 29), but the true number may be higher, since most women are embarrassed to say that they orgasmed during a rape.

Men, as we all know, are more likely than women to engage in no-strings-attached sex. This is due to the ‘burden’ of sex: children. Women are more likely to carefully select a partner who has numerous resources and the ability to protect the family. Men don’t have the burden of sticking around to raise the child.

Men are more likely to find sexual infidelity more upsetting, while women are more likely to find emotional infidelity more distressing. This data on Americans also held true in South Korea, Germany, Japan, and the Netherlands. Men are better than women at detecting infidelity and are more likely to suspect cheating in their spouses (Raine, 2013: 32). The unconscious reason: a man doesn’t want to raise a child who is not genetically related to him.

But this raises another question: why would a man rape his wife? One reason is that when a man discovers his spouse has been unfaithful, he wants to inseminate her as quickly as possible.

There has never in the history of humankind been one example of women banding together to wage war on another society to gain territory, resources or power. Think about it. It is always men. There are about nine male murderers for every one female murderer. When it comes to same-sex homicides, data from twenty studies show that 97 percent of the perpetrators are male. Men are murderers. The simple evolutionary reason is that women are worth fighting for. (Raine, 2013: 32)

A feminist may look at this stat and say “MEN cause all of the violence, MEN hurt women” and attempt to use this data as ‘proof’ that men are violent. Yes, we men are violent, and there is an evolutionary basis for it. However, what feminists who push the ‘all sexes are equal’ card don’t realize is that when they say ‘men are more likely to be murderers’ (which is true), they are actively accepting biological differences between men and women. Most of these differences in crime come down to testosterone. I would also assume that men are more likely to carry the ‘warrior gene’, otherwise known as MAOA-L, which ups the propensity for violence.

The sociobiological model suggests that poorer people kill due to lack of resources. And one reason that men are way more likely to be victims of homicide is because men are in competition with other men for resources.

Going back to the violence against stepchildren that I alluded to earlier, aggression towards stepchildren can be seen as a strategic way of motivating unwanted, genetically dissimilar others out of the home so that they do not take up precious resources meant for the stepfather’s own future offspring (Raine, 2013: 34).

Women also have ways to increase their fitness, much of which runs through sexual selection. Women are known to be ‘worriers’: they rate dangerous and aggressive acts as riskier than men do, are more fearful of bodily injury, and are more likely to develop phobias of animals. In these situations, women are protecting themselves and their children (born or unborn) by maximizing their chances of survival through greater fearfulness. This helps explain why women are less physically violent than men and why those murder stats are so heavily skewed towards men: biology.

Women compete for their genetic interests with beauty and childbearing. The more beautiful the woman, the better the resources she can acquire from a male, ensuring a healthy life for the offspring.

Evolutionary psychology can help explain the differences in murder between men and women. It can also explain why young mothers kill their children and why stepparents are so abusive toward, and more likely to murder, stepchildren. Of course, a social context is involved, but we need to look at evolutionary causes for what we think we can simply explain, because it’s more often than not more complex than we could imagine. And that complexity is our Selfish Genes doing anything possible to reproduce more copies of themselves through their vehicle: the human body.

Morality and Altruism

Moral reasoning and altruism evolved together. Both of these traits are beneficial to human survival, so they got selected for in human populations. I will show today how moral reasoning and altruism evolved side by side to increase fitness. 

As discussed previously in my post The Evolution of Morality, moral reasoning is a post hoc search for reasons to justify judgments we have already made. Moral reasoning evolved, according to Jonathan Haidt (2012), because of a bigger brain. Those with bigger brains can better process the environment around them, increasing fitness for that population. As the brain grows more complex, more sophisticated thinking emerges. Since rapid and automatic processes drive our brains, populations with bigger brains show more cognitive sophistication, due to more cortical neurons and larger overall brain areas, which lead to increases in intelligence.

Both altruism and morality evolved hand in hand. Post hoc moral reasoning helps altruistic acts occur. Since “judgement” and “justification” are separate processes, one does not have to justify a moral act, instead relying on the innate judgement that it is the most beneficial act. The “judgement” that’s made is really the *genes* doing what is best to survive. What survives when self-sacrifice occurs obviously isn’t the body, the vehicle for the genes. The gene only cares about the proliferation of more copies of itself.

Darwin said that those who have the altruistic trait are more evolutionarily successful than those who do not have it. Thus, those populations that have more alleles for altruism will be more evolutionarily successful than those populations without them.

Darwin held that morality evolved in humans because it was a beneficial trait for human social cohesiveness. Without even having a good reason for moral reasoning, just going on gut instinct (which, in these situations, I believe is our selfish genes at work), altruistic acts can occur without a second thought.

If a trait is beneficial to a population, then it will be selected for in that group. Moral reasoning was a trait that was selected for, since those with the trait could better aid the group they were a part of by being ‘selfless’.

For instance, when animals care for their babes, we don’t say that it’s ‘animal culture’ that causes them to care for their offspring. It’s obviously a trait evolved over time. When an immediate threat occurs, the animal will engage in what looks to be a ‘selfless act’, when in actuality the *selfish genes* are making sure the *copies* of themselves survive. 

All human traits are heritable. So for those blank slaters who believe that all of our behavioral traits are molded by the environment: there is a considerable genetic component involved, and ignoring it takes us further away from the truth of why altruism and morality arose in human populations.

We can see some altruistic-like traits in nonhuman animals, for instance in bees. The worker bees inherit the queen’s matrigenes, which direct the altruistic behavior of the worker bees toward their female kin. These genes inherited from the queen have the worker bees forgo their own reproduction to help rear their siblings. So when the queen dies, the workers can begin to selfishly compete with one another to lay eggs; this competitive behavior is inherited from the father.

Emotional intelligence can also be said to be a form of social intelligence, though it has recently been argued that EQ is a mix of high IQ and the Big Five personality traits. Traits that enhanced human social cohesiveness got selected for. For instance, in Eurasia, the Big Five personality traits evolved because those who were more altruistic were better able to survive the harsh Eurasian winters, increasing the frequency of altruistic alleles.

Moral reasoning is the ‘gut instinct’: the person *knows* something is ‘wrong’, they just can’t explain it rationally. This human behavior has an evolutionary basis; it increased human social cohesiveness and eventually led to our complex societies. Altruism would not have evolved without moral reasoning (even though the reasoning we construct is post hoc, justifying judgments we have already made).

Thus, when speaking of morality with someone attempting to figure out the truth, you will hear nonsensical answers. But thinking of moral reasoning as a skill that evolved to further our own agenda, moral reasoning makes a lot more sense. By keeping your eye on the intuition (what their *genes* want), you can see a person’s motivations for holding the views they do, even when they cannot think of a reason for their belief.

So, I’m proposing that moral reasoning evolved to increase human fitness and social cohesiveness, going hand-in-hand with altruism.

Without these two traits, we wouldn’t be able to build these complex societies; moral reasoning (post hoc or not) and altruism are two of the driving forces behind both our biological and our societal evolution.

Nordicist Fantasies: The Myth of the Blonde-Haired, Blue-Eyed Aryans and the Origins of the Indo-Europeans

750 words

Nordicists say that the Aryans, the Indo-Europeans, had blonde hair and blue eyes. However, recent genetic evidence shows that the Indo-European languages originated on the Russian steppe, with the Yamnaya people. The originators of the Indo-European languages weren’t blonde-haired and blue-eyed, but dark-haired and dark-eyed. Better known as the ‘Kurgan hypothesis’, this is now the leading theory of the origin of the Indo-European people.

Haak et al. (2016) showed that at the beginning of the Neolithic period in Europe (approximately 7 to 8 kya), a closely related group of farmers appeared in Germany, Hungary, and Spain. These ancient populations were different from the indigenous peoples of the Russian steppe, the Yamnaya, who showed high affinity with a 24000-year-old Siberian sample. Approximately 5 to 6 kya, farmers throughout Europe had more hunter-gatherer ancestry than their predecessors from the early Neolithic, while the Yamnaya of the Russian steppe were descended from Eastern European hunter-gatherers but also from a population with Near East ancestry. Further, haplogroups R1b and R1a migrated into Europe 5000 years ago.

The Late Neolithic Corded Ware culture of Germany traces approximately 75 percent of its ancestry to the Yamnaya, confirming a massive migration from Eastern Europe into the heartland of the continent 4500 years ago. This Yamnaya ancestry persisted in all of the Europeans sampled up until approximately 3000 years ago and is common in all modern-day Europeans. The researchers conclude that this provides evidence for a steppe origin of at least some of the Indo-European languages of Europe.

As mentioned above, Haber et al. (2016) show that the Yamnaya share distant ancestry with Siberians, which is probably the source of one of the three ancient populations that contributed to the modern-day European gene pool (Ancient North Eurasians, West European hunter-gatherers, and Early European farmers from Western Asia), with the Yamnaya themselves being the fourth population.

Olade et al. (2015) show that, since the Basque people speak a pre-Indo-European language, the expansion of Indo-European languages is unlikely to have begun during the early Neolithic (7 to 8 kya). Like Haak et al., they conclude that this agrees with the hypothesis that the Indo-European languages came out of the East, from the Russian steppe, around 4500 years ago, spreading from there into Western Europe.

Finally, it is known that the Yamnaya had dark skin (relative to today’s Europeans), dark hair, and dark eyes. This directly contradicts the Nordicist fantasy of the blue-eyed, blonde-haired Indo-Europeans, and the genetic evidence speaks against the claim directly:

For rs12913832, a major determinant of blue versus brown eyes in humans, our results indicate the presence of blue eyes already in Mesolithic hunter-gatherers as previously described. We find it at intermediate frequency in Bronze Age Europeans, but it is notably absent from the Pontic-Caspian steppe populations, suggesting a high prevalence of brown eyes in these individuals.

Further, the Yamnaya were a tall population. Since the Yamnaya had a greater genotypic height, the greater average height of Northern Europeans is consistent with their larger share of Yamnaya ancestry.

The Yamnaya herded cattle and other animals, buried their dead in mounds called kurgans, and may have created some of the world's first wheeled vehicles; some linguists say this nomadic population had a word for wheel. Their massive migration from the Russian steppe contributed large amounts of Ancient North Eurasian ancestry to today's Europeans, making the Yamnaya the fourth ancient population contributing to modern-day Europeans.

Modern-day genetic testing is shattering all of these myths that are told about the origins of Europeans and Proto-Indo-European peoples and languages. The ACTUAL basis for most PIE languages is from the Russian steppe, from a relatively (to modern Europe) dark-skinned, dark-haired, and dark-eyed people who then spread into Europe 4500 years ago.

The Nordicist fantasies of the Aryans as the originators of the Proto-Indo-European languages have been put to rest. The idea was originally proposed based on myths and stories, mostly from ancient Indo-European cultures situated thousands of miles away from the original Indo-Europeans (the Yamnaya).

The Kurgan Hypothesis, which places the homeland of the Proto-Indo-Europeans on the Pontic-Caspian steppe, is now the theory largely accepted by the scientific community. The Yamnaya people make a fourth founding population for Europeans, alongside West European hunter-gatherers, Ancient North Eurasians, and Early European Farmers.

The Evolution of Morality

Summary: Moral reasoning is just a post hoc search for reasons to justify the judgments people have already made. When people are asked why they find certain things morally wrong, they often say they cannot think of a reason but still think it is wrong; this has been verified by numerous studies. Moral reasoning evolved as a skill to further social cohesiveness and to further our social agendas. Even across different cultures, people of matching socioeconomic levels show similar moral reasoning. Morality cannot be entirely constructed by children based on their own understanding of harm; cultural learning must therefore play a bigger role than the rationalists had granted it. Larger and more complex brains also show more cognitive sophistication in making choices and judgments, consistent with my theory that larger brains underlie the ability to make correct choices as well as moral judgments.

The evolution of morality is a much-debated subject in the field of evolutionary psychology. Is it, as the nativists say, innate? Or is it, as the empiricists say, learned? Empiricists, better known as Blank Slatists, believe that we are born with a 'blank slate' and acquire our behaviors through culture and experience. In 1987, when John Haidt was studying moral psychology (then a branch of developmental psychology), the field was focused on a third answer: rationalism. Rationalism holds that children construct morality for themselves through social interaction with other children, figuring out right from wrong on their own.

Developmental psychologist Jean Piaget focused on the kinds of mistakes children make when water is poured between glasses of different shapes. He would, for example, pour water into two identical glasses and ask children which one held more water; they all said the two held the same amount. He then poured the water from one glass into a taller, narrower glass and asked again. Children younger than about 6 or 7 say the taller glass now holds more water, because the water level is higher. They don't yet understand that moving water to a taller glass doesn't change the amount of water. Even when parents try to explain why the amount is the same, children don't grasp it, because they are not cognitively ready. Only when they reach the right age and cognitive stage, and have played around with cups of water themselves, are they ready to understand that the amount doesn't change.

Basically, the understanding of the conservation of volume isn't innate, nor is it learned from parents. Children figure it out for themselves, but only when their minds are cognitively ready and they are given the right experiences.

Piaget then applied the lessons of the water experiment to the development of children's morality. He played a marble game with children, breaking the rules and playing dumb. The children then responded to his mistakes and corrected him, showing that they had the ability to settle disputes and to respect and change rules. This knowledge grew as the children's cognitive abilities matured.

Thus, Piaget argued that children's understanding of morality develops like their understanding of water conservation: it is self-constructed. You can't teach 3-year-olds the concept of fairness any more than you can teach them water conservation, no matter how hard you try. They figure it out on their own, through disputes and by doing things themselves, better than any parent could teach them, Piaget argued.

Piaget's insights were then expanded by Lawrence Kohlberg, who revolutionized the field of moral psychology by developing a set of moral dilemmas that he presented to children of various ages. In one example, a man breaks into a drug store to steal medication for his ill wife. Is that a morally wrong act? Kohlberg wasn't interested in whether the children said yes or no, but in the reasoning they gave when explaining their answers.

Kohlberg found a six-stage progression in children's reasoning about the social world that matched what Piaget observed in children's reasoning about the physical world. Young children judged right and wrong, for instance, by whether a child was punished for his actions: if an adult punished him, the act must have been wrong. Kohlberg called the first two stages the "pre-conventional level of moral judgment", which corresponds to the Piagetian stage at which children judge the physical world by superficial features.

During elementary school, most children move past the pre-conventional level and come to understand and manipulate rules and social conventions. Kids at this stage care more about social conformity and hardly ever question authority.

Kohlberg then discovered that after puberty, right when Piaget found that children become capable of abstract thought, some children begin to think for themselves about the nature of authority, the meaning of justice, and the reasoning behind rules and laws. Kohlberg considered such children "'moral philosophers' who are trying to work out coherent ethical systems for themselves", which was the rationalist account of morality at the time. His most influential finding was that the more morally advanced children were frequently those who had more opportunities for role-taking: putting themselves in another person's shoes and attempting to feel what the other feels from their perspective.

We can see how Kohlberg and Piaget's work can be used to support an egalitarian, leftist, individualistic worldview.

Kohlberg's student, Elliot Turiel, then came along. He developed a technique for testing moral reasoning that doesn't require verbal skill: he told children stories about other children who break rules and then asked them a series of yes-or-no questions. Turiel discovered that children as young as five normally say that the child in the story was wrong to break the rule, but that it would be fine if a teacher gave permission, or if the action occurred in another school with no such rule.

But when children were asked about actions that harmed people, they gave a different set of responses. Asked whether it is OK for a girl to push a boy off a swing because she wants to use it, nearly all of the children said it was wrong, even when told that a teacher said it was fine, and even if it occurred in a school with no rule against it. Thus, Turiel concluded, children recognize that rules preventing harm are moral rules, related to "justice, rights, and welfare pertaining to how people ought to relate to one another" (Haidt, 2012, pg. 11). Although children can't talk like moral philosophers, they were busy sorting social information in a sophisticated way. Turiel realized this was the foundation of all moral development.

There are many rules and social conventions that have no harm-based reasoning behind them: the numerous Old Testament laws about eating or touching the swarming insects of the earth, the many Christians and Jews who believe that cleanliness is next to Godliness, the Westerners who attach moral significance to food and sex. If Turiel is right, why do so many Westerners moralize actions that harm no one?

Due to this, it is argued that there must be more to moral development than children constructing their morality as they take the perspectives of others and feel their pain. There MUST be something beyond rationalism (Haidt, 2012, pg. 16).

Richard Shweder then came along and offered the idea that all societies must resolve a small set of questions about how to order society with the most important being how to balance the needs of the individual and group (Haidt, 2012, pg. 17).

Most societies choose a sociocentric (collectivist) model, which places the needs of the group ahead of the individual, while Western societies choose an individualistic model, which places the individual at the center. There is a direct relationship between consanguinity rates, IQ, and genetic similarity and whether a society is collectivist or individualistic.

Shweder thought that the concepts developed by Kohlberg and Turiel were made by and for people from individualistic societies. He doubted that the same results would hold in Orissa, India, where morality was sociocentric and there was no line separating moral rules from social conventions. Shweder and two collaborators came up with 39 short stories in which someone does something that would violate a commonly held rule in either the US or Orissa. They interviewed 180 children (aged 5 to 13) and 60 adults from Chicago, along with a matched sample of Brahmin children and adults from Orissa and 120 people from lower Indian castes (Haidt, 2012, pg. 17).

In Chicago, Shweder found very little evidence for socially conventional thinking. For plenty of the stories, Americans said that no harm or injustice had occurred and that the act was therefore fine. Basically, if a rule doesn't protect an individual from harm, then it can't be morally justified, which makes it just a social convention.

Turiel, though, wrote a long rebuttal essay pointing out that most of the stories Shweder and his collaborators used were trick questions. For instance, in Orissa fish is believed to stimulate a person's sexual appetite and is thus forbidden to widows: if a widow eats "hot" foods she will be more likely to have sex, which would anger the spirit of her dead husband and prevent her from reincarnating on a higher plane. Turiel argued that once you take into account such 'informational assumptions' about the way the world works, most of Shweder's stories really were moral violations to the Indians, harming people in ways Americans couldn't see (Haidt, 2012, pg. 20).

Jonathan Haidt then traveled to Brazil to test which force was stronger: gut feelings about important cultural norms, or reasoning about harmlessness. Haidt and a colleague worked for two weeks to translate his short stories, which he called 'harmless taboo violations', into Portuguese.

Haidt then returned to Philadelphia, trained his own team of interviewers, and supervised the data collection for the four subject groups in Philadelphia. Across the three cities, he used two levels of social class (high and low), and within each social class, two age groups: children aged 10 to 12 and adults aged 18 to 28.

Haidt found that the responses to the harmless taboo stories could not be attributed to the way he posed the questions or trained his interviewers, since he used two questions taken directly from Turiel's research and replicated Turiel's findings with them. Upper-class Brazilians looked like Americans on these stories (I would assume because upper-class Brazilians have more European ancestry). But in one story about a student breaking the school dress code and wearing normal clothes, most middle-class children thought it was morally wrong to do so. The pattern supported Shweder, showing that the size of the moral-conventional distinction varied across cultural groups (Haidt, 2012, pg. 25).

The second thing Haidt found was that people responded to the harmless taboo stories just as Shweder predicted: upper-class Philadelphians judged them to be violations of social conventions, while lower-class Brazilians judged them to be moral violations. Basically, well-educated people in all the cities Haidt tested were more similar to each other in their responses than to their lower-class neighbors.

Haidt's third finding was that all the differences remained even when controlling for perceptions of harm. He included a probe question at the end of each story: "Do you think anyone was harmed by what [the person in the story] did?" If Shweder's findings were caused by perceptions of hidden victims, as Turiel proposed, then Haidt's cross-cultural differences should have disappeared once the subjects who said yes were removed. But when he filtered out those who said yes, the cultural differences got BIGGER, not smaller. This was very strong evidence for Shweder's claim that morality goes beyond harm: most of Haidt's subjects said the harmless taboo violations were universally wrong even though they harmed nobody.
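The logic of that filtering step can be sketched as a simple data analysis. Everything below is invented for illustration (Haidt's actual dataset is not reproduced here); only the procedure follows the description above: drop everyone who perceived a victim, then re-compare the groups' condemnation rates.

```python
# Hypothetical sketch of Haidt's filtering analysis: remove subjects who
# perceived harm, then compare moral-condemnation rates across groups.
# All data below are invented for illustration.

subjects = [
    # (group, perceived_harm, judged_morally_wrong)
    ("upper_class", True,  True),
    ("upper_class", False, False),
    ("upper_class", False, False),
    ("lower_class", True,  True),
    ("lower_class", False, True),
    ("lower_class", False, True),
]

def condemnation_rate(rows, group):
    """Fraction of a group judging the harmless taboo act morally wrong."""
    group_rows = [r for r in rows if r[0] == group]
    return sum(r[2] for r in group_rows) / len(group_rows)

# Gap between groups before filtering
gap_before = (condemnation_rate(subjects, "lower_class")
              - condemnation_rate(subjects, "upper_class"))

# Filter out everyone who said somebody was harmed, then recompute
filtered = [r for r in subjects if not r[1]]
gap_after = (condemnation_rate(filtered, "lower_class")
             - condemnation_rate(filtered, "upper_class"))

# In Haidt's data, the cross-cultural gap got BIGGER after filtering
print(gap_before, gap_after)
```

With these toy numbers the gap widens after filtering, which is the pattern Haidt reported: condemnation of harmless acts survives even among subjects who admit no one was harmed.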

Shweder had won the debate. Turiel's findings had been replicated using Turiel's own methods, showing that those methods worked on people like Haidt himself: educated Westerners who grew up in an individualistic culture. But Haidt had also shown that morality varies across cultures and that, for most people, morality extends beyond issues of harm and fairness.

It was hard, Haidt argued, for a rationalist to explain these findings. How could children self-construct their moral knowledge about disgust and disrespect from their private analyses of harmfulness (Haidt, 2012, pg. 26)? There must be other sources of moral knowledge, such as cultural learning, or innate moral intuitions about disgust and disrespect, as Haidt would argue years later.

Yet there were surprises in the data. Haidt had written the stories carefully to remove all conceivable harm to other people. Even so, in 38 percent of the 1,620 times people heard a harmless offensive story, they claimed that somebody was harmed.

It was obvious in Haidt's sample of Philadelphians that the subjects were inventing post hoc fabrications. People normally condemned the action very quickly and didn't need much time to decide what they thought, but they took a long time to think up a victim for the story.

He also taught his interviewers to correct people when they made claims that contradicted the story. Even when subjects realized that the victim they had constructed in their heads was fake, they still refused to say the act was fine; instead, they continued searching for other victims. They just could not articulate a reason why it was wrong, even though they intuitively knew it was (Haidt, 2012, pg. 29).

The subjects were reasoning, but not in search of moral truth; they were reasoning in support of their emotional reactions. Haidt had found evidence for the philosopher David Hume's claim that moral reasoning is often a servant of the moral emotions. Hume wrote in 1739: "Reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them."

Judgment and justification are separate processes. Moral reasoning is just a post hoc search for reasons to justify the judgments people have already made.

The two most common answers to where morality comes from are that it's innate (the nativist position) or that it comes from childhood learning (the empiricist position, also known as "social learning theory"). The empiricist position, though, is incorrect:

  • The moral domain varies by culture. It is unusually narrow in Western, educated, and individualistic cultures; sociocentric cultures broaden the moral domain to encompass and regulate more aspects of life.
  • People sometimes have gut feelings (particularly about disgust and disrespect) that can drive their reasoning. Moral reasoning is sometimes a post hoc fabrication.
  • Morality can't be entirely self-constructed by children based on their understanding of harm. Cultural learning (social learning theory; Rushton, 1981) or guidance must play a larger role than the rationalists had given it.

(Haidt, 2012, pg 30 to 31)

If morality doesn’t come primarily from reasoning, then that leaves a combination of innateness and social learning. Basically, intuitions come first, strategic reasoning second.

If you think that moral reasoning is something we do to figure out truth, you’ll be constantly frustrated by how foolish, biased, and illogical people become when they disagree with you. But if you think about moral reasoning as a skill we humans evolved to further our social agendas – to justify our own actions and to defend the teams we belong to – then things will make a lot more sense. Keep your eye on the intuitions, and don’t take people’s moral arguments at face value. They’re mostly post hoc constructions made up on the fly crafted to advance one or more strategic objectives (Haidt, 2012, pg XX to XXI).

Haidt also writes on page 50:

As brains get larger and more complex, animals begin to show more cognitive sophistication – choices (such as where to forage today, or when to fly south) and judgments (such as whether a subordinate chimpanzee showed proper deferential behavior). But in all cases, the basic psychology is pattern matching.

It's the sort of rapid, automatic, and effortless processing that drives our perceptions in the Muller-Lyer illusion. You can't choose whether or not to see the illusion; you're just "seeing that" one line is longer than the other. Margolis also called this kind of thinking "intuitive".

This shows that moral reasoning came about with bigger brains, and that the choices and judgments we make evolved because they better ensured our fitness, not because of ethics.

Moral reasoning evolved to increase our fitness on this earth. The field of ethics justifies what benefits group and kin selection with minimal harm to the individual. That is, the explanations people produce through moral reasoning are just post hoc searches for justifications of gut feelings whose origin they cannot articulate.

Source: The Righteous Mind: Why Good People Are Divided By Politics and Religion

Science Proves It: Fat-shaming Doesn’t Work

2250 words

Milo Yiannopoulos published an article yesterday claiming that "fat-shaming works". It's clear that he didn't read the few papers he cites correctly, while disregarding the studies stating the opposite with the line "there is only one serious study". In fact, there is a growing body of research that says otherwise.

He first claims that, armed with the facts he is about to present, you can hurl all the insults you want at fat people and genuinely be helping them. This is objectively wrong.

In the study he cites, the researchers analyzed semi-structured interview data (semi-structured interviews are used when subjects are seen only once; the interviewer follows a set guide of topics in order to get reliable, comparable, quality data) on 40 adolescents who had lost at least 10 pounds and maintained the loss for at least a year. This guideline comes from Wing and Hill (2001), who define successful maintenance as keeping off a 10 percent weight loss for one year. Milo claims the abstract says that bullying by the peer group induces weight loss, but it's clear he didn't read the abstract correctly, because it says:

In contrast to existing literature, our findings suggest that primary motivating factors for adolescent weight loss may be intrinsic (e.g., desire for better health, desire to improve self-worth) rather than extrinsic. In addition, life transitions (e.g., transition to high school) were identified as substantial motivators for weight-related behavior change. Peer and parental encouragement and instrumental support were widely endorsed as central to success. The most commonly endorsed weight loss maintenance strategies included attending to dietary intake and physical activity levels, and making self-corrections when necessary.

Peer encouragement and instrumental support were the two variables key to success in adolescent weight-loss maintenance, not fat-shaming as he claims.

The same study found that obese people were more likely to lose weight around “life transitions,” like starting high school. In other words, people start to worry about how others will see them, especially when they need to make a good first impression. Fear of social judgement is key. So keep judging them. 

The study didn’t find that at all. In fact, it found the opposite.

Dr. Fred Pescatore says:

According to a new study, while most teens’ weight loss attempts don’t work, the ones who do lose weight successfully, quite simply, do it for themselves, rather than to please their (bullying) peers or (over-pressuring) parents.

He then cites a paper from UCLA stating that social pressure on the obese (fat-shaming) will lead to positive changes. Some of the pressures referenced are:

If you are overweight or obese, are you pleased with the way you look?

Are you happy that your added weight has made many ordinary activities, such as walking up a long flight of stairs, harder?

The average fat person would say no to the first two.

Are you pleased when your obese children are called “fatty” or otherwise teased at school?

Fair or not, do you know that many people look down upon those excessively overweight or obese, often in fact discriminating against them and making fun of them or calling them lazy and lacking in self-control?

Self-control has a genetic component. In a 30-year follow-up to the Marshmallow Experiment, those who lacked self-control during preschool had a higher chance of being obese 30 years later. Analyzing the self-reported heights and weights of those who participated in the follow-up (n = 164, 57 percent women), the researchers found that the duration of delay on the gratification task accounted for 4 percent of the variance in BMI between subjects, which the researchers deemed significant. They also found that each additional minute of delayed gratification predicted a 0.2-point reduction in adult BMI.
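The reported linear relationship can be sketched as simple arithmetic. The slope (0.2 BMI points per extra minute of delay) is the study's figure; the function and worked example are mine, for illustration only.

```python
# Sketch of the reported linear relationship between delay of
# gratification and adult BMI. The slope (-0.2 BMI points per extra
# minute of delay) is the study's reported figure; everything else is
# a hypothetical illustration.

SLOPE = -0.2  # BMI points per additional minute of delayed gratification

def predicted_bmi_difference(extra_minutes: float) -> float:
    """Predicted BMI difference relative to a zero-delay baseline."""
    return SLOPE * extra_minutes

# A child who waited 10 minutes longer is predicted to have a BMI
# about 2 points lower, on average, three decades later.
print(predicted_bmi_difference(10))  # -2.0
```

Note this is an average association across the sample, not a causal prediction for any individual child, and the delay task explained only 4 percent of the BMI variance.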

Why? Because people change their health and dietary habits to mimic that of their friends and loved ones, especially if they spend lots of time around them. Peer pressure encourages people to look like the people they admire and whose company they enjoy. Unless there’s a more powerful source of social pressure (say, fat shaming) from the rest of society, of course.

He's not even considering the genetic component. The increase in genetic similarity among friends relative to strangers is on the level of 4th cousins. Thus, since 'dietary habits are mimicked by friends and family', what's really going on is genotypic matching: that, not socialization, is why friends and family have similar diets.

There is only one serious study, from University College London, that suggests fat-shaming doesn’t work, and it’s hopelessly flawed. Firstly, it’s based on survey data — relying on fat people to be honest about their weight and diets. Pardon the pun, but … fat chance!

Moreover, the study defines “weight discrimination” much like feminists define “misogyny,” extending it to a dubiously wide range of behaviours, including “being treated poorly in shops.” The study also takes survey answers from 50-year olds and tries to apply them to all adults. But in what world do 20-year-olds behave the same way as older people?

The paper he cites, Perceived Weight Discrimination and Changes in Weight, Waist Circumference, and Weight Status, does say what he claims. However, the researchers note that because the sample was aged 50 and older, the findings may not apply to younger populations (nor to other ethnicities, this sample being 97.9 percent white). (Which you can tell he did not read, or if he did, he omitted this section.)

The researchers found that 5.1 percent of the participants reported being discriminated against on the basis of their weight. They discovered that those who experienced weight discrimination were more likely to engage in behaviors that promote weight gain, and were more likely to see increases in weight and waist circumference. Weight discrimination was also observed to be a factor in early-onset obesity.

Present research indicates that in addition to poorer mental-health outcomes, weight discrimination has implications for obesity itself. Rather than motivating people to lose weight, weight discrimination increases the risk for obesity. Sutin and Terraciano (2013) conclude that though fat-shaming is thought to have a positive effect on weight loss and maintenance, it is in reality associated with the maintenance of obesity. In this sample of over 6,000 people, those who experienced weight discrimination were 2.5 times as likely to become obese over the next few years, and obese subjects who experienced it were 3.2 times as likely to remain obese.
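Figures like "2.5 times as likely" are relative risks. How such a figure is computed from cohort counts can be sketched as follows; the counts below are invented, and only the two-by-two calculation itself is standard.

```python
# Sketch of how a relative-risk figure like "2.5 times as likely to
# become obese" is derived from cohort counts. The counts are invented
# for illustration; they are not the study's data.

def relative_risk(exposed_cases, exposed_total,
                  unexposed_cases, unexposed_total):
    """Risk in the exposed group divided by risk in the unexposed group."""
    risk_exposed = exposed_cases / exposed_total
    risk_unexposed = unexposed_cases / unexposed_total
    return risk_exposed / risk_unexposed

# Hypothetical: 50 of 200 subjects reporting weight discrimination became
# obese, versus 100 of 1000 who reported no discrimination.
rr = relative_risk(50, 200, 100, 1000)
print(rr)  # 0.25 / 0.10 = 2.5
```

A ratio of 1.0 would mean discrimination made no difference; anything above 1.0 means the discriminated-against group fared worse, which is the direction every study cited here found.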

Sutin et al (2014) also showed how weight discrimination can lead to “poor subjective health, greater disease burden, lower life satisfaction and greater loneliness at both assessments and with declines in health across the four years”.

Puhl and Heuer (2010) say that weight discrimination is not a tool for obesity prevention: stigmatization of the obese threatens health, generates health disparities, and, most importantly, interferes with effective obesity interventions.

Tomiyama (2014) showed that any type of fat shaming leads to an increase in weight and caloric consumption.

Schvey, Puhl, and Brownell (2011) found, in a sample of 73 overweight women, that those who watched a video in which weight discrimination occurred ate three times as many calories as those who did not see the video. The authors conclude that although people claim weight discrimination promotes weight loss, the results show it leads to overeating, directly challenging the (wrong) perception that weight discrimination is positive for weight loss.

Participants were from an older population, in which weight change and experiences of weight discrimination may differ relative to younger populations so findings cannot be assumed to generalize

Puhl and King (2013) show that weight discrimination and bullying during childhood can lead to “depression, anxiety, low self-esteem, body dissatisfaction, suicidal ideation, poor academic performance, lower physical activity, maladaptive eating behaviors, and avoidance of health care.”

I expect we’ll see more of these pseudo-studies, and not just because academics tend to be lefties. Like climate scientists before them, I suspect a substantial number of “fat researchers” will simply choose to follow the political winds, and the grant money that follows them, rather than seeking the truth.

He denies the negative implications of fat-shaming, disregarding the 'one study' (or so he claims) that shows the opposite of what he cited, and which he clearly didn't read fully. I also like how these studies become 'pseudo-studies' whenever they reach a conclusion he doesn't like. Really objective journalism there.

The reverse is also true. Just being around attractive women raises a man’s testosterone.

The researchers say that talking with a beautiful woman for five minutes led to a 14 percent increase in testosterone and a 48 percent increase in cortisol, the stress hormone.

Of course, this has its grounds in evolution. When two people are attracted to each other, they begin to mimic each other's movements and unconsciously use the same body language. The researchers he cited concluded that "women may release steroid hormones to facilitate courtship interactions with high-value men". Women seek the best mate, the one who can provide the most for them. Men and women who are more attractive are also more intelligent on average, with the reverse holding true for fat people, who are uglier and less intelligent on average.

Though it would be too un-PC to conduct an experiment proving it, it stands to reason that looking at fat, ugly people depresses testosterone. This is certainly how any red-blooded man feels when looking at a hamplanet.

Depressed testosterone is associated with many negative health outcomes, and thus the mere presence of fat people is actively harming the population’s health — particularly men’s, since we’re more visual. We ban public smoking based on the minuscule effects of “passive” intake, so why aren’t the same lefty, public-health aware politicians clamouring for a ban on fat people being seen in public?

A study of people's hormonal responses to the obese and overweight may indeed show a decrease in testosterone and cortisol. But these hormonal responses are temporary, which he doesn't say.

Instead, the same lefties who want to stop us having fags or drinking too much in public (and even alcoholics and chain smokers are healthier than the obese) are the same ones urging the authorities to treat “fat-shaming” as a crime and investigate it. Insane!

There are, contrary to popular belief, obese people who are metabolically healthy. Blüher (2012) reviewed the data on obese patients and found that up to 30 percent of them were metabolically healthy, with insulin sensitivity similar to that of lean individuals.

Moreover, new research has found that a BMI of 27 is now linked with the lowest mortality. In a huge study of over 120,000 people recruited in Copenhagen, Denmark, between 1976 and 2013, the participants recruited in the 70s, 90s, and 00s were compared separately. Surprisingly, the BMI linked with the lowest risk of death from any cause was 23.7 in the 70s, 24.6 in the 90s, and 27 from 2003 to 2013. On the basis of these results, the researchers argue that the BMI categories may need adjusting.
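For reference, BMI itself is just weight in kilograms divided by height in meters squared; the standard formula behind the categories discussed here can be sketched as follows (the example person is hypothetical).

```python
# BMI = weight (kg) / height (m)^2 -- the standard formula behind the
# BMI figures discussed above. The example person is hypothetical.

def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight in kilograms over height in meters squared."""
    return weight_kg / height_m ** 2

# A hypothetical 85 kg person at 1.78 m tall:
print(round(bmi(85, 1.78), 1))  # 26.8 -- close to the BMI of 27 linked
                                # with the lowest mortality in 2003-2013
```

On the conventional cutoffs, a BMI of 25 to 30 counts as "overweight", which is exactly why a lowest-mortality BMI of 27 led the researchers to suggest the categories need adjusting.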

As shown in that 2014 study, young people in particular are concerned about what their peers think about them, especially when they start high school. That’s why it’s so critical to let them know that their instincts are correct, and that they can’t be “healthy at any size.”

If you can be unhealthy at any size, why can't you be 'healthy at any size'? As I've shown, those with a BMI of 27 are, on average, metabolically similar to those with lower BMIs. And since, in the study previously cited, the lowest-mortality BMI increased over the decades, one possible explanation is that care for diseases such as diabetes mellitus improved over the same period.

Those with a BMI under 25 may still suffer the same negative effects as obese people: metabolic syndrome, high triglycerides, low HDL, small LDL particles, high blood sugar, and high insulin. Those who are 'skinny fat' need to worry most about their vital organs, since the fat they carry is visceral fat wrapped around the organs. This is one reason being skinny fat can be more dangerous than being obese or overweight: such people assume that because their BMI is in the 'normal' range they are fine and healthy. Clearly, even being 'underlean' can sometimes have consequences worse than obesity.

Then he brings up smoke-shaming: bills banning smoking in certain public areas led to a decrease in smoking, so, he reasons, fat-shaming should work the same way.

Except it doesn’t.

Humans need to eat; we don't need to smoke. Moreover, since rising obesity rates coincide with the increase in height, some researchers have argued that an obese population is just a natural progression of first-world societies.

Fat-shaming doesn't work. It ironically makes the problem worse. The physiological components of eating are a factor as well: it is known that the brain scans of the obese mirror those of people addicted to cocaine. Knowing that food changes the brain, we can think of other avenues that do not involve shaming people for their weight, which only worsens the problem we all want solved.

 
