Weight Loss and the Microbiome

 

1450 words

Last month I argued that there was more to weight loss than CI/CO. One of the culprits is a virus called Ad-36. Obese people are more likely to have Ad-36 antibodies than lean people, which implies that they have or have had the virus, and that it could be part of the underlying cause of obesity. However, a paper was recently published showing that your stool can predict whether or not you can lose weight. This is due to how certain bacteria in the gut respond to the different macronutrients you ingest.

ScienceDaily published an article a few days ago titled Your stools reveal whether you can lose weight. In the article, they describe the diets of the cohort of 31 people: some followed the New Nordic Diet (NND), while others followed the Average Danish Diet (ADD) (Hjorth et al, 2017; I can’t find this study!! I’ll definitely edit this article after I read the full paper when it is available). Those who ate the NND for 26 weeks lost an average of 3.5 kg (7.72 pounds for those of us who use freedom numbers), while those who ate the ADD lost an average of 1.7 kg (3.75 pounds). So there was a 1.8 kg difference in weight lost between the two diets. Why?

Here’s the thing: when people were divided by their microbiota, those who had a higher proportion of Prevotella to Bacteroidetes lost 3.5 more kg (7.72 pounds) over the 26 weeks when they ate the NND in comparison to the ADD. Those who had a lower proportion of Prevotella to Bacteroidetes lost no additional weight on the NND. Overall, the researchers say, about 50 percent of the population would benefit from the NND, while the rest of the population should diet and exercise until new measures are found.

The New Nordic Diet is composed of grains, fruits, and vegetables. The diet worked for one half of the population, but not for the other. The researchers state that people should try other diets and exercise for weight loss while they study other measures. This is important to note: the same diet did not induce the same weight loss across the population; the culprit here is the individual microbiome.

Now that those Bacteroidetes have come up again, here is a quote from Alanna Collen’s 2014 book 10% Human: How Your Body’s Microbes Hold the Key to Health and Happiness:

But before we get too excited about the potential for a cure for obesity, we need to know how it all works. What are these microbes doing that make us fat? Just as before, the microbiotas in Turnbaugh’s obese mice contained more Firmicutes and fewer Bacteroidetes, and they somehow seemed to enable the mice to extract more energy from their food. This detail undermines one of the core tenets of the obesity equation. Counting ‘calories-in’ is not as simple as keeping track of what a person eats. More accurately, it is the energy content of what a person absorbs. Turnbaugh calculated that the mice with the obese microbiota were collecting 2 per cent more calories from their food. For every 100 calories the lean mice extracted, the obese mice squeezed out 102.

Not much, perhaps, but over the course of a year or more, it adds up. Let’s take a woman of average height, 5 foot 4 inches, who weighs 62 kg (9 st 11 lb) and has a healthy Body Mass Index (BMI: weight (kg)/(height (m))^2) of 23.5. She consumes 2000 calories per day, but with an ‘obese’ microbiota, her extra 2 per cent calorie extraction adds 40 more calories each day. Without expending extra energy, those further 40 calories per day should translate, in theory at least, to a 1.9 kg weight gain over a year. In ten years, that’s 19 kg, taking her weight to 81 kg (12 st 11 lb) and her BMI to an obese 30.7. All because of just 2 per cent extra calories extracted from her food by her gut bacteria.
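Collen’s arithmetic is easy to verify. Here is a quick check in Python (a minimal sketch; the ~7,700 kcal per kilogram of body fat is a standard approximation I am assuming, since the book does not state its conversion factor):

```python
# Quick check of Collen's arithmetic. Assumes ~7700 kcal per kg of body fat,
# a standard approximation (the book does not state its conversion factor).
KCAL_PER_KG_FAT = 7700

height_m = 1.6256                  # 5 ft 4 in
weight_kg = 62.0
extra_kcal_per_day = 2000 * 0.02   # 2% of 2000 kcal = 40 kcal/day

gain_per_year = extra_kcal_per_day * 365 / KCAL_PER_KG_FAT
weight_after_10y = weight_kg + 10 * gain_per_year

print(f"Yearly gain: {gain_per_year:.1f} kg")                       # ~1.9 kg
print(f"BMI now: {weight_kg / height_m**2:.1f}")                    # ~23.5
print(f"BMI after 10 years: {weight_after_10y / height_m**2:.1f}")  # ~30.6 (the book rounds to 30.7)
```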

This corresponds with the NND/ADD study on weight loss, and it shows that there is more to weight loss than simplistic CI/CO: an individual’s microbiome/physiology definitely does matter. Clearly, to understand the population-wide problem of obesity, we must understand the intricate microbiome/brain/gut/body relationship and how it interacts with what we eat, because evidence is mounting that the individual’s microbiome holds the key to weight loss/gain.

Exercise does not induce weight loss. A brand-new RCT (randomized controlled trial) showed that children in a cohort who were made to do HIIT (high-intensity interval training) did show better cardiorespiratory fitness, but there were no concomitant reductions in adiposity or blood biomarkers (Dias et al, 2017). What this tells me is that people should exercise for health and the ‘high’ that comes along with it; if people exercise for weight loss, they will be highly disappointed. Note, I am NOT saying not to exercise; I’m only saying not to have any unrealistic expectations that cardio will induce weight loss. It won’t!

Bjornara et al (2016) showed better adherence to the NND than to the ADD. Poulskin et al (2015) showed that the NND provided higher satisfaction and greater body weight reduction, with higher compliance with the NND and with physical activity (I disagree there, see above).

This study is important for our understanding of weight loss for the population as a whole. More recent evidence has shown that our microbiome and body clock work together to ‘pack on the pounds‘. This recent study found that the microbiome “regulate[s] lipid (fat) uptake and storage by hacking into and changing the function of the circadian clocks in the cells that line the gut.” The individual microbiome could induce weight gain, especially in people who consume a Western diet, which of course is full of fat and sugar. One of the most important things the researchers noticed is that mice without a microbiome fared much better on a high-fat diet.

The microbiome ‘talks’ to the gut lining. Germ-free mice were unable to make normal amounts of NFIL3 in the cells lining the gut. So germ-free mice, lacking a microbiome, have lower-than-average production of NFIL3, meaning they take up and store fewer lipids than mice with a microbiome.

So the main point of this study is the circadian rhythm. The body’s circadian clock recognizes the day/night cycle, which of course is linked to feeding times, which turn the body’s metabolism on and off. Cells are not directly exposed to light, but they capture light cues from the visual and nervous systems, which then regulate gene expression. The gut’s circadian clock then regulates the expression of NFIL3 and the lipid metabolic machinery controlled by NFIL3. So this study shows how the microbiome interacts with and impacts metabolism. This could also, as the authors state, explain how and why people who work nights develop shift-work disorder and the concurrent metabolic syndromes that come along with it.

The relationship between the microbiome and weight loss is poorly understood at the moment (Conlon and Bird, 2015), though a recent systematic review showed that restrictive diets and bariatric surgery “reduce microbial abundance and promote changes in microbial composition that could have long-term detrimental effects on the colon.” They further state that “prebiotics might restore a healthy microbiome and reduce body fat” (Seganfredo et al, 2017). Wolf and Lorenz (2012) show that using “good” probiotic bacteria may induce changes in the obese phenotype. Bik (2015) states that by learning more about the microbiome, dysbiosis (Carding et al, 2015), and how the microbiome interacts with our metabolism, brain, and physiology, we can better treat those whose obesity is due to dysbiosis of the microbiome. Clark et al (2012) describe the mechanisms behind the link between the microbiota and obesity.

Weight loss is, clearly, more than CI/CO, and once we understand the other mechanisms of weight loss/gain/regulation, we can better treat people with these metabolic syndromes that, weirdly, are all linked to each other. Diets affect the diversity of the microbiome; conversely, the microbiome already present may need different macro/micronutrient splits in order to produce weight loss, as in the NND and ADD study reviewed above. Changes in weight do change the diversity of an individual’s microbiome; however, the heritable component of the microbiome may mean that some people need to eat different foods compared to others who have a different microbiome. Over time, new studies will show how and why macro/micronutrient content matters for weight loss/gain.

Clearly, reducing the complex physiological process of weight gain/loss to numbers ignores how the microbiome induces weight gain/loss and works together with the body’s other cells. As the science grows, we will have a much greater understanding of the body’s weight-loss mechanisms. Once we do, we can better help people with this disease.


Does Human Potential Lie in the Embryo?

900 words

The debate on human potential—and whether or not it is innate and ‘in the genes’—is steeped in bias and ideology from both sides (despite the claims that HBD ‘has no ideological bias’). Hereditarians assume that human potential is ‘in the genes’, and some even believe that human potential is testable during embryonic development (like psychologist Stuart Ritchie). However, this assumes two things: 1) that genes are the masters of development, and not the slaves, and 2) that differences in potential are already encoded in the genes of the homunculus. I will show that these two assumptions are wrong.

Embryonic development is one part of a larger, complex process. Cells, at the beginning of embryonic development, are totipotent—meaning they have the ability to become any type of cell (Condic, 2014), depending on what the intelligent system calls for. This is important to note: at the beginning, all cells are the same and, despite having the same genes, “they have the same potential to become any kind of differentiated cell for a particular organism” (Richardson, 2017: 156). It is also possible to grow stem cells in a lab that are pluripotent—having the ability to become any cell in the body—called iPS cells.

Even embryos that are of low quality can end up developing into healthy babies (emphasis in the second paragraph mine):

Embryo quality as we see it under the microscope in the IVF lab gives us some reasonable ability to predict the chances for pregnancy after the embryo transfer procedure. However, because there are many other contributing factors involved that we can not see or measure, the generalizations about “quality” made from grading embryos are often inaccurate.

We see some cycles fail after transferring 3 perfect looking embryos, and we also see beautiful babies born after transferring only one “low grade” embryo. The true genetic potential of the embryo to continue normal development is very difficult to measure accurately unless we utilize preimplantation genetic screening (PGS) to select chromosomally normal embryos for transfer.

So it seems that just looking at the quality of the embryo will not tell you whether it will grow into a healthy baby with no birth complications. Potential must come after the embryonic stage of development. Another thing about testing the ‘quality’ of the embryo: it tells nothing about “what is going on inside the embryo genetically“.

The thing is, most chromosomal and other defects in any embryos can be noted under a microscope within 3 days of the embryo forming. And if you paid attention to totipotent cells earlier, you’d know that those cells have the potential to become any cell in the body—which is driven by the body’s intelligent systems/cells.

So embryonic quality really has no bearing on whether or not the embryo will eventually reach birth. As I’ve argued before in Human Mating and Aggression—An Evolutionary Perspective, the age of the mother is one of the strongest predictors of whether or not there will be deleterious effects on the child—mostly after 35 years of age (O’Reilly-Green and Cohen, 1993; van Katwijk and Peeters, 1998; Stein and Susser, 2010; Lampinen, Vehviläinen-Julkunen, and Kankkunen, 2009; Jolly, 2010; Yaniv et al, 2010; Liu et al, 2011). However, there is evidence that a woman can be too young to become a mother (Geronimus, Korenman, and Hillemeier, 1994; Fall et al, 2015) and that children born to young mothers “might be better off if the parents waited a few years” (Myrskyla and Fenelon, 2012). The same holds true for fathers, with it recently being observed that older fathers and their offspring have lower evolutionary fitness, even across four centuries (Arslan et al, 2017). So it seems that the best predictor of embryonic quality is parental age (Scheffer et al, 2017)—not what an embryo looks like or the totipotent cells already in the embryo.

So there is no test for the genetic potential of embryos and sperm—the best tell is parental age. Embryonic development is part of the intelligent developmental system, and each stage of embryonic development is constructed anew rather than being the unfolding of an already-laid-out blueprint. So even though the embryo’s totipotent cells all have the same genes, they have the potential to become any cell in the body, as directed by the intelligent system (as noted above).

So if you understand embryonic development and how it is part of the intelligent system itself, and not the readout of an already-laid-out blueprint, then you’ll understand how potential—as we know it—is not in the embryo. All embryos have the same kinds of totipotent cells, which have the chance to become any cell in the body and which are then activated and used by the intelligent physiology. The ages of both parents are the best predictors of embryonic quality; just by looking at embryos after they’ve developed from blastocysts, you cannot infer an embryo’s potential.

Ken Richardson also responded to Stuart Ritchie’s article It’s now possible, in theory, to predict life success from a genetic test at birth. Potential is not in the embryo, because its totipotent cells can become any cell in the body. Even ‘low-quality’ embryos can become healthy babies, so ’embryonic quality’ is not a good measure of whether or not a child will be born with a defect, etc.

Worldwide IQ estimates based on education data

By Afrosapiens, 2851 words

One of the leading theories to explain differences in cognitive test performance across time and place is that intelligence, as measured by such tests, is affected by exposure to formal schooling and the cognitive demands of a high-technology society (D. Marks, J. R. Flynn). Some of the strongest evidence for such an effect of schooling on IQ comes from a reform in the Norwegian school system, in which an expansion of compulsory schooling was associated with a 3.7-point increase in IQ per additional year of education between pre-reform and post-reform cohorts. In order to test this relationship between years of schooling and commonly reported national IQ averages, I used data from the United Nations Development Programme to estimate the average IQ of each country’s adult and school-age population. Adult IQs were estimated from mean years of schooling completed by adults aged 25 and older, whereas school-population IQs were estimated from the expected years of schooling that a student would complete if the enrollment ratios from primary through tertiary education remained constant. All variables were reported for the year 2015. Great Britain was chosen as the reference country and assigned a default value of 100 on both variables. For each country, a difference of one year in completed or expected schooling added or removed 3.7 IQ points. Adult IQ and school-age population IQ were averaged to estimate the most probable mean IQ that would be found by randomly reviewing the literature without controlling for the age or the health and socio-economic profile of the sampled individuals.
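To make the method concrete, here is a minimal sketch of the conversion in Python (my reconstruction, not Afrosapiens’ actual code; the UK schooling figures are placeholders standing in for the 2015 UNDP values):

```python
# Sketch of the estimation method described above (my reconstruction).
# UK (the reference country) is fixed at 100; the 3.7 IQ points per year of
# schooling comes from the Norwegian reform estimate.
UK_MEAN_YEARS = 13.3       # placeholder: UK mean years of schooling, adults 25+ (UNDP 2015)
UK_EXPECTED_YEARS = 16.3   # placeholder: UK expected years of schooling (UNDP 2015)
POINTS_PER_YEAR = 3.7

def estimate_iqs(mean_years_adult, expected_years_school):
    """Return (adult IQ, school-age IQ, their average) relative to UK = 100."""
    adult_iq = 100 + POINTS_PER_YEAR * (mean_years_adult - UK_MEAN_YEARS)
    school_iq = 100 + POINTS_PER_YEAR * (expected_years_school - UK_EXPECTED_YEARS)
    return adult_iq, school_iq, (adult_iq + school_iq) / 2
```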

Results

Country Main ancestry School-age/adult IQ average School-age IQ Adult IQ
Australia West-European 107 115 100
Denmark West-European 104 111 98
New Zealand West-European 104 111 97
Iceland West-European 103 110 96
Ireland West-European 102 109 96
Norway West-European 101 105 98
Germany West-European 101 103 100
Netherlands West-European 101 107 95
United States West-European 100 101 100
United Kingdom West-European 100 100 100
Switzerland West-European 100 99 100
Canada West-European 100 100 99
Slovenia East-European 100 104 96
Lithuania East-European 99 101 98
Czech Republic East-European 99 102 96
Estonia East-European 99 101 97
South Korea North-East Asian 99 101 96
Israel West and Central Asian, North African 99 99 98
Sweden West-European 98 99 96
Poland East-European 98 100 95
Finland East-European 97 103 92
France West-European 97 100 94
Japan North-East Asian 97 96 97
Latvia East-European 96 99 94
Belarus East-European 96 98 95
Greece East-European 96 103 90
Hungary East-European 96 97 95
Spain West-European 96 105 87
Hong Kong North-East Asian 96 98 94
Austria West-European 96 99 93
Italy West-European 96 100 91
Slovakia East-European 96 95 96
Argentina West-European 95 104 87
Singapore North-East Asian 95 97 94
Liechtenstein West-European 95 94 97
Russia East-European 95 95 95
Kazakhstan West and Central Asian, North African 95 95 94
Ukraine East-European 94 96 93
Palau South-East Asian and Polynesian 94 93 96
Croatia East-European 94 96 92
Montenegro East-European 94 96 93
Chile West-European 94 100 87
Georgia West and Central Asian, North African 94 91 96
Cyprus East-European 93 93 94
Luxembourg West-European 93 91 95
Malta West-European 93 94 93
Bulgaria East-European 93 95 91
Barbados Black African 93 96 90
Fiji South-East Asian and Polynesian; South Asian 93 96 90
Cuba West-European 93 91 94
Saudi Arabia West and Central Asian, North African 93 99 86
Portugal West-European 92 101 84
Romania East-European 92 94 91
Tonga South-East Asian and Polynesian 92 93 92
Serbia East-European 92 93 91
Belgium West-European 91 90 93
Sri Lanka South Asian 91 91 91
Mongolia North-East Asian 91 91 87
Grenada Black African 90 98 83
Mauritius South Asian 90 96 84
Uzbekistan West and Central Asian, North African 90 85 95
Uruguay West-European 90 97 83
Armenia West and Central Asian, North African 90 87 93
Brunei South-East Asian and Polynesian 89 95 84
Azerbaijan West and Central Asian, North African 89 87 92
Bahrain West and Central Asian, North African 89 93 86
Andorra West-European 89 90 89
Kyrgyzstan West and Central Asian, North African 89 88 91
Albania East-European 89 92 86
Moldova East-European 89 83 95
Venezuela West-European 89 93 86
Trinidad and Tobago Black African; South Asian 89 87 91
Bahamas Black African 89 87 91
Iran West and Central Asian, North African 89 94 83
Seychelles Black African; South Asian; West-European 89 92 86
Belize Black African; Native American 88 87 90
South Africa Black African 88 88 89
Malaysia South-East Asian and Polynesian 88 88 88
Bosnia East-European 88 92 84
Samoa South-East Asian and Polynesian 88 87 89
Jordan West and Central Asian, North African 88 88 88
Qatar West and Central Asian, North African 88 89 87
Brazil West-European 88 96 79
Costa Rica West-European 88 92 83
Panama Native American 88 88 87
United Arab Emirates West and Central Asian, North African 87 89 86
Turkey West and Central Asian, North African 87 94 80
Peru Native American 87 89 84
Saint Lucia Black African 87 88 85
Jamaica Black African 87 87 86
Macedonia East-European 86 87 86
Ecuador Native American 86 91 82
Algeria West and Central Asian, North African 86 93 82
Saint-Kitts and Nevis Black African 86 90 82
Bolivia Native American 86 91 81
Mexico West-European 86 89 83
Saint Vincent and the Grenadines Black African 86 89 83
Lebanon West and Central Asian, North African 86 89 83
Oman West and Central Asian, North African 86 90 81
Botswana Black African 86 86 85
Palestine West and Central Asian, North African 85 87 84
Tajikistan West and Central Asian, North African 85 82 89
Tunisia West and Central Asian, North African 85 94 77
Thailand South-East Asian and Polynesian 85 90 80
Micronesia South-East Asian and Polynesian 85 83 87
Colombia West-European 84 90 79
China North-East Asian 84 90 79
Philippines South-East Asian and Polynesian 84 83 85
Suriname South-East Asian and Polynesian; Black African; South Asian 84 87 82
Dominican Republic Black African 84 89 79
Indonesia South-East Asian and Polynesian 84 87 80
Dominica Black African 84 87 80
Gabon Black African 84 86 81
Libya West and Central Asian, North African 84 89 79
Turkmenistan West and Central Asian, North African 84 80 87
Kuwait West and Central Asian, North African 83 89 78
Vietnam South-East Asian and Polynesian 83 86 80
Paraguay Native American 83 85 81
Egypt West and Central Asian, North African 82 88 77
Kiribati Melanesian 82 84 80
El Salvador Native American 81 89 75
Zambia Black African 80 86 76
Maldives South Asian 80 87 74
Guyana Black African; South Asian 79 78 82
Namibia Black African 79 83 76
Ghana Black African 79 82 76
Cabo Verde Black African 79 90 69
Nicaragua Native American 79 83 75
Swaziland Black African 79 82 76
India South Asian 79 83 74
Zimbabwe Black African 79 79 79
Vanuatu Melanesian 78 80 76
Honduras Native American 77 81 74
Congo Black African 77 81 74
Kenya Black African 77 81 74
Sao Tome and Principe Black African 77 84 70
Morocco West and Central Asian, North African 77 84 69
Guatemala Native American 77 79 74
Timor-Leste Melanesian 76 86 67
Lesotho Black African 76 79 73
Togo Black African 76 84 68
Iraq West and Central Asian, North African 76 77 75
Cameroon Black African 76 78 73
Angola Black African 76 82 69
Madagascar South-East Asian and Polynesian; Black African 76 78 73
Nepal South Asian 75 85 66
Laos South-East Asian and Polynesian 75 80 70
Nigeria Black African 75 77 73
Comoros Black African 75 81 69
DR Congo Black African 75 76 73
Uganda Black African 74 77 72
Bhutan South Asian 74 86 62
Cambodia South-East Asian and Polynesian 74 80 68
Bangladesh South Asian 74 77 70
Malawi Black African 73 80 67
Solomon Islands Melanesian 73 75 70
Equatorial Guinea Black African 72 74 71
Tanzania Black African 72 73 72
Rwanda Black African 72 80 65
Haiti Black African 72 73 70
Liberia Black African 72 76 67
Benin Black African 72 79 64
Papua New Guinea Melanesian 72 76 67
Syria West and Central Asian, North African 71 73 70
Cote d’Ivoire Black African 71 73 69
Myanmar South-East Asian and Polynesian 71 73 68
Afghanistan West and Central Asian, North African 70 77 64
Burundi Black African 70 79 62
Pakistan South Asian 70 70 70
Mauritania West and Central Asian, North African; Black African 69 71 67
Sierra Leone Black African 69 75 63
Mozambique Black African 69 73 64
Senegal Black African 68 75 61
Gambia Black African 68 73 63
Guinea-Bissau Black African 68 74 62
Yemen West and Central Asian, North African 67 73 62
Guinea Black African 66 72 60
Central African Republic Black African 66 66 66
Ethiopia North-East African 66 71 60
Mali Black African 65 71 59
Sudan North-East African 65 66 64
Djibouti Black African 64 63 66
South Sudan Black African 63 58 69
Chad Black African 63 67 59
Burkina-Faso Black African 62 68 56
Eritrea North-East African 62 58 65
Niger Black African 58 60 57

The values were rounded to the nearest unit.

In comparison to the mean national IQs mainly reported by Richard Lynn, 65 countries differed by less than 5 IQ points using the present methodology. It can be said that such small differences validate Lynn’s estimates, since it is unlikely that years of education have the same cognitive value in every country; likewise, averaging adult IQ and school-age population IQ without controlling for a country’s age structure somewhat weakens the representativeness of my findings. Differences larger than 5 points were found for 30 countries, and in these cases I suspect it is due to Lynn manipulating the data to fit racial patterns: Sub-Saharan African countries have been systematically under-estimated and East-Asian ones systematically over-estimated by Lynn, and some nations in Europe, the Middle East, South Asia and Latin America seem to have had their scores manipulated in order to appear closer to what they would be based on their racial composition.

Such inconsistencies result in discrepancies between the reported IQs and the educational and socio-economic outcomes (regardless of which variable influences the other) of the affected countries, and they support the accusations of racially motivated fraud in Richard Lynn’s data collection. In the same way, estimating the mean IQs of countries for which direct data are missing by averaging the figures of neighboring countries of similar ethno-racial composition is unwarranted, as race does not seem to play a role in a country’s cognitive performance.

In spite of all the deserved criticism that Lynn’s data has met, it can be said that most of the commonly cited mean IQs outside of Africa and East Asia are reliable and that a strong relationship between human capital and human development exists, whether we measure it by IQ or by years of education. The causes of international variation in school quality and enrollment are well known and come down to school and student characteristics. Schools in developing countries face numerous challenges: lack of basic amenities such as electricity, potable water, air-conditioning and heating; lack of educational supplies (schools rarely have enough textbooks and rely on chalk and blackboards); high student-to-teacher ratios (primary school classes with more than 50 students are common in low-income countries); chronic teacher absenteeism (teachers usually have a business on the side); obsolete pedagogy; outdated or irrelevant curricula; multilingualism; exam corruption; low public funding; misguided policies; and gender and ethnic discrimination. Pupils are held back by poor health and nutrition resulting in developmental delays, tuition fees and supplies that poor families can’t afford, war, population displacement, absent educational resources at home, low parental education, lack of transportation, child labor, excessive use of grade repetition, mismatch between school curricula and daily life demands, and many other factors. Differences in human capital have large implications in terms of workforce qualification and social behavior, which contribute in large part to a country’s socio-economic development. The present findings provide evidence for large international inequalities in intergenerational change in educational outcomes, which are probably the driving cause of the Flynn effect.

Intergenerational change in cognitive performance.


Estimating IQs from current school enrollment rates and the mean educational attainment of adults provides insights into intergenerational differences in cognitive performance. We can see from these figures that the countries that developed the fastest show large intergenerational differences in education/IQ favoring the younger cohorts; these countries are concentrated in South America, Southern Europe, West Africa, the Middle East and Oceania, and Ethiopia and China also show trends that are in line with their recent economic success. On the other hand, many ex-USSR countries, as well as Japan, Cuba, South Africa, Zimbabwe and the Philippines, have been stagnant or even declining relative to the United Kingdom, and this is also reflected in their poor socio-economic performance in recent decades. War-torn South Sudan and the Central African Republic are experiencing alarming declines in their educational performance that expose them to grave humanitarian crises in the future. Although there is a clear relationship between socio-economic progress and gains in cognitive performance, a country’s ability to capitalize on its intellectual potential remains highly dependent on its leadership and the odds of the world market; that is why theories claiming that IQ is the main driver of global inequalities are not tenable in light of the current evidence.

 

Update 09/07/2017 – Detailed comparison with Lynn’s Data

 

To test the predictive power of my estimates in comparison to Lynn’s, I decided to focus only on the world’s 20 most populous countries. The reason is that these countries are home to 70% of the world’s population, and by the law of large numbers they are likely more representative of global trends. The 100+ other countries, by contrast, are home to only 30% of humanity. They are a source of statistical noise, due to extreme outlying values and differences in regional political fragmentation, that would hide or weaken general trends better evidenced by considering large countries.

Data:

[Image: table ranking the world’s 20 most populous countries by estimated IQ]

 

Correlations and averages:

[Image: correlations and averages comparing my estimates with Lynn’s]

 

Noticing an abnormal 22-point gap between Sub-Saharan African IQs and the world average in Lynn’s data, and suspecting that such extremely low values would distort the correlations, I tested whether my estimates and Lynn’s would retain the same predictive power with the African IQs excluded. My assumption was that a strong causal relationship would leave the correlations unchanged no matter which countries were included, whereas any change in predictive power resulting from excluding some countries would cast doubt on the accuracy of the reported data.
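A minimal sketch of this robustness check (my reconstruction; the file name, column names, and pandas usage are assumptions, not the author’s actual workflow):

```python
# Robustness check: do the IQ-outcome correlations survive excluding
# Sub-Saharan Africa? (Sketch; data file and column names are hypothetical.)
import pandas as pd

df = pd.read_csv("top20_countries.csv")  # hypothetical: IQ estimates + outcomes

def corr_with_and_without_ssa(df, iq_col, outcome_col, region_col="region"):
    """Pearson r over all countries vs. with Sub-Saharan Africa excluded."""
    r_all = df[iq_col].corr(df[outcome_col])
    subset = df[df[region_col] != "Sub-Saharan Africa"]
    r_excluded = subset[iq_col].corr(subset[outcome_col])
    return r_all, r_excluded

for iq_col in ("my_iq", "lynn_iq"):
    r_all, r_excl = corr_with_and_without_ssa(df, iq_col, "hdi")
    print(f"{iq_col}: r(all) = {r_all:.2f}, r(ex-SSA) = {r_excl:.2f}")
```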

IQ-HDI correlation:

Similarly to my previous calculation including all the countries for which data were available, I found a 0.96 correlation between my estimates and HDI. Lynn’s estimates’ correlation with HDI was higher (+0.06) than with the worldwide data, but still largely inferior to mine. Removing African countries, the predictive power of my estimates remained essentially the same (+0.02) whereas Lynn’s significantly decreased (-0.13), leaving a predictive gap of 0.24 favoring my estimates. However, given that my estimates are based on variables that are included in the calculation of HDI, such high predictive power has to be met with caution.

IQ-GDP per capita correlation:

My previous calculation from the worldwide data yielded a correlation coefficient of 0.65 between my IQ estimates and GDP per capita, and 0.60 for Lynn’s. Among the 20 most populous countries, the correlation rose by 0.24 points to 0.89 with my estimates and by 0.12 points to 0.72 with Lynn’s. Excluding Sub-Saharan African countries did not affect the predictive power of my estimates (+0.01) and further weakened Lynn’s by 0.04 points, resulting in a 0.22 gap in predictive power favoring my estimates again. This correlation of 0.89 between my IQ estimates and GDP per capita within the world’s 20 most populous countries is likely the highest correlate of IQ ever reported in psychological science, and it gives strong support to the relationship between schooling, economic development and cognitive ability.

IQ-Life expectancy correlation:

Compared with the worldwide database, the correlation between my IQ estimates and life expectancy was down 0.04 points within the world’s top 20, to 0.76, while Lynn’s went up by 0.05 points to 0.84. However, removing Sub-Saharan Africa left the predictive power of my estimates unchanged, whereas Lynn’s fell by 0.13 points to 0.71. My estimates again predicted life expectancy better, by a small 0.05 points this time.

IQ-Homicide correlation:

Not estimated previously, my data show a non-existent relationship between IQ and homicide rate (-0.01), and excluding Sub-Saharan Africa confirmed a null relationship between homicide rates and IQ in the rest of the world. Lynn’s estimates showed a low negative correlation between IQ and homicide (-0.35), and the exclusion of African countries weakened the correlation further (-0.25). Lynn’s estimates had better predictive power here, though it still remained in the range of low statistical significance.

IQ-Fertility correlation:

Adding a new variable, I found a negative correlation of -0.69 between my IQ estimates and fertility, and the correlation remained essentially the same (-0.68) with the African countries excluded. The correlation between Lynn’s IQs and fertility was stronger (-0.84), but removing the African data decreased it by 0.18 points to -0.66. My estimates ended up with a slightly stronger predictive power (+0.02).

General patterns:

In addition to having a stronger and globally consistent predictive power, my estimates reveal how Richard Lynn manipulates the data to fit desired racial patterns.

As expected from the 0.96 correlation between my IQ data and HDI, the ranking of countries by cognitive ability shows a perfect gradient from high to low development status. Moreover, the largest gap between two consecutive countries is the 6 points separating Russia and Iran, showing a marked difference between the developed and the developing world.

Ranking countries by Lynn-estimated IQs results in a whole other pattern, in which a country’s dominant ancestry seems to be the only variable that matters. East Asians are on top, followed by Western Europeans, then Eastern Europeans, South-East Asians, fair-skinned Middle Easterners (Turkey and Iran) and Latin Americans, Austronesians (Indonesia and the Philippines), South Asians and Arabs, and finally Sub-Saharan Africans far below, with a huge 10-point gap (the largest between two consecutive countries in his dataset) separating Bangladesh from Nigeria.

The manipulation is quite apparent. Lynn largely over-estimated China (+22) and Japan (+7) to make East Asians cluster on top, thus protecting himself from accusations of nordicism and giving support to the inter-cultural validity of the IQs that he cherry-picked. The Western European and Russian data remained mostly unchanged. Vietnam (+11) and Thailand (+5) were given a bonus for their genetic proximity to North-East Asia, which is supposed to make them score in the low 90s despite their lack of development. Little change was brought to the scores of the Latin American, Middle Eastern and Austronesian countries, which usually score in the mid-80s. Major fraud (+14 in Pakistan, +7 in Bangladesh) was done to lift South Asian countries out of the 70s range, leaving Sub-Saharan Africa as the only region scoring 70 or below and downgrading Nigeria (-4) and the DR Congo (-7) in the process.

By pointing this out, I’m warning honest researchers and laymen about the dangers of relying on data resulting from an undisclosed, unsystematic and un-replicable methodology. And although my estimates do not result from any actual IQ measurement beyond the relationship between IQ and schooling evidenced in the Norwegian cohorts, my method uses a single, universal conversion factor applied to representative official data collected by professional demographers, whereas the cherry-picking of samples by Lynn and the like is only the hobby of a dozen scholars and pseudo-scholars. This is how I found strong, consistent and meaningful correlations between IQ and various development variables.

Although they are likely more representative of the worldwide distribution of cognitive ability, my estimates still provide evidence that a large part (the largest part, actually) of the world’s population scores more than one standard deviation below the mean on Western-normed IQ tests, which is the case for 11 of the world’s 20 most populated countries. Although this may sound alarming, with Pakistan and Ethiopia scoring in the range of mental disability (70 and 66 respectively), I think this effect comes from using Western populations as the reference for standardization.

In fact, another picture emerges when we compare countries with the world’s average, replacing the eurocentric British Greenwich IQ of 100 with a universal IQ of 84, thus giving a more accurate idea of what normal cognitive ability is by the standards of humanity. In this sample, China, the Philippines and Indonesia are representative of the top of the bell curve, whereas Ethiopia, the United States and Germany are the only outliers left, with respective universal IQs of 81.6, 115.6 and 116.6. For this reason, I recommend the use of Chinese or South-East Asian normalization samples in international IQ comparisons.

 

 

 

 

Does Exogenous Oxytocin Make Xenophobic People Non-Xenophobic?

2050 words

Oxytocin (OXT) is known as ‘the love hormone’, since it facilitates bonding from mother to child (Galbally et al, 2011; Feldman and Bakermans-Kranenburg, 2017), facilitates childbirth and breastfeeding (OXT is released in large amounts after nipple stimulation) (Magon and Kalra, 2011), and increases trust in humans (Kosfeld et al, 2005). It is also implicated in some psychiatric disorders (Marazziti and Dell’Osso, 2008; Cochran et al, 2013). OXT, furthermore, has endocrine and paracrine roles in male reproduction (Nicholson, 1996; Thackare, Nicholson, and Whittleson, 2006), so it is not strictly ‘a female hormone’ (Saladin, 2010). The hormone induces numerous important behaviors that attach the mother—emotionally speaking—to her new child.

A recent study published back in July, titled “Oxytocin-enforced norm compliance reduces xenophobic outgroup rejection” (Marsh et al, 2017), purports to show that xenophobic individuals administered a nasal spray with OXT and then shown pro-social behavior toward other ethnies (refugees) show a reduction in xenophobic attitudes. First, I will cover the science aspect of it; then I will cover the ideological aspect of the paper; and finally I will address the societal implications this paper may have in the future. I will conclude with my thoughts on both the science and the ideology behind the paper (because, in my opinion, there was a clear ideological drive behind the paper, though the same holds for most other fields).

The Science

In the first experiment, 53 males and 23 females (n=76) were given either the spray with the OXT or a placebo. They then took a test measuring how high they scored on a ‘Xenophobia index’. Marsh et al (2017: 9318) write:

In a separate screening session, we evaluated xenophobia by measuring the attitudes toward refugees based on an adapted assessment instrument developed by Schweitzer and colleagues (33). Adaptions encompassed the wording; for example, “Australian refugee” was replaced by “German refugee.” The assessment instrument contained two inventories, in which participants indicated how strongly they associate refugees with realistic and symbolic threats.

The realistic threat scale items encompass different threat perceptions; for example: “Refugees are not displacing German workers from their jobs” or “Refugees have increased the tax burden on Germans.” Responses were coded on a 10-point Likert scale, ranging from 1 (“I strongly disagree”) to 10 (“I strongly agree”). All items were recoded such that higher values reflected greater feelings of perceived realistic threats. The term Xi index, which we used for subsequent analyses, refers to a subject’s mean score achieved on the realistic threat inventory.

Higher Xi scores imply that an individual is more xenophobic. For experiment 1, they put the subjects into a lecture hall to establish altruistic norms, which enabled reputation pressures if one was seen to not be generous when giving. Marsh et al (2017) discovered that subjects donated 19 percent more money to refugees than to natives. Further, donations to natives or refugees—including the outgroup bias—were not dependent on gender. The bias (19 percent more donated) indicated altruistic action and was lowest in those with high Xi scores.
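For concreteness, here is a minimal sketch of how such a threat index is scored (my illustration of the Likert-scale recoding described in the quote above; the items and which ones are reverse-keyed are hypothetical):

```python
# Sketch of Xi-index scoring: mean of 10-point Likert items, with
# reverse-keyed items recoded so higher always means greater perceived threat.
def xi_index(responses, reverse_keyed):
    recoded = [(11 - r) if i in reverse_keyed else r
               for i, r in enumerate(responses)]
    return sum(recoded) / len(recoded)

# Hypothetical example: item 0 ("Refugees are not displacing German
# workers...") is reverse-keyed; responses are scored 1-10.
print(xi_index([3, 9, 8, 7], reverse_keyed={0}))  # -> 8.0
```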

Experiments 2 and 3 were randomized controlled trials (RCTs) of a random sample of 107 males (mean age of 24). They were administered either the OXT nasal spray or a placebo by a blinded researcher. The subjects were separated into high Xi scorers and low Xi scorers (n=53 and n=54, respectively). OXT administered to low Xi scorers specifically increased altruistic behavior towards the ingroup and outgroup, “evident in a 68% (outgroup) and an 81% (ingroup) increase in the donated sums” (Marsh et al, 2017: 9315). However, this effect was not noticed in high Xi scorers, so the researchers wondered whether showing pro-social behavior after administering OXT would produce a change in xenophobic behavior.

So people who scored high on the Xi index and were administered OXT showed no change in altruistic behavior. However, when those who scored high on the Xi index were administered OXT and then saw prosocial behavior toward the outgroup, they increased their donations to the outgroup by 74 percent.

[Figure 1 from Marsh et al (2017)]

Figure 1 shows, clearly, that those who were administered OXT and were exposed to altruistic norms from co-ethnics to the outgroup showed more generosity towards the outgroup than those administered the placebo. It’s also worth noting that these findings (of course) are not generalizable to women.

How does ideology affect this? Of course, both the Right and the Left can use this study for their own agendas, but the authors of Marsh et al (2017) may have biases themselves (everyone has biases, even the most well-known, most respected scientists). So now I will look at the ideology behind this paper through both a Right and a Left lens, since political bias permeates our everyday lives; because of this, people won’t be able to think rationally about these things, ironically using their emotions to guide their thought processes/conclusions.

The Ideology

Marsh et al (2017: 9317) write:

The effect of solutions combining selective enhancement of OXT signaling and peer influence would be expected to diminish selfish motives, and thereby increase the ease by which people adapt to rapidly changing social ecosystems. More generally, our results imply that an OXT-enforced social norm adherence could be instrumental in motivating a more generalized acceptance toward ethnic diversity, religious plurality, and cultural differentiation resulting from migration by proposing that interventions to increase altruism are most effective when charitable social cues instill the notion that one’s ingroup shows strong affection for an outgroup. Furthermore, UNESCO has emphasized the importance of developing neurobiologically informed strategies for reducing xenophobic, hostile, and discriminatory attitudes (47). Thus, considering OXT-enforced normative incentives in developing future interventions and policy programs intended to reduce outgroup rejection may be an important step toward making the principle of social inclusion a daily reality in our societies.

This seems pretty bad to me. “If you won’t accept people in your countries, you must take this exogenous OXT while watching your ethnic group show altruistic behavior towards the outgroup, so then you too will no longer be a ‘racist’.”

In regard to ref 47, it is a 2001 UNESCO address on ‘racism’. Of course, it begins by stating that “Science – modern genetics in particular – has constantly affirmed the unity of the human species, and denied that the notion of `race’ has any foundation.” This, as regular readers know, is false. Race is a social construct of a biological reality. Self-reported race is a great metric to gauge geographic ancestry (Risch et al, 2002), while Tang et al (2005) showed that self-reported race correlated almost perfectly with geographic ancestry. Though I can forgive this since it is a 2001 address.

Here is the money quote (emphasis mine):

Similarly, respect for others and acceptance of the right to be different should be built in the minds of human beings to replace hostile, discriminatory and xenophobic attitudes.

So it seems that Marsh et al (2017) is the first step in UNESCO’s quest for “[building] the minds of human beings to replace hostile, discriminatory and xenophobic attitudes … I can assure you that UNESCO will work actively to achieve this goal in close cooperation with other UN bodies and specialised agencies, other intergovernmental and nongovernmental organizations and with all interested partners.” This screams social engineering to me, and it seems that the authors would approve of it, especially if you read the Discussion section of the paper. It seems that whatever the Left thinks would make for a better society, they’ll attempt to enact. People believe they’re the opposite sex? Give them ‘gender-affirming surgery’ (whatever that means). People are ‘racist’? Better strap them down into a chair and shoot exogenous OXT up their nose with their eyes forced open while they watch videos of prosocial behavior toward the outgroup! The numerous possible scenarios that can be thought up due to this paper are mind-boggling. For instance, maybe they can use our Internet history to see who the ‘wrongthinkers’ are and forcibly administer OXT to the ‘racists’. But I thought people should be who they are…?

I’d like to know what the baseline levels of OXT in the subjects were. For instance, did the people who had a high Xi score have higher levels of endogenous OXT? Furthermore, were they around people who did not show altruistic behavior towards ‘refugees’? That, then, would suggest that higher levels of endogenous OXT combined with non-altruistic behavior would increase ethnocentrism (De Dreu et al, 2010). OXT has also been called by journalists ‘the love and trust hormone‘ and ‘the cuddle hormone‘; however, the results of De Dreu et al (2010) call this into question, showing that the hormone may be a cause of inter-group violence along with outgroup derogation. De Dreu et al (2010) also conclude that OXT contributes to prosocial behaviors for the ingroup and facilitates outgroup derogation.

So OXT doesn’t make us prosocial on its own; OXT functions as a way to differentiate the ingroup from the outgroup, along with giving the ingroup more preferential treatment (though other analyses fail to find that outgroup trust decreases; IJzendoorn and Bakermans-Kranenburg, 2011). De Dreu et al (2010) also state that ethnocentrism driven by high levels of endogenous OXT “paves the way for intergroup bias, conflict, and violence.” However, the results of Marsh et al (2017) show that OXT may facilitate prosocial behavior.

Conclusion

This study—especially the discussion and the authors’ citation of the 2001 UNESCO address about “[building] the minds of human beings”—is pretty scary. If you don’t go by what society says is ‘good’ and ‘right’ (whatever that means), you’re a heretic and you must be shown the way: forced OXT while you watch the altruistic behavior. You don’t want to be ‘racist’ now, do you? We know that those who run our Western countries would like to make us how they think we should be—non-‘xenophobic’, accepting of migration—and they don’t want us to complain about it. So why not attempt to socially engineer the populace into conforming to what the government wants?

Of course, over the past decade or so, mass immigration from outside the West has increased. I won’t go into the causes for that since I don’t discuss politics; however, unchecked immigration—no matter what the ultimate cause is—will change the host society. People migrate somewhere for X amount of benefits, but if enough migration occurs to that nation and the native population is displaced enough, how would those benefits continue if those who migrated still exhibit the same behavior they did in their native countries?

This seems to be the start of “If we don’t like what you think or your beliefs, we will attempt to administer hormones to you and force you to watch this in order to cure you of your unnatural (in our egalitarian society) ‘racism’.” Measures such as this have, as far as I know, been spoken about since the turn of the last millennium and the completion of the Human Genome Project. It seems that as more and more migration occurs to the West, anti-migrant attitudes will grow more and more common. The plan here seems to be to socially engineer people into accepting their replacement. Why? I thought that people should ‘be themselves’; that’s what they tell transgenders, anyway. Why would ‘racists’ be any different? Oh, because it’s not acceptable in today’s increasingly multi-ethnic society.

I won’t go down the path of the naturalistic fallacy (re: ethnocentrism is good and natural because we evolved that way); however, there is, of course, great adaptive significance to such behavior. If you show more altruistic behavior towards the in-group, you’re more likely to show more altruistic behavior to your family members and thus have a better chance of protecting co-ethnics.

This is a great example of people attempting to enact policies to socially engineer people, a la Brave New World or 1984. Hormones influence behaviors, yes. Further, watching similar others engage in an action raises the likelihood that the observer will take that same action. Administering exogenous OXT while seeing that would, according to Marsh et al (2017), cure ‘racism’ and make people happy about being displaced in their own countries. Non-Western people are abnormal to our societies, and when migration occurs to the West, this leads to a decrease in social trust in the native population (Putnam, 2007).

The paper (and its results) seem heavily driven by political bias. Will these political biases doom us to further social engineering, administering to the populace whatever hormones are discovered to make us do what ‘they‘ (the government) want us to do and act how ‘they‘ want us to act? All I know is that it’s pretty scary to hear that this is even being talked about. I hope this never comes to fruition.

Does Testosterone Affect Human Cognition and Decision-Making?

1450 words

According to a new article published at The Guardian, testosterone does affect human cognition and decision-making. The article, Now we men can blame our hormones: testosterone is trouble, by Phil Daoust, is yet more media sensationalism against testosterone. Daoust’s article is full of assumptions and conclusions that do not follow from the article he cites on testosterone and cognitive reflection and decision-making.

The cited article, Single dose testosterone administration impairs cognitive reflection in men, states that endogenous testosterone (testosterone produced in the body) is correlated with physical aggression. However, I’ve shown that this is not true. The authors conclude, overall, that exogenous testosterone is related to an increase in irrational thinking and decision-making. Nothing wrong with concluding that from the data. However, Daoust’s interpretation and the conclusions he draws from this study are wrong, mostly due to the same old tales and misconceptions about testosterone.

This is the largest study to date of the effect of exogenous testosterone on decision-making and cognition. The authors show that in men administered a gel rubbed into the upper body (the kind used for TRT, testosterone replacement therapy), “incorrect intuitive answers were more common, and correct answers were less common in the T group, for each of the three CRT questions analyzed separately” (Nave et al, 2017: 8). However, what The Guardian article does not state is that this relationship could be mediated by more than testosterone, such as motivation and arithmetic skills.

Nevertheless, those who rubbed themselves with the testosterone gel answered 20 percent fewer questions correctly. This was attributed to the fact that they were more likely to be anxious and not think about the answer. One of the authors also states that either testosterone inhibits the act of mentally checking your work or it increases the intuitive feeling that you’re definitely right (since those who rubbed themselves with T gel gave more intuitive answers, implying that the testosterone made them go with the first thought in their head). I have no problems with the paper—other than the fact that gel has an inconsistent absorption rate and high rates of aromatization. The study has a good design and I hope it gets explored more. I do have a problem with Daoust’s interpretation of it, however.

A host of studies have already shown a correlation between elevated testosterone levels and aggression – and now they’re being linked to dumb overconfidence.

The ‘host of studies‘ that ‘have already shown a correlation between elevated testosterone levels and aggression‘ don’t say what you think they do. This is another case of the media’s testosterone sensationalism—talking about a hormone they don’t really know anything about.

That won’t help with the marketing – though it may explain Donald Trump and his half-cocked willy-waggling. Perhaps it’s not the president’s brain that’s running things, but the Leydig cells in his testicles.

Nice shot. This isn’t how it works, though. You can’t generalize a study done on college-aged males to a 71-year-old man.

Women aren’t entirely off the hook – their bodies also produce testosterone, though in smaller quantities, and the Caltech study notes that “it remains to be tested whether the effect is generalisable to females” – but for now at least they now have another way to fight the scourge of mansplaining: “You’re talking out of your nuts.”

Another paragraph showing no understanding, even bringing up the term ‘mansplaining’—whatever that means. This article is, clearly, demonizing high T men, and is a great example of the media bias on testosterone studies that I have brought up in the past.

Better still, with the evils of testosterone firmly established, the world may learn to appreciate older men. Around the age of 30, no longer “young, dumb and full of cum”, we typically find our testosterone levels declining, so that with every day that passes we become less aggressive, more rational and generally nicer.

“The evils of testosterone firmly established“: nice job hiding your bias. Yes, the cited article (Nave et al, 2017) does bring up how testosterone is linked to aggression. But, for the millionth time, the correlation between testosterone and aggressive behavior is only .08 (Archer, Graham-Kevan, and Davies, 2005).

Even then, most of the reduction of this ‘evil hormone’ is due to lifestyle changes. It just so happens that around ages 25-30—when most men notice a decrease in testosterone levels—men begin to change their lifestyle habits. Marriage decreases testosterone levels (Gray et al, 2002; Burnham et al, 2003; Gray, 2011; Pollet, Cobey, and van der Meij, 2013; Farrelly et al, 2015; Holmboe et al, 2017), as does having children (Gray et al, 2002; Gray et al, 2006; Gettler et al, 2011) and obesity (Palmer et al, 2012; Mazur et al, 2013; Fui, Dupuis, and Grossman, 2014; Jayaraman, Lent-Schochet, and Pike, 2014; Saxbe et al, 2017); smoking is not clearly related to testosterone (Zhao et al, 2016), and high-carb diets decrease testosterone (Silva, 2014).

So the so-called age-related decline in testosterone is not really age-related at all—it has to do with environmental and social factors which then decrease testosterone (Shi et al, 2013). Why should a man be ‘happy’ that his testosterone levels are decreasing due—largely—to his lifestyle? Low testosterone is related to cardiovascular risk (Maggio and Basaria, 2009), insulin sensitivity (Pitteloud et al, 2005; Grossman et al, 2008), metabolic syndrome (Salam, Kshetrimayum, and Keisam, 2012; Tsujimura et al, 2013), heart attack (Daka et al, 2015), elevated risk of dementia in older men (Carcaillon et al, 2014), muscle loss (Yuki et al, 2013), and stroke and ischemic attack (Yeap et al, 2009).

So it seems that Phil Daoust’s (the author of The Guardian article on testosterone) claims—that low testosterone is associated with less aggressive behavior, more rationality and being nicer in general—are wrong. Low testosterone is associated with numerous maladies, and Daoust is trying to make low testosterone out to be ‘a good thing’ while demonizing men with higher levels of testosterone, using cherry-picked studies rather than large meta-analyses like the ones I have cited, which show that testosterone has an extremely low correlation with aggressive behavior.

As I have covered in the past, testosterone levels in the West are declining, along with semen count and quality. These things are due in large part to social and environmental factors such as obesity, low activity, and an overall change in lifestyle. One (albeit anecdotal) reason I could conjure up has to do with dominance. Testosterone is the dominance hormone, and so if testosterone levels are declining, then men must not be showing dominance as much. I would place part of the blame here on feminism and articles like the one reviewed here. So contra the author’s assertion, lower levels of testosterone into old age are not good, since that signifies a change in lifestyle—much of which is in the control of the male in question (I, of course, would not advise anyone to not have children or get married).

Nave et al (2017) lead the way for further research into this phenomenon. If higher doses of exogenous testosterone do indeed inhibit cognitive reflection, then, as the authors note, “The possibility that this widely prescribed treatment has unknown deleterious influences on specific aspects of decision-making should be investigated further and taken into account by users, physicians, and policy makers” (Nave et al, 2017: 11). This is perhaps one of the most important sentences in the whole article: it is about the application of testosterone-infused gel and decision-making. The authors are talking about the implications of administering the gel to men and how it affects decision-making and cognitive reflection. This study is NOT generalizable to 1) endogenous testosterone and 2) non-college students. If the author understood the paper and the science, he wouldn’t make those assumptions about Trump’s Leydig cells in his testicles “running the show”.

Because of the testosterone fear, good studies like Nave et al (2017) get used for an agenda by people who don’t understand the hormone. People on the Right and the Left both have horrible misconceptions about the hormone, and some cannot interpret studies correctly and draw the correct conclusions from them. Testosterone—endogenous or exogenous—does not cause aggression (Batrinos, 2012). This is an established fact. The testosterone decrease between the ages of 25-30 is avoidable if you don’t adopt the bad habits that decrease testosterone. All in all, the testosterone scare is ridiculous. People are scared of the hormone because they don’t understand it.

Daoust didn’t understand the article he cited and drew false conclusions from his misinterpretations. I would be interested to see how men would fare on a cognitive reflection test after, say, their favorite team scored during a game, rather than after being given supraphysiological doses of testosterone gel. Drawing conclusions like Daoust did, however, is wrong and will mislead many more people under the guise of science.

Responses to The Alternative Hypothesis and Robert Lindsay on Testosterone

2300 words

I enjoy reading what other bloggers write about testosterone and its supposed link to crime, aggression, and prostate cancer; I used to believe some of the things they did, since I didn’t have a good understanding of the hormone or its production in the body. However, once you understand how it’s produced in the body, what others say about it will seem like bullshit—because it is. I’ve recently read a few articles on testosterone from the HBD-blog-o-sphere and, of course, they have a lot of misconceptions in them—some even using studies I have used myself on this blog to prove my point that testosterone does not cause crime!! Now, I know that most people don’t read the studies that are linked, so they take what an article says at face value because, why not, there’s a cite, so what he’s saying must be true, right? Wrong. I will begin by reviewing an article by someone at The Alternative Hypothesis and then review one article from Robert Lindsay on testosterone.

The Alternative Hypothesis

Faulk has great stuff here, but the one who wrote the article Testosterone, Race, and Crime 1) doesn’t know what he’s talking about and 2) clearly didn’t read the papers he cited. Read the article and you’ll see him make bold claims using studies I have used for my own arguments that testosterone doesn’t cause crime! Let’s take a look.

One factor which explains part of why Blacks have higher than average crime rates is testosterone. Testosterone is known to cause aggression, and Blacks are known to at once have more of it and, for genetic reasons, to be more sensitive to its effects.

  1. No it doesn’t.
  2. “Testosterone is known to cause aggression”—but that’s the thing: it’s only ‘known’ that it causes aggression; it really doesn’t.
  3. Evidence is mixed on blacks being “… for genetic reasons … more sensitive to its effects” (Update on Androgen Receptor gene—Race/History/Evolution Notes).

Testosterone activity has been linked many times to aggression and crime. Meta-analyses show that testosterone is correlated with aggression among humans and non human animals (Book, Starzyk, and Quinsey, 2001).

Why doesn’t he say what the correlation is? It’s .14 in that study, and Archer, Graham-Kevan, and Davies (2005) reanalyzed the studies used in the previous analysis and found the correlation to be .08. Leaving those numbers out is dishonest.
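To put those correlations in perspective, squaring r gives the share of variance in aggression that testosterone would account for. A minimal sketch (the r values are just the meta-analytic figures cited above):

```python
# Variance in aggression 'explained' by testosterone, using the two
# meta-analytic correlations cited above (r = .14 and r = .08).

for r in (0.14, 0.08):
    print(f"r = {r}: r^2 = {r**2:.4f} ({r**2:.2%} of variance)")

# r = .14 -> ~2% of variance; r = .08 -> ~0.6%. Either way, next to
# nothing compared to everything else that predicts aggressive behavior.
```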

Women who suffer from a disease known as congenital adrenal hyperplasia are exposed to abnormally high amounts of testosterone and are abnormally aggressive.

Abnormal levels of androgens in the womb for girls with CAH are associated with aggression, while boys with and without CAH are similar in aggression/activity level (Pasterski et al, 2008); yet black women, for instance, don’t have higher levels of testosterone than white women (Mazur, 2016). CAH girls show masculinized behavior; testosterone doesn’t cause the aggression (see Archer, Graham-Kevan, and Davies, 2005).

Artificially increasing the amount of testosterone in a person’s blood has been shown to lead to increases in their level of aggression (Burnham 2007; Kouri et al. 1995).

Actually, no. Supraphysiological levels of testosterone administered to men (200 and 600 mg weekly) did not increase aggression or anger (Batrinos, 2012).

 Finally, people in prison have higher than average rates of testosterone (Dabbs et al., 2005).

Dabbs et al don’t untangle correlation from causation. Environmental factors can explain higher testosterone levels in inmates (Mazur, 2016), and even then, some studies show that socially dominant and aggressive men have the same levels of testosterone (Ehrenkranz, Bliss, and Sheard, 1974).

Thus, testosterone seems to cause both aggression and crime.

No, it doesn’t.

Why Testosterone Does Not Cause Crime

Testosterone and Aggressive Behavior

Can racial differences in circulating testosterone explain racial differences in crime?—Race/History/Evolution Notes

Furthermore, the studies I could find on testosterone in Africans show that they have lower levels than Western men (Campbell, O’Rourke, and Lipson, 2003; Lucas, Campbell, and Ellison, 2004; Campbell, Gray, and Ellison, 2006); so, along with the studies and articles cited on testosterone, aggression, and crime, that’s another huge blow to the testosterone/crime/aggression hypothesis.

Richard et al. (2014) meta-analyzed data from 14 separate studies and found that Blacks have higher levels of free floating testosterone in their blood than Whites do.

They showed that blacks had 2.5 to 4.9 percent higher testosterone than whites, which could not explain the higher prostate cancer incidence (which meta-analyses call into question; Sridhar et al, 2010; Zagars et al, 1998). That moderate difference would not be enough to cause differences in aggression either.

Exacerbating this problem even further is the fact that Blacks are more likely than Whites to have low repeat versions of the androgen receptor gene. The androgen reception (AR) gene codes for a receptor by the same name which reacts to androgenic hormones such as testosterone. This receptor is a key part of the mechanism by which testosterone has its effects throughout the body and brain.

No they’re not.

The rest of the article talks about CAG repeats and aggressive/criminal behavior, but it seems that whites have fewer CAG repeats than blacks.

Robert Lindsay

This one is much more basic and tiring to rebut, but I’ll do it anyway. Lindsay has a whole slew of articles on testosterone on his blog that show he doesn’t understand the hormone, but I’ll just talk about this one for now: Black Males and Testosterone: Evolution and Perspectives.

It was also confirmed by a recent British study (prostate cancer rates are somewhat lower in Black British men because a higher proportion of them have one White parent)

Jones and Chinegwundoh (2014) write: “Caution should be taken prior to the interpretation of these results due to a paucity of research in this area, limited accurate ethnicity data, and lack of age-specific standardisation for comparison. Cultural attitudes towards prostate cancer and health care in general may have a significant impact on these figures, combined with other clinico-pathological associations.”

This finding suggests that the factor(s) responsible for the difference in rates occurs, or first occurs, early in life. Black males are exposed to higher testosterone levels from the very start.

In a study of women in early pregnancy, Ross found that testosterone levels were 50% higher in Black women than in White women (MacIntosh 1997).

I used to believe this, but it’s much more nuanced than that. Black women don’t have higher levels of testosterone than white women (Mazur, 2016); and even then, Lindsay fails to point out that the women in the study he cites were pregnant.

According to Ross, his findings are “very consistent with the role of androgens in prostate carcinogenesis and in explaining the racial/ethnic variations in risk” (MacIntosh 1997).

Testosterone has been hypothesized to play a role in the etiology of prostate cancer, because testosterone and its metabolite, dihydrotestosterone, are the principal trophic hormones that regulate growth and function of epithelial prostate tissue.

Testosterone doesn’t cause prostate cancer (Stattin et al, 2003; Michaud, Billups, and Partin, 2015). Diet explains any risk that may be there (Hayes et al, 1999; Gupta et al, 2009; Kheirandish and Chinegwundoh, 2011; Williams et al, 2012; Gathirua-Mingwai and Zhang, 2014). However, in a small population-based study on blacks and whites from South Carolina, Sanderson et al (2017) “did not find marked differences in lifestyle factors associated with prostate cancer by race.”

Regular exercise, however, can decrease PCa incidence in black men (Moore et al, 2010). A lot of differences can be—albeit not too largely—ameliorated by environmental interventions such as dieting and exercising.

Many studies have shown that young Black men have higher testosterone than young White men (Ellis & Nyborg 1992; Ross et al. 1992; Tsai et al. 2006).

Ellis and Nyborg (1992) found a 3 percent difference. Ross et al (1992) have the same problem as Ross et al (1986), which used university students (~50) for their sample; they’re not representative of the population. Ross et al (1992) also write:

Samples were also collected between 1000 h and 1500 h to avoid confounding by any diurnal variation in testosterone concentrations.

Testosterone levels should be measured close to 8 am, when they peak. Instead, assays were collected “between” 10 am and 3 pm (in other words, whenever was convenient for the students), with no controls on prior activities and no attempt to assay at 8 am. Subjects of any racial group could have shown up at any point in that five-hour window and skewed the results, so I don’t take this study seriously. Assaying “between” those times completely defeats the purpose of the study.

This advantage [the so-called testosterone advantage] then shrinks and eventually disappears at some point during the 30s (Gapstur et al., 2002).

Gapstur et al (2002) help my argument, not yours.

This makes it very difficult if not impossible to explain differing behavioral variables, including higher rates of crime and aggression, in Black males over the age of 33 on the basis of elevated testosterone levels.

See above where I talk about crime/testosterone/aggression.

Critics say that more recent studies done since the early 2000’s have shown no differences between Black and White testosterone levels. Perhaps they are referring to recent studies that show lower testosterone levels in adult Blacks than in adult Whites. This was the conclusion of one recent study (Alvergne et al. 2009) which found lower T levels in Senegalese men than in Western men. But these Senegalese men were 38.3 years old on average.

Alvergne, Faurie, and Raymond (2009) show that the differences are due to environmental factors:

This study investigated the relationship between men’s salivary T and the trade-off between mating and parenting efforts in a polygynous population of agriculturists from rural Senegal. The men’s reproductive trade-offs were evaluated by recording (1) their pair-bonding/fatherhood status and (2) their behavioral profile in the allocation of parental care and their marital status (i.e. monogamously married; polygynously married).

They also controlled for age, so his statement “But these Senegalese men were 38.3 years old on average” is useless.

These critics may also be referring to various studies by Sabine Rohrmann which show no significance difference in T levels between Black and White Americans. Age is poorly controlled for in her studies.

That is one study out of many that I have referenced. Rohrmann et al (2007) controlled for age. I like how he literally just says “age is poorly controlled for in her studies”, because she did control for age.

That study found that more than 25% of the samples for adults between 30 and 39 years were positive for HSV-2. It is likely that those positive samples had been set aside, thus depleting the serum bank of male donors who were not only more polygamous but also more likely to have high T levels. This sample bias was probably worse for African American participants than for Euro-American participants.

Why would they use diseased samples? Do you even think?

Young Black males have higher levels of active testosterone than European and Asian males. Asian levels are about the same as Whites, but a study in Japan with young Japanese men suggested that the Japanese had lower activity of 5-alpha reductase than did U.S. Whites and Blacks (Ross et al 1992). This enzyme metabolizes testosterone into dihydrotestosterone, or DHT, which is at least eight to 10 times more potent than testosterone. So effectively, Asians have the lower testosterone levels than Blacks and Whites. In addition, androgen receptor sensitivity is highest in Black men, intermediate in Whites and lowest in Asians.

Wu et al (1995) show that Asians have the highest testosterone levels, so the evidence is mixed here as well. See above on AR sensitivity.

Ethnicmuse also showed that, contrary to popular belief, Asians have higher levels of testosterone than Africans who have higher levels of testosterone than Caucasians in his meta-analysis. (Here is his data.)

The Androgen Receptor and “masculinization”

Let us look at one study (Ross et al 1986) to see what the findings of a typical study looking for testosterone differences between races shows us. This study gives the results of assays of circulating steroid hormone levels in white and black college students in Los Angeles, CA. Mean testosterone levels in Blacks were 19% higher than in Whites, and free testosterone levels were 21% higher. Both these differences were statistically significant.

Assay times were between 10 am and 3 pm, the sample was an unrepresentative group of college men, and there was no control for waist circumference. Horrible study.

A 15% difference in circulating testosterone levels could readily explain a twofold difference in prostate cancer risk.

No, it wouldn’t (if it were true).

Higher testosterone levels are linked to violent behavior.

Causation not untangled.

Studies suggest that high testosterone lowers IQ (Ostatnikova et al 2007). Other findings suggest that increased androgen receptor sensitivity and higher sperm counts (markers for increased testosterone) are negatively correlated with intelligence when measured by speed of neuronal transmission and hence general intelligence (g) in a trade-off fashion (Manning 2007).

Who cares about correlations? Causes matter more, and high testosterone doesn’t lower IQ.

Conclusion

Racial differences in testosterone don’t exist, or are extremely small in magnitude (as I’ve covered countless times). The TAH article misrepresents studies and leaves out important figures on the testosterone differences between the two races to push a certain agenda; if you read the studies, you see something completely different. It’s the same with Lindsay: he misunderstood a few studies to push his agenda about testosterone, crime, and prostate cancer. They’re both wrong.

Why Testosterone Does Not Cause Crime

Testosterone and Aggressive Behavior

Race, Testosterone, and Prostate Cancer

Population variation in endocrine function—Race/History/Evolution Notes


Can racial differences in circulating testosterone explain racial differences in crime?—Race/History/Evolution Notes

Racial differences in testosterone are tiring to talk about now, but there are still a few more articles I need to rebut. People read and write about things they don’t understand, which is the cause of these misconceptions about the hormone, as well as, of course, misinterpreted studies. Learn about the hormone and you won’t fear it. It doesn’t cause crime, prostate cancer, or aggression; the people who write these articles have one idea in their head and they just run with it. They don’t understand the intricacies of the endocrine system and how sensitive it is to environmental influence. I will cover more articles that others have written on testosterone and aggression to point out what they got wrong.

HBD and Sports: Baseball and Reaction Time

2050 words

If you’ve ever played baseball, then you have first-hand experience of what it takes to play the game; one of the major abilities you need is a quick reaction time. Baseball players are in the upper echelons in regards to pitch recognition and the ability to process information (Clark et al, 2012).

Some people, however, believe that there is an ‘IQ cutoff’ in regards to baseball; since general intelligence is supposedly correlated with reaction time (RT), those with faster RTs must have higher intelligence and vice-versa. However, this trait—in a baseball context—is trainable to an extent. To those who would claim that IQ is a meaningful metric in baseball, I pose two questions: would higher-IQ teams, on average, beat lower-IQ teams, and would higher-IQ people have better batting averages (BAs) than lower-IQ people? I doubt it, because, as I will cover, these variables are trainable, and therefore talking about reaction time in the MLB in regards to intelligence is useless.

Meden et al (2012) tested athlete and non-athlete college students on visual reaction time (VRT). They tested the athletes’ VRT once, while they tested the non-athletes’ VRT twice a week for a 3-week period, totaling 6 tests. Men ended up having faster VRTs than women, and athletes had faster VRTs than non-athletes. This study shows that VRT is a trainable variable, and if VRT can be improved with training, then hitting and fielding can be trained as well.

Reaction time training works on the communication between the brain, spinal cord, and musculoskeletal system, and it includes both physical and cognitive components. So since VRT can be trained, it makes sense that Major League hitting and fielding can be trained as well.

David Epstein, author of The Sports Gene, says that he has a faster reaction time than Albert Pujols:

One of the big surprises for me was that pro athletes, particularly in baseball, don’t have faster reflexes on average than normal people do. I tested faster than Albert Pujols on a visual reaction test. He only finished in the 66th percentile compared to a bunch of college students.

It’s not a superior RT that baseball players have in comparison to the normal population, says Epstein, but “learned perceptual skills that the MLB players don’t know they learned.” Major League baseball players do have average reaction times (Epstein, 2013: 1) but far superior visual acuity. Most pro baseball players have visual acuity of 20/13, with some players at 20/11; the theoretical best visual acuity possible is 20/8 (Clark et al, 2012). Laby, Kirschen, and Abbatine showed that 81 percent of the 1500 Major and Minor League Mets and Dodgers players tested had visual acuities of 20/15 or better, with 2 percent of players having a visual acuity of 20/9.2 (Laby et al, 1996).
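For readers unfamiliar with Snellen notation, here is a short sketch converting the fractions above to decimal acuity (my own illustration, not taken from the cited studies):

```python
# Snellen fractions as decimal acuity: 20/13 means resolving at 20 ft what a
# 'normal' (20/20) eye resolves at 13 ft; a larger decimal means sharper vision.

for numerator, denominator in [(20, 20), (20, 13), (20, 11), (20, 9.2), (20, 8)]:
    print(f"20/{denominator}: decimal acuity = {numerator / denominator:.2f}")
```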

So it’s not faster RT that baseball players have, but a better visual acuity—on average—in comparison to the general population. Visual reaction time is a highly trainable variable, and so since MLB players have countless hours of practice, they will, of course, be superior on that variable.

Clark et al (2012) showed that high-performance vision training can be performed at the beginning of the season and maintained throughout the season to improve batting parameters. They also state that visual training programs can help hitters, since the eyes account for 80 percent of the information taken into the brain. Reichow, Garchow, and Baird (2011) conclude that a “superior ability to recognize pitches presented via tachistoscope may correlate with a higher skill level in batting.” Clark et al (2012) posit that their training program will help batters to better recognize the spot of the ball and the pitcher’s finger position in order to better identify different pitches. Clark et al (2012) conclude:

The University of Cincinnati baseball team, coaches and vision performance team have concluded that our vision training program had positive benefits in the offensive game including batting and may be providing improved play on defense as well. Vision training is becoming part of our pre-season and in-season conditioning program as well as for warmups.

Classe et al (1997) showed that VRT was related to batting skill, but not fielding or pitching skill. Further, there was no statistically significant relationship between VRT and age, race, or fielding. Therefore, VRT does not differ by race and does not contribute to any racial differences in baseball.

Baseball and basketball athletes had faster RTs than non-athletes (Nakamoto and Mori, 2008). The Go/NoGo response that is typical of athletes is most certainly trainable. Kida et al (2005) showed that intensive practice improved Go/NoGo reaction time, but not simple reaction time. Kida et al (2005: 263-264) conclude that simple reaction time is not an accurate indicator of experience, performance, or success in sports; that Go/NoGo reaction time can be improved by practice and is not innate (while simple reaction time was not altered); and that the Go/NoGo reaction time can be “theoretically shortened toward a certain value determined by the simple reaction time proper to each individual.”

Readiness potential was significantly shorter in baseball players than in a control group (Park, Fairweather, and Donaldson, 2015). Hand-eye coordination, however, had no effect on earned run average (ERA) or batting average in a sample of 410 Major and Minor League members of the LA Dodgers (Laby et al, 1997).

So now we know that VRT can be trained, VRT shows no significant racial differences, and that Go/NoGo RT can be improved by practice. Now a question I will tackle is: can RT tell us anything about success in baseball and is RT related to intelligence/IQ?

Khodadi et al (2014) conclude that “The relationship between reaction time and IQ is too complicated and revealing a significant correlation depends on various variables (e.g. methodology, data analysis, instrument etc.).” Since the relationship between the two variables is so complicated, hinging on methodology, data analysis, and the instrument used, RT is not a good correlate of IQ. It can, furthermore, be trained (Dye, Green, and Bavelier, 2012).

In the book A Question of Intelligence, journalist Dan Seligman writes:

In response, Jensen made two points: (1) The skills I was describing involve a lot more than just reaction time, they also depended heavily on physical coordination and endless practice. (2) It was, however, undoubtedly true that there was some IQ requirement—Jensen guessed it might be around 85—below which you could never recruit for major league baseball. (About one-sixth of Americans fall below 85.)

I don’t know where Jensen grabbed the ‘IQ requirement’ for baseball, which he claims to be around 85 (which is at the black average in America). This quote, however, proves my point that there is way more than RT involved in hitting a baseball, especially a Major League fastball:

Hitting a baseball traveling at 100 mph is often considered one of the most difficult tasks in all of sports. After all, if you hit the ball only 30% of the time, baseball teams will pay you millions of dollars to play for them. Pitches traveling at 100 mph take just 400 ms to travel from the pitcher to the hitter. Since the typical reaction time is 200 ms, and it takes 100 ms to swing the bat, this leaves just 100 ms of observation time on which the hitter can base his swing.

This lends more credence to the claim that hitting a baseball is about more than quick reflexes; considerable training goes into learning the cues that certain pitchers give off, for instance, identifying different pitches from a particular pitcher’s arm motion coming out of the stretch. This, as shown above in the Epstein quote, is most definitely a trainable variable.

Babe Ruth, for instance, had better hand-eye coordination than 98.8 percent of the population. Though that wasn’t why he was one of the greatest hitters of all time; it’s because he mastered all of the other variables in regards to hitting, which are learnable and not innate.

Witt and Proffitt (2005) showed that apparent ball size is correlated with batting average: the better batters fared at the plate, the bigger they perceived the ball to be, and so the easier it was to hit. Hitting has much less to do with reaction time and much more to do with prediction, along with the pitching style of the pitcher, his pitching repertoire, and numerous other factors.

It takes a 90-95 mph fastball about 400 milliseconds to reach home plate. It takes the brain 100 milliseconds to process the image the eyes take in, 150 milliseconds to swing, and 25 milliseconds for the brain to send the swing signal to the body. This leaves the hitter with 125 milliseconds to judge the incoming fastball. Clearly, there is more to hitting than reaction time, especially with all of these variables in play. Players have .17 seconds to decide whether or not to swing at a pitch and where to place their bat (Clark et al, 2012).
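A minimal sketch of that arithmetic (the millisecond figures are the ones quoted above; the function name is my own):

```python
# Time budget for hitting a fastball, using the millisecond figures above.

def decision_window_ms(pitch_travel, visual_processing, swing, neural_signal):
    """Milliseconds left for the hitter to read the pitch and commit."""
    return pitch_travel - visual_processing - swing - neural_signal

# 90-95 mph fastball: ~400 ms from the pitcher's hand to the plate.
print(decision_window_ms(400, 100, 150, 25))  # -> 125 ms
```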

A so-called ‘IQ cutoff’ for baseball does exist, but only in the trivial sense that IQs well below 85 (once you get into the low 70s and below) indicate developmental disorders. Further, the 85-115 IQ range encompasses 68 percent of the population. However, RT is not even one of the most important factors in hitting; numerous other (trainable) variables influence fastball hitting, and all of the best players in the world employ these strategies. Nevertheless, if quick RTs were correlated with baseball proficiency—namely, in hitting—then why are Asians only 1.2 percent of the players in the MLB? Maybe because RT doesn’t really have anything to do with hitting proficiency and other variables matter more.

People may assume that since intelligence and RT are (supposedly) linked, baseball players, who (supposedly) have quick RTs, must be intelligent, and that there must therefore be an IQ cutoff because intelligence/g and RT supposedly correlate. However, I’ve shown two things: 1) RT isn’t very important to hitting at an elite level and 2) more important skills can be acquired for hitting fastballs, most notably (in my opinion) pitch recognition and reading the pitcher’s arm location. The Go/NoGo RT can also be trained and is, arguably, one of the most important training targets for elite hitting. Clearly, elite hitting is predicated on far more than a quick RT, and most of the variables involved in elite hitting are most definitely trainable, as reviewed in this article.

People, clearly, make unfounded claims without having any experience in something. It’s easy to make claims when you’re just looking at numbers and attempting to draw conclusions from data; it’s a whole other ballgame (pun intended) when you’re up at the plate yourself or coaching someone on how to hit or play the infield. These baseless claims would be made far less often if the people making them had any actual athletic experience. If they did, they would know of the constant repetition that goes into hitting and fielding: the monotonous drills you have to do every day until your muscle memory is trained to flawlessly, without even thinking about it, throw a ball from shortstop to first base.

Practice, especially Major League practice, is pivotal to elite hitting; only with elite practice can a player learn how to spot the ball and the pitcher’s finger position to quickly identify the pitch type and decide whether he wants to swing. In conclusion, a whole slew of cognitive/psychological abilities are involved in the upper echelons of elite baseball; however, a good majority of the traits needed to succeed in baseball are trainable, and RT has little to do with elite hitting.

(When I get time I’m going to do a similar analysis like what I wrote about in the article on my possible retraction of my HBD and baseball article. Blacks dominate in all categories that matter, this holds for non-Hispanic whites and blacks as well as Hispanic blacks and whites, read more here. Nevertheless, I may look at the years 1997-2017 and see if anything has changed from the analysis done in the late 80s. Any commentary on that matter is more than welcome.)

Racial Differences in Testosterone…again

1200 words

Testosterone is a fascinating hormone—the most well-known hormone to the lay public. What isn’t well-known to the lay public is how the hormone is produced and the reasons why it gets elevated. I’ve covered racial differences in testosterone in regards to crime, penis size and Rushton’s overall misuse of r/K selection theory. In this article, I will talk about what raises and decreases testosterone, as well as speak about racial differences in testosterone again since it’s such a fun topic to cover.

JP Rushton writes, in his 1995 article titled Race and Crime: An International Dilemma:

One study, published in the 1993 issue of Criminology by Alan Booth and D. Wayne Osgood, showed clear evidence of a testosterone-crime link based on an analysis of 4,462 U.S. military personnel. Other studies have linked testosterone to an aggressive and impulsive personality, to a lack of empathy, and to sexual behavior.

Booth and Osgood (1993: 93) do state that “This pattern of results supports the conclusions that (I) testosterone is one of a larger constellation of factors contributing to a general latent propensity toward deviance and (2) the influence of testosterone on adult deviance is closely tied to social factors.” However, as I have extensively documented, the correlation between testosterone and aggression is extremely low (Archer, 1991; Book, Starzyk, and Quinsey, 2001; Archer, Graham-Kevan, and Davies, 2005), and therefore it cannot be the cause of crime.

Another reason why testosterone is not the cause of aggression/deviant behavior is the time of day at which most crimes are committed: most crimes occur at night, when testosterone is near its circadian low, while the hormone peaks in the morning. Therefore, testosterone cannot possibly be the cause of crime. I’ve also shown that, contrary to popular belief, blacks don’t have higher levels of testosterone than whites, along with the fact that testosterone does not cause prostate cancer, and that even if blacks did have these supposedly higher levels of the hormone, it would NOT explain higher rates of crime.

Wu et al (1995) show that Asian Americans had the highest testosterone levels, African Americans were intermediate and European Americans were last, after adjustments for BMI and age were made. Though, I’ve shown in larger samples that, if there is any difference at all (and a lot of studies show no difference), it is a small advantage favoring blacks. We then are faced with the conclusion that this would not explain disease prevalence nor higher rates of crime or aggression.

Testosterone, contrary to Rushton’s (1999) assertion, is not a ‘master switch’. Rushton, of course, cites Ross et al (1986), which I’ve tirelessly rebutted: assay times were all over the place (between 10 am and 3 pm), while testosterone levels are highest at 8 am. The most important physiological variable in Rushton’s model is testosterone, and without his highly selected studies, his narrative falls apart. Testosterone doesn’t cause crime, aggression, or prostate cancer.

The most important takeaway is this: Rushton’s r/K theory hinges on 1) blacks having higher levels of testosterone than whites and 2) these higher levels of testosterone driving higher levels of aggression, which lead to crime and then prostate cancer. Even then, Sridhar et al (2010) meta-analyzed 17 articles on racial differences in prostate cancer survival rates. They state in their conclusion that “there are no differences between African American and Whites in survival from prostate cancer.” Zagars et al (1998) show that there were no significant racial differences in serum testosterone. Furthermore, when matched for major prognostic factors, “the outcome for clinically local–regional prostate cancer does not depend on race (6,7,14–19). Moreover there appear to be no racial differences in the response of advanced prostate cancer to androgen ablation (29,47). Our study provides further evidence that racial differences in disease outcome are absent for clinically localized prostate cancer” (Zagars et al, 1998: 521). So these two studies provide further support that Rushton et al were wrong in regards to prostate cancer mortality as well.

Rushton (1997: 185) writes:

In any case, socialization cannot account for the early onset of the traits, the speed of dental and other maturational variables, the size of the brain, the number of gametes produced, the physiological differences in testosterone, nor the evidence on cross-cultural consistency.

There are no racial differences in testosterone, and if there were, social factors would explain the difference between the races. However, as I’ve noted in the past, testosterone levels are high in young black males with low educational attainment (Mazur, 2016). The higher levels of testosterone in blacks compared to whites (which, if you look at figure 1, are not high at all) are accounted for by honor culture, a social variable. Furthermore, the effects of the environment on testosterone are more notable than those of genetics at 5 months of age (Carmaschi et al, 2010). Environmental factors greatly influence testosterone (Booth et al, 2006), so Rushton’s statement that “socialization cannot account for the early onset” of “physiological differences in testosterone” is clearly wrong, since environmental influences can be seen in infants as well as adults. Testosterone is strongly mediated by the environment; this is not up for debate.

Testosterone is one of many important hormones in the body, and the races do not differ in this variable. Therefore, all of Rushton’s ‘r/K predictions’, which literally hinge on testosterone (Lynn, 1990), fall apart without this ‘master switch’ (Rushton, 1999) driving all of these behaviors. Any theory of crime that includes testosterone as a main driver needs to be rethought; numerous studies attest to the fact that testosterone does not cause crime. Racial differences in testosterone only appear in small studies, and the studies that do show these differences get touted around while the better, larger analyses don’t get talked about, because they go against a certain narrative.

Finally, there is no inevitability to a testosterone decrease in older men. So-called “age-related declines” in the hormone are largely explained by smoking, obesity, chronic disease, marital status, and depression (Shi et al, 2013), and even becoming a father explains lower levels of testosterone (Gray, Yang, and Pope, 2006). On top of that, marriage also reduces testosterone, with men who went from unmarried to married showing a sharp decline in testosterone over a ten-year period (Holmboe et al, 2017). This corroborates numerous other studies showing that marriage lowers testosterone levels in men (Mazur and Michalek, 1998; Nansunga et al, 2014). Some of this decrease may be lessened by frequent sexual intercourse (Gettler et al, 2013). So if you live a healthy lifestyle, the testosterone decrease that plagues most men won’t happen to you. The decreases are due to lifestyle changes, not age per se.

People are afraid of higher levels of testosterone at a young age and equally terrified of declining testosterone levels at an old age. However, I’ve exhaustively shown that testosterone is not the boogeyman, nor the ‘master switch’ (Rushton, 1999), it’s made out to be. There are no ‘genes for’ testosterone; its production is indirect, through DNA. Thus, if you keep an active lifestyle, don’t become obese, and don’t become depressed, you can bypass the so-called testosterone decrease. The fear-mongering on both sides of the ‘testosterone curve’ is seriously blown out of proportion. Testosterone doesn’t cause crime, aggression, or prostate cancer (and even then, large meta-analyses show no difference in PCa mortality between blacks and whites).

The fear of the hormone testosterone is due to ignorance of what it does in the body and how it is produced. If people understood the hormone, they would not fear it.

Earlier Evidence for Erectus’ Use of Fire

1600 words

I hold the position that the creation and management of man-made fire was a pivotal driver of our brain size increase over the past 2 my. However, evidence for fire use in early hominins is scant; a few promising locations have popped up over the years, the most promising being Wonderwerk Cave in South Africa (Berna et al, 2012). Much more recently, however, evidence was discovered at Koobi Fora, Kenya, of fire use by Erectus 1.5 mya (Hlubik et al, 2017).

Hlubik et al (2017) identified two sites at Koobi Fora, Kenya, that have evidence of fire use 1.5 mya, and Erectus was the hominin in that area at that time. Hlubik et al (2017) conclude the following:

(1) Spatial analysis reveals statistically significant clusters of ecofacts and artifacts, indicating that the archaeological material is in situ and is probably the result of various hominin activities during one or a few occupation phases over a short period of time. (2) We have found evidence of fire associated with Early Stone Age archaeological material in the form of heated basalt (potlids flakes), heated chert, heated bone, and heated rubified sediment. To our knowledge this is, to date, the earliest securely documented evidence of fire in the archaeological context. (3) Spatial analysis shows the presence of two potential fire loci. Both loci contain a few heated items and are characterized by surrounding artifact distributions with strong similarities to the toss and drop zones and ring distribution patterns described for ethnographic and prehistoric hearths (Binford 1983; Henry 2012; Stappert 1998).

This is one of the best sites yet for early hominin fire control. This would also show how the biologic/physiologic/anatomic changes occurred in Erectus. Erectus could then afford a larger brain and could spend more time doing other activities since he wasn’t constrained to foraging and eating for 8+ hours per day. So, clearly, the advent of fire use in our lineage was one of the most important time frames in our evolutionary history since we could extract more energy from what we ate.

Carmody et al (2016) identified ‘cooking genes’ that were under selection between 275,000 and 765,000 ya. So we must have been cooking, in my opinion, before 765,000 ya, which would then have brought about the genetic changes due to a large shift in diet—to cooked meat and tubers. Man’s adaptation to cooked food is one of the most important things to occur in our genus, because it allowed us to spend less time eating and more time doing. This change to a higher quality diet began in early Erectus (Aiello, 1997), and it shrank his teeth and gut. If Erectus did not control fire, the reduction in tooth size needs to be explained some other way—but the only way it can be explained is through the use of cooked food.

Around 1.6 mya there is evidence of the first human-like footprints/gait (Steudel-Numbers, 2006; Bennett et al, 2009). This seems to mark the advent of hunting parties and cooperation in hominins to chase prey. The two newly identified sites at Koobi Fora lend further credence to the endurance running hypothesis (Carrier, 1984; Bramble and Lieberman, 2004; Mattson, 2012), since without higher quality nutrition, Erectus would not have undergone the anatomic changes he did, nor would he have had the ability to hunt, being restricted to foraging, eating, and digesting. Only with higher quality energy could the human body have evolved. The further socialization from hunting and cooking/eating meat was also pivotal to our evolution, allowing our brains to grow in size, since we had high-quality energy to power our growing set of neurons.

The growth pattern of Nariokotome boy (also called Turkana boy) is within the range of modern humans (Clegg and Aiello, 1999). It’s interesting to note that Nariokotome boy, one of the best-preserved Erectus fossils discovered to date, had a growth pattern similar to modern humans. Nariokotome boy is estimated to be about 1.6 million years old, so this implies that a similarly high-quality diet was needed to produce similar morphology. Note, too, that Lake Turkana is near Koobi Fora in Kenya. So it seems that basic human morphology emerged around 1.5-1.6 mya and was driven, in part, by the use and acquisition of fire to cook food.

This is in line with the brain size increase that Fonseca-Azevedo and Herculano-Houzel (2012) observed in their study: the metabolic limitations of a herbivorous diet impose constraints on how big a brain can get. Herculano-Houzel and Kaas (2011) state that Erectus had about 62 billion neurons, so given the number of neurons he had to power, he would have had to eat a raw, herbivorous diet for over 8 hours a day. Modern humans, with our 86 billion neurons (Herculano-Houzel, 2009; 2012), would need to feed for over 9 hours a day on a raw diet to power our neurons. But, obviously, that’s not practical. So Erectus must have had another way to extract more, higher-quality energy from his food.
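To make the constraint concrete, here is a deliberately crude sketch that linearly interpolates required daily feeding hours from the two figures above. The real model in Fonseca-Azevedo and Herculano-Houzel (2012) also depends on body mass, so treat this as illustrative only:

```python
# Toy model: daily raw-diet feeding hours as a linear function of neuron
# count, fit through the two data points quoted above
# (62 billion neurons -> ~8 h/day; 86 billion neurons -> ~9 h/day).
# The published model also accounts for body mass; this is illustrative only.

def raw_diet_feeding_hours(neurons_billions: float) -> float:
    slope = (9.0 - 8.0) / (86.0 - 62.0)   # hours per billion neurons
    return 8.0 + slope * (neurons_billions - 62.0)

print(round(raw_diet_feeding_hours(62), 1))   # ~8.0 h/day (H. erectus)
print(round(raw_diet_feeding_hours(86), 1))   # ~9.0 h/day (modern humans)
```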

Think about this for a minute. If we ate a raw, plant-based diet, then we would have to feed for most of our waking hours. Can you imagine spending what amounts to more than one work day just foraging and eating? The rest of the time awake would be mostly spent digesting the food you’ve eaten. This is why cooking was pivotal to our evolution and why Erectus must have had the ability to cook—his estimated neuron count based on his cranial capacity shows that he would not have been able to subsist on a raw, plant-based diet and so the only explanation is that Erectus had the ability to cook and afford his larger brain.

Wrangham (2017) goes through the pros and cons of the cooking hypothesis, which hinges on Erectus’ control of fire. Dates for hearths, such as Neanderthal hearths, have been pushed back further and further, but nowhere near early enough to match when Erectus is proposed to have used and controlled fire. As noted above, 1.6 mya is when the human body plan began to emerge (Gowlett, 2016), which can be seen by looking at Nariokotome boy. So if Nariokotome boy had a growth pattern similar to that of modern humans, then he must have been eating a high-quality diet of cooked food.

Wrangham (2017) poses two questions if Erectus did not control fire:

First, how could H. erectus use increased energy, reduce its chewing efficiency, and sleep safely on the ground without fire? Second, how could a cooked diet have been introduced to a raw-foodist, mid-Pleistocene Homo without having major effects on its evolutionary biology? Satisfactory answers to these questions will do much to resolve the tension between archaeological and biological evidence.

I don’t see how these things can be explained without accepting that Erectus did control fire. And now Hlubik et al (2017) lend more credence to the cooking hypothesis. The biological and anatomical evidence is there, and the archaeological evidence is beginning to line up with what we know about the evolution of our genus—most importantly our brain size, pelvic size, and modern-day gait.

Think about what I said above about time spent foraging and eating. Gorillas, for instance, have large bodies partly due to sexual selection and the large amounts of kcal they consume; some gorillas have been observed feeding for upwards of 10 hours per day. So, pretty much, you can have brains or you can have brawn; you can’t have both. Cooking allowed for our brains, since we could extract higher quality nutrients from our food; it released the metabolic constraint seen in gorillas. It would not be possible to power such large brains without higher quality nutrition.

One of the most important things to note is that Erectus had smaller teeth, which could only occur due to a shift in diet—masticating softer foods leads to a subsequent decrease in tooth and jaw size. Zink and Lieberman (2016) show that fire/cooking was not the only factor in easing mastication: slicing meat and pounding tubers improved the ability to break down food by 5 percent and decreased masticatory force requirements by 41 percent. This, too, contributed to the decrease in jaw/tooth size in Erectus, and the advent of cooking with softer, higher-quality food led to a further decrease on top of that, although Zink and Lieberman (2016: 3) state that “the reductions in jaw muscle and dental size that evolved by H. erectus did not require cooking and would have been made possible by the combined effects of eating meat and mechanically processing both meat and USOs.” I disagree and believe that the two hypotheses are complementary: a reduction would have occurred with the introduction of mechanically processed food, and then again when Erectus controlled fire and began cooking meat and tubers.

In conclusion, the brain size increases noted over Erectus’ evolutionary history need to be explained, and one of the best explanations is that he controlled fire and cooked his food. There are numerous lines of evidence that he did, mostly biological in nature, but archaeological sites are now beginning to show just how long ago Erectus began using fire (Berna et al, 2012; Hlubik et al, 2017). Many pivotal events in our history can be explained by our shift to softer foods with the advent of cooking, from smaller teeth and jaws to the biologic and physiologic adaptations that followed the new diet. Erectus is a very important hominin to study, because many of our modern-day behaviors began with him, and by better understanding what he did and created and how he lived, we can better understand ourselves.

Is Obesity Caused by a Virus?

2150 words

I’ve recently taken a large interest in the human microbiome and parasites and their relationship with how we behave. Certain parasites can and do have an effect on human behavior, and they also reduce or increase certain microbes, some of which are important for normal functioning. What I’m going to write may seem weird and counter-intuitive to the CI/CO (calories in/calories out) model, but once you understand how the diversity of the human microbiome matters for energy acquisition, you’ll begin to understand how the microbiome contributes to the exploding obesity rate in the first world.

One of the books I’ve been reading about the human microbiome is 10% Human: How Your Body’s Microbes Hold the Key to Health and Happiness, in which Alanna Collen, a Ph.D. in evolutionary biology, outlines how the microbiome affects our health and how we behave. One of the most intriguing things I’ve read in the book so far is the relationship between microbiome diversity, obesity, and a virus.

Collen (2014: 69) writes:

But before we get too excited about the potential for a cure for obesity, we need to know how it all works. What are these microbes doing that make us fat? Just as before, the microbiotas in Turnbaugh’s obese mice contained more Firmicutes and fewer Bacteroidetes, and they somehow seemed to enable the mice to extract more energy from their food. This detail undermines one of the core tenets of the obesity equation. Counting ‘calories-in’ is not as simple as keeping track of what a person eats. More accurately, it is the energy content of what a person absorbs. Turnbaugh calculated that the mice with the obese microbiota were collecting 2 per cent more calories from their food. For every 100 calories the lean mice extracted, the obese mice squeezed out 102.

Not much, perhaps, but over the course of a year or more, it adds up. Let’s take a woman of average height, 5 foot 4 inches, who weighs 62 kg (9 st 11 lb) and has a healthy Body Mass Index (BMI: weight (kg) / height (m)²) of 23.5. She consumes 2000 calories per day, but with an ‘obese’ microbiota, her extra 2 per cent calorie extraction adds 40 more calories each day. Without expending extra energy, those further 40 calories per day should translate, in theory at least, to a 1.9 kg weight gain over a year. In ten years, that’s 19 kg, taking her weight to 81 kg (12 st 11 lb) and her BMI to an obese 30.7. All because of just 2 per cent extra calories extracted from her food by her gut bacteria.
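Collen’s arithmetic is easy to check. A quick sketch, assuming the common rule of thumb of roughly 7,700 kcal stored per kilogram of body fat (the book doesn’t state which conversion factor it uses):

```python
# Checking the numbers in the quote above.
# Assumption: ~7,700 kcal per kg of body fat (a standard rule of thumb).

KCAL_PER_KG_FAT = 7700

height_m = 1.6256                          # 5 ft 4 in = 64 in
weight_kg = 62
print(round(weight_kg / height_m**2, 1))   # BMI ~23.5

extra_kcal_per_day = 2000 * 0.02           # 2% extra extraction = 40 kcal/day
gain_per_year_kg = extra_kcal_per_day * 365 / KCAL_PER_KG_FAT
print(round(gain_per_year_kg, 1))          # ~1.9 kg/year -> ~19 kg per decade

print(round(81 / height_m**2, 1))          # BMI at 81 kg ~30.7
```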

Turnbaugh et al (2006) showed that differing microbiota contribute to differing amounts of weight gain. The obese microbiome has a greater capacity to extract energy from the same amount of food in comparison to the lean microbiome. This implies that an obese person would extract more energy than a lean person eating the same food—even if the so-called true caloric value on the package, from a calorimeter, says otherwise. How much energy we absorb from the food we consume comes down to genes, but not just the genes you get from your parents; what matters is which genes are turned on or off. Our microbes also control some of our genes to suit their own needs, driving us to do things that would benefit them.

Gut microbiota do influence gene expression (Krautkramer et al, 2016). This is something that behavioral geneticists and psychologists need to look into when attempting to explain human behavior, but that’s for another day. The fact of the matter is that where the energy broken down from food by the microbiome goes is dictated by genes, the expression of which is partly controlled by the microbiome. Certain microbiota have the ability to turn up production of certain genes that encourage more energy to be stored inside the adipocyte (Collen, 2014: 72). So the ‘obese’ microbiota, mentioned previously, can upregulate genes that control fat storage, forcing the body to extract more energy out of what is eaten.

Indian doctor Nikhil Dhurandhar set out to find out why he couldn’t cure his patients of obesity; they kept coming back to him again and again, uncured. At the time, an infectious virus was wiping out chickens in India. Dhurandhar had family and friends who were veterinarians, and they told him that the infected chickens were fat—with enlarged livers, shrunken thymus glands, and a lot of fat. Dhurandhar then injected chickens with the virus and discovered that the injected chickens grew fatter than the uninjected ones (Collen, 2014: 56).

Dhurandhar, though, couldn’t continue his research into other causes for obesity in India, so he decided to relocate his family to America to study the science behind obesity. He couldn’t find work in any labs to test his hypothesis that a virus was responsible for obesity, but right before he was about to give up and go back home, nutritional scientist Richard Atkinson offered him a job in his lab. They were, of course, not allowed to ship the chicken virus to America “since it might cause obesity after all” (Collen, 2014: 75), so they had to experiment with another virus, adenovirus 36—Ad-36 (Dhurandhar et al, 1997; Atkinson et al, 2005; Pasarica et al, 2006; Gabbert et al, 2010; Vander Wal et al, 2013; Berger et al, 2014; Ponterio and Gnessi, 2015; Zamrazilova et al, 2015).

Atkinson and Dhurandhar injected one group of chickens with the virus and kept one control group. The infected chickens did indeed grow fatter than the uninfected ones. However, there was a problem: Atkinson and Dhurandhar could not outright infect humans with Ad-36, so they did the next best thing and tested people’s blood for Ad-36 antibodies. 30 percent of the obese subjects ended up having Ad-36 antibodies, whereas only 11 percent of the lean subjects did (Collen, 2014: 77).

So, clearly, Ad-36 meddles with the body’s energy storage system, though we currently don’t know how much this virus contributes to the epidemic. This throws the CI/CO theory of obesity into disarray, showing that calling obesity a ‘lifestyle disease’ is extremely reductionist and that other factors strongly influence the disease.

On the mechanisms of exactly how Ad-36 influences obesity:

The mechanism in which Ad-36 induces obesity is understood to be due to the viral gene, E4orf1, which infects the nucleus of host cells. E4orf1 turns on lipogenic (fat producing) enzymes and differentiation factors that cause increased triglyceride storage and differentiation of new adipocytes (fat cells) from pre-existing stem cells in fat tissue.

We can see that there is large variation in how much energy is absorbed by looking at one overfeeding study. Bouchard et al (1990) fed 12 pairs of identical twins 1000 kcal a day over their TDEE, 6 days per week, for 100 days. Each man ate about 84,000 kcal more than his body needed to maintain its previous weight. This should have translated to exactly 24 pounds gained by each man in the study, but that did not turn out to be the case. Quoting Collen (2014: 78):

For starters, even the average amount the men gained was far less than maths dictates that it should have been, at 18 lb. But the individual gains betray the real failings of applying a mathematical rule to weight loss. The man who gained the least managed only 9 lb — just over a third of the predicted amount. And the twin who gained the most put on 29 lb — even more than expected. These values aren’t ’24 lb, more or less’, they are so far wide of the mark that using it even as a guide is purposeless.

This shows that, obviously, the composition of the individual microbiome contributes to how much energy is broken down in the food after it is consumed.
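For reference, here is the ‘calories-in’ prediction the quote is criticizing, assuming the textbook figure of roughly 3,500 kcal per pound of fat:

```python
# The naive CI/CO prediction for Bouchard et al's overfeeding protocol.
# Assumption: ~3,500 kcal per pound of fat (the standard textbook figure).

surplus_kcal = 1000 * 6 * 14             # 1,000 kcal/day, 6 days/week, ~14 weeks = 84,000
predicted_lb = surplus_kcal / 3500
print(predicted_lb)                      # 24.0 lb predicted for every man

for observed_lb in (9, 18, 29):          # least gain, average gain, most gain
    print(observed_lb - predicted_lb)    # how far reality landed from the math
```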

One of the most prominent microbes showing a lean/obese difference is Akkermansia muciniphila. The less Akkermansia one has, the more likely one is to be obese. Akkermansia comprises about 4 percent of the whole microbiome in lean people, but it is almost nowhere to be found in obese people. Akkermansia lives on the mucus lining of the gut, which keeps it from crossing over into the blood. Further, people with a low amount of this bacterium are also more likely to have a thinner mucus layer in the gut and more lipopolysaccharides (LPS) in the blood (Schneeberger et al, 2015). This one species of microbiota is able to dial up the activity of genes that produce more mucus for it to live on and that prevent LPS from crossing into the blood. This is one example of the trillions of bacteria in our microbiome being able to upregulate the expression of our genes for their own benefit.

Everard et al (2013) showed that supplementing the diets of a group of mice with Akkermansia lowered their LPS levels, got their fat tissue creating new cells again, and reduced their weight. They concluded that the weight gain in the mice was due to increased LPS production, which forced the existing fat cells to take in more energy and not use it.

There is evidence that obesity spreads in the same way an epidemic does. Christakis and Fowler (2007) followed over 12,000 people from 1971 to 2003. Their main conclusion was that the best predictor of weight gain for an individual was whether their closest loved ones had become obese. One's chance of becoming obese increased by a staggering 171 percent if a close friend became obese over the 32-year period; among twins, if one twin became obese, the co-twin's chance of becoming obese increased by 40 percent, and if one spouse became obese, the other's chance increased by 37 percent. This effect did not hold for neighbors, so something else must be going on (i.e., it's not the quality of the food in the neighborhood). Of course, when obesogenic environments are spoken of, the main culprits are the spread of fast-food restaurants and the like. But in regards to this study, that doesn't seem to explain the shockingly high chance that people have of becoming obese if their closest loved ones did. What does?
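
To see what those relative increases mean in absolute terms, here is a hypothetical illustration in Python. The 10 percent baseline risk is an assumption purely for illustration; only the percentage increases come from the study:

```python
# Hypothetical illustration of Christakis & Fowler's (2007) findings.
# BASELINE_RISK is an assumed figure for illustration only; the relative
# increases are the ones reported in the study.
BASELINE_RISK = 0.10  # assumed chance of becoming obese over the period

relative_increase = {
    "close friend becomes obese": 1.71,  # +171%
    "twin becomes obese": 0.40,          # +40%
    "spouse becomes obese": 0.37,        # +37%
    "neighbor becomes obese": 0.00,      # no effect found
}

for event, increase in relative_increase.items():
    risk = BASELINE_RISK * (1 + increase)
    print(f"{event}: {risk:.1%} chance vs. {BASELINE_RISK:.0%} baseline")
```

Even under a modest assumed baseline, a close friend's obesity nearly triples one's own risk, and that steep gradient by social closeness is the pattern the microbiome-sharing hypothesis below tries to explain.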

There are, of course, the same old explanations, such as sharing food. But looked at from a microbiome point of view, it can be seen that the microbiome can and does contribute to adult obesity, due in part to different viruses' effects on our energy storage system, as described above. I believe that the hypothesis that we share obesity-driving microbes with each other deserves consideration as an alternate or complementary explanation.

As you can see, the closer one is to a person who becomes obese, the higher one's own chance of becoming obese. Close friends (and obviously couples) spend a lot of time around each other, in the same house, eating the same foods, using the same bathrooms, etc. Is it really so 'out there' to suggest that something like this may also contribute to the obesity epidemic? Taking into account the evidence reviewed here, I don't think such a hypothesis should be so easily discarded.

In sum, reducing obesity to CI/CO alone is clearly erroneous, as it leaves out a whole slew of other explanatory theories/factors. Clearly, our microbiome has an effect on how much energy we extract from our food after we consume it. Certain viruses, such as Ad-36 (a human adenovirus), influence the body's energy storage, driving fat cells to store more triglyceride and spurring the differentiation of new fat cells from stem cells in fat tissue. That viruses and our diet can influence our microbiome, along with our microbiome influencing our diet, definitely needs to be studied more.

One good indication of the microbiome's/viruses' role in human obesity is that the closer one is to someone who becomes obese, the more likely one is to become obese oneself. And since the chance increases with closeness to the person who became obese, the explanation that gut microbes shape how we break down food and store energy becomes even more relevant. The trillions of bacteria in our guts may control our appetites (Norris, Molina, and Gewirtz, 2013; Alcock, Maley, and Aktipis, 2014), and do control our social behaviors (Foster, 2013; Galland, 2014).

So, clearly, to understand human behavior we must understand the gut microbiome, how it interacts with the brain and our behavior, and how and why it leads to obesity. Ad-36 is a great start, with quite a bit of research behind it. I await more research into how our microbiome and parasites/viruses control our behavior, because the study of human behavior should now include the microbiome and parasites/viruses, since they have such a huge effect on each other and on us, their hosts, as a whole.
