NotPoliticallyCorrect

Steroid Mythconceptions and Racial Differences in Steroid Use

2000 words

Steroids get a bad reputation. It largely comes from movies, anecdotal experiences, and stories repeated from the media and other forms of entertainment, usually claiming that there is a phenomenon called ‘roid rage’ that makes steroid users violent. Is this true? Are any myths about steroids true, such as a shrunken penis? Are there ways to offset these effects? Steroids and their derivatives are off-topic for this blog, but it needs to be stressed that there are a few myths that get pushed about steroids and what they do to behavior, their supposed effects on aggression, and so forth.

With about 3 million AAS (ab)users (anabolic-androgenic steroids) in America (El Osta et al, 2016), knowing the effects of steroids and similar drugs such as Winny (a cutting agent) would be valuable, since, of course, athletes mostly use them.

Shrunken testicles

This is, perhaps, one of the most popular. Though the actual myth is that AAS use causes the penis to shrink (which is not true), in reality, AAS use causes the testicles to shrink: it causes the Leydig cells to decrease natural testosterone production, which decreases the firmness and shape of the testicles, which then results in a loss of size.

In one study, a questionnaire was given to 772 gay men using 6 gyms between the months of January and February (and you need to think of the bias there: ‘Resolutioners’ would be more likely to go to the gym those months). 15.2 percent of the men had used AAS, with 11.7 percent of them injecting within the past 12 months. HIV-positive men were more likely to have used in the past compared to HIV-negative men (probably due to scripts). Fifty-one percent of users reported testicular atrophy, and users were more likely to report suicidal thoughts (Bolding, Sherr, and Elford, 2002). They conclude:

One in seven gay men surveyed in central London gyms in 2000 said they had used steroids in the previous 12 months. HIV positive men were more likely to have used steroids than other men, some therapeutically. Side effects were reported widely and steroid use was associated with having had suicidal thoughts and feeling depressed, although cause and effect could not be established. Our findings suggest that steroid use among gay men may have serious consequences for both physical and mental health.

Of course, those who (ab)use substances have more psychological problems than those who do not. Another study of 203 bodybuilders found that 8 percent (n = 17) reported testicular atrophy (for what it’s worth, it was an internet survey of drug utilization) (Perry et al, 2005). In another study, 88 percent of individuals who abused the drug complained of side-effects of AAS use, with about 40 percent describing testicular atrophy (Evans, 1997), while testicular atrophy was noted in about 50 percent of cases (sample size n = 24) (Darke et al, 2016).

 

Sperm production

One study of steroid users found that only 17 percent of them had normal sperm levels (Torres-Calleja et al, 2001); this is because exogenous testosterone results in the atrophy of germinal cells, which causes a decrease in spermatogenesis. Continued AAS (ab)use may also lead to infertility later in life. Knuth et al (1989) studied 41 bodybuilders with an average age of 26.7, who reported a huge laundry list of different types of steroids they had taken over their lives. Nineteen of the men were still using steroids at the time of the investigation (group I), whereas 12 of them (group II) had stopped taking steroids 3 months prior, while 10 of them (group III) had stopped steroid use 4 to 24 months prior.

They found that only 5 of them had sperm counts below the normal threshold of 20 million sperm per ml, while 24 of the bodybuilders showed these symptoms. No difference between groups I and II was noticed, and group III (the group that had abstained from use for 4 to 24 months) largely had sperm levels in the normal range. So, the data suggest that even in cases of severely decreased sensitivity to androgens due to AAS (ab)use, spermatogenesis may still continue normally in some men, even when high levels of androgens are administered exogenously, while even after prolonged use it seems possible for sperm levels to return to the normal range (Knuth et al, 1989).

Aggression and crime

Now it’s time for the fun part and my reason for writing this article. Does (ab)using steroids cause someone to go into an uncontrollable rage, a la the Incredible Hulk, when they inject themselves with testosterone? The media has latched onto the minds of many, with films and TV shows showing the insanely aggressive man who has been (ab)using AAS. But how true is this? A few papers have suggested that this phenomenon is real (Conacher and Workman, 1989; Pope and Katz, 1994), but how true is it on its own, since AAS (ab)users are known to use multiple substances?

Conacher and Workman (1989) is a case study of one man with no criminal history who began taking AASs three months before he murdered his wife; they conclude that AAS can be said to be a ‘personality changer’. Piacentino et al (2015) conclude in their review of steroid use and psychopathology in athletes that “AAS use in athletes is associated with mood and anxiety disturbances, as well as reckless behavior, in some predisposed individuals, who are likely to develop various types of psychopathology after long-term exposure to these substances. There is a lack of studies investigating whether the preexistence of psychopathology is likely to induce AAS consumption, but the bulk of available data, combined with animal data, point to the development of specific psycho-pathology, increased aggressiveness, mood destabilization, eating behavior abnormalities, and psychosis after AAS abuse/dependence.” I would add that since most steroid abusers are polysubstance abusers (they use multiple illicit drugs on top of AAS), the steroids per se are not causing crime or aggressive behavior; it’s the other drugs that the steroid (ab)user is also taking. And there is evidence for this assertion.

Lundholm et al (2015) showed just that: that AAS (ab)use was confounded with other substances used while the individual in question was also taking AAS. They write:

We found a strong association between self-reported lifetime AAS use and violent offending in a population-based sample of more than 10,000 men aged 20-47 years. However, the association decreased substantially and lost statistical significance after adjusting for other substance abuse. This supports the notion that AAS use in the general population occurs as a component of polysubstance abuse, but argues against its purported role as a primary risk factor for interpersonal violence. Further, adjusting for potential individual-level confounders initially attenuated the association, but did not contribute to any substantial change after controlling for polysubstance abuse.

Even the National Institutes of Health (NIH) writes: “In summary, the extent to which steroid abuse contributes to violence and behavioral disorders is unknown. As with the health complications of steroid abuse, the prevalence of extreme cases of violence and behavioral disorders seems to be low, but it may be underreported or underrecognized.” We don’t know whether steroids cause aggression or whether more aggressive athletes are more likely to use the substance (Freberg, 2009: 424). Clearly, the claims of steroids causing aggressive behavior and crime are overblown, and there has yet to be a scientific consensus on the matter. A great documentary on the subject is Bigger, Stronger, Faster, which goes through the myths of testosterone while chronicling the use of illicit drugs in bodybuilding and powerlifting.

This was even seen in one study where men were administered supraphysiologic doses of testosterone to see its effects on muscle size and strength, since it had never been tested; no changes in mood or behavior occurred (Bhasin et al, 1996). Furthermore, injecting individuals with supraphysiological doses of testosterone as high as 200 and 600 mg per week does not cause heightened anger or aggression (Tricker et al, 1996; O’Connor et al, 2002). Testosterone is one of the most abused AASs around, and if a heightened level of T doesn’t cause crime, nor does testosterone being higher this week compared to last seem to be a trigger for crime, then we can safely disregard claims of ‘roid rage’, since they coincide with other drug use (polysubstance abuse). And since supraphysiologic doses of testosterone cause neither crime nor aggression, we can say that AAS use, on its own (and even with other drugs), does not cause crime or heightened aggression: aggression elevates testosterone secretion; testosterone doesn’t elevate aggression.

One review also suggests that medical issues associated with AAS (ab)use are exaggerated to deter their use by athletes (Hoffman and Ratamess, 2006). They conclude that “Existing data suggest that in certain circumstances the medical risk associated with anabolic steroid use may have been somewhat exaggerated, possibly to dissuade use in athletes.”

Racial differences in steroid use

Irving et al (2002) found that 2.1 percent of whites had used steroids within the past 12 months to gain muscle, whereas 7.6 percent of blacks had; 6.1 percent of ‘Hispanics’ had used them, a whopping 14.1 percent of Hmong had, 7.9 percent of ‘other Asians’ had, 3.1 percent of ‘Native Americans’ had, and 11.3 percent of mixed-race people had. Middle schoolers were more likely to use than high schoolers, while people from lower SES brackets were more likely to use than people in higher SES brackets.

Stilger and Yesalis (1999: 134) write (emphasis mine):

Of the 873 high school football players participating in the study, 54 (6.3%) reported having used or currently using AAS. Caucasians represented 85% of all subjects in the survey. Nine percent were African-American while the remainder (6%) consisted of Hispanics, Asian, and other. Of the AAS users, 74% were Caucasian, 13% African American, 7% Hispanic, and 3% Asian, χ2 (4, N = 854) = 4.203, p = .38. The study also indicated that minorities are twice as likely to use AAS as opposed to Caucasians. Cross tabulated results indicate that 11.2% of all minorities use/used AAS as opposed to 6.5% of all Caucasians (data not displayed).

One study even had whites and blacks reporting the same rates of steroid abuse in their sample (n = 10,850 ‘Caucasians’ and n = 1,883 black Americans), with blacks also reporting lower levels of other drug abuse (Green et al, 2001). Studies do find higher rates of drug use for white Americans than other ethnies in college (McCabe et al, 2007). Black Americans also frequently underreport and lie about their drug use (Ledgerwood et al, 2008; Lu et al, 2001). Blacks are also more likely to go to the ER after abusing drugs than whites (Drug Abuse Warning Network, 2011). Bauman and Ennett (1994) also found that blacks underreport drug use whereas whites overreport.

So can we really believe the black athletes who state that they do not (ab)use AAS? No, we cannot. Blacks lie about any and all drug use, so believing that they are being truthful about AAS (ab)use in this specific instance is not called for.

Conclusion

Like with all things you use and abuse, there are always side-effects. Though the media furor one hears regarding AAS and testosterone (ab)use is largely blown out of proportion. The risks associated with AAS (ab)use are ‘transient’ and will subside after one discontinues using the drugs. Blacks seem to take more AAS than whites, even if they do lie about any and all drug use. (And other races, too, seem to use it at higher rates than whites.) Steroid use does not seem to be ‘bad’ if one knows what one is doing and is under a doctor’s supervision, but even then, if you want to know the truth about AAS, you need to watch the documentary Bigger, Stronger, Faster. I chalk this up to the media demonizing testosterone itself, along with ‘toxic masculinity’ and the ‘toxic jock effect’ (Miller, 2009; Miller, 2011). Though if you dig into the literature yourself you’ll see there is scant evidence for AAS and testosterone (ab)use causing crime, that doesn’t stop papers like those two by Miller from talking about the effects of ‘toxic jocks’ and, in effect, deriding masculine men and with them the hormone that makes Men men: testosterone. If taken safely, there is nothing wrong with AAS/testosterone use.

(Note: Doctor’s supervision only, etc)

Don’t Fall for Facial ‘Reconstructions’

1400 words

Back in April of last year, I wrote an article on the problems with facial ‘reconstructions’ and why, for instance, Mitochondrial Eve probably didn’t look like that. Recently, ‘reconstructions’ of Nariokotome boy and Neanderthals have appeared. The ‘reconstructors’, of course, have no idea what the soft tissue of said individual looked like, so they must infer and use ‘guesswork’ to depict parts of the phenotype when they do these ‘reconstructions’.

My reason for writing this is the ‘reconstruction’ of Nefertiti. I have seen alt-righters proclaim ‘The Ancient Egyptians were white!’ whereas I saw blacks stating ‘Why are they whitewashing our history!’ Both of these claims are dumb, and they’re also wrong. Then you have articles—purely driven by ideology—that proclaim ‘Facial Reconstruction Reveals Queen Nefertiti Was White!’

This article is garbage. It first claims that King Tut’s DNA came back as similar to that of 70 percent of Western European men. There are a lot of problems with this claim: 1) the company iGENEA inferred his Y chromosome from a TV special; the data were not available for analysis; 2) haplogroup does not equal race. This is very simple.

Now that the White race has decisively reclaimed the Ancient Egyptians

The white race has never ‘claimed’ the Ancient Egyptians; this is just like the Arthur Kemp fantasy that the Ancient Egyptians were Nordic, that any and all civilizations throughout history were started and maintained by whites, and that the falls of these civilizations were due to racial mixing, etc. These fantasies have no basis in reality, and now we will have to deal with people pushing these facial ‘reconstructions’, which are largely just ‘art’ and don’t actually show us what the individual in question used to look like (more on this below).

Stephan (2003) goes through the four primary fallacies of facial reconstruction. Fallacy 1) That we can predict soft tissue from the skull and thereby create recognizable faces. This is highly flawed. Soft tissue fossilization is rare—rare enough to be irrelevant, especially when discussing what ancient humans used to look like. For these purposes, and perhaps this is the most important criticism of ‘reconstructions’, any and all soft tissue features you see on these ‘reconstructions’ are largely guesswork and artistic flair from the ‘reconstructor’. Facial ‘reconstructions’ are mostly art: the ‘reconstructor’ has to make a ton of leaps and assumptions while creating his sculpture because he does not have the relevant information to ensure it is truly accurate, which is a large blow to facial ‘reconstructions’.

And, perhaps most importantly for people who push ‘reconstructions’ of ancient hominins: “The decomposition of the soft tissue parts of paleoanthropological beings makes it impossible for the detail of their actual soft tissue face morphology and variability to be known, as well as the variability of the relationship between the hard and the soft tissue,” and “Hence any facial “reconstructions” of earlier hominids are likely to be misleading [4].”

As an example for the inaccuracy of these ‘reconstructions’, see this image from Wikipedia:

[Image: Gail Mathews, ‘reconstruction’ (left) vs. photograph (right)]

The left is the ‘reconstruction’ while the right is how the woman actually looked. She had distinctive lips which could not be recreated because, again, the soft tissue is missing.

2) That faces are ‘reconstructed’ from skulls: This fallacy directly follows from fallacy 1, that ‘reconstructors’ can accurately predict what the former soft tissue looked like. Faces are not ‘reconstructed’ from skulls; it’s largely guesswork. Stephan states that individuals who see and hear about facial ‘reconstructions’ say things like “wow, you have to be pretty smart/knowledgeable to be able to do such a complex task”, and he suggests that facial ‘approximation’ may be a better term, since it doesn’t imply that the face was ‘reconstructed’ from the skull.

3) That the discipline is ‘credible’ because it is ‘partly science’. Stephan argues that calling it a science is ‘misleading’. He writes (p. 196): “The fact that several of the commonly used subjective guidelines when scientifically evaluated have been found to be inaccurate, … strongly emphasizes the point that traditional facial approximation methods are not scientific, for if they were scientific and their error known previously surely these methods would have been abandoned or improved upon.”

And finally, 4) That we know ‘reconstructions’ work because they have been successful in forensic investigations. This is not a strong claim because other factors could influence the discovery, such as media coverage, chance, or ‘contextual information’. So these forensic cases cannot be pointed to when one attempts to argue for the utility of facial ‘reconstructions’. There also seems to be a lot of publication bias in this literature, with many scientists not publishing data that, for instance, did not show the ‘face’ of the individual in question. It is largely guesswork. “The inconsistency in reports combined with confounding factors influencing casework success suggest that much caution should be employed when gauging facial approximation success based on reported practitioner success and the success of individual forensic cases” (Stephan, 2003: 196).

So, 1) the main point here is that soft tissue prediction is ‘just a guess’, and the prediction methods employed to guess the soft tissue have not been tested; 2) faces are not ‘reconstructed’ from skulls; 3) it’s hardly ‘science’, and more a form of art, due to the guesses and large assumptions poured into the ‘technique’; and 4) ‘reconstructions’ don’t ‘work’ because they help us ‘find’ people, as there is a lot more going on than the freak-chance happenstance of finding a person based on a ‘reconstruction’. Hayes (2015) also writes: “Their actual ability to meaningfully represent either an individual or a museum collection is questionable, as facial reconstructions created for display and published within academic journals show an enduring preference for applying invalidated methods.”

Stephan and Henneberg (2001) write: “It is concluded that it is rare for facial approximations to be sufficiently accurate to allow identification of a target individual above chance. Since 403 incorrect identifications were made out of 592 identification scenarios, facial approximation should be considered to be a highly inaccurate and unreliable forensic technique. These results suggest that facial approximations are not very useful in excluding individuals to whom skeletal remains may not belong.”

Wilkinson (2010) largely agrees, but states that ‘artistic interpretation’ is necessary “particularly for the morphology of the ears and mouth, and with the skin for an ageing adult”, and that “The greatest accuracy is possible when information is available from preserved soft tissue, from a portrait, or from a pathological condition or healed injury.” But she also writes: “… the laboratory studies of the Manchester method suggest that facial reconstruction can reproduce a sufficient likeness to allow recognition by a close friend or family member.”

So to sum up: 1) There are insufficient data on tissue thickness. This becomes guesswork and, of course, is up to artistic ‘interpretation’, and so becomes subjective to whichever artist does the ‘reconstruction’. Cartilage, skin, and fat do not fossilize (only in very rare cases, and I am not aware of any human cases). 2) There is a lack of methodological standardization. There is no single method for ‘guesstimating’ things like tissue thickness and other soft tissue that does not fossilize. 3) They are very subjective! For instance, if the artist has any preconceived idea of what the individual ‘may have’ looked like, his presuppositions may go from his head into his ‘reconstruction’, thus biasing it toward a look he/she believes is true. I think this is the case for Mitochondrial Eve; just because she lived in Africa doesn’t mean that she looked similar to any modern Africans alive today.

I would make the claim that these ‘reconstructions’ are not science, they’re just the artwork of people who have assumptions of what people used to look like (for instance, with Nefertiti) and they take their assumptions and make them part of their artwork, their ‘reconstruction’. So if you are going to view the special that will be on tomorrow night, keep in the back of your mind that the ‘reconstruction’ has tons of unvalidated assumptions thrown into it. So, no, Nefertiti wasn’t ‘white’ and Nefertiti wasn’t ‘white washed’; since these ‘methods’ are highly flawed and highly subjective, we should not state that “This is what Nefertiti used to look like”, because it probably is very, very far from the truth. Do not fall for facial ‘reconstructions’.

IQ, Interoception, and the Heartbeat Counting Task: What Does It Mean?

1400 words

We’re only one month into the new year and I may have already come across the most ridiculous paper I’ll read all year. The paper is titled Knowledge of resting heart rate mediates the relationship between intelligence and the heartbeat counting task. The authors state that ‘intelligence’ is related to the heartbeat counting task (HCT), and that the HCT is employed as a measure of interoception—a ‘sense’ that helps one understand what is going on in one’s body, sensing the body’s internal state and physiological changes (Craig, 2003; Garfinkel et al, 2015).

Though, the use of the HCT as a measure of interoception is controversial (Phillips et al, 1999; Brener and Ring, 2016), mostly because it is influenced by prior knowledge of one’s resting heart rate. The concept of interoception has been around since 1906, with the term first appearing in scientific journals in 1942 (Ceunen, Vlaeyen, and Van Diest, 2016). It’s also interesting to note that interoceptive accuracy is altered in schizophrenics (who had an average IQ of 101.83; Ardizzi et al, 2016).

Murphy et al (2018) undertook two studies: study one demonstrated an association with ‘intelligence’ and HCT performance whereas study 2 demonstrated that this relationship is mediated by one’s knowledge of resting heart rate. I will briefly describe the two studies then I will discuss the flaws (and how stupid the idea is that ‘intelligence’ partly is responsible for this relationship).

In both studies, they measured IQ using the Wechsler intelligence scales, specifically the matrix and vocabulary subtests. In study 1, they had 94 participants (60 female, 33 male, and one ‘non-binary’; gotta always be that guy, eh?). In this study, there was a small but positive correlation between HCT performance and IQ (r = .261).

In study 2, they sought to again replicate the relationship between HCT and IQ, determine how specific the relationship is, and determine whether higher IQ results in more accurate knowledge of one’s heart rate which would then improve their scores. They had 134 participants for this task and to minimize false readings they were asked to forgo caffeine consumption about six hours prior to the test.

As a control task, participants were asked to complete a timing accuracy test (TAT) in which they were asked to count seconds instead of heartbeats. The correlation between HCT performance and IQ was, again, small but positive (r = .211), with IQ also being negatively correlated with the inaccuracy of resting heart rate estimations (r = -.363), while timing accuracy was not associated with the inaccuracy of heart rate estimates, IQ, or HCT. In the end, knowledge of average resting heart rate completely mediated the relationship between IQ and HCT.

This study replicated another by Mash et al (2017), whose “results suggest that cognitive ability moderates the effect of age on IA differently in autism and typical development.” The new paper extends this analysis, showing that the relationship is fully mediated by prior knowledge of average resting heart rate, and this is key.

This is simple: if one has prior knowledge of their average resting heart rate and their fitness did not change from the time they were aware of their average resting heart rate then when they engage in the HCT they will then have a better chance of counting the number of beats in that time frame. This is very simple! There are also other, easier, ways to estimate your heart rate without doing all of that counting.

Heart rate (HR) is a strong predictor of cardiorespiratory fitness. So it would follow that those who have prior knowledge of their HR would be more fitness-savvy (the authors don’t say much about the subjects; if more data appear when the paper is published in a journal, I will revisit this). So Murphy et al (2018) showed that 1) prior knowledge of resting heart rate (RHR) was correlated—however weakly—with IQ, while IQ was negatively correlated with the inaccuracy of RHR estimates. The second study replicated the first and showed that the relationship was specific (HCT correlated with IQ, not any other measure).

The main thing to keep in mind here is that those who had prior knowledge of their RHR scored better on the task; I’d bet that even those with low IQs would score higher on this test if they, too, had prior knowledge of their HR. That’s, really, what this comes down to: if you have prior knowledge of your RHR and your physiological state stays largely the same (body fat, muscle mass, fitness, etc.), then when asked to estimate your heart rate by, say, using the radial pulse method (placing two fingers on the inside of the wrist, just below the thumb), you will, since you have prior knowledge, guess your RHR more accurately, whether your IQ is low or high.

I also question the use of the HCT as a method of measuring interoception, in line with Brener and Ring (2016: 2), who write that “participants with knowledge about heart rate may generate accurate counting scores without detecting any heartbeat sensations.” And even granting that the HCT is a good measure of interoception, it still remains to be seen whether manipulating subjects’ HRs would change the accuracy of the analyses. Other studies have shown that when testing HR after exercise, people underestimate their HR (Brener and Ring, 2016: 2). This, too, is simple. To estimate your max HR, subtract your age from 220. So if you’re 20 years old, your estimated max HR would be 200, and after exercise, if you know your body and how much energy you have expended, you will be able to estimate better with this knowledge.
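The subtract-your-age-from-220 rule of thumb mentioned above is simple enough to sketch in a couple of lines of Python (the function name is mine; this is the common rule of thumb the text cites, not a clinical measurement):

```python
def estimated_max_hr(age_years: int) -> int:
    """Rule-of-thumb maximum heart rate: 220 minus age (in years)."""
    return 220 - age_years

# The example from the text: a 20-year-old's estimated max HR.
print(estimated_max_hr(20))  # → 200
```

Anyone who knows this formula walks into a post-exercise heart-rate estimate with a built-in ceiling to anchor their guess, which is exactly the prior-knowledge problem being described.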

Though, you would need prior knowledge of these effects and of these simple formulas, of course. So, in my opinion, this study only shows that people who have a higher ‘IQ’ (more access to the cultural tools needed to score higher on IQ tests; Richardson, 2002) are also more likely to go to the doctor for checkups and more likely to exercise, and are thus more likely to have prior knowledge of their HR and to score better than those with lower IQs, who have less access to the types of facilities and health assessments that would give them that prior knowledge (higher-IQ people are more likely to be middle class and to have more access to these types of facilities).

I personally don’t think that the HCT is a good measure of interoception, due to the criticisms brought up above. The average HR for a healthy person is between 50-75 BPM, depending on age, sex, and activity (along with other physiological components) (Davidovic et al, 2013). So, for example, if my average HR is 74 (I checked mine last week, in the morning, averaging 3 morning tests: one morning was 73, another was 75, and the third was 74, for an average of 74 BPM), and I had this prior knowledge before undergoing this so-called HCT interoception task, I would be better equipped to score well than someone who does not have the same prior knowledge of his own heart rate.
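The averaging described here can be checked directly; a minimal sketch using the three morning readings given above (the 50-75 BPM band is the typical resting range quoted from Davidovic et al; the function name is mine):

```python
def average_rhr(readings_bpm):
    """Average several resting-heart-rate readings, in BPM."""
    return sum(readings_bpm) / len(readings_bpm)

morning_readings = [73, 75, 74]  # the three morning tests from the text
avg = average_rhr(morning_readings)
print(avg)                 # → 74.0
print(50 <= avg <= 75)     # within the typical resting range → True
```

With a baseline like 74 BPM in hand, a subject can count off an interval of the HCT without ever sensing a single heartbeat, which is the whole objection to the task.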

In conclusion, in line with Brener and Ring (2016), I don’t think that the HCT is a good measure of interoception, and even if it were, the fact that prior knowledge fully mediates this relationship means that, in my opinion, other measures of interoception need to be found and studied. Someone’s prior knowledge of their HR can and would skew things—no matter their ‘IQ’—since they know that, say, their HR is in the average range (50-75 BPM). I find this study kind of ridiculous, and it’s in the running for the most ridiculous thing I have read all year. Prior knowledge of these variables (both RHR and PEHR, post-exercise heart rate) will have you score better. And since IQ is a measure of social class, and IQ tests largely test skills found in a narrow social class, it’s no wonder that HCT and IQ correlate—however weakly—as Murphy et al (2018) found; the reason the relationship exists is obvious, especially if you have some prior knowledge of this field.

Calories are not Calories

1300 words

(Read part I here)

More bollocks from Dr. Thompson:

I say that if you are over-weight and wish to lose weight, then you should eat less. You should keep eating less until you achieve your desired weight, and then stick to that level of calorific intake.

Why only talk about calories and assume that they all do the same things once ingested into the body? See Feinman and Fine (2004) for how and why that is fallacious. This was actually studied. Contestants on the show The Biggest Loser were followed after they lost a considerable amount of weight. They followed the same old mantra: eat less, move more. Because if you decrease what is coming in and expend more energy, then you will lose weight. Thermodynamics, energy in and out, right? That should put one into a negative energy balance, and they should lose weight if they persist with the diet. And they did. However, what happened to the metabolisms of the people who lost all of this weight, and is this effect more noticeable for people who lost more weight in comparison to others?

Fothergill et al (2016) found that persistent metabolic slowdown occurred after weight loss, the average being a 600 kcal slowdown. This is what the conventional dieting advice gets you: a slowed metabolism, with you having to eat fewer kcal than someone who was never obese. This is why the ‘eat less, move more’ advice, the ‘CI/CO’ advice, is horribly flawed and does not work!

He seems to understand that exercise alone does not induce weight loss, but it’s this supposed combo that’s supposed to be effective, a kind of one-two punch: you only need to eat less and move more if you want to lose weight! This is horribly flawed. He then shows a few tables from a paper he authored with another researcher back in 1974 (Bhanji and Thompson, 1974).

Say you take 30 people who weigh the same, have the same amount of body fat, and are the same height. They eat the exact same foods, with the exact same macronutrient composition, at a surplus of the same caloric content, and at the end of, say, 3 months, you will get a different array of weights gained, stalled, or even lost. Wow. Something like this would certainly disprove the CI/CO myth. Aamodt (2016: 138-139) describes a study by Bouchard and Tremblay (1997; warning: twin study), writing:

When identical twins, men in their early 20s, were fed a thousand extra calories per day for about three months, each pair showed similar weight gains. In contrast, the gains varied across twin pairs, ranging from nine to twenty-nine pounds, even though the calorie imbalance was the same for everyone. An individual’s genes also influence weight loss. When another group of identical twins burned a thousand more calories per day through exercise while maintaining a stable food intake in an inpatient facility, their losses ranged from two to eighteen pounds and were even more similar within twin pairs than weight gain.

Take a moment to think about that. Some people’s bodies resist weight loss so well that burning an extra thousand calories a day for three months, without eating more, leads them to lose only two pounds. The “weight loss is just math” crowd we met in the last chapter needs to look at what happens when their math is applied to living people. (We know what usually happens: they accuse the poor dieter of cheating, whether or not it’s true.) If cutting 3,500 calories equals one pound of weight loss, then everyone on the twins’ exercise protocol should have lost twenty-four pounds, but not a single participant lost that much. The average weight loss was only eleven pounds, and the individual variation was huge. Such differences can result from genetic influences on resting metabolism, which varies 10 to 15 percent between people, or from differences in the gut. Because the thousand-calorie energy imbalance was the same in both the gain and loss experiments, this twin research also illustrates that it’s easier to gain weight than to lose it.

That’s weird. If a calorie were truly a calorie then, at least in the way CI/COers word things, everyone should have had the same or similar weight loss, not an average weight loss less than half of what would be expected from the kcal deficit. That is a shot against the CI/CO theory. Yet more evidence against it comes from the Vermont Prison Experiment (see Salans et al, 1971). In this experiment, the men were given up to 10,000 kcal per day and, like in the study described previously, they all gained differing amounts of weight. Wow, almost as if individuals are different and the simplistic caloric math of the CI/COers doesn’t hold up against real-life situations.
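As a quick sanity check on the conventional “3,500 kcal per pound” arithmetic being criticized here, a minimal sketch (the 84-day duration is my assumption for “about three months”; the observed figures are those Aamodt reports for the twin exercise study):

```python
# Naive "calories in / calories out" prediction vs. the observed twin results.
# The 3,500 kcal/lb rule is the conventional dieting heuristic under criticism.
KCAL_PER_LB = 3500

def naive_cico_loss_lb(daily_deficit_kcal: float, days: int) -> float:
    """Predicted weight loss (lb) under the static 3,500-kcal-per-pound rule."""
    return daily_deficit_kcal * days / KCAL_PER_LB

predicted = naive_cico_loss_lb(1000, 84)  # ~12 weeks at a 1,000 kcal/day deficit
print(predicted)  # 24.0 lb -- what every twin "should" have lost

observed_avg, observed_range = 11, (2, 18)  # as reported by Aamodt
print(predicted - observed_avg)  # the rule overshoots the average by 13 lb
```

The rule predicts the same 24-pound loss for everyone; nobody in the study actually lost that much, and the spread (2 to 18 pounds) is exactly what a static calorie-arithmetic model cannot accommodate.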

The First Law of Thermodynamics always holds; it’s just irrelevant to human physiology. (Watch Gary Taubes take down this mythconception too; not a typo.) Think about an individual who decreases total caloric intake from 1500 kcal per day to 1200 kcal per day over a certain period of time. The body is then forced to drop its metabolism to match the caloric intake; the metabolic system of the human body knows to slow down when it senses it’s getting less intake, and for this reason the First Law is not violated here, just irrelevant. The same thing occurred to the Biggest Loser contestants, because they followed the CI/CO paradigm of ‘eat less and move more’.
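The metabolic-adaptation point can be illustrated with a toy model. This is only a sketch, not a physiological simulation: the adaptation rate is an assumption chosen for illustration, contrasting a static deficit with one that shrinks as metabolism slows.

```python
# Toy contrast: a static CI/CO model assumes daily expenditure never changes,
# while an adaptive model lets expenditure drift down toward the new intake.
KCAL_PER_LB = 3500

def simulate(intake, start_tdee, days, adapt_per_day=0.0):
    """Return total lb lost over `days`. `adapt_per_day` is the fraction of
    the remaining intake/expenditure gap that metabolism closes each day
    (an illustrative assumption, not a measured physiological constant)."""
    tdee, lost = start_tdee, 0.0
    for _ in range(days):
        lost += max(tdee - intake, 0) / KCAL_PER_LB
        tdee -= adapt_per_day * (tdee - intake)  # metabolism slows toward intake
    return lost

static = simulate(1200, 1500, 180)                        # steady 300 kcal deficit
adaptive = simulate(1200, 1500, 180, adapt_per_day=0.02)  # deficit shrinks daily
print(round(static, 1), round(adaptive, 1))  # adaptive dieter loses far less
```

Under the static model the 300 kcal deficit compounds indefinitely; once expenditure is allowed to converge on intake, the deficit (and the weight loss) largely evaporates, which is the pattern the Biggest Loser follow-up observed.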

Processed food is not bad in itself, but it is hard to monitor what is in it, and it is probably best avoided if you wish to lose weight, that is, it should not be a large part of your habitual intake.

If you’re trying to lose weight you should most definitely avoid processed foods and carbohydrates.

In general, all foods are good for you, in moderation. There are circumstances when you may have to eat what is available, even if it is not the best basis for a permanent sustained diet.

I only contest the ‘all foods are good for you’ part. Moderation, yes. But in the hedonistic world we live in today, with a constant bombardment of advertisements, there is no such thing as ‘moderation’. Finally, again, willpower is irrelevant to obesity.

I’d like to know the individual weight gains in Thompson’s study. I bet they’d follow both what occurred in the study described by Aamodt and the study by Sims et al. The point is, human physiological systems are too complicated to reduce weight loss to only the number of calories you eat, without thinking of what and how you eat. What is lost in all of this is WHEN is a good time to eat. People continuously speak about what to eat, where to eat, how to eat, and who to eat with, but no one ever seriously discusses WHEN to eat. What I mean by this is that people are constantly stuffing their faces all day, constantly spiking their insulin, which then causes obesity.

The fatal blow for the CI/CO theory is that people do not gain or lose weight at the same rate (I’d add matched for height, overall weight, muscle mass and body fat, too) as seen above in the papers cited. Why people still think that the human body and its physiology is so simple is beyond me.

Hedonism, along with an overconsumption of calories (largely from processed carbohydrates), is why we’re so fat right now in the first world, and the only way to reverse the trend is to tell the truth about human weight loss and how and why we get fat. CI/CO clearly does not work and is based on false premises, no matter how much people attempt to save it. It’s highly flawed and assumes that the human body is so ‘simple’ as to not ‘care’ about the quality of the macro nor where it came from.

I Am Not A Phrenologist

1500 words

People seem to be confused about the definition of the term ‘phrenology’. Many people think that merely measuring skulls can be called ‘phrenology’. This is a very confused view to hold.

Phrenology is the study of the shape and size of the skull, drawing conclusions about one’s character and psychology from measures ranging from bumps on the skull (Simpson, 2005) to differently sized areas of the brain compared to others. Franz Gall, the father of phrenology, believed that by measuring one’s skull, its bumps, etc., he could make accurate predictions about a person’s character and mental psychology. Gall had also proposed a theory of mind and brain (Eling, Finger, and Whitaker, 2017). The usefulness of phrenology aside, its creator Gall, being a neuroanatomist and physiologist, contributed significantly to our understanding of the brain.

Gall’s views on the brain can be seen here (read this letter where he espouses his views here):

1. The brain is the organ of the mind.
2. The mind is composed of multiple, distinct, innate faculties.
3. Because they are distinct, each faculty must have a separate seat or “organ” in the brain.
4. The size of an organ, other things being equal, is a measure of its power.
5. The shape of the brain is determined by the development of the various organs.
6. As the skull takes its shape from the brain, the surface of the skull can be read as an accurate index of psychological aptitudes and tendencies.

Gall’s work, though, was instrumental to our understanding of the brain, and he was a pioneer in studying its inner workings. Phrenologists ‘phrenologized’ by running the tips of their fingers or their hands along the top of one’s head (Gall liked using his palms). Here is an account of one individual reminiscing on this (around 1870):

The fellow proceeded to measure my head from the forehead to the back, and from one ear to the other, and then he pressed his hands upon the protuberances carefully and called them by name. He felt my pulse, looked carefully at my complexion and defined it, and then retired to make his calculations in order to reveal my destiny. I awaited his return with some anxiety, for I really attached some importance to what his statement would be; for I had been told that he had great success in that sort of work and that his conclusion would be valuable to me. Directly he returned with a piece of paper in his hand, and his statement was short. It was to the effect that my head was of the tenth magnitude with philoprogenitiveness morbidly developed; that the essential faculties of mentality were singularly deficient; that my contour antagonized all the established rules of phrenology, and that upon the whole I was better adapted to the quietude of rural life rather than to the habit of letters. Then the boys clapped their hands and laughed lustily, but there was nothing of laughter in it for me. In fact, I took seriously what Rutherford had said and thought the fellow meant it all. He showed me a phrenological bust, with the faculties all located and labeled, representing a perfect human head, and mine did not look like that one. I had never dreamed that the size or shape of the head had anything to do with a boy’s endowments or his ability to accomplish results, to say nothing of his quality and texture of brain matter. I went to my shack rather dejected. I took a small hand-mirror and looked carefully at my head, ran my hands over it and realized that it did not resemble, in any sense, the bust that I had observed. The more I thought of the affair the worse I felt. If my head was defective there was no remedy, and what could I do?
The next day I quietly went to the library and carefully looked at the heads of pictures of Webster, Clay, Calhoun, Napoleon, Alexander Stephens and various other great men. Their pictures were all there in histories.

This—what I would call skull/brain-size fetishizing—is still evident today, with people thinking that raw size matters for cognitive ability (Rushton and Ankney, 2007; Rushton and Ankney, 2009), though I have compiled numerous data showing that people can have smaller brains and IQs in the normal range, implying that large brains are not needed for high IQs (Skoyles, 1999). This is also one of Deacon’s (1990) fallacies, the “bigger-is-smarter” fallacy. Just because you observe skull sizes, brain size differences, structural brain differences, etc., does not mean you’re a phrenologist. You’re making easily verifiable claims, not the kind of outrageous claims made by phrenologists.

What did they get right? Well, phrenologists stated that the most-used parts of the brain would become bigger, which, of course, was vindicated by modern research—specifically in London cab drivers (Maguire, Frackowiak, and Frith, 1997; Woollett and Maguire, 2011).

It seems that phrenologists got a few things right but their theories were largely wrong. Though those who bash the ‘science’ of phrenology should realize that phrenology was one of the first brain ‘sciences’ and so I believe phrenology should at least get some respect since it furthered our understanding of the brain and some phrenologists were kind of right.

People see the avatar I use, which is three skulls (one Mongoloid, one Negroid, one Caucasoid), and automatically make the leap that I’m a phrenologist based on that picture alone. To these people, even stating that races/individuals/ethnies have different skull and brain sizes is ‘phrenology’. No, it isn’t. Words have definitions. Observing size differences between the brains of, say, individuals or ethnies does not mean you’re making any value judgment on the character or mental aptitude of an individual based on the size of their skull/brain. The same goes for noting structural differences between brains, like saying “the PFC is larger in this brain but the OFC is larger in that one”; no one is drawing conclusions about character from that, and if that’s what you take from the statement that individuals and groups have different-sized skulls, brains, and brain regions, then I don’t know what to tell you. Stating that one brain weighs more than another, say one is 1200 g and another is 1400 g, is not phrenology. Stating that one brain is 1450 cc while another is 1000 cc is not phrenology. For it to be phrenology I would have to outright state that differences in the size of certain areas of the brain, or brains as a whole, cause differences in character or mental faculties. I am not saying that.

A team of neuroscientists recently (as in last month, January 2018) tested, in the “most exhaustive way possible“, the claim from phrenological ‘research’ “that measuring the contour of the head provides a reliable method for inferring mental capacities” and concluded that there was “no evidence for this claim” (Jones, Alfaro-Almagro, and Jbabdi, 2018). That settles it. The ‘science’ is dead.

It’s so simple: you notice physical differences in brain size between two corpses. Say one’s PFC is bigger than its OFC while the other’s OFC is bigger than its PFC. That’s it. I guess, using this logic, neuroanatomists would be considered phrenologists today, since they note size differences between individual parts of brains. Just noting these differences makes no judgment about the potential of individuals whose brains differ in overall size, the size of particular regions, bumps, etc.

It is ridiculous to accuse someone of being a ‘phrenologist’ in 2018. And while the study of skull/brain sizes back in the 19th century did pave the way for modern neuroscience, and while phrenologists did get a few things right, they were largely wrong. No, you cannot read one’s character by feeling the bumps on their skull. I understand the logic and, back then, it would have made a lot of sense. But someone who notices physical differences that are empirically verifiable is not, for that reason alone, a phrenologist or pushing phrenology.

In sum, studying physical differences is interesting and tells us a lot about our past and maybe even our future. Stating that one is a phrenologist because they observe and accept physical differences in the size of the brain, skull, and neuroanatomic regions is like saying that physical anthropologists and forensic scientists are phrenologists because they measure people’s skulls to ascertain things that may bear on their medical history. Chastising someone who tells you that one person has a different brain size than another, by calling them outdated names in an attempt to discredit them, doesn’t make sense. It seems that some people cannot accept physical differences that are measurable again and again, because they may go against some long-held belief.

Responding to Jared Taylor on the Raven Progressive Matrices Test

2950 words

I was on Warski Live the other night and had an extremely short back-and-forth with Jared Taylor. I’m happy I got the chance to briefly discuss things with him, but I got kicked out about 20 minutes in. Taylor made all of the same old claims, and since everyone kept talking I couldn’t really get a word in.

A Conversation with Jared Taylor

I first stated that Jared got me into race realism and that I respected him. He said that once you see the reality of race then history etc becomes clearer.

To cut through everything, I first stated that I don’t believe there is any utility to IQ tests, and that a lot of people believe people have surfeits of ‘good genes’ and ‘bad genes’ that give ‘positive’ and ‘negative’ charges; IQ tests are useless and people ‘fetishize’ them. He then responded that IQ is one of, if not the, most studied traits in psychology, to which JF asked me if I contested that statement, and I responded ‘no’ (behavioral geneticists need work too, ya know!). He then talked about how IQ ‘predicts’ success in life, e.g., success in college.

Then, a bit after I stated that, they seemed to paint me as a leftist because of my views on IQ. Well, I’m far right (not that my politics matter to my views on scientific matters), and they made it seem like I meant that Jared fetishized IQ, when I said ‘most people’.

Then Jared gave a quick rundown of the same old and tired talking points about how IQ is related to crime, success, etc. I then asked him if there was a definition of intelligence and whether or not there was a consensus in the psychological community on the matter.

I quoted this excerpt from Ken Richardson’s 2002 paper What IQ Tests Test where he writes:

Of the 25 attributes of intelligence mentioned, only 3 were mentioned by 25 per cent or more of respondents (half of the respondents mentioned ‘higher level components’; 25 per cent mentioned ‘executive processes’; and 29 per cent mentioned ‘that which is valued by culture’). Over a third of the attributes were mentioned by less than 10 per cent of respondents (only 8 per cent of the 1986 respondents mentioned ‘ability to learn’).

Jared then stated:

“Well, there certainly are differing ideas as to what are the differing components of intelligence. The word “intelligence” on the other hand exists in every known language. It describes something that human beings intuitively understand. I think if you were to try to describe sex appeal—what is it that makes a woman appealing sexually—not everyone would agree. But most men would agree that there is such a thing as sex appeal. And likewise in the case of intelligence, to me intelligence is an ability to look at the facts in a situation and draw the right conclusions. That to me is one of the key concepts of intelligence. It’s not necessarily “the capacity to learn”—people can memorize without being particularly intelligent. It’s not necessarily creativity. There could be creative people who are not necessarily high in IQ.

I would certainly agree that there is no universally accepted definition for intelligence, and yet, we all instinctively understand that some people are better able to see to the essence of a problem, to find correct solutions to problems. We all understand this and we all experience this in our daily lives. When we were in class in school, there were children who were smarter than other children. None of this is particularly difficult to understand at an intuitive level, and I believe that by somehow saying because it’s impossible to come up with a definition that everyone will accept, there is no such thing as intelligence, that’s like saying “Because there may be no agreement on the number of races, that there is no such thing as race.” This is an attempt to completely sidetrack a question—that I believe—comes from dishonest motives.”

(“… comes from dishonest motives”, appeal to motive. One can make the claim about anyone, for any reason. No matter the reason, it’s fallacious. On ‘ability to learn’ see below.)

Now here is the fun part: I asked him “How do IQ tests test intelligence?” He then began talking about the Raven (as expected):

“There are now culture-free tests, the best-known of which is Raven’s Progressive Matrices, and this involves recognizing patterns and trying to figure out what is the next step in a pattern. This is a test that doesn’t require any language at all. You can show an initial simple example, the first square you have one dot, the next square you have two dots, what would be in the third square? You’d have a choice between 3 dots, 5 dots, 20 dots, well the next step is going to be 3 dots. You can explain what the initial patterns are to someone who doesn’t even speak English, and then ask them to go ahead and complete the succeeding problems that are more difficult. No language involved at all, and this is something that correlates very, very tightly with more traditional, verbally based, IQ tests. Again, this is an attempt to measure capacity that we all inherently recognize as existing, even though we may not be able to define it to everyone’s mutual satisfaction, but one that is definitely there.

Ultimately, we will be able to measure intelligence through direct assessment of the brain; it will be possible to do through genetic analysis. We are beginning to discover the gene patterns associated with high intelligence. Already there have been patent applications for IQ tests based on genetic analysis. We really aren’t at the point where, by spitting in a cup and analyzing the DNA, you can tell that this guy has a 140 IQ, this guy’s 105 IQ. But we will eventually get there. At the same time there are aspects of the brain that can be analyzed: the rapidity with which signals are transmitted from one part of the brain to the other, the density of grey matter, the efficiency with which white matter communicates between the different grey matter areas of the brain.

I’m quite confident that there will come a time where you can just strap on a set of electrodes and have someone think about something—or even not think about anything at all—and we will be able to assess the power of the brain directly through physical assessment. People are welcome to imagine that this is impossible, or be skeptical about that, but I think we’re definitely moving in that direction. And when the day comes—when we really have discovered a large number of the genetic patterns that are associated with high intelligence, and there will be many of them because the brain is the most complicated organ in the human body, and a very substantial part of the human genome goes into constructing the brain. When we have gotten to the bottom of this mystery, I would bet the next dozen mortgage payments that those patterns—alleles as they’re called, genetic patterns—that are associated with high intelligence will not be found to be equally distributed between people of all races.”

Then immediately after that, the conversation changed. I will respond in points:

1) First off, as I’m sure most long-time readers know, I’m not a leftist and the fact that (in my opinion) I was implied to be a leftist since I contest the utility of IQ is kind of insulting. I’m not a leftist, nor have I ever been a leftist.

2) On his points on definitions of ‘intelligence’: The point is to come to a complete scientific consensus on how to define the word and the right way to study it, and then think about the implications of the trait in question after you empirically verify its reality. That’s one reason to bring up the lack of consensus in the psychological community: ask 50 psychologists what intelligence is and you’ll get numerous different answers.

3) IQ and success/college: Funny that gets brought up. IQ tests are constructed to ‘predict’ success since they’re already similar to achievement tests in school (read arguments here, here, and here). Even then, you would expect college grades to be highly correlated with job performance 6 years after graduation from college, right? Wrong. Armstrong (2011: 4) writes: “Grades at universities have a low relationship to long-term job performance (r = .05 for 6 or more years after graduation) despite the fact that cognitive skills are highly related to job performance (Roth, et al. 1996). In addition, they found that this relationship between grades and job performance has been lower for the more recent studies.” Though even the claim that “cognitive skills are highly related to job performance” lies on shaky ground (Richardson and Norgate, 2015).

4) My criticisms of IQ do not mean that I deny that ‘intelligence exists’ (which is a common strawman); my criticisms concern construction and validity, not the whole “intelligence doesn’t exist” canard. I, of course, don’t discard the hypothesis that individuals and populations can differ in ‘intelligence’/‘intelligence genes’; the critiques provided are against the “IQ-tests-predict-X-in-life” and ‘IQ-tests-test-intelligence’ claims. IQ tests test cultural distance from the middle class. Most IQ tests have general knowledge questions on them, which contribute a considerable amount to the final score. Therefore, since IQ tests test learned knowledge present in some cultures and not in others (which is even true for ‘culture-fair’ tests, see point 5), learning is intimately linked with Jared’s definition of ‘intelligence’. So I would necessarily state that they do test learned knowledge, knowledge more present in some classes than in others, thus making IQ tests proxies for social class, not ‘intelligence’ (Richardson, 2002; 2017b).

5) Now for my favorite part: the Raven. The test that everyone (or most people) believes is culture-free and culture-fair, since there is nothing verbal, thus supposedly bypassing any cultural bias due to differences in general knowledge. However, this assumption is extremely simplistic and hugely flawed.

For one, the Raven is perhaps the test that most reflects knowledge structures present in some cultures more than others, even more so than verbal tests (Richardson, 2002). One may look at the items on the Raven and proclaim ‘Wow, anyone who gets these right must be intelligent’, but the most ‘complicated’ Raven’s items are no more complicated than everyday life (Carpenter, Just, and Shell, 1990; Richardson, 2002; Richardson and Norgate, 2014). Furthermore, there is no cognitive theory by which items are selected for analysis and subsequent entry onto a particular Raven’s test. Concerning John Raven’s personal notes, Carpenter, Just, and Shell (1990: 408) show that John Raven—the creator of the Raven’s Progressive Matrices test—used his “intuition and clinical experience” to rank order items “without regard to any underlying processing theory.”

Now to address the claim that the Raven is ‘culture-free’: take one genetically similar population in which one group are foraging hunter-gatherers while the other lives in villages with schools. The foragers, tested at age 11, score 31 percent, while the ones living in more modern areas with amenities get 72 percent right (‘average’ individuals get 78 percent right while ‘intellectually defective’ individuals get 47 percent right; Heine, 2017: 188). The people I am talking about are the Tsimane, a foraging hunter-gatherer population in Bolivia. Davis (2014) studied the Tsimane and administered the Raven to the two groups described above. Now, if the test truly were ‘culture-free’ as is claimed, then they should score similarly, right?

Wrong. She found that reading was the best predictor of performance on the Raven. Children who attend school (presumably) learn how to read (with obviously a better chance of learning to read if you don’t live in a hunter-gatherer environment). So the Tsimane who lived a more modern lifestyle scored more than twice as high on the Raven compared to those who lived a hunter-gatherer lifestyle. We thus have two genetically similar populations, one exposed to more schooling than the other, and schooling was the variable most strongly related to Raven performance. Therefore, this study is definitive proof that the Raven is not culture-fair, since “by its very nature, IQ testing is culture bound” (Cole, 1999: 646, quoted by Richardson, 2002: 293).

6) I doubt that we will be able to genotype people and get their ‘IQ’ results. Heine (2017) states that you would need all of the SNPs on a gene chip, numbering more than 500,000, to predict half of the variation between individuals in IQ (Davies et al, 2011; Chabris et al, 2012). Furthermore, by that logic, most genes would be ‘height genes’ too (Goldstein, 2009). This leads Heine (2017: 175) to conclude that “… it seems highly doubtful, contra Robert Plomin, that we’ll ever be able to estimate someone’s intelligence with much precision merely by looking at his or her genome.”

I’ve also critiqued GWAS/IQ studies by making an analogous argument about testosterone, the GWAS studies for testosterone, and how testosterone is produced in the body (it’s indirectly controlled by DNA, while what powers the cell is ATP, adenosine triphosphate; Khakh and Burnstock, 2009).

7) Regarding claims on grey and white matter: he’s citing Haier et al’s work on neural efficiency and on white and grey matter correlates of IQ, up to how different networks of the brain “talk” to each other, as in the P-FIT hypothesis of Jung and Haier (2007; numerous critiques/praises). Though I won’t go in depth on this point here, I will only say that correlations from images, and correlations of correlations, etc., aren’t good enough (the neural network they discuss may also be related to other, noncognitive, factors). Lastly, MRI readings are known to be confounded by noise, visual artifacts, and inadequate sampling; even getting emotional in the machine may cause noise in the readings (Okon-Singer et al, 2015), and since movements like speech and even eye movements affect readings, one must use caution when describing normal variation (Richardson, 2017a).

8) There are no genes for intelligence (I’d also say “what is a gene?“) in the fluid genome (Ho, 2013), so due to this, I think that ‘identifying’ ‘genes for’ IQ will be a bit hard… Also touching on this point, Jared is correct that many genes—most, as a matter of fact—are expressed in the brain. Eighty-four percent, to be exact (Negi and Guda, 2017), so I think there will be a bit of a problem there… Further complicating these types of matters is the matter of social class. Genetic population structures have also emerged due to social class formation/migration. This would, predictably, cause genetic differences between classes, but these genetic differences are irrelevant to education and cognitive ability (Richardson, 2017b). This, then, would account for the extremely small GWAS correlations observed.

9) For the last point, I want to touch briefly on the concept of heritability (because I have a larger theme planned for the concept). Heritability ‘estimates’ have both group and individual flaws, environmental flaws, and genetic flaws (Moore and Shenk, 2017), which arise due to the use of the highly flawed CTM (classical twin method) (Joseph, 2002; Richardson and Norgate, 2005; Charney, 2013; Fosse, Joseph, and Richardson, 2015). The flawed CTM inflates heritabilities since environments are not equalized, as they are in animal breeding research; this is why the sky-high heritabilities we get for IQ and other human traits are substantially higher than the heritabilities observed in controlled breeding experiments, a figure that “surpasses almost anything found in the animal kingdom” (Schonemann, 1997: 104).

Lastly, there are numerous hereditarian scientific fallacies, which include: 1) trait heritability does not predict what would occur when environments/genes change; 2) the estimates are inaccurate since they don’t account for gene-environment covariation or interaction while also ignoring nonadditive effects on behavior and cognitive ability; 3) molecular genetics does not show evidence that we can partition environmental from genetic factors; 4) heritability wouldn’t tell us which traits are ‘genetic’ or not; and 5) proposed evolutionary models of human divergence are not supported by these studies (since heritability in the present doesn’t speak to what traits were like thousands of years ago) (Bailey, 1997). We, then, have a problem. Heritability estimates are useful for botanists and farmers because they can control the environment (Schonemann, 1997; Moore and Shenk, 2017). Regarding twin studies, the environment cannot be fully controlled, so they should be taken with a grain of salt. It is for these reasons that some researchers have called for an end to the use of the term ‘heritability’ in science (Guo, 2000). For all of these reasons (and more), heritability estimates are useless for humans (Bailey, 1997; Moore and Shenk, 2017).

Still other authors state that the use of heritability estimates “attempts to impose a simplistic and reified dichotomy (nature/nurture) on non-dichotomous processes” (Rose, 2006), while Lewontin (2006) argues that heritability is a “useless quantity” and that to better understand biology, evolution, and development we should analyze causes, not variances. (I too believe that heritability estimates are useless, especially given the huge problems with twin studies and the fact that the correct protocols cannot be carried out due to ethical concerns.) Either way, heritability tells us nothing about which genes cause the trait in question, nor which pathways cause trait variation (Richardson, 2012).
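To make concrete why unequal twin environments inflate these estimates, here is a minimal sketch of Falconer’s formula, the arithmetic at the heart of the classical twin method criticized above. The twin correlations are hypothetical values chosen for illustration, not data from any study cited here:

```python
# Falconer's formula: h^2 = 2 * (r_MZ - r_DZ)
# Hypothetical twin correlations, for illustration only.

def falconer_h2(r_mz: float, r_dz: float) -> float:
    """Heritability estimate from MZ and DZ twin correlations."""
    return 2 * (r_mz - r_dz)

# If MZ twins also share more-similar environments than DZ twins,
# part of the MZ/DZ gap is environmental, yet the formula credits
# all of it to genes, inflating h^2.
print(round(falconer_h2(0.85, 0.60), 2))  # 0.5 under the equal-environment assumption
print(round(falconer_h2(0.75, 0.60), 2))  # 0.3 if 0.10 of the MZ correlation were environmental
```

The point of the sketch: the formula mechanically converts any extra MZ similarity into ‘heritability’, whether its source is genetic or environmental.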

In sum, I was glad to appear and discuss (however briefly) with Jared. I listened to the episode a few times and realize (and have known before) that I'm a pretty bad public speaker. Either way, I'm glad to get a few points and some smaller parts of the overarching arguments out there, and I hope I have a chance to return to that show in the future (preferably to debate JF on IQ). I will, of course, be better prepared for that. (When I saw that Jared would appear, I decided to go on to discuss.) Jared is clearly wrong that the Raven is ‘culture-free’, and most of his retorts were pretty basic.

(Note: I will expand on all 9 of these points in separate articles.)

Should We End Sex Segregation in Sports? Should Athletes Be Assessed by Anatomy And Physiology?

1400 words

Roslyn Kerr, a sociologist and senior lecturer in sociology of sport at Lincoln University, wrote an opinion piece on January 18th for The Conversation titled Why it might be time to eradicate sex segregation in sports, in which she argues against sex segregation in sports. She publishes on sports history, leisure studies, and sports management and is a former gymnast, so she should have good knowledge (perhaps better than the general public's) of anatomy and physiology and how they interact during elite sporting performance. But is there anything to the argument she provides in her article? Maybe.

The paper is pretty good, though it, of course, uses sociological terms and cites feminist theorists on gender binaries in sports and how they're not ‘fair’. One thing continuously brought up in the paper is that there is no reliable way to discern sex for sporting competition (Simpson et al, 1993; Dickinson et al, 2002; Heggie, 2010), with even chromosome-based testing having been thrown out (Elsas et al, 2000), which can be seen in the Olympics “still struggling to define gender“. The authors state that women are put through humiliating tests to discern their sex.

They use this to buttress their own argument, which is based on what sporting bodies did for disabled athletes: whether or not one competed in a particular sport was based not on the disability per se, but on the functionality of the athlete's body. As an example, sporting bodies used to group people with, say, a similar spinal injury even though they had different physical abilities. Call me crazy, but I definitely see the logic these authors are getting at, and not only because I ruminated on something similar back in the summer in an article on transgendered athletes in sports, writing:

This then brings up some interesting implications. Should we segregate competitions by race, since the races have strengths and weaknesses due to biology and anatomy, such as somatotype? It's an interesting question to consider, but I think we can all agree on one thing: Women should compete with women, and men should compete with men. Thus, transgenders should compete with transgenders.

Of course I posed the question regarding different races since they have different strengths and weaknesses on average due to evolution in different environments. Kerr and Obel (2017) conclude (pg 13):

Numerous authors have noted that the current two-sex classification system is problematic. They argued that it does not include all bodies, such as intersex bodies, and more importantly, does not work to produce fair competition. Instead, some argued that other traits that we know influence sporting success should be used to classify bodies. In this article, we extended this idea through using the ANT concepts of assemblage and black box. Specifically, we interpreted the current understanding of the body that sex segregation is based on as a black box that assumes the constant superiority of the male body over the female. But we argued that with the body understood as an assemblage, this classification could be reassembled so that this black box is no longer given. Instead we argued that by identifying the multiple traits that make up the assemblage of sporting success, sex classification becomes irrelevant and that it is these traits that we should use to classify athletes rather than sex. Drawing on the example of disability sport we noted that the black box of a medical label was undone and replaced with an emphasis on functionality with different effects for each sport. This change had the effect of undoing rigid medical disability label and enabling athletes’ bodies to be viewed as assemblages consisting of various functional and potentially changing physical abilities. We used this discussion to propose a model of classification that eliminated the need for sex segregation and instead used physical measures such as LBM and VO2 capabilities to determine an athlete’s competitive class.

All of their other arguments aside (their use of ‘feminist theory’, gendered divisions, short discussions and quotes from other authors on the ‘power structure’ of males), I definitely see the logic here and, in my opinion, it makes sense. Those shortcomings aside, the actual argument is this: use anatomy and physiology, identify which traits work in concert to produce elite athletic performance in a given sport, and then classify athletes with some battery of tests, say, the Heath-Carter method for somatotype (Wilmore, 1970), a test of VO2 max (Cureton et al, 1986), or even lean body mass (LBM).
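To show the mechanics of classifying athletes by measured traits rather than sex, here is a toy sketch. The `Athlete` fields and the bin cutoffs are hypothetical, chosen only for illustration; any real scheme would need sport-specific, empirically derived criteria of the kind Kerr and Obel call for:

```python
# Toy classifier: group athletes by physical measures (LBM, VO2 max),
# ignoring sex entirely. Cutoffs are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Athlete:
    name: str
    lbm_kg: float   # lean body mass, kg
    vo2max: float   # ml/kg/min

def competition_class(a: Athlete) -> str:
    """Assign a competitive class from physical measures, not sex."""
    lbm_band = "high-LBM" if a.lbm_kg >= 60 else "low-LBM"
    vo2_band = "high-VO2" if a.vo2max >= 55 else "low-VO2"
    return f"{lbm_band}/{vo2_band}"

field = [
    Athlete("A", lbm_kg=72, vo2max=63),
    Athlete("B", lbm_kg=55, vo2max=58),
    Athlete("C", lbm_kg=48, vo2max=47),
]
for a in field:
    print(a.name, competition_class(a))  # e.g. "A high-LBM/high-VO2"
```

Note the design point: once classes are defined by measured functional traits, an athlete's sex never enters the function, which is exactly the reassembly the paper argues for.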

Healy et al (2014) studied 693 elite athletes in a post-competition setting. They assessed testosterone among other variables, such as aerobic performance. They observed a difference in LBM between men and women which, they argue, exclusively accounts for the “observed differences in strength and aerobic performance seen between the sexes”. They conclude:

We have shown that despite differences in mean testosterone level between genders, there is complete overlap of the range of concentrations seen. This shows that the recent decision of the IOC and IAAF to limit participation in elite events to women with ‘normal’ serum testosterone is unsustainable.

Yes, the testosterone-influences-sports-performance debate is still ongoing. I covered it a bit last year, and while I believe there is a link between testosterone and athletic ability and have provided some data and a few anecdotes from David Epstein, I do admit that the literature lacks conclusive evidence that testosterone positively influences sport performance. Either way, if testosterone truly does confer an advantage, then of course the model (which Kerr and Obel admit is simple at the moment) will need to be slightly revised. Arguments and citations can be found in this article written back in the summer on whether or not transgender MtFs should compete with women. This is also directly related to the MtF who dominated women a few months back.

Either way, the argument is that once we better identify the anatomic and physiologic causes of differences in sporting performance, these could, in theory, be used instead of sex segregation. I think it's a good idea personally, and to see how effective it could be there should be a trial run. Kerr and Obel state that it would make competition more ‘fair’. However, Sanchez et al (2014) cite Murray (2010), who writes that “fair sports competition does not require that athletes be equal in every imaginable respect.”

At the end of the day, a lot of this rests on whether or not testosterone confers an athletic advantage at the elite level, and there is considerable data on both sides. It'll be interesting to see how the major sporting bodies handle the question of testosterone in sports, transgenders, and hyperandrogenic females.

Personally, I think there may be something to Kerr and Obel's arguments (feminist/patriarchy garbage aside) since they're based on anatomy and physiology, which is what we see on the field. However, it can also be argued that sex/gender is manifested in the brain, which then confers other advantages/disadvantages in sports. Nonetheless, I think the argument in the paper is sound (the anatomy and physiology arguments only). For instance, we can look at one sport, say the 100 m dash, and say: "OK, we know that sprinters have meso-ecto somatotypes and that, combined with the RR ACTN3 genotype, this confers elite athletic performance (Broos et al, 2016)." We could use those two variables along with leg length, foot length, etc., and then test, both in the lab and on the field, which variables confer advantages in certain sports. Another sport to consider is swimming: higher levels of body fat along with wide clavicles and a wide chest cavity are more conducive to swimming success. We could use those types of variables for swimming, and so on.

Of course, this method may not work, or it may work in theory but not in practice. Using lean body mass, VO2 max, and so on, depending on which sport is in question, may be better than using the ‘sex binary’, since some women (trust me, I've trained hundreds) would be able to compete head-to-head with men and, if for nothing else, it'd be good entertainment.

However, in my opinion, the logic of using anatomy and physiology instead of sex to segregate sports is intriguing and, if nothing else, would finally give feminists (and non-feminists) the ‘equality’ they ask for.

Race, Testosterone, Aggression, and Prostate Cancer

4050 words

Race, aggression, and prostate cancer are often linked, with some believing that race dictates higher testosterone, which then causes aggression and higher rates of crime along with maladies such as prostate cancer. These claims have long been put to bed by a wide range of large analyses.

The testosterone debate regarding prostate cancer has raged for decades, and we have made good strides in understanding the etiology of prostate cancer and how it manifests. The same holds true for aggression. But does testosterone hold the key to understanding aggression and prostate cancer, and does race dictate group levels of the hormone, which would then explain some of the disparities between groups and between individuals of certain groups?

Prostate cancer

For decades it was believed that heightened levels of testosterone caused prostate cancer. Most theories to this day still hold that large amounts of androgens, like testosterone and its metabolic byproduct dihydrotestosterone, are the two main factors driving the proliferation of prostate cells; therefore, if a male is exposed to higher levels of testosterone throughout his life, then he is at higher risk of prostate cancer than a man with low testosterone levels. So the story goes.

In 1986 Ronald Ross set out to test a hypothesis: that black males were exposed to more testosterone in the womb and this then drove their higher rates of prostate cancer later in life. He reportedly discovered that blacks, after controlling for confounds, had 15 percent higher testosterone than whites, which may be the cause of differential prostate cancer mortality between the two races (Ross et al, 1986). This is told in a 1997 editorial by Hugh McIntosh. First, on the claim that black males are exposed to more testosterone in the womb: I am aware of one paper discussing higher levels of testosterone in black women compared to white women (Perry et al, 1996). Though, I've shown that black women don't have high levels of testosterone, not higher than white women, anyway (see Mazur, 2016 for discussion). (Yes, I changed my view on black women and testosterone; stop saying that they have high levels of testosterone, it's just not true. I see people still link to that article despite the long disclaimer at the top.)

Alvarado (2013) discusses Ross et al (1986), Ellis and Nyborg (1992) (which I also discussed here along with Ross et al), and other papers claiming higher testosterone in blacks than in whites, and attempts to use a life history framework to explain higher incidences of prostate cancer in black males. He first notes that nutritional status influences testosterone production, which should surprise no one. He brings up some points I agree with and some I do not. For instance, he states that differences in nutrition could explain differences in testosterone between Western and non-Western people (I agree), but that this has no effect within Western countries (which is incorrect, as I'll get to later).

He also states that ancestry isn't related to prostate cancer, writing: “In summation, ancestry does not adequately explain variation among ethnic groups with higher or lower testosterone levels, nor does it appear to explain variation among ethnic groups with high or low prostate cancer rates. This calls into question the efficacy of a disease model that is unable to predict either deleterious or protective effects.”

He then states that SES is negatively correlated with prostate cancer rates, and that numerous papers show people with low SES have higher rates of prostate cancer mortality. This makes sense, since people in a lower economic class have less access to good medical care, including prostate biopsies and checkups, that would identify a condition like prostate cancer.

He finally discusses the challenge hypothesis and prostate cancer risk. He cites studies by Mazur and Booth (whom I've cited in numerous past articles) as evidence that, as most know, black-majority areas have more crime, which would then cause higher levels of testosterone production. He cites Mazur's old papers showing that low-class men, whether white or black, had heightened levels of testosterone while college-educated men did not, which implies that the social environment can and does elevate testosterone levels and keep them heightened. Alvarado concludes this section writing: “Among Westernized men who have energetic resources to support the metabolic costs associated with elevated testosterone, there is evidence that being exposed to a higher frequency of aggressive challenges can result in chronically elevated testosterone levels. If living in an aggressive social environment contributes to prostate cancer disparities, this has important implications for prevention and risk stratification.” He's not entirely wrong; where he is wrong I will discuss later in this section. It's false that testosterone causes prostate cancer, so part of his thesis is incorrect.

I rebutted Ross et al (1986) in December of last year. The study was hugely flawed and yet still gets cited to this day, including by Alvarado (2013) as the main point of his thesis. Perhaps most importantly, the assays were done ‘when it was convenient’ for the students, between 10 am and 3 pm. To avoid wacky readings, one must assay the individuals as close to 8:30 am as possible. Furthermore, they did not control for waist circumference, another huge confound. Lastly, the sample was extremely small (50 blacks and 50 whites) and nonrepresentative (college students). I don't think anyone can honestly cite this paper as evidence for blacks having higher levels of testosterone or for testosterone causing prostate cancer, because it just doesn't show that. (Read Race, Testosterone and Prostate Cancer for more information.)

What may explain prostate cancer rates, if not differences in testosterone as has been hypothesized for decades? As I have argued, diet explains a lot of the variation between races. The etiology of prostate cancer is not known (ACA, 2016), but we know that it's not testosterone and that diet plays a large role. Due to their darker skin, blacks need more sunlight than whites to synthesize the same amount of vitamin D, and low levels of vitamin D in blacks are strongly related to prostate cancer (Harris, 2006). Murphy et al (2014) even showed, through biopsies, that black American men had higher rates of prostate cancer if they had lower levels of vitamin D. Lower concentrations of vitamin D in blacks compared to whites, due to dark pigmentation which reduces vitamin D photoproduction, may also account for “much of the unexplained survival disparity after consideration of such factors as SES, stage at diagnosis and treatment” (Grant and Peiris, 2012).

Testosterone

As mentioned above, testosterone is assumed (based on flawed studies) to be higher in certain races, which then supposedly exacerbates prostate cancer. But as can be seen above, a lot of false assumptions go into the testosterone-prostate cancer hypothesis. If the assumptions about testosterone are false, mainly regarding racial differences in the hormone and what the hormone actually does, then most of these claims can be disregarded.

Perhaps the biggest problem is that Ross et al is a 32-year-old paper (which still gets cited favorably despite its huge flaws), while our understanding of the hormone and its physiology has made considerable progress in that time frame. So it's not so weird to see papers that say “Prostate cancer appears to be unrelated to endogenous testosterone levels” (Boyle et al, 2016). Other papers show the same thing: testosterone is not related to prostate cancer (Stattin et al, 2004; Michaud, Billups, and Partin, 2015). This kills a lot of theories and hypotheses, especially regarding racial differences in prostate cancer acquisition and mortality. So even if blacks did have 15 percent higher serum testosterone than whites, as Ross et al, Rushton, Lynn, Templer, et al believed, it wouldn't cause higher levels of prostate cancer (nor aggression, which I'll get into later).

How high is testosterone in black males compared to white males? People may cite papers like the 32-year-old Ross et al, though as I've discussed numerous times the paper is highly flawed and should not be cited. Either way, levels are not as high as people believe: meta-analyses and nationally representative samples (not convenience college samples) show little to no difference, and even the small difference wouldn't explain any health disparities.

One of the best papers on racial differences in testosterone is Richard et al (2014). They meta-analyzed 15 studies and concluded that the “racial differences [range] from 2.5 to 4.9 percent” but “this modest difference is unlikely to explain racial differences in disease risk.” So testosterone isn't as high in blacks as is popularly misconceived, and, as I will show below, it wouldn't even cause higher rates of aggression and therefore criminal behavior. (Rohrmann et al (2007) show no difference in testosterone between black and white males in a nationally representative sample after controlling for lifestyle and anthropometric variables, whereas Mazur (2009) shows that blacks have higher levels of testosterone due to low marriage rates and lower levels of adiposity; he found a .39 ng/ml difference between blacks and whites aged 20 to 60. Is this supposed to explain crime, aggression, and prostate cancer?)

However, as I noted last year (and as Alvarado, 2013 did as well), young black males with low education have higher levels of testosterone, which is not seen in black males of the same age with more education (Mazur, 2016). Since blacks of a similar age group with more education have lower levels of testosterone, this is a clue that education drives aggression/testosterone/violent behavior, and not that testosterone drives it.

Mazur (2016) also replicated Assari, Caldwell, and Zimmerman's (2014) finding that “Our model in the male sample suggests that males with higher levels of education has lower aggressive behaviors. Among males, testosterone was not associated with aggressive behaviors.” I know it is hard for many to swallow that testosterone doesn't lead to aggressive behavior in men, but I'll cover that in the final section.

So it's clear that the myth Rushton, Lynn, Templer, Kanazawa, et al pushed regarding hormonal differences between the races is false. It's also worth noting, as I did in my response to Rushton on r/K selection theory, that the r/K model is literally predicated on 1) testosterone differences between races being real and in the direction that Rushton and Lynn want (they cite the highly flawed Ross et al, 1986) and 2) testosterone causing higher levels of aggression (which, as I'll show below, it does not), which would then lead to higher rates of crime and incarceration.

A blogger who goes by the name ethnicmuse analyzed numerous testosterone papers, and his findings (presented in his original post) go against a ton of HBD theory, that is, if testosterone did what HBDers believe it does (it doesn't). This is what it comes down to: blacks don't have higher levels of testosterone than whites, and testosterone causes neither aggression nor prostate cancer, so even if the relationship were in the direction Rushton et al assert, it still wouldn't explain the variables they discuss.

Last year Lee Ellis published a paper outlining his ENA theory (Ellis, 2017). I responded to the paper and pointed out what he got right and wrong. He discussed strength (blacks aren't stronger than whites due to body type and physiology, though they excel in other areas); circulating testosterone; umbilical cord testosterone exposure; bone density and crime; penis size, race, and crime (Rushton's 1997 claims on penis size don't ‘size up’ to the literature, as I've shown twice); prostate-specific antigens, race, and prostate cancer; CAG repeats; education and ‘intelligence’; and prenatal androgen exposure. His theory has large holes and doesn't line up in places, as he himself admits in the paper. He, as expected, cites Ross et al (1986) favorably in his analysis.

Testosterone can't explain all of these differences, whether prenatal androgen exposure or not, and a difference of 2.5 to 4.9 percent between blacks and whites (Richard et al, 2014) won't explain differences in crime, aggression, or prostate cancer.

Other authors have also attempted to implicate testosterone as a major player in a wide range of evolutionary theories (Lynn, 1990; Rushton, 1997; Rushton, 1999; Hart, 2007; Rushton and Templer, 2012; Ellis, 2017). However, as digging into this literature shows, these claims are not true, and therefore we can discard the conclusions of the aforementioned authors, since they rest on false premises (that testosterone causes aggression, crime, and prostate cancer, and that r/K means anything for human races; it doesn't).

Finally, to conclude this section, does testosterone explain racial differences in crime? No, racial differences in testosterone, however small, cannot be responsible for the crime gap between blacks and whites.

Testosterone and aggression

Testosterone and aggression: are they linked? Can testosterone tell us anything about individual differences in aggressive behavior? Surprisingly for most, the answer seems to be a resounding no. One example is the castration of males. Does it completely take away the urge to act aggressively? No. When sex offenders are castrated, their levels of aggression decrease, but importantly, they do not decrease to zero. Robert Sapolsky writes in his book Behave: The Biology of Humans at Our Best and Worst (2017) (pg 96):

… the more experience a male has being aggressive prior to castration, the more aggression continues afterward. In other words, the less his being aggressive in the future requires testosterone and the more it’s a function of social learning.

He also writes (pg 96-97):

On to the next issue that lessens the primacy of testosterone: What do individual levels of testosterone have to do with aggression? If one person has higher testosterone levels than another, or higher levels this week than last, are they more likely to be aggressive?

Initially the answer seemed to be yes, as studies showed correlation between individual differences in testosterone levels and levels of aggression. In a typical study, higher testosterone levels would be observed in those male prisoners with higher rates of aggression. But being aggressive stimulates testosterone secretion; no wonder more aggressive individuals had higher levels. Such studies couldn’t disentangle chickens and eggs.

Thus, a better question is whether differences in testosterone levels among individuals predict who will be aggressive. And among birds, fish, mammals, and especially other primates, the answer is generally no. This has been studied extensively in humans, examining a variety of measures of aggression. And the answer is clear. To quote British endocrinologist John Archer in a definitive 2006 review, “There is a weak and inconsistent association between testosterone levels and aggression in [human] adults, and . . . administration of testosterone to volunteers typically does not increase aggression.” The brain doesn’t pay attention to testosterone levels within the normal range.

[…]

Thus, aggression is typically more about social learning than about testosterone, and differing levels of testosterone generally can't explain why some individuals are more aggressive than others.

Sapolsky also has a 1997 book of essays on human biology, The Trouble With Testosterone: And Other Essays On The Biology Of The Human Predicament, with a really good essay on testosterone titled Will Boys Just Be Boys?, where he writes (pg 113-114):

Okay, suppose you note a correlation between levels of aggression and levels of testosterone among these normal males. This could be because (a) testosterone elevates aggression; (b) aggression elevates testosterone secretion; (c) neither causes the other. There's a huge bias to assume option a while b is the answer. Study after study has shown that when you examine testosterone when males are first placed together in the social group, testosterone levels predict nothing about who is going to be aggressive. The subsequent behavioral differences drive the hormonal changes, not the other way around.

Because of a strong bias among certain scientists, it has taken forever to convince them of this point.

[…]

As I said, it takes a lot of work to cure people of that physics envy, and to see that interindividual differences in testosterone levels don't predict subsequent differences in aggressive behavior among individuals. Similarly, fluctuations in testosterone within one individual over time do not predict subsequent changes in the levels of aggression in that one individual—get a hiccup in testosterone secretion one afternoon and that's not when the guy goes postal.

And on page 115 writes:

You need some testosterone around for normal levels of aggressive behavior—zero levels after castration and down it usually goes; quadruple it (the sort of range generated in weight lifters abusing anabolic steroids), and aggression typically increases. But anywhere from roughly 20 percent of normal to twice normal and it’s all the same; the brain can’t distinguish among this wide range of basically normal values.

Weird…almost as if there is a wide range of ‘normal’ that is ‘built in’ to our homeodynamic physiology…

So here's the point: differences in testosterone between individuals tell us nothing about individual differences in aggressive behavior. Castration and replacement seem to show that, however broadly, testosterone is related to aggression. “But that turns out to not be true either, and the implications of this are lost on most people the first thirty times you tell them about it. Which is why you'd better tell them about it thirty-one times, because it's the most important part of this piece” (Sapolsky, 1997: 115).

Later in the essay, Sapolsky discusses five monkeys given time to form a hierarchy, ranks 1 through 5. Number 3 can ‘throw his weight around’ with 4 and 5 but treads carefully around 1 and 2. Now take the third-ranking monkey and inject him with a ton of testosterone. When you check the behavioral data, he'll be participating in more aggressive actions than before, which would seem to imply that the exogenous testosterone causes more aggressive behavior. But it's way more nuanced than that.

So even though small fluctuations in the levels of the hormone don't seem to matter much, testosterone still causes aggression. But that would be wrong. Check out number 3 more closely. Is he now raining aggression and terror on any and all in the group, frothing in an androgenic glaze of indiscriminate violence? Not at all. He's still judiciously kowtowing to numbers 1 and 2 but has simply become a total bastard to numbers 4 and 5. This is critical: testosterone isn't causing aggression, it's exaggerating the aggression that's already there.

The correlation between testosterone and aggression is between .08 and .14 (Book, Starzyk, and Quinsey, 2001; Archer, Graham-Kevan, and Davies, 2005; Book and Quinsey, 2005). Therefore, along with all of the other evidence in this article, it seems that testosterone and aggression have only a weak positive correlation, which buttresses the point that aggression causes concurrent increases in testosterone, not the reverse.
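To put those correlations in perspective, squaring r gives the share of variance in aggression statistically accounted for by testosterone. This quick check is my own arithmetic on the cited range, not a figure from the papers themselves:

```python
# r-squared: proportion of variance in aggression "explained" by
# testosterone, for the low and high ends of the cited correlations.
for r in (0.08, 0.14):
    print(f"r = {r:.2f} -> r^2 = {r ** 2:.4f} ({r ** 2:.1%} of variance)")
```

Even at the top of the range, r = .14, testosterone statistically accounts for about 2 percent of the variance in aggression, leaving roughly 98 percent to everything else.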

Sapolsky then goes on to discuss the amygdala's role in fear processing. The amygdala exerts its influence on aggressive behavior through the stria terminalis, a bundle of neuronal connections: bursts of electrical excitation called action potentials surge down the stria terminalis and change the hypothalamus. Inject testosterone right into the brain: will it cause those same action potentials to surge down the stria terminalis? No, it does not turn on the pathway at all. Testosterone increases the rate of action potentials only if the amygdala is already sending aggression-provoking signals down the stria terminalis, shortening the rest time between them. So it doesn't turn on the pathway; it exaggerates the preexisting pattern, which is to say it exaggerates the response to whatever environmental triggers excited the amygdala in the first place.

He ends this essay writing (pg 119):

Testosterone is never going to tell us much about the suburban teenager who, in his after-school chess club, has developed a particularly aggressive style with his bishops. And it certainly isn’t going to tell us much about the teenager in some inner-city hellhole who has taken to mugging people. “Testosterone equals aggression” is inadequate for those who would offer a simple solution to the violent male—just decrease levels of those pesky steroids. And “testosterone equals aggression” is certainly inadequate for those who would offer a simple excuse: Boys will be boys and certain things in nature are inevitable. Violence is more complex than a single hormone. This is endocrinology for the bleeding heart liberal—our behavioral biology is usually meaningless outside of the context of social factors and the environment in which it occurs.

Injecting individuals with supraphysiological doses of testosterone as high as 200 and 600 mg per week does not cause heightened anger or aggression (Tricker et al, 1996; O'Connor et al, 2002). This, too, is a large blow to the testosterone-induces-aggression hypothesis. Aggressive behavior heightens testosterone; testosterone doesn't heighten aggressive behavior. (This is the causality that has been looked for, and here it is; it does not run in the other direction.) This tells us that we need to be put into situations for our aggression to rise and, along with it, testosterone. I don't even see how people could think that testosterone causes aggression. The environmental trigger needs to be there first for the body's physiology to ramp up testosterone production in preparation for the stimulus. Once the trigger occurs, testosterone can and does stay heightened, especially in areas where dominance contests are more likely to occur, which would be low-income areas (Mazur, 2006, 2016).

(Also read my response to Batrinos, 2012, my musings on testosterone and race, and my responses to Robert Lindsay and Sean Last.)

Lastly, one thing that gets on my nerves is that people point to the myth of "roid rage" to attempt to show that testosterone and its derivatives cause violence, aggression, etc. The claim is that when an individual injects himself with testosterone, anabolic steroids, or another banned substance, he becomes more aggressive as a result of more free-flowing testosterone in his bloodstream.

But it’s not that simple.

The problem here is that people believe what they hear in the media about steroids and testosterone, and it's largely not true. One large analysis examined the effects of steroid and other illicit drug use on behavior, and found, after controlling for other substance use, that "Our results suggest that it was not lifetime steroid use per se, but rather co-occurring polysubstance abuse that most parsimoniously explains the relatively strong association of steroid use and interpersonal violence" (Lundholm et al, 2015). So after other drug use was controlled for, steroid use did not predict conviction for interpersonal violence, implying that polysubstance abuse, not steroid use itself, is what drives the association with violence.

Conclusion

Numerous myths about testosterone have been propagated over the decades, and they are still believed in the new millennium despite numerous studies and arguments to the contrary. As can be seen, the myths that people believe about testosterone are easily debunked. Numerous papers (with better methodology than Ross et al) attest to the fact that racial differences in testosterone levels aren't as large as was believed decades ago. Diet can explain a lot of the variation, especially vitamin D intake. Injecting men with supraphysiological doses of testosterone heightens neither anger nor aggression. It does not even heighten prostate cancer severity.

Racial differences in testosterone are also not as large as people would like to believe; there is even an opposite relationship, with Asians having higher testosterone levels and whites lower (which wouldn't, on average, imply femininity). So as can be seen, the attempted r/K explanations from Rushton et al don't work out here. They're just outright wrong on testosterone, as I've been arguing for a long while on this blog.

Testosterone doesn't cause aggression; aggression causes heightened testosterone. Studies of men who have been castrated show that the more crime they committed before castration, the more crime they committed after, which implies a large effect of social learning on violent behavior. Either way, the alarmist attitudes people hold regarding testosterone are, as I have argued, not needed, because they're largely based on myths.

Donald Trump, His Health, Diet, and ‘Good Genes’

Much has been written recently about the health of our President, Donald Trump. He reportedly eats a McDonald's meal worth a whopping 2420 kcal in one sitting, containing 112 grams of fat. His order on the campaign trail was two Filet-O-Fish sandwiches, two Big Macs, and a chocolate shake. It's been well documented that his diet is full of garbage, so you'd think he'd get a poor bill of health from his doctor, right?

Wrong.

The White House doctor recently stated that the President was in good health. How could he be in good health if he eats McDonald's garbage? He is 6 feet 3 inches tall and weighs about 239 pounds, giving him a BMI of 29.9, just one-tenth of a point under the obesity cutoff of 30. He reportedly eats a lot of garbage 'food' and even reportedly drinks up to 12 cans of Diet Coke per day. It seems that Big Food is going to love our President even more, because he gives them free advertising.
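As a sanity check on that figure, the standard formula for BMI from pounds and inches is 703 × weight / height². A minimal sketch (the function name is mine):

```python
def bmi(weight_lb: float, height_in: float) -> float:
    """Body mass index from pounds and inches: 703 * lb / in^2."""
    return 703 * weight_lb / height_in ** 2

# 6'3" = 75 inches, 239 pounds
print(round(bmi(239, 75), 1))  # 29.9, just under the obesity cutoff of 30
```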

Renowned nutritionist Zoe Harcombe says "If true, this is a terrible diet" … "Twelve cans of Diet Coke contain far more than an adult's daily recommended dose of caffeine. Consuming too much of it induces energy highs followed by crashing lows and potentially manic behaviour, which could explain his enraged tweets." I'm not interested in attempting to 'explain' his behavior psychologically on the basis of his diet, though; I'm only interested in his overall health and how his diet does or does not affect it.

He also scored 30 out of 30 on the Montreal Cognitive Assessment, which screens for mild cognitive dysfunction (the test was requested by Trump himself). However, two mental health experts state that the Montreal Cognitive Assessment is only used to determine whether further Alzheimer's or cognitive screening is needed. Either way, they state that he should get a PET or MRI scan to assess any possible damage to his brain.

Interestingly, here is the President’s health report.

A few things to note here. Trump's total cholesterol puts him in the borderline range. His triglycerides are at a good level, below average. His HDL cholesterol of 67 mg/dL is in the good range, while his LDL cholesterol level puts him at borderline risk. Trump's total cholesterol to HDL ratio, however, is good, implying that he's not at an increased risk of heart attack.

Regarding his white blood cell count, he's slightly below the values for a man of his age. White blood cell count is a good predictor of mortality in the elderly (Nilsson, Hedberg, and Ohrvik, 2014), and since he's on the lower end there, he doesn't have to worry about that.

For hemoglobin (HGB) he's in the normal range. His HCT level (a hematocrit test measures the percentage of red blood cells in the blood) is 48.7%, which puts him in the normal range of 38.8 to 50 percent. His platelet count (PLT) is in the normal range at 241 K/uL, the normal range being 140 to 400 K/uL. His fasting blood glucose is normal (89 mg/dL, with the normal range being 70 to 100 mg/dL), implying that Trump is at no risk of developing type II diabetes.

His BUN (blood urea nitrogen) level was 19.0 mg/dL, with the normal range being 7 to 20 mg/dL, so he's at the high end there. His blood creatinine level of .98 mg/dL is in the normal range of .84 to 1.21 mg/dL. His ALT level was 27 U/L, with the normal range being 7 to 56 U/L. His AST level was 19 U/L, with the normal range being 10 to 40 U/L. His hemoglobin A1C level is normal at 5.0%; a level between 5.7 and 6.4 indicates pre-diabetes, so he is not at risk for diabetes.

Regarding vitamin D, Trump is right at the edge of good levels at 20 ng/ml, though others state that for the elderly levels should be around 32 to lessen the chance of fracture. His PSA (prostate specific antigen) level is extremely low at .12 ng/mL, with medical professionals advising that PSA levels over 10 ng/mL indicate risk for prostate cancer. Finally, his TSH (thyroid stimulating hormone) level was 1.76 uIU/ml, with the normal range being .4 to 5.0 uIU/ml.
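Checking each value against its reference interval is mechanical. Here is a minimal sketch using the ranges quoted in this post (these are the post's numbers, not authoritative lab standards):

```python
# (low, high, measured) for each marker, using the ranges quoted above.
panel = {
    "HCT (%)":            (38.8, 50.0, 48.7),
    "PLT (K/uL)":         (140,  400,  241),
    "Glucose (mg/dL)":    (70,   100,  89),
    "BUN (mg/dL)":        (7,    20,   19.0),
    "Creatinine (mg/dL)": (0.84, 1.21, 0.98),
    "ALT (U/L)":          (7,    56,   27),
    "AST (U/L)":          (10,   40,   19),
    "TSH (uIU/ml)":       (0.4,  5.0,  1.76),
}
for marker, (low, high, value) in panel.items():
    flag = "normal" if low <= value <= high else "OUT OF RANGE"
    print(f"{marker}: {value} [{low}-{high}] {flag}")
```

Every marker lands inside its quoted interval, with BUN sitting right at the upper edge, which matches the report's reading.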

The only thing wrong in this report is his LDL cholesterol level. The summary of the report states that he should lower his intake of fat and carbohydrates. Yet Ravnskov et al (2016) state that "High LDL-C is inversely associated with mortality in most people over 60 years." So it seems he's fine there.

He should drop carbs for more fat. Of course, it seems like the doctors at the White House are still living in the 70s, when fat was demonized, despite saturated fat consumption showing no association with higher all-cause mortality (de Souza et al, 2015; Dehghan et al, 2017). In my opinion, it's the carbohydrates driving up his cholesterol levels, not fat consumption.

Trump claims that he "gets more exercise than people think", though his doctor himself said that Trump was dealt a hand of "good genes". So will these "good genes" protect him for the rest of his life as he eats a shitty diet? In short, no.

A medical doctor and geneticist stated that "Even if Trump has been dealt a good genetic hand, he's certainly not helping himself," also stating that people who have genes that lower their risk of disease "can mess that up." Khera et al (2016) showed that "Among participants at high genetic risk, a favorable lifestyle was associated with 50% lower relative risk of coronary artery disease than was an unfavorable lifestyle."

So 'good genes' and a bad diet are equivalent to 'bad genes' and a good diet. 'Good genes' don't mean you should (or are able to) eat garbage like fast food every day of your life. Your diet affects your health, no matter whether you have 'good' or 'bad' genes.

In sum, Trump may have 'good genes' that let him eat the garbage he eats and still have a relatively good blood panel. However, as can be seen from the data I have provided here, 'good genes' and a bad diet aren't 'better' than 'bad genes' and a good diet. No matter which genetic hand you were dealt, your diet needs to be kept in check if you want a good metabolic and blood panel and to stay healthy into old age. It seems that Trump will need to put away the Diet Cokes and 2420 kcal McDonald's meals if he wants to live into old age. Though the advice to 'lower fat and carbs' doesn't make sense; just lower carbs, there's no need to lower fat at all.

Responding to Criticisms on IQ

2250 words

My articles get posted on the Reddit board /r/hbd and, of course, people don't like what I write about IQ. I get accused of relying on 'Richardson's n=34 studies' even though that was literally one citation in a 32-page paper and does not affect his overall argument. (I will be responding to Kirkegaard and UnsilencedSci in separate articles.) I'll use this time to respond to criticisms from the Reddit board.

quieter_bob says:

He’s peddling BS, say this:

“But as Burt and his associates have clearly demonstrated, teachers’ subjective assessments afford even more reliable predictors.”
Well, no, teachers are in fact remarkably poor at predicting student’s success in life. Simple formulas based on school grades predict LIFE success better than teachers, notwithstanding the IQ tests.

You're incorrect. As I stated in my response to The Alternative Hypothesis, the correlation between teachers' judgments and student achievement is .66. "The median correlation, 0.66, suggests a moderate to strong correspondence between teacher judgements and student achievement" (Hoge and Coladarci, 1989: 303). This is a higher correlation than what was found in the 'validation studies' from Hunter and Schmidt.

He cherry-picks a few bad studies and ignores entire bodies of evidence with sweeping statements like this:

“This, of course, goes back to our good friend test construction. ”
Test construction is WHOLLY IRRELEVANT. It’s like saying: “well, you know, the ether might be real because Michelson-Morley experiment has been constructed this way”. Well no, it does not matter how MM experiment has been constructed as long as it tests for correct principles. Both IQ and MM have predictive power and it has nothing to do with “marvelling”, it has to do whether the test, regardless of its construction, can effectively predict outcomes or not.

This is a horrible example. You're comparing the presuppositions of test constructors, who have in mind who is or is not intelligent and then construct the test to confirm those preconceived notions, to an experiment that was designed to detect the presence and properties of the aether? Surely you can think of a better analogy, because this is not it.

More BS: “Though a lot of IQ test questions are general knowledge questions, so how is that testing anything innate if you’ve first got to learn the material, and if you have not you’ll score lower?”

Of course the IQ tests do NOT test much of general knowledge. Out of 12 tests in WAIS only 2 deal with general knowledge.

The above screenshot is from Nisbett (2012: 14). (Though it shows the WISC, not the WAIS, they're similar; all IQ tests go through item analysis, tossing items that don't conform to the test constructors' presuppositions.)

Either way, our friend test construction makes an appearance here, too. This is how these tests are made: they are made to conform to the constructors' presuppositions. The WISC and WAIS have similar subtests, either way. Test anxiety, furthermore, leads to lessened performance on the block design and picture arrangement subtests (Hopko et al, 2005), and moderate to severe stress is related to social class and IQ test performance. Stress affects the growth of the hippocampus and PFC (prefrontal cortex) (Davidson and McEwen, 2012), so does this seem like an 'intellectual' thing? Furthermore, all tests and batteries are tried out on a sample of children, with items not contributing to a normal distribution being tossed out; 'item analysis' therefore forces what we 'see' regarding IQ tests.

Even the great Jensen said in his 1980 book Bias in Mental Testing (pg 71):

It is claimed that the psychometrist can make up a test that will yield any type of score distribution he pleases. This is roughly true, but some types of distributions are easier to obtain than others.

This holds for the WAIS, the WISC, the Raven, any type of IQ test. It shows how arbitrary 'item selection' is. No matter what type of 'IQ test' you point to and say 'It does test "intelligence" (whatever that is)!', the reality of test construction, of building tests to fit presuppositions and distributions, cannot be escaped.

The other popular test, Raven’s Progressive Matrices does not test for general knowledge at all.

This is a huge misconception. People think that just because there are no ‘general knowledge questions’ or anything verbal regarding the Matrices then it must test an innate power, thus mysterious ‘g’. However, this is wrong and he clearly doesn’t keep up with recent data:

Reading was the greatest predictor of performance on Raven's, despite controlling for age and sex. Attendance was also strongly related to Raven's performance [school attendance was used as a proxy for motivation]. These findings suggest that reading, or pattern recognition, could be fundamentally affecting the way an individual problem solves or learns to learn, and is somehow tapping into 'g'. Presumably the only way to learn to read is through schooling. It is, therefore, essential that children are exposed to formal education, have the motivation to go/stay in school, and are exposed to consistent, quality training in order to develop the skills associated with test performance. (pg 83) Variable Education Exposure and Cognitive Task Performance Among the Tsimane, Forager-Horticulturalists.

Furthermore, according to Richardson (2002): "Performance on the Raven's test, in other words, is a question not of inducing 'rules' from meaningless symbols, in a totally abstract fashion, but of recruiting ones that are already rooted in the activities of some cultures rather than others."

The assumption that the Raven is 'culture free' because it's 'just shapes and rote memory' is clearly incorrect. James Thompson even told me that Linda Gottfredson said that people only think the Raven is a 'test of pure g' because Jensen said so, a claim that is not true.

samsungexperience says:

This is completely wrong in so many ways. No understanding of normalization. Suggestion that missing heritability is discovering environmentally. I think a distorted view of the Flynn Effect. I’ll just stick to some main points.

I didn’t imply a thing about missing heritability. I only cited the article by Evan Charney to show how populations become stratified.

RR: There is no construct validity to IQ tests

First, let’s go through the basics. All IQ tests measure general intelligence (g), the positive manifold underlying every single measure of cognitive ability. This was first observed over a century ago and has been replicated across hundreds of studies since. Non-g intelligences do not exist, so for all intents and purposes it is what we define as intelligence. It is not ‘mysterious’

Thanks for the history lesson. 1) We don't know what 'g' is. (I've argued that it's not physiological.) So 'intelligence' is defined as 'g', yet we don't know what 'g' is. His statement here is pretty much literally 'intelligence is what IQ tests test'.

It would be correct to say that the exact biological mechanisms aren’t known. But as with Gould’s “reification” argument, this does not actually invalidate the phenomenon. As Jensen put it, “what Gould has mistaken for “reification” is neither more nor less than the common practice in every science of hypothesizing explanatory models or theories to account for the observed relationships within a given domain.” Poor analogies to white blood cells and breathalyzer won’t change this.

It's not a 'poor analogy' at all. I've since expanded on the construct validity argument with examples of other construct-valid tests, showing how the breathalyzer has construct validity and how white blood cell count is a proxy for disease. They have construct validity; IQ tests do not.

RR: I said that I recall Linda Gottfredson saying that people say that Ravens is culture-fair only because Jensen said it

This has always been said in the context of native, English speaking Americans. For example it was statement #5 within Mainstream Science on Intelligence. Jensen’s research has demonstrated this. The usage of Kuwait and hunter gatherers is subsequently irrelevant.

Point 5 of the Mainstream Science on Intelligence memo is: "Intelligence tests are not culturally biased against American blacks or other native-born, English-speaking peoples in the U.S. Rather, IQ scores predict equally accurately for all such Americans, regardless of race and social class. Individuals who do not understand English well can be given either a nonverbal test or one in their native language."

This is very vague. Richardson (2002) has noted how different social classes are differentially prepared for IQ test items:

I shall argue that the basic source of variation in IQ test scores is not entirely (or even mainly) cognitive, and what is cognitive is not general or unitary. It arises from a nexus of sociocognitive-affective factors determining individuals' relative preparedness for the demands of the IQ test.

The fact of the matter is, all social classes aren’t prepared in the same way to take the IQ test and if you read the paper you’d see that.

RR: IQ test validity

I’ll keep this short. There exist no predictors stronger than g across any meaningful measures of success. Not education, grades, upbringing, you name it.

Yes, there are. Teacher assessment has a higher correlation with student achievement than 'IQ' has with job performance.

RR: Another problem with IQ test construction is the assumption that it increases with age and levels off after puberty.

The very first and most heavily researched behavioral trait’s heritability has been intelligence. Only through sheer ignorance could the term “assumption” describe findings from over a century of inquiry.

Yes, the term 'assumption' was correct. You do realize that the increase in the heritability of IQ with age is, again, due to test construction? You can build that into the test by including more advanced questions, say, high-school-level questions for a 12-year-old, and heritability would seem to increase purely because of how the test was constructed.

Finally, IanTichszy says:

That article is thoroughly silly.

First, the IQ tests predict real world-performance just fine: http://thealternativehypothesis.org/index.php/2016/04/15/the-validity-of-iq/

I just responded to this article this week. IQ tests only 'predict real-world performance just fine' because they're constructed to, and even then, high-achieving children rarely become high-achieving adults, whereas low-achieving children often become successful adults. There are numerous problems with TAH's article, which I've already covered.

That is the important thing, not just correlation with blood pressure or something biological. Had g not predicted real-world performance from educational achievement to job performance with very high reliability, it would be useless, but it does predict those.

Test construction. You can't get past that by saying 'it does predict', because it only predicts because it's constructed to (I'd call it 'postdict').

Second, on Raven’s Progressive Matrices test: the argument “well Jensen just said so” is plain silly. If RPM is culturally loaded, a question: just what culture is represented on those charts? You can’t reasonably say that. Orangutans are able to solve simplified versions of RPM, apparently they do not have a problem with cultural loading. Just look at the tests yourself.

Of course it's silly to accept that the Raven is culture free and tests 'g' best just 'because Jensen said so'. The culture loading of the Raven is known; there is a 'hidden structure' in the items. Even the constructors of the Raven have noted this, stating that they transposed the items to read from left to right, not right to left, which is a tacit admission of cultural loading. "The reason that some people fail such problems is exactly the same reason some people fail IQ test items like the Raven Matrices tests… It simply is not the way the human cognitive system is used to being engaged" (Richardson, 2017: 280).

Furthermore, when items are familiar to all groups, even young children are capable of complex analogical reasoning. IQ tests "test for the learned factual knowledge and cognitive habits more prominent in some social classes than in others. That is, IQ scores are measures of specific learning, as well as self-confidence and so on, not general intelligence" (Richardson, 2017: 192).

Another piece of misinformation: claiming that IQs are not normally distributed. Well, we do not really know the underlying distribution, that’s the problem, only the rank order of questions by difficulty, because we do not have absolute measure of intelligence. Still, the claim that SOME human mental traits, other than IQ, do not have normal distribution, in no way impacts the validity of IQ distribution as tests found it and projected onto mean 100 and standard dev 15 since it reflects real world performance well.

Physiological traits important for survival are not normally distributed (and of course it is assumed both that IQ tests innate physiological differences and that intelligence is important for survival, so if IQ were physiological it wouldn't be normally distributed either, since traits important for survival have low heritabilities). As for why it 'predicts real-world performance well', see above and my other articles on this matter.

If you know even the basic facts about IQ, it’s clear that this article has been written in bad faith, just for sake of being contrarian regardless of the truth content or for self-promotion.

No, people don't know the basic facts of IQ (or its construction). My article isn't written in bad faith, nor is it contrarian regardless of the truth content or for self-promotion. I can, clearly, address criticisms of my writing.

In the future, if anyone has any problems with what I write then please leave a comment here on the blog at the relevant article. Commenting on Reddit on the article that gets posted there is no good because I probably won’t see it.
