NotPoliticallyCorrect


Tag Archives: Psychology

Superiority, Psychometrics, and Measurement

2800 words

In a conversation with an IQ-ist, one may eventually find themselves discussing the concept of “superiority” or “inferiority” as it regards IQ. The IQ-ist may say that only critics of the concept of IQ place any sort of value-judgments on the number one gets when they take an IQ test. But if the IQ-ist says this, then they are showing their ignorance of the history of the concept of IQ. The concept was, in fact, formulated to show who was more “intelligent”—“superior”—and who was less “intelligent”—“inferior.” But here is the thing: the terms “superior” and “inferior” are anatomic terms, which shows the folly of the attempted appropriation of the terminology.

Superiority and inferiority

If one wants to find early IQ-ists talking about superiority and inferiority regarding IQ, they would only need to check out Lewis Terman’s very first Stanford-Binet tests. His scales—now in their fifth edition—state that IQs between 120 and 129 are “superior” while 130-144 is “gifted or very advanced” and 145-160 is “very gifted” or “highly advanced.” How strange… But the IQ-ist can say that they were just products of their time and that no serious researcher believes such foolish things, that one is “superior” to another on the basis of an IQ score. What about proximal IQs? Lateral IQs? Posterior IQs? Distal IQs? It’s ridiculous to take anatomic terminology (terms for physical things) and attempt to use it to describe mental “things.”

But Arthur Jensen, perhaps the most famous hereditarian, as I have noted, wrongly stated that heritability estimates can be used to estimate one’s “genetic standing” (Jensen, 1970) and that if we continue our current welfare policies then we are in danger of creating a “genetic underclass” (Jensen, 1969). This, like the creation of the concept of IQ in the early 1900s, speaks to the hereditarian agenda and the reason for the IQ enterprise as a whole. (See Taylor, 1980 for a wonderful discussion of Jensen’s confusion on the concept of heritability.)

This is no surprise when you understand that IQ tests were created to rank people on a mental hierarchy that reflected the social hierarchy of the time, a ranking which would then be used as justification for people’s spots on that social hierarchy (Mensh and Mensh, 1991). So it is no surprise that anatomic terminology was hijacked in an attempt at forwarding eugenic ideas. But the eugenicists’ concept of superiority didn’t always pan out the way they wanted it to, as is evidenced a few decades before standardized testing was conceived.

Galton attempted to show that those with the fastest reaction times were more intelligent, but when he found out that the common man had just as quick a reaction time, he abandoned this test. Then Cattell came along and showed that no relationship existed between sensory perception and IQ scores. Finally, Binet showed that measures of the skull did not correspond with teachers’ assessments of who is or is not “intelligent.” Then, years later, Binet and Simon finally constructed a test that discriminated between who they felt was or was not intelligent—which discriminated by social class. This test was finally the “measure” that would differentiate between social classes since it was based on a priori notions of an individual’s place in the social hierarchy (Garrison, 2009: 75). Binet and Simon’s “ideal city” would use test scores as a basis to shuttle people into the occupations they “should be” in on the basis of their IQ scores, which would show how they would work based on their “aptitudes” (Mensh and Mensh, 1991: 24; Garrison, 2009: 79). Bazemore-James, Shinaprayoon, and Martin (2017) write that:

The difference in racial subgroup mean scores mimics the intended outcomes of the original standardized IQ tests, with exception to Asian Americans. Such tests were invented in the 1910s to demonstrate the superiority of rich, U.S.-born, White men of northern European descent over non-Whites and recent immigrants (Gersh, 1987). By developing an exclusion-inclusion criteria that favored the aforementioned groups, test developers created a norm “intelligent” (Gersh, 1987, p. 166) population “to differentiate subjects of known superiority from subjects of known inferiority” (Terman, 1922, p. 656).

So, as one can see, this “superiority” was baked into IQ tests from the very start, and the value-judgments, then, are not in the minds of IQ critics but are inherent in the scores themselves, as stated by the pioneers of IQ testing in America and the originators of the concept that would become IQ. Garrison (2009: 79) writes:

With this understanding it is possible to make sense of Binet’s thinking on intelligence tests as group differentiation. That is, the goal was to group children as intelligent and unintelligent, and to grade (value) the various levels of the unintelligent (also see Wolf 1973, 152–154). From the point of view of this goal, it mattered little whether such differences were primarily biological or environmental in origin. The genius of the theory rests in how it postulates one group as “naturally” superior to the other without the assumptions of biology, for reason had already been established as a natural basis for distinction, irrespective of the origin of differences in reasoning ability.

While Binet and Simon were agnostic on the nature-nurture debate, the test items they most liked were those that differentiated between social classes the most (which means the items were consciously chosen for those goals). But reading about their “ideal city”, we can see that those who have higher test scores are “superior” to those who do not. They were operating under the assumption that they would be organizing society along class lines, with the tests being measures of group mental ability. For Binet, it did not matter whether the “intelligence he sought to define” was inherited or acquired; he and Simon simply assumed that it was a property of groups. So, in effect, “Binet and Simon developed a standard whereby the value of people’s thinking could be judged in a standard way, in a way that corresponded with the exigencies of social reproduction at that time” (Garrison, 2009: 94). The only thing such tests do is reproduce the differences they claim to measure—making it circular (Au, 2009).
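
The circularity is easy to demonstrate. Below is a minimal simulation (in Python, with invented data and effect sizes, not any real test’s items) of the item-selection procedure described above: two groups with identical ability distributions, an item pool in which some items carry a small group-linked bias, and a “test constructor” who keeps the items that best separate the groups. The assembled test then shows a group gap by construction.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_items = 1000, 200

# Two groups with the SAME underlying ability distribution; half of the
# item pool carries a small group-linked bias (e.g., vocabulary more
# familiar to one group). All numbers are invented for illustration.
group = rng.integers(0, 2, n_people)          # 0 = group A, 1 = group B
ability = rng.normal(0, 1, n_people)
bias = np.concatenate([np.full(n_items // 2, 0.3), np.zeros(n_items // 2)])
logits = ability[:, None] + bias[None, :] * group[:, None]
p_correct = 1 / (1 + np.exp(-logits))
responses = (rng.random((n_people, n_items)) < p_correct).astype(float)

# "Test construction": keep the 50 items that separate the groups best,
# mimicking the selection of items that differentiated social classes.
item_gap = responses[group == 1].mean(axis=0) - responses[group == 0].mean(axis=0)
kept = np.argsort(item_gap)[-50:]

score = responses[:, kept].sum(axis=1)
print("gap on assembled test (B - A):",
      score[group == 1].mean() - score[group == 0].mean())
```

Since the groups were built with identical ability distributions, any gap the final test shows is an artifact of item selection, which is just the point made above about tests reproducing the differences they claim to measure.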

But the whole reason Binet and Simon developed their test was to rank people from “best” to “worst”, “good” to “bad.” This does not mean, however, that there is some “thing” inherent in individuals or groups that is being “measured” (Nash, 1990). Thus, since their inception, IQ tests (and by proxy all standardized tests) have had pronouncements of such ranking built in, even if it is not explicitly stated today. Such “measures” are not scientific, and psychometrics is then shown for what it really is: “best understood as the development of tools for vertical classification and the production of social value” (Garrison, 2009: 5).

The goal, then, of psychometry is clear. Garrison (2009: 12) writes:

Ranking human worth on the basis of how well one competes in academic contests, with the effect that high ranks are associated with privilege, status, and power, suggests that psychometry is premised, not on knowledge of intellectual or emotional development, but on Anglo-American political ideals of rule by the best (most virtuous) and the brightest (most talented), a “natural aristocracy” in Jeffersonian parlance.

But such notions of superiority and inferiority, as I stated back in 2018, are nonsense when taken out of their anatomic context:

It should be noted that the terms “superior” and “inferior” are nonsensical, when used outside of their anatomic contexts.

An IQ-ist may exclaim “Are you saying that you can’t say that person A has superior sprinting ability or breath-holding ability!? Are you denying that people are different?!” No, what I’m saying is that it is absurd to take anatomic terminology (terms for physical measures) and attempt to liken it to IQ—because nothing physical is being measured, not least because the mental is neither physical nor reducible to it.

They were presuming to measure one’s “intelligence” and then stating that one has “superior” “intelligence” to another—and that IQ tests were measuring this “superiority”. However, psychometrics is not a form of measurement—rankings are not measures.

With standardized testing, knowledge becomes reducible to a score, so students, and in effect their learning and knowledge, are reduced to their scores on these tests. And so, “such inequalities [with the SAT, which holds for all standardized testing] are structured into the very foundations of standardized test construction itself” (Au, 2009: 64). So what is built into a test can also be built out of it (Richardson, 1990, 2000; Hilliard, 2012).

Measurement

In first constructing its scales and only then proceeding to induce what they ‘measure’ from correlational studies, psychometry has got into the habit of trying to do what cannot be done and doing it the wrong way round anyway. (Nash, 1990: 133)

…psychometry fails to meet its claim of measurement and … its object is not the measurement of nonphysical human attributes, but the marking of some human beings as having more worth or value than other human beings … Psychometry’s claim to measurement serves to veil and justify the fundamentally political act of marking social value, and the role this practice plays in legitimating vast social inequalities. (Garrison, 2009: 30-31)

One of the best examples of a valid measure is temperature—and it has a long history (Chang, 2007). It is valid because there is a well-accepted theory of temperature, of what is hot and what is cold. Temperature is a physical property that quantitatively expresses heat and cold. Thermometers were invented to quantify temperature, whereas “IQ” tests were invented to quantify “intelligence.” Those like Jensen attempt to make the analogy between temperature and IQ, thermometers and IQ tests. Thermometers measure temperature with a high degree of reliability, and so too, Jensen claims, do IQ tests. But this presumes that there is something physiological in the brain that is “increasing”, when there is no such evidence. So for this and many more reasons, the attempted comparison of temperature to intelligence, of thermometers to intelligence tests, fails.

So, IQ-ists claim, temperature is measured by thermometers, by definition; therefore intelligence is what IQ tests measure, by definition. But there is a problem with claims such as this. Temperature was verified independently of the measuring device originally used to measure it. Fixed points were first established, and then numerical thermometers could be constructed, giving us a procedure to assign numbers to degrees of heat between and beyond the fixed points. The thermoscope was used for the establishment of fixed points. The thermoscope itself has no fixed points, so we do not have to circularly rely on the concept of fixed points for reference. And if the thermoscope’s reading for, say, blood goes up and down, we can rightly infer that the temperature of blood is not stable. But what validates the thermoscope? Human sensation. When we put our hand into water that is scalding hot, we can put the thermoscope into the same water and note that it rises rapidly. So the thermoscope’s agreement with our basic sensations of ‘hot’ and ‘cold’ justifies, in a non-circular way, the claim that temperature is truly being measured. We are trusting the physical sensation we get from whichever surface we are touching, and from this we can infer that thermoscopes do indeed validate thermometers, making the concept of temperature validated in a non-circular manner and a true measure of hot and cold. (See Chang, 2007 for a full discussion of the measurement of temperature.)

Thermometers could be tested by the criterion of comparability, whereas IQ tests, on the other hand, are “validated” circularly against tests of educational achievement, other IQ tests (which were not themselves validated), and job performance (Howe, 1997; Richardson and Norgate, 2015; Richardson, 2017), which makes the “validation” circular since IQ tests and achievement tests are different versions of the same test (Schwartz, 1975).

For example, take intro chemistry. When one takes the intro course, one sees how things are measured. Chemists may be measuring in moles, grams, the physical state of a substance, and so on. We may measure water displacement, reactions between different chemicals, or whatnot. And although chemistry does not reduce to physics, these are all actual physical measures.

But the same cannot be said for IQ (Nash, 1990). We can rightly say that one person scores higher than another on an IQ test, but that does not signify that some “thing” is being measured, and this is because, to use the temperature example again, there is no independent validation of the “construct.” IQ is a (latent) construct, whereas temperature is a quantitative measure of hot and cold. Temperature really exists, but the same cannot be said about IQ or “intelligence.” The concept of “intelligence” does not refer to something like weight or temperature (Midgley, 2018).

Physical properties are observables. We observe the mercury in a thermometer change based on the temperature inside a building or outside. One may say that we observe “intelligence” daily, but that is NOT a “measure”; it’s just a descriptive claim. Blood pressure is another physical measure. It refers to the pressure in the large arteries of the circulatory system, which is due to the heart pumping blood. An IQ-ist may say that intelligence is the emergent product of thinking, that this is due to the brain, and that correlations between life outcomes, IQ tests, and educational achievements then validate the measure. But, as noted above, this is circular. The two examples given—blood pressure and temperature—are real things that are physically measurable, unlike IQ (a latent construct).

It should also be noted that Eysenck claimed that if the measurement of temperature is scientific, then so is the measurement of intelligence. But thermometers are not analogous to standardized scales, and the claim fails, as Nash (1990: 131) notes:

In order to measure temperature three requirements are necessary: (i) a scale, (ii) some thermometric property of an object and, (iii) fixed points of reference. Zero temperature is defined theoretically and successive interval points are fixed by the physical properties of material objects. As Byerly (p. 379) notes, that ‘the length of a column of mercury is a thermometric property presupposes a lawful relationship between the order of length and the temperature order under certain conditions.’ It is precisely this lawful relationship which does not exist between the normative IQ scale and any property of intelligence.

This is where IQ-ists go the most wrong: they emphatically state that their tests are measuring SOMETHING, something important for life success, since test scores correlate with life outcomes. But there is no precise specification of the measured object, no object of measurement, and no measurement unit, which “means that the necessary conditions for metrication do not exist [for IQ]” (Nash, 1990: 145).

Since IQ tests have a scoring system, the general impression is that IQ tests measure intelligence just as thermometers measure temperature—but this is a nonsense claim. IQ is an artifact of the test’s norming population. The points do not reflect any inherent property of individuals; they reflect one’s relation to the society one is in (since all standardized tests are proxies for social class).
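
A sketch of how a deviation IQ is computed makes this concrete (the raw score and norming samples below are made up): the score is just a person’s standing relative to a norming sample, rescaled to a mean of 100 and an SD of 15, so re-norming on a different sample changes the “IQ” without the person’s performance changing at all.

```python
import numpy as np

def deviation_iq(raw_score, norm_sample):
    """A deviation IQ is a relative standing: the raw score's z-score
    against the norming sample, rescaled to mean 100, SD 15."""
    z = (raw_score - norm_sample.mean()) / norm_sample.std()
    return 100 + 15 * z

rng = np.random.default_rng(1)
raw = 42.0                          # one person's hypothetical raw score
norms_a = rng.normal(40, 8, 2000)   # hypothetical norming sample A
norms_b = rng.normal(46, 8, 2000)   # hypothetical norming sample B

print(deviation_iq(raw, norms_a))   # "above average" relative to sample A
print(deviation_iq(raw, norms_b))   # "below average" relative to sample B
```

Same person, same raw performance, two different “IQs”: the number is a relation to the norming population, not a quantity of anything in the head.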

Conclusion

One only needs to read into the history of IQ testing—and standardized testing as a whole—to see how and why these tests were first devised. From their beginnings with Binet and then over to Terman, Yerkes, and Goddard, the goal has been clear—enact eugenic policies on those deemed “unintelligent” by IQ tests, who just so happen to correspond with the lower classes in virtue of how the tests were constructed, which goes back originally to Binet and Simon. The history of the concept makes it clear that it is not based on any kind of measurement theory, as blood pressure and temperature are. It is based on a priori notions of the structure and distribution of “intelligence”, which then reproduce the social structure and “justify” notions of superiority and inferiority on the basis of “intelligence tests” (Mensh and Mensh, 1991; Au, 2009; Garrison, 2009).

The attempts to hijack anatomic terminology, as I have shown, are nonsense, since one does not use anatomic terminology to talk about other kinds of non-physical things; and the first IQ-ists’ intentions were explicit in what they were attempting to “show”, which still holds for all standardized testing today.

Binet, Terman, Yerkes, Goddard and others all had their own priors, which led them to construct tests in a way that would lead to their desired conclusions. No “property” is being “measured” by these tests, nor can they be used to show one’s “genetic standing” (Jensen, 1970), which implies that one is “genetically superior” (this can be justified by reading Jensen’s interview with American Renaissance and his comments on the “genetic enslavement” of a group if we continued our welfare policies).

Physiological measures, such as blood pressure, and measures of hot and cold, such as temperature, are valid measures and in no way, shape, or form—contra Jensen—like the concept of IQ/“intelligence”, which Jensen conflates with them (Edwards, 1973). Intelligence (which is extra-physical) cannot be measured (see Berka, 1983, and Nash, 1990: chapter 8, for a discussion of Berka’s measurement objection).

For these reasons, we should not claim that IQ tests ‘measure’ “intelligence”, nor that they measure one’s “genetic standing” or how “superior” one is to another; and we should recognize that psychometrics is nothing more than a political enterprise.


Delaying Gratification and Social Trust

1900 words

Tests of delayed gratification, such as the Marshmallow Experiment, show that those who can better delay their gratification have better life outcomes than those who cannot. The children who succumbed to eating the treat while the researcher was out of the room had worse life outcomes than the children who could wait. This was chalked up to cognitive processes by the originator of the test, and individual differences in these cognitive processes were used to explain individual differences between children in the task. However, it doesn’t seem to be that simple. I did write an article back in December of 2015 on the Marshmallow Experiment and how it was a powerful predictor, but after extensive reading into the subject, my mind has changed. New research shows that social trust has a causal effect on whether one will wait for the reward: if the individual trusted the researcher, he or she was more likely to wait for the second reward than if they did not trust the researcher, in which case they were more likely to take what was offered in the first place.

The famous Marshmallow Experiment showed that children who could wait, with a marshmallow or other treat in front of them, while the researcher was out of the room would get an extra treat. The children who could not wait and ate the treat while the researcher was out of the room had worse life outcomes than the children who could wait for the other treat. This led researchers to the conclusion that the ability to delay gratification depended on ‘hot’ and ‘cold’ cognitive processes. According to Walter Mischel, the originator of the study method, the ‘cool’ system is the thinking one, the cognitive system, which reminds you that you get a reward if you wait, while the ‘hot’ system is the impulsive system, the one that makes you want the treat now rather than wait for the other treat (Metcalfe and Mischel, 1999).

Some of these participants were followed up on decades later, and those who could better delay their gratification had lower BMIs (Schlam et al, 2014), scored better on the SAT (Shoda, Mischel, and Peake, 1990) and on other tests of educational attainment (Ayduk et al, 2000), and had other positive life outcomes. So it seemed that placing a single treat—whether a marshmallow or another sweet—in front of a child would predict that child’s success, BMI, educational attainment, and future prospects in life, and that underlying cognitive processes differ between individuals and lead to the differences between them. But it’s not that simple.

After Mischel’s studies in the 50s, 60s and 70s on delayed gratification and positive and negative life outcomes (e.g., Mischel, 1958; Mischel, 1961; Mischel, Ebbesen, and Zeiss, 1972), it was pretty much an accepted fact that delaying gratification was somehow related to these positive life outcomes, while the negative life outcomes were partly a result of a lack of ability to delay gratification. Then a study was conducted showing that the ability to delay gratification depends on social trust (Michaelson et al, 2013).

Using Amazon’s Mechanical Turk, participants (n = 78; 34 male, 39 female, and 5 who preferred not to state their gender) completed online surveys and read three vignettes in order—trustworthy, untrustworthy, and neutral—rating on a scale of 1-7 how likeable and trustworthy each character was and how likely they would be to share. Michaelson et al (2013) write:

Next, participants completed intertemporal choice questions (as in Kirby and Maraković, 1996), which varied in immediate reward values ($15–83), delayed reward values ($30–85), and length of delays (10–75 days). Each question was modified to mention an individual from one of the vignettes [e.g., “If (trustworthy individual) offered you $40 now or $65 in 70 days, which would you choose?”]. Participants completed 63 questions in total, with 21 different questions that occurred once with each vignette, interleaved in a single fixed but random order for all participants. The 21 choices were classified into 7 ranks (using the classification system from Kirby and Maraković, 1996), where higher ranks should yield higher likelihood of delaying, allowing a rough estimation of a subject’s willingness to delay using a small number of trials. Rewards were hypothetical, given that hypothetical and real rewards elicit equivalent behaviors (Madden et al., 2003) and brain activity (Bickel et al., 2009), and were preceded by instructions asking participants to consider each choice as if they would actually receive the option selected. Participants took as much time as they needed to complete the procedures.
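
Choice sets like these are typically scored with a hyperbolic discounting model, the model behind Kirby and Maraković’s classification: a delayed reward of amount A at delay D days is worth A/(1 + kD) to a person with discount rate k, and a higher k means steeper discounting. A minimal sketch of one of the vignette items follows; the k values are assumptions chosen for illustration.

```python
def present_value(amount, delay_days, k):
    """Hyperbolic discounting: the subjective value of a delayed
    reward shrinks with delay at individual discount rate k."""
    return amount / (1 + k * delay_days)

def choose(immediate, delayed, delay_days, k):
    """Pick whichever option has the higher subjective value."""
    return "delayed" if present_value(delayed, delay_days, k) > immediate else "immediate"

# One item from the vignette study: $40 now vs. $65 in 70 days.
print(choose(40, 65, 70, k=0.005))  # patient chooser -> "delayed"
print(choose(40, 65, 70, k=0.05))   # steep discounter -> "immediate"
```

The trust manipulation, on this framing, amounts to shifting the effective k: if you doubt the promised $65 will ever arrive, discounting it steeply is rational, not impulsive.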

Manipulating trust in the absence of a reward influenced the subjects’ ability to delay gratification, as did how trustworthy the individual in the vignette was perceived to be. This suggests that, in the absence of rewards, when social trust is reduced, the ability to delay gratification is lessened. Because the fixed order in which the vignettes were read may have confounded the trust manipulation, they did a second experiment using the same model with 172 participants (65 males, 63 females, and 13 who chose not to state their gender). In this experiment, though, computer-generated trustworthy, untrustworthy, and neutral faces were presented to the participants. They were only paid $0.25, though it has been shown that compensation affects only turnout, not data quality (Buhrmester, Kwang, and Gosling, 2011).

In this experiment, each participant read a vignette with a particular face attached to it (trustworthy, untrustworthy, or neutral), faces which had been used in previous studies on this matter. They found that when trust was manipulated in the absence of a reward, this influenced the participants’ willingness to delay gratification, as did the perceived trustworthiness of the face.

Michaelson et al (2013) conclude that the ability to delay gratification is predicated on social trust, and present an alternative hypothesis for all of these positive and negative life outcomes:

Social factors suggest intriguing alternative interpretations of prior findings on delay of gratification, and suggest new directions for intervention. For example, the struggles of certain populations, such as addicts, criminals, and youth, might reflect their reduced ability to trust that rewards will be delivered as promised. Such variations in trust might reflect experience (e.g., children have little control over whether parents will provide a promised toy) and predisposition (e.g., with genetic variations predicting trust; Krueger et al., 2012). Children show little change in their ability to delay gratification across the 2–5 years age range (Beck et al., 2011), despite dramatic improvements in self-control, indicating that other factors must be at work. The fact that delay of gratification at 4-years predicts successful outcomes years or decades later (Casey et al., 2011; Shoda et al., 1990) might reflect the importance of delaying gratification in other processes, or the importance of individual differences in trust from an early age (e.g., Kidd et al., 2012).

Another paper (small n, n = 28) showed that children’s perception of the researchers’ reliability predicted delay of gratification (Kidd, Palmeri, and Aslin, 2012). They suggest that “children’s wait-times reflected reasoned beliefs about whether waiting would ultimately pay off.” So these tasks “may not only reflect differences in self-control abilities, but also beliefs about the stability of the world.” Children who had reliable interactions with the researcher waited about 4 times as long—12 minutes compared to 3 minutes—as those who did not think the researcher was trustworthy. Sean Last over at the Alternative Hypothesis uses these types of tasks (and other correlates) to argue that blacks have lower self-control than whites, citing studies showing correlations between IQ and delay of gratification. Though, as can be seen, alternative explanations for these phenomena make just as much sense, and the new experimental evidence on social trust and delaying gratification adds a new wrinkle to this debate. (He also briefly discusses ‘reasons’ why blacks have lower self-control, implicating the MAOA alleles. However, I have already discussed this, and blaming ‘genes for’ violence/self-control doesn’t make sense.)

Michaelson and Munakata (2016) show more evidence for the relationship between social trust and delaying gratification. When children (age 4 years, 5 months; n = 34) observed an adult to be trustworthy, they were able to wait for the reward; when they observed the adult to be untrustworthy, they ate the treat, reasoning that they were unlikely to get the second marshmallow if they waited for an adult they did not trust to return. Ma et al (2018) also replicated these findings in a sample of 150 Chinese children aged 3 to 5 years old. They conclude that “there is more to delay of gratification than cognitive capacity, and they suggest that there are individual differences in whether children consider sacrificing for a future outcome to be worth the risk.” Those who had higher levels of generalized trust waited longer, even when age and level of executive functioning were controlled for.

Romer et al (2010) show that people who are more willing to take risks may be more likely to engage in risky behavior that gives them insight into why delaying gratification and having patience leads to longer-term rewards. This is a case of social learning. People who are more willing to take risks also had higher IQs than people who were not. Though SES was not controlled for, it is possible that the ability to delay gratification in this study came down to SES, with lower-class people taking the money while higher-class people deferred. Raine et al (2002) showed a relationship between sensation seeking in 3-year-old children from Mauritius and their ‘cognitive scores’ at age 11. As usual, parental occupation was used as the measure of ‘social class’, and since SES does not capture all aspects of social class, controlling for the variable does not seem too useful. A confound here could be that children from higher classes have more chances to sensation-seek, which may cause higher IQ scores through cognitive enrichment. Either way, you can’t say that IQ ‘causes’ delayed gratification, since there are more robust predictors, such as social trust.

Though the relationship is there, what to make of it? Since exploring more leads to, theoretically, more chances to get things wrong and take risks by being impulsive, those who are more open to experience will have had more chances to learn from their impulsivity, and so learn to delay gratification through social learning and being more open. ‘IQ’ correlating with it, in my opinion, doesn’t matter too much; it just shows that there is a social learning component to delaying gratification.

In conclusion, there are alternative ways to look at the results from Marshmallow Experiments, such as social trust and social learning (being impulsive and seeing what occurs when an impulsive act is carried out may lead one to learn, in the future, to wait for something). Though these experiments are new and the research is young, it is very promising that there are other explanations for delayed gratification that don’t have to do with differences in ‘cognitive ability’ but depend on social trust—trust between the child and the researcher. If the child sees the researcher as trustworthy, then the child will wait for the reward, whereas if they see the researcher as untrustworthy, they will take the marshmallow or whatnot, since they believe the researcher won’t stick to their word. (I am also currently reading Mischel’s 2014 book The Marshmallow Test: Mastering Self-Control and will have more thoughts on this in the future.)

r/K Selection Theory: A Response to Truth-Justice

1700 words

After publishing the article debunking r/K selection theory last week, I decided to provide the article to a few sites that talk about r/K selection theory and its (supposed) application to humans and psychometric qualities. I posted it on a site called ‘truthjustice.net’, and the owner of the site responded to me:

Phillippe Rushton is not cited a single time in AC’s book. In no way, shape or form does the Theory depend on his opinions.

AC outlines a very coherent theoretical explanation for the differing psychological behavior patterns existing on a bell curve distribution in our population. Especially when it comes to the functioning of the Amygdala for which we have quite a lot of data by now.

Leftists are indeed in favor of early childhood sexualization to increase the quantity of offspring which will inevitably reduce the quality and competitive edge of children. They rank significantly lower on the moral foundations of “loyalty”, “authority” and “purity” as outlined by Jonathan Haidt’s research into moral psychology. Making them more accepting of all sorts of degeneracy, deviancy, and disloyalty to the ingroup.

http://people.stern.nyu.edu/jhaidt/

They desire a redestribution of resources to the less well performing part of our population to reduce competitive stress and advantage while giving far less to charity and being significantly more narcissistic to increase their own reproductive advantage.

https://anepigone.blogspot.com/2008/11/more-income-more-votes-republicans_13.html

Their general mindset becomes more and more nihilistic, atheistic, anarchistic, anti-authority and overall r-selected the further left you go on the bell curve. A denial of these biological realities in our modern age is ridiculous when we can easily measure their psychology and brain functionality in all sorts of ways by now.

Does that now mean that AC is completely right in his opinions on r/K-Selection Theory? No, much more research is necessary to understand the psychological differences between leftists and rightists in full detail.

But the general framework outlined by r/K-Selection Theory very likely applies to the bell curve distribution in psychological behavior patterns we see in our population.

I did respond; however, he removed my comment and banned me after I published my response. My response is here:

“Phillippe Rushton is not cited a single time in AC’s book. In no way, shape or form does the Theory depend on his opinions.”

Meaningless. He uses the r/K continuum so the link in my previous comment is apt.

“AC outlines a very coherent theoretical explanation for the differing psychological behavior patterns existing on a bell curve distribution in our population. Especially when it comes to the functioning of the Amygdala for which we have quite a lot of data by now.”

No, he doesn’t.

1) Psychological traits are not normally distributed,

2) even if r/K were a valid paradigm, it would not pertain to within species variation,

3) it’s just a ‘put these traits on one end that I don’t like and these traits at the other end that I like and that’s my team while the other team has all of the bad traits’ thing,

4) his theory literally rests on the r/K continuum proposed by Pianka. Furthermore, no experimental rationale “was ever given for the assignment of these traits [the r/K traits Pianka inserted into his continuum] to either category” (Graves, 2002: 135), and

5) the r/K paradigm was discredited in the late 70s (see Graves 2002 above for a review)

“Leftists are indeed in favor of early childhood sexualization to increase the quantity of offspring which will inevitably reduce the quality and competitive edge of children. They rank significantly lower on the moral foundations of “loyalty”, “authority” and “purity” as outlined by Jonathan Haidt’s research into moral psychology. Making them more accepting of all sorts of degeneracy, deviancy, and disloyalty to the ingroup.”

I love Haidt. I’ve read his book and all of his papers and articles. So you notice a few things. Then see the (discredited) r/K paradigm. Then you say “oh! liberals are bad and are on the r side while conservatives are K!!”

Let me ask you this: where does alpha-selection fall into this?

“They desire a redestribution of resources to the less well performing part of our population to reduce competitive stress and advantage while giving far less to charity and being significantly more narcissistic to increase their own reproductive advantage.”

Oh.. about that… liberals have fewer children than conservatives. Liberals are also more intelligent than conservatives. So going by Rushton’s r/K model, liberals are K while conservatives are r (conservatives are less intelligent and have more children). So the two cornerstones of the (discredited) r/K continuum show conservatives breeding more and being less intelligent, while it’s the reverse for liberals. So who is ‘r’ and ‘K’ again?

“Their general mindset becomes more and more nihilistic, atheistic, anarchistic, anti-authority and overall r-selected the further left you go on the bell curve. A denial of these biological realities in our modern age is ridiculous when we can easily measure their psychology and brain functionality in all sorts of ways by now.”

‘r’ and ‘K’ are not adjectives (Anderson, 1991: 57).

Why does no one understand r/K selection theory? You are aware that r/K selection theory is density-dependent selection, correct?

“Does that now mean that AC is completely right in his opinions on r/K-Selection Theory? No, much more research is necessary to understand the psychological differences between leftists and rightists in full detail.”

No, he’s horribly wrong with his ‘theory’. I don’t deny psych differences between libs and cons, but to put them on some (discredited) continuum makes no sense in reality.

“But the general framework outlined by r/K-Selection Theory very likely applies to the bell curve distribution in psychological behavior patterns we see in our population.”

No, it doesn’t. Psych traits are not normally distributed (see above). Just like Rushton, AC saw that some things ‘fit’ into this (discredited) continuum. What’s that mean? Absolutely nothing. He doesn’t even cite papers for his assertion; he called Pianka a leftist and said that he tried to sabotage the theory because he thought that it described libs (huh? this makes no sense). AC is a clear ideologue and is steeped in his own political biases as well as wanting to sell more copies of his book. So he will not admit that he is wrong.

Let me ask you a question: where did liberals and conservatives evolve? What selective pressures brought about these psych traits in these two ‘populations’? Are liberals and conservatives local populations?

I’ve also summarily discredited AC and I am waiting on a reply from him (I will be surprised if he replies).


However, unfortunately for AC et al, concerns have been raised “about the use of psychometric indicators of lifestyle and personality as proxies for life history strategy when they have not been validated against objective measures derived from contemporary life history theory and when their status as causes, mediators, or correlates has not been investigated” (Copping, Campbell, and Muncer, 2014). This ends it right here. People don’t understand density-dependent/independent selection since Rushton never talked about it. That, as has been brought up, is a huge flaw in Rushton’s application of r/K theory to the races of Man.

Liberals are, on average, more intelligent than conservatives (Kanazawa, 2010; Kanazawa, 2014). Lower cognitive ability has been linked to greater prejudice through right-wing ideology and low intergroup contact (Hodson and Busseri, 2012), with social conservatives (probably) having lower IQs. There are also three ‘psychological continents’: Europe, Australia, and Canada are the liberal group, whereas Southeast Asia, South Asia, South America, and Africa contain the more conservative countries, with all other countries, including Russia, the US, and Asia, in the middle; “In addition, gross domestic product (GDP) per capita, cognitive test performance, and governance indicators were found to be low in the most conservative group and high in the most liberal group” (Stankov and Lee, 2016). Further, economic liberals—as a group—tend to be better educated than Republicans, so intelligence is positively correlated with socially and economically liberal views (Carl, 2014).

There is also a ‘conservative baby boom‘ in the US—which, to the Rushtonites, is ‘r-selected behavior’. Furthermore, women who reported that religion was ‘very important to them’ reported having higher fertility than women who said that it was ‘somewhat important’ or ‘not important’ (Hayford and Morgan, 2008). Liberals are more likely to be atheist (Kanazawa, 2010), while, of course, conservatives are more likely to be religious (Morrison, Duncan, and Parton, 2015; McAdams et al, 2015).

All in all, even if we were to allow the use of liberals and conservatives as local populations, like Rushton’s erroneous use of r/K theory for human races, the use of r/K theory to explain the conservative/liberal divide makes no sense. People don’t know anything about ecology, evolution, or neuroscience. People should really educate themselves on the matters they speak about—I mean a full-on reading into whatever it is you believe. Because people like TIJ and AC are clearly ideologues, pushing a discredited ecological theory and applying it to liberals and conservatives, when the theory was never used that way in the first place.

For anyone who would like a look into the psychological differences between liberals and conservatives, Jonathan Haidt has an outstanding book outlining the differences between the two ideologies called The Righteous Mind: Why Good People are Divided by Politics and Religion. I actually just gave it a second read and I highly, highly recommend it. If you want to understand the true differences between the two ideologies then read that book. Try to always remember and look out for your own biases when it comes to your political beliefs and any other matter.

For instance, if you see yourself frantically attempting to gather support for a contention in a debate, that’s the backfire effect in action (Nyhan and Reifler, 2012), and if you have knowledge of the cognitive bias, you can better take steps to avoid it. This, obviously, occurred with TIJ. The response above is airtight. If this ‘continuum’ did exist, then it is completely reversed, with liberals having fewer children and generally being more intelligent, and the reverse for conservatives. So liberals would be K and conservatives would be r (following Rushton’s interpretation of the theory, which is where the use of the continuum comes from).

Testosterone and Aggressive Behavior

1200 words

Testosterone gets a bad rep. People assume that if one has higher testosterone than average, one will be a savage, bloodthirsty beast with an insatiable thirst for blood. This, however, is not the case. I’ve documented how testosterone is vital for male functioning, and how higher levels don’t lead to maladies such as prostate cancer. Testosterone is feared for no reason at all. The reason people are scared of it is the anecdotal reports that individual A had higher testosterone when he committed crime B; therefore, anyone who commits a crime has higher testosterone, and that is the ultimate—not proximate—cause of crime. This is erroneous. There is a positive—albeit extremely low—correlation between testosterone and physical aggression and violence, at .14. That’s it. Furthermore, most of the claims that higher levels of testosterone cause violence are extrapolated from animal studies to humans.

Testosterone has been shown to lead to violent and aggressive behavior largely only in animal studies (Archer, 1991; Book et al, 2001). For years, the relationship between the two variables was thought to be causal, i.e., high levels of testosterone cause violent crimes, but this has been called into question in recent years. This is because the environment can raise testosterone levels. I have documented how these environmental factors can raise testosterone—and after such events, testosterone stays elevated.

Largely, animal studies are used to infer that high levels of testosterone in and of themselves lead to higher rates of aggression and therefore crime. However, two important meta-analyses show this is not necessarily the case (Archer, 1991; Book et al, 2001). Book et al (2001) showed that two variables were important moderators of the relationship between testosterone and aggression—the time of day that the assay was taken and the age of the participant. This effect was seen to be largest in, not unexpectedly, males aged 13-20 (Book et al, 2001: 594). So since age confounds the relationship between aggression and testosterone in males, that is a variable that must also be controlled for (which, in the meta-analyses and other papers I cite on black and white testosterone, is controlled for).

More interestingly, Book et al (2001) showed that the nature of the measure of aggression (self-reported or behavioral) did not have any effect on the relationship between testosterone and aggression. Since there is no difference between the two measures, a pencil-and-paper test is a good enough measure of aggression, comparable to observing the behavior of the individual studied.

Archer (1991) also showed the same low—but positive—correlation between aggression and testosterone. Of course, as I’ve extensively documented, the fact that there is a positive relationship between the two variables does not necessarily mean that high-testosterone men commit more crime—since the outcomes of certain situations can increase and decrease testosterone, no causal factors have been disentangled. Book et al (2001) confirmed Archer’s (1991) finding that the correlation between testosterone and violent and aggressive behavior was positive and low, at .14.
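
For a sense of scale: r = .14 means the two variables share about r² ≈ 2% of their variance. The short simulation below (made-up standardized data, not either meta-analysis’s) shows how weak that link is in practice: even men in the top decile of testosterone average an aggression z-score of only about 0.25.

```python
import numpy as np

r = 0.14
print("shared variance:", r ** 2)   # ~0.02, i.e., about 2%

# Simulate two standardized variables correlated at r = 0.14.
rng = np.random.default_rng(2)
n = 100_000
testosterone = rng.normal(size=n)
aggression = r * testosterone + np.sqrt(1 - r ** 2) * rng.normal(size=n)

print("empirical r:", np.corrcoef(testosterone, aggression)[0, 1])

# Mean aggression among the top testosterone decile: the top decile of a
# standard normal averages ~1.75 SD, so expected aggression is ~0.14 * 1.75.
top = testosterone > np.quantile(testosterone, 0.9)
print("mean aggression z, top testosterone decile:", aggression[top].mean())
```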

Valois et al (2017) showed there was a relationship between emotional self-efficacy (ESE) and aggressive and violent behaviors in a statewide sample of high school children in South Carolina (n = 3,386). Their results suggested that carrying a weapon to school within the past 30 days, along with being injured with a club, knife, or gun in the past 12 months, was significantly associated with ESE for specific race and sex groups.

Black girls with low ESE were 3.22 times more likely to report carrying a weapon to school in the 30 days prior to the survey than black girls with high ESE. Black boys with low ESE were 3.07 times more likely to carry a weapon to school within the past 30 days than black boys with high ESE. White girls with low ESE had the highest odds of bringing a weapon to school: they were 5.87 times more likely to have carried a weapon to school in the 30 days prior to the survey than white girls with high ESE. Finally, white boys with low ESE were slightly more than 2 times more likely than white boys with high ESE to have carried a weapon to school in the 30 days prior to the survey.
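
Figures like “3.22 times more likely” in this literature are usually odds ratios from logistic regression (an assumption about Valois et al’s analysis, not something stated above). As a reminder of what the statistic means, here is an odds ratio computed from a hypothetical 2x2 table; the counts are invented, not Valois et al’s (2017) data:

```python
def odds_ratio(exposed_yes, exposed_no, unexposed_yes, unexposed_no):
    """Odds ratio from a 2x2 table: odds of the outcome in the exposed
    group divided by odds of the outcome in the unexposed group."""
    return (exposed_yes / exposed_no) / (unexposed_yes / unexposed_no)

# Hypothetical counts: low-ESE girls vs. high-ESE girls,
# carried a weapon to school (yes/no).
print(odds_ratio(29, 90, 10, 100))  # ~3.22
```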

Low ESE in white and black girls is associated with carrying a weapon to school, whereas low ESE in white and black boys is associated with being threatened. The mediator between these outcomes is low ESE—it manifests differently in boys and girls, and when it occurs, different courses of action are taken, whether bringing a weapon to school or being threatened. What this tells me is that black and white boys with low ESE are more likely to be threatened because they are perceived to be more meek, while black and white girls with low ESE who get provoked at school are more likely to bring weapons. So it seems that girls bring weapons when provoked, and boys fight.

The two meta-analyses reviewed above show that there is a low positive (.14) correlation between testosterone and aggression (Archer, 1991; Book et al, 2001). Thus, high levels of testosterone on their own are not sufficient to explain high levels of aggression/violence. Further, there are race- and sex-specific differences in what happens when one is threatened in high school, with black and white boys being more likely to report being threatened (which implies a higher rate of physical fighting), while black and white girls, when threatened, brought weapons to school. These race- and sex-specific differences in the course of action taken when physically threatened need to be looked into further.

I’d like to see the difference in testosterone levels for a matched sample of black and white boys from two neighboring districts with different murder rates (as a proxy for the amount of violence in the area). I’d bet that the place with the higher murder rate would have children who 1) report more violence and more instances of bringing weapons to school; 2) report more harm from these encounters—especially if they have low ESE, as seen in Valois et al (2017); and 3) have higher testosterone, along with the residents of the area, than the place with less violence. I would expect these differences to be magnified in the direction of Valois et al (2017), in that areas with higher murder rates would have black and white girls report bringing weapons to school when threatened, whereas black and white boys would report more physical violence.

High testosterone by itself is not sufficient to explain violence, as the correlation is extremely low, at .14. Testosterone levels fluctuate depending on the time of day (Brambilla et al, 2009; Long, Nguyen, and Stevermer, 2015) and the time of year (Stanton, Mullette-Gillman, and Huettel, 2011; Demir, Uslu, and Arslan, 2016). How the genders/races react differently when threatened in adolescence is interesting and deserves further study.

Biases and Political Beliefs

2150 words

The study of political bias is very important. Once we find the source of what motivates political bias—which no doubt translates to other facets of life—individual action can be taken to minimize future bias. Two recent studies found that, contrary to other studies showing that conservatives are more biased than liberals, both groups are equally biased.

Everyone is biased—even physicians (Cain and Detsky, 2008). When beliefs we hold to be true are questioned, we do anything we can to shield ourselves from conflicting information. Numerous studies have looked into biases in politics, with some studies showing that conservatives are more likely than liberals to be biased toward their own views. However, recent research has shown that this is not true.

Frimer, Skitka, and Motyl (2017) showed that both groups had similar motives to shield themselves from contradictory information. Hearing opposite viewpoints—especially for staunch conservatives and liberals—clearly leads to them doing anything possible to, in their heads, defend their dearly held beliefs. In four studies (1: people would forgo the chance to win money so that they would not have to hear the opposite side’s opinions on the same-sex marriage debate; 2: thinking back to the 2012 election; 3: upcoming elections in the US and Canada and “a range of other Culture War Issues” (Frimer, Skitka, and Motyl, 2017); and 4: both groups reported similar aversions to hearing the opposite group’s beliefs), both groups reported that hearing the other side’s beliefs would induce cognitive dissonance (Frimer, Skitka, and Motyl, 2017). They meta-analyzed all of their studies and still found that both groups would “rather remain in their ideological bubbles”.

Ditto et al (2017) also had similar findings. They meta-analyzed 41 studies with over 12,000 participants, testing two hypotheses: 1) conservatives would be more biased than liberals and 2) there would be equal amounts of bias. They discovered that the correlation for partisan bias was “robust”, with a correlation of .254. They showed that “liberals (r = .248) and conservatives (r = .247) showed nearly identical levels of bias across studies” (Ditto et al, 2017).
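
For readers unfamiliar with how 41 study-level correlations get combined into a single number like .254: meta-analyses of correlations typically Fisher z-transform each study’s r, average weighted by sample size, and transform back. A minimal sketch with invented study results (not Ditto et al’s data):

```python
import numpy as np

def meta_mean_r(rs, ns):
    """Combine study correlations the standard way: Fisher r-to-z
    transform, weight by n - 3, average, then transform back to r."""
    rs, ns = np.asarray(rs, float), np.asarray(ns, float)
    z = np.arctanh(rs)                      # Fisher r-to-z
    z_bar = np.average(z, weights=ns - 3)   # inverse-variance weights
    return np.tanh(z_bar)                   # back to the r scale

# Hypothetical study-level results, for illustration only:
print(meta_mean_r([0.30, 0.22, 0.25, 0.21], [200, 350, 120, 500]))
```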

These two studies show what we know is true: it’s extremely hard/damn near impossible to change one’s view. Someone can be dead wrong, yet attempt to gather up whatever kind of data they possibly can to shield themselves from the truth.

This all comes down to one thing: the backfire effect. When we are presented with contradictory information, we immediately reject it. Everyone is affected by this bias. One study showed that corrections frequently failed to correct political misconceptions, and that these attempted corrections sometimes did the opposite: people increased their misconceptions of the group in question (Nyhan and Reifler, 2010). The thing is, people lack knowledge about political matters, which then affects their opinions. These studies show why it is next to impossible to change one’s view in regard to anything, especially political matters.

Jonathan Haidt, New York University’s Professor of Ethical Leadership and a social psychologist specializing in morality, also talks about partisan bias in his outstanding book on religion and politics, The Righteous Mind: Why Good People are Divided by Politics and Religion (Haidt, 2013), which I highly recommend. I’ve written about some of the thoughts in his book; his theory on the evolution of morality is very well argued. Moral reasoning is just a post-hoc search for reasons to justify the judgments that people have already made. When people are asked why they find certain scenarios morally wrong, they cannot give good reasons (Haidt, 2001). More specifically, people couldn’t say why it was morally wrong for siblings to have sex even when they were told that the siblings used birth control, both enjoyed the act, and suffered no emotional damage. This is direct evidence for Haidt’s ‘wag-the-dog’ illusion.

Haidt (2001: 13) writes:

If moral reasoning is generally a post-hoc construction intended to justify automatic moral intuitions, then our moral life is plagued by two illusions. The first illusion can be called the “wag-the-dog” illusion: we believe that our own moral judgment (the dog) is driven by our own moral reasoning (the tail). The second illusion can be called the “wag-the-other-dog’s-tail” illusion: in a moral argument, we expect the successful rebuttal of an opponent’s arguments to change the opponent’s mind. Such a belief is like thinking that forcing a dog’s tail to wag by moving it with your hand should make the dog happy.

Except the opponent’s mind is never changed. People always search for things to affirm their worldviews.

In his book, Haidt cites a study done on 14 liberals and conservatives who were put into an fMRI machine to scan their brains while they were shown 18 slides (Westen et al, 2006). The first set of slides showed George W. Bush praising Ken Lay, the CEO of Enron. After that, they were shown a slide in which the former President avoided mentioning Lay’s name. “At this point, Republicans were squirming” (Haidt, 2013: 101). Then they were finally shown a slide that said Bush “felt betrayed” by the CEO’s actions and was shocked to find out that he was corrupt. There was a similar set of slides showing contradictory statements from John Kerry. The researchers had engineered situations in which subjects became uncomfortable when shown their own candidate contradicting himself, while showing no signs of discomfort when their ideological opposite was caught being a hypocrite (Haidt, 2013: 101).

This study shows that emotional and intuitive processes are the causes of such extreme biases, with people only employing reasoning when it supports their own conclusions. Westen et al (2006) saw that when the individuals looked at the final slides, they had a sense of ‘escape’ and ‘release’. They cite further studies showing that this sense of escape and release is associated with the release of dopamine in the nucleus accumbens and dorsal striatum in other animals (Westen et al, 2006). So the subjects experienced a small hit of dopamine when they saw the final slide that showed everything was “OK”. If this is true, then it explains why we engage in these ‘addictive behaviors’—believing things with such conviction even when shown contradictory information.

Like rats that cannot stop pressing a button, partisans may be simply unable to stop believing weird things. The partisan brain has been reinforced so many times for performing mental contortions that free it from unwanted beliefs. Extreme partisanship may be literally addictive. (Haidt, 2013: 103)

Haidt has also been covering the recent University protests that have been occurring around the country. About fifty years ago, a judge predicted the political turmoil we see in Universities today, writing:

No one can be expected to accept an inferior status willingly. The black students, unable to compete on even terms in the study of law, inevitably will seek other means to achieve recognition and self-expression. This is likely to take two forms. First, agitation to change the environment from one in which they are unable to compete to one in which they can. Demands will be made for elimination of competition, reduction in standards of performance, adoption of courses of study which do not require intensive legal analysis, and recognition for academic credit of sociological activities which have only an indirect relationship to legal training. Second, it seems probable that this group will seek personal satisfaction and public recognition by aggressive conduct, which, although ostensibly directed at external injustices and problems, will in fact be primarily motivated by the psychological needs of the members of the group to overcome feelings of inferiority caused by lack of success in their studies. Since the common denominator of the group of students with lower qualifications is one of race this aggressive expression will undoubtedly take the form of racial demands–the employment of faculty on the basis of race, a marking system based on race, the establishment of a black curriculum and a black law journal, an increase in black financial aid, and a rule against expulsion of black students who fail to satisfy minimum academic standards.

This seems to have come true today, seeing as political diversity has decreased in psychology, for instance, over the past fifty years (Duarte et al, 2015). In America, they found that 58-66 percent of social science professors identified as liberals, whereas only 5-8 percent identified as conservatives. Self-identified Democrats also outnumbered Republicans by almost 8 to 1. Other researchers found that 52 to 77 percent of humanities professors were liberal with only 4-8 percent identifying as conservative, for a ratio of about 5 to 1 favoring liberals. Finally, 84 percent of psychologists identified as liberal, with only 8 percent identifying as conservative, a 10.5 to 1 ratio (Duarte et al, 2015). However, this skew has only existed for about fifty years. When our institutions show this heavy a skew in political beliefs, self-affirming, self-fulfilling prophecies will affect the quality of what is taught to students, which will have a negative effect on the education they receive.
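To make the ratio arithmetic explicit, here is a quick sketch using the percentages quoted above (note that the ranges do not reproduce the quoted “about 5 to 1” humanities figure exactly, so that ratio was presumably computed differently in the source):

```python
# Quick check of the liberal-to-conservative ratios implied by the
# percentages quoted above from Duarte et al (2015).

def ratio_range(lib_range, con_range):
    """Lowest and highest ratios consistent with the quoted ranges."""
    lo = lib_range[0] / con_range[1]  # fewest liberals / most conservatives
    hi = lib_range[1] / con_range[0]  # most liberals / fewest conservatives
    return lo, hi

social_science = ratio_range((58, 66), (5, 8))
humanities = ratio_range((52, 77), (4, 8))
psychology_ratio = 84 / 8

print(f"social science: {social_science[0]:.1f} to {social_science[1]:.1f} : 1")
print(f"humanities:     {humanities[0]:.1f} to {humanities[1]:.1f} : 1")
print(f"psychology:     {psychology_ratio:.1f} : 1")  # 10.5 to 1, as quoted
```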

Finally, when talking about political biases, one cannot go without mentioning Stephen Jay Gould. Although I’ve come to love his work on evolutionary theory, he was horribly wrong on human differences and let his motivations, biases, and political views cloud his judgement, driving him to be grossly dishonest in his attacks, first published in 1978, on Samuel Morton, a man long dead who could no longer defend himself. This culminated in his widely acclaimed (and, as far as I can tell, still assigned to college students) book The Mismeasure of Man (Gould, 1981). In the book, he accused Morton of bias in the measurements of his skull collection. However, in 2011, an anthropology team led by Jason Lewis remeasured Morton’s skulls and found that Morton was not biased and his measurements were correct (Lewis et al, 2011). Gould ended up displaying the very bias he accused Morton of and, ironically, became the case study in avoiding bias in scholarship and science, not Morton.

However, as is usually the case, long debates such as this are not so easily settled. Philosopher Michael Weisberg (2014) argued that Gould’s arguments against Morton were sound and that “Although Gould made some errors and overstated his case in a number of places, he provided prima facie evidence, as yet unrefuted, that Morton did indeed mismeasure his skulls in ways that conformed to 19th century racial biases.” Further, Kaplan, Pigliucci, and Banta (2015) argue that Gould’s problem with Morton’s measurements came down to how the measurements should have been done (lead shot or seed). They contend that many of Lewis et al’s (2011) claims against Gould were “misleading” and “had no relevance to Gould’s published analysis.” They also argue that both Gould’s and Morton’s methods (inclusion/exclusion of skulls, how to compute averages, etc.) were “inappropriate”. The point is that this debate seems far from over, and I await the next chapter. Whatever the case may be, Gould vs. Morton is a perfect case study of politics and bias in science.

Everyone is biased: researchers, physicians, normal everyday people. But we become most biased when politics comes into play. To become better, well-rounded people with a myriad of knowledge, we need to listen to others’ viewpoints without immediately rejecting them. But first we must recognize the cognitive bias and attempt to correct it. Political differences begin in the brain and are then shaped by experience. These political differences then lead to feelings of disgust when hearing the views of the ‘opposite team’. Both sides of the political spectrum are equally biased, contrary to each group’s perception of this particular issue. There are differences in the brain between conservatives and liberals, and when they see their ‘enemy’ engage in contradictory behavior they feel joy, whereas when they see their own side engage in the same behavior they feel discomfort.

The long debate over Morton’s skulls, which has been raging for over forty years, is the perfect look into how politics, motivation, and bias come into play in science, no matter which camp ultimately ends up being right (I’m in the Morton camp, obviously). Studying the causes and effects of our strong biases can lead to a better understanding of the underlying defense mechanisms, such as the backfire effect and similar cognitive biases. Everyone, from the scientist to the layman, should let what the facts say guide their points of view, not their emotions.

When you are studying any matter, or considering any philosophy, ask yourself only what are the facts and what is the truth that the facts bear out. Never let yourself be diverted either by what you wish to believe, or by what you think would have beneficent social effects if it were believed. But look only, and solely, at what are the facts. That is the intellectual thing that I should wish to say.

(Bertrand Russell, 1959)

Psychology, Anti-Hereditarianism, and HBD

3800 words

Abstract

The denial of human nature is extremely prevalent, most noticeably in our institutions of higher learning. To most academics, the possibility of population differences that are genetic in nature is troubling. However, denying genetic/biological causes for racial differences is 1) intellectually dishonest; 2) likely to lead to negative health outcomes, due to the assumption that all human populations are the same; and 3) part of a ‘lie of equality’ that will not allow human populations to reach their ‘potential’, again due to the implicit assumption that all human populations are the same. Anti-hereditarians deny any and all genetic explanations for human differences, believing that human brain evolution somehow halted around 50-100 kya. Numerous studies show that race is a biological reality; it doesn’t matter what we call the clusters, as the names are the social constructs. The contention is that ‘all brains are the same color’ (Nisbett, 2007; for comment see my article Refuting Richard Nisbett), and that evolution in differing parts of the world over the past 50,000 years was not enough to produce meaningful population differences. But to accept that, you must accept that the brain is the only organ immune to natural selection. Does that make any sense? I will show that these differences do exist and should be studied, as free of bias as possible, with every possible hypothesis examined rather than discarded.

Evolution is true. It’s not ‘only a theory’ (as some anti-evolutionists contend); anti-evolutionists do not understand the definition of the word ‘theory’. Richard Dawkins (2009) wrote that a theory is a scheme or system of ideas or statements held as an explanation or account of a group of facts or phenomena. This is in stark contrast to the layperson’s definition of ‘theory’, which amounts to ‘just a guess’. Evolution is a fact. What biologists argue with each other about, for any quote-mining Creationists out there, are the mechanisms behind evolution.

We know that evolution is a fact and that it is the only game in town (Dawkins, 2009) to explain the wide diversity and variation we see on our planet. However, numerous scholars, most residing in the social sciences, deny the effect of evolution on human behavior, and some prominent biologists have denied the effect of human evolution on behavior and cognition, or implied there were no differences between us and our ancestors (Gould 1981, 1996; for a review of Gould 1996, see my articles Complexity, Walls, 0.400 Hitting and Evolutionary “Progress” and Stephen Jay Gould and Anti-Hereditarianism; Mayr 1963; see Cochran and Harpending 2009). A prominent neuroscientist whom I have written about here, Herculano-Houzel, implied that Neanderthals and Antecessor may have been just as intelligent as we are due to neuronal counts in a range similar to ours (Herculano-Houzel 2013). This raises an interesting question (which I have tackled here and will return to in the future): did our recent hominin ancestors at least have the capacity for intellect similar to ours (Villa and Roebroeks, 2014; Herculano-Houzel and Kaas, 2011)? It is interesting that neuronal scaling rules hold for our extinct ancestors, and this question is most definitely worth looking into.

Whatever the case may be regarding recent human evolution and our extinct hominin ancestors, the pace of human evolution has increased over the past 10,000 years (Cochran and Harpending, 2009; Wade, 2014). This is due to the dispersal of Anatomically Modern Humans (AMH) out of Africa (OoA) around 70 kya; with the ensuing geographical isolation, populations began to diverge with little to no interbreeding. This is noticed most in ‘Native’ Americans, who show no gene flow with other populations due to being genetically isolated (Villena et al, 2000). Who’s to say that evolution stops at the neck and no further evolution occurs in the brain? Is the brain itself exempt from the laws of natural selection? We know that there was little to no gene flow between populations before the advent of modern-day technology and vehicles; we know that humans differ in morphological and anatomical traits. Why, then, are genetic differences out of the question, especially when genetic differences may explain, in part, some of the variation between populations?

We know that evolution is true beyond a reasonable doubt. So why do some researchers contend that the human brain is exempt from such selective pressures?

A theoretical article by Winegard, Winegard, and Boutwell (2017) was released on January 17th. In it, they argue that social scientists should integrate HBD into their models. Social scientists do not integrate genetics into their models, and the longer one studies the social sciences, the more likely one is to deny human nature, regardless of political leaning (Perry and Mace, 2010). This poses a problem. Completely ignoring a major variable (possible genetic differences) has the potential to harm people’s health, as race is a very informative marker when discussing disease acquisition as well as whether certain drugs will work on two individuals of different races (Risch et al, 2002; Tang et al, 2005; Wade, 2014). People who deny the usefulness of race, even in a medical context, endanger the lives of individuals of different races/ethnies by assuming that all humans are the same inside, despite ‘superficial differences’ between populations.

The notion that all human populations are the same, genetic isolation and evolution in differing ecosystems/climates/geographic locales be damned, is preposterous to anyone with a true understanding of evolution. Why should man’s brain be the only organ on earth exempt from the forces of natural selection? Why do egalitarians assume that all humans have the same psychological faculties, despite the rapid evolution that has occurred within the human species over the last 10,000 years?

To see some of the most obvious examples of natural selection acting on human populations, one should look to the Inuit (Fumagalli, 2015; Daanen and Lichtenbelt, 2016; NIH, 2015; Cardona et al, 2014; Tishkoff, 2015; Ford, McDowell, and Pierce, 2015; Galloway, Young, and Bjerregaard, 2012; Harper, 2015). Global warming troubles some researchers, with many suggesting that it will have negative effects on the health and food security of the Inuit (Ford et al, 2014, 2016; Ford, 2012, 2009; Wesche, 2010; Furgal and Seguin, 2006; McClymont and Myers, 2012; Petrasek et al, 2015; Rosol, Powell-Hellyer, and Chan, 2016; Petrasek, 2014; WHO, 2003). I could go on and on citing journal articles for both claims, but you get the point. The main point is this: we know the Inuit have evolved for their climate, and a (possible) climate change would then have a negative effect on their quality of life because of their adaptations to a cold-weather climate. Egalitarians, however, still contend, against these examples and numerous others I could cite, that any and all differences within and between human populations can be explained by socio-cultural factors and not genetic ones.

The Inuit are also one of the best examples of genetic isolation in a geographic locale that is the complete opposite of the environment of evolutionary adaptedness (EEA; Kanazawa, 2004), the African savanna in which we evolved. I did entertain the Savanna hypothesis, and while I do believe that it could explain a lot of the variance in IQ between countries (Kanazawa, 2007), Kanazawa’s hypothesis doesn’t square with what we know about human evolution over the past 10,000 years.

The most obvious differences we can see between populations are differences in skin color. Skin color does not signify race, per se, but it is a good indicator. Skin color is an adaptation to UV radiation (Jablonski and Chaplin, 2000, 2010; Juzenienne et al, 2009; Jeong and Rienzo, 2015; Hancock et al, 2010; Kita and Fraser, 2016; Scheinfeldt and Tishkoff, 2013), and is therefore an adaptation to climate. Dark skin protects against skin cancer (Brenner and Hearing, 2008; D’Orazio et al, 2010; Bradford, 2009), and skin cancer is a possible selective force behind black pigmentation of the skin in early hominin evolution (Greaves, 2014). Given these adaptations in skin color between genetically and geographically isolated populations, are changes in the brain, however small, really out of the question?

A better population to bring up in regards to geographic isolation having an effect on human evolution is the Tibetans. For instance, Tibetans have higher total lung capacities in comparison to the Han Chinese (Droma et al, 1991). There are even differences in lung capacity between Tibetans and Han Chinese who live at the same altitude (Yangzong et al, 2013), with the same thing noticed for peoples living in the Andean mountains (Beall, 2007). Tibetans evolved at a higher elevation than the Han Chinese, who lived closer to sea level, so it makes sense that they would be selected for the ability to take deeper breaths. They also have a larger chest circumference and greater lung capacity than the Han Chinese who live at lower altitudes (Gilbert-Kawai et al, 2014).

Admittedly, the acceptance of the usefulness of race in regards to human differences is a touchy subject; so much so that social scientists do not take genetics into account in their models. However, researchers in the relevant fields accept the usefulness of race (Risch et al, 2002; Tang et al, 2005; Wade, 2014; Sesardic, 2010), so the fact that social scientists do not should carry little weight. Race is a social construct, yes. But no matter what we call these clusters (clines, demes, races, ethnies: whatever name you want to use), this does not change the fact that race is a useful category in biomedical research. Race matters, for instance, when matching bone marrow transplants; treating all populations as the same, with no variation between them, amounts to saying that differences between people in a biomedical context do not exist. Ignoring heritable human variation will lead to disparate health outcomes under the assumption that all humans are the same. Is that what we want? Is that what race-deniers want?

So there are anatomical and physiological differences between human populations (Wagner and Hayward, 2000), with black Americans having a different morphology and lower fat-free body mass on average in comparison to white Americans. This, then, is one of the variables behind racial differences in sports, with muscle fiber type explaining a large portion of the variance, in my opinion. No one denies that blacks and whites differ at elite levels in baseball, football, swimming and jumping, and bodybuilding and strength sports. But once one accepts that these morphological and anatomical differences between the races come down to evolution, one must also accept that different races/ethnies differ in the brain, thus destroying the egalitarian fantasy that all genetically isolated human populations are the same in the brain. Wade (2014) writes on page 106:

“… brain genes do not lie in some special category exempt from natural selection. They are as much under evolutionary pressure as any other category of gene”

This is a hard pill to swallow for race-deniers, especially those who emphatically deny any type of selection pressure on the human brain within the past 10,000 to 100,000 years.

Winegard, Winegard, and Boutwell (2017) write:

Consider an analogy that might make this clear while simultaneously illuminating the explanatory importance of population differences. Most cars are designed from the same basic blueprint and consist of similar parts—an internal combustion engine, a gas tank, a chassis, tires, bearings, spark plugs, et cetera. Cars as distinct as a Honda Civic and a Subaru Outback are built from the same basic blueprint and comprised of the same parts; so, in this sense, there is a “universal car nature” (Newton 1999). However, precise, correlated changes in these parts can dramatically change the characteristics of a car.

Humans, like cars, are built from the same basic body plan. They all have livers, lungs, kidneys, brains, arms, and legs. And these structures are built from the same basic building blocks, tissues, which are built of proteins, which are built of amino acids, et cetera. However, small changes in the structures of these building blocks can lead to important and scientifically meaningful differences in function.

Put in this context, yes, there is a ‘universal human nature’, but the expression of that human nature will differ depending on what a population had to do to survive in its climate/ecosystem. And, over time, populations will diverge from each other, both physically and mentally. The authors also argue that societal differences between Eurasians (Europeans and East Asians) can be explained partly by genetic differences. Indeed, the races do differ on the Big Five personality traits, with heritable components explaining 40 to 60 percent of the variation (Power and Pluess, 2015). So some of the cultural differences between Europeans and East Asians must come down to some biological variation.
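For readers unfamiliar with how a “40 to 60 percent heritable” figure is structured, the classic back-of-the-envelope version is Falconer’s formula from twin studies. The sketch below is illustrative only: the twin correlations are made-up numbers, and Power and Pluess (2015) used genotype-based methods rather than this formula.

```python
# Falconer's formula: h^2 = 2 * (r_MZ - r_DZ), i.e., twice the gap
# between identical-twin and fraternal-twin correlations on a trait.
# The correlations below are made up purely for illustration; they are
# not values from Power and Pluess (2015).

def falconer_h2(r_mz, r_dz):
    """Estimate narrow-sense heritability from twin correlations."""
    return 2 * (r_mz - r_dz)

r_mz = 0.50  # assumed identical-twin correlation on a personality trait
r_dz = 0.25  # assumed fraternal-twin correlation

print(f"h^2 = {falconer_h2(r_mz, r_dz):.2f}")  # 0.50, i.e. ~50% 'heritable'
```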

One of the easiest ways to see the effects of cultural/environmental selective pressures in humans is to look at Ashkenazi Jews (Cochran et al, 2006). Because Ashkenazi Jews were barred from numerous occupations, they were confined to a few cognitively demanding ones. Over time, only the Jews who could handle these occupations prospered, further selecting for higher intelligence due to the cognitive demands of the jobs they were able to acquire. Thus, Ashkenazi Jews who could handle the few occupations open to them bred more and passed on variants for higher intelligence to their offspring, whereas those who couldn’t handle the cognitive demands were selected out of the gene pool. This is one situation in which natural selection worked swiftly, and it is why Ashkenazi Jews are so overrepresented in academia today—along with nepotism.

Winegard, Winegard, and Boutwell (2017) lay out six basic principles for a new Darwinian paradigm, as follows:

  1. Variation is the grist for the mill of natural selection and is ubiquitous within and among human populations.
  2. Evolution by natural selection has not stopped acting on human traits and has significantly shaped at least some human traits in the past 50,000 years.
  3. Current hunter-gatherer groups might be slightly different from other modern human populations because of culture and evolution by natural selection acting to influence the relative presence, or absence, of trait-relevant alleles in those groups. Therefore, using extant hunter-gatherers as a template for a panhuman nature is problematic.
  4. It is probably more accurate to say that, while much of human nature is universal, there may have been selective tuning on various aspects of human nature as our species left Africa and settled various regions of the planet (Frost 2011).
  5. The human brain is subject to selective forces in the same way that other organ systems are. Natural selection does not discriminate between genes for the body and genes for the brain (Wade 2014).
  6. The concept of a Pleistocene-based environment of evolutionary adaptedness (EEA) is likely unhelpful (Zuk 2013). Individual traits should be explored phylogenetically and historically. Some human traits were sculpted in the Pleistocene (or before) and have remained substantially unaltered; some, however, have been further shaped in the past 10,000 years, and some probably quite recently (Clark 2007). It remains imperative to describe what selection pressures might have been actively shaping human nature moving forward from the Pleistocene epoch, and how those ecological pressures might have differed for different human populations.

No stone should be left unturned when attempting to explain population differences between geographically isolated peoples, and these six principles are a great start, which all social scientists should introduce into their models.

As I brought up earlier, Kanazawa’s (2004b) hypothesis doesn’t make sense in light of what we know about the evolution of human psychology; thus, any proposed evolutionary mismatch between our minds and our societies does not make much sense. However, one mismatch that does need to be looked into is the negative mismatch with our modern-day Western diets. Agriculture was both a gift and a curse in human history. Without the advent of agriculture 10,000 years ago we would not have the societies we have today; on the other hand, we have higher rates of disease compared to our hunter-gatherer ancestors. This is one evolutionary mismatch that cannot and should not be ignored, as it has devastating effects on populations that consume a Western diet—which we did not evolve to eat.

Winegard, Winegard, and Boutwell (2017) then discuss how their new Darwinian paradigm could be used by researchers: 1) look for differences among human populations; 2) once population differences are found, approach causal analyses neutrally; 3) consider a broad range of data when asking whether the trait or traits in question are heritable; and 4) test the posited biological cause in more depth. Without understanding—and using—biological differences between human populations, the quality of life for some populations will be diminished, all for the false notion of ‘equality’ between human races.

There are huge barriers in place to studying human differences, however. Hayden (2013) documents differing taboos in genetics, with intelligence having a high taboo rating. Of course, we HBDers know that intelligence is a highly heritable trait, largely genetic in nature, and so studying these differences between human populations may lead to some uncomfortable truths for some people. On the 150th anniversary of Darwin’s On the Origin of Species, Ceci and Williams (2009) said that “the scientific truth must be pursued” and that researchers must study race and IQ, much to the chagrin of anti-hereditarians (Horgan, 2013). Horgan, however, writes something very troubling in regards to this research, and to free speech in our country as a whole:

Some readers may wonder what I mean by “ban,” so let me spell it out. I envision a federal prohibition against speech or publications supporting racial theories of intelligence. All papers, books and other documents advocating such theories will be burned, deleted or otherwise destroyed. Those who continue espousing such theories either publicly or privately (as determined by monitoring of email, phone calls or other communications) will be detained indefinitely in Guantanamo until or unless a secret tribunal overseen by me says they have expressed sufficient remorse and can be released.

Whether he’s joking or not, that’s besides the point. The point is, is that these topics are extremely sensitive to the lay public, and with these articles being printed in popular publications, the reader will get an extremely biased look into the debate and their mind will already be made up for them. This is the definition of intellectual dishonesty, attempting to sway a lay-readers’ opinion on a subject they are ignorant of with an appeal to emotion. Shouldn’t all things be studied scientifically, without any ideological biases?

Speaking about the ethics of putting this information out to the general public, Winegard, Winegard, and Boutwell (2017) write:

If researchers do not responsibly study and discuss population differences, then they leave an abyss that is likely to be filled by the most extreme and hateful writings on population differences. So, although it is understandable to have concerns about the dangers of speaking and writing frankly about potential population differences, it is also important to understand the likely dangers of not doing so. It is not possible to hide the reality of human variation from the world, not possible to propagate a noble lie about human equality, and the attempt to do so leaves a vacancy for extremists to fill.

This is my favorite quote in the whole paper. It is NOT possible to hide the reality of HBD from the world; anyone with eyes can see that humans do differ. Attempting to continue the feel-good liberal lie of human equality will lead to devastating effects in all countries/populations due to the implicit assumption that all human groups are the same in their cognitive and mental faculties.

The denial of genetic human differences could, as brought up earlier in this article, lead to negative health outcomes between populations. Black Americans have higher rates of hypertension than white Americans (Fuchs, 2011; Ferdinand, 2007; Ortega, Sedki, and Nayer, 2015; Nesbitt, 2009; Wright et al, 2005). Overlooking possible genetic differences as a causal factor in racial health disparities will cost lives if people continue to believe that all differences come down to the environment. That belief is not true, and holding it is dangerous to the health of all populations in the world.

Epigenetic signatures of ethnicity may be biomarkers for shared cultural experiences. In one study, seventy-six percent of the epigenetic difference between Mexicans and Puerto Ricans was attributable to DNA methylation—an epigenetic mechanism used by cells to control gene expression—leaving 24 percent of the effect due to unknown factors, probably environmental, social, and cultural differences between the two ethnies (Galanter et al, 2017). This is but one of many effects that culture can have on the genome, leading to differences between two populations, and it is suggestive evidence for the contention that the different races/ethnies evolved different psychological mechanisms due to genetic isolation in different environments.

We must now ask the question: what if the hereditarian hypothesis is true (Gottfredson, 2005)? If it is, Gottfredson argues, special consideration should be given to those found to have lower IQs, with better training and schooling specifically targeting individuals at risk of being less able due to lower intelligence. This is one way the hereditarian hypothesis could help race relations in the country: people will (hopefully) accept intrinsic differences between the races. What Gottfredson argues in her paper will also, hopefully, pacify anti-hereditarians, as less able people of all races/ethnicities will still get the extra help they need in finding work and getting schooling/training/jobs that accommodate their intelligence.

Conclusion

People accept genetic causes for racial differences in sports, yet emphatically deny that human races/ethnies differ in the brain. The denial of human nature—racially and ethnically—is the next hurdle for us to jump over. Once we accept that these differences between populations can, in part, be explained by genetic factors, we can then look to other avenues to see how and why these differences occur and whether anything can be done to ameliorate them. Ironically, anti-hereditarians do not realize that their policies and philosophy are actively hindering their own goals: refusing to accept biological causes—if only to see them researched and weighed against other explanations—will lead to further inequality, while they scratch their heads, not realizing that the cause is the one variable they have discarded: genetics. Still, I suspect this won’t happen, and the same non-answers will be given in response to findings on how the human races differ psychologically (Gottfredson, 2012). The races do differ in biologically meaningful ways, and denying or disregarding the truth will not make these differences disappear. Social scientists must take these differences into account in their models, and seriously entertain them like any other hypothesis, or they will never fully understand human nature.

The Threat of Increasing Diversity: Why Many White Americans Supported Trump in the 2016 Presidential Election

3250 words

Tl;dr: White Americans exposed to more diversity are more likely to support Trump, anti-PC speech and anti-immigration policies while showing less support and positivity towards Democratic candidates. In the racial shift group, whites with low racial identity didn’t seem to care about ethnic replacement and showed stronger support for Democratic candidates. To wake up more whites to anti-immigration sentiments and white identity politics, you need to show them the effects of diversity in the social context as well as what a demographic replacement will mean in the next two decades.

Why did so many white Americans support Donald Trump’s presidency? The reasons are numerous, though there are some key ones. To look at them, we need to look at some evolutionary psychology as well as political psychology. I came across a paper today titled The threat of increasing diversity: Why many White Americans support Trump in the 2016 presidential election; it has many thought-provoking things in it and pretty much confirms what the altright says about an increase in white identity occurring. An ‘ethnic awakening’, if you will. The authors state that white Americans high in racial identity will be more likely to derogate out-groups when they realize they are being replaced in their own country.

Major, Blodorn and Blascovich (2016) state that reminding white Americans who are ‘high in ethnic identification’ (i.e., a white identitarian, an altrighter) that non-white populations will soon outnumber whites caused them to be more concerned about the future of whites in America, pushing them towards Trump and his anti-immigration policies. It also led to an increase in opposition to political correctness. Whites low in ethnic identification (say, a progressive leftist), by contrast, showed no greater chance of voting for Trump or supporting his anti-immigration policies; the reminder actually decreased their positivity towards Trump as well as their opposition towards political correctness. The authors write:

The U.S. Census Bureau (2012) projects that the national population of non-White racial groups will exceed that of Whites before the middle of this century. Many White Americans in the US view race relations as “zero-sum,” in which status gains for minorities means status loss for Whites (Wilkins & Kaiser, 2014) and less bias against minorities means more bias against Whites (Norton & Sommers, 2011). The belief that Whites are losing out to ethnic minorities is particularly prevalent among Trump supporters (De Jonge, 2016).

This is noticed anecdotally, and you can follow the citations to get more information. From an evolutionary perspective, it makes sense. Competition for resources between groups triggers evolved instincts. More non-whites in America will shrink the share of the white population, which has the lowest birth rate of any ethny in the country, and this will trigger more anti-immigration sentiment in whites high in ethnic identification. This ‘zero-sum game’, the sense that ‘if your ethnic group has more, mine has less’, will take hold in America in the coming years if this paper is any indication of the future. One particularly interesting point the authors bring up is that less bias against minorities is perceived as more bias against whites, and that perception, in turn, increased anti-immigration sentiment and drove people towards Trump and his anti-immigration views.

The more minorities that come into the country, the more whites in America will start to band together for their own ethnic genetic interests, move towards more conservative policies and begin to show more derogation towards the out-group.

The authors use the term ‘group status threat’, which is when one “worries that his group’s status, influence, and position in the hierarchy is under threat.” This threat then predicts out-group derogation. I wonder if oxytocin (a brain peptide that increases out-group derogation) increases when diversity occurs in the social context. I’d like to see that looked into one day.

There is also ‘integrated threat theory’ where increased diversity poses a threat to white Americans’ resources and American values. They also state, using social cognition theory, that increases in diversity will be ‘frightening’ and ‘confusing’ to whites, causing “uncertainty and fear”, which then drove whites towards more conservative anti-immigration policies.

When whites high in ethnic identification were shown a newspaper article stating that whites would be a minority by 2042, it led whites to be more concerned about whites’ social status in the country, leading them towards more conservative views and policies. It’s important to note that their views changed along with their policy recommendations.

In this study, the authors tested experimentally whether reminding white Americans of the increasing diversity in the US affects their political leanings, whether group status threat mediates the effect when one hears about ethnic replacement, and whether ethnic identification or political alignment moderates the effects. They expected that reminding whites of ethnic replacement would push them towards conservative views and politicians (Trump, Kasich, Cruz) while decreasing support for Democrats (Clinton and Sanders).

People who experience ‘group status threat’ will be more likely to vote for Trump, since he has more anti-immigration, anti-diversity views than any other politician who ran for President. This, the researchers hypothesized, would come to fruition in their study. They also predicted that reminding white Americans of ethnic replacement would cause them to support more anti-immigration policies and be more resistant to political correctness, i.e., they would be more likely to oppose positive policies for the out-group, becoming intolerant towards the out-group upon exposure to the reality of ethnic replacement in the country.

We also tested ethnic identification and political affiliation as potential moderators of the predicted effect of condition.1 Drawing on social identity theory (Tajfel & Turner, 1986), we expected that reminders of increasing ethnic diversity would be especially threatening to Whites whose race/ethnicity is a central aspect of their identity. Thus we expected them to report greater support for Republican candidates, anti-immigrant policies, and opposition to political correctness in response to reminders of the racial shift compared to Whites low in ethnic identification. In contrast, based on Craig and Richeson’s (2014b) finding that reminders of the racial shift increased support for conservative ideology irrespective of political leanings, we did not expect political affiliation to moderate effects.

Whites whose ‘race/ethnicity is a central aspect of their identity’, i.e., altrighters, were predicted to be especially threatened by the reminder of ethnic replacement in their country of birth, whereas, as expected and as seen in anecdotal accounts, whites low in ethnic identification, i.e., progressive leftists, antifas, etc., showed the opposite.
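Statistically, the moderation test described in the quoted passage amounts to a condition-by-identification interaction term in a regression. Below is a minimal simulated sketch of that design; the variable names, effect sizes, and simulated data are my assumptions, not the authors’ data or code.

```python
# Minimal sketch of the moderation design described above: does ethnic
# identification moderate the effect of condition (racial shift vs.
# control) on candidate support? Data are simulated; nothing here comes
# from the actual study.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 376                                  # matches the final sample size
condition = rng.integers(0, 2, n)        # 1 = racial shift, 0 = control
identification = rng.uniform(1, 7, n)    # ethnic identification scale

# Assumed data-generating process: condition raises candidate support
# only when identification is high (the interaction effect).
support = 3 + 0.4 * condition * identification + rng.normal(0, 1, n)

df = pd.DataFrame({"condition": condition,
                   "identification": identification,
                   "support": support})

model = smf.ols("support ~ condition * identification", data=df).fit()
# The condition:identification coefficient is the moderation test.
print(model.summary())
```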

The researchers had a sample of 450 white Americans with the following political affiliations: 262 Democrats, 114 Republicans, 50 Independents and 24 ‘other’. After removing the Independents and ‘others’, they had 376 white American participants (51.1 percent female).

They were given articles and two minutes to read them. One article discussed the ethnic replacement of whites and whites’ projected minority status in America by 2042 (aptly called the ‘racial shift’ condition), while the other used “similar language to indicate geographic mobility is increasing (control condition).” It’s interesting that the only difference between the two articles seems to be the wording. After reading the articles, participants completed tasks assessing group status threat, support for the candidates then running for office, anti-immigration sentiment, ethnic identification, and opposition to political correctness. After completing the tasks, they were told the reason for the study and given their one-dollar compensation.

White Americans exposed to the ‘racial shift’ condition reported greater group status threat than those in the control condition. This suggests that white Americans who live in a diverse neighborhood will be more likely to be affected by the racial shift, leading them towards anti-immigration sentiment, a stronger white identity, and more right-wing views. Whites high in ethnic identification showed greater group status threat in the racial shift condition than the control (.29), while whites low in ethnic identification did not. So white identitarians felt a greater threat to the group than progressive leftists and antifas did; can’t say I’m too surprised. I theorized in my article on the rise of the altright that either leftists have less oxytocin and altrighters more, or, since political beliefs are heritable, that high amounts of oxytocin push one to direct altruistic tendencies towards either the out-group or the in-group. For both right-wingers and left-wingers, ethnic identification was positively related to group status threat, but the relationship was stronger in right-wingers. This seems to be some evidence for my oxytocin/political beliefs theory.

White identitarians (whites high in ethnic identification) reported moderately greater positivity towards Trump, as well as a greater chance of voting for him, in the racial shift condition compared to the geographic movement condition. Conversely, whites low in ethnic identification (progressive leftists, antifas, etc.) showed less positivity towards Trump in the racial shift condition than in the geographic movement (control) condition. In the racial shift condition, high ethnic identification meant increased positivity towards and a higher chance of voting for Trump, whereas in the geographic condition, ethnic identification was unrelated to positivity towards Trump or voting for him.

Whites showed somewhat less support for Sanders, being somewhat less likely to vote for him, in the racial shift condition than in the geographic movement condition. In whites low in ethnic identity, neither condition had any effect on voting for Sanders or positivity towards him. Now here’s the good part: whites high in ethnic identity showed somewhat less support and positivity towards Sanders in the racial shift condition compared to the geographic shift condition. Moreover, in the racial shift condition, ethnic identification was negatively correlated with positivity towards and chance of voting for Sanders, whereas in the control condition ethnic identification showed no effect.

In the racial shift condition, white identitarians were more supportive of anti-immigration policies than progressive leftists, while whites low in ethnic identification showed no difference, regardless of the condition. Ethnic identification was related to anti-immigration policies in both the racial shift and geographic movement conditions, but it was stronger in the racial shift condition.

White identitarians did not differ in outlook on political correctness by condition, while whites who show less ethnic identity reported less opposition to political correctness. Ethnic identification and anti-PC views were positively related in the racial shift condition but unrelated in the geographic shift condition.

Exposure to the racial shift condition vs. the geographic movement condition elicited different responses based on one’s political alignment and ethnic identification. Exposure to the racial shift condition increased group status threat, support for Trump and support for anti-immigration policies, while somewhat decreasing support for Sanders, but only among whites high in ethnic identification. Conversely, for whites low in ethnic identification, exposure to the racial shift had no effect on group status threat, support for Sanders or anti-immigration sentiment, and actually decreased positivity towards Trump. That’s pretty powerful right there.

The support and election of Donald Trump signals a paradigm shift in this country as ethnies in America start voting on racial lines. As diversity continues to increase and more white Americans begin to realize that ethnic replacement will cut into the resources they have access to, with ‘less bias against minorities being more bias towards whites’, more whites will start voting not on party lines but on ethnic lines, like all other ethnies in this country do. In the racial shift group, whites high in ethnic identification showed increased support for Trump and anti-immigration policies, increased opposition towards political correctness and decreased Sanders support, mediated by group status threat. Conversely, in the racial shift group, reminders of ethnic replacement in whites low in ethnic identification decreased Trump support and support for his policies and did not lead to group status threat. This can be termed ‘ethnic suicide’. Clearly, increased diversity is a threat to some, but not all, white Americans.

What boggles my mind is that when whites low in ethnic identification were reminded of the projected ethnic replacement by 2042, they decreased their support for Trump and anti-immigration policies and increased their support for norms that prohibit biased and hateful speech, an effect that was not mediated by group status threat. The authors put forth one theory for why this may be the case: whites low in ethnic identification may have been thinking of the changing racial demographics of the country as a whole, not just of their own ethnic group, which may have led them to support a candidate who is tolerant of diversity and of anti-bias norms. Reminding Americans high in racial identity of ethnic replacement shifted support towards Trump and away from Sanders. This effect was not seen for other candidates, which the authors attributed to Trump’s stance on immigration and political correctness relative to the other Republican candidates. White Americans with a high racial identity who experience group status threat would be drawn to Trump and his anti-immigration, anti-PC speech. The authors state:

Of all of the candidates, Trump has been most vocal in his opposition to “outsiders” such as Muslims and illegal immigrants from Latin America, and most openly critical of “political correctness” in both his rhetoric and his behavior. Trump’s rhetoric and policies thus appear to hold special appeal for White Americans high in racial/ethnic identification who are concerned about the declining position of Whites in American society and who often perceive reverse discrimination as prevalent. In contrast, Sanders may have been perceived as the most inclusive candidate and thus most likely to exacerbate threats to Whites’ status as a group.

This sums up the 2016 election in one paragraph. White Americans high in racial identity showed a greater chance to vote for Trump, greater opposition to political correctness and were more likely to espouse anti-immigration sentiments.

Political affiliation had a large and expected effect on candidate choice as well as policy preferences. Compared to Democrats, Republicans reported much stronger support for Republican candidates than Democratic candidates while being more supportive of anti-immigration policies and “more un-PC attitudes”. However, when reminded of ethnic replacement, both Democrats and Republicans high in racial identification were more likely to lean right and vote Trump. This study has important implications for how group identity and intergroup processes shape voting preferences. Among whites high in racial identity, increased racial diversity affects voting preferences, with the strength of the racial/ethnic identity moderating the effect; i.e., the stronger one’s racial identity, the more likely one is to support Trump and anti-immigration policies, irrespective of political leaning. Given this study, psychologists and political scientists, who have traditionally thought that white Americans’ politics weren’t driven by white identity and deemed it unimportant to whites’ political outlooks, need to begin paying attention to the increasing concerns of whites high in racial identification. For example, one study showed that “racial identification, perceptions of discrimination, and linked fate were only weak predictors of White Americans’ attitudes on policies related to race and immigration,” leading its authors to conclude that “Whites’ whiteness is usually likely to be no more noteworthy to them than is breathing the air around them” (Sears and Savelli, 2006, p. 901).

However, the current political climate shows that this is no longer the case. As more non-whites immigrate to America, whites with high racial identity, irrespective of political leaning, will become more open to supporting Trump (or people like him) as well as anti-immigration policies. As the white majority in America shrinks, more white Americans will be open to white identity politics to get back their rightful resources in the country as well as the demographic majority. Eventually, with more unchecked immigration, white identity will become a central part of white American politics and voting blocs. For white Americans who regard being ‘white’ as an important part of their identity, future political preferences will be molded by group status threat as well as opposition to diversity. Trump has ‘tapped into’ the demographic of white Americans who feel looked down on in their own home country amid mass immigration from the South (and soon from MENA countries). White Americans who feel that their numerical advantage is threatened are more likely to vote for Trump and support anti-immigration policies that will begin to benefit American whites.

It is, however, important to note that Trump may not be who he says he is (like most politicians). On election night last month I blogged on Donald Trump and Ethnic Genetic Interests. I showed that, contrary to the average perception of him, his interests lie with Israel, not with his own racial group (due to his children marrying Jews). Moreover, he has already reneged on his wall, on deporting illegals, and on his supposed moratorium on Muslim immigration into the US from threat countries. If anything, Trump is just a stepping stone towards more nationalistic attitudes in the US for whites. With increased diversity, whites will start to see that they are being replaced by other ethnies, and in whites with high racial identity this will trigger nationalistic attitudes and responses to the impending threat to their unique genetic code. This will help foster the awakening of more whites to identity politics, voting in their own ethnic interests and not for the interests of other ethnies.

I personally hope this leads to a renaissance of race-realism in America, but I may be setting the bar too high. The conclusion of this study is hopeful for the status of whites in America, however. The more whites are exposed to diversity while having high racial identity, the more they will lean towards Trumpian policies. As whites decrease in number in the US, more whites will begin to vote for themselves and, in my opinion, once these nationalistic attitudes appear in the white consciousness in America, this demographic replacement can begin to be reversed. If it were not for the increased immigration, however, this would not have happened; the increased immigration is a main driver of these feelings about political correctness and immigration. The more anti-white sentiment heard in America, the more whites high in racial identity will move towards the right, while leftists continue to commit ‘ethnic suicide’.

The takeaway from this paper is this: whites high in racial identity exposed to the racial shift were more likely to support Trump and anti-immigration policies and to be anti-PC. Whites in the racial shift condition with low racial identity showed the opposite pattern and were more likely to vote for Democratic candidates. This paper shows good news for whites in America voting in their interests in the future. Whites in America are beginning to vote for their ethnic genetic interests, which is largely explicable by genetic similarity theory, as immigration from MENA countries and south of the border increases. Moreover, given Trump’s allegiance to Israel, Trump is just a man to awaken more people to the realities of immigration. So Trump himself won’t do anything, but his anti-immigration rhetoric is making people notice the realities of immigration and ethnic replacement in America.

The Evolution of Violence: A Look at Infanticide and Rape

1700 words

The pioneer of criminology was a man named Cesare Lombroso, an Italian Jew (a leftover remnant from the Roman days), who had two central theories: 1) that criminal behavior originated in the brain and 2) that criminals were an evolutionary throwback, a more primitive type of human. Lombroso felt strongly about the rehabilitation of criminals while at the same time believing in the death penalty for “born criminals”. With new advances in criminology and new insights into the brain, it looks like Lombroso was right about his theory of born criminals.

Why are you 100 times more likely to be killed on your birthday? Why are children 50 times more likely to be murdered by their stepfather than by their biological father? Why do some parents kill their children? Finally, why do men rape not only strangers but also their wives? All of these questions can be addressed with evolutionary psychology.

Evolutionarily speaking, antisocial and violent behavior wasn’t a random occurrence. Tens of thousands of years ago, these actions occurred because they secured resources. Thus, we can see some modern criminal acts as resource competition. The more resources one has, the easier it is for him to pass his genes on to the next generation (a big driver of violence). In turn, women are more attracted to males who can provide resources and protection (those who were more antisocial and violent). This also explains prison romances, in which women pursue murderous criminals because they are attracted to the violence (protection) and resources (theft).

The mugger who robs for a small amount of money is increasing his odds of resource acquisition. Drive-by shootings in violent neighborhoods increase the status of those who survive the shootout. What looks like a simple brawl over nothing may be an attempt to increase social dominance. All of these actions have evolutionary causes, and what drives them are our ‘selfish genes’.

The more successful genes are more ruthlessly selfish in their struggle for survival, which then drives individual behavior. The individual behaviors that arise from our selfish genes may be antisocial and violent in nature, which modern society frowns upon. The name of the game is ‘fitness’: the number of children you have in your allotted time on Earth. This is all that matters to our genes. Even the accomplishments you might think of, such as completing college or amassing capital, ultimately fall back on fitness: with more resources, your odds of passing your genetic lineage on to the next generation are greatly enhanced.

Biological fitness can be enhanced in one of two ways: you can have as many children as possible, giving little parental care to each, or you can have fewer children and show each more attention and care. This is known as r/K selection theory. Rushton’s r/K selection theory complements Dawkins’s selfish gene theory in that the r-strategist maximizes his fitness by having as many children as possible, while the K-strategist increases his fitness by having fewer children than the r-strategist but giving them more parental attention (a toy numerical sketch of this trade-off follows below). There are, however, instances in which humans kill children, whether it’s a mother killing a newborn or a stepfather killing a stepchild. What are the reasons for this?
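Before turning to that question, the r/K trade-off just described can be made concrete with a toy expected-fitness calculation: a fixed care budget is split among offspring, and each child’s survival probability rises with the care it receives. All numbers and the survival curve are illustrative assumptions, not estimates from Rushton or Dawkins.

```python
import math

# Toy r/K trade-off: a fixed parental-care budget is split evenly among
# n offspring; each child's survival probability rises, with diminishing
# returns, with the care it receives. Purely illustrative assumptions.

BUDGET = 10.0  # total units of parental care available (assumed)

def survival_prob(care_per_child):
    """Assumed diminishing-returns survival curve."""
    return 1 - math.exp(-care_per_child)

def expected_fitness(n_offspring):
    care_each = BUDGET / n_offspring
    return n_offspring * survival_prob(care_each)

for n in (1, 2, 5, 10, 20, 50):
    print(f"{n:2d} offspring -> expected survivors: {expected_fitness(n):.2f}")

# Whether the r-strategy (many offspring, little care) or the K-strategy
# (few offspring, much care) wins depends entirely on how steeply
# survival rewards care under the assumed curve.
```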

Killing Kids

The risk of being a homicide victim is highest in the first year of life. Why? The Canadian psychologists Daly and Wilson demonstrated an inverse relationship between degree of genetic relatedness and the likelihood of being a victim of homicide: they discovered that the offender and victim are genetically related in only 1.8 percent of all homicides. Therefore, around 98 percent of all murders are killings of people who do not share the killer’s genes.

Many stories have been told about ‘wicked stepparents’ in myths and fairytales, and as we know, a lot of stories have some basis in reality. Children of stepparents are 40 times more likely to suffer abuse at the hands of a stepparent. Unrelated people living together are more likely to kill one another. Even adoptions are more successful when the adopting parents view the child as genetically similar to themselves.

In a study carried out by Maillart et al, the average age at the offense of filicide was 29.5 years for the mother and 3.5 years for the child. Bourget, Grace, and Whitehurst (2007) showed that a risk factor for infanticide was a second child born to a mother under 20 years of age. The reasoning is simple: at a younger age the mother is more fertile and thus more attractive to potential mates, so a lost infant is more easily ‘replaced’; the older the woman is, the more sense it makes to hold on to the genetic investment, since it’s harder to make up for the genetic loss late in her reproductive life.

Genetic relatedness, fitness, and parental investment show, in part, why filicides and infanticides occur.

Raping Your Wife

There are evolutionary reasons for rape as well. The rape of a non-relative can be looked at as the ultimate form of ‘cheating’ in this selfish game of life: the rapist doesn’t have to acquire resources in order to attract a mate; he can just ‘take what he wants’ and attempt to spread his genes to the next generation through non-consensual sex. It’s known that rape victims have a higher chance of getting pregnant, with 7.98 percent of rape victims becoming pregnant (news article). One explanation is that rapists may be able to detect how fertile a woman is; moreover, rapists are more likely to rape fertile women rather than infertile women.

One rapist interviewed by Adrian Raine, author of The Anatomy of Violence, said that he specifically chose ugly women to rape (Raine, 2013: 28), claiming he was giving ugly women ‘what they want’, which is sex. There is a belief that some women enjoy the sex and even orgasm during rape, despite strongly resisting and fighting back during the attack. Reports of orgasm during rape run around 5 to 6 percent (Raine, 2013: 29), though the true number may be higher, since most women are embarrassed to report that they orgasmed during a rape.

Men, as we all know, are more likely than women to engage in no-strings-attached sex. This is due to the ‘burden’ of sex: children. Women are more likely to carefully select a partner who has ample resources and the ability to protect the family; men don’t carry the burden of sticking around to raise the child.

Men are more likely to find a partner’s sexual infidelity upsetting, while women are more likely to find emotional infidelity more distressing. This finding, first observed in Americans, also held for South Korea, Germany, Japan, and the Netherlands. Men are better than women at detecting infidelity and are more likely to suspect cheating in their spouses (Raine, 2013: 32). The unconscious reason: a man doesn’t want to raise a child who is not genetically related to him.

But this raises another question: why would a man rape his wife? One reason offered is that when a man discovers his spouse has been unfaithful, he would want to inseminate her as quickly as possible so that his sperm competes with the other man’s.

There has never in the history of humankind been one example of women banding together to wage war on another society to gain territory, resources or power. Think about it. It is always men. There are about nine male murderers for every one female murderer. When it comes to same-sex homicides, data from twenty studies show that 97 percent of the perpetrators are male. Men are murderers. The simple evolutionary reason is that women are worth fighting for. (Raine, 2013: 32)

A feminist may look at these statistics and say “MEN cause all of the violence, MEN hurt women” and attempt to use the data as ‘proof’ that men are violent. Yes, we men are violent, and there is an evolutionary basis for it. However, what feminists who push the ‘all sexes are equal’ card don’t realize is that when they say ‘men are more likely to be murderers’ (which is true), they are actively accepting biological differences between men and women. Most of these sex differences in crime come down to testosterone. I would also assume that men are more likely to carry the low-activity MAOA variant (MAOA-L), otherwise known as the ‘warrior gene’, which ups the propensity for violence.

The sociobiological model suggests that poorer people kill due to a lack of resources, and one reason men are far more likely to be victims of homicide is that men are in competition with other men for resources.

Going back to the violence against stepchildren that I alluded to earlier, aggression toward stepchildren can be seen as a strategy for driving unwanted, genetically dissimilar others out of the home so that they do not take up precious resources meant for the stepfather’s own future offspring (Raine, 2013: 34).

Women also have ways to increase their fitness, much of it through sexual selection. Women are known to be ‘worriers’: they rate dangerous and aggressive acts as more serious than men do, are more fearful of bodily injury, and are more likely to develop phobias of animals. In these ways, women protect themselves and their unborn (or born) children by maximizing their chances of survival. This helps explain why women are less physically violent than men and why the murder statistics are so heavily skewed toward men: biology.

Women compete for their genetic interests through beauty and childbearing. The more beautiful the woman, the more resources she can acquire from a male, and those resources help ensure a healthy life for her offspring.

Evolutionary psychology can help explain the differences in murder rates between men and women. It can also explain why young mothers kill their children and why stepparents are so abusive toward, and more likely to murder, stepchildren. Of course, social context is involved, but we also need to look at evolutionary causes for what we think we can simply explain, because it is, more often than not, more complex than we imagine. And that complexity is our Selfish Genes doing anything possible to reproduce more copies of themselves through their vehicle: the human body.

The Evolution of Morality

Summary: Moral reasoning is just a post hoc search for reasons to justify the judgments people have already made. When people are asked why they find certain things morally wrong, they often say they cannot think of a reason but still insist it is wrong; this has been verified by numerous studies. Moral reasoning evolved as a skill to further social cohesiveness and to further our social agendas. Even across different cultures, people at matching socioeconomic levels show similar moral reasoning. Morality cannot be entirely constructed by children based on their own understanding of harm, so cultural learning must play a bigger role than the rationalists had allowed. Larger and more complex brains also show more cognitive sophistication in making choices and judgments, supporting my view that larger brains underlie the making of correct choices as well as moral judgments.

The evolution of morality is a much-debated subject in the field of evolutionary psychology. Is it, as the nativists say, innate? Or is it, as the empiricists say, learned? Empiricists, better known as blank slatists, believe that we are born with a ‘blank slate’ and acquire our behaviors through culture and experience. In 1987, when Jonathan Haidt began studying moral psychology, the field was focused on a third answer: rationalism. Rationalism holds that children construct morality for themselves as they grow and interact with other children, working out right from wrong on their own.

The developmental psychologist Jean Piaget focused on the kinds of mistakes children make when watching water moved between glasses of different shapes. He would, for example, pour water into two identical glasses and ask children which one held more water; they would say the glasses held the same amount. He would then pour the water from one glass into a taller, thinner glass and ask the children again which glass held more. Younger children say there is now more water, since the water is in a taller glass; they don’t yet understand that moving water to a taller glass doesn’t change the amount. Even when parents try to explain why the amount of water is the same, the children don’t get it, because they are not cognitively ready. Only when they reach the right age and cognitive stage do they work it out for themselves, just by playing around with cups of water.

Basically, the understanding of the conservation of volume isn’t innate, nor is it learned from parents. Children figure it out for themselves, but only when their minds are cognitively ready and they are given the right kinds of experience.

Piaget then applied the approach from the water experiments to the development of children’s morality. He played a marble game with them, sometimes breaking the rules and playing dumb. The children then responded to his mistakes, correcting him, showing that they had the ability to settle disputes and to respect and change rules, and this ability grew as their cognitive abilities matured.

Thus, Piaget argued that children’s understanding of morality develops like their understanding of water conservation: children’s moral reasoning is self-constructed. You can’t teach a 3-year-old the concept of fairness any more than you can teach him the conservation of volume, no matter how hard you try. Children will figure these things out on their own, through disputes and by doing things themselves, better than any parent could teach them, Piaget argued.

Piaget’s insights were then expanded by Lawrence Kohlberg who revolutionized the field of moral psychology with two innovations: developing a set of moral dilemmas that were presented to children of various ages. One example given was that a man broke into a drug store to steal medication for his ill wife. Is that a morally wrong act? Kohlberg wasn’t interested in whether the children said yes or no, but rather, their reasoning they gave when explaining their answers.

Kohlberg found a six-stage progression in children’s reasoning about the social world that matched what Piaget had observed in children’s reasoning about the physical world. Young children, for instance, judged right and wrong by whether a child was punished for an action, reasoning that if an adult punished the act, it must have been wrong. Kohlberg called the first two stages the “pre-conventional level of moral judgment”, which corresponds to Piaget’s stage at which children judge the physical world by superficial features.

During elementary school, most children move past the pre-conventional level and learn to understand and manipulate rules and social conventions. Kids at this stage care a great deal about social conformity and hardly ever question authority.

Kohlberg then discovered that after puberty, which is right when Piaget found that children become capable of abstract thought, some children begin to think for themselves about the nature of authority, the meaning of justice, and the reasoning behind rules and laws. Kohlberg considered these children “‘moral philosophers’ who are trying to work out coherent ethical systems for themselves”, which was the rationalist view of morality at the time. His most influential finding was that the children who were more morally advanced were frequently those who had more opportunities for role-taking: putting themselves in another person’s shoes and attempting to feel the situation from the other’s perspective.

We can see how Kohlberg’s and Piaget’s work can be used to support an egalitarian, leftist, individualistic worldview.

Kohlberg’s student, Elliot Turiel, then came along. He developed a technique to test for moral reasoning that doesn’t require verbal skill. His innovation was to tell children stories about children who break rules and then give them a series of yes or no questions. Turiel discovered that children as young as five normally say that the child was wrong to break the rule, but it would be fine if the teacher gave the child permission, or occurred in another school with no such rule.

But when children were asked about actions that harmed people, they gave a different set of responses. Asked whether it is OK for a girl to push a boy off a swing because she wants to use it, nearly all of the children said it was wrong, even when told that a teacher had said it was fine, and even if it occurred at a school with no rule against it. Thus, Turiel concluded, children recognize that rules preventing harm are moral rules, related to “justice, rights, and welfare pertaining to how people ought to relate to one another” (Haidt, 2012, pg. 11). Although children can’t talk like moral philosophers, they were busy sorting social information in a sophisticated way. Turiel realized that this was the foundation of all moral development.

There are, however, many rules and social conventions with no obvious harm behind them: the numerous laws of the Jews in the Old Testament regarding eating or touching the swarming insects of the earth, for instance, or the many Christians and Jews who believe that cleanliness is next to godliness, or the Westerners who attach moral significance to food and sex. If morality is just about harm, why do so many Westerners moralize actions that harm no one?

Because of this, it is argued that there must be more to moral development than children constructing moral knowledge as they take the perspectives of others and feel their pain. There MUST be something beyond rationalism (Haidt, 2012, pg. 16).

Richard Shweder then came along and offered the idea that all societies must resolve a small set of questions about how to order society, the most important being how to balance the needs of the individual against those of the group (Haidt, 2012, pg. 17).

Most societies choose a sociocentric (collectivist) answer, placing the needs of the group ahead of the needs of the individual, while Western societies choose a more individualistic answer. There is also a direct relationship between a society’s consanguinity rate, IQ, and internal genetic similarity and whether it is collectivist or individualistic.

Shweder thought that the concepts developed by Kohlberg and Turiel were made by and for people from individualistic societies. He doubted that the same results would emerge in Orissa, India, where morality was sociocentric and there was no clear line separating moral rules from social conventions. Shweder and two collaborators came up with 39 short stories in which someone does something that would violate a commonly held rule in either the US or Orissa. They interviewed 180 children (aged 5 to 13) and 60 adults from Chicago, along with a matched sample of Brahmin children and adults from Orissa and 120 people from lower Indian castes (Haidt, 2012, pg. 17).

In Chicago, Shweder found very little evidence of socially conventional thinking. In plenty of the stories, no one was harmed or treated unjustly, and the Americans said those actions were fine. Basically, if a rule doesn’t protect an individual from harm, then it can’t be morally justified, which makes it just a social convention.

Turiel, though, wrote a long rebuttal essay pointing out that most of the stories Shweder and his two collaborators put to their samples were trick questions. He noted, for instance, that in India fish is believed to stimulate a person’s sexual appetite and is therefore forbidden to widows: if a widow eats such ‘hot’ foods, she will be more likely to have sex, which would anger the spirit of her dead husband and prevent her from reincarnating on a higher plane. Turiel argued that once you take into account these ‘informational assumptions’ about how the world works, most of Shweder’s stories really were moral violations to the Indians, harming people in ways Americans couldn’t see (Haidt, 2012, pg. 20).

Jonathan Haidt then traveled to Brazil to test which force was stronger: gut feelings about important cultural norms, or reasoning about harmlessness. Haidt and one of his colleagues worked for two weeks translating Haidt’s short stories, which he called ‘harmless taboo violations’, into Portuguese.

Haidt then returned to Philadelphia, trained his own team of interviewers, and supervised the data collection for the four groups of subjects there. The full study covered three cities, with two levels of social class (high and low) in each, and within each social class two age groups: children aged 10 to 12 and adults aged 18 to 28.

Haidt found that the results for the harmless taboo stories could not be attributed to the way he posed the questions or trained his interviewers, since he used two questions taken directly from Turiel’s research and got the same results Turiel had. Upper-class Brazilians looked like Americans on these stories (I would assume because upper-class Brazilians have more European ancestry). In one story about breaking a school dress code by wearing normal clothes, however, most middle-class children thought it was morally wrong to do so. The overall pattern supported Shweder: the size of the moral–conventional distinction varied across cultural groups (Haidt, 2012, pg. 25).

The second thing Haidt found was that people responded to the harmless taboo stories just as Shweder would have predicted: upper-class Philadelphians judged them to be violations of social conventions, while lower-class Brazilians judged them to be moral violations. Basically, well-educated people in all of the cities Haidt tested were more similar to each other in their responses to the harmless taboo stories than they were to their lower-class neighbors.

Haidt’s third finding was all differences stayed even when controlling for perceptions of harm. That is, he included a probe question at the end of each story asking: “Do you think anyone was harmed by what [the person in the story] did?” If Shweder’s findings were caused by perceptions of hidden victims, as was proposed by Turiel, then Haidt’s cross-cultural differences should have disappeared when he removed the subjects who said yes to the aforementioned question. But when he filtered out those who said yes, he found that the cultural differences got BIGGER, not smaller. This ended up being very strong evidence for Shweder’s claim that morality goes beyond harm. Most of Haidt’s subjects said that the taboos that were harmless were universally wrong, even though they harmed nobody.

Shweder had won the debate. Turiel’s findings had been replicated by Haidt using Turiel’s own methods, showing that those methods worked on people like Haidt himself: educated Westerners who grew up in an individualistic culture. But Haidt had also shown that morality varies across cultures and that, for most people, morality extends beyond issues of harm and fairness.

It was hard, Haidt argued, for a rationalist to explain these findings. How could children self-construct moral knowledge about disgust and disrespect from their private analyses of harmfulness (Haidt, 2012, pg. 26)? There must be other sources of moral knowledge, such as cultural learning, or innate moral intuitions about disgust and disrespect, as Haidt would argue years later.

Yet there were surprises in the data. Haidt had written the stories carefully to remove all conceivable harm to other people. But in 38 percent of the 1,620 times people heard a harmless-offensive story, they claimed that somebody was harmed.

It was obvious in Haidt’s sample of Philadelphians that the subjects were inventing post hoc fabrications. People usually condemned the action very quickly and didn’t need much time to decide what they thought, but they took a long time to come up with a victim for the story.

He had also trained his interviewers to correct people when they made claims that contradicted the story. Even when subjects realized that the victim they had constructed in their heads was fake, they still refused to say the act was fine; instead, they continued searching for other victims. They simply could not articulate a reason why the act was wrong, even though they intuitively knew it was (Haidt, 2012, pg. 29).

The subjects were reasoning, but not in search of moral truth; they were reasoning in support of their emotional reactions. Haidt had found evidence for the philosopher David Hume’s claim that moral reasoning is often a servant of the moral emotions. As Hume wrote in 1739, “reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them.”

Judgment and justification are separate processes. Moral reasoning is just a post hoc search for reasons to justify the judgments people have already made.

The two most common answers to where morality comes from are that it is innate (the nativist answer) or that it comes from childhood learning (the empiricist answer, also known as “social learning theory”). The empiricist position, though, is incorrect:

  • The moral domain varies by culture. It is unusually narrow in Western, educated, and individualistic cultures; sociocentric cultures broaden the moral domain to encompass and regulate more aspects of life.
  • People sometimes have gut feelings – particularly about disgust – that can drive their reasoning. Moral reasoning is sometimes a post hoc fabrication.
  • Morality can’t be entirely self-constructed by children based on their understanding of harm. Cultural learning (social learning theory, Rushton, 1981) not guidance must play a larger role than rationalist had given it.

(Haidt, 2012, pg. 30-31)

If morality doesn’t come primarily from reasoning, then that leaves a combination of innateness and social learning. Basically, intuitions come first, strategic reasoning second.

If you think that moral reasoning is something we do to figure out truth, you’ll be constantly frustrated by how foolish, biased, and illogical people become when they disagree with you. But if you think about moral reasoning as a skill we humans evolved to further our social agendas – to justify our own actions and to defend the teams we belong to – then things will make a lot more sense. Keep your eye on the intuitions, and don’t take people’s moral arguments at face value. They’re mostly post hoc constructions made up on the fly, crafted to advance one or more strategic objectives (Haidt, 2012, pg. XX to XXI).

Haidt also writes on page 50:

As brains get larger and more complex, animals begin to show more cognitive sophistication – choices (such as where to forage today, or when to fly south) and judgments (such as whether a subordinate chimpanzee showed proper deferential behavior). But in all cases, the basic psychology is pattern matching.

It’s the sort of rapid, automatic and effortless processing that drives our perceptions in the Muller-Lyer Illusion. You can’t choose whether or not to see the illusion, you’re just “seeing that” one line is longer than the other. Margolis also called this kind of thinking “intuitive”.

This suggests that moral reasoning came with bigger brains, and that the choices and judgments we make evolved because they better ensured our fitness, not because of ethics.

Moral reasoning evolved to increase our fitness on this earth. The field of ethics justifies what benefits group and kin selection with minimal harm to the individual. That is, the explanations people give through moral reasoning are just post hoc searches to justify gut feelings, even when they cannot articulate a reason for having them.

Source: The Righteous Mind: Why Good People Are Divided by Politics and Religion

Individual and Racial Differences in IQ and Allele Frequencies

1300 words

In the 100 years since the inception of the IQ test, there have been racial differences in test scores. What causes these score differences? Genetics? Environment? Both? Recently it has come out that populations differ in the frequencies of alleles associated with intelligence. David Piffer’s “forbidden paper on population genetics and IQ” was rejected by the new editor of the journal Intelligence. In the paper, he shows how IQ-associated alleles vary in frequency by population. One reviewer even said it should not be put up for review, which Piffer believes reflects a hidden agenda or a closed-minded attitude. He even includes the reviewers’ comments and responds to them; science, he says, should be transparent, which is why he is showing the reviewers’ comments on his paper.

His December 2015 paper, “A review of intelligence GWAS hits: Their relationship to country IQ and the issue of spatial autocorrelation,” shows that the frequencies of IQ-associated alleles differ between populations and correlate highly with average IQ by country (r = .92; factor analysis showed a correlation of .86). There was also a “positive and significant correlation between the 9 SNPs metagene and IQ” (pg. 45). However, Piffer does note that since the 9 alleles are present in all populations (Africans, Latin Americans, Europeans, South Asians, and East Asians), the intelligence polymorphisms don’t appear to be race-specific but were already present in Homo sapiens before the migration out of Africa. He then goes on to say that it is extremely likely that the vast majority of the alleles were subject to differential selection pressure, which led to increases in cognitive abilities at different rates in different geographical areas (pg. 49). It is, of course, known that differing populations faced differing selection pressures, which led to genotypic changes that then affected phenotypes. It is not surprising that genes that correlate strongly with intelligence have differing frequencies in different geographical populations; it is to be expected from what we know about evolution and natural selection. Below is the scatter plot showing the relationship between polygenic scores built from GWAS (genome-wide association study) hits and IQ:

(Figure: scatter plot of country IQ against the polygenic score from GWAS hits.)
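As a rough illustration of the method, here is a minimal sketch in Python. Every allele frequency and IQ value below is made up for illustration and is not Piffer’s data; the point is only to show how a crude population-level polygenic score (the mean frequency of IQ-increasing alleles across the SNPs) would be correlated with country IQ.

```python
# Minimal sketch of the polygenic-score method: average the frequencies
# of the IQ-increasing alleles within each population, then correlate
# that score with measured country IQ. All numbers are hypothetical.
import numpy as np
from scipy.stats import pearsonr

# Rows = populations; columns = frequency of the IQ-increasing allele
# at each of 9 SNPs (made-up values, not Piffer's data).
allele_freqs = np.array([
    [0.35, 0.40, 0.55, 0.30, 0.45, 0.50, 0.25, 0.60, 0.38],  # population A
    [0.45, 0.50, 0.60, 0.42, 0.55, 0.58, 0.35, 0.66, 0.47],  # population B
    [0.55, 0.61, 0.70, 0.51, 0.64, 0.66, 0.44, 0.74, 0.58],  # population C
])
country_iq = np.array([85.0, 99.0, 105.0])  # hypothetical country IQs

polygenic_score = allele_freqs.mean(axis=1)  # unweighted mean frequency
r, p = pearsonr(polygenic_score, country_iq)
print(f"r = {r:.2f}, p = {p:.3f}")
```

With only three made-up populations the p-value is meaningless; the sketch shows the shape of the calculation, not evidence for the claim.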

That such differences exist should not come as a shock to those who want to seek the truth, but the fact that Piffer’s paper wasn’t even considered for revision shows the bias in science against studies that report racial differences in intelligence.

Piffer’s data also corroborates Lynn and Meisenberg’s (2010) finding of a correlation of .907 with measured and estimated IQ. This shows that the differing allele frequencies affect IQ, which then affect a countries GDP, GNP, and over all quality of life.

In a sample with a huge n (over 100,000 subjects), cognitive ability tests were administered covering verbal-numerical reasoning, memory, and reaction time (a strong correlate of IQ itself; see Rushton and Jensen, 2005). Davies et al. (2016) discovered significant genome-wide SNP-based associations in 20 genomic regions, with significant gene-based associations at 46 loci! Once we have definitive evidence of which loci and genomic regions underlie intelligence differences between individuals, we can then study differences in allele frequencies between populations in depth (Piffer, 2015, was one of the first steps in this project).

Moreover, genes that influence intelligence help determine how well axons are encased in myelin, the fatty insulation that coats axons and allows fast signaling in the brain; thicker myelin means faster nerve impulses. The researchers used HARDI to measure water diffusion in the brain: if water diffuses rapidly in one direction, that indicates very fast connections, whereas broader diffusion indicates slower signaling and thus lower intelligence. It basically gives a picture of an individual’s mental speed. Thinking of reaction-time tests, where Asians beat whites who beat blacks, this could show how differing processing speeds between populations manifest themselves in reaction time. Since myelin is correlated with fast connections, we can make the inference that Asians have more than whites, who have more than blacks, on average. The researchers also say that, though it is a long way off, we may one day be able to increase intelligence by manipulating the genes responsible for myelin. This leads me to believe that there must be racial differences in myelin as well, following Rushton’s Rule of Three.

Since the mother’s IQ is the best predictor of the child’s IQ, this should really end the debate on its own. Sure on average, intelligent black mothers would birth intelligent children, but due to regression to the mean, the children would be less intelligent than the mother. JP Rushton also says that regression works in the opposite way. Both blacks and whites who fall below their racial means will have children who regress to the means of 85 and 100 respectively, showing the reality of the genetic mean in IQ between the races.

Why would differing allele frequencies lead to the same cognitive processes in the brain across genetically isolated populations? I’ve shown that brain circuitry varies with IQ-associated genes, and populations differ in this respect, as they do in all other differing genotypic and phenotypic traits.

East Asians have bigger brains, as shown by MRI studies. Rushton and Rushton (2001) showed that the three races differ in IQ, brain size, and 37 different musculoskeletal traits. We know that West Africans and West African-descended people have gene variants for fast-twitch muscle fibers (Type II) (Nielson and Christenson, 2001), while Europeans and East Asians tend toward slow-twitch fibers (Type I), suited for endurance. (East Africans have these as well, which allows for distance running, something fast-twitch fibers are poorly suited for; likewise, slow-twitch fibers are poorly suited for sprinting events.) Bengt Saltin showed that European distance runners have up to 90 percent slow-twitch fibers (see Entine, 2000)! So are genetic IQ differentials really that hard to believe? With so many traits showing strong genetic causes for stark phenotypic differences between the races, is it really implausible that populations also differ in intelligence, which is largely heritable?

Is it really plausible that differing populations would be the same cognitively, that they would have the same capacity for intelligence, even when their evolution occurred in differing climates? The races and ethnicities differ on so many variables, with differing genes responsible for them. Would IQ genes really be out of the question? Evolution didn’t stop from the neck up. Different populations faced different selection pressures, so different human traits evolved for better adaptation to each environment. Different traits clearly developed in genetically isolated populations that had no gene flow with one another for tens of thousands of years, and these differing evolutionary environments put different pressures on them, selecting some for high-IQ alleles and others for low-IQ alleles.

We are coming to a time when intelligence differences between populations will become an irrefutable fact. With better technology to see how differing genes or sets of genes affect our minds as well as our physiology, we will see that almost all human differences come down to differing allele frequencies along with differing gene expression. Following Rushton’s simple rule based on over 60 variables, East Asians will have the most high-IQ alleles, followed by Europeans and then blacks. The whole battery of cognitive ability tests conducted over the past 100 years shows that differences exist, yet we haven’t been able to fully explain them with GWAS and similar techniques. Charles Murray says that within the next 5 to 10 years we will have definitive proof that IQ genes exist. After that, it’s only a matter of time before it comes out that racial differences in IQ are due to differing allele frequencies as well as gene expression.