NotPoliticallyCorrect

Conceptual Arguments Against Hereditarianism

3300 words

Introduction

Hereditarianism is the theory that differences in psychology between individuals and groups have a ‘genetic’ or ‘innate’ (to capture the idea as it existed before the ‘gene’ was conceptualized) cause—which would therefore explain the hows and whys of, for example, the current social hierarchy. The term ‘racism’ has many referents—and on one of the many definitions of ‘racism’, one could say that the hereditarian theory is racist, since it attempts to justify and naturalize the current social hierarchy.

In what I hope is my last word on the IQ/hereditarian debate, I will provide four conceptual arguments against hereditarianism: (1) psychologists don’t ‘measure’ any’thing’ with their psychological tests, since there is no specified measured object, no object of measurement, and no measurement unit for any specific trait; only physical things can be measured, and psychological ‘traits’ are not physical, so they cannot be measured (Berka, 1983; Nash, 1990; Garrison, 2009); (2) there is no theory or definition of “intelligence” (Lanz, 2000; Richardson, 2002; Richardson and Norgate, 2015; Richardson, 2017), so there can be no ‘measure’ of it—the example of temperature and thermometers will be briefly mentioned; (3) the logical impossibility of psychophysical reduction entails that mental abilities/psychological traits cannot be genetically inherited/transmitted; and (4) psychological theories are influenced by the current goings-on in society, and society is in turn influenced by psychological theories. These four objections are lethal to hereditarianism, the final one showing that psychology is not an ‘objective science.’

(i) The Berka/Nash measurement objection

The Berka/Nash measurement objection is simply: if there is no specified measured object, object of measurement, or measuring unit for the ‘trait’, then no ‘thing’ is truly being ‘measured’, as only physical things can be measured. Nash gives the example of a stick—the stick is the measured object, the length of the stick is the object of measurement (the property being measured), and inches, centimeters, etc., are the measuring units. Since the stick is in physical space, its property—its length—can be measured. Since psychological traits are not physical (this will come into play for (ii) as well), nor do they have a physical basis, there can be no ‘measuring’ of psychological traits. Indeed, scaling is merely accepted by fiat as a ‘measure’ of something. This, though, leads to confusion, especially among psychologists.

The most obvious problem with the theory of IQ measurement is that although a scale of items held to test ‘intelligence’ can be constructed, there are no fixed points of reference. If the ice point of water at one atmosphere fixes 273.16 K, what fixes 140 points of IQ? Fellows of the Royal Society? Ordinal scales are perfectly adequate for certain measurements. Moh’s scale of scratch hardness consists of ten fixed points from talc to diamond, and is good enough for certain practical purposes. IQ scales (like attainment test scales) are ordinal scales, but this is not really to the point, for whatever the nature of the scale it could not provide evidence for the property IQ or, therefore, that IQ has been measured. (Nash, 1990: 131)

In first constructing its scales and only then proceeding to induce what they ‘measure’ from correlational studies, psychometry has got into the habit of trying to do what cannot be done and doing it the wrong way round anyway. (Nash, 1990: 133)

The fact of the matter is, IQ tests don’t even meet the minimal conditions of measurement, since there is no non-circular definition of what this ‘general cognitive ability’ even is.

(ii) No theory or definition of intelligence

This also goes back to Nash’s critique of IQ (since there can be no non-circular definition of what “IQ tests” purport to measure): there is no theory or definition of intelligence, therefore there CAN BE no ‘measure’ of it. Imagine claiming to have measured temperature without any theory of temperature behind the measurement. Indeed, I have explained in another article that although IQ-ists like Jensen and Eysenck emphatically state that the ‘measuring’ of ‘intelligence’ with “IQ tests” is “just like” the measuring of temperature with thermometers, this claim fails, as there is no physical basis to psychological traits/mental abilities, so they cannot be measured. If “intelligence” is not like height or weight, then “intelligence” cannot be measured. “Intelligence” is not like height or weight. Therefore, “intelligence” cannot be measured.

We had a theory and definition of temperature, and then the measuring tool was constructed to measure the new construct. The construct of temperature was then verified independently of the instrument originally used to measure it—by the thermoscope, which was in turn verified against human sensation. Thus, temperature was verified in a non-circular way. “Intelligence tests”, on the other hand, are “validated” circularly: if a test correlates highly with older tests (like Terman’s Stanford-Binet), it is held that the new test ‘measures’ the construct of ‘intelligence’—even though none of the previous tests have themselves been validated!

Therefore, this, too, is a problem for IQ-ists—their scale was first constructed (to agree with the social hierarchy, no less; Mensh and Mensh, 1991), and then they set about trying to see what their scales ‘measure’ with correlational studies. But we know that two things being correlated does not mean that one causes the other—there could be some unknown third variable causing the relationship, or the relationship could be spurious. In any case, this conceptual problem is also a problem for the IQ-ist. IQ is nothing like temperature, since temperature is an actual physical measure that was verified independently of the instrument constructed to measure the construct in the first place.
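To make the third-variable worry concrete, here is a toy simulation (my own illustration, not from the cited authors; all numbers are invented): two tests that tap nothing but a shared, hypothetical ‘cultural exposure’ variable plus independent noise will still correlate strongly with one another—so a high correlation between a new test and an older one cannot, by itself, tell us what either test ‘measures’.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical third variable: exposure to the test constructors' culture.
# Note that no 'ability' construct exists anywhere in this simulation.
exposure = rng.normal(0, 1, n)

# Two "different" tests, each just exposure plus independent noise.
test_a = 0.8 * exposure + 0.6 * rng.normal(0, 1, n)
test_b = 0.8 * exposure + 0.6 * rng.normal(0, 1, n)

print(f"inter-test correlation: {np.corrcoef(test_a, test_b)[0, 1]:.2f}")
# ~0.64 -- "validating" test_b against test_a would succeed, vacuously.
```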

Claims of individuals as ‘intelligent’ (whatever that means) or not are descriptive, not explanatory—it is the reflection of one’s current “ability” (used loosely) in relation to their current age norms (Anastasi; Howe, 1997).

(iii) The logical impossibility of psychophysical reduction

I will start this section off with two (a priori) arguments:

Anything that cannot be described in material terms using words that only refer to material properties is immaterial.
The mind cannot be described in material terms using words that only refer to material properties.
Therefore the mind is immaterial; materialism is false.

and

If physicalism is true then all facts can be stated using a physical vocabulary.
But facts about the mind cannot be stated using a physical vocabulary.
So physicalism is false.

(Note that the arguments provided are valid, and I hold them to be sound; an objector would therefore need to identify and refute a premise.)
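For clarity, here is a minimal propositional rendering of the two arguments (my own formalization): write D(x) for ‘x can be described in material terms’, M(x) for ‘x is material’, m for the mind, P for ‘physicalism is true’, and S for ‘all facts can be stated using a physical vocabulary’. The first argument is a universal instantiation followed by modus ponens; the second is a modus tollens:

```latex
\[
\begin{array}{ll@{\qquad\qquad}ll}
1. & \forall x\,\bigl(\neg D(x) \rightarrow \neg M(x)\bigr) & 1. & P \rightarrow S \\
2. & \neg D(m)                                              & 2. & \neg S \\
\therefore & \neg M(m)                                      & \therefore & \neg P
\end{array}
\]
```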

Therefore, if facts about the mind cannot be stated using a physical vocabulary, and if the mind cannot be described in material terms using words that only refer to material properties, then there can, logically, be no such thing as ‘mental measurement’—no matter what IQ-ists try to tell you.

Different physical systems can give rise to the same mental phenomena—what is known as the argument from multiple realizability. Thus, since psychological traits/mental states are multiply realizable, it is impossible for mental kinds to reduce to physical kinds—the same mental kind can be realized by multiple physical states. Psychological states are either multiply realizable or they are type-identical to the physio-chemical states of the brain. The latter is a kind of mind-brain identity thesis—that the mind is identical to the states of the brain. Although mind and brain are correlated, this does not mean that the mind is the brain or that the mind can be reduced to physio-chemical states, as Putnam’s argument from multiple realizability concludes. If type-physicalism is true, then every mental property must be realized in exactly one physical way. But, empirically, it is highly plausible that mental properties can be realized in multiple ways. Therefore, type-identity theory is false.

Psychophysical laws are laws connecting mental abilities/psychological traits with physical states. But, as Davidson famously argued in his defense of Anomalous Monism, there can be no such laws linking mental and physical events. There are no mental laws, therefore there can be no scientific theory of mental states. Science studies the physical; the mental is not physical; thus, science cannot study the mental. Indeed, since there are no bridge laws linking the mental and the physical, and the mental is irreducible to and underdetermined by the physical, it follows that science cannot study the mental. Therefore, a science of the mind is impossible.

Further note that the claim “IQ is heritable” reduces to “thinking is heritable”, since the main aspect of test-taking is thinking. Thinking is a mental activity which results in a thought. If thinking is a mental activity which results in a thought, then what is a thought? A thought is a mental state of considering a particular idea or answer to a question, or of committing oneself to an idea or an answer. These mental states are, or are related to, beliefs. When one considers a particular answer to a question, they are paving the way to holding a certain belief; when they have committed themselves to an answer, they have committed themselves to a new belief. Since beliefs are propositional attitudes, believing p means adopting the belief attitude that p. So, since cognition is thinking, thinking is a mental process that results in the formation of a propositional belief. And since thinking depends on beliefs and desires (without beliefs and desires we would not be able to think), thinking (cognition) is irreducible to physical/functional states. The main aspect of (IQ) test-taking—thinking—is therefore irreducible to the physical, and physical states do not explain it.

(iv) Reflexivity in psychology

In this last section, I will discuss the reflexivity—circularity—problem for psychology. This is important for psychological theorizing since, to its practitioners, psychology is seen to be an ‘objective science.’ If you think about how psychology (and science generally) is practiced, it investigates third-personal, not first-personal, states. Thus, there can be no science of the mind (which is what psychology purports to be), and psychology therefore cannot be an ‘objective science’ as the hard sciences are. The ‘knowledge’ that we attain from psychology comes, obviously, from the study of people. As Wade (2010: 5) notes, the knowledge that people and society are the object of study “creates a reflexivity, or circular process of cause and effect, whereby the ‘objects’ of study can and do change their behavior and ideas according to the conclusions that their observers draw about their behavior and ideas.”

It is quite clear that such academic concepts do not arise independently—throughout the history of psychology, the discipline has been used in attempts to justify the social hierarchy of the time (as seen in 1900s America, Germany, and Britain). Psychological theories are influenced by current social goings-on and, thus, by the biases of the psychologists in question. “The views, attitudes, and values of psychologists influence the claims they make” (Jones, Elcock, and Tyson, 2011: 29).

… scientific ideas did not develop in a vacuum but rather reflected underlying political or economic trends.

The current social context influences the psychological discourse, and the psychological discourse influences the current social context. The a priori beliefs that one holds will influence what one chooses to study. An obvious example: hereditarian psychologists who believe there are innate differences in ‘IQ’ (they use ‘IQ’ and ‘intelligence’ interchangeably, as if there is an identity relation) will undertake certain studies in order to ‘prove’ that the relationship they believe to be true holds—that there is indeed a biological cause of differences in mental abilities within and between groups and individuals. Do note, however, that we have the data (blacks score lower on IQ tests) and one must then make an interpretation. So we have three possible scenarios: (1) differences in biology cause differences in IQ; (2) differences in experience cause differences in IQ; or (3) the tests are constructed to get the results the IQ-ists want in order to justify the current social hierarchy. Mensh and Mensh (1991) have succinctly argued for (3), while hereditarians argue for (1) and environmentalists argue for (2). While it is indeed true that one’s life experiences can influence their IQ scores, we have seen that it is logically impossible for genes to influence/cause mental abilities/psychological traits.

The only tenable answer is (3). Such relationships, as noted by Mensh and Mensh (1991), Gould (1996), and Garrison (2009), between test scores and the social hierarchy are interpreted by the hereditarian psychologist thusly: (1) our tests measure an innate mental ability; (2) if our tests measure an innate mental ability, then differences in the social hierarchy are due to biology, not environment; (3) thus, environmental differences cannot account for what is innate between individuals so our tests measure innate biological potential for intelligence.

The [IQ] tests do what their construction dictates; they correlate a group’s mental worth with its place in the social hierarchy. (Mensh and Mensh, 1991)

Richards (1997), in his book on racism in the history of psychology, identified 331 psychology articles published between 1909 (the year the ‘gene’ was first conceptualized, no less) and 1940 which argued for biology as a difference-maker for psychological traits, while noting that 176 articles for the ‘environment’ side were published in that same period.

Note that the racist views of the psychologists in question more than likely influenced their research interests—they set out to ‘prove’ their a priori biases. Indeed, they even modeled their tests after such biases. Tests that agreed with their a priori presuppositions about who was or was not intelligent were kept, whereas those that did not agree with those notions were thrown out (as noted by Hilliard, 2012). This is just as Jones, Elcock, and Tyson (2011: 67) note with the ‘positive manifold’ (‘general intelligence’):

Subtests within a battery of intelligence tests are included on the basis of them showing a substantial correlation with the test as a whole, and tests which do not show such correlations are excluded.

From this, it directly follows that psychometry (and psychology) are not sciences and do not ‘measure’ anything (returning to (i) above). What psychometrics (and psychology) do is attempt to use their biased tests in order to sort individuals into where they ‘belong’ on the social hierarchy. Standardized testing (IQ tests were one of the first standardized tests, along with the SAT)—and by proxy psychometrics—is NOT a form of measurement. The hierarchy that the tests ‘find’ is presupposed to exist and then constructed into existence using the test to ‘prove’ their biases ‘right.’
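The inclusion rule quoted above can be made concrete with a toy simulation (my own sketch, with invented numbers): start with candidate items tapping several independent ‘abilities’, keep only the items that correlate best with the provisional total, and the surviving battery comes out far more internally consistent than the pool it was drawn from—a ‘positive manifold’ manufactured by the selection rule itself.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000

# Three independent "abilities"; no general factor exists in these data.
# Ability 0 just happens to be over-represented in the candidate pool.
abilities = rng.normal(0, 1, (3, n))
family = np.array([0] * 30 + [1] * 15 + [2] * 15)  # 60 candidate items
items = 0.7 * abilities[family] + 0.7 * rng.normal(0, 1, (60, n))

def mean_inter_item_r(mat):
    r = np.corrcoef(mat)
    return r[np.triu_indices_from(r, k=1)].mean()

# The quoted inclusion rule: keep items that correlate substantially with
# the provisional total score; exclude those that do not.
total = items.sum(axis=0)
item_total_r = np.array([np.corrcoef(item, total)[0, 1] for item in items])
kept = items[np.argsort(item_total_r)[-30:]]

print(f"mean inter-item r, full pool: {mean_inter_item_r(items):.2f}")  # ~0.18
print(f"mean inter-item r, kept set:  {mean_inter_item_r(kept):.2f}")   # ~0.45
```

The kept set looks as if it ‘measures’ one general ability only because the rule discarded everything that did not fit.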

Indeed, Hilliard (2012) noted that in South Africa in the 1950s there was a 15-point difference in IQ between two white cultural groups. Rather than fan the flames of political tension between the groups, the test was changed in order to eliminate the difference. The same, she notes, was the case regarding IQ differences between men and women—Terman eliminated such differences by picking certain items that favored one group and balancing them against items favoring the other so that the groups scored near-equally. These are two great examples from the 20th century that demonstrate the reflexivity in psychology—how one’s a priori biases influence what one studies and the types of conclusions one draws from data.
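Terman’s balancing procedure, as described here, can likewise be sketched in a few lines (a hypothetical illustration with invented numbers): when candidate items differ in which group they favor, a test constructor can dial a group gap in or out simply by choosing which items to keep.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4_000
group = rng.integers(0, 2, n)  # membership in one of two groups

# 40 candidate items; each group is more familiar with the content of a
# different half (different, not better or worse, cultural exposure).
edge = np.where(np.arange(40) < 20, 0.5, -0.5)  # group-1 advantage per item
scores = rng.normal(0, 1, (40, n)) + np.outer(edge, group)

def gap(selected):
    total = scores[selected].sum(axis=0)
    return total[group == 1].mean() - total[group == 0].mean()

balanced = np.arange(40)   # keep 20 items favoring each group
one_sided = np.arange(20)  # keep only the items favoring group 1

print(f"balanced battery gap:  {gap(balanced):+.1f}")   # ~0
print(f"one-sided battery gap: {gap(one_sided):+.1f}")  # ~+10
```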

Psychology, at least when it comes to racial differences in ‘IQ’, is being used to confirm pre-existing prejudices, not to find any ‘new objective facts.’ “… psychology [puts] a scientific gloss on the accepted social wisdom of the day” (Christian, 2008: 5). This can be seen with a reading of the history of “IQ tests” themselves. The point is that psychology and society influence each other in a reflexive—circular—manner. Thus, psychology is not and cannot be an ‘objective science’, and when it comes to ‘IQ’ the biases that brought the tests to America—and, concurrently, shaped social policy—are still, albeit implicitly, believed today.

Psychology originally developed in the US in the 19th century in an attempt to fix societal problems—there needed to be a science of the mind, and psychology purported to be just that. A science of ‘human nature’ was needed, and it was for this reason that psychology developed in the US. The first US psychologists were trained in Germany and then returned to the US and developed an American psychology. Do note, though, that in Germany psychology was seen as the science of the mind, while in America it turned out to be the science of behavior (Jones, Elcock, and Tyson, 2011). This also speaks to the eugenic views held by certain IQ-ists in the 20th and into the 21st century.

In Nazi Germany, Jewish psychologists were purged since their views did not line up with the Nazi regime.

Psychology appealed to the Nazi Party for two reasons: because psychological theory could be used to support Nazi ideology, and because psychology could be applied in service to the state apparatus. Those psychologists who remained adapted their theories to suit Nazi ideology, and developed theories that demonstrated the necessary inferiority of non-Aryan groups (Jones and Elcock, 2001). These helped to justify actions by the state in discriminating against, and ultimately attempting to eradicate, these other groups. (Jones, Elcock, and Tyson, 2011: 38-39)

These examples show that psychology is influenced by society and that society, in turn, is influenced by psychological theorizing. Clearly, what psychologists choose to study is a reflection of a society’s social concerns. In the case of IQ, crime, etc., the psychologist attempts to naturalize and biologicize such differences in order to explain them as ‘innate’ or ‘genetic’. The rise of IQ testing in America also coincided with the worry that ‘national intelligence’ was declining, and so the IQ test would need to be used to ‘screen’ prospective immigrants. (See Richardson, 2011 for an in-depth consideration of the tests and the conditions the testees were exposed to on Ellis Island; also see Gould, 1996.)

Conclusion

(i) The Berka/Nash measurement objection is one of the most lethal arguments against the IQ-ist. If IQ-ists cannot state the specified measured object, the object of measurement, and the measuring unit for IQ, then they cannot say that any’thing’ is being ‘measured’ by ‘IQ tests.’ This brings us to (ii): since there is no theory or definition of what is being ‘measured’, and since the tests were constructed before any theory, there will necessarily be a built-in bias to what is being ‘measured’ (namely, so-called ‘innate mental potential’). (iii) Since it is logically impossible for the mental to reduce to the physical—facts about the mind cannot be stated using a physical vocabulary, nor can the mind be described using words that only refer to material properties—this is another blow to the claim that psychology is an ‘objective science’ and that some’thing’ is being ‘measured’ by its tests (constructed to agree with a priori biases). And (iv) the bias that is inherent in psychology (on both the right and the left) influences practitioners’ theorizing and how they interpret data. Society has influenced psychology (and psychology has influenced society), and we only need to look at America and Nazi Germany in the 20th century to see that this holds.

The relationship between psychology and society is inseparable—it is a truism that what psychologists choose to study and how and why they formulate their conclusions will be influenced by the biases they already hold about society and how and why it is the way it is. For these reasons, psychology/psychometry are not ‘sciences’ and hereditarianism is not a logically sound position. Hereditarianism, then, stays what it was when it was formulated—a racist theory that attempts to biologicize and justify the current social hierarchy. Thus, one should not accept that psychologists ‘measure’ any’thing’ with their tests; one should not accept the claim that mental abilities can be genetically transmitted/inherited; and one should not accept the claim that psychology is an objective ‘science’, due to the reflexive relationship between psychology and society.

The arguments given show why hereditarianism should be abandoned—it is not a scientific theory; it merely attempts to naturalize inequalities between individuals and groups as biological (Mensh and Mensh, 1991; Gould, 1996; Garrison, 2009). Psychometrics (what hereditarians use to attempt to justify their claims) is, then, nothing more than a political ring.

Superiority, Psychometrics, and Measurement

2800 words

In a conversation with an IQ-ist, one may eventually find themselves discussing the concept of “superiority” or “inferiority” as it regards IQ. The IQ-ist may say that only critics of the concept of IQ place any sort of value-judgment on the number one gets when one takes an IQ test. But if the IQ-ist says this, then they are showing their ignorance regarding the history of the concept of IQ. The concept was, in fact, formulated to show who was more “intelligent”—“superior”—and who was less “intelligent”—“inferior.” But here is the thing: the terms “superior” and “inferior” are anatomic terms, which shows the folly of the attempted appropriation.

Superiority and inferiority

If one wants to find early IQ-ists talking about superiority and inferiority regarding IQ, one need only check Lewis Terman’s very first Stanford-Binet tests. His scales—now in their fifth edition—state that IQs between 120 and 129 are “superior”, while 130-144 is “gifted or very advanced” and 145-160 is “very gifted” or “highly advanced.” How strange… But the IQ-ist can say that they were just products of their time and that no serious researcher believes such foolish things—that one is “superior” to another on the basis of an IQ score. What about proximal IQs? Lateral IQs? Posterior IQs? Distal IQs? It is ridiculous to take anatomic terminology (which applies to physical things) and attempt to use it to describe mental “things.”

But perhaps the most famous hereditarian, Arthur Jensen, as I have noted, wrongly stated that heritability estimates can be used to estimate one’s “genetic standing” (Jensen, 1970) and that if we continue our current welfare policies we are in danger of creating a “genetic underclass” (Jensen, 1969). This, like the creation of the concept of IQ in the early 1900s, speaks to the hereditarian agenda and the reason for the IQ enterprise as a whole. (See Taylor, 1980 for a wonderful discussion of Jensen’s confusion regarding the concept of heritability.)

This is no surprise when you understand that IQ tests were created to rank people on a mental hierarchy that reflected the social hierarchy of the time—which would then be used as justification for their spot on that social hierarchy (Mensh and Mensh, 1991). So it is no surprise that anatomic terminology was hijacked in an attempt to forward eugenic ideas. But the eugenicists’ concept of superiority didn’t always pan out the way they wanted it to, as is evidenced a few decades before the conceptualization of standardized testing.

Galton attempted to show that those with the fastest reaction times were more intelligent, but when he found that the common man had just as quick a reaction time, he abandoned this test. Then Cattell came along and showed that no relationship existed between sensory perception and IQ scores. Finally, Binet showed that measures of the skull did not correspond with teachers’ assessments of who was or was not “intelligent.” Then, some decades later, Binet and Simon finally constructed a test that discriminated between whom they felt was or was not intelligent—that is, it discriminated by social class. This test was finally the “measure” that would differentiate between social classes, since it was based on a priori notions of an individual’s place in the social hierarchy (Garrison, 2009: 75). Binet and Simon’s “ideal city” would use test scores as a basis to shuttle people into the occupations they “should be” in, their scores purporting to show how they would work based on their “aptitudes” (Mensh and Mensh, 1991: 24; Garrison, 2009: 79). Bazemore-James, Shinaorayoon, and Martin (2017) write that:

The difference in racial subgroup mean scores mimics the intended outcomes of the original standardized IQ tests, with exception to Asian Americans. Such tests were invented in the 1910s to demonstrate the superiority of rich, U.S.-born, White men of northern European descent over non-Whites and recent immigrants (Gersh, 1987). By developing an exclusion-inclusion criteria that favored the aforementioned groups, test developers created a norm “intelligent” (Gersh, 1987, p. 166) population “to differentiate subjects of known superiority from subjects of known inferiority” (Terman, 1922, p. 656).

So, as one can see, this “superiority” was baked into IQ tests from the very start; the value-judgments, then, are not in the minds of IQ critics but are inherent in the scores themselves, as stated by the pioneers of IQ testing in America and the originators of the concept that would become IQ. Garrison (2009: 79) writes:

With this understanding it is possible to make sense of Binet’s thinking on intelligence tests as group differentiation. That is, the goal was to group children as intelligent and unintelligent, and to grade (value) the various levels of the unintelligent (also see Wolf 1973, 152–154). From the point of view of this goal, it mattered little whether such differences were primarily biological or environmental in origin. The genius of the theory rests in how it postulates one group as “naturally” superior to the other without the assumptions of biology, for reason had already been established as a natural basis for distinction, irrespective of the origin of differences in reasoning ability.

While Binet and Simon were agnostic on the nature-nurture debate, the test items they most liked were those that differentiated between the social classes the most (which means the items were consciously chosen for those goals). And reading about their “ideal city”, we can see that those who have higher test scores are “superior” to those who do not. They were operating under the assumption that they would be organizing society along class lines, with the tests serving as measures of group mental ability. For Binet, it did not matter whether the “intelligence he sought to define” was inherited or acquired; he simply assumed that it was a property of groups. So, in effect, “Binet and Simon developed a standard whereby the value of people’s thinking could be judged in a standard way, in a way that corresponded with the exigencies of social reproduction at that time” (Garrison, 2009: 94). The only thing such tests do is reproduce the differences they claim to measure—making them circular (Au, 2009).

The whole reason Binet and Simon developed their test, then, was to rank people from “best” to “worst”, “good” to “bad.” But this does not mean that there is some “thing” inherent in individuals or groups that is being “measured” (Nash, 1990). Thus, since their inception, IQ tests (and by proxy all standardized tests) have had pronouncements of such ranking built in, even if this is not explicitly stated today. Such “measures” are not scientific, and psychometrics is then shown for what it really is: “best understood as the development of tools for vertical classification and the production of social value” (Garrison, 2009: 5).

The goal, then, of psychometry is clear. Garrison (2009: 12) writes:

Ranking human worth on the basis of how well one competes in academic contests, with the effect that high ranks are associated with privilege, status, and power, suggests that psychometry is premised, not on knowledge of intellectual or emotional development, but on Anglo-American political ideals of rule by the best (most virtuous) and the brightest (most talented), a “natural aristocracy” in Jeffersonian parlance.

But such notions of superiority and inferiority, as I stated back in 2018, are nonsense when taken out of their anatomic context:

It should be noted that the terms “superior” and “inferior” are nonsensical, when used outside of their anatomic contexts.

An IQ-ist may exclaim “Are you saying that you can’t say that person A has superior sprinting ability or breath-holding ability!? Are you denying that people are different?!” No, what I’m saying is that it is absurd to take anatomic terminology (physical measures) and attempt to liken it to IQ—this is because nothing physical is being measured, not least because the mental isn’t physical nor reducible to it.

IQ-ists presumed to measure one’s “intelligence” and then stated that one has ‘superior’ “intelligence” to another—and that IQ tests were measuring this “superiority”. However, psychometrics is not a form of measurement—rankings are not measures.

Knowledge becomes reducible to a score with regard to standardized testing, so students—and in effect their learning and knowledge—are reduced to their scores on these tests. And so, “such inequalities [with the SAT, which holds for all standardized testing] are structured into the very foundations of standardized test construction itself” (Au, 2009: 64). And what is built into a test can also be built out of it (Richardson, 1990, 2000; Hilliard, 2012).

Measurement

In first constructing its scales and only then proceeding to induce what they ‘measure’ from correlational studies, psychometry has got into the habit of trying to do what cannot be done and doing it the wrong way round anyway. (Nash, 1990: 133)

…psychometry fails to meet its claim of measurement and … its object is not the measurement of nonphysical human attributes, but the marking of some human beings as having more worth or value than other human beings … Psychometry’s claim to measurement serves to veil and justify the fundamentally political act of marking social value, and the role this practice plays in legitimating vast social inequalities. (Garrison, 2009: 30-31)

One of the best examples of a valid measure is temperature—and it has a long history (Chang, 2007). It is valid because there is a well-accepted theory of temperature: a physical property that quantitatively expresses heat and cold. Thermometers were invented to quantify temperature, whereas IQ tests were invented to quantify “intelligence.” Those like Jensen attempt to draw the analogy between temperature and IQ, thermometers and IQ tests: thermometers measure temperature with a high degree of reliability, and so too, Jensen claims, do IQ tests.

So, IQ-ists claim, temperature is what thermometers measure, by definition; therefore intelligence is what IQ tests measure, by definition. But there is a problem with claims such as this. Temperature was verified independently of the measuring device originally used to measure it. Fixed points were first established, and then numerical thermometers could be constructed, along with a procedure for assigning numbers to degrees of heat between and beyond the fixed points. The thermoscope was used to establish the fixed points; since the thermoscope itself has no fixed points, we do not have to rely circularly on the concept of fixed points for reference. If a thermoscope reading rises and falls, we can rightly infer, for instance, that the temperature of blood is not stable. But what validates the thermoscope? Human sensation. When we put our hand into scalding water we sense that it is hot, and when we put the thermoscope into the same water we note that it rises rapidly. The thermoscope’s agreement with our basic sensations of ‘hot’ and ‘cold’ thus justifies, in a non-circular way, the claim that temperature is truly being measured. We are trusting the physical sensation we get from whatever we are touching, and from this we can infer that thermoscopes validate thermometers, making the concept of temperature a true, non-circularly validated measure of hot and cold. (See Chang, 2007 for a full discussion of the measurement of temperature.)

Thermometers could be tested by the criterion of comparability, whereas IQ tests are “validated” circularly—against tests of educational achievement, against other IQ tests (which were not themselves validated), and against job performance (Howe, 1997; Richardson and Norgate, 2015; Richardson, 2017)—which makes the “validation” circular, since IQ tests and achievement tests are different versions of the same test (Schwartz, 1975).

For example, take intro chemistry. When one takes the intro course, one sees how things are measured. Chemists may measure in moles, grams, the physical state of a substance, etc. We may measure water displacement, reactions between different chemicals, or whatnot. And although chemistry does not reduce to physics, these are all actual physical measures.

But the same cannot be said for IQ (Nash, 1990). We can rightly say that one person scores higher than another on an IQ test, but that does not signify that some “thing” is being measured—because, to use the temperature example again, there is no independent validation of the “construct.” IQ is a (latent) construct, whereas temperature is a quantitative measure of hot and cold. Temperature really exists; the same cannot be said about IQ or “intelligence.” The concept of “intelligence” does not refer to something like weight or temperature (Midgley, 2018).

Physical properties are observables. We observe the mercury in a thermometer change based on the temperature inside a building or outside it. One may say that we observe “intelligence” daily, but that is NOT a “measure”—it is just a descriptive claim. Blood pressure is another physical measure: it refers to the pressure in the large arteries of the circulatory system, produced by the heart pumping blood. An IQ-ist may say that intelligence is the emergent product of thinking, that this is due to the brain, and that correlations between life outcomes, IQ tests, and educational achievements then validate the measure. But, as noted above, this is circular. The two examples given—blood pressure and temperature—are real things that are physically measurable, unlike IQ (a latent construct).

It should also be noted that Eysenck claimed that if the measurement of temperature is scientific, then so is the measurement of intelligence. However, this claim fails—thermometers are not analogous to standardized scales—as Nash (1990: 131) notes:

In order to measure temperature three requirements are necessary: (i) a scale, (ii) some thermometric property of an object and, (iii) fixed points of reference. Zero temperature is defined theoretically and successive interval points are fixed by the physical properties of material objects. As Byerly (p. 379) notes, that ‘the length of a column of mercury is a thermometric property presupposes a lawful relationship between the order of length and the temperature order under certain conditions.’ It is precisely this lawful relationship which does not exist between the normative IQ scale and any property of intelligence.
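The ‘lawful relationship’ Byerly refers to can actually be written down: for a mercury-in-glass thermometer over everyday ranges, column length tracks temperature approximately linearly (a standard physics idealization, not a formula from Nash):

```latex
\[
L(T) \;\approx\; L_0\,\bigl(1 + \beta\,(T - T_0)\bigr), \qquad \beta > 0
\]
```

where L_0 is the column length at a reference temperature T_0 and β is the (approximately constant) apparent expansion coefficient of mercury relative to glass. Because β is positive and stable, the order of lengths reproduces the order of temperatures—and it is precisely this kind of law that has no analogue connecting the normative IQ scale to any property ‘intelligence.’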

This is where IQ-ists go the most wrong: they emphatically state that their tests are measuring SOMETHING!—something important for life success, since test scores correlate with life outcomes. But there is no precise specification of the measured object, no object of measurement, and no measurement unit, which “means that the necessary conditions for metrication do not exist [for IQ]” (Nash, 1990: 145).

Since IQ tests have a scoring system, the general impression is that IQ tests measure intelligence just as thermometers measure temperature—but this is a nonsense claim. IQ is an artifact of the test’s norming population. IQ points do not reflect any inherent property of individuals; they reflect one’s relation to the society one is in (since all standardized tests are proxies for social class).
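To see why IQ points are norm-relative by construction, consider the usual deviation-IQ scoring convention (a standard formula, stated here for illustration):

```latex
\[
\mathrm{IQ}(x) \;=\; 100 \;+\; 15 \cdot \frac{x - \bar{x}_{\text{norm}}}{s_{\text{norm}}}
\]
```

where x is a raw score and x̄_norm and s_norm are the mean and standard deviation of raw scores in the norming sample. The mean of 100 and the spread of 15 are stipulated, not discovered; re-norm the test on a different population and every IQ changes without anything changing in any test-taker.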

Conclusion

One only needs to read the history of IQ testing—and standardized testing as a whole—to see how and why these tests were first devised. From their beginnings with Binet, and then over to Terman, Yerkes, and Goddard, the goal has been clear: enact eugenic policies on those deemed “unintelligent” by IQ tests, who just so happen to correspond with the lower classes in virtue of how the tests were constructed—which goes back originally to Binet and Simon. The history of the concept makes it clear that it is not based on any kind of measurement theory, as blood pressure and temperature are. It is based on a priori notions of the structure and distribution of “intelligence”, which then reproduce the social structure and “justify” notions of superiority and inferiority on the basis of “intelligence tests” (Mensh and Mensh, 1991; Au, 2009; Garrison, 2009).

The attempts to hijack anatomic terminology, as I have shown, are nonsense—no one applies the rest of the anatomic vocabulary (proximal, distal, lateral, posterior) to mental “things”—and the first IQ-ists were explicit about what they were attempting to “show”, which still holds for all standardized testing today.

Binet, Terman, Yerkes, Goddard, and others all had their own priors, which led them to construct tests in such a way as to reach their desired conclusions. No “property” is being “measured” by these tests, nor can they be used to show one’s “genetic standing” (Jensen, 1970), which implies that one is “genetically superior” (this reading can be justified by Jensen’s interview with American Renaissance and his comments on the “genetic enslavement” of a group if we continued our welfare policies).

Physiological measures, such as blood pressure, and measures of hot and cold, such as temperature, are valid measures and are in no way, shape, or form—contra Jensen—like the concept of IQ/”intelligence”, which Jensen conflates (Edwards, 1973). Intelligence (which is extra-physical) cannot be measured (see Berka, 1983, and Nash, 1990: chapter 8 for a discussion of Berka’s measurement objection).

For these reasons, we should not claim that IQ tests ‘measure’ “intelligence”, nor that they measure one’s “genetic standing” or how “superior” one is to another; we should instead recognize that psychometrics is nothing more than a political ring.

Race, Test Bias, and ‘IQ Measurement’

1800 words

The history of standardized testing—including IQ testing—is contentious. What causes score distributions to differ between groups of people? I have stated at least four candidate explanations for the test gap:

(1) Differences in genes cause differences in IQ scores;

(2) Differences in environment cause differences in IQ scores;

(3) A combination of genes and environment cause differences in IQ scores; and

(4) Differences in IQ scores are built into the test based on the test constructors’ prior biases.

I hold to (4) since, as I have noted, the hereditarian-environmentalist debate is frivolous. There is, as I have been saying for years now, no agreed-upon definition of ‘intelligence’, given the disparate answers from the ‘experts’ (Lanz, 2000; Richardson, 2002).

For the lack of such a definition only reflects the fact that there is no worked-out theory of intelligence. Having a successful definition of intelligence without a corresponding theory would be like having a building without foundations. This lack of theory is also responsible for the lack of some principled regimentation of the very many uses the word ‘intelligence’ and its cognates are put to. Too many questions concerning intelligence are still open, too many answers controversial. Consider a few examples of rather basic questions: Does ‘intelligence’ name some entity which underlies and explains certain classes of performances, or is the word ‘intelligence’ only sort of a shorthand-description for ‘being good at a couple of tasks or tests’ (typically those used in IQ tests)? In other words: Is ‘intelligence’ primarily a descriptive or also an explanatorily useful term? Is there really something like intelligence or are there only different individual abilities (compare Deese 1993)? Or should we turn our backs on the noun ‘intelligence’ and focus on the adverb ‘intelligently’, used to characterize certain classes of behaviors? (Lanz, 2000: 20)

Nash (1990: 133-4) writes:

Always since there are just a series of tasks of one sort or another on which performance can be ranked and correlated with other performances. Some performances are defined as ‘cognitive performances’ and other performances as ‘attainment performances’ on essentially arbitrary, common sense grounds. Then, since ‘cognitive performances’ require ‘ability’ they are said to measure that ‘ability’. And, obviously, the more ‘cognitive ability’ an individual possesses the more that individual can achieve. These procedures can provide no evidence that IQ is or can be measured, and it is rather besides the point to look for any, since that IQ is a metric property is a fundamental assumption of IQ theory. It is impossible that any ‘evidence’ could be produced by such procedures. A standardised test score (whether on tests designated as IQ or attainment tests) obtained by an individual indicates the relative standing of that individual. A score lies within the top ten percent or bottom half, or whatever, of those gained by the standardisation group. None of this demonstrates measurement of any property. People may be rank ordered by their telephone numbers but that would not indicate measurement of anything. IQ theory must demonstrate not that it has ranked people according to some performance (that requires no demonstration) but that they are ranked according to some real property revealed by that performance. If the test is an IQ test the property is IQ — by definition — and there can in consequence be no evidence dependent on measurement procedures for hypothesising its existence. The question is one of theory and meaning rather than one of technique. It is impossible to provide a satisfactory, that is non-circular, definition of the supposed ‘general cognitive ability’ IQ tests attempt to measure and without that definition IQ theory fails to meet the minimal conditions of measurement.

This is similar to Mary Midgley’s critique of ‘intelligence’ in her last book before her death, What Is Philosophy For? (Midgley, 2018). The ‘definitions’ of ‘intelligence’ and, along with them, its ‘measurement’ have never been satisfactory. Haier (2016: 24) refers to Gottfredson’s ‘definition’ of ‘intelligence’, stating that ‘intelligence’ is a ‘general mental ability.’ But if that is the case—if it is a ‘general mental ability’ (g)—then ‘intelligence’ does not exist, because ‘g’ does not exist as a property in the brain. Lanz’s (2000) critique is also like Howe’s (1988; 1997): ‘intelligence’ is a descriptive, not explanatory, term.

Now that the concept of ‘intelligence’ has been covered, let’s turn to race and test bias.


Test items are biased when they have different psychological meanings across cultures (He and van de Vijver, 2012: 7). If items have different meanings across cultures, then the tests will not reflect the same ‘ability’ between cultures. Being exposed to the knowledge on a test—and its correct usage—is imperative for performance. For if one has not been exposed to the content on the test, how can one be expected to do well? Indeed, there is much evidence that minority groups are not acculturated to the items on the test (Manly et al, 1997; Ryan et al, 2005; Boone et al, 2007). This is what IQ tests measure: acculturation to the culture of the tests’ constructors, the school curriculum, and school teachers—aspects of white, middle-class culture (Richardson, 1998). Ryan et al (2005) found that reading level and educational level, not race or ethnicity, were related to performance on psychological tests.

Serpell et al (2006) took 149 white and black fourth-graders and randomly assigned them to ethnically homogeneous groups of three, working on a motion task on a computer. Both blacks and whites learned equally well, but the transfer outcomes were better for blacks than for whites.

Helms (1992) claims that standardized tests are “Eurocentric”, which is “a perceptual set in which European and/or European American values, customs, traditions and characteristics are used as exclusive standards against which people and events in the world are evaluated and perceived.” In her conclusion, she stated that “Acculturation and assimilation to White Euro-American culture should enhance one’s performance on currently existing cognitive ability tests” (Helms, 1992: 1098). There just so happens to be evidence for this (along with the studies referenced above).

Fagan and Holland (2002) showed that when exposure to different kinds of information was required, whites did better than blacks, but when the items were based on generally available knowledge, there was no difference between the groups. Fagan and Holland (2007) asked whites and blacks to solve problems found on usual IQ-type tests (e.g., standardized tests). Half of the items were solvable on the basis of generally available information, but the other items were solvable only on the basis of previously acquired specific knowledge, which indicated test bias (Fagan and Holland, 2007). They, again, showed that when knowledge is equalized, so are IQ scores. Thus, cultural differences in information acquisition explain IQ score differences. “There is no distinction between crassly biased IQ test items and those that appear to be non-biased” (Mensh and Mensh, 1991). This is because each item is chosen because it agrees with the distribution that the test constructors presuppose (Simon, 1997).

How do the neuropsychological studies referenced above, along with Fagan and Holland’s studies, show that test bias—and, with it, test construction—is built into the test and causes the observed distribution of scores? Simple: since the test constructors come from a higher social class, and the items chosen for inclusion are more likely to be familiar to certain cultural groups than to others, it follows that lower-scoring groups score lower because they were not exposed to the culturally-specific knowledge used on the test (Richardson, 2002; Hilliard, 2012).


The [IQ] tests do what their construction dictates; they correlate a group’s mental worth with its place in the social hierarchy. (Mensh and Mensh, 1991)

This is very easily seen in how such tests are constructed. The biases go back to the beginnings of standardized testing—among the first such tests being the SAT. The tests’ constructors had an idea of who was or was not ‘intelligent’ and so constructed the tests to show what they already ‘knew.’

…as one delves further … into test construction, one finds a maze of arbitrary steps taken to ensure that the items selected — the surrogates of intelligence — will rank children of different classes in conformity with a mental hierarchy that is presupposed to exist. (Mensh and Mensh, 1991)

Garrison (2009: 5) states that standardized tests “exist to assess social function” and that “Standardized testing—or the theory and practice known as “psychometrics” … is not a form of measurement.” The same way tests were constructed in the 1900s is the same way they are constructed today—with arbitrary items and a presupposed mental hierarchy that becomes baked into the tests by virtue of how they are constructed.

IQ-ists like to say that certain genes are associated with high intelligence (using their GWASes), but what could the argument possibly be that would show that variation in SNPs would cause variation in ‘intelligence’? What would a theory of that look like? How is the hereditarian hypothesis not a just-so story? Such tests were created to justify the hierarchies in society, the tests were constructed to give the results that they get. So, I don’t see how genetic ‘explanations’ are not just-so stories.

(1) Blacks and whites are different cultural groups.

(2) If (1), then they will have different experiences by virtue of being different cultural groups.

(3) So blacks and whites, being different cultural groups, will score differently on tests of ability, since they are exposed to different knowledge structures due to their different cultures—and so, all tests of ability are culture-bound.

Rushton and Jensen (2005) claim that the evidence they review over the past 30 years of IQ testing points to a ‘genetic component’ in the black-white IQ gap, relying on the flawed “Minnesota study of twins reared apart” (Joseph, 2018)—among other methods—to generate heritability estimates, and state that “The new evidence reviewed here points to some genetic component in Black–White differences in mean IQ.” The concept of heritability, however, is a flawed metric (Bailey, 1997; Schonemann, 1997; Guo, 2000; Moore, 2002; Rose, 2006; Schneider, 2007; Charney, 2012, 2013; Burt and Simons, 2014; Panofsky, 2014; Joseph et al, 2015; Moore and Shenk, 2016; Panofsky, 2016; Richardson, 2017). That G and E interact means that we cannot tease out “percentages” of nature’s and nurture’s “contributions” to a “trait.” So one cannot point to heritability estimates as if they point to a “genetic cause” of the score gap between blacks and whites. Further note that the gap has closed in recent years (Dickens and Flynn, 2006; Smith, 2018).
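The interaction point can be stated with the textbook variance decomposition that heritability estimates presuppose (a standard quantitative-genetics formulation, not the cited authors’ notation):

```latex
\[
V_P \;=\; V_G + V_E + 2\,\mathrm{Cov}(G,E) + V_{G\times E},
\qquad
h^2 \;=\; \frac{V_G}{V_P}
\]
```

Only when Cov(G,E) and the interaction term V_{G×E} are assumed to be zero does the phenotypic variance split into separate genetic and environmental shares; if genes and environments covary or interact, there is no unique V_G to read off, and h² cannot be interpreted as a genetic ‘percentage’ of a trait.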

And now here is another argument, based on the differing experiences that cultural groups have, which then explain IQ score differences (e.g., Mensh and Mensh, 1991; Manly et al, 1997; Kwate, 2001; Fagan and Holland, 2002, 2007; Cole, 2004; Ryan et al, 2005; Boone et al, 2007; Au, 2008; Hilliard, 2012; Au, 2013):

(1) If children of different class levels have experiences of different kinds with different material; and
(2) if IQ tests draw a disproportionate amount of test items from the higher classes; then
(c) higher class children should have higher scores than lower-class children.

Does Playing Violent Video Games Lead to Violent Behavior?

1400 words

President Trump was quoted the other day as saying: “We have to look at the Internet because a lot of bad things are happening to young kids and young minds and their minds are being formed,” Trump said, according to a pool report, “and we have to do something about maybe what they’re seeing and how they’re seeing it. And also video games. I’m hearing more and more people say the level of violence on video games is really shaping young people’s thoughts.” But outside of broad assertions like this—that playing violent video games causes violent behavior—does the claim stack up to what the scientific literature says? In short, no, it does not. (A lot of publication bias exists in this debate, too.) Why do people think that violent video games cause violent behavior? Mostly due to the APA and its broad claims with little evidence.

Just doing a cursory Google search of ‘violence in video games pubmed‘ brings up 9 journal articles, so let’s take a look at a few of those.

The first article is titled The Effect of Online Violent Video Games on Levels of Aggression, by Hollingdale and Greitemeyer (2014). They took 101 participants and randomized them to one of four experimental conditions: neutral offline; neutral online (Little Big Planet 2); violent offline; and violent online (Call of Duty: Modern Warfare). After participants played the games, they answered a questionnaire, and aggressive behavior was measured using the hot sauce paradigm (Lieberman et al, 1999). Hollingdale and Greitemeyer (2014) conclude that “this study has identified that increases in aggression are not more pronounced when playing a violent video game online in comparison to playing a neutral video game online.”

Staude-Muller (2011) finds that “it was not the consumption of violent video games but rather an uncontrolled pattern of video game use that was associated with increasing aggressive tendencies.” Przybylski, Ryan, and Rigby (2009) found that enjoyment, value, and desire to play in the future were strongly related to competence in the game. Players who were high in trait aggression, though, were more likely to prefer violent games, even though the violence didn’t add to their enjoyment of the game, and violent content contributed little overall variance to the satisfactions previously cited.

Tear and Nielsen (2013) failed to find evidence that violent video game playing leads to a decrease in pro-social behavior (Szycik et al, 2017 also show that video games do not affect empathy). Gentile et al (2014) show that “habitual violent VGP increases long-term AB [aggressive behavior] by producing general changes in ACs [aggressive cognitions], and this occurs regardless of sex, age, initial aggressiveness, and parental involvement. These robust effects support the long-term predictions of social-cognitive theories of aggression and confirm that these effects generalize across culture.” The APA (2015) even states that “scientific research has demonstrated an association between violent video game use and both increases in aggressive behavior, aggressive affect, aggressive cognitions and decreases in prosocial behavior, empathy, and moral engagement.” How true is all of this, though? Does playing violent video games truly increase aggression/aggressive behavior? Does it have an effect on violence in America and shootings overall?

No.

Whitney (2015) states that the video-games-cause-violence paradigm has “weak support” (pg 11) and that, pretty much, we should be cautious before taking this “weak support” as conclusive. He concludes that there is not enough evidence to establish a truly causal connection between violent video game playing and violent or aggressive behavior. Cunningham, Engelstatter, and Ward (2016) tracked the sales of violent video games and criminal offenses after those games were released. They found that violent crime actually decreased in the weeks following the release of a violent game. Of course, this does not rule out longer-term effects of violent game-playing, but in the short term, this is good evidence against the case that violent games cause violence. (Also see the Psychology Today article on the matter.)

We seem to have a few problems here, though. How are we to untangle the effects of movies and other forms of violent media that children consume? You can’t. So the researchers must assume that video games, and only video games, cause this type of aggression. I don’t see how one can logically state that, out of all the types of media children consume, violent video games—and not violent movies, cartoons, TV shows, etc.—cause aggression/violent behavior.

Back in 2011, in Brown v. Entertainment Merchants Association, the Supreme Court concluded that the claimed effects on violent/aggressive behavior were so small that they could not be untangled from the purported effects of other types of violent media. Ferguson (2015) found that violent video game playing had little effect on children’s mood, aggression levels, pro-social behavior, or grades. He also found publication bias in this literature (Ferguson, 2017). Contrary to claims that video games cause violence/aggressive behavior, video game playing was associated with a decrease in youth crime (Ferguson, 2014; Markey, Markey, and French, 2015, which is in line with Cunningham, Engelstatter, and Ward, 2016). You can read more about this in Ferguson’s article for The Conversation, along with his and others’ responses to the APA’s claim that violent video games cause violent behavior (with them stating that the APA is biased). (Also read the letter from 230 researchers on the bias in the APA’s Task Force on Violent Media.)

How would one actually untangle the effects of violent video game playing from the effects of all the other ‘problematic’ forms of media that also depict aggression/aggressive acts towards others, and pinpoint violent video games as the culprit? That’s right: one can’t. How would you realistically control for the fact that the child grows up around—and consumes—so much ‘violent’ media, seeing others become violent around him, etc.? How can you logically state that the video games are the cause? Some may think it logical that someone who plays a game like, say, Call of Duty for hours on end each day would be more likely to be violent/aggressive or to commit atrocities like school shootings. But none of these studies has ever come to the conclusion that violent video games may/will cause someone to kill or go on a shooting spree. It just doesn’t make sense. I can, of course, see the logic in believing that heavy play would lead to aggressive behavior/lack of pro-social behavior (say the kid played a lot of games and had little outside contact with people his age), but the literature on this subject should be enough to put claims like this to bed.

It’s just about impossible to untangle the so-called small effects of video games on violent/aggressive behavior from other types of media such as violent cartoons and violent movies. Who’s to say it’s the violent video games—and not the violent movies and violent cartoons, too—that ’cause’ this type of behavior? It’s logically impossible to distinguish these, so the small relationship between video games and violent behavior can safely be ignored. The media seems to be getting this right, which is a surprise (though I bet if Trump said the opposite—that violent video games didn’t cause violent behavior/shootings—these same people would be saying that they do), but a broken clock is right twice a day.

So Trump’s claim (even if he didn’t outright state it) is wrong, along with anyone else who would want to jump in and attempt to say that video games cause violence. In fact, the literature shows a decrease in violence after games are released (Ferguson, 2014; Markey, Markey, and French, 2015; Cunningham, Engelstatter, and Ward, 2016). The amount of publication bias (also see Copenhaver and Ferguson, 2015 where they show how the APA ignores bias and methodological problems regarding these studies) in this field (Ferguson, 2017) should lead one to question the body of data we currently have, since studies that find an effect are more likely to get published than studies that find no effect.

Video games do not cause violent/aggressive behavior/school shootings. There is literally no evidence that they are linked to the deaths of individuals, and with the small effects noted on violent/aggressive behavior due to violent video game playing, we can disregard those claims. (One thing video games are good for, though, is improving reaction time (Benoit et al, 2017). The literature is strong here; playing these so-called “violent video games” such as Call of Duty improved children’s reaction time, so wouldn’t you say that these ‘violent video games’ have some utility?)

Delaying Gratification and Social Trust

1900 words

Tests of delayed gratification, such as the Marshmallow Experiment, show that those who can better delay their gratification have better life outcomes than those who cannot. The children who succumbed to eating the treat while the researcher was out of the room had worse life outcomes than the children who could wait. This was chalked up to cognitive processes by the originator of the test, and individual differences in these cognitive processes were also used to explain individual differences between children in the task. However, it doesn’t seem to be that simple. I did write an article back in December of 2015 on the Marshmallow Experiment and how it was a powerful predictor, but after extensive reading into the subject, my mind has changed. New research shows that social trust has a causal effect on whether or not one will wait for the reward—if the individual trusted the researcher, he or she was more likely to wait for the second reward; if not, they were more likely to take what was offered in the first place.

In the famous Marshmallow Experiment, children were left with a marshmallow or other treat in front of them and told that if they could wait until the researcher returned to the room, they would get an extra treat. The children who could not wait and ate the treat while the researcher was out of the room had worse life outcomes than the children who could wait for the other treat. This led researchers to the conclusion that the ability to delay gratification depended on ‘hot’ and ‘cold’ cognitive processes. According to Walter Mischel, the originator of the study method, the ‘cool’ system is the thinking one, the cognitive system, which reminds you that you get a reward if you wait, while the ‘hot’ system is the impulsive system, the system that makes you want the treat now rather than wait for the other treat (Metcalfe and Mischel, 1999).

Some of these participants were followed up on decades later, and those who could better delay their gratification had lower BMIs (Schlam et al, 2014); scored better on the SAT (Shoda, Mischel, and Peake, 1990) and other tests of educational attainment (Ayduk et al, 2000); along with other positive life outcomes. So it seems that placing a single treat—whether a marshmallow or another sweet—in front of a child would predict his success, BMI, educational attainment and future prospects in life, and that differences in the underlying cognitive processes between individuals lead to the differences between them. But it’s not that simple.

After Mischel’s studies in the 50s, 60s and 70s on delayed gratification and positive and negative life outcomes (e.g., Mischel, 1958; Mischel, 1961; Mischel, Ebbesen, and Zeiss, 1972) it was pretty much an accepted fact that delaying gratification was somehow related to these positive life outcomes, while the negative life outcomes were partly a result of the inability to delay gratification. Then a study was conducted showing that the ability to delay gratification depends on social trust (Michaelson et al, 2013).

Using Amazon’s Mechanical Turk, participants (n = 78; 34 male, 39 female and 5 who preferred not to state their gender) completed online surveys and read three vignettes in order—trustworthy, untrustworthy and neutral—rating on a scale of 1-7 how likeable and trustworthy each individual was and how likely they themselves would be to share with them. Michaelson et al (2013) write:

Next, participants completed intertemporal choice questions (as in Kirby and Maraković, 1996), which varied in immediate reward values ($15–83), delayed reward values ($30–85), and length of delays (10–75 days). Each question was modified to mention an individual from one of the vignettes [e.g., “If (trustworthy individual) offered you $40 now or $65 in 70 days, which would you choose?”]. Participants completed 63 questions in total, with 21 different questions that occurred once with each vignette, interleaved in a single fixed but random order for all participants. The 21 choices were classified into 7 ranks (using the classification system from Kirby and Maraković, 1996), where higher ranks should yield higher likelihood of delaying, allowing a rough estimation of a subject’s willingness to delay using a small number of trials. Rewards were hypothetical, given that hypothetical and real rewards elicit equivalent behaviors (Madden et al., 2003) and brain activity (Bickel et al., 2009), and were preceded by instructions asking participants to consider each choice as if they would actually receive the option selected. Participants took as much time as they needed to complete the procedures.
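For concreteness, here is a minimal sketch of how such a ranking can be computed, assuming the standard hyperbolic discounting model V = A/(1 + kD) from Kirby and Maraković (1996). The $40 now vs. $65 in 70 days question is the one quoted above; the other two triples are hypothetical values within the paper’s stated ranges, and the paper’s actual binning differs in its details:

```python
# Sketch (not the authors' code) of ranking intertemporal choice questions.
# Under hyperbolic discounting, V = A / (1 + k*D): a person with discount
# rate k is indifferent between $V now and $A in D days when
# k = (A/V - 1) / D. Questions with a higher indifference-k are ones even
# steep discounters would wait for, so they get a higher rank, i.e. a
# higher likelihood of delaying, as in the quoted classification scheme.

def indifference_k(immediate, delayed, days):
    """Discount rate at which the immediate and delayed options are equal."""
    return (delayed / immediate - 1) / days

# One question from the quote ($40 now vs. $65 in 70 days); the other two
# are hypothetical values within the paper's stated ranges.
questions = [(40, 65, 70), (15, 35, 10), (80, 85, 60)]

for rank, (v, a, d) in enumerate(sorted(questions, key=lambda q: indifference_k(*q)), 1):
    print(f"rank {rank}: ${v} now vs. ${a} in {d} days "
          f"(indifference k = {indifference_k(v, a, d):.4f})")
```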

Manipulating trust in the absence of a reward influenced the subjects’ ability to delay gratification, as did how trustworthy the individual in the vignette was perceived to be. This suggests that, in the absence of rewards, when social trust is reduced, the ability to delay gratification is lessened. Because the fixed order in which the vignettes were read could have confounded the trust manipulation, they ran a second experiment with the same design using 172 participants (65 males, 63 females, and 13 who chose not to state their gender). In this experiment, though, computer-generated trustworthy, untrustworthy and neutral faces were presented to the participants. They were only paid 25 cents, though it has been shown that the compensation only affects turnout, not data quality (Buhrmester, Kwang, and Gosling, 2011).

In this experiment, each participant read a vignette with a particular face attached to it (trustworthy, untrustworthy or neutral), the faces having been used in previous studies on this matter. They found that manipulating trust in the absence of a reward influenced the participants’ willingness to delay gratification, with perceived trustworthiness influencing it as well.

Michaelson et al (2013) conclude that the ability to delay gratification is predicated on social trust, and present an alternative hypothesis for all of these positive and negative life outcomes:

Social factors suggest intriguing alternative interpretations of prior findings on delay of gratification, and suggest new directions for intervention. For example, the struggles of certain populations, such as addicts, criminals, and youth, might reflect their reduced ability to trust that rewards will be delivered as promised. Such variations in trust might reflect experience (e.g., children have little control over whether parents will provide a promised toy) and predisposition (e.g., with genetic variations predicting trust; Krueger et al., 2012). Children show little change in their ability to delay gratification across the 2–5 years age range (Beck et al., 2011), despite dramatic improvements in self-control, indicating that other factors must be at work. The fact that delay of gratification at 4-years predicts successful outcomes years or decades later (Casey et al., 2011; Shoda et al., 1990) might reflect the importance of delaying gratification in other processes, or the importance of individual differences in trust from an early age (e.g., Kidd et al., 2012).

Another paper (small n, n = 28) showed that children’s perception of the researcher’s reliability predicted delay of gratification (Kidd, Palmeri, and Aslin, 2012). They suggest that “children’s wait-times reflected reasoned beliefs about whether waiting would ultimately pay off.” So these tasks “may not only reflect differences in self-control abilities, but also beliefs about the stability of the world.” Children who had reliable interactions with the researcher waited about 4 times as long—12 minutes compared to 3 minutes—as those who did not think the researcher was trustworthy. Sean Last over at the Alternative Hypothesis uses these types of tasks (and other correlates) to argue that blacks have lower self-control than whites, citing studies showing correlations between IQ and delay of gratification. Though, as can be seen, alternative explanations for these phenomena make just as much sense, and the new experimental evidence on social trust and delaying gratification adds a new wrinkle to this debate. (He also briefly discusses ‘reasons’ why blacks have lower self-control, implicating the MAOA alleles. However, I have already discussed this, and blaming ‘genes for’ violence/self-control doesn’t make sense.)

Michaelson and Munakata (2016) show more evidence for the relationship between social trust and delaying gratification. When children (age 4 years, 5 months; n = 34) observed an adult behaving in a trustworthy manner, they were able to wait for the reward; when they observed the adult behaving in an untrustworthy manner, they ate the treat, presumably reasoning that the second marshmallow was unlikely to materialize even if they waited for the adult to return. Ma et al (2018) also replicated these findings in a sample of 150 Chinese children aged 3 to 5 years old. They conclude that “there is more to delay of gratification than cognitive capacity, and they suggest that there are individual differences in whether children consider sacrificing for a future outcome to be worth the risk.” Those who had higher levels of generalized trust waited longer, even when age and level of executive functioning were controlled for.

Romer et al (2010) show that people who are more willing to take risks may be more likely to engage in risky behavior that gives them first-hand insight into why delaying gratification and having patience lead to longer-term rewards. This is a case of social learning. However, people who are more willing to take risks also have higher IQs than people who are not. Since SES was not controlled for, it is possible that the ability to delay gratification in this study came down to SES, with lower-class people taking the money while higher-class people deferred. Raine et al (2002) showed a relationship between sensation seeking in 3-year-old children from Mauritius and their ‘cognitive scores’ at age 11. As usual, parental occupation was used as a measure of ‘social class’, and since SES does not capture all aspects of social class, controlling for the variable is of limited use. A confound here could be that children from higher classes have more chances to sensation-seek, which may raise IQ scores through cognitive enrichment. Either way, you can’t say that IQ ’causes’ delayed gratification since there are more robust predictors, such as social trust.

Though the relationship is there, what to make of it? Since exploring more leads to, theoretically, more chances to get things wrong and take risks by being impulsive, those who are more open to experience will have had more chances to learn from their impulsivity, and so learn to delay gratification through social learning and being more open. ‘IQ’ correlating with it, in my opinion, doesn’t matter too much; it just shows that there is a social learning component to delaying gratification.

In conclusion, there are alternative ways to look at the results from Marshmallow Experiments, such as social trust and social learning (being impulsive and seeing what occurs when an impulsive act is carried out may teach one, in the future, to wait for something). Though these experiments are new and the research is young, it’s very promising that there are other explanations for delayed gratification that don’t have to do with differences in ‘cognitive ability’, but depend on social trust—trust between the child and the researcher. If the child sees the researcher as trustworthy, then the child will wait for the reward, whereas if the child sees the researcher as untrustworthy, he will take the marshmallow or whatnot, believing the researcher won’t stick to his word. (I am also currently reading Mischel’s 2014 book The Marshmallow Test: Mastering Self-Control and will have more thoughts on this in the future.)

Race and Medicine: Is Race a Useful Category?

2450 words

The New York Times published an article on December the 8th titled What Doctors Should Ignore: Science has revealed how arbitrary racial categories are. Perhaps medicine will abandon them, too. It is an interesting article; while I do not agree with all of it, I do agree with some of it.

It starts off by talking about sickle cell anemia (SCA) and how it was once thought of as a ‘black disease’ because blacks were, it seemed, the only ones getting the disease. I recall back in high school having a Sicilian friend who said he ‘was black’ because Sicilians can get SCA, which is ‘a black disease’, indicating ‘black genes’. However, when I grew up and actually learned a bit about race, I learned that it was much more nuanced than that: whether or not a population has SCA is not based on race but on whether the climate/environment of the area bred mosquitoes that carry malaria. SCA still, to this day, remains a selective factor in the evolution of humans; malaria selects for the sickle cell trait (Elguero et al, 2015).

This is a good point brought up by the article: the assumption that SCA was a ‘black disease’ led us to overlook numerous non-blacks who had the sickle cell trait and could have gotten the help they needed—they were overlooked because of their race, on the assumption that they could not have this so-called ‘black disease’. Though it is understandable why it got labeled ‘a black disease’; malaria is more prevalent near the equator, and people whose ancestors evolved there are more likely to carry the trait. In regards to SCA, blacks are more likely to have it, but just because someone is black does not mean it is a foregone conclusion that he has the disease.

The article then goes on to state that the push to excise race from medicine may undermine a ‘social justice concept’: that is, the desire to rid the medical establishment of the so-called ‘unconscious bias’ that doctors have when dealing with minorities. I will not discount that this has some effect—however small—on racial health disparities, but I do not believe that the scope of the matter is as large as it is claimed to be. This is now causing medical professionals to integrate ‘unconscious bias training’ in the hopes of ridding doctors of bias—conscious or not—and thereby ameliorating racial health disparities. Maybe it will work, maybe it will not, but what I do know is that if you know someone’s race, you can use it as a roadmap to what diseases they may or may not have, what they may or may not be susceptible to, and so on. Of course, relying on race as a single data point when assessing someone’s possible health risks makes no sense at all.

The author then goes on to write that the terms ‘Negroid, Caucasoid, and Mongoloid’ were revealed as ‘arbitrary’ by modern genetic science. I wouldn’t say that; I would say, though, that modern genetic science has shown us the true extent of human variation while also showing that humans cluster into 5 distinct geographic categories, which we can call ‘race’ (Rosenberg et al, 2002; but see Wills, 2017 for the alternative view that the clusters identified by Rosenberg et al, 2002 are not races. I will cover this in the future). The author then, of course, goes on to commit the continuum fallacy, stating that “there are few sharp divides where one set of traits ends and another begins“. A basic rebuttal: can you point out where red and orange are distinct? How about violet and blue? Blue and cyan? Yellow and orange? If races don’t exist because there are “few sharp divides where one set of traits ends and another begins“, then, logically speaking, colors don’t exist either, because there are ‘few [if any] sharp divides‘ where one color ends and another begins.
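For concreteness, here is a toy sketch of the color version of the point (my own illustration, not from the article): stepping from red to orange in RGB, adjacent steps are nearly indistinguishable and no single step marks where ‘red’ ends, yet the endpoints are plainly different colors. Fuzzy boundaries do not abolish categories.

```python
# Toy illustration of the continuum point: walk from red to orange in RGB.
# Adjacent steps differ imperceptibly, and no step marks where "red" ends,
# yet the endpoints are clearly distinct colors.
red, orange = (255, 0, 0), (255, 165, 0)

steps = 10
for i in range(steps + 1):
    t = i / steps
    # Only the green channel differs between these two particular colors.
    g = round(red[1] + t * (orange[1] - red[1]))
    print(f"step {i:2d}: rgb(255, {g:3d}, 0)")
```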


The author also cites geneticist Sarah Tishkoff, who states that the human species is too young to have races as we define them. This is not true, as I have covered numerous times. The author then cites a study (Ng et al, 2008) in which Craig Venter’s genome was compared with that of the (in)famous [I love Watson] James Watson, focusing on six genes that had to do with how people respond to antipsychotics, antidepressants, and other drugs. It was discovered that Venter had two of the ‘Caucasian’ variants whereas Watson carried variants more common in East Asians. Watson would have gotten the wrong medicine had it been prescribed on the assumption of his race rather than on the predictive power of his own personal genome.

The author then talks about kidney disease and the fact that blacks are more likely to have it (Martins, Agodoa, and Norris, 2012). It was assumed that environmental factors caused the disparity in kidney disease between blacks and whites; then the APOL1 gene variant was discovered, which is related to worse kidney outcomes and occurs at higher frequencies in black Americans, even in blacks with well-controlled blood pressure (BP) (Parsa et al, 2013). The author then notes that black kidneys were seen as ‘more prone to failure’ than white kidneys, but this is, so it’s said, due to that one specific gene variant, and so individual genetic variation—not race—should be looked at in regards to kidney disease.

There is one area of medicine where treating patients based on race can clearly help: prostate cancer. Black men are more likely to be afflicted with prostate cancer in comparison to whites (Odedina et al, 2009; Bhardwaj et al, 2017), with it even being proposed that black men should get separate prostate screenings to save more lives (Shenoy et al, 2016). The author then writes that we still don’t know the genes responsible; however, I have argued in the past that diet explains a large amount—if not all—of the variance. (It’s not testosterone that causes it, as Ross et al, 1986 believe.)

The author then discusses another medical professional who argues that racial health disparities come down to the social environment. Things like BP could—most definitely—be driven by the social environment. It is assumed that the darker one’s skin, the higher one’s chance of having high BP—though this is not the case for Africans in Africa, so this is clearly an American-only problem. I could conjure up one explanation: the darker the individual, the more likely he is to believe he is being ‘pre-judged’, which then affects his state of mind and raises his BP. I discussed this briefly in my previous article Black-White Differences in Physiology. Williams (1992) reviewed evidence that social, not genetic, factors are responsible for BP differences between blacks and whites. He reviews one study showing that darker skin was associated with higher BP in lower-SES blacks, whereas for blacks with higher SES no such effect was noticed (Klag et al, 1991). Sweet et al (2007) showed that for lighter-skinned blacks BP decreased as SES rose, while for darker-skinned blacks BP increased as SES rose, implicating factors like ‘racism’ as the ultimate causes.

There is evidence for an effect of psychosocial factors on BP (Marmot, 1985). In a 2014 review of the literature, Cuffee et al (2014) identify less sleep—along with other psychosocial factors—as another cause of higher BP. It just so happens that blacks average about one hour less sleep than whites. This could cause a lot of the variation in BP differences between the races, so clearly in the case of this variable it is useful to know one’s race, along with one’s SES. Keep in mind that no actual ‘racism’ has to occur; the person only ‘needs to perceive it’, and his BP will rise in response to the perceived ‘racism’ (Krieger and Sidney, 1996). Harburg et al (1978) write in regards to Detroit blacks:

For 35 blacks whose fathers were from the West Indies, pressures were higher than those with American-born fathers. These findings suggest that varied gene mixtures may be related to blood pressure levels and that skin color, an indicator of possible metabolic significance, combines with socially induced stress to induce higher blood pressures in lower class American blacks.

Langford (1981) shows that when SES differences are taken into account, the black-white BP disparity vanishes. So there seems to be good evidence for the hypothesis that psychosocial factors, sleep deprivation, diet and ‘perceived discrimination’ (whether real or imagined) can explain a lot of this gap, so race and SES need to be looked at when BP is taken into account. These things are easily changeable: educate people on good diets, and teach people that, in most cases, no one is being ‘racist’ against them. This effect holds more for darker-skinned, lower-class blacks. And while I don’t deny that a small part of this could be due to genetic factors, the physiology of the heart and the way BP is regulated even by perceptions is pretty powerful and could have a lot of explanatory power for numerous physiological differences between races and ethnic groups.

Krieger (1990) states that in black women—not in white women—“internalized response to unfair treatment, plus non-reporting of race and gender discrimination, may constitute risk factors for high blood pressure among black women“. This could come into play in regards to black-white female differences in BP. Thomson and Lip (2005) show that “environmental influence and psychosocial factors may play a more important role than is widely accepted” in hypertension, but “There remain many uncertainties to the relative importance and contribution of environmental versus genetic influences on the development of blood pressure – there is more than likely an influence from both. However, there is now evidence to necessitate increased attention in examining the non-genetic influences on blood pressure …” Given how our physiology evolved to respond to environmental stimuli and to react in real time to perceived threats, it is no wonder that this kind of ‘perceived discrimination’ causes higher BP in certain lower-SES groups.

Wilson (1988) implicates salt as the reason why blacks have higher BP than whites. High salt intake could affect the body’s metabolism by causing salt retention, which influences blood plasma volume and cardiac output. However, whites have a higher salt intake than blacks, though blacks still ate twice the amounts recommended by the dietary guidelines (all ethnic subgroups analyzed in America over-consumed salt as well) (Fulgoni et al, 2014). Blacks are also more ‘salt-sensitive’ than whites (Sowers et al, 1988; Schmidlin et al, 2009; Sanada, Jones, and Jose, 2014), and salt sensitivity is heritable in blacks (Svetkey, McKeown, and Wilson, 1996). A slavery hypothesis does exist to explain higher rates of hypertension in blacks, citing everything from salt deficiency in the parts of Africa that supplied slaves to the Americas to the trauma of the slave trade and slavery in America. However, historical evidence does not bear this out, because “There is no evidence that diet or the resulting patterns of disease and demography among slaves in the American South were significantly different from those of other poor southerners” (Curtin, 1992), whereas Campese (1996) hypothesizes that blacks are more likely to get hypertension because they evolved in an area with low salt.

The NYT article concludes:

Science seeks to categorize nature, to sort it into discrete groupings to better understand it. That is one way to comprehend the race concept: as an honest scientific attempt to understand human variation. The problem is, the concept is imprecise. It has repeatedly slid toward pseudoscience and has become a major divider of humanity. Now, at a time when we desperately need ways to come together, there are scientists — intellectual descendants of the very people who helped give us the race concept — who want to retire it.

Race is a useful concept. Whether in medicine, population genetics, psychology, evolution, physiology, etc., it can elucidate a lot of causes of differences between races and ethnic groups—whether those causes are genetic or psychosocial in nature. That just attests to the power of suggestion and of psychosocial factors in regards to racial differences in physiological variables.

Finally, let’s see what the literature says about race in medicine. Bonham et al (2009) showed that both black and white doctors concluded that race is medically relevant but couldn’t decide why; they did state, however, that genetics did not explain most of the disparity in relation to race and disease, aside from obvious disorders like Tay-Sachs and sickle cell anemia. Philosophers accept the usefulness of race in the biomedical sciences (Andreason, 2009; Efstathiou, 2012; Hardimon, 2013; Winther, Millstein, and Nielsen, 2015; Hardimon, 2017), and Risch et al (2002) and Tang et al (2002) concur that race is useful in the biomedical sciences. (See also Dorothy Roberts’ TED Talk The problem with race-based medicine, which I will cover in the future.) Richard Lewontin, naturally, has hang-ups here, but his contentions are taken care of above. Even if race were a ‘social construct‘, as Lewontin says, it would still be useful in a biomedical sense; and since there are differences between races/ethnic groups, race most definitely is useful in a biomedical sense, even if at the end of the day individual variation matters more than racial variation. Just knowing someone’s race and SES, for instance, can tell you a lot about possible maladies they may have, even if, ultimately, individual differences in physiology and anatomy matter more in the biomedical context.

In conclusion, race is most definitely a useful concept in medicine, whether race is a ‘social construct’ or not. Just using Michael Hardimon’s race concepts, for instance, shows that race is extremely useful in the biomedical context, despite what naysayers may say. Yes, individual differences in anatomy and physiology trump racial differences, but knowing a few things like race and SES can tell you a lot about a particular person, for instance regarding blood pressure, resting metabolic rate, and so on. Denying that race is a useful concept in the biomedical sciences will lead to more—not fewer—racial health disparities, which is ironic because that’s exactly what race-deniers do not want. They will have to accept a race concept, and they should accept Hardimon’s socialrace concept, because it still allows race to be a ‘social construct’ while acknowledging that race and psychosocial factors interact to raise physiological variables. Race is a useful concept in medicine, and if the medical establishment wants to save more lives and actually end the racial disparities in health, then it should acknowledge the reality of race.

Sex and IQ

By Scott Jameson

800 words

The long and short of this issue is that something has to explain why most of the really, really smart people are men. There are two hypotheses: men have a higher mean, and men have a higher standard deviation. These don’t really compete, and so some people believe that both are true. Some believe neither, of course.

Let’s start with three facts:

  1. Women tend to get slammed by men on Raven’s Progressive Matrices; the second graph in the post linked above details this. It’s a difference of 5 IQ points on average, quite a bit, certainly more than on other IQ tests.
  2. Women tend to lose even harder in visuospatial measures. John Loehlin pointed out in The Handbook of Intelligence that the gap here was a whopping 13.5 points.
  3. Raven’s is so g loaded because your score is primarily driven by spatial and verbal-analytic abilities.

The biggest subtest difference is spatial, and I think that likely explains the abnormally large differences in Raven’s scores. Other IQ tests, like the SAT, hardly use visual abilities. Women do about as well as men on the SAT. I’ve also seen that the White-Asian gap is smaller on the SAT than on other IQ tests, and that gap is also driven in large part by spatial scores. Conversely, you might expect the SAT to go better for a hypothetical demographic that scores well in math and verbal abilities, but not especially well in spatial. By “hypothetical” I mean that these people make up something like a fifth of the kids at the Ivy Leagues, even more than you’d expect from an average IQ of, I don’t know, 111ish.

Off topic: these differences are probably going to be slighter still now that they’re fastidiously removing every useful element of the test in an effort to make it less “biased” by race. I wonder if colleges will just throw up their hands and start looking for kids who do well on the ACT. Moving on.

There are other sex differences in subtest scores. Pulling from Loehlin again: “females tend to have an advantage on verbal tests involving the fluent production of words belonging to a category, such as synonyms.” Women are known to do better on verbal than on math.

Loehlin also points out that girls do better at math in early childhood, but that boys outstrip them by the time it, uh, matters, when they take standardized tests in adolescence.

I have a wild hypothesis that men and women respectively being more oriented towards mathematical and verbal thought corresponds to observed differences in interests. Women are known to read more often than men on average, whereas male dominated activities like sports and video games often have a distinctly mathematical bent. My spurious hypothesis is that doing these different things differentially develops their abilities, constituting an example of crystallized intelligence rather than fluid intelligence; alternatively, they were differentially selected for ability to perform well on tasks that their respective sex does more of, in which case the abilities are innate.

Even if they aren’t innate, it’d be an instance of secondary heritability because evidence tends to show male-female personality differences as innate; in this scenario they are innately prone to practicing different abilities to different extents.

Loehlin points to Hedges and Nowell’s 1995 meta-analysis, showing a higher male variation in IQ and elucidating a few more small subtest differences. I’ve lifted a meaty bit here:

On average, females exhibited a slight tendency to perform better on tests of reading comprehension, perceptual speed, and associative memory, and males exhibited a slight tendency to perform better on tests of mathematics and social studies. All of the effect sizes were relatively small except for those associated with vocational aptitude scales (mechanical reasoning, electronics information, and auto and shop information) in which average males performed much better than average females. The effect sizes for science were slightly to moderately positive, and those for perceptual speed were slightly to moderately negative. Thus, with respect to the effect size convention, these data suggest that average sex differences are generally rather small.

In summary:

  1. There are sex differences in scores of various IQ subtests, including but not limited to female orientation towards verbal and male orientation towards mathematical ability.
  2. The largest of these differences is a substantial male advantage in spatial ability.
  3. On any IQ test that doesn’t weight subtests such that men and women perform equally by default, men tend to score a hair better.
  4. Men also have a higher standard deviation in IQ.

There are more male geniuses, particularly with respect to mathematical genius. There are also more mentally retarded males. I just explained why men tend to populate CERN, NASA, Silicon Valley, and lists of who’s died in the Running of the Bulls.
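The variance point is easy to verify with a toy calculation. The sketch below assumes normal distributions with equal means and a modestly higher male SD; the parameter values are illustrative assumptions of mine, not estimates from Hedges and Nowell. Even with no mean difference at all, the male:female ratio grows toward both tails:

```python
# Toy model: equal means, higher male SD. Higher variance alone
# over-represents males at both extremes of the distribution.
from statistics import NormalDist

men   = NormalDist(mu=100, sigma=15.6)  # hypothetical SDs for illustration
women = NormalDist(mu=100, sigma=14.4)

for cutoff in (130, 145):  # right tail: proportion scoring above the cutoff
    ratio = (1 - men.cdf(cutoff)) / (1 - women.cdf(cutoff))
    print(f"above {cutoff}: male:female ratio = {ratio:.1f}")

for cutoff in (70, 55):    # left tail: proportion scoring below the cutoff
    ratio = men.cdf(cutoff) / women.cdf(cutoff)
    print(f"below {cutoff}: male:female ratio = {ratio:.1f}")
```

With these made-up numbers the ratio is about 1.5 at 2 SD and over 2 at 3 SD, in both directions, which is the shape of the CERN-and-Running-of-the-Bulls observation.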

r/K Selection Theory: A Response to Truth-Justice

1700 words

After publishing the article debunking r/K selection theory last week, I decided to provide the article to a few sites that talk about r/K selection theory and its (supposed) application to humans and psychometric qualities. I posted it on a site called ‘truthjustice.net‘, and the owner of the site responded to me:

Phillippe Rushton is not cited a single time in AC’s book. In no way, shape or form does the Theory depend on his opinions.

AC outlines a very coherent theoretical explanation for the differing psychological behavior patterns existing on a bell curve distribution in our population. Especially when it comes to the functioning of the Amygdala for which we have quite a lot of data by now.

Leftists are indeed in favor of early childhood sexualization to increase the quantity of offspring which will inevitably reduce the quality and competitive edge of children. They rank significantly lower on the moral foundations of “loyalty”, “authority” and “purity” as outlined by Jonathan Haidt’s research into moral psychology. Making them more accepting of all sorts of degeneracy, deviancy, and disloyalty to the ingroup.

http://people.stern.nyu.edu/jhaidt/

They desire a redestribution of resources to the less well performing part of our population to reduce competitive stress and advantage while giving far less to charity and being significantly more narcissistic to increase their own reproductive advantage.

https://anepigone.blogspot.com/2008/11/more-income-more-votes-republicans_13.html

Their general mindset becomes more and more nihilistic, atheistic, anarchistic, anti-authority and overall r-selected the further left you go on the bell curve. A denial of these biological realities in our modern age is ridiculous when we can easily measure their psychology and brain functionality in all sorts of ways by now.

Does that now mean that AC is completely right in his opinions on r/K-Selection Theory? No, much more research is necessary to understand the psychological differences between leftists and rightists in full detail.

But the general framework outlined by r/K-Selection Theory very likely applies to the bell curve distribution in psychological behavior patterns we see in our population.

I did respond; however, he removed my comment and banned me after I published my response. My response is here:

“Phillippe Rushton is not cited a single time in AC’s book. In no way, shape or form does the Theory depend on his opinions.”

Meaningless. He uses the r/K continuum so the link in my previous comment is apt.

“AC outlines a very coherent theoretical explanation for the differing psychological behavior patterns existing on a bell curve distribution in our population. Especially when it comes to the functioning of the Amygdala for which we have quite a lot of data by now.”

No, he doesn’t.

1) Psychological traits are not normally distributed,

2) even if r/K were a valid paradigm, it would not pertain to within species variation,

3) it’s just a ‘put these traits on one end that I don’t like and these traits at the other end that I like and that’s my team while the other team has all of the bad traits’ thing,

4) his theory literally rests on the r/K continuum proposed by Pianka. Furthermore, no experimental rationale “was ever given for the assignment of these traits [the r/K traits Pianka inserted into his continuum] to either category” (Graves, 2002: 135), and

5) the r/K paradigm was discredited in the late 70s (see Graves 2002 above for a review)

“Leftists are indeed in favor of early childhood sexualization to increase the quantity of offspring which will inevitably reduce the quality and competitive edge of children. They rank significantly lower on the moral foundations of “loyalty”, “authority” and “purity” as outlined by Jonathan Haidt’s research into moral psychology. Making them more accepting of all sorts of degeneracy, deviancy, and disloyalty to the ingroup.”

I love Haidt. I’ve read his book and all of his papers and articles. So you notice a few things. Then see the (discredited) r/K paradigm. Then you say “oh! liberals are bad and are on the r side while conservatives are K!!”

Let me ask you this: where does alpha-selection fall into this?

“They desire a redestribution of resources to the less well performing part of our population to reduce competitive stress and advantage while giving far less to charity and being significantly more narcissistic to increase their own reproductive advantage.”

Oh… about that… liberals have fewer children than conservatives. Liberals are also more intelligent than conservatives. So going by Rushton’s r/K model, liberals are K while conservatives are r (conservatives are less intelligent and have more children). So the two cornerstones of the (discredited) r/K continuum show conservatives breeding more and being less intelligent, with the reverse for liberals. So who is ‘r’ and ‘K’ again?

“Their general mindset becomes more and more nihilistic, atheistic, anarchistic, anti-authority and overall r-selected the further left you go on the bell curve. A denial of these biological realities in our modern age is ridiculous when we can easily measure their psychology and brain functionality in all sorts of ways by now.”

‘r’ and ‘K’ are not adjectives (Anderson, 1991: 57).

Why does no one understand r/K selection theory? You are aware that r/K selection theory is density-dependent selection, correct?

“Does that now mean that AC is completely right in his opinions on r/K-Selection Theory? No, much more research is necessary to understand the psychological differences between leftists and rightists in full detail.”

No, he’s horribly wrong with his ‘theory’. I don’t deny psych differences between libs and cons, but to put them on some (discredited) continuum makes no sense in reality.

“But the general framework outlined by r/K-Selection Theory very likely applies to the bell curve distribution in psychological behavior patterns we see in our population.”

No, it doesn’t. Psych traits are not normally distributed (see above). Just like Rushton, AC saw that some things ‘fit’ into this (discredited) continuum. What’s that mean? Absolutely nothing. He doesn’t even cite papers for his assertion; he called Pianka a leftist and said that he tried to sabotage the theory because he thought that it described libs (huh? this makes no sense). AC is a clear ideologue and is steeped in his own political biases as well as wanting to sell more copies of his book. So he will not admit that he is wrong.

Let me ask you a question: where did liberals and conservatives evolve? What selective pressures brought about these psych traits in these two ‘populations’? Are liberals and conservatives local populations?

I’ve also summarily discredited AC and I am waiting on a reply from him (I will be surprised if he replies).


However, unfortunately for AC et al, concerns have been raised “about the use of psychometric indicators of lifestyle and personality as proxies for life history strategy when they have not been validated against objective measures derived from contemporary life history theory and when their status as causes, mediators, or correlates has not been investigated” (Copping, Campbell, and Muncer, 2014). This ends it right here. People don’t understand density-dependent/independent selection since Rushton never talked about it. That, as has been brought up, is a huge flaw in Rushton’s application of r/K theory to the races of Man.

Liberals are, on average, more intelligent than conservatives (Kanazawa, 2010; Kanazawa, 2014). Lower cognitive ability has been linked to greater prejudice through right-wing ideology and low intergroup contact (Hodson and Busseri, 2012), with social conservatives (probably) having lower IQs. There are also three ‘psychological continents’: Europe, Australia, and Canada make up the liberal group, whereas Southeast Asia, South Asia, South America and Africa contain the more conservative countries, with all other countries, including Russia, the US and Asia, in the middle; “In addition, gross domestic product (GDP) per capita, cognitive test performance, and governance indicators were found to be low in the most conservative group and high in the most liberal group” (Stankov and Lee, 2016). Further, economic liberals—as a group—tend to be better educated than Republicans, so intelligence is positively correlated with socially and economically liberal views (Carl, 2014).

There is also a ‘conservative baby boom‘ in the US—which, to the Rushtonites, is ‘r-selected behavior’. Furthermore, women who reported that religion was ‘very important to them’ reported having higher fertility than women who said that it was ‘somewhat important’ or ‘not important’ (Hayford and Morgan, 2008). Liberals are more likely to be atheist (Kanazawa, 2010), while, of course, conservatives are more likely to be religious (Morrison, Duncan, and Parton, 2015; McAdams et al, 2015).

All in all, even if we were to allow the use of liberals and conservatives as local populations—like Rushton’s erroneous use of r/K theory for human races—the use of r/K theory to explain the conservative/liberal divide makes no sense. People don’t know anything about ecology, evolution, or neuroscience. People should really educate themselves on the matters they speak about—I mean a full-on reading into whatever it is you believe. Because people like TIJ and AC are clearly ideologues, pushing a discredited ecological theory and applying it to liberals and conservatives when the theory was never used that way in the first place.

For anyone who would like a look into the psychological differences between liberals and conservatives, Jonathan Haidt has an outstanding book outlining the differences between the two ideologies called The Righteous Mind: Why Good People are Divided by Politics and Religion. I actually just gave it a second read and I highly, highly recommend it. If you want to understand the true differences between the two ideologies then read that book. Try to always remember and look out for your own biases when it comes to your political beliefs and any other matter.

For instance, if you see yourself frantically attempting to gather support for a contention in a debate, that’s the backfire effect in action (Nyhan and Reifler, 2012), and if you have knowledge of the cognitive bias, you can better take steps to avoid it. This, obviously, occurred with TIJ. The response above is airtight. If this ‘continuum’ did exist, then it’s completely reversed, with liberals having fewer children and generally being more intelligent and the reverse for conservatives. So liberals would be K and conservatives would be r (following Rushton’s interpretation of the theory, which is where the use of the continuum comes from).

Testosterone and Aggressive Behavior

1200 words

Testosterone gets a bad rap. People assume that if one has higher testosterone than average, one will be a savage, bloodthirsty beast with an insatiable thirst for blood. This, however, is not the case. I’ve documented how testosterone is vital for male functioning, and how higher levels don’t lead to maladies such as prostate cancer. Testosterone is feared for no reason at all. The reason people are scared of it comes down to anecdotal reports: individual A had higher testosterone when he committed crime B, so, therefore, anyone who commits a crime has higher testosterone, and that is the ultimate—not proximate—cause of crime. This is erroneous. The correlation between testosterone and physical aggression/violence is positive, albeit extremely low, at .14. That’s it. Furthermore, most of these claims that higher levels of testosterone cause violence are extrapolated from animal studies to humans.
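To put that number in perspective, the variance a correlation accounts for is its square; the arithmetic is a one-liner:

```python
# A correlation's shared variance is r squared: with r = .14, testosterone
# statistically accounts for only ~2% of the variance in aggression
# measures, leaving ~98% to everything else.
r = 0.14
print(f"variance explained: {r ** 2:.1%}")  # -> 2.0%
```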

Testosterone has been shown to lead to violent and aggressive behavior, largely only in animal studies (Archer, 1991; Book et al, 2001). For years, the relationship between the two variables was thought to be causal, i.e., high levels of testosterone cause violent crimes, which has been called into question over recent years. This is due to how the environment can raise testosterone levels. I have documented how these environmental factors can raise testosterone—and after these events, testosterone stays elevated.

Largely, animal studies are used to infer that high levels of testosterone in and of themselves lead to higher rates of aggression and therefore crime. However, two important meta-analyses show this is not necessarily the case (Archer, 1991; Book et al, 2001). Book et al (2001) showed that two variables moderate the relationship between testosterone and aggression—the time of day the assay was taken and the age of the participant. The effect was largest, not unexpectedly, in males aged 13-20 (Book et al, 2001: 594). So since age confounds the relationship between aggression and testosterone in males, that variable must also be controlled for (and in the meta-analyses and other papers I cite on black and white testosterone, it is controlled for).

More interestingly, Book et al (2001) showed that the nature of the measure of aggression (self-reported or behavioral) did not have any effect on the relationship between testosterone and aggression. Since there is no difference between the two measures, a pencil-and-paper test is a good enough measure of aggression, comparable to observing the behavior of the individual studied.

Archer (1991) also showed the same low—but positive—correlation between aggression and testosterone. Of course, as I’ve extensively documented, a positive relationship between the two variables does not necessarily mean that high-testosterone men commit more crime—since the outcomes of certain situations can increase and decrease testosterone, no causal factors have been disentangled. Book et al (2001) confirmed Archer’s (1991) finding that the correlation between testosterone and violent and aggressive behavior was positive and low at .14.

Valois et al (2017) showed a relationship between emotional self-efficacy (ESE) and aggressive and violent behaviors in a statewide sample of high school children in South Carolina (n = 3,386). Their results suggested that carrying a weapon to school within the past 30 days, along with being injured with a club, knife or gun in the past 12 months, was significantly associated with ESE for specific race and sex groups.

Black girls with low ESE were 3.22 times more likely to report carrying a weapon to school in the 30 days prior to the survey than black girls with high ESE. Black boys with low ESE were 3.07 times more likely to carry a weapon to school within the past 30 days in comparison to black boys with high ESE. White girls with low ESE had the highest odds of bringing a weapon to school—they were 5.87 times more likely to have carried a weapon to school in the 30 days prior to the survey than white girls with high ESE. Finally, white boys with low ESE were slightly more than 2 times more likely than white boys with high ESE to have carried a weapon to school in the 30 days prior to the survey.

Low ESE in white and black girls is associated with carrying a weapon to school, whereas low ESE in white and black boys is associated with being threatened. Further, their results suggested that carrying a weapon to school was associated with low ESE in black and white girls, suggesting that the response to low ESE is both situation-specific and sex-specific. The mediator between these things is low ESE—its consequences differ for boys and girls, and when it occurs, different courses of action are taken, whether bringing a weapon to school or being threatened. What this tells me is that black and white boys with low ESE are more likely to be threatened because they are perceived to be more meek, while black and white girls with low ESE who get provoked at school are more likely to bring weapons. So it seems that girls bring weapons when provoked and boys fight.

The two meta-analyses reviewed above show that there is a low positive (.14) correlation between testosterone and aggression (Archer, 1991; Book et al, 2001). Thus, high levels of testosterone on their own are not sufficient to explain high levels of aggression/violence. Further, there are race- and sex-specific differences in responses to being threatened at high school, with black and white boys being more likely to report being threatened (which implies a higher rate of physical fighting) while black and white girls, when threatened, brought weapons to school. These race- and sex-specific differences in the course of action taken when one is physically threatened need to be looked into more.

I’d like to see the difference in testosterone levels for a matched sample of black and white boys from two neighboring districts with different murder rates, using the murder rate as a proxy for the amount of violence in the area. I’d bet that the place with the higher murder rate would 1) have children report more violence and more instances of bringing weapons to school, 2) have them report more harm from these encounters—especially if they have low ESE, as seen in Valois et al (2017)—and 3) have both the high schoolers and the residents of the area show higher testosterone than the place with less violence. I would also expect these differences to be magnified in the direction of Valois et al (2017), in that in areas with higher murder rates black and white girls would report bringing weapons to school when threatened whereas black and white boys would report more physical violence.

High testosterone by itself is not sufficient to explain violence, as the correlation is extremely low at .14. Testosterone levels also fluctuate by time of day (Brambilla et al, 2009; Long, Nguyen, and Stevermer, 2015) and time of year (Stanton, Mullette-Gillman, and Huettel, 2011; Demur, Uslu, and Arslun, 2016). How the genders/races react differently when threatened in adolescence is interesting and deserves further study.

Biases and Political Beliefs

2150 words

The study of political bias is very important. Once the source of what motivates political bias—which no doubt translates to other facets of life—is found, individual action can be taken to minimize future bias. Two recent studies found that, contrary to other studies showing that conservatives are more biased than liberals, both groups are equally biased.

Everyone is biased—even physicians (Cain and Detsky, 2008). When beliefs we hold to be true are questioned, we do anything we can to shield ourselves from conflicting information. Numerous studies have looked into biases in politics, with some studies showing that conservatives are more likely to be biased towards their views more than liberals. However, recent research has shown that this is not true.

Frimer, Skitka, and Motyl (2017) showed that both groups have similar motives to shield themselves from contradictory information. Hearing opposing viewpoints—especially for staunch conservatives and liberals—clearly leads them to do anything possible to defend, in their heads, their dearly held beliefs. In four studies (1: people would forgo the chance to win money if it meant they didn’t have to hear the opposite side’s opinions on the same-sex marriage debate; 2: thinking back to the 2012 election; 3: upcoming elections in the US and Canada and “a range of other Culture War Issues” (Frimer, Skitka, and Motyl, 2017); and 4: both groups reported similar aversions to hearing the opposite group’s beliefs), both groups reported that hearing the other side’s beliefs would induce cognitive dissonance (Frimer, Skitka, and Motyl, 2017). They meta-analyzed all of their studies and still found that both groups would “rather remain in their ideological bubbles”.

Ditto et al (2017) also had similar findings. They meta-analyzed 41 studies with over 12,000 participants, testing two hypotheses: 1) conservatives would be more biased than liberals and 2) there would be equal amounts of bias. They discovered that the correlation for partisan bias was “robust”, with a correlation of .254. They showed that “liberals (r = .248) and conservatives (r = .247) showed nearly identical levels of bias across studies” (Ditto et al, 2017).

These two studies show what we know is true: it’s extremely hard/damn near impossible to change one’s view. Someone can be dead wrong, yet attempt to gather up whatever kind of data they possibly can to shield themselves from the truth.

This all comes down to one thing: the backfire effect. When we are presented with contradictory information, we immediately reject it. Everyone is affected by this bias. One study showed that corrections frequently failed to correct political misconceptions, with the attempted corrections actually doing the opposite—people increased their misconceptions of the group in question (Nyhan and Reifler, 2010). The thing is, people lack knowledge about political matters, which then affects their opinions. These studies show why it’s next to impossible to change one’s view in regards to anything, especially political matters.

Jonathan Haidt, New York University's Professor of Ethical Leadership and a social psychologist specializing in morality, also discusses partisan bias in his outstanding book on religion and politics, The Righteous Mind: Why Good People are Divided by Politics and Religion (Haidt, 2013), which I highly recommend. I've written about some of the ideas in his book before; his theory on the evolution of morality is very well argued. Moral reasoning, he argues, is just a post-hoc search for reasons to justify the judgments that people have already made. When people are asked why they find certain scenarios morally wrong, they often cannot give good reasons (Haidt, 2001). More specifically, people could not say why it was morally wrong for two siblings to have sex even when told that the siblings used birth control, both enjoyed the act, and suffered no emotional damage. This is direct evidence for Haidt's 'wag-the-dog' illusion.

Haidt (2001: 13) writes:

If moral reasoning is generally a post-hoc construction intended to justify automatic moral intuitions, then our moral life is plagued by two illusions. The first illusion can be called the "wag-the-dog" illusion: we believe that our own moral judgment (the dog) is driven by our own moral reasoning (the tail). The second illusion can be called the "wag-the-other-dog's-tail" illusion: in a moral argument, we expect the successful rebuttal of an opponent's arguments to change the opponent's mind. Such a belief is like thinking that forcing a dog's tail to wag by moving it with your hand should make the dog happy.

Except the opponent’s mind is never changed. People always search for things to affirm their worldviews.

In his book, Haidt cites a study in which 14 liberals and conservatives were placed in an fMRI machine and shown 18 slides to see how their brains responded (Westen et al, 2006). The first slide of one set showed George W. Bush praising Ken Lay, the CEO of Enron. Next came a slide in which the former President avoided mentioning Lay's name. "At this point, Republicans were squirming" (Haidt, 2013: 101). Finally, they were shown a slide saying that Bush "felt betrayed" by the CEO's actions and was shocked to find out that he was corrupt. A similar set of slides showed contradictory statements from John Kerry. The researchers had engineered situations in which participants became uncomfortable when shown their own candidate contradicting himself, while showing no signs of discomfort when their ideological opposite was caught being a hypocrite (Haidt, 2013: 101).

This study shows that emotional and intuitive processes cause such extreme biases, with reasoning employed only when it supports one's own conclusions. Westen et al (2006) observed that when participants viewed the final slides, they experienced a sense of 'escape' and 'release'. They cite further studies showing that this sense of escape and release is associated with the release of dopamine in the nucleus accumbens and dorsal striatum in other animals (Westen et al, 2006). So the subjects experienced a small hit of dopamine when they saw the final slide showing that everything was "OK". If this is true, it explains why we engage in these 'addictive behaviors'—believing things with such conviction even when shown contradictory information.

Like rats that cannot stop pressing a button, partisans may be simply unable to stop believing weird things. The partisan brain has been reinforced so many times for performing mental contortions that free it from unwanted beliefs. Extreme partisanship may be literally addictive. (Haidt, 2013: 103)

Haidt has also been covering the recent university protests occurring around the country. About fifty years ago, a judge predicted the political turmoil we see in universities today, writing:

No one can be expected to accept an inferior status willingly. The black students, unable to compete on even terms in the study of law, inevitably will seek other means to achieve recognition and self-expression. This is likely to take two forms. First, agitation to change the environment from one in which they are unable to compete to one in which they can. Demands will be made for elimination of competition, reduction in standards of performance, adoption of courses of study which do not require intensive legal analysis, and recognition for academic credit of sociological activities which have only an indirect relationship to legal training. Second, it seems probable that this group will seek personal satisfaction and public recognition by aggressive conduct, which, although ostensibly directed at external injustices and problems, will in fact be primarily motivated by the psychological needs of the members of the group to overcome feelings of inferiority caused by lack of success in their studies. Since the common denominator of the group of students with lower qualifications is one of race this aggressive expression will undoubtedly take the form of racial demands–the employment of faculty on the basis of race, a marking system based on race, the establishment of a black curriculum and a black law journal, an increase in black financial aid, and a rule against expulsion of black students who fail to satisfy minimum academic standards.

This prediction seems to have come true, seeing as political diversity in psychology, for instance, has decreased over the past fifty years (Duarte et al, 2015). In America, 58-66 percent of social science professors identified as liberals, whereas only 5-8 percent identified as conservatives; self-identified Democrats also outnumbered Republicans by almost 8 to 1. Other researchers found that 52 to 77 percent of humanities professors were liberal, with only 4-8 percent identifying as conservative, a ratio of about 5 to 1 favoring liberals. Finally, 84 percent of psychologists identified as liberal, with only 8 percent identifying as conservative, a 10.5 to 1 ratio (Duarte et al, 2015). This skew, however, has only existed for about fifty years. When our institutions show such a heavy skew in political beliefs, self-affirming, self-fulfilling prophecies will affect the quality of what is taught, which in turn degrades the education students receive.
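As a quick check on the arithmetic, the ratios above follow directly from the quoted self-identification percentages. The sketch below pairs the ends of the reported ranges to bound the ratio; that range handling is my own illustrative choice, not Duarte et al's method.

```python
# Liberal-to-conservative ratios implied by the self-identification
# percentages quoted above (Duarte et al, 2015). Where a range is
# reported, the smallest ratio pairs the low liberal percentage with
# the high conservative percentage, and the largest does the reverse.
fields = {
    "social science": ((58, 66), (5, 8)),
    "psychology": ((84, 84), (8, 8)),
}

for field, ((lib_lo, lib_hi), (con_lo, con_hi)) in fields.items():
    smallest = lib_lo / con_hi   # e.g. 58 / 8 = 7.25
    largest = lib_hi / con_lo    # e.g. 66 / 5 = 13.2
    print(f"{field}: between {smallest:.2f} and {largest:.2f} to 1")
```

The 10.5-to-1 psychology figure is simply 84 divided by 8.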

Finally, no discussion of political bias would be complete without mentioning Stephen Jay Gould. Although I've come to love his work on evolutionary theory, he was horribly wrong on human differences and let his motivations, biases, and political views cloud his judgment, driving him to be grossly dishonest in his attacks, first published in 1978, on Samuel Morton, a man long dead who could no longer defend himself. This culminated in his widely acclaimed (and, as far as I can tell, still assigned to college students) book The Mismeasure of Man (Gould, 1981). In the book, he accused Morton of bias in the measurements of his skull collection. However, in 2011 an anthropology team led by Jason Lewis remeasured Morton's skulls and found that Morton was not biased and that his measurements were correct (Lewis et al, 2011). Ironically, it was Gould who displayed the very bias he accused Morton of; Gould, not Morton, ended up as the case study in the need to avoid bias in scholarship and science.

However, as is usually the case, long debates such as this are not so easily settled. Philosopher Michael Weisberg (Weisberg, 2014) argued that Gould's arguments against Morton were sound and that "Although Gould made some errors and overstated his case in a number of places, he provided prima facie evidence, as yet unrefuted, that Morton did indeed mismeasure his skulls in ways that conformed to 19th century racial biases." Further, Kaplan, Pigliucci, and Banta (2015) argue that Gould's problem with Morton's measurements came down to how the measurements should have been done (lead shot or seed). They contend that many of Lewis et al's (2011) claims against Gould were "misleading" and "had no relevance to Gould's published analysis." They also argue that both Gould's and Morton's methods (inclusion/exclusion of skulls, how to compute averages, etc.) were "inappropriate". The point is that this debate seems far from over, and I await the next chapter. Whatever the case may be, Gould vs. Morton is a perfect case study of politics and bias in science.

Everyone is biased: researchers, physicians, ordinary people. But we become most biased when politics comes into play. To become better, well-rounded people with broad knowledge, we need to listen to others' viewpoints without immediately rejecting them. First, though, we must recognize the cognitive bias and attempt to correct it. Political differences begin in the brain and are then shaped by experience, and they lead to feelings of disgust when hearing the views of the 'opposite team'. Both sides of the political spectrum are equally biased, contrary to each group's perception of this particular issue. There are differences between conservative and liberal brains, and when partisans see their 'enemy' engage in contradictory behavior they feel joy, whereas when they see their own candidate do the same they feel disgust.

The long debate over Morton's skulls, which has been raging for over forty years, is the perfect look at how politics, motivation, and bias come into play in science, no matter which camp ultimately ends up being right (I'm in the Morton camp, obviously). Studying why we hold such strong biases can lead to a better understanding of the underlying defense mechanisms—the causes of the backfire effect and similar cognitive biases. Everyone and anyone—from the scientist to the layman—should always let the facts guide their points of view, not their emotions.

When you are studying any matter, or considering any philosophy, ask yourself only what are the facts and what is the truth that the facts bear out. Never let yourself be diverted either by what you wish to believe, or by what you think would have beneficent social effects if it were believed. But look only, and solely, at what are the facts. That is the intellectual thing that I should wish to say. (Bertrand Russell, 1959)