
A Systems View of Kenyan Success in Distance Running

1550 words

The causes of sporting success are multi-factorial, with no single cause more important than another, since the whole system needs to work in concert to produce the athletic phenotype–call this “causal parity” among the determinants of athletic success. For a refresher, take what Shenk (2010: 107) writes:

As the search for athletic genes continues, therefore, the overwhelming evidence suggests that researchers will instead locate genes prone to certain types of interactions: gene variant A in combination with gene variant B, provoked into expression by X amount of training + Y altitude + Z will to win + a hundred other life variables (coaching, injuries, etc.), will produce some specific result R. What this means, of course, is that we need to dispense rhetorically with the thick firewall between biology (nature) and training (nurture). The reality of GxE assures that each person’s genes interact with his climate, altitude, culture, meals, language, customs and spirituality—everything—to produce unique lifestyle trajectories. Genes play a critical role, but as dynamic instruments, not a fixed blueprint. A seven- or fourteen- or twenty-eight-year-old is not that way merely because of genetic instruction. (Shenk, 2010: 107) [Also read my article Explaining African Running Success Through a Systems View.]

This is how athletic success needs to be looked at: not reduced to a gene or group of genes that ’cause’ athletic success, since being successful in the sport of the athlete’s choice takes more than being born with “the right” genes.

Recently, a Kenyan woman—Joyciline Jepkosgei—won the NYC marathon in her debut (November 3rd, 2019), while Eliud Kipchoge—another Kenyan—became the first human ever to complete a marathon (26.2 miles) in under 2 hours. I recall reading in the spring that he said he would break the 2-hour mark in October. He had also attempted to break it in 2017 in Italy but, of course, he failed; his official time in Italy was 2:00:25, while he later set the world record in Berlin at 2:01:39. This time, Kipchoge’s official time was 1:59:40—twenty seconds shy of 2 hours—which means his average mile pace was about 4 minutes and 34 seconds. That is insane. (But the IAAF does not accept the time as a new world record since it was not run in open competition: Kipchoge had a slew of Olympic pacesetters following him, and an electric car drove just ahead of him and pointed lasers at the ground showing him where to run; according to sport scientist Ross Tucker, that help shaved about 2 minutes—2 crucial minutes—off his time. So he did not set a world record. His feat, though, is still impressive.)
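For readers who want to check that pace figure, here is a minimal back-of-the-envelope sketch in Python (the times and distance are just the numbers quoted above):

```python
# Sanity check of Kipchoge's average mile pace, using the figures quoted above.
finish_seconds = 1 * 3600 + 59 * 60 + 40   # 1:59:40 expressed in seconds
miles = 26.2                               # marathon distance in miles

pace = finish_seconds / miles              # average seconds per mile
minutes, seconds = divmod(pace, 60)
print(f"Average pace: {int(minutes)} min {seconds:.0f} sec per mile")
# -> Average pace: 4 min 34 sec per mile
```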

Now, Kipchoge is Kenyan—but what’s his ethnicity? Surprise, surprise! He is of the Nandi tribe, more specifically of the Talai subgroup, born in Kapsisiywa in Nandi county. Jepkosgei, too, is Nandi, from Cheptil in Nandi county. (Jepkosgei also set the record for the half marathon in 2017. Also, see her regular training regimen and what she does throughout the day. This, of course, is how she is able to be so elite—without hard training, even with “the right genetic makeup”, one will not become an elite athlete.) What a strange coincidence that these two individuals who won recent marathons—one of whom set the best time ever in the 26.2-mile race—are both Kenyan, specifically Nandi.

Both of these runners are from the same county in Kenya. Nandi county sits at about 6,716 ft above sea level. Being born and living at such a high elevation means that they have developed particular physiological adaptations. Living and training at such high elevations means that they have greater lung capacities, since they are breathing thinner air. Those born in highlands, like Kipchoge and Jepkosgei, have larger lung and thorax volumes, while oxygen intake is enhanced by increases in lung compliance, pulmonary diffusion, and ventilation (Meer, Heymans, and Zijlstra, 1995).

Those exposed to such elevation experience what is known as “high-altitude hypoxia.” Humans born at high altitudes are able to cope with such a lack of oxygen, since our physiological systems are dynamic—not static—and can respond to environmental changes within seconds of their occurring. Babies born at higher elevations have increased ventilation and a rise in alveolar and arterial oxygen pressure (Meer, Heymans, and Zijlstra, 1995).

Kenyans have 5 percent longer legs and 12 percent lighter muscles than Scandinavians (Suchy and Waic, 2017). Mooses et al (2014) note that “upper leg length, total leg length and total leg length to body height ratio were correlated with running performance.” Kong and de Heer (2008) note that:

The slim limbs of Kenyan distance runners may positively contribute to performance by having a low moment of inertia and thus requiring less muscular effort in leg swing. The short ground contact time observed may be related to good running economy since there is less time for the braking force to decelerate forward motion of the body.

An abundance of type I muscle fibers is conducive to success in distance running (Zierath and Hawley, 2004), though Kenyans and Caucasians show no difference in type I muscle fibers (Saltin et al, 1995; Larsen and Sheel, 2015). That, then, throws a wrench in the claim that a whole slew of anatomic and physiologic variables conducive to running success—specifically the type I fibers—is the cause of Kenyan running success, right? Wrong. Recall that the appearance of the athletic phenotype is due to nature and nurture—genes and environment—working together in concert. Kenyans are more likely to have slim, long limbs with lower body fat, and they live and train at over 6,000 ft. Their will to win, to better themselves and their families’ socioeconomic status, also plays a part. As I have argued in-depth for years, we cannot understand athletic success and elite athleticism without understanding individual histories, how athletes grew up, and what they did as children.

For example, Wilbur and Pitsiladis (2012) espouse a systems view of Kenyan marathon success, writing:

In general, it appears that Kenyan and Ethiopian distance-running success is not based on a unique genetic or physiological characteristic. Rather, it appears to be the result of favorable somatotypical characteristics lending to exceptional biomechanical and metabolic economy/efficiency; chronic exposure to altitude in combination with moderate-volume, high-intensity training (live high + train high), and a strong psychological motivation to succeed athletically for the purpose of economic and social advancement.

Becoming a successful runner in Kenya can lead to economic opportunities not afforded to those who do not do well in running. This, too, is a factor in Kenyan running success. So the ignorant people who—pushing a false dichotomy of genes and environment—state that Kenyan running success is due to “socioeconomic status” are right, to a point (even if they are mocking the idea and making their genetic determinism seem more palatable). See figure 6 for Wilbur and Pitsiladis’s hypothetical model:

[Figure 6: Wilbur and Pitsiladis’s (2012) hypothetical model of Kenyan and Ethiopian distance-running success.]

This is one of the best models I have come across explaining the success of these people. One can see that it is not reductionist; note that there is no appeal to genes alone (just variables that genes are implicated in, which is not the same as reductionism). It’s not as if one could succeed with an endomorphic somatotype, even with Kenyan training and the psychological drive to become a runner. The ecto-dominant somatotype is a necessary factor for success; but all four of these factors—biomechanical, physiological, training, and psychological—explain the success of the running Kenyans and, in turn, the success of Kipchoge and Jepkosgei. African dominance in distance running is, moreover, dominated by the Nandi subtribe (Tucker, Onywera, and Santos-Concejero, 2015). Knechtle et al (2016) also note that male and female Kenyan and Ethiopian runners are the youngest and fastest at the half and full marathons.

The actual environment—the climate on the day of the race—also plays a role. El Helou et al (2012) note that “Air temperature is the most important factor influencing marathon running performance for runners of all levels,” while Nikolaidis et al (2019) note that “race times in the Boston Marathon are influenced by temperature, pressure, precipitations, WBGT, wind coming from the West and wind speed.”

The success of Kenyans—and other groups—shows how the dictum “Athleticism is irreducible to biology” (St. Louis, 2004) is true. How does it make any sense to attempt to reduce athletic success down to one variable and say that it explains the overrepresentation of, say, Kenyans in distance running? A whole slew of factors needs to come together for an individual, along with actually wanting to do something, in order for them to succeed at distance running.

So, what makes Kenyans like Kipchoge and Jepkosgei so good at distance running? It’s due to an interaction between genes and environment, since we take a systems view, not a reductionist view, of sporting success. Even though Kipchoge’s time does not count as an official world record, what he did was still impressive (though not as impressive as it would have been had he done it without all of the help he had). Looking at the system, and not trying to reduce the system to its parts, is how we will explain why some groups are better than others. Genes, of course, play a role in the ontogeny of the athletic phenotype, but they are not the be-all and end-all that genetic reductionists make them out to be. The systems view of Kenyan running success shown here is how and why Kenyans—Kipchoge and Jepkosgei—dominate distance running.

The History and Construction of IQ Tests

4100 words

The [IQ] tests do what their construction dictates; they correlate a group’s mental worth with its place in the social hierarchy. (Mensh and Mensh, 1991, The IQ Mythology, pg 30)

We have been attempting to measure “intelligence” in humans for over 100 years. Mental testing began with Galton and then shifted to Binet, whose test evolved into the most well-known IQ tests of today—the Stanford-Binet and the WAIS/WISC. But the history of IQ testing is rife with unethical conclusions derived from the tests’ use, conclusions that were then actually carried out (i.e., the sterilization of “morons”; see Wilson, 2017’s The Eugenic Mind Project).

History of IQ testing

Any history of ‘intelligence’ testing will, of course, include Francis Galton’s contributions to the creation of psychological tests (in terms of statistical analyses, the construction of some tests, among other things). Galton was, in effect, one of the first behavioral geneticists.

Galton (1869: 37) asked “Is reputation a fair test of natural ability?”, to which he answered, “it is the only one I can employ.” Galton, for example, stated that, theoretically or intuitively, there is a relationship between reaction time and intelligence (Khodadi et al, 2014). Galton then devised tests of “reaction time, discrimination in sight and hearing, judgment of length, and so on, and applied them to groups of volunteers, with the aim of obtaining a more reliable and ‘pure’ measure of his socially judged intelligence” (Richardson, 1991: 19). But there was little to no relationship between Galton’s proposed proxies for intelligence and social class.

In 1890, Cattell, publishing in the journal Mind, coined the term “mental test” (Castles, 2012: 85). Cattell then moved to Columbia, where he got permission to administer his “mental tests” to all of the entering students. This was about two decades before Goddard brought Binet’s test to America—Galton and Cattell were just getting America warmed up for the testing process.

Yet others still attempted to create tests that were purported to measure intelligence, using similar kinds of parameters as Galton. For instance, Miller (1962) provides a list (quoted in Richardson, 1991: 19):

1 Dynamometer pressure How tightly can the hand squeeze?

2 Rate of movement How quickly can the hand move through a distance of 30 cms?

3 Sensation areas How far apart must two points be on the skin to be recognised as two rather than one?

4 Pressure causing pain How much pressure on the forehead is necessary to cause pain?

5 Least noticeable difference in weight How large must the difference be between two weights before it is reliably detected?

6 Reaction-time for sound How quickly can the hand be moved at the onset of an auditory signal?

7 Time for naming colours How long does it take to name a strip of ten coloured papers?

8 Bisection on a 10 cm line How accurately can one point to the centre of an ebony rule?

9 Judgment of 10 sec time How accurately can an interval of 10 secs be judged?

10 Number of letters remembered on once hearing How many letters, ordered at random, can be repeated exactly after one presentation?

Individuals differed on these measures, but when they were used to compare social classes, Cattell stated that they were “disappointingly low” (quoted in Richardson, 1991: 20). So-called mental tests, Richardson (1991: 20) states, were “not [a] measurement for a straightforward, objective scientific investigation. The theory was there, but it was hardly a scientific one, but one derived largely from common intuition; what we described earlier as a popular or informal theory. And the theory had strong social implications. Measurement was devised mainly as a way of applying the theory in accordance with the prejudices it entailed.”

It wasn’t until 1903 that Alfred Binet was tasked with constructing a test that identified slow learners in grade school. In 1904, Binet was appointed a member of a commission on special classes in schools (Murphy, 1949: 354). In fact, Binet constructed his test in order to limit the role of psychiatrists in making decisions on whether healthy—but ‘abnormal’—children should be excluded from the standard material used in regular schools (Nicolas et al, 2013). (See Nicolas et al, 2013 for a full overview of the history of intelligence in psychology and a fuller overview of Binet and Simon’s test and why they constructed it; also see Fancher, 1985.)

Binet constructed his tests to identify children who were not learning what the average child their age knew. But the tests had to distinguish the lazy from the mentally deficient. So in 1905, Binet teamed up with Simon, and they published their first intelligence test, with items arranged from the simplest to the most difficult (but with no standardization). A few of these items include: naming objects, completing sentences, comparing lines, comprehending questions, and repeating digits. Their test consisted of 30 items, increasing in difficulty, and the items were chosen on the basis of teacher assessment—checking which items discriminated between which children—and on whether they agreed with the constructors’ presuppositions.

Richardson (2000: 32) discusses how IQ tests are constructed:

In this regard, the construction of IQ tests is perhaps best thought of as a reformatting exercise: ranks in one format (teachers’ estimates) are converted into ranks in another format (test scores, see figure 2.1).

In The Development of Intelligence in Children, Binet and Simon (1916: 309) discuss how teachers assessed students:

A teacher, whom I know, who is methodical and considerate, has given an account of the habits he has formed for studying his pupils; he has analysed his methods, and sent them to me. They have nothing original, which makes them all the more important. He instructs children from five and a half to seven and a half years old; they are 35 in number; they have come to his class after having passed a preparatory course, where they have commenced to learn to read. For judging each child, the teacher takes account of his age, his previous schooling (the child may have been one year, two years in the preparatory class, or else was never passed through the division at all), of his expression of countenance, his state of health, his knowledge, his attitude in class, and his replies. From these diverse elements he forms his opinion. I have transcribed some of these notes on the following page.

In reading his judgments one can see how his opinion was formed, and of how many elements it took account; it seems to us that this detail is interesting; perhaps if one attempted to make it precise by giving coefficients to all of these remarks, one would realize still greater exactitude. But is it possible to define precisely an attitude, a physiognomy, interesting replies, animated eyes? It seems that in all this the best element of diagnosis is furnished by the degree of reading which the child has attained after a given number of months, and the rest remains constantly vague.

Binet chose the items used on his tests for practical, not theoretical, reasons. He and Simon then learned that some of their items were harder and others easier, so they arranged the items by age level: how well the average child of that age could complete the items in question. For example, if the average child of an age group could complete 10 of 20 items, then a child completing 10 of 20 was average for that age; scoring below that made them below average, and above that, above average. The child’s IQ was then calculated from mental age (MA) and chronological age (CA) with the following formula: IQ = MA/CA × 100. So if one’s mental age was 13 and their chronological age was 9, then their IQ would be about 144.
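As a quick worked illustration of that ratio formula (a minimal sketch in Python; the numbers are just the ones from the example above):

```python
def ratio_iq(mental_age: float, chronological_age: float) -> float:
    """Stern-style ratio IQ: mental age over chronological age, times 100."""
    return mental_age / chronological_age * 100

print(round(ratio_iq(13, 9)))    # -> 144, the example in the text above
print(round(ratio_iq(10, 10)))   # -> 100, when mental age equals chronological age
```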

Before Binet’s death in 1911, he revised his and Simon’s previous test. Intelligence, to Binet, is “the ability to understand directions, to maintain a mental set, and to apply “autocriticism” (the correction of one’s own errors)” (Murphy, 1949: 355). Binet measured subnormality by subtracting mental age from chronological age; to Binet, this absolute retardation was what was important. But William Stern, in 1912, argued that it was not absolute but relative retardation that mattered, and so he proposed to divide the mental age by the chronological age and multiply by 100. (If mental and chronological age are equal, then IQ is 100.) This, he showed, was stable in most children.

Binet termed his new scale a test of intelligence. It is interesting to note that the primary connotation of the French term l’intelligence in Binet’s time was what we might call “school brightness,” and Binet himself claimed no function for his scales beyond that of measuring academic aptitude.

In 1908, Henry Goddard went on a trip to Europe, heard of Binet’s test, and brought home an original version to try out on his students at the Vineland Training School. He translated Binet’s 1908 edition of the test from French to English in 1909. Castles (2012: 90) notes that “American psychology would never be the same.” Goddard was also the one who coined the term “moron” (Dolmage, 2018) for any adult with a mental age between 8 and 13. In 1912, Goddard administered tests to immigrants who landed at Ellis Island and found that 87 percent of Russians, 83 percent of Jews, 80 percent of Hungarians, and 79 percent of Italians were “feebleminded.” Deportations soon picked up, with Goddard reporting a 350 percent increase in 1913 and a 570 percent increase in 1914 (Mensh and Mensh, 1991: 26).

Then, in 1916, Terman published his revision of the Binet-Simon scale, which he termed the Stanford-Binet intelligence scale, based on a sample of 1,000 subjects and standardized for ages ranging from 3 to 18—the tests for 16-year-olds being for adults, whereas the tests for 18-year-olds were for ‘superior’ adults (Murphy, 1949: 355). (Terman’s test was revised in 1937, when the question of sex differences came up, see below, and again in 1960.) Murphy (1949: 355) goes on to write:

Many of Binet’s tests were placed at higher or lower age levels than those at which Binet had placed them, and new tests were added. Each age level was represented by a battery of tests, each test being assigned a certain number of month credits. It was possible, therefore, to reckon the subject’s intelligence quotient, as Stern had suggested, in terms of the ratio of mental age to chronological age. A child attaining a score of 120 months, but only 100 months old, would have an IQ of 120 (the decimal point omitted).

It wasn’t until 1917 that psychologists devised the Army Alpha test for literate test-takers and the Army Beta test for illiterate test-takers and non-English speakers. Examples for items on the Alpha and the Beta can be found below:

1. The Percheron is a kind of

(a) goat, (b) horse, (c) cow, (d) sheep.

2. The most prominent industry of Gloucester is

(a) fishing, (b) packing, (c) brewing, (d) automobiles.

3. “There’s a reason” is an advertisement for

(a) drink, (b) revolver, (c) flour, (d) cleanser.

4. The Knight engine is used in the

(a) Packard, (b) Stearns, (c) Lozier, (d) Pierce Arrow.

5. The Stanchion is used in

(a) fishing, (b) hunting, (c) farming, (d) motoring. (Heine, 2017: 187)

[Image: sample items from the Army Beta test for illiterate and non-English-speaking recruits.]

Mensh and Mensh (1991: 31) tell us that

… the tests’ very lack of effect on the placement of [army] personnel provides the clue to their use. The tests were used to justify, not alter, the army’s traditional personnel policy, which called for the selection of officers from among relatively affluent whites and the assignment of whites of lower socioeconomic status to lower-status roles and African-Americans to the bottom rung.

Meanwhile, as Binet was devising his scales at the beginning of the 20th century, Spearman was devising his theory of g in England. Spearman noted in 1904 that children who did well or poorly on certain types of tests tended to do well or poorly on all of them—the scores were correlated. Spearman’s claim was that correlated scores reflect a common ability, which he called ‘general intelligence’ or ‘g’ (a claim that has been widely criticized).

In sum, the conception of ‘intelligence tests’ began as a way to attempt to justify the class/race hierarchy by constructing the tests in a way to agree with the constructors’ presuppositions of who is or is not intelligent—which will be covered below.

Test construction

When tests are standardized, a whole slew of candidate items are pooled together and used in the construction of the test. For an item to be used for the final test, it must agree with the a priori assumptions of the test’s constructors on who is or is not “intelligent.”

Andrew Strenio, author of The Testing Trap, states exactly how IQ tests are constructed, writing:

We look at individual questions and see how many people get them right and who gets them right. … We consciously and deliberately select questions so that the kind of people who scored low on the pretest will score low on subsequent tests. We do the same for middle or high scorers. We are imposing our will on the outcome. (pg 95, quoted in Mensh and Mensh, 1991)

Richardson (2017a: 82) writes that IQ tests—and the items on them—are:

still based on the basic assumption of knowing in advance who is or is not intelligent and making up and selecting items accordingly. Items are invented by test designers themselves or sent out to other psychologists, educators, or other “experts” to come up with ideas. As described above, initial batches are then refined using some intuitive guidelines.

This is strange… I thought that IQ tests were “objective”? Well, this shows that they are anything but objective—they are, very clearly, subjective in their construction, which leads to what the constructors of the test assumed—their score hierarchy. The test’s constructors assume that their preconceptions about who is or is not intelligent are true and that differences in intelligence are the cause of differences in social class, and so the IQ test was created to justify the existing social hierarchy. (Never mind the fact that IQ scores are an index of social class; Richardson, 2017b.)

Mensh and Mensh (1991: 5) write that:

Nor are the [IQ] tests objective in any scientific sense. In the special vocabulary of psychometrics, this term refers to the way standardized tests are graded, i.e., according to the answers designated “right” or “wrong” when the questions are written. This definition not only overlooks that the tests contain items of opinion, which cannot be answered according to universal standards of true/false, but also overlooks that the selection of items is an arbitrary or subjective matter.

Nor do the tests “allocate benefits.” Rather, because of their class and racial biases, they sort the test takers in a way that conforms to the existing allocation, thus justifying it. This is why the tests are so vehemently defended by some and so strongly opposed by others.

When it comes to Terman and his reconstruction of the Binet-Simon—which he called the Stanford-Binet—something must be noted.

There are negligible differences in IQ between men and women. In 1916, Terman thought that the sexes should be equal in IQ. So he constructed his test to mirror his assumption. Others (e.g., Yerkes) thought that whatever differences materialized between the sexes on the test should be kept and boys and girls should have different norms. Terman, though, to reflect his assumption, specifically constructed his test by including subtests in which sex differences were eliminated. This assumption is still used today. (See Richardson, 1998; Hilliard, 2012.) Richardson (2017a: 82) puts this into context:

It is in this context that we need to assess claims about social class and racial differences in IQ. These could be exaggerated, reduced, or eliminated in exactly the same way. That they are allowed to persist is a matter of social prejudice, not scientific fact. In all these ways, then, we find that the IQ testing movement is not merely describing properties of people—it has largely created them.

This is an outright admission from the tests’ constructors themselves that IQ differences can be built into and out of the test. It further shows that these tests are not “objective”, as is claimed. In reality, they are subjective, based on prior assumptions. Take what Hilliard (2012: 115-116) noted about two white South African groups and the differences in IQ between them:

A consistent 15- to 20-point IQ differential existed between the more economically privileged, better educated, urban-based, English-speaking whites and the lower-scoring, rural-based, poor, white Afrikaners. To avoid comparisons that would have led to political tensions between the two white groups, South African IQ testers squelched discussion about genetic differences between the two European ethnicities. They solved the problem by composing a modified version of the IQ test in Afrikaans. In this way they were able to normalize scores between the two white cultural groups.

The SAT suffers from the same problems. Mensh and Mensh (1991: 69) note that “the SAT has been weighted to widen a gender scoring differential that from the start favored males.” They note that, since the SAT’s inception, men have scored higher than women, but the gap was due primarily to men’s scores on the math subtest, “which was partially offset until 1972 by women’s higher scores on the verbal subtest.” But by 1986 men outscored women on the verbal portion too, with the ETS stating that they created a “better balance for the scores between sexes” (quoted in Mensh and Mensh, 1991: 69). What they did, though, was exactly what Terman did: they added items where the context favored men and eliminated those that favored women. This prompts Hilliard (2012: 118) to ask, “How then could they insist with such force that no cultural biases existed in the IQ tests given blacks, who scored 15 points below whites?”

When it comes to test bias, Mensh and Mensh (1991: 51) write that:

From a functional standpoint, there is no distinction between crassly biased IQ-test items and those that appear to be non-biased. Because all types of test items are biased (if not explicitly, then implicitly, or in some combination thereof), and because the tests’ racial and class biases correspond to the society’s, each element of a test plays its part in ranking children in the way their respective groups are ranked in the social order.

This, then, returns us to the normal distribution—the Gaussian distribution, or bell curve.

The normal distribution is assumed. Items are selected after the fact to conform with the normal curve, by trying out a whole slew of items, of which Jensen (1980: 147-148) states that “items must simply emerge arbitrarily from the heads of test constructors.” Items that show little correlation with the testers’ expectations are then removed from the final test. Fischer et al (1996), Simon (1997), and Richardson (1991; 1998; 2017) also discuss the myth of the normal distribution and how it is constructed by IQ test-makers. Further, Jensen brings up an important point about items emerging “arbitrarily from the heads of test constructors”: test constructors have an idea in their heads of who is or is not ‘intelligent’, they then try out a whole slew of items, and, unsurprisingly, they get the type of score distribution they want! Howe (1997: 20) writes that:

However, it is wrongly assumed that the fact that IQ scores have a bell-shaped distribution implies that differing intelligence levels of individuals are ‘naturally’ distributed in that way. This is incorrect: the bell-shaped distribution of IQ scores is an artificial product that results from test-makers initially assuming that intelligence is normally distributed, and then matching IQ scores to differing levels of test performance in a manner that results in a bell-shaped curve.

Richardson (1991) notes that the normal distribution “is achieved in the IQ test by the simple decision of including more items on which an average number of the trial group performed well, and relatively fewer on which either a substantial majority or a minority of subjects did well.” Richardson (1991) also states that “if the bell-shaped curve is the myth it seems to be—for IQ as for much else—then it is devastating for nearly all discussion surrounding it.” Even Jensen (1980: 71) states that “It is claimed that the psychometrist can make up a test that will yield any kind of score distribution he pleases. This is roughly true, but some types of distributions are much easier to obtain than others.”

The [IQ test] items are, after all, devised by test designers from a very narrow social class and culture, based on intuitions about intelligence and variation in it, and on a technology of item selection which builds in the required degree of convergence of performance. (Richardson, 1991)

Micceri (1988) examined score distributions from 400 tests administered all over the US in workplaces, universities, and schools. He found significant non-normal distributions of test scores. The same can be said about physiological processes, as well.
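To make the item-selection point concrete, here is a minimal simulation sketch in Python. It is not any test publisher's actual procedure, and every number in it is invented for illustration: a pool of candidate items is "trialed" on a hypothetical sample, only items with middling pass rates that line up with the provisional ranking are kept, and the total scores on the surviving items come out roughly bell-shaped by construction.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_candidates = 1000, 200

# Hypothetical trial run (all numbers invented): each examinee has some
# level of exposure to the tested knowledge, each candidate item has a
# difficulty, and the chance of a correct answer rises with exposure
# relative to difficulty.
exposure = rng.normal(0.0, 1.0, size=n_people)
difficulty = rng.normal(0.0, 1.5, size=n_candidates)
p_correct = 1.0 / (1.0 + np.exp(-(exposure[:, None] - difficulty[None, :])))
responses = (rng.random((n_people, n_candidates)) < p_correct).astype(float)

provisional_total = responses.sum(axis=1)

# "Item selection": keep only items of middling difficulty (roughly half
# of the trial group passes) that also correlate with the provisional ranking.
kept = [
    j for j in range(n_candidates)
    if 0.3 < responses[:, j].mean() < 0.7
    and np.corrcoef(responses[:, j], provisional_total)[0, 1] > 0.3
]

final_scores = responses[:, kept].sum(axis=1)
print(f"kept {len(kept)} of {n_candidates} candidate items; "
      f"final scores: mean {final_scores.mean():.1f}, sd {final_scores.std():.1f}")
# A histogram of final_scores shows the familiar bell shape -- a product of
# which items were kept, not a measurement discovered in the examinees.
```

Swap in a different selection rule and a different score distribution falls out, which is the point of the passage above.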

Candidate items are administered to a sample population; to be selected for the final test, an item must help establish the scoring norm for the whole group, along with the subtest norms, which are then supposed to replicate when the test is released for general use. So an item must play its role in creating a distribution of scores that places each subgroup (of people) in its predetermined place on the normal curve (itself an artifact of test construction). It is as Hilliard (2012: 118) notes:

Validating a newly drawn-up IQ exam involved giving it to a prescribed sample population to determine whether it measured what it was designed to assess. The scores were then correlated, that is, compared with the test designers’ presumptions. If the individuals who were supposed to come out on top didn’t score highly or, conversely, if the individuals who were assumed would be at the bottom of the scores didn’t end up there, then the designers would scrap the test.

Howe (1997: 6) states that “A psychological test score is no more than an indication of how well someone has performed at a number of questions that have been chosen for largely practical reasons. Nothing is genuinely being measured.” Howe (1997: 17) also noted that:

Because their construction has never been guided by any formal definition of what intelligence is, intelligence tests are strikingly different from genuine measuring instruments. Binet and Simon’s choice of items to include as the problems that made up their test was based purely on practical considerations.

IQ tests are ‘validated’ against older tests such as the Stanford-Binet, but the older tests were never validated themselves (see Richardson, 2002: 301). Howe (1997: 18) continues:

In the case of the Binet and Simon test, since their main purpose was to help establish whether or not a child was capable of coping with the conventional school curriculum, they sensibly chose items that seemed to assess a child’s capacity to succeed at the kinds of mental problems that are encountered in the classroom. Importantly, the content of the first intelligence test was decided by largely pragmatic considerations rather than being constrained by a formal definition of intelligence. That remains largely true of the tests that are used even now. As the early tests were revised and new assessment batteries constructed, the main benchmark for believing a new test to be adequate was its degree of agreement with the older ones. Each new test was assumed to be a proper measure of intelligence if the distributions of people’s scores at it matched the pattern of scores at a previous test, a line of reasoning that conveniently ignored the fact that the earlier ‘measures’ of intelligence that provided the basis for confirming the quality of the subsequent ones were never actually measures of anything. In reality … intelligence tests are very different from true measures (Nash, 1990). For instance, with a measure such as height it is clear that a particular quantity is the same irrespective of where it occurs. The 5 cm difference between 40 cm and 45 cm is the same as the 5 cm difference between 115 cm and 120 cm, but the same cannot be said about differing scores gained in a psychological test.

Conclusion

This discussion of the construction of IQ tests and the history of IQ testing leads to one conclusion: differences in scores can be built into and out of the tests based on the prior assumptions of the tests’ constructors; the history of IQ testing is rife with these same assumptions; and all newer tests are ‘validated’ by their agreement with older—still non-validated!—tests. That the genesis of IQ testing lies in social prejudices, with the tests constructed to agree with the existing hierarchy, does indeed damn the conclusions drawn from the tests—that group A outscores group B does not mean that A is more ‘intelligent’ than B; it only means that A was exposed to more of the knowledge on the test.

The normal distribution, too, is a result of the same item addition and elimination used to get the expected scores—the scores that agree with the constructors’ racial and class biases. Bias in mental testing does exist, contra Jensen (1980). It exists because items are carefully selected to distinguish between different racial groups and social classes.

The critique of IQ testing I have mounted here is not an ‘environmentalist’ critique, either. It is a methodological one.

Genetic and Epigenetic Determinism

1550 words

Genetic determinism is the belief that behavior/mental abilities are ‘controlled by’ genes. Gerick et al (2017) note that “Genetic determinism can be described as the attribution of the formation of traits to genes, where genes are ascribed more causal power than what scientific consensus suggests”, which is similar to Oyama (1985), who writes, “Just as traditional thought placed biological forms in the mind of God, so modern thought finds ways of endowing genes with ultimate formative power.” Moore (2014: 15) notes that genetic determinism is “the idea that genes can determine the nature of our characteristics”, or “the old idea that biology controls the development of characteristics like intelligence, height, and personality” (pg 39). (See my article DNA is not a Blueprint for more information.)

On the other hand, epigenetic determinism is “the belief that epigenetic mechanisms determine the expression of human traits and behaviors” (Wagoner and Uller, 2016); Moore (2014: 245) likewise notes that epigenetic determinism is “the idea that an organism’s epigenetic state invariably leads to a particular phenotype.” Both views are, of course, espoused in the scientific literature as well as in everyday social discourse. Both views, as well, are false.

Genetic Determinism

The concept of genetic determinism was first proposed by Weismann in 1893 with his theory of the germ-plasm. This, in contemporary times, is contrasted with “blank slatism” (Pinker, 2002), or the Standard Social Science Model (SSSM; Tooby and Cosmides, 1992; see Richardson, 2008 for a response). Genes, genetic determinists hold, determine the ontogeny of traits, acting as a sort of “director.” But this is at odds with modern thinking on genes, what they are, and what they “do.” Genes do nothing on their own without input from the physiological system—that is, from the environment (Noble, 2011). Thus, gene-environment interaction is the rule.

This led to either-or thinking in regard to the origin of traits and their development—what we now call “the nature-nurture debate.” Either nature (genes/biology) or nurture (experience, how one is raised), genetic determinists hold, is the cause of certain traits, like, for example, IQ.

Plomin (2018) asserts that nature has won the battle over nurture—while also stating that they interact. So, which one is it? It’s obvious that they interact—if there were no genes there would still be an environment but if there were no environment there would be no genes. (See here and here for critiques of his book Blueprint.)

This belief that genes determine traits goes back to Galton—one of the first hereditarians. Indeed, Galton was the one to coin the phrase “nature vs nurture”, while being a proponent of ‘nature over nurture.’ Do genes or environment influence/cause human behavior? The obvious answer to the question is both do—and they are intertwined: they interact.

Griffiths (2002) notes that:

Genetic determinism is the idea that many significant human characteristics are rendered inevitable by the presence of certain genes; that it is futile to attempt to modify criminal behavior or obesity or alcoholism by any means other than genetic manipulation.

Griffiths then argues that genes are very unlikely to be deterministic causes of behavior. Genes are thought to carry a kind of “information” which then determines how the organism will develop. This is what the “blueprint metaphor” for genes attempts to convey: that genes contain the information for trait development. The implicit assumption here is that genes are context-independent—that the (environmental) context the organism is in does not matter. But genes are context-dependent—“the very concept of a gene requires the environment” (Schneider, 2007). There is no such “information”—genes are not like blueprints or recipes. So genetic determinism is false.

But even though genetic determinism is false, it persists in the minds of our society/culture and of scientists (Moore, 2008), while still being taught in schools (Jamieson and Radick, 2017).

The claim that genes determine phenotypes can be shown in the following figure from Kampourakis (2017: 187):

Figure 9.6 (a) The common representation of gene function: a single gene determines a single phenotype. It should be clear from what has been presented in the book so far that this is not accurate. (b) A more accurate representation of gene function that takes development and environment into account. In this case, a phenotype is produced in a particular environment by developmental processes in which genes are implicated. In a different environment the same genes might contribute to the development of a different phenotype. Note the “black box” of development.

Richardson (2017: 133) notes that “There is no direct command line between environments and genes or between genes and phenotypes.” The fact of the matter is, genes do not determine an organism’s characters, they are merely implicated in the development of the character—being passive, not active templates (Noble, 2011).

Moore (2014: 199) tells us how genetic determinism fails, since genes do not work in a vacuum:

There is just one problem with the neo-Darwinian assumption that “hard” inheritance is the only good explanation for the transgenerational transmission of phenotypes: It is hopelessly simplistic. Genetic determinism is a faulty idea, because genes do not operate in a vacuum; phenotypes develop when genes interact with nongenetic factors in their local environments, factors that are affected by the broader environment.

Epigenetic Determinism

On the other hand, epigenetic determinism, the belief that epigenetic mechanisms determine the behavior of the organism, is false in the other direction. Epigenetic determinists decry genetic determinism, but I don’t think they realize that they are just as deterministic as the genetic determinists are.

Dupras et al (2018) note how “overly deterministic readings of epigenetic marks could promote discriminatory attitudes, discourses and practices based on the predictive nature of epigenetic information.” While epigenetics—specifically behavioral epigenetics—refutes notions of genetic determinism, we can still fall into a similar trap: a different determinism, but determinism all the same. And since genes don’t determine traits, epigenetics does not either; nor can we epigenetically manipulate traits pre- or perinatally, since what we would attempt to manipulate—‘intelligence’, contentment, happiness—all develop over the lifespan. Moore (2014: 248) continues:

Even in situations where we know that certain perinatal experiences can have very long-term effects, determinism is still an inappropriate framework for thinking about human development. For example, no one doubts that drinking alcohol during pregnancy is bad for the fetus, but in the hundreds of years before scientists established this relationship, innumerable fetuses exposed to some alcohol nonetheless grew up to be healthy, normal adults. This does not mean that pregnant women should drink alcohol freely, of course, but it does mean that developmental outcomes are not as easy to predict as we sometimes think. Therefore, it is probably always a bad idea to apply a deterministic worldview to a human being. Like DNA segments, epigenetic marks should not be considered destiny. How a given child will develop after trauma, for example, depends on a lot more than simply the experience of the trauma itself.

In an interview with The Psych Report, Moore tells us that people do not yet know enough about epigenetics for there to be epigenetic determinists (though many journalists and some scientists talk as if they are):

I don’t think people know enough about epigenetics yet to be epigenetic determinists, but I foresee that as a problem. As soon as people start hearing about these kinds of data that suggest that your early experiences can have long-term effects, there’s a natural assumption we all make that those experiences are determinative. That is, we tend to assume that if you have this experience in poverty, you are going to be permanently scarred by it.

The data seem to suggest that it may work that way, but it also seems to be the case that the experiences we have later in life also have epigenetic effects. And there’s every reason to think that those later experiences can ameliorate some of the effects that happened early on. So, I don’t think we need to be overly concerned that the things that happen to us early in life necessarily fate us to certain kinds of outcomes.

While epigenetics refutes genetic determinism, we can run into the problem of epigenetic determinism, which Moore predicts; journalists already write as if genes being turned on or off by the environment thereby dictates disease states, for example. Biological determinism—of any kind, epigenetic or genetic—is nonsensical, as “the development of phenotypes depends on the contexts in which epigenetic marks (and other developmentally relevant factors, of course) are embedded” (Moore, 2014: 246).

What really happens?

What really happens regarding development if genetic and epigenetic determinism are false? It’s simple: causal parity (Oyama, 1985; Noble, 2012): the thesis that genes/DNA play an important role in development, but so do other variables, so there is no reason to privilege genes/DNA above other developmental variables. Genes are not special developmental resources, nor are they more important than other developmental resources; genes and other developmental resources are developmentally ‘on par’. ALL traits develop through an interaction between genes and environment—nature and nurture. Contra ignorant pontifications (e.g., Plomin), neither has “won out”—they need each other to produce phenotypes.

So, genetic and epigenetic determinism are incoherent concepts: nature and nurture interact to produce the phenotypes we see around us today. Developmental systems theory, which integrates all factors of development, including epigenetics, is the superior framework to work with, but we should not, of course, be deterministic about organismal development.

A not uncommon reaction to DST is, ‘‘That’s completely crazy, and besides, I already knew it.” — Oyama, 2000, 195, Evolution’s Eye

Hereditarian “Reasoning” on Race

1100 words

The existence of race is important for the hereditarian paradigm. Since it is so important, there must be some theories of race that hereditarians use to ground their theories of race and IQ, right? Well, looking at the main hereditarians’ writings, they just assume the existence of race, and, along with the assumption, the existence of three races—Caucasoid, Negroid, and Mongoloid, to use Rushton’s (1997) terminology.

But just assuming race exists without a definition of what race is is troubling for the hereditarian position. Why just assume that race exists?

Fish (2002: 6), in Race and Intelligence: Separating Science from Myth, critiques the usual hereditarians on what race is and their assumption that it exists. He cites Jensen (1998: 425), who writes:

A race is one of a number of statistically distinguishable groups in which individual membership is not mutually exclusive by any single criterion, and individuals in a given group differ only statistically from one another and from the group’s central tendency on each of the many imperfectly correlated genetic characteristics that distinguish between groups as such.

Fish (2002: 6) continues:

This is an example of the kind of ethnocentric operational definition described earlier. A fair translation is, “As an American, I know that blacks and whites are races, so even though I can’t find any way of making sense of the biological facts, I’ll assign people to my cultural categories, do my statistical tests, and explain the differences in biological terms.” In essence, the process involves a kind of reasoning by converse. Instead of arguing, “If races exist there are genetic differences between them,” the argument is “Genetic differences between groups exist, therefore the groups are races.”

Fish goes on to write that if we take a group of bowlers and a group of golfers then, by chance, there may be genetic differences between them, but we wouldn’t call them “golfer races” or “bowler races.” If there were differences in IQ, income, and other variables, he continues, we wouldn’t argue that the differences are due to biology; we would argue that the differences are social. (Though I can see behavioral geneticists trying to argue that the differences are due to differences in genes between the groups.)

So the reasoning that Jensen uses is clearly fallacious. Though it is better than Levin’s (1997) and Rushton’s (1997) bare assumptions that race exists, it still fails, since Jensen (1998) is attempting to argue that genetic differences between groups make them races. Lynn (2006: 11) uses a similar argument to the one Jensen provides above. (Never mind Lynn’s conflating social and biological races in chapter 2 of Race Differences in Intelligence.)

Arguments for the existence of race do exist that do not, obviously, assume their conclusion. The two best ones I’m aware of are by Hardimon (2017) and Spencer (2014, 2019).

Hardimon has four concepts: the racialist race concept (what I take to be the hereditarian position), the minimalist race concept, the populationist race concept (the “scientization” of the minimalist race concept), and the socialrace concept. Specifically, Hardimon (2017: 99) defines ‘race’ as:

… a subdivision of Homo sapiens—a group of populations that exhibits a distinctive pattern of genetically transmitted phenotypic characters that corresponds to the group’s geographic ancestry and belongs to a biological line of descent initiated by a geographically separated and reproductively isolated founding population.

Spencer (2014, 2019), on the other hand, grounds his racial ontology in the Census and the OMB—what Spencer calls “the OMB race theory”—or “Blumenbachian partitions.” Take Spencer’s most recent (2019) formulation of his concept:

In this chapter, I have defended a nuanced biological racial realism as an account of how ‘race’ is used in one US race talk. I will call the theory OMB race theory, and the theory makes the following three claims:

(3.7) The set of races in OMB race talk is one meaning of ‘race’ in US race talk.

(3.8) The set of races in OMB race talk is the set of human continental populations.

(3.9) The set of human continental populations is biologically real.

I argued for (3.7) in sections 3.2 and 3.3. Here, I argued that OMB race talk is not only an ordinary race talk in the current United States, but a race talk where the meaning of ‘race’ in the race talk is just the set of races used in the race talk. I argued for (3.8) (a.k.a. ‘the identity thesis’) in sections 3.3 and 3.4. Here, I argued that the thing being referred to in OMB race talk (a.k.a. the meaning of ‘race’ in OMB race talk) is a set of biological populations in humans (Africans, East Asians, Eurasians, Native Americans, and Oceanians), which I’ve dubbed the human continental populations. Finally, I argued for (3.9) in section 3.4. Here, I argued that the set of human continental populations is biologically real because it currently occupies the K = 5 level of human population structure according to contemporary population genetics.

Whether or not one accepts Hardimon’s and Spencer’s arguments for the existence of race is not the point here, however. The point here is that these two philosophers have given their belief in the existence of race a sound philosophical grounding—we cannot, though, say the same for the hereditarians.

It should also be noted that both Spencer and Hardimon discount hereditarian theory—indeed, Spencer (2014: 1036) writes:

Nothing in Blumenbachian race theory entails that socially important differences exist among US races. This means that the theory does not entail that there are aesthetic, intellectual, or moral differences among US races. Nor does it entail that US races differ in drug metabolizing enzymes or genetic disorders. This is not political correctness either. Rather, the genetic evidence that supports the theory comes from noncoding DNA sequences. Thus, if individuals wish to make claims about one race being superior to another in some respect, they will have to look elsewhere for that evidence.

So, as can be seen, hereditarian ‘reasoning’ on race is not grounded in anything—hereditarians just assume that races exist. This stands in stark contrast to theories of race put forth by philosophers of race. Nonhereditarian theories of race exist—and, as I’ve shown, hereditarians don’t define race, nor do they have an argument for the existence of races; they just assume their existence. But for the hereditarian paradigm to even get off the ground, races must be biologically real. Hardimon and Spencer argue that they are, but their theories of race lend no support to hereditarianism.

That is the hereditarian ‘reasoning’ on race: either assume the existence of race sans argument, or argue that genetic differences between groups exist and so the groups are races. Hereditarians need to posit something like Hardimon’s or Spencer’s concepts.

Jews, IQ, Genes, and Culture

1500 words

Jewish IQ is one of the most talked-about topics in the hereditarian sphere. Jews have higher IQs, Cochran, Hardy, and Harpending (2006: 2) argue, because “the unique demography and sociology of Ashkenazim in medieval Europe selected for intelligence.” To IQ-ists, IQ is influenced/caused by genetic factors—while environment accounts for only a small portion.

In The Chosen People: A Study of Jewish Intelligence, Lynn (2011) discusses one explanation for higher Jewish IQ—that of “pushy Jewish mothers” (Marjoribanks, 1972).

“Fourth, other environmentalists such as Marjoribanks (1972) have argued that the high intelligence of the Ashkenazi Jews is attributable to the typical “pushy Jewish mother”. In a study carried out in Canada he compared 100 Jewish boys aged 11 years with 100 Protestant white gentile boys and 100 white French Canadians and assessed their mothers for “Press for Achievement”, i.e. the extent to which mothers put pressure on their sons to achieve. He found that the Jewish mothers scored higher on “Press for Achievement” than Protestant mothers by 5 SD units and higher than French Canadian mothers by 8 SD units and argued that this explains the high IQ of the children. But this inference does not follow. There is no general acceptance of the thesis that pushy mothers can raise the IQs of their children. Indeed, the contemporary consensus is that family environmental factors have no long term effect on the intelligence of children (Rowe, 1994).”

The inference is a modus ponens:

P1 If p, then q.

P2 p.

C Therefore q.

Let p be “Jewish mothers scored higher on “Press for Achievement” by X SDs” and let q be “this explains the high IQ of the children.”

So now we have:

Premise 1: If “Jewish mothers scored higher on “Press for Achievement” by X SDs”, then “this explains the high IQ of the children.”
Premise 2: “Jewish mothers scored higher on “Press for Achievement” by X SDs.”
Conclusion: Therefore, “this explains the high IQ of the children.”

Vaughn (2008: 12) notes that an inference is “reasoning from a premise or premises to … conclusions based on those premises.” The conclusion follows from the two premises, so how does the inference not follow?
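The argument form itself is, of course, valid; here is a one-line rendering of modus ponens in Lean, offered only to underline that any dispute is over Lynn's premises, not the logic of the inference:

```lean
-- Modus ponens: from a proof of p → q and a proof of p, we obtain q.
theorem modus_ponens (p q : Prop) (h : p → q) (hp : p) : q :=
  h hp
```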

IQ tests are tests of specific knowledge and skills. It therefore follows that, for example, if a mother is “pushy” and her being pushy leads to more studying, then the IQ of the child can be raised.

Lynn’s claim that “family environmental factors have no long term effect on the intelligence of children” is puzzling. Rowe relies heavily on twin and adoption studies, which have false assumptions underlying them, as noted by Richardson and Norgate (2005), Moore (2006), Joseph (2014), Fosse, Joseph, and Richardson (2015), and Joseph et al (2015). The EEA is false, so we cannot accept the genetic conclusions drawn from twin studies.

Lynn and Kanazawa (2008: 807) argue that their “results clearly support the high intelligence theory of Jewish achievement while at the same time provide no support for the cultural values theory as an explanation for Jewish success.” They are positing “intelligence” as an explanatory concept, though Howe (1988) notes that “intelligence” is “a descriptive measure, not an explanatory concept.” “Intelligence,” says Howe (1997: ix), “is … an outcome … not a cause.” More specifically, it is an outcome of development from infancy all the way up to adulthood, and of being exposed to the items on the test. Lynn has claimed for decades that high intelligence explains Jewish achievement. But whence came the intelligence? Intelligence develops throughout the life cycle—from infancy to adolescence to adulthood (Moore, 2014).

Ogbu and Simon (1998: 164) note that Jews are “autonomous minorities”—groups small in number. They note that “Although [Jews, the Amish, and Mormons] may suffer discrimination, they are not totally dominated and oppressed, and their school achievement is no different from the dominant group (Ogbu 1978)” (Ogbu and Simon, 1998: 164). Jews are also voluntary minorities, and Ogbu (2002: 250-251; in Race and Intelligence: Separating Science from Myth) suggests five reasons for the good test performance of these types of minorities, among them:

  1. Their preimmigration experience: Some do well since they were exposed to the items and structure of the tests in their native countries.
  2. They are cognitively acculturated: They acquired the cognitive skills of the white middle-class when they began to participate in their culture, schools, and economy.
  3. The history and incentive of motivation: They are motivated to score well on the tests as they have this “preimmigration expectation” in which high test scores are necessary to achieve their goals for why they emigrated along with a “positive frame of reference” in which becoming successful in America is better than becoming successful at home, and the “folk theory of getting ahead in the United States”, that their chance of success is better in the US and the key to success is a good education—which they then equate with high test scores.

So if ‘intelligence’ tests are tests of culturally-specific knowledge and skills, and if certain groups are exposed more to this knowledge, it then follows that those groups are better prepared for test-taking—specifically for IQ tests.

The IQ-ists attempt to argue that differences in IQ are due, largely, to differences in ‘genes for’ IQ, and this is supposed to explain Jewish IQ and, along with it, Jewish achievement. (See also Gilman, 2008 and Ferguson, 2008 for responses to the just-so storytelling from Cochran, Hardy, and Harpending, 2006.) Lynn, purportedly, is invoking ‘genetic confounding’—he is presupposing that Jews have ‘high IQ genes’ and that this is what explains the “pushiness” of Jewish mothers. The Jewish mothers then pass on their “genes for” high IQ—according to Lynn. But the evolutionary accounts (just-so stories) explaining Jewish IQ fail. Ferguson (2008) shows that “there is no good reason to believe that the argument of [Cochran, Hardy, and Harpending, 2006] is likely, or even reasonably possible.” The tall-tale explanations for Jewish IQ fail, too.

Prinz (2014: 68) notes that Cochran et al have “a seductive story” (aren’t all just-so stories seductive since they are selected to comport with the observation? Smith, 2016), while continuing (pg 71):

The very fact that the Utah researchers use to argue for a genetic difference actually points to a cultural difference between Ashkenazim and other groups. Ashkenazi Jews may have encouraged their children to study maths because it was the only way to get ahead. The emphasis remains widespread today, and it may be the major source of performance on IQ tests. In arguing that Ashkenazim are genetically different, the Utah researchers identify a major cultural difference, and that cultural difference is sufficient to explain the pattern of academic achievement. There is no solid evidence for thinking that the Ashkenazim advantage in IQ tests is genetically, as opposed to culturally, caused.

Nisbett (2008: 146) notes other problems with the theory—most notably Sephardic over-achievement under Islam:

It is also important to the Cochran theory that Sephardic Jews not be terribly accomplished, since they did not pass through the genetic filter of occupations that demanded high intelligence. Contemporary Sephardic Jews in fact do not seem to have unusually high IQs. But Sephardic Jews under Islam achieved at very high levels. Fifteen percent of all scientists in the period AD 1150-1300 were Jewish—far out of proportion to their presence in the world population, or even the population of the Islamic world—and these scientists were overwhelmingly Sephardic. Cochran and company are left with only a cultural explanation of this Sephardic efflorescence, and it is not congenial to their genetic theory of Jewish intelligence.

Finally, Berg and Belmont (1990: 106) note that “The purpose of the present study was to clarify a possible misinterpretation of the results of Lesser et al’s (1965) influential study that suggested the existence of a “Jewish” pattern of mental abilities. In establishing that Jewish children of different socio-cultural backgrounds display different patterns of mental abilities, which tend to cluster by socio-cultural group, this study confirms Lesser et al’s position that intellectual patterns are, in large part, culturally derived.” Cultural differences exist; cultural differences have an effect on psychological traits; if cultural differences exist and cultural differences have an effect on psychological traits (with culture influencing a population’s beliefs and values) and IQ tests are culturally-/class-specific knowledge tests, then it necessarily follows that IQ differences are cultural/social in nature, not ‘genetic.’

In sum, Lynn’s claim that the inference does not follow is ridiculous. The argument provided is a modus ponens, so the inference does follow. Similarly, Lynn’s claim that “pushy Jewish mothers” don’t explain the high IQs of Jews does not hold up. If IQ tests are tests of middle-class knowledge and skills, and Jewish children are exposed to the structure of and items on them, then it follows that being “pushy” with children—that is, getting them to study and whatnot—would explain higher IQs. Lynn’s and Kanazawa’s assertion that “high intelligence is the most promising explanation of Jewish achievement” also fails, since intelligence is not an explanatory concept—a cause—but a descriptive measure that develops across the lifespan.

Knowledge, Culture, Logic, and IQ

5050 words

… what IQ tests actually assess is not some universal scale of cognitive strength but the presence of skills and knowledge structures more likely to be acquired in some groups than in others. (Richardson, 2017: 98)

For the past 100 years, the black-white IQ gap has puzzled psychometricians. There are two camps—hereditarians (those who believe that individual and group differences in IQ are due largely to genetics) and environmentalists/interactionists (those who believe that individual and group differences in IQ are largely due to differences in learning, exposure to knowledge, culture and immediate environment).

Knowledge

One of the most forceful arguments for the environmentalist side (i.e., that differences in IQ are due to the cultural and social environment; note that an interactionist framework can be used here, too) comes from Fagan and Holland (2007). They show that half of the questions on IQ tests had no racial bias, whereas the other problems on the test were solvable only with a specific type of knowledge – knowledge found specifically in the middle class. So if blacks are more likely than whites to be lower class, then what explains lower test scores for blacks is differential exposure to knowledge – specifically, the knowledge needed to complete the items on the test.

But some hereditarians say otherwise – they claim that since knowledge is easily accessible to everyone, everyone who wants to learn something will learn it, and thus access to information has nothing to do with cultural/social effects.

A hereditarian can, for instance, state that anyone who wants to can learn the types of knowledge that are on IQ tests and that they are widely available everywhere. But racial gaps in IQ stay the same, even though all racial groups have the same access to the specific types of cultural knowledge on IQ tests. Therefore, differences in IQ are not due to differences in one’s immediate environment and what they are exposed to—differences in IQ are due to some innate, genetic differences between blacks and whites. Put into premise and conclusion form, the argument goes something like this:

P1 If racial gaps in IQ were due specifically to differences in knowledge, then anyone who wants to and is able to learn the stuff on the tests can do so for free on the Internet.

P2 Anyone who wants to and is able to learn stuff can do so for free on the Internet.

P3 Blacks score lower than whites on IQ tests, even though they have the same access to information if they would like to seek it out.

C Therefore, differences in IQ between races are due to innate, genetic factors, not any environmental ones.

This argument is strange. (1) One would have to assume that blacks and whites have the same access to knowledge—but we know that lower-income people have less access to knowledge in virtue of the environments they live in. For instance, they may have underfunded libraries or bad schools with teachers who do not care enough to teach students what they need to succeed on these standardized tests (IQ tests, the SAT, etc. are all different versions of the same test). (2) One would have to assume that everyone has the same motivation to learn what amounts to answers to questions on a test with no real-world implications. And (3) the type of knowledge one is exposed to dictates what one can tap into while attempting to solve a problem. All three of these factors can combine to produce the racial differences in IQ test performance.

Familiarity with the items on the tests allows faster processing of information, allowing one to correctly identify an answer in a shorter period of time. If we look at IQ tests as tests of middle-class knowledge and skills, and we rightly observe that blacks are more likely to be lower class while whites are more likely to be middle class, then it logically follows that the cause of differences in IQ between blacks and whites is cultural – and not genetic – in origin. This paper – and others – solves the century-old debate on racial IQ differences: what accounts for differences in IQ scores is differential exposure to knowledge. Claiming that people have the same access to knowledge, and simply won’t learn it if they don’t seek it out, does not make sense.

Differing experiences lead to differing amounts of knowledge. If differing experiences lead to differing amounts of knowledge, and IQ tests are tests of knowledge—culturally-specific knowledge—then those who are not exposed to the knowledge on the test will score lower than those who are exposed to it. Therefore, Jensen’s Default Hypothesis is false (Fagan and Holland, 2002). Fagan and Holland (2002) compared blacks and whites on their knowledge of the meanings of words, which is highly “g”-loaded and shows black-white differences. They review research showing that blacks have lower exposure to words and are therefore unfamiliar with certain words (keep this in mind for the end). They mixed in novel words with previously-known words to see if there was a difference.

Fagan and Holland (2002) picked out random words from the dictionary and then put them into sentences to give the testee some context. They carried out five experiments in all, and each one showed that, when equal opportunity was given to the groups, they were “equal in knowledge” (IQ), while whites were more likely to already know the items typically found on IQ tests. Thus, there were no racial differences between blacks and whites when looked at from an information-processing point of view. Therefore, to explain racial differences in IQ, we must look to differences in the cultural/social environment. Fagan (2000), for instance, states that “Cultures may differ in the types of knowledge their members have but not in how well they process. Cultures may account for racial differences in IQ.”

The results of Fagan and Holland (2002) are completely at odds with Jensen’s Default Hypothesis—the hypothesis that the 15-point gap in IQ between blacks and whites is due to the same genetic and environmental factors, in the same ratio, that underlie individual differences within each group. As Fagan and Holland (2002: 382) show:

Contrary to what the default hypothesis would predict, however, the within racial group analyses in our study stand in sharp contrast to our between racial group findings. Specifically, individuals within a racial group who differed in general knowledge of word meanings also differed in performance when equal exposure to the information to be tested was provided. Thus, our results suggest that the average difference of 15 IQ points between Blacks and Whites is not due to the same genetic and environmental factors, in the same ratio, that account for differences among individuals within a racial group in IQ.

Exposure to information is critical, in fact. For instance, Ceci (1996) shows that familiarity with words dictates the speed of processing used in identifying the correct answer to a problem. In regard to differences in IQ, Ceci (1996) does not deny the role of biology—indeed, it’s part of his bio-ecological model of IQ, a theory that postulates the development of intelligence as an interaction between biological dispositions and the environment in which those dispositions manifest themselves. Ceci (1996) does note that there are biological constraints on intelligence, but that “… individual differences in biological constraints on specific cognitive abilities are not necessarily (or even probably) directly responsible for producing the individual differences that have been reported in the psychometric literature.” That such potentials may be “genetic” in origin, of course, does not license the claim that genetic factors contribute to variance in IQ: “Everyone may possess them to the same degree, and the variance may be due to environment and/or motivations that led to their differential crystallization.” (Ceci, 1996: 171)

Ceci (1996) further shows that people can differ in intellectual performance due to three things: (1) the efficiency of the underlying cognitive potentials relevant to the cognitive ability in question; (2) the structure of the knowledge relevant to the performance; and (3) the contextual/motivational factors relevant to crystallizing the underlying potentials through one’s knowledge. Thus, if one lacks knowledge of the items on the test due to what one learned in school, then the test will be biased against them, since they did not learn the information the test draws on.

Cahan and Cohen (1989) note that nine-year-olds in fourth grade had higher IQs than nine-year-olds in third grade. This is to be expected if we take IQ scores as indices of culturally-specific knowledge and skills: fourth-graders have simply been exposed to more information than their same-age peers in third grade, and in virtue of that greater exposure they score higher on IQ tests.

Cockroft et al (2015) studied South African and British undergrads on the WAIS-III. They conclude that “the majority of the subtests in the WAIS-III hold cross-cultural biases“, while this is “most evident in tasks which tap crystallized, long-term learning, irrespective of whether the format is verbal or non-verbal” so “This challenges the view that visuo-spatial and non-verbal tests tend to be culturally fairer than verbal ones (Rosselli and Ardila, 2003)”.

IQ tests “simply reflect the different kinds of learning by children from different (sub)cultures: in other words, a measure of learning, not learning ability, and are merely a redescription of the class structure of society, not its causes … it will always be quite impossible to measure such ability with an instrument that depends on learning in one particular culture” (Richardson, 2017: 99-100). This is the logical position to hold: if IQ tests test class-specific knowledge and certain classes are not exposed to said items, then those classes will score lower. Therefore, since IQ tests are tests of a certain kind of knowledge, they cannot be “a measure of learning ability”; so, contra Gottfredson, ‘g’ or ‘intelligence’ (IQ test scores) cannot be called “basic learning ability”, because we cannot create culture—knowledge—free tests: all human cognizing takes place in a cultural context from which it cannot be divorced.

Since all human cognition takes place through the medium of cultural/psychological tools, the very idea of a culture-free test is, as Cole (1999) notes, ‘a contradiction in terms . . . by its very nature, IQ testing is culture bound’ (p. 646). Individuals are simply more or less prepared for dealing with the cognitive and linguistic structures built in to the particular items. (Richardson, 2002: 293)

Heine (2017: 187) gives some example items from the World War I Army Alpha test:

1. The Percheron is a kind of

(a) goat, (b) horse, (c) cow, (d) sheep.

2. The most prominent industry of Gloucester is

(a) fishing, (b) packing, (c) brewing, (d) automobiles.

3. “There’s a reason” is an advertisement for

(a) drink, (b) revolver, (c) flour, (d) cleanser.

4. The Knight engine is used in the

(a) Packard, (b) Stearns, (c) Lozier, (d) Pierce Arrow.

5. The Stanchion is used in

(a) fishing, (b) hunting, (c) farming, (d) motoring.

Such test items are similar to those found on modern-day IQ tests. See, for example, Castles (2013: 150), who writes:

One section of the WAIS-III, for example, consists of arithmetic problems that the respondent must solve in his or her head. Others require test-takers to define a series of vocabulary words (many of which would be familiar only to skilled-readers), to answer school-related factual questions (e.g., “Who was the first president of the United States?” or “Who wrote the Canterbury Tales?”), and to recognize and endorse common cultural norms and values (e.g., “What should you do if a sales clerk accidentally gives you too much change?” or “Why does our Constitution call for division of powers?”). True, respondents are also given a few opportunities to solve novel problems (e.g., copying a series of abstract designs with colored blocks). But even these supposedly culture-fair items require an understanding of social conventions, familiarity with objects specific to American culture, and/or experience working with geometric shapes or symbols. [Since these are questions found on the WAIS-III, go back and read Cockroft et al (2015), since they used the British version, which, of course, is similar.]

If one is not exposed to the structure of the test along with the items and information on them, how, then, can we say that the test is ‘fair’ to other cultural groups (social classes included)? For, if all tests are culture-bound and different groups of people have different cultures, histories, etc, then they will score differently by virtue of what they know. This is why it is ridiculous to state so confidently that IQ tests—however imperfectly—test “intelligence.” They test certain skills and knowledge more likely to be found in certain groups/classes over others—specifically in the dominant group. So what dictates IQ scores is differential access to knowledge (i.e., cultural tools) and how to use such cultural tools (which then become psychological tools.)

Lastly, take an Amazonian people called the Pirahã. They have a different counting system than we do in the West, called the “one-two-many system, where quantities beyond two are not counted but are simply referred to as “many”” (Gordon, 2005: 496). A Pirahã adult was shown an empty can. Then the investigator put six nuts into the can and took five out, one at a time. The investigator then asked the adult if there were any nuts remaining in the can—the man answered that he had no idea. Everett (2005: 622) notes that “Pirahã is the only language known without number, numerals, or a concept of counting. It also lacks terms for quantification such as “all,” “each,” “every,” “most,” and “some.””

(hbdchick, quite stupidly, wrote on Twitter: “remember when supermisdreavus suggested that the tsimane (who only count to s’thing like two and beyond that it’s “many”) maybe went down an evolutionary pathway in which they *lost* such numbers genes?” Riiiight. Surely the Tsimane “went down an evolutionary pathway in which they *lost* such numbers genes.” This is the idiocy of “HBDers” in action. Of course, I wouldn’t expect them to read the actual literature beyond knowing something basic (Tsimane numbers beyond “two” are known as “many”) before positing a just-so story for why they don’t count above “two.”)

Non-verbal tests

Take a non-verbal test, such as the Bender-Gestalt test. There are nine index cards with different geometrical designs on them, and the testee needs to copy what he saw before the next card is shown. The testee is then scored on how accurate his recreation of each card is. Seems culture-fair, no? It’s just shapes and similar figures, so how could that be influenced by class and culture? One might, at first glance, claim that such tests owe nothing to knowledge structure and exposure and so would rightly be called “culture-free.” But while the shapes that appear on tests like the Raven are novel, the rules governing them are not.

Hoffman (1966) studied 80 children (20 Kickapoo Indians (KIs), 20 low-SES blacks (LSBs), 20 low-SES whites (LSWs), and 20 middle-class whites (MCWs)) on the Bender-Gestalt test. The Kickapoo children were selected from 5 urban schools; the blacks from majority-black elementary schools in Oklahoma City; the low-SES whites from low-SES areas of Oklahoma; and the middle-class whites from majority-white schools in Oklahoma. All of the children were aged 8-10 and in the third grade, and all had IQs in the range of 90-110. They were matched on a whole slew of other variables. Hoffman (1966: 52) states “that variations in cultural and socio-economic background affect Bender Gestalt reproduction.”

Hoffman (1966: 86) writes that:

since the four groups were shown to exhibit no significant differences in motor, or perceptual discrimination ability it follows that differences among the four groups of boys in Bender Gestalt performance are assignable to interpretative factors. Furthermore, significant differences among the four groups in Bender performance illustrates that the Bender Gestalt test is indeed not a so called “culture-free” test.

Hoffman concluded that the MCWs, KIs, LSBs, and LSWs did not differ in copying ability, nor did they differ significantly in discrimination across the different phases of the Bender-Gestalt; nor was there any bias in the figures between the two sexes. They did differ in their reproductions of the Bender-Gestalt designs, and their differing performance can, of course, be interpreted differently by different people. If we start from the assumption that all IQ tests are culture-bound (Cole, 2004), then living in a culture different from the majority culture will lead one to score differently in virtue of having different, culture-specific knowledge and experience. The four groups approached the test in different ways, too. Thus, the main conclusion is that:

The Bender Gestalt test is not a “culture-free” test. Cultural and socio-economic background appear to significantly affect Bender Gestalt reproduction. (Hoffman, 1966: 88)

Drame and Ferguson (2017) and Dutton et al (2017) also show that there is bias in the Raven’s test in Mali and Sudan. This, of course, comes down to differential exposure to the types of problems on the items (Richardson, 2002: 291-293): cultures that do not expose people to the items on the test will leave them scoring lower. Richardson (1991) took 10 of the hardest Raven’s items and couched them in familiar terms with familiar, non-geometric objects. Twenty eleven-year-olds performed far better with the new items than the original ones, even though the problems Richardson (1991) devised used the same exact logic. This, obviously, shows that the Raven is not a “culture-free” measure of inductive and deductive logic.

The Raven is administered in a testing environment, which is itself a cultural device. Testees are handed a paper with black and white figures ordered from left to right. (Note that Abdel-Khalek and Raven (2006: 171) write that Raven’s items “were transposed to read from right to left following the custom of Arabic writing.” So this is another way in which the tests are biased and therefore not “culture-free.”) Richardson (2000: 164) writes that:

For example, one rule simply consists of the addition or subtraction of a figure as we move along a row or down a column; another might consist of substituting elements. My point is that these are emphatically culture-loaded, in the sense that they reflect further information-handling tools for storing and extracting information from the text, from tables of figures, from accounts or timetables, and so on, all of which are more prominent in some cultures and subcultures than others.

Richardson (1991: 83) quotes Keating and Maclean (1987: 243), who argue that tests like the Raven “tap highly formal and specific school skills related to text processing and decontextualized rule application, and are thus the most systematically acculturated tests” (their emphasis). Keating and Maclean (1987: 244) also state that the variation in scores between individuals is due to “the degree of acculturation to the mainstream school skills of Western society” (their emphasis). That’s the thing: all testing is biased toward a certain culture in virtue of the kinds of things its members are exposed to—not being exposed to the items and structure of the test means that the test is, in effect, biased against certain cultural/social groups.

Davis (2014) studied the Tsimane, a people from Bolivia, on the Raven. Average eleven-year-olds answered 78 percent or more of the questions correctly, whereas lower-performing individuals answered 47 percent correctly. The eleven-year-old Tsimane, though, answered only 31 percent correctly. There was another group of Tsimane who went to school and lived in villages—not living in the rainforest like the other group. They ended up scoring 72 percent correct, compared to only 31 percent for the unschooled Tsimane. “… the cognitive skills of the Tsimane have developed to master the challenges that their environment places on them, and the Raven’s test simply does not tap into those skills. It’s not a reflection of some kind of true universal intelligence; it just reflects how well they can answer those items” (Heine, 2017: 189). Thus, what these measures of “intelligence” capture is not an innate skill but something learned through experience—what we learn from our environments.

Heine (2017: 190) discusses as-yet-unpublished results on the Hadza, who have been called “the most cognitively complex foragers on Earth.” So, “the most cognitively complex foragers on Earth” should be pretty “smart”, right? Well, the Hadza were given six-piece jigsaw puzzles to complete—the kinds of puzzles that American four-year-olds do for fun. They had never seen such puzzles before and so were stumped as to how to complete them. Even those who managed to complete them took several minutes to do so. Is the conclusion then licensed that “Hadza are less smart than four-year-old American children”? No! The jigsaw puzzle is a specific cultural tool that the Hadza had never seen before, and so their performance mirrored their unfamiliarity with the test.

Logic

The term “logical” comes from the Greek term logos, meaning “reason, idea, or word.” So, “logical reasoning” is based on reason and sound ideas, irrespective of bias and emotion. A simple syllogistic structure could be:

If X, then Y

X

∴ Y

We can substitute terms, too, for instance:

If it rains today, then I must bring an umbrella.

It’s raining today.

∴ I must bring an umbrella.

Richardson (2000: 161) notes how cross-cultural studies show that what is or is not logical thinking is neither objective nor simple, but “comes in socially determined forms.” He notes how the cross-cultural psychologist Sylvia Scribner showed some syllogisms to Kpelle farmers, couched in terms that were familiar to them. One syllogism given to them was:

All Kpelle men are rice farmers

Mr. Smith is not a rice farmer

Is he a Kpelle man? (Richardson, 2000: 162)

The individual repeatedly replied that he did not know Mr. Smith, so how could he know whether or not he was a Kpelle man? Another example was:

All people who own a house pay a house tax

Boima does not pay a house tax

Does Boima own a house? (Richardson, 2000: 162)

The answer here was that Boima did not have any money to pay a house tax.

The intended answers are: in regard to the first syllogism, Mr. Smith is not a rice farmer, so he is not a Kpelle man; regarding the second, Boima does not pay a house tax, so Boima does not own a house. The Kpelle respondent, however, could be reasoning with a syllogism something like:

All the deductions I can make are about individuals I know.

I do not know Mr. Smith.

Therefore I cannot make a deduction about Mr. Smith. (Richardson, 2000: 162)

They are reasoning in terms familiar to them, and so they get the answer right for their culture based on the knowledge they have. These examples, therefore, show that what can pass for “logical reasoning” depends on the time and place in which it is said. The deductions the Kpelle made were perfectly valid, though they were not what the syllogism-designers had in mind. In fact, I would say that there are many—equally valid—ways of answering such syllogisms, and such answers will vary by culture and custom.
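For comparison, the answer the syllogism-designers had in mind has the form of modus tollens. Here is a minimal Lean 4 sketch of that form (the proposition names isKpelle and isRiceFarmer are placeholders I have introduced, not anything from Scribner’s protocols); the Kpelle respondents were not failing at this form, they were supplying a different, equally coherent premise about what may be deduced concerning strangers:

-- A minimal sketch of the modus tollens form the test designers intended, in Lean 4.
-- isKpelle and isRiceFarmer are illustrative placeholder names only.
theorem intended_answer (isKpelle isRiceFarmer : Prop)
    (allFarm : isKpelle → isRiceFarmer) -- "All Kpelle men are rice farmers"
    (notFarmer : ¬isRiceFarmer)         -- "Mr. Smith is not a rice farmer"
    : ¬isKpelle :=                      -- intended answer: he is not a Kpelle man
  fun hK => notFarmer (allFarm hK)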

The bio-ecological framework, culture, and social class

The bio-ecological model of Ceci and Bronfenbrenner is a model of human development that relies on gene and environment interactions. The model is a Vygotskian one—in that learning is a social process in which support from parents, teachers, and the wider society plays an important role in the ontogeny of higher psychological functioning. (For a good primer on Vygotskian theory, see Vygotsky and the Social Formation of Mind, Wertsch, 1985.) Thus, it is a model of human development that most hereditarians would say “they use too.” This is, of course, contested by Ceci, who compares his bio-ecological framework with other theories (Ceci, 1996: 220, table 10.1):

[Image: table 10.1 from Ceci (1996: 220), comparing the bio-ecological framework with other theories.]

Cognition (thinking) is extremely context-sensitive. Along with many ecological influences, individual differences in cognition are understood best within the bio-ecological framework, which consists of three components: (1) ‘g’ doesn’t exist, but multiple cognitive potentials do; (2) motivational forces, the social/physical aspects of a task or setting, and how elaborate a knowledge domain is matter not only in the development of the human but also, of course, during testing; and (3) knowledge and aptitude are inseparable, “such that cognitive potentials continuously access one’s knowledge base in the cascading process of producing cognitions, which in turn alter the contents and structure of the knowledge base” (Ceci, 1996: 123).

Block (1995) notes that “Blacks and Whites are to some extent separate cultural groups.” Sternberg (2004) defines culture as “the set of attitudes, values, beliefs and behaviors shared by a group of people, communicated from one generation to the next via language or some other means of communication.” In regard to social class (blacks and whites differ in social class, a form of culture), Richardson (2002: 298) notes that “Social class is a compound of the cultural tools (knowledge and cognitive and psycholinguistic structures) individuals are exposed to; and beliefs, values, academic orientations, self-efficacy beliefs, and so on.” The APA notes that “Social status isn’t just about the cars we drive, the money we make or the schools we attend — it’s also about how we feel, think and act …” And the APS notes that social class can be seen as a form of culture. Since culture is a set of attitudes, beliefs and behaviors shared by a group of people, social classes are therefore forms of culture, as different classes have different attitudes, beliefs and behaviors.

Ceci (1996: 119) notes that:

large-scale cultural differences are likely to affect cognition in important ways. One’s way of thinking about things is determined in the course of interactions with others of the same culture; that is, the meaning of a cultural context is always negotiated between people of that culture. This, in turn, modifies both culture and thought.

While Manstead (2018) argues that:

There is solid evidence that the material circumstances in which people develop and live their lives have a profound influence on the ways in which they construe themselves and their social environments. The resulting differences in the ways that working‐class and middle‐ and upper‐class people think and act serve to reinforce these influences of social class background, making it harder for working‐class individuals to benefit from the kinds of educational and employment opportunities that would increase social mobility and thereby improve their material circumstances.

In fact, the bio-ecological model of human development (and IQ) is a developmental systems-type model. The types of things that go into the model are just like Richardson’s (2002) “sociocognitive affective nexus.” Richardson (2002) posits that the sources of IQ variation are mostly non-cognitive, writing that such factors include (pg 288):

(a) the extent to which people of different social classes and cultures have acquired a specific form of intelligence (or forms of knowledge and reasoning); (b) related variation in ‘academic orientation’ and ‘self-efficacy beliefs’; and (c) related variation in test anxiety, self-confidence, and so on, which affect performance in testing situations irrespective of actual ability

Cole (2004) concludes that:

Our imagined study of cross-cultural test construction makes it clear that tests of ability are inevitably cultural devices. This conclusion must seem dreary and disappointing to people who have been working to construct valid, culture-free tests. But from the perspective of history and logic, it simply confirms the fact, stated so clearly by Franz Boas half a century ago, that “mind, independent of experience, is inconceivable.”

The role of context is huge—and most psychometricians ignore it, as Ceci (1996: 107) quotes Bronfenbrenner (1989: 207) who writes:

It is a noteworthy feature of all preceding (cognitive approaches) that they make no reference whatsoever to the environment in which the person actually lives and grows. The implicit assumption is that the attributes in question are constant across place; the person carries them with her wherever she goes. Stating the issue more theoretically, the assumption is that the nature of the attribute does not change, irrespective of the context in which one finds one’s self.

Such contextual differences can be found in the intrinsic and extrinsic motivations of the individual in question. Self-efficacy, what one learns and how one learns it, and the motivation instilled by parents all form part of the context of the specific individual and how they develop, which then influences IQ scores (scores on middle-class knowledge and skills).

(For a primer on the bio-ecological model, see Armour-Thomas and Gopaul-McNicol, 1997; Papierno et al, 2005; Bronfenbrenner and Morris, 2007; and O’Toole, 2016.)

Conclusion

If blacks and whites are, to some extent, different cultural groups, then they will—by definition—have differing cultures. So “cultural differences are known to exist, and cultural differences can have an impact on psychological traits” [also in the knowledge one acquires, which is then one part of what dictates test scores] (see Prinz, 2014: 67, Beyond Human Nature). If blacks and whites are “separate cultural groups” (Block, 1995) and if they have different experiences by virtue of being different cultural groups, then they will score differently on any test of ability (including IQ; see Fagan and Holland, 2002, 2007), since all tests of ability are culture-bound (see Cole, 2004).

1 Blacks and whites are different cultural groups.

2 If (1), then they will have different experiences by virtue of being different cultural groups.

3 So blacks and whites, being different cultural groups, will score differently on tests of ability, since they are exposed to different knowledge structures due to their different cultures and since all tests of ability are culture-bound.

So, what accounts for the intercorrelations between tests of “cognitive ability”? Test constructors validate new tests against older, ‘more established’ tests, so “based on this it is unlikely that a measure unrelated to g will emerge as a winner in current practice … [so] it is no wonder that the intelligence hierarchy for different racial/ethnic groups remains consistent across different measures. The tests are highly correlated among each other and are similar in item structure and format” (Suzuki and Aronson, 2005: 321).

Therefore, what accounts for differences in IQ is not intellectual ability but cultural/social exposure to information—specifically the type of information used in the construction of IQ tests—along with test constructors building new tests to correlate with the old ones. They thereby get the foregone conclusion of there being racial differences in IQ, which they trumpet as evidence for a “biological cause”—but it is anything but: such differences are built into the test (Simon, 1997). (Note that Fagan and Holland, 2002 also found evidence of test bias.)

Thus, we should draw the logical conclusion: what explains racial IQ differences is not biological factors but environmental ones—specifically differential exposure to knowledge—along with how new tests are created (see Suzuki and Aronson, 2005). All human cognizing takes place in specific cultural contexts—therefore “culture-free tests” (i.e., tests devoid of cultural knowledge and context) are an impossibility. IQ tests are experience-dependent, so if one is not exposed to the relevant experiences needed to do well in a testing situation, one will score lower than one would have with the requisite culturally-specific knowledge.

Just-so Stories: How the Man Got His Bald Head

1250 words

Setting: 50kya in the Environment of Evolutionary Adaptedness

Imagine you and your band are being chased by a group of animals 50kya on the savanna. You look back as the herd is almost upon your band, about to rip them to pieces, when, suddenly, there is a bright, blinding flash and the animals stop in their tracks, giving your band enough time to escape.

Your band looks back from about 200 feet away to see a man standing there, his bald head gleaming at the animals, directing a beam of UV rays at the group. The women stand there, lovestruck, as this man has just saved the band.

This man—and men who looked like him—were then taken on many expeditions, and the same thing happened whenever the band got into trouble out in the grasslands. A group of animals threatens the band? No problem: there is a bald man there, waiting to use his head to blind the pack so the band can get away.

Women then started showing more affection to the bald men as they proved they can save the band, and without having to risk harm, at that. So women started mating more with them. Bald men then gained more power in the society, as the women of course continued to pick berries while the men hunted—bald men going with both groups to give protection.

Then, over time, genes that conferred a higher chance of becoming bald became fixed in the group, since baldness conveyed a fitness benefit: shining UV rays at a pack of animals protected the group, women found this attractive, and so they had more children with bald men.


Sound ridiculous? Well, searching for claims that bald heads are an adaptation, I discovered that someone has said the same exact thing:

A well-polished bald male head was often used by tribes of cavemen to blind predators. As a result every cavemen hunting group of 8 had one bald member, and thus thousands of years later 1 in 8 men experience early on set of baldness. – Taz Boonsborg, London, UK

There are other, similar adaptive stories for bald heads, such as a larger surface area to receive more vitamin D:

Loss of hair creates more skin area, which means more vitamin D can be absorbed from sunlight. This would provide a survival benefit for me, which would explain this trait being passed on.

While another person makes a similar claim:

I wonder if it can be linked to the time in evolution when Europeans lived in Central Asia before moving west to Europe. Vitamin D was a scarce necessity. I like to think of my bald head as sun ray receiver. I have noticed that women 30+ are a lot more likely to be attracted to me partially due to my baldness, sometimes very much so 😉

Richard, Tacoma, USA

There are, thankfully, some commenters that say it has no adaptive value. When it comes to the existence of any trait, of course, one can construct a plausible evolutionary narrative to explain the survival of the trait into the current day.

A conversation about baldness should include a conversation about beardedness—as I have written about before. The story goes that beards are adaptive for men, since “beards may have been valuable as a threat signal during direct male-versus-male competition for dominance and resources”, while for pattern baldness “The senescence feature of male pattern baldness may be an advertisement of social maturity. Social maturity includes enhanced social status but decreased physical threat, increased approachability, and a propensity to nurture.” (Muscarella and Cunningham, 1996). They also discuss differing sexual selection theories for pattern baldness, such as selection “for an increase in the visual area for the intimidation display of reddening color during anger.” So to these authors, baldness signifies social status, and if one is bald or balding, one can then show one’s emotions more—especially if light-skinned. Baldness, on this story, is a signal of senescence—biological aging.

Kabai (2008) hypothesizes (storytells) that androgenic alopecia—male pattern baldness (MPB)—is an adaptation. Each hair follicle has its own resistance to falling out. So, to Kabai (2008: 1039), MPB “evolved to elevate UV absorbance and thus to provide some protection against prostate cancer.” This is a classic just-so story. I have written a ton about the relationship between testosterone, prostate cancer (PCa), and vitamin D. Blacks have lower levels of vitamin D and higher levels of prostate cancer. (It should be noted that Setty et al (1970) showed that MPB is four times less likely in blacks compared to whites.) So, on this story, baldness in whites was adaptive in order to acquire more UV rays.

Unfortunately for Kabai (2008), the underlying logic of his hypothesis (that the balder the head, the higher the vitamin D production) has been tested. Bolland et al (2008: 675) “found no evidence to support the hypothesis that the degree of baldness influences serum 25-OHD levels.” This could be, as the authors note, because vitamin D is not produced in the scalp, or because vitamin D is produced in the scalp but getting sunburned would modify a man’s behavior to spend less time in the sun, and so the so-called benefits of a bald head for vitamin D production would be limited. Bolland et al (2008) end up concluding that:

there is no accepted evolutionary explanation for the almost universal prevalence of hair loss in older men. We suggest that other hypotheses are required to determine why older men go bald and whether baldness serves any physiological purpose.

Why must baldness “serve a physiological purpose”? I like how those proposing these hypotheses don’t consider the possibility that hair just falls out at a certain rate in certain individuals, with no evolutionary purpose behind it. Everything must have an adaptive purpose, it seems, and so one creates these fantastic stories. It’s just like Rudyard Kipling’s stories, actually.

Lastly, Yanez (2004) proposes a cultural hypothesis for MPB. Yanez (2004: 982) states that the cause of MPB is “the detention in the sebum flow moving towards the root of hair.” Those more likely to suffer from baldness are those with thin, straight, or low-density hair who constantly cut their hair short. On the other hand, those with high hair density and thicker hair are less likely to go bald, even if they keep their hair short. Thus, to Yanez (2004), the catalyst of balding is cultural but is, of course, driven by physiological factors (the blocking of sebum flow to the hair follicle).

Many kinds of stories have been crafted to explain how and why humans—mostly men—go bald. This just speaks to the problem with adaptationist hypotheses: notice bald heads; bald heads are still around (obviously, since we are observing them); since bald heads are still around, there must have been an evolutionary advantage to bald heads; *advantages noted above*; therefore baldness is an adaptation. This reasoning, though, is faulty—it’s a kind of rampant adaptationism, on which, if a trait exists today, it must have conferred an advantage in an evolutionary context and was therefore selected-for.

Of course, we cannot—and should not—discard the hypothesis that bald men still exist because they could direct UV rays at oncoming predators trying to kill the band, the women seeing it and it therefore becoming an attractive signal that the man can protect the family by directing UV rays at other men and animals. That’s a good hypothesis that’s worth investigating. (Sarcasm.)

Just-so Stories: The Brain Size Increase

1600 words

The increase in brain size in our species over the last 3 million years has been the subject of numerous articles and books. Over that time period, brain size increased from our ancestor Lucy all the way to today. Many stories have been proposed to explain exactly how and why it happened. The explanation is the same ol’ one: those with bigger heads, and therefore bigger brains, had more children and passed on their “brain genes” to the next generation until all that was left was the bigger-brained individuals of that species. But there is a problem here, just as with all just-so stories: how do we know that selection ‘acted’ on brain size and thus “selected-for” the ‘smarter’ individual?

Christopher Badcock, an evolutionary psychologist, wrote an introduction to EP, published in 2001, in which he takes a very balanced view of EP—noting its pitfalls and where, in his opinion, EP is useful. (Most may know my views on this already, see here.) In any case, Badcock cites R.D. Martin (1996: 155) who writes:

… when the effects of confounding variables such as body size and socio-economic status are excluded, no correlation is found between IQ and brain size among modern humans.

Badcock (2001: 48) also quotes George Williams—author of Adaptation and Natural Selection (1966; the precursor to Dawkins’ The Selfish Gene)—who writes:

Despite the arguments that have been advanced, I cannot readily accept the idea that advanced mental capabilities have ever been directly favored by selection. There is no reason for believing that a genius has ever been likely to leave more children than a man of somewhat below average intelligence. It has been suggested that a tribe that produces an occasional genius for its leadership is more likely to prevail in competition with tribes that lack this intellectual resource. This may well be true in the sense that a group with highly intelligent leaders is likely to gain political domination over less gifted groups, but political domination need not result in genetic domination, as indicated by the failure of many a ruling class to maintain its members.

In Adaptation and Natural Selection, Williams was much more cautious than adaptationists today, stating that adaptationism should be used only in very special cases. Too bad that adaptationists today did not get the memo. But what gives? Doesn’t it make sense that the “more intelligent” human 2 mya would be more successful when it comes to fitness than the “less intelligent” (whatever these words mean in this context) individual? Would a pre-historic Bill Gates have the most children due to his “high IQ” as PumpkinPerson has claimed in the past? I doubt it.

In any case, the increase in brain size—and therefore the increase in intellectual ability in humans—has been the last stand for evolutionary progressionists. “Look at the increase in brain size”, the progressionist says, “over the past 3mya. Doesn’t it look like there is a trend toward bigger, higher-quality brains in humans as our skills increased?” While it may look like that on its face, the real story is much more complicated.

Deacon (1990a) notes many fallacies that those who invoke the brain size increase across evolutionary history make, including: the evolutionary progression fallacy; the bigger-is-smarter fallacy; and the numerology fallacy. The evolutionary progression fallacy is simple enough. Deacon (1990a: 194) writes:

In theories of brain evolution, the concept of evolutionary progress finds implicit expression in the analysis of brain-size differences and presumed grade shifts in allometric brain/body size trends, in theories of comparative intelligence, in claims about the relative proportions of presumed advanced vs. primitive brain areas, in estimates of neural complexity, including the multiplication and differentiation of brain areas, and in the assessment of other species with respect to humans, as the presumed most advanced exemplar. Most of these accounts in some way or other are tied to problems of interpreting the correlates of brain size. The task that follows is to dispose of fallacious progressivist notions hidden in these analyses without ignoring the questions otherwise begged by the many enigmatic correlations of brain size in vertebrate evolution.

Of course, when it comes to the bigger-is-smarter fallacy, it’s quite obviously not true that bigger IS always better when it comes to brain size, as elephants and whales have larger brains than humans (also see Skoyles, 1999). But what they do not have more of than humans is cortical neurons (see Herculano-Houzel, 2009). Deacon (1990a: 201) describes the numerology fallacy:

Numerology fallacies are apparent correlations that turn out to be artifacts of numerical oversimplification. Numerology fallacies in science, like their mystical counterparts, are likely to be committed when meaning is ascribed to some statistic merely by virtue of its numeric similarity to some other statistic, without supportive evidence from the empirical system that is being described.

While Deacon (1990a: 232) concludes that:

The idea, that there have been progressive trends of brain evolution, that include changes in the relative proportions of different structures (i.e., enlarging more “advanced” areas with respect to more primitive areas) and increased differentiation, interconnection, and overall complexity of neural circuits, is largely an artifact of misunderstanding the complex internal correlates of brain size. … Numerous statistical problems, arising from the difficulty of analyzing a system with so many interdependent scaling relationships, have served to reinforce these misconceptions, and have fostered the tacit assumption that intelligence, brain complexity, and brain size bear a simple relationship to one another.

Deacon (1990b: 255) notes that brains weren’t directly selected for; rather, bigger bodies were (and bigger bodies mean bigger brains). This does not run into the fallacy of treating natural selection as a theory of trait selection, since on this view what is selected is the organism, not its trait:

I will argue that it is this remarkable parallelism, and not some progressive selection for increasing intelligence, that is responsible for many pseudoprogressive trends in mammalian brain evolution. Larger whole animals were being selected—not just larger brains—but along with the correlated brain enlargement in each lineage a multitude of parallel secondary internal adaptations followed.

Deacon (1990b: 697-698) notes that the large brain-to-body size ratio in humans compared to other primates is an illusion, “a surface manifestation of a complex allometric reorganization within the brain”, and that the brain itself is unlikely to be the object of selection. The correlated reorganization of the human brain, to Deacon, is what makes humans unique; not our “oversized” brains for our body size. And Deacon (1990c) states that “To a great extent the apparent “progress” of mammalian brain evolution vanishes when the effects of brain size and functional specialization are taken into account.” (See also Deacon, 1997: chapter 5.)

So is there really progress in brain evolution, which would, in effect, lend credence to the idea that evolution is progressive? No. There is no progress in brain evolution; the so-called size increases throughout human history are an artifact that disappears when we take brain size and functional specialization into account (functional specialization is the claim that different areas in the brain are specialized to carry out different functions; see Mahon and Cantlon, 2014). Our brains only seem to have progressively increased; when we get down to the functional details, we can see that it’s just an artifact.

Skoyles and Sagan (2002: 240) note that erectus, for example, could have survived with much smaller brains and that the brain of erectus did not arise for reasons of survival:

So how well equipped was Homo erectus? To throw some figures at you (calculations shown in the notes), easily well enough. Of Nariokotome boy’s 673 cc of cortex, 164 cc would have been prefrontal cortex, roughly the same as half-brained people. Nariokotome boy did not need the mental competence required by contemporary hunter-gatherers. … Compared to that of our distant ancestors, Upper Paleolithic technology is high tech. And the organizational skills used in hunts greatly improved 400,000 years ago to 20,000 years ago. These skills, in terms of our species, are recent, occurring by some estimates in less than the last 1 percent of our 2.5 million year existence as people. Before then, hunting skills would have required less brain power, as they were less mentally demanding. If you do not make detailed forward plans, then you do not need as much mental planning abilities as those who do. This suggests that the brains of Homo erectus did not arise for reasons of survival. For what they did, they could have gotten away with much smaller, Daniel Lyon-sized brains.

In any case—irrespective of the problems that Deacon shows for arguments for increasing brain size—how would we be able to use the theory of natural selection to show what was selected-for, brain size or another correlated trait? The progressionist may say that it doesn’t matter which is selected-for, the brain size is still increasing even if the correlated trait—the free-rider—is being selected-for.

But, too bad for the progressionist: if it is the correlated trait that is being selected-for, and not brain size directly, then the progressionist cannot logically state that brain size—and along with it intelligence (as the implication always is)—was directly selected-for. Deacon throws a wrench into such theories of evolutionary progress in regard to human brain size. And, looking at erectus, it’s not clear that he really “needed” such a big brain for survival—it seems he could have gotten away with a much smaller brain. As George Williams notes, there is no reason to argue that “high intelligence” was selected-for in our evolutionary history.

And so, Gould’s Full House argument still stands—there is no progress in evolution; bacteria occupy life’s mode; and humans are insignificant next to the number of bacteria on the planet, “big brains” or not.

The Burakumin and the Koreans: The Japanese Underclass and Their Achievement

2350 words

Japan has a caste system, just like India. Its lowest caste is called “the Burakumin”, a hereditary caste created in the 17th century—the descendants of tanners and butchers. (Burakumin means ‘hamlet people’ in Japanese, a term which took on a new meaning in the Meiji era.) Even though they gained “full rights” in 1871, they were still discriminated against in housing and work (only getting menial jobs). A Buraku Liberation League was formed in 1922 to end discrimination against the Buraku, protesting job discrimination by the dominant Ippan Japanese. Official figures put the number of Buraku in Japan at about 1.2 million, but unofficial estimates run to 6,000 communities and 3 million Buraku.

Note the similarities here with black Americans. Black Americans got their freedom from American slavery in 1865; the Burakumin got theirs in 1871. Both groups have been discriminated against—the things that the Burakumin face, blacks in America have faced. De Vos (1973: 374) describes some employment statistics for Buraku and non-Buraku:

For instance, Mahara reports the employment statistics for 166 non-Buraku children and 83 Buraku children who were graduated in March 1959 from a junior high school in Kyoto. Those who were hired by small-scale enterprises employing fewer than ten workers numbered 29.8 percent of the Buraku and 13.1 percent of the non-Buraku children; 15.1 percent of non-Buraku children obtained work in large-scale industries employing more than one thousand workers, whereas only 1.5 percent of Buraku children did so.

Certain Japanese communities—in southwestern Japan—have a traditional belief in keeping foxes as pets. The potential to possess such foxes descends down the family line—there are “black” foxes and “white” foxes. So in this area of southwestern Japan, people are classified as either “white” or “black”, and marriage across these artificial color lines is forbidden. The belief is that if someone from a “white” family marries someone from a “black” family, every other member of the “white” family becomes “black.”

Discrimination against the Buraku in Japan is so bad that a 330-page list of Buraku names and community locations was sold to employers. Burakumin are also more likely to join the Yakuza criminal gangs—most likely due to the opportunities they miss out on in their native land. (Note the similarities between Buraku joining the Yakuza and blacks joining their own ethnic gangs.) It was even declared that an “Eta” (the lowest of the Burakumin) was worth 1/7th of an ordinary person. This is eerily similar to how blacks were treated in America under the three-fifths compromise, which stipulated that the enslaved population would be counted as three-fifths of its total when apportioning Presidential electors, taxes, and representatives.

Now let’s get to the good stuff: “intelligence.” There is a gap in scores between “blacks”, “whites”, and Buraku. De Vos (1973: 377) describes score differences between “blacks”, “whites” and Buraku:

[Nomura] used two different kinds of “intelligence” tests, the nature of which are unfortunately unclear from his report. On both tests and in all three schools the results were uniform: “White” children averaged significantly higher than children from “black” families, and Buraku children, although not markedly lower than the “blacks,” averaged lowest.

[Table: IQ scores of “white,” “black,” and Buraku children, from Nomura’s data]

According to Tojo, the results of a Tanaka-Binet Group I.Q. Test administered to 351 fifth- and sixth-grade children, including 77 Buraku children, at a school in Takatsuki City near Osaka shows that the I.Q. scores of the Buraku children are markedly lower than those of the non-Buraku children. [Here is the table from Sternberg and Grigorenko, 2001]

[Table: Tanaka-Binet group IQ scores of Buraku and non-Buraku children, from Sternberg and Grigorenko (2001)]

Also see the table from Hockenbury and Hockenbury’s textbook Psychology where they show IQ score differences between non-Buraku and Buraku people:

[Table: IQ score differences between Buraku and non-Buraku, from Hockenbury and Hockenbury’s Psychology]

De Vos (1973: 376) also notes the similarities between Buraku and black and Mexican Americans:

Buraku school children are less successful compared with the majority group children. Their truancy rate is often high, as it is in California among black and Mexican-American minority groups. The situation in Japan also probably parallels the response to education by certain but not all minority groups in the United States.

How similar. There is another ethnic minority group in Japan that is the same race as the Japanese—the Koreans. They came to Japan as forced labor during WWII—about 7.8 million Koreans were conscripted by the Japanese, the men serving as laborers and soldiers while women were used as sex slaves. Most Koreans in Japan today were born there and speak no Korean, but they still face discrimination—just like the Buraku. There are no IQ test scores for Koreans in Japan, but there are standardized test scores. Koreans in America have higher educational attainment than native-born Americans (see the Pew data on Korean American educational attainment). But this is not the case in Japan. The following table is from Sternberg and Grigorenko (2001).

[Table: Educational attainment of Koreans in Japan, from Sternberg and Grigorenko (2001)]

Koreans in America do better than white Americans on standardized tests (and IQ tests), so how strange is it that Koreans in Japan score lower than ethnic Japanese and even the Burakumin? Sternberg and Grigorenko (2001) write:

Based on these cross-cultural comparisons, we suggest that it is the manner in which caste and minority status combine rather than either minority position or low-caste status alone that leads to low cognitive or IQ test scores for low-status groups in complex, technological societies such as Japan and the United States. Often jobs and education require the adaptive intellectual skills of the dominant caste. In such societies, IQ tests discriminate against all minorities, but how the minority groups perform on the tests depends on whether they became minorities by immigration or choice (voluntary minorities) or were forced by the dominant group into minority status (involuntary minorities). The evidence indicates that immigrant minority status and nonimmigrant status have different implications for IQ test performance.

The distinction between “voluntary” and “involuntary” minorities is simple: voluntary minorities emigrate by choice, whereas involuntary minorities were forced against their will to be there. Black Americans, Native Hawaiians, and Native Americans are involuntary minorities in America; blacks, in particular, face discrimination similar to that faced by the Buraku, and there is a similar difference in test scores between the high and low castes (classes in America). (See the discussion in Ogbu and Simons (1998) on voluntary and involuntary minorities, and also see Shimihara (1984) for information on how the Burakumin are discriminated against.)

Ogbu and Simons (1998) explain the school performance of minorities using what Ogbu calls a “cultural-ecological theory,” which considers societal and school factors along with community dynamics in minority communities. The first part of the theory concerns how minorities are treated in terms of education, which Ogbu calls “the system.” The second part concerns how minorities respond to their treatment in the school system, which Ogbu calls “community forces.” See Figure 1 from Ogbu and Simons (1998: 156):

[Figure 1 from Ogbu and Simons (1998: 156)]

Ogbu and Simons (1998: 158) write about the Buraku and Koreans:

Consider that some minority groups, like the Buraku outcast in Japan, do poorly in school in their country of origin but do quite well in the United States, or that Koreans do well in school in China and in the United States but do poorly in Japan.

Ogbu (1981: 13) even notes that when the Buraku are in America—since they do not look different from the Ippan—they are treated like regular Japanese-Americans, who are not discriminated against in America as the Buraku are in Japan, and, what do you know, they have outcomes similar to other Japanese:

The contrasting school experiences of the Buraku outcastes in Japan and in the United States are even more instructive. In Japan Buraku children continue massively to perform academically lower than the dominant Ippan children. But in the United States where the Buraku and the Ippan are treated alike by the American people, government and schools, the Buraku do just as well in school as the Ippan (DeVos 1973; Ito,1967; Ogbu, 1978a).

So, clearly, this gap between the Buraku and the Ippan disappears when they are not stratified in a dominant-subordinate relation. This is because IQ tests and other tests of ability are culture-bound (Cole, 2004); when Burakumin emigrate to America (as voluntary minorities), they are seen as and treated like any other Japanese, since there are no physical differences between them, and their educational attainment and IQs match those of other, non-Burakumin Japanese. The very items on these tests are biased towards the dominant (middle) class—so when the Buraku and Koreans emigrate to America they acquire the kinds of cultural and psychological tools (Richardson, 2002) needed to do well on the tests, and their scores change from what they were in their country of origin.

Note the striking similarities between black Americans, the Buraku, and Korean-Japanese: all three groups are discriminated against in their countries; all three groups have lower levels of achievement than the majority population; two of the groups (the Buraku and black Americans—there is no IQ data for Koreans in Japan that I am aware of) show a similar gap between themselves and the dominant group; and the Buraku and black Americans got their freedom at around the same time yet still face similar types of discrimination. However, when Buraku and Korean-Japanese people emigrate here to America, their IQ scores and educational attainment match those of other East Asian groups. To Americans, there is no difference between Buraku and non-Buraku Japanese people.

Koreans in Japan “endure a climate of hate”, according to The Japan Times. Koreans are heavily discriminated against in Japan. Korean-Japanese people, in any case, score worse than the Buraku. Though, as we all know, when Koreans emigrate to America they have higher test scores than whites do.

Note, though, the IQ scores of “voluntary minorities” who came to the US in the 1920s. The Irish, Italians, and even Jews were screened as “low IQ” and were thus barred entry into the country because of it. For example, Young (1922: 422) writes that:

Over 85 per cent. of the Italian group, more than 80 per cent. of the Polish group and 75 per cent. of the Greeks received their final letter grades from the beta or other performance examination.

Young (1922) also shows the results of IQ tests administered to Southern Europeans in certain areas (one of the studies was carried out in New York City):

[Tables from Young (1922): IQ test results for Southern and Eastern European immigrant groups]

These score differentials are just like those shown by the lower castes in Japan and America today. Though, as Thomas Sowell noted in regard to the IQs of Jews, Poles, Italians, and Greeks:

Like fertility rates, IQ scores differ substantially among ethnic groups at a given time, and have changed substantially over time— reshuffling the relative standings of the groups. As of about World War I, Jews scored sufficiently low on mental tests to cause a leading “expert” of that era to claim that the test score results “disprove the popular belief that the Jew is highly intelligent.” At that time, IQ scores for many of the other more recently arrived groups—Italians, Greeks, Poles, Portuguese, and Slovaks—were virtually identical to those found today among blacks, Hispanics, and other disadvantaged groups. However, over the succeeding decades, as most of these immigrant groups became more acculturated and advanced socioeconomically, their IQ scores have risen by substantial amounts. Jewish IQs were already above the national average by the 1920s, and recent studies of Italian and Polish IQs show them to have reached or passed the national average in the post-World War II era. Polish IQs, which averaged eighty-five in the earlier studies—the same as that of blacks today—had risen to 109 by the 1970s. This twenty-four-point increase in two generations is greater than the current black-white difference (fifteen points). [See also here.]

Ron Unz notes what Sowell says about the Eastern and Southern European immigrants’ IQs: “Slovaks at 85.6, Greeks at 83, Poles at 85, Spaniards at 78, and Italians ranging between 78 and 85 in different studies.” And, of course, their IQs rose throughout the 20th century. Gould (1996: 227) showed that the average mental age for whites was 13.08, with anything between 8 and 12 being denoted a “moron.” Gould noted that the average Russian had a mental age of 11.34, while the Italian was at 11.01 and the Pole was at 10.74. This, of course, changed as these immigrants acclimated to American life.

For an interesting account of the creation of the term “moron”, see Dolmage’s (2018: 43) book Disabled Upon Arrival:

… Goddard’s invention of [the term moron] as a “signifier of tainted whiteness” was the “most important contribution to the concept of feeble-mindedness as a signifier of racial taint,” through the diagnosis of the menace of alien races, but also as a way to divide out the impure elements of the white race.

The Buraku are a cultural class—not a racial or ethnic group. Looking at America, “black” and “white” designate socialraces (Hardimon, 2017)—so could the same causes of low Buraku educational attainment and IQ be the causes of black Americans’ low IQ and educational attainment? Time will tell, though there are no countries—to the best of my knowledge—that blacks have emigrated to and not been seen as an underclass or ‘inferior.’

Ogbu’s thesis is certainly interesting and has some explanatory power. The fact of the matter is that IQ and other tests of ability are bound by culture, and so, when the Buraku leave Japan and come to America, they are seen as regular Japanese (I’m not aware that Americans know about the Buraku/non-Buraku distinction) and they score just as well as, if not better than, Americans and other non-Buraku Japanese. This points to discrimination and other environmental causes as the root of Buraku problems—noting that the Buraku became “full citizens” in 1871, 6 years after black slavery was ended in America. That Koreans have similarly low educational attainment in Japan but high attainment in America—higher than native-born Americans—is yet another point in favor of Ogbu’s thesis. The “system” and “community forces” seem to change when these two previously low-scoring, high-crime groups come to America.

The increase in the IQ of Southern and Eastern European immigrants, too, is another point in favor of Ogbu. Koreans and Buraku (indistinguishable from other native Japanese), when they leave Japan, are seen as any other Asian immigrants, and so their outcomes are different.

In any case, the Buraku of Japan and Koreans who are Japanese citizens are an interesting look into how the way a group is treated can—and does—depress its test scores and social standing in Japan. Might the same hold true for blacks one day?

Rampant Adaptationism

1500 words

Adaptationism is the main school of thought on evolutionary change, which it explains through “natural selection” (NS). That is the only way for adaptations to appear, says the adaptationist: traits that were conducive to reproductive success in past environments were selected-for their contribution to fitness and therefore became fixed in the organism in question. That’s adaptationism in a nutshell. It’s also vacuous and tells us nothing interesting. In any case, the school of thought called adaptationism has been the subject of much criticism, most importantly Gould and Lewontin (1979), Fodor (2008), and Fodor and Piattelli-Palmarini (2010). So, I would say that adaptationism becomes “rampant” when clearly cultural changes are treated as if they had an evolutionary history and are still around today because they are adaptations.

Take Bret Weinstein’s recent conversation with Richard Dawkins:

Weinstein: “Understood through the perspective of German genes, vile as these behaviors were, they were completely comprehensible at the level of fitness. It was abhorrent and unacceptable—but understandable—that Germany should have viewed its Jewish population as a source of resources if you viewed Jews as non-people. And the belief structures that cause people to step onto the battlefields and fight were clearly comprehensible as adaptations of the lineages in question.”

Dawkins: “I think nationalism may be an even greater evil than religion. And I’m not sure that it’s actually helpful to speak of it in Darwinian terms.”

I find it funny that Weinstein is more of a Dawkins-ist than Dawkins himself is (in regard to his “selfish gene theory”, see Noble, 2011). In any case, what a ridiculous claim. “Guys, the Nazis were bad because of their genes and their genes made them view Jews as non-people and resources. Their behaviors were completely understandable at the level of fitness. But, Nazis bad!”

I like how Dawkins quickly shot the bullshit down. This is just-so storytelling on steroids. I wonder which “belief structures that cause people to step onto battlefields” are “adaptations of the lineages in question”? Do German belief-structure adaptations differ from those of any other group? Can one prove that there are “belief structures” that are “adaptations of the lineages in question”? Or is Weinstein just telling just-so stories—stories with little evidence that merely “fit” and “make sense of” the data we have (despicable Nazi behavior towards Jews after WWI and before and during WWII)?

There is a larger problem with adaptationism, though: adaptationists confuse adaptiveness with adaptation (a trait can be adaptive without being an adaptation); they overlook nonadaptationist explanations; and adaptationist hypotheses are hard to falsify, since a new story can be erected to explain the feature in question if one story gets disproved. That’s the dodginess of adaptationism.

An adaptationist may look at an organism, look at its traits, and then construct a story as to why it has the traits it does. They attempt to reconstruct its evolutionary history by considering the environment it is currently in and what the traits in question are useful for now. But there is a danger here. We can create many stories for just one so-called adaptation. How do we distinguish the stories that explain the fixation of the trait from those that do not? We can’t: there is no way for us to know which of the causal stories explains the fixation of the trait.

Gould and Lewontin (1979) fault:

the adaptationist programme for its failure to distinguish current utility from reasons for origin (male tyrannosaurs may have used their diminutive front legs to titillate female partners, but this will not explain why they got so small); for its unwillingness to consider alternatives to adaptive stories; for its reliance upon plausibility alone as a criterion for accepting speculative tales; and for its failure to consider adequately such competing themes as random fixation of alleles, production of nonadaptive structures by developmental correlation with selected features (allometry, pleiotropy, material compensation, mechanically forced correlation), the separability of adaptation and selection, multiple adaptive peaks, and current utility as an epiphenomenon of nonadaptive structures.

[…]

One must not confuse the fact that a structure is used in some way (consider again the spandrels, ceiling spaces, and Aztec bodies) with the primary evolutionary reason for its existence and conformation.

Of course, though, adaptationists (e.g., evolutionary psychologists) do confuse a structure’s current function with the reason for its origin. This is fallacious reasoning. That a trait is useful and adaptive in a current environment is in no way evidence that it is an adaptation, nor is it evidence that that is why the trait evolved.

But there is a problem with looking to the ecology of the organism in question and attempting to construct historical narratives about the evolution of the so-called adaptation. As Fodor and Piattelli-Palmarini (2010) note, “if evolutionary problems are individuated post hoc, it’s hardly surprising that phenotypes are so good at solving them.” So, of course, if an organism fails to secure a niche, then by definition that niche was not for that organism.

That organisms are so “fit” to their environments, like a puzzle piece to its surrounding pieces, is supposed to prove that “traits are selected-for their contribution to fitness in a given ecology”, and this is what the theory of natural selection attempts to explain. Organisms fit their ecologies because it’s their ecologies that “design” their traits. So it is no wonder that organisms and their environments have such a tight relationship.

Take it from Fodor and Piattelli-Palmarini (2010: 137):

You don’t, after all, need an adaptationist account of evolution in order to explain the fact that phenotypes are so often appropriate to ecologies, since, first impressions to the contrary notwithstanding, there is no such fact. It is just a tautology (if it isn’t dead) that a creature’s phenotype is appropriate for its survival in the ecology that it inhabits.

So, since the terms “ecology” and “phenotype” are interdefined, is it any wonder that an organism’s phenotype has such a “great fit” with its ecology? I don’t think it is. Fodor and Piattelli-Palmarini (2010) note how:

it is interesting and false that creatures are well adapted to their environments; on the other hand it’s true but not interesting that creatures are well adapted to their ecologies. What, then, is the interesting truth about the fitness of phenotypes that we require adaptationism in order to explain? We’ve tried and tried, but we haven’t been able to think of one.

So the argument here could be:

P1) Niches are individuated post hoc by reference to the phenotypes that live in said niche.
P2) If the organisms weren’t there, the niche would not be there either.
C) Therefore, there is no independent fact about the fitness of phenotypes to their lifestyles that explains adaptation.

Fodor and Piattelli-Palmarini put it bluntly regarding how the organism “fits” its ecology: “although it’s very often cited in defence of Darwinism, the ‘exquisite fit’ of phenotypes to their niches is either true but tautological or irrelevant to questions about how phenotypes evolve. In either case, it provides no evidence for adaptationism.”

The million-dollar question is this, though: what would be evidence that a trait is an adaptation? Knowing what we now know about the so-called fit to the ecology, how can we say that a trait is an adaptation for problem X when niches are individuated post hoc? That right there is the folly of adaptationism, along with the fact that it is unfalsifiable and leads to just-so storytelling (Smith, 2016).

Such stories are “plausible”, but that is only because they are selected to be so. When such adaptationism becomes entrenched in thought, many traits are looked at as adaptations, and stories are then constructed as to how and why each trait became fixed in the organism. But, just like evolutionary psychology (EP), which uses the theory of natural selection as its basis, so too does adaptationism fail. Never mind the problem that fitting species to ecologies renders evolutionary problems post hoc; never mind the problem that there are no criteria for identifying adaptations; do mind the fact that there is no possible way for natural selection to do what it is claimed to do: distinguish between coextensive traits.

In sum, adaptationism is a failed paradigm and we need to dispense with it. The logical problems with it are more than enough to disregard it. Sure, the fit of a phenotype, say the claws of a mole, makes sense in the ecology the mole is in. But we only claim that the claws of a mole are adaptations after the fact, obviously. One may say “It’s obvious that the claws of a mole are adaptations, look at how it lives!” But this ignores the point that Gould and Lewontin (1979) made: do not confuse a structure with the evolutionary reason for its existence, which, unfortunately, many people do (most glaringly, evolutionary psychologists). Weinstein’s ridiculous claims about Nazi actions during WWII are a great example of how rampant adaptationism has become: we can explain any and all traits as adaptations, we just need to be creative with the stories we tell. But just because we can create a story that “makes sense” and explains the observation does not mean that the story is a likely explanation for the trait’s existence.