Bans against consanguinity have existed for thousands of years, though they differ by degree and culture. The Greeks had no single name for the practice (a comparable term does not appear until around the 9th century); classical Athenian law, for instance, allowed children of the same father to marry, i.e., a half-brother and half-sister (Ager, 2005). But what were some of the original reasons such unions were banned: eugenic considerations, or cultural reasons of closeness? Why were people barred from taking a partner who was too close to them? Throughout history, different cultures obviously had different practices. The Roman Catholic Church, Pharaonic and Ptolemaic Egypt, and ancient Iran all had different rules, for different reasons, on marriage to close relatives. The ban on first-cousin marriage appeared in American law around the time of the Civil War; clearly, then, the cousin-marriage ban in America was not based on the eugenics movement, though it was eugenic in nature (Paul and Spencer, 2008), and there was debate on the matter during the Progressive Era (Wilson).
Schneider addressed sexual and matrimonial prohibitions among the Yapese in an early article (1957), but he developed his approach in a volume devoted to the subject of incest (1976). It was in the latter that he presented his culturalist views on the topic, making the important point that ‘the most frequent confusion found in the literature in my experience is the confusion between the question of the origin of the prohibition on incest and the question of why it is maintained long after the conditions which may account for its origin have passed’ (Schneider 1976: 156). He went on to argue that ‘the incest prohibition is not universal’, supporting this claim with the cases of brother–sister marriages in Pharaonic and Ptolemaic Egypt, the ‘apparent lack’ of an incest prohibition in ancient Iran, and the marriages between members of the royal family of Hawaii as analysed by Marshall Sahlins (Schneider 1976: 154). None of these relations was considered incestuous by the people involved. He also insisted on the inclusion of kin other than nuclear kin in the different prohibitions that Europeans identify as incestuous, on the importance of elements such as food for determining with whom a person can and cannot have sexual relations, and on cases in which incest includes non-sexual behaviour. He then proposed equating incest with the idea of acting ‘ungrammatically’ in a given cultural code (Schneider 1976: 167). Thus, for Schneider, a priori definitions of incest based on a Western tendency to relate kinship to sexual intercourse and the birth of a child should be avoided. Rather, he argued, we should adopt a cultural and symbolic approach towards each case. [See also Scheidel (1997) for more on sibling and half-sibling marriage in Roman Egypt; such marriages were undertaken to keep the throne in the family (Galton, 1998).]
The Romans, though, were among the first to dissuade consanguineous marriage, a restriction famously bent when Emperor Claudius had the law changed so that he could marry his niece Agrippina in the middle of the 1st century. In the middle of the fifth century the Roman Catholic Church picked up these restrictions, with the Pope citing passages in Leviticus to justify the banning of marriages with close kin (Bittles, 2009). (I should bring up the ‘Hajnal line’ now, but I’ll save that for an article by itself. In the meantime, read Steinbach, Kuhnt, and Knull (2016), who show that, taking marriage rate, divorce rate, step-families, and single-parent prevalence into account, we cannot use the ‘Hajnal line’ to explain differences between East and West Europe; see also Szoltysek and Ogorek (2019), who show that regional populations clustered on familial traits lie outside of the ‘line’, which calls into question the conclusions of Hajnal and his acolytes.)
In cases like these, the ban on close marriages was not about having healthy children, that is, about preventing the kinds of problems that can arise when close relatives conceive a child; the ban was about avoiding relationships that lacked difference on a bio-social level. One example comes from certain Muslim communities. Children who shared the same wet nurse (a woman who breastfeeds children on behalf of their parents) were banned from having any kind of relations later in life, as they were known as ‘milk-siblings’:
Children who have been regularly breastfed (three to five or more times) by the same woman are considered “milk-siblings” and are prohibited from marrying each other. It is forbidden for a man to marry his milk mother (wet nurse) or for a woman to marry her milk mother’s husband.
In Leviticus 18:6-18, Deuteronomy 22:30, and Deuteronomy 27:20-23 the authors spoke against marriage with close (blood) relatives, while in Leviticus 20:11-21, along with the prohibitions against relations with blood kin, even one’s uncle’s wife (an unrelated woman) was out of the question. When it comes to the Roman Catholic Church banning cousin marriage, however, there is debate as to the impetus for the ban: was it due to eugenic considerations, or to bar the marriage of two close individuals, no matter their relatedness status? MacKellar and Bechtel (2014: 62) write:
It is likely, however, that the basis for this prohibition on consanguinity [in the Roman Catholic Church] was not a concern for eugenic considerations. The condemnation of affinity, such as marrying a step-daughter (canon 1092) and marrying an adopted child or sibling (canon 1094) implies that these codes were again drafted on the basis of avoiding sexual relationships between people who were considered too similar or who had something ‘overly in common.’
Parkes (2005) notes that even marriage between a godparent and godchild was banned in Christian communities. I grew up Roman Catholic and I, too, would not marry my Godmother (who is my fourth cousin). MacKellar and Bechtel (2014: 63) note that the Christian Church even banned relationships between, say, student and teacher to prevent “sexual corruption and abuse. These sexual restrictions were not, therefore, drafted to protect progeny from inheritable disorders but were similar to those that prevent relationships between teachers and their pupils or doctors with their patients. These relationships were prohibited even though it may have been certain that no child would ever be born.”
Chinese cousin marriage prohibitions are interesting. First cousins could marry each other if they did not have the same surname, but if they shared a surname they were barred from marriage. As Wong (2017) notes, “The old Chinese system is a patriarchal system, where children take the surname of the father. In this patriarchal system, first cousins of the same surname could not marry. First cousins, with different surnames, could marry.”
Many Asian ethnies have the same or similar surnames. So, on that basis, it is interesting to note that in Korea, for example, much cultural shame falls on people who choose to marry someone with the same last name. It is so taboo that family and friends question loved ones who date a person with their same last name. The New York Times has an interesting story from the mid-90s about Koreans and dating or marrying an individual with the same last name:
It should be a time of celebration. K. H. Lee and his girlfriend have fallen in love and want to get married soon to start a new life together.
But Mr. Lee, a 31-year-old civil servant, and his fiancee face a battle against Korean history that threatens to bring their love to ruin: they have the same last name. Even his friends disapprove of his plans.
“I can feel them asking, ‘Do you really have to do this?’” said Mr. Lee, who would not disclose his full given name or his girlfriend’s because the issue is so delicate. “Even if it were allowed by law, if the relatives found out, the whole family would be shamed because we have a strong sense of face.” Not being able to marry a person with the same family name is a special burden in South Korea, where 22 percent of South Korea’s 44 million people are named Kim. The figure leaps to 55 percent after adding in Park, Lee, Choi, and Chong.
The NYT story also notes an interesting bit of Korean folklore about this ban:
According to folklore, the practice was brought over from China in the 14th century, after a Korean messenger, named Lee, visited China. His Chinese host asked him his wife’s name, and upon hearing that it was also Lee, the Chinese supposedly replied: “Ah! You’re not an aristocrat. You’re a commoner!” When the messenger returned, he relayed the story to the Korean emperor, who immediately declared a ban against same-clan marriages.
Dating people with the same last name in South Korea is such a taboo that some people even attempt to find out their prospective SO’s last name discreetly. The practice, for them, is cultural, since marriage and children with someone who merely shares a surname presumably would not lead to any birth defects.
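As an aside, the surname figures from the NYT piece quoted earlier (22 percent of South Koreans named Kim; 55 percent covered by the top five names) make it easy to see how often a random couple would collide. The article does not break down the individual shares of Park, Lee, Choi, and Chong, so the split in this sketch is a hypothetical illustration, not real data:

```python
# Back-of-the-envelope surname-collision arithmetic. Only Kim's 22%
# share and the 55% top-five total come from the NYT article; the
# split of the remaining 33% across Park, Lee, Choi, and Chong below
# is a made-up illustration.
kim = 0.22
others = [0.10, 0.12, 0.06, 0.05]  # hypothetical Park, Lee, Choi, Chong shares
assert abs(kim + sum(others) - 0.55) < 1e-9

# P(two randomly paired Koreans share a surname) is at least the sum
# of squared shares (a lower bound: it ignores names outside the top five).
p_both_kim = kim ** 2
p_top5_collision = p_both_kim + sum(p ** 2 for p in others)
print(f"P(both Kim)               = {p_both_kim:.3f}")   # 0.048
print(f"P(share a top-5 surname) >= {p_top5_collision:.3f}")
```

Even under this crude sketch, very roughly one random pairing in twelve or thirteen shares a top-five surname, which helps explain why the taboo touches so many couples.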
We have been marrying and conceiving children with close relatives since time immemorial, though different peoples have different reasons for shunning consanguineous marriages: some cultural, some biological, some both. The Greeks banned it in some instances but allowed it in others. In Islam, children who are “milk-siblings” cannot marry. Asians (who are likely to share names with their own ethny, and even sometimes with an Asian of a different ethny) have some interesting considerations on cousin marriages, the taboo being so ingrained in their culture that some will not talk to someone who shares their last name. The considerations behind the Church’s ban on consanguineous marriages, though, could go both ways: the ban could target marriages between people who are ‘related’ socially, and not genetically.
The history of cousin marriage, the banning and allowing of it throughout history, and the way different peoples handle the situation show exactly how humans individuate through culture.
Sudden Infant Death Syndrome (SIDS) has a long history—almost as long as human civilization (Raven, 2018). The term was coined in 1969 to bring attention to children who died in the postnatal period (Kinney and Thach, 2012; Duncan and Byard, 2018). About 95 percent of SIDS cases occur within the first 6 months of life, peaking around the 4-6 month mark (Fleming, Blair, and Pease, 2015). The syndrome is associated with the sleep period, presumed to begin with the transition from sleep to waking (Kinney and Thach, 2012). The prone sleeping position, along with smoking, is said to increase the incidence of SIDS (Ramirez, Ramirez, and Anderson, 2018). Thanks to a campaign in the mid-90s (the back-to-sleep campaign), though, it has been estimated that SIDS deaths have decreased by 50 percent, saving thousands of infant lives (Kinney and Thach, 2012).
But, those infants who die from SIDS may also have a problem with the part of their brain that controls waking/sleeping:
Infants who die from SIDS may have a problem with the part of the brain that helps control breathing and waking during sleep. If a baby is breathing stale air and not getting enough oxygen, the brain usually triggers the baby to wake up and cry to get more oxygen.
So, if a baby’s brain is not getting enough oxygen, the brain will have the baby wake up and cry in an attempt to rid itself of “stale air” (this is one other purpose that crying serves), which then gets more oxygen to the baby’s brain.
As can be imagined, acute rises in CO2 levels occur when an individual is unable to expel CO2, such as in the setting of an airway obstruction that might occur when an individual is lying prone in a crib or bed perhaps with a pillow and bedclothes covering the nose and mouth. It has been proposed that such a rise in CO2 would activate arousal circuitry in a normal baby to wake the baby up, cause them to cry out, summoning a caregiver who would come to their aid, and ostensibly correct the airway blockage to allow resumption of normal breathing [16,20,31]. It has been proposed, among other possibilities, that there is an impaired CO2-arousal system in SIDS-susceptible babies such that when they rebreathe CO2 as described above, they do not arouse, and thus do not cry out, and the blockage is not corrected [16,32]. They thus become acidotic and hypoxic and ultimately succumb.
So, if a baby’s airway gets blocked, for instance by a pillow or toy, the baby wakes and cries out for attention, and the caregiver comes to solve the problem or change the baby’s lying position. But in SIDS cases, this does not occur. Why? Buchanan argues that those who succumb to sudden deaths like SIDS have faulty serotonin receptors, receptors that normally help ensure that blood oxygen and CO2 levels stay healthy. Some of these infants may have brains that cannot detect rising CO2 and falling blood oxygen levels even when the body is suffocating. SIDS victims are usually found face-down in their cribs. But there are no biomarkers for SIDS (Haynes, 2018). The SIDS diagnosis is given only after all other causes of death are ruled out; this is why SIDS is so mysterious. Genetic mutations have been posited as a cause (Männikkö et al, 2018), as has a mother smoking during pregnancy, which leads to a doubled risk of SIDS (Anderson et al, 2019).
But the best prevention against SIDS is nonprone sleeping: having the baby sleep on its back. The efficacy of this approach since the 1990s has been noted (Gibson et al, 1992; de Luca and Hinide, 2016), while “Achieving recommended prenatal care and infant vaccinations, as well as reductions in maternal tobacco and substance use, has the potential to further reduce rates of SIDS and should be given as much attention as safe sleep advice in SIDS risk reduction campaigns” (Hauck and Tanabe, 2017: e289). The back-to-sleep program, though, has been associated with a decrease in motor development from the infant spending time in the supine position, along with the strong possibility of developing plagiocephaly, a “flat head” caused by being placed in similar positions while the infant’s skull is soft and still developing (Miller et al, 2011). It has also been estimated that if it had been known that the advice to place infants on their stomachs to sleep led to SIDS, then we “might have prevented over 10 000 infant deaths in the UK and at least 50 000 in Europe, the USA, and Australasia” (Gilbert et al, 2005: 884).
But the history of SIDS in America is a lot more sinister—rather than children dying from ‘natural causes’ (SIDS), in the 1970s, it was hypothesized by one SIDS researcher that SIDS was ‘genetic’ and ‘transmissible’ on the basis of one family who, unfortunately, had experienced this tragedy more than once.
This leads us to the story of Waneta Hoyt, who is the subject of this article.
Hoyt and Steinschneider: Genes vs environment
Horrible tragedies befell a woman from New York named Waneta Hoyt: five of her children mysteriously died, their deaths attributed to SIDS, between 1965 and 1971.
Waneta killed her first child, Eric, who was three months old. SIDS is a diagnosis arrived at through a process of elimination: rule out all other causes of death at a young age (under 1) and the cause is then SIDS. The thing is, though, when an autopsy is performed on the infant, there is no difference between what would be called a SIDS death and a light smothering.
After Waneta murdered her first child, she was cold and distant, but it was not noticed. It was reported that she would never hold her children as a loving mother would, keeping them quite far from her. It wasn’t until three years later that she murdered again, but this time it was two of her children: her two-year-old son and six-week-old daughter. The murders that Waneta was committing were wrongly diagnosed as SIDS deaths.
This caught the attention of the renowned SIDS researcher Alfred Steinschneider, who ran a clinic specializing in the care of infants thought to be at high risk for SIDS. Steinschneider wanted to watch Waneta’s fourth child in his sleeping ward, in an attempt to prevent what he thought would be another SIDS death. So, when he heard Waneta’s story, he reached out to her to monitor her daughter, Molly.
The nurses at Steinschneider’s clinic, though, became suspicious of Waneta while she was at the clinic, since she was cold and distant to Molly and would not show her any affection. Steinschneider’s nurses emphatically told him that it was Waneta who was murdering her children. Steinschneider brushed them off and sent Molly home anyway. In an interview on the television program Deadly Women, in an episode called Mothers Who Kill, one of the nurses who watched Molly before she was discharged by Steinschneider said:
And then about a quarter to eleven when we were getting ready to go off duty I said ‘Joyce, what do you think, do you think she’s still alive?’ Of course when I came on duty the next day she was dead.
Forty-eight hours later, on Thursday, June 4, Steinschneider scheduled Molly for her third discharge. By now, the nurses were speaking more openly about their suspicions. “I just know something’s going to happen,” Corrine Dower said to Thelma. “One of these times she’s going to do it.” Corrine was scornful of Steinschneider. “If he had any brains at all he would have seen that she didn’t want the baby,” she would say years later. “You can tell in the grocery store if a person cares about their child. We were just disgusted with Steinschneider.” (book excerpt from How Two Baby Deaths Led to a Misguided SIDS Theory)
Presumably, since this was Waneta’s fourth time experiencing the tragedy of SIDS, Steinschneider did not think that Waneta could be involved, but his nurses knew the truth. It was when Waneta had her fifth child that Steinschneider thought he would make his research breakthrough. He was so convinced that the babies were dying of SIDS that he thought, if he could monitor Waneta’s new baby as much as possible, he might figure out why babies die from SIDS.
Steinschneider believed that SIDS was hereditary, passed on through genes. The fifth child was watched at Steinschneider’s clinic, and when Steinschneider discharged him, in an attempt to prove his theory, his nurses protested. Shortly after, Waneta called Steinschneider saying that it had happened again: her fifth child had mysteriously died.
After the death of Waneta’s fifth child, Steinschneider published his paper Prolonged Apnea and the Sudden Infant Death Syndrome: Clinical and Laboratory Observations arguing that SIDS was caused largely by hereditary sleep apnea (Steinschneider, 1972). By 1997, Steinschneider’s paper was the most-cited paper in the SIDS literature (Bergman, 1997). It was due to Steinschneider’s research, though, that parents began using sleep monitors to monitor their children’s sleep so they could be alerted in case their child had sleep apnea.
Steinschneider cared more about his research and his theory of SIDS and sleep apnea than about what was staring him right in the face: Waneta was responsible for the deaths of her five children. His 1972 paper was cited and used for 22 years, until an in-depth look at the paper confirmed what had been clear to his nurses all along. The paper, in any case, concluded that SIDS was a genetic disorder and thus inherited, and Waneta’s case, it seemed, lent credence to his hypothesis. Steinschneider gave Waneta the perfect alibi: her woes were caused by a genetic disease, and there was nothing that could have been done to prevent it.
Waneta was convicted in 1995 of five counts of murder and sentenced to 75 years in prison, which refuted Steinschneider’s theory. Three years into her sentence, though, Waneta died in prison of cancer. The case of Waneta Hoyt gave cover, for almost a quarter of a century, to mothers who killed their children in this specific way (a light smothering).
Norton saw history repeating itself in the reluctance of many doctors to face the fact that some deaths attributed to SIDS were homicides. She agreed with the bulk of SIDS research, which pointed to apnea, or the cessation of breathing, as the final pathway to death. But there were many causes of apnea, not all of them natural. An adult could place a hand or a pillow over an infant’s nose and mouth and stop the child from breathing. The pressure needed to smother an infant often left no telltale signs, Norton explained.
“There is no way for the pathologist at autopsy to distinguish between homicidal smothering and SIDS,” she concluded.
Norton worried that homicides were being passed off as SIDS because many doctors held the erroneous belief that SIDS ran in families. They ignored large-scale studies that had shown no genetic tendency toward SIDS. Flouting conventional wisdom, Norton warned that the sudden, unexplained death of a SIDS victim’s sibling should be treated as a possible homicide.
When Waneta was convicted, letters to the editor were written about Steinschneider’s paper. One account chronicles, interestingly, a letter to the editor of the journal Pediatrics, which had published the Steinschneider paper:
“But the paper indicated a more sinister possibility to Dr. John F. Hick of Minnesota. In a letter to the journal, he wrote that the case offered “circumstantial evidence suggesting a critical role for the mother in the death of her children.” (See below.)
But his warning was dismissed, until Mr. Fitzpatrick read the paper 15 years later.
“The medical records described two happy, healthy, perfectly normal kids,” he said. “It convinced me that these children were murdered.”
Hick’s letter to Pediatrics says:
In reporting two siblings who succumbed to “sudden infant death syndrome,” Steinschneider exposes an unparalleled family chronicle of infant death. Of five children, four died in early infancy and the other died without explanation at age 28 months. Prolonged apnea is proposed as the common denominator in the deaths, yet the author leaves many questions relevant to the fate of these children unanswered.
In her signed confession, Waneta said that she smothered her five children because their screaming made her “feel useless”, though she later stated that she only said that to stop the police from questioning her. Steinschneider, like another motivated reasoner, J.P. Rushton, ignored data that did not fit his theory of sleep-apnea-induced SIDS: specifically, how Waneta acted around her children while at his clinic, and his own nursing staff telling him not to discharge the Hoyt infants.
Waneta recalled the killing of her children, specifically Julie:
”They just kept crying and crying. . . . I just picked up Julie and I put her into my arm, in between my arm and my neck like this . . . and I just kept squeezing and squeezing and squeezing.”
Steinschneider’s testimony during Waneta’s trial, however, is very interesting. As reported by the New York Times, Steinschneider attempted to defend Waneta against the claim that she had murdered her children:
“Autopsies were done,” he said, speaking of Molly. “They could not find a known cause of death.”
This, Dr. Steinschneider said, “by definition” is SIDS.
But under intense cross-examination, Dr. Steinschneider conceded that he could not remember — and did not record — crucial details from the medical histories of the two infants, whom he had hospitalized for observation soon after birth. In each case, the parents had reported that the baby was having difficulty breathing and that its older siblings had died mysteriously.
The doctor also acknowledged concluding that Molly and Noah had died of SIDS without knowing how thoroughly the authorities had probed the “death scene” for evidence of other causes, including murder.
It is said, even by the prosecutor on her case, that Waneta suffered from Munchausen syndrome by proxy (Firstman and Talan, 1996), which is the intentional causing of illness, usually in children, in order for the mother to elicit sympathy from others (Gehlawat et al, 2015). In cases like this, mothers with the syndrome will suffocate their children and then rush them to the hospital; they get the satisfaction of inflicting pain and then the satisfaction of being cared for over the so-called mysterious illness of their baby. One study of a sleep apnea monitoring program found that in about 40 percent of the 51 cases treated, the infants’ apnea seemed to be induced by a parent; this was inferred from the fact that, once the infants were admitted to the hospital, the doctors found no signs of apnea (Light and Sheridan, 1990).
One doctor even took it upon himself to place cameras in his practice in order to monitor parents who were suspected of abusing their children. Thirty-nine infants were monitored; thirty-three were being abused by their parents. What’s more, some of the infants identified on video in this study also had a sibling who had mysteriously died from SIDS (Southall et al, 1997). What the study shows is that these parents were suffocating their children, causing their breathing problems, and that they had most likely gotten away with infanticide before. Another case involved a mother taking her daughter to eleven different hospitals; none of them found anything wrong with the girl, and she ended up dying under suspicious circumstances (Hassler, Zamorski, and Weirich, 2007).
We now know that Steinschneider ignored evidence contrary to his theory that genetically induced sleep apnea caused SIDS and ran in families. Had he not brushed off his nursing staff when they told him that Waneta was acting strangely around the two children he had admitted to his clinic, he could have saved their lives. But Steinschneider’s genetic determinist theory was more important than seeing what was clear as day to his staff, and even to others who read his 1972 paper: a mother was killing her own infants.
SIDS has a long history, dating back to biblical times. But, in the modern-era, erroneous theories on the causes of SIDS were pushed while other, more obvious causes were disregarded in favor of a grand genetic theory of SIDS causation. Waneta and Steinschneider both helped each other out: Steinschneider (unknowingly) helped Waneta evade detection for 22 years while Waneta lent credence to the hypothesis that Steinschneider was developing. The fact that, at the time of their first meeting, three of Waneta’s children had died in almost the same fashion pointed to a genetic, inherited cause in Steinschneider’s eyes.
At the time of publication of The Death of Innocents, Steinschneider still continued to defend his now-discredited theory and still lobbied for the use of infant sleep monitors. Of course, since he testified FOR Waneta, despite the mounting evidence against her, he could be seen as an accomplice, however weakly. But this case shows one thing that should be well-known: researchers become attached to their pet hypotheses and theories and will ignore contrary evidence even when it is brought to their attention. Firstman and Talan estimate that between 5 and 10 percent of SIDS cases are actually homicides. (But see Milroy and Kepron, 2017.)
Steinschneider built the SIDS disease on the basis of Waneta’s story, and a multi-million dollar industry then appeared due to his paper: buy these sleep apnea monitors, it’s all to save infants. But there were two children that Steinschneider did not, could not, save. He could have saved those babies, if not for his genetic determinist beliefs on SIDS causation. Had Steinschneider looked at the more obvious answer to the problem, which was right in front of his face, he may have seen that Waneta suffered from Munchausen syndrome by proxy, and, as evidenced by the references above, those who suffer from the syndrome act out exactly as Waneta did: by smothering their children, with the cause of death being blamed on SIDS.
The Hoyt-Steinschneider case is a warning: don’t jump so quickly to implicate heredity in the etiology of X, especially when other, more obvious, tells are right there in front of you.
I have been an avid reader, and interested in astronomy and space, for as long as I can remember. I remember really loving Stephen Hawking and his documentaries on black holes, and I would read anything I could find on constellations and stars. From there I went on to reading sci-fi: I recall seeing The Martian Chronicles by Ray Bradbury, and from then on I was interested in sci-fi writing. But as I grew older I drifted away from sci-fi and now read only non-fiction. I later got into ‘HBD’ (chronicled here) and, along with it, evolution; but, unlike other ‘HBDers’, I became enamored with the work of Gould, and some of my favorite books are his. Gould wrote a lot about evolutionary contingency, the degree to which an outcome could have been different. Evolutionary contingency is a big topic in philosophy of biology, and Bradbury has a great short story on this type of contingency.
Ray Bradbury is an interesting author, one with many short stories as well as novels. One of my favorite stories from Bradbury is called A Sound of Thunder, which chronicles a time machine company that lets people go back in time to hunt any animal they’d like: if you want to take down the ancestor of a whale before it became aquatic, just name the place and they will send you there. Customers were told to stay only on the path laid out by the time machine company; the animals they could shoot were marked with red paint, since presumably those animals would have died anyway, so killing them would not change any outcomes. The text from Bradbury is worth quoting in full, as it wonderfully captures the thought of evolutionary contingency:
He indicated a metal path that struck off into green wilderness, over streaming swamp, among giant ferns and palms. “And that,” he said, “is the Path, laid by Time Safari for your use. It floats six inches above the earth. Doesn’t touch so much as one grass blade, flower, or tree. It’s an anti-gravity metal. Its purpose is to keep you from touching this world of the past in any way. Stay on the Path. Don’t go off it. I repeat. Don’t go off. For any reason! If you fall off, there’s a penalty. And don’t shoot any animal we don’t okay.”
“Why?” asked Eckels.
They sat in the ancient wilderness. Far birds’ cries blew on a wind, and the smell of tar and an old salt sea, moist grasses, and flowers the color of blood.
“We don’t want to change the Future. We don’t belong here in the Past. The government doesn’t like us here. We have to pay big graft to keep our franchise. A Time Machine is finicky business. Not knowing it, we might kill an important animal, a small bird, a roach, a flower even, thus destroying an important link in a growing species.”
“That’s not clear,” said Eckels.
“All right,” Travis continued, “say we accidentally kill one mouse here. That means all the future families of this one particular mouse are destroyed, right?”
“And all the families of the families of the families of that one mouse! With a stamp of your foot, you annihilate first one, then a dozen, then a thousand, a million, a billion possible mice!”
“So they’re dead,” said Eckels. “So what?”
“So what?” Travis snorted quietly. “Well, what about the foxes that’ll need those mice to survive? For want of ten mice, a fox dies. For want of ten foxes a lion starves. For want of a lion, all manner of insects, vultures, infinite billions of life forms are thrown into chaos and destruction. Eventually it all boils down to this: fifty-nine million years later, a caveman, one of a dozen on the entire world, goes hunting wild boar or saber-toothed tiger for food. But you, friend, have stepped on all the tigers in that region. By stepping on one single mouse. So the caveman starves. And the caveman, please note, is not just any expendable man, no! He is an entire future nation. From his loins would have sprung ten sons. From their loins one hundred sons, and thus onward to a civilization. Destroy this one man, and you destroy a race, a people, an entire history of life. It is comparable to slaying some of Adam’s grandchildren. The stomp of your foot, on one mouse, could start an earthquake, the effects of which could shake our earth and destinies down through Time, to their very foundations. With the death of that one caveman, a billion others yet unborn are throttled in the womb. Perhaps Rome never rises on its seven hills. Perhaps Europe is forever a dark forest, and only Asia waxes healthy and teeming. Step on a mouse and you crush the Pyramids. Step on a mouse and you leave your print, like a Grand Canyon, across Eternity. Queen Elizabeth might never be born, Washington might not cross the Delaware, there might never be a United States at all. So be careful. Stay on the Path. Never step off!”
“I see,” said Eckels. “Then it wouldn’t pay for us even to touch the grass?”
“Correct. Crushing certain plants could add up infinitesimally. A little error here would multiply in sixty million years, all out of proportion. Of course maybe our theory is wrong. Maybe Time can’t be changed by us. Or maybe it can be changed only in little subtle ways. A dead mouse here makes an insect imbalance there, a population disproportion later, a bad harvest further on, a depression, mass starvation, and finally, a change in social temperament in far-flung countries. Something much more subtle, like that. Perhaps only a soft breath, a whisper, a hair, pollen on the air, such a slight, slight change that unless you looked close you wouldn’t see it. Who knows? Who really can say he knows? We don’t know. We’re guessing. But until we do know for certain whether our messing around in Time can make a big roar or a little rustle in history, we’re being careful. This Machine, this Path, your clothing and bodies, were sterilized, as you know, before the journey. We wear these oxygen helmets so we can’t introduce our bacteria into an ancient atmosphere.”
This passage from Bradbury wonderfully illustrates evolutionary—that is, historical—contingency. Things could have been different: this is the basis of the contingency argument. The universe does not repeat itself; if we were to replay the tape of life, we would get a completely different outcome—Lane (2015) suggests that perhaps octopi would rule the earth. We could replay the tape of life, have it go exactly as it did up to today, and change ONE SEEMINGLY MINUSCULE THING (say, stepping on a bug that otherwise would have lived), which would then cascade throughout history, leading to a changed future. Evolution is full of passive trends, with no indication—for example, with body plans—that there is a drive to become more complex; the trend is passive (Gould, 1996: 207):
All the tests provide evidence for a passive trend and no drive to complexity. McShea found twenty-four cases of significant increases or decreases in comparing the range of modern descendants with an ancestor (out of a potential sample of ninety comparisons, or five groups of mammals, each with six variables measured in each of three ways; for the other comparisons, average descendants did not differ significantly from ancestors). Interestingly, thirteen of these significant changes led to decreases in complexity, while only nine showed an increase. (The difference between thirteen and nine is not statistically significant, but I am wryly amused, given all traditional expectation in the other direction, that more comparisons show decreasing rather than increasing complexity.)
Gould first put forth his contingency argument in Wonderful Life—any one replay would differ from the next. Gould also critiqued the increasing-complexity claim, arguing that diversification is always accompanied by decimation—once a mass extinction (say, from an asteroid impact) occurs, subsequent diversification follows the decimation.
We have no idea why certain organisms persisted over others after periods of decimation—and ‘adaptation’ to environments cannot be the whole story. Out of all of Gould’s writing that I have read in my life, this passage is one of my favorites as it perfectly captures the problem at hand:
Wind the tape of life back to Burgess times, and let it play again. If Pikaia does not survive in the replay, we are wiped out of future history—all of us, from shark to robin to orangutan. And I don’t think that any handicapper, given Burgess evidence as known today, would have granted very favorable odds for the persistence of Pikaia.
And so, if you wish to ask the question of the ages—why do humans exist?—a major part of the answer, touching those aspects of the issue that science can treat at all, must be: because Pikaia survived the Burgess decimation. This response does not cite a single law of nature; it embodies no statement about predictable evolutionary pathways, no calculation of probabilities based on general rules of anatomy or ecology. The survival of Pikaia was a contingency of “just history.” I do not think that any “higher” answer can be given, and I cannot imagine that any resolution could be more fascinating. We are the offspring of history, and must establish our own paths in this most diverse and interesting of conceivable universes—one indifferent to our suffering, and therefore offering us maximal freedom to thrive, or to fail, in our own chosen way. (Gould, 1989: 323)
Contingency is about counterfactuals—what could have happened, or what would have happened had some condition changed, with everything before it occurring as usual. Bradbury’s A Sound of Thunder wonderfully illustrates the contingency of the evolutionary process: change one seemingly small, minuscule thing in the past and it could snowball and cascade into huge changes in the future—we might never have existed, or we might have existed but been radically different. If we could go back in time and, say, crush a butterfly and see the changes that followed, we could say that the event that caused the future to change was the crushing of that butterfly—and this could, eventually, have led to the non-existence of a certain group of people or animals, which would have radically changed the outcome of the world, both natural and human.
So, if we could replay life’s tape from the very beginning, I do believe that life as we know it would be different—for if we played it from the beginning, we could have a scenario as described by Bradbury—everything could go exactly the same with one small seemingly minuscule change snowballing into a world that we would barely recognize.
The concept of “race” stretches back as far as human civilization, and the concept of “racism” stretches back just as far with it—the two seem to be intertwined. There is a consensus, though, that the concept was constructed during the European Age of Exploration. This claim is false: the concept actually goes back at least 5,000 years. By looking at the art and reading the myths of ancient civilizations, we can see that the social-constructivist claim about race—that it is a recent creation—is false. These civilizations described the physical features of other peoples and even attempted to explain behavioral differences between races based on the limited knowledge of their day.
Sarich and Miele (2004) state that the PBS documentary on race—largely the main reason they wrote their book Race: The Reality of Human Difference—claimed that race is a human invention and that, since we created it, we can “unmake it.” We can look at art from ancient civilizations and see that they did sort people into groups based on their skin color and other physical characteristics. Each civilization, of course, thought itself and its racial features ‘superior’ to the others it encountered. The ancients used the set of observable features to describe what we now call “races.”
Our first stop on this long journey through the history of race is India. The earliest hints of what would become the caste system were written down around 5kya. The Rig Veda describes the Arya(n) invasion of the Indus valley, where a dark-skinned people lived. Indra, the god of the Arya(n)s, is described as “blowing away with supernatural might from earth and from the heavens the black skin which Indra hates” (Gossett, 1997: 3-4). The Arya(n)s also called these dark-skinned people “Anasahs,” meaning “the noseless people,” and they describe Indra killing the dark-skinned people and conquering the Indus for the Arya.
Sarich and Miele (2004: 47-48) note that the peoples Indra hated were called Dasas—broad-nosed worshippers of the phallus. Even when Alexander the Great’s army reached India and described the Indians in the south of the country as some of the darkest people they had seen, they still distinguished between Indians and Africans by their hair type—so race was more than ‘skin deep’ to the Greeks. Race, then, was known thousands of years BEFORE the Age of Exploration.
We can look to ancient China, too, to see instances of racial description and the racism that came along with it. For instance, a Chinese writer described yellow-haired, blue-eyed people from a distant province “who greatly resembled monkeys from whom they are descended” (Gossett, 1997: 4). Another Chinese legend describes differences between the Chinese and a barbarian tribe. A Chinese emperor declared that he would give his daughter to whoever slew a chieftain he was having problems with. The palace dog then came back with the chieftain’s head. The emperor did not go back on his word; he gave the dog his daughter, and the resulting children were “fond of living in high altitudes and averse to plains” (Gossett, 1997: 4).
Like other civilizations, the ancient Han Chinese regarded other groups they came into contact with as barbarians. They were especially taken aback by the odd appearance of one group, the Yuezhi, because of their hairy, white, ruddy skin and their prominent noses, which the Chinese likened to those of monkeys.
The Han Chinese applied the term “Hu” to barbarians like the Yuezhi who had “deep eye sockets, prominent noses, and beards.” But they did not apply it to the Qiang, another barbarian group, who had a Mongoloid appearance and among whom some of the Yuezhi lived. Both groups were denigrated as uncivilized and inferior to the Chinese, but the Qiang were deemed to belong to the same racial stock, whereas the Yuezhi were viewed as being part of a very different stock, not only barbarian but ugly and monkey-like to boot.
The Egyptians used a color-coding system—red (themselves), yellow (their eastern enemies), black (Africans), and white (those from the north). The Egyptians also accurately depicted Africans as early as the third century BCE, describing them just as 19th-century European anthropologists would. Below is a picture of how the Egyptians depicted these groups.
There is, also, an interesting bit about colorism—discrimination based on skin color—here:
Color prejudice, says one writer, depended on which ethnic group held sway. When the lighter-skinned Egyptians were dominant they referred to the darker group as “the evil race of Ish.” On the other hand, when the darker-skinned Egyptians were in power, they resorted to calling the lighter-skinned people “the pale, degraded race of Arvad.” (Gossett, 1997: 4)
The Jews are among the oldest peoples on earth, so they should have some stories about their encounters with different races. One of the oldest, thought to be the first, racist sayings comes from the prophet Jeremiah, who asked, “Can the Ethiopian change his skin or the leopard his spots?” The Jews are said to have ‘invented’ anti-black racism (Gossett, 1997: 5; Sarich and Miele, 2004), but this has been contested (Goldenberg, 1998). Take the full text from Gossett on Ham:
The most famous example of racism among the Jews is found in the legends which grew up concerning Ham, the son of Noah. The account in Genesis tells us of Ham’s expressing contempt for his father because Noah had become drunk and was lying in a naked stupor. Noah’s other sons had covered their father’s nakedness, averting their eyes. Noah blessed the descendants of Shem and Japheth, his other sons, but cursed the descendants of Ham. There is some confusion in the account in Genesis because it is not clear whether the curse was to be visited upon Ham or upon Canaan, Ham being a later insertion. Nothing is said in Genesis about the descendants of either Ham or Canaan being Negroes. This idea is not found until the oral traditions of the Jews were collected in the Babylonian Talmud from the second century to the sixth century A.D. In this source, the descendants of Ham are said to be cursed by being black. In the Talmud, there are several contradictory legends concerning Ham—one that God forbade anyone to have sexual relations on the Ark and Ham disobeyed this command. Another story is that Ham was cursed with blackness because he resented the fact that his father desired to have a fourth son. To prevent the birth of a rival heir, Ham is said to have castrated his father. Elsewhere in the Talmud, Ham’s descendants are depicted as being led into captivity with their buttocks uncovered as a sign of degradation.
Greeks and Romans
The Greeks and the Romans are really interesting. Sitting at the crossroads of the Mediterranean, they would have seen many different races of people—and this is reflected in their art and legends. The Greek myth of Phaethon, for example, shows that the Greeks knew that skin color was a function of climate.
In the story, Phaethon asked his father to let him drive the sun chariot for a single day. He could not control the chariot, so in some regions it came too close to the earth, burning the people there, while in the north he drove too far from the earth, lightening the people’s skin. Greek and Roman myths, in fact, show exactly how things change and that, with a different reference point—like the one the Greeks and Romans had—we would create different theories of ‘intelligence’:
“The nations inhabiting the cold places and those of Europe are full of spirit but somewhat deficient in intelligence and skill, so that they continue comparatively free, but lacking in political organization and the capacity to rule their neighbors. The peoples of Asia on the other hand are intelligent and skillful in temperament, but lack spirit, so that they are in continuous subjection and slavery. But the Greek race participates in both characters, just as it occupies the middle position geographically, for it is both spirited and intelligent; hence it continues to be free and to have very good political institutions, and to be capable of ruling all mankind if it attains constitutional unity.” (Pol. 1327b23-33, my italics)
Views of direct environmental influence and the porosity of bodies to these effects also entered the military machines of ancient empires, like that of the Romans. Officials such as Vegetius (De re militari, I/2) suggested avoiding recruiting troops from cold climates as they had too much blood and, hence, inadequate intelligence. Instead, he argued, troops from temperate climates should be recruited, as they possess the right amount of blood, ensuring their fitness for camp discipline (Irby, 2016). Delicate and effeminizing land was also to be abandoned as soon as possible, according to Manilius and Caesar (ibid). Probably the most famous geopolitical dictum of antiquity reflects exactly this plastic power of places: “soft lands breed soft men”, according to the claim that Herodotus attributed to Cyrus. (Meloni, 2017: 41-42)
The Roman architect Vitruvius “attributed the keen intelligence of his countrymen to the rarity of the atmosphere and to the heat. The less fortunate northern peoples, “being enveloped in a dense atmosphere, and chilled by moisture from the obstructing air … have but a sluggish intelligence”” (Gossett, 1997: 7). How convenient—people at the time thought they were ‘superior’ to others and then attempted to justify it on the basis of environmental—and eventually evolutionary—differences. However, such accounts, grounded in the Greek theory of humors, only speak to how the Greeks thought the environment shaped individuals, not to shared traits of the group. Such differences were thought to be almost immediately reversible: one could take a person who grew up in another environment, and who therefore had a different temperament, and change that temperament by switching his environment.
Thus if there were, say, a microregion of Germany where “Asiatic” environmental conditions prevailed, a person who settled in that microregion would end up with Asian attributes. Thus, humoral accounts of human diversity focused on the way environments shape individuals, rather than the way populations share traits. (Smith, 2016: 85)
The Greeks and the Romans, ironically, seemed to be really big on environmentalism—the thesis that the environment drives the proliferation of traits and that changing the environment can change one’s phenotypic traits. While this is not wholly true, there is a kernel of truth here.
Sarich and Miele (2004: 51) describe various ancient scholars’ writings on their observations of racial differences:
The most detailed surviving description of the racially defining characteristics of black Africans from the classical world appears in The Moretum, a poem attributed to Virgil (circa 1st century AD). A female character named Scybale is described as “African in race—her hair tightly curled, lips thick, color dark, chest broad, breasts pendulous, belly somewhat pinched, legs thick, and feet broad and ample.” In his book Blacks in Antiquity: Ethiopians in the Greco-Roman Experience, Frank M. Snowden compared the description with portrayals by twentieth-century anthropologists E. A. Hooton and M. J. Herskovits. For example, Hooton described the “outstanding features of the ancient specialized Negro division of mankind” as “narrow heads and wide noses, thick lips and thin legs, protruding jaws and receding chins, integument rich in pigment but poor in hairy growth, flat feet and round foreheads, tiny curls and big smiles.”
Snowden concluded: “While the author of The Moretum was writing poetry, not anthropology,” his description of the distinguishing racial characteristics of black Africans “is good anthropology; in fact, the ancient and modern phraseology is so similar that the modern might be considered a translation of the ancient” (emphasis added).
I’m sure most have heard the popular ‘myth’ that God burnt blacks by cooking them too long. As it turns out, there is a real basis for this myth: the Native Americans thought that white people weren’t baked enough, blacks were baked too much, and they themselves were—like Goldilocks—juuuuust right:
Earthmaker made the world with trees and fields, with rivers, lakes, and springs, and with hills and valleys. It was beautiful. However, there weren’t any humans, and so one day he decided to make some.
He scooped out a hole in a stream bank and lined the hole with stones to make a hearth, and he built a fire there. Then he took some clay and made a small figure that he put in the hearth. While it baked, he took some twigs and made tongs. When he pulled the figure out of the fire and had let it cool, he moved its limbs and breathed life into it, and it walked away. Earthmaker nonetheless realized that it was only half-baked. That figure became the white people.
Earthmaker decided to try again, and so he made another figure and put it on the hearth. This time he took a nap under a tree while the figure baked, and he slept longer than he intended. When he pulled the second figure out of the fire and had let it cool, he moved its limbs and breathed life into it, and it walked away. Earthmaker realized that this figure was overbaked, and it became the black people.
Earthmaker decided to try one more time. He cleaned the ashes out of the hearth and built a new fire. Then he scooped up some clay and cleaned it of any twigs or leaves, so that it was pure. He made a little figure and put it on the hearth, and this time he sat by the hearth and watched carefully as the figure baked. When this figure was done, he pulled it out of the fire and let it cool. Then he moved its limbs and breathed life into it, and it walked away. This figure was baked just right, and it became the red people. (A Potawatomi Story)
The first peoples to describe Africans in a racist manner were not Europeans but Arabs. They held slaves long before Europeans did; they even castrated their slaves. Jahiz of Basra described Africans as “people of black color, flat noses, kinky hair … despite their dimness, their boundless stupidity, their crude perceptions and their evil dispositions.” Ibn Khaldun stated, “The only people who accept slavery are the Negroes, owing to their low degree of humanity and their proximity to the animal stage.” Nasir al-Din Tusi stated, “Many have observed that the ape is more teachable than the Zanji [African].” (All quotes from Sarich and Miele, 2004: 60)
What this little tour of the concept of race throughout history tells us is one thing: the concept ‘race’ is not a European invention—races were not socially constructed in 1492. They were constructed thousands of years in the past by many different peoples who had different explanations for the racial differences they observed. While some of these were, for their time, great explanations for the observed differences, there was an element of racial prejudice even all those thousands of years ago. Yes, race is partly socially constructed (as evidenced here), but that social construction has a real, biological basis behind it.
It is obvious that the concepts of ‘race’ and ‘racism’ went hand-in-hand throughout antiquity. It is only today, it seems, that we can attempt to use the concept of race without any ‘racist’ undertones. The tour we went on, though, proves one thing: race exists and has been recognized for thousands of years.
Rachel Dolezal attracted media attention in 2015 because she, a white woman, presented and ‘acted’ as what Americans would describe as (the socialrace) “black.” She was a former chapter president for the NAACP (National Association for the Advancement of Colored People) and a former Africana studies instructor; when it was discovered that she had two white parents, she resigned from the chapter. Her white parents then came out and said that she was “passing as black.” And, looking at her appearance, one would be hard-pressed to say that she did NOT look black or that she DID NOT attempt to make her appearance LOOK like what the average American would describe as (the socialrace) “black.” In any case, the controversy is an interesting one: should she be able to self-identify as black, even though none of her (recent) ancestors derive from the African continent? I will discuss Quayshawn Spencer’s (2019) take on the Dolezal controversy and then my own thoughts on the matter.
The earliest use of the term “transracialism” I can find is from 2004, from Overall (2004), who states that transracialism is the “use of surgery to assist individuals to “cross” from being a member of one race to being a member of another” and that, if it is “morally acceptable” for one to have surgery to have the sex one feels one should be, then it should be morally acceptable for one to have surgery to change one’s race. (With this, I am reminded of the South Park episode where Kyle wanted to be black and play basketball, so he went and got a “Negroscopy.”) Others, though, argue that transracialism does not exist (Botts, 2018). In any case, transracialism can be defined as the feeling of being a race other than the one society has assigned—and by attempting to “pass” as another race, the individual in question may come to be seen by society as, for example, “black.”
If one is black in America then, surely, there is a high chance that they have experienced what it is like TO BE black in America, socially. In this specific case, has Dolezal ever experienced any sort of racial discrimination based on how she looks and presents herself as a black woman? She claims to have been the victim of anti-black hate crimes, which she reported to the police; she went to an HBCU (historically black college or university); and, as stated, she has changed her appearance in order to give off the “aura” that she is black, by tightly curling her hair and lightly tanning her skin—what black Americans would term a “high-yellow.” Her ex-husband is black. She ticks the “black/African American” box on job applications. So, knowing all of this about Dolezal and how she presents herself to the public, is she “black” socially?
“I was actually identified when I was doing human rights work in north Idaho as first transracial.” (Dolezal, 2015)
When Dolezal filed anti-black hate-crime reports with the Spokane police, she was asked about her experience, and then a reporter asked her if she was black. Dolezal responded by ending the interview. Then ABC found her birth parents, who confirmed that she was of Caucasian (European) descent. Case closed? But wait: Dolezal eventually admitted that she was indeed born white, even though she used the terms “black” and “African American” to describe herself.
One debate was about whether Dolezal could accurately claim to be racially Black without possessing what was called Black ancestry in the conversation. Furthermore, this debate was at least partially motivated by a genuine concern about whether Dolezal was taking away educational or employment opportunities that were intended for people with Black ancestry. For example, during Dolezal’s interview on The Real, co-host Loni Love said that she didn’t care about how Dolezal racially identified, but she did care about whether Dolezal marked ‘Black’ on her college applications because that act could have taken away scholarship money from a student with Black ancestry. Interestingly, Dolezal said that Howard’s college application didn’t ask about race, but she did say that she marked ‘Black’ on her job application to Spokane’s Office of Police Ombudsman Commission. Furthermore, Dolezal said she marked ‘Black’ because “we all have human origins in Africa.” (Spencer, 2019: 252)
However, even though Dolezal may be “black-passing”, beneath her carefully constructed persona she looks like a typical white American woman—indeed, I have seen many white women with hair like hers (not all of whom had their hair done to look that way, either). In a 2015 interview, her adopted brother said that what Dolezal was doing was “blackface.” He recalls:
“She told me not to blow her cover about the fact that she had this secret life or alternate identity,” Ezra Dolezal said Saturday. “She told me not to tell anybody about Montana or her family over there. She said she was starting a new life … and this one person over there was actually going to be her black father.”
Let’s say that Dolezal did do this; she ‘constructed’ a fake ‘family’ with a black mother, black father, and black siblings. She then goes out with them and the public sees them together. Rachel, by extension of being with her family, is now treated as “black”—and since being “black” in America is social, is she now “black”? BUT Dolezal seemed to be exercising “white privilege” by attempting to “black-pass” when convenient and “white-pass” when convenient—for instance, when she sued Howard University for discriminating against her because she is white! Did she mark “black” on that application and then sue for being discriminated against for being white?
An example of this debate can be found once again during Dolezal’s interview on The Real. In that interview, co-host Tamar Braxton expressed exactly [the concern that Dolezal is a ‘race-shifter’] when she asked whether Dolezal thought she had “walked the walk of a Black woman.” Interestingly, Dolezal responded, “Absolutely,” and followed that up with, “the police mark ‘Black’ on my traffic tickets.” (Spencer, 2019: 253)
But it is easy to show what is wrong with Dolezal’s claim that, since we all have African ancestry, there is nothing wrong with her putting “black” on employment forms and whatnot: racial membership is about “genomic ancestry, not ancestry simpliciter” (Spencer, 2019: 277, note 52). All living humans have African ancestry, but not all living humans have genomic African ancestry. While the social is involved in OMB race theory, other conditions need to hold for one to be a member of a race.
Some blacks who appear and present themselves as white are still considered black (Ginsberg, 1996; Hobbs, 2016). One could, for example, look at an extremely light-skinned black woman—say, like Beyonce—and ALMOST say, “Oh, she’s white,” but something is off about the appearance: she does not look like what the average American would term WHITE, and so, on closer inspection, she is—rightly—deemed ‘black.’ Thus, just because one presents oneself as a certain race, this does not mean that one IS or BECOMES that race. Race is NOT like a costume that one can put on and take off at the end of the day—and while RACE is partly about one’s lived experiences in a racialized society, it is also about how society treats the individual it has deemed to be a certain race. While people are torn on Dolezal, the fact remains that she has altered her appearance considerably enough to “pass for” black.
‘White-passing blacks’, of course, have a ton of white (European) ancestry—which is how they can have light skin while still keeping certain prominent ‘black’ features, such as the lips and nose. One story of a family that ‘white-passed’ is given in A Chosen Exile:
In California, the young woman passed as white. She married a white man, and they had children who never knew they had black blood. Then, one day, years later, her phone rang.
It was the woman’s mother with distressing news: Her father was dying, and she needed to return home immediately to tell him goodbye.
The cousin replied, “I can’t. I’m a white woman now.”
She missed her father’s funeral, and never saw her mother or siblings again.
Did this woman all of a sudden become white when she disavowed her family on the grounds that she is “a white woman now”? If society treats her as ‘white’, is she white, her racial ancestry notwithstanding? Using Hardimon’s (2017) socialrace concept, yes, she would then be ‘white’ in America—but, biologically, she would still be ‘black.’
Asian eyes, white eyes?
Stories like this make me think back to a book I read in the seventh grade called Goodbye Vietnam (Whelan, 1993). From what I recall of the book all these years later, the Vietnamese girl described Asians getting surgeries to change their eyelids (called blepharoplasties) so they could ‘white-pass’, which would be ‘transracialism’ under Overall’s (2004) definition. Take this story about a plastic surgeon:
Millard first considered altering the human eye while reconstructing eyebrows for burn victims. He began to keenly study the eye, socket, and folds, musing how to change it from “Oriental to Occidental.”
Upon researching the operation, Millard found that surgeons in Japan, Hong Kong, and even Korea were already performing double-eyelid procedures for both medical and cosmetic reasons. Unable to find any publications about the surgery that were written in English, Millard devised his own operation. He decided to raise the nasal bridge and widen the eyes to reduce the “Asian-ness” of his patient’s visage. Millard first transplanted cartilage to the nose. He then tore the inner fold of the eyelid, removed fat resting above the eye, and sutured folds of skin together, creating a double eyelid. The interpreter was pleased with Millard’s work, and reported that after the operation, his ethnicity was often mistaken for Italian or Mexican.
For example, Kaw (1993: 75) writes that the attempt by Asian American women to get the double-eyelid surgery “is an attempt to escape persisting racial prejudice that correlates their stereotyped genetic physical features (“small, slanty” eyes and a “flat” nose) with negative behavioral characteristics, such as passivity, dullness, and a lack of sociability.” The first writings on such a surgery in Asia, though, date to the 18th century, long before a strong European presence on the continent (Nguyen, Hsu, and Din, 2009)—though one could say that Asians saw Europeans over the ages and attempted to emulate what they saw. Nevertheless, this does seem to be a good case study of the Asian eyes, white eyes claim—some needed to attempt to ‘white-pass’ so they would not be sent to the concentration camps for the Japanese.
This is a good example of transracialism as stated by Overall; they attempted to ‘white-pass’, but for a reason—to live free. They would, presumably, then blend into society as their chosen (or unchosen) race, showing that one can, indeed, change their race by changing their outward appearance, since race in America is partly (or fully, depending on your view) based on one’s physical appearance—one’s phenotype, over which one has some degree of control.
Is Dolezal black? No, she is not. She can ‘black-pass’ all she wants, she can say that “We’re all African” all she wants, she can say that police mark her as black all she wants (and while she would be socially ‘black’, which is what she is going for, she is not ‘black’ in the OMB way), she can say that she ticks off the “black/African American” box on applications—but this would only very weakly mean that she is ‘black.’ In virtue of having NO recent African ancestry, Dolezal is NOT black and is, therefore, running around in blackface. One cannot change their biological race, but it may be possible to change their socialrace—which race society says one is.
The many cases one can find of blacks that ‘white-pass’—and even of blacks that have NO IDEA that they ARE black—speak to the complex nature of ‘race’ in America. Yes, race is partially socially constructed, and if we are going plainly off of how Americans in society define ‘race’, just based on appearance, one would be hard-pressed to say that Dolezal is not ‘black’—she ‘looks’ it, right? So Dolezal can be said to be ‘black-passing’ just as the woman mentioned above could be said to be ‘white-passing’—but this does not CHANGE THEIR RACE!
The case of blepharoplasties is interesting and further lends something to this discussion—certain Asian groups in America, and back home in their countries, attempted to have the double-eyelid surgery in order to change their appearance—some of them doing so to ‘white-pass’ so that they would not get sent to the concentration camps in America.
Lastly, there was a woman a few years back who had melanin injections in her skin and Botox and whatnot to change the appearance of her lips—her change is shocking, to say the least, and is an example of Overall’s definition of transracialism.
(1) We accept the following premises about trans people and the rights and dignity to which they are entitled; (2) we also accept the following premises about identities and identity change in general; (3) therefore, the common arguments against transracialism fail, and we should accept that there’s little apparent logically coherent reason to deny the possibility of genuine transracialism.
IQ-ists like to talk about the correlation between “IQ” tests and scholastic achievement tests like the SAT (Scholastic Assessment Test) and how this is one piece of evidence for the ‘validity’ of IQ—the same kinds of score distributions noted on the SAT are also noted in the ‘standard IQ tests.’ However, a confusion rests with the IQ-ists. They, circularly, point to the fact that there is a high correlation between “IQ tests” and the SAT. But what they fail to realize—and what I rarely see discussed—is that the process of item selection and removal has a strong impact on scores. Such score differences are, indeed, built into the SAT, just as they are for IQ.
The SAT was created in the mid-1920s by eugenicist Carl Brigham—one of the psychologists who worked on the Army Alpha tests. When he created the test, it was called the Scholastic Aptitude Test. Harvard then used it as an admissions test, and other Ivy League schools later used it as a scholarship test. The SAT was developed directly from the first IQ tests—so they are intricately linked. First, I will talk about gender differences; second, race differences. Then I will discuss how and why these differences persist.
Gender differences in the SAT
Sex differences in IQ were built out of tests like Terman’s Stanford-Binet, but for the SAT, items and subtests were directly chosen BECAUSE they showed a gap in knowledge between the two groups. Men have scored higher on the SAT than women since the test’s inception, which was due to men’s higher math scores, partially offset by women’s higher verbal scores. The ETS then changed the test in the late 80s to achieve “a better balance for the scores between the sexes” (quoted in Rosser, 1989: 38)—which still left an eleven-point score advantage for men. They had added more verbal items that favored men, but they did not add more math items that favored women. BUT, interestingly, girls have higher GPAs than boys.
For example, of all of the SAT math questions, the one that produced the largest gender gap was a question in which the win-loss record of a basketball team needed to be computed, which is noted by Rosser (1989: 40-41) in tables 2 and 3:
Interestingly, Rosser (1989: 19) reports that in one county in Maryland, where boys and girls took the same advanced math courses, girls outscored boys academically but had SAT-M scores 37-47 points lower than boys. The kinds of items that go onto a test are tried out on a sample of children, and the kinds of distributions the constructors want are what they get: by adding or subtracting certain questions and subtests, they can get what they want to see. Rosser (1989) notes that “if the 10 most ‘pro-boy’ items were replaced with items similar to the 10 most ‘pro-girl’ items, boys nationally would outscore girls by about 29 points thus eliminating more than a third of the existing gender gap” (pg 23). Further, for the 1986 SAT, if the ten items that most favored boys were removed and replaced by items that favored girls more, then girls would outscore boys by 4 points. In virtue of what is the current test the ‘right’ one, and what justifies the assumptions of the ETS? Further, in Rosser’s (1989) analysis, “Hispanic” women showed the largest gap while African-American women had the smallest gap when compared with men of their own ‘race.’ See some examples from the Appendices of the items which showed the most extreme sex differences (pg 156-161):
Looking at questions such as these—and understanding how the SAT has evolved regarding gender differences since its inception in the mid-1920s—one can understand how and why boys and girls score differently. For, if different assumptions were made about the ‘nature’ of ‘cognitive’ differences between boys and girls, more questions favoring girls would be added, and we would be having a whole different kind of conversation right now.
When it comes to math, though, Niederle and Vesterlund (2010: 140) conclude:
… that competitive pressure may cause gender differences in test scores that exaggerate the underlying gender differences in math skills.
Women are, furthermore, less likely to guess (that is, less likely to risk-take) compared to men. This then translates to the testing environment where a guess is penalized while leaving it blank is not.
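The arithmetic behind that guessing penalty is worth making explicit. A minimal sketch, assuming the pre-2016 scoring rule of +1 for a correct answer, −1/4 for a wrong answer on a five-choice item, and 0 for a blank (the function name is mine):

```python
# Expected value of guessing vs. leaving an item blank under the old SAT
# scoring rule: +1 for a correct answer, -1/4 for a wrong one, 0 for a blank.
# (Five answer choices per item; rule as used before the 2016 redesign.)

def guess_expected_value(choices_remaining: int, penalty: float = 0.25) -> float:
    """Expected score from guessing uniformly among the remaining choices."""
    p_correct = 1 / choices_remaining
    return p_correct * 1 + (1 - p_correct) * -penalty

# Blind guess among all 5 choices: expected value 0, same as a blank.
print(guess_expected_value(5))
# Eliminate even one choice and guessing has positive expected value.
print(guess_expected_value(4))
```

A blind guess and a blank have the same expected value, but eliminating even one choice makes guessing profitable, so test-takers who are less willing to guess leave points on the table.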
The new SAT has disadvantaged female test-takers; the AEI has stated that such differences have persisted for 50 years. Yes, SAT-M score differences are there, but, as noted above, when children were taught in the same advanced math classroom, girls outperformed boys yet ended up scoring lower on the SAT-M section—and looking at the SAT-M questions points us to why this paradox occurs. And, to top it all off, the SAT “underpredicts first-year college performance for women and overpredicts for men — thus violating one of the testers’ own, specially designed standard of validity” (Mensh and Mensh, 1991: 71).
Race and the SAT
Now, we turn to race and the SAT. Kidder and Rosner (2002) studied 100,000 SAT test-takers from 1989, along with another database of over 200,000 test-takers in New York. They examined around 580 SAT questions from 1988-89 and noted the percentage of questions that white, black, and Mexican students answered correctly. If 60 percent of whites answered a question correctly and only 20 percent of blacks did, then the racial impact was 40 percent for that question. Across 78 verbal items, whites answered 59.8 percent correctly while blacks answered 46.4 percent correctly, for a racial impact of 13.4 (Kidder and Rosner, 2002: 148).
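Kidder and Rosner’s “racial impact” statistic is simple arithmetic: the percentage-point gap between the two groups’ correct-answer rates on an item. A minimal sketch of the calculation (the function name is mine):

```python
def racial_impact(pct_reference_correct: float, pct_focal_correct: float) -> float:
    """Kidder and Rosner's item-level 'racial impact': the percentage-point
    gap between the reference group's and the focal group's correct rates."""
    return pct_reference_correct - pct_focal_correct

# The example item from the text: 60% of whites vs. 20% of blacks correct.
print(racial_impact(60, 20))                    # impact of 40
# The 78 verbal items overall: 59.8% vs. 46.4% correct.
print(round(racial_impact(59.8, 46.4), 1))      # impact of 13.4
```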
How are such differences explained? Of the six sections on the SAT, the ETS uses one for experimental test items. With whites as the reference group, if blacks or another group answer more questions correctly than whites, the item is discarded as invalid. Kidder and Rosner (2002) note that for an item of medium difficulty, whites answered 62 percent correctly while blacks answered 38 percent correctly. But a question of similar difficulty showed that blacks outscored whites by 8 percent, and women outscored men by 9 percent. Au (2008: 66) explains:
Test designers determined that this question, where African Americans scored higher than Whites (and women higher than men), was psychometrically invalid and was not included in future SATs. The reason for this was that ETS bases its test question selection on statistics established by performance averages on previous tests: The students who statistically on average score higher on the SAT did not answer this question correctly enough of the time, while those who statistically on average score lower on the SAT answered this question correctly too often. By psychometric standards this means that this question was an anomaly and therefore was not considered a “valid” or “reliable” test question for a standardized test such as the SAT. White students outperform black students on the SAT. Higher-scoring students, who tend to be white, correctly answer SAT experimental test questions at higher rates than typically lower scoring students, who tend to be non-White, ensuring that the test question selection process itself has a self-reinforcing, racial bias.
Rosner, in his article On White Preferences, explains this well:
I don’t believe that ETS–the Educational Testing Service, the developer of the SAT and the source of this October 1998 test data–intended for the SAT to be a white preference test. However, the “scientific” test construction methods the company uses inexorably lead to this result. Each individual SAT question ETS chooses is required to parallel the outcomes of the test overall. So, if high-scoring test-takers–who are more likely to be white–tend to answer the question correctly in pretesting, it’s a worthy SAT question; if not, it’s thrown out. Race and ethnicity are not considered explicitly, but racially disparate scores drive question selection, which in turn reproduces racially disparate test results in an internally reinforcing cycle.
My considered hypothesis is that every question chosen to appear on every SAT in the past ten years has favored whites over blacks. The same pattern holds true on the LSAT and the other popular admissions tests, since they are developed similarly. The SAT question selection process has never, to my knowledge, been examined from this perspective. And the deeper one looks, the worse things get. For example, while all the questions on the October 1998 SAT favored whites over blacks, approximately one-fifth showed huge, 20 percent gaps favoring whites. Skewed question selection certainly contributes to the large test score disparities between blacks and whites.
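The selection loop Rosner describes can be sketched as a toy simulation: items whose pass rates track scores on the existing test survive pretesting, and because existing scores differ by group, the surviving items carry the gap forward. Everything here (the built-in score gap, each item’s “loading,” the keep threshold) is an assumption of the sketch, not ETS’s actual procedure:

```python
import random

random.seed(0)

# Toy model of the selection rule: an item survives pretesting only if
# examinees who score high on the EXISTING test tend to answer it correctly.

N = 2000
# Two groups with a built-in gap on the existing test (an assumption of
# the toy model, not an empirical claim about any real population).
group = [0] * (N // 2) + [1] * (N // 2)
score = [random.gauss(100 + 10 * g, 15) for g in group]

def make_item(loading):
    """A candidate item whose pass probability tracks the existing score
    with strength `loading` (0 = unrelated, 0.8 = strongly related)."""
    def answer(s):
        p = 0.5 + loading * (s - 105) / 60  # higher score -> higher pass rate
        return random.random() < min(max(p, 0.05), 0.95)
    return answer

def keep_item(item):
    """Selection rule: keep the item only if high scorers pass it clearly
    more often than low scorers (a crude stand-in for ETS item statistics)."""
    ranked = sorted(range(N), key=lambda i: score[i])
    lo, hi = ranked[: N // 4], ranked[-(N // 4):]
    passes = [item(score[i]) for i in range(N)]
    return sum(passes[i] for i in hi) / len(hi) - sum(passes[i] for i in lo) / len(lo) > 0.1

# Pretest a mix of score-correlated and uncorrelated candidate items.
candidates = [make_item(0.8) for _ in range(20)] + [make_item(0.0) for _ in range(20)]
kept = [it for it in candidates if keep_item(it)]

def group_mean(g):
    """Mean number of kept items answered correctly by members of group g."""
    idx = [i for i in range(N) if group[i] == g]
    return sum(sum(it(score[i]) for it in kept) for i in idx) / len(idx)

# The kept items track the old score; the old score differs by group;
# so the new test reproduces the group gap without ever consulting `group`.
print(len(kept), round(group_mean(1) - group_mean(0), 2))
```

The point of the sketch is only the feedback loop: no group variable is ever consulted during item selection, yet the selected items reproduce the gap on the new test.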
So, in order to attempt to rectify this situation, the College Board wants to award “adversity points”: students’ SAT scores would be compared to their parental SES level and adjustments would then be made to their scores. Further, there was discussion on whether or not to give 230 “bonus points” to blacks and 130 to “Hispanics”, and to penalize Asians by 50 points.
But why do Asians score slightly higher than whites? Simple: they, too, would be in the group of higher-scoring students and, therefore, the test items would—indirectly—be shaped to them. The same holds for ‘Hispanics’ and blacks, as Kidder and Rosner note (regarding test questions), and so the same would hold for Asians and whites. I think such discussions of “bonus points” and penalization on such tests, while a start, do not get to the assumptions baked into these kinds of tests. Such tests are biased in virtue of the content on them—that is, the item content.
Kidder and Rosner (2002: 210) conclude:
… by reminding readers that, based on our empirical findings and review of the educational measurement literature, the process currently used to construct the SAT, LSAT, GRE, and similar tests unintentionally operates to select questions with larger racial and ethnic disparities (favoring Whites).
While, of course, test-prep can be identified as a factor that causes X group to score higher than Y group, other, more valid hypotheses can be—and have been—considered. Analyzing the items on these tests, we see that they are far from ‘objective’ ‘measures’ of ‘ability.’ The IQ-ist will cry that there is some’thing’ being measured in virtue of the correlation between the SAT and IQ—but no ‘thing’ is being measured by any of these tests (Nash, 1990); they were created for the sole purpose of justifying and reproducing our current social hierarchies (Mensh and Mensh, 1991; Au, 2009; Garrison, 2009).
One needs only to know how such items are selected for inclusion on these tests. Andrew Strenio writes in his book The Testing Trap (1981: 95):
We look at individual questions and see how many people get them right, and which people get them right. We consciously and deliberately select questions so that the kind of people who scored low on the pretest will score low on subsequent tests. We do the same for the middle or high scorers. We are imposing our will on the outcome.
Only one way, though, exists for test constructors to do so—and this is to presuppose, a priori, who the high, middle and low scorers are and construct the test accordingly.
Take a thought experiment in a world in which our society was reversed: blacks outscored whites and had better life prospects, and the same holds for women and men. The hereditarians in this imagined world would then see that the scores on these tests correlated with smaller brain sizes, a lower number of neurons, and whatnot. What, then, could the test constructors say to justify how women and blacks scored higher than men and whites?
Though these are 35-year-old questions, I fail to see why there would be any changes in 2020—test construction has not changed. Such assumptions are, as argued at length, built into the test. The outcome of these tests, of course, is determined by the nature of the content of the test—the test’s questions. IQ-ists, then, point to the score differentials between groups (men/women, blacks/whites, etc) and say “See! There are differences so we are not all-the-same-blank-slates!” But statements like this fail to appreciate how tests are constructed—they believe that these tests are ‘objective measures’ that, in a way, show one’s ‘genetic potential’—and this claim is false.
If which items are chosen for inclusion on the test is determined by the test constructors—via the experimental questions on the SAT, on which whites are more likely to score higher—then it will indeed follow (and empirical evidence shows this) that what drives such large score disparities between whites and blacks on the SAT is, in fact, biased test questions. The same, too, holds for the differences between men and women. Change the assumptions, change the nature and the outcome of the test, then change what you study to ‘find’ the differences ‘causing’ such test score differences between groups. Hopefully, putting it this way shows the absurdity of using biased tests to show that ‘biology’ is somehow responsible for score differences between groups.
Such inequalities in standardized test scores like the SAT—just like IQ—are, then, structured into the test itself—so tests like this only reproduce the differences between groups that they claim to ‘measure’—which is a circular claim. Studies like this show the folly of thinking that one group is ‘genetically smarter’ than another—which is what the hereditarians set out to prove. Too bad they have no measuring unit, object of measurement, or measured object.
The East Asian race has been held up as an example of what a high-“IQ” population can do and, along with the correlation between IQ and standardized testing, “HBDers” claim that this is proof that East Asians are more “intelligent” than Europeans and Africans. Lynn (2006: 114) states that the average IQ of China is 103. There are many problems with such a claim, though—not least the many reports of Chinese cheating on standardized tests. East Asians are claimed to be “genetically superior” to other races as regards IQ, but this claim fails.
Chinese IQ and cheating
Differences in IQ scores have been noted all over China (Lynn and Cheng, 2013), but generally the consensus is that, as a country, Chinese IQ is 105, while in Singapore and Hong Kong it is 103 and 107 respectively (Lynn, 2006: 118). To explain the patterns of racial IQ scores, Lynn has proposed the Cold Winters theory (against which a considerable response has been mounted), which proposes that the harshness of the ice-age environment selected for higher ‘general intelligence’ in East Asian and European populations; such a hypothesis seems valid to hereditarians since East Asians (“Mongoloids”, as Lynn and Rushton call them) consistently score higher on IQ tests than Europeans (eg Lynn and Dzobion, 1979; Lynn, 1991; Herrnstein and Murray, 1994). In a recent editorial in Psych, Lynn (2019) criticizes this claim from Flynn (2019):
While northern Chinese may have been north of the Himalayas during the last Ice Age, the southern Chinese took a coastal route from Africa to China. They went along the Southern coast of the Middle East, India, and Southeast Asia before they arrived at the Yangzi. They never were subject to extreme cold.
In response, Lynn cites Frost’s (2019) article, where he claims that “mean intelligence seems to have risen during recorded history at temperate latitudes in Europe and East Asia.” This is just-so storytelling about how and why such “abilities” were “selected for.” Still, the Chinese score higher on standardized tests than whites and blacks, and this deserves an explanation (the Cold Winters theory fails; it’s a just-so story).
Before continuing, something must be noted about Lynn and his Chinese IQ data. Lynn ignores numerous studies on Chinese IQ—Lynn would presumably say that he wants to test those in good conditions and so disregards those parts of China with bad environmental conditions (as he did with African IQs). Here is a collection of forty studies that Lynn did not refer to—some showing that, even in regions of China with optimum living conditions, IQs below 90 are found (Qian et al, 2005). How could Lynn miss so many of these studies if he has been reading up on the matter and, presumably, keeping up with the latest findings in the field? The only answer to the question is that Richard Lynn is dishonest. (I can see PumpkinPerson claiming that “Lynn is old! It’s hard to search through and read every study!” to defend this.)
Although the Chinese are currently trying to stop cheating on standardized testing (even a possible seven-year prison sentence, if caught cheating, does not deter cheating), cheating on standardized tests in China and by the Chinese in America is rampant. The following is but a sample of what could be found doing a cursory search on the matter.
One of the most popular ways of cheating on standardized tests is to have another person take the exam for you—which is rampant in China. In one story, as reported by The Atlantic, students can hire “gunmen” to sit in on tests for them, though measures such as voice recognition and fingerprinting are being taken to fight back against this. It is well-known that much of the cheating on such tests is done by international students.
Even on the PISA—which is used as an “IQ” proxy since the two correlate highly (.89) (Lynn and Mikk, 2009)—there is cheating. For the PISA, each country is supposed to select, at random, 5,000 of its 15-year-olds from around the country; China instead chose its biggest provinces, which are packed with universities. Further, score fluctuations attract attention, which indicates dishonesty. In 2013, more than 2,000 people gathered outside a school to protest new rules against cheating on tests.
The rift amounted to this: Metal detectors had been installed in schools to root out students carrying hearing or transmitting devices. More invigilators were hired to monitor the college entrance exam and patrol campus for people transmitting answers to students. Female students were patted down. In response, angry parents and students championed their right to cheat. Not cheating, they said, would put them at a disadvantage in a country where student cheating has become standard practice. “We want fairness. There is no fairness if you do not let us cheat,” they chanted. (Chinese students and their parents fight for the right to cheat)
Surely, in light of the rampant culture of cheating on standardized tests in China (and among Chinese Americans), we can trust the Chinese IQ numbers.
“Genetic superiority” and immigrant hyper-selectivity
Strangely, some proponents of the concepts of “genetic superiority” and “progressive evolution” still exist. PumpkinPerson is one of those proponents, writing articles with titles like “Genetically superior: Are East Asians more socially intelligent too?”, “More evidence that East Asians are genetically superior”, and “Oriental populations: Genetically superior”, even referring to a fictional character on a TV show as a “genetic superior.” Such fantastical delusions come from Rushton’s ridiculous claim that evolution may be progressive and that some populations are, therefore, “more evolved” than others:
One theoretical possibility is that evolution is progressive and that some populations are more “advanced” than others. (Rushton, 1992)
Such notions of “evolutionary progress” and “superiority”—even back in my “HBD” days—never passed the smell test to me. In any case, how can East Asians be said to be “genetically superior”? What do “superior genes” or a “superior genome” look like? This has been outright stated by, for example, Lynn (1977), who proclaims—for the Japanese—that his “findings indicate a genuine superiority of the Japanese in general intelligence.” This claim, though, is refuted by the empirical data—what explains East Asian educational achievement is not “superior genes” but the belief that education is paramount for upward social mobility and a means to preempt discrimination, which would be why East Asians overperform in school (Sue and Okazaki, 1990).
Furthermore, the academic achievement of Asians cannot be reduced to Asian culture—the fact that they are hyper-selected is why social class matters less for Asian Americans (Lee and Zhou, 2017).
These counterfactuals illustrate that there is nothing essential about Chinese or Asian culture that promotes exceptional educational outcomes, but, rather, is the result of a circular process unique to Asian immigrants in the United States. Asian immigrants to the United States are hyper-selected, which results in the transmission and recreation of middle-class specific cultural frames, institutions, and practices, including a strict success frame as well as an ethnic system of supplementary education to support the success frame for the second generation. Moreover, because of the hyper-selectivity of East Asian immigrants and the racialisation of Asians in the United States, stereotypes of Asian-American students are positive, leading to ‘stereotype promise’, which also boosts academic outcomes.
Inequalities reproduce at both ends of the educational spectrum. Some students are assumed to be low-achievers and undeserving, tracked into remedial classes, and then ‘prove’ their low achievement. On the other hand, others are assumed to be high-achievers and deserving of meeting their potential (regardless of actual performance); they are tracked into high-level classes, offered help with their coursework, encouraged to set their sights on the most competitive four-year universities, and then rise to the occasion, thus ‘proving’ the initial presumption of their ability. These are the spill-over effects and social psychological consequences of the hyper-selectivity of contemporary Asian immigration to the United States. Combined with the direct effects, these explain why class matters less for Asian-Americans and help to produce exceptional academic outcomes. (Lee and Zhou, 2017)
The success of second-generation Chinese Americans has, too, been held up as more evidence that the Chinese are ‘superior’ in their mental abilities—being deemed ‘model minorities’ in America. However, in Spain, the story is different: first- and second-generation Chinese immigrants score lower than the native Spanish population on standardized tests. The ‘types’ of immigrants that have emigrated have been forwarded as an explanation for the differences in attainments of Asian populations. For example, Yiu (2013: 574) writes:
Yet, on the other side of the Atlantic, a strikingly different story about Chinese immigrants and their offspring – a vastly understudied group – emerges. Findings from this study show that Chinese youth in Spain have substantially lower educational ambitions and attainment than youth from every other nationality. This is corroborated by recently published statistics which show that only 20 percent of Chinese youth are enrolled in post-compulsory secondary education, the prerequisite level of schooling for university education, compared to 40 percent of the entire adolescent population and 30 percent of the immigrant youth population in Catalonia, a major immigrant destination in Spain (Generalitat de Catalunya, 2010).
… but results from this study show that compositional differences across immigrant groups by class origins and education backgrounds, while substantial, do not fully account for why some groups have higher ambitions than others. Moreover, existing studies have pointed out that even among Chinese American youth from humble, working-class origins, their drive for academic success is still strong, most likely due to their parents’ and even co-ethnic communities’ high expectations for them (e.g., Kao, 1995; Louie, 2004; Kasinitz et al., 2008).
The Chinese in Spain believe that education is a closed opportunity and so they allocate their energy elsewhere—into entrepreneurship (Yiu, 2013). So, instead of pushing for education, Asian parents there push for entrepreneurship. What this shows is that what the Chinese do is based on context and on how they perceive they will be looked at in the society they emigrate to. US-born Chinese immigrants are shuttled toward higher education, whereas in the Netherlands the second-generation Chinese have lower educational attainment, and the differences come down to national context (Noam, 2014). The Chinese in the U.S. are hyper-selected whereas the Chinese in Spain are not, and this shows: the Chinese in the US have high educational attainment whereas they have low educational attainment in Spain and the Netherlands—in fact, the Chinese in Spain show lower educational attainment than other ethnic groups (Central Americans, Dominicans, Moroccans; Lee and Zhou, 2017: 2236), which to Americans would come as a surprise.
Second-generation Chinese parents match their intergenerational transmission of their ethnocultural emphasis on education to the needs of their national surroundings, which, naturally, affects their third-generation children differently. In the U.S., adaptation implies that parents accept the part of their ethnoculture that stresses educational achievement. (Noam, 2014: 53)
So what explains the higher educational attainment of Asians? A mixture of culture and immigrant (hyper-) selectivity along with the belief that education is paramount for upward mobility (Sue and Okazaki, 1990; Hsin and Xie, 2014; Lee and Zhou, 2017) and the fact that what a Chinese immigrant chooses to do is based on national context (Noam, 2014; Lee and Zhou, 2017). Poor Asians do indeed perform better on scholastic achievement tests than poor whites and poor ‘Hispanics’ (Hsin and Xie, 2014; Liu and Xie, 2016). Teachers even favor Asian American students, perceiving them to be brighter than other students. But what are assumed to be cultural values are actually class values which is due to the hyper-selectivity of Asian immigrants to America (Hsin, 2016).
The fact that the term “Mongoloid idiot” was coined for those with Down syndrome because they looked Asian is very telling (see Hilliard, 2012 for discussion). But the IQ-ists switched from talking about Caucasian superiority to Asian superiority right as the East began its economic boom (Lieberman, 2001). The fact that there were disparate “estimates” of skull sizes across these centuries points to the fact that such “scientific observations” are painted with a cultural brush. See eg table 1 from Lieberman (2001):
This tells us, again, that our “scientific objectivity” is clouded by political and economic prejudices of the time. This allows Rushton to proclaim “If my work was motivated by racism, why would I want Asians to have bigger brains than whites?” Indeed, what a good question. The answer is that the whole point of “HBD race realism” is to denigrate blacks, so as long as whites are above blacks in their little self-made “hierarchy” no such problem exists for them (Hilliard, 2012).
Note how Rushton’s long-debunked r/K selection theory (Anderson, 1991; Graves, 2002) took the current hierarchy and placed dozens of traits on a hierarchy where it was M > C > N (Mongoloids, Caucasoids, and Negroids respectively, to use Rushton’s outdated terminology). It is a political statement to put the ‘Mongoloids’ at the top of the racial hierarchy; the goal of ‘HBD’ is to denigrate blacks. But do note that in the late 19th to early 20th century, East Asians were deemed to have small brains and large penises, and that Japanese men, for instance, would “debauch their [white] female classmates” (quoted in Hilliard, 2012: 91).
The “IQ” of China (along with scores on other standardized tests such as TIMSS and PISA), in light of the scandals occurring regarding standardized testing, should be suspect. Richard Lynn has failed to report dozens of studies that show low IQ scores for China, thus inflating its scores. This is, yet again, another nail in the coffin for the Cold Winters theory, since the story is formulated on the basis of cherry-picked IQ scores of children. I have noted that different assumptions would yield different evolutionary stories. Thus, if the other data were provided and, say, Chinese IQ were found to be lower, we would just create a story to justify the score. This is illustrated wonderfully by Flynn (2019):
I will only say that I am suspicious of these because none of us can go back and really evaluate environment and mating patterns. Given free rein, I can supply an evolutionary scenario for almost any pattern of current IQ scores. If blacks had a mean IQ above other races I could posit something like this: they benefitted from exposure to the most rigorous environmental conditions possible, namely, competition from other people. Thanks to greater population pressures on resources, blacks would have benefitted more from this than any of those who left at least for a long time. Those who left eventually became Europeans and East Asians.
The hereditarians point to the academic success of East Asians in America as proof that IQ tests ‘measure’ intelligence, but East Asians in America are a hyper-selected sample. As the references I have provided show, second-generation Chinese immigrants in Spain and the Netherlands show lower educational attainment than other ethnies (the opposite is true in America), and this is explained by the context in which the immigrant family finds itself—where do you allocate your energy, education or entrepreneurship? Such choices seem to be class-based, since education is championed by the Chinese in America and not in Spain and the Netherlands. Context, then, dictates outcomes, and these cases refute any claims of ‘genetic superiority’—they also refute, for that matter, the claim that genes matter for educational attainment (and therefore IQ)—although we did not need to know this to know that IQ is a bunk ‘measure’.
So if the Chinese cheat on standardized tests, then we should not accept their IQ scores; the fact that they, for example, provide non-random samples of children from large provinces speaks to their dishonesty. They are like Lynn, in a way, avoiding the evidence that IQ scores are not what they seem—both Lynn and the Chinese government are dishonest cherry-pickers. The ‘fact’ that East Asian educational attainment can be attributed to genes is false; it is attributable to hyper-selectivity and to notions of class and of what constitutes ‘success’ in the country they emigrate to—so what they attain is based on (environmental) context.
In a conversation with an IQ-ist, one may eventually find themselves discussing the concept of “superiority” or “inferiority” as it regards IQ. The IQ-ist may say that only critics of the concept of IQ place any sort of value-judgment on the number one gets when one takes an IQ test. But if the IQ-ist says this, then they are showing their ignorance of the history of the concept of IQ. The concept was, in fact, formulated to show who was more “intelligent”—“superior”—and who was less “intelligent”—“inferior.” But here is the thing: the terms “superior” and “inferior” are anatomic terms, which shows the folly of the attempted appropriation.
Superiority and inferiority
If one wants to find early IQ-ists talking about superiority and inferiority regarding IQ, one need only check Lewis Terman’s very first Stanford-Binet tests. His scales—now in their fifth edition—state that IQs between 120 and 129 are “superior” while 130–144 is “gifted or very advanced” and 145–160 is “very gifted” or “highly advanced.” How strange… But the IQ-ist can say that they were just products of their time and that no serious researcher believes such foolish things, that one is “superior” to another on the basis of an IQ score. What about proximal IQs? Lateral IQs? Posterior IQs? Distal IQs? It’s ridiculous to take anatomic terminology (which describes physical things) and attempt to use it to describe mental “things.”
But perhaps the most famous hereditarian, Arthur Jensen, as I have noted, wrongly stated that heritability estimates can be used to estimate one’s “genetic standing” (Jensen, 1970) and that if we continue our current welfare policies we are in danger of creating a “genetic underclass” (Jensen, 1969). This, like the creation of the concept of IQ in the early 1900s, speaks to the hereditarian agenda and the reason for the IQ enterprise as a whole. (See Taylor, 1980 for a wonderful discussion of Jensen’s confusion on the concept of heritability.)
This is no surprise when you understand that IQ tests were created to rank people on a mental hierarchy that reflected the social hierarchy of the time, which would then be used as justification for their spot on that social hierarchy (Mensh and Mensh, 1991). So it is no surprise that anatomic terminology was hijacked in an attempt at forwarding eugenic ideas. But the eugenicists’ concept of superiority didn’t always pan out the way they wanted it to, as evidenced a few decades before the conceptualization of standardized testing.
Galton attempted to show that those with the fastest reaction times were more intelligent, but when he found that the common man had just as quick a reaction time, he abandoned this test. Then Cattell came along and showed that no relationship existed between sensory perception and IQ scores. Finally, Binet showed that measures of the skull did not correspond with teachers’ assessments of who is or is not “intelligent.” Some decades later, Binet and Simon finally constructed a test that discriminated between who they felt was or was not intelligent—which discriminated by social class. This test was finally the “measure” that would differentiate between social classes, since it was based on a priori notions of an individual’s place in the social hierarchy (Garrison, 2009: 75). Binet and Simon’s “ideal city” would use test scores as a basis to shuttle people into the occupations they “should be” in, on the basis of IQ scores that would show how they would work according to their “aptitudes” (Mensh and Mensh, 1991: 24; Garrison, 2009: 79). Bazemore-James, Shinaorayoon, and Martin (2017) write that:
The difference in racial subgroup mean scores mimics the intended outcomes of the original standardized IQ tests, with exception to Asian Americans. Such tests were invented in the 1910s to demonstrate the superiority of rich, U.S.-born, White men of northern European descent over non-Whites and recent immigrants (Gersh, 1987). By developing an exclusion-inclusion criteria that favored the aforementioned groups, test developers created a norm “intelligent” (Gersh, 1987, p. 166) population “to differentiate subjects of known superiority from subjects of known inferiority” (Terman, 1922, p. 656).
So, as one can see, this “superiority” was baked into IQ tests from the very start, and the value-judgments, then, are not in the minds of IQ critics but are inherent in the scores themselves, as stated by the pioneers of IQ testing in America and the originators of the concept that would become IQ. Garrison (2009: 79) writes:
With this understanding it is possible to make sense of Binet’s thinking on intelligence tests as group differentiation. That is, the goal was to group children as intelligent and unintelligent, and to grade (value) the various levels of the unintelligent (also see Wolf 1973, 152–154). From the point of view of this goal, it mattered little whether such differences were primarily biological or environmental in origin. The genius of the theory rests in how it postulates one group as “naturally” superior to the other without the assumptions of biology, for reason had already been established as a natural basis for distinction, irrespective of the origin of differences in reasoning ability.
While Binet and Simon were agnostic on the nature-nurture debate, the test items they most liked were those that differentiated between social classes the most (which means they were consciously chosen for those goals). But reading about their “ideal city”, we can see that those with higher test scores are “superior” to those without. They were operating under the assumption that they would be organizing society along class lines, with the tests serving as measures of group mental ability. For Binet and Simon, it did not matter whether the “intelligence he sought to define” was inherited or acquired; they just assumed it was a property of groups. So, in effect, “Binet and Simon developed a standard whereby the value of people’s thinking could be judged in a standard way, in a way that corresponded with the exigencies of social reproduction at that time” (Garrison, 2009: 94). The only thing such tests do is reproduce the differences they claim to measure—making them circular (Au, 2009).
The whole reason Binet and Simon developed their test was to rank people from “best” to “worst”, “good” to “bad.” But this does not mean that there is some “thing” inherent in individuals or groups that is being “measured” (Nash, 1990). Thus, since their inception, IQ tests (and by proxy all standardized tests) have had pronouncements of such ranking built in, even if this is not explicitly stated today. Such “measures” are not scientific, and psychometrics is then shown for what it really is: “best understood as the development of tools for vertical classification and the production of social value” (Garrison, 2009: 5).
The goal, then, of psychometry is clear. Garrison (2009: 12) writes:
Ranking human worth on the basis of how well one competes in academic contests, with the effect that high ranks are associated with privilege, status, and power, suggests that psychometry is premised, not on knowledge of intellectual or emotional development, but on Anglo-American political ideals of rule by the best (most virtuous) and the brightest (most talented), a “natural aristocracy” in Jeffersonian parlance.
But such notions of superiority and inferiority, as I stated back in 2018, are nonsense when taken out of their anatomic context:
It should be noted that the terms “superior” and “inferior” are nonsensical, when used outside of their anatomic contexts.
An IQ-ist may exclaim “Are you saying that you can’t say that person A has superior sprinting ability or breath-holding ability!? Are you denying that people are different?!” No, what I’m saying is that it is absurd to take anatomic terminology (physical measures) and attempt to liken it to IQ—this is because nothing physical is being measured, not least because the mental isn’t physical nor reducible to it.
They were presuming to measure one’s “intelligence” and then stating that one has “superior” “intelligence” to another—and that IQ tests were measuring this “superiority”. However, psychometrics is not a form of measurement—rankings are not measures.
Knowledge becomes reducible to a score in regard to standardized testing, so students, and in effect their learning and knowledge, are then reduced to their scores on these tests. And so, “such inequalities [with the SAT, which holds for all standardized testing] are structured into the very foundations of standardized test construction itself” (Au, 2009: 64). So what is built into a test can also be built out of it (Richardson, 1990, 2000; Hilliard, 2012).
In first constructing its scales and only then proceeding to induce what they ‘measure’ from correlational studies, psychometry has got into the habit of trying to do what cannot be done and doing it the wrong way round anyway. (Nash, 1990: 133)
…psychometry fails to meet its claim of measurement and … its object is not the measurement of nonphysical human attributes, but the marking of some human beings as having more worth or value than other human beings … Psychometry’s claim to measurement serves to veil and justify the fundamentally political act of marking social value, and the role this practice plays in legitimating vast social inequalities. (Garrison, 2009: 30-31)
One of the best examples of a valid measure is temperature—and it has a long history (Chang, 2007). It is valid because there is a well-accepted theory of temperature, of what is hot and what is cold. Temperature is a physical property that quantitatively expresses heat and cold. Thermometers were invented to quantify temperature, whereas IQ tests were invented to quantify “intelligence.” Those, like Jensen, who attempt the analogy liken temperature to IQ and thermometers to IQ tests. Thermometers, with a high degree of reliability, measure temperature, and so too, Jensen claims, do IQ tests measure intelligence.
So, IQ-ists claim, temperature is what thermometers measure, by definition; therefore intelligence is what IQ tests measure, by definition. But there is a problem with claims such as this. Temperature was verified independently of the measuring device originally used to measure it. Fixed points were first established, and only then could numerical thermometers be constructed, with a procedure for assigning numbers to degrees of heat between and beyond the fixed points. The thermoscope was used to establish the fixed points; since the thermoscope itself has no fixed points, we do not have to rely circularly on the concept of fixed points for reference. And if the thermoscope’s reading goes up and down, we can rightly infer that the temperature of, say, blood is not stable. But what validates the thermoscope? Human sensation. When we put our hand into scalding hot water and then place the thermoscope in the same water, we note that its reading rises rapidly. So the thermoscope’s agreement with our basic sensations of ‘hot’ and ‘cold’ justifies, in a non-circular way, the claim that temperature is truly being measured. We are trusting the physical sensation we get from whatever surface we are touching, and from this we can infer that thermoscopes do indeed validate thermometers, making the concept of temperature validated in a non-circular manner and a true measure of hot and cold. (See Chang, 2007 for a full discussion of the measurement of temperature.)
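The fixed-point procedure described above can be sketched in a few lines of code. This is purely illustrative—the readings are made up—but it shows the logic: two fixed points, identified by independently observable phenomena (melting ice, boiling water), anchor the scale, and every other reading is interpolated between them.

```python
def make_celsius_scale(ice_reading, steam_reading):
    """Build a numerical temperature scale from two fixed points
    (the Celsius convention): the thermometric reading at melting ice
    is defined as 0 degrees, the reading at boiling water as 100, and
    all other readings are linearly interpolated between them."""
    span = steam_reading - ice_reading

    def to_degrees(reading):
        return 100.0 * (reading - ice_reading) / span

    return to_degrees

# Hypothetical mercury-column lengths (mm) observed at the two fixed points:
to_c = make_celsius_scale(ice_reading=20.0, steam_reading=220.0)
print(to_c(20.0))    # at the ice point   -> 0.0
print(to_c(220.0))   # at the steam point -> 100.0
print(to_c(120.0))   # halfway between    -> 50.0
```

The point of the sketch is that the numbers the scale assigns are anchored to phenomena identified independently of any thermometer—exactly the kind of non-circular anchoring that IQ scales lack.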
Thermometers could be tested by the criterion of comparability, whereas IQ tests, on the other hand, are “validated” circularly against tests of educational achievement, other IQ tests which were not themselves validated, and job performance (Howe, 1997; Richardson and Norgate, 2015; Richardson, 2017), which makes the “validation” circular since IQ tests and achievement tests are different versions of the same test (Schwartz, 1975).
For example, take introductory chemistry. In the intro course, one sees how things are measured. Chemists may measure in moles or grams, or record the physical state of a substance, and so on. We may measure water displacement, reactions between different chemicals, or whatnot. And although chemistry does not reduce to physics, these are all actual physical measures.
But the same cannot be said for IQ (Nash, 1990). We can rightly say that one person scores higher than another on an IQ test, but that does not signify that some “thing” is being measured, because, to use the temperature example again, there is no independent validation of the “construct.” IQ is a (latent) construct, whereas temperature is a quantitative measure of hot and cold. Temperature really exists; the same cannot be said of IQ or “intelligence.” The concept of “intelligence” does not refer to something like weight or temperature (Midgley, 2018).
Physical properties are observables. We observe the mercury in a thermometer change based on the temperature inside a building or outside. One may say that we observe “intelligence” daily, but that is NOT a “measure”—it’s just a descriptive claim. Blood pressure is another physical measure: it refers to the pressure in the large arteries of the circulatory system, produced by the heart pumping blood. An IQ-ist may say that intelligence is the emergent product of thinking, that this is due to the brain, and that correlations between life outcomes, IQ tests, and educational achievement then validate the measure. But, as noted above, this is circular. The two examples given—blood pressure and temperature—are real things that are physically measurable, unlike IQ (a latent construct).
It also should be noted that Eysenck claimed that if the measurement of temperature is scientific, then so is the measurement of intelligence. But thermometers are not identical to standardized scales, and the claim fails, as Nash (1990: 131) notes:
In order to measure temperature three requirements are necessary: (i) a scale, (ii) some thermometric property of an object and, (iii) fixed points of reference. Zero temperature is defined theoretically and successive interval points are fixed by the physical properties of material objects. As Byerly (p. 379) notes, that ‘the length of a column of mercury is a thermometric property presupposes a lawful relationship between the order of length and the temperature order under certain conditions.’ It is precisely this lawful relationship which does not exist between the normative IQ scale and any property of intelligence.
This is where IQ-ists go the most wrong: they emphatically state that their tests are measuring SOMETHING! that is important for life success, since the tests correlate with life outcomes. Yet there is no precise specification of the measured object, no object of measurement, and no measurement unit, which “means that the necessary conditions for metrication do not exist [for IQ]” (Nash, 1990: 145).
Since IQ tests have a scoring system, the general impression is that IQ tests measure intelligence just as thermometers measure temperature—but this is a nonsense claim. IQ is an artifact of the test’s norming population. The scores do not reflect any inherent property of individuals; they reflect one’s relation to the society one is in (since all standardized tests are proxies for social class).
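To make concrete the sense in which an IQ score is an artifact of the norming population, here is a minimal sketch of the deviation-IQ scoring convention (mean 100, SD 15). The norming samples and raw scores are made up for illustration; the point is that an identical raw performance maps to different “IQs” depending on who happens to be in the norm group.

```python
from statistics import mean, pstdev

def deviation_iq(raw_score, norming_sample):
    """Standard deviation-IQ convention: re-express a raw score as
    100 + 15*z, where z is the score's standard-score position
    within the norming sample."""
    mu = mean(norming_sample)
    sigma = pstdev(norming_sample)
    return 100 + 15 * (raw_score - mu) / sigma

# The same raw score of 55 under two hypothetical norm groups:
norms_a = [40, 45, 50, 55, 60]   # made-up norming population A (mean 50)
norms_b = [50, 55, 60, 65, 70]   # made-up norming population B (mean 60)
print(round(deviation_iq(55, norms_a), 1))   # above A's mean -> 110.6
print(round(deviation_iq(55, norms_b), 1))   # below B's mean -> 89.4
```

Nothing about the individual changes between the two calls; only the norm group does—which is what it means for the score to express a relation rather than an inherent property.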
One only needs to read the history of IQ testing—and standardized testing as a whole—to see how and why these tests were first devised. From their beginnings with Binet and then over to Terman, Yerkes, and Goddard, the goal has been clear—enact eugenic policies on those deemed “unintelligent” by IQ tests, who just so happen to correspond with the lower classes in virtue of how the tests were constructed, which goes back originally to Binet and Simon. The history of the concept makes it clear that it is not based on any kind of measurement theory, as blood pressure and temperature are. It is based on a priori notions of the structure and distribution of “intelligence”, which then reproduce the social structure and “justify” notions of superiority and inferiority on the basis of “intelligence tests” (Mensh and Mensh, 1991; Au, 2009; Garrison, 2009).
The attempts to hijack anatomic terminology, as I have shown, are nonsense, since we do not apply anatomic terminology to other non-physical things; the first IQ-ists were explicit about what they were attempting to “show”, and this still holds for all standardized testing today.
Binet, Terman, Yerkes, Goddard, and others all had their own priors, which then led them to construct tests in ways that would lead to their desired conclusions. No “property” is being “measured” by these tests, nor can they be used to show one’s “genetic standing” (Jensen, 1970), which implies that one is “genetically superior” (this can be justified by reading Jensen’s interview with American Renaissance and his comments on the “genetic enslavement” of a group if we continued our welfare policy).
Physiological measures, such as blood pressure, and measures of hot and cold, such as temperature, are valid measures and are in no way, shape, or form—contra Jensen—like the concept of IQ/“intelligence”, which Jensen conflates with them (Edwards, 1973). Intelligence (which is extra-physical) cannot be measured (see Berka, 1983, and Nash, 1990: chapter 8 for a discussion of Berka’s measurement objection).
For these reasons, we should not claim that IQ tests ‘measure’ “intelligence”, nor that they measure one’s “genetic standing” or how “superior” one is to another; we should instead recognize that psychometrics is nothing more than a political ring.
I was watching the program Diagnose Me on Discovery Health, in which a woman kept having seizures whenever she heard a certain type of music—“alternative high-pitched female singing”, according to the woman—but her doctors didn’t believe her. So she and her husband began looking for specialists in hard-to-treat epilepsy. The specialist they found recommended an intracranial EEG (images of such a surgery can be found below), which meant that the top part of her skull would be removed and electrodes would be placed onto the surface of her brain. After the electrodes were placed on the brain, they played the music she said triggered her epilepsy—“high-pitched female singing”—and she began to seize. The doctor was shocked; he couldn’t believe what he saw. They ended up finding that a majority—not all—of her seizing was coming from the right temporal lobe. So she and her husband had a choice—live with the seizures (which she couldn’t do, because she never knew where she would hear the music) or have part of her brain removed. She chose to have part of her right temporal lobe removed, and afterward she no longer seized on hearing the music that formerly triggered her symptoms.
The condition is called “musicogenic epilepsy”, a rare form of what is called “reflex epilepsy”—another similar form involves hitting something, which then causes seizing in the patient. (It’s called “reflex epilepsy” since the epileptic event occurs after a trigger—music, hitting something with your foot, seeing something on the television, etc.) Musicogenic epilepsy occurs when certain types of music are heard: certain musical notes can trigger electrical brain activity. The cure is to remove the part of the brain that is affecting the patient. (It is worth noting that many individuals over the past 100 years have had large sections of their brains removed and suffered no loss of functioning, staying pretty much the same as they were.) It is important to note that the music is not causing the seizures; it is triggering them—it brings them out. Most of the seizing is localized in the right temporal lobe (Kaplan, 2003), and further localized in Heschl’s gyrus (Nagahama et al, 2017). This has been noted by a few researchers since last century (Shaw and Hill, 1946; Fujinawa and Kawai, 1978), while Joan of Arc was said to have had her perception scrambled on hearing church bells, and a Chinese poet stated that he became “absent-minded” and “sick” when hearing the flute-playing of a street vendor (Murray, 2010: 173).
The condition was first noted by a doctor in 1937, with the first known reference to this form of epilepsy being observed in the 1600s (Kaplan, 2003: 465). It affects about 1 in 10,000,000 people (Ellis, 2017). Critical reviews state not to underestimate the power of anti-epileptic drugs in the treatment and management of musicogenic epilepsy (Maguire, 2012), but in the case described above, such drugs did nothing to cure the woman’s seizures that occurred each time she heard a certain kind of music. The effect of music on seizing, it seems, is dichotomous with certain kinds of music either helping manage or causing seizing. The same melody, however, could be played in a different key and not cause seizing (Kaplan and Stoker, 2010) and so, it seems that certain types of sound frequencies influence/screw up the electrical activity in the brain which then leads to seizures of this kind. A specialist in epilepsy explains:
In people with reflex epilepsy, the trigger is extremely specific, and the seizure happens soon thereafter. “It can be a specific song by a particular person or even a specific verse of the song,” says Dr. So, who is a past president of the American Epilepsy Society. For some people, the trigger is a touch or motion. “If patients are interrupted in a particular way, if they are walking along and someone steps in front of them, they may have a seizure,” says Dr. So. In Japan, seizures caused by video games have been reported, he says, but they are highly unusual.
Dr. So evaluated a woman from Tennessee who began having seizures during church when she heard highly emotional hymns. She would blank out and drop her hymn book. At other times, Whitney Houston’s “I Will Always Love You” triggered seizures. The woman had a history of small seizures, but having one while hearing music was a new development. She said the seizures would typically begin with a sense of dread and the feeling that someone was lurking by her side. Dr. So and his Mayo Clinic team attached electrodes to the woman’s scalp to study electrical activity while she listened to different types of music. An electroencephalogram (EEG) showed that slow, emotional songs triggered seizure activity in her brain’s temporal lobe, while faster tunes did not. Dr. So diagnosed the woman with musicogenic epilepsy, a type of reflex epilepsy where seizures are caused by specific music or types of music, and prescribed antiseizure medication. He says he’s had another patient whose seizures were triggered by Rihanna’s “Disturbia” and Pharrell Williams’ “Happy.”
Though musicogenic epilepsy is extremely rare, it may be slightly underreported since many people with the disease may not put two and two together and link their seizing with the type of music or sounds they hear in their day-to-day life. One individual with epilepsy also recounts his experience with this type of rare epilepsy:
… but I still find that certain music, high pitched noise set’s off a kind of aura, I feel spaced out, have intense fear and it sounds almost like water rushing and I hear voices.
One case report exists of a man whose later seizures were induced by music that prompted stress and a bad mood, implying that the aetiology of musicogenic epilepsy involves an association between the seizing and the patient’s mental state (Cheng, 2016).
We can see how the intracranial EEG looks and how it gets done (WARNING: GRAPHIC) by referring to Nagahama et al (2019):
Intraoperative photographs demonstrating exposure and intracranial electrode placement. A right frontotemporoparietal craniotomy (A) allowed proper exposure for placement of grid, strip, and depth electrodes (B), including the HG depth electrode. The sylvian fissure is marked with a dashed line. The HG depth electrode and PT depth electrode are marked with X symbols anteriorly and posteriorly, respectively, at their entry points at the cortical surface. Ant = anterior; inf = inferior; post = posterior; sup = superior.
Intraoperative placement of the HG depth electrode. A: The planning view on the frameless stereotactic system (Stealth Navigation, Medtronic) showing the entry point and the trajectory (green circles and dotted lines). B: The similar planning view showing the target and the trajectory. C and D: Intraoperative photographs showing placement of the HG depth electrode. A Stealth Navigus probe was used to select the appropriate trajectory of a guiding tube positioned over the entry point (C). An electrode-guiding cannula was advanced through the tube to the previously determined depth (D). An actual depth electrode was subsequently passed through the cannula, followed by removal of the guiding tube/cannula system. Note the unique anterolateral-to-posteromedial trajectory within the STP for placement of the HG depth electrode.
The average age of onset of musicogenic epilepsy is 28 (Wieser et al, 1997), while cases are often not reported until around one’s mid-to-late 30s, since most people are unaware that music may be causing their seizures (Pittau et al, 2008; Generalov et al, 2018). This may be because seizing can begin several minutes after hearing the music that affects the patient in question (Avanzini, 2003). While the specific tempo and pitch of the music seem to have no effect on the onset of seizing (Wieser et al, 1997), many patients report that their specific triggers are certain lines in songs (Tayah et al, 2006), which implies that it is not the music itself that causes the seizing but the emotional response the patient has on hearing it—supported by the fact that many patients who report such symptoms are interested in music or are musicians themselves (Wieser et al, 1997).
See table 1 from Kaplan (2003: 466) for causes of musicogenic epilepsy in the literature:
As can be seen in the above table, the mood component is related to the musical type; the music elicits some sort of emotional state in the individual, which seems to be part of the cause that triggers the seizure—though the music/emotions are not causing the seizing itself; they bring it out.
Going to the shops was fraught with danger. Turning on the television was like playing Russian roulette. Even getting into a lift was a gamble. For 23 years my life was hugely restricted because I had epileptic fits whenever I heard music.
If it was more than a few notes, a strange humming would start in my head, immediately followed by a seizure. I didn’t fall to the ground and twitch, but would wander around in a daze, my heart racing, my mind a blank. I also experienced hallucinations: people around me appeared microscopic and it felt as if I had been captured by an invisible force field. It was a terrifying experience and I felt drained for hours afterwards. (Experience: Music gave me seizures)
One woman describes her experience with musicogenic epilepsy for The Guardian. She did everything she could think of to stop the music-induced seizures—from sticking cotton balls into her ears to block sounds, to staying inside the house (in case a passing car played the type of music that triggered her seizing), to having a silent wedding with no music. She was eventually referred to a specialist and had her brain examined. It turned out she had scarring on her right temporal lobe, and surgery was done to fix it. She was cured of her condition and could then attend social functions where music was played.
The brain has the capacity to produce electricity, and so, in certain individuals with certain structural abnormalities in their brains (as in the right temporal lobe), hearing a certain kind of music or tune may set off seizing. While the condition is rare (around 150 cases have been noted), strides are being made in discovering how and why such things occur. The only cure, it seems, is to remove the affected part of the brain—the right temporal lobe in a majority of cases. Such operations, however, do not always have debilitating effects (i.e., loss of mental capacity). That the brain’s normal functioning can be affected by sound (music) is very interesting and speaks to the fact that our brains are an enigma that is just beginning to be unraveled.
In 1969, Arthur Jensen published a bombshell article in the Harvard Educational Review titled How Much Can We Boost IQ and Scholastic Achievement?, in which he argued that compensatory education (e.g., Head Start) had failed and should therefore be abandoned. Jensen was a prominent opponent of school integration on the basis of his research on IQ (Tucker, 2002). Tucker (1998) also argued “that the supposed significance of the genetic influence on IQ has invariably reflected a particular ideological view of the purpose of education and its relation to the state that is rooted in conservative political thought.” Such ideological leanings of the IQ-ists have been well noted (Tucker, 2002; Saini, 2019). (Though it should be noted that school integration didn’t cause any negative effects for whites and had many positive effects for blacks; see Nazaryan and Johnson, 2019.) Note how the revival of “racial differences in intelligence” in the mainstream occurred after the Civil Rights Act of 1964. Such ideological leanings have been present in ‘intelligence’ testing since its inception.
In any case, what was the ultimate goal of such research into racial/class differences in “intelligence”? The original applications of what eventually became tests of “intelligence” were to (1) identify those with learning disabilities and (2) shoe-horn people into the jobs “for” them—what Binet called his “ideal city.” IQ tests were brought to America and translated from French to English by Henry Goddard in 1911 and then again by Lewis Terman in 1916. Goddard was hesitant to force sterilization, but he did believe that those his tests designated as “feeble-minded” should not be allowed to bear children.
Proponents of IQ emphatically state that it’s not a “measure of superiority” and that only the critics believe that—a claim for which they offer no evidence. However, if one reads Jensen’s earliest writings on IQ, one sees that Jensen did, in fact, believe that heritability could estimate one’s “genetic standing” (Jensen, 1970) and that if we continue our welfare policy we would lead a group toward “genetic enslavement” (Jensen, 1969). Jensen ran with racists, so there is a possibility that he himself held views similar to those of the people he ran with. The following quotes show Jensen’s eugenic thinking:
“Is there a danger that current welfare policies, unaided by eugenic foresight, could lead to the genetic enslavement of a substantial segment of our population?” – Jensen, 1969: 95, How Much Can We Boost IQ and Scholastic Achievement?
“What the evidence on heritability tells us is that we can, in fact, estimate a person’s genetic standing on intelligence from his score on an IQ test.” – Jensen, 1970, Can We and Should We Study Race Difference?
“… the best thing the black community could do would be to limit the birth-rate among the least-able members, which of course is a eugenic proposal.” – A Conversation with Arthur Jensen, American Renaissance, 1992
In a review of Raymond Cattell’s Beyondism, Richard Lynn stated:
“What is called for here is not genocide, the killing off of the populations of incompetent cultures. But we do need to think realistically in terms of “phasing out” of such peoples.”
I don’t see how he’s not calling for genocide; genocide is the systematic killing of a specific group of people, and eugenic methods are one way to accomplish it. Richard Lynn’s father was a eugenicist who signed his name to a manifesto asking how the genetic constitution of the world could be improved, per Lynn (see Interview with a pioneer, American Renaissance). Lynn continues:
My father’s interests did give me an early appreciation of the importance of genetics, although I think I would have adopted this position anyway since the evidence is irrefutable for a strong genetic determination of intelligence and educational attainment and a moderate genetic determination of personality. More importantly, my father served as a role model for scientific achievement and has given me the confidence to advance theories that have sometimes been controversial.
Lynn stated that he is “very pessimistic” about the future of the West, due to the immigration of individuals from low IQ countries who have a higher birthrate than Westerners along with the supposed dysgenic fertility that American white women are facing. (See Lynn, 1996, 2001 for a discussion and look into these views.) In Dysgenics, Lynn (1996: 2) writes that he hopes “To make the case that in the repudiation of eugenics an important truth has been lost, and to rehabilitate the argument that genetic deterioration is occurring in Western populations and in most of the developed world.”
Raymond Cattell also believed that certain people should (voluntarily) be sterilized. He created a religion called “Beyondism” in an attempt to accomplish this goal; his research, in fact, served his eugenic and political beliefs (Tucker, 2009). Cattell saw compassion as evil, which is one major way Beyondism strays from other religions. Presumably, one is compassionate toward the less fortunate; such compassion would therefore help those Cattell deemed “genetically inferior,” and so compassion is evil because it leads to the propagation of those Cattell deemed less fit. Cattell also stated that, from the perspective of Beyondism, the propagation of ‘genetic failures’ is “positively evil” (Tucker, 2009: 136). He also coined the term ‘genthanasia,’ which meant “phasing out” a “moribund culture … by educational and birth measures, without a single member dying before his time” (Cattell, quoted in Tucker, 2009: 146).
William Shockley “reasoned” that if the problems blacks face in America are hereditary, then halting the reproduction of blacks would mean less racism against them. Well, if there are fewer people to be racist against, then there would be less racism against those people. Shocking. Further, Shockley wanted to institute what he called a “Voluntary Sterilization Bonus Plan,” under which individuals with IQs below 100 would be paid $1,000 for each point below 100, although the plan was never implemented (Hilliard, 2012: 50). He also wanted to institute a sperm bank of ‘geniuses’ (whatever that means) but, he was never told, women did not want the sperm of the short, balding Shockley (he was 5’6” and weighed 150 pounds), despite his ‘high IQ’ (though he had been rejected from Terman’s study of the gifted, the “Termites”); they wanted the sperm of taller, better-looking men, regardless of their IQ (Hilliard, 2012: 20).
It is worth noting that Shockley preceded Jensen in his thinking on race and IQ; Jensen was in the audience at one of Shockley’s talks in the late ’60s and heard him speak about racial differences in IQ. Psychology was Jensen’s second choice; his first was to be a symphony conductor. Hilliard (2012: 51) describes this:
“When Shockley addressed a meeting of the Center for Advanced Study in the Behavioral Sciences at Stanford in the late 1960s, one member of the audience drawn to his discourse was Arthur R. Jensen, a psychologist who taught at the University of California–Berkeley. Jensen, who had described himself as a “frustrated symphony conductor,” may have had his own reasons for reverencing Shockley’s every word. The younger psychologist had been forced to abandon a career in music because his own considerable talents in that area nevertheless lacked “soul,” or the emotional intensity needed to succeed in so competitive a profession. He decided on psychology as a second choice, carrying along with him a grudge against those American subcultures perceived as being “more expressive” than the white culture from which he sprang. Jensen received his bachelor’s degree in that field from the University of California–Berkeley in 1945.”
Shockley even disowned his son for dating a Costa Rican woman, since it would “deteriorate their white gene pool,” while describing his children as a “considerable regression,” even though they had advanced degrees. He blamed this ‘genetic misfortune’ on his wife, who did not have as high an educational attainment as he did (Hilliard, 2012: 49). This man greatly influenced Jensen, and it seems to show in Jensen’s first writings on IQ, which kicked off the (frivolous) ‘IQ debate’ back in the late 1960s. (James Thompson has said that Shockley wouldn’t talk to anyone if he didn’t know their IQ, presumably because he did not want to talk to anyone ‘lower’ than himself. The idiotic ‘thinking’ of eugenic IQ-ists.)
Shockley was involved in a car accident and received a head injury, with colleagues noting that his views on race and eugenics emerged after the accident (Hilliard, 2012: 48). So it can rightly be argued that, had Shockley never gotten into the accident, he would never have held the views he did on race and IQ, which means he would not have given the talk Jensen attended, and Jensen might never have written his infamous 1969 paper. The current revival of the race-and-IQ debate, then, can be attributed to Shockley’s influence on Jensen, which is due (in some way) to head injuries sustained in a car accident.
IQ-ists speak of “genetic deterioration,” which they term “dysgenics” (the opposite of eugenics). Professor Seymour Itzkoff published The Decline of Intelligence in America (Itzkoff, 1994), arguing that the decline in our country’s “intelligence” is the cause of our economic and political woes. While he does not outright discuss eugenics in the book, he states that higher-IQ people are not having children and so the national IQ is decreasing, a dysgenic effect. He is also a recipient of Pioneer Fund money, has published in Mankind Quarterly, and was one of the 52 signatories of Mainstream Science on Intelligence (Gottfredson, 1997).
The Decline of Intelligence in America was The Bell Curve before The Bell Curve. Itzkoff offered policy prescriptions, including encouraging certain people to breed and discouraging others from breeding (eugenics without calling it eugenics). Itzkoff stated that welfare policy is one reason why our “intelligence” as a nation has declined (see also Jensen, 1969; Lynn, 2001). Itzkoff (1994: 195) states that “Those at the bottom should be humanely persuaded, with generous gifts if deemed appropriate but for one generation only to refrain from conceiving and having children.” His views are thus a mixture of Jensen’s, Shockley’s, and others’. Itzkoff advocates both positive and negative eugenics for black Americans. I have not seen any IQ-ist discuss Itzkoff’s writings; I will do so in the future.
Philosopher and IQ-ist Jonathon Anomaly (see Winegard, Winegard, and Anomaly, 2020) has a paper in which he ‘defends eugenics,’ even stating what we ‘should’ (cautiously) do about public policy in relation to eugenic ideas. He speaks of an “undesirable genetic endowment” while couching a “moral obligation to produce children with the best chance of the best life” (Anomaly, 2018) “through mechanisms of prenatal screening, enshrined in the principle of procreative beneficence and our responsibility to not pass along an “undesirable genetic endowment” (Love, 2018: 4). (See my arguments to discourage such research here and here.) Presumably, as with Itzkoff (1994), such policies would be concentrated on the lower classes, of which minority populations are the majority. Robert Wilson (2019), author of The Eugenic Mind Project, writes that Anomaly (2018) fails to argue for eugenics, mischaracterizes eugenics and the scientific consensus, simplifies and misleads on the history, is careless about race and IQ, appeals to moral principles without argument, and offers no substantive link between demography, eugenics, and policy recommendations. Anomaly could hardly contain his negative eugenicist views, which are “akin to more traditional, negative forms of eugenics” (Wilson, 2019: 74).
In any case, IQ tests were used as a vehicle for sterilization and for barring immigrants from America in the 1920s (Swanson, 1995; Gould, 1996; Wilson, 2017; Dolmage, 2018). In his book The Eugenic Mind Project, Wilson (2017) discusses standpoint eugenics: how eugenic policies affected people, and those people’s own personal experiences of them. In the book, Wilson argues that, to the eugenicists, there were different ‘sorts’ of people that could be distinguished from one another. Wilson (2017: 48) writes:
This was not, however, the way of human betterment favored by the applied science of eugenics and that continues to forms [sic] a key part of The Eugenic Mind. Instead, historically eugenicists typically followed Galton in emphasizing that quality was not equally distributed in the kinds of human populations that are regulated by governmental policies and jurisdictional legislation. More specifically, they thought of such populations as being composed of fundamentally distinct kinds of people, with some kinds being of higher quality than others. Some of these sorts of people were to be improved through eugenic policies that encouraged their own reproduction; others were to be eliminated over generational time. The goal of intergenerational human improvement within the eugenics movement was thus achieved by increasing the proportion of higher-quality people in future generations, and this could be achieved in two ways under eugenic logic. Thus, eugenicists historically advocated ideas, laws, policies, and practices either that aimed to maximize the reproduction of higher-quality people—positive eugenics—or that aimed to minimalize the reproduction of lower-quality people. Or both.
A hallmark of The Eugenic Mind, says Wilson, is the distinction between the ‘fit’ and the ‘unfit.’ Thus, those deemed ‘unfit’ would be sterilized, as they were different ‘kinds’ of people. Eugenics was seen as an applied science, and so it attempted to achieve certain goals: the propagation of the ‘fit’ and the elimination of the ‘unfit.’ The first IQ tests were constructed along class lines, so that test scores mirrored existing racial/class divisions, justifying the social hierarchy (Mensh and Mensh, 1991). Thus, using the ‘science of IQ,’ one could identify ‘feeble-minds’ and select them out of the gene pool. What would really be going on here is not selecting out ‘low IQ people’ but selecting out those of the lower classes, which is one of the main reasons these views sprang up in the first place. A trait was ‘eugenic’ if it fit “folk knowledge characteristic of people” (Wilson, 2017: 70).
Eugenic-type thinking also had its beginnings in criminality, right when the first IQ tests were being constructed by Binet in 1905 (Kuhar and Fatović-Ferenčić, 2012). Lombroso’s thesis of hereditary criminality also gave American eugenicists a platform for the sterilization of criminals (Applegate, 2018: 438). But early eugenicists were more concerned with lower-status, promiscuous white women labeled ‘morons,’ who were coerced and segregated to prevent them from breeding; “one or two scientists” even suggested that such women should live on farms performing menial tasks and should be sterilized. The eugenicists wanted state control of heredity (Applegate, 2018: 439). Mexican-American men and women were also sterilized in the 1900s (Lira, 2015). Such beliefs were baked in from political and social prejudices, with no basis in ‘science.’
Eugenicists wished for state control over the “propagation of the mentally incompetent,” whether through mental illness or disability. Ultimately, these beliefs would lead not only to forced detention and isolation, but also to regular affronts to human life and dignity. (Applegate, 2018: 442)
But Dr. Sullivan, the medical officer of Holloway Prison, stated in The Eugenics Review that “Criminals, looked at from the eugenic standpoint, cannot be put into any single category; some of them, probably most of them, are of average stock, and become criminal under the influence of their milieu; they do not directly interest the eugenist” (Sullivan, 1909: 119-120). The “hyperincarceration of blacks” is also argued to be eugenic in nature (Oleson, 2016; Jones and Seabrook, 2017). Such race-based segregation, argues Oleson, significantly depresses the birthrate of the affected groups: racialized minorities (social groups taken to be races, e.g., ‘Hispanics‘ and blacks). Since minority populations are overrepresented in prison and are thus less likely to procreate, arguments like Oleson’s (2016) carry some weight: incarceration at such rates would have eugenic effects over generational time. Even today, America is still sterilizing prisoners, so it seems the legacy of the 20th century has yet to let up.
… the penal code is a eugenic instrument, although until today, it has been without consciousness of this function. And following the results of eugenic science, it can tomorrow widen or narrow the circle of crimes in the end of conducing to the physical and psychic improvement of the race. (Battaglini, 1914)
Hitler, noticing the American sterilization laws and the Immigration Act of 1924, instituted eugenic policies on this basis; yes, the Nazi eugenic movement drew largely on then-existing American social policy. Pioneer Fund president Harry Laughlin presented data from IQ tests to Congress to bar certain immigrants from the United States (Swanson, 1995; Dolmage, 2018). The Nazis and Americans had extensive contact with each other, and Germany modeled its sterilization law on Laughlin’s model sterilization law for US states (Cornwell, 2001; Black, 2003; Wittmann, 2004; see Allen, 2004; Lelliot, 2004; and Weikart, 2006 for reviews). But it is worth noting that Hitler was not a Darwinian (Richards, 2012, 2013). Hitler’s eugenics laws of the early 1930s “may have had some resemblance to the most extreme of American state’s laws” (Wittmann, 2004: 19), since he was observing the eugenic programs implemented by certain states (eugenic laws were never federally mandated).
The IQ-ist thinking that IQ tests ‘measure’ intelligence led to eugenic policies and the sterilization of criminals and those with low IQs. Jensen and Shockley were at the forefront of bringing IQ-ism back into the picture, and both held eugenic views (Shockley being far more radical than Jensen, though it is clear that Shockley was Jensen’s influence here, and that without Shockley, IQ-ism may not have had the sway it does today). The IQ-ist ideology linking race and ‘intelligence’ has led to eugenic thinking and social policies since its inception, and it clearly still exists today (Chitty, 2007). Most of the big-name IQ-ists have, either explicitly or implicitly, stated things that can be construed in a eugenic way, and thus the main goal of the IQ-ist program is revealed: limit the birthrates of the lower classes, which are mostly minorities.
The eugenics movement in America, which went on to influence Nazi policy, was not built on science, even though it aimed to be an “applied science” (Wilson, 2017); it was a political movement erected to control social groups thought to be inferior by their self-appointed betters (Quigley, 1995). Eugenic thinking and IQ-ism have gone hand in hand throughout their history, and some IQ-ists, even today, still advocate social policy based on the results of IQ tests (Herrnstein and Murray, 1994; Itzkoff, 1994).
Such prescriptions from IQ-ists about what ‘should be’ done with low-IQ people speak to their bias in the matter. Arthur Jensen, one of the most revered IQ-ists, made his views quite clear in the late 1960s and early ’70s about what should be done regarding the black population in America. His predecessors, too, had the same type of eugenic beliefs, which then influenced their thoughts and values on crime and ‘intelligence.’ This game the IQ-ists have been playing has gone on for over 100 years; and with the advent of new genetic technology, the IQ-ists can continue their eugenic games, attempting to prevent ‘certain people’ from having children.
These tests, originally devised to mirror groups’ places on the social hierarchy, cannot be ‘used for good,’ since the point of their inception was to justify existing class hierarchies as ‘genetic and immutable.’ These psychologists and criminologists leave their fields of inquiry and attempt to influence public policy using clearly biased tests, as the history of the field has shown since its inception in the early 1900s. These are yet more reasons why IQ testing should be banned, as no good can come from believing that a group or individual is ‘less intelligent’ than another. The eugenic ideas of IQ-ists and criminologists feed off each other, with the IQ-ist ideas being the catalyst for the eugenic policies that followed.