Blacks are better sprinters and whites are better swimmers. Why is this? A whole slew of factors—social, physiological, anatomic—influence this. There is a stereotype about blacks that has been repeated since I was a child: that blacks can’t swim. How true is this? And if it is true, what explains it? It is my opinion that it is true, and that social, cultural, anatomic, and physiologic factors account for it. The same goes for whites and running. Black children drown at about three times the rate of white children. About 70 percent of black children cannot swim, compared to 60 percent of “Hispanic” children and 40 percent of white children. Why is that? One of the most telling answers is psychological: Irwin et al (2011) note in their study that blacks are more likely than whites to be “aquaphobes”—to have a fear of water.
Almost three years ago, I wrote White Men Can’t Jump? That’s OK, Black Men Can’t Swim. In that article, I explain how and why blacks have a harder time swimming than whites. One anatomic reason is the chest cavity: compared to whites, blacks have narrower, denser, shallower chests. This is a burden while swimming, since those with wider chests can take longer strokes with their arms. Blacks also have denser bones than whites (Ettinger et al, 1997), and swimmers have lower bone density than non-swimmers (Gomez-Bruton et al, 2013), so high bone density is not conducive to swimming success either.
The first black man to make the US Olympic swim team was Anthony Ervin in 2000. (Funny story: in a class I took a few years ago, racial differences in sports came up and I brought up race differences in swimming. A black guy behind me said, “My grandfather was the first to qualify for the Olympics.” I said, “Yea? Your grandfather is Anthony Ervin?” He didn’t say anything; it seemed like he got mad at me for calling him out.) That it took this long for a black man to qualify for the US in swimming is telling, and anatomy and physiology, in my opinion, are how we can explain the observed disparity.
So, blacks have lower body fat (on average) and narrower chest cavities, and these two things play a role in why blacks are not good swimmers. Other contributing factors range from black mothers not wanting to get their hair wet (and so never teaching their children how to swim), to a lack of parental encouragement, to the belief that “swimming is something that white people do” (Wiltse, 2014). Who knows? Maybe in the coming years, blacks could match whites at swimming. Though, with what we know about the anatomy and physiology of elite swimmers, this is highly unlikely. It’s like saying, “Who knows? Maybe in the coming years, whites could match blacks at running.” Our knowledge of anatomy and physiology throws a wrench in claims like that.
The phenomenon of fast black sprinters and fast white swimmers is predictable through physics (Bejan, Jones, and Charles, 2010). The finalists of running competitions are consistently black, whereas the finalists of swimming competitions are consistently white. What accounts for this? Other than the factors discussed above, there is one more: center of mass.
It is well-known that different races have different anatomic measurements. Blacks have longer limbs than whites (Gerace et al, 1994; Wagner and Heyward, 2000), along with longer legs and smaller muscle circumferences (e.g., calves, arms), so they have a higher center of mass than a white individual of the same height. Since Asians and whites have longer torsos, they have lower centers of mass. Asians have the tallest sitting heights when matched with people of the same height, so we would expect them to be exceptional swimmers; however, since they are not as tall as whites, they do not set records. Blacks, on the other hand, have a lower sitting height when matched with someone of the same height—3 cm shorter. Whites’ sitting height is lower than Asians’, but whites are taller, so whites dominate swimming compared to Asians because of their average center of mass. See Table 3 in Bejan, Jones, and Charles (2010).
So the difference between blacks’ and whites’ center of mass is 3 percent. This 3 percent difference can account for why the two races excel in running and swimming. When it comes to the runners (blacks), the 3 percent increase in center of mass translates to a 1.5 percent increase in winning speed for the 100 m dash, and a 1.5 percent decrease in winning time, from 10 s to 9.85 s, for example. So the 3 percent difference in running is a huge advantage for blacks.
When it comes to whites, the same holds, except in swimming: the same 3 percent difference, working in whites’ favor in the water, translates to a 1.5 percent increase in winning speed and a 1.5 percent decrease in winning time.
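To make the arithmetic concrete, here is a minimal sketch of how the 1.5 percent figure quoted from Bejan, Jones, and Charles (2010) maps a baseline winning time onto an improved one:

```python
# Sketch of the quoted arithmetic: a 1.5 percent advantage corresponds
# to roughly a 1.5 percent decrease in winning time.

def improved_time(baseline_s, pct_faster=1.5):
    """Winning time after a pct_faster percent decrease in time."""
    return baseline_s * (1 - pct_faster / 100)

# A 10 s baseline drops to 9.85 s, matching the example in the text.
print(improved_time(10.0))
```

Strictly speaking, a 1.5 percent increase in speed shortens the time by 1/1.015, i.e. about 1.48 percent; the source rounds both figures to 1.5 percent.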
For taller athletes, mass that falls from a higher altitude falls faster, down and forward, so speed increases with larger physiques. Since blacks have larger physiques than whites, then at the extremes of elite sport (running), their mass allows them to fall down and forward faster, making them faster. World records are thus set by athletes with different centers of mass: black athletes in running and white athletes in swimming.
Shifting away from physics, we will now discuss the cultural and social components. The fact that many blacks do not know how to swim became apparent after the Red River drownings of 2010 (Wiltse, 2014). Wiltse (2014) notes three reasons why blacks may be worse swimmers than whites: (1) white swimmers denied blacks access to pools; (2) cities provided few pools to black communities, and the pools they did provide were small; and (3) cities closed many public pools after desegregation. White parents taught their children how to swim, but black parents hardly ever did. Since this occurred just as swimming was becoming popular in American culture, it could be one reason why blacks aren’t as represented in swimming as whites.
Wiltse’s (2014) argument is that past discrimination against blacks by whites when it came to swimming explains the drowning disparity between the races. Whites passed down their swimming knowledge, whereas blacks had little to no chance to pass theirs down—if they even knew how to swim, that is. This type of cultural transmission could explain most—if not all—of the disparity in drowning between the races. It is simple: to address the disparity, the claim that swimming is “what white people do” needs to be addressed. I would assume that this claim grew out of the era before 1960s desegregation, when blacks were barred from swimming pools, as Wiltse (2014) notes. While the swimming and drowning gap can be closed, the elite-sport gap in running and swimming cannot be, as most of what drives the relationship between race and those sports is anatomic and physiologic in nature, combined with numerous other irreducible variables.
However, pointing to these types of cultural and social causes can be reversed. We could say that since white parents have not taught their children how to sprint for successive generations, whites would begin to close the sprinting gap if white parents did just that. While I do not deny that we would have more black swimmers had these discriminatory acts not occurred, it is ridiculous to claim that the two races can and will become equal if the stigma disappears. It’d be like saying that if we trained a person from birth to become an elite sprinter, then he would become one. The analogous claim would be that since there are fewer whites than blacks in elite running, the disparity is explained by whites not being trained that way from pretty much conception. This, however, ignores the systems view of athleticism: while there are necessary factors for running success, the whole system must be looked at when assessing what makes an elite athlete.
In conclusion, there are many anatomic and physiologic reasons why blacks and whites differ in running and swimming sports. Anatomic differences, such as center of mass, explain the disparities in swimming and running. Blacks’ morphology—long limbs and a short torso—is conducive to running success: they can take longer strides, and thus fewer strides per race, compared to someone of the same height with shorter limbs. When it comes to white swimmers, where the altitude is set by the body rising out of the water, whites hold a 1.5 percent speed advantage in swimming.
Though these anatomic differences lend themselves to differences in elite sporting competitions, they do not lend themselves to the swimming and drowning gaps between blacks and whites. What explains those gaps is generational access to swimming pools: blacks were barred from swimming pools just as swimming was becoming popular in America, before the desegregation of the 1960s. This led to swimming being looked at as “something that white people do,” and so fewer and fewer black parents taught their children how to swim. Further cultural and social factors explain this, too. While I would assume that some of the aforementioned anatomic factors play a role in the black-white swimming and drowning gaps, I doubt they account for a super-majority of it. Thus, the gap can be closed by ridding ourselves of the stigma that swimming is “something that white people do.”
The elite sporting gap in running and swimming, however, cannot be closed.
Usain Bolt is one of—if not the—fastest men who has ever lived. At age 12 he was already the fastest boy in his school (Irving, 2010: 54). At the 2009 World Championships in Berlin, he set the world record for the 100 m race, clocking in at 9.58 seconds (Bolt also holds the world record for the 200 m dash, at 19.19 s). His top speed was 27.8 mph, with an average speed of 23.5 mph. Why is Bolt so fast? Of course, there are multiple interacting factors that contribute to Bolt’s world-record times. Bolt’s somatotype, muscle fibers, will to win, intense training, mind, etc., all contribute to his world record—along with the type of athlete he is. In this article I will discuss what Bolt does, his anatomy and physiology, what led up to his record-breaking time, and a possible challenge to his record.
Usain Bolt is tall, as far as sprinters go, at 6’5”. Since he is so tall compared to other sprinters, his average stride length is at the extreme upper limit of modern sprinters; what makes Bolt unique as a sprinter is his stride length (Shinabarger, Hellrich, and Baker, 2010). Bolt therefore has to take fewer strides than other sprinters, which, in part, explains his success.
During Bolt’s record-setting 9.69 s dash in 2008, during the last 2 seconds—when 20 meters were left to run—Bolt looked to the side and started celebrating (Eriksen et al, 2009). Bolt’s coach claimed that, had he not celebrated, he would have shattered even his future record-setting performance of 9.58 s, running 9.52 s or better. The runner-up of this race was Richard Thompson. By 4 s, Bolt and Thompson were neck-and-neck, so Bolt’s medal was won between 4 and 8 s. After 8 s, Bolt considerably decelerated while Thompson equalized and surpassed him. Thompson could not match Bolt’s speed, though, and slowed down after 8.5 s. To answer the question “How fast would Bolt have run had he not celebrated the last 2 s?”, Eriksen et al (2009: 226) make two assumptions:
Assumption 1: Bolt matches Thompson’s speed at up to 8 s.
Assumption 2: Bolt maintains a 0.5 m/s² higher acceleration than Thompson at 8.5 s.
The justification for A1 is obvious: Bolt outran Thompson between 4 and 8 s. In regard to A2, it is difficult to quantify exactly how much stronger Bolt was than Thompson; since Bolt is a 200 m specialist, Eriksen et al settle on the 0.5 m/s² figure. In the two scenarios that Eriksen et al (2009) put forth, the world record would have been either 9.61 s or 9.55 s. Eriksen et al (2009: 228) conclude “that a new world record of less than 9.5 s is within reach by Usain Bolt in the near future.” And what do you know: a year later, Bolt ran the 100 m in 9.58 s.
Bolt has a slow reaction to the gun—he has more moving to do to get out of the blocks since he is so tall. His reaction time at the Beijing Olympic final was 0.165 s. If he could shave 0.02 s off his reaction to the gun, his world record of 9.58 s would become 9.56 s. If he could get his reaction time down to 0.12 s, he would be looking at a 9.55 s time, and if he could get it down to as fast as the rules allow—0.10 s—his time would have been 9.53 seconds, almost right at his coach’s prediction of what he would have run had he not celebrated during his record-setting run (Darrow, 2012).
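The underlying arithmetic is simple: since the clock starts at the gun, any reduction in reaction time comes straight off the finishing time. Here is a hedged sketch (my own arithmetic, not Darrow’s; the straight subtraction lands a hundredth or two off the figures quoted from Darrow, 2012, presumably because of rounding in the source):

```python
# How reaction-time savings translate into finishing time.
RECORD = 9.58      # Bolt's 100 m world record (s)
REACTION = 0.165   # his reaction time in the Beijing final (s)

def adjusted_time(new_reaction):
    """Finishing time if the reaction time were new_reaction instead."""
    return RECORD - (REACTION - new_reaction)

# 0.10 s is the fastest reaction the rules allow.
for rt in (0.145, 0.12, 0.10):
    print(rt, round(adjusted_time(rt), 2))
```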
Since Bolt is so tall—taller than his competitors—he can take fewer steps per 100 m. For instance, he set his record time in 2009 taking 41 steps, whereas his competitors took 45 steps (Beneke and Taylor, 2010). The average sprinter has a higher proportion of type II fibers compared to type I fibers (Zierath and Hawley, 2004). One thing that separates Bolt from his contemporaries is superior biomechanical efficiency along with the relative power generated per step (Beneke and Taylor, 2010; Coh et al, 2018). So Bolt’s record-setting performance comes down to anthropometric characters, coordinated motor abilities, his ability to generate power, and an effective running technique. Sprint performance depends on the force generated during ground contact.
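Those step counts translate directly into average stride length, which makes the advantage easy to see (a sketch using only the counts quoted from Beneke and Taylor, 2010):

```python
# Average stride length over 100 m given a step count.

def avg_stride(distance_m, steps):
    """Average stride length in meters."""
    return distance_m / steps

bolt = avg_stride(100, 41)     # Bolt's 41 steps
rivals = avg_stride(100, 45)   # his competitors' ~45 steps
print(round(bolt, 2), round(rivals, 2))  # roughly 2.44 m vs 2.22 m
```

At the same step frequency, an extra ~0.22 m per step compounds over the full 100 m.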
Bolt has an ectomorph-dominant somatotype, which gives him certain advantages over more endo- and meso-dominant competitors. Furthermore, along with his body type, Bolt is Jamaican, and most of the ancestry found in Jamaicans derives from West Africa. Jamaicans are more likely to have the RR ACTN3 genotype (Scott et al, 2010), and the RR genotype—along with type II fibers (which have a greater cross-sectional area)—contributes to whole-muscle performance during high-velocity contractions (Broos et al, 2016). I am not aware of any analyses of Bolt’s genotype, but I would bet what’s in my bank account that he has the RR genotype—two R alleles of ACTN3.
Tyson Gay then emerged as a challenger to Bolt (in 2013, Gay tested positive for performance-enhancing drugs, PEDs, and Bolt said that Gay should be “kicked out of the sport”). Varlet et al (2015) state that Bolt and Gay each influenced how fast the other ran in Berlin in 2009: their steps were pretty much synchronized with each other. Since Gay was slightly behind Bolt in the race, he had the better chance to synchronize his movement with Bolt’s. However, Blikslager and de Poel (2017) argue against this: they state that there is insufficient evidence for the claim that Bolt and Gay had synchronized movements.
The center of mass in blacks is around 3 percent higher than it is in whites. This 3 percent difference leads to each race doing better in one sport over the other: sprinting for blacks and swimming for whites (this is one reason why blacks are worse swimmers than whites). Further, for runners, the 3 percent increase in center-of-mass height translates to a 1.5 percent increase in running speed—the difference between 10 s and 9.85 s (Bejan, Jones, and Charles, 2010). So the change is 0.15 s for runners. This is yet another reason why Bolt excels: he is exceptionally tall.
Bolt is really tall compared to his contemporaries, and he goes through insane training (as do his contemporaries). Of course, the explanation for Bolt’s running success comes down to numerous factors, including (but not limited to) his height, leg length and stride length, running economy, VO2 max, training, where he grew up, and a whole slew of other—irreducible—factors. The fact that Bolt could have set an even more unbelievable record had he not celebrated with 2 s—or 20 meters—left in his record-setting run is incredible. That he could hit at or near his coach’s prediction simply by improving his reaction time is even more incredible. Bolt does not even need to improve his running skill to become better—just improve his reaction to the gun and he will, in my opinion, set records that no one will ever break.
The Boston Marathon is one of the oldest continuously run marathons around. The 123rd just finished today, and—surprise, surprise—a Kenyan man and an Ethiopian woman took first place. For the men, Lawrence Cherono (2:07:57) barely edged out the second-place finisher, Lelisa Desisa (2:07:59; an Ethiopian), while for the women, Worknesh Degefa (2:23:31) beat Edna Kiplagat (2:24:13; a Kenyan). All five of the top male finishers were East African, as were the top three women. What explains Kenyan marathon success? Incredibly, from 1992 onward—with the exception of 2001 and 2018—East Africans have won the Boston Marathon. We know that athleticism is irreducible to biology, and while genes do play a part in morphology and other things that are conducive to running success, they do not—of course—tell the whole story. A whole slew of factors needs to come together to make an elite athlete; no one thing fully explains marathon success.
Back in September of 2017, I covered many of the factors that make both elite marathoners and elite sprinters. All of the factors that make an elite athlete combine; no one factor is more important than any other, but if one does not have the will to train and win, of course, one will not do well.
When it comes to Kenyans, one small tribe—the Kalenjin, and most specifically the Nandi sub-tribe—explains much of the success, and a complex interaction of genotype, phenotype, and socioeconomic factors explains theirs (Tucker, Onywera, and Santos-Concejero, 2015). The Kalenjin account for a whopping 84 percent of Kenya’s Olympic and world championship medals and 79 percent of Kenya’s top-25 marathon performances. Kenyans have won 152 medals, compared with 145 for other African countries—42 to 61 percent of those being Ethiopian—while the rest of the world combined has won 153 medals. The Nandi sub-tribe has won 72 medals, accounting for 47 percent of Kenya’s total. What accounts for the insane disparity between East African marathoners (specifically the Kalenjin, and a more specific sub-tribe at that) and the rest?
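The medal shares above can be sanity-checked with one line of arithmetic (a sketch using only the counts quoted in the text):

```python
# The Nandi sub-tribe's 72 medals as a share of Kenya's 152 total.
kenya_medals = 152
nandi_medals = 72

share_pct = 100 * nandi_medals / kenya_medals
print(round(share_pct, 1))  # about 47.4, i.e. the "47 percent" quoted
```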
In his book The Genius in All of Us, David Shenk (2010: 102) writes:
Take the running Kenyans. Relatively new to the international competitions, Kenyans have in recent years become overwhelmingly dominant in middle- and long-distance races. “It’s pointless for me to run on the pro-circuit,” complained American 10,000 meter champion Mike Mykytok to the New York Times in 1998. “With all of the Kenyans, I could set a personal best time, and still only place 12th and win $200.”
The Kenyan-born journalist John Manners offers a just-so story to explain how and why Kenyans dominate these competitions: the young men who were fastest and had the most endurance acquired more cattle, and those who acquired more cattle could then get a bride and have more children, Shenk explains. “It is not hard to imagine that such a reproductive advantage might cause a significant shift in a group’s genetic makeup over the course of a few centuries” (John Manners, quoted in Shenk, 2010: 103).
However, no matter what the origin of Kenyan running success is, the Kalenjin have a passionate dedication to running. Kipchoge Keino was the one who put Kenya on the map regarding distance running. Shenk quotes Keino saying:
I used to run from the farm to school and back … we didn’t have a water tap in the house, so you run to the river, take your shower, run home, change [run] to school . . . Everything is running.
However, when Keino entered the 1968 Olympics in Mexico City, he came down with gallstones and his doctor told him not to race. He took a cab to Aztec Stadium anyway, and when he got caught in traffic, he ran the last mile to the stadium and barely arrived before the race started. Even though Keino was sick, he destroyed the then-world record by 6 seconds.
Sports geographers don’t point to one variable that explains Kenyan running success—because they all interact. Kenyans train at high altitude, and while high altitude is not the only factor in long-distance running success, it is crucial, because training at a high altitude and then racing at a lower altitude can change running time by a large amount. A runner with a normal running economy who follows the mantra “live high, train low” can shave about 8 minutes off their time in a 26.2-mile marathon (Chapman and Levine, 2007). Further, socioeconomic variables also explain the success, being part of what drives Kenyans to succeed, along with favorable morphology, a strong running economy, high-intensity training (living and training at high altitude), and a slew of psychological factors related to social status and socioeconomics (Wilbur and Pitsiladis, 2012). This paper speaks perfectly to the slew of variables that need to come together to make an elite athlete.
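To put that 8-minute figure in perspective, here is a rough sketch; the 3:30 baseline is my own hypothetical for a typical (non-elite) marathoner, not a number from Chapman and Levine (2007):

```python
# An ~8 minute saving on a hypothetical 3:30 marathon, as a percentage.
baseline_min = 3 * 60 + 30   # hypothetical 3:30 finisher (210 minutes)
saving_min = 8               # approximate "live high, train low" saving

pct_improvement = 100 * saving_min / baseline_min
print(round(pct_improvement, 1))  # roughly 3.8 percent
```

A near-4-percent improvement is enormous at any level of the sport, which is why altitude looms so large in these explanations.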
Shenk (2010: 108) then reverses John Manners’s just-so story:
… it’s an entertaining theory that fits well with the popular gene-centric view of natural selection [it fits well because it’s selected to be so]. But developmental biologists would point out that you could take exactly the same story line and flip the conclusion on its head: the fastest man earns the most wives and has the most kids—but rather than passing on quickness genes, he passes on crucial ingredients, such as the knowledge and means to attain maximal nutrition, inspiring stories, the most propitious attitude and beliefs, access to the best trainers, the most leisure time to pursue training, and so on. This nongenetic aspect of inheritance is often overlooked by genetic determinists: culture, knowledge, attitudes, and environments are also passed on in many different ways.
Further, Shenk also cites sports scientist Tim Noakes who states that the best Kenyan runners cover 230 km (about 143 miles) a week at 6,000 feet in altitude—and this, of course, would be conducive to running success when the event is held at lower altitudes.
David Epstein wrote a solid book on athleticism in 2014—The Sports Gene. Chapters 12 and 13 are pivotal for this discussion. Chapter 12 is titled “Can Every Kalenjin Run?” In this chapter, Epstein, too, cites John Manners, explaining the same thing that Shenk did, but adds this:
In the next breath of the very same chapter [after describing the just-so story about cattle-gathering and wife-acquisition], though, Manners seems to doubt the suggestion as soon as he raises it. “The idea just occurred to me, so I just put it in.” (p. 184)
Manners came to see his just-so story as less powerful over the years as he interviewed Kalenjin runners, because “other ‘hot spots’ of endurance running talent have materialized in East Africa, and the athletes responsible are also from traditionally pastoralist cultures that once practiced cattle raiding” (Epstein, 2014: 184-185).
Epstein then discusses how 17 American men in history have run a marathon better than 2:10—or 4:58 per mile—while 32 Kalenjin men did it in October of 2011 alone. Five American high-schoolers have run a sub-4 minute mile, while one high-school in Kenya alone produced 4 sub-4 mile runners!
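The 2:10-to-pace conversion quoted above checks out; here is the arithmetic as a short sketch:

```python
# Convert a marathon finishing time to average pace per mile.
MARATHON_MILES = 26.2

def pace_per_mile(hours, minutes):
    """Average pace as a (minutes, seconds) pair per mile."""
    total_min = hours * 60 + minutes
    pace = total_min / MARATHON_MILES
    whole_min = int(pace)
    secs = round((pace - whole_min) * 60)
    return whole_min, secs

print(pace_per_mile(2, 10))  # -> (4, 58), i.e. just under 4:58 per mile
```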
Kenyan runners have long legs for their height, and “upper leg length, total leg length and total leg length to body height ratio were correlated with running performance” (Mooses et al, 2014)—which means that they can cover more ground per stride than runners with shorter legs. This is critical for running success of any kind. Kenyans have a high number of type I muscle fibers, but, of course, this alone does not explain their running success. Elite Kenyan distance runners are characterized by low BMI, low fat mass, and slim limbs (Kong and de Heer, 2008).
So now let’s discuss altitude adaptation. One objection to this variable—out of the many others, of course, that are conducive to running success—is to ask why Tibetans and Andeans are not succeeding in these competitions as well as the Kalenjin. The answer is simple: they do not have the long, ecto-dominant body types (Vernillo et al, 2013). There is also another, perhaps more critical, component to altitude training—hemoglobin, since the amount of oxygen in one’s blood is dictated by two factors: how much hemoglobin the blood contains and how much oxygen that hemoglobin carries. Living at altitude increases the number of red blood cells in the body, since that is a good way to get oxygen in an environment with less of it.
Epstein (2014: 208) writes:
Preferable to moving to altitude to train is being born there. Altitude natives who are born and go through childhood at elevation tend to have proportionally larger lungs than sea-level natives, and large lungs have large surface areas that permit more oxygen to pass from the lungs into the blood. This cannot be the result of altitude ancestry that has altered the genes over generations, because it occurs not only in natives of the Himalayas, but also among American children who do not have altitude ancestry but who grow up in the Rockies. Once childhood is gone, though, so too is the chance for this adaptation. It is not genetic, but neither is it alterable after adolescence.
Epstein (2014: 213) quotes the first man to run a sub-4 minute mile, Roger Bannister, who says:
The human body is centuries in advance of the physiologist, and can perform an integration of heart, lungs, and muscles which is too complex for the scientist to analyze.
This, of course, is a hard pill to swallow for some. I believe it is true—though we can point to certain factors, each individual’s trajectory into X is unique, and so explaining Y for everyone will be close to impossible.
Finally, Epstein (2014: 214) cites Claudio Berardelli:
Berardelli believes that Kenyans are, in general, more likely to be gifted runners. But he also knows that no matter their talent or body type or childhood environment or country of origin, 2:05 marathon runners do not fall from the sky. Their gifts must be coupled with herculean will.
Although that, too, is not entirely separable from innate [whatever that means] talent.
Hamilton (2000) concludes that:
It seems that the presumed causes of such domination are often recycled, out of date, and based on misinformation and myth.
This, however, betrays a misunderstanding of the systems view of running success. Just because North Africans are beginning to show up in these types of competitions does not mean that the systems view of athleticism is false.
Of course, the East African running advantage is more than ‘genetic’; it is also cultural—which, rightly, shows how every part of the system interacts to produce an elite athletic phenotype. As Louis (2014: 41) notes, “The analysis and explanation of racial athleticism is therefore irreducible to biological or socio-cultural determinants and requires a ‘biocultural approach’ (Malina, 1988; Burfoot, 1999; Entine, 2000) or must account for environmental factors (Himes, 1988; Samson and Yerlès, 1988).” Genetics alone cannot explain the running success of East Africans.
In sum, what explains the success of East African runners? A whole slew of factors that are irreducible, since the whole system interacts. Of course, I do not deny the role that physiological and anatomic factors have on running performance—they are crucial, but not the only, determinant for running success. Reducing a complex bio-system to X, Y, or Z does not make any sense, as every factor interacts to create the elite athlete. East African dominance in middle- and long-distance running will, of course, continue, since they have the right mix of factors that all interact with each other.
Wind back the tape of life to the origin of modern multicellular animals in the Cambrian explosion, let the tape play again from this identical starting point, and the replay will populate the earth (and generate a right tail of life) with a radically different set of creatures. The chance that this alternative set will contain anything remotely like a human being must be effectively nil, while the probability of any kind of creature endowed with self‐consciousness must also be extremely small. (Gould, 1996. Full House)
Wind back the tape of life to the early days of the Burgess Shale; let it play again from an identical starting point, and the chance becomes vanishingly small that anything like human intelligence would grace the replay. (Gould, 1987. Wonderful Life)
Wind back the clock to Cambrian times, half a billion years ago, when animals first exploded into the fossil record, and let it play forwards again. Would that parallel world be similar to our own? Perhaps the hills would be crawling with giant terrestrial octopuses. (Lane, 2015: 21. The Vital Question)
I first read Full House (Gould, 1996) about two years ago. I never was one to believe in evolutionary “progress,” though. As I read through the book, seeing how Gould weaved his love for baseball into an argument against evolutionary “progress” enthralled me. I love baseball and I love evolution, so this was the perfect book for me (indeed, one of my favorite books I have ever read—and I have read a lot of them). The basic argument goes like this: there are more bacteria on earth than animals deemed more “advanced”; if evolutionary “progress”—as popularly believed—were true, then there would be more “advanced” mammals than bacteria; there are more bacteria (“simpler” organisms) than mammals (more “advanced” animals); therefore, evolutionary “progress” is an illusion.
Evolutionary “progress” is entrenched in our society, as can be seen from popular accounts of human evolution (see picture below):
This is the type of “progress” that permeates the minds of the public at large.
Some may look at the diversity of life and conclude that there is a type of “progress” to evolution. However, Gould dispatches this type of assertion with his drunkard argument. Imagine a drunkard leaving a bar. There is the bar wall (a left wall of minimal complexity) and, on the other side, the gutter. As the drunkard staggers, he may wander between the wall and the gutter, but he will end up in the gutter every time.
Gould then explains his reasoning for using this type of argument:
I bring up this old example to illustrate but one salient point: In a system of linear motion structurally constrained by a wall at one end, random movement, with no preferred directionality whatever, will inevitably propel the average position away from a starting point at the wall. The drunkard falls into the gutter every time, but his motion includes no trend whatever toward this form of perdition. Similarly, some average or extreme measure of life might move in a particular direction even if no evolutionary advantage, and no inherent trend, favor that pathway (Gould, 1996: 151).
The claim that there is a type of “progress” to evolution is only due to the fact—in my opinion—that humans exist and are the most “advanced” species on earth.
It seems that JP Rushton did not read this critique of evolutionary “progress,” since not even a year after Gould published Full House, Rushton published a new edition of Race, Evolution, and Behavior (Rushton, 1997), in which he argues (on pages 292-294) that there is, indeed, “progress” to evolution. He cites Aristotle, Darwin (1859), Wilson (1975), Russell (1983, 1989; read my critique of Russell’s theory), and Bonner.
To be brief:
The Great Chain of Being (which Rushton’s r/K selection theory attempts to revive) is not valid; Wilson’s idea of “biological progression” is taken care of by Gould’s drunkard argument; Bonner’s question of why there has been evolution from simple to advanced is, too, taken care of by Gould’s drunkard argument; and finally there is Dale Russell’s argument about the troodon (which I will expand on below).
Rushton claims that Russell, in his 1989 book Odysseys in Time: Dinosaurs of North America (which I bought specifically to get more information on Russell’s thoughts on the matter and for an article on it), argued that “if [dinosaurs] had not gone extinct, dinosaurs would have progressed to a large-brained, bipedal descendent” (Rushton, 1997: 294). Either Rushton only glanced at Russell’s writings or he is being dishonest: Russell claimed that, had the dinosaurs not gone extinct, one dinosaur—Troodon—would have evolved into a bipedal, human-like being. Russell made this claim because Troodon had an EQ about six times that of the average dinosaur, ran on two legs, and had use of its ‘hands.’ So, Russell argues, had the dinosaurs not gone extinct, Troodon could possibly have become human-like. However, there are two huge problems for this hypothesis.
In the book Up From Dragons, Skoyles and Sagan (2002: 12) write:
But cold-bloodedness is a dead-end for the great story of this book—the evolution of intelligence. Certainly reptiles could evolve huge sizes, as they did over vast sweeps of Earth as dinosaurs. But they never could have evolved our quick-witted and smart brains. Being tied to the sun restricts their behavior: Instead of being free and active, searching and understanding the world, they spend too much time avoiding getting too hot or too cold.
So, since dinosaurs were cold-blooded, and being tied to the sun restricts behavior, it is highly implausible that they would have evolved brains our size even if they had survived the K-T extinction event.
Furthermore, Hopson (1977: 444) writes:
I would argue, as does Feduccia (44), that the mammalian/avian levels of activity claimed by Bakker for dinosaurs should be correlated with a great increase in motor and sensory control and this should be reflected in increased brain size. Such an increase is not indicated by most dinosaur endocasts.
Gould even writes in Wonderful Life:
If mammals had arisen late and helped to drive dinosaurs to their doom, then we could legitimately propose a scenario of expected progress. But dinosaurs remained dominant and probably became extinct only as a quirky result of the most unpredictable of all events—a mass dying triggered by extraterrestrial impact. If dinosaurs had not died in this event, they would probably still dominate the large-bodied vertebrates, as they had for so long with such conspicuous success, and mammals would still be small creatures in the interstices of their world. This situation prevailed for one hundred million years, why not sixty million more? Since dinosaurs were not moving towards markedly larger brains, and since such a prospect may lay outside the capability of reptilian design (Jerison, 1973; Hopson, 1977), we must assume that consciousness would not have evolved on our planet if a cosmic catastrophe had not claimed the dinosaurs as victims. In an entirely literal sense, we owe our existence, as large reasoning mammals, to our lucky stars. (Gould, 1989: 318)
I really don’t think it’s plausible that brains our size would have evolved had the dinosaurs not gone extinct, and the data we have about dinosaurs strongly support that assertion.
Staying on the topic of progression and brain size, there is one more thing I want to note. Deacon (1990a) argues that fallacies exist in the assertion that brain size progressed throughout evolutionary history. One of Deacon’s fallacies is the “evolutionary progression fallacy.” The concept of “progress” finds “implicit expression in the analysis of brain-size differences and presumed grade shifts in allometric brain/body size trends, in theories of comparative intelligence, in claims about the relative proportions of presumed advanced vs. primitive brain areas, in estimates of neural complexity, including the multiplication and differentiation of brain areas, and in the assessment of other species with respect to humans, as the presumed most advanced exemplar” (Deacon, 1990a: 195).
This, in my opinion, is the last refuge for progressionists: looking at the apparent rise of brain size in evolutionary history and saying “Aha! There it is—progress!” But the so-called progress in brain size evolution is due only to allometric processes; there is no unbiased allometric baseline, so there is no true “progress” in brain size, and these types of claims from progressionists fail. Lastly, Deacon (1990b) argues that so-called brain size progress vanishes when functional specialization is taken into account.
Therefore it is unlikely that dinosaurs would have evolved brains our size.
In sum, there are many ways that progressionists attempt to show that there is “progress” in evolution. However, they all fail, since Gould’s argument is always waiting to rear its head. Yes, some organisms have evolved greater complexity—i.e., moved toward the right wall—but this is not evidence for “progress.” There is no “progress” in brain size evolution, and there would be no human-like dinosaurs had the dinosaurs not gone extinct in the K-T extinction event. We live on a planet of bacteria—bacteria are the most numerous type of organism on earth—and so evolutionary “progress” cannot be real.
Complexity—getting to the right wall—is an inevitability, just as it is an inevitability that the drunkard would eventually stumble to the gutter. But this does not mean that there is “progress” to evolution.
The argument in Gould’s Full House can be simply stated like this:
P1 The claim that evolutionary “progress” is real and not illusory can only be justified iff organisms deemed more “advanced” outnumber “lesser” organisms.
P2 There are more “lesser” organisms (bacteria/insects) on earth than “advanced” organisms (mammals/species of mammals).
C Therefore evolutionary “progress” is illusory.
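As an aside, the syllogism can even be formalized. Here is a minimal sketch in Lean 4 (the proposition and variable names are my own, and P1 is rendered as a biconditional between “progress is real” and “advanced organisms outnumber lesser ones”):

```lean
-- A toy formalization of the Full House argument (names are mine).
example (progressReal : Prop) (advanced lesser : Nat)
    (P1 : progressReal ↔ lesser < advanced)  -- progress iff "advanced" outnumber "lesser"
    (P2 : advanced < lesser)                 -- but bacteria outnumber mammals
    : ¬ progressReal := fun h =>
  -- From P1 and h we get lesser < advanced; with P2 this yields lesser < lesser.
  Nat.lt_irrefl lesser (Nat.lt_trans (P1.mp h) P2)
```

The formal version makes plain that the argument’s force rests entirely on P1; a progressionist who rejects the biconditional is untouched by P2.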
The Shroud of Turin is a long cloth that bears the negative image of a man who was purportedly crucified. The study of the Shroud even has its own name—sindonology. The Shroud, ever since its discovery, has been the source of rampant controversy. Does the Shroud show a crucified man? Is this man Christ after the crucifixion? Is the Shroud real, or is it a hoax? For centuries these questions have caused long and drawn-out debate. Fortunately, recent analyses seem to have finally answered the centuries-old question: “Does the Shroud of Turin show Christ after the crucifixion?” In this review, I will discuss the history of the Shroud, the pro- and con-sides of the debate, and what bearing, if any, the Shroud has on the truth of Christianity. The Shroud has been mired in controversy ever since its appearance in the historical record, and for good reason. If it could be proven that the Shroud was Christ’s burial linen, and that Christ’s image somehow became imprinted on it when his spirit left his body, this would lend credence to claims from Christians and Catholics—but reality seems to bear out different facts of the matter.
Various theories have been put forth to explain the image that appears when a negative picture is taken of the Shroud. From people hypothesizing that the great Italian artist da Vinci drew it on the cloth (Picknett and Prince, 2012), to it actually being the blood-soaked burial cloth of Christ himself, to its being just a medieval forgery, we now have the tools to carry out analyses that can answer these questions and put them to rest for good.
The history of the Shroud dates back to around the 1350s, as the Vatican historian Barbara Frale notes in her book The Templars and the Shroud of Christ: A Priceless Relic in the Dawn of the Christian Era and the Men Who Swore to Protect It (Frale, 2015). Meacham (1983) discusses how the Shroud has generated controversy ever since it was put on display in 1357 in France. That the Shroud first appeared in written records in the 1350s does not, however, mean that it is not Christ’s burial linen.
Some, even back when the Shroud was discovered, argued that it was just a painting on linen cloth. As McCrone (1997: 180) writes “The arrangement of pigment particles on the “Shroud” is […] completely consistent with painting with a brush using a water-color paint.” So this would lend credence to the claim that the Shroud is nothing but a painting—most likely a medieval one.
The Shroud itself is around 14’3” long (Crispino, 1979) and, believers claim, dates back two millennia and bears the imprint of Jesus Christ. It is currently housed at the Cathedral of Saint John the Baptist in Turin, Italy. The Shroud shows what appears to be a tall man, though Crispino (1979) shows that height estimates of the man on the Shroud have ranged from 5’3.5” to 6’1.5”. With such wide-ranging estimates, it is unlikely that we will ever get an agreed-upon height for the man on the Shroud. But Crispino (1979) does note that Palestinian males 2,000 years ago stood between 5’1” and 5’3”, and so, if this were Christ’s burial cloth, it stands to reason that the man would be closer to the lower bound.
The Shroud shows a man who seems to bear the marks of the crucifixion. If the Shroud was really the burial cloth of a man who was crucified, then there would be blood on the linen. There are blood spots on the Shroud, and there have been recent tests on the cloth to see if it really is human blood.
A method called blood pattern analysis (BPA) is a useful—indeed ingenious—way to test the claim that the Shroud really was Christ’s burial cloth. BPA “refers to the collection, categorization and interpretation of the shape and distribution of bloodstains connected with a crime” (Peschel et al, 2010). By draping a cloth over a model bearing the same wounds that Christ was said to have, we can determine—with good accuracy—whether the claim is plausible.
A recent study used BPA to ascertain whether the blood stains on the Shroud are realistic, and not just art (Borrini and Garlaschelli, 2018). The authors used a living subject to see whether the blood patterns on the cloth are realistic. Their analysis showed that the “blood visible on the frontal side of the chest (the lance wound) shows that the Shroud represents the bleeding in a realistic manner for a standing position”, whereas the stains on the back were “totally unrealistic.”
Zugibe (1989) also puts forth a scientific hypothesis: that Christ was washed prior to being wrapped in the linen cloth that is the Shroud we know today. Citing the apocryphal text The Lost Gospel According to Peter, Zugibe provides scriptural evidence for the claim that Christ was washed before burial, which lends credence to the hypothesis that the Shroud truly is Christ’s burial linen. Zugibe also clearly shows that, even after a body has been washed, it can still bleed profusely, which may have caused the blood stains on the Shroud.
Indeed, even Wilson (1998: 235) writes that “ancient blood specialist Dr Thomas Loy confirm[s] that blood many thousands of years old can remain bright red in certain cases of traumatic death.” This coheres well with Zugibe’s (1989) argument that in certain cases, even after a body has been washed and wrapped in linen, that there can still be apparent blood stains on the linen (and we know from Biblical accounts that Jesus did die a traumatic death).
To see whether the Shroud truly is the burial cloth of Christ, the linen itself can be analyzed to establish a date range for when it was made. Such analyses have been done, and the dates obtained fall between the 1250s and the 1350s. However, those who believe that the Shroud is truly the burial cloth of Christ respond that the fibers tested were taken from medieval repairs of the cloth: one of the locations housing the cloth burned in a fire in 1532 (Adler, 1996), damaging it. For example, Rogers (2005) argues that the threads of linen tested were from medieval repairs, and that the Shroud is in fact between 1,300 and 3,000 years old.
Barcaccia et al (2015) analyzed dust particles vacuumed from the back of the Shroud. They found multiple non-native plant species on the Shroud, along with multiple mtDNA (mitochondrial DNA) haplogroups (H1-3, H13/H2a, L3C, H33, R0a, M56, and R7/R8). The fact that multiple mtDNA haplogroups were found on the Shroud is consistent with the many journeys the Shroud took before ending up in Turin, Italy. It could also reflect contamination by the DNA of contemporary researchers. In any case, Barcaccia et al (2015) show that there were numerous species of plants from around the world along with many different kinds of mtDNA, and so both believers and skeptics can use this study as evidence for their side. Barcaccia et al (2015) conclude that the Shroud may have been woven in India—its original name, Sindon, refers to a fabric from India—which the mtDNA analyses corroborate.
There is even evidence that the face on the Shroud is that of da Vinci himself. The artist Lillian Schwartz, using computer imaging, showed that the facial dimensions on the Shroud match those of da Vinci (Jamieson, 2009). It is hypothesized that da Vinci used what is called a camera obscura to put his facial features onto the Shroud. Since we know that da Vinci made ultra-realistic drawings of the human body (he had access to a morgue; Shaikh, 2015), it is plausible that da Vinci himself was responsible for the image on the Shroud. Jamieson (2009) relays the documentary’s explanation of how da Vinci could have created the Shroud: the cloth, treated with silver sulfate, was hung in a dark room, so that when the sun’s rays passed through a lens in the wall, da Vinci’s face would have been burnt into it.
Further, some argue that the man on the Shroud is not Christ but Jacques de Molay—the last Grand Master of the Knights Templar. If this claim is true, it could explain why the Knights protected the Shroud so fervently. Although there is little historical evidence as to how Molay was tortured, we do know that he was tortured—and, interestingly, he was apparently put through the same process of crucifixion that Christ was said to have gone through. If so, that could explain the marks on the hands and the blood on the Shroud that would have appeared had Christ been the one crucified and wrapped in the linen. Those who killed Molay were “expressly forbidden to shed blood”, but since we know he was tortured, it is not out of the realm of possibility that he bled after death. Molay was killed in 1314 C.E., which lines up with the accounts from Frale (2015) and Meacham (1983) that the Shroud appeared around the 1350s. So the Knights protecting the Shroud as they did—even if the man on it were not Christ—would have some backing.
The validity of the Shroud is quite obviously a hot-button topic for Catholics. No matter the outcome of any study on the matter, they can concoct an ad hoc hypothesis to immunize their beliefs from falsification. No doubt, some of the critiques they raise are valid (e.g., that tested fibers were taken from restored parts of the Shroud), but after so many analyses one has to conclude that the hypothesis that the Shroud truly was Christ’s burial cloth is false. Furthermore, even if the Shroud were dated to the 1st century, that would not be evidence that it was Christ’s burial shroud. The mtDNA analyses also seem to establish that the Shroud passed through many hands—as the hypothesis predicts. However, this also coheres with the explanation that it was made during medieval times, with numerous people touching the linen that eventually became the Shroud. It is also, of course, possible that contemporary researchers have contaminated the Shroud with their own DNA, making objective genomic analyses hard to verify, since there is no way to partition contaminating DNA from DNA that was originally on the Shroud.
In any case—irrespective of the claims of those who wish that this is Christ’s burial cloth and who will concoct any ad hoc hypothesis to immunize their beliefs from falsification—it seems that the Shroud was not Christ’s burial cloth. The mtDNA analyses, the blood pattern analyses (which point to an artistic representation of Christ’s burial), the evidence that the facial dimensions on the Shroud match da Vinci’s face, the claims that the man on the Shroud was the last Grand Master of the Knights Templar, and the fact that the original name of the Shroud is Indian in origin all point to the conclusion that the Shroud was not Christ’s burial linen, no matter how fervently believers hold to the claim.
Action and behavior are distinct concepts, although in everyday usage they are used interchangeably. The two concepts are needed to distinguish what one intends to do from what one reacts to, and how one reacts. In this article, I will explain the distinction between the two and how and why people get it wrong when discussing them—since using them interchangeably is inaccurate.
Actions are intentional; they are done for reasons (Davidson, 1963). An action is caused by the agent’s current intentional states, but it is carried out for reasons. Actions are what an agent does—though, stated this way, the term could be used interchangeably with behavior. The Wikipedia article on “action” states:
action is an intentional, purposive, conscious and subjectively meaningful activity
So actions are conscious, compared to behaviors which are reflexive and unconscious—not done for reasons.
Davidson (1963: 685) writes:
Whenever someone does something for a reason, therefore, he can be characterized as (a) having some sort of pro attitude toward actions of a certain kind, and (b) believing (or knowing, perceiving, noticing, remembering) that his action is of that kind.
So providing the reason why an agent did A requires naming the pro-attitude—beliefs paired with desires—or the related belief that caused the agent’s action. When I explain behavior, this will become clear.
Behavior is different: behavior is an unconscious reaction to a stimulus. For example, take a doctor’s office visit. Hitting the knee in the right spot causes the knee to jerk up—doctors use this to test for nerve damage. It tests the L2, L3, and L4 segments of the spinal cord, so if there is no reflex, the doctor knows there is a problem.
This is done without thought—the patient does not think about the reflex. This then shows how and why action and behavior are distinct concepts. Here’s what occurs when the doctor hits the patient’s knee:
When the doctor hits the knee, the patient’s thigh muscle stretches. When the thigh muscle stretches, a signal is then sent along the sensory neuron to the spinal cord where it interacts with a motor neuron which goes to the thigh muscle. The muscle then contracts which causes the reflex. (Recall my article on causes of muscle movement.)
So this, compared to consciously taking a step—or consciously jerking your leg in the same way the doctor expects the patellar reflex to occur—is what distinguishes action from behavior. Sure, the patellar reflex occurred for a reason—but it was not done consciously by the agent, so it is not an action.
Perhaps it would be important at this point to explain the differences between action, conduct, and behavior, because we have used these three terms in the discussion of caring. …
Teleology, the reader is reminded, involves goals or lures that provide the reasons for a person acting in a certain way. It is goals or reasons that establish action from simple behavior. On the other hand the concept of efficient causation is involved in the concept of behavior. Behavior is the result of antecedent conditions. The individual behaves in response to causal stimuli or antecedent conditions. Hence, behavior is a reaction to what already is—the result of a push from the past to do something in the present. In contrast, an action aims at the future. It is motivated by a vision of what can be. (Brencick and Webster, 2000: 147)
This is also another thing that Darwin got wrong. He believed that instincts and reflexes are inherited—this is not wrong since they are behaviors and behaviors are dispositional which means they can be selected. However, he believed that before they were inherited as instincts and reflexes, they were intentional acts. As Badcock (2000: 56) writes in Evolutionary Psychology: A Critical Introduction:
Darwin explicitly states this when he says that ‘it seems probable that some actions, which were at first performed consciously, have become through habit and association converted into reflex actions, and are now firmly fixed and inherited.’
This is quite obviously wrong, as I have explained above; instead of “reflexive actions”, Darwin meant “reflexive behaviors”. So, it seems that Darwin did not grasp the distinction between “action” and “behavior” either.
We can then form a simple argument about cognition. This is a natural outcome of what has been argued here, given the distinction between action and behavior. When we think of “cognition”, what comes to mind? Thinking. Thinking is an action—so thinking (cognition) is intentional. Intentionality is “the power of minds and mental states to be about, to represent, or to stand for, things, properties and states of affairs.” So, when we think, our minds and mental states can represent, or stand for, things, properties, and states of affairs. Therefore, cognition is intentional. The argument can be stated like this:
P1 Cognition (thinking) is intentional—it is an action.
P2 Behavior is dispositional, not intentional.
C Therefore, cognition cannot be responsible for behavior.
Thinking is a mental activity which results in a thought. So if thinking is a mental activity which results in a thought, what is a thought? A thought is a mental state of considering a particular idea or answer to a question or committing oneself to an idea or answer. These mental states are, or are related to, beliefs. When one considers a particular answer to a question they are paving the way to holding a particular belief; when they commit themselves to an answer they have formulated a new belief.
Beliefs are propositional attitudes: believing p involves adopting the belief attitude toward the proposition p. So, cognition is thinking: a mental process that results in the formation of a propositional belief. When one acquires a propositional attitude by thinking, the process takes place in stages: later propositional attitudes are justified by earlier ones.
Therefore, thinking is an action (since it is intentional) and cannot possibly be a behavior (a disposition). Something can be either an action or a behavior—it cannot be both.
Let’s say that I have the belief that food is downtown, and I desire to eat. So I intend to go downtown to get some food. The cause is the sensation of hunger, while the belief-desire pair is my reason. This chain shows how actions are intentional—how one intends to act.
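The belief-desire-intention chain can be made concrete with a small toy model. Everything here (class names, the function, the strings) is my own illustrative sketch, not a standard formalism: the point is only that Davidson’s “primary reason” is the pairing of a pro-attitude (a desire) with a belief, and that this pair is what rationalizes the resulting intention to act.

```python
from dataclasses import dataclass

# Toy sketch of Davidson's "primary reason" (illustrative names only).

@dataclass
class Belief:
    proposition: str      # e.g. "food is downtown"

@dataclass
class Desire:
    goal: str             # the pro-attitude, e.g. "eat"

@dataclass
class Intention:
    act: str

def primary_reason(belief: Belief, desire: Desire) -> Intention:
    """The belief-desire pair rationalizes (gives the reason for) the action."""
    return Intention(act=f"act on the belief that {belief.proposition}, in order to {desire.goal}")

intention = primary_reason(Belief("food is downtown"), Desire("eat"))
print(intention.act)
```

A reflex, by contrast, bypasses this chain entirely: it has a cause (the stimulus) but no belief-desire pair behind it, which is exactly the action/behavior distinction at issue.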
Furthermore, returning to the example above: the patellar reflex the doctor assesses is a behavior—it is not an action, since the agent himself did not cause it. One could say that administering the reflex test is an action for the doctor, but it cannot be an action for the patient the test is being done on—for the patient, it is a behavior.
I have explained the difference between action and behavior and how and why they are distinct. I gave an example of action (cognition) and behavior (the patellar reflex) and explained how they are distinct. I then gave an argument showing how cognition (an action) cannot possibly be responsible for behavior. I showed how Darwin believed (falsely) that actions could eventually become behaviors. Darwin pretty much stated “Actions can be selected and eventually become behaviors.” This is nonsense. Actions, by virtue of being intentional, cannot be selected; even if they are done over and over again, they do not eventually become behaviors. Behavior, on the other hand, by virtue of being dispositional, can be selected. In any case, I have shown that the two concepts are distinct and that it is nonsense to conflate them.
I’m not really one for social media (the only social media I use is Twitter and this blog), so I don’t keep up with the new types of social media that continuously pop up. Snapchat has been around since 2011. It’s a type of social media where users can share pictures that then become unavailable to the user they were sent to. I don’t understand the utility of media like this, but maybe that’s because I’m not the target demographic.
In any case, I’m not going to talk about Snapchat in that way today, because that’s not what this blog is about. What I will talk about today is the rise of “Snapchat dysmorphia.” “Dysmorphia” is defined by the Merriam-Webster Dictionary as “characterized by malformation”, while “dysphoria” is defined as “a state of feeling very unhappy, uneasy, or dissatisfied.” Snapchat dysmorphia is a type of body dysmorphia; the two terms—dysphoria and dysmorphia—are, in any case, similar.
So where does Snapchat come into play here? Well, there are certain things one can do to one’s pictures—and I’m sure the same can be done with other applications as well. There are what I would term “picture editors”: filters which can change how a person looks. From changing the background you’re in to changing your facial features, there is a wide range of alterations these kinds of filters can apply to photographs and videos.
Of course, with the rise of social media and people being glued to their phones day in and day out—pretty much living their entire lives on social media—people get sucked into the digital world they make for themselves. People constantly send pictures to others about what they’re doing that day; they don’t have a chance to live in the moment because they’re always trying to get the “best picture” of the moment for their followers. This is where the problem with these kinds of filters comes in—and how Snapchat is driving these problems.
So people use these filters on their pictures. They then get used to seeing themselves as they appear in the filtered pictures. Since they spend so much time on social media, constantly filtering their pictures, they—and their social media followers—get used to seeing their filtered photos and not how they really look. This, right here, is the problem.
People then become dysphoric—dissatisfied with their appearance compared to how they look in their filtered photos. This has led numerous people to do what I believe is insane: get plastic surgery to look like their Snapchat selves. This is, in part, what the meteoric rise of social media has done to the minds of the youth. The filters give young people unrealistic expectations, and since they spend so much time on Snapchat and similar applications seeing their filtered pictures, they become sad that they do not look in reality as they do in their digital world—dysphoric about their facial features because they do not match their Snapchat selves.
One Snapchat user said to Vice:
We’d rather have a digitally obscured version of ourselves than our actual selves out there. It’s honestly sad, but it’s a bitter reality. I try to avoid using them as much as I can because they seriously cause an unhealthy dysphoria.
Therein lies the problem: people become used to what I would call “idealized versions” of themselves. Some of these filters completely change one’s facial structure; some give bigger or smaller eyes; others change the shape of the jawline and cheekbones; others give fuller lips. So now, instead of bringing photographs of celebrities to plastic surgeons and saying “This is what I want to look like”, people are bringing their edited Snapchat pictures to plastic surgeons and telling them they want that look.
So it’s no wonder that people become dysphoric about their facial features when they pretty much live on social media. They constantly play around with this filter and that filter, and they become used to what then becomes an idealized version of themselves. These types of picture filters have been argued to be bad for self-esteem, and it’s no wonder why, given how radically these filters can change users’ appearances.
There has been a rise in individuals bringing in their filtered photos to plastic surgeons, telling them that they want to look like the filtered picture. Indeed, some of the before and afters I have seen bear striking similarities to the filtered photo.
The term “Snapchat dysmorphia” has even made it into a JAMA journal, in an article titled Selfies—living in the era of filtered photographs (Rajanala, Maymone, and Vashi, 2018). They write that:
Previously, patients would bring images of celebrities to their consultations to emulate their attractive features. A new phenomenon, dubbed “Snapchat dysmorphia,” has patients seeking out cosmetic surgery to look like filtered versions of themselves instead, with fuller lips, bigger eyes, or a thinner nose.
Ramphul and Mejias (2018) state that while it may be too early to employ the term “Snapchat dysmorphia”, it is imperative to understand why many young people are considering and getting plastic surgery. Indeed, a few plastic surgeons have stated that the types of alterations patients describe to them are exactly what Snapchat facial edits produce.
Ramphul and Mejias (2018) also write:
There are already some ongoing legal issues about the use of Snapchat in the operating room by some plastic surgeons but none currently involving any patients accusing Snapchat of giving them a false perception of themselves yet. The proper code of ethics among plastic surgeons should be respected and an early detection of associated symptoms in such patients might help provide them with the appropriate counseling and help they need.
Clearly, this issue has become large enough that medical journals are now employing the term in their articles.
McLean et al’s (2015) results “showed that girls who regularly shared self-images on social media, relative to those who did not, reported significantly higher overvaluation of shape and weight, body dissatisfaction, dietary restraint, and internalization of the thin ideal. In addition, among girls who shared photos of themselves on social media, higher engagement in manipulation of and investment in these photos, but not higher media exposure, were associated with greater body-related and eating concerns, including after accounting for media use and internalization of the thin ideal.” This seems intuitive: the more time one spends sharing self-images on social media, the more one overvalues shape and weight. And putting this into the context of Snapchat dysmorphia: girls who spend too much time on applications that can change their appearance may also develop eating disorders.
Ward et al (2018) report that in 2014, about 93 million selfies were taken per day. Because of the way selfies are taken (up close), they distort the nasal dimensions, increasing them (Ward et al, 2018). Although this is only tangentially related to Snapchat dysmorphia, it will also increase the number of people seeking plastic surgery, since so many people spend their time on social media taking selfies and eventually idealizing themselves at the angles at which they take the pictures.
Although a search for “Snapchat dysmorphia” currently returns only two pages of results on Google Scholar, we can expect the number of journal articles referencing the term to increase in the coming years, as people live more and more of their lives on social media. This is troubling: that young people are spending so much time on social media, editing their photos and acquiring dysmorphia through the kinds of edits these applications make possible, is an issue we will soon need to address. Quite obviously, getting plastic surgery to look more like an idealized Snapchat photo is not the solution; something like counseling or therapy would better address the issue, not telling people, “If you have the money and the time to get this surgery done, then you should, to look how you idealize yourself.”
Should people get plastic surgery to fix themselves, or should they get counseling? People who look to, or get, surgery to fix dysmorphic issues they have with themselves will never be satisfied. They will always see a blemish, an imperfection to fix. For this reason, getting surgery in an attempt to “fix” yourself while fixated on your looks through these picture filters won’t work, as the deeper problem, which I would claim is rampant social media use, isn’t addressed.
Scientism is the belief that only scientific claims are meaningful, that science is the only way for us to gain knowledge. However, this is a self-refuting claim. The claim that “only scientific claims are meaningful” cannot be empirically observed; it is a philosophical claim. Thus, those who push scientism as the only way to gain knowledge fail, since the claim itself is a philosophical one. Although science (and the scientific method) points us in the direction of knowledge, it is not the only way for us to gain it. We can also gain knowledge through logic and reasoning. The claim that science, and science alone, can point us toward objective facts about knowledge and reality is false.
There is no doubt about it: the natural sciences have pointed us toward facts, facts of the matter that exist in nature. However, this truth has been used in recent times to purport that the sciences are the only way for us to gain knowledge. That, itself, is a philosophical claim and cannot be empirically tested, and so it is self-defeating. Furthermore, the modern sciences themselves arose from philosophy.
Richard Dawkins puts forth a view of epistemology in his book The Magic of Reality in which all of our knowledge concerning reality is derived from our five senses. So if we cannot see it, smell it, hear it, touch it, or taste it, we cannot know it. Thus, how we know what is true or not always comes back to our senses. The claim is “All knowledge of reality is derived from our senses.” There is a problem here, though: this is a philosophical claim and cannot be verified with the five senses. What a conundrum! Nothing that one can see, smell, hear, taste, or touch can verify that claim; it is a philosophical, not a scientific, claim and is therefore self-refuting.
Science is dependent on philosophy, but philosophy is not dependent on science. Indeed, even the question “What is science?” is a philosophical, not scientific, question and cannot be known through the five senses. Even if all knowledge is acquired through the senses, the belief that all knowledge is acquired through the senses is itself not a scientific claim, but a philosophical one, and is therefore self-refuting.
In his book I am Not a Brain: Philosophy of Mind for the 21st Century, philosopher Markus Gabriel writes (pg 82-83):
The question of how we conceive of human knowledge acquisition has many far-reaching consequences and does not concern merely philosophical epistemology. Rampant empiricism — i.e., the thesis that all knowledge derives exclusively from sense experience — means trouble. If all knowledge stemmed from experience, and we could hence never really know anything definitively — since experience could always correct us — how could we know, for example, that one should not torture children or that political equality should be a goal of democratic politics? If empiricism were correct, how would we be supposed to know that 1 + 2 = 3, since it is hard to see how this could be easily revised by sense experience? How could we know on the basis of experience that we know everything only on the basis of experience?
Rampant empiricism breaks down in the face of simple questions. If all knowledge really stems from the source of sense experience, what are we supposed to make of the knowledge concerning this supposed fact? Do we know from sense experience that all knowledge stems from sense experience? One would then have to accept that experience can teach us wrongly even in regard to this claim. In principle, one would have to be able to learn through experience that we cannot learn anything through experience … How would this work? What kind of empirical discovery would tell us that not everything we know is through empirical discovery?
The thing is, the claim that “All knowledge stems from sense experience” cannot be corrected by sense experience, and so it is not a scientific hypothesis that can be proven or disproven. No matter how well a scientific theory is established, it can always be revised or refuted, and new evidence can emerge that renders the theory false. Therefore, what Gabriel terms “rampant empiricism” (scientism) is not a scientific hypothesis.
Scientism is believed to be justified on the basis of empirical discoveries and the fact that our senses can lead to the refutation or revision of scientific theories. That in and of itself justifies scientism for most people. Though, as previously stated, that belief is a philosophical, not scientific, belief and cannot be empirically tested, and is therefore self-refuting. Contemporary scientists and pundits who say, for example, that “philosophy is dead” (e.g., Hawking and deGrasse Tyson) are making philosophical claims, thereby proving that philosophy is not dead!
Claims that science (empiricism) is the be-all and end-all of knowledge acquisition fall flat on their face, for if something is not logical, then how can it be scientifically valid? This is further buttressed by the fact that all science rests on philosophy and science needs philosophy, whereas philosophy does not need science: philosophy existed long before the natural sciences.
Haack (2009: 3) articulates six signs of scientism:
1. Using the words “science,” “scientific,” “scientifically,” “scientist,” etc., honorifically, as generic terms of epistemic praise.
2. Adopting the manners, the trappings, the technical terminology, etc., of the sciences, irrespective of their real usefulness.
3. A preoccupation with demarcation, i.e., with drawing a sharp line between genuine science, the real thing, and “pseudo-scientific” imposters.
4. A corresponding preoccupation with identifying the “scientific method,” presumed to explain how the sciences have been so successful.
5. Looking to the sciences for answers to questions beyond their scope.
6. Denying or denigrating the legitimacy or the worth of other kinds of inquiry besides the scientific, or the value of human activities other than inquiry, such as poetry or art.
People who take science to be the only way to gain knowledge, in effect, take science to be a religion, which is ironic since most who push these views are atheists (Richard Dawkins, Sam Harris, Lawrence Krauss). Science is not the only way to gain knowledge; we can also gain knowledge through logic and reasoning. There are analytic truths that are known a priori (Zalta, 1988).
Thus, the justification offered for scientism, that all of our knowledge is derived from the five senses, is false and self-refuting, since the belief that scientism is true is a philosophical claim that cannot be empirically tested. There are other ways of gaining knowledge (without, of course, denigrating the knowledge we gain from science), and therefore scientism is not a justified position. Since there are analytic, a priori truths, the claim that “rampant empiricism” (scientism) is true is clearly false.
Note that I am not denying that we can gain knowledge through sense experience; I am denying that it is the only way that we gain knowledge. Even Hossain (2014) concludes that:
Empiricism in the traditional sense cannot meet the demands of enquiries in the fields of epistemology and metaphysics because of its inherent limitations. Empiricism cannot provide us with the certainty of scientific knowledge in the sense that it denies the existence of objective reality, ignores the dialectical relationship of the subjective and objective contents of knowledge.
Quite clearly, it is not rational to be a rampant empiricist who believes that sense experience is the only way to acquire knowledge, since there are ways of gaining knowledge that are not based on sense experience alone. In any case, the argument formulated below shows that scientism is not justified, since we can acquire knowledge through logic and reasoning. It is for these reasons that we should reject scientism: the claim that science is the only path to knowledge is itself a philosophical, not scientific, claim, which therefore falsifies it.
Premise 1: If scientism is justified, then science is the only way we can acquire knowledge.
Premise 2: We can acquire knowledge through logic and reasoning, along with science.
Conclusion: Therefore, scientism is unjustified, since science is not the only way we can acquire knowledge. (modus tollens, P1, P2)
Construct validity for IQ is nonexistent. Some people may point to Haier’s brain imaging data as evidence of construct validity for IQ, even though there are numerous problems with brain imaging and neuroreductionist explanations for cognition are “probably not” possible (Uttal, 2014; also see Uttal, 2012). Construct validity refers to how well a test measures what it purports to measure, and it is nonexistent for IQ (see Richardson and Norgate, 2014). If the tests tested what they purport to (intelligence), then they would be construct valid. I will show an example of a measure that was validated and shown to be reliable without circular reliance on the instrument itself; I will show that the measures people use in an attempt to prove that IQ has construct validity fail; and finally I will provide an argument that the claim “IQ tests test intelligence” is false, since the tests are not construct valid.
Jung and Haier (2007) formulated the P-FIT hypothesis—the Parieto-Frontal Integration Theory. The theory purports to show how individual differences in test scores are linked to variations in brain structure and function. There are, however, a few problems with the theory (as Richardson and Norgate, 2007 point out in the same issue; pg 162-163). IQ and brain region volumes are experience-dependent (eg Shonkoff et al, 2014; Betancourt et al, 2015; Lipina, 2016; Kim et al, 2019). Since they are experience-dependent, different experiences will form different brains and test scores. Richardson and Norgate (2007) state that bigger brain areas are not the cause of IQ; rather, the experience-dependency of both explains the correlation: exposure to middle-class knowledge and skills leads to a better knowledge base for test-taking (Richardson, 2002), and access to better nutrition is found in the middle and upper classes, whereas, as Richardson and Norgate (2007) note, lower-quality, more energy-dense foods are more likely to be found in the lower classes. Thus, Haier et al did not “find” what they purported to, resting as they did on simplistic correlations.
Now let me provide the argument about IQ test experience-dependency:
Premise 1: IQ tests are experience-dependent.
Premise 2: IQ tests are experience-dependent because some classes are more exposed to the knowledge and structure of the test by way of being born into a certain social class.
Premise 3: If IQ tests are experience-dependent because some social classes are more exposed to the knowledge and structure of the test, along with whatever else comes with membership in that social class, then the tests test distance from the middle class and its knowledge structure.
Conclusion 1: IQ tests test distance from the middle class and its knowledge structure (P1, P2, P3).
Premise 4: If IQ tests test distance from the middle class and its knowledge structure, then how an individual scores on a test is a function of that individual’s cultural/social distance from the middle class.
Conclusion 2: How an individual scores on a test is a function of that individual’s cultural/social distance from the middle class, since the items on the test are more likely to be encountered in the middle class (i.e., they are experience-dependent), and so one of a lower class will necessarily score lower due to not being exposed to the items on the test (C1, P4).
Conclusion 3: IQ tests test distance from the middle class and its knowledge structure, thus, IQ scores are middle-class scores (C1, C2).
Still further regarding neuroimaging, we need to take a look at William Uttal’s work.
Uttal (2014) writes: “The problem is that both of these approaches are deeply flawed for methodological, conceptual, and empirical reasons. One reason is that simple models composed of a few neurons may simulate behavior but actually be based on completely different neuronal interactions. Therefore, the current best answer to the question asked in the title of this contribution [Are neuroreductionist explanations of cognition possible?] is–probably not.”
Uttal even has a book on meta-analyses and brain imaging, which, of course, has implications for Jung and Haier’s P-FIT theory. In his book Reliability in Cognitive Neuroscience: A Meta-Meta-Analysis, Uttal (2012: 2) writes:
There is a real possibility, therefore, that we are ascribing much too much meaning to what are possibly random, quasi-random, or irrelevant response patterns. That is, given the many factors that can influence a brain image, it may be that cognitive states and brain image activations are, in actuality, only weakly associated. Other cryptic, uncontrolled intervening factors may account for much, if not all, of the observed findings. Furthermore, differences in the localization patterns observed from one experiment to the next nowadays seems to reflect the inescapable fact that most of the brain is involved in virtually any cognitive process.
Uttal (2012: 86) also warns about individual variability throughout the day, writing:
However, based on these findings, McGonigle and his colleagues emphasized the lack of reliability even within this highly constrained single-subject experimental design. They warned that: “If researchers had access to only a single session from a single subject, erroneous conclusions are a possibility, in that responses to this single session may be claimed to be typical responses for this subject” (p. 708).
The point, of course, is that if individual subjects are different from day to day, what chance will we have of answering the “where” question by pooling the results of a number of subjects?
That such neural activations gleaned from neuroimaging studies vary from individual to individual, and even by time of day within an individual, means that these differences are not accounted for in such group analyses (meta-analyses). “… the pooling process could lead to grossly distorted interpretations that deviate greatly from the actual biological function of an individual brain. If this conclusion is generally confirmed, the goal of using pooled data to produce some kind of mythical average response to predict the location of activation sites on an individual brain would become less and less achievable” (Uttal, 2012: 88).
Clearly, individual differences in brain imaging are not stable and they change day to day, hour to hour. Since this is the case, how does it make sense to pool (meta-analyze) such data and then point to a few brain images as important for X if there is such large variation in individuals day to day? Neuroimaging data is extremely variable, which I hope no one would deny. So when such studies are meta-analyzed, inter- and intrasubject variation is obscured.
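Uttal’s worry about pooling can be illustrated with a toy example. The numbers below are hypothetical, chosen only to make the point: if subjects’ activation peaks cluster in two different places, the pooled “average” lands where no individual subject actually showed a peak.

```python
# Toy illustration of the "mythical average" problem in pooled imaging data.
# Suppose half the subjects show an activation peak near x = -20 mm and half
# near x = +20 mm along some axis (all coordinates hypothetical).
peaks = [-22, -20, -19, -21, 19, 20, 22, 21]

# The pooled location, the "average activation site".
pooled_mean = sum(peaks) / len(peaks)

# The pooled mean sits at 0, yet the nearest actual subject peak is 19 mm
# away: the "average" describes no individual brain.
print(pooled_mean)                               # 0.0
print(min(abs(p - pooled_mean) for p in peaks))  # 19.0
```

The average is a statistical artifact of the bimodal data, which is exactly the distortion Uttal warns pooling can introduce.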
The idea of an average or typical “activation region” is probably nonsensical in light of the neurophysiological and neuroanatomical differences among subjects. Researchers must acknowledge that pooling data obscures what may be meaningful differences among people and their brain mechanisms. However, there is an even more negative outcome. That is, by reifying some kinds of “average,” we may be abetting and preserving some false ideas concerning the localization of modular cognitive function (Uttal, 2012: 91).
So when we are dealing with the raw neuroimaging data (i.e., the unprocessed locations of activation peaks), the graphical plots provided of the peaks do not lead to convergence onto a small number of brain areas for that cognitive process.
… inconsistencies abound at all levels of data pooling when one uses brain imaging techniques to search for macroscopic regional correlates of cognitive processes. Individual subjects exhibit a high degree of day-to-day variability. Intersubject comparisons between subjects produce an even greater degree of variability.
The overall pattern of inconsistency and unreliability that is evident in the literature to be reviewed here again suggests that intrinsic variability observed at the subject and experimental level propagates upward into the meta-analysis level and is not relieved by subsequent pooling of additional data or averaging. It does not encourage us to believe that the individual meta-analyses will provide a better answer to the localization of cognitive processes question than does any individual study. Indeed, it now seems plausible that carrying out a meta-analysis actually increases variability of the empirical findings (Uttal, 2012: 132).
So, since reliability is low at all levels of neuroimaging analysis, it is very likely that the relations between particular brain regions and specific cognitive processes have not been established and may not even exist. The numerous reports purporting to find such relations may instead be reporting random and quasi-random fluctuations in extremely complex systems.
Construct validity (CV) is “the degree to which a test measures what it claims, or purports, to be measuring.” A “construct” is a theoretical psychological entity. So CV in this instance refers to whether IQ tests test intelligence. We accept that a measure captures an unseen function when differences in the measure are mechanistically related to differences in that function, e.g., breath alcohol and level of consumption, or the height of a mercury column and blood pressure. These measures are valid because they rely on well-established theoretical models. There is no such theory for individual intelligence differences (Richardson, 2012), so IQ tests cannot be construct valid.
The accuracy of thermometers was established without circular reliance on the instrument itself. Thermometers measure temperature. IQ tests (supposedly) measure intelligence. There is a difference between these two, though: the reliability of thermometers measuring temperature was established without circular reliance on the thermometer itself (see Chang, 2007).
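The contrast can be made concrete. For a mercury thermometer, the reading is tied to temperature by an independent physical model; in idealized form (ignoring the expansion of the glass):

```latex
\Delta h = \frac{\beta V_0}{A}\,\Delta T
```

where \(\beta\) is mercury’s volumetric expansion coefficient, \(V_0\) the bulb volume, and \(A\) the capillary’s cross-sectional area. The instrument’s output is deducible from a theory of the quantity being measured; nothing analogous exists linking IQ items to “intelligence.”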
In regard to IQ tests, it is proposed that the tests are valid since they predict school performance, adult occupation levels, income, and wealth. This, though, is circular reasoning and does not establish that IQ tests are valid measures (Richardson, 2017). IQ tests rely on other tests in the attempt to prove they are valid. Unlike thermometers, which were validated without circular reliance on the instrument itself, IQ tests are held to be valid because they predict scores on other tests and life success. But IQ and other similar tests are different versions of the same test, so they cannot be validated on that basis: a new test’s “validity” consists in its agreement with previous IQ tests, such as the Stanford-Binet. This is because “Most other tests have followed the Stanford–Binet in this regard (and, indeed are usually ‘validated’ by their level of agreement with it; Anastasi, 1990)” (Richardson, 2002: 301). How strange: new tests are validated by their agreement with other, non-construct-valid tests, which does not, of course, establish the validity of IQ tests.
IQ tests are constructed by excising items that discriminate between better and worse test takers, meaning, of course, that the bell curve is not natural, but forced (see Simon, 1997). Humans make the bell curve, it is not a natural phenomenon re IQ tests, since the first tests produced weird-looking distributions. (Also see Richardson, 2017a, Chapter 2 for more arguments against the bell curve distribution.)
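The point about the forced bell curve can be sketched with a toy simulation. Everything below is hypothetical: I assume a simple Rasch-style item-response model, a normally distributed latent “exposure/ability” factor, and two hypothetical item pools, one unselected and mostly easy, one pre-selected for roughly 50% pass rates, which is how test constructors engineer the distribution.

```python
import math
import random
import statistics

random.seed(42)

# Rasch-style item response (hypothetical model): the probability of
# passing an item depends on the gap between ability and item difficulty.
def passes(ability, difficulty):
    return random.random() < 1 / (1 + math.exp(-(ability - difficulty)))

takers = [random.gauss(0, 1) for _ in range(2000)]

# An unselected pool of mostly easy items piles scores up near the
# 40-item ceiling, giving a skewed, non-bell-shaped distribution...
easy_pool = [random.uniform(-4, -1) for _ in range(40)]
raw = [sum(passes(a, d) for d in easy_pool) for a in takers]

# ...while keeping only items that roughly half the takers pass
# (difficulty near the middle of the ability distribution) centers
# the scores mid-range, manufacturing the familiar bell shape.
kept_pool = [random.uniform(-0.5, 0.5) for _ in range(40)]
built = [sum(passes(a, d) for d in kept_pool) for a in takers]

print(statistics.median(raw))    # near the 40-item ceiling
print(statistics.median(built))  # near the middle of the 0-40 range
```

The shape of the score distribution here is a product of which items the constructor keeps, not of anything discovered in the test-takers, which is the sense in which the bell curve is forced rather than natural.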
Finally, Richardson and Norgate (2014) write:
In scientific method, generally, we accept external, observable, differences as a valid measure of an unseen function when we can mechanistically relate differences in one to differences in the other (e.g., height of a column of mercury and blood pressure; white cell count and internal infection; erythrocyte sedimentation rate (ESR) and internal levels of inflammation; breath alcohol and level of consumption). Such measures are valid because they rely on detailed, and widely accepted, theoretical models of the functions in question. There is no such theory for cognitive ability nor, therefore, of the true nature of individual differences in cognitive functions.
That “[t]here is no such theory” of cognitive ability is admitted even by lead IQ-ist Ian Deary in his 2001 book Intelligence: A Very Short Introduction, in which he writes, “There is no such thing as a theory of human intelligence differences—not in the way that grown-up sciences like physics or chemistry have theories” (Richardson, 2012). This is yet another barrier to establishing IQ’s validity, since there is no theory of human intelligence differences.
In sum, neuroimaging meta-analyses (like Jung and Haier, 2007; see also Richardson and Norgate, 2007 in the same issue, pg 162-163) do not show what they purport to show, for numerous reasons. (1) There are, of course, consequences of malnutrition for brain development, and lower classes are more likely to not have their nutritional needs met (Ruxton and Kirk, 1996); (2) lower classes are more likely to be exposed to substance abuse (Karriker-Jaffe, 2013), which may well impact brain regions; (3) “Stress arising from the poor sense of control over circumstances, including financial and workplace insecurity, affects children and leaves “an indelible impression on brain structure and function” (Teicher 2002, p. 68; cf. Austin et al. 2005)” (Richardson and Norgate, 2007: 163); and (4) working-class attitudes are related to poor self-efficacy beliefs, which also affect test performance (Richardson, 2002). So, Jung and Haier’s (2007) theory “merely redescribes the class structure and social history of society and its unfortunate consequences” (Richardson and Norgate, 2007: 163).
In regard to neuroimaging, pooling together (meta-analyzing) numerous studies is fraught with conceptual and methodological problems, since a high degree of individual variability exists. Thus, attempting to find “average” brain differences between individuals fails, and the meta-analytic technique (used, e.g., by Jung and Haier, 2007) does not find what its users want to find: average brain areas where, supposedly, cognition occurs across individuals. Meta-analyzing such disparate studies does not show an “average” location where cognitive processes occur and thus cause differences in IQ test-taking. Reductionist neuroimaging studies do not, as is popularly believed, pinpoint where cognitive processes take place in the brain; such localizations have not been established and may not even exist.
Neuroreductionism does not work; attempts to reduce cognitive processes to different regions of the brain, even using the meta-analytic techniques discussed here, fail. There “probably cannot” be neuroreductionist explanations for cognition (Uttal, 2014), and so using these studies to attempt to pinpoint where in the brain cognition supposedly occurs, for such ancillary things as IQ test-taking, fails. (Neuro)reductionism fails.
Since there is no theory of individual differences in IQ, IQ tests cannot be construct valid. Even if there were such a theory, IQ tests would still not be construct valid, since a mechanistic relation between IQ scores and the function in question would also need to be established. Attempts at validating IQ tests rely on correlations with other tests and older IQ tests; but that is precisely what is under contention, so correlating with older tests does not give IQ tests the requisite validity to make the claim “IQ tests test intelligence” true. IQ does not even measure the ability for complex cognition; real-life tasks are more complex than the most complex items on any IQ test (Richardson and Norgate, 2014b).
Now, having said all that, the argument can be formulated very simply:
Premise 1: If the claim “IQ tests test intelligence” is true, then IQ tests must be construct valid.
Premise 2: IQ tests are not construct valid.
Conclusion: Therefore, the claim “IQ tests test intelligence” is false. (modus tollens, P1, P2)
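For what it’s worth, the argument’s validity (though not, of course, the truth of its premises) can be checked mechanically. Here is a minimal sketch in Lean 4, where `TestsIntelligence` and `ConstructValid` are propositional placeholders I introduce for the claims in P1 and P2:

```lean
-- Placeholders for "IQ tests test intelligence" and
-- "IQ tests are construct valid".
variable (TestsIntelligence ConstructValid : Prop)

-- P1: if IQ tests test intelligence, they must be construct valid.
-- P2: IQ tests are not construct valid.
-- C:  therefore, IQ tests do not test intelligence (modus tollens).
theorem iq_tests_do_not_test_intelligence
    (p1 : TestsIntelligence → ConstructValid)
    (p2 : ¬ConstructValid) : ¬TestsIntelligence :=
  fun h => p2 (p1 h)
```

The proof term is just modus tollens spelled out: assuming the tests did test intelligence, P1 would yield construct validity, contradicting P2.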