For the past 15 years, neuroscientist Suzana Herculano-Houzel has been revolutionizing the way we look at the human brain. In 2005, Herculano-Houzel and Lent pioneered a new way to ascertain the neuronal make-up of brains: dissolving brains into soup and counting the neurons in it. Herculano-Houzel (2016: 33-34) describes it so:
Because we [Herculano-Houzel and Lent] were turning heterogeneous tissue into a homogeneous—or “isotropic”—suspension of nuclei, he proposed we call it the “isotropic fractionator.” The name stuck for lack of any better alternative. It has been pointed out to me by none other than Karl Herrup himself that it’s a terribly awkward name, and I agree. Whenever I can (which is not often because journal editors don’t appreciate informality), I prefer to call our method of counting cells what it is: “brain soup.”
So, using this method, we soon came to know that humans have 86 billion neurons. This flew in the face of the accepted wisdom that humans have 100 billion neurons in the brain. However, when Herculano-Houzel searched for the original reference for this claim, she came up empty-handed. The claim that we have 100 billion neurons “had become such an established ‘fact’ that neuroscientists were allowed to start their review papers with generic phrases to that effect without citing references. It was the neuroscientist’s equivalent to stating that genes were made of DNA: it had become a universally known ‘fact’” (Herculano-Houzel, 2016: 27). Herculano-Houzel (2016: 27) further states: “Digging through the literature for the original studies on how many cells brains are made of, the more I read, the more I realized that what I was looking for simply didn’t exist.”
So this “fact” that the human brain is made up of 100 billion neurons was so entrenched in the literature that it became something like common knowledge—like the fact that the sun is 93 million miles from earth—that did not need a reference in the scientific literature. Herculano-Houzel asked her 2005 co-author Roberto Lent, who authored a textbook called 100 Billion Neurons, if he knew where the number came from, but of course he didn’t. Subsequent editions, though, added a question mark, making the title of the text 100 Billion Neurons? (Herculano-Houzel, 2016: 28).
So, using this method, we now know that the cellular composition of the human brain is just what is expected for a brain our size (Herculano-Houzel, 2009). According to the encephalization quotient (EQ) first used by Harry Jerison, humans have an EQ of between 7 and 8—the largest for any mammal. And so, since humans are the most intelligent species on earth, this must account for Man’s exceptional abilities. But does it?
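The EQ is just a ratio: observed brain mass over the brain mass expected for an average mammal of the same body mass. A minimal sketch, assuming Jerison's commonly cited mammalian expectation of 0.12 × (body mass in grams)^(2/3) and illustrative human masses (the constants and masses are my assumptions, not figures from this post):

```python
def encephalization_quotient(brain_g: float, body_g: float) -> float:
    """Jerison-style EQ: observed brain mass divided by the mass
    expected for an average mammal of that body mass, E = 0.12 * P^(2/3)."""
    expected_brain_g = 0.12 * body_g ** (2 / 3)
    return brain_g / expected_brain_g

# Illustrative values: ~1,350 g human brain, ~65 kg (65,000 g) body.
human_eq = encephalization_quotient(1350, 65000)
```

With these rough inputs the result comes out close to 7, in line with the 7–8 range quoted above.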
Herculano-Houzel et al (2007) showed that it isn’t that humans have a larger brain than expected, as popularly believed; rather, the great apes—more specifically orangutans and gorillas—have bodies too big for their brains. So the human brain is nothing but a linearly scaled-up primate brain—humans have the number of neurons expected for a primate brain of our size (Herculano-Houzel, 2012).
So Herculano-Houzel (2009) writes that “If cognitive abilities among non-human primates scale with absolute brain size (Deaner et al., 2007) and brain size scales linearly across primates with its number of neurons (Herculano-Houzel et al., 2007), it is tempting to infer that the cognitive abilities of a primate, and of other mammals for that matter, are directly related to the number of neurons in its brain.” Deaner et al (2007) showed that cognitive ability in non-human primates “is not strongly correlated with neuroanatomical measures that statistically control for a possible effect of body size, such as encephalization quotient or brain size residuals. Instead, absolute brain size measures were the best predictors of primate cognitive ability.” Herculano-Houzel et al (2007), meanwhile, showed that brain size scales linearly across primates with the number of neurons—as brain size increases, so does the neuronal count of that primate brain.
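A linear scaling rule means neuron count grows in direct proportion to brain mass, so a single data point fixes the whole line. A cartoon of the rule, with the constant back-derived from the 86-billion-neuron, roughly 1.5 kg human figures (the 1.5 kg mass and the 0.4 kg ape brain below are illustrative assumptions, not the fitted equations from the papers):

```python
HUMAN_BRAIN_KG = 1.5   # approximate human brain mass (assumption)
HUMAN_NEURONS = 86e9   # Herculano-Houzel's count

# Under a linear rule N = k * M, one data point determines the constant k.
K = HUMAN_NEURONS / HUMAN_BRAIN_KG

def predicted_neurons(brain_kg: float) -> float:
    """Neurons expected for a primate brain of the given mass,
    assuming the linear primate scaling rule N = k * M."""
    return K * brain_kg

# Illustrative: a ~0.4 kg great-ape brain would be expected to hold ~23 billion neurons.
chimp_estimate = predicted_neurons(0.4)
```

The point is only that, on a linear rule, relative brain mass and relative neuron count move in lockstep across primates.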
This can be seen in Fonseca-Azevedo and Herculano-Houzel’s (2012) study on the metabolic constraints on humans and gorillas. Humans cook food while great apes eat uncooked plant foods. Larger animals generally have larger brains. However, gorillas have larger bodies than we do but smaller brains than expected, while humans have a smaller body and a bigger brain. This comes down to the diets the two species eat: gorillas spend about 8-10 hours per day feeding, while humans, if they had the same number of neurons but ate a raw, plant-based diet, would need to feed for about 9 hours a day to sustain a brain with that many neurons. This constraint, however, was overcome by Homo erectus and his ability to cook food. Since he could cook food, he could afford a large brain with more neurons. Fonseca-Azevedo and Herculano-Houzel (2012) write that:
Given the difficulties that the largest great apes have to feed for more than 8 h/d (as detailed later), it is unlikely, therefore, that Homo species beginning with H. erectus could have afforded their combinations of MBD and number of brain neurons on a raw diet.
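The metabolic argument is, at bottom, a rate calculation: required feeding hours equal total daily energy cost divided by the energy a raw diet delivers per foraging hour. A sketch with illustrative numbers chosen to land near the figures in the text (the per-neuron cost, body cost, and intake rate are my assumptions, not values from the paper):

```python
KCAL_PER_BILLION_NEURONS = 6.0  # assumed rough daily cost per 1e9 neurons

def feeding_hours(body_kcal_per_day: float, neurons_billions: float,
                  kcal_per_foraging_hour: float) -> float:
    """Hours per day of raw-food feeding needed to cover body plus brain costs."""
    brain_kcal = neurons_billions * KCAL_PER_BILLION_NEURONS
    return (body_kcal_per_day + brain_kcal) / kcal_per_foraging_hour

# Illustrative: ~1,700 kcal/day for the body, 86 billion neurons,
# ~250 kcal extracted per hour of raw-plant foraging.
hours = feeding_hours(1700, 86, 250)
```

With these assumed inputs the answer comes out near the 9 hours a day cited above; cooking, by raising the kcal-per-hour denominator, is what collapses that number to something feasible.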
That cooking food unlocks a greater amount of energy can be seen in Richard Wrangham’s studies. Because the process of cooking gelatinizes the protein in meat, it makes meat easier to chew and therefore to digest. This same denaturation of proteins occurs in vegetables, too. So the claim that cooked food (a form of processing, along with using tools to mash food) yields fewer usable calories (kcal) than raw food is false. It was the cooking of food (meat) that led to the expansion of the human brain—and, of course, allowed our linearly scaled-up primate brain to afford so many neurons. Large brains with a high neuronal count are extraordinarily expensive, as shown by Fonseca-Azevedo and Herculano-Houzel (2012).
Erectus had smaller teeth, reduced bite force, reduced chewing muscles, and a relatively smaller gut compared to other species of Homo. Zink and Lieberman (2016) show that slicing and mashing meat and underground storage organs (USOs) would decrease the number of chews per year by 2 million (13 percent), while the total masticatory force would be reduced by about 15 percent. Further, by slicing and pounding foodstuffs into 41 percent smaller particles, the number of chews would be reduced by 5 percent and the masticatory force by 12 percent. So, of course, it was not only cooking that led to the changes we see in erectus compared to others; it was also the beginning of food processing (slicing and mashing are forms of processing) which led to these changes. (See also Catching Fire: How Cooking Made Us Human by Wrangham, 2013 for the evidence that cooking catapulted our brains and neuronal capacity to the size they are now, along with Wrangham, 2017.)
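The chewing figures imply a baseline the paper's percentages are taken against: if 2 million fewer chews is a 13 percent reduction, the unprocessed baseline is roughly 15 million chews per year. A quick arithmetic check (the baseline is derived from the two quoted numbers, not stated here):

```python
reduction_chews = 2_000_000  # chews saved per year by slicing and mashing
reduction_frac = 0.13        # stated as a 13 percent reduction

# Back out the implied annual baseline: reduction = baseline * fraction.
baseline_chews_per_year = reduction_chews / reduction_frac  # ~15.4 million
chews_after_processing = baseline_chews_per_year - reduction_chews
```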
So, if the neuronal count of a brain is directly related to the cognitive ability that brain is capable of, then, since Herculano-Houzel and Kaas (2011) showed that the modern range of neurons was already found in heidelbergensis and neanderthalensis, those species had similar cognitive potential to us. This would then mean that “Compared to their societies, our outstanding accomplishments as individuals, as groups, and as a species, in this scenario, would be witnesses of the beneficial effects of cultural accumulation and transmission over the ages” (Herculano-Houzel and Kaas, 2011).
The diets of Neanderthals and humans, while similar (differing where the availability of foods differed), are nevertheless a large reason why both have such large brains with a large number of neurons. Though it must be said that there is no progress in hominin brain evolution (contra the evolutionary progressionists), as brain size is predicated on the available food and its nutritional quality (Montgomery et al, 2010).
But there is a problem for Herculano-Houzel’s thesis that cognitive ability scales up with the absolute number of neurons in the cerebral cortex. Mortensen et al (2014) used the optical fractionator (not to be confused with the isotropic fractionator) and came to the conclusion that “the long-finned pilot whale neocortex has approximately 37.2 × 10⁹ neurons, which is almost twice as many as humans, and 127 × 10⁹ glial cells. Thus, the absolute number of neurons in the human neocortex is not correlated with the superior cognitive abilities of humans (at least compared to cetaceans) as has previously been hypothesized.” This throws a wrench in Herculano-Houzel’s thesis—or does it?
There are a couple of glaring problems here—most importantly, it is not clear how many sections of the cortex Mortensen et al (2014) actually sampled. They also lean on the flawed stereological estimate of Eriksen and Pakkenberg (2007), which showed that the Minke whale had an estimated 13 billion cortical neurons, while Walloe et al (2010) showed that the harbor porpoise had 15 billion cortical neurons with an even smaller cortex. These three studies are all from the same research team, using the same stereological methods, so Herculano-Houzel’s (2016: 104-106) comments apply:
However, both these studies suffered from the same unfortunately common problem in stereology: undersampling, in one case drawing estimates from only 12 sections out of over 3,000 sections of the Minke whale’s cerebral cortex, sampling a total of only around 200 cells from the entire cortex, when it is recommended that around 700-1000 cells be counted per individual brain structure. With such extreme undersampling, it is easy to make invalid extrapolations—like trying to predict the outcome of a national election by consulting just a small handful of people.
It is thus very likely, given the undersampling of these studies and the neuronal scaling rules that apply to cetartiodactyls, that even the cerebral cortex of the largest whales is a fraction of the average 16 billion neurons that we find in the human cerebral cortex.
It seems fitting that great apes, elephants, and probably cetaceans have similar numbers of neurons in the cerebral cortex, in the range of 3 to 9 billion: fewer than humans have, but more than all other mammals do.
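The undersampling worry can be made concrete with a small simulation: when neuron counts vary across cortical sections, extrapolating the whole-cortex total from only 12 of 3,000 sections produces estimates that scatter widely around the truth. A sketch with made-up per-section numbers (the lognormal densities are purely illustrative, not data from any of these studies):

```python
import random

random.seed(42)

N_SECTIONS = 3000
# Hypothetical per-section neuron counts: heterogeneous across the cortex.
sections = [random.lognormvariate(13, 1.0) for _ in range(N_SECTIONS)]
true_total = sum(sections)

def extrapolated_total(sample_size: int) -> float:
    """Estimate the whole-cortex total from a small random sample of sections,
    as an undersampled stereological study effectively does."""
    sample = random.sample(sections, sample_size)
    return sum(sample) / sample_size * N_SECTIONS

# Repeat the 12-section estimate many times and see how far off it can land.
estimates = [extrapolated_total(12) for _ in range(1000)]
worst_ratio = max(max(estimates) / true_total, true_total / min(estimates))
```

Across repeated draws the worst estimates miss the true total by large factors, which is the election-poll analogy in the quote made quantitative.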
Kazu et al (2014) state that “If the neuronal scaling rules for artiodactyls extend to all cetartiodactyls, we predict that the large cerebral cortex of cetaceans will still have fewer neurons than the human cerebral cortex.” Artiodactyls are cousins of cetaceans—the order is called Cetartiodactyla since it is thought that whales evolved from artiodactyls. So if they did evolve from artiodactyls, then the neuronal scaling rules would apply to them (just as humans evolved from other primates and the neuronal scaling rules apply to us). So Kazu et al (2014) predict the “cerebral cortex of Phocoena phocoena, Tursiops truncatus, Grampus griseus, and Globicephala macrorhyncha, at 340, 815, 1,127, and 2,045 cm³, to be composed of 1.04, 1.75, 2.11, and 3.01 billion neurons, respectively.” So the predicted number of cortical neurons in the pilot whale is around 3 billion—nowhere near the staggering number that humans have (16 billion).
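These predictions trace a power law, and fitting N = a·V^b to the four (volume, neurons) pairs quoted above recovers an exponent well below 1—which is why even an enormous cetacean cortex is predicted to hold relatively few neurons. A sketch using only the numbers in the quote (the 3,000 cm³ extrapolation target is my own illustrative choice):

```python
import math

# (cortical volume in cm^3, predicted cortical neurons in billions), from Kazu et al (2014)
data = [(340, 1.04), (815, 1.75), (1127, 2.11), (2045, 3.01)]

# Least-squares fit of log N = log a + b * log V.
xs = [math.log(v) for v, _ in data]
ys = [math.log(n) for _, n in data]
n = len(data)
mx, my = sum(xs) / n, sum(ys) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = math.exp(my - b * mx)

# Extrapolate to a hypothetical very large cortex of 3,000 cm^3.
predicted_billions = a * 3000 ** b
```

The fitted exponent is roughly 0.6, so a cortex nine times the volume of the porpoise's yields only a few times its neurons—still a small fraction of the human cortex's 16 billion.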
Humans have the most cortical neurons of any animal on the planet—and this, according to Herculano-Houzel and her colleagues, accounts for the human advantage. Studies that purport to show that certain species of cetaceans have similar—or more—cortical neurons than humans rest on methodological flaws. The neuronal scaling rules that Herculano-Houzel and colleagues have for cetaceans predict far, far fewer cortical neurons in these species. It is for this reason that studies showing similar—or more—cortical neurons in other species that do not use the isotropic fractionator must be viewed with extreme caution.
However, if, when Herculano-Houzel and colleagues do finally use the isotropic fractionator on pilot whales, their prediction does not come to pass and the count instead falls in line with that of Mortensen et al (2014), this would not, in my opinion, cast doubt on her thesis. One must remember that cetaceans have completely different body plans from humans—most glaringly, the fact that we have hands to manipulate the world with. Still, Fox, Muthukrishna, and Shultz (2017) show that whales and dolphins have human-like cultures and societies, using tools and passing that information down to future generations—just as humans do.
In any case, I believe the prediction derived from Kazu et al (2014) will be borne out: substantially fewer cortical neurons than in humans. There is no logical reason to accept the cortical neuronal estimates from the aforementioned studies, since they undersampled the cortex. Herculano-Houzel’s thesis still stands—what sets humans apart from other animals is the number of neurons packed tightly into the cerebral cortex. The human brain is not that special.
The human advantage, I would say, lies in having the largest number of neurons in the cerebral cortex than any other animal species has managed—and it starts by having a cortex that is built in the image of other primate cortices: remarkable in its number of neurons, but not an exception to the rules that govern how it is put together. Because it is a primate brain—and not because it is special—the human brain manages to gather a number of neurons in a still comparatively small cerebral cortex that no other mammal with a viable brain, that is, smaller than 10 kilograms, would be able to muster. (Herculano-Houzel, 2016: 105-106)