Author Archives: RaceRealist
The Concept of “More Evolved”: A Reply to Pumpkin Person
1000 words
Recently, PumpkinPerson has been stating that one population can be ‘more evolved’ than another, which doesn’t make any biological sense. PP’s basic thesis is that since we are the last branch on the tree compared to the lifeforms that came before Homo sapiens, we are ‘more evolved’ than the other organisms on the planet. I get where he’s coming from; he’s just extremely wrong.
Organisms evolve to better adapt to their environment through Natural Selection (NS). NS does select for positive traits; however, evolution is not a linear process. PP also claims that “evolution is progressive”. That couldn’t be further from the truth. Stating that evolution is “progressive” implies that evolution through NS is progressing toward an “endgame”. But we know there is no “endgame” in evolution; evolution just happens.
Evolution is not progressive. NS may also leave a population with traits not suited to its environment, as NS is “not all-powerful”: selecting for one advantageous trait may change another trait for the worse. (See “Misconceptions on Evolutionary Trees” from Berkeley, which describes exactly the mistake PP made.)
PP asks “Who is most evolved?”
No organism is “more evolved” than another. NS selects for traits that are advantageous in the current environment (and it can select for negative traits as well). Because of this, the word “superior” and the phrase “more evolved” are meaningless, whether we are comparing human races to one another or humanity as a whole to the other lifeforms on the planet.
PP quotes Rushton as saying:
“One theoretical possibility is that evolution is progressive, and that some populations are more advanced than others.” J.P. Rushton, 1989
We know that evolution is not progressive, so it follows that some populations are not more advanced than others. Genetic “superiority” can be judged subjectively, but not objectively, as each organism has different strengths and weaknesses owing to its environment.
PP then implies that bacteria are “less evolved” than we are. However, with recent breakthroughs from the Human Microbiome Project (HMP), we see the huge role that gut microbiota play in communicating with the brain, how antibiotics that kill gut microbiota also stop the growth of new brain cells, and how altered gut microbiota cause obesity. Given the uses and benefits we keep discovering that involve gut microbiota and human health, can we really say that we’re “more evolved” than these organisms when they account for so many positive benefits for us as a whole?
For another example, cows, using their own genes, would not be able to extract nutrients from the fiber in the food they eat. They need special enzymes to break down the plant cell wall, and evolving the genes for those enzymes would take an extremely long time. This is where gut microbes come in. Trillions of microbes live in the cow’s four stomach compartments. As the food passes back and forth through the mechanical grinding of the cow’s mouth, the microbes living in the cow’s gut process it, and the nutrients are extracted that way.
In this instance, is a cow superior to its gut microbes if those microbes are what make it possible for the cow to digest its food?
PP then asks “Does more evolved mean superior?”
No, it doesn’t. There is no way to quantify this, as evolution is not progressive. Furthermore, saying that one organism is “more evolved” than another doesn’t make any sense since, as noted earlier in this article, each organism is suited to the environment it evolved in through NS.
PP then says that he prefers a three-race model, when a five-race model makes more sense. These populations are “Africa, Europe, Asia, Melanesia and the Americas.”
I assume he would group ‘Natives’ with Asian Mongoloids, but ‘Natives’ have been genetically isolated in the Americas for so long that they formed their own distinct clade away from other populations, with no introgression between them, whereas other populations have admixture from other parts of the world:
Significant genetic input from outside is not noticed in Meso and South American Amerindians according to the phylogenetic analyses; while all world populations (including Africans, Europeans, Asians, Australians, Polynesians, North American Na-Dene Indians and Eskimos) are genetically related. Meso and South American Amerindians tend to remain isolated in the Neighbor-Joining, correspondence and plane genetic distance analyses.
Hence, a five-race model makes more sense, as these populations show genetic differentiation from one another.
Still, others may take the concept of “more evolved” and believe that one race is “more evolved” than another. That’s another wrong statement.
The assumption here is that populations that evolved closer to the equator had evolution “stop” for them due to “ease of lifestyle” (life is easy nowhere). That too makes no evolutionary sense. If that were so, how did Africans evolve the sickle cell trait? Evolution is a constant, ongoing process and does not ‘speed up or slow down’ based on the environments in which ancestral evolution has occurred.
Moreover, r/K selection theory does describe fast and slow life-history strategies, but it has nothing to do with ‘fast or slow evolution within human populations’.
To state that evolution ‘is faster or slower’ in certain populations of humans is like saying ‘evolution has slowed for man since 50kya’ as anti-human-evolutionists have said:
“Something must have happened to weaken the selective pressure drastically. We cannot escape the conclusion that man’s evolution towards manness has suddenly come to a halt.” – Ernst Mayr
“There’s been no biological change in humans in 40,000 or 50,000 years. Everything we call culture and civilization we’ve built with the same body and brain.” – Stephen Jay Gould
Stating that evolution occurs faster in certain populations is the complete opposite of the “evolution stopped for humans 50kya” camp, which we know is not true, as evolution has actually sped up in the last 10kya.
To say that one organism, or population for that matter, is more evolved than another makes no biological sense. Each organism is suited to its own environment and where it evolved. Even then, different organisms evolve different traits depending on what they have to do in that ecosystem to survive. Darwin’s finches are a perfect example of that.
Misconceptions on Calories In and Calories Out
2550 words
(To those from “myproana.com”: DO NOT misconstrue what I wrote here. What I wrote here is perfectly understandable. I am NOT saying that “you have no metabolism”. My point is that low-kcal dieting CAN and WILL destroy your metabolism. The literature on this subject is vast and is waiting for you to read it. If anything is still unclear, please comment and I will answer your questions.)
“Eat less and move more!!! That’s how you lose weight!” What people don’t understand about human metabolism and homeostasis is that when caloric reduction occurs, the body drops its metabolism to match the number of kilocalories (kcal) it is receiving. Thus, weight will plateau and you will need to further decrease caloric consumption to lose more weight. In this article, I will go through what a calorie is, common misconceptions about Calories In and Calories Out, the reasons for metabolic slowdown, the laws of thermodynamics that people invoke whenever this research is brought up, and finally the starvation experiments that prove metabolic slowdown occurs during a decrease in caloric intake and persists after the diet is over.
A kilocalorie is the heat required to raise the temperature of 1 kilogram of water by 1 degree Celsius. This is the definition in play whenever people say ‘Calorie’.
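For concreteness, here is how that definition cashes out in SI units (a minimal sketch; the 4.184 kJ-per-kcal factor is the standard thermochemical conversion):

```python
# A kilocalorie is the heat that raises 1 kg of water by 1 degree Celsius.
# The standard conversion to SI units is 1 kcal = 4.184 kJ.
KJ_PER_KCAL = 4.184

def kcal_to_kj(kcal):
    """Convert kilocalories (food 'Calories') to kilojoules."""
    return kcal * KJ_PER_KCAL

# A 2,000 kcal daily intake expressed in kilojoules:
print(round(kcal_to_kj(2000), 1))  # 8368.0
```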
Misconceptions on kcal in/kcal out
- One of the biggest misconceptions people have about Calories In/Calories Out is that these variables are independent of each other. However, they are extremely dependent variables. When you decrease Calories In, your body decreases Calories Out. Basically, a 20 percent reduction in kcal will result in a 20 percent reduction in metabolism, with the end result being minimal weight loss.
- The next big assumption people have about Calories In and Calories Out is that the Basal Metabolic Rate (BMR) remains stable. Measuring caloric intake is simple; measuring caloric output is a much more complicated process. Whenever Total Daily Energy Expenditure (TDEE) is spoken of, it involves the BMR, the thermic effect of food, nonexercise activity thermogenesis (the energy expenditure of all activities sans sports), excess post-exercise oxygen consumption (EPOC, a measurably increased rate of oxygen intake following increased oxygen depletion), as well as exercise. The TDEE can increase or decrease by as much as 50 percent depending on caloric intake as well as the aforementioned variables.
- The third misconception is that we have conscious control over what we eat. We decide to eat when we are hungry (obviously), but numerous hormonal factors dictate when to eat and when to stop. We stop eating when we are full, which is hormonally mediated. Like breathing, the regulation of body fat is under automatic control. Just as we don’t have to remind ourselves to breathe or remind our heart to beat, we don’t need to remind ourselves to eat. Thus, since hormones control both Calories In and Calories Out, obesity is a hormonal, not a caloric, disorder.
- The fourth misconception is that fat stores are essentially unregulated. However, every single system in the body is regulated. Height is driven by growth hormone; blood sugar is regulated by insulin, glucagon, and numerous other hormones; sexual maturation is regulated by testosterone and estrogen (as well as the hormone leptin, which I will return to later); body temperature is mediated by thyroid-stimulating hormone, among numerous other biological factors. Yet we are told that the production of fat cells is unregulated. This is false. The best-researched hormone governing the storage of fat that we know of is leptin, discovered in 1994. So if hormones dictate fat gain, obesity is a hormonal, not a caloric, disorder.
- And the final misconception is that a calorie is a calorie. This implies that the only important variable on weight gain is caloric intake and thus all foods can be reduced to how much caloric energy they have. But a calorie of potatoes doesn’t have the same effect on the body as a calorie of olive oil. The potatoes will increase the blood glucose level, provoking a response from the pancreas, which olive oil will not. Olive oil is immediately transported to the liver and has no chance to induce an insulin response and so there is no increase in insulin or glucose.
All five of these assumptions have been proven false.
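The first misconception can be made concrete with a toy model (purely illustrative; the one-for-one proportional adaptation is the simplifying assumption from the 20-percent example above, not a clinical formula):

```python
def adapted_calories_out(baseline_tdee, calories_in):
    """Toy model: expenditure falls by the same proportion that
    intake falls below baseline (the 20%-for-20% example above)."""
    cut = max(0.0, 1.0 - calories_in / baseline_tdee)
    return baseline_tdee * (1.0 - cut)

baseline = 2000.0  # kcal/day maintenance
intake = 1600.0    # a 20 percent cut

out = adapted_calories_out(baseline, intake)
# Expenditure drops to match intake, so the apparent deficit vanishes:
print(round(out), round(intake - out))  # 1600 0
```

In this dependent-variables model, cutting intake shrinks the deficit toward zero, which is the mechanism behind the plateau described above.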
[9/21/16 edit:]
Calories in/out implies that during extended caloric restriction, weight (fat) loss will be achieved no matter the type of kcal ingested (fat, CHO, protein, or alcohol; though when alcohol is ingested, the body puts fat storage on hold until all of the alcohol is metabolized. You can see this in chronic drinkers, who are obese a lot of the time; there is a strong link between alcoholism and obesity, as numerous shared pathways lead to both excessive eating and dependence on alcohol and other drugs), as long as the restriction is continued. CICO adherents say that “a calorie is a calorie”, but what’s funny about that statement is that it violates the Second Law of Thermodynamics. Naturally, to CICO adherents, since “a calorie is a calorie”, kcal would be restricted from fat first, since it’s the most calorie-dense macro (alcohol coming in second at 7 kcal per gram). By doing this, CHO is increased, as is recommended by all of the ‘experts’. “Increase CHO, fat leads to heart disease!!!” This isn’t true, but that supposed ‘increased risk of heart disease’ is another reason given for cutting fat. However, when CHO is increased, insulin is spiked, and when insulin is spiked the body doesn’t use its fat stores for energy; it uses the glucose from the carbs.
Putting this all together, let’s say someone’s TDEE is 2,000 kcal per day (14,000 kcal per week) and they reduce it to 1,200 kcal and go on a low-fat, high-carb (LFHC) diet, as is commonly recommended. Insulin remains high, and therefore the fat stores cannot be tapped into. This is due to the CICO mantra (which violates the Second Law of Thermodynamics), “a calorie is a calorie”, which leads people to believe that all calories are ‘equal’ in terms of the hormonal responses they provoke in the body. Take a piece of bread and a teaspoon of olive oil. When you eat the piece of bread, insulin is spiked in response to the glucose from the carbohydrate. When you consume the olive oil, it’s immediately absorbed by the liver, eliciting no insulin spike. On a long-term LFHC diet this will occur consistently, and the body will continuously use CHO for energy rather than the fat stores, as insulin is continuously spiked. Insulin tells the body either to store fat or not to burn it for energy. Eventually, over time, this leads to insulin resistance (though insulin resistance may precede obesity and diabetes) and more metabolic problems, among a myriad of other variables.
As kcal are reduced to 1,200 per day, the body is forced to match its metabolism to what you’re taking in, as it can’t get energy from anywhere else. This happens during any calorie-restricted diet and is why diets are doomed to fail. The same thing happened to The Biggest Loser contestants. Notice how the First Law of Thermodynamics isn’t broken? It’s irrelevant.
See how the mantra “a calorie is a calorie” violates the Second Law of Thermodynamics and fails because the CICO model doesn’t factor insulin, a causal factor in obesity, into the equation?
[End edit]
The disconnect between weight gain and caloric consumption has recently been documented. Ladabaum et al. (2014) examined trends from 1988 to 2010 in obesity, abdominal obesity, physical activity, and caloric consumption in US adults. They discovered that the obesity rate increased at 0.37 percent per year while caloric intake remained virtually the same.
The First Law of Thermodynamics
The first law of thermodynamics states that energy can be neither created nor destroyed in an isolated system (this is important). People often invoke this law to support the Calories In and Calories Out model. Dr. Jules Hirsch says in this NYT article:
There is an inflexible law of physics – energy taken in must exactly equal the number of calories leaving the system when fat storage is unchanged. Calories leave the system when food is used to fuel the body. To lower fat content – reduce obesity – one must reduce calories taken in, or increase output by increasing activity, or both. This is true whether the calories come from pumpkins or peanuts or pâtés de foie gras.
To quote MD Jason Fung, author of The Obesity Code:
But thermodynamics, a law of physics, has minimal relevance to human biology for the simple reason that the human body is not an isolated system. Energy is constantly entering and leaving. In fact, the very act we are most concerned about (eating) puts energy into the system. Food energy is also excreted from the system in the form of stool. Having studied a full year of thermodynamics in university, I can assure you that neither calories nor weight gain were mentioned even a single time. (Fung, 2016: 33)
We assume with the calorie-balancing-scale model that fat gain or fat loss is unregulated; however, no system in the body is unregulated like that. Hormones tightly regulate all bodily functions, and body fat is no exception. The body actually has numerous ways to control body fat. The problem with fat accumulation is the distribution of energy: too much energy is diverted to fat creation as opposed to body-heat production. Most of this is under automatic control, except exercise (and even then, there is a genetic basis for motivated exercise). We can’t decide whether to allocate calories to nail production or to increased stroke volume. These metabolic processes are almost impossible to measure, and thus most people assume they are relatively constant. In particular, Calories Out is assumed not to change in response to Calories In; we treat them as independent variables. But reducing Calories In only works if Calories Out remains constant. What we find instead is that a sudden reduction in Calories In leads to a similar reduction in Calories Out, and no weight is lost as the body balances its energy budget.
Starvation experiments
In 1919, a landmark study was carried out by Francis Benedict. The volunteers agreed to a semi-starvation diet ranging from 1,400 to 2,100 kcal, a reduction of approximately 30 percent from their normal intake. The question was whether decreased caloric intake leads to a decrease in metabolism. The results were shocking.
The subjects experienced a 30 percent reduction in metabolism, their caloric expenditure dropping from an initial 3,000 kcal to 1,950 kcal. A 30 percent reduction in kcal resulted in a 30 percent decrease in metabolism. The First Law of Thermodynamics is not broken.
Towards the end of WWII, Dr. Ancel Keys wanted to improve the understanding of starvation to better help Europe after the war. His subjects, with an average height of 5 feet 10 inches and an average weight of 153 pounds, were normal men; Dr. Keys wanted to see the effects of a semi-starvation diet on men of normal weight. For the first three months of the study, they were given slightly over 3,000 kcal. Over the next six months, they were given 1,570 kcal, and eventually some men were decreased to less than 1,000 kcal a day. They were given a diet high in carbs with little to no animal meat, as those were the conditions in Europe at the time. Moreover, they also had to walk 22 miles a week as exercise. Again, the results were shocking.
Dr. Keys showed that the men had a 40 percent decrease in metabolic rate. The body decreased its metabolism to match the amount of calories consumed. The men showed a 20 percent decrease in strength; a significant decrease in heart rate (from 55 to 35 beats per minute); a 20 percent decrease in stroke volume; a drop in body temperature to 95.8 degrees Fahrenheit (which makes sense, since less caloric consumption means less energy for the body to convert into heat); a halving of physical endurance; and a drop in blood pressure. They became tired and dizzy, and their hair and nails grew extremely brittle. They couldn’t stop thinking about food. Some of them wrote cookbooks, others dreamed about food; they became obsessed with eating. All of these effects trace directly back to the decrease in caloric consumption, as the amount of heat produced by the body fell along with it. In sum, the body responds to a decrease in caloric intake by dropping its metabolism.
Metabolic slow down
Recent data on decreased energy expenditure due to dieting has come from contestants on the show The Biggest Loser, who were followed for six years after the show ended. Fothergill et al. (2016) showed that after six years, most contestants had gained back the weight they originally lost, but their metabolism was still decreased by some 600 kcal per day.
The mean metabolic adaptation had increased to 500 kcal per day, which explains why RMR remained 700 kcal per day below the baseline level despite a 90 lb regain in body weight. The researchers even said that this large metabolic difference couldn’t be explained by the different calorimeter used at the end of the six-year period.
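Metabolic adaptation here is simply the gap between the resting metabolic rate actually measured and the RMR predicted for a person’s current body composition. A back-of-the-envelope version of that calculation (the kcal figures below are hypothetical round numbers for illustration, not the study’s data):

```python
def metabolic_adaptation(measured_rmr, predicted_rmr):
    """Adaptation = measured minus predicted RMR (kcal/day).
    A negative value means metabolism is suppressed below what
    body size alone would predict."""
    return measured_rmr - predicted_rmr

# Hypothetical round numbers for illustration only:
predicted = 2400  # kcal/day predicted for current body composition
measured = 1900   # kcal/day actually measured by calorimetry

print(metabolic_adaptation(measured, predicted))  # -500
```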
Substantial weight loss induces biological changes that promote weight gain.
Moreover, after a period of dieting, your brain panics and thinks it’s starving. During this time, the production of the hunger hormone ghrelin increases. Levels of this hormone rise right before a meal and steadily decrease after it. This is one of the many hormones that control when we’re hungry, and it is one of the many reasons why diets fail and do not work long term.
Our bodies have homeostatic mechanisms that cause us to gain back or lose weight whenever caloric consumption is increased or decreased. The main cause is the body weight set-point which I will cover in a future article.
And a quote from Sandra Aamodt’s book “Why Diets Make Us Fat“:
“Leibel finds that metabolic suppression persists in dieters who have kept weight off for one to six years, so he scoffs at claims that the successful weight loss story disproves his ideas. “If you talk to people who’ve done it – not the studies, but people who actually manage to lose weight and keep it off – they’ll tell you what I’m telling you,” he says: that the only way to achieve this goal was to allow themselves to be hungry all the time while increasing their physical activity substantially. Indeed, his point is supported by data on the eating and exercise habits of people listed in the National Weight Control Registry, who have lost at least thirty pounds and kept it off for one year. A calorie calculator says that Dennis Asbury should have needed 2,100 calories to maintain his weight at 150 pounds, but instead he found that he needed to eat 400 to 500 calories less than that. Such metabolic suppression is the difference between being within the defended range and being below it. Many people blame others for eating too much or exercising too little, assuming incorrectly that both are under voluntary control, but it’s much harder to justify holding people responsible for diet-induced changes in the way the body burns energy.” (Aamodt, 2016, pg. 68)
Conclusion
The fact of the matter is, kcal in and out is completely misunderstood because human metabolism is misunderstood. As we decrease our caloric intake, our body adjusts its metabolism down to match the amount of kcal we are currently consuming. This is why Calories In and Calories Out does not tell the whole story. Our body constantly fights to maintain what it considers normal: its set-point. When thrown out of what the brain considers ‘normal’, the brain, through the hypothalamus, does whatever it can to get us back to that set-point. Thus, obesity is a hormonal, not a caloric, disorder.
The Evolution of Violence: A Look at Infanticide and Rape
1700 words
The pioneer of criminology was a man named Cesare Lombroso, an Italian Jew (a leftover remnant from the Roman days), who had two central theories: 1) that criminal behavior originated in the brain, and 2) that criminals were an evolutionary throwback, a more primitive type of human. Lombroso felt strongly about the rehabilitation of criminals while at the same time believing in the death penalty for “born criminals”. With new advances in criminology and new insights into the brain, it looks like Lombroso was right about his theory of born criminals.
Why are you 100 times more likely to be killed on your birthday? Why are children 50 times more likely to be murdered by their stepfather than biological one? Why do some parents kill their children? Finally, why do men rape not only strangers, but also rape their wives? All of these questions can be answered with evolutionary psychology.
Evolutionarily speaking, antisocial and violent behavior wasn’t a random occurrence. When these actions occurred tens of thousands of years ago, it was because resources were acquired through them. Thus, we can see some modern criminal acts as resource competition. The more resources a man has, the easier it is for him to pass his genes on to the next generation (a big driver of violence). In turn, women are more attracted to males who can provide resources and protection (those who were more antisocial and violent). This also explains prison romances, in which women strike up romances with murderous criminals, attracted to the violence (protection) and resources (theft).
The mugger who robs for a small amount of money is increasing his odds of resource acquisition. Drive-by shootings in violent neighborhoods increase the status of those who survive the shootout. What looks like a simple brawl over nothing may be one man attempting to increase his social dominance. All of these actions have evolutionary causes, and what drives them are our ‘Selfish Genes’.
The more successful genes are the more ruthlessly selfish ones in the struggle for survival, and they drive individual behavior. The individual behaviors that occur due to our selfish genes may be antisocial and violent in nature, which our modern society frowns upon. The name of the game is ‘fitness’: the number of children you can have in your time allotted on Earth. This is all that matters to our genes. Even the accomplishments you might think of, such as completing college or attaining masses of capital, fall back on fitness: with them, your fitness, and the chance that your genetic lineage passes on to the next generation, is greatly enhanced.
Biological fitness can be enhanced in one of two ways. You can have as many children as possible, giving little parental care to each, or you can have fewer children but show more attention and care to them. This is known as r/K Selection Theory. Rushton’s r/K Selection Theory complements Dawkins’ Selfish Gene theory in that the r-strategist maximizes his fitness by having as many children as possible, while the K-strategist increases his fitness by having fewer children than the r-strategist but showing more parental attention. There are, however, instances in which humans kill children, whether it’s a mother killing a newborn baby or a stepfather killing a child. What are the reasons for this?
Killing Kids
The risk of being a homicide victim is highest in the first year of life. Why? Canadian psychologists Daly and Wilson demonstrated an inverse relationship between degree of genetic relatedness and the likelihood of being a victim of homicide. They discovered that the offender and victim are genetically related in only 1.8 percent of all homicides. Therefore, roughly 98 percent of all murders are killings of people who do not share the killer’s genes.
Many stories have been told about ‘wicked stepparents’ in numerous myths and fairytales. But, as we know, a lot of stories have some basis in reality. Children of stepparents are 40 times more likely to suffer abuse at the hands of a stepparent. People who are living together who are unrelated to one another are more likely to kill one another. Even adoptions are more successful when the adopting parents view the child as genetically similar to themselves.
In a study carried out by Maillart et al., it was found that the average age at the offense of filicide was 29.5 years for the mother and 3.5 years for the child. Bourget, Grace, and Whitehurst (2007) showed that one risk factor for infanticide was a second child born to a mother under 20 years of age. The reasoning is simple: at a younger age the mother is more fertile, and thus more attractive to potential mates, so a lost child is easier to replace. The older the woman is, the more sense it makes to hold on to her existing genetic investment, since it’s harder to make up for the genetic loss late in her reproductive life.
Genetic relatedness, fitness, and parental investment show, in part, why filicides and infanticides occur.
Raping Your Wife
There are evolutionary reasons for rape as well. The rape of a non-relative can be looked at as the ultimate form of ‘cheating’ in this selfish game of life. One who rapes doesn’t have to acquire resources in order to attract a mate; he can just ‘take what he wants’ and attempt to spread his genes to the next generation through non-consensual sex. It’s known that rape victims have a comparatively high chance of getting pregnant, with 7.98 percent of rape victims becoming pregnant. (News article) One explanation for this is that rapists may be able to detect how fertile a woman is. Moreover, rapists are more likely to rape fertile women rather than infertile women.
One rapist interviewed by Adrian Raine, author of The Anatomy of Violence, said that he specifically chose ugly women to rape (Raine, 2013: 28). He says that he’s giving ugly women ‘what they want’, which is sex. There is a belief that women actually enjoy the sex, and even orgasm during the rape, even though they strongly resist and fight back during the attack. Reports of orgasm during rape run around 5 to 6 percent (Raine, 2013: 29), but the true number may be higher, since most women are embarrassed to say that they orgasmed during a rape.
Men, as we all know, are more likely than women to engage in no-strings-attached sex. This is due to the ‘burden’ of sex: children. Women are more likely to carefully select a partner who has numerous resources and the ability to protect the family; men don’t have the burden of sticking around to raise the child.
Men are more likely to find a sexual infidelity upsetting, while women are more likely to find an emotional infidelity more distressing. This data on Americans also held true for South Korea, Germany, Japan, and the Netherlands. Men are better than women at detecting infidelity and are more likely to suspect cheating in their spouses (Raine, 2013: 32). The unconscious reason: a man doesn’t want to raise a child who is not genetically related to him.
But this raises another question: why would a man rape his wife? One reason is that when a man discovers his spouse has been unfaithful, he would want to inseminate her as quickly as possible.
There has never in the history of humankind been one example of women banding together to wage war on another society to gain territory, resources or power. Think about it. It is always men. There are about nine male murderers for every one female murderer. When it comes to same-sex homicides, data from twenty studies show that 97 percent of the perpetrators are male. Men are murderers. The simple evolutionary reason is that women are worth fighting for. (Raine, 2013: 32)
A feminist may look at this stat and say “MEN cause all of the violence, MEN hurt women” and attempt to use this data as ‘proof’ that men are violent. Yes, we men are violent, and there is an evolutionary basis for it. However, what feminists who push the ‘all sexes are equal’ card don’t know, is that when they say ‘men are more likely to be murderers’ (which is true), they are actively accepting biological differences between men and women. Most of these differences in crime come down to testosterone. I would also assume that men would be more likely to have the ‘warrior gene’, otherwise known as the MAOA-L gene, which ups the propensity for violence.
The sociobiological model suggests that poorer people kill due to lack of resources. And one reason that men are way more likely to be victims of homicide is because men are in competition with other men for resources.
Going back to the violence against stepchildren that I alluded to earlier, aggression towards stepchildren can be seen as a strategic way of driving unwanted, genetically dissimilar others out of the home so that they do not take up precious resources meant for the next generation bred by the stepfather (Raine, 2013: 34).
Women also have ways to increase their fitness, much of it through sexual selection. Women are known to be ‘worriers’: they rate dangerous and aggressive acts as riskier than men do. Women are also more fearful of bodily injury and more likely to develop phobias of animals. In these situations, women are protecting themselves and their unborn (or born) children, maximizing their chances of survival by being more fearful of things. This can help explain why women are less physically violent than men and why those murder stats are so heavily skewed towards men: biology.
Women compete for their genetic interests with beauty and childbearing. The more beautiful the woman, the better the resources she can acquire from a male, and this will ensure a healthy life for the offspring.
Evolutionary psychology can help explain the differences in murder between men and women. It can also explain why young mothers kill their children and why stepparents are so abusive to, and so much more likely to murder, stepchildren. Of course, a social context is involved, but we need to look at evolutionary causes for what we think we can simply explain. Because it’s, more often than not, more complex than we could imagine. And that complexity is our Selfish Genes doing anything possible to reproduce more copies of themselves through their vehicle: the human body.
Morality and Altruism
Moral reasoning and altruism evolved together. Both of these traits are beneficial to human survival, so they got selected for in human populations. I will show today how moral reasoning and altruism evolved side by side to increase fitness.
As discussed previously in my post The Evolution of Morality, moral reasoning is a post hoc search for reasons to justify judgements we already made. Moral reasoning evolved, according to Jonathan Haidt (2012), because of a bigger brain. Those with bigger brains can better process the environment around them and increase fitness for that population. As the brain grows more complex, more sophisticated thinking emerges. Since rapid and automatic processes drive our brain, those populations with bigger brains show more cognitive sophistication due to more cortical neurons as well as bigger overall brain areas, which lead to increases in intelligence.
Both altruism and morality evolved hand-in-hand. Post hoc moral reasoning helps altruistic acts occur. Since “judgement” and “justification” are separate processes, one does not have to justify a moral act, instead relying on his innate judgement that this is the most beneficial act. The “judgement” that’s made is really the *genes* doing what is best to survive. What survives when self-sacrifice occurs aren’t the bodies, the vehicles for the genes, obviously. The gene only cares about the proliferation of more copies of itself.
Darwin said that those who have the altruistic trait are more evolutionarily successful than those who do not have it. Thus, those populations that have more alleles for altruism will be more evolutionarily successful than those populations without it.
Darwin held that morality evolved in humans because it was a beneficial trait for human social cohesiveness. Without even having a good reason for moral reasoning, just going on gut instinct (which, I believe, in these situations is our selfish genes at work), altruistic acts can occur without a second thought.
If a trait is beneficial to a population, then it will be selected for in that group. Moral reasoning was a trait that was selected for since those with the trait could better aid the group they were a part of by being ‘selfless’.
For instance, when animals care for their young, we don’t say that it’s ‘animal culture’ that causes them to care for their offspring. It’s obviously a trait evolved over time. When an immediate threat occurs, the animal will engage in what looks to be a ‘selfless act’, when in actuality the *selfish genes* are making sure that *copies* of themselves survive.
All human behavioral traits are heritable. So for those blank slaters who believe that all of our behavioral traits are molded by the environment: there is a considerable genetic component involved, and ignoring it takes us further away from the truth of why altruism and morality arose in human populations.
We can see some altruistic-like traits in nonhuman animals. For instance, in bees. The worker bees inherit the queen’s matrigenes, which direct the altruistic behavior of the worker bees toward their female kin. These genes inherited from the queen bee have the worker bees forgo their own reproduction to help rear their siblings. So when the queen dies, the workers can begin to selfishly compete with one another to lay eggs. This selfish behavior is inherited from the father.
Emotional intelligence can also be said to be a form of social intelligence, though it has recently been found that EQ is a mix of high IQ and the Big Five personality traits. Traits that enhanced human social cohesiveness got selected for. For instance, in Eurasia, the Big Five personality traits evolved as those who were more altruistic were better able to survive the harsh Eurasian winters, increasing the frequency of altruistic alleles.
Moral reasoning is the ‘gut instinct’: the person *knows* something is ‘wrong’, they just can’t explain it rationally. This human behavior has an evolutionary basis, which increased human social cohesiveness and eventually led to our complex societies. Altruism would not have evolved without moral reasoning (though the reasoning we construct is post hoc, justifying judgements we already made).
Thus, when speaking of morality with someone attempting to figure out truth, you will hear nonsensical answers. But if you think of moral reasoning as a skill that evolved to further our own agenda, it makes a lot more sense. By keeping your eye on the intuition (what their *genes* want), you can see a person’s motivations for holding the views they do, even though they cannot think of a reason for their belief.
So, I’m proposing that moral reasoning evolved to increase human fitness and social cohesiveness, going hand-in-hand with altruism.
Without these two traits, we wouldn’t be able to build these complex societies; moral reasoning (post hoc or not) and altruism are two of the driving forces behind both our biological evolution and our societal evolution.
Nordicist Fantasies: The Myth of the Blonde-Haired, Blue-Eyed Aryans and the Origins of the Indo-Europeans
750 words
Nordicists say that the Aryans, the Indo-Europeans, had blonde hair and blue eyes. Though, recent genetic evidence shows that the Indo-European languages originated on the Russian steppe, among the Yamnaya people. The originators of the Indo-European languages weren’t blonde-haired and blue-eyed, but dark-haired and dark-eyed. This steppe-origin model, better known as the ‘Kurgan hypothesis’, is now the leading theory for the origin of the Indo-European peoples.
Haak et al (2016) showed that at the beginning of the Neolithic period in Europe (approximately 7 to 8 kya), a closely related group of farmers appeared in Germany, Hungary, and Spain. These ancient farmers were different from the indigenous peoples of the Russian steppe, the Yamnaya, who showed high affinity with a 24,000-year-old Siberian sample (Ancient North Eurasians, ANE). Approximately 5 to 6 kya, farmers throughout Europe had more hunter-gatherer ancestry than their predecessors from the early Neolithic, while the Yamnaya of the Russian steppe descended not only from Eastern European hunter-gatherers but also from a population with Near East ancestry. Further, Y-chromosome haplogroups R1b and R1a traveled into Europe 5000 years ago.
The Late Neolithic Corded Ware people from Germany trace approximately 75 percent of their ancestry to the Yamnaya, which confirms a massive migration from Eastern Europe into the heartland of the continent 4500 years ago. This Yamnaya ancestry persisted in all of the Europeans sampled up until approximately 3000 years ago and is common in all modern-day Europeans. The researchers conclude that this provides evidence for a steppe origin for at least some of the Indo-European languages of Europe.
Haber et al (2016) likewise show, as alluded to above, that the Yamnaya people share distant ancestry with Siberians, which is probably the source of one of the ancient populations that contributed to the modern-day European gene pool (Ancient North Eurasians, West European hunter-gatherers, and Early European farmers from Western Asia, with the Yamnaya people being the fourth population).
Olalde et al (2015) show that, since the Basque people speak a pre-Indo-European language, the expansion of Indo-European languages is unlikely to have begun during the early Neolithic (7 to 8 kya). They, like Haak et al, conclude that this agrees with the hypothesis that the Indo-European languages came out of the East, from the Russian steppe, around 4500 years ago, which is associated with the spread of Indo-European languages into Western Europe.
Finally, it is known that the Yamnaya people had dark skin (relative to today’s Europeans), dark hair, and dark eyes. This directly contradicts the Nordicist fantasy of the blue-eyed, blonde-haired Indo-Europeans, and the genetic evidence speaks against the claim directly:
For rs12913832, a major determinant of blue versus brown eyes in humans, our results indicate the presence of blue eyes already in Mesolithic hunter-gatherers as previously described. We find it at intermediate frequency in Bronze Age Europeans, but it is notably absent from the Pontic-Caspian steppe populations, suggesting a high prevalence of brown eyes in these individuals.
Further, the Yamnaya were a tall population. Since the Yamnaya had a greater genotypic height, the greater average height of Northern Europeans is consistent with their having more Yamnaya ancestry.
The Yamnaya herded cattle and other animals, buried their dead in mounds called kurgans, and may have created some of the world’s first wheeled vehicles; some linguists note that this nomadic population had a word for the wheel. The massive migration from the Russian steppe into Western Europe contributed large amounts of North Eurasian ancestry to today’s Europeans, making the Yamnaya the fourth ancient population responsible for modern-day Europeans.
Modern-day genetic testing is shattering all of these myths that are told about the origins of Europeans and Proto-Indo-European peoples and languages. The ACTUAL basis for most PIE languages is from the Russian steppe, from a relatively (to modern Europe) dark-skinned, dark-haired, and dark-eyed people who then spread into Europe 4500 years ago.
The Nordicist fantasies of the Aryans, the originators of the Proto-Indo-European language, have been put to rest. The idea was originally proposed based on myths and stories, mostly from ancient Indo-European cultures situated thousands of miles away from the original Indo-Europeans (the Yamnaya).
The Kurgan Hypothesis is now the theory largely accepted by the scientific community as identifying the homeland of the Proto-Indo-Europeans. The Yamnaya people thus make a fourth founding population for Europeans, alongside the West European hunter-gatherers, Ancient North Eurasians, and Early European Farmers.
The Evolution of Morality
Summary: Moral reasoning is just a post hoc search for reasons to justify the judgments people have already made. When people are asked why they find certain things morally wrong, they say they cannot think of a reason but still think it is wrong. This has been verified by numerous studies. Moral reasoning evolved as a skill to further social cohesiveness and to further our social agendas. Even across different cultures, those at matching socioeconomic levels show the same moral reasoning. Morality cannot be entirely constructed by children based on their own understanding of harm; thus, cultural learning must play a bigger role than the rationalists had given it. Larger and more complex brains also show more cognitive sophistication in making choices and judgments, confirming a theory of mine that larger brains drive correct choices as well as moral judgments.
The evolution of morality is a much-debated subject in the field of evolutionary psychology. Is it, as the nativists say, innate? Or is it, as the empiricists say, learned? Empiricists, better known as Blank Slatists, believe that we are born with a ‘blank slate’ and thus acquire our behaviors through culture and experience. In 1987, when Jonathan Haidt began studying moral psychology, the field was focused on a third answer: rationalism. Rationalism dictates that children construct morality for themselves, learning right from wrong through their interactions with other children.
Developmental psychologist Jean Piaget focused on the kinds of mistakes that children make when watching water moved between differently shaped glasses. He would, for example, put water into two same-sized glasses and ask children which one held more water. They all said the glasses held the same amount. He then poured the water from one glass into a taller glass and asked the children which glass held more water. Children below the age of 6 or 7 say that there is now more water, since the level is higher in the taller glass. They don’t understand that moving the water into a taller glass doesn’t change the amount of water. Even when parents attempt to explain why the amount of water is the same, the children don’t understand it, because they are not cognitively ready. It’s only when they reach the right age and cognitive stage that they are ready to understand that the amount of water doesn’t change, and then only by playing around with cups of water themselves.
Basically, the understanding of the conservation of volume isn’t innate, nor is it learned from parents. Children figure it out for themselves, but only when their minds are cognitively ready and they are given the right experiences.
Piaget then applied the approach from the water experiment to the development of children’s morality. He played a marble game with them in which he would break the rules and play dumb. The children then responded to his mistakes, correcting him, showing that they had the ability to settle disputes and to respect and change rules. This knowledge grew as the children’s cognitive abilities matured.
Thus, Piaget argued that children’s understanding of morality develops like their understanding of water conservation. He concluded that children’s moral reasoning is self-constructed. You can’t teach 3-year-old children the concept of fairness or of water conservation, no matter how hard you try. They will figure it out on their own through disputes and by doing things themselves, better than any parent could teach them, Piaget argued.
Piaget’s insights were then expanded by Lawrence Kohlberg, who revolutionized the field of moral psychology by developing a set of moral dilemmas that he presented to children of various ages. One example was that of a man who broke into a drug store to steal medication for his ill wife. Is that a morally wrong act? Kohlberg wasn’t interested in whether the children said yes or no, but rather in the reasoning they gave when explaining their answers.
Kohlberg found a six-stage progression in children’s reasoning about the social world that matched what Piaget observed in children’s reasoning about the physical world. Young children judged right and wrong, for instance, by whether a child was punished for their actions: if an adult punished the act, it must have been wrong. Kohlberg called the first two stages the “pre-conventional level of moral judgment”, which corresponds to Piaget’s stage at which children judge the physical world by superficial features.
During elementary school, most children move on from the pre-conventional level and understand and manipulate rules and social conventions. Kids in this stage care more about social conformity, hardly ever questioning authority.
Kohlberg then discovered that after puberty, right when Piaget found that children become capable of abstract thought, some children begin to think for themselves about the nature of authority, the meaning of justice, and the reasoning behind rules and laws. Kohlberg considered these children “‘moral philosophers’ who are trying to work out coherent ethical systems for themselves”, which was the rationalist account of morality at the time. Kohlberg’s most influential finding was that the more morally advanced children were frequently those who had more opportunities for role-taking: putting themselves into another person’s shoes and attempting to feel what the other feels from their perspective.
We can see how Kohlberg’s and Piaget’s work could be used to support an egalitarian, leftist, individualistic worldview.
Kohlberg’s student, Elliot Turiel, then came along. He developed a technique to test for moral reasoning that doesn’t require much verbal skill. His innovation was to tell children stories about other children breaking rules and then ask them a series of yes-or-no questions. Turiel discovered that children as young as five normally say that the child was wrong to break the rule, but that it would be fine if the teacher gave the child permission, or if the act occurred in another school with no such rule.
But when children were asked about actions that harmed people, a different set of responses emerged. They were asked: if a girl pushes a boy off of a swing because she wants to use it, is that OK? Nearly all of the children said that it was wrong, even when they were told that a teacher said it was fine, and even if it occurred in a school with no rule against it. Thus, Turiel concluded, children recognize that rules preventing harm are moral rules, related to “justice, rights, and welfare pertaining to how people ought to relate to one another” (Haidt, 2012, pg. 11). Although children can’t talk like moral philosophers, they were busy sorting social information in a sophisticated way. Turiel realized that this was the foundation of all moral development.
There are many rules and social conventions that have no moral reasoning behind them: for instance, the numerous laws of the Jews in the Old Testament regarding eating or touching the swarming insects of the earth, the many Christians and Jews who believe that cleanliness is next to Godliness, and the Westerners who believe that food and sex have moral significance. If the rationalists are right, then why do so many Westerners moralize actions that don’t harm people?
Due to this, it is argued that there must be more to moral development than children constructing roles as they take the perspectives of others and feel their pain. There MUST be something beyond rationalism (Haidt, 2012, pg. 16).
Richard Shweder then came along and offered the idea that all societies must resolve a small set of questions about how to order society with the most important being how to balance the needs of the individual and group (Haidt, 2012, pg. 17).
Most societies choose a sociocentric, or collectivist, model, while Western societies choose a more individualistic model. There is a direct relationship between consanguinity rates, IQ, and genetic similarity and whether a society is collectivist or individualistic.
Shweder thought that the concepts developed by Kohlberg and Turiel were made by and for those from individualistic societies. He doubted that the same results would occur in Orissa, India, where morality was sociocentric and there was no line separating moral rules from social conventions. Shweder and two collaborators came up with 39 short stories in which someone does something that would violate a commonly held rule in either the US or Orissa. They interviewed 180 children ranging from ages 5 to 13 and 60 adults from Chicago, along with a matched sample of Brahmin children and adults from Orissa and 120 people from lower Indian castes (Haidt, 2012, pg. 17).
In Chicago, Shweder found very little evidence for socially conventional thinking. In plenty of the stories, no harm or injustice occurred, and the Americans said those actions were fine. Basically, if something doesn’t harm an individual, then it can’t be morally condemned, which makes it just a social convention.
Turiel, though, wrote a long rebuttal essay to Shweder pointing out that many of the stories Shweder and his two collaborators presented were trick questions. He brought up how, for instance, in India eating fish is believed to stimulate a person’s sexual appetite and is thus forbidden to a widow: if she eats hot foods she will be more likely to have sex, which would anger the spirit of her dead husband and prevent her from reincarnating on a higher plane. Turiel then argued that if you take into account these ‘informational assumptions’ about the way the world works, most of Shweder’s stories were really moral violations to the Indians, harming people in ways that Americans couldn’t see (Haidt, 2012, pg. 20).
Jonathan Haidt then traveled to Brazil to test which force was stronger: gut feelings about important cultural norms or reasoning about harmlessness. Haidt and one of his colleagues worked for two weeks to translate Haidt’s short stories to Portuguese, which he called ‘Harmless Taboo Violations’.
Haidt then returned to Philadelphia, trained his own team of interviewers, and supervised the data collection for the four groups of subjects in Philadelphia. Across the three cities, he used two levels of social class (high and low), and within each social class were two age groups: children aged 10 to 12 and adults aged 18 to 28.
Haidt found that the responses to the harmless taboo stories could not be attributed to the way he posed the questions or trained his interviewers, since he used two questions directly from Turiel’s experiment and obtained the same results. Upper-class Brazilians looked like Americans on these stories (I would assume because upper-class Brazilians have more European ancestry). However, in one story about breaking a school’s dress code by wearing normal clothes, most middle-class children thought that it was morally wrong to do this. The pattern supported Shweder, showing that the size of the moral-conventional distinction varied across cultural groups (Haidt, 2012, pg. 25).
The second thing that Haidt found was that people responded to harmless taboo stories just as Shweder predicted: upper-class Philadelphians judged them to be violations of social conventions while lower-class Brazilians judged them to be moral violations. Basically, well-educated people in all of the areas Haidt tested were more similar to each other in their response to harmless taboo stories than to their lower-class neighbors.
Haidt’s third finding was that all the differences remained even when controlling for perceptions of harm. That is, he included a probe question at the end of each story: “Do you think anyone was harmed by what [the person in the story] did?” If Shweder’s findings were caused by perceptions of hidden victims, as Turiel proposed, then Haidt’s cross-cultural differences should have disappeared when he removed the subjects who said yes to that question. But when he filtered out those who said yes, he found that the cultural differences got BIGGER, not smaller. This ended up being very strong evidence for Shweder’s claim that morality goes beyond harm. Most of Haidt’s subjects said that the harmless taboo violations were universally wrong, even though they harmed nobody.
Shweder had won the debate. Haidt had replicated Turiel’s findings using Turiel’s methods, but showed that those findings held only for people like Turiel himself: educated Westerners who grew up in an individualistic culture. He showed that morality varied across cultures and that, for most people, morality extended beyond issues of harm and fairness.
It was hard, Haidt argued, for a rationalist to explain these findings. How could children self-construct moral knowledge about disgust and disrespect from their private analyses of harmlessness (Haidt, 2012, pg. 26)? There must be other sources of moral knowledge, such as cultural learning, or innate moral intuitions about disgust and disrespect, as Haidt argued years later.
Yet, surprises were found in the data. Haidt had written the stories carefully to remove all conceivable harm to other people. But, in 38 percent of the 1620 times people heard the harmless offensive story, they said that somebody was harmed.
It was obvious in Haidt’s sample of Philadelphians that the subjects were inventing post hoc fabrications. People normally condemned the action very quickly and didn’t need much time to decide what they thought, but they took a long time to think up a victim in the story.
He also taught his interviewers to correct people when they made claims that contradicted the story. Even when the subjects realized that the victim they constructed in their head was fake, they still refused to say that the act was fine. They, instead, continued to search for other victims. They just could not think of a reason why it was wrong, even though they intuitively knew it was wrong (Haidt, 2012, pg. 29).
The subjects were reasoning, but they weren’t reasoning in search for moral truth. They were reasoning in support of their emotional reactions. Haidt had found evidence for philosopher David Hume’s claim that moral reasoning was often a servant of moral emotions. Hume wrote in 1739: “reason is, and ought to be only the slave of the passions, and can never pretend to any other office than to serve and obey them.”
Judgment and justification are separate processes. Moral reasoning is just a post hoc search for reasons to justify the judgments people have already made.
The two most common answers to where morality comes from are that it’s innate (the nativists) or that it comes from childhood learning (the empiricists), also known as “social learning theory”. The empiricist position, though, is incorrect.
- The moral domain varies by culture. It is unusually narrow in Western, educated, and individualistic cultures. Sociocentric cultures broaden the moral domain to encompass and regulate more aspects of life.
- People sometimes have gut feelings – particularly about disgust – that can drive their reasoning. Moral reasoning is sometimes a post hoc fabrication.
- Morality can’t be entirely self-constructed by children based on their understanding of harm. Cultural learning (social learning theory; Rushton, 1981) or guidance must play a larger role than the rationalists had given it.
(Haidt, 2012, pg 30 to 31)
If morality doesn’t come primarily from reasoning, then that leaves a combination of innateness and social learning. Basically, intuitions come first, strategic reasoning second.
If you think that moral reasoning is something we do to figure out truth, you’ll be constantly frustrated by how foolish, biased, and illogical people become when they disagree with you. But if you think about moral reasoning as a skill we humans evolved to further our social agendas – to justify our own actions and to defend the teams we belong to – then things will make a lot more sense. Keep your eye on the intuitions, and don’t take people’s moral arguments at face value. They’re mostly post hoc constructions made up on the fly crafted to advance one or more strategic objectives (Haidt, 2012, pg XX to XXI).
Haidt also writes on page 50:
As brains get larger and more complex, animals begin to show more cognitive sophistication – choices (such as where to forage today, or when to fly south) and judgments (such as whether a subordinate chimpanzee showed proper deferential behavior). But in all cases, the basic psychology is pattern matching.
…
It’s the sort of rapid, automatic and effortless processing that drives our perceptions in the Muller-Lyer Illusion. You can’t choose whether or not to see the illusion, you’re just “seeing that” one line is longer than the other. Margolis also called this kind of thinking “intuitive”.
This shows that moral reasoning came about due to a bigger brain and that the choices and judgments we make evolved because they better ensured our fitness, not due to ethics.
Moral reasoning evolved for us to increase our fitness on this earth. The field of ethics justifies what benefits group and kin selection with minimal harm to the individual. That is, the explanations people give through moral reasoning are just post hoc searches for reasons to justify their gut feelings, feelings for which they cannot articulate a reason.
Source: The Righteous Mind: Why Good People Are Divided By Politics and Religion
Science Proves It: Fat-shaming Doesn’t Work
2250 words
Milo Yiannopoulos published an article yesterday saying that “fat-shaming works”. It’s clear that he misread the few papers he cites while disregarding the studies stating the opposite, claiming “there is only one serious study”. There is a growing body of research that says otherwise.
He first claims that, armed with the facts he is about to present, you can hurl all the insults you want at fat people and genuinely be helping them. This is objectively wrong.
In the study he’s citing, the researchers performed a qualitative analysis of semi-structured interview data (used when subjects are seen only once and the researchers follow set guidelines in order to obtain reliable, comparable, quality data) on 40 adolescents who had lost at least 10 pounds and maintained their weight loss for at least a year. This guideline came from Wing and Hill (2001), who say that maintaining a 10 percent weight loss for one year counts as successful maintenance. He claims that the abstract says that bullying by the peer group induces weight loss. Though, it’s clear that he didn’t read the abstract correctly, because it says:
In contrast to existing literature, our findings suggest that primary motivating factors for adolescent weight loss may be intrinsic (e.g., desire for better health, desire to improve self-worth) rather than extrinsic. In addition, life transitions (e.g., transition to high school) were identified as substantial motivators for weight-related behavior change. Peer and parental encouragement and instrumental support were widely endorsed as central to success. The most commonly endorsed weight loss maintenance strategies included attending to dietary intake and physical activity levels, and making self-corrections when necessary.
Peer encouragement and instrumental support were two variables that are the keys to success in childhood weight loss maintenance, not fat-shaming as he claims.
The same study found that obese people were more likely to lose weight around “life transitions,” like starting high school. In other words, people start to worry about how others will see them, especially when they need to make a good first impression. Fear of social judgement is key. So keep judging them.
The study didn’t find that at all. In fact, it found the opposite.
According to a new study, while most teens’ weight loss attempts don’t work, the ones who do lose weight successfully, quite simply, do it for themselves, rather than to please their (bullying) peers or (over-pressuring) parents.
He then cites a paper from the UCLA stating that social pressure on the obese (fat-shaming) will lead to positive changes. Some of the pressures referenced are:
If you are overweight or obese, are you pleased with the way you look?
Are you happy that your added weight has made many ordinary activities, such as walking up a long flight of stairs, harder?
The average fat person would say no to the first two.
Are you pleased when your obese children are called “fatty” or otherwise teased at school?
Fair or not, do you know that many people look down upon those excessively overweight or obese, often in fact discriminating against them and making fun of them or calling them lazy and lacking in self-control?
Self-control has a genetic component. In a 30-year follow-up to the Marshmallow Experiment, those who lacked self-control during preschool had a higher chance of becoming obese 30 years later. Analyzing self-reported heights and weights of those who participated in the follow-up (n=164, 57 percent women), the researchers found that the duration of the delay on the gratification task accounted for 4 percent of the variance in BMI between the subjects. They also found that each additional minute of delayed gratification corresponded to a .2-point reduction in BMI.
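A toy simulation (synthetic data, not the study's) shows how those two reported effect sizes hang together: a slope of about 0.2 BMI points per minute of delay, with delay time explaining roughly 4 percent of the variance in BMI.

```python
import random
import statistics

# Toy simulation (synthetic data, NOT the study's): reproduce the reported
# effect sizes from the Marshmallow follow-up -- roughly a 0.2-point BMI
# reduction per minute of delayed gratification, with delay time explaining
# about 4 percent of the variance in adult BMI.
random.seed(42)
n = 20_000
delay = [random.uniform(0, 15) for _ in range(n)]            # minutes waited
bmi = [26 - 0.2 * d + random.gauss(0, 4.2) for d in delay]   # BMI with noise

# Pearson correlation and least-squares slope, computed by hand
mx, my = statistics.fmean(delay), statistics.fmean(bmi)
sxy = sum((x - mx) * (y - my) for x, y in zip(delay, bmi))
sxx = sum((x - mx) ** 2 for x in delay)
syy = sum((y - my) ** 2 for y in bmi)
slope = sxy / sxx
r = sxy / (sxx * syy) ** 0.5

print(f"slope: {slope:.2f} BMI points per minute of delay")  # ~ -0.2
print(f"variance explained (r^2): {r * r:.3f}")              # ~ 0.04
```

A slope that size with only 4 percent of variance explained is exactly what a weak-but-real predictor looks like: the effect is there, but most of the variation in BMI comes from everything else.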
Why? Because people change their health and dietary habits to mimic that of their friends and loved ones, especially if they spend lots of time around them. Peer pressure encourages people to look like the people they admire and whose company they enjoy. Unless there’s a more powerful source of social pressure (say, fat shaming) from the rest of society, of course.
Not even thinking of the genetic component. The increase in similarity relative to strangers is on the level of 4th cousins. Thus, since 'dietary habits are mimicked by friends and family', what's really going on is genotypic matching, and that, not socialization, is the cause of friends and family mimicking each other's diets.
There is only one serious study, from University College London, that suggests fat-shaming doesn’t work, and it’s hopelessly flawed. Firstly, it’s based on survey data — relying on fat people to be honest about their weight and diets. Pardon the pun, but … fat chance!
Moreover, the study defines “weight discrimination” much like feminists define “misogyny,” extending it to a dubiously wide range of behaviours, including “being treated poorly in shops.” The study also takes survey answers from 50-year olds and tries to apply them to all adults. But in what world do 20-year-olds behave the same way as older people?
The paper he cites, Perceived Weight Discrimination and Changes in Weight, Waist Circumference, and Weight Status, does say what he claims. However, the researchers note that, because the sample was aged 50 and older (and 97.9 percent white), the findings aren't applicable to younger populations or other ethnicities. (Which you can tell he did not read, and if he did, he omitted this section.)
The researchers found that 5.1 percent of the participants reported being discriminated against on the basis of their weight. They discovered that those who experienced weight discrimination were more likely to engage in behaviors that promote weight gain, and were more likely to see an increase in weight and waist circumference. Also observed was that weight discrimination was a factor in early-onset obesity.
Present research indicates that in addition to poorer mental health outcomes, weight discrimination has implications for obesity. Rather than motivating people to lose weight, weight discrimination increases the risk for obesity. Sutin and Terracciano (2013) conclude that though fat-shaming is thought to have a positive effect on weight loss and maintenance, it is, in reality, associated with the maintenance of obesity. Also seen in this sample of over 6,000 people was that those who experienced weight discrimination were 2.5 times more likely to become obese over the next few years. Further, obese subjects were 3.2 times as likely to remain obese over the next few years.
Sutin et al (2014) also showed how weight discrimination can lead to “poor subjective health, greater disease burden, lower life satisfaction and greater loneliness at both assessments and with declines in health across the four years”.
Puhl and Heuer (2010) say that weight discrimination is not a tool for obesity prevention and that stigmatization of the obese threatens health, generates health disparities and, most importantly, interferes with effective treatment.
Tomiyama (2014) showed that any type of fat shaming leads to an increase in weight and caloric consumption.
Schvey, Puhl, and Brownell (2011) found, in a sample of 73 overweight women, that those who watched a video in which weight discrimination occurred ate 3 times as many calories as those who did not see the video. The authors conclude that despite people claiming that weight discrimination works for weight loss, the results of the study showed that it leads to overeating, which directly challenges the (wrong) perception of weight discrimination as positive for weight loss.
Participants were from an older population, in which weight change and experiences of weight discrimination may differ relative to younger populations so findings cannot be assumed to generalize
Puhl and King (2013) show that weight discrimination and bullying during childhood can lead to “depression, anxiety, low self-esteem, body dissatisfaction, suicidal ideation, poor academic performance, lower physical activity, maladaptive eating behaviors, and avoidance of health care.”
I expect we’ll see more of these pseudo-studies, and not just because academics tend to be lefties. Like climate scientists before them, I suspect a substantial number of “fat researchers” will simply choose to follow the political winds, and the grant money that follows them, rather than seeking the truth.
He is denying the negative implications of fat-shaming, disregarding the ‘one study’ (or so he claims) that shows the opposite of what he cited (which he didn’t read fully). I also like how these studies are called ‘pseudo-studies’ when the conclusion that’s found is a conclusion he doesn’t like. Really objective journalism there.
The reverse is also true. Just being around attractive women raises a man’s testosterone.
The researchers say that talking with a beautiful woman for five minutes led to a 14 percent increase in testosterone and a 48 percent increase in cortisol, the stress hormone.
Of course, this has its grounds in evolution. When two people are attracted to each other, they begin to mimic each other's movements and use the same body language unconsciously. The researchers he cited concluded that "women may release steroid hormones to facilitate courtship interactions with high-value men". Women seek the mate best able to provide the most for them. Men and women who are more attractive are also more intelligent on average, with the reverse holding true for fat people, who are uglier and less intelligent on average.
Though it would be to un-PC to conduct an experiment proving it, it stands to reason that looking at fat, ugly people depresses testosterone. This is certainly how any red-blooded man feels when looking at a hamplanet.
Depressed testosterone is associated with many negative health outcomes, and thus the mere presence of fat people is actively harming the population’s health — particularly men’s, since we’re more visual. We ban public smoking based on the minuscule effects of “passive” intake, so why aren’t the same lefty, public-health aware politicians clamouring for a ban on fat people being seen in public?
A study conducted on people's hormonal responses to the obese and overweight may indeed show a decrease in testosterone and cortisol. However, these hormonal responses are temporary, which he doesn't say.
Instead, the same lefties who want to stop us having fags or drinking too much in public (and even alcoholics and chain smokers are healthier than the obese) are the same ones urging the authorities to treat “fat-shaming” as a crime and investigate it. Insane!
There are, contrary to popular belief, obese people who are metabolically healthy. Blüher (2012) reviewed the data on obese patients and found that 30 percent of them were metabolically healthy, with these obese patients having levels of insulin sensitivity similar to those of lean individuals.
Moreover, new research has found that a BMI of 27 is associated with the lowest mortality. In a huge study of over 120,000 people from Copenhagen, Denmark, recruited from 1976 to 2013, cohorts recruited in the 70s, 90s, and 00s were compared separately. Surprisingly, the BMI linked with the lowest risk of having died from any cause was 23.7 in the 70s, 24.6 in the 90s, and 27 from 2003-2013. Due to the results of this study, the researchers argue that BMI categories may need adjusting.
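For reference, BMI is just weight over height squared, and the standard WHO cut-offs put the lowest-mortality value from that study (27) in the "overweight" band — which is the researchers' point about the categories. A minimal sketch (the helper names are mine):

```python
# Minimal BMI helper using the standard WHO cut-offs; the Copenhagen
# findings suggest these boundaries may warrant revisiting, since the
# lowest-mortality BMI drifted from 23.7 (1970s) to 27 (2003-2013).
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def who_category(b: float) -> str:
    if b < 18.5:
        return "underweight"
    if b < 25:
        return "normal"
    if b < 30:
        return "overweight"
    return "obese"

b = bmi(82, 1.76)
print(f"{b:.1f} -> {who_category(b)}")  # 26.5 -> overweight
```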
As shown in that 2014 study, young people in particular are concerned about what their peers think about them, especially when they start high school. That’s why it’s so critical to let them know that their instincts are correct, and that they can’t be “healthy at any size.”
If you can be unhealthy at any size, why can't you be 'healthy at any size'? As I've shown, those with a BMI of 27 are, on average, metabolically similar to those with lower BMIs. Since, in the study previously cited, the BMI associated with the lowest mortality increased over time, technological advancements in caring for diseases such as diabetes mellitus are one possible explanation.
Those with a BMI under 25 may still suffer from negative effects, the same as obese people. They may suffer from metabolic syndrome, high triglycerides, low HDL, small LDL particles, high blood sugar and high insulin. Those who are skinny fat need to worry more about their vital organs, as the fat deposits they carry are white fat wrapped around the body's vital organs. This is one reason why being skinny fat can be more dangerous than being obese or overweight: they think that because their BMI is in the 'normal range', they're fine and healthy. Clearly, sometimes even being 'underlean' can have serious consequences worse than obesity.
Then he brings up smoke-shaming, arguing that since bills banning smoking in certain public areas led to a decrease in smoking, fat-shaming makes sense in the same manner.
Except it doesn’t.
Humans need to eat; we don't need to smoke. Moreover, since rising rates of obesity coincide with the increase in height, some researchers have argued that an obese population is just a natural progression of first-world societies.
Fat shaming doesn’t work. It, ironically, makes the problem worse. The physiological components involved with eating are a factor as well. It is known that the brain scans of the obese and those addicted to cocaine mirror each other. With this knowledge of food changing the brain, we can think of other avenues that do not involve shaming people for their weight, which increases the problem we all hate.
Dysgenic Fertility and America’s Obesity Crisis
1050 words
The dysgenic trend currently occurring in America has implications for obesity as well. Since intelligence is negatively correlated with obesity, as America's average IQ decreases, the rate of obesity in our country will increase. As we continue to allow unfettered immigration into America, the average IQ of the country will decrease, while the number of people who are overweight and obese will increase.
The ethnic differences in obesity rates lend more credence to what I am saying. As the demographics shift, more people will be overweight or obese due to having a lower IQ. Whites, too, are experiencing this dysgenic effect, as intelligent people of all ethnicities are not reproducing. As more and more genetically less fit individuals continue to have a higher rate of reproduction than intelligent individuals, this crisis will continue to persist.
Those with lower intelligence have less ability to delay gratification, a trait with a strong genetic component. As more people who cannot delay gratification breed, the rate of obesity in the country will increase. We see the same pattern with sex: those with higher IQs lose their virginity at a later age than those with lower IQs. Along with Kanazawa's data showing that more intelligent people have a lower BMI than those with lower intelligence, this study lends more credence to the theory that those with higher levels of intelligence can better delay gratification.
JayMan says that there is evidence for an increased genetic load in those with lower IQs, from which we can reason that this also leads to a higher prevalence of obesity in low-IQ populations. JayMan then says that many of the genes found to influence obesity seem to operate in the brain and that they have pleiotropic effects, meaning single genes affect multiple traits. With increased genetic load comes an increased chance of having a lower IQ and becoming obese, as these two things correlate with the lack of ability to delay gratification.
Of course, these problems persist due to modern medicine. With the advent of better medicine, we beat diseases that formerly would have been devastating to the population at large. This led to an increase of alleles with negative effects in the population that continue to pass down through the generations. Along with these advances in medical technology, welfare and other government-funded programs also enable those who are less genetically fit. Since intelligence is correlated with the ability to care for offspring, as well as r- and K-selected traits, those with lower intelligence exhibit more r-selected traits. This is why America is facing a dysgenic fertility crisis. Welfare props up those with less intelligence, giving them more incentives to breed. They then breed more low-IQ children who will live off of the government. This vicious cycle then continues unfettered due to how America's dysgenic welfare structure is implemented.
Before the advent of modern technology, those who were less genetically fit didn't survive to pass on their genes. But in the modern day, our superior technology allows the less intelligent to breed, when in the past they would have been selected out of the gene pool for being less biologically fit.
Another variable involved in the dysgenic fertility of America is Mexican immigration. The influx of illegal (and legal) peoples from south of the border is having a dysgenic effect on both the average intelligence of our country and the average BMI. The average BMI for the American male is 28.6. In the 1950s, 10 percent of American adults were obese, compared to 35 percent today. Now, this has to do with access to food, and the media also has a huge effect on childhood obesity, due in part to children not getting a full night's sleep, as that is correlated with obesity. However, an increase in genetic load, which also comes with a decrease in intelligence, has a lot to do with this as well. The increase in the BMI of the average American has to do with immigration as well. The rates of overweight and obesity for different ethnicities in America are as follows: 67.3% for whites, 75.6% for blacks, and 77.9% for 'Hispanics'. So of course, with more immigration from south of the border, the average IQ of America is decreasing while obesity rates are increasing, due mostly to this illegal immigration.
Height and intelligence are correlated. Ever since the industrial revolution, we have had a surplus of food. As Gina Kolata says in her book Rethinking Thin, an increase in obesity is inevitable: since increases in height and IQ have occurred, an increase in obesity follows. We need to influence those with higher IQs to have more children. Further, we also need to restrict immigration to only high-skilled immigrants (only when necessary) to reverse this trend that has been occurring since the 1960s. With higher intelligence, one can forgo one's urges and live a healthier lifestyle, due to a better ability to delay gratification than one with lower intelligence.
Those with higher IQs make better choices on what to eat than those with lower IQs. This is shown in the BMIs of the intelligent and non-intelligent population. As more and more people with lower genotypic IQ come into the country, the quality of life will decrease as will the average intelligence of the country. In turn, the BMI of the average American will increase along with the decrease of our country’s average intelligence. To ameliorate this, we need to have extremely stringent criteria on who we allow into the country. An IQ test, to start, would be a good idea. As those with higher intelligence have less of a genetic load and have less of a chance of becoming obese than one with a lower IQ, the current dysgenic effect that this unfettered immigration is having on America can be lessened.
Climate, Violence, r/K Selection Theory and the Vindication of JP Rushton
1850 words
Why do violent crimes increase as temperatures increase? Why do violent crimes decrease as temperatures decrease? These phenomena are noticed every year, and criminologists set out to find the relationship between climate and violence, and whether it is curvilinear, in which crime increases as the temperature increases but begins to dip at extremely high temperatures.
When the weather gets colder, crime decreases. Although crime does decrease in the winter months, crimes that take more planning, such as property crime and robbery, increase. This is due, obviously, to the fact that people don't want to spend too much time outside, so they plan their crimes ahead to minimize the time spent outside, whereas in hotter temperatures this does not occur. It is known that when it's colder, criminal actions are less random than those committed in hotter temperatures.
The two trains of thought in the temperature/crime literature are the curvilinear hypothesis, as noted above, and the linear hypothesis, which argues that as the temperature increases, so does crime, with no drop at extremely high temperatures.
Mishra (2014) showed that the relationship is not curvilinear: crime rises steadily as the temperature increases. Looking at Allahabad city, India over a 62-year period, 1952 to 2013, with temperature, humidity and rainfall as variables, the analysis shows that temperature has a significant effect on the proclivity to commit crime, including murder. Like temperature, humidity shows a strong correlation with crime, while rainfall shows a negative correlation.
Mishra took annual data from the National Crime Record Bureau with monthly data taken from the various police stations of Allahabad city. The temperature and rainfall data was taken from local news stations and the Indian Meteorological Department.
Results of his analysis showed significant correlations between violent crime and temperature (r=.75), with murders increasing as temperatures increase. The relationship between relative humidity and crime was strong as well (r=.68), with rainfall having a negative correlation (r=-.14). Of these three variables, average temperature has more of an effect on crime than relative humidity. Using a regression model, Mishra found an R² of .56, showing that temperature alone accounts for 56 percent of the variation in the crime pattern. Including all three variables in the regression model raises the R² to .61. This confirms that, among the climate elements tested, temperature had the highest effect on crime.
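These numbers are internally consistent if the ".56" from the regression is read as R² (variance explained) rather than as another correlation: in a one-predictor regression, R² is simply the square of the Pearson correlation, and .75² ≈ .56. A quick check:

```python
# Sanity check on Mishra's reported numbers: for a single-predictor
# regression, R^2 equals the squared Pearson correlation, so a correlation
# of r = .75 between temperature and crime implies ~56 percent of the
# variance explained by temperature alone.
r_temperature = 0.75
r2_temp_only = r_temperature ** 2
print(f"R^2 for temperature alone: {r2_temp_only:.2f}")  # 0.56
```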
Figure 2 of the paper shows that as temperatures rise (starting at about 25 degrees Celsius), the crime rate increases. Since very high temperatures are associated with rainfall, there is a reduction in crime when this occurs, as thermal stress is reduced. However, when rainfall and humidity were both unchanged, higher temperatures did not cause a decrease in violence. This result is inconsistent with the curvilinear hypothesis and does not support the claim that extremely high temperatures cause decreased violence.
Van Lange, Rinderu, and Bushman (2016) proposed the CLASH model (CLimate, Aggression, and Self-control in Humans), which addresses differences within and between countries in their proclivities for aggression and criminal behavior. With lower temperatures and seasonal variation like that seen in Northern Europe, peoples had to adopt a slower life history strategy, with more focus on planning for the future and a need for self-control, given how variation in climate affects acquiring food. The CLASH model further shows that a slow life history strategy, future time orientation, and self-control are important determinants in predicting violence.
As I have discussed here before, r/K Selection Theory (Life History Theory) shows that those who live in colder temperatures adopt slower strategies, which lead to more future planning, more self-control, and more altruistic behavior. In a harsher environment, such as Africa, Latin America and other locations situated near the equator, faster life history strategies are needed to offset the harsh environment, which leads to evolutionary causes for earlier menarche in black and Mexican-American girls. This is why Africans and other peoples located at or near the equator have more children: to offset the harsher environment. No planning ahead was needed, as populations near the equator most likely wouldn't have lived long enough to see the delayed payoff. Conversely, those in northerly climes live longer due to the need to plan ahead, and along with this ability to plan ahead came higher intelligence, which points to yet another selector for high intellect in populations that evolved further from the equator: later childbirth. On top of that selector, deleterious Neanderthal alleles decreased historic fitness levels 1 percent in non-African populations, which further drove the evolution of the ability to think into the future due to fewer children borne. Since the future becomes more predictable the further you travel from the equator, it becomes adaptive for peoples to adopt a slower life history strategy out of necessity, as that's the only way to survive, and they will see the fruits of their self-control due to a longer life expectancy and superior future time orientation in comparison to those in southerly climes.
Since a faster life history strategy is correlated with harshness and higher morbidity and mortality, from the life history perspective we would reason that those with lower SES would have to adopt a faster life history strategy to offset the fact that they are more likely to suffer premature disability or death. Lower SES is also correlated with other r-selected strategies such as earlier sexual activity (a variable correlated with lower IQ), higher rates of childhood pregnancy and childbearing, a greater number of offspring, and less care and attention shown to those offspring. This study, for the third time this month, proves Rushton right in his application of r/K Selection Theory to the three races of humanity.
Van Lange, Rinderu, and Bushman state that neighborhood deterioration, assaults, muggings, drug addicts, and the presence of gangs are associated with earlier and higher rates of sexual activity. Not coincidentally, this is seen in many black- and 'Hispanic'-majority cities in America. They also say that as resources become scarce, women gravitate toward men with more access to resources and those who will invest in their children's reproductive value. Though this is hardly seen in low-income communities around America, you do see many black women gravitate toward the drug dealer or another black male involved in illegal activities who has acquired large amounts of capital. This is an evolutionary strategy for all women, since money is correlated with intelligence, and therefore a mate with more money has better means to take care of any offspring conceived.
The CLASH model extends r/K Selection Theory: where r/K Selection Theory emphasizes unpredictability and harshness as sources of environmental stress, the CLASH model emphasizes predictability. That is, those who evolved in northerly climes can deal with stress better than those who evolved near the equator, lessening the amount of crime in those populations due to greater self-constraint. The CLASH model proposes that the combination of predictability and control shapes a slow life history strategy and future time orientation, with a focus on self-control. Moreover, in an analysis of 40 work-related values in 40 countries, it was found that the countries located furthest from the equator tended to place greater value on future rewards, such as perseverance and thrift.
In countries closer to the equator, according to the 2014 World Fact Book, the average age of first birth for a female was 20 years of age (the countries were the Gaza strip, Liberia, Bangladesh, Kenya, Mali, Tanzania, Uganda and various other middle African countries). Conversely, for countries further away from the equator, the average age of first birth was 28 years of age (Japan, Canada, and most European countries). Those populations that evolved in warmer climates where the changes in season are minimal with unpredictable harshness tend to enact faster life history strategies than those in colder climates.
The researchers state on page 31:
One standard deviation increase in temperature was associated with a 11.3% increase in intergroup conflict and a 2.1% increase in interpersonal conflict. Examples of interpersonal conflict include spikes in domestic violence in India and Australia, greater likelihood of assaults and murders in the USA and Tanzania, ethnic violence in Europe and South Asia, and civil conflicts throughout tropical climates. Hence, we conclude that it is both differences in average temperature and differences in seasonal variation in temperature that help explain cross-national differences in aggression and violence around the world.
And on page 41:
Assuming CLASH is accurate, it is interesting to consider that people’s thoughts and behaviors may be quite different, based on the physical circumstances their ancestors faced and that they face themselves. The world is getting smaller and smaller. Electronic and social media (e.g., WhatsApp, Twitter, Facebook, email) connect us to people all over the world. Yet, people coming from differing ancestral histories and living in different locations face challenges of self-control in a variety of ways. A businessperson from London may expect a response the next day, but the alliance in Nairobi may want to take at least an extra day. If CLASH is correct, the same pattern should hold for within-country differences between a businessperson working in Chicago and the alliance working in New Orleans, or between a businessperson working in Melbourne and the alliance in Brisbane or Cairns (with London, Chicago, and Melbourne being relatively more remote from the equator, and facing greater variation in climate).
The correlation between temperature, crime and life history strategies is shockingly high. JP Rushton is now vindicated from all of the derision he experienced in the 30 plus years he was pushing his r/K Theory. This shows implications for the European ‘refugee’ crisis as well, due to the higher rates of all violent crime occurring ever since this mass exodus from MENA (Middle Eastern North African) countries.
The CLASH model is a great complement to r/K Selection Theory and goes deeper into why behaviors differ in human populations based on where ancestral evolution occurred. As temperatures increase, so does crime, starting at 76 degrees Fahrenheit, with a negative correlation for crime committed during rainfall. The CLASH model vindicates Rushton's supposedly 'wacky theories' on race, evolution and behavior. Further, the CLASH model also shows another cause for the current situation occurring in Europe. The people flooding into the continent have ancestral ties to hotter climes. They then bring their genetic proclivity to commit crimes with them to the new area, which then increases crime. This is one of many reasons for the cucking of Europe. As we look more into evolutionary causes for behavior, and those behaviors that lead to more crime committed, Rushton and others will be further vindicated, and when this occurs, with ample data, of course, sensible immigration policy can be had to quell the amount of crime committed by 'migrants' and other immigrants into our countries.
Marriage, Divorce and Genetic Similarity Theory
1100 words
Genetic Similarity Theory states that we seek out similar others in order to give our genes the best chance to produce copies of themselves. As Richard Dawkins says in The Selfish Gene, it is genes that survive to the next generation, with more copies being found in siblings and related co-ethnics. Therefore, the theory goes, by benefitting genetically similar others, we are benefitting copies of our genes. Speed daters match on genotype, which shows that, on a subconscious level, we have the ability to detect genetically similar others.
Assortative mating is a form of sexual selection in which those with similar genotypes and phenotypes mate with each other more often than would be expected under a random breeding model. One of the numerous ways we match by genetic similarity is phenotype: if the phenotype is similar, more often than not, the genotype is as well. This is what drives friendships and marriages, and it is also the cause of ethnocentrism.
Rushton (1987) showed that humans are able to detect degrees of genetic similarity in others, and prefer those most similar to themselves as friends and spouses over less genetically similar individuals, which is the basis for ethnocentrism. A husband and wife are, on average, as genetically close as fourth cousins. Due to matching by GST, spouses should also match on heritable traits such as IQ, body measurements and personality traits. As McCrae et al (2008) write:
Altruism, Modesty, and Tender-Mindedness are characteristics that most people desire in a spouse (cf. Buss, 1986), but people are most likely to find a mate with these characteristics if they have them themselves. This is an instance of the principle that people with desirable qualities have more options in seeking a desirable mate. At the same time, it seems likely that there is a sense in which disagreeable people may actually prefer the company of their own kind, like the haughty Duke in Robert Browning’s “My Last Duchess,” who disposed of his wife because she was too indiscriminately nice.
Everyone has the perfect spouse in their head that they dream of. However, the type of spouse we end up with will, more often than not, be genetically similar to ourselves. Even spouses who are not of the same race or ethnicity match up on heritable traits such as The Big Five, IQ and physiological measurements.
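The "fourth cousins" comparison can be put in numbers using the standard coefficient-of-relationship formula (assuming no inbreeding): nth cousins share two common ancestors, each connected by a path of 2n + 2 meiotic links.

```python
# Standard coefficient of relationship for nth cousins (assuming no
# inbreeding): two shared ancestors, each reached by a path of 2n + 2
# meiotic links, so r = 2 * (1/2)**(2n + 2).
def cousin_relatedness(n: int) -> float:
    return 2 * 0.5 ** (2 * n + 2)

for n in range(1, 5):
    print(f"cousin degree {n}: r = {cousin_relatedness(n):.5f}")
# Fourth cousins: r = 1/512, about 0.2 percent -- the level of genetic
# similarity Rushton reports for the average spouse pair relative to
# random strangers.
```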
Divorce is also influenced by genetic factors. Jockin, McGue and Lykken (1996) found that about 40 percent of the heritability of divorce comes from genetic factors that affect the personality of one spouse. Traditionalism, extraversion and neuroticism (the latter two being Big Five personality traits) are causes of divorce. A few reasons I can think of for neuroticism and extraversion being correlated with divorce: highly neurotic people are more likely to be stressed, anxious, hypochondriac (worried about contracting an illness) and obsessive. This can put extra strain on a marriage, leading to both spouses being unhappy and then to divorce. As for extraversion, more extraverted people are more open to meeting others and are more social and talkative. This can lead to feelings of jealousy, causing a strain on the marriage.
The genetic and environmental influences responsible for marriage are different from those responsible for divorce. Evidence exists that after mate selection there may be some protective factors for the couple, such as religion, while other factors that place couples at risk for divorce, such as alcoholism, are also partly genetic in nature.
Trumbetta and Gottesman (2000) suggested two endophenotypes, one oriented toward pair bonding and the other toward mate diversification. Pair bonding, obviously, leads to a happier marriage as both spouses are monogamous, whereas mate diversification is associated with multiple marriages. It sounds to me like those who pair bond are more introverted, whereas those who diversify their marriage partners are more extraverted, leading to higher divorce rates through jealousy and cheating. They conclude that there are significant genetic influences on both endophenotypes, with unique environmental factors accounting for the rest of the variance.
Spouses, as well as friends, sort on characteristics such as race, socioeconomic status, physical attractiveness, level of education, family size and structure, IQ, and longevity. This is the Selfish Gene in action. By seeking out copies of itself (which would be found in co-ethnics at higher frequencies), the gene is able to ensure its survival into the next generation.
Even couples who are not of the same race or ethnicity match on other heritable characteristics. Rushton and Nicholson (1988) tested predictions from genetic similarity theory and found that spouses select each other on the basis of the more genetically influenced components of cognitive tests. It has been known since The Bell Curve was published in 1994 that spouses select each other based on IQ. What Rushton and Nicholson noted in the study was that estimates of genetic influence calculated on Koreans and Canadians predicted assortative mating in European Americans in Hawaii and California. Americans of mixed ancestry made up for ethnic dissimilarity by matching on the more heritable traits, whereas the correlation was lower for traits more influenced by the environment. The correlations with genetic influence were weaker, though still positive, when the g factor was taken out of the equation. This suggests that we choose mates based on the general intelligence factor. This effect is seen in, for instance, white women who date black men; they, more often than not, have IQs below the white mean (100).
Pan and Wang (2011) showed that spouses are similar in academic achievement as well as IQ, with six of the eight traits tested (reading, spelling, arithmetic, vocabulary, verbal IQ, and full-scale IQ) showing evidence of spousal correlations.
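To make concrete what a "spousal correlation" on a trait means, here is a minimal sketch in Python. The numbers are hypothetical, not data from Pan and Wang (2011); the point is only that assortative mating on a trait is measured as the Pearson correlation between husbands' and wives' scores on that trait.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical full-scale IQ scores for five couples (illustration only).
husbands = [112, 98, 105, 120, 93]
wives = [108, 101, 99, 115, 97]

# A positive correlation means higher-scoring husbands tend to be
# married to higher-scoring wives, i.e. assortative mating on the trait.
print(round(pearson(husbands, wives), 2))
```

A correlation near 0 would mean spouses pair at random with respect to the trait; the reported spousal correlations for IQ are positive but well below 1.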
Humans have a natural instinct to marry genetically similar others. Whether the traits are environmentally or genetically influenced, spouses match most strongly on the traits with the highest correlations (BMI, waist size, arm size). Genetic Similarity Theory proposes that these phenomena are not due to chance, but are how we evolved. Sexual selection, which is natural selection arising through preference by one sex for certain traits in individuals of the other sex, is the driving factor here. Through sexual selection, humans were able to gain higher intelligence (for men) and higher verbal abilities (for women) that allowed them to better care for children. These differences remained even when controlling for geographic location. Spouses and friends being as similar as fourth cousins is no accident; in fact, it is evolution in action.