The Intelligence Paradox: Why the Intelligent Choice Isn't Always the Smart One - Satoshi Kanazawa 2012
What Is Intelligence?
Intelligence (or, more precisely, general intelligence) refers to the ability to reason deductively or inductively, think abstractly, use analogies, synthesize information, and apply it to new domains.1 Perhaps no other concept in science suffers from greater misunderstanding and is plagued with more misconceptions than the concept of intelligence. Many of these misconceptions are politically motivated by the equation of intelligence with human worth that I mention in the Introduction. Before I discuss how general intelligence evolved and what it is good for, I want to attempt to dispel these common misconceptions about intelligence.2
Common Misconceptions about Intelligence
Misconception 1: IQ Tests Are Culturally Biased
Probably the most pervasive misconception about intelligence is that IQ tests, which measure intelligence, are culturally biased against certain racial and ethnic groups or social classes. This misconception stems from the well-established and replicated fact that different racial and ethnic groups on average score differently on standardized IQ tests. As I mention in the Introduction, social scientists and the lay public alike assume (without any logical or empirical support) that everyone, and every racial and ethnic group, is equally intelligent because they are all equally worthy human beings. If everybody is equally intelligent, yet some groups consistently score lower than others, then, the argument goes, the IQ tests must by definition be culturally biased against the groups who score lower.
But the claim of cultural bias rests entirely on the conviction and unquestioned assumption that everybody and all groups are equally intelligent, which in turn rests entirely on the conviction and unquestioned assumption that intelligence is the ultimate measure of human worth. The claim is untenable once we dismiss these untested and hence religiously held convictions and assumptions.
Think about the sphygmomanometer for a moment. It is the device that doctors and nurses commonly use to measure blood pressure, with an inflatable Velcro cuff and a mercury manometer to gauge the pressure of blood flow. It is an unbiased and accurate (albeit imperfect) device for measuring blood pressure. (It is very imperfect, as I discuss later in the chapter.) Nobody would argue that it is culturally biased against any racial or ethnic group. Yet there are well-established race differences in blood pressure; blacks on average have higher blood pressure than whites.3 Does that mean that the sphygmomanometer is culturally biased against (or for!) blacks? Is blood pressure a racist concept? Of course not. It simply means that blacks on average have higher blood pressure than whites. Nothing more, nothing less.
Or think of the bathroom scale. Once again, the bathroom scale is an unbiased and accurate (albeit imperfect) device to measure someone's weight. Nobody would argue that it is culturally biased against certain groups. Yet on any bathroom scale, women on average “score” lower than men, and Asians on average “score” lower than Caucasians. Does that mean the bathroom scale is culturally biased against women or Asians? Is weight a sexist or racist concept? Of course not. It simply means that women on average weigh less than men, and Asians on average weigh less than Caucasians. Nothing more, nothing less.
Nobody argues that blood pressure is a racist concept, or that the sphygmomanometer is culturally biased, because nobody equates (low) blood pressure with human worth. As a result, nobody gets upset about observed race differences in blood pressure. Nobody argues that weight is a racist or sexist concept, or that the bathroom scale is culturally biased, because nobody equates weight with human worth. As a result, nobody gets upset about observed sex or race differences in weight. Why are race and sex differences bona fide evidence of bias only with IQ tests?
The single most accurate IQ test currently available is called the Raven's Progressive Matrices. Intelligence researchers universally consider it to be the single best test of general intelligence because scores on this test are more strongly correlated with the underlying dimension of general intelligence than scores on any other single intelligence test. (In technical language, the Raven's Progressive Matrices is more highly g-loaded than any other cognitive test.) The test comes in three different versions: the standard version (Raven's Standard Progressive Matrices); the advanced version for college students and other more intelligent people (Raven's Advanced Progressive Matrices), designed to discriminate the higher end of the IQ distribution more precisely; and the multi-color version for children (Raven's Colored Progressive Matrices).
Here is an example of a question item from Raven's Advanced Progressive Matrices. The test comes with only one instruction: Choose the figure that fits next in the progression of matrices. Which one of the eight alternatives comes next?
Figure 3.1 A question from the Raven's Advanced Progressive Matrices
All question items in all versions of Raven's Progressive Matrices are very similar to this one. Can anyone tell me exactly how this question, and all the other similar questions that comprise the Raven's Progressive Matrices, can possibly be culturally biased against any group? The question is a pure measure of reasoning ability. The only thing it's biased against is the inability to think logically.
By the way, if you are wondering, the correct answer to the above question is 7.
Misconception 2: Nobody Knows What Intelligence Is, because Intelligence and IQ Are Not the Same Thing
A related misconception is the claim that IQ is not a measure of general intelligence. Some people believe in the concept of intelligence; they know that some people are more intelligent than others. But they do not believe that IQ tests accurately measure individuals’ intelligence, once again because IQ test scores typically show average differences between groups, and they believe that individuals from different groups must on average be equally intelligent.
Contrary to this view, intelligence researchers unanimously agree that intelligence is exactly what IQ tests measure,4 in the same way that your weight is exactly what your bathroom scale measures. To maintain that intelligence is real and that some people are more intelligent than others, yet that IQ tests do not accurately measure intelligence, is akin to claiming that weight is real and some people are heavier than others, but the bathroom scale does not accurately measure weight. It simply doesn't make any sense.
I have just said that Raven's Progressive Matrices is the single best IQ test currently available, and that is true. But there is actually an even better way to assess someone's general intelligence than Raven's: administer a large number of different cognitive tests, such as vocabulary, verbal comprehension, arithmetic, digit span (the ability to repeat a sequence of digits after it is given, sometimes exactly as given, sometimes backwards), spatiovisual rotation (the ability to imagine what a three-dimensional object would look like if it were rotated in space), and so on. You will recall from the Introduction that this is precisely how NCDS measures intelligence, which is why NCDS has one of the best measures of general intelligence of all large-scale national surveys.
Across individuals, performances on all these cognitive tests are highly positively correlated. In other words, people who do well on verbal comprehension tests also tend to do well on arithmetic tests, and they are also better at visualizing a three-dimensional object from a different angle or repeating a sequence of digits backwards. Contrary to popular belief, people who are good with concrete tasks are also good with abstract tasks, and people who are good with numbers are also good with words.
For example, in a classic paper published in 1904, Charles Spearman showed that students’ relative school performance in mathematics was highly correlated with their performance in classics (r = .87), French (r = .83), English (r = .78), pitch discrimination (r = .66), and music (r = .63). (The “r” is a measure of association between two variables, known in statistics as the correlation coefficient. It varies from -1, when the two variables are perfectly negatively correlated, through 0, when they are completely unrelated to each other, to +1, when they are perfectly positively correlated. As you can see, all of the correlations reported by Spearman are strongly positive.) In fact, the students’ relative performance in music was more highly correlated with their mathematical ability than with their pitch discrimination (r = .40)!
In the NCDS data, at age 16, the correlation between verbal comprehension and mathematical comprehension is .654, which is once again very high. As I note in the next section, the correlation between true blood pressure and blood pressure measured by the sphygmomanometer is about .50. That means that using a verbal comprehension test to measure someone's mathematical ability, or using someone's relative performance in mathematics to gauge their relative performance in music, is more accurate than using the sphygmomanometer to measure blood pressure. That is how highly all measures of cognitive ability are intercorrelated.
At the same time, it is also important to remember that, as highly correlated as verbal comprehension and mathematical comprehension are in NCDS (r = .654), each can explain less than half of the variance in the other. (“Explained variance” in one variable by another is computed by squaring the correlation coefficient between them, so one's score on the verbal comprehension test explains about 43% (.654² ≈ .428) of the variance in mathematical comprehension, and vice versa.) That means more than half of the variance in mathematical comprehension scores across individuals cannot be explained by their verbal comprehension scores.
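For readers who want to see this arithmetic in action, here is a minimal sketch in Python. It is my own illustration rather than anything from NCDS: the simulated test scores and their weights are assumptions chosen only so that the two scores correlate at roughly the .654 quoted above.

```python
# A minimal sketch (not from the book or NCDS) of how a correlation coefficient
# r translates into "explained variance": r squared is the share of variance in
# one test score that the other test score accounts for.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Two simulated test scores sharing a common latent ability; the weights are
# assumptions chosen so the correlation lands near the NCDS value of .654.
g = rng.normal(size=n)                           # shared latent ability
verbal = 0.81 * g + 0.59 * rng.normal(size=n)
maths = 0.81 * g + 0.59 * rng.normal(size=n)

r = np.corrcoef(verbal, maths)[0, 1]             # Pearson correlation coefficient
print(f"r = {r:.3f}")                            # roughly .65
print(f"explained variance = r^2 = {r**2:.3f}")  # roughly .43
```

Squaring the printed correlation reproduces the roughly 43% figure in the parenthetical above.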
What psychometricians (whose job it is to measure intelligence accurately and devise tests to do so) do then is to subject individual scores on all these cognitive tests to a statistical technique called factor analysis. What factor analysis does is to analyze the correlations between all pairs of cognitive tests and then measure an individual's latent cognitive ability that underlies their performance on all of the cognitive tests. This latent cognitive ability is general intelligence. Factor analysis also eliminates all random measurement errors that are inevitably associated with any individual cognitive test as a measure of intelligence. So it can measure general intelligence purely, without any random measurement errors.
The IQ score thus obtained is a pure measure of intelligence.5 It measures someone's ability to think and reason in various contexts and situations, such as numerical manipulations like arithmetic, verbal comprehension like reading, and mental visualization like spatiovisual rotation. Believe it or not, all these cognitive abilities have something in common, and that something is general intelligence. So intelligence is precisely and exactly what IQ tests measure. Intelligence is what allows us to perform on all kinds of cognitive tests.
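To make that procedure a little more concrete, here is a rough sketch of a factor analysis on simulated test scores. It is my own illustration under an assumed one-factor model, not the psychometricians' actual software or data: the five hypothetical tests, their loadings, and the use of scikit-learn's FactorAnalysis are all assumptions. It simply shows how a single latent factor, standing in for g, can be recovered from a battery of positively correlated tests.

```python
# A rough sketch (an assumed one-factor model, not a real test battery) of how
# factor analysis pools several cognitive tests into one latent "g" score.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
n = 5_000
g = rng.normal(size=n)                         # latent general intelligence

# Five hypothetical tests, each loading on g plus test-specific random error.
loadings = np.array([0.80, 0.75, 0.70, 0.65, 0.60])
tests = np.column_stack(
    [lam * g + np.sqrt(1 - lam**2) * rng.normal(size=n) for lam in loadings]
)

fa = FactorAnalysis(n_components=1)            # extract one common factor
g_hat = fa.fit_transform(tests).ravel()        # estimated factor scores

print(np.round(fa.components_, 2))             # estimated loadings per test
print(f"corr(true g, estimated g) = {abs(np.corrcoef(g, g_hat)[0, 1]):.3f}")
```

The estimated factor score tracks the true latent ability very closely, which is the sense in which the procedure averages away the random error of any single test.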
Misconception 3: IQ Tests Are Unreliable
Unlike other misconceptions about intelligence, there is some truth to this one, in the sense that IQ tests are not perfectly reliable. IQ tests have some measurement errors, which is why psychometricians perform factor analysis to eliminate such random errors in measurement. So it is true that IQ tests are not perfectly reliable, but then no scientific measurements ever are.
If the same individuals take different IQ tests on different days, or even on the same day, their scores will differ slightly from test to test. So IQ tests do not give a perfect measurement of someone's intelligence. But then, if you step on the bathroom scale, get the reading, step off, and step on it again, it will give you slightly different readings as well. The same is true if you measure your height, your shoe size, and your vision. No measurements of any human quality are perfectly reliable.
So the measurement of intelligence is no different from the measurement of any other human trait. Nobody ever claims that, because the measurement of weight is never perfectly reliable, there is no such thing as weight and weight is a culturally constructed concept. Yet that is exactly what people who are unfamiliar with the latest psychometric research think about intelligence. Intelligence is no less real than height or weight, and its measurement is just as reliable (or unreliable).
In fact, Arthur R. Jensen, probably the greatest living intelligence researcher, claims that IQ tests have higher reliability than the measurement of height and weight in a doctor's office.6 He says that the reliability of IQ tests is between .90 and .99 (meaning that random measurement error is between 1% and 10%), whereas the measurement of blood pressure, blood cholesterol, and diagnosis based on chest X-rays typically has a reliability of around .50.
Reliability is the correlation coefficient between repeated measurements. If the measurement instrument is unbiased (as IQ tests are as a measure of general intelligence, and as the sphygmomanometer is as a measure of blood pressure), then the reliability translates into the correlation coefficient between the true values and the measured values. A reliability of .50, like that of the sphygmomanometer as a measure of blood pressure, means that the correlation between individuals’ true blood pressure and the readings on the sphygmomanometer is only .50. In contrast, a reliability of .90 to .99, like that of IQ tests as a measure of general intelligence, means that the correlation between individuals’ general intelligence and their IQ test scores is .90 to .99. So the measurement of intelligence is nearly twice as accurate as the measurement of blood pressure, yet nobody ever claims that blood pressure is not real or that it is a culturally constructed concept.
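To give a feel for what these reliability figures mean, here is a small simulation under an assumed classical measurement model of my own (not Jensen's data): each measurement is the true score plus independent random error, and the correlation between two repeated measurements then comes out at about .90.

```python
# A toy simulation (my own illustration) of a reliability of about .90:
# two repeated measurements of the same trait, each with modest random error,
# correlate at roughly that level across individuals.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
true_score = rng.normal(size=n)               # the trait being measured

def measure(noise_sd):
    """One noisy measurement: true score plus independent random error."""
    return true_score + noise_sd * rng.normal(size=n)

# Error chosen so true-score variance is about 90% of observed variance.
test, retest = measure(noise_sd=1 / 3), measure(noise_sd=1 / 3)

reliability = np.corrcoef(test, retest)[0, 1]
print(f"test-retest correlation (reliability) ~ {reliability:.2f}")   # ~0.90
```

Raising the error term until the test-retest correlation drops to .50 gives visibly inconsistent repeated readings, which is the situation the sphygmomanometer comparison describes.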
Misconception 4: Genes Don't Determine Intelligence, Only the Environment (Education and Socialization) Does
This is another widely held misconception about intelligence. It is true that genes don't determine intelligence completely; they only do so substantially and profoundly.
Heritability is the measure of the influence of genes on any trait.7 A heritability of 1.0 means that genes determine the trait completely and the environment has absolutely no effect. As I mention in Chapter 1, some genetic diseases like Huntington's disease have a heritability of 1.0; genes entirely determine whether or not you will get Huntington's disease. If you have the affected genes for the disease, it does not matter at all how you live your life or what your environment is; you will develop the disease. One's natural eye color or natural hair color also has a heritability of 1.0. So does one's blood type. Very few other human traits have a heritability of 1.0.
On the other hand, a heritability of 0 means that genes have absolutely no influence on the given trait, and the environment completely determines whether or not someone has the trait. No human traits have a heritability of 0; genes partially influence all human traits to some degree. (This is known as Turkheimer's first law of behavior genetics.8)
Most personality traits and other characteristics, like whether you are politically liberal or conservative or how likely you are to get a divorce, have a heritability of .50; they are about 50% determined by genes.9 In fact, most personality traits and social attitudes follow what I call the 50-0-50 rule10: roughly 50% heritable (the influence of genes), roughly 0% what behavior geneticists call “shared environment” (parenting and everything else that happens within the family to make siblings similar to each other), and roughly 50% “nonshared environment” (everything that happens outside of the family to make siblings different from each other). It turns out that parenting has very little influence on how children turn out.11
Of course, this emphatically does not mean that parents are not important for how children turn out; they are massively and supremely important because children get their genes from their genetic parents. It simply means that parenting—how parents raise their children—is unimportant. This is why adopted children usually grow up to be nothing like their adoptive parents who raised them and a lot like their biological parents (or their twin reared apart) whom they have never even met.12
One of the very few exceptions to the 50-0-50 rule is intelligence, for which the heritability is larger. Heritability of general intelligence increases from about .40 in childhood to about .80 in adulthood. Among adults, intelligence is about 80% determined by genes.
Yes, the heritability of intelligence increases over the life course, and genes become more important as one gets older. This may at first seem counterintuitive, but it really isn't. For adults, the environment is largely a product of their own genetic makeup, whereas for children it is not. Children must live in the environment created by their parents, older siblings, teachers, neighbors, clergy, and other adults. Adults, in contrast, determine their own environment to a much greater extent than children do. So for adults, genes and the environment become more or less the same thing. When the environment influences an adult's intelligence, the effect shows up as the influence of their genes, which largely determine their environment; for children it does not. This is why the influence of genes increases dramatically throughout life.
Sorry, Education Does Not Increase Your Intelligence; It's the Other Way Around
A subcategory of this common misconception is that you can become more intelligent by reading more books, attending better schools, or receiving more education. It is true that there are strong associations among these traits: people who read more books are more intelligent, people who attend better schools are more intelligent, and people who attain more education are more intelligent. But the causal order is the opposite of what many people assume. These associations exist because more intelligent people read more books, attend better schools (partly because their parents are more intelligent and therefore make more money), and receive more education.
Early childhood experiences do affect adult intelligence, but they mostly function to decrease it, not to increase it. Childhood illnesses, injuries, malnutrition, and other adverse conditions influence adult intelligence negatively, and individuals who experience them often fail to fulfill their genetic potential. But there are very few childhood experiences that will raise adult intelligence much above the level one's genes would have produced anyway.
Somewhat paradoxically, the wealthier, safer, and more egalitarian nations become, the more (not less) important genes become in determining adult intelligence. In poor nations, many children grow up ill, injured, or malnourished, and these adverse conditions decrease the correlation between genes and adult intelligence. In wealthy societies like the United States, where very few children now grow up ill or malnourished, the environment is more or less equalized. When the environment becomes equal for all individuals, it has the same effect on everyone and can no longer explain any variance in individual outcomes. (Statistically, a factor that does not vary between individuals cannot be correlated with individual differences in an outcome, and no correlation means no explained variance, as zero squared equals zero.) So the more equal the environment between individuals, the more important the influence of genes becomes. A longitudinal study of Scottish people born in 1921 and 1936 shows that their intelligence does not change much after the age of 11.13 Their intelligence at age 11 is very strongly correlated with their intelligence at age 80.
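The statistical point in the parentheses can be illustrated with a toy simulation. This is my own sketch under an assumed simple additive model (a genetic component plus an environmental component), not the Scottish study or any real data; it shows only that as the environmental component varies less across individuals, the share of trait variance attributable to genes rises.

```python
# A toy additive model (an assumption for illustration, not real data): as the
# environment becomes more equal across individuals, the proportion of trait
# variance attributable to genes (heritability) increases.
import numpy as np

rng = np.random.default_rng(3)
n = 50_000
genes = rng.normal(size=n)                    # genetic contribution (variance 1)

for env_sd in (1.0, 0.5, 0.1):                # progressively more equal environments
    environment = env_sd * rng.normal(size=n)
    trait = genes + environment
    heritability = np.var(genes) / np.var(trait)
    print(f"environmental SD = {env_sd:.1f} -> heritability ~ {heritability:.2f}")
```

With an environmental spread as large as the genetic one, heritability sits near .50; shrink the environmental spread and it climbs toward 1.0, without anything about the genes themselves changing.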
So, contrary to the popular misconception, genes largely (though, even for adults, never completely) determine intelligence. In fact, intelligence is one of the most heritable of all human traits and characteristics; it is just as heritable as height.14 Everybody knows that tall parents beget tall children, and nobody ever questions the strong influence of genes on height, yet many people vehemently deny any influence of genes on intelligence.
There is something curious about heritability. A trait's heritability and its adaptiveness (how important it is for survival and reproductive success) are generally inversely related: The more adaptive the trait is (the more important it is for the organism's survival and reproductive success), the less heritable it is.15 This is because, when a trait is crucial for survival and reproductive success, every individual must have it at the optimal and most efficient level. Evolution cannot “allow” it to vary across individuals. It is only when a trait is less important for survival and reproductive success that evolution can “allow” it to vary across individuals. Thus, according to the basic principles of quantitative genetics, the fact that general intelligence is highly heritable suggests that it is not very important for our survival and reproductive success, as I argue throughout the book.