The General Factor of Intelligence: How General Is It?

Thus it has been argued that when genes for intelligence are identified, they will be "generalist genes", each affecting many different cognitive abilities. The g loadings of mental tests have been found to correlate with their heritabilities, with correlations ranging from moderate to perfect in various studies.

Thus the heritability of a mental test is usually higher the larger its g loading is. Much research points to g being a highly polygenic trait influenced by a large number of common genetic variants, each having only small effects. Another possibility is that heritable differences in g are due to individuals having different "loads" of rare, deleterious mutations, with genetic variation among individuals persisting due to mutation-selection balance. A number of candidate genes have been reported to be associated with intelligence differences, but the effect sizes have been small and almost none of the findings have been replicated.



No individual genetic variants have been conclusively linked to intelligence in the normal range so far. Many researchers believe that very large samples will be needed to reliably detect individual genetic polymorphisms associated with g. Several studies suggest that tests with larger g loadings are more affected by inbreeding depression lowering test scores. There is also evidence that tests with larger g loadings are associated with larger positive heterotic effects on test scores.

Inbreeding depression and heterosis suggest the presence of genetic dominance effects for g. MRI research indicates that the volumes of the frontal, parietal, and temporal cortices, and of the hippocampus, are also correlated with g, generally at moderate levels. Some but not all studies have also found positive correlations between g and cortical thickness.

However, the underlying reasons for these associations between the quantity of brain tissue and differences in cognitive abilities remain largely unknown. Most researchers believe that intelligence cannot be localized to a single brain region, such as the frontal lobe. Brain lesion studies have found small but consistent associations indicating that people with more white matter lesions tend to have lower cognitive ability. Research utilizing NMR spectroscopy has discovered somewhat inconsistent but generally positive correlations between intelligence and white matter integrity, supporting the notion that white matter is important for intelligence.

Some research suggests that aside from the integrity of white matter, its organizational efficiency is also related to intelligence. The hypothesis that brain efficiency has a role in intelligence is supported by functional MRI research showing that more intelligent people generally process information more efficiently, i.e., with less brain activation when working on the same tasks. Brain activity, as measured by EEG recordings or event-related potentials, and nerve conduction velocity also show small but relatively consistent associations with intelligence test scores.

Evidence of a general factor of intelligence has also been observed in non-human animals. Non-human models of g, such as mice, are used to study genetic influences on intelligence and the neurological mechanisms and biological correlates of g. Similar to g for individuals, a new research path aims to extract a general collective intelligence factor c for groups, reflecting a group's general ability to perform a wide range of tasks. Its causes and predictive validity, as well as further parallels to g, are being investigated.

Myopia is known to be associated with intelligence, with a modest positive correlation. Cross-cultural studies indicate that the g factor can be observed whenever a battery of diverse, complex cognitive tests is administered to a human sample. The factor structure of IQ tests has also been found to be consistent across sexes and ethnic groups in the U.S.

For example, when the g factors computed from an American standardization sample of Wechsler's IQ battery and from large samples who completed the Japanese translation of the same battery were compared, the congruence coefficient was near unity, indicating virtual identity. Similarly, the congruence coefficient between the g factors obtained from the white and black standardization samples of the WISC battery in the U.S. indicated that the two factors were virtually identical.
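To make the statistic concrete, a congruence coefficient can be computed directly from two columns of factor loadings. Below is a minimal sketch in R, using invented loading vectors rather than the actual Wechsler or WISC figures:

```r
# Tucker's congruence coefficient between two vectors of factor loadings.
# The loading values below are made-up placeholders, not published figures.
congruence <- function(x, y) sum(x * y) / sqrt(sum(x^2) * sum(y^2))
loadings_a <- c(0.80, 0.72, 0.65, 0.77, 0.58)
loadings_b <- c(0.78, 0.74, 0.63, 0.75, 0.61)
congruence(loadings_a, loadings_b)  # values near 1 indicate virtual identity
```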

Most studies suggest that there are negligible differences in the mean level of g between the sexes, and that sex differences in cognitive abilities are found in narrower domains. For example, males generally outperform females in spatial tasks, while females generally outperform males in verbal tasks. Another difference found in many studies is that males show more variability in both general and specific abilities than females, with proportionately more males at both the low and high ends of the test score distribution. Consistent differences between racial and ethnic groups in g have been found, particularly in the U.S.

A meta-analysis of millions of subjects indicated a gap of roughly one standard deviation in the mean level of g between black and white Americans, with the mean score of Hispanic Americans falling between the two. Elementary cognitive tasks (ECTs) also correlate strongly with g. ECTs are, as the name suggests, simple tasks that apparently require very little intelligence, but still correlate strongly with more exhaustive intelligence tests. Determining whether a light is red or blue and determining whether there are four or five squares drawn on a computer screen are two examples of ECTs.

The answers to such questions are usually provided by quickly pressing buttons.


Often, in addition to buttons for the two options provided, a third button is held down from the start of the test. When the stimulus is given, the subject moves their hand from the starting button to the button corresponding to the correct answer. This allows the examiner to determine how much time was spent thinking about the answer (reaction time, usually measured in small fractions of a second) and how much time was spent on the physical hand movement to the correct button (movement time).

Reaction time correlates strongly with g, while movement time correlates less strongly. One theory holds that g is identical or nearly identical to working memory capacity. Among other evidence for this view, some studies have found factors representing g and working memory to be perfectly correlated. However, in a meta-analysis the correlation was found to be considerably lower. Psychometric theories of intelligence aim at quantifying intellectual growth and identifying ability differences between individuals and groups.

In contrast, Jean Piaget's theory of cognitive development seeks to understand qualitative changes in children's intellectual development. Piaget designed a number of tasks to verify hypotheses arising from his theory. The tasks were not intended to measure individual differences, and they have no equivalent in psychometric intelligence tests. For example, in one of the best-known tasks, a child is shown two identical glasses holding the same amount of water. After the child agrees that the amount is the same, the investigator pours the water from one of the glasses into a glass of a different shape, so that the amount appears different although it remains the same.

The child is then asked if the amount of water in the two glasses is the same or different. Notwithstanding the different research traditions in which psychometric tests and Piagetian tasks were developed, the correlations between the two types of measures have been found to be consistently positive and generally moderate in magnitude. A common general factor underlies them. It has been shown that it is possible to construct a battery consisting of Piagetian tasks that is as good a measure of g as standard IQ tests.

The traditional view in psychology is that there is no meaningful relationship between personality and intelligence, and that the two should be studied separately. Intelligence can be understood in terms of what an individual can do, or what his or her maximal performance is, while personality can be thought of in terms of what an individual will typically do, or what his or her general tendencies of behavior are. Research has indicated that correlations between measures of intelligence and personality are small, and it has thus been argued that g is a purely cognitive variable that is independent of personality traits.

In a meta-analysis, the correlations between g and the "Big Five" personality traits were found to be small, the largest being a modest positive correlation with openness to experience. Some researchers have argued that the associations between intelligence and personality, albeit modest, are consistent. They have interpreted correlations between intelligence and personality measures in two main ways.

The first perspective is that personality traits influence performance on intelligence tests. For example, a person may fail to perform at a maximal level on an IQ test due to his or her anxiety and stress-proneness. The second perspective considers intelligence and personality to be conceptually related, with personality traits determining how people apply and invest their cognitive abilities, leading to knowledge expansion and greater cognitive differentiation.

Some researchers believe that there is a threshold level of g below which socially significant creativity is rare, but that otherwise there is no relationship between the two. It has been suggested that this threshold is at least one standard deviation above the population mean. Above the threshold, personality differences are believed to be important determinants of individual variation in creativity. Others have challenged the threshold theory. While not disputing that opportunity and personal attributes other than intelligence, such as energy and commitment, are important for creativity, they argue that g is positively associated with creativity even at the high end of the ability distribution.

The longitudinal Study of Mathematically Precocious Youth has provided evidence for this contention. It has shown that individuals identified by standardized tests as intellectually gifted in early adolescence accomplish creative achievements (for example, securing patents or publishing literary or scientific works) at several times the rate of the general population, and that even within the top 1 percent of cognitive ability, those with higher ability are more likely to make outstanding achievements.

The study has also suggested that the level of g acts as a predictor of the level of achievement, while specific cognitive ability patterns predict the realm of achievement.


Raymond Cattell, a student of Charles Spearman's, rejected the unitary g factor model and divided g into two broad, relatively independent domains: fluid intelligence (Gf) and crystallized intelligence (Gc). Gf is conceptualized as the capacity to figure out novel problems, and it is best assessed with tests with little cultural or scholastic content, such as Raven's matrices. Gc can be thought of as consolidated knowledge, reflecting the skills and information that an individual acquires and retains throughout his or her life.

Gc is dependent on education and other forms of acculturation, and it is best assessed with tests that emphasize scholastic and cultural knowledge. The rationale for the separation of Gf and Gc was to explain individuals' cognitive development over time. While Gf and Gc have been found to be highly correlated, they differ in the way they change over a lifetime. Gf tends to peak at around age 20, slowly declining thereafter. In contrast, Gc is stable or increases across adulthood. A single general factor has been criticized as obscuring this bifurcated pattern of development.

Cattell argued that Gf reflected individual differences in the efficiency of the central nervous system. Gc was, in Cattell's thinking, the result of a person "investing" his or her Gf in learning experiences throughout life. Cattell, together with John Horn, later expanded the Gf-Gc model to include a number of other broad abilities, such as Gq (quantitative reasoning) and Gv (visual-spatial reasoning).


While all the broad ability factors in the extended Gf-Gc model are positively correlated, and thus would enable the extraction of a higher-order g factor, Cattell and Horn maintained that it would be erroneous to posit that a general factor underlies these broad abilities. They argued that g factors computed from different test batteries are not invariant and would give different values of g, and that the correlations among tests arise because it is difficult to test just one ability at a time. However, several researchers have suggested that the Gf-Gc model is compatible with a g-centered understanding of cognitive abilities.

For example, John B. Carroll's three-stratum model of intelligence includes both Gf and Gc together with a higher-order g factor. Based on factor analyses of many data sets, some researchers have also argued that Gf and g are one and the same factor and that g factors from different test batteries are substantially invariant provided that the batteries are large and diverse. Several theorists have proposed that there are intellectual abilities that are uncorrelated with each other.

Among the earliest was L. L. Thurstone, who created a model of primary mental abilities representing supposedly independent domains of intelligence. However, Thurstone's tests of these abilities were found to produce a strong general factor. He argued that the lack of independence among his tests reflected the difficulty of constructing "factorially pure" tests that measured just one ability. Similarly, J. P. Guilford proposed a model of intelligence comprising up to 180 distinct, uncorrelated abilities, and claimed to be able to test all of them. Later analyses have shown that the factorial procedures Guilford presented as evidence for his theory did not provide support for it, and that the test data that he claimed provided evidence against g did in fact exhibit the usual pattern of intercorrelations after correction for statistical artifacts.

More recently, Howard Gardner has developed the theory of multiple intelligences. He posits the existence of nine different and independent domains of intelligence, such as mathematical, linguistic, spatial, musical, bodily-kinesthetic, meta-cognitive, and existential intelligences, and contends that individuals who fail in some of them may excel in others. According to Gardner, tests and schools traditionally emphasize only linguistic and logical abilities while neglecting other forms of intelligence.

While popular among educationalists, Gardner's theory has been much criticized by psychologists and psychometricians. One criticism is that the theory does violence to both scientific and everyday usages of the word "intelligence". For example, Gardner contends that a successful career in professional sports or popular music reflects bodily-kinesthetic intelligence and musical intelligence, respectively, even though one might usually talk of athletic and musical skills, talents, or abilities instead.

Another criticism of Gardner's theory is that many of his purportedly independent domains of intelligence are in fact correlated with each other. Responding to empirical analyses showing correlations between the domains, Gardner has argued that the correlations exist because of the common format of tests and because all tests require linguistic and logical skills.

His critics have in turn pointed out that not all IQ tests are administered in the paper-and-pencil format, that aside from linguistic and logical abilities, IQ test batteries also contain measures of, for example, spatial abilities, and that elementary cognitive tasks (for example, inspection time and reaction time) that do not involve linguistic or logical reasoning correlate with conventional IQ batteries, too. Robert Sternberg, working with various colleagues, has also suggested that intelligence has dimensions independent of g.

He argues that there are three classes of intelligence: analytic, practical, and creative. According to Sternberg, traditional psychometric tests measure only analytic intelligence, and should be augmented to test creative and practical intelligence as well. He has devised several tests to this effect. Sternberg equates analytic intelligence with academic intelligence, and contrasts it with practical intelligence, defined as an ability to deal with ill-defined real-life problems.

Tacit knowledge is an important component of practical intelligence, consisting of knowledge that is not explicitly taught but is required in many real-life situations. Assessing creativity independently of intelligence tests has traditionally proved difficult, but Sternberg and colleagues have claimed to have created valid tests of creativity, too. The validation of Sternberg's theory requires that the three abilities tested be substantially uncorrelated and have independent predictive validity. Sternberg has conducted many experiments which he claims confirm the validity of his theory, but several researchers have disputed this conclusion.

For example, in his reanalysis of a validation study of Sternberg's STAT test, Nathan Brody showed that the predictive validity of the STAT, a test of three allegedly independent abilities, was almost solely due to a single general factor underlying the tests, which Brody equated with the g factor. James Flynn has argued that intelligence should be conceptualized at three different levels: brain physiology, cognitive differences between individuals, and social trends in intelligence over time. According to this model, the g factor is a useful concept with respect to individual differences but its explanatory power is limited when the focus of investigation is either brain physiology, or, especially, the effect of social trends on intelligence.

Flynn has criticized the notion that cognitive gains over time, or the Flynn effect, are "hollow" if they cannot be shown to be increases in g. He argues that the Flynn effect reflects shifting social priorities and individuals' adaptation to them. To apply the individual differences concept of g to the Flynn effect is to confuse different levels of analysis.

On the other hand, according to Flynn, it is also fallacious to deny, by referring to trends in intelligence over time, that some individuals have "better brains and minds" to cope with the cognitive demands of their particular time. At the level of brain physiology, Flynn has emphasized both that localized neural clusters can be affected differently by cognitive exercise, and that there are important factors that affect all neural clusters. Perhaps the most famous critique of the construct of g is that of the paleontologist and biologist Stephen Jay Gould, presented in his book The Mismeasure of Man.

He argued that psychometricians have fallaciously reified the g factor as a physical thing in the brain, even though it is simply the product of statistical calculations (i.e., factor analysis). He further noted that it is possible to produce factor solutions of cognitive test data that do not contain a g factor yet explain the same amount of information as solutions that yield a g. According to Gould, there is no rationale for preferring one factor solution to another, and factor analysis therefore does not lend support to the existence of an entity like g.

Consider, as a concrete example, an exploratory factor analysis of a table of measurements of cars. The analysis makes up some variables which aren't too implausible-sounding, given our background knowledge. Mathematically, however, the first factor is just a weighted sum of the traits, with big positive weights on most variables and a negative weight on gas mileage. That we can make verbal sense of it is, to use a technical term, pure gravy. Really it's all just about redescribing the data.

This brings me to the other major sort of factor analysis, what's called "confirmatory" factor analysis. This is about checking a model where some latent, unobserved variables are supposed to account for the relations among the actual observations.

To simplify, the logic is that if the model is right, then we should get certain patterns of correlations and no others — like checking whether the partial correlations are zero, as Spearman's original model required them to be, but adapted to other latent structures. This is a genuinely inferential and not just descriptive piece of statistics. It's also a pretty modest one, since failing one of these tests is decisive, but passing often isn't very informative, because, as we'll see, radically different arrangements of latent factors can give basically the same pattern of observed correlations.

In the jargon, the power of these tests can be very low at reasonable sample sizes. It is very striking how infrequently one finds people who use exploratory factor analysis checking things with confirmatory factor analysis, for which I think a lot of blame must rest with teachers of statistics, myself included. If my two-factor model for the cars was right, then all of the correlation between say gas mileage and horsepower should be due to their respective correlations with the two factors, with no partial correlation between them once the factors are accounted for.

The data falsify this hypothesis at any reasonable level of significance. I do not, however, teach my students to do this: mea culpa. And it's not just me. One of the most prominent ideas put forward on the basis of these exploratory techniques, aside from the general intelligence factor, is what's called the five factor theory of personality traits. This quite robustly fails confirmatory factor analyses: the "Big Five", despite being made up for the purpose, don't actually fit the correlations in the data, even on personality tests designed using the theory.

This has done next to nothing to make personality psychologists rethink, revise, or discard the theory, and leads mild-mannered psychometricians to tear their hair in frustration. So: exploratory factor analysis exploits correlations to summarize data, and confirmatory factor analysis — stuff like testing that the right partial correlations vanish — is a prudent way of checking whether a model with latent variables could possibly be right.

What the modern g-mongers do, however, is try to use exploratory factor analysis to uncover hidden causal structures. I am very, very interested in the latter pursuit, and if factor analysis were a solution I would embrace it gladly. But if factor analysis were a solution, then when my students ask me, as they inevitably do, "so, how do we know how many factors we need?", I would have a better answer to give them than I do. There are ways of estimating the intrinsic dimension of noisily-sampled manifolds, but that's not at all the same.

More broadly, factor analysis is part of a larger circle of ideas which all more or less boil down to some combination of least squares, linear regression, and singular value decomposition, which are used in the overwhelming majority of work in quantitative social science, including, very much, work which tries to draw causal inferences without the benefit of experiments. A natural question, but one almost never asked by users of these tools, is whether they are reliable instruments of causal inference. The answer, unequivocally, is "no". I will push extra hard, once again, Clark Glymour's paper on The Bell Curve, which patiently explains why these tools are just not up to the job of causal inference.

Maybe more than two people will follow that link this time. They do not, of course, become reliable when used by the righteous, and Glymour was issuing such warnings long before Herrnstein and Murray's book appeared to trouble our counsels. The conclusions people reach with such methods may be right and may be wrong, but you basically can't tell which from their reports, because their methods are unreliable.

This is why I said that using factor analysis to find causal structure is like telling time with a stopped clock. It is, occasionally, right. Maybe the clock stopped at 12, and looking at its face inspires you to look at the sun and see that it's near its zenith, and look at shadows and see that they're short, and confirm that it's near noon.



Maybe you'd not have thought to do those things otherwise; but the clock gives no evidence that it's near noon, and becomes no more reliable when it's too cloudy for you to look at the sun. Now, I could go over the statistical issues involved in reliable causal inference, and why factor analysis doesn't measure up. But if I've learned anything teaching it's that examples are vastly more effective than proofs.

If you really want to know, start with Pearl and Spirtes, Glymour and Scheines. So I'm going to show you some cases where you can see that the data don't have a single dominant cause, because I made them up randomly, but they nonetheless give that appearance when viewed through the lens of factor analysis.

I learned this argument from a colleague, but so that they can lead a quiet life I'll leave them out of this; versions of the argument date back to Godfrey Thomson in the 1910s [7].

Correlations explain g, not the other way around

If I take any group of variables which are positively correlated, there will, as a matter of algebraic necessity, be a single dominant general factor, which describes more of the variance than any other, and all of them will be "positively loaded" on this factor, i.e., positively correlated with it.

Similarly, if you do hierarchical factor analysis, you will always be able to find a single higher-order factor which loads positively onto the lower-order factors and, through them, the actual observables [8]. What psychologists sometimes call the "positive manifold" condition is enough, in and of itself, to guarantee that there will appear to be a general factor. Since intelligence tests are made to correlate with each other, it follows trivially that there must appear to be a general factor of intelligence.
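The algebraic fact at work here is the Perron-Frobenius theorem: the leading eigenvector of a matrix with all-positive entries has components of a single sign. A toy demonstration in R, with an arbitrary made-up correlation matrix:

```r
# Any all-positive correlation matrix produces a leading "factor" on which
# every variable loads with the same sign. These entries are arbitrary.
R <- matrix(c(1.0, 0.2, 0.4, 0.3,
              0.2, 1.0, 0.3, 0.2,
              0.4, 0.3, 1.0, 0.4,
              0.3, 0.2, 0.4, 1.0), nrow = 4)
eigen(R)$vectors[, 1]  # all components share one sign: an apparent general factor
```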

This is true whether or not there really is a single variable which explains test scores. It is not an automatic consequence of the algebra that the apparent general factor describes a lot of the variance in the scores. Nonetheless, while less trivial, it is still trivial. Recall that factor analysis works only with the correlations among the measured variables.

If I take an arbitrary set of positive correlations, provided there are not too many variables and the individual correlations are not too weak, then the apparent general factor will, typically, seem to describe a large chunk of the variance in the individual scores. To support that statement, I want to show you some evidence from what happens with random, artificial patterns of correlation, where we know where the data came from (my computer), and can repeat the experiment many times to see what is, indeed, typical.

So that you don't have to just take my word for this, I describe my procedure, and link to my simulation code, in a footnote [9]. Consider the first correlation matrix R produced for me after I debugged my code, for five variables. All of the entries on the diagonal are 1, because everything is perfectly correlated with itself. Some of the variables are strongly correlated with each other, others only weakly; all of them, however, are positively correlated. If these variables represented actual observations, this pattern of correlations would rule out the possibility of some causal structures underlying the measurements, but would still be compatible with a huge range of different mechanisms.
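The footnote with the actual simulation code isn't reproduced in this excerpt, but a sketch of the procedure as described, with assumed details (correlations drawn uniformly from (0, 1); draws that fail to be valid correlation matrices rejected), looks like this:

```r
# Draw a random all-positive correlation matrix for p variables, discarding
# candidates that are not positive definite (not valid correlation matrices).
random_pos_cor <- function(p) {
  repeat {
    R <- diag(p)
    R[upper.tri(R)] <- runif(p * (p - 1) / 2)
    R[lower.tri(R)] <- t(R)[lower.tri(R)]  # mirror to make it symmetric
    if (min(eigen(R, symmetric = TRUE, only.values = TRUE)$values) > 0)
      return(R)
  }
}
set.seed(9)
R <- random_pos_cor(5)
round(R, 2)
```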

But remember, this is a completely random example, with no real causal factors behind it whatsoever.


At this stage, I could have done a factor analysis of the correlation matrix, but to make things look more realistic, I instead generated "test scores" for simulated "subjects" with these correlations, with each test having a mean of 100 and a standard deviation of 15, just like an IQ test. I then used a completely standard piece of software (R's factanal function, a maximum-likelihood routine) to find the single factor which best accounted for the correlations in the measurements.
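Continuing the sketch above (the sample size of 1000 is my assumption; this excerpt doesn't preserve the original number):

```r
# Generate IQ-style "test scores" (mean 100, sd 15) with correlation matrix R,
# then fit a single factor by maximum likelihood with factanal.
library(MASS)                # for mvrnorm
scores <- mvrnorm(n = 1000, mu = rep(100, 5), Sigma = (15^2) * R)
fa1 <- factanal(scores, factors = 1)
fa1$loadings                 # each test's loading on the single "g-like" factor
mean(fa1$loadings^2)         # fraction of total variance the factor describes
```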

For example, a typical value for the fraction of variance described by g on actual intelligence tests seems to be somewhere in the range of a quarter to two-thirds, and generally in the lower part of that range, say around a third. From looking at the table of loadings, it appears that variable 2, whatever it is, is not well-described by the factor.

If I think of the factor as something real (intelligence or athleticism or neuroticism or car-bigness, it doesn't matter), I might then drop variable 2 from my battery of tests. If I do so and re-calculate the factor loadings, they hardly change. I will come back to this point later. These results are no fluke: my first random sample was a little on the high side, but not remarkably so.

If I repeat the experiment with six imaginary tests rather than five, the mean proportion of variance described is similar. If I stick to five dimensions and let the correlations range over the whole interval from -1 to 1, I get a somewhat smaller mean proportion of variance described, and if I force the correlation coefficients to all be negative (in the range -1 to 0), it is again smaller but not negligible. And so on, and so on. Why does this matter? Well, if you take people and give them pretty much any battery of tests of mental abilities, skills and knowledge you care to name, you will find positive correlations among the scores, especially if you exclude people who have received specialized training in skills relevant to one test or another, or exclude the tests on which people have been trained.

This is what Thomson was pointing out, all those years ago, when he said that the apparent descriptive strength of the leading factor for test results was more a mathematical theorem than a psychological fact.

How to make independent abilities look like one g factor

But, and I can hear people preparing this answer already, doesn't the fact that there are these correlations in test scores mean that there must be a single common factor somewhere? To which question a definite and unambiguous answer can be given: No.

You can get strong positive correlations, even ones with vanishing partial correlations, so it looks like there's one factor, even when all the real causes are about equal in importance and completely independent of one another. This was, again, first demonstrated by Thomson, in the 1910s. I'll go over a slight variant of his original model, in the hope that it will lessen the odds that we have to spend the next 93 years debating what ought to be a closed issue. The model goes like this: there are lots of different mental abilities, a huge number of them. Thomson sometimes called them "factors", but I'll reserve that word for the things found by factor analysis.

Any one given intelligence test calls on many of these abilities, some of which are shared with other tests, some of which are specific to that test (at least among those being analyzed). For each test, draw a random number; that is the number of shared abilities used in that test. Draw another; that is the number of test-specific abilities it uses. In my simulation, I used 11 tests, because one of the more widely used IQ measures, the revised Wechsler Adult Intelligence Scale, was a battery of 11 tests.

Doing this gave me, for each of the 11 tests, a count of shared abilities and a count of test-specific abilities. To determine which shared abilities go with which variable, I draw a sample of the specified size from my pool of abilities. Now variable 1, for example, is determined by a large set of abilities, some of which it has in common with other tests, and I know which abilities those are. The total number of abilities invoked in this model runs into the thousands. To make the result which is coming as stark as possible, Thomson assumed, as I will, that there is no dependence whatsoever among these abilities; they are totally and completely uncorrelated.

For convenience, I'll assume that these abilities are not only independent but also identically distributed (IID); to keep things looking familiar, I made them normally distributed with a mean of 100 and a standard deviation of 15. Some abilities are involved in more than one test, but since only 3 are shared by all the tests, and 34 are shared by at least ten of them, it's hard to say that there is a common ability.

Also, every shared ability is shared by at least three tests. Since every test involves a large number of distinct abilities, these widely-shared abilities are not overwhelming determinants of the test scores, either.


[Figure: heat-map of the "factor pattern" connecting shared abilities to tests in a single random draw from the Thomson model. Yellow indicates that the test uses that ability, red that it does not. The tests have been automatically re-ordered to bring ones with similar abilities closer, and the abilities likewise; the clustering trees at the top and left are, of course, pure sampling artifacts.]

To generate test scores, I made up a random sample of independent individuals, and assigned them values of these abilities. I then summed the abilities, as prescribed, to get scores on the 11 tests. To add an air of verisimilitude, I topped each test score off with a little extra noise (mean zero, small standard deviation). Once again, let me emphasize that every ability contributing to the test scores is completely independent of every other, and none of them is preponderant on any of the tests, much less all of them.
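A compressed sketch of this generative process in R; the pool size, the ranges of the random draws, and the sample size are my stand-ins rather than the original simulation's exact numbers:

```r
# Thomson-style model: each test is the sum of many IID abilities, some drawn
# from a shared pool and some test-specific, plus a little measurement noise.
set.seed(1)
n_people <- 1000; n_tests <- 11; pool_size <- 500
pool <- matrix(rnorm(n_people * pool_size, 100, 15), n_people)
scores <- sapply(seq_len(n_tests), function(i) {
  shared <- sample(pool_size, sample(50:200, 1))  # which shared abilities
  n_spec <- sample(50:200, 1)                     # how many specific abilities
  rowSums(pool[, shared, drop = FALSE]) +
    rowSums(matrix(rnorm(n_people * n_spec, 100, 15), n_people)) +
    rnorm(n_people, 0, 15)                        # extra noise on the score
})
factanal(scores, factors = 1)  # a single factor happily "explains" the scores
```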

When I do a factor analysis as before, I find that a single made-up factor, call it g, describes nearly half of the variance in the scores, with all 11 tests loading positively on it. As they used to say: This is no coincidence, comrades! More, if I do a standard test of whether this pattern of correlations is adequately explained by a single factor, the data pass with flying colors. All of which is, by construction, a complete artifact. Once again, this isn't a fluke: repeating the simulation from scratch (i.e., with a new assignment of abilities to tests and a new sample of individuals) gives similar results.

You can use my code for this simulation to play around with what happens as you vary the number of tests and the number of abilities. Now, I don't mean to offer this model of thousands of IID abilities adding up as a serious depiction of how thought works, or even of how intelligence test scores work.

My point, like Thomson's, is to show you that the signs which the g-mongers point to as evidence for its reality, for there having to be a single predominant common cause, actually indicate nothing of the kind. Thomson's model does this in a particularly extreme way, where those signs are generated entirely through the imprecision of our measurements. There are other models, for instance the "dynamical mutualism" model of van der Maas et al., in which positively correlated test scores emerge from interactions among many processes rather than from any single common cause.

This should surprise no one who's even casually familiar with distributed systems or self-organization. Those supposed signs of a real general factor are thus completely uninformative as to the causes of performance on intelligence tests.

Heritability is irrelevant

Someone will object that g is highly heritable, and say that this couldn't be the case if g were just a statistical artifact. But this also has no force: Thomson's model can easily be extended to give the appearance of heritability, too.

Having spent far too long, in a previous post, covering what heritability is, why estimating the heritability of IQ is somewhere between difficult and meaningless, and why heritability tells us nothing about how malleable IQ is, I won't re-traverse that ground here. Determining the heritability of an unobserved variable like g raises a whole extra set of problems (there is a reason you see so many more estimates of the heritability of IQ than of g), though if you want to define "general intelligence" as a certain weighted sum of test scores, that is at least operationally measurable.

Suppose that, mirabile dictu, all the problems are solved and we learn the heritability of g, and it's about the same as the best estimate of the narrow-sense heritability of IQ. Does it make sense to go from "g is heritable" to "g is real and important"? I have to say that I find it an extraordinarily silly inference, and I'm astonished that anyone who understands how to calculate a heritability has ever thought otherwise.

Height, in developed countries, has a heritability of around 0.8. Blood triglyceride levels are also substantially heritable. Thus the sum of height and triglycerides is heritable. How heritable will depend on the correlations between the additive components of height and those of triglycerides; assuming, for simplicity, that there aren't any, the heritability of their sum will lie somewhere between the heritabilities of the two traits, weighted by their variances. The fact that this trait is heritable doesn't make it any less meaningless.
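To make the arithmetic concrete, here is the variance-weighted average in code; 0.8 for height is the figure above, while 0.5 for triglycerides and the equal variances are illustrative assumptions:

```r
# Heritability of a sum of two independent traits: the variance-weighted
# average of the two heritabilities (no covariance between their components).
h2_sum <- function(h1, h2, v1 = 1, v2 = 1) (h1 * v1 + h2 * v2) / (v1 + v2)
h2_sum(0.8, 0.5)  # equal variances give 0.65, between the two heritabilities
```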

It'd still be embarrassing for the Thomson model if it couldn't reproduce the appearance of heritability, since after all no one is saying that the measured or even real heritability of IQ is always and exactly zero. But that's very easy, and the logic is the same as for combining height and triglycerides. Assume, as in classical biometric models, that the strength of each ability for each person is the sum of three components: one purely genetic and additive across genes, one purely genetic and associated with gene interactions, and one purely environmental, and that these are perfectly independent of each other.

Although proponents of multiple intelligence theory reject this interpretation, factor analysis remains one of the most important tools in 21st-century intelligence research. Because measurement error attenuates the observed correlations between tests, Spearman derived a statistical formula to correct for this underestimation. Through an extended formula, he was able to demonstrate that a common source of variance accounted for the correlations among all the mental tests, and he called this the general factor, or g.

This finding reinvigorated the idea that intelligent behavior arises from a single metaphorical entity, and it forms the foundation for many present-day theories of human intelligence (Jensen, 1998).

References

Jensen, A. R. (1998). The g factor: The science of mental ability. Westport, CT: Praeger.

Spearman, C. (1904). "General intelligence," objectively determined and measured. American Journal of Psychology, 15, 201-292.

Spearman, C. (1927). The abilities of man. London: Macmillan.

Spearman, C. (1930). Autobiography. In C. Murchison (Ed.), A history of psychology in autobiography (Vol. 1). Worcester, MA: Clark University Press.