

WHY IGNORE THE G FACTOR? -- Historical considerations



By:

Christopher R. BRAND, Edinburgh
Consultant to the Woodhill Foundation (USA)
71, South Clerk Street; Edinburgh, UK
brand@crispian.demon.co.uk

Denis CONSTALES, University of Ghent
Postdoctoral Scientific Researcher, Department of Mathematical Analysis
Ghent University, Galglaan 2; B9000 Gent, Belgium

Harrison KANE, Western Carolina University
Assistant Professor of School Psychology
Western Carolina University
311 Killian Bldg., Cullowhee, NC 28723

An invited chapter in a Festschrift for Arthur Jensen: The scientific study of general intelligence (Edited by Helmuth Nyborg and published by Pergamon Press, 2003).



ABSTRACT

Today's neglect of general intelligence (g) and IQ by psychologists, educationists and the media is the West's version of Lysenkoism. By 2000, hysterical denial of g became effectively the official science policy of the USA as Stephen Jay Gould, the author of The Mismeasure of Man -- and thus Arthur Jensen's main rival -- was elected President of the American Association for the Advancement of Science. Rooted in an egalitarian ideology that the West had managed to expel from the field of economic policy in the Reagan/Thatcher years, denial of g has typically been supported by wilful ignorance, wishful thinking and downright censoriousness. Such denial provides a powerful base for social work and state education.

Those who deplore g and its links to heredity, achievement and race often rehearse the multifactorial/componential ambitions of the nineteenth-century phrenologists which eventually appealed to American psychologists in the 1930's and subsequently. Alternatively, g-denial may deploy both ancient and modern arguments that nothing can be 'measured' in psychology. These two contradictory positions of IQ's more scholarly detractors are especially considered in this chapter, as is the less-often-remarked problem for the London School that so few Christian-era philosophers and psychologists -- prior to Herbert Spencer and Sir Francis Galton -- made much room in their systems for g. Despite considerable tacit acceptance of Plato's stress on the centrality of reason in human psychology, Plato's elitism and eugenicism are feared for their supposedly authoritarian implications. A hypothesis is advanced here, and supported empirically, which attributes neglect of g by intellectuals partly to their limited experience of real life - across the full IQ range; and it is suggested that Platonic realism actually enjoys distinguished support in modern philosophy and provides a basis for a new liberalism.


Readers of this Festschrift for Emeritus Professor Arthur Jensen (of the University of California at Berkeley) need little introduction to the preference among today's psychologists, educators, politicians and commentators for denying, or at least ignoring the g factor. Around 1960, Arthur Jensen narrowed his research focus as a differential psychologist on to the role of heritable g in explaining educational outcomes - not least the attainments of Blacks and Whites in the USA. Following his invited article in Harvard Educational Review (1969), Jensen became the best-known exponent of, and martyr to the central thesis of psychology's London School. Subsequently, an intensifying inquisition against 'the Jensenist heresy' inspired by egalitarians such as Leon Kamin, Stephen Jay Gould, Richard Lewontin, Steven Rose and Barry Mehler has kept other Western academics shivering in their shoes.

Opposition to g has really come to dominate turn-of-the-century psychology. Experimental psychologists have slowly re-discovered general intelligence after years of behaviourism (e.g. Mackintosh, 1997; Conway et al., 1999), yet they talk of it only as 'working memory' and decline to show interest in measuring it reliably in individuals or in examining its heritability. Social psychologists wishing to avoid g have felt it safer to avoid all talk of trait differences and to engage in a rhetoric that further denies the existence of human races. Differential psychologists should have been enjoying the credit for proving the general equality of the sexes in intelligence, for allowing bright children from poor home backgrounds to be routed towards the highest educational achievement, and for exempting mentally subnormal people from the rigours of the criminal law. Instead, they have paid a massive price for pointing out that there are differences between the races and social classes in a g factor that is substantially heritable. For thirty years, only the 'far right' Pioneer Fund has been willing to fund research by psychologists declaring plainly hereditarian views.

The consequences for education have been still more serious. To acknowledge deep-seated differences in general intelligence had always seemed pessimistic in a post-Nietzschean West which no longer held out to its citizens the hope of future equality in a Christian heaven; and, unlike the late Hans Eysenck, Arthur Jensen entertained no optimistic notion that behaviour therapy might quickly allow amelioration of the psychological problems revealed by his work. From the first storm of controversy over 'How much can we boost IQ and scholastic achievement?', through Bias in Mental Testing (Jensen's 'Old Testament', vindicating the fairness of IQ-type tests) to his magnum opus, The g Factor: the Science of Mental Ability (his 'New Testament', covering psychogenetic studies), Jensen defied the politics of neosocialism which attributes all the problems of 'minorities' to 'disadvantages', 'prejudices' or 'low expectations' that can be rectified by interventions of a social, as distinct from a biological type. Consistently, Jensen doubted there could be any great degree of intellectual equalization for children having serious educational problems in a computerized world where high levels of g are increasingly demanded.

Seizing on such 'pessimism', critics ignored the positive aspects of Jensen's thesis. Instead of IQ differences being addressed realistically by the use of school streaming or tracking as progressive educators had once maintained (see Ravitch, 2000), modern educationists have refused to admit that society works by division of labour. A febrile piety has been created in the West to the effect that all children (except perhaps the grossly mentally retarded) have equal intellectual potential - at least so long as they are kept within a rigid state school system that frowns on individuation of teaching and allows specialization only for children having gifts for music and ballet. By the 1990's, poor performance from any group of children came to be blamed not on genetic differences but on alleged failures by teachers and on wider 'low expectations' and 'racism' (whether 'institutionalized' or otherwise). Desperate to 'turn round' failing inner-city schools, Britain's 'New Labour' government in 1999 began appointing 'superheads' at salaries of £70,000 p.a. and with no ancillary expense spared. There was much talk of 'situations' and 'cultures of failure' that would soon be rectified. Yet, denied the possibility of expelling unruly pupils, three of the over-optimistic superheads soon resigned in despair and the eleven selected 'Fresh Start' schools had no better academic results after a year of their new regimes than they had at the beginning (Independent [London], 2 ii 2000, p. 8). Universities also had to ignore intelligence: they risked serious criticism in the 1990's if they failed to represent 'minorities' pro rata in their ranks; and they repeatedly sought fresh admissions criteria which might enable them to admit more non-White and state-school applicants even when to do so would accelerate their already fast-declining academic standards. Any doubts about programmes of 'affirmative action' were denounced as 'racist' -- thus inhibiting sensible discussion.
Indeed, no proposal for real improvement, even to increase UK medical practitioners' fluency in English, escaped criticism as 'racist.' Jensen's own record of support for the racial desegregation of US schools was not enough to stop Steven Rose (1997) calling him 'the grandfather of modern scientific racism' and declaring 1969 'the beginning of the last big wave of scientific racism.'

Notoriously, opposition to g came to a head when Richard Herrnstein and Charles Murray (1994) published, in The Bell Curve, their estimates of the wider social importance of the g factor. Large-scale IQ testing and follow-up of US youth had, by 1990, shown that g differences were more important than differences in parental socio-economic status (SES) in accounting for life outcomes at age 30 in qualifications, employability, law-abidingness and procreational self-control. That such g differences should be thought even 40% heritable by Herrnstein & Murray incensed America's academics, especially since The Bell Curve also set out reasons for thinking the intellectual differences between Blacks and Whites to be deep-seated. Herrnstein and Murray were swiftly and widely denounced as 'attempting to revive scientific racism' (e.g. by Washington State University's Obed Norman, 1995). In terror, mainstream publishers withdrew from publishing work supporting classic London School views on race and IQ. No mainstream publisher could be found for Phil Rushton's (1994) Race, Evolution and Behaviour - even the mail-order house Transaction pulped its 1999 abridgment of Rushton's book after threats from the US social science community; Wiley withdrew The g Factor: General Intelligence and Its Implications (Brand, 1996; 2001 edition available at http://www.douance.org/qi/brandtgf.htm) from UK bookshops; and Jensen found his own The g Factor: the Science of Mental Ability (1998) rejected by Wiley and several other mainstream publishers and given only mail-order publication. In London, in September 1999, noisy anti-eugenic protesters forced the closure of a conference of the Galton Institute where Arthur Jensen and the race-realist psychologists Emeritus Professor Richard Lynn and Professor Glayde Whitney were among the scheduled speakers (Brand, 1999).

The years 1994 to 2000 saw important breakthroughs for the London School. There was startling new evidence from Africa of a 25-IQ point difference between Blacks and Whites (Rushton & Skuy, in press). Genetic engineering of 'Doogie' mice yielded a substantial improvement on learning tasks (Tang et al., 1999). A convincing review showed the general unimportance of parental SES in accounting for children's differences in personality or intellect (Bruer, 1999) - contradicting the belief of Richardson (1999) that "IQ tests are merely clever numerical surrogates for social class." There was a leaked report of a research finding by Robert Plomin of some genes for IQ (http://news.bbc.co.uk/hi/english/sci/tech/newsid_850000/850358.stm). New evidence appeared favouring streaming in schools (see Brand, 1998). The international journal Intelligence devoted a whole issue in 1998 to articles that were largely celebratory of Jensen's work; and the editor of Intelligence, Douglas Detterman (who had himself once hoped the g factor would "go away"), condemned as "absurd environmentalism" the theories of the one surviving British behaviourist, the University of Exeter's M. J. A. Howe (e.g. 1997). Nevertheless, these achievements of the London School counted for little in the media or in the increasingly cowardly universities of the West where politically correct 'sensitivity' to the problems of minorities had become the norm.

It is easy to explain how public egalitarianism increased in parallel with Arthur Jensen's lifetime of scholarly effort to understand intelligence differences and their origins. The self-declared imperative of socialists was always to help the poor, or at least 'the working class.' Today, as the West has learned the folly of communism and seen the collapse of most of the regimes that ever adopted it, left-wing politicians in democratic countries have no longer been able to offer economic policies of state control, high taxation, welfare extravagance and serious redistribution of wealth. Instead, a busy new method of rectifying 'disadvantage' has been found: the neosocialists of modern America and Britain have offered to minorities - whom they encourage to immigrate -- not hard cash but all the perquisites of 'affirmative action.' Such 'positive' discrimination against healthy and heterosexual males of European descent gives minorities degrees and jobs regardless of merit, and a working environment that is 'safe' from the many forms of harassment and prejudice from which they are deemed to suffer. Formal and informal witch-hunting of 'harassers', 'stalkers', 'paedophiles', 'date rapists', 'homophobes', 'lookists' and 'racists' rids the workplace of men and provides jobs for women and a few token Black people who, poorly qualified as they are, must be carried as passengers by firms under what is essentially a novel form of taxation.

Even beyond the workplace, the media and the Internet are carefully scrutinized for 'racism' and for the cardinal sin of 'Holocaust denial.' Mainstream publishers are bullied into withdrawing books such as that by the freelance historian David Irving - called a Holocaust denier even though he believes and writes that the Nazis murdered at least two million Jews (almost as many as the Jews murdered by Stalin). In return, people seeking the kind of favouritism from government that the working class once enjoyed under socialism now realize they must join minority groups that will campaign against the ageism, sexism, fat-ism, handicapism and eugenicism by which they must argue they are victimized - and were, like American Blacks and Jews, victimized in the past, requiring campaigns for back-dated prosecution and financial compensation. Lastly, beyond the attraction of such neosocialism to 'victims', the egalitarian package is most agreeable to those who receive state appointments to administer schemes of relevant monitoring, teaching and committee work. Such job creation provides both salaries and moral satisfaction to the academics, educationists, counsellors and social workers who are appointed.

Nevertheless, to understand the motives of Jensen's enemies is still not to have a full appreciation of the extent of their ignorance and their wish to suppress evidence. Critics ignore John Carroll's (1993) establishment of g as accounting for far more mental ability variance than all other factors put together. They set aside Tom Bouchard et al's (1990) dramatic evidence from separated monozygotic twins of a high heritability for g - saying a priori that "phenomena such as canalization, divergent epigenesis, exon-shuffling (which modifies gene-products to suit current developmental needs), and even developmental modification of gene-structures themselves, now make a nonsense of the idea of a one-to-one relationship between incremental accumulations of 'good' or 'bad' genes, and increments in a phenotype" (Richardson, 1999). Critics neglect Linda Gottfredson's (e.g. 1997) demonstration of the g levels required in different occupations. Still more astonishing, environmentalists and egalitarians themselves lack any positive account of how intelligence differences arise.

The chief current recourse of the critics of the London School is to say that genes always work 'in interaction with the environment.' According to the Provost of King's College, Cambridge (Bateson & Martin, 2000): "The continuous process of exchange between individuals and their environments that underlies development makes a nonsense of the notion that an individual's characteristics can be predicted from their genes and experiences." Critics apparently think that such obfuscation will dissuade people from manipulating genes to achieve the kind of eugenic effects that are already achievable in plants and animals. Already there is a queue of American and Canadian ex-parents waiting to pay million-dollar sums to clone their dead children in the confident expectation that they will largely succeed in re-creating the same intelligence and personality as had been lost to them. (Egalitarians themselves will probably abandon 'interactionism' as soon as genetic engineering of g becomes possible - for they will have no compunction about using the state's apparatus to equalize IQ levels.)

When an appeal to 'interactionism' is thought too risky or over-used, the second refuge is in the work of James Flynn (e.g. 1984) telling of the secular rise in IQ-test scoring that was first noticed in 1948. Unfortunately for this tactic, these test-score gains are greatest on sub-tests of copying skill [Coding or Digit Symbol] that are relatively poor measures of g (Rushton, 1999); and no-one has ever explained them or been able to speed them up. Flynn himself had hoped that Black test scores might be rising fast as those of Whites once did; and Hunt (1999) still thought the Black-White gap was "clearly decreasing" and had already declined to .8 SD units (i.e. 12 IQ points). However, Murray (1999) reported National Longitudinal Survey of Youth data from the previous generation showing NO closing of the racial gap in fluid g; and in 2000 the US federal Department of Education said the Black-White gap in reading had actually been increasing through the 1990's -- leaving the average Black 17-year-old of 2000 reading only about as well as the average White 13-year-old.

That such appeals to mystery and complexity have to substitute for empirical demonstration of powerful social-environmental effects on intelligence is the massive flaw in the opposition to Jensen. Yet, beyond the lame and complexity-venerating responses to London School achievements, there are two lines of argument which come from eminent and well-informed psychologists and which continue the critique of g more seriously into the present. They can be called "componentialist" and "constructivist" respectively. In addition, there is one line of attack on g which has not yet been tried by critics but which probably should be: it is to ask why, if Jensen is essentially right about g, so many great philosophers, thinkers and scientists of the past showed so little appreciation of the occurrence of g differences. This can be titled the problem of "classic neglect".



Componentialism

Undoubtedly the simplest and strongest reply to the London School would be to point to g's being only one of several measurable dimensions of mentality - and perhaps not even the dimension having the greatest power to account for human differences in behaviour, personality and achievement. Just as g can seem less important when it is considered that people also differ in looks, wealth, health and strength, so g can be played down by setting it alongside other personality features like the Big Five (extraversion, anxiety, independence, conscientiousness, tender-mindedness - e.g. Brand, 1997) which so appeal to today's psychometricians. Even within the realm of mental abilities, it may be argued that there are several independent factors -- or at least factors that correlate so weakly as to make talk of g irrelevant.

Such multifactorial hopes continue the time-honoured ambitions of philosophers and psychologists to identify the main 'components of the mind.' Whether Plato with his three, Aristotle with his greater but undecided number, Aquinas with his eight (eventually ten), Gall with his 28, Spurzheim with his 35, Guilford with his 150 or Sternberg with his 666 components (including interaction effects), many have held out a vision from which some egalitarian satisfaction might be extracted. Certainly the failure of any definite number of components to emerge seems no deterrent to component-loving psychologists. In the past decade, Harvard's Howard Gardner has advocated some seven (gradually becoming eight-and-a-half - Taub, 1998; but possibly dropping to three - Gardner, 1999) 'intelligences' without ever citing the work of multifactorial theorists like the great Louis Thurstone who might have been his guide; and Daniel Goleman (1995) has proposed an entirely new type of intelligence, 'emotional intelligence' (EQ) which he thinks eluded a century of correlational psychology - his confidence undimmed by the failure of himself or his supporters to come up with any way at all in which EQ can be objectively or reliably measured. The great merit of componentialism is that it can always be envisaged that new forms of observation or testing might allow the emergence of new dimensions - just as cognitive psychologists now have Chomskyan modules for aspects of language acquisition and evolutionary psychologists envisage that (some) people's minds may house previously unremarked 'landscape seekers' and 'cheater detectors.' As knowledge of the brain grows, there is more evidence of brain centres specialized for recognizing vegetables (but not fruit), proper names, and people of other races. Such discrete faculties might realize the wildest dreams of any phrenologist.

To such ambitions there can be no entirely compelling answer - even though Nathan Brody (1992) and Arthur Jensen (1998) have explained in the strongest terms that Gardner's scheme is 'arbitrary and without empirical foundation.' Plainly, it is the precise hope of any scientist of the mind to discover previously unnoticed aspects of mental functioning and of individual differences. Just as some successful form of conversational psychotherapy may eventually be discovered, and some further generations of 'positive discrimination' may at last boost Black IQ, it just could be that expensive new forms of in-depth child assessment being developed by non-psychometricians at Harvard and Oxford may achieve more than a re-invention of the wheel. Where once IQ testers tried to test children's intelligence adequately in group sessions lasting 40 minutes, today's state-funded research psychologists are granted many leisurely hours of observation, with one observer per child, to try to find aspects of game-playing that can be used as a new criterion to counsel university entry for low-IQ applicants from 'underprivileged' backgrounds. London School theorists can only remark what must be the decreasing likelihood that any stone has been left unturned in the hunt for mental abilities which they themselves once led - studiously exploring from 1920 to 1960 numerous schemes allowing talk of non-g factors (usually called 'specific' or 'group' factors). Notably, factor analytic methods showed long ago how levels of intelligence may differ somewhat across verbal, numerical, spatial and musical symbol systems - e.g. one person in eight has a statistically significant (p <.05) discrepancy between verbal and spatial intelligence (Wechsler, 1939).

----

What must be specially remarked in 2001, however, is the latest failure to deliver a substantially improved componentialism. For twenty years, Britain's leading psychologist, Nicholas Mackintosh, has occupied the Chair of Experimental Psychology at Cambridge University. Following service on a UK government commission into the poor educational performance of Black children, Mackintosh - best known as a learning theorist of animal behaviour - increasingly concentrated on the topic of intelligence; and his 1997 book IQ and Human Intelligence was the product.

Although this work opens with lofty condescension towards IQ, a magisterial impartiality is largely preserved on matters of fact. Unusually for a psychology professor in the public eye, Mackintosh does not blame IQ psychologists for the restrictive 1924 US Immigration Act; nor for Britain's mid-century selective system of grammar schools. He is emphatic that IQ allows prediction of a child's educational future that goes substantially beyond whatever can be predicted from parental SES; he allows that the nature of g as mental speed has been becoming clearer, whether because inspection time tests correlate at .40 with IQ or because tests of working memory and 'Tower of Hanoi' ability correlate as high as .77; and he rejects the wish of psychologists like Stephen Ceci and Anders Ericsson to distract attention to special learning abilities, and the wish of Leda Cosmides and Nicholas Humphrey to talk of specialized social intelligence.

Despite having come to hold views that would qualify him for membership of the London School, Mackintosh is concerned throughout his book to go beyond g and identify sub-factors of intelligence, and most notably to envisage some distinction between verbal and spatial intelligence (which abilities, he concludes, have their own special links to verbal and spatial memory). Yet all that Mackintosh has to show for his componentialist concern is summed up in his high regard for the work of Richard Snow (1984) which once distinguished four sub-factors to g (verbal, spatial, crystallized, memory). Altogether, Mackintosh's modest multifactorialism best approximates the London School model proposed by Sir Cyril Burt and Philip Vernon in the nineteen-fifties (e.g. Mackintosh, p. 266). By the end of his book, Mackintosh admits that "some readers may feel disappointed, even cheated" by his answers to the main questions about human intelligence; and, while remarking the "risk of concluding..on a somewhat sceptical or sour note", he concedes the existence of the g factor as classically envisaged. Altogether, IQ and Human Intelligence is probably the worst news for componentialists since Eysenck and Burt noticed the many considerable correlations between Thurstone's theoretically independent 'primary abilities' (Eysenck, 1939). Mackintosh dismisses the classic multifactorial effort of J. P. Guilford and likewise the recent empty posturing of Howard Gardner - saying that "if [Gardner] means there is no positive manifold, he is simply wrong."



Constructivism

If the quest to establish a compensatory componentialism has failed, an alternative for egalitarians is to try to scorn g itself. Since 1982, the most popular critique of IQ and all its works has been Stephen Jay Gould's The Mismeasure of Man. In that book, Gould disparaged the IQ testing movement by pretending it had some close connection with nineteenth-century claims that brain size was the main determinant of intelligence - claims for which Gould felt there was insufficient good evidence by the standards of a century later. In fact, 1990's brain scan evidence did actually yield several correlations of around .40 between cerebral volume and IQ - correlations which remained unremarked by Gould as he was busy ascending to preside over the American Association for the Advancement of Science. Gould achieved the feat of persuading his many left-ish readers to forget about the twentieth-century history of IQ and see the g factor as a preposterous legacy of Victorian imperialism, supposedly responsible for untold damage to race relations and working class life chances.

More important than Gould's neglect of post-1969 IQ research (documented in Rushton, 1996) was his claim that there is really no such thing as g, except by a statistical sleight of hand. In particular, Gould maintained it was wrong to 'reify' intelligence - to talk of it as something which had any existence or any possibility of being measured. In this, Gould struck a chord with many psychologists - not least with those personality theorists who had long doubted the possibility of 'measuring' personality or any similar aspects of human beings. Had not the behaviourist philosopher Gilbert Ryle (1949) pointed out that, rather than talk of people 'being intelligent' or 'having intelligence', it would be less metaphysical and more precise to describe just which actions they performed and which problems they solved 'in an intelligent way' or 'intelligently'? Once, J. B. Watson and his behaviourist followers had removed all mental concepts from the repertoire of much academic psychology. Now, Gould would finish the job by eliminating the most dangerous survivor from the days of mentalism.

In fact, Gould's own campaign against reification was deeply flawed.


First, Gould ignored the fact that both Jensen and Eysenck themselves had - in days when positivism was more popular than it is today - expressed reservations about the status of g. In 1969, Jensen had written: "We should not reify g as an entity, of course, since it is only a hypothetical construct intended to explain covariation among tests" (p.9). And, in 1981, Eysenck wrote: "[I]t is..meaningfulness, or proven usefulness in explanation and prediction, that is important in a theoretical concept; .. the notion of "existence" is philosophically meaningless in relation to concepts" (p.82). Even today, Jensen (1998) recommends dissociating g from intelligence - breaking g's real-world connection and using it only as a scientific handle -- to avoid confusion and the over-heated discussions with which he has been only too painfully familiar. Anderson (1999) has especially complained that Jensen's operationalization of intelligence as only the g factor serves to avoid serious theorizing about mental structures.

Secondly, Gould's point about g being bound to show up in factor analysis would always have been perfectly familiar to the humblest user of that statistical technique. What matters, however, is not the tautological emergence of a first factor, accounting for as much variance as possible in a matrix, but rather the size of that factor. Especially important is the ratio of the first factor to further independent factors: in mental ability matrices, g invariably dwarfs the rest, typically by around five-to-one.
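The distinction can be illustrated numerically. The sketch below (a toy simulation, not real test data; the loading of 0.7 and the battery of ten tests are illustrative assumptions) generates scores from a simple one-factor model and shows that the first eigenvalue of the resulting correlation matrix is not merely "first" by definition but several times larger than any other - which is the substantive point about mental ability matrices.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_tests = 5000, 10

# Illustrative one-factor model: each test score = 0.7 * shared factor
# + an equally weighted, independent test-specific component.
g = rng.standard_normal(n_subjects)
specifics = rng.standard_normal((n_subjects, n_tests))
scores = 0.7 * g[:, None] + 0.7 * specifics

# Correlation matrix of the ten tests (variables in columns).
corr = np.corrcoef(scores, rowvar=False)

# Eigenvalues in descending order: the first corresponds to the
# general factor; the rest reflect only the specific components.
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]
print(eigvals[0] / eigvals[1])  # first factor dwarfs the second
```

A first factor always emerges, as Gould said; what this toy battery shows is that *how much* variance it absorbs relative to the remaining factors is an empirical fact about the data, not an artefact of the method.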

Thirdly, despite his hullaballoo about g being some kind of trick, Gould was actually to prove perfectly sympathetic to mental measurement when it came to the oblique factors and multiple components that have attracted so many American psychologists. By the end of his book, Gould is found cheerfully praising the componential vision of Thurstone and hoping for more of the same from modern researchers - thus showing himself quite content that there is a mental realm of ability factors in which notions of existence and quantification are perfectly proper. By 1999, Gould was even to be found in his sixteenth book, Rocks of Ages: Science and Religion in the Fullness of Life, trying to negotiate a stand-off between science and religion, so far was he officially from any thoroughgoing materialism.


----

Not so distractible, however, are some theorists who have written at length about the iniquity of attempting or pretending to measure mental abilities. The most accessible of these to English-speaking psychologists is the Australian educationist, Roy Nash (1990).

Nash's "materialist critique of IQ" argues essentially that IQ is no more than a descriptor of test performance and that there is no reason to posit some underlying reality, 'intelligence', as an explanation of performance. Nash is not hostile to IQ as a descriptive exercise - so long as it remains purely descriptive. Indeed, he defends Jensen against those who think it invalid to make comparisons between the intelligence levels of different species.
"Jensen is quite right -- the great apes are more intelligent than dogs, and, provided they have had some experience with sticks, ropes and boxes, are remarkably good at this sort of problem solving. It is pure obfuscation to try to argue that chimpanzees are not 'really' more intelligent than dogs, that 'intelligence' is a human concept, that dogs can find their way home better than chimpanzees, and so on and so forth. Words may be difficult to define in terms that everyone will find acceptable, but there is a central meaning to words and if we cannot say meaningfully that chimpanzees are more intelligent than dogs we might as well give up the effort of communication in this area at all."
But Nash regards the phenomenon of intelligence differences as arising not from differences in traits or faculties but rather from children's rates of progress through the 'syllabus' that their culture provides. Following Jean Piaget, Nash is content that children "are likely to accrue knowledge, processes or whatever at different rates but in a similar order." Thus the phenomenon of a g hierarchy arises as a variety of different factors - including physiological differences - impact on children's rates of learning. To talk otherwise of some measurable mental possession of intelligence, says Nash, is just "pseudoscience."
"The entire problematic of IQ theory seems to be based on an error of startling simplicity. People can hear, and their hearing can be tested, they are able to hear this or that well, and for that there must be all sorts of reasons, but no one would dream of offering in explanation of relatively poor hearing -- 'not enough construct of hearing ability.' That would be a very poor way to refer to the actual physiological mechanism of hearing. Why are some people able to perform tasks held to demand cognitive thought better than others? According to IQ theory because they possess greater 'cognitive ability.' That they possess greater 'cognitive ability' may be demonstrated by their performance on tests of 'cognitive ability.' It is not difficult to understand why so many contemporary cognitive psychologists stand well back from an argument with a built-in self-destruct device which ticks as loudly as this one."
Unlike Gould, Nash does not hamstring himself by relaxing his criticism for other psychometric measures by which he is less politically exercised. Rather, he extends his hardcore-nominalist condemnation beyond London School theorists to psychometricians as a whole. Further, he wins a certain plausibility for his argument by pointing to the uncertainties of Jensen and Eysenck themselves about g as they sometimes departed from the faculty conception held by the Aristotelian Charles Spearman. (Effectively, Eysenck and Jensen sometimes opted to accept the 'test theory pragmatism' which treats IQs as nothing but convenient numbers while at other times calling g a "biological reality.") Nor does Nash settle for condemning only what he takes to be the muddle and pretentiousness of psychometrician-psychologists: he is equally scathing about the mighty Nicholas Mackintosh, fearing (correctly - see above) that the latter's approach over the years "contributes to the legitimation of IQ."

Nash insists not just that IQ theorists have never managed to quantify any property beyond or beneath the performances that yield IQ estimates, but that there is no such property to be measured. This complaint, based on the fact that IQs are essentially rankings and have no true zero point, is one which also impressed the British psychometrician-psychologist Paul Kline (2000) before his death and which has led Kline's able student, Paul Barrett (e.g. 2001), to doubt whether present paradigms for g can be usefully continued. It is therefore worth examining in some detail Nash's claim that a Czech logician, Karel Berka, has succeeded in showing that IQ involves no true measurement of anything.

Berka's main work on the philosophical theory of measurement is his Measurement: Its Concepts, Theories and Problems (1983). There are two important points about the context in which Berka's book was written. First, in 1980 Czechoslovakia was under Communist rule, so the book adheres carefully to basic Marxist-Leninist tenets which are repeatedly invoked when alternatives are considered and choices have to be made. Secondly, Berka expressly intended to present critical objections to what he called the "wider" concept of measurement - the view that almost all human actions can be viewed to some extent as measurements (intentionally or not), so that, for example, responses to psychometrists' questions need no further philosophical justification to provide a basis for scientific measurement. (Some philosophers argue that a person who tosses a coin is 'measuring its fairness', and that a person who drinks tea is 'measuring temperature.') Berka's book thus tries to answer the question: "Can one formulate a theory of measurement which is in full accordance with Marxism-Leninism and which allows only of 'narrow' measurement?"

Philosophically, this is a valid and interesting question. The slump in the popularity of Marxism-Leninism today is accidental and irrelevant. Yet other, quite different, questions could be asked. For instance: "Can one formulate a wider doctrine of measurement that is in full accordance with Marxism-Leninism?" Or: "What could be the Marxist-Leninist objections to disallowing 'wider' measurement (Berka having already investigated the Marxist-Leninist objections to ALLOWING 'wider' measurement)?" Or: "Can one formulate a theory of wide or narrow measurement which is in full accordance with rationalism?" - or with positivism, empiricism or any other philosophical doctrine.

Thus it is largely pointless for critics of IQ to quote Berka's rejection of extra-physical measurements similar to IQ. One of Berka's essential underlying assumptions is that only certain very special actions can be termed "measurements", and the restrictions he makes on purposive actions easily disqualify as measurements not only IQ, but even such a tangible and widely used concept as economic utility. Berka's rejection of IQ as measurement is not a consequence of any of his arguments or investigations. Rather, it follows at once from his openly stated purpose to give the most restrictive interpretation possible to the term "measurement."

In Berka's view, counting cannot be accepted as a form of measurement, however non-intuitive this rejection may seem. Similarly, statistics are completely absent from Berka's view of science. IQ is admitted by Berka only as a form of "quasi-quantification". Berka thus accepts that IQ values can actually be meaningfully compared and ordered. He only objects to them being added to each other and arithmetically averaged. However, it can be argued that accepting IQ as quasi-quantification is actually quite sufficient to justify most talk of IQ measurement -- as follows.

The IQ values of a population have a distribution which is a good approximation to the Gaussian bell curve. This curve is symmetrical around its arithmetical average, so the mean of the distribution coincides with the median value. According to Berka, the arithmetical average makes no sense, being based on 'meaningless' addition and division; but the median does make sense so long as the values considered can be ordered uniquely -- so long as they are "quasi-quantified". Even though the mean as such is not 'meaningful' in Berka's view, its actual value will coincide with that of the median; and the median is itself meaningful since it only requires ordering to be defined -- not any adding or averaging.

Similarly, the standard deviation of the distribution (which is used to calibrate the IQ scale) is obtained from an average of squared deviations, and is thus not meaningful in Berka's view. But the difference between, for example, the first and the third quartile values is meaningful, being based only on ordering. Again, the value of this difference will coincide in any normal distribution with a fixed multiple of the standard deviation (1.3489 times the standard deviation).
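The equivalences just described are easy to verify numerically. The sketch below is illustrative only: it simulates scores on the conventional IQ scale (mean 100, SD 15) with Python's standard library, and shows that the order-based statistics Berka accepts -- the median and the quartiles -- recover the 'meaningless' mean and standard deviation to within sampling error:

```python
import random
import statistics

random.seed(42)

# Simulate IQ scores for a large population: Gaussian with the
# conventional mean of 100 and standard deviation of 15.
scores = [random.gauss(100, 15) for _ in range(200_000)]

# Moment-based descriptors -- 'meaningless' on Berka's account,
# since they require adding and averaging the values.
mean = statistics.fmean(scores)
sd = statistics.pstdev(scores)

# Order-based ('quasi-quantified') descriptors -- these require
# only that the values can be ranked, which Berka allows.
median = statistics.median(scores)
q1, _, q3 = statistics.quantiles(scores, n=4)  # first and third quartiles
iqr = q3 - q1                                  # interquartile range

print(f"mean = {mean:.2f}   median       = {median:.2f}")
print(f"sd   = {sd:.2f}   IQR / 1.3489 = {iqr / 1.3489:.2f}")
```

For a true normal distribution the agreement is exact: the median coincides with the mean, and the interquartile range spans 2 × 0.6745 ≈ 1.3489 standard deviations, so either pair of descriptors carries the same information.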

Thus the answer to Berka's critique and to those who invoke it is simple. All statements about the mean IQ and standard deviations of different populations can be rewritten in "quasi-quantified" terms which would be completely equivalent and entirely acceptable to Berka. Opponents of IQ cannot, therefore, truly claim to follow Berka in 'dismissing IQ as quasi-quantification.'

Opponents further like to claim they are following Berka when they liken IQ testing to ranking people by the numerical value of their telephone numbers (numbers that indeed cannot be meaningfully compared). But that analogy is entirely misleading. Phone-number ranking is not quasi-, but pseudo-quantification in Berka's terminology. IQ-bashers like to feel they have a philosopher on their side in dismissing means and standard deviations for IQ as 'meaningless.' Yet, on Berka's own account, IQ values are a perfectly respectable form of quasi-quantification, for which statistical descriptions in terms of ordering (e.g. the median and quartile values) are well-defined. When the observed distributions for a population and for its sub-populations look like normal distributions, and pass suitable statistical tests for normality, this is true scientific evidence for the reality of IQ values (at least at the population level). Thus it is fully justifiable to represent them via the most widely used descriptors, the mean and the standard deviation.

----

Is such argumentation against Berka, Nash and Kline sufficient to carry the day against an outright 'constructivist' who holds more widely that there is no reality - not even material reality - because all we can know are words and a vast language game that allows no escape from culture and politics? For example, the Parisian philosopher, Jean-François Lyotard, was a leading 'anti-racist' and Marxist -- though he never went quite so far as to join the French Communist Party. A key saying of his was that he "could not accept that there was any reality that the philosopher could observe." Biography was a particular object of his scorn: "La biographie, c'est l'imbécillité" ('biography is imbecility'), he would pronounce, since 'people' did not really exist. Together with Michel Foucault, Jacques Derrida and Gilles Deleuze, Lyotard was responsible for 'postmodernism' -- the late-twentieth-century version of Western philosophical idealism [see Introduction to The g Factor (Brand, 1996)]. (Full considerations of the inanities and hypernegativity of Parisian poststructuralism and constructivism are provided by Mark Lilla (1998) and by Gerard Delanty (1998). For a summary, see 'Reconstructivism deconstructed' at http://www.crispian.demon.co.uk/McDNLArch4b.)

Much constructivism consists merely in making impossible and pedantic demands for tight definitions of terms like intelligence and race and then denying there can be any reality when these definitions are not forthcoming. The constructivist may insist that heredity has not been demonstrated - except as a 'social fact' -- unless every last genetic detail of DNA is known and a full account furnished of how genes do their work. Needless to say, the meaninglessness of the concept of 'race' is a specially treasured item in the constructivist repertoire. It is as if the constructivist could never have a useful discussion of 'trees' because it is hard to say whether a bonsai is a tree or a bush. Like its practical arm of Political Correctness (PeeCee), constructivism aims to banish sensible scientific discussion and research at least to the distant future. However, there are four reasons why the demands of constructivists merit no serious reply at present.

1). The constructivist rhetoric that has come to dominate much social psychology as well as the academic world of the arts is a continuation of the idealistic and relativistic traditions of philosophy. These reached their previous high point as Hegel and Nietzsche urged no truth was to be found beyond, respectively, the social collective and the individual will. Today, unconstrained idealism finds its practical expression in the PeeCee movement emanating from Harvard which insists on speech control so as to be polite about (in fact, to ignore) the real problems posed by minorities. As a corollary, PeeCee expects minorities to achieve social rewards (degrees, jobs, parliamentary representation etc.) pro rata and not according to any criterion of merit.

Unfortunately, the constructivist's idea of the importance of words and 'labels' in stage-managing human nature has been largely confuted by the advance of psychiatric medicine in the twentieth century. While theorists like Foucault pontificated about the 'social creation of madness' and its convenience for capitalists and authoritarians, the mental hospitals of the West were actually having their human contents emptied on to the streets thanks to breakthroughs by drug companies. By contrast, despite Britain's 'comprehensive' schools becoming the fief of teachers of left-wing persuasions and great modern piety about 'disadvantage' and 'learning difficulties', Britain's levels of educational attainment fell (by international standards) thanks to the simple reality-feature of comprehensivisation - which largely precludes teaching in accordance with ability levels. Thus have genuine twentieth-century changes quite simply contradicted those who believed that madness would be cured by re-labelling, and educational levels raised by avoiding all talk of stupidity and failure.

2). Sceptical doubts as to the existence of core realities are invariably self-undermining and of dubious use to the 'progressive' causes that idealists typically wish to advance. In their wish not to talk of IQ or race or sex - preferring the non-biological terms 'ethnicity' and 'gender' -- constructivists invite the question of what they themselves will ever be able to say about anything. Anti-essentialism has been notably attacked by the left-wing anti-racist crusader, Kenan Malik (1996). Just as radical critics once used to ask the Scottish existentialist psychiatrist, Ronald Laing, what he could really do for schizophrenics if he did not believe in the diagnosis, so Malik doubts constructivists' ability to say or do anything about poverty, low intelligence, racism or sexism. "Relativism," he observes, "undermines the capacity to challenge racism." To claim that humans are equal requires an acknowledgment of some human essence in which some important equality can occur. By contrast, Malik points out, "Poststructuralism inevitably leads to the questioning of equality itself."

3). The idea that some scientists "cling to the dogma [of an 'objective external world'] imposed by the long post-Enlightenment hegemony" was approvingly explored in an article in the refereed social-science journal, Social Text. Unfortunately for the constructivists who usually read and write for Social Text, the article was entirely meaningless and had been written as a spoof by a perky physicist at New York University, Alan Sokal (1996). What the journal had published from Sokal was a Trojan horse bedecked with all the buzzwords, academic references and flattery for the journal's own editors that these postmodernists could have wished. Here, after all, was a real-life physicist saying right-on things such as: "The pi of Euclid and the G of Newton, formerly thought to be constant and universal, are now perceived in their ineluctable historicity." This was irresistible stuff for the constructibabbling journal's Special Issue entitled 'Science Wars' (May, 1996). Here was a splendid chance to pray Gödel's theorem, chaos theory and quantum mechanics in aid of modern neosocialist relativism and outright nihilism. Today, however, Sokal can -- like Doctor Johnson kicking his stone -- simply laugh at all those associated with the ludicrous journal. Says Sokal (1998), "Anyone who believes that the laws of gravity are mere social conventions should try transgressing those conventions from the windows of his flat on the 21st floor." Apparently, only two non-scientists had realized from reading a draft that his article was a spoof.

4). The days of the philosophe engagé are long since over in Paris. As the French saw Hungary, Solidarity, Solzhenitsyn, Cambodia and Leipzig 1989, they gave up their intellectuals and settled for taking over the Deutschmark. Derrida is now marginal, at best, to French thinking about cognitive science and philosophy of mind -- thinking which increasingly runs in English-speaking grooves [as witness the top French psychology journal, Cahiers de Psychologie Cognitive, which publishes in English]. (The only problem is that Derrida has found a new stamping ground on American campuses -- among les grands enfants who do not understand their country's general success and wish Whites to beat their breasts about America's failure to solve its Black problem. No wonder Derrida has a new saying as he professes to admire the 'states' rights' of America and advocates Black power: "La déconstruction, c'est l'Amérique" -- 'deconstruction is America.')


Classic Neglect

In view of the feebleness of even their best arguments against a real, measurable and powerful g factor, it is perhaps unsurprising that the critics of the London School largely settle for what Raymond Cattell called ignoracism. They try to ignore the writings of Eysenck and Jensen - bestirring themselves only to advise the 'publishing' trade of the trouble that they will make for any pro-IQ works appearing in bookshops. Nevertheless, it is surprising that critics have not used a line of criticism that should have a big appeal for idealistic anti-empiricists: to cite the authority of Western philosophers against the Jensenist heresy.

PeeCee is a religion that is currently at the stage where Christianity was before Emperor Constantine took it by the scruff of the neck in 314 A.D. and put it to imperial work. Because of this immaturity, no modern authorities of much general stature in psychology itself can be found to challenge London School ideas. Reliance has had to be placed instead on a biologist (Gould), a neuroscientist (Rose) and behaviourists like Howe and Kamin who have spent their working lives committed to a mentality-denying exercise that has itself been officially rejected by modern cognitive science and the rest of psychology. However, a more promising scene opens up for the critic who looks to the past. Psychology's founding fathers - who were what would be called philosophers or (especially in the case of the rationalist philosophers) scientists - showed a clear propensity to do without the g factor.

It was famously observed by the mathematician-philosopher, Alfred North Whitehead, that Western philosophy can be characterized as a series of footnotes to Plato. Certainly the quest for truth and goodness on which Plato embarked (drawing on Socrates, and followed pretty faithfully by Aristotle) has arrived after 2,400 years at a miserable state of affairs where no modern philosopher is known to the general public apart from the depressive Ludwig Wittgenstein. Unlike his mentor, the realism-seeking Bertrand Russell, Wittgenstein had no interest in science and was happy to leave psychology to the arid evasions of behaviourism while he dismissed as 'language games' the West's classic concerns with metaphysics - with how to describe objectively the world that lies beyond the efforts of the physicist. Thus it is that, in today's public debates (e.g. Sturrock, 1998) over the concerns of Parisian Professor Luce Irigaray to modify or qualify Einstein's equation E=mc² because it is sexist (entirely concerned with things going very fast in straight lines..), no big-name philosopher can be found to speak for science against PeeCee.

Needless to say, Lyotard's proposal that 'people do not exist' goes equally unchallenged by any philosopher feeling able to draw on 100 years of empirical psychology and its findings. Although differential psychologists find impressive personal continuities over time (not least in IQ which correlates .78 with itself across forty years of adulthood -- Schwartzman et al., 1987), philosophy in the English-speaking world still shares David Hume's sceptical worry that a person cannot be proved to be anything more than a changeful "bundle of sensations." Sometimes, indeed, it can seem that the only important agreements to be found among Christian thinkers of two thousand years have been on the inadvisability of sex and on the need for women to be punished for the sin of the Knowledge-seeking Eve - agreements which flourished under Catholicism thanks to the breast-beating St Augustine (who had had a ten-year-old girlfriend and a bastard son) and which infected Protestantism from its inception thanks to the agonized Martin Luther (whose sex life only began - with a newly defrocked nun -- at age 42). Those long-running moral agreements have been utterly reversed in today's society; but it is notable that they have weirdly achieved practical instantiation as Western people have stopped having children and thus put their women at risk of a miserable old age.

Often it is Plato who is blamed for the West's follies, as befits his philosophical pre-eminence. Certainly it was Plato who provided the most enduring answer to the materialism of thinkers such as Democritus and the relativism of the Sophists. Building on the mathematical discoveries of Pythagoras, Plato argued that there was a world of truth beyond the senses and urged men to seek such truth, claiming that in it they would also find freedom, beauty, goodness and justice. Plato envisaged three types of being: the timeless, unchanging Ideas of a realm of intelligible and true Being; the objects of sense-perception in a realm of Becoming; and the human soul whose business was to mediate between the first two realms. Plato's improvement on materialism and relativism markedly resembles that of the greatest modern philosopher of science, the late Sir Karl Popper, who finally came to a 'three world' metaphysical theory (of products of mind, mental experiences and dispositions and physical objects). Plato's school, the Academy, lasted almost a thousand years and remained -- thanks also to Aristotle -- an abiding influence on the Christian world.

However, there were three enduring problems for three-worldism.

The first was that there were not enough truths to stock the 'higher' realm, for the truths of geometry and the laws of logic and the 'clear and distinct' intuition of Descartes that he must exist can take one only so far.

The second problem was causal to the first. It proved hard to agree criteria by which to decide what was and what was not a higher truth. In particular, it proved hard to provide a resounding endorsement of empirical science - at least until Popper provided his rationale that scientific truth required not positive demonstration but the failure of attempts to falsify a theory's predictions.

Thirdly, it proved hard to establish any interesting number of moral truths. Though Kant worked hard to argue that one should act only on maxims that one could will to become universal laws, this was not very suitable to coping with individual differences; and Kant's authority was eventually dented by Einstein's demonstration that space was not in fact Euclidean, as Kant had stoutly maintained it must be. Nor was utilitarianism much help, again because of individual differences: partly, individual happiness is substantially under genetic control; partly, happiness is caused idiosyncratically in different people, defying the grander utilitarian ideas of improving the human condition. Lastly, Plato's own insistence that the good life should essentially involve a quest for higher truth understandably came to be taken by others as a puritanical abjuration of the world of the senses and of sex. Plato's model of the human soul resembles the one that would be adopted by Freud, of a charioteer (the voice of reason, Freud's ego, allowing reality-contact and wisdom) battling to get the best from two very different horses, one passionate and impulsive (the appetitive id) and the other more organized and focussed (the purposeful superego). People who adopt such a model can understandably slip towards thinking that the charioteer might be better off working with just the one, relatively controlled horse and doing without the passionate horse altogether. The later formulations of mystical neoplatonism encouraged such slippage, as did the Church.


Of course, Christianity did not altogether forget the body. Indeed, it insisted on a bodily resurrection as a key part of the after-life of the believer. In particular, Aristotle's less mystical version of Platonic realism eventually became central to Christianity as articulated by Saint Thomas Aquinas. The mediaeval church was happy to accept that the existence of God could be proved by reason as well as by faith and it happily added to its repertoire Aristotle's (never very forcefully expressed) belief that the earth was the centre of the universe, as well as his more considered beliefs in the inferiority of women and in the naturalness of slavery. (Aristotle was a romantic who had loved his wife dearly till her early death -- when he proceeded to have children by her slave; but he had departed from Plato's views that women were the equals of men and that Greek should not enslave Greek.) More importantly, Aristotle's two categories of cognitive and affective functions departed from Plato's three-world view that allowed a distinction between the realms of intellect (products) and intelligence (operations). Indeed, the Church would pay a high price for linking itself to Aristotle, for the latter's insistence on teleological causation would prove unacceptable to John Locke, Voltaire and the many other Enlightenment thinkers who took Galileo and Isaac Newton as their heroes. Having foolishly embarked on making truth claims about the natural world, the Church was unable to resist the temptation to have fights with Darwin and Freud, following which it lost most of its following in the West even though maintaining an active dysgenic influence in Africa. Psychology, too, paid a price: wrapped up in Aristotle's articulation of logic, concern with intelligence (as distinct from the intellectual work of reason) was lost until it was revived by Herbert Spencer (1855); and it was yet another eighty years before Raymond Cattell (e.g. 1936) began to make the vital distinction between fluid and crystallized intelligence (gf and gc).

Even Aristotle - Christianity's King Solomon, adept in science as much as philosophy - had not proved able to sustain his self-selected supporters. So Western philosophy collapsed into a set of unedifying arguments about whether there were any native faculties that gave secure access to bits and pieces of truth - or whether a sufficient basis for human knowledge could be found empirically in individually learned associations from the world of the senses. Even at their high points, neither rationalists nor empiricists came up with very much. Instead, their writing involves a constant struggle to keep the wolves of scepticism, relativism and nihilism from the door. Eventually, following Kant's 'transcendental idealism' - admitting it might be hard to have true knowledge of reality but claiming some of our ideas just had to be right - the high road to all-round idealism was wide open.

First, Hegel gloried in what had to be the work of the insuperable social collective; then Nietzsche held out the unreasoned hope of a Superman. Martin Heidegger, the philosopher most revered by constructivists today, played an active part in encouraging Nazism while at the same time inspiring Jean-Paul Sartre who would pass on to the post-1945 world an 'existential' denial of essence and truth together with a sympathy for communism. Today, though science and mathematics remain the practical bulwark of everyday truth claims in the West, few would care to provide a defence of why this is so; and many in faculties of arts and social science now flagrantly challenge even the best-established truths of psychology - about IQ and race -- in their pursuit of an ideological egalitarianism no less fanatical than the Christian gene-denying belief in the 'brotherhood' of man.

Any simple return to Platonism has seemed ruled out at once by Plato's sympathy for eugenics and by his seeing no need for private property. Plato even felt able to propose a considerable scheme of censorship, especially of the poetry and pictures which he thought could so easily mislead people into untruth - an argument which today has considerable force when visual advertising and television make vivid mendacious propaganda for multiculturalism. Indeed, it is Plato's determinist and authoritarian tendencies that repelled his natural supporter, Popper, from endorsing Platonism. Long unhappy with evolution theory and with the genetic and biological realm that could usefully make a Fourth World in his own metaphysics, Popper (1945) was unhappy with Plato's question of 'Who should rule?' Sadly, Popper saw Plato's concern with human nature as no more worthy than the "gibberish" of Hegel's "renaissance of tribalism" which began "the tragi-comedy of German idealism"; and, when he arrived at his three-world metaphysics late in life, Popper (e.g. 1994) had no inclination to examine its political implications or to revise his youthful condemnation of Plato.

----

In fact, it is far from obvious that Plato should be blamed for the collapse of his system. Plato's faith in the work of human reason extended far beyond discovering the truths of mathematics. The high-born and personally courageous Plato was able to derive from his principles a system of governance which he believed would improve both on aristocracy and on the democracy that had demanded the death of his hero, Socrates, and would also furnish a model of mind. Plato is the only philosopher to have made axiomatic to his thought the 'principle of specialization' - that each person is himself and not another thing, and that behaviour should be expected to reflect individual differences and would only thus achieve the highest co-operation and happiness. In Plato's utopian Republic, people would occupy positions according to their own individual natures (metaphorically: gold, silver, brass or iron) yet open and reasoned discussion among the 'guardian' leaders would be essential to government and inter-generational social mobility was expected rather than any static caste system. Plato saw the qualities of his selected philosopher-kings, following from their intelligence and knowledge, as likely to include courage, self-discipline, a broad vision, a good memory and quickness in learning. Plato's thought clearly allows the presentation of a rich morality and complete politics even if ardent democrats will be a little shocked.

By insisting on the importance of reason in public affairs, Plato was arguably just spelling out what actually tends to happen in all decent Western democracies where, by one route or another, intelligence, education and money all help secure more access to political power. And Plato gratifies any differential psychologist by his frank endorsement of inherited personality differences and the need for society to be adapted sensibly to them. Because of his realism, Plato need upset no feminist who wants to be one of 'Blair's Babes': he thought women fully capable of government and he made a priestess, Diotima, responsible for passing the truth about love to Socrates. Even on the question of sex, Plato was far from being a puritan - he openly proclaimed eros to be on a par with truth as the proper objective of humans; and his paedophilic drinking scene, the Symposium, which concludes with an argument between Socrates' adolescent boy lovers, is rightly the most famous of all Plato's works. Platonic authoritarianism is no greater than could be expected in a democratic Athens recently defeated by Sparta after engaging in policies that were a foretaste of Mao Tse Tung and Pol Pot. Even the Republic's scheme for breeding from anonymous 'guardians' is arguably becoming fashionable today as rich American career women seek via sperm banks to have babies with the qualities they wish and which will make for success. The reasons for the West's rejection of Platonism must be found elsewhere.

----

Doubtless Plato's elitism proved less than ideal to the running of the Alexandrian empire that was soon to emerge - for empires need to sweet-talk their different tribes, nations and races into a passable co-operation, so the topic of innate human differences is best avoided. Certainly Christianity sensed a tension with its own stress on the 'brotherhood of man' which required equal respect all round or even a positive veneration of the poor, meek and needy. By the time of Saint Augustine, official Christian philosophy became quite strictly egalitarian - abandoning through the Dark Ages any attempt to rely on human reason and instead adopting the criterion of blind faith.

Yet, to a psychologist, the most obvious problem with the Platonic scheme is just that Plato relied on reason rather than on general intelligence to provide the key method in the search for truth. As is appreciated today, there are many trivial reasoning tasks that are quite often failed by people of good general intelligence - though doubtless even more frequently failed by low-IQ people. Plato's biggest problem was to have committed himself to a road to truth that is not easily defensible as a way of selecting his 'guardians'; and he was optimistic enough to believe that many would be able to master reasoning and pursue truth directly if they had a proper education. IQ testing could have solved Plato's problem. Recognizing g would have provided at once a guiding method for selecting officials, a social goal to be pursued, and the likelihood of persisting individual differences in achievement.

Unsurprisingly, subsequent enthusiasts for equality, democracy and utilitarianism did little to rectify Plato's omission. And rationalist philosophers persisted with the original Platonic task of detecting by reason what had to be true about the world - and what they thought would appear readily to any who accepted philosophical discipline. Yet there is a peculiarity here that critics of IQ should have noted. How is it that, in the epistemological and metaphysical struggles of the West, no one till Spencer and Galton (1869) said plainly that the high road to truth (and also to a just and contented society) might run not via reasoning, let alone any 'pursuit of happiness', but via general intelligence?

The matter can be put more simply: How could a great philosopher like Thomas Hobbes have said "As to the faculties of the mind...I find a greater equality amongst men than [in] physical strength"? Having once formulated questions about human equality and about whether human knowledge is innate or learned, how could philosophers have avoided the observation that some people are more generally intelligent than others? Doubtless, thinking men would have occasionally made, like Doctor Johnson, the observation that "[True genius] is a mind of large general powers accidentally determined to some particular directions." But what prevented them exploring and testing the idea? By the eighteenth century, the faculty philosophy of Scotland's Thomas Reid tried to preserve liberalism from Hume's scepticism by replacing the fruitless search for innate ideas with a recognition of the faculty of 'common sense.' Following England's Francis Bacon and running alongside Franz Joseph Gall's phrenology, Reid's belief in constitutional determinants of thought was the antecedent of the views of Galton, William McDougall, Charles Spearman and Burt. Is such theorizing, though backed by twentieth-century research, doomed to be but a flash in the philosophers' pan?

----

Fortunately, it is possible to advance a hypothesis as to how so many intelligent people - including the chattering classes and media personages of modern times - remain in denial about g differences as they pursue their pious schemes for rectifying the human condition by social skills training, re-labelling, organic apple juice or affirmative action. Indeed, critics of g can be pretty thoroughly obliged. One can simply accept that there is not much of a g factor to be found among the people with whom the educated, not to mention the hyper-educated, chiefly have their everyday being. This 'differentiation hypothesis' dates back to an observation of Spearman's (1927), though it was chiefly advocated in the twentieth century by the distinguished American race-realist psychologist, Henry Garrett, who chaired Columbia University's psychology department and was a one-time president of the American Psychological Association (e.g. 1938, 1946, 1980). The hypothesis is that the g factor 'differentiates' at higher levels of mental ability, perhaps as people reach serious options to specialize rewardingly in particular skills and topics. Thus the g factor is markedly stronger (i.e. accounts for more ability variance) among samples of lower intelligence -- whether lower IQ or lower Mental Age. (For fuller presentations of theorizing and a history of empirical work, see Chapter 2 of The g Factor by Brand, 1996; or Appendix A of The g Factor by Jensen, 1998.)

Because differentiation of g is usually studied by comparing the cognitive performance of high- and low-IQ individuals, researchers invariably encounter nagging problems associated with restriction in range of ability. For example, in the most widely cited modern study of the hypothesis, Detterman and Daniel (1989) divided the standardization samples of the WISC-R and WAIS into five ability groups based on performance on either the Vocabulary or Information subtests. Within each ability group, average correlations between abilities were calculated, providing an index of the pervasiveness of Spearman's g. Differentiation was demonstrated insofar as average correlations among subtests decreased monotonically from about +.70 for the lowest IQ group (< 78) to about +.35 for the highest IQ group (> 122). However, as a means of equating the variances in the ability groups, Detterman and Daniel applied statistical corrections for restriction of range. As Jensen (1998) has explained, an underlying assumption of the correction for restriction of range is that the "true" correlation between the variables in question (i.e., subtests) is equal throughout the full range of the latent trait (here, Spearman's g). The basic idea of cognitive differentiation is that abilities are not uniformly interrelated across the entire spectrum of intelligence, so such statistical corrections run counter to the very hypothesis under study. Other studies (Lynn, 1992; Lynn & Cooper, 1993) that replicated Detterman and Daniel's methods similarly failed to consider restriction of range, and therefore provide only weak evidence in support of differentiation.
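Detterman and Daniel's index - the average inter-subtest correlation within an ability band - is simple to compute. The Python sketch below illustrates it on purely synthetic data (not the actual WISC-R/WAIS norms) in which a general factor is deliberately built to weaken above the mean; every parameter value here is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data only (not the WISC-R/WAIS norms): one general factor
# whose loading is built to be weaker above the mean, plus independent
# specific variance on each of six "subtests".
n, n_tests = 5000, 6
g = rng.normal(size=n)
loading = np.where(g > 0, 0.65, 0.9)          # differentiation built in
scores = loading[:, None] * g[:, None] + 0.5 * rng.normal(size=(n, n_tests))

def mean_intercorrelation(x):
    """Average off-diagonal subtest correlation -- Detterman and
    Daniel's index of the pervasiveness of g within an ability band."""
    r = np.corrcoef(x, rowvar=False)
    return r[np.triu_indices_from(r, k=1)].mean()

# Sort into ability halves on a composite (the study sorted on the
# Vocabulary or Information subtests), then compare the index across bands.
composite = scores.mean(axis=1)
low = scores[composite < np.median(composite)]
high = scores[composite >= np.median(composite)]
print(mean_intercorrelation(low), mean_intercorrelation(high))
```

On data built this way the low band shows the larger average correlation; a range-restriction correction that assumes one uniform "true" correlation would, as noted above, erase exactly the effect under study.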

In the largest study of differentiation to date, researchers from the Edinburgh Structural Psychometrics Group (ESPG) analyzed Irish standardization data for the Differential Aptitude Test (DAT; Deary et al., 1996). The DAT consists of eight subtests measuring verbal ability, abstract reasoning, numerical reasoning, clerical speed, mechanical reasoning, spatial ability, spelling, and language usage. Deary et al. divided the normative sample (N = 10,353) into four smaller groups, based on age and ability. Average IQs of the low- and high-ability groups were 90 and 110 respectively. Test scores were equated for variance. The authors' primary finding was that the g factor (first principal component) accounted for about 49% of the variance among the less able children, and 47% among children of above-average ability. This 2% difference in variance was thus vanishingly slight - possibly attributable to the relatively small mean IQ difference (20 points) between ability groups and to the higher-ability children themselves being at no very high level of mental development. The handful of other studies in this area (e.g., Fogerty & Stankov, 1995) are plagued by small sample sizes that drastically limit their generalizability. In order to evaluate the true nature of g in groups of varying ability, the researcher must compose high- and low-IQ groups of sufficient size that they have equal standard deviations on the selection test while also differing substantially in IQ.

The most recent study of cognitive differentiation comes from the University of Nevada (with help from the ESPG) (Kane & Brand, 2001). This study was specifically designed to avoid the methodological imperfections of earlier investigations that may have clouded results. Researchers used normative data (N = 6,359) from the Woodcock-Johnson Psychoeducational Battery Revised (WJ-R) which is an individually administered test of academic achievement and cognitive abilities. The standardization sample is representative of the population of the United States and covers a wide age range, from childhood to adulthood. The WJ-R is an operational representation of the Horn-Cattell theory of crystallized (gc) and fluid (gf) abilities (e.g. Horn & Cattell, 1966; Horn, 1985). Twenty-one diverse subtests measure an array of eight primary abilities within the gc/gf framework. The WJ-R has excellent psychometric properties, and a number of empirical studies corroborate its clinical and theoretical validity (e.g., Bickley, Keith & Wolfe, 1995). The WJ-R also provides an excellent representation of Carroll's (1993) Three Stratum Theory of cognitive abilities, with the eight cognitive clusters corresponding to Stratum II. Complemented by fourteen subtests of academic achievement, the WJ-R is the most comprehensive battery of cognitive processing tests available to modern researchers. The diverse nature of the WJ-R provides a "good" g (Jensen & Weng, 1994); and the theoretical framework enables insight into possible mechanisms of differentiation.

In contrast to the sophisticated procedures used in previous studies (e.g. Deary et al., 1996), Kane and Brand used a relatively simple and straightforward approach to identify high- and low-IQ groups. First, scores on the Numbers Reversed, Listening Comprehension, and Verbal Analogies subtests were averaged to create a composite variable, 'SortIQ'. These subtests were not involved in calculating Broad Cognitive Ability (BCA) or in any subsequent analysis, and therefore provided an independent estimate of overall intelligence. The simple correlation between SortIQ and BCA was .94. Dividing the entire data set at the mean of SortIQ yielded two ability groups. Next, desired characteristics for the ability groups were assigned. Sample size for both groups was set at 500. Standard deviations were set at 7.5, or about half the value typically observed in the general population. Means for the high- and low-IQ groups were set at 115 and 85, respectively. Once these characteristics were fixed, z-scores were calculated within each ability group, using BCA as the criterion variable. Finally, individuals were randomly sampled by z-score intervals, with the desired number of subjects sampled at each interval corresponding to the proportion observed in the normal distribution. These simple procedures yielded two ability groups, each normally distributed and equated for variance, with respective means of 115 and 85. Sixteen subtests were chosen for analysis, each identified by previous research (McGrew, 1997) as a strong indicator of its respective primary gc/gf factor.
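As an illustration, the z-interval sampling just described might be sketched as follows in Python. This is a hypothetical reconstruction under the stated targets (means 85 and 115, SD 7.5, n = 500); the simulated scores, the bin count and the variable name 'bca' (standing in for Broad Cognitive Ability) are all stand-ins, not the WJ-R norms.

```python
import math
import numpy as np

rng = np.random.default_rng(1)

def norm_cdf(x):
    """Standard normal CDF via the error function (no SciPy needed)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def sample_group(scores, target_mean, target_sd=7.5, n=500, n_bins=12):
    """Hypothetical reconstruction of z-interval sampling: draw about n
    cases whose criterion scores approximate a normal distribution with
    the requested mean and SD."""
    z = (scores - target_mean) / target_sd
    edges = np.linspace(-3.0, 3.0, n_bins + 1)
    probs = np.array([norm_cdf(edges[i + 1]) - norm_cdf(edges[i])
                      for i in range(n_bins)])
    probs /= probs.sum()                  # renormalize over the +/-3 SD window
    chosen = []
    for i in range(n_bins):
        pool = np.flatnonzero((z >= edges[i]) & (z < edges[i + 1]))
        k = int(round(n * probs[i]))      # bin quota from the normal curve
        chosen.append(rng.choice(pool, size=min(k, pool.size), replace=False))
    return scores[np.concatenate(chosen)]

# 'bca' stands in for the Broad Cognitive Ability criterion scores.
bca = rng.normal(100, 15, size=50_000)
low = sample_group(bca, target_mean=85)
high = sample_group(bca, target_mean=115)
print(len(low), round(low.mean(), 1), round(low.std(), 1))
print(len(high), round(high.mean(), 1), round(high.std(), 1))
```

Both groups come out with the target means and (crucially for the differentiation question) equal standard deviations, though bin-quota rounding can leave group size a case or two off n.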

A series of analyses compared the primaries measured by the WJ-R across each ability level. Evidence for cognitive differentiation was unequivocal. In the low- and high-IQ groups, the g factor (the first unrotated principal component) accounted respectively for 52% and 29% of the variance in cognitive performance. Simply stated, for individuals of lesser intellect, general intelligence is the dominating influence, accounting for nearly twice the amount of variance in overall cognitive performance on sixteen tests. Conversely, above-average individuals display markedly more specialization, or differentiation of their mental abilities.
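The quantity at issue - the variance share of the first unrotated principal component - can be computed directly from the subtest correlation matrix. The Python sketch below does so for two synthetic groups whose uniform g-loadings (an illustrative simplification, not the WJ-R data) are chosen so that the shares land near the reported 52% and 29%.

```python
import numpy as np

rng = np.random.default_rng(2)

def first_pc_share(scores):
    """Variance share of the first unrotated principal component,
    computed from the subtest correlation matrix."""
    r = np.corrcoef(scores, rowvar=False)
    eigvals = np.linalg.eigvalsh(r)       # ascending order
    return eigvals[-1] / eigvals.sum()

def make_group(n, g_loading, n_tests=16):
    """Synthetic group: every subtest gets the same (illustrative) g-loading."""
    g = rng.normal(size=n)
    specific = rng.normal(size=(n, n_tests))
    return g_loading * g[:, None] + np.sqrt(1 - g_loading ** 2) * specific

# Loadings chosen so the shares land near the reported 52% and 29%.
low = make_group(2000, 0.70)
high = make_group(2000, 0.49)
print(round(first_pc_share(low), 2), round(first_pc_share(high), 2))
```

With sixteen equally loaded subtests the expected share is (1 + 15r)/16, where r is the common inter-subtest correlation (the square of the loading), which is how the two loadings above were chosen.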

The g loadings of the various cognitive clusters are presented in Table 1.



Table 1

Primary abilities' g Loadings in High- and Low-IQ Groups

Primary Ability                        Low IQ    High IQ

Fluid Intelligence (gf)                  .89        .80
Visual Processing (gv)                   .88        .77
Processing Speed (gs)                    .95        .75
Long-Term Retrieval (glr)                .69        .65
Crystallized Intelligence (gc)           .84        .39
Auditory Processing (ga)                 .81        .65
Short-Term Memory (gsm)                  .72        .39
Quantitative Reasoning (gq)              .86        .72


The patterns of loadings are consistent with the observations made by Spearman (1927) and with his 'law of diminishing returns.' (Spearman himself interpreted g-differentiation to mean that successive g-increments have diminishing effects across the full range of abilities.) Factor loadings also suggest possible mechanisms of differentiation. Quite the largest decline in g-factor loadings occurred on measures of crystallized intelligence (gc): the loading of gc on g declined from .84 for the low-IQ group to .39 for the high-IQ group. In the Horn-Cattell model, gc is indicative of an individual's store of information. Thus, a possible conclusion is that low-IQ individuals, more than their above-average counterparts, depend on g for the acquisition of knowledge and information. To the annoyance of its detractors, this finding offers compelling evidence that g plays a causal role in the formation of an individual's fund of knowledge and information. This result is also in keeping with the ideas of Garrett (1946), who offered the first coherent theory of differentiation: through the course of mental development, g becomes increasingly invested in specialized activities. Presumably, the gifted have more intellectual capital to invest than the less able; so, in them, cognitive differentiation is more pronounced.

The next largest difference in g factor loadings occurred on measures of gsm, which is usually interpreted as indicating working memory (WM) capacity. Over the past decade or so, researchers have assigned an increasingly important role to working memory as an explanatory agent in understanding individual differences in human cognitive performance. Mackintosh (above) finds this idea attractive; and Kyllonen and Christal (1990) went so far as to equate WM capacity with Spearman's g, citing a simple correlation between them of .91. The present finding supports the idea that g determines WM for low-ability groups but shows that g cannot be understood as WM in people of higher ability. Not surprisingly, the Nevada study also provides evidence that high-IQ individuals, more than low-IQ individuals, rely on "noncognitive" constructs (e.g., introversion, motivation) for successful performance. For the gifted, aspects of personality complement intellect to assure exceptional accomplishment.

----

Clearly, the diminishing influence of g on quite a few mental abilities at the higher levels of general intelligence suggests that distinct, non-g abilities play important parts in the accomplishments (and personal eccentricities) of the gifted. Conversely, g serves as quite the most prominent source of mental limitations in the less able. Indeed, g is such a source of intellectual limitation among low-IQ individuals that educationists have felt obliged to organize expenditures on the retarded that are a hundredfold greater than those on high-g children (Herrnstein & Murray, 1994). The g factor is particularly strong among Black testees: studying thousands of South African secondary school pupils, Lynn and Owen (1994) found g correlated .62 with subtest variation in Blacks but only .23 among Whites (who were two standard deviations higher in IQ); and Rushton and Skuy (2000, their Table 3) similarly found that 83% of Standard Progressive Matrices items were better correlated with total Raven's IQ scores in Blacks than in Whites.

The Nevada study is the most methodologically adequate attempt so far to assess the differentiation hypothesis - involving more subjects, more subtests, better sampling and a bigger (30-point) IQ range. Its striking results confirm the need to consider the range of ability when venturing theories of intelligence and attainment. After all, what lasting impressions do average or poor musicians, writers, or mathematicians make? Mediocre accomplishment is seldom documented, simply because its preservation would be of no lasting benefit to society. The thing to remember is that individuals who are noticed and remembered for their accomplishments come from an extremely restricted range of abilities to which their precise g levels may apparently be of little immediate importance. When this is ignored, it is easy to see why the importance of Spearman's g in the rest of the population can often be under-rated. Thomas Edison once remarked that genius was "1% inspiration and 99% perspiration." For Edison and others of his intellectual gentry, that ratio may summarize important truths. For duller people, however, inspiration, and of course g, will be both scarce and also a more important determinant of intellectual outcomes.



Conclusion

Despite the heroic efforts of Arthur Jensen, realism about the g factor has been in short supply in recent years. Critics of IQ ignore the strongly positive correlations that obtain between all mental abilities - especially across the lower reaches of intelligence; and they set impossibly high standards of 'measurement' that are never met elsewhere in social science. Claiming to fear that acceptance of g differences must lead to the type of regimented (though sexually rewarding) society that Plato once envisaged, critics deplore London School ideas as 'fascist'.

In fact, the case for g has strengthened markedly in recent years as the ambitions of massively-funded multifactorialists have come to grief. Now it turns out that the failure of many intellectuals of the past to recognize the importance of g can be explained by their lack of contact with low-IQ people: fifty per cent of Western philosophers could not even bring themselves to marry, let alone have the extensive contacts with normal youngsters that characterised the militaristic and paedophilic society of classical Athens. Where Galton and Burt differed from other psychologists of their day was in their wide experience of life - Galton as an adolescent surgeon working with his medical family around Birmingham, and Burt undertaking live-in social work in the slums of Liverpool. In Paris, Alfred Binet too, thanks to government funding, saw the problems of low IQ at first hand.

Moreover, there is in fact no necessity for the facts of life about g to lead to authoritarian social arrangements. Plato himself envisaged that his utopia run by philosopher-kings would involve much discussion, choice, social mobility and indeed sexual opportunity; it was Aristotle, not Plato, who set about justifying slavery and female subordination -- whereas Plato counselled individuation of treatment rather than the use of group labels; Plato recommended outright censorship only in the primary education of trainee guardians -- a principle endorsed world-wide today, for all societies make many restrictions on what can be shown to pre-adolescent children; and any true liberalism is essentially assisted by Plato's recognition that people differ importantly from each other and thus should not be forced into identical schooling, employment, medical insurance arrangements or marital contracts.

Liberalism has been advocated in the past by Protestants, nationalists, hedonists and empiricists wanting to throw off the chains for which they blamed Aristotle and the Catholic Church. But such negative liberalism has a bizarre feature: for what is the point of liberalism unless there are radically different individuals to be liberated? Liberalism is altogether more likely to flourish if the truth is acknowledged that each person is a debating society, as Plato and Freud both thought, and that society should mirror and articulate that arrangement in ways likely to lead to such moral progress as is gradually possible. The bloody experiments of 1642 in Britain, of 1789 in France, of 1917 in Russia and of 1933 in Germany give no reason at all to think that utopias arise from ideologies of brotherly equality. Instead of seeking an equality that invariably and quickly turns out to deny freedom, it is time to put freedom first.

That the most important truths of human psychological nature steer us logically towards intelligent and informed choice would not have surprised Plato - who after all wanted such choice to apply even to the question of breeding the next generation. Presently, the breakdown of marriage in the West is threatening a much reduced White population which will come increasingly from the least responsible parents. Rather than blunder into such an Afro-Caribbean future, it is time to admit the realities of human g differences - which have classically liberal consequences when properly considered. To his eternal credit, Arthur Jensen - though perhaps no Platonist himself -- has helped mightily to keep that option open.



References


ANDERSON, M. (2000) An Unassailable Defense of g but a Siren-song for Theories of Intelligence. Psycoloquy: 11(013) Intelligence g Factor (28)

BARRETT, P.T. (2001). 'Quantitative science and intelligence.' International Journal of Psychophysiology.

BATESON, P. & MARTIN, P. (2000). 'Recipes for humans.' Guardian [London], 6 ix.

BERKA, K. (1983). Measurement: Its Concepts, Theories and Problems. Boston Studies in the Philosophy of Science (eds. Robert S. Cohen & Marx W. Wartofsky). [Translated from the Czech Měření: pojmy, teorie, problémy.]

BICKLEY, P. G., KEITH, T.Z., & WOLFE, L. (1995). 'The three-stratum theory of intelligence: Test of the structure of intelligence across the life span.' Intelligence 20, 309-328.

BOUCHARD, T.J., Jr., LYKKEN, D.T., McGUE, M., SEGAL, N.L. & TELLEGEN, A. (1990). `Sources of human psychological differences: the Minnesota Study of twins reared apart.' Science 250, 223-228.

BRAND, C. R. (1996). The g Factor: General Intelligence and Its Implications. Chichester, U.K. : Wiley. (The 2000 edition is available free at http://www.douance.org/qi/brandtgf.htm and http://www.solargeneral.com/library/GFactor.pdf.)

BRAND, C. R. (1997). 'Hans Eysenck's personality dimensions: their number and nature.' In H. Nyborg, The Scientific Study of Human Nature: Tribute to Hans Eysenck at Eighty, pp. 17-35. Oxford : Pergamon.

BRAND, C. R. (1998). 'Fast track learning comes of age' - a review of Camilla P. Benbow & David Lubinski (eds.) Intellectual Talent, Baltimore, John Hopkins University Press. Personality & Individual Differences 24, 6, 899-900.

BRAND, C. R. (1999). Genetic science versus authoritarianism on the left: the disruption of yet another academic meeting by radical protestors. Mankind Quarterly 40, 2, Winter.

BRODY, N. 1992, Intelligence, 2nd edition. San Diego, CA : Academic Press

BRUER, John T. 1999, The Myth of the First Three Years. New York : Free Press

BURT, C. (1954). The differentiation of intellectual ability. British Journal of Educational Psychology 24, 76-90.

CARROLL, J.B. (1993). Human Cognitive Abilities: A Survey of Factor-Analytic Studies. Cambridge, UK : Cambridge University Press.

CARROLL, J.B. (1997). 'The three-stratum theory of cognitive abilities.' In D.P. Flanagan, J.L. Genshaft, & P.L. Harrison (eds.), Contemporary Intellectual Assessment: Theories, Tests, and Issues, pp. 122-130. New York, NY: The Guilford Press.

CATTELL, R.B. (1936). A Guide to Mental Testing. London : University of London Press.

CATTELL, R.B. (1941). 'Some theoretical issues in adult intelligence testing.' Psychological Bulletin 38, 592.

CONWAY, A.R., KANE, M.J. and ENGLE, R.W. (1999). 'Is Spearman's g determined by speed or working memory capacity?' Psycoloquy 10 (74) ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/1999.volume.10/ Psyc.99.10.074.intelligence-g-factor.16.conway; http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?10.074

DEARY, I.J., GIBSON, G.J., EGAN, V., AUSTIN, Elizabeth, BRAND, C.R. & KELLAGHAN, T. (1996). 'Intelligence and the differentiation hypothesis.' Intelligence 23, 105-132.

DELANTY, G. (1998). Social Science: Beyond Constructivism and Realism, Oxford University Press.

DETTERMAN, D. K.& DANIEL, M. H.(1989). `Correlations of mental tests with each other and with cognitive variables are highest for low IQ groups.' Intelligence 13, 349-359.

EYSENCK, H. J. (1939). Review of L. L. Thurstone, Primary Mental Abilities. British Journal of Educational Psychology 9, 270-275.

FLYNN, J. R. (1984). 'The mean IQ of Americans: massive gains 1932 to 1978.' Psychological Bulletin 95, 29-51.

FOGERTY, G. J., & STANKOV, L. (1995). 'Challenging the law of diminishing returns.' Intelligence 21, 157-174.

GALTON, F. (1869). Hereditary Genius: An Inquiry into its Laws and Consequences. London : Collins.

GARDNER, H. (1999). Reframing Intelligence. New York : Basic Books.

GARRETT, H.E. (1938). `Differentiable mental traits.' Psychological Record 2, 259-298.

GARRETT, H.E. (1946). 'A developmental theory of intelligence.' American Psychologist 1, 372-377.

GARRETT, H. E. (1980). IQ and Racial Differences. Torrance, CA : Noontide Press and Brighton Historical Review Press.

GOLEMAN, D. (1995). Emotional Intelligence. New York : Bantam.

GOTTFREDSON, Linda (1997). 'Why g matters: the complexity of everyday life' Intelligence 24, 79-132.

HERRNSTEIN, R. & MURRAY, C. (1994). The Bell Curve. New York : Free Press.

HORN, J. L. (1985). 'Remodeling old models of intelligence.' In B. B. Wolman, Handbook of Intelligence: Theories, Measurements and Applications, 267-300. New York : Wiley.

HORN, J. L. & CATTELL, R. B. (1966). 'Refinement and test of the theory of fluid and crystallized general intelligences.' Journal of Educational Psychology 57, 253-270.

HOWE, M. J. A. (1997). IQ in Question. London : Sage.

HUNT, E. (1997). 'Nature vs. nurture: the feeling of vujà dé.' In R. J. Sternberg & Elena Grigorenko, Intelligence, Heredity and Environment. Cambridge, UK : Cambridge University Press.

HUNT, E. (1999). 'The modifiability of intelligence.' Psycoloquy 10(072) Intelligence g Factor (14). http://www.cogsci.soton.ac.uk/psyc-bin/newpsy?article=10.072&submit=View+Article.

JENSEN, A.R.(1969). `How much can we boost IQ and scholastic attainment?' In Environment, Heredity and Intelligence. Cambridge, Mass. : Harvard Educational Review.

JENSEN, A.R. (1998). The g Factor: the Science of Mental Ability. Westport, CT: Praeger.

JENSEN, A. R. & WENG, L. J. (1994). 'What is a good g?' Intelligence 18, 231-258.

KANE, H. & BRAND, C. R. (2001). 'The Structure of Intelligence in groups of varying cognitive ability: a test of Carroll's three-stratum theory.' [Provisionally accepted for Intelligence.]

KLINE, P. (2000). A Psychometrics Primer. London : Free Association.

KYLLONEN, P. C. & CHRISTAL, R. E. (1990). 'Reasoning ability is little more than working memory capacity?!' Intelligence 14, 389-433.

LILLA, M. (1998). The politics of Jacques Derrida. New York Review of Books, 25 vi.

LYNN, R. (1992). 'Does Spearman's g decline at high IQ levels? Some evidence from Scotland.' Journal of Genetic Psychology 153, 229-230.

LYNN, R. & COOPER, C. (1993). 'A secular decline in Spearman's g in France.' Learning and Individual Differences 5, 43-48.

LYNN, R. & OWEN, K. (1994). 'Spearman's hypothesis and test score differences between Whites, Indians and Blacks in South Africa.' Journal of General Psychology 121, 27-36.

MACKINTOSH, N. J. (1997). IQ and Human Intelligence. Oxford and New York : Oxford University Press.

MALIK, K. (1996). The Meaning of Race: Race, History and Culture in Western Society. Basingstoke : Macmillan.

McGREW, K.S. (1997). 'Analysis of the major intelligence batteries according to a proposed comprehensive gf-gc framework.' In D.P. Flanagan, J.L. Genshaft, & P.L. Harrison, Contemporary Intellectual Assessment: Theories, Tests, and Issues, 151-174. New York, NY: The Guilford Press.

MURRAY, C. (1999). http://www.lrainc.com/swtaboo/taboos/cmurraybga0799.pdf.

NASH, R. (1990). Intelligence and Realism: A Materialist Critique of IQ. Basingstoke : Macmillan.

NORMAN, O. (1995). http://www.vancouver.wsu.edu/fac/norman/kuhn.html.

POPPER, K. (1945). The Open Society and Its Enemies. London : Routledge & Kegan Paul.

POPPER, K. (1994). Knowledge and the Body-Mind Problem: In Defence of Interaction. London : Routledge. [Based on lectures given at Emory University, USA, in 1969.]

QUINTON, A. (1998). From Wodehouse to Wittgenstein. Manchester : Carcanet.

RAVITCH, Diane (2000). Left Back: A Century of Failed School Reforms. New York : Simon & Schuster.

RICHARDSON, Ken, 1999, 'Demystifying g.' Psycoloquy: 10(048) Intelligence g Factor (5).

ROSE, S. (1997). http://www.carf.demon.co.uk/feat01.html.

RUSHTON, J. P. (1995). Race, Evolution and Behaviour. New Jersey : Transaction.

RUSHTON, J. P. (1996). 'Race, intelligence and the brain: the errors and omissions of the "revised" edition of S. J. Gould's The Mismeasure of Man.' Personality & Individual Differences 21.

RUSHTON, J. P. (1999) 'Secular gains in IQ not related to the g factor and inbreeding depression unlike Black-White differences: A reply to Flynn.' Personality & Individual Differences 26, 381-389.

RUSHTON, J. Philippe & SKUY, Mervin (2000). 'Performance on Raven's Matrices by African and White university students in South Africa.' Intelligence 28, 4, 251-265.

RYLE, G. (1949). The Concept of Mind. London : Hutchinson.

SCHWARTZMAN, A.E., GOLD, D., ANDRES, D., ARBUCKLE, T.Y. & CHAIKELSON, J. (1987). 'Stability of intelligence: a forty-year follow-up.' Canadian Journal of Psychology 41, 244-256.

SOKAL, A. (1996). 'Transgressing the boundaries: towards a transformative hermeneutics of quantum gravity', at http://www.nyu.edu/gsas/dept/physics/faculty/sokal/index.html. {For further demonstration of the ease with which plausible constructivist nonsense can be generated, see http://www.elsewhere.org/cgi-bin/postmodern.}

SOKAL, A. (1998). Interviewed in Scientific American, iii.

STURROCK, J. (1998). Review of A. Sokal & J. Bricmont, Intellectual Impostures. London Review of Books, 16 vii.

SPEARMAN, C. (1927). Abilities of Man. London : Macmillan.

SPENCER, H. (1855). Principles of Psychology, Volumes I & II. London : Williams & Norgate. (4th edition in 1889).

TALBOT, Margaret (2001). 'A desire to duplicate.' New York Times Magazine, 4 ii.

TANG, Ya-ping, SHIMIZU, Eiji, DUBE, Gilles R., RAMPON, Claire, KERCHNER, Geoffrey A., ZHUO, Min, LIU, Guosong & TSIEN, Joe Z. (1999). 'Genetic enhancement of learning and memory in mice.' Nature 401, 63-68.

TRAUB, J. (1998). 'Multiple intelligence disorder.' New Republic 26 x, pp. 20-23.

WECHSLER, D. (1939). The Measurement and Appraisal of Adult Intelligence. Baltimore : Williams & Wilkins.

WOODCOCK, R. W. (1990). 'Theoretical foundations of the WJ-R measures of cognitive ability.' Journal of Psychoeducational Assessment 8, 231-258.

WOODCOCK, R. W., & JOHNSON, M. B. (1989). Woodcock-Johnson Psycho-Educational Battery-Revised. Chicago: Riverside.



(This chapter was requested in 1999, submitted and accepted in 2000, and finally published in June, 2003. The present version includes a few minor corrections made in May, 2004.
-- C. Brand)



