IQ

Submitted by Elliot Temple on Thu, 2017-10-12 18:27

This is a reply to Ed Powell writing about IQ.

I believe IQ tests measure a mix of intelligence, culture and background knowledge.

That's useful! Suppose I'm screening employees to hire. Is a smart employee the only thing I care about? No. I also want him to fit in culturally and be knowledgeable. Same thing with immigrants.

The culture and background knowledge measured by IQ tests isn't superficial. It's largely learned in early childhood and is hard to change. It is possible to change. I would expect assimilating to raise IQ scores on many IQ tests, just as learning arithmetic raises scores on many IQ tests for people who didn't know it before.

Many IQ test questions are flawed. They have ambiguities. But this doesn't make IQ tests useless. It just makes them less accurate, especially for people who are smarter than the test creators. Besides, task assignments from your teacher or boss contain ambiguities too, and you're routinely expected to know what they mean anyway. So it matters whether you can understand communications in a culturally normal way.

Here's a typical example of a flawed IQ test question. We could discuss the flaws if people are interested in talking about it. And I'm curious what people think the answer is supposed to be.

IQ tests don't give perfect foresight about an individual's future. So what? You don't need perfectly accurate screening for hiring, college admissions or immigration. Generally you want pretty good screening which is cheap. If someone comes up with a better approach, more power to them.

Would it be "unfair" to some individual that they aren't hired for a job they'd be great at because IQ tests aren't perfect? Sure, sorta. That sucks. The world is full of things going wrong. Pick yourself up and keep trying – you can still have a great life. You have no right to be treated "fairly". The business does have a right to decide who to hire or not. There's no way to make hiring perfect. If you know how to do hiring better, sell them the method. But don't get mad at hiring managers for lacking omniscience. (BTW hiring is already unfair and stupid in lots of ways. They should use more work sample tests and less social metaphysics. But the problems are largely due to ignorance and error, not conscious malice.)

==============================================

Ed Powell writes:

Since between 60% and 80% of IQ is heritable, it means that their kids won't be able to read either. Jordan Peterson in one of his videos claims that studies show there are no jobs at all in the US/Canadian economies for anyone with an IQ below about 83. That means 85% of the Somalian immigrants (and their children!) are essentially unemployable. No immigration policy of the US should ignore this fact.

I've watched most of Jordan Peterson's videos. And I know, e.g., that the first video YouTube sandboxed in their new censorship campaign was about race and IQ.

I agree that it's unrealistic for a bunch of low IQ Somalians to come here and be productive in U.S. jobs. I think we agree on lots of conclusions.

But I don't think IQ is heritable in the normal sense of the word "heritable", meaning that it's controlled by genes passed on by parents. (There's also a technical definition of "heritable", which basically means correlation.) For arguments, see: Yet More on the Heritability and Malleability of IQ.

I don't think intelligence is genetic. The studies claiming it's (partly) genetic basically leave open the possibility that it's a gene-environment interaction of some kind, which leaves open the possibility that intelligence is basically due to memes. Suppose parents in our culture give worse treatment to babies with black skin, and this causes lower intelligence. That's a gene-environment interaction. In this scenario, would you say that the gene for black skin is a gene for low intelligence? Even partly? I wouldn't. I'd say genes aren't controlling intelligence in this scenario, culture is (and, yes, our culture has some opinions about some genetic traits like skin color).

When people claim intelligence (or other things) are due to ideas, they usually mean it's easy to change. Just use some willpower and change your mind! But memetic traits can actually be harder to change than genetic traits. Memes evolve faster than genes, and some old memes are very highly adapted to prevent themselves from being changed. Meanwhile, it's pretty easy to intervene to change your genetic hair color with dye.

I think intelligence is a primarily memetic issue, and the memes are normally entrenched in early childhood, and people largely don't know how to change them later. So while the mechanism is different, the conclusions are still similar to if it were genetic. One difference is that I'm hopeful that dramatically improved parenting practices will make a large difference in the world, including by raising people's intelligence.

Also, if memes are crucial, then current IQ score correlations may fall apart if there's a big cultural shift of the right kind. IQ test research only holds within some range of cultures, not in all imaginable cultures. But so what? It's not as if we're going to wake up in a dramatically different culture tomorrow...

==============================================

I don't believe that IQ tests measure general intelligence – which I don't think exists as a single, well-defined thing. I have epistemological reasons for this which are complicated and differ from Objectivism on some points. I do think that some people are smarter than others. I do think there are mental skills, which fall under the imprecise term "intelligence", and have significant amounts of generality.

Because of arguments about universality (which we can discuss if there's interest), I think all healthy people are theoretically capable of learning anything that can be learned. But that doesn't mean they will! What stops them isn't their genes, it's their ideas. They have anti-rational memes from early childhood which are very strongly entrenched. (I also think people have free will, but often choose to evade, rationalize, breach their integrity, etc.)

Some people have better ideas and memes than others. So I share a conclusion with you: some people are dumber than others in important very-hard-to-change ways (even if it's not genetic), and IQ test scores do represent some of this (imperfectly, but meaningfully).

For info about memes and universality, see The Beginning of Infinity.

And, btw, of course there are cultural and memetic differences correlated with e.g. race, religion and nationality. For example, on average, if you teach your kids not to "act white" then they're going to turn out dumber.

So, while I disagree about many of the details regarding IQ, I'm fine with a statement like "criminality is mainly concentrated in the 80-90 IQ range". And I think IQ tests could improve immigration screening.


"Since Elliot wishes to make

Elliot Temple writes:

"Since Elliot wishes to make me look like an idiot"

This stuff about appearances is social metaphysics. It's treating discussion as a contest for looking good in front of audiences. That's not how I approach things.

"I guess people have bigger or smaller heads just by accident? Some are more artistic than analytic just by accident? Oh I'm sorry, I mean by memes, not accident. Of course.

This is just another naive tabula rasa conception in a new dress."

This is also social metaphysics. It's trying to impress the audience with how clever and biting the insults are, rather than trying to seek the truth. Presenting intentional things (saying "by accident" instead of "by memes" initially) as a slip is a dishonest, social tactic. It's playing a game relating to what kind of attacks are currently trendy, fashionable and approved of by other people; it's not a serious attempt at objective communication. Similarly the "new dress" comment follows current social trends for insults, rather than being productive.

The point here is to mock me in a socially calibrated way in response to a perceived reputational (rather than intellectual) threat. It's an attempt to fight over who ends up looking bad. That shouldn't be the point.

I told you about how you

Elliot Temple writes:

I told you about how you could read until you find one mistake, and you're ignoring me and saying you can't continue because you don't have time to read the whole book. That doesn't make sense.

I wrote, "For Popper and

Elliot Temple writes:

I wrote, "For Popper and Deutsch, I'd advise against starting with anything other than Deutsch's two books." The context was even the Harris podcast. I also linked my blog post criticizing one of Deutsch's interviews.

Ed follows up: "In an attempt to try to understand what Elliott was saying, I re-listened to the two appearances by Deutsch on Sam Harris’s podcast."

And then Ed complains he didn't understand the material from the sources I advised against using, and complains that the podcast didn't have the level of detail necessary (right, hence the books). From this, I believe I'm supposed to accept that Ed made a serious effort – but I don't accept that.

> 1. The human brain is a general purpose computer with the same capabilities as any other general purpose computer, except for speed and memory. Rather a desktop computer is functionally indistinguishable from the brain, except for speed and storage.

The universality of computation stuff is uncontroversial (and unoriginal) btw. Also the brain being a computer is basically intellectually uncontroversial, although there's plenty of magical thinking opposing it (it's also opposed by ignorance – people who don't know much about the theory of computation). However, plenty of stuff I'm saying does not follow from just this. It's not a claim that you should find threatening. (You can use your brain to calculate NAND. What more is there to say?)
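For readers unfamiliar with this point, NAND's universality is easy to demonstrate concretely. Here's a minimal Python sketch (my illustration, not part of the original exchange) building NOT, AND and OR out of NAND alone – the standard sense in which a single gate type suffices for all Boolean logic:

```python
def nand(a, b):
    """NAND gate: the single building block."""
    return not (a and b)

# Every other Boolean operation can be composed from NAND alone.
def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))

# Check the compositions against Python's built-in operators
# over every input combination.
for a in (False, True):
    for b in (False, True):
        assert not_(a) == (not a)
        assert and_(a, b) == (a and b)
        assert or_(a, b) == (a or b)
print("NAND compositions match")
```

Since any Boolean function can be expressed with NOT, AND and OR, this is enough to get all of them from NAND.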

By contrast, the universal explainers idea is novel and not well known, and clashes much more with your views.

Regarding free will, DD believes free will exists and included pro-free-will arguments in his first book. The rest of your summary is in the right ballpark, but your comments relied substantially on reversing DD's free will position.

Another possibility, besides the ones you mention, is that you're mistaken.

Anyway, you haven't engaged with my explanations about universality and you also haven't engaged with my explanations about methodology. Why, exactly, are you disregarding the points about methodology?

You also didn't explain why you chose not to follow up until prompted. Do you want out of the conversation? Why?

Deutsch

edpowell writes:

In an attempt to try to understand what Elliott was saying, I re-listened to the two appearances by Deutsch on Sam Harris’s podcast. Now for all of my disagreements with Harris, he’s no dummy. He’s done a lot of first-handed work over the years in both neuroscience and philosophy. So what struck me about the conversation was how condescending and patronizing Deutsch was toward Harris, and how Harris would keep saying things like, “In your book, you say X, but it’s much deeper than X, it’s hard to explain simply, it’s like X but not really X,” and Deutsch would respond with additional condescension. I now realize why I was so frustrated on my first hearing of these conversations, as Harris was clearly taken with Deutsch’s arguments in his books, but could not clearly repeat these arguments, nor could Deutsch, at least not clearly enough for me to understand.

And indeed, because of the fundamental incoherence of the conversation, a number of Deutsch’s “conclusions” are presented as naked assertions, such as:

1. The human brain is a general purpose computer with the same capabilities as any other general purpose computer, except for speed and memory. Rather a desktop computer is functionally indistinguishable from the brain, except for speed and storage.
2. What we call “intelligence” is just knowledge. If we could magically print knowledge/ideas on a person’s brain, we could raise their IQ (or lower it) to any value. He does not believe IQ measures intelligence anyway.
3. Science is not based on the evidence of the senses, but on “reason”, meaning (in my interpretation) that science is disconnected in some form from reality, a la Plato.
4. Morality is based on allowing error-correction to occur with the end being more, better knowledge. That is, the standard of moral value is more knowledge. While he claims that this leads to human freedom in general and freedom of thought/speech/publication in particular, he does not present us with any idea why, if the standard of value is “knowledge,” Dr. Mengele is immoral. He does not speak of “rights” in the Objectivist or even Lockean sense, but only as contingent on this concept of knowledge acquisition.
5. The concept of “evil” is simply a “lack of knowledge”, since the brain only contains information, just as a computer only contains information. If the information is bad, then it does not qualify as knowledge, and can lead the person to do evil. Evil can be cured by education, or by magically replacing the bad knowledge with better knowledge. He speaks a lot about some future when we could “3D-print” atom-by-atom in a person’s brain to change the contents of the brain, as a type of thought experiment.
6. He and Harris both agree that free will does not exist, and that humans are essentially automata. This idea is never reconciled with the idea of “morality”, “good”, or “evil”. It’s as if morality were unrelated to individual human life in his view, and is only related to the propagation of reified knowledge.

So there are two possible conclusions to these six assertions. The obvious conclusion is that he is nothing but a crank, since all of the evidence of psychology and psychometrics verifies that intelligence is NOT just knowledge, and that no intervention at all (above the norm of a well-fed, non-abused childhood) can raise the IQ of anyone. “Well,” Deutsch would argue, “if we could 3D-print brains atom by atom we could.” Perhaps so, but maybe we could do so if we spent time with elves and rode unicorns too. That’s not an argument. Also, there is no evidence that the human brain is like a general purpose computer. He simply presents that as an assumption rather than a finding that requires a lot of experimental evidence behind it. The theory of general purpose computing by Turing et al. is not the be-all and end-all of philosophy. It is primarily a mathematical finding, not a physical one, and DEFINITELY not a biological one. His view of morality as the gaining of reified knowledge may make sense in his the-only-thing-that’s-Real-are-memes philosophy, but it makes no sense from an Objectivist perspective where morality is supposed to be principles for HUMANS living successfully on EARTH. And of course, their denial of free will is evidence of an acute rationalism, as if free will can be denied by mathematical derivation. As I said above, neither Deutsch nor Harris addresses the obvious complaint that "no free will" automatically makes nonsense of ethics, indeed it makes nonsense of happiness, science, math, human life, everything. What also strikes me is that two smart articulate fellows can’t describe the essence of Deutsch’s ideas in an intelligible fashion given two hours to an educated novice, e.g., Harris himself, or me, and I have a PhD in Astrophysics.

The OTHER possible conclusion to these assertions by Deutsch is that he is using words—many words—in different ways than the usual colloquial meaning. Intelligence doesn’t exist, IQ tests are fundamentally meaningless, but they do measure something and that something is in fact correlated with academic and career success. So Deutsch’s “knowledge” is not our “knowledge”. His “intelligence” is not our “intelligence.” His “morality” is not our “morality.” And his “free will” is not our “free will.” Harris is predisposed to any ideas that confirm his own bias that humans are automatons lacking free will, so perhaps that’s what he saw in Deutsch’s writings, but perhaps Deutsch has something more profound to say about all of this but is incapable of saying it intelligibly in a two-hour interview.

In any case, as I said in my post below, there is no real basis for discussion between us on these issues without taking the time (and it would take me a month to read Deutsch’s book, a month that I neither have nor want) to understand Deutsch’s arguments to get to a point where I could start to refute them. Given the rejection of free will up front, I’m not really energized to perform such a task. So unless I get energized for some other reason, I’m done on the Deutsch thread.

Mea culpa

Bruno Turner writes:

Sorry, Justin. That was me. I edited instead of responding. Then I went back and fixed it.

my post was just edited

JustinCEO writes:

my previous post was just edited by someone else multiple times. what's going on?

Justin,

Bruno Turner writes:

I'm an Objectivist. Do I need to spell it out, what my view of Platonism is?

As for my moral judgment, I

JustinCEO writes:

As for my moral judgment, I already gave it. This is Platonism.

That's not a moral judgment by itself. You skipped a step.

You have nothing on me except a bunch of words

Bruno Turner writes:

Just to be clear for readers, in case you missed it. Since Elliot wishes to make me look like an idiot (because I have "less knowledge", which incidentally has nothing to do with intelligence), read this again.

Quote from Elliot: "Universal knowledge creation (intelligence) is a crucial capability our genes give us. From there, it’s up to us to decide what to do with it. The difference between a moron and a genius is how they use their capability.
Differences in degrees of human intelligence, among healthy people (with e.g. adequate food) are due to approximately 100% ideas, not genes."

First of all, where is his justification for arbitrarily changing the definition of intelligence to fit his theory?
Second, where is the evidence, if not the proof, that ideas determine "intelligence" (redefined)?
Where is the evidence, if not the proof, that all our genes "give us" is this capability and nothing more? I guess people have bigger or smaller heads just by accident? Some are more artistic than analytic just by accident? Oh I'm sorry, I mean by memes, not accident. Of course.

This is just another naive tabula rasa conception in a new dress.

As for my moral judgment, I already gave it. This is Platonism.

Although you weren't very

Elliot Temple writes:

Although you weren't very specific, you seem to be calling uncontroversial knowledge, from a field you're ignorant of, "Assertions, assertions. No proof." and non-advanced. (Not everything I said is uncontroversial, but some major parts are. And btw of course the theory of computation is connected to reality – we're having this discussion using computers.)

You don't know about the science; I do. You don't know about the statistics; I do. You don't know enough about memes; I do. You don't know enough about computation; I do. You don't know about universality; I do. You don't have a precise enough mental model of how minds work; I do. You don't want to read books; I do. You don't understand my position on the matter; I do understand yours. You have an attitude that won't change these things; I have an attitude that led to learning this stuff in the first place. You don't want to have an intellectual discussion; I do, but I have the same problem Ayn Rand had: it's so terribly hard to find anyone else who wants to think.

Why have I stated this? VoS: "One must never fail to pronounce moral judgment."

Platonism

Bruno Turner writes:

Assertions, assertions. No proof. There is nothing advanced. Unless you mean logically advanced, just like Platonism is. Too bad there is no connection to reality. I'm done here.

If you read my posts, you

Elliot Temple writes:

If you read my posts, you would have found that I explained universality:

> First, one has to know about universality, which is best approached via the theory of computation. Universal classical computers are well understood. The repertoire of a classical computer is the set of all computations it can compute. A universal classical computer can do any computation which any other classical computer can do. For evaluating a computer’s repertoire, it’s allowed unlimited time and data storage.

And it goes on from there. Did you simply skip reading most of what I said, and then accuse me of not arguing my points?

However, I don't expect my explanations above to be enough for you to understand the issue because you lack lots of relevant background knowledge. If you're unwilling to read books and try to educate yourself more, you can't reasonably expect to deal with advanced intellectual issues like these. That's on you.

Where is the argument?

Bruno Turner writes:

"Because of arguments about universality (which we can discuss if there's interest), I think all healthy people are theoretically capable of learning anything that can be learned. But that doesn't mean they will! What stops them isn't their genes, it's their ideas. They have anti-rational memes from early childhood which are very strongly entrenched. (I also think people have free will, but often choose to evade, rationalize, breach their integrity, etc.)

Some people have better ideas and memes than others. So I share a conclusion with you: some people are dumber than others in important very-hard-to-change ways (even if it's not genetic), and IQ test scores do represent some of this (imperfectly, but meaningfully).

For info about memes and universality, see The Beginning of Infinity."

You say that you give an argument here, I don't see any. I see you sending me to read a book or other lengthy material on universality. That is an assertion, not an argument.

If the argument is about memes, I already asked you to bring evidence. If you don't bring evidence, I don't know what kind of argument we are supposed to have. You're just asserting that ideas can make people smarter.

I use the words intelligence (or being smart) and wisdom in their traditional sense. Intelligence has to do with power, ability. Wisdom has to do with good judgment.

It's not a 100% scientific

Elliot Temple writes:

It's not a 100% scientific question. For example, philosophy of science is relevant to figuring it out.

Details of biology are a scientific question

Bruno Turner writes:

As to details of how much of the biologically determined height or intelligence of people is due to genes purely and how much variation can occur because of environment, that is not a philosophic question. It is a scientific question. I can't say anything about that except parrot what people in the field have reported.

I don't know why you think

Elliot Temple writes:

I don't know why you think metaphysical means biological. It doesn't.

> I disagree completely that any healthy person can learn anything. I see no evidence whatsoever of such. Some things don't take simply time to understand but raw power of abstraction.

I provided an argument, not evidence. You didn't understand it. That doesn't make it false. You could try to learn about it or ask questions, if you were interested.

You can learn more about

Elliot Temple writes:

You can learn more about memes in David Deutsch's books, especially BoI. http://beginningofinfinity.com...

Can you give an exact definition of intelligence vs. wisdom? And a few typical examples. But it doesn't sound like you want to debate it. That's just what I'd ask if you wanted to discuss. If you aren't interested, please don't waste both our time by continuing unseriously and then quitting halfway through figuring the issues out.

How about finishing the rest

Bruno Turner writes:

How about finishing the rest of the paragraph at least?
> The fact that they are metaphysical in nature is also well established

In other words they are biologically determined. Not determined by memes.

I don't see any evidence to the contrary.

I disagree completely that any healthy person can learn anything. I see no evidence whatsoever of such. Some things don't take simply time to understand but raw power of abstraction.

As to memes

Bruno Turner writes:

I read Dawkins' Selfish Gene, where he introduces the term meme.

If my memory serves me well, a meme is an idea which, if evolutionarily successful, replicates itself effectively through many brains.

I don't know where you are getting the idea that memes can make people smarter. Ideas don't make people smarter. They can make them wiser.

> Differences in intellectual

Elliot Temple writes:

> Differences in intellectual ability are an obvious reality.

Could you quote what you're trying to argue with? You're just talking past me.

I am interested but I'm not getting into diatribes

Bruno Turner writes:

I am interested, but I'm not going to get into diatribes about measurement and data. I am not a statistician.

Differences in intellectual ability are an obvious reality. The fact that they are metaphysical in nature is also well established by basically the entirety of Western thought and Common Sense. Ayn Rand agreed. I can give you the quotes to prove it if you like. There's plenty.

The burden of proof as to the differences being "learned" is entirely on the naive tabula rasa proponents. (Naive is not a moral estimate, it is the description of the theory as opposed to Tabula Rasa in the Randian sense. I've talked about this elsewhere.)

Brains are different, brain power is different. You can't change that. I am tall, I did not "will myself" into height. Similarly I did not "will myself" into intelligence.

I just now this moment finished playing with a girl friend of mine as to who could multiply large numbers faster. She is a statistician, almost finished with her masters. I haven't done math in 4 years. I still beat her.

You can't change metaphysical potentiality through force of will, but you can change the actualization of the potential. That's where free will kicks in (again, free will in the Randian sense as opposed to the absolute free will).

If the potential is augmented through better nutrition and so forth, that's not actually an augmentation. It is the bringing into being of the complete potential as opposed to incomplete. The complete potential is reached when maximum development is achieved. In other words, you can only go so far and that's it. That's what logic suggests to me. The data seems to confirm it.

Why no followup? And why is

Elliot Temple writes:

Why no followup?

And why is no one else here interested?

The questions quoted above

Elliot Temple writes:

> The questions quoted above from the Wonderlic (which I certainly don't defend) are from the 15-question test, which cannot be statistically valid because it has too few questions, and I certainly don't know the mean and standard deviation from that test, since the document I found on the internet only referred to the 50-question test.

The 15-question test and the 50-question test use the same questions.

Thanks for writing a

Elliot Temple writes:

Thanks for writing a reasonable reply to someone you disagree with. My most important comments are at the bottom and concern a methodology that could be used to make progress in the discussion.

> I think we both have the right idea of "heritable." Lots of things are strongly heritable without being genetic.

OK, cool. Is there a single written work – which agrees “heritable” doesn’t imply genetic – which you think adequately expresses the argument today for genetic degrees of intelligence? It’d be fine if it’s a broad piece discussing lots of arguments with research citations that it’s willing to bet its claims on, or if it focuses on one single unanswerable point.

> I think you take my analogy of a brain with a computer too far.

It's not an analogy, brains are literally computers. A computer is basically something that performs arbitrary computations, like 2+3 or reversing the letters in a word. That’s not nearly enough for intelligence, but it’s a building block intelligence requires. Computation and information flow are a big part of physics now, and if you try to avoid them you're stuck with alternatives like souls and magic.

> I don't pretend to understand your argument above, and so I won't spend time debating it, but you surely realize that human intelligence evolved gradually over the last 5 or so million years (since our progenitors split from the branch that became chimps), and that this evolution did not consist of a mutant ADD Gate gene and another mutant NOT Gate gene.

There are lots of different ways to build computers. I don't think brains are made out of a big pile of NAND gates. But computers with totally different designs can all be universal – able to compute all the same things.
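To make that concrete (a toy illustration of my own, not anything from the books): the same function can be computed by completely different internal designs, e.g. XOR built purely from NAND gates versus XOR done with modular arithmetic:

```python
# Two deliberately different "designs" computing the same function.

def xor_from_nand(a, b):
    """XOR composed purely out of NAND gates."""
    c = not (a and b)      # NAND(a, b)
    d = not (a and c)      # NAND(a, c)
    e = not (b and c)      # NAND(b, c)
    return not (d and e)   # NAND(d, e)

def xor_arithmetic(a, b):
    """XOR via addition mod 2 instead of gates."""
    return bool((int(a) + int(b)) % 2)

# Different internals, identical behavior on every input.
for a in (False, True):
    for b in (False, True):
        assert xor_from_nand(a, b) == xor_arithmetic(a, b)
print("different designs, identical results")
```

The two implementations share nothing internally, yet have the same repertoire on these inputs – a small-scale analogue of the point about universal machines with different designs.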

> Indeed, if intelligence is properly defined as "the ability to learn", then plenty of animals have some level of intelligence. Certainly my cats are pretty smart, and one can, among the thousands of cute cat videos on the internet, find examples of cats reasoning through options to open doors or get from one place to another. Dogs are even more intelligent. Even Peikoff changed his mind on Rand's pronouncement that animals and man are in different distinct classes of beings (animals obey instinct, man has no instinct and thinks) when he got a dog. Who knew that first hand experience with something might illuminate a philosophical issue?

I agree with Rand and I can also reach the same conclusion with independent, Popperian reasons.

I've actually had several dogs and cats. So I'm not disagreeing from lack of first hand experience.

What I would ask if I lacked that experience – and this is relevant anyway – is if you could point out one thing I'm missing (due to lack of experience, or for any other reason). What fact was learned from experience with animals that I don't know, and which contradicts my view?

I think you're not being precise enough about learning, and that with your approach you'd have to conclude that some video game characters also learn and are pretty smart. Whatever examples you provide about animal behaviors, I’ll be happy to provide parallel software examples – which I absolutely don’t think constitute human-like intelligence (maybe you do?).

> Rand's belief in the distinct separation between man and animals when it comes to intellect is pretty contrary to the idea that man evolved gradually,

The jump to universality argument provides a way that gradual evolution could create something so distinct.

> in the next few years the genetic basis of intelligence will in fact be found and we will no longer have anything to argue about. I don't think there's any real point arguing over this idea.

Rather than argue, would you prefer to bet on whether the genetic basis of higher intelligence will be found within the next 5 years? I'd love to bet $10,000 on that issue.

In any case, even if there was such a finding, there’d still be plenty to argue about. It wouldn’t automatically and straightforwardly settle the issues regarding the right epistemology, theory of computation, way to understand universality, etc.

> We all know a bunch of really smart people who are in some ways either socially inept or completely nuts.

Yes, but there are cultural explanations for why that would be, and I don't think genes can control social skill (what exactly could the entire mechanism be, in hypothetical-but-rigorous detail?).

> I know a number of people smarter than myself who have developed some form of mental illness, and it's fairly clear that these things are not unrelated.

Tangent: I consider the idea of "mental illness" a means of excusing and legitimizing the initiation of force. It's used to subvert the rule of law – both by imprisoning persons without trial and by keeping some criminals out of jail.

Link: Thomas Szasz Manifesto.

The point of IQ tests is to determine (on average) whether an individual will do well in school or work, and the correspondence between test results and success in school and work is too close to dismiss the tests as invalid, even if you don't believe in g or don't believe in intelligence at all.

Sure. As I said, I think IQ tests should be used more.

The tests are excellent predictors, especially in the +/- 3 SD area

Yes. I agree the tests do worse with outliers, but working well for over 99% of people is still useful!

The government has banned IQ tests from being used as discriminators for job fitness;

That's an awful attack on freedom and reason!

Take four or five internet IQ tests. I guarantee you the answers will be in a small range (+/- 5ish), even though they are all different. Clearly they measure something! And that something is correlated with success in school and work (for large enough groups).

I agree.

My one experience with Deutsch was his two interviews on Sam Harris's podcast

For Popper and Deutsch, I'd advise against starting with anything other than Deutsch's two books.

FYI Deutsch is a fan of Ayn Rand, an opponent of global warming, strongly in favor of capitalism, a huge supporter of Israel, and totally opposed to cultural and moral relativism (thinks Western culture is objectively and morally better, etc.).

I have some (basically Objectivist) criticism of Deutsch's interviews which will interest people here. In short, he's recently started sucking up to lefty intellectuals, kinda like ARI. But his flawed approach to dealing with the public doesn't prevent some of his technical ideas about physics, computation and epistemology from being true.

But if one doesn't believe g exists,

I think g is a statistical construct best forgotten.

or that IQ tests measure anything real,

I agree that they do, and that the thing measured is hard to change. Many people equate genetic with hard to change, and non-genetic with easy to change, but I don't. There are actual academic papers in this field which say, more or less, "Even if it's not genetic, we may as well count it as genetic because it's hard to change."

or that IQ test results don't correlate with scholastics or job success across large groups, then there's really nothing to discuss.

I agree that they do. I am in favor of more widespread use of IQ testing.

As I said, I think IQ tests measure a mix of intelligence, culture and background knowledge. I think these are all real, important, and hard to change. (Some types of culture and background knowledge are easy to change, but some other types are very hard to change, and IQ tests focus primarily on measuring the hard to change stuff, which is mostly developed in early childhood.)

Of course intelligence, culture and knowledge all correlate with job and school success.

Finally, I don't think agreement is possible on this issue, because much of your argument depends upon epistemological ideas of Popper/Deutsch and yourself, and I have read none of the source material. [...] I don't see how a discussion can proceed though on this IQ issue--or really any other issue--with you coming from such an alien (to me) perspective on epistemology that I have absolutely no insight into. I can't argue one way or the other about cultural memes since I have no idea what they are and what scientific basis for them exists. So I won't. I'm not saying you're wrong, I'm just saying I won't argue about something I know nothing about.

I'd be thrilled to find a substantial view on an interesting topic that I didn't already know about, that implied I was wrong about something important. Especially if it had some living representatives willing to respond to questions and arguments. I've done this (investigated ideas) many times, and currently have no high priority backlog. E.g. I know of no outstanding arguments against my views on epistemology or computation to address, nor any substantial rivals which aren't already refuted by an existing argument that I know of.

I've written a lot about methods for dealing with rival ideas. I call my approach Paths Forward. The basic idea is that it's rational to act so that:

  1. If I'm mistaken
  2. And someone knows it (and they're willing to share their knowledge)
  3. Then there's some reasonable way that I can find out and correct my mistake.

This way I don't actively prevent fixing my mistakes and making intellectual progress.

There are a variety of methods that can be used to achieve this, and also a variety of common methods which fail to achieve this. I consider the Paths-Forward-compatible methods rational, and the others irrational.

The rational methods vary greatly on how much time they take. There are ways to study things in depth, and also faster methods available when desired. Here's a fairly minimal rational method you could use in this situation:

Read until you find one mistake. Then stop and criticize.

You’ll find the first mistake early on unless the material is actually good. (BTW you're allowed to criticize meta mistakes, such as the author failing to say why his stuff matters, rather than only criticizing internal or factual errors. You can also stop reading at your first question, instead of criticism.)

Your first criticism (or question) will often be met with dumb replies that you can evaluate using knowledge you already have about argument, logic, etc. Most people with bad ideas will make utter fools of themselves in answer to your first criticism or question. OK, done. Rather than ignore them, you've actually addressed their position, and their position now has an outstanding criticism (or unanswered question), and there is a path forward available (they could, one day, wise up and address the issue).

Sometimes the first criticism will be met with a quality reply which addresses the issue or refers you to a source which addresses it. In that case, you can continue reading until you find one more mistake. Keep repeating this process. If you end up spending a bunch of time learning the whole thing, it's because you can't find any unaddressed mistakes in it (it's actually great)!

A crucial part of this method is actually saying your criticism or question. A lot of people read until the first thing they think is a mistake, then stop with no opportunity for a counter-argument. By staying silent, they're also giving the author (and his fans) no information to use to change their minds. Silence prevents progress regardless of which side is mistaken. Refusing to give even one argument leaves the other guy's position unrefuted, and leaves your position as not part of the public debate.

Another important method is to cite some pre-existing criticism of a work. You must be willing to take responsibility for what you cite, since you're using it to speak for you. It can be your own past arguments, or someone else's. The point is, the same bad idea doesn't need to be refuted twice – one canonical, reusable refutation is adequate. And by intentionally writing reusable material throughout your life, you'll develop a large stockpile which addresses common ideas you disagree with.

Rational methods aren't always fast, even when the other guy is mistaken. The less you know about the issues, the longer it can take. However, learning more about issues you don't know about is worthwhile. And once you learn enough important broad ideas – particularly philosophy – you can use it to argue about most ideas in most fields, even without much field-specific knowledge. Philosophy is that powerful! Especially when combined with a moderate amount of knowledge of the most important other fields.

Given limited time and many things worth learning, there are options about prioritization. One reasonable thing to do, which many people are completely unwilling to do, is to talk about one's interests and priorities, and actually think them through in writing and then expose one's reasoning to public criticism. That way there's a path forward for one's priorities themselves.

To conclude, I think a diversion into methodology could allow us to get the genetic intelligence discussion unstuck. I also believe that such methodology (epistemology) issues are a super important topic in their own right.

Impossibility of Agreement

edpowell's picture

I think we both have the right idea of "heritable." Lots of things are strongly heritable without being genetic. You mention dialect, but there's also politics, religion, and which professional baseball team one roots for, and there is certainly not a Christianity gene or a Conservative gene (though there may be genetic influence here). And, while there certainly should be a Philadelphia Phillies gene, alas, there is none.

I think you take my analogy of a brain with a computer too far. I certainly don't believe a brain is like a Core-i7 processor with NAND and ADD gates, nor do I believe that intelligence consists of coding for this type of computing. Indeed, despite all the science fiction I've read, I've yet to find plausible any ideas on creating "artificial intelligence" that consists of anything like human thought from anything close to modern desktop computing. I don't pretend to understand your argument above, and so I won't spend time debating it, but you surely realize that human intelligence evolved gradually over the last 5 or so million years (since our progenitors split from the branch that became chimps), and that this evolution did not consist of a mutant ADD Gate gene and another mutant NOT Gate gene. Indeed, if intelligence is properly defined as "the ability to learn", then plenty of animals have some level of intelligence. Certainly my cats are pretty smart, and one can, among the thousands of cute cat videos on the internet, find examples of cats reasoning through options to open doors or get from one place to another. Dogs are even more intelligent. Even Peikoff changed his mind on Rand's pronouncement that animals and man are in different distinct classes of beings (animals obey instinct, man has no instinct and thinks) when he got a dog. Who knew that first hand experience with something might illuminate a philosophical issue? (Certainly not the current crop of Objectivist "leaders"). This issue of evolution was one that bothered Rand for many years.
Neil has written an excellent essay in JARS about it that explains that (and I'm paraphrasing) Rand's belief in the distinct separation between man and animals when it comes to intellect is pretty contrary to the idea that man evolved gradually, and so his current intellect is a product of many thousands of small changes built up over the last 200,000 years (the first diaspora) or really the last 50,000 (the second diaspora). Rand's psychological/philosophical conclusion--man is utterly distinct from animals--did not agree with the science of evolutionary biology. But Rand was smart enough NOT to say "evolution is bullshit", unlike many current Objectivists who say similar stupid things about, e.g., quantum mechanics, relativity, or the Big Bang Theory. But neither did Rand try to alter her ideas in response to biological science. This was a flaw in her thinking.

The idea that there was a day when the first human mutant was born who could have (had he been taught) understood calculus, while his ancestors could not have done so is not correct. This is not how biology (and evolution) works, which is through gradual change, trial and error. Man's intellect evolved like his upright posture and his opposable thumbs, bit by little bit, and this basic biological fact is a strong indicator that in the next few years the genetic basis of intelligence will in fact be found and we will no longer have anything to argue about. I don't think there's any real point arguing over this idea. The really interesting thing about genetics is that genes code for proteins that can lead to multiple different phenotypical traits simultaneously, so a mutation in one gene may lead to a favorable result (resistance to malaria) while leading also to an unfavorable result (sickle-cell anemia). The evolution of intelligence is one of those things that must have happened in fits and starts, going backwards before going forwards, as the genes that coded for being smarter also possibly coded for other things less desirable. We all know a bunch of really smart people who are in some ways either socially inept or completely nuts. I know a number of people smarter than myself who have developed some form of mental illness, and it's fairly clear that these things are not unrelated. My wife is smarter than I am from a test perspective but doesn't have the common sense God gave a goose. So there's obviously variation. I happen to know a bunch of millennial artist types (don't ask!) and if you heard any of them speak about economics, politics, history, science, or math, you'd think they were all borderline retarded. But in fact they are quite smart from an IQ perspective; they've just been fed a bunch of nonsensical propaganda in pedagogically destructive government schools, not to mention the destructive peer pressure they are all under in Hollywood. 
Yet give 'em an IQ test, and they'd almost all be >120. So the common notion of, for example, "Yaron Brook is a complete idiot" is true in one sense--Yaron is outrageously condescendingly arrogant about topics about which he knows nothing (just like my millennial friends), but would score high on an IQ test and so is "smart" in the technical sense of the term. And indeed his "smarts" have correlated with success in the real world, though not with the ability to recognize that the Cato Institute is not a defender of liberty. My "idiot" millennial friends are also successful in the real world, so there's that too.

The point of IQ tests is to determine (on average) whether an individual will do well in school or work, and the correspondence between test results and success in school and work is too close to dismiss the tests as invalid, even if you don't believe in g or don't believe in intelligence at all. The tests are excellent predictors, especially in the +/- 3 SD area where you avoid the really, really smart (and potentially kooky) and the really, really dumb. The government has banned IQ tests from being used as discriminators for job fitness; except, for some reason, the Wonderlic. The Wonderlic people have modified the test questions in some ways to get around the government's ban on IQ tests, and yet still maintain a pretty good correspondence with job success. Obviously we can all argue about individual test questions, but remember, this particular test was designed to be given to English-speakers in America who have gone to high school and can read, not for everybody. The reason I recommended it was not because I thought it was some paragon of IQ test virtue, but because it was 12 minutes long and readers might be willing to set aside that time, rather than the hour or more of some other internet IQ tests. There are plenty of IQ tests on the internet, some free and some for money, and if you don't like the Wonderlic, feel free to try one or more of them. The questions quoted above from the Wonderlic (which I certainly don't defend) are from the 15-question test, which cannot be statistically valid because it has too few questions, and I certainly don't know the mean and standard deviation from that test, since the document I found on the internet only referred to the 50-question test. Take four or five internet IQ tests. I guarantee you the answers will be in a small range (+/- 5ish), even though they are all different. Clearly they measure *something*! And that something is correlated with success in school and work (for large enough groups).

Finally, I don't think agreement is possible on this issue, because much of your argument depends upon epistemological ideas of Popper/Deutsch and yourself, and I have read none of the source material. My one experience with Deutsch was his two interviews on Sam Harris's podcast, the second of which I couldn't get past the 45 minute mark because he wasn't making any sense to me. Now that could be me, not him. For all I know, if I were to read the Deutsch book you recommend, it might be entirely clear and reasonable. I have in fact put the book on my list to read, but with over 100 other books already purchased but not yet read, it'll probably be a while before I get around to buying it, much less reading it. I don't see how a discussion can proceed though on this IQ issue--or really any other issue--with you coming from such an alien (to me) perspective on epistemology that I have absolutely no insight into. I can't argue one way or the other about cultural memes since I have no idea what they are and what scientific basis for them exists. So I won't. I'm not saying you're wrong, I'm just saying I won't argue about something I know nothing about. I do know about the twin studies, though, and they show substantial correlation of IQ with shared genes and no correlation with shared environment, or even non-shared environment. These are not perfect studies, and I'm sure they have flaws, not least of which is restriction in range. But still, it's kind of looking good for Team Genetics. But if one doesn't believe g exists, or that IQ tests measure anything real, or that IQ test results don't correlate with scholastics or job success across large groups, then there's really nothing to discuss.

I believe I understand that

Elliot Temple's picture

I believe I understand that you’re fed up with various bad counter-arguments about IQ, and why, and I sympathize with that. I think we can have a friendly and productive discussion, if you’re interested, and if you either already have sophisticated knowledge of the field or you’re willing to learn some of it (and if, perhaps as an additional qualification, you have an IQ over 130). As I emphasized, I think we have some major points of agreement on these issues, including rejecting some PC beliefs. I’m not going to smear you as a racist!

Each of these assertions is contrary to the data.

My claims are contrary to certain interpretations of the data, which is different than contradicting the data itself. I’m contradicting some people regarding some of their arguments, but that’s different than contradicting facts.

Just look around at the people you know: some are a lot smarter than others, some are average smart, and some are utter morons.

I agree. I disagree about the details of the underlying mechanism. I don’t think smart vs. moron is due to a single underlying thing. I think it’s due to multiple underlying things.

This also explains reversion to the mean

Reversion to the mean can also be explained by smarter parents not being much better parents in some crucial ways. (And dumber parents not being much worse parents in some crucial ways.)

Every piece of "circumstantial evidence" points to genes

No piece of evidence that fails to contradict my position can point to genes over my position.

assertion that there exists a thing called g

A quote about g:

To summarize ... the case for g rests on a statistical technique, factor analysis, which works solely on correlations between tests. Factor analysis is handy for summarizing data, but can't tell us where the correlations came from; it always says that there is a general factor whenever there are only positive correlations. The appearance of g is a trivial reflection of that correlation structure. A clear example, known since 1916, shows that factor analysis can give the appearance of a general factor when there are actually many thousands of completely independent and equally strong causes at work. Heritability doesn't distinguish these alternatives either. Exploratory factor analysis being no good at discovering causal structure, it provides no support for the reality of g.
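The 1916 result alluded to in that quote is Godfrey Thomson's sampling model, and it's easy to reproduce. Here's a minimal simulation sketch of my own (the specific numbers are arbitrary assumptions, not from the quoted source): give each person thousands of completely independent abilities, let each test draw on a random subset of them, and you get all-positive correlations between tests plus a dominant first factor, i.e. an apparent "g" with no general cause behind it.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_abilities, n_tests = 2000, 1000, 8

# Thousands of completely independent, equally strong "abilities" per person.
abilities = rng.normal(size=(n_people, n_abilities))

# Thomson's sampling model: each test taps a random subset of the abilities.
scores = np.empty((n_people, n_tests))
for t in range(n_tests):
    subset = rng.choice(n_abilities, size=300, replace=False)
    scores[:, t] = abilities[:, subset].sum(axis=1)

corr = np.corrcoef(scores, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)  # ascending order

# Overlapping subsets make every between-test correlation positive,
# so factor analysis "finds" a general factor anyway.
off_diagonal = corr[~np.eye(n_tests, dtype=bool)]
print(off_diagonal.min() > 0, eigvals[-1] / eigvals.sum())
```

With these numbers the first factor accounts for several times its fair share of the variance, even though, by construction, there are a thousand independent and equally strong causes and no common cause at all.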

Back to quoting Ed:

I just read an article the other day where researchers have identified a large number of genes thought to influence intelligence.

I’ve read many primary source articles. That kind of correlation research doesn’t refute what I’m saying.

What do you think psychometricians have been doing for the last 100 years?

Remaining ignorant of philosophy, particularly epistemology, as well as the theory of computation.

It is certainly true that one can create culturally biased IQ test questions. This issue has been studied to death, and such questions have been ruthlessly removed from IQ tests.

They haven’t been removed from the version of the Wonderlic IQ test you chose to link, which I took my example from.

I think there’s an important issue here. I think you believe there are other IQ tests which are better. But you also believe the Wonderlic is pretty good and gets roughly the same results as the better tests for lots of people. Why, given the flawed question I pointed out (which had a lot more wrong with it than cultural bias), would the Wonderlic results be similar to the results of some better IQ test? If one is flawed and one isn’t flawed, why would they get similar results?

My opinion is as before: IQ tests don’t have to avoid cultural bias (and some other things) to be useful, because culture matters to things like job performance, university success, and how much crime an immigrant commits.

I don't use the term "genetic" because I don't mean "genetic", I mean "heritable," because the evidence supports the term "heritable."

The word "heritable" is a huge source of confusion. A technical meaning of "heritable" has been defined which is dramatically different than the standard English meaning. E.g. accent is highly "heritable" in the terminology of heritability research.

The technical meaning of “heritable” is basically: “Variance in this trait is correlated with changes in genes, in the environment we did the study in, via some mechanism of some sort. We have no idea how much of the trait is controlled by what, and we have no idea what environmental changes or other interventions would affect the trait in what ways.” When researchers know more than that, it’s knowledge of something other than “heritability”. More on this below.

I have not read the articles you reference on epistemology, but intelligence has nothing to do with epistemology, just as a computer's hardware has nothing to do with what operating system or applications you run on it.

Surely you accept that ideas (software) have some role in who is smart and who is a moron? And so epistemology is relevant. If one uses bad methods of thinking, one will make mistakes and look dumb.

Epistemology also tells us how knowledge can and can’t be created, and knowledge creation is a part of intelligent thinking.

OF COURSE INTELLIGENCE IS BASED ON GENES, because humans are smarter than chimpanzees.

I have a position on this matter which is complicated. I will briefly give you some of the outline. If you are interested, we can discuss more details.

First, one has to know about universality, which is best approached via the theory of computation. Universal classical computers are well understood. The repertoire of a classical computer is the set of all computations it can compute. A universal classical computer can do any computation which any other classical computer can do. For evaluating a computer’s repertoire, it’s allowed unlimited time and data storage.

Examples of universal classical computers are Macs, PCs, iPhones and Android phones (any of them, not just specific models). Human brains are also universal classical computers, and so are the brains of animals like dogs, cows, cats and horses. “Classical” is specified to omit quantum computers, which use aspects of quantum physics to do computations that classical computers can’t do.

Computational universality sounds very fancy and advanced, but it’s actually cheap and easy. It turns out it’s difficult to avoid computational universality while designing a useful classical computer. For example, the binary logic operations NOT and AND (plus some control flow and input/output details) are enough for computational universality. That means they can be used to calculate division, Fibonacci numbers, optimal chess moves, etc.

There’s a jump to universality. Take a very limited thing, and add one new feature, and all of a sudden it gains universality! E.g. our previous computer was trivial with only NOT, and universal when we added AND. The same new feature which allowed it to perform addition also allowed it to perform trigonometry, calculus, and matrix math.
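To make the cheapness concrete, here's an illustrative sketch of my own (not anyone's real hardware design) that builds multi-bit addition out of nothing but NOT and AND, with Python supplying the control-flow and input/output glue mentioned above:

```python
# Everything below is built from these two primitives (bits are 0 or 1).
def NOT(a): return 1 - a
def AND(a, b): return a & b

# Derived gates, via De Morgan's laws.
def OR(a, b): return NOT(AND(NOT(a), NOT(b)))
def XOR(a, b): return AND(OR(a, b), NOT(AND(a, b)))

# A one-bit full adder: returns (sum bit, carry out).
def full_adder(a, b, carry):
    partial = XOR(a, b)
    return XOR(partial, carry), OR(AND(a, b), AND(partial, carry))

# Ripple-carry addition of two unsigned integers, bit by bit.
def add(x, y, bits=8):
    carry, result = 0, 0
    for i in range(bits):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

print(add(13, 29))  # 42
```

The same two primitives, wired differently, suffice for division, Fibonacci numbers, chess moves, and so on. That's the point of the jump.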

There are different types of universality, e.g. universal number systems (systems capable of representing any number which any other number system can represent) and universal constructors. Some things, such as the jump to universality, apply to multiple types of universality. The jump has to do with universality itself rather than with computation specifically.

Healthy human minds are universal knowledge creators. Animal minds aren’t. This means humans can create any knowledge which is possible to create (they have a universal repertoire). This is the difference between being intelligent or not intelligent. Genes control this difference (with the usual caveats, e.g. that a fetal environment with poison could cause birth defects).

Among humans, there are also degrees of intelligence. E.g. a smart person vs. an idiot. Animals are simply unintelligent and don’t have degrees of intelligence at all. Why do animals appear somewhat intelligent? Because their genes contain evolved knowledge and code for algorithms to control animal behavior. But that’s a fundamentally different thing than human intelligence, which can create new knowledge rather than relying on previously evolved knowledge present in genes.

Because of the jump to universality, there are no people or animals which can create 20%, 50%, 80% or 99% of all knowledge. Nothing exists with that kind of partial knowledge creation repertoire. It’s only 100% (universal) or approximately zero. If you have a conversation with someone and determine they can create a variety of knowledge (a very low bar for human beings, though no animal can meet it), then you can infer they have the capability to do universal knowledge creation.

Universal knowledge creation (intelligence) is a crucial capability our genes give us. From there, it’s up to us to decide what to do with it. The difference between a moron and a genius is how they use their capability.

Differences in degrees of human intelligence among healthy people (with, e.g., adequate food) are due approximately 100% to ideas, not genes. Some of the main factors in early childhood idea development are:

  • Your culture’s anti-rational memes.
  • The behavior of your parents.
  • The behavior of other members of your culture that you interact with.
  • Sources of cultural information such as YouTube.
  • Your own choices, including mental choices about what to think.

The relevant ideas for intelligence are mostly unconscious and involve lots of methodology. They’re very hard for adults in our culture to change.

This is not the only important argument on this topic, but it’s enough for now.

This isn’t refuted in The Bell Curve, which doesn’t discuss universality. The concept of universal knowledge creators was first published in 2011. (FYI this book is by my colleague, and I contributed to the writing process).

Below I provide some comments on The Bell Curve, primarily about how it misunderstands heritability research.

===========================================================

There is a most absurd and audacious Method of reasoning avowed by some Bigots and Enthusiasts, and through Fear assented to by some wiser and better Men; it is this. They argue against a fair Discussion of popular Prejudices, because, say they, tho’ they would be found without any reasonable Support, yet the Discovery might be productive of the most dangerous Consequences. Absurd and blasphemous Notion! As if all Happiness was not connected with the Practice of Virtue, which necessarily depends upon the Knowledge of Truth.

EDMUND BURKE, A Vindication of Natural Society

This is a side note, but I don’t think the authors realize Burke was being ironic and was attacking the position stated in this quote. The whole work, called a vindication of natural society (anarchy), is an ironic attack, not actually a vindication.

Heritability, in other words, is a ratio that ranges between 0 and 1 and measures the relative contribution of genes to the variation observed in a trait.

This is incomplete because it omits the simplifying assumptions being made. From Yet More on the Heritability and Malleability of IQ:

To summarize: Heritability is a technical measure of how much of the variance in a quantitative trait (such as IQ) is associated with genetic differences, in a population with a certain distribution of genotypes and environments. Under some very strong simplifying assumptions, quantitative geneticists use it to calculate the changes to be expected from artificial or natural selection in a statistically steady environment. It says nothing about how much the over-all level of the trait is under genetic control, and it says nothing about how much the trait can change under environmental interventions. If, despite this, one does want to find out the heritability of IQ for some human population, the fact that the simplifying assumptions I mentioned are clearly false in this case means that existing estimates are unreliable, and probably too high, maybe much too high.

Note that the word “associated” in the quote refers to correlation, not causality. The authors of The Bell Curve instead use the word “contribution”, which doesn’t mean “correlation” and is therefore wrong.

Here’s another source on the same point, Genetics and Reductionism:

high [narrow] heritability, which is routinely taken as indicative of the genetic origin of traits, can occur when genes alone do not provide an explanation of the genesis of that trait. To philosophers, at least, this should come as no paradox: good correlations need not even provide a hint of what is going on. They need not point to what is sometimes called a "common cause". They need not provide any guide to what should be regarded as the best explanation.

You can also read some primary source research in the field (as I have) and see what sort of “heritability” it does and doesn’t study, and what sort of limitations it has. If you disagree, feel free to provide a counter example (primary source research, not meta or summary), which you’ve read, which studies a different sort of IQ “heritability” than my two quotes talk about.

What happens when one understands “heritable” incorrectly?

Then one of us, Richard Herrnstein, an experimental psychologist at Harvard, strayed into forbidden territory with an article in the September 1971 Atlantic Monthly. Herrnstein barely mentioned race, but he did talk about heritability of IQ. His proposition, put in the form of a syllogism, was that because IQ is substantially heritable, because economic success in life depends in part on the talents measured by IQ tests, and because social standing depends in part on economic success, it follows that social standing is bound to be based to some extent on inherited differences.

This is incorrect because it treats “heritable” (as measured in the research) as meaning “inherited”.

How Much Is IQ a Matter of Genes?

In fact, IQ is substantially heritable. [...] The most unambiguous direct estimates, based on identical twins raised apart, produce some of the highest estimates of heritability.

This incorrectly suggests that IQ is substantially a matter of genes because it’s “heritable” (as determined by twin studies).

Specialists have come up with dozens of procedures for estimating heritability. Nonspecialists need not concern themselves with nuts and bolts, but they may need to be reassured on a few basic points. First, the heritability of any trait can be estimated as long as its variation in a population can be measured. IQ meets that criterion handily. There are, in fact, no other human traits—physical or psychological—that provide as many good data for the estimation of heritability as the IQ. Second, heritability describes something about a population of people, not an individual. It makes no more sense to talk about the heritability of an individual’s IQ than it does to talk about his birthrate. A given individual’s IQ may have been greatly affected by his special circumstances even though IQ is substantially heritable in the population as a whole. Third, the heritability of a trait may change when the conditions producing variation change. If, one hundred years ago, the variations in exposure to education were greater than they are now (as is no doubt the case), and if education is one source of variation in IQ, then, other things equal, the heritability of IQ was lower then than it is now.

...

Now for the answer to the question, How much is IQ a matter of genes? Heritability is estimated from data on people with varying amounts of genetic overlap and varying amounts of shared environment. Broadly speaking, the estimates may be characterized as direct or indirect. Direct estimates are based on samples of blood relatives who were raised apart. Their genetic overlap can be estimated from basic genetic considerations. The direct methods assume that the correlations between them are due to the shared genes rather than shared environments because they do not, in fact, share environments, an assumption that is more or less plausible, given the particular conditions of the study. The purest of the direct comparisons is based on identical (monozygotic, MZ) twins reared apart, often not knowing of each other’s existence. Identical twins share all their genes, and if they have been raised apart since birth, then the only environment they shared was that in the womb. Except for the effects on their IQs of the shared uterine environment, their IQ correlation directly estimates heritability. The most modern study of identical twins reared in separate homes suggests a heritability for general intelligence between .75 and .80, a value near the top of the range found in the contemporary technical literature. Other direct estimates use data on ordinary siblings who were raised apart or on parents and their adopted-away children. Usually, the heritability estimates from such data are lower but rarely below .4.

This is largely correct if you read “heritability” with the correct, technical meaning. But the assumption that people raised apart don’t share environment is utterly false. People raised apart – e.g. in different cities in the U.S. – share tons of cultural environment. For example, many ideas about parenting practices are shared between parents in different cities.
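The logic of the direct estimate can be sketched with a toy simulation (the model, function names, and parameters here are my own illustration, not anything from the studies): under a purely additive model where twins share genes but have truly independent environments, the twin-twin correlation recovers the assumed heritability. If the "independent" environments actually overlapped culturally, the same correlation would overstate heritability.

```python
import math
import random

def simulate_mz_twins_apart(n, h2, seed=0):
    """Toy additive model: each twin pair shares one genetic value;
    environments are fully independent (the 'reared apart' assumption)."""
    rng = random.Random(seed)
    xs, ys = [], []
    for _ in range(n):
        g = rng.gauss(0, 1)                         # genes, shared by both twins
        e1, e2 = rng.gauss(0, 1), rng.gauss(0, 1)   # environments, independent
        xs.append(math.sqrt(h2) * g + math.sqrt(1 - h2) * e1)
        ys.append(math.sqrt(h2) * g + math.sqrt(1 - h2) * e2)
    return xs, ys

def pearson(xs, ys):
    """Pearson correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

xs, ys = simulate_mz_twins_apart(50_000, h2=0.75)
print(round(pearson(xs, ys), 2))  # close to the assumed h2 of 0.75
```

The point of the sketch: the correlation only equals heritability because the `e1`/`e2` terms are generated independently. Any shared environmental term moved into `g` would be counted as "heritable" by this method.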

Despite my awareness of these huge problems with IQ research, I still agree with some things you’re saying and believe I know how to defend them correctly. In short, genetic inferiority is no good (and contradicts Ayn Rand, btw), but cultural inferiority is a major world issue (and correlates with race, which has led to lots of confusion).

As a concrete reminder of what we’re discussing, I’ll leave you with an IQ test question to ponder:

General Intelligence, etc.

Neil Parille

Elliott:

Richard Haier has done lots of work on this.

In 2013 he came out with "The Intelligent Brain," a course from The Great Courses. It's on sale for 30 bucks. It's well worth it.

He also did a recent interview with Jordan Peterson.

Haier is a neuroscientist and has found that when brain scans are done on people doing the more g-loaded parts of IQ tests, the brain "lights up" differently. So we now know that there is a biological basis for g.

Ed:

"My score on the Wonderlic last year was identical to my score on the GRE taken 34 years previously. From a subjective viewpoint, I believed that I was at the absolute top of my intellectual game when I took the GRE."

Haier discusses Deary's study of Scottish IQ. One year in the 1930s, every student in Scotland (or perhaps just all the 15-year-olds, I don't recall) was given an IQ test. Deary followed up decades later with these now almost-elderly former students. He found that IQ was remarkably stable over the years: a person's IQ was generally the same unless he had dementia or a serious illness/accident. (In other words, it didn't vary more than a person's IQ results do when he takes tests a few months apart.)

I don't know what to say here

edpowell

You make three assertions:
1) IQ tests measure a mix of intelligence, culture and background knowledge
2) I don't think IQ is heritable in the normal sense of the word "heritable", meaning that it's controlled by genes
3) I don't believe that IQ tests measure general intelligence

Each of these assertions is contrary to the data. It is not my job to reproduce 800 pages of The Bell Curve in a blog post. I implore you to read that book, because Herrnstein and Murray go through every one of these objections to IQ (and many more!) in excruciating detail and show the actual data that contradict the objections.

1) It is certainly true that one can create culturally biased IQ test questions. This issue has been studied to death, and such questions have been ruthlessly removed from IQ tests. It would be ridiculous to give a written IQ test in English to a person who does not know English, and psychologists don't do this. There are any number of IQ tests that are pictorial only. There is even a strong correlation between reaction time and g, so a rough estimate can be obtained just by playing a game. What do you think psychometricians have been doing for the last 100 years?

2) I don't use the term "genetic" because I don't mean "genetic"; I mean "heritable," because the evidence supports the term "heritable." It is certainly the case that intelligence variations can be 100% caused by environment and yet still be 80% heritable. When The Bell Curve was written, the authors were much closer in their opinions to what you propose: a gene-environment interaction of some kind, especially with the then-new discovery of epigenetics. They were careful not to use the word "genetic" because there was no evidence at the time that specific genes coded for intelligence. The field has changed radically with the sequencing of the human genome, and I just read an article the other day in which researchers identified a large number of genes thought to influence intelligence. More work is required, but the answer will probably be found in less than 5 years.

But let's step back a moment and look at the big picture: OF COURSE INTELLIGENCE IS BASED ON GENES, because humans are smarter than chimpanzees. There's no "genetic-environmental" interaction making chimpanzees less smart than humans--you can't adopt a chimpanzee into a human home and, when he doesn't grow up to get into Harvard, claim that the chimpanzee suffered from racist anti-simianism. He just doesn't have the genes for a big brain. So obviously intelligence is based on genes. Also obviously, since intelligence in humans follows a normal distribution, there must be a LOT of genes contributing to intelligence (the central limit theorem). This also explains reversion to the mean--it's a simple combinatorics problem. It also explains the finding that while intelligence seems less heritable in youth, it becomes more and more heritable as one gets older (from <50% to more than 80%), and that when basic environmental factors are equalized (essentially enough decent food), the variance in heritability estimates drops considerably and the estimates converge to the high end.

Every piece of "circumstantial evidence" points to genes; what's lacking is the identification of the genes themselves. That will come soon.

3) All I can do is point you to The Bell Curve. They go over this objection in excruciating detail and present the data that underlies the assertion that there exists a thing called g, and that g can be statistically teased out of the scores on multiple different kinds of tests given in parallel. But how can one doubt the existence of general intelligence? Just look around at the people you know: some are a lot smarter than others, some are average smart, and some are utter morons. This is true even within families where the shared environment is quite similar. Occam's Razor alone would lead one to believe there is something like "general intelligence" just from casual observations. The fact that this observation can be backed up by data makes it essentially unassailable.

I have not read the articles you reference on epistemology, but intelligence has nothing to do with epistemology, just as a computer's hardware has nothing to do with what operating system or applications you run on it. The hardware comes first. It can be fast or slow. The software can be loaded on the hardware if the hardware meets the minimum specs. You can't run Call of Duty on a Chromebook. Intelligence is hardware capability. A person's psycho-epistemology is the software running on the hardware. These are completely different things. And while it is true that the brain, being a biological organ, can somewhat atrophy if not used, IQ is remarkably consistent with age (barring disease). My score on the Wonderlic last year was identical to my score on the GRE taken 34 years previously. From a subjective viewpoint, I believed that I was at the absolute top of my intellectual game when I took the GRE. But, apparently not. Yay me!
