A Test to Measure How Rational You Really Are
George Dvorsky   Jul 5, 2013   io9  

Standard IQ tests are problematic on many levels — not least, because they do very little to tell us about the quality of our thinking. Looking to overcome this oversight, psychologist Keith Stanovich has started to work on the first-ever Rationality Quotient test. We spoke to him to learn more.

Keith E. Stanovich is Professor of Human Development and Applied Psychology at the University of Toronto. The author of over 200 scientific articles and seven books, he, along with Richard West, was recently given a grant by the John Templeton Foundation to create the first comprehensive assessment of rational thinking — a test that will ultimately determine a person's 'rationality quotient'.

And indeed, the value of rationality and “good thinking” tends to be diminished by the importance we place on intelligence. But as we learned from Stanovich, the two often have very little to do with each other.

Don't standard IQ tests already measure for rationality — and doesn't intelligence correlate with rationality?

IQ [tests] do not at all directly assess processes of rational thinking, as they are defined by cognitive scientists.

This is why a separate test is needed to assess RQ. We — my colleague Richard West and I — were drawn to this area many years ago through our longstanding interest in the heuristics and biases research program inaugurated by Daniel Kahneman and Amos Tversky several decades ago.

This all got started back in 2002 when Kahneman won the Nobel Prize in Economics (Tversky died in 1996). The press release for the award from the Royal Swedish Academy of Sciences drew attention to the roots of the award-winning work in “the analysis of human judgment and decision-making by cognitive psychologists.”

Kahneman was lauded for discovering “how human judgment may take heuristic shortcuts that systematically depart from basic principles of probability. His work has inspired a new generation of researchers in economics and finance to enrich economic theory using insights from cognitive psychology into intrinsic human motivation.”

One reason that the Kahneman and Tversky work was so influential was that it addressed deep issues concerning human rationality. Their work, along with that of many others, has shown how the basic architecture of human cognition makes all of us prone to these awful errors of judgment and decision making.

But being prone to these errors does not mean that we always make them. Every person, on some occasions, overrides the tendency to make these reasoning errors and instead makes the rational response. It is not that we make errors all the time. Even more importantly, our research group has shown that there are systematic differences among individuals in the tendency to make errors of judgment and decision making.

And the fact that there are systematic individual differences in the judgment and decision making situations studied by Kahneman and Tversky means that there are variations in important attributes of human cognition related to rationality — how efficient we are in achieving our goals.

This fact is curious because most laypeople are prone to think that IQ tests are tests of, to put it colloquially, good thinking. Scientists and laypeople alike would tend to agree that “good thinking” encompasses good judgment and decision making — the type of thinking that helps us achieve our goals. In fact, the type of “good thinking” that Kahneman and Tversky studied was deemed so important that research on it was awarded the Nobel Prize. Yet assessments of such good thinking are nowhere to be found on IQ tests.

Are you interested in how cognitive biases affect rationality? Which ones should we be most aware of?

Absolutely. Cognitive biases are an essential part of the modern definition of rationality in cognitive science.

To think rationally means taking the appropriate action given one’s goals and beliefs — what we call instrumental rationality — and holding beliefs that are in synch with available evidence, or epistemic rationality. Collectively, the many tasks of the heuristics and biases program — and the even wider literature in decision science — comprise the operational definition of rationality in modern cognitive science (see my book Rationality and the Reflective Mind, 2011).

Let me give you some examples of instrumental rationality and irrationality:

  • The ability to display disjunctive reasoning in decision making [e.g. Either the Sun orbits the Earth, or the Earth orbits the Sun. The Sun does not orbit the Earth. Therefore, the Earth orbits the Sun.]
  • The tendency to show inconsistent preferences because of framing effects [e.g. saying a ‘glass is half empty’ can often be more persuasive than suggesting the inverse; this is somewhat related to the negativity bias]
  • The tendency to show a default bias [a.k.a. the status quo bias in which we hold a preference for the way things currently are]
  • The tendency to substitute affect for difficult evaluations [sometimes when we have to answer a difficult question we actually answer a related but different question without realizing a substitution has taken place]
  • The tendency to over-weight short-term rewards at the expense of long-term well-being [which is also referred to as the current moment bias]
  • The tendency to have choices affected by vivid stimuli [e.g. men have been shown to make poor decisions in the presence of an attractive female]
  • The tendency for decisions to be affected by irrelevant context

Likewise, they have studied aspects of epistemic rationality and irrationality, such as:

  • The tendency to show incoherent probability assessments
  • The tendency toward overconfidence in knowledge judgments
  • The tendency to ignore base-rates [a.k.a. the base rate fallacy; we often ignore background prevalence information in favor of new, case-specific information when making probability assessments]
  • The tendency not to seek to falsify hypotheses
  • The tendency to try to explain chance events
  • The tendency toward self-serving personal judgments
  • The tendency to evaluate evidence with a myside bias [where we only seek out perspectives that are sympathetic to our own]
  • The tendency to ignore the alternative hypothesis
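Base-rate neglect in particular is easy to make concrete. As a hypothetical illustration (the numbers here are mine, not Stanovich's): a screening test that is 99% accurate for a condition affecting 1 in 1,000 people still produces mostly false positives, as a quick Bayes' theorem calculation shows:

```python
# Hypothetical illustration of base-rate neglect: a screening test that is
# 99% sensitive and 99% specific for a condition affecting 1 in 1,000 people.
def posterior_given_positive(base_rate, sensitivity, specificity):
    """P(condition | positive test) via Bayes' theorem."""
    true_pos = base_rate * sensitivity              # P(condition) * P(+ | condition)
    false_pos = (1 - base_rate) * (1 - specificity) # P(no condition) * P(+ | no condition)
    return true_pos / (true_pos + false_pos)

p = posterior_given_positive(base_rate=0.001, sensitivity=0.99, specificity=0.99)
print(f"P(condition | positive) = {p:.1%}")  # roughly 9%, not 99%
```

Most people, neglecting the 1-in-1,000 base rate, guess something near 99%; the correct answer is about 9%.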

You just started a 3-year project to create the first comprehensive assessment of rational thinking. Why do we need such a thing?

All of the biases and processes listed above will be on our prototype measure of rational thinking that will be the outcome of our grant from the John Templeton Foundation. It is necessary to assess them directly on such a test because none of them are directly assessed on IQ tests.

However, there is an important caveat here. Although the tests fail to assess rational thinking directly, it could be argued that the processes that are tapped by IQ tests largely overlap with variation in rational thinking ability.

Perhaps intelligence is highly associated with rationality even though tasks tapping the latter are not assessed directly on the tests. Here is where empirical research comes in — some of which has been generated by our own research group. We have found that many rational thinking tasks show surprising degrees of dissociation from cognitive ability in university samples. Many classic effects from the heuristics and biases literature — base-rate neglect, framing effects, conjunction effects, anchoring biases, and outcome bias — are only modestly related to intelligence if run in between-subjects designs.

Most rational thinking tasks correlate to some degree with intelligence, but the correlation is almost always moderate enough (.60 or so at the very highest) to still create many cases where intelligence and rationality are out of synch. Hence my coining of the term dysrationalia in the early 1990s.

And what do you mean by "dysrationalia"?

I coined the term dysrationalia — an analogue of the word dyslexia — in the early 1990s in order to draw attention to what is missing in IQ tests. I define dysrationalia as the inability to think and behave rationally despite having adequate intelligence. Many people display the systematic inability to think or behave rationally despite the fact that they have more than adequate IQs.

One of the reasons that many of us are dysrationalic to some extent is that, for a variety of reasons, we have come to overvalue the kinds of thinking skills that IQ tests measure and undervalue other critically important cognitive skills, such as the ability to think rationally.

What are some examples of a person exhibiting low rationality? What are some unexpected or lesser known "risks" of not thinking completely rationally?

In my book What Intelligence Tests Miss (2009), I begin with a discussion of David Denby (the New Yorker film critic) and John Paulos (the mathematics professor) making disastrous personal investment decisions. I also discuss how many feel that George W. Bush was dysrationalic.

But in terms of specifics, here are some irrational thinking tendencies to consider:

  • Physicians choose less effective medical treatments
  • People fail to accurately assess risks in their environment
  • Information is misused in legal proceedings
  • Millions of dollars are spent on unneeded projects by government and private industry
  • Parents fail to vaccinate their children
  • Unnecessary surgery is performed
  • Animals are hunted to extinction
  • Billions of dollars are wasted on quack medical remedies
  • Costly financial misjudgments are made

Is rationality something that's innate? Or is there hope for people with low RQ?

Rationality is not entirely innate. It is as malleable as intelligence and possibly much more so. Roughly one half of rationality as we define it is not process but knowledge — knowledge that could be acquired by perhaps 70% or more of the population.

Here’s what I mean: The model of individual differences in rational thought that West and I have put together partitions rationality into fluid and crystallized components by analogy to the Gf and Gc of the Cattell/Horn/Carroll fluid-crystallized theory of intelligence.

Fluid rationality encompasses the process part of rational thought — the thinking dispositions of the reflective mind that lead to rational thought and action. Crystallized rationality encompasses all of the knowledge structures that relate to rational thought.

These knowledge structures are the tools of rationality (probabilistic thinking, logic, scientific reasoning) and they represent declarative knowledge that is often incompletely learned or not acquired at all. But they can be learned by most and in this sense rationality is teachable.

Rational thinking errors due to such knowledge gaps can occur in a potentially large set of coherent knowledge bases in the domains of probabilistic reasoning, causal reasoning, knowledge of risks, logic, practical numeracy, financial literacy, and scientific thinking (the importance of alternative hypotheses, etc.).

Photo of Kahneman: Andreas Rentz/Getty Images for Burda Media.


George P. Dvorsky serves as Chair of the IEET Board of Directors and also heads our Rights of Non-Human Persons program. He is a Canadian futurist, science writer, and bioethicist. He is a contributing editor at io9 — where he writes about science, culture, and futurism — and producer of the Sentient Developments blog and podcast. He served for two terms on the Board of Directors of Humanity+ (formerly the World Transhumanist Association).


This all sounds very technical, but it still does not describe or specify what exactly is meant by “rationality”?

Does it describe a rationality specifically applied through logic, and acquired knowledge and training, or the ability to control one’s emotions and apply mindfulness and vigilance to overcome “irrationality”, external influences/forces and moods?

In any case, it would seem that a part of the test would need to be a preliminary mood/emotion evaluation, to help validate and not invalidate any results of this RQ testing?

I do agree that rationality through mindfulness and vigilance can be instructed with training, (Buddhism 101), and that perhaps should even be introduced into nurture at home and school for children at the earliest of ages, (Huxley), as long as this applied philosophy does not stifle imagination, creativity, Self-expression and play?

Vulcans have an inclination to take rationality a bit too far, in my book.

Not the Rationality Fetish again.

“Rationality” is to those who do not fully understand cognitive psychology what “quantum” is to science crackpots. An exquisitely uber-cool concept, and a wonderful badge to wear if you want to distinguish yourself from the unwashed masses.

So, I stand with Steven Pinker on this.  The tests that measure ‘rationality’ don’t.

Just like IQ tests don’t measure intelligence.

@CygnusX1 The interviewee defines what he means by ‘rationality’ as follows: “To think rationally means taking the appropriate action given one’s goals and beliefs — what we call instrumental rationality — and holding beliefs that are in synch with available evidence, or epistemic rationality.”

@Richard I didn’t really spot anywhere in the interview signs of ‘Rationality Fetish’. It seems to me to be a good assessment of what rationality is, the different ways in which we can be rational or irrational, how it can be tested, and why it’s important. I don’t see any evidence that he thinks it is ALL-important, or that we should be striving for 100% rationality, so I don’t really see the problem. On the whole, I think humanity would benefit if people became generally more rational. It’s not that I don’t agree that it can be taken too far, I just don’t particularly see it happening here.

“To think rationally means taking the appropriate action given one’s goals and beliefs — what we call instrumental rationality — and holding beliefs that are in synch with available evidence, or epistemic rationality.”

Yet this does not define “rationality”? A terrorist may commit murderous acts and still claim he is acting rationally and especially in line with his beliefs and goals?

The caveat is this “epistemic rationality”, which, whatever it may be, is still unclear?

However, yes, rationality to “me” means being both logical where possible and being mindful and in control of emotional states and moods where possible. None of us are experts at this. Come to think of it, controlling moods and emotions does not correlate with rationality, but merely strength of will to suppress emotions?

“A terrorist may commit murderous acts and still claim he is acting rationally and especially in line with his beliefs and goals?”

And one person’s terrorist is another person’s freedom fighter. But point taken: taking appropriate action given one’s goals and beliefs doesn’t necessarily mean those goals are going to be peaceful. But then, why should we want ‘rational’ to be a synonym of ‘good’? Are we still so attached to Plato’s Noble Lie? When I said I thought humanity would benefit from people becoming more rational, I was making an empirical claim. It may be wrong. I certainly don’t think we should seek to build it into the definition of ‘rational’.

‘Epistemic rationality’ refers to “holding beliefs that are in synch with available evidence”, in contrast to instrumental rationality, which is taking appropriate action given one’s goals and beliefs.

Re being “in control of emotional states”, I think we need to be a bit careful of the wording here. It really depends what you mean by “in control of”. I prefer acceptance of emotions rather than trying to control them. What we can control, much more easily and directly, provided we are sufficiently mindful, is our behaviour, i.e. how we express those emotional states. The trap is when we see emotions like fear and anger as negative and then try to get rid of them, or deny that they are there.

With this caveat, I agree that mindfulness tends to make us more rational (and, I suggest, better people - but once again, we don’t need to see these as synonyms). The more aware and accepting of our emotions we are, the more likely we are to prevent them from making us do stupid things - or jump to wrong conclusions.

“Quality of rationality..

It is believed by some philosophers (notably A.C. Grayling) that a good rationale must be independent of emotions, personal feelings or any kind of instincts. Any process of evaluation or analysis, that may be called rational, is expected to be highly objective, logical and “mechanical”. If these minimum requirements are not satisfied i.e. if a person has been, even slightly, influenced by personal emotions, feelings, instincts or culturally specific, moral codes and norms, then the analysis may be termed irrational, due to the injection of subjective bias.

Modern cognitive science and neuroscience, in studying the role of emotion in mental function (including topics ranging from flashes of scientific insight to making future plans), show that no human has ever satisfied this criterion, except perhaps a person with no affective feelings, for example an individual with a massively damaged amygdala or severe psychopathy. Thus, such an idealized form of rationality is best exemplified by computers, and not people. However, scholars may productively appeal to the idealization as a point of reference.”

“Artificial Intelligence..

Within artificial intelligence, a rational agent is one that maximizes its expected utility, given its current knowledge. Utility is the usefulness of the consequences of its actions. The utility function is arbitrarily defined by the designer, but should be a function of performance, which is the directly measurable consequences, such as winning or losing money. In order to make a safe agent that plays defensively, a nonlinear function of performance is often desired, so that the reward for winning is lower than the punishment for losing.

An agent might be rational within its own problem area, but finding the rational decision for arbitrarily complex problems is not practically possible. The rationality of human thought is a key problem in the psychology of reasoning.
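The expected-utility idea in the quoted passage can be sketched in a few lines. This is a minimal illustration of my own (the payoffs and the concave utility function are assumptions, not taken from any particular AI system):

```python
# Sketch of expected-utility maximization with a "defensive" nonlinear utility:
# losing $100 is penalized more heavily than winning $100 is rewarded.
import math

def expected_utility(outcomes, utility):
    """outcomes: list of (probability, payoff) pairs."""
    return sum(p * utility(x) for p, x in outcomes)

def risk_averse(x):
    # Concave in gains, steeper in losses, so the agent plays defensively.
    return math.log1p(x) if x >= 0 else -2 * math.log1p(-x)

gamble = [(0.5, 100), (0.5, -100)]   # fair coin flip for $100
sure_thing = [(1.0, 0)]              # decline the bet

best = max([gamble, sure_thing], key=lambda o: expected_utility(o, risk_averse))
print(best)  # the defensive agent declines the fair bet
```

With a linear utility the two options would tie (both have expected value zero); the nonlinear utility is what makes the agent turn down the fair gamble.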


Not sure I get the relevance, CygnusX1. Is this supposed to be directly relevant to the discussion we were having and/or the points made in the article?

A definition of rationality does not have to be perfect to be useful. Same goes for a measure of rationality.

One area of imperfection in the definition of rationality is that it relies on externally defined criteria for what is “best” for someone.

If a person universally only ever makes long term decisions to delay gratification, then that person will never actually receive any gratification. In practical life, gratification can almost always profitably be “deferred” yet again.

The fallacy in the “defers gratification for greater long-term gain” definition of rationality is that: 1) we don’t live forever; eventually you’re not going to be around to get that gain. 2) in the real world, circumstances may intervene to deprive someone of an intended long term gain.

Different people live in different circumstances with different risks of not getting a long term gain.

Say I live with George. We have a bag of no-calorie chocolates that will keep indefinitely in the freezer. We cannot buy more chocolate for two weeks. George eats his share of the chocolates today. I eat two chocolates and save the rest to space out as treats over the next two weeks.

Whether that was a good decision or a bad one depends entirely on the risk of George eating my chocolates for me.

In hiking there’s a saying that the best place to carry your water is inside you. You’re often better off fully hydrating now than spacing out your water to save it for later—especially if something could happen to your water.

The studies about money and delayed gratification: Say you promised me $10 now OR $50 tomorrow IN THE LAB. If I leave the lab with $10, I might be able to spend it on myself. If I leave the lab with $50, I may have any number of people in my life imposing a social norm that I have to use it on them. That social norm localized to my own life may even be that if I have $50, someone at home will “help” me spend $60.

Or, maybe my experience of life is that I don’t believe you’d really give me $50 tomorrow. Maybe my experience of life is that I expect someone like you to lie to someone like me.

People who have “trouble delaying gratification” tend to be people who are in low-power positions or who have some other reason for having difficulty saying “no” to other people. For people with little power, delayed gratification often means only the bullies or emotional blackmailers in their lives get any gratification.

As with the help spending $60 when you get $50, a person with low power in their immediate socio-familial environment may even face punishing consequences for delaying gratification.

Only the powerful have the luxury of expecting gratification they “save up” to actually be theirs later.

So what, for that person, is the true likelihood of their getting those two birds in the bush?

Game theory has run across this in applying the “Prisoner’s Dilemma” game in non-Western cultures. In some cultures, there is no tendency to punish defections because the actual likelihood of two birds in a bush is so low in their culture that they think a person would have to be crazy to turn down a bird in the hand (cash). They regard the prisoner who got offered the deal to defect as being simply the lucky one.

If I’m at the bottom of the pecking order in my personal environment, if you offer me an ice cream cone right now or fifty dollars tomorrow, it’s in my legitimate best interests to take the ice cream cone—-because my chance of benefiting from that fifty dollars is essentially zero.
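To make that concrete (the numbers here are made up, purely for illustration): my choice reduces to whether the probability of actually keeping the delayed reward, times its value, beats the sure thing:

```python
# Illustrative expected-value comparison: $10 now versus $50 tomorrow, where
# p_keep is the (hypothetical) chance I actually get to benefit from the $50.
def better_to_wait(now, later, p_keep):
    """True when the expected benefit of waiting exceeds the sure thing."""
    return p_keep * later > now

print(better_to_wait(now=10, later=50, p_keep=0.9))   # True: waiting pays
print(better_to_wait(now=10, later=50, p_keep=0.1))   # False: take the $10
```

So the same “impulsive” choice can be the rational one, depending entirely on p_keep, which is set by a person’s circumstances, not their character.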

Take subjects out of their natural environment and put them in a highly structured environment without scarcity of vital resources, design the long-term gain so that the subject will genuinely be able to personally benefit from it, give them experience of promises of gain being absolutely kept, and then test them again.

That is, you make the benefit something that in practical terms is absolutely non-transferable.

There will still be a bias in favor of immediate gratification, but I predict it would not be so large, and that the variability among the subjects would be reduced.

Delaying gratification is a useful strategy for those privileged with strength. It is a losing strategy for those burdened with weakness.

I’ve also wondered to what extent the marshmallow thing might be culturally specific,  and I definitely agree with bluewillow991967 that delaying gratification isn’t always in our long-term interests. Another example of this, tragically lived out by many individuals, is if you are told that by forsaking earthly pleasures you will go to heaven.

That said, while delaying gratification is not always the best approach, the ability to do so does seem to be important, and not only for the strong and privileged. The oppressed person (or group) might, for example, be tempted to seek immediate gratification by engaging in some futile act of rebellion, which only leads to harsh punishment, and marks that person or group for special surveillance. The ones who can delay gratification might play along, like Snowden, until they have figured out a way to _really_ stick it to the man.

I would have thought that the Institute for Ethics and Emerging Technologies would have known that Sun and Earth orbit around each other.

Peter, I wonder if you’re not committing the dysrationalia of:

•The tendency toward overconfidence in knowledge judgments

if you think religious penance has categorically nothing to do with Heaven, or if religious truths would have no bearing on the truths arrived at by reason.  Would you mind clarifying, please, what charge you’re bringing against the religious penitent?  Thanks,

