Teaching Critical Thinking
David Eubanks
2013-02-20
URL

I just came across a 2007 article by Daniel T. Willingham, "Critical Thinking: Why Is It So Hard to Teach?" Critical thinking very commonly appears in lists of learning outcomes for general education, or even at the institutional level. In practice, it's very difficult to even define, let alone teach or assess. The article is a nice survey of the problem.



In the approach I've taken in the past (with the FACS assessment), I simplified 'critical thinking' into two types of reasoning that are easy to identify: deductive and inductive. Interestingly, this distinction shows up in the article too, where the author describes the difference (in his mind) between critical and non-critical thinking:



For example, solving a complex but familiar physics problem by applying a multi-step algorithm isn’t critical thinking because you are really drawing on memory to solve the problem. But devising a new algorithm is critical thinking.


Applying a multi-step algorithm is deductive "follow-the-rules" thinking. He's excluding that from critical thinking per se. To my mind this is splitting hairs: one cannot find a clever chess move unless one knows the rules. We would probably agree that deductive thinking is an absolute prerequisite for critical thinking, and this point is made throughout the article, where it's included in "domain knowledge."



In the quote above, the creation of a new algorithm exemplifies critical thinking--this is precisely inductive thinking, a kind of inference.



Now I don't really believe that even the combination of deductive and inductive reasoning covers all of what people call 'critical thinking,' because it's too amorphous. It's interesting to consider how one might create a curriculum that focuses on 'critical' rather than 'thinking.' It could be a course on all the ways that people are commonly fooled, either by themselves or others. It would be easy enough to come up with a reading list.



Another alternative is to focus on the 'thinking' part first. This seems like a very worthy goal, and in retrospect it's striking that we don't seem to have a model of intelligence that we apply to teaching and learning. We have domain-specific tricks and rules, conventions and received wisdom, but we generally don't try to fit all those into a common framework, which we might call "general intelligence" as easily as "critical thinking." Usually it's the other way around--how do I embed some critical thinking object into my calculus class? This latter method doesn't work very well because the assessment results (despite our desires) don't transfer easily from one subject to the next. This is the main point of the article linked at the top--domain-specific knowledge is very important to whatever "critical thinking" may be.





A Model for Thinking



I don't presume to have discovered the way thinking works, but it's reasonable to try to organize a framework for the purposes of approaching 'critical thinking' as an educational goal. The following one comes from a series of articles I wrote for the Institute for Ethics and Emerging Technologies (first, second, third), which all began with this article. The theme is how to address threats to the survival of intelligent systems, and it's informed by artificial intelligence research.



A schematic of the model is shown below.

[Figure: schematic of the cycle of awareness: perception, prediction, motivation, and action, connected in a loop]

We might think of this as a cycle of awareness, comprising perception, prediction, motivation, and action. If these correspond to the whims of external reality, then we can reasonably be said to function intelligently.



The part we usually think of as intelligence is the top left box, but it has no usefulness on its own. It's a general-purpose predictor that I'll refer to as an informational ontology. It works with language exclusively, just as a computer's CPU does, or as the neurons in our brains do (the "language" of transmitted nerve impulses). Languages have internal organization by some convention (syntax), and associations with the real world (semantics). The latter cannot exist solely as a cognitive element--it has to be hooked up to an input/output system. These are represented by the lower left and right blue boxes. The left one converts reality into language (usually very approximately), and the right one attempts to affect external reality by taking some action described in language.



All of these parts are goal-oriented, as driven by some preset motivation. All of this perfectly models the typical view of institutional effectiveness (IE), by the way, except that the role of the ontology is minimized--which is why IE looks easy until you try to actually do it.
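
To make the boxes concrete, here is a minimal sketch of one pass around the cycle, written as toy Python. Everything in it (the rain scenario, the function names, the numbers) is my own invented illustration, not something from the article or from AI research:

```python
# One pass around the cycle of awareness: perceive -> predict -> motivate -> act.
# All names and numbers are hypothetical, chosen only to illustrate the roles.

def perceive(reality):
    """Reality -> language: a lossy, selective encoding of what's out there."""
    return {"raining": reality["precip_mm"] > 0}

def predict(description, action):
    """The informational ontology: language in, expected outcome out."""
    if action == "take umbrella":
        return "dry"
    return "wet" if description["raining"] else "dry"

def utility(outcome):
    """Motivation: how much we want each predicted outcome."""
    return {"dry": 1.0, "wet": -1.0}[outcome]

def choose_action(reality, actions):
    """Pick the action whose predicted outcome best satisfies the motivation."""
    description = perceive(reality)
    return max(actions, key=lambda a: utility(predict(description, a)))

print(choose_action({"precip_mm": 3.2}, ["take umbrella", "go without"]))
# -> take umbrella
```

Notice that the ontology (predict) is where the interesting work hides; the other functions just connect it to the world. That's the sense in which IE minimizes the ontology's role.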



Each of these components is a useful point of analysis for teaching and learning. Going around the figure from bottom left:



Measurement/Description

When we encode physical reality into language, we do so selectively, depending on bandwidth, motivation, and our ability to use the result in our ontology. At the beach, we could spend the entire day counting grains of sand, so as to get a better idea of how many there are, but we generally don't, because we don't care to that level of precision. We do care that there's sand (the point of going to the beach), but there are limits to how accurately we want to know.



Some language is precise (as in the sciences), and some is not (everyday speech, usually). What makes language usefully precise is not the expression itself (e.g., "I drank 13.439594859 oz of coffee this morning"), but how reliably that information can be used to make predictions that we care about. This involves the whole cycle of awareness.



Example 1: According to Wikipedia, the mass of a proton is 1.672621777×10^-27 kg. This is a very precise bit of language that means something to physicists who work with protons. That is, they have an ontology within which to use this information in ways they care about. Most of us lack this understanding, and so come away with merely "protons weigh a very tiny amount."



Example 2: Your friend says to you, "Whatever you do, don't ride in the car with Stanislav driving--he's a maniac!" Assuming you know the person in question, this might be information that you perceive as important enough to act on. The summary and implication in your friend's declaration constitute the translation from physical reality into language in a way that is instantly usable by the predictive apparatus of the ontology. Assuming you care about life and limb, you may feel disinclined to carpool with Stanislav. On the other hand, if the speaker is someone who you think exaggerates (this is part of your ontology), then you may discount the observation as not useful information.



The point of these examples is that description is closely tied to the other elements of awareness. This is why our ways of forming information through perception are so approximate: they're good enough for us to get what we want, but no better. (This is called Interface Theory.)



Here are some questions for our nascent critical thinkers:




  1. Where did the information come from? 

  2. Can it be reliably reproduced?

  3. What self-motivations are involved?

  4. What motivations does the information's source have?

  5. What is the ontology that the information is intended to be used in?

  6. How does using the information affect physical reality (as perceived by subsequent observations)?



Notice that these questions are also very applicable to any IE loop.



Question five is a very rich one, because it asks us to compare what the provider of the information believes versus what we believe. Every one of us has a unique ontology, comprising our uses of language, our beliefs, and our domain-specific knowledge. If I say that "your horoscope predicts bad luck for you tomorrow," then you are being invited to adopt my ontology as your own. You essentially have to if you want to use the information provided. This represents a dilemma that we face constantly as social animals--which bits of ontology do we internalize as our own, and which do we reject? Which brings us to the 'critical' part of 'critical thinking.'



It's interesting that the discussion around critical thinking as an academic object focuses on the cognitive at the expense of the non-cognitive. But in fact, it's purely a question of motivation. I will believe in astrology if I want to, or I will not believe in it because I don't want to. The question is much more complicated than that, of course, because every part of the ontology is linked to every other part. I can't just take my whole system of beliefs, plop astrology down in the middle, and then hook up all the pipes so it works again. For me personally, it would require significant rewiring of what I believe about cause and effect, so I'd have to subtract part of the ontology (stop believing some things). But this, in turn, is only because I like my ontology to be logical. There's no a priori reason why we can't believe two incompatible ideas, other than that we may prefer not to. In fact, there are inevitably countless contradictions in what we believe, owing to the fact that we have a jumble of motivations hacked together and presented to us by our evolutionary history.



Intelligence



The usefulness of intelligence lies in being able to predict the future (with or without our active involvement) in order to satisfy motivations. The way we maintain these informational ontologies is a dark mystery. We seem to be able to absorb facts and implications reasonably easily (Moravec's Paradox notwithstanding); we can't deduce nearly as quickly as a computer can, but we manage well enough. It's the inductive/creative process that's the real mystery, and there is a lot of theoretical work on that, trying to reproduce in machines what humans can do. Within this block are several rich topics to teach and assess:




  1. Domain-specific knowledge. This is what a lot of course content is about: facts and deductive rules and conventions of various disciplines, ways of thinking about particular subjects, so that we can predict specific kinds of events. This connects to epistemology when one adds doubt as an element of knowledge, which then leads to...

  2. Inference. How do we get from the specific to the general? At what point do we believe something? This links to philosophy, the scientific method, math and logic, computer science, neuroscience, and so on. Another connection is the role of creativity or random exploration in the process of discovering patterns. We might sum up the situation as "assumptions: you can't live with them, and you can't live without them." Because inference is a fancy word for guessing, it's particularly susceptible to influence from motivation. Superstition, for example, is an application of inference (if I break a mirror, then I will have bad luck), and one's bias toward or away from this sort of belief comes from a motivational pay-off (e.g., a good feeling that comes from understanding and hence controlling the world). A toy model of belief formation is sketched after this list.

  3. Meta-cognition. This is the business of improving our ontologies by weeding out things we don't like, or by making things work better by pruning or introducing better methods of (for example) inference. This is what Daniel Kahneman's book Thinking, Fast and Slow, is about. That book alone could be a semester-length course. Any educational treatment of critical thinking is about meta-cognition.

  4. Nominal versus real. Because we live in complex information-laden societies, we deal not just with physical reality but also with system reality. For more on these, refer to my IEET articles. One example will suffice: a system pronouncement of "guilt" in a trial may or may not correspond to events in physical reality. At the point the verdict is announced, it becomes a system reality (what I call a nominal reality). The ontology of the system becomes a big part of our own personal version, and one could spend a long time sorting out what's real and what's nominal. For more on that topic, see this paper I wrote for a lit conference.
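
As promised in item 2, here is a toy model of inference as belief updating. It's just Bayes' rule with made-up numbers, not anything from the article, but it gives one concrete answer to "at what point do we believe something?":

```python
# Illustrative only: Bayesian updating as a toy model of inference.
# All priors and likelihoods below are invented for the example.

def update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(hypothesis | evidence) via Bayes' rule."""
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1 - prior))

# Hypothesis: "Stanislav is a dangerous driver."
belief = 0.10                      # prior, before any warnings
belief = update(belief, 0.8, 0.2)  # a friend warns us (but friends exaggerate)
print(round(belief, 2))            # 0.31 -- more plausible, not yet believed
belief = update(belief, 0.9, 0.1)  # we see him run a red light ourselves
print(round(belief, 2))            # 0.8 -- now we skip the carpool
```

The motivational pay-off mentioned above enters through the threshold: how high the belief must climb before we act on it depends on what we stand to lose.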



Motivation

Humans and the systems we build are very selective about what we want to know, and about what we do with that knowledge. Understanding our own motivations and those of others (theory of mind), and the ways these influence the cycle of perceive-predict-act, is essential to making accurate predictions. That is, intelligence has to take motivation into consideration. This invites a conversation about game theory, for example. The interpretation of critical thinking as the kind of thing investigative reporters do must take the motivations of sources into consideration as a matter of course.



In economics, motivation looks like a utility function to be optimized. Part of what makes humans so interesting is that we are laden with a hodge-podge of motivations courtesy of our genes and culture, and they are often contradictory (we can be afraid of a car crash, yet fall asleep at the wheel). The search for an 'ultimate' motivation has occupied our race for a long time, with no end in sight.
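
One way to picture that hodge-podge is as competing terms in a single utility function. The scenario and weights below are arbitrary inventions of mine; the point is only that shifting the weights flips the decision, which is the contradiction at the wheel:

```python
# Hypothetical sketch: contradictory motivations as weighted utility terms
# (fear of a crash vs. the pull to keep driving while sleepy).

def utility(action, fear_weight=0.7, urgency_weight=0.3):
    safety = {"pull over": 1.0, "keep driving": -1.0}
    arrive_on_time = {"pull over": -0.5, "keep driving": 0.5}
    return fear_weight * safety[action] + urgency_weight * arrive_on_time[action]

actions = ["pull over", "keep driving"]
print(max(actions, key=utility))  # -> pull over (with these weights)
```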



Here's a critical thinking problem: If motivations are like utility functions, they must be acted on in the context of some particular ontology, which goes out of date as we get smarter. How then are we to update motivations? A specific example would be physical pain--it's an old hardwired system that helped our ancestors survive, but it's a crude instrument, and leads to a lot of senseless suffering. The invention of pain-killers gives us a crude hack to cut the signal, but they have their own drawbacks. Wouldn't it be better to re-engineer the whole system? But we have to be motivated to do that. Now apply that principle generally. Do you see the problem?


Taking Action

Taking action isn't usually thought of in connection with intelligence or critical thinking, but it's integral to the whole project. It's generally not the approach we take in formal education, either, where we implicitly assume that lectures and tests suffice to increase student abilities. Come to think of it, we don't even have a word for "active use of intelligence." Maybe 'street smarts' comes close, because of its association with the 'real world' rather than the academic one, but that's an unproductive distinction. I've heard military people call it the X-factor, which I take to mean a seamless connection between perception, prediction, and action (all tied to some underlying motivation, of course).



But of course the point of all this intelligence apparatus is to allow us to act for some purpose. There are great illustrations of this in Michael Lewis's book The Big Short, which show the struggle between hope and fear (motivations) in the analysis of the looming mortgage disaster, and the actions that resulted.



I've argued before (in "The End of Preparation," which is becoming a book) that technological and societal changes allow us to introduce meaningful action as pedagogy. It's the actual proof that someone has learned to think critically--if they act on it.



Being Critical

If some framework like the one described above can be used to examine intelligence in a curriculum, where exactly does the modifier come in? What's critical about critical thinking? Perhaps the simplest interpretation is that critique allows us to distinguish between two important cases (which may vary, but correspond to motivations). For example, in a jury trial, the question is whether or not to convict, based on the perceptions and analysis of the proceedings. It's these sorts of dichotomies--the aggravating fact that we can't take both paths in the wood--that make intelligence necessary in the first place.



This general task is big business these days, in the form of machine learning, where distinguishing between a movie you will like and one you won't is called a classification problem. Netflix paid a million dollars to the winner of a contest to find a better classifier for assigning movie ratings.
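
To see how small a classification problem can be, here is a toy movie-rating classifier in the Netflix spirit: predict whether I'll like a movie by copying the verdict of the rater who agrees with me most. The data and the nearest-neighbor rule are both invented for illustration; real recommenders are vastly more sophisticated:

```python
# Toy nearest-neighbor classifier: will I like a movie I haven't seen?
# All ratings are fabricated for the example.

ratings = {  # other people's verdicts: movie -> liked it?
    "ann":  {"Alien": True,  "Amelie": False, "Heat": True},
    "cara": {"Alien": False, "Amelie": True,  "Heat": False},
}
my_ratings = {"Alien": True, "Amelie": False}

def agreement(theirs):
    """Count the movies we've both seen and judged the same way."""
    shared = set(theirs) & set(my_ratings)
    return sum(theirs[m] == my_ratings[m] for m in shared)

def will_i_like(movie):
    """Adopt the verdict of the most similar rater who has seen the movie."""
    raters = [u for u, r in ratings.items() if movie in r]
    nearest = max(raters, key=lambda u: agreement(ratings[u]))
    return ratings[nearest][movie]

print(will_i_like("Heat"))  # -> True: ann matches my tastes, and ann liked it
```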



It also makes a nice framework for teaching, and it's a common technique to set up an A vs. B problem and ask students to defend a position (there's a whole library of resources set up to support this kind of thing). In the abstract, these exercises undoubtedly have some value in honing research and debate skills, but it seems to me that they would be more valuable when connected to real actions a student might take. Is it worth my while to go to Washington to protest against X? Or to go door-to-door to raise money for Y? Or to invest my efforts in raising awareness about Z with my project? Maybe we need a new name for this: active critical thinking, perhaps.



So as educators, we are then left with the meta-question: is this worth doing?