Eye on Psi Chi: Winter 2009
Tips for Spotting Psychological Pseudoscience:
A Student-Friendly Guide

Scott O. Lilienfeld, PhD, Emory University (GA)

One need not have taken any courses in psychology to see that pseudoscience has fundamentally altered the landscape of modern life. Whether they be astrologers, crystal ball readers, spirit mediums, psychics, unidentified flying object (UFO) enthusiasts, or peddlers of the latest herbal remedy for boosting our memories, proponents of pseudoscientific claims are difficult to miss (Lilienfeld, Lohr, & Morier, 2001; Shermer, 2002). We can find them in shopping malls, on late-night television shows, on radio advertisements, in supermarket tabloids, and on websites. In the domain of psychology, proponents of subliminal persuasion, energy therapies, extrasensory perception (ESP), psychic healing, and handwriting analysis, to name a few disciplines, routinely make extravagant claims that outstrip their meager scientific support (Hines, 2003).

The Prevalence of Pseudoscience in Modern Life
Surveys show that substantial percentages of the American public endorse pseudoscientific or at least highly questionable assertions about the world. Forty-one percent believe in ESP, 32% in ghosts, 25% in astrology, and 21% in communication with the dead (Moore, 2005). Of course, the fact that many Americans remain open to the possibility of these phenomena isn’t itself all that troubling. For example, it’s not outside the realm of possibility that future research could provide support for ESP, although most of the current scientific evidence points strongly against it (Hines, 2003; Milton & Wiseman, 1999). Moreover, a certain degree of open-mindedness to unconventional ideas is essential to scientific thinking (Sagan, 1995).

Instead, what’s troubling is that many more Americans profess belief in highly questionable claims than in scientifically supported claims. There are about 20 times more astrologers than astronomers in the United States (Gilovich, 1991), and surveys show that twice as many Americans say they “definitely” believe in creationism (39%) as believe in what’s probably the best-supported theory in all of natural science, namely, Darwin’s theory of natural selection (18%; USA Today/Gallup Poll Results, 2007). These percentages point to a striking disconnect between popular belief and scientific evidence.

What is Pseudoscience?
What do we mean by pseudoscience, and how can we tell which claims are pseudoscientific, scientific, or somewhere in between? Although the boundaries between pseudoscience and science aren’t clear-cut (Leahy & Leahy, 1983; Lilienfeld, Lynn, Namy, & Woolf, 2009), we can define pseudosciences as disciplines that pretend to be scientific but aren’t. They display the superficial trappings of science but lack its substance. As a consequence, pseudosciences can easily fool all of us into believing they’re scientific even though they’re not.

Pseudosciences differ from sciences not in their content, but in their approach to evidence, especially negative evidence. For example, what makes the discipline of UFOlogy (the study of UFOs) largely pseudoscientific is not that its claims are false—it’s remotely possible that certain reports of flying saucers from alien worlds will turn out to be true—but that most advocates of UFOs don’t avail themselves of the essential protections of the scientific method when evaluating their claims. In particular, they rarely make use of research safeguards against confirmation bias (Nickerson, 1998)—the tendency to seek out evidence that supports our hypotheses and ignore, minimize, or misinterpret evidence that doesn’t. Among these crucial safeguards are demands for “blind” observation (the requirement that observers who are examining the data aren’t aware of crucial factors that could bias their ratings), independent replication (the requirement that one’s findings be duplicated by other observers), and peer review (the requirement that one’s findings be subjected to evaluation by largely impartial colleagues). Each of these requirements minimizes, although certainly doesn’t eliminate, the possibility that confirmation bias will fool us into accepting our hypotheses and into believing what we want to believe (Kida, 2006).

Proponents of sciences aren’t always right, nor are proponents of pseudosciences always wrong. But over the long haul, scientists are much more likely to be correct than pseudoscientists, largely because they make concerted efforts to compensate for confirmation bias. As we will soon discover, scientific research endeavors tend to be self-correcting, whereas pseudoscientific research endeavors do not.

Warning Signs of Pseudoscience
A number of authors have developed lists of criteria, or “warning signs,” for distinguishing pseudoscience from science (Bunge, 1983; Langmuir, 1989; Lilienfeld, 1998; Park, 2000; Ruscio, 2006). The presence of multiple warning signs isn’t proof positive that a claim is pseudoscientific, but it should arouse suspicion about a proponent’s claims. Here are five warning signs that I’ve found especially helpful for teaching students to distinguish pseudoscience from science in psychology:

(1) Overuse of ad hoc hypotheses. An “ad hoc hypothesis” is a fancy term for an escape hatch or loophole that proponents of a hypothesis invoke to explain away negative findings. Most scientists use ad hoc hypotheses (e.g., “Maybe our questionnaire wasn’t reliable or valid enough,” “Maybe our participants misunderstood our instructions,” “Maybe our sample size was too small to detect the effect we were seeking”) from time to time to account for why a study didn’t pan out. Indeed, in some cases ad hoc hypotheses can provide legitimate explanations for negative results and point to ways of designing our study better the next time.

Yet we can take ad hoc hypotheses too far, as when they become mere excuses to dismiss every result that’s not to our liking. For example, some proponents of parapsychology—the study of ESP and related phenomena—have invoked the “experimenter effect” to explain why people who seem to display ESP in the outside world usually do no better than chance when brought into the more tightly controlled confines of the laboratory (Gilovich, 1991). The experimenter effect refers to the supposed “inhibition” of ESP by skeptical experimenters. The problem with invoking the experimenter effect to explain away negative ESP findings is that it makes the existence of ESP impossible to disprove: If laboratory findings for ESP are positive, this provides evidence for ESP; but if laboratory findings for ESP are negative, this doesn’t provide evidence against ESP, because these findings could be due to negative “vibes” emitted by skeptical researchers.

(2) Absence of self-correction. As we’ve already seen, science doesn’t always generate correct answers. Moreover, scientists have sometimes been guilty of dismissing hypotheses that ran counter to accepted views. When a young German researcher named Alfred Wegener proposed in 1912 that all of the earth’s major continents were once joined in a gigantic land mass (he called it “Pangaea”) and were slowly moving apart, many scientists dismissed him as a crackpot (Schwarzbach, 1986). Yet today we know that Wegener was right, because later research provided not only evidence for his claims, but a well-established physical mechanism for them, namely plate tectonics.

The Wegener saga underscores a crucial point: over time, sciences tend to be self-correcting, because they eventually revise their theories in response to contrary evidence. In contrast, most pseudosciences remain stagnant, because they tend to ignore or dismiss contrary evidence. For example, most forms of astrology have remained essentially identical for the past 4,000 years despite spectacular advances in our knowledge of the solar system and stars. Moreover, because of a well-documented astronomical phenomenon called “precession,” which refers to a gradual shift in the earth’s axis over time (Hines, 2003), astrological charts that are several millennia old are now hopelessly out of date, although many astrologers continue to use them.

(3) Exaggerated claims. Science is a prescription for humility (McFall, 1996). As the late Carl Sagan (1995) noted, good scientists have a little voice in their heads reminding them, “I might be wrong.” Of course, not all scientists are modest in their assertions, and some are guilty of hyperbole. Nevertheless, good scientists are careful not to overstate the strength of evidence for their claims.

In contrast, proponents of pseudoscientific or questionable techniques often advance extravagant claims that aren’t justified by research evidence. For example, research on the polygraph or so-called “lie detector” test reveals that this method probably does only somewhat better than chance—perhaps 70%—in distinguishing truths from falsehoods (Lykken, 1998; Ruscio, 2005). To a large extent, that’s because the polygraph test is misnamed: it’s an arousal detector, not a lie detector. This test picks up not only the anxiety or guilt associated with lying, but also surprise, indignation, and the fear of being convicted of a crime one didn’t commit. Yet some advocates of the polygraph test have cited accuracy rates as high as 95% (Raskin & Honts, 2002) or even 99%. These optimistic percentages go well beyond those found in most carefully controlled studies.
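To appreciate how unimpressive 70% accuracy can be in practice, it helps to work through the base rates. Below is a minimal back-of-envelope sketch in Python; the 70% figure echoes the research cited above, but the 5% base rate of deception (and the simplification of treating 70% as both the hit rate and the correct-rejection rate) is purely an illustrative assumption.

accuracy = 0.70     # assumed probability the test classifies an examinee correctly
base_rate = 0.05    # assumed fraction of examinees who are actually lying (illustrative)

# Proportions of all examinees who receive a "deception" verdict, split by the truth
true_positives = base_rate * accuracy               # liars correctly flagged
false_positives = (1 - base_rate) * (1 - accuracy)  # truth-tellers wrongly flagged

ppv = true_positives / (true_positives + false_positives)
print(f"Chance that a flagged examinee is really lying: {ppv:.0%}")
# With these assumptions, only about 11% of "failed" tests come from actual liars.

When most examinees are honest, even a test that is right 70% of the time produces far more false alarms than genuine detections, which is exactly why modest accuracy figures deserve so little of the confidence that polygraph advocates invite.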

(4) Overreliance on anecdotes. Anecdotes are typically “I know a person who” stories (Nisbett & Ross, 1980; Stanovich, 2006). “I know a person who went to a professional fortune teller who told her that her boyfriend would break up with her, and sure enough, he broke up with her a month later”; “I know a person who told me his migraines got better after he went to an acupuncturist”; “I know a person with severe depression who started feeling better after going to a therapist who had him relive his birth experience under hypnosis.”

All of these anecdotes are intriguing and perhaps worth investigating, but they’re difficult to interpret. As a wise person once said, “the plural of anecdote isn’t fact.” For one thing, “I know a person who” stories are usually open to many alternative explanations (Loftus & Guyer, 2002). The fortune teller who predicted the woman’s break-up may have merely been “lucky” (or unlucky, from the woman’s standpoint), or may have picked up on subtle cues about the woman’s relationship (such as apparent insecurity about her boyfriend’s commitment to her) while talking to her. For another, anecdotes may be atypical of most people’s experiences. The migraine sufferer and the depressed person who felt better following these treatments may be in a tiny minority; the overwhelming majority of people who received these treatments may not have improved or may even have gotten worse.

Most pseudosciences place too much stock in anecdotes. For example, advocates of ESP may point to stories in which a husband “had an eerie feeling” that his wife was in a serious car accident in another city, and this hunch came true (Hines, 2003). Although such stories are dramatic, they don’t provide convincing evidence for ESP. Perhaps the husband had good reason to fear that his wife might get into a car accident (e.g., he might have known that his wife was extremely tired that day), or perhaps he’d experienced that same eerie feeling hundreds of times without his wife getting into a car accident.

(5) Psychobabble. Unless we’re careful, we can all be fooled by highfalutin’ language that sounds scientific, but isn’t. Such “psychobabble” (Rosen, 1977) is a common persuasion technique among many advocates of pseudoscience (van Rillaer, 1991). For example, proponents of voice stress analyzers claim that they can distinguish truths from lies on the basis of high-frequency modulations in subaudible “laryngeal micro-tremors” in people’s voices. That claim sounds awfully impressive until we learn that voice stress analyzers barely do any better than chance at detecting lies (Hollien, Geison, & Hicks, 1987). The language typical of psychobabble is by no means unique to psychology and surely characterizes other fields, including philosophy and sociology (Andreski, 1973; Lilienfeld & Landfield, 2008). Nevertheless, this language is especially widespread in psychology, probably because many unsupported psychological claims can be cloaked in the terminology of allied scientific fields (e.g., neuroscience, medicine) that lend them the cachet of respectability.

How to Avoid Falling Prey to Pseudoscience
We’re all prone to the seductive charms of pseudoscience, largely because many pseudosciences promise us quick fixes and easy answers to life’s problems (Beyerstein, 1990). After all, who among us wouldn’t want to be able to predict the winning number in next month’s $60 million Powerball lottery? For example, some advocates of Thought Field Therapy, a technique that supposedly treats anxiety disorders by tapping on people’s invisible energy fields, claim to be able to cure phobias in as little as 5 minutes (Gaudiano & Herbert, 2000). Much as those of us with extreme fears of cats, blood, or flying would love to believe otherwise, there’s no research support for this astonishing assertion. In addition, our mind’s tendency to make “sense out of nonsense” and seek “order in disorder,” although generally helpful in daily life, can render us vulnerable to certain pseudoscientific beliefs (Gilovich, 1991; Pinker, 1997). For example, our tendency to notice cases of crimes and suicides during full moons can lead us to perceive an association between full moons and these events, even though research shows that this association is nonexistent (Rotton & Kelly, 1985).
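The full-moon illusion lends itself to a simple demonstration. Here is a minimal simulation sketch in Python, with invented numbers (not data from Rotton & Kelly): crimes occur at the same rate on every night, yet a mind that tallies only the memorable pairings of full moon and crime will perceive a pattern where none exists.

import random

random.seed(42)
nights = 3_650                                           # ten illustrative years
full_moon = [n % 29 == 0 for n in range(nights)]         # roughly one full moon per lunar cycle
crime = [random.random() < 0.30 for _ in range(nights)]  # assumed 30% crime rate, every night

# A fair tally uses all four cells of the contingency table
both = sum(f and c for f, c in zip(full_moon, crime))
moon_only = sum(f and not c for f, c in zip(full_moon, crime))
crime_only = sum(c and not f for f, c in zip(full_moon, crime))
neither = sum(not f and not c for f, c in zip(full_moon, crime))
print(f"Crime rate on full-moon nights: {both / (both + moon_only):.2f}")
print(f"Crime rate on other nights:     {crime_only / (crime_only + neither):.2f}")
# Both rates hover around 0.30. Counting only the vivid "full moon AND crime"
# nights (the `both` cell) is what makes the illusory association feel real.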

How, then, can we resist the powerful allure of pseudoscientific claims? Although there’s no standard formula for avoiding the quicksand of pseudoscience, here are three pointers I’ve found especially helpful in my teaching and in my everyday life.

First, we should be sure not to confuse correlation with causation. That is, merely because one variable (that is, anything that varies) goes together with a second variable statistically doesn’t necessarily mean that the first variable caused the second, nor that the second variable caused the first. The correlation between these two variables could just as easily be due to one or more third variables. Most psychology students have heard this advice in their classes. But they often forget to apply it to real-world examples, because their preconceptions can lead them to find a causal explanation plausible even when it’s wrong. For example, one study revealed that men who remain sexually active into their 70s and 80s tend to live longer than other men (Smith, Frankel, & Yarnell, 1997). Sure enough, some popular writers quickly concluded that having sex makes people live longer, with one even proclaiming that sex can be a “lifesaver.” Can you figure out why this conclusion isn’t warranted? (A hint: ask yourself whether the researchers’ finding is correlational.)
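If the third-variable problem still feels abstract, here is a minimal simulation sketch in Python. It assumes, purely for illustration, that an unmeasured variable (overall health) drives both sexual activity and lifespan; neither causes the other, yet the two come out strongly correlated, just as healthier men in the study above may simply both have more sex and live longer.

import random

random.seed(1)
data = []
for _ in range(10_000):
    health = random.gauss(0, 1)                      # the hidden third variable
    activity = health + random.gauss(0, 1)           # influenced by health only
    lifespan = 75 + 5 * health + random.gauss(0, 5)  # influenced by health only
    data.append((activity, lifespan))

# Pearson correlation, computed by hand so the sketch needs no extra libraries
n = len(data)
mean_a = sum(a for a, _ in data) / n
mean_l = sum(l for _, l in data) / n
cov = sum((a - mean_a) * (l - mean_l) for a, l in data) / n
sd_a = (sum((a - mean_a) ** 2 for a, _ in data) / n) ** 0.5
sd_l = (sum((l - mean_l) ** 2 for _, l in data) / n) ** 0.5
print(f"Correlation between activity and lifespan: {cov / (sd_a * sd_l):.2f}")
# Prints roughly 0.50, even though activity has no causal effect on lifespan.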
Second, we should “give chance a chance.” That is, we should remain open to the possibility that certain striking findings are merely due to chance (non-repeating) fluctuations (Abelson, 1995). For example, when a friend with whom we haven’t spoken in many years calls us a few seconds after that friend came to mind, we may be inclined to attribute that astonishing coincidence to ESP. Although that explanation is possible, to evaluate it adequately we’d need to consider all of the times we’ve thought of that friend when she didn’t call us. We’d also need to consider all of the other old friends we’ve thought of who never called us (Falk, 1981). In a clever illustration of the role of chance in life and death, Nobel Prize-winning physicist Luis Alvarez (1965) described an eerie event that occurred to him while traveling abroad. When reading a newspaper, he came upon a phrase that reminded him of an old friend he hadn’t thought about for decades, and only a few pages later he found that friend’s obituary! After getting over his initial shock, Alvarez conducted some calculations and discovered that this kind of coincidence probably occurs to 8 to 9 people each day across the world.
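Alvarez’s back-of-envelope logic is easy to reproduce. The sketch below redoes a simplified version of it in Python; every input is an invented assumption chosen for illustration (his actual figures differed), but the structure of the argument is the same.

# Back-of-envelope estimate, in the spirit of Alvarez (1965), of how often a
# "thought of a long-lost friend, then learned of their death" coincidence
# should occur by pure chance. Every input below is an illustrative assumption.

recalls_per_year = 3          # assumed: times per year one is vividly reminded
                              # of a long-forgotten acquaintance
window_minutes = 5            # how soon afterward the news would feel "uncanny"
death_rate_per_minute = 1 / (70 * 365 * 24 * 60)   # assumed ~70-year lifespan

# Chance that one specific remembered person dies within the window
p_per_recall = death_rate_per_minute * window_minutes

# Expected coincidences per person per year, then per day worldwide
per_person_per_year = recalls_per_year * p_per_recall
world_population = 7_000_000_000   # assumed; smaller in Alvarez's day
per_day_worldwide = per_person_per_year * world_population / 365
print(f"Expected coincidences per day, worldwide: {per_day_worldwide:.1f}")
# Prints roughly 8 per day with these inputs -- no ESP required.

The exact count is sensitive to the assumed inputs, but the moral survives any reasonable choice: given billions of people and thousands of days, even wildly improbable coincidences are statistically inevitable somewhere, every day.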

Third, we should remember that extraordinary claims require extraordinary evidence. That guideline, sometimes called “Hume’s maxim” after the Scottish philosopher who described it in somewhat different terms (Shermer, 2002), reminds us that if a claim runs counter to virtually everything we know, we should demand especially convincing evidence for it (Pigliucci, 2005). For example, if we see an advertisement for a speed-reading course that claims to boost college students’ reading speeds from 300 words per minute to 25,000 words per minute—which some courses claim to be able to do (Carroll, 2003)—we should be exceedingly skeptical. After all, if we accept that claim, people who’ve taken such courses can polish off Leo Tolstoy’s 1,400-page novel, War and Peace, in a mere 22 minutes. Moreover, the speeds advertised by many speed-reading courses exceed the maximum speed obtainable by the human eyeball!
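The War and Peace arithmetic is worth checking explicitly. Here is a quick sanity check in Python, assuming roughly 400 words per printed page (an assumption; word counts vary by edition):

pages = 1_400
words_per_page = 400       # illustrative assumption; editions vary
claimed_wpm = 25_000       # the advertised reading speed
typical_wpm = 300          # a typical college reading speed

total_words = pages * words_per_page
print(f"At the advertised speed: {total_words / claimed_wpm:.0f} minutes")  # ~22
print(f"At a typical speed: {total_words / typical_wpm / 60:.0f} hours")    # ~31

A novel that normally occupies a reader for about 31 hours shrinks to 22 minutes under the advertised rate, which is precisely the kind of result that should trigger Hume’s maxim.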

Concluding Thoughts
Many educators, both inside and outside of psychology, have rightfully bemoaned the paucity of critical thinking skills among much of the general public (Halpern, 1998; Tavris, 2000). Fortunately, there may be reason for hope. Recognizing that we’re all drawn to pseudoscience and vulnerable to its superficial appeal is a crucial first step. Moreover, applying a few basic tips for evaluating pseudoscientific claims, including those I’ve outlined here, can help us spot these claims not only inside but outside the classroom. In turn, these tips can help us make better decisions about a host of important matters in real life, ranging from which used car to buy to which self-help book to read, which herbal remedy to take, and which psychotherapy to recommend to a loved one. A student who’s learned to distinguish pseudoscience from science is also an informed consumer of claims in everyday life.

References
Abelson, R. P. (1995). Statistics as principled argument. Hillsdale, NJ: Lawrence Erlbaum.

Alvarez, L. W. (1965). A pseudo experience in parapsychology. Science, 148, 1541.

Andreski, S. (1973). Social sciences as sorcery. London: Andre Deutsch.

Beyerstein, B. L. (1990). Brainscams: Neuromythologies of the New Age. International Journal of Mental Health, 19, 27-36.

Bunge, M. (1983). Demarcating science from pseudoscience. Fundamenta Scientiae, 3, 369-388.

Carroll, R. T. (2003). The skeptic’s dictionary: A collection of strange beliefs, amusing deceptions, and dangerous delusions. New York: Wiley.

Falk, R. (1981, Winter). On coincidences. Skeptical Inquirer, 6(2), 18-31.

Gaudiano, B. A., & Herbert, J. D. (2000, March/April). Can we really tap away our problems? A critical analysis of Thought Field Therapy. Skeptical Inquirer, 24(4), 29-33, 36.

Gilovich, T. (1991). How we know what isn’t so: The fallibility of human reason in everyday life. New York: Free Press.

Halpern, D. F. (1998). Teaching critical thinking for transfer across domains. American Psychologist, 53, 449-455.

Hines, T. M. (2003). Pseudoscience and the paranormal: A critical examination of the evidence (2nd ed.). Amherst, NY: Prometheus.

Hollien, H., Geison, L., & Hicks, J. W. (1987). Voice stress analysis and lie detection. Journal of Forensic Sciences, 32, 405-418.

Kida, T. (2006). Don’t believe everything you think: The 6 basic mistakes we make in thinking. Amherst, NY: Prometheus.

Langmuir, I. (1989, October). Pathological science. Physics Today, 42(11), 44.

Leahy, T. H., & Leahy, G. E. (1983). Psychology’s occult doubles: Psychology and the problem of pseudoscience. Chicago: Nelson/Hall.

Lilienfeld, S. O. (1998, Fall). Pseudoscience in contemporary clinical psychology: What it is and what we can do about it. The Clinical Psychologist, 51(4), 3–9.

Lilienfeld, S. O., & Landfield, K. (2008). Science and pseudoscience in law enforcement. Criminal Justice and Behavior, 35, 1215-1230.

Lilienfeld, S. O., Lohr, J. M., & Morier, D. (2001). The teaching of courses in the science and pseudoscience of psychology: Useful resources. Teaching of Psychology, 28, 182-191.

Lilienfeld, S. O., Lynn, S. J., Namy, L., & Woolf, N. (2009). Psychology: From inquiry to understanding. Boston, MA: Allyn & Bacon.

Loftus, E. F., & Guyer, M. (2002, May/June). Who abused Jane Doe? The hazards of the single case study: Part I. Skeptical Inquirer, 26(3), 24-32.

Lykken, D. T. (1998). A tremor in the blood: Uses and abuses of the lie detector (2nd ed.). New York: Plenum.

McFall, R. M. (1996). Making psychology incorruptible. Applied and Preventive Psychology, 5, 9-15.

Milton, J., & Wiseman, R. (1999). Does psi exist? Lack of replication of an anomalous process of information transfer. Psychological Bulletin, 125, 387–391.

Moore, D. W. (2005, June 16). Three of four Americans believe in paranormal. Gallup News Service. Retrieved June 16, 2008, from http://home.sandiego.edu/~baber/logic/gallup.html

Nickerson, R. S. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology, 2, 175-220.

Nisbett, R. E., & Ross, L. D. (1980). Human inference: Strategies and shortcomings of social judgment. Englewood Cliffs, NJ: Prentice-Hall.

Park, R. (2000). Voodoo science: The road from foolishness to fraud. New York: Oxford University Press.

Pigliucci, M. (2005, March/April). Do extraordinary claims require extraordinary evidence? Skeptical Inquirer, 29(2), 14, 43.

Pinker, S. (1997). How the mind works. New York: Norton.

Raskin, D. C., & Honts, C. R. (2002). The comparison question test. In M. Kleiner (Ed.), Handbook of polygraph testing (pp. 1-47). London, UK: Academic Press.

Rosen, R. D. (1977). Psychobabble. New York: Avon.

Rotton, J., & Kelly, I. W. (1985). Much ado about the full moon: A meta-analysis of lunar-lunacy research. Psychological Bulletin, 97, 286–306.

Ruscio, J. (2005, January/February). Exploring controversies in the art and science of polygraph testing. Skeptical Inquirer, 29(1), 34-39.

Ruscio, J. (2006). Critical thinking in psychology: Separating sense from nonsense (2nd ed.). Pacific Grove, CA: Wadsworth.

Sagan, C. (1995). The demon-haunted world: Science as a candle in the dark. New York: Random House.

Schwarzbach, M. (1986). Alfred Wegener: The father of continental drift (C. Love, Trans.). Madison, WI: Science Tech.

Shermer, M. (2002). Why people believe weird things: Pseudoscience, superstition, and other confusions of our time (2nd ed.). New York: Freeman.

Smith, D. G., Frankel, S., & Yarnell, J. (1997). Sex and death: Are they related? Findings from the Caerphilly cohort study. British Medical Journal, 315, 1641-1645.

Stanovich, K. (2006). How to think straight about psychology (7th ed.). New York: HarperCollins.

Tavris, C. (2000). Psychobabble and biobunk: Using psychology to think critically about issues in the news. Upper Saddle River, NJ: Prentice-Hall.

USA Today/Gallup Poll Results. (2007, June 7). Creationism and evolution. Retrieved October 22, 2008, from http://www.usatoday.com/news/politics/2007-06-07-evolution-poll-results_n.htm

van Rillaer, J. (1991). Strategies of dissimulation in the pseudosciences. New Ideas in Psychology, 9, 235–244.


Scott O. Lilienfeld, PhD, is a professor of psychology at Emory University in Atlanta. He is editor-in-chief of the Scientific Review of Mental Health Practice and past president of the Society for a Science of Clinical Psychology. He has served on nine journal editorial boards and is a columnist for Scientific American Mind magazine. Dr. Lilienfeld has published over 190 articles, book chapters, and books on personality disorders, psychiatric classification, and pseudoscience in clinical psychology. Among his books are Psychology: From Inquiry to Understanding (Allyn & Bacon, 2009; coauthored with Steven Jay Lynn, Laura Namy, and Nancy Woolf) and Science and Pseudoscience in Clinical Psychology (Guilford, 2003; coedited with Steven Jay Lynn and Jeffrey M. Lohr). His work has been featured in the New York Times, Newsweek, Boston Globe, Washington Post, USA Today, and New Yorker, and he has appeared on ABC’s 20/20, CNN, and CBS Evening News. In 1998, Dr. Lilienfeld received the David Shakow Award for Outstanding Early Career Contributions to Clinical Psychology from APA Division 12, and in 2007 he was elected a Fellow of the Association for Psychological Science. In 1998, he was selected as a member of Emory University’s “Great Teachers” lecture series.

Please address all correspondence concerning this article to Scott O. Lilienfeld, PhD, Department of Psychology, Room 206, Emory University, Atlanta, GA 30322.

Electronic mail: slilien@emory.edu

Copyright 2009 (Volume 13, Issue 2) by Psi Chi, the International Honor Society in Psychology


