Psi-Chi-ology Lab
History Isn't History:
Why Every Psychology Course Should Be a History Course

Steven Rouse, PhD, Pepperdine University (CA)
March 14, 2017

I often tell my classes that every psychology course should be a history course. At one time, many universities offered a course on the History of Psychology, but these are becoming more and more rare as many undergraduate programs have shifted their focus to current knowledge and future directions. In the words of Dean Simonton, “history may soon be history.”

If a full-semester course on the history of psychology is no longer available at most colleges or universities, then an understanding of history should be embedded into each course, because it is impossible to fully understand the present and our movement toward the future without understanding the past. For example, my primary area of interest is the subfield of personality psychology, a subfield that has been dramatically affected by the Five Factor Model of Personality. The development of the Big Five provided personality trait researchers with a standard set of constructs to study, and this model promoted dramatic increases in our understanding of many stable, predictable, observable aspects of a person’s characteristic behavioral style. However, the only way to fully understand the nuances of what the Big Five comprises (and, perhaps even more importantly, the aspects of human personality that are not encompassed by the Big Five) is to understand the Lexical Hypothesis and Raymond B. Cattell’s efforts to create a lexical taxonomy of personality traits. And Cattell’s work is best understood in the context of Gordon Allport’s exploration of personality traits communicated in language, which can itself be understood as a reaction against what Allport perceived as an unnecessarily complicated “depth” psychology proposed by Sigmund Freud.

Likewise, in every subfield of psychology, a full appreciation of the present requires an understanding of the developments and underlying assumptions of the past. This is even true for courses like Statistics and Research Methods—courses that, on the surface, may seem like they are about processes rather than about concepts. Many students approach these courses (and, indeed, many professors teach these courses) as though the goal is only to learn formulaic processes. Much like learning a language, sometimes the best way to learn statistics is to learn step-by-step procedures; sometimes you have to learn how to “do” first, and learn how to “understand” later. But fully understanding statistics and research methods—the main forms of persuasion accepted in scientific psychology—requires an understanding of the assumptions, arguments, biases, and even mistakes of the people who developed these methods.

This is what prompted me to write my recent Psi Chi Journal editorial, “Of Teacups and t Tests: Best Practices in Contemporary Null Hypothesis Significance Testing.” It may not be obvious, but we appear to be at a critical turning point in the history of psychological research regarding our discipline’s attitudes and approach toward significance tests.
"We appear to be at a critical turning point in the history of psychological research..."

For decades, students have been taught the standard practice of setting the significance level for p values at .05 in order to keep the probability of a Type I error at or below 5%. However, over time, this standard practice became an unbending rule, eventually getting to the point where p values below .05 were interpreted (simplistically and inaccurately) to mean “This claim is true,” and p values above .05 were interpreted (simplistically and inaccurately) to mean “This claim is false.” By treating this cut-off level in a mindless, formulaic way, it is easy to forget that “p” stands for probability: the lower the p value, the less likely it is that results at least this extreme would have been observed by chance alone. Ultimately, two example p values of .051 and .049 aren’t very different from each other; nevertheless, it would have been common practice not long ago to simply conclude that the first claim was false (“p > .05”) and that the second claim was true (“p < .05”).
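That 5% Type I error rate is easy to see in a quick simulation. The sketch below (plain Python, my own illustration rather than anything from the editorial) runs thousands of experiments in which the null hypothesis is true by construction; with a two-tailed cutoff at the conventional critical value of 1.96, about 5% of them still come out “significant.”

```python
import random
import math

random.seed(42)  # fixed seed so the simulation is reproducible

# Simulate many experiments in which the null hypothesis is TRUE:
# every sample is drawn from a normal distribution with mean 0.
# With alpha = .05, roughly 5% of these null experiments should
# still cross the significance threshold -- a Type I error.
n_experiments = 10_000
n_per_sample = 30
z_critical = 1.96  # two-tailed critical value for alpha = .05

false_positives = 0
for _ in range(n_experiments):
    sample = [random.gauss(0, 1) for _ in range(n_per_sample)]
    sample_mean = sum(sample) / n_per_sample
    # z test with known sigma = 1: standard error is 1/sqrt(n)
    z = sample_mean / (1 / math.sqrt(n_per_sample))
    if abs(z) > z_critical:
        false_positives += 1

print(f"Type I error rate: {false_positives / n_experiments:.3f}")  # close to .05
```

Nothing about the simulation privileges .05; change `z_critical` and the long-run error rate follows the cutoff you chose.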

In reality, there is nothing magical, or even particularly special, about a p value of .05. Many people use this standard without knowing its origin. As far as we can tell, the first time this was used, it was simply a way to settle a friendly bet to determine whether or not someone could really claim to taste subtle differences in the way a cup of tea was prepared (as I explained in greater depth in the editorial).
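The bet in question is usually identified with R. A. Fisher’s famous “lady tasting tea” design. In its textbook form (which I am assuming here; the editorial tells the fuller story), eight cups are prepared, four milk-first and four tea-first, and the taster must say which are which. The chance probability of a perfect identification works out neatly:

```python
import math

# Classic "lady tasting tea" design: 8 cups, 4 of each kind.
# A guessing taster picks 4 cups out of 8 at random, and every
# one of the C(8,4) possible picks is equally likely, so a
# perfect identification by pure chance has probability:
p_perfect = 1 / math.comb(8, 4)
print(f"P(perfect guess) = 1/{math.comb(8, 4)} = {p_perfect:.4f}")  # 1/70 ≈ 0.0143
```

A perfect performance would be unlikely enough under guessing (about 1.4%) to be persuasive, which is the basic logic of a significance test in miniature.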

Ultimately, the over-emphasis on (and the eventual reification of) this particular cut-off level can be identified as the source of many of the problems that the field of empirical psychology is currently wrestling with. These include the uncritical acceptance of single studies without replication, the exaggeration of the size of effects observed in large samples, ethical transgressions committed in order to barely squeak under that publish/no-publish criterion, and the burial of nonsignificant results in the file drawers and wastebaskets of researchers around the world.

Understanding this past gives us greater understanding of the present and the future. As the field has slowly become aware of the severity of the problems I just listed, best practices have been put in place. This is why:
  • Many journals, including Psi Chi Journal, are once again emphasizing the value of replication research.
  • The American Psychological Association’s publication manual (section 2.07) requires that effect sizes be reported every time statistical significance tests are used.
  • Many journals are beginning to award Open Science Badges to journal articles that took steps to prevent “p-hacking,” like registering their methods and hypotheses before data collection and making their data and research materials publicly available.
  • Many journals are broadening their acceptance of research that might have something important to say despite (or maybe even because of) nonsignificant results.

Our simplistic overreliance on a specific cut-off level led to several problems, and we are beginning to take steps to correct those problems. However, the only way to understand why we are taking those steps is to understand what the underlying problems are and how we got to this point in the first place.

At Psi Chi Journal of Psychological Research, we are doing our best to keep up-to-date with these new developments and best practices in contemporary psychological research, and trying to ensure that our authors are educated about these best practices. That’s why I was glad to be asked to write my recent editorial. Plus, it’s just fun to tell an obscure story about two people bickering about a cup of tea.

Conduct a Lab Experiment

Psi Chi members, do you feel that more emphasis on the history of psychology at your institution would be helpful? Tell us what you think on Psi Chi's private LinkedIn Group. And remember to submit your own empirical research to Psi Chi Journal too!

Copyright 2017 by Psi Chi, the International Honor Society in Psychology







