
Eye on Psi Chi

Summer 2018 | Volume 22 | Issue 4

Writing Strong Conference Abstracts

Marianne Fallon, PhD, Central Connecticut State University, and Bonnie A. Green, PhD, East Stroudsburg University


Attending a conference can be a life-changing event (Mabrouk, 2009); presenting your research at one may be even more transformative. Your foot in the door to presenting at conferences rests upon writing a strong abstract. In this article, we share the makings of a successful abstract based on our collective experience mentoring students and reviewing conference submissions. The following suggestions apply primarily to empirical research projects.

A conference abstract is just a summary of a research manuscript, so pulling one together should be easy, right? Nope. Like packing a single tiny suitcase for a week-long holiday, it is challenging to distill an entire study into a few paragraphs. Many professional conferences require a short abstract (50 to 100 words) to appear in the conference program and a longer abstract or summary (250 to 1,000 words) for reviewers to evaluate. To put this in perspective, 50 words is approximately three typed lines of text; 1,000 words is roughly 2 ½ double-spaced pages. Usually the guidelines for paper (oral presentation) or poster abstracts are comparable. However, papers are more prestigious presentations, and the bar for acceptance is consequently higher.

To maximize your chances of being accepted to present at a professional conference, we make three broad recommendations. First, follow the submission guidelines. If you do not, your abstract may be rejected without even being read. Second, tell the complete story of your research project. Complete does not mean exhaustive; complete means that there is a clear beginning, middle, and end. Third, make deliberate and strategic choices in your writing. An abstract is neither a research manuscript nor a class assignment, and it demands a unique set of tools.

Follow Submission Guidelines

Before you begin drafting your abstract, read the submission instructions VERY carefully. Just as different scholarly outlets have specific requirements and formats for manuscripts, so do conferences. Some conferences require you to compartmentalize your abstract into sections (e.g., Problem, Method, Results, Conclusions); other conferences allow a narrative approach. Pay attention to word count, not only for short and long abstracts/summaries, but also for titles. Further, most conferences require that you have completed data collection by the submission date. Sometimes poster sessions specifically geared toward undergraduates, particularly at regional conferences, allow submissions with data collection that is in progress. If your data collection is not complete, or has not even started, at the time of submission, state that. Far better for you to honestly relay your current status than to promise something you might not deliver.

Tell a Complete Story

Problem. Every research project begins with a problem that you are trying to solve. (Yes, we understand that one study does not fully solve a problem; each study attempts to address a piece of the problem.) State your problem clearly and situate it squarely in the most relevant existing literature. You have precious little room: be judicious in selecting the theory and empirical findings that are most critical to help your reader understand the problem. If your problem involves theory or phenomena that reviewers may not know (e.g., embodied cognition, dark triad, hypergendered attitudes), define them.

Purpose. Clearly state your purpose—your reason for doing your study. Were there conflicting results in the literature? Did your methodology evaluate competing theories? Did a previous finding require direct or conceptual replication? If so, why? Had relationships between certain constructs not been examined before? Why should relationships between those constructs be examined? Did you validate an original self-report instrument? Why was it important to develop a new instrument? Was your study designed to shed light on a serious societal problem (e.g., obesity, attitudes about vaccinations, sexism)? Did you attempt to replicate a published study? Why was it important to replicate that particular study? In short, explain how our current understanding of your problem is limited and how your study will push psychological science forward.

Predictions. Specify your predictions and ground them in the literature. One of the first things drilled into you as a psychological scientist—as ANY scientist—is that theory gives rise to predictions. Unless your study is truly exploratory, make your predictions prominent. Avoid making predictions for secondary concerns such as manipulation checks or participant variables that are not centrally related to your problem.

Method. Describe the methodology you used to address your problem. In manuscripts, Method sections are written in enough detail to exactly replicate the study. You do not have that luxury in an abstract. Your goal is to hit the Goldilocks just-right standard: provide ample but not exhaustive information. If your sample comprises living beings, include the number of participants and relevant characteristics (e.g., biological sex and/or gender, age, race/ethnicity, class standing).

Unless you indicate otherwise, reviewers will assume that you collected your data; projects involving secondary data analysis (i.e., data that you personally did not collect) should note when the data were collected and by whom. For content analyses, describe the units of analysis (e.g., books, videos) and selection procedure. Include information about your materials—stimuli, questionnaires, coding schemes—and enough about your procedure for reviewers to critically appreciate your study. For example, reviewers should know that you used validated measures for a questionnaire study. But specifying the total number of items on the measure and providing thorough descriptions of subscales may not be critical (unless your goal was to contrast an existing measure with one that you developed and validated).

Similarly, omit procedural information that would be considered standard operating practice: your study received IRB approval, informed consent was obtained, debriefing was provided. Clearly, you should adhere to the highest levels of ethical practice; reviewers will assume that you have even if you don’t say so in your abstract. Include features that signal strong methodological decisions such as manipulation checks and counterbalancing. And if you registered your study on the Open Science Framework (OSF), state that. Your commitment to integrity and transparency demonstrates that you are serious about science.

Findings. When you stated your predictions, you entered into a contract to evaluate them. Ensure that your analysis is appropriate for your data and addresses the predictions you made. If you conducted quantitative analyses, provide descriptive and inferential statistics, effect sizes, and confidence intervals. (Some disciplines traditionally do not expect such detail within an abstract. When in doubt, consult your faculty mentor . . . and the submission guidelines!) Avoid using language that you would use in your Introduction to Statistics class: “Because p was less than .05, I rejected the null hypothesis at α = .05.” Rather, state the direction of the relationship and convey whether you rejected the null hypothesis through your linguistic choices:

The effect of gender on attractiveness was qualified by a gender x makeup interaction [F(2, 84) = 3.25, p = .049, η2p = .07], such that female raters did not perceive differences across levels of attractiveness (3.41 ≤ M ≤ 3.60, 0.74 ≤ SD ≤ 0.79). However, men rated women wearing moderate makeup the most attractive (M = 2.96, SD = 0.68), heavy makeup less attractive (M = 2.74, SD = 0.82), and no makeup the least attractive (M = 2.48, SD = 0.79), [quadratic trend: F(1, 85) = 6.27, p = .015, η2p = .07]. (Pattacini & Fallon, 2018)

Assume that your reviewers have enough statistical knowledge to understand at least intermediate statistics (e.g., complex factorial design, regression, moderation, mediation, path analyses). You cannot include tables or figures within your abstract, so use your text to etch an image of your findings. Be mindful of statistical representations: sometimes Greek letters (e.g., η), superscripts, and subscripts do not paste well into online submission portals. It is better to write out a statistical representation (e.g., partial eta squared) than risk the dreaded ☐, which could mean anything! Space permitting, include supplemental analyses (e.g., manipulation check, exploration of a moderator) that add substantially to your story. If you have performed a qualitative analysis, explain the method and theory you used to derive your conclusions. No matter your approach, state your findings clearly and succinctly—make them as well-engineered as a Tesla.

Conclusions. Think of the best dinner you ever had. Now imagine it without dessert. Like a fine meal, your abstract needs closure. Too often we read abstracts that abruptly end with the findings, leaving the reviewer to divine whether the results complement the existing literature. Reviewers don’t have the time—or the patience—to read tea leaves. Revisit the theory or literature you used to derive your predictions and place your findings in context. Entertain alternative explanations for your findings. Convey that your study, like all studies, involved some tradeoffs (e.g., internal for external validity). Include a future direction to show that you have considered where someone could pick up where you left off.

Make Sound Strategic Choices

Your guiding priority is to tell your story succinctly, clearly, and coherently. Writing a short abstract or summary (250–350 words) necessitates some brutal decisions. Forget the extra sweater and the third pair of shoes—they won’t fit in your suitcase. Gone are the multiple in-text citations and results from supplemental analyses. Your future directions or broader applications of your work might end up on the cutting room floor. Nevertheless, your story must hold together. For example, if you sacrifice describing your response scale (e.g., 5-point Likert scale from 0 to 4) but include descriptive statistics, your reviewer does not have enough context to evaluate the meaning of those values.

Sometimes our students ask us how much real estate they should apportion to each element of the abstract. Should the Method account for 25% of the abstract’s total words? Tell me, o science oracle! As with full manuscripts, your study dictates how much space you should devote to each section. To give you a sense of the variation, we examined our own students’ abstract submissions. Although this sample is far from representative, the percentage range for Introductions was 35% to 55%, Method 8% to 16%, Results 16% to 41%, and Conclusions 12% to 24%. Notably, the word limit on these submissions was between 500 and 1,000 words. The range and variability would likely shift and constrict with shorter word counts.

As a general rule, use most of the words you have been allotted within your abstract. Reviewers for conferences with 1,000-word limits expect meatier submissions. That said, if you have a 1,000-word limit and you have constructed a tight 600-word story, don’t pad your abstract. Adding fluff to increase word count will likely backfire.

Actively look for places where you can reduce word count without sacrificing content. Most conferences do not require a full reference list because it engorges word count. In-text citations can sufficiently convey that you have done your due diligence researching the literature. Another space-saving option is to sacrifice spaces around mathematical operators (=, <) when reporting statistics. Consider: r(99) = .39, p < .001. That is perfect APA format. And six words. Now consider: r(99)=.39, p<.001. Removing the spaces results in no loss of information or confusion. Now the word count is two words—you sat on the suitcase and zipped it up. We fully expect the APA formatting gods to strike us down for merely suggesting such heresy. But pragmatists might agree with us. If conference guidelines clearly state that submissions must strictly adhere to APA style or else risk immediate rejection, follow the APA Publication Manual to the letter. This cutesy, pragmatic workaround is not worth the risk.
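The space savings above are easy to verify for yourself. Most online submission portals count "words" simply by splitting on whitespace, so removing the spaces around operators merges each statistic into fewer tokens. A minimal sketch (assuming whitespace-delimited counting, which is how typical word counters behave):

```python
def word_count(text: str) -> int:
    """Count words the way a typical submission portal does:
    split on whitespace and count the resulting tokens."""
    return len(text.split())

# APA-formatted statistic, with spaces around operators:
spaced = "r(99) = .39, p < .001"
# The same statistic with spaces around operators removed:
compact = "r(99)=.39, p<.001"

print(word_count(spaced))   # 6
print(word_count(compact))  # 2
```

The information conveyed is identical; only the token count changes, which is why this trick matters under a strict word limit (and why it is risky when a conference demands strict APA style).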

Before you submit anything in its final form, proofread for grammar, spelling, punctuation, and formatting. You could have an otherwise great abstract, but lack of attention to these details could decrease your chances of acceptance. Reviewers usually have many, many abstracts to evaluate. Making careless errors is like poking a hibernating bear with a stick.

Final Thoughts

Abstracts are not only your ticket to a conference; they can also be used to determine awards for excellent research. Indeed, Psi Chi’s Regional Research Awards and the APS/APA Convention Awards are based on conference abstracts. As such, you want your abstract to be as darn near perfect as possible. We hope that reading this article helps you reach that goal. See you at the next conference!

SIDEBAR: What About Writing Manuscript Abstracts?

Convention and manuscript abstracts are generally similar in that they are both synopses of research. However, most convention abstracts are longer—some considerably so. If you’d like to see how to write a manuscript abstract, here are some great examples from one of Psi Chi’s best teaching tools, Psi Chi Journal of Psychological Research.

The Effects of Religion and Career Priming on Self-Control During Difficult Tasks in College Students

Abby S. Boytos and Terry F. Pettijohn II, Coastal Carolina University (Winter 2017)

The purpose of this study was to investigate ways that religion and career could be used to increase self-control. Participants (N = 60) were primed by taking the religion or career implicit association test (IAT). These tests were given before participants attempted to solve 3 creative analytical problems. The amount of time spent trying to solve the problems was used to measure self-control under the assumption that participants had to resist the temptation to give up and view the solutions. The riddles given to participants were chosen because they each require extensive thinking and many trials and errors before reaching the solution. Participants were told a cover story that the experiment was about the effects of technology on problem-solving ability, so they were not aware of any connection between the IAT and the problems. After being primed with either religion or career, participants worked longer on the problems than participants who were not primed, F(2, 53) = 5.46, p = .007, ηp2 = .17. Locus of control was also measured but did not influence the time that participants spent on the problems. Results indicated that briefly priming participants with either religion or career can lead to greater persistence in the face of difficult tasks.

Memory for Missing Parts of Witnessed Events

Lindsay T. Hobson and Kenith V. Sobel, University of Central Arkansas (Spring 2017)

This study examined how children and adults fill in missing parts of witnessed events. In 2 experiments, children and adults studied 6 series of PowerPoint slides that each depicted a single event. At test in Experiment 1, participants viewed old slides, new slides, and slides that had been missing from studied events. Both children and adults falsely recognized missing slides more than new slides: F(1, 104) = 162.97, p < .001, ηp2 = .61 for children, and F(1, 104) = 497.23, p < .001, ηp2 = .83 for adults. These results suggest that participants filled in the missing parts of witnessed events. However, an alternative explanation is that children falsely recognized missing slides because the missing slides superficially resembled the studied slides. At test in Experiment 2, participants viewed old slides, new slides, and slides that contained the same items as studied slides but with the items rearranged in the slides so they were incongruent with studied slides. Both children and adults recognized old slides more than incongruent slides: F(1, 90) = 16.86, p < .001, ηp2 = .16 for children, and F(1, 90) = 215.20, p < .001, ηp2 = .70 for adults. This undermined the alternative explanation, thereby supporting the original explanation that the false recognition of missing slides in Experiment 1 is attributable to the filling in of missing information.


References

Mabrouk, P. A. (2009). Survey study investigating the significance of conference participation to undergraduate research students. Journal of Chemical Education, 86, 1335–1340.

Pattacini, M., & Fallon, M. (2018, March). Makeup differentially affects men’s and women’s perceptions of women’s attractiveness and competence. Poster accepted for presentation at the Eastern Psychological Association Annual Conference, Philadelphia, PA.

Marianne Fallon, PhD, is an associate professor of psychological science at Central Connecticut State University. She won the Connecticut State University Trustees Teaching Award in 2010 and has twice been named a finalist for Central Connecticut State University’s Excellence in Teaching Award. A cognitive psychologist, Marianne conducts research in learning, memory, perception, and motivation. In 2016, she published Writing Up Quantitative Research in the Social and Behavioral Sciences as part of the Teaching Writing Series for Sense Publishers. Approximately 70 of her students have presented research at professional conferences and 12 of her students or student teams have won Psi Chi Regional Research Awards or the APS Society Research Award. Marianne currently serves as the Eastern Regional Vice-President for Psi Chi.

Bonnie A. Green, PhD, is a professor of psychology at East Stroudsburg University in Pennsylvania. Bonnie began her professional career as an elementary school teacher before attending Lehigh University to obtain a PhD in experimental psychology. She conducts research in academic success and has mentored over 100 undergraduate research students as they presented their research at professional conferences. Bonnie has served on program committees reviewing presentation and poster submissions for local undergraduate conferences, the Eastern Psychological Association (EPA), and the Southeastern Psychological Association (SEPA). She is currently the program chair for EPA.

Copyright 2018 (Vol. 22, Iss. 4) Psi Chi, the International Honor Society in Psychology
