

Eye on Psi Chi

Spring 2019 | Volume 23 | Issue 3


Practicing Quantitative Psychology (From an Aerial Circus Trapeze!?) With Amanda Montoya, PhD

Amanda Montoya, PhD,
University of California, Los Angeles

Bradley Cannon,
Psi Chi Central Office

https://doi.org/10.24839/2164-9812.Eye23.3.22


Have you ever had a psychology-related question on your mind that you weren’t quite sure how to empirically explore? Not to worry! A quantitative psychologist can help. According to Dr. Amanda Montoya, quantitative psychology is “a research area which focuses on developing and assessing statistical methods, research designs, and measurement practices used in psychology research. Typically, a quantitative psychologist doesn’t have a specific goal in terms of studying ‘human behavior’ but rather focuses on a statistical method and its uses.”

GET THIS! By day, Dr. Montoya is an assistant professor at the University of California, Los Angeles. But by night, she practices aerial circus arts twice a week. She says this is a really physically demanding activity that requires a lot of concentration. “I really find it’s a great way to push myself and try new and artistic things that otherwise would not be part of my normal life. I also do competitive cosplay, play board games, rock climb, and I play in a band. I take the ‘work hard, play hard’ mentality very seriously.”

Specifically, Dr. Montoya focuses her research on mediation and moderation analysis, which are used to explore how and when certain effects occur and can be applied across areas of psychology such as social, developmental, and clinical psychology. In this interview, she will enlighten you on the ever-changing study of quantitative methods. (She’ll share career advice too, in case you’d like to pursue a future in quantitative psychology!)
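Her description of mediation can be sketched numerically. Below is a toy simulation (all path values and variable names are made up for illustration) in which an effect of X on Y is partly transmitted through a mediator M; the "indirect effect" that a mediation analysis estimates is the product of the X→M and M→Y paths:

```python
import random

rng = random.Random(1)

# Simulate a simple mediation model with known (made-up) effects:
# X -> M (a = 0.5), M -> Y (b = 0.4), direct X -> Y (c' = 0.3)
n = 2000
X = [rng.gauss(0, 1) for _ in range(n)]
M = [0.5 * x + rng.gauss(0, 1) for x in X]
Y = [0.3 * x + 0.4 * m + rng.gauss(0, 1) for x, m in zip(X, M)]

# Path a: no-intercept OLS of M on X (data were generated with mean zero)
Sxx = sum(x * x for x in X)
a_hat = sum(x * m for x, m in zip(X, M)) / Sxx

# Paths c' and b: no-intercept OLS of Y on X and M,
# solving the 2x2 normal equations by Cramer's rule
Sxm = sum(x * m for x, m in zip(X, M))
Smm = sum(m * m for m in M)
Sxy = sum(x * y for x, y in zip(X, Y))
Smy = sum(m * y for m, y in zip(M, Y))
det = Sxx * Smm - Sxm * Sxm
c_hat = (Sxy * Smm - Sxm * Smy) / det   # direct effect of X on Y
b_hat = (Sxx * Smy - Sxm * Sxy) / det   # effect of M on Y

indirect = a_hat * b_hat  # the "how": the part of X's effect carried through M
```

With 2,000 simulated cases, the estimates land close to the generating values (a ≈ 0.5, b ≈ 0.4, c' ≈ 0.3), so the indirect effect comes out near 0.20.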

We at Psi Chi are appreciative of her willingness to answer our questions. Thank you, Dr. Montoya, for making quantitative psychology significantly and statistically awesome, p < .001, d = 2.0!

LET’S START WITH AN EASY ONE: HOW DID YOU BECOME INTERESTED IN THIS FIELD?

I suppose it all started in my Research Methods class in undergrad. We were all assigned to groups and did our own (very small) study. Our group was looking at how parenting style predicted college stress. We had three measures of parenting style (authoritative, permissive, and authoritarian). We only knew how to do t tests (comparing two groups), so I spent hours in Microsoft Excel trying to create some sort of grouping system based on these three measures. I eventually took what I did to my teacher, Dr. Ann Voorhies (University of Washington). She said I might be interested in this thing called “cluster analysis,” which does exactly that (creates groups based on a variety of measures), and introduced me to quantitative psychology. I really owe her a lot! From there I was hooked. I got really curious about what you could do with statistics and how they could be applied to psychological data.
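The "cluster analysis" idea Dr. Voorhies pointed her to can be sketched in a few lines. This is a minimal k-means implementation (the parenting-style scores below are invented for illustration) that groups students by all three measures at once, rather than by a single two-group split:

```python
def dist2(a, b):
    """Squared Euclidean distance between two score profiles."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=20):
    """Tiny k-means with deterministic farthest-point initialization."""
    centroids = [points[0]]
    while len(centroids) < k:  # seed each new centroid far from the existing ones
        centroids.append(max(points, key=lambda p: min(dist2(p, c) for c in centroids)))
    for _ in range(iters):
        # assign each point to its nearest centroid
        labels = [min(range(k), key=lambda j: dist2(p, centroids[j])) for p in points]
        # recompute each centroid as the mean of its assigned members
        for j in range(k):
            members = [p for p, lab in zip(points, labels) if lab == j]
            if members:
                centroids[j] = tuple(sum(dim) / len(members) for dim in zip(*members))
    return labels

# Hypothetical (authoritative, permissive, authoritarian) scores on a 1-7 scale
students = [(6, 2, 2), (6, 1, 3), (2, 6, 1), (1, 7, 2), (2, 2, 6), (1, 1, 7)]
labels = kmeans(students, k=3)  # students with similar profiles share a label
```

Students with similar three-measure profiles end up in the same cluster, which is exactly the grouping she was trying to build by hand in Excel.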

WHAT SORT OF JOBS ARE AVAILABLE FOR QUANTITATIVE PSYCHOLOGISTS?

Quantitative psychology is the only area of psychology where we have more jobs than students within academia, which is pretty crazy. In 2006, the American Psychological Association put together a task force to deal with this problem because it was so bad (Aiken et al., 2009). So, within the area, it’s not that unusual to think you can have a career as a professor. We also have a ton of students going into industry because data science is in such demand. Our students are versed in statistics, programming, and communication, and that last one sets them apart from students coming from statistics and biostatistics programs in particular. We had a lot of students in quantitative psychology beat out statistics students for internships because of their communication skills. Many students, especially those who specialize in modeling, go into the tech industry (Google, Facebook, YouTube), insurance, or banking. Students who have more of a measurement focus tend to work for testing companies (ETS, College Board) or government agencies. Some students also stay at the university and help with institutional research. Anywhere that collects data (especially data from people) could use a quantitative psychologist.

WHAT ACADEMIC AND EXTRACURRICULAR PATH SHOULD STUDENTS TAKE WHO ARE INTERESTED IN BECOMING QUANTITATIVE PSYCHOLOGISTS?

I got really into quant because I worked in a research lab as an undergrad; I had the opportunity to do data analysis and think about how data is collected. That type of experience is key. I would highly recommend finding some sort of undergraduate research experience, and the closer you can get to the data, the better. Academically, a strong math background looks really good. Most students come in with some calculus, ideally multivariate calculus, and anything with matrices (matrix algebra, linear analysis) looks really good. I minored in math as an undergrad, but that is not required in any way. Computer science or programming experience of any type is a big strength. Much of our research is done using computer simulations, so students need to learn to program. I came in pretty cold, with just a little experience with SPSS and R, but my advisor focused on teaching me to program and so I caught on. It’s not required, but it’s a huge strength if students already have it coming into graduate school.

HOW CAN RESEARCHERS STRENGTHEN THEIR ABILITY TO TRANSLATE REAL-LIFE RESEARCH PROBLEMS TO STATISTICAL METHODS AND BACK AGAIN?

This one’s tough; it’s a skill I’m still working on myself. A major part of this is being able to form your research question very specifically. This is maybe the hardest part of my consulting meetings: drilling down on what the researcher does and does not expect in order to create a statistical model that can test the question they’re asking. The other side is really a focus on interpretation. Sometimes it’s really fun to estimate some crazy statistical model, but if you can’t actually interpret the results, is it really useful? I spend a lot of time sitting down with researchers and helping them understand what each number means in terms of their research problem. I suppose that’s why I think quantitative psychology is so important: you need people who specialize in that “translation” part. A major part of this is practice and feedback (from a mentor or peer). Constantly ask yourself “Do I know what this means?” when you’re doing an analysis. That’s a core part of our training in quantitative psychology, and hopefully we also rub off on our collaborators in other fields.

RESEARCHERS OFTEN STUDY TOPICS THEY ARE PASSIONATE ABOUT. DO YOU HAVE ANY TIPS FOR KEEPING PERSONAL BIASES OUT OF THEIR RESEARCH DESIGNS AND ANALYSES OF THEIR FINDINGS?

I’m really pushing preregistration, which is when you create a public record of your research and analysis plan before collecting data. It keeps you accountable. I’ve started doing this with all of my substantive research and collaborations. I’m also strongly recommending it to anyone I do consulting work for. It really helps to have a published record of a plan because it really helps you stick with that plan. It also forces you to think about how you’re going to do your analysis before you get your data. It makes it really clear when you start to explore your data. Data exploration is really fun and exciting, but it’s important that, when we report the results of that analysis, it’s labeled as exploratory. My other recommendation is for researchers to (in a preregistration or elsewhere) make some sort of agreement with themselves about what they would need to see from a study to convince themselves that they’re wrong. We’re often looking for evidence that we’re right, but we don’t set any criteria for deciding when we are wrong. So, that’s something I like to encourage people to think about, especially with sample size planning. When someone asks me to do a power analysis, I usually ask them how many people they would need to collect to be convinced that they are wrong (I think I stole this from Uri Simonsohn; see SPSP, 2014).
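Sample-size planning of the kind she describes usually starts from a power analysis. Here is a minimal sketch using the standard normal approximation for a two-sample comparison (the effect sizes are hypothetical; a real power analysis would use the t distribution or dedicated software such as G*Power):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-group n to detect a standardized effect d
    in a two-sample comparison (normal approximation)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided critical value, ~1.96
    z_beta = z.inv_cdf(power)           # quantile for the desired power, ~0.84
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

n_medium = n_per_group(0.5)  # "medium" effect: about 63 per group
n_small = n_per_group(0.2)   # "small" effect: roughly six times as many people
```

Flipping the question around, as she suggests: if a study with about 63 people per group comes back null, an effect of d ≥ 0.5 was unlikely to be missed, which is exactly the kind of "convince me I'm wrong" criterion a preregistration can put on record.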

WHAT IS THE IMPACT OF CONDUCTING RESEARCH THAT IS REPLICABLE?

I think the major thing is that we have more confidence in what we know. When the issues of replication came to light in psychology, we all kind of had this moment like “Everything I know is wrong!” That may be an exaggeration, but it really has put things into perspective. Much of what’s in our Intro Psychology textbooks isn’t replicable, which means we just have no idea if it’s true or not. And some people might argue we now have more evidence that it’s not true than that it is. Replicable research practices mean we start valuing the truth of our claims over the surprise or “wow factor.” I’m a scientist because I want to learn more about the real world, and conducting replicable research helps the field do that. Right now the incentive is not to “find truth” but to “produce research” (the quality of which is not easily assessed), so people cut corners because they want to keep their jobs. I am hopeful the incentive system will change. I’ve already seen proposals to include open science practices in tenure reviews, and I am hoping to see a big push for graduate students conducting replications as part of their training. These things, I think, will help us conduct more replicable research.

QUANTITATIVE METHODS SEEM TO BE CONSTANTLY EVOLVING. WHAT ARE SOME NOTABLE CHANGES IN PRACTICES AND STANDARDS FROM THE PAST FEW YEARS?

I suppose to me one of the most exciting changes is the increased emphasis on Bayesian statistics. Typically, when you take an undergraduate statistics class, you learn what’s called “frequentist statistics,” which relies on null hypothesis testing and creating confidence intervals. Students have a really hard time understanding the interpretation of a p value because it’s this really weird theoretical “what if” statement. Bayesian statistics is very straightforward from a theoretical perspective. The idea is that you make some guess ahead of time about how probable different events might be, and then you use that information, in combination with your data, to make a statement about how probable the events are now (your “prior” + your data). Much more straightforward. The major advance that’s happened recently is in computing. Bayesian statistics were just not possible for complex situations until computers got really good and fast, so it’s become this new exciting tool (even though the ideas have been around for a long time). I’m really looking forward to a generation of students who are taught both methods, and to seeing how they incorporate each approach in different situations. I think about these things like tools, rather than philosophies, so I think it will just expand what’s possible for data analysis.
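The "prior + data" update she describes has a particularly clean closed form in the beta-binomial case. A toy sketch (the counts are made up): start with a Beta(2, 2) prior on a success probability, observe 7 successes and 3 failures, and the posterior is again a Beta distribution whose parameters are just the prior counts plus the data counts:

```python
def beta_binomial_update(prior_a, prior_b, successes, failures):
    """Conjugate update: a Beta(a, b) prior combined with binomial
    data yields a Beta(a + successes, b + failures) posterior."""
    return prior_a + successes, prior_b + failures

a, b = beta_binomial_update(2, 2, successes=7, failures=3)  # posterior Beta(9, 5)
posterior_mean = a / (a + b)  # 9/14: a direct "how probable is it now" statement
```

This is also why the field had to wait for fast computers: for complex models without such closed forms, the same update has to be done numerically (e.g., by MCMC).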

WHAT ARE SOME SHORTCOMINGS IN CURRENT QUANTITATIVE METHODS? AND HOW ARE YOU AND OTHER RESEARCHERS WORKING TO CORRECT OR ADVANCE THESE METHODS?

I think the major shortcoming in quantitative methods is the education gap. There’s a lot of research and expansion in methods, but there’s not enough being done to help people learn and apply these methods in an informed and responsible way. One of the things that I do is spend a lot of time teaching. I teach at conferences and I teach independent statistics workshops, and I really try to improve applied researchers’ knowledge about the analyses that I research. Mediation analysis is something that’s misused a lot, so I really try to teach a conscientious mindset when using the analysis. My advisor, Andrew Hayes, and others really share this mentality of teaching focus, so we see quantitative psychologists publishing more tutorials and publishing in substantive journals. Psychological Methods has recently implemented a tutorial section that I think has been really helpful. There is a new journal, Advances in Methods and Practices in Psychological Science, that I believe is going to be a great resource for people trying to learn new skills.

IN WHAT WAYS HAS THE STUDY OF QUANTITATIVE PSYCHOLOGY INFLUENCED YOU AND YOUR PERSONAL RESEARCH?

In general, I think it’s given me a lot of versatility. A big part of my job is consulting, so someone comes to me with a statistics problem, and I have to learn a little bit about what they’re trying to do substantively in order to communicate with them about what to do statistically. I’ve worked with all types of researchers (maybe the most “out there” was a team of entomologists). It means I have to think creatively and be willing to meet people where they are. I like being able to dabble in things, and I think this experience has really helped me be open to exploring new things. I think previously I might have said “well that’s not my area” but now I don’t really mind that; if I’m curious about something, I’ll explore it.

WHAT OFTEN IGNORED RESEARCH FINDING BUGS YOU THE MOST?

I think a lot of my research pet peeves come from dieting or nutrition research. People have a really hard time with the idea that eating cholesterol doesn’t necessarily raise your cholesterol (and other similar findings). Having recently moved to Los Angeles, where everyone seems to be peddling raw/organic/natural foods without much care for what that actually does or means for your body, I find this a huge frustration. But, maybe I just like rationalizing eating ice cream? Who knows.

FAVORITE USE (OR MISUSE) OF QUANTITATIVE RESEARCH IN A FILM, NOVEL, OR TELEVISION SHOW?

I guess I’ll pick on the show Numb3rs, even though I used to watch it religiously. The thing about being a statistician is that we focus on variability and error. In that show, every calculation is done with absolutely no margin of error. The lead character is always like “Given the guy on the bike was riding this direction at this speed, and the car hit him at this angle at this speed, his missing tooth should have landed right HERE.” Ridiculous.

PAST PSI CHI MEMORIES OR EXPERIENCES?

I went to a Psi Chi event at the University of Washington when I was in undergrad called “Pizza With the Professor.” They would invite different professors to come and talk about their careers and how they got there. I remember going when Dr. Tony Greenwald was speaking. He told this story about how he went to graduate school because he was worried about getting drafted and he wasn’t even particularly interested in psychology at the time. It was maybe the first time I realized professors are just people, and a lot of life is chance.

FAVORITE QUANTITATIVE METHOD? AND WHY?

I was recently in a meeting and we were all talking about how much we love χ² (chi-square) tests. They were the one test in intro stat that you actually felt okay calculating by hand. I’m also particularly attached to χ² tables, because you can calculate almost any other statistical table from a χ² table, which I think is just so cool.
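The "other tables from a χ² table" trick is easiest to see with one degree of freedom: a squared standard normal is χ²(1), so the z table and the χ²(1) table carry the same information. A quick check, using Python's standard library rather than a printed table:

```python
from math import sqrt
from statistics import NormalDist

norm = NormalDist()

# A squared standard normal is chi-square with 1 df, so the familiar
# chi-square(1) critical value is just the two-sided z critical value squared:
z_crit = norm.inv_cdf(0.975)   # ~1.96
chi2_crit = z_crit ** 2        # ~3.84, the chi-square(1) 95th percentile

# Equivalently, the chi-square(1) CDF can be read straight off the normal CDF:
# P(chi2_1 <= x) = 2 * Phi(sqrt(x)) - 1
p = 2 * norm.cdf(sqrt(chi2_crit)) - 1   # recovers 0.95
```

Similar identities link χ² to the t and F tables (for example, a squared t with m df is F(1, m), and an F statistic is a ratio of scaled chi-squares), which is what makes the χ² table such a keystone.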

References

Aiken, L. S., Aguinis, H., Appelbaum, M., Boodoo, G. M., Edwards, M. C., Gonzalez, R. D., . . . Patelis, T. (2009). Report of the task force for increasing the number of quantitative psychologists. American Psychological Association. Retrieved from http://www.apa.org/science/leadership/bsa/quantitative/index.aspx

Society for Personality and Social Psychology. (2014, June 5). Uri Simonsohn - SPSP 2014 session on defining research integrity [Video file]. Retrieved from https://www.youtube.com/watch?v=HzE9HtOX_sE


Amanda Montoya, PhD, is an assistant professor of psychology (Quantitative Area) at the University of California, Los Angeles. She grew up in Seattle, WA, and will always be a lover of outdoor activities and quality coffee. Amanda started her academic pursuits at North Seattle Community College and completed her bachelor’s in psychology at the University of Washington in 2013. Drs. Sapna Cheryan and Allison Master served as her mentors for her undergraduate thesis on using group work to encourage women’s interest in computer science. During college, she worked as a stage manager for small theater and dance productions. Amanda completed her master’s in statistics and master’s in psychology at Ohio State in 2016 and her PhD in psychology in 2018, studying under Dr. Andrew Hayes. Her research focuses on developing statistical methods for questions that address “how” and “when” certain effects occur, particularly with data collected repeatedly from the same individuals. She also has a strong focus in meta-science: studying how science is done, with an eye toward replication, meta-analysis, and open science practices.

Copyright 2019 (Vol. 23, Iss. 3) Psi Chi, the International Honor Society in Psychology
