The Powerless Pose: How the ‘Power Pose’ Debacle Illustrates Good Science at Work

A simple example of how the scientific method is supposed to work: you observe that every time you drop your hammer, it takes the same amount of time before it hits your foot (ouch!). So you decide to conduct a test to see just how fast your hammer falls, dropping it from different heights: 1 foot, 2 feet, 3 feet, 4 feet, and 5 feet. “Height” is therefore your independent variable. You then measure the amount of time it takes for the hammer to hit the ground from each of these heights. “Amount of time” is your dependent variable.

If you’ve measured both height and time in a precise manner, you will be able to derive a model of how fast your hammer falls. You can then test this model by calculating how long the hammer should take to fall from 10 feet, and then actually dropping it from 10 feet to see if your prediction was accurate! Rinse and repeat, until your model accurately predicts how fast your hammer falls from a wide variety of heights.
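If you’d rather simulate this than risk your toes, here’s a minimal sketch of the whole procedure in Python. (The metric heights, the noise level, and the least-squares fit are my own illustrative choices, not part of the thought experiment.)

```python
import numpy as np

g_true = 9.8  # true acceleration due to gravity, in m/s^2

# Drop heights (the independent variable), in meters
heights = np.array([0.3, 0.6, 0.9, 1.2, 1.5])

# Simulated stopwatch readings (the dependent variable): the true
# fall time sqrt(2h/g) plus a bit of reaction-time noise
rng = np.random.default_rng(seed=42)
times = np.sqrt(2 * heights / g_true) + rng.normal(0, 0.02, heights.size)

# The free-fall model says t^2 = (2/g) * h, so a least-squares fit
# of t^2 against h (through the origin) recovers g from the data
slope = np.sum(heights * times**2) / np.sum(heights**2)
g_est = 2 / slope
print(f"Estimated g: {g_est:.2f} m/s^2")

# Test the model at a height you haven't tried yet
h_new = 3.0  # meters (roughly 10 feet)
print(f"Predicted fall time from {h_new} m: {np.sqrt(2 * h_new / g_est):.2f} s")
```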

Congratulations! You’ve just conducted a line of scientific inquiry!

This is easy to say, but harder to do. There will be variation in how long it takes you to press the start/stop button on your stopwatch. Even if you build a machine to do the timing for you, the hammer won’t take exactly the same amount of time to fall on every drop, even from exactly the same height. Other factors, like air resistance, will have tiny impacts on how long your hammer takes to hit the ground.

So, if you aren’t aware of these factors, or if you are unable to adjust your experiment to eliminate them, there will be little bits of variance between different observations of the exact same phenomenon. This variance will cause imperfection in your mathematical model. And if it’s imperfect, other people may decide to completely ignore all your hard work!

If you’re a careful scientist, you will drop the hammer from the same height, multiple times. This will help to even out inconsistencies, such as differences in how long you take to hit the start/stop button on your stopwatch. Now, you can find an average amount of time it takes for the hammer to fall from 1 foot, and the average time it takes the hammer to fall from 2 feet, and the average time it takes the hammer to fall from 3 feet, and so on.
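Here’s what that averaging buys you, in a quick sketch. (The 0.05-second reaction-time jitter and the 20-drop count are assumptions I’ve picked purely for illustration.)

```python
import numpy as np

rng = np.random.default_rng(seed=7)
true_time = 0.55  # true fall time from about 1.5 meters, in seconds

# A single stopwatch reading carries the full reaction-time noise...
single_reading = true_time + rng.normal(0, 0.05)

# ...but the mean of 20 repeated drops averages much of it away
twenty_readings = true_time + rng.normal(0, 0.05, size=20)

print(f"One drop:      {single_reading:.3f} s")
print(f"Average of 20: {twenty_readings.mean():.3f} s")  # closer to 0.55
```

The error of the average shrinks with the square root of the number of drops, which is why careful repetition pays off.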

Next, you might wonder if other things fall as fast as your hammer. You can test this with a variety of other objects. You can also test this with more sophisticated methodologies, such as dropping things in a vacuum-sealed chamber (to eliminate any effects of air resistance on different objects). Eventually, you’ll arrive at a model that can be used to accurately predict how fast anything will fall.

Just in case you’re not a physicist, we already know the answer: near Earth’s surface, and ignoring air resistance, objects accelerate toward the ground at 9.8 meters per second squared. So now that you know this, you can calculate just how fast your phone was going that time you dropped it and shattered the screen…
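For instance, here’s the back-of-the-envelope version. (The 1.5-meter drop height is just my guess at pocket height; swap in your own tragedy.)

```python
import math

g = 9.8            # m/s^2, acceleration due to gravity
drop_height = 1.5  # meters; an assumed pocket-height drop

# For free fall from rest, v^2 = 2 * g * h at the moment of impact
impact_speed = math.sqrt(2 * g * drop_height)
print(f"Impact speed: {impact_speed:.1f} m/s "
      f"(about {impact_speed * 2.237:.0f} mph)")  # ~5.4 m/s, ~12 mph
```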

Like the hammer, science is a tool — nothing more. The scientific method is only useful for determining what is true in the natural world. If a variable is measurable and quantifiable, it is subject to scientific scrutiny. But, just as the hammer cannot tell us what to build, science cannot tell us what to study — or how to communicate our results.

And that’s where the problem can come in.

Jumping the Gun

It’s natural to be excited about the results of a study. When your research gets published in a prestigious journal, you want people to read your work, and cite it in their own publications. But you don’t just want to tell your peers about what you found: you want to tell everyone!

Not so fast. Remember a few paragraphs ago, when I mentioned that you’re going to find some inconsistencies in your timing when you use your stopwatch? A similar problem afflicts all sorts of different types of research.

In psychological research, this issue can rear its ugly head in the form of measurement error, or in the form of sampling error. Since people can be so different from one another, you may just happen to randomly get a bunch of people who behave in some way that is not typical.

As a researcher, you couldn’t possibly know that you got an unusual sample! So you arrive at an erroneous conclusion about people in general, with no way of knowing that the conclusion is erroneous. Concluding that an effect exists when it really doesn’t is called a Type I error (a false positive).
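You can actually watch Type I errors pile up in a simulation. Here’s a sketch, assuming the conventional 5% significance threshold and ordinary two-sample t-tests; nothing here is specific to any real study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
alpha = 0.05        # the conventional significance threshold
n_studies = 10_000
false_positives = 0

# Every "study" samples both groups from the SAME population, so
# there is truly no effect; any significant result is pure chance.
for _ in range(n_studies):
    group_a = rng.normal(loc=0, scale=1, size=30)
    group_b = rng.normal(loc=0, scale=1, size=30)
    _, p_value = stats.ttest_ind(group_a, group_b)
    if p_value < alpha:
        false_positives += 1

# Roughly 5% of studies "discover" an effect that does not exist
print(f"Type I error rate: {false_positives / n_studies:.3f}")
```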

This is precisely why, according to proper scientific methodology, researchers should replicate (or re-do) their experiments. Replication ensures that one weird sample doesn’t mislead the psychological community into a false conclusion about how the mind works. Science is therefore supposed to be a self-correcting process — and often, it is.
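Replication helps because lightning rarely strikes twice. Extending the same simulation (same illustrative assumptions as above): a false finding must clear the 5% bar twice in a row to survive, which happens only about 0.25% of the time.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=2)
alpha = 0.05
n_pairs = 10_000
survived = 0

# An original study AND an independent replication, with no real effect
for _ in range(n_pairs):
    p_original = stats.ttest_ind(rng.normal(0, 1, 30),
                                 rng.normal(0, 1, 30)).pvalue
    p_replication = stats.ttest_ind(rng.normal(0, 1, 30),
                                    rng.normal(0, 1, 30)).pvalue
    if p_original < alpha and p_replication < alpha:
        survived += 1

# Only about alpha^2 = 0.25% of false findings survive replication
print(f"False findings surviving replication: {survived / n_pairs:.4f}")
```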

But there’s a snake in paradise. There is little incentive to re-do someone else’s work, as most journals will only publish unique, original studies. That means that only the first person to find an effect gets rewarded with seeing his or her work in print. Since journal publications are an important metric for academic hiring committees, or for tenure committees, this means that if Joe Researcher fails to beat other people to the punch, that can have a negative impact on his career!

As you can probably imagine, this tends to result in a couple of problems. One major issue is the so-called “file-drawer” problem: lots of research gets filed away and forgotten because it doesn’t yield a statistically significant result. The researchers conclude that there is “no effect,” even when the effect is real and they just happened to get a weird sample.
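The flip side of the file drawer is easy to simulate, too. Here’s a sketch assuming a real but modest effect (a standardized difference of 0.3) studied with small samples; both values are mine, chosen only for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=3)
n_studies = 10_000
significant = 0

# Here the effect is REAL, but each study is small and underpowered
for _ in range(n_studies):
    control = rng.normal(loc=0.0, scale=1, size=20)
    treated = rng.normal(loc=0.3, scale=1, size=20)  # a true effect
    _, p_value = stats.ttest_ind(treated, control)
    if p_value < 0.05:
        significant += 1

# Most of these studies miss the real effect; into the file drawer they go
print(f"Studies reaching significance: {significant / n_studies:.2f}")
```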

Zachariah Basehore, MA

Zach is currently a PhD student in Neural and Cognitive Sciences at Bowling Green State University. He studies judgment and decision-making, specifically the use of heuristics as a potentially adaptive decision strategy. His first peer-reviewed publication can be found here: The Simple Life: New experimental tests of the recognition heuristic.

