HBX Business Blog

3 Reasons You Should Take Statistical Significance with a Grain of Salt

Posted by Jenny Gutbezahl on August 2, 2016 at 12:08 PM


If you read the results of any type of study, you've likely been told that results are "significant" in at least some cases. Clickbait headlines may use the word "significance" to make readers think the finding is important. But significance and importance are two very different things.

What is statistical significance, again?

Statistical Significance: A statistic is considered significant if the likelihood of it occurring by chance is less than a preselected significance level (often 0.05 or less).
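To make that definition concrete, here's a small sketch (my own illustration, using only Python's standard library) that computes the chance-probability from the definition for the coin example used throughout this post:

```python
from math import comb

def binomial_p_value(heads, tosses, p=0.5):
    """One-sided p-value: the probability of seeing `heads` or more
    heads in `tosses` flips of a coin whose heads-probability is `p`."""
    return sum(comb(tosses, k) * p**k * (1 - p)**(tosses - k)
               for k in range(heads, tosses + 1))

# 5 heads in 5 tosses of a fair coin: 1/32 = 0.03125, which is
# below the usual 0.05 threshold, so it counts as "significant."
print(binomial_p_value(5, 5))  # 0.03125
```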

If you're in need of a more in-depth refresher, check out this helpful article from Harvard Business Review.

How can you tell if a finding that is statistically significant is actually important? Here are three things to keep an eye out for.

1. Just because something is statistically significant does not mean that it isn't due to chance.

For example, if you tossed a coin 5 times, it is unlikely to come up heads all 5 times. There are 32 possible outcomes for tossing a coin 5 times and only one way to get 5 heads. So you'd only expect to get 5 heads one time out of 32 on average, or about 3% of the time. Generally, anything that would happen by chance less than 5% of the time is considered to be statistically significant. Thus, an unscrupulous researcher could get "significant" effects simply by conducting a lot of analyses and picking the ones that reach the threshold.
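You can watch this multiple-analyses problem happen with a quick simulation (my own sketch, not from any study): run many "studies" of a perfectly fair coin and count how many hit the "significant" all-heads outcome purely by luck.

```python
import random

random.seed(42)  # fixed seed so the simulation is reproducible

def all_heads(tosses=5):
    """Simulate tossing a fair coin `tosses` times; True if every toss is heads."""
    return all(random.random() < 0.5 for _ in range(tosses))

# Simulate 100,000 independent "studies" of a fair coin. Each study has a
# 1/32 (about 3%) chance of producing the "significant" all-heads result,
# so roughly 3,000 of them will look significant even though nothing is there.
trials = 100_000
false_positives = sum(all_heads() for _ in range(trials))
print(false_positives / trials)  # close to 1/32, i.e. about 0.031
```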

2. Just because something is not statistically significant doesn't mean that it isn't due to a real effect.

If one hundred people each tossed a fair coin 5 times, we'd expect about 3 of them to get 5 heads in a row. The reverse problem also exists: a real effect can fail to reach significance. If a weighted coin that comes up heads 80% of the time is tossed 5 times, it may well come up 4 heads and 1 tail, a result that would happen about 16% of the time by chance with a fair coin, so it would not reach statistical significance. Thus, an unscrupulous researcher could report "no effect" of something simply by conducting a study with a very small sample and little power to detect differences.
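The weighted-coin arithmetic above is easy to check directly. This short sketch (my own illustration, not part of the original post) shows that even a coin heavily biased toward heads produces the only "significant" 5-toss outcome barely a third of the time:

```python
from math import comb

# A coin weighted to land heads 80% of the time, tossed 5 times.
# Under the fair-coin null, only the all-heads outcome (p = 1/32) clears
# the 0.05 threshold, so this tiny study detects the real bias only
# when all 5 tosses come up heads.
power = 0.8 ** 5
print(round(power, 3))  # 0.328: the real effect is missed about 2/3 of the time

# The 4-heads-1-tail outcome described in the post is actually the single
# most likely result for this weighted coin:
p_four_heads = comb(5, 4) * 0.8**4 * 0.2
print(round(p_four_heads, 3))  # 0.41
```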

3. Just because something is statistically significant does not mean that it is practically significant.

When I was in graduate school, I fractured my navicular bone, a small bone in the wrist. My doctor told me I could get a cast that stopped right below my elbow, or one that continued past my elbow and would keep my arm bent until the cast came off. He informed me that medical research indicated that people spend (statistically) significantly longer in a cast if it stops below the elbow.

That certainly seemed like a good argument for getting a longer cast! But I asked for the average time spent in a cast under each condition. He told me that people who got the shorter cast spent, on average, a full six weeks in a cast while those who had their elbow immobile were out of the cast in only five weeks and six days! This may have been statistically significant, but the practical significance was not great enough for me to give up bending my elbow for a month and a half.

When you hear about a "significant" finding, you should take it with a grain of salt, especially if it's only been seen in one study. A report about, say, chocolate significantly reducing the chance of hair loss (a claim I'm completely making up, I've never seen that particular one) could be the result of many analyses producing one statistically significant result by chance. Or it could come from a study that found a very small effect (for example, that eating 5 pounds of chocolate a day delays the onset of hair loss by 45 minutes) that nonetheless was unlikely to be due to chance.

Interested in learning Financial Accounting, Business Analytics, and Economics for Managers?

Learn more about HBX CORe


About the Author

Jenny is a member of the HBX Course Delivery Team and currently works on the Business Analytics course for the Credential of Readiness (CORe) program, and supports the development of a new course in Management for the HBX platform. Jenny holds a BFA in theater from New York University and a PhD in Social Psychology from University of Massachusetts at Amherst. She is active in the greater Boston arts and theater community, and she enjoys solving and creating diabolically difficult word puzzles.

Topics: HBX CORe, HBX Insights