If you’ve ever been in a meeting about metrics or results, you might've heard someone ask about the statistical significance of the data presented.
Whether or not you're familiar with statistical significance, understanding what it means is critical if you're a manager, HR representative, or business leader who relies on statistics to guide talent strategies and make data-driven decisions about your workforce and workplace.
Statistical significance is numerically represented as a p-value (e.g., p = 0.03). You might’ve heard that the p-value is the probability of your results happening by chance; the lower the p-value, the lower the probability that your results happened by chance. In other words, lower p-values are associated with results that are more important. This explanation is straightforward and easy to understand.
There’s just one problem: all of that is completely wrong, as is conventional wisdom surrounding statistical significance.
The technical and most accurate definition is that the p-value is the probability of getting results at least as extreme as the ones you observed, given that the null hypothesis is true.
Unless you’re well-versed in statistics, that definition probably doesn’t make much sense. And unfortunately, I can’t explain it differently. It’s not because I don’t want to, but because I literally can’t – even scientists can’t easily explain what a p-value is.
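To see what that definition means in practice, here's a minimal sketch of the one scenario where a p-value is easy to compute exactly: coin flips. The numbers (60 heads out of 100 flips) are made up for illustration. The null hypothesis is that the coin is fair, and the p-value is the probability of a result at least as extreme as the one observed under that assumption.

```python
from math import comb

def binom_p_value(heads, flips, p=0.5):
    """One-sided p-value: the probability of getting at least `heads`
    heads in `flips` tosses, assuming the null hypothesis (a fair coin,
    p = 0.5) is true."""
    return sum(comb(flips, k) * p**k * (1 - p)**(flips - k)
               for k in range(heads, flips + 1))

# Hypothetical observation: 60 heads in 100 flips.
# How often would a fair coin do at least that well?
p_val = binom_p_value(60, 100)
print(f"p = {p_val:.4f}")
```

This prints a p-value of about 0.028. Note what it does and doesn't say: it tells you a fair coin would produce 60 or more heads only about 3% of the time, but it does not tell you the probability that the coin is fair, nor whether the result matters in practice.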
And that’s a huge red flag. The p-value is commonly misunderstood and difficult to explain, yet the sciences are absolutely obsessed with it. It’s often used as a threshold to determine whether scientific studies get published or funded, whether certain strategies or products are implemented within businesses, or even whether certain drugs can be used to treat diseases.
This means the p-value can make or break careers, businesses, and lives. Thankfully, statistical significance is coming under increasing scrutiny and quickly becoming a more controversial statistic.
Here at Quantum Workplace, we currently don’t report p-values in our software for the sake of simplicity and statistical soundness. Below are five reasons why we avoid p-values; the first two illustrate our emphasis on simplicity, whereas the final three illustrate our emphasis on statistical soundness.
Instead of relying on statistical significance to determine whether a difference or change is important, we recommend focusing on relative differences or changes within your data. For example, say most survey questions increased in favorability since your previous engagement survey, but a few decreased. Relatively speaking, the questions that decreased in favorability are the most practically important to focus on, and those that decreased the most should receive the highest priority.
Likewise, if one department has especially low overall favorability, that department is the most important to focus on. Yet if all departments have similarly low favorability, that suggests a strong organization-wide effort is required.
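This kind of relative comparison is simple arithmetic. Here's a small sketch, with hypothetical question names and favorability scores invented for illustration, that ranks questions by their change since the previous survey so the biggest decreases surface first:

```python
# Hypothetical favorability scores (% favorable) from two engagement surveys.
previous = {
    "I feel valued at work": 72,
    "I trust senior leadership": 65,
    "I have opportunities to grow": 58,
    "My workload is manageable": 70,
}
current = {
    "I feel valued at work": 75,
    "I trust senior leadership": 60,
    "I have opportunities to grow": 52,
    "My workload is manageable": 71,
}

# Rank questions by change in favorability; the largest decreases come first.
changes = sorted(
    ((q, current[q] - previous[q]) for q in previous),
    key=lambda item: item[1],
)

for question, delta in changes:
    print(f"{delta:+d}  {question}")
```

No p-values involved: the questions that dropped the most ("I have opportunities to grow" at -6, then "I trust senior leadership" at -5) are simply the ones to prioritize.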
Statistical significance has become a convenient shortcut for a lot of decisions made across the world, and I don’t want to suggest that statistical significance should be completely abandoned. It does have its place, but that place isn’t in engagement survey software.
If you're as obsessed with numbers as I am, you can't afford to miss out on our free ebook, The New Era of Employee Engagement. It's the ultimate resource for gauging engagement.