Businesses *love* making surveys. They really do. Ask people a bunch of questions, aggregate the results, and you’ve got everything your great PowerPoint presentation needs. Upper management adores this kind of shit.

But there’s a problem, and its name is statistical significance.

Too often I’ve seen the results of surveys with extremely small sample sizes taken as gospel. Asking 5–10 people to answer questions on a Likert scale and then treating the data as valid is absolutely laughable. “The average here is 1.2, so action must be taken.” With small sample sizes, such claims are a) outright lies and b) deserving of absolutely no attention whatsoever.

Let’s use an example to illustrate why such results shouldn’t be taken seriously.

Say you are a university professor and want to ask students of your course to give a simple rating from one to five (one = strongly disagree, five = strongly agree) on the following statement:

“The course material provided me with all the information I needed to do well in the exam.”

Further assume that 100 students have enrolled in the course, and although you send the survey to 50 randomly chosen students, you only get ten responses.

In such a scenario, the population is the set of 100 enrolled students. The response rate of the survey is 10/50 = 20%. The sample size is 10 (the number of students that responded).

Let’s say we get the following responses:

- 1
- 5
- 3
- 3
- 4
- 5
- 4
- 3
- 4
- 4

Calculating the mean would give a rather respectable 3.6, enough for the professor in question to give himself or herself a pat on the back.

Or is it?

Let’s calculate some descriptive statistics. I cheated and used an online tool:

- Mean: 3.60
- Min: 1
- Max: 5
- Standard deviation: 1.17
- Confidence interval (95%): 2.76 to 4.44
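These numbers are easy to reproduce without an online tool. Here’s a quick sketch in Python using only the standard library; the t critical value (2.262, for 9 degrees of freedom at 95% confidence) is hard-coded to avoid pulling in SciPy:

```python
import math
from statistics import mean, stdev

# The ten Likert responses from the survey above.
responses = [1, 5, 3, 3, 4, 5, 4, 3, 4, 4]

n = len(responses)
m = mean(responses)   # 3.6
s = stdev(responses)  # sample standard deviation, ~1.17

# 95% confidence interval for the mean, using the t-distribution
# (two-tailed critical value for n - 1 = 9 degrees of freedom).
t_crit = 2.262
margin = t_crit * s / math.sqrt(n)

print(f"mean = {m:.2f}, sd = {s:.2f}")
print(f"95% CI: {m - margin:.2f} to {m + margin:.2f}")  # 2.76 to 4.44
```

With only ten responses, the margin of error works out to roughly ±0.84 on a five-point scale, which is exactly why the interval is so uselessly wide.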

It’s the last statistic that is the most interesting. Roughly speaking, it says that if we ran the survey many times on different random samples and computed an interval like this each time, about 19 out of 20 of those intervals would contain the true mean of all 100 students. Given our question, the only conclusion we can draw is that the course material may, at worst, be slightly poor (2.76/5.0) or, at best, very good (4.44/5.0). That may sound like a stupid conclusion, but that’s because it is. Intuitively, it tells us jack shit.

On second thought, it does tell us something valuable. It tells us to pay the survey results *no mind whatsoever*. They are useless. Our professor should know better.

So why are people so keen on making surveys? I don’t know. Maybe it has to do with them being seemingly easy to make. Maybe it’s because people hope to get a large sample, thereby gleaning statistically significant information. Maybe it’s because people don’t know that it’s a) entirely possible and b) advisable to calculate the sample size you’ll need to obtain a statistically significant result *before* carrying out the survey.
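For illustration, here’s a minimal sketch of that pre-survey calculation in Python. The function name and the planning numbers (a guessed standard deviation of 1.2, which is plausible for a five-point Likert scale, and a target margin of ±0.25) are my own assumptions, not anything from the example above; for a small population like our 100 students you’d also want to apply a finite-population correction:

```python
import math

def required_sample_size(sigma, margin, z=1.96):
    """Minimum n so that a z-based confidence interval at the given
    confidence level (default 95%, z = 1.96) is no wider than ±margin.
    sigma is a guess at the population standard deviation."""
    return math.ceil((z * sigma / margin) ** 2)

# Hypothetical planning numbers: sd around 1.2, margin of ±0.25.
print(required_sample_size(1.2, 0.25))  # 89
```

Ten responses, in other words, falls laughably short of the dozens you’d need for even a modest margin of error.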

Or maybe it’s because although the high-level statistical concepts are easy to grasp, statistical methods (both their inner workings and their purpose) are much more complicated. I must confess that although I’ve studied statistics for years, much of it still baffles me. I’ve had the honour of working with many great data scientists and statisticians, and I’m still in awe of how naturally concepts like confidence intervals, null hypotheses, t-tests, degrees of freedom and z-scores come to them. I have to work tremendously hard at it, and the more I learn, the more like an impostor I feel.

My statistics professor once told me that statistics are everywhere, and once you realise that you’ll understand just how important they are.

He wasn’t wrong.