User testing is the key to creating an EdTech product your users will love. That’s the whole reason you do it in the first place: to uncover critical user insights and leverage them to make smart, user-centric design decisions. But the truth is that simply conducting the research — even well-designed research — isn’t enough. In order to get meaningful takeaways from your research findings, you must apply the right level of statistical rigor.
Statistics are essential to interpreting your UX research data. Without these calculations, you can’t know whether your research findings are reliable or statistically significant (that is, unlikely to be explained by chance alone), and you can’t know how they might apply to your larger pool of users. In other words, without statistics to back your research up, you might as well be taking a stab in the dark: user research without statistics is about as useful as no user research at all.
The Role of Statistics in EdTech User Testing
Let’s say you survey 20 people about a new feature you’re developing. How do you know whether your findings are actually representative of your broader user base? And what do they mean for the 20,000 users you couldn’t talk to? With what degree of confidence can you be sure that your audience will respond a certain way?
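To make that last question concrete, here is a minimal sketch (in Python, with invented numbers) of the kind of calculation involved: a Wilson score interval around a proportion observed in a 20-person survey. The 14-out-of-20 result below is purely hypothetical.

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for a proportion (z = 1.96 for ~95% confidence)."""
    p_hat = successes / n
    denom = 1 + z**2 / n
    center = (p_hat + z**2 / (2 * n)) / denom
    half_width = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return center - half_width, center + half_width

# Hypothetical survey: 14 of 20 users said they would use the new feature.
low, high = wilson_ci(successes=14, n=20)
print(f"Observed: 70%; 95% CI: {low:.0%} to {high:.0%}")
# With only 20 respondents the interval is wide (roughly 48% to 85%),
# so the true rate among all 20,000 users could plausibly be closer to half.
```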
Statistics are the only way to answer those and other important questions. In particular, statistics allow UX researchers to:
- Put together a right-sized, statistically representative panel of users for testing activities (a sample-size sketch follows this list).
- Show whether the data collected are reliable (meaning similar results would likely be produced under the same conditions with a different set of users).
- Verify that UX research findings are statistically significant (meaning they are unlikely to be attributable to chance alone).
- Understand the amount of variability within a data set using standard deviations.
- Recognize if additional research is needed (for example, when findings are unclear, unreliable, or not really representative of your users).
- Predict how research findings might apply at scale, within a larger population.
- Determine the degree of confidence with which they can make specific predictions.
- Make informed design decisions and weigh the risks involved with each choice.
- Prioritize features, design options, or other elements based on findings.
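On the first point above, a common back-of-the-envelope formula estimates how many users you need to reach a given margin of error. The sketch below is a rough illustration only, assuming a simple random sample and a proportion-style metric; the 20,000-user population and the ±10% margin are hypothetical.

```python
import math

def sample_size(margin_of_error: float, population: int,
                p: float = 0.5, z: float = 1.96) -> int:
    """Sample size for estimating a proportion at ~95% confidence (z = 1.96).

    Uses p = 0.5 as the most conservative assumption and applies a
    finite-population correction so small user bases aren't over-sampled.
    """
    n0 = (z**2 * p * (1 - p)) / margin_of_error**2    # infinite-population estimate
    n = n0 / (1 + (n0 - 1) / population)               # finite-population correction
    return math.ceil(n)

# Hypothetical: how many of 20,000 users for a ±10% margin of error?
print(sample_size(margin_of_error=0.10, population=20_000))  # about 96
```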
Raw data often appears to tell a certain story. But only with statistics can you really know what your data is telling you. For example, let’s say you want to compare two designs for a new feature. After showing the two options to a panel of users, you see a 75% task success rate with option A and a 50% rate with option B. Seems like a no-brainer, right?
Not so fast.
After performing a few statistical calculations, you discover that the gap could easily be due to chance, and that the two options are likely to perform about the same at scale. Rather than a no-brainer, you have a neck-and-neck race. At that point, you have two paths forward: conduct more research to see whether option A or B emerges as a clear winner, or decide between the two based on other criteria, such as the cost or difficulty of implementing them.
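As a rough illustration of the kind of check involved, here is a sketch using Fisher’s exact test (via SciPy) on an invented panel of eight users per design. The counts are hypothetical; the point is how little a 75% vs. 50% split can mean at this sample size.

```python
from scipy.stats import fisher_exact

# Hypothetical panel: 8 users tried each design.
# Option A: 6 of 8 completed the task (75%); option B: 4 of 8 (50%).
table = [[6, 2],   # option A: successes, failures
         [4, 4]]   # option B: successes, failures

_, p_value = fisher_exact(table, alternative="two-sided")
print(f"p-value: {p_value:.2f}")  # about 0.61, far above the usual 0.05 threshold

# With a panel this small, a 75% vs. 50% gap is entirely consistent with
# the two designs performing about the same at scale.
```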
Another example: Let’s say you run a usability test on a new feature with ten users. Your panel gives the feature an average score of 5 out of 7. Sounds pretty good, huh? But after running the statistics, you find that your panel isn’t as representative of your larger user base as you thought, and the uncertainty around that score is large. In fact, if you asked every one of your users to assess the same feature, the average could well be a measly 2 out of 7. Back to the drawing board.
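A related sketch, again with invented numbers: a 95% confidence interval around the mean rating from a 10-person panel. This only reflects sampling noise, not whether the panel itself is representative, but it shows how much uncertainty ten responses leave.

```python
import statistics
from scipy import stats

# Hypothetical ease-of-use ratings (1-7 scale) from a 10-person usability test.
ratings = [7, 6, 5, 6, 4, 5, 7, 3, 4, 3]   # mean = 5.0

mean = statistics.mean(ratings)
sem = statistics.stdev(ratings) / len(ratings) ** 0.5   # standard error of the mean
low, high = stats.t.interval(0.95, len(ratings) - 1, loc=mean, scale=sem)

print(f"mean = {mean:.1f}, 95% CI = ({low:.1f}, {high:.1f})")
# With only 10 users the interval spans roughly 3.9 to 6.1, so a "5 out of 7"
# headline number says less about the full user base than it first appears.
```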
The Risks of Not Applying Rigorous Statistics to Your UX Research
Shortchanging statistics in your UX research is never a good idea. Do that, and you’re much more likely to:
- Waste your investment in UX research. Most EdTech companies devote significant resources to user testing and other UX research activities. Without rigorous statistics, those resources go to waste: the research simply isn’t productive, and it’s about as useful as doing no research at all (while being far more expensive and time-consuming).
- Make misinformed decisions. Raw data can be misleading. If you make decisions based on it alone, you may wind up designing a bespoke product that is just right for your small group of test users and all wrong for your broader market. By running statistics, your team can gauge how reliable your data is and how much risk each decision carries. With those and other details in hand, your team can make the right decisions for your users.
- Lose time to market — and market share. Without statistics, you’re much more likely to invest your energy and resources in the wrong places. On top of that, you may create a product that doesn’t really meet your users’ needs (all while believing you are doing the opposite).
Statistics Are Necessary, But They Aren’t Silver Bullets
Statistics are incredibly useful. But they can’t tell you exactly what to do or guarantee success given a particular outcome.
Remember, because you can never survey or collect data from every single one of your users, you can never fully prove that your audience will respond in a particular way. You simply can’t be 100% certain. Statistics let you attach probabilities to the data you do have so you can make calculated predictions. But they are still just that: predictions.
A talented statistician will give you everything you need to make an informed decision. However, you’ll still need to weigh the level of risk and investment against the probability of a successful outcome. Of course, that’s worlds better than the alternative.
By applying statistics to your research data, you can protect your UX research investment — and make the (almost certainly) right decisions for your product.