User testing is a critical component of the product development process. You need your users’ feedback to shape your EdTech offering in a way that meets their needs and creates an enjoyable user experience.
But when it comes to following the best practices of effective UX testing, not just any users will do. The reliability of your tests depends on your ability to recruit the right mix of users at the right time. And those user requirements are liable to change from one testing cycle to the next. One of the challenges of successful user testing is figuring out how to put together a panel of users that meets your testing needs.
It can be difficult to find users who are willing to spend their valuable time giving your company feedback. For this reason, many EdTech companies cultivate a small base of test users and stop there. They go back to the same small well each time they need a new round of feedback, regardless of their testing goals. In addition, they often find that it’s easiest to recruit dedicated users with a high degree of product familiarity. The result is a panel of “power users” who are so experienced with the product they are ostensibly testing that they might as well be part of the product team.
What these companies often don’t realize is that in taking the path of least resistance, they may be skewing the results of their user testing. The result? Misinformed decisions that lead to poorly designed products.
Best Practices for Building the Right User Panel for UX Testing: The Openfield Approach
In our work with clients, we help EdTech companies put together the right panel of users to meet their testing needs. This is how we approach the process:
- Start by identifying the goals of your user testing. A single round of testing typically isolates a particular task, set of workflows, or area of a product. And that means user tests are always designed with a specific set of goals in mind beyond whether or not users simply “like” the product. Putting together the right panel of users for a round of UX testing begins with knowing the purpose of your testing. The next step is to connect those goals with the sorts of users who are best equipped to help you achieve them. For example, let’s say your product is used by both professors and students. If you plan to test a workflow used solely by professors, then one obvious criterion would be that all your test users should be professors, not students.
- Utilize screeners. At Openfield, we prepare simple surveys called screeners to identify the right users for each round of testing. The screeners ask questions about experience, demographics, and any other important factors on which your choice of users may be predicated. For example, screeners can be used to isolate instructors in a specific area of study, teachers with small versus large classes, or students majoring in a particular area who are relatively new to your product. This simple tool gives us a high degree of flexibility in strategically crafting a user panel. Screeners can be used to recruit new users. Or, in the case of EdTech companies with a larger existing pool of test users, they can be used to sort through those users and identify the best-fit group for the next round of testing.
- Keep your user base fresh. Your user panel doesn’t need to be large. According to the widely accepted rule championed by Jakob Nielsen of the Nielsen Norman Group, you can expect to identify the majority of usability issues with just five test users. If you test with more than five to ten users, you’ll hear the same feedback over and over again, effectively burning time and budget. You don’t need many users on any given test panel, but you do want to avoid over-relying on the same users time and again. This rule holds even for users who may have relatively little experience with your product. The more an individual user is involved with testing your product, the more your product will be tailored to meet their individual needs rather than the needs of the majority.
- Know — and leverage — your users’ level of product experience. When we put together a user panel for our clients, we always take each user’s level of experience into consideration. Each of your users falls along a spectrum of experience with your specific product. This ranges from first-timers all the way up to “power users” who are deeply familiar with your product and have ingrained expectations about how it works. Individual users move along this spectrum as they use your product over time, but users of all experience levels have the ability to share important insights about your product. It’s just that their insights need to be applied to the right situations. For example, if we’re working with a client to test a new onboarding process, we make sure to build out a panel of non-users and new users. In this case, we steer clear of power users, who simply can’t remember what it feels like to learn your product for the first time. If, on the other hand, we’re putting together a user panel to test a new feature or a change to a product’s core functionality, we make sure to include a mix of newer and more experienced users. Here, experienced users’ familiarity with the product is key to their ability to provide useful feedback. As your most experienced users graduate to “power user” status — meaning they are true product experts — they may become less helpful as test users in many situations. The good news, though, is that these users are often prepared to act as brand advocates and connectors. They can also be called on to help with beta testing or even making suggestions for new product features.
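If your screener responses live in a spreadsheet or database export, the sorting step described above can be automated with a few lines of code. The sketch below is purely illustrative; the field names and criteria are hypothetical examples, not part of any Openfield tooling:

```python
# Minimal sketch: filtering screener responses to assemble a test panel.
# All field names and criteria below are hypothetical examples.

def matches(response, criteria):
    """Return True if a screener response satisfies every criterion."""
    return all(response.get(field) == value for field, value in criteria.items())

def build_panel(responses, criteria, size=5):
    """Pick the first `size` respondents who fit the criteria."""
    return [r for r in responses if matches(r, criteria)][:size]

responses = [
    {"name": "A", "role": "professor", "subject": "biology"},
    {"name": "B", "role": "student", "subject": "biology"},
    {"name": "C", "role": "professor", "subject": "chemistry"},
    {"name": "D", "role": "professor", "subject": "biology"},
]

# Recruit only biology professors for this round of testing.
panel = build_panel(responses, {"role": "professor", "subject": "biology"})
```

The same pattern extends naturally to range-based criteria (class size, months of product experience) by swapping the equality check for a comparison.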
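The five-user rule mentioned above comes from Nielsen and Landauer’s finding that a single test user surfaces roughly 31% of a product’s usability problems, so expected coverage grows as 1 − (1 − 0.31)^n. A quick sketch of the math (the 31% figure is their published average; actual discovery rates vary by product and task):

```python
# Expected share of usability problems found with n test users,
# per the Nielsen/Landauer model: coverage = 1 - (1 - L)^n,
# where L is the average problem-discovery rate per user (~0.31).

def coverage(n, discovery_rate=0.31):
    """Fraction of total usability problems expected to surface with n users."""
    return 1 - (1 - discovery_rate) ** n

for n in (1, 3, 5, 10):
    print(f"{n:>2} users -> {coverage(n):.0%} of problems found")
```

Five users already surface roughly 84% of problems under this model, and the gains flatten quickly after that, which is why additional rounds with fresh users beat ever-larger panels.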
Putting together the right panel of test users can spell the difference between meaningful user feedback and wasted energy. Want Openfield’s assistance crafting a program of UX testing for your EdTech product? We’re happy to discuss how we can help.