ARTICLE: Sarah Freitag

How to use baseline UX metrics to fine-tune your EdTech product plan

As an EdTech executive, you’re always tracking your product’s big-picture performance, from sales to customer satisfaction. And when it comes to improving your product, your goal is to focus your team’s efforts on initiatives that will yield the highest impact. To that end, you most likely measure your users’ overall experience with your product using a standardized UX benchmarking scale such as the System Usability Scale (SUS).

A single “full-product” measurement is a great place to start. But it doesn’t offer the full story. If you want to understand which individual elements are contributing positively to your overall score and which are the weak links, you must establish baseline metrics for each of your product’s key tasks.

Establishing task-level baseline metrics is the only way to properly prioritize your UX team’s efforts moving forward. For example, if you score high on assigning work but low on tasks related to setup, you almost certainly need to focus on onboarding — not adding new assignment features.

But interpreting baseline metrics isn’t as straightforward as it might seem. Here’s what you need to know to ensure that you use these metrics to maximize the ROI of your efforts.

Using the Single Ease of Use Question (SEQ) to Establish Baseline Metrics

The Single Ease of Use Question (SEQ) is a quick and easy metric that UX researchers frequently use. It is a much shorter alternative to lengthier, questionnaire-based assessments such as the System Usability Scale (SUS). The SEQ is incredibly simple: it asks users to rate how easy a task or product is to use on a seven-point scale, with one being “very difficult” and seven being “very easy.”

UX researchers have found that the average SEQ score across products and tasks is roughly 5.5. Because of that, a score above 5.5 is generally considered strong.

Chances are good that you already use the SUS to measure your product’s overall success among users. Using the SUS in this way offers a quick snapshot of your product’s usability. But it doesn’t tell you what, exactly, accounts for your score. You can’t fully understand your product’s overall SUS rating (no matter how good or bad) without first measuring the key components that make up your product.

The solution is to use the SEQ to systematically measure the baseline usability of each of the key tasks that make up your product. Establishing these baseline metrics will help you uncover areas of strength and opportunity within your product.

As an added bonus, doing this also enables you to measure how future work contributes to your product’s overall user experience as scores change over time. That gives you the ability to tie your UX team’s initiatives directly to an improved UX score, and to measure your UX team’s ROI more concretely.

Once you’ve identified your baseline metrics, you can create a plan to improve your product’s overall usability by strategically targeting your efforts toward lower-scoring tasks.
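The mechanics of establishing those baselines are straightforward. As a minimal sketch (the task names and SEQ responses below are hypothetical, and the 5.5 cutoff is the commonly cited cross-industry average mentioned above), you might compute each task’s mean SEQ score and sort weakest-first to see where remediation effort would go:

```python
from statistics import mean

# Hypothetical SEQ responses (1-7) collected per key task.
seq_responses = {
    "create_assignment": [6, 7, 6, 5, 7],
    "course_setup": [3, 4, 5, 3, 4],
    "grade_submissions": [5, 6, 6, 5, 5],
}

BENCHMARK = 5.5  # commonly cited cross-industry SEQ average


def baseline_report(responses, benchmark=BENCHMARK):
    """Return each task's mean SEQ score and whether it clears the benchmark."""
    report = {}
    for task, scores in responses.items():
        avg = round(mean(scores), 2)
        report[task] = {"score": avg, "above_benchmark": avg >= benchmark}
    return report


report = baseline_report(seq_responses)
# Sort weakest-first: a naive candidate ordering for remediation work.
priorities = sorted(report, key=lambda t: report[t]["score"])
```

As the next section argues, this weakest-first ordering is only a starting point, not a finished remediation plan.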

Simple, right? Not so fast.

Adapting Baseline Metrics to Create a Custom Remediation Plan

Creating a remediation plan isn’t as easy as prioritizing tasks in order of their SEQ scores below the 5.5 cutoff. And it’s likewise unrealistic to demand that all tasks across your product achieve the same high score. Setting this expectation won’t just hurt your team’s morale. It could also cause them to expend a great deal of effort and resources without netting a proportionate gain in your overall product experience.

The bottom line is that you must learn to look at baseline metrics with a critical and realistic eye. After all, not all tasks are equally easy to complete. In order to make the best use of these measurements, you must dig deeper into the data. Rather than blindly following standardized recommendations, you’ll need to identify what constitutes an acceptable score for each task.

Standardized UX Metrics Aren’t EdTech-Specific

UX researchers rely on many standardized measurements to understand usability, from SEQ and SUS to Affect Grids and Net Promoter Scores. These universal scales are useful precisely because they come “prepackaged” with established baselines.

However, it’s important to understand that these baselines aren’t industry-specific. They represent an average across a wide range of industries, products, and tasks. This matters because many of the complex tasks commonly found in EdTech products are inherently more difficult than those reflected in the standardized baseline. For example, consider the process of setting up a course and creating assignments for students. That’s far more complicated than ordering a household item or booking a hotel room online. No amount of UX research and design can “fix” that.

The upshot? An especially complex task within your product may never achieve a high score when compared with the standardized metric’s baseline. However, compare it to competitors’ scores on similar tasks, and a very different picture may emerge. In other words, when complex tasks in your product fall below the benchmark for success, you shouldn’t ignore it. But for certain tasks, that “bad” score doesn’t necessarily mean poor performance.

Each task should be considered individually. A task’s complexity and centrality to your product — in combination with industry benchmarking — should help you determine a reasonable target score.
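One way to operationalize this is to prioritize by the gap between each task’s observed score and a task-specific target, rather than by raw score against the global 5.5 average. The targets and scores below are purely illustrative assumptions, not established benchmarks:

```python
# Hypothetical per-task targets set from complexity and competitive context,
# instead of the one-size-fits-all 5.5 benchmark.
task_targets = {
    "create_assignment": 6.0,   # core, frequent task: hold a high bar
    "course_setup": 4.5,        # inherently complex: a lower target is realistic
    "grade_submissions": 5.5,   # typical task: the global benchmark is fine
}

# Illustrative observed mean SEQ scores for the same tasks.
observed = {"create_assignment": 6.2, "course_setup": 3.8, "grade_submissions": 5.4}

# Gap to target, not raw score, drives prioritization.
gaps = {task: round(task_targets[task] - score, 2)
        for task, score in observed.items()}
needs_work = sorted((t for t, g in gaps.items() if g > 0),
                    key=lambda t: gaps[t], reverse=True)
```

Note how this framing changes the conclusion: a task can sit below the global 5.5 average yet still be near its realistic target, while a task comfortably above 5.5 might still lag its competitive bar.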

How to Apply EdTech-Specific Expertise to UX Metrics

If you’re like many EdTech companies, you probably don’t have concrete data on how your competitors’ products perform at the task level. Yet setting the proper usability targets for key tasks within your own product depends on exactly that kind of data.

Your best bet? Partner with a UX agency that’s laser-focused on your industry. At Openfield, we specialize in EdTech. When it comes to user experience, we have a bird’s-eye view of how products within the industry perform. We know what constitutes best-in-class for many individual task types. As a result, we can do more than just identify your task-by-task product baseline. We can also advise you on how best to prioritize your UX initiatives once you have that information. In addition, we can work with you to perform a competitive analysis of key tasks within your product to determine what level of effort would push you ahead of the pack. In doing so, we can steer you away from throwing good money after bad and point you toward the most impactful UX efforts.

Standardized UX metrics are incredibly useful. But only by taking a more nuanced view of them can you properly prioritize your roadmap and set realistic product goals. Want to learn more about how Openfield can help you craft a strategic plan to boost your product’s usability? We’d love to talk.

Sarah Freitag

    As Director of UX Research, Sarah draws on her deep understanding of EdTech users and her background in research, design and business strategy to enable our clients to make confident decisions that result in products that solve real needs and create demonstrable impacts on their businesses’ bottom lines. Like her design-side counterpart at Openfield, Sarah is responsible for fostering collaboration and team development, and for bringing in new strategic initiatives and methodologies that keep our company ahead of the curve on what EdTech users truly need to realize higher levels of learning and teaching success. Sarah is an avid reader and an adventurous explorer. Highlights from her favorite travels include Morocco, Peru, Italy, Denmark and France. With the pandemic-induced reduction in travel, she fulfills her wanderlust through another of her passions, cooking and baking, by experimenting with recipes inspired by cultures around the world.
