By Sarah Freitag

Avoid these mistakes to craft effective UX research surveys and improve your EdTech product

Surveys are critical tools for UX researchers. In the EdTech space, they can be used to collect standardized feedback about your users’ needs as well as your product’s usability. Yet not all surveys are equally useful. The quality of your surveys, which depends on how they are written and structured, can significantly impact the value of your findings. 

Unfortunately, the ability to craft clear, effective, and unbiased surveys is a skill not all product teams (or even UX firms) possess. That’s a big problem. After all, if you unknowingly collect bad data — data that is unclear, exclusionary, incomplete, or apples-to-oranges — you could take away erroneous findings and make ill-informed decisions. 

At Openfield, we understand the most common problems that plague UX research surveys, and we’ve developed a clear set of best practices that enable product teams to glean crucial insights with each round of research. This is what we’ve learned about what to avoid in UX research surveys — and how to do them right. 

UX Research Survey Pitfalls: 7 Mistakes to Avoid 

Before discussing Openfield’s approach, let’s pause to consider the most common pitfalls that plague poorly designed UX research surveys. 

1. Vague language 

Survey questions should be as clear and specific as possible. If they are vague, overly technical, or otherwise difficult to understand, your respondents’ feedback is bound to be much less reliable. 

Let’s say you prepare a survey question that asks respondents to “rate how satisfied you are with the tools you use in your classroom.” What you really want to know about are the EdTech tools your survey participants use. But for all they know, you could be asking about their analog tools, general digital tools (like email and Zoom), or something else entirely. Because different respondents will almost certainly interpret your unclear question differently, your data ends up as vague and indecipherable as your survey question.

2. A poorly structured rating scale

The way you structure your rating scale is as important as how you word your questions. Your rating scale’s structure includes: 

  • The number of points on the scale 
  • How each point on the scale is defined 

If your scale includes too many numbers, or if the points themselves are poorly defined, respondents may get confused. For example, if you don’t define the middle point on a scale, some respondents may assume it means “both.” Others may interpret it as a midway point on a spectrum, and still others may think it means “neutral” or “I don’t care.” 

In general, five-to-seven-point scales work well so long as you define the end and middle points. Ten-point scales and above offer relatively little additional value while requiring significantly more mental effort on the part of respondents.
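
To make that concrete, here is a hypothetical sketch of a five-point scale with defined end and middle points. The labels and the small check below are our own illustration, not a required format:

```python
# Purely illustrative: a five-point satisfaction scale whose end points
# and middle point are explicitly labeled, so respondents don't have to
# guess what the middle of the scale means.
satisfaction_scale = {
    1: "Very dissatisfied",
    2: "",  # intermediate points can remain unlabeled
    3: "Neither satisfied nor dissatisfied",
    4: "",
    5: "Very satisfied",
}

def scale_is_well_defined(scale):
    """Return True if the lowest, middle, and highest points all carry labels."""
    points = sorted(scale)
    low, middle, high = points[0], points[len(points) // 2], points[-1]
    return all(scale[p].strip() for p in (low, middle, high))

print(scale_is_well_defined(satisfaction_scale))  # True
```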

3. Biased questions 

Biased survey questions may lead respondents to select answers they wouldn’t ordinarily choose — or unintentionally exclude respondents altogether.

For example, let’s say your survey includes a question about users’ education level. If you include “some college” as the only possible response between “high school diploma” and “four-year degree,” you may be demonstrating a bias toward four-year degrees. As a result, you effectively “erase” respondents with technical degrees or professional certifications.  

4. Questions with multiple variables

Each question in your survey should address just a single variable. If a question bundles more than one variable or assumption, your respondents will be forced to choose which part of it to base their answer on.

For example, you might ask, “How satisfied are you and your students with X product’s onboarding experience?” Unless both user groups happen to feel exactly the same way about your product’s onboarding, your survey respondents must now answer based on one or the other — without a way to specify whose experience they are rating.

The result? Unreliable data — and an unclear picture of what your users want and need from your product. 

5. Answers that force respondents to pick a side 

Beware of constructing binary or multiple-choice questions that force users to “pick a side” that isn’t truly representative of their experience.

Without an “other” or write-in field, your survey data may be less meaningful than you think. 

6. Non-inclusive elements 

Like EdTech products themselves, UX research surveys should be designed with accessibility and inclusivity in mind. That mandate pertains to your survey’s content and format. Yet without careful attention, many surveys inadvertently include accessibility traps, from a lack of screen reader metadata to emoji rating scales. 

It goes without saying: If your survey isn’t accessible to all your respondents, it’s not going to yield results that represent your full range of users. 

7. Too many questions

It can be tempting to ask a multitude of questions in a single survey. After all, you’ve already got your respondents’ attention; why not make it count? 

Unfortunately, overly long surveys breed boredom and frustration. More than that, as your respondents’ attention wanes, they become less and less likely to give careful thought to each subsequent question.

How to Write Effective UX Research Survey Questions

Knowing the pitfalls of poorly written UX surveys is the first step in avoiding them. Beyond that, when we partner with product teams to develop surveys, we use a number of tactics to ensure they are clear, effective, bias-free, and inclusive.

These tactics include:

  • Running reading level checkers. When we write surveys, we use reading level checkers to make sure the questions don’t go above an eighth-grade reading level. This is a good way of making sure all respondents — including non-native English-speaking users — can clearly understand and appropriately respond to the survey. (A simple sketch of how this check can be automated appears after this list.)
  • Considering participants’ feelings. We use empathy to craft questions that are sensitive to respondents’ feelings and lived realities. For example, when we want to ask respondents their gender, we make sure to include a “self-describe” option in addition to “male” and “female.” Using “other” as an alternative option in this instance may be technically accurate. But it would also be insensitive and potentially hurtful to certain respondents. 
  • Testing the survey itself. Just as we employ user testing to improve EdTech products, we often use testing to refine and optimize a survey before sending it to a full group of respondents. To do this, we send the survey to a small subset of participants first. Then, we ask them to weigh in on whether any of the questions or scales were confusing or frustrating. (We may pay respondents extra for this additional information.) With their feedback in hand, we can refine and optimize the survey before sending it out to the broader population. 
  • Applying inclusive design best practices. Accessibility isn’t a fringe user need. It’s a mandate across all digital products (and EdTech in particular). At Openfield, we fold inclusive design best practices into everything we do — including research surveys. For example, drag-and-drop surveys (in which users are asked to order a series of responses by moving them around the screen) can be incredibly challenging for respondents who don’t use a mouse or have certain disabilities. When we use drag-and-drop surveys, we always include the side-by-side option of simply ordering the answers numerically using text boxes. 
  • Reducing the cognitive load required to complete the survey. We help clients craft surveys that are short and varied enough to capture and retain respondents’ limited attention. For example, we know from experience that keeping a survey under ten minutes leads to better engagement. We also know that too many of the same kinds of questions (especially short answer questions or questions that employ a ten-point scale) can fatigue users. We most commonly use five-to-seven-point scales, which allow researchers to collect nuanced answers without adding so many options that responding demands excessive thought and mental effort.
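
As promised above, here is a minimal sketch of how a reading level check might be automated. It assumes the open-source textstat Python library; the sample questions and the eighth-grade threshold check are our own illustration rather than part of Openfield’s actual toolkit:

```python
# Minimal sketch: flag survey questions that read above an eighth-grade
# level using the open-source textstat library (pip install textstat).
# The sample questions and threshold are illustrative only.
import textstat

MAX_GRADE = 8  # target reading level from the guidance above

questions = [
    "How satisfied are you with the digital tools you use for teaching?",
    "To what extent does the platform's interface facilitate pedagogical differentiation?",
]

for question in questions:
    grade = textstat.flesch_kincaid_grade(question)
    status = "Revise" if grade > MAX_GRADE else "OK"
    print(f"{status} (grade {grade:.1f}): {question}")
```

In practice, a check like this can run over every draft question before the survey goes out for pilot testing.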

Want to learn more about how Openfield can help you conduct UX research that takes your EdTech product to the next level? Let’s be in touch.

  • Sarah Freitag

    As Director of UX Research, Sarah draws on her deep understanding of EdTech users and her background in research, design, and business strategy to enable our clients to make confident decisions that result in products that solve real needs and create demonstrable impacts on their businesses’ bottom lines. Like her design-side counterpart at Openfield, Sarah is responsible for fostering collaboration and team development, and for bringing new strategic initiatives and methodologies that allow our company to stay ahead of the curve of what EdTech users truly need to realize higher levels of learning and teaching success. Sarah is an avid reader and an adventurous explorer. Highlights from her favorite travels include Morocco, Peru, Italy, Denmark, and France. With the recent pandemic-induced reduction in travel, she makes it a point to fulfill her wanderlust with another of her passions, cooking and baking, by experimenting with recipes inspired by cultures around the world.
