I have a confession to make.
Despite two years of surveying thousands of respondents for our clients, I still fire up Google in the hope that it will one day tell me how to conduct surveys the right way. Admittedly, this is mostly ambition talking: I would love to one day see a 100 percent response rate or a raw dataset with no missing values. In the meantime, though, I’ll settle for actionable insights that make each iteration of our surveys slightly better.
Fortunately, the folks over at Partially Derivative, a podcast about data science in the world around us, went straight to the source for us. In their episode titled “Your Surveys Suck”, Chris Albon interviewed Professor Frauke Kreuter, a survey methodology expert at the University of Maryland, about conducting better surveys.
By and large, her biggest piece of advice was:
“It should feel easy”
My take on that is “It should tell a story.” That sounds even more vague than the advice you normally find online, but the magic is in the simplicity of the message. Focusing on just this one idea makes it sticky. Even if you forget the rest of this post, I have faith that you’ll remember this and will likely even spread it to others.
How do you make your surveys feel easy, though? Here are some suggestions:
- Start with an ‘awesome question’. Too often we open with demographic questions, when Prof. Kreuter says we should instead start with something the respondent is interested in and wants to tell you about.
- Keep it short. We’ve all heard this one before, but somehow, after input from all parties, we always end up with a longer survey than we originally intended. Prof. Kreuter recommends avoiding this by mocking up what the analysis will look like, including tables and graphs. Whatever is not in the mockup is likely not needed in the survey.
- Make sure your questions are clear. People need to understand your questions without a back-and-forth with you. To avoid a flood of confused respondents, my colleagues and I often run pretests in which respondents are asked whether they understand the questions. Prof. Kreuter also recommends asking pretest respondents to repeat each question in their own words. Note any signs of confusion or hesitation, as well as the time it takes to answer each question.
- Stay away from grid questions. Prof. Kreuter means it. Large grids often force the respondent to scroll the answer options, and sometimes the question itself, out of view. In my case, I always end up closing the survey.
- Avoid “Other” answer options. Prof. Kreuter believes that having to supply your own answer beyond the choices offered is too much for most people. However, if you truly feel you haven’t captured all the possibilities, then by all means include it. Just use a small text box so respondents are not pressured to write a lot or, even worse, skip it altogether.
- Treat non-responses as the lowest score. Keystone has found that telling respondents that not responding amounts to giving the lowest score communicates that this is not just another survey. It gives true detractors an easy way to say how they feel, and it nudges those on the fence to participate. (A sketch of what this scoring rule can look like at analysis time follows this list.)
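Here is a minimal sketch of that scoring rule, assuming a pandas workflow; the item name and the 1-to-5 scale are hypothetical examples of mine, not Keystone’s:

```python
import pandas as pd

# Hypothetical 1-to-5 satisfaction item; None marks a non-response.
responses = pd.Series([5, 4, None, 3, None, 2], name="satisfaction")

# Score every non-response as the lowest point on the scale,
# per the rule above, so silent detractors count too.
scored = responses.fillna(1)

print(scored.mean())  # average now includes the imputed lowest scores
```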
It should also be easy for your team to glean insights from the survey. A few simple ways to do this include:
- Allow for the opposite. We all know we should avoid leading questions like the plague, but what does that really mean? Prof. Kreuter says we should continuously push for balance, not just in the questions but in the answer categories as well. As you review your questions, make sure respondents can give answers on opposite sides of the spectrum.
- Randomize. Prof. Kreuter recommends randomizing the answer categories whenever possible. People tend to be short on time, so they often simply select the first answer. If you can’t randomize, look for patterns within individuals in the data (see the first sketch after this list). Keystone also recommends checking data quality by looking at responses over time: only after data has proven consistent over a number of survey cycles, and been validated through dialogue, do we begin to treat it as reliable.
- Include ‘Don’t Know’ or ‘Does Not Apply’. My colleagues and I have found that without these options we often end up with fake answers from respondents who think they have to answer every question. These options also make it easy to exclude such responses from the analysis (see the second sketch below).
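A minimal sketch of the randomization advice, plus one simple within-individual pattern check (“straight-lining”), assuming a Python workflow; all option labels and scores here are hypothetical:

```python
import random
import pandas as pd

options = ["Strongly agree", "Agree", "Neutral",
           "Disagree", "Strongly disagree"]

# Show each respondent the categories in a fresh random order,
# so the first-listed option gets no built-in advantage.
print(random.sample(options, k=len(options)))

# If you can't randomize, check for within-individual patterns instead.
# Hypothetical grid: one row per respondent, one column per item (1-5).
grid = pd.DataFrame({"q1": [5, 3, 4, 2],
                     "q2": [5, 4, 4, 2],
                     "q3": [5, 2, 4, 2]})

# Flag "straight-liners" who gave the identical answer to every item.
print(grid[grid.nunique(axis=1) == 1])
```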
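And a minimal sketch of keeping ‘Don’t Know’ and ‘Does Not Apply’ out of the numeric analysis, again with hypothetical data:

```python
import pandas as pd

# Hypothetical raw answers to a 1-to-5 item that also offered escape options.
raw = pd.Series([4, 5, "Don't know", 3, "Does not apply", 2])

# Coerce the escape options to NaN and drop them, so they never
# masquerade as real scores in the averages.
numeric = pd.to_numeric(raw, errors="coerce").dropna()

print(numeric.mean())  # mean over substantive answers only
```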
What are other ways that your organization has made surveys easier on your respondents?