Those Low Response Rates

Surveys are a double-edged sword. It is useful, irresistibly so, to be able to assert something that is supported by survey results. Most of us are enthralled by this siren call; I certainly am!

But at the same time, surveys are riddled with two kinds of distortions. These have various technical names but boil down to two simple things: accuracy and representativeness. First, in surveys we tend not to give “honest” responses. It is not so much that we are lying or deceiving (though that happens) as that we consciously and unconsciously shade, interpret, and answer from our unique psychologies, our context of one. The result is that survey answers can often be shown to be at odds with subsequent behavior. Second, no one answer, even an accurate one, can represent the entire population being surveyed. For that you would need to collect responses either from everyone or from a subset that accurately reflects everyone’s views.

These distortions are so serious that there is a large field of social research dedicated entirely to solving them. This is Nirvana for Academics. And it is something of a racket. Academics invented a tool – the survey – that is both addictive and flawed. Then they sell us their fixes for the flawed tool. Good work if you can get it!

At Keystone we run a lot of surveys, but we have turned the social research mindset on its head. Instead of seeing imperfect and unrepresentative survey answers as problems to overcome, we leverage them as assets. And in so doing, we challenge the way social science thinks about rigor.

Our Constituent Voice™ (CV) method uses surveys as a form of engagement and exploration. Sure, surveys generate data, but primarily as a stepping-stone to dialogue. We know our survey data is likely to be imperfect, so we acknowledge that fact and build on it. Our surveys are not extractive; they are part of a closed feedback loop: “We asked you. You told us. This is what we heard, so let’s explore together what it might mean, and agree on how each of us should respond.”

In CV, we earn our way to survey evidence quality over time. Only after data has been shown to be consistent over a number of survey cycles (we prefer micro-surveys of two to three questions asked at regular touch points) and has been validated through dialogue do we begin to treat it as reliable in itself. Data collected and used consistently in this way is much more likely to be accurate and representative than data collected in a single research event.
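As a minimal sketch of what “consistent over a number of survey cycles” could look like in practice, here is one hypothetical stability check. The 0-to-10 scale, the three-cycle window, and the half-point tolerance are all my illustrative assumptions, not Keystone’s published criteria:

```python
# Hypothetical sketch only: the 0-10 scale, three-cycle window, and
# 0.5-point tolerance are illustrative assumptions, not CV's actual rules.

def is_stable(cycle_means: list[float], window: int = 3, tolerance: float = 0.5) -> bool:
    """True when the last `window` cycle means agree within `tolerance`."""
    if len(cycle_means) < window:
        return False  # too few cycles yet: the data has not earned trust
    recent = cycle_means[-window:]
    return max(recent) - min(recent) <= tolerance

# Mean score per micro-survey cycle (say, one 0-10 question at each touch point)
cycles = [6.1, 7.4, 7.2, 7.3, 7.5]
print(is_stable(cycles))  # True: the last three cycles agree within 0.5 points
```

The point of a rule like this is simply that a single survey event never qualifies; only repetition plus dialogue does.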

Which brings me to the title of this piece: those low response rates. For CV, response rates are an uber-indicator. After all, if your respondents understand that the survey is a meaningful part of how you work with them, and they don’t answer, that is a direct signal of what they think of you. Sure, a few people may have meant to respond but just didn’t get to it. But in the main, if they understand this is not “just another survey”, they are sending you a signal.

One great way some organizations bake this into their feedback scores is to count a non-response as the lowest score, say 0. They make sure respondents understand that not responding means giving the lowest score. This approach, combined with careful reporting back to respondents on survey results and corrective actions, leads in time to response rates consistently above 67 percent, with the superstars breaking 90 percent.
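To see why this changes incentives, here is the arithmetic as a short sketch; the 0-to-10 scale and the figures are made up for illustration:

```python
# Illustrative arithmetic only: the 0-10 scale and figures are invented.

def feedback_score(responses: list[float], surveyed: int) -> float:
    """Average over everyone surveyed, counting each non-response as 0."""
    return sum(responses) / surveyed

responses = [8.0] * 70              # 70 of 100 people respond, averaging 8.0
surveyed = 100
print(feedback_score(responses, surveyed))  # 5.6, not 8.0
print(len(responses) / surveyed)            # response rate: 0.7
```

At a 70 percent response rate, an average of 8 among respondents reports as 5.6, so the cost of silence shows up in the headline number and the organization has a built-in reason to close the loop with non-responders.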

For most organizations, the transition from surveys as research to surveys as engagement takes about a year. However, it is anything but a wasted year. All kinds of striking new “findings” and solutions are conjured forth. But even more importantly, the frontline staff and primary constituents of the organization become the lead agents of discovery, knowledge, and improvement.

Sorry about that, Academia.
