Recently I got an invitation to participate in a survey for a national broadband provider I use, for which I would receive a guaranteed cash incentive if I qualified. I signed up, as I do for all of these offers--not because I needed the cash, but because I like to examine how companies are executing online research initiatives. What this survey did right, of course, was to offer a guaranteed cash incentive for participation. I just don't trust lengthy surveys offered without incentives, or offered for the "chance" to win a prize--surveys are work, and if you expect quality respondents to commit their time and considered effort, you get the best results when you make that commitment a fair transaction. Sure, there are plenty of long-form surveys that hit their targets with either no incentive or some kind of sweepstakes scheme, but who takes them? The issue, as always, is not bias in the actual responses; it's the non-response bias from the folks who didn't take the survey, and the fact that you have no idea who they are or how many of them are your customers. If you directly offer an incentive (cash is king) that a normal person would deem fair, you'll get normal people, no?
However, what the survey did wrong is a practice I see all too often--lengthy pre-screening questionnaires to determine whether a respondent is qualified and appropriate for the survey. In the case of this broadband company's survey, I was asked over 25 questions--almost a survey unto itself!--before I was told that I didn't qualify for the study, thanks for my time, yada yada.
Again, I wasn't doing it for the money, so I wasn't crushed. But asking potential respondents (and current clients, as the survey was clearly identified and attached to the broadband company in question) to answer over two dozen questions just to see if they are qualified to answer more is ludicrous. To respondents, anything more than a couple of questions constitutes a survey, and thus, work. Assuming that potential respondents understand the difference between a screener and a survey questionnaire is a little bit too inside baseball, and plays more to the "professional" respondent than Joe/Jill Six-pack.
Either the fielding company was trying to get a healthy bit of uncompensated research at the expense of its customers, or--much worse--actually required almost 30 questions to determine whether the respondent was qualified. If you have to ask more than a handful of questions to prescreen a respondent, your screen is waaaaaaaaaay too tight to be healthy.
I rail about these things because they are rampant, and they are poisoning the pool of potential online respondents. Just because you can ask questions online doesn't mean you are getting good answers, and the more we mistreat online respondents, the more likely we are to get unrepresentative samples in the long run as "normal" people decide they've had enough surveys.
I have a very simple rule when designing an online survey and incentive scheme: would my wife do this? If it doesn't pass the Miriam test, it doesn't make it out the door. Using the "monkey" is fine if the questionnaire is short and/or you are providing some kind of value in return, but data is not insight, and if you design a survey proposition that most people wouldn't take, most people won't take it.