I came across a couple of things this weekend that *almost* inspired a rant. However, ranting doesn’t agree with me in the long term, so I slept on it, came up with a good link-bait-y title, and wrote this instead.
The first thing I came across was the results of a survey a BrandSavant reader sent me that showed social media platform usage across younger demos. When I saw that their results disagreed *violently* with some of the data put out there by Edison, Pew, Nielsen, and other reputable sources, I looked into the methodology. It turns out the data came from a new survey tool that trades survey question responses for access to content across a publishing network. And no, I'm *not* talking about Google Consumer Surveys, which has a robust and growing network and continues to iterate.
No, this is a different tool, and a different “publishers network” (and, hence, a different sample base) from Google’s. Here’s what I could learn about the sample base for this survey, and for other surveys offered on this platform, expressed in Danish: ∅.
The FAQ offered no guidance on even basic demographics, save for a line that indicated they use “inferred demographics” based on IP addresses, which seems like a lot of work when you could just *ask* me how old I am, considering I am taking your survey in the first place. But I digress.
The tool is not important here, so I'm not linking to it. But the existence of this tool, which does indeed provide "information" quickly and cheaply, is symptomatic of something larger that I do want to address. I wasn't sure how to address it until I came across a couple of blog posts this weekend to which I nearly left intemperate responses. I'm glad I didn't.
I read two posts this weekend from social media marketers that disparaged survey data in favor of real-time social media data. The people who wrote these posts are smart people. So I'm not linking to the posts, because I'm not making it about the posts (or their authors). Instead, I want to talk about the sentiment behind these posts: that survey data is "artificial," while social media data is unfiltered and "real."
The fact is, neither source of data is perfect. Here’s a simple question that no social media monitoring tool alone can answer: what percentage of a brand’s customers or prospects tweet, or read tweets, about that brand? That’s a pretty basic question that a survey can answer that makes all of that social media monitoring data comprehensible.
Pitting survey and server data against each other is a ridiculous false choice. Smart market researchers use *both* to generate insights on behalf of their brand or client. Survey research generates insights in a controlled setting. Social media monitoring generates insights in an uncontrolled setting. As a professional market researcher, I use the latter to generate language and ideas, but I use the former to *test* them scientifically. In short, I use them both. I *need* both, and I'm glad to have all of these tools in my arsenal.
Put another way, both surveys and social media monitoring are tools in my utility belt. Positing that listening will kill asking is like saying hammers will kill wrenches. What discourages me about these posts, and the data generated by the survey tool I mentioned above, is this: both “tools” generate research data. I do not dispute that. But here is a distinction with a difference:
There is a wide chasm between “research” and “market research.”
Unfiltered social media data that doesn’t address the caveats of social media research gives you: research. Asking questions with a survey tool to a sample you can’t characterize gives you: research.
When a client asks me to find the features and benefits that financial services professionals want to see in an investment portal? I need to give them market research.
Unfiltered DIY questionnaires and raw social media monitoring give you information about people. Market research studies your target market. It offers actionable insights based upon the data we can gather, both through social media and questionnaires, about a sample of the people you want to reach.
A penultimate thought about what the Internet has done to research: we have never had more access to bulk data than we have today. In a sense, the two items that inspired this post work at opposite ends of the same continuum. While social media monitoring gives us more data about how people react to brands in the wild than we’ve ever had, the spate of DIY research tools gives us more access to directed “answers” than we have ever had. But quantity is not quality. The metric tonnage of data we can mine from Twitter, and the quantity of data thrown off by the DIY “polling” apps of the world, are all the by-products of great tools that can certainly provide research, but need work to make the leap to providing market research.
If you can’t answer within statistically acceptable parameters who tweeted about your brand, or who answered your question, you have research. If you *can,* you have market research. That’s the hard fact.
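To make "statistically acceptable parameters" concrete, here is a minimal sketch (my illustration, not anything from the tools discussed above) of the standard margin-of-error calculation for a proportion measured from a simple random sample. The point is the same one the paragraph makes: you can only compute this when you can actually characterize your sample.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Margin of error for a proportion p observed in a simple random
    sample of size n, at roughly 95% confidence (z = 1.96)."""
    return z * math.sqrt(p * (1 - p) / n)

# Suppose a survey of 1,000 respondents finds 40% tweet about a brand:
moe = margin_of_error(0.40, 1000)
print(f"+/- {moe * 100:.1f} percentage points")  # +/- 3.0 percentage points
```

Note that this math assumes a probability sample, which is exactly what an uncharacterized convenience sample (or a firehose of tweets) cannot give you.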
Finally, a thought for those who continue to believe that surveys have "stopped working" due to the Internet, mobile phones, and other factors: on Election Day in 2012 we fielded over 100,000 surveys in one day across 50 states to provide the sole, lasting record of who voted and why, and to give the major U.S. news networks decision support to project the results of nearly 100 races, from the Presidential election on down to various Senate and House races and assorted referenda. No inaccurate projections were made. It wasn't magic; it was science, which still, refreshingly, works just fine.
The Internet has given my profession an enormously powerful set of tools to work with, including social media monitoring and quick Internet surveys. But the Internet didn’t kill the survey. It made the survey better.