Tom Webster, writing and speaking

Social Media Monitoring 201: The Market Research Perspective

Added on by Tom Webster.

So, you've taken the big first step in social media for your business: listening. You've set up Google Alerts for your brand, done some Twitter searches, and maybe even signed up for some heavy-duty monitoring services from the likes of Radian6, Trackur or Tweetfeel. Sentiment analysis is far from perfect, but on a good day it will give you a reasonable read on the consumer zeitgeist out there for your products and services. So, let's stipulate that you are listening, you can measure the conversations, and you have a pretty good idea whether those conversations are trending positively or negatively towards your brand. Now what?

Certainly part of social media monitoring operates under the aegis of customer service--if people are tweeting problems, your customer service representatives should be out there actively listening to and solving those problems. Those are important interactions, but they are tactical interactions. How do you incorporate that information into more strategic initiatives?

Before you can tackle that, you first have to make sure that your ears are as wide open as they can be. Listening to Twitter is fine, but Twitter conversations represent a small fraction of the overall universe of social media conversations, and are subject to unimaginable biases (though I'll have some exciting data on that score soon...). If you want to elevate social media monitoring from the tactical to the strategic, you owe it to yourself to take in those inputs from as many sources as possible, to ensure the stability of your data and improve the representativeness of your sample. So--Twitter, Facebook, blogs or message boards? The answer is yes.

Once you have ensured that your input stream is as robust as possible, then you are ready to go beyond simple sentiment analysis and start to segment those conversations. In market research, we do this all the time with open-ended responses to quantitative surveys. In most of our national survey work, we leave openings for respondents to provide open-ended responses to questions--the "whys" behind the "whats." In order to make sense of those responses, we code them--we have trained survey teams go through these verbatim answers one-by-one and assign them categories. There may be, for instance, 1000 different responses to a question like "why are you dining less often at J.P. McBeers?" but chances are they fit into one of a handful of buckets. Developing those buckets is an iterative process, but after going through a hundred or so responses, you'll probably be 99% there, and the rest will go fairly quickly. Are there computer algorithms to do this? Yes, but they aren't as good as people, and once you have a good code list it's fast work anyway--that's what you have interns for. Many of the top social media monitoring platforms have tools to help, so feel free to use them if you have them.
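To make the coding idea concrete, here's a minimal sketch in Python of keyword-based bucketing. The bucket names and keywords are made up for illustration--in practice you'd develop them iteratively from your first hundred or so verbatims, and route anything ambiguous to a human coder:

```python
# Hypothetical code list for "why are you dining less often at J.P. McBeers?"
# Bucket names and keywords are illustrative assumptions, not real data.
BUCKETS = {
    "food_quality": ["bland", "quality", "cold food", "portion"],
    "competition": ["new place", "other restaurant", "alternative"],
    "economy": ["budget", "afford", "money", "expensive"],
}

def code_response(verbatim: str) -> str:
    """Assign a verbatim answer to the first bucket whose keywords match."""
    text = verbatim.lower()
    for bucket, keywords in BUCKETS.items():
        if any(kw in text for kw in keywords):
            return bucket
    return "uncoded"  # no match--send this one to a human coder

responses = [
    "We just can't afford to eat out as much on our budget.",
    "The food has gotten bland and the portions are smaller.",
    "There's a new place across town we like better.",
]
print([code_response(r) for r in responses])
# -> ['economy', 'food_quality', 'competition']
```

This is the crude version of what the monitoring platforms' tools do for you; the point is that a small, stable code list beats ad-hoc tagging.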

So, let's say that you've combed through 10,000 brand mentions for J.P. McBeers, and you've segmented them into buckets--declining food quality, appearance of other casual dining alternatives, change in economic circumstance, etc. The next step is to go back out into the wonderful world of social media and probe around some of these segments by engaging with commenters. The wonderful thing about social media for customer service reps and community managers--the infinitely variable interactions between company and customer--turns out to be the bane of your market research team. We like to control for variables--it helps us provide better answers to your questions. In terms of this exercise, that means doing something that might be counter-intuitive to your community managers, but really just gives them a framework to do their jobs more effectively. For each of the conversation segments you have identified, come up with a few questions you'd like to ask those people, and standardize them. Have your social media teams go back to the folks who made the original comment, tweet or post, and ask them a specific follow-up question--just one or two, and make them the same every time (for each specific bucket/issue). Asking one or two really great questions the same way every time beats asking 1000 different questions six ways to Sunday, and gives you more confidence in extrapolating quantitative data from social media--and insight from that quantitative data.

In the case of our example, let's say that one of the big buckets of conversation topics around J.P. McBeers is people spending less going out to eat. A crappy consumer insights team would take that data and deduce that J.P. McBeers needs to cut its prices. You can do better. Here's what I would like to ask the folks who made these comments: have your economic circumstances changed for the worse? If not, what are you spending more money on lately? Knowing the answer to that opens up a whole new world of consumer insights for not just the brand in question, but the evolving needs of your target consumers and how you can best serve those needs.
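Operationally, standardizing the follow-ups is nothing more than a lookup from bucket to question(s), so every community manager asks the identical wording every time. A sketch, with hypothetical bucket names (the "economy" questions are the two from the example above):

```python
# Standardized follow-up questions per conversation bucket.
# Bucket names and the non-economy wording are illustrative assumptions.
FOLLOW_UPS = {
    "economy": [
        "Have your economic circumstances changed for the worse?",
        "If not, what are you spending more money on lately?",
    ],
    "food_quality": [
        "Which menu items fell short of what you expected?",
    ],
}

def follow_up_for(bucket: str) -> list[str]:
    """Return the standardized question(s) for a coded mention."""
    return FOLLOW_UPS.get(bucket, [])

# Every "economy" mention gets exactly the same one or two questions,
# so the answers can be compared apples-to-apples later.
print(follow_up_for("economy")[0])
# -> Have your economic circumstances changed for the worse?
```

The design choice here is deliberate rigidity: the lookup table, not the community manager, owns the question wording.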

If you can standardize the question(s), you can standardize the responses and make apples-to-apples comparisons. Is this the last word on reliable consumer insights? Hardly. Will it let you make some intelligent hypotheses that you can test in situ or with split-tested offers? Definitely! To me, this represents the real next generation of market research for brands--reengagement after the question, turning market research into relationships, and relationships into market research. The key is to do a little thinking beforehand, standardize on a few really good questions, and empower your engagement team to go out and get the answers.
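Once the questions and responses are standardized, the apples-to-apples comparison itself is trivial--it's just a tally. A sketch, with made-up answer categories standing in for the coded replies to the "economy" follow-up:

```python
# Tallying standardized follow-up answers into comparable quantities.
# The answer labels and counts here are fabricated for illustration only.
from collections import Counter

coded_answers = [
    "spending elsewhere", "circumstances worse", "spending elsewhere",
    "circumstances worse", "spending elsewhere",
]
tally = Counter(coded_answers)
total = len(coded_answers)
for answer, n in tally.most_common():
    print(f"{answer}: {n} ({n / total:.0%})")
# -> spending elsewhere: 3 (60%)
# -> circumstances worse: 2 (40%)
```

Numbers like these are what turn the re-engagement exercise into testable hypotheses--the split-tested offers mentioned above.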