
Knowing Who Is Half The Battle

By Tom Webster.

A reader passed a study along to me a while back to get my take on its findings. The report is billed as a “CEO, Social Media and Leadership Survey,” and purports to be a study of “C-Suite engagement on social media channels and attitudes of customers and employees toward that brand.”

I can’t really comment on the findings of the study, and again, this space will steadfastly not be used to rip on people’s research (which is why I’m not linking to it here; this isn’t about them). But the reasons I can’t comment on the study’s findings might be useful to you, dear readers, as a way to think about reading reports like these.

I have a very sanguine take on most research I encounter. My bias is this: there is almost always value in asking someone questions. The key to understanding the answers, however, is being clear about who you asked. The most common sin here is to survey the readers of your marketing blog and then present the respondents as “marketers.” They aren’t marketers. They are people of varying professions who read your blog and responded to your survey. That isn’t a negative; it’s about clarity. The results of your survey may or may not be projectable to the universe of marketers, but they are at least potentially descriptive of your audience, and that has value. Knowing what I know about your blog and its audience, I can contextualize the data and put it to use.

It is tempting to think of the myriad surveys and studies we encounter in binary fashion: they’re either great or crap. But the truth, like people, is far messier. I can find value in almost any form of study, as long as I know who you asked and how you asked them. Just those two pieces of information go a long way toward putting the findings in context. I’m an information synthesist. Every piece of data has its place, and knowing who you asked, and how, lets me place that data in the right spot on the sticky-note-encrusted, spinning cork ball of my brain. Without that information, the sticky note has no place to stick. It falls to the ground in a heap. There’s a lot of that in my brain, too.

Back to our CEO study. In order for me to process a study of C-Suite opinions regarding social media, I need to know who was interviewed, and how. Here is the complete methodology section for this report:

[The study] surveyed several hundred employees of diverse companies, spanning in size from startups to Fortune 500 companies, and working at all levels of their respective organizations. Respondents representing a wide selection of industries, professions and regions were asked to answer questions pertaining to social media participation by their organization and executive leadership team. Respondents were also asked about their perception of other companies and brands, based on executive participation in social media channels.

Before I continue, let me reiterate that I am not going to denigrate this survey. That’s not what this post is about. I will, however, point out what else I need to know to find its kernel of value.

Let’s start with sample size: several hundred. Now, I don’t expect “people polls” like this to necessarily adhere to the same standards that our election exit polls do, but I would like to know exactly how many people were interviewed. Again, that isn’t to “judge” the survey, but it does help me get a sense of how deeply the data can really be dissected before subgroups become too small to be stable. Webster’s (please buy one; my family gets a nickel) defines “several” as more than two but fewer than many. So we can surmise that the sample size here is north of 200, but not many hundred. The Many Hundred would be a great straight-to-VCR ripoff of a gladiator movie, though. SPARTA!
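To put rough numbers on that “too small to be stable” point, here’s a back-of-the-envelope sketch. The sample sizes are my own hypotheticals, not the study’s, and the math assumes a simple random sample, which may well not apply here:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case 95% margin of error for a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A "several hundred" sample, then some plausible subgroup sizes within it.
for n in (300, 100, 50, 25):
    print(f"n = {n:>3}: ±{margin_of_error(n):.1%}")

# n = 300: ±5.7%
# n = 100: ±9.8%
# n =  50: ±13.9%
# n =  25: ±19.6%
```

At 25 respondents, a reported 50% could plausibly be anything from about 30% to 70%, which is exactly why I want the real number before anyone starts slicing subgroups.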

The title and purview of the study would lead one to believe that the sample comprised CEOs and other C-Suite executives, but this information is also not provided. Instead, the sample is described (and I’m simplifying here) as employees of companies. That’s pretty broad. What I need to know is the exact makeup of that sample: the composition of C-level employees, managers, line and staff. If I know that, then I can contextualize the findings, which again are meant to portray C-Suite engagement with social media. I’d love to know what C-level employees think vs. what line and staff employees think. This sort of study won’t tell you the objective truth, but it could provide valuable insight into how perceptions of that truth differ at various levels of the organization.
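Here’s a toy illustration of why that composition matters. The numbers below are invented for the sake of argument, not from the study: a single headline percentage can paper over wildly different answers by level.

```python
# Invented numbers, purely for illustration (not from the study).
# Share of respondents agreeing that "our executives engage well on
# social media," broken out by the respondent's level in the organization.
sample = {
    "C-suite":    {"n": 40,  "agree": 32},   # 80% agree
    "Managers":   {"n": 110, "agree": 55},   # 50% agree
    "Line/staff": {"n": 150, "agree": 45},   # 30% agree
}

total_n = sum(group["n"] for group in sample.values())
total_agree = sum(group["agree"] for group in sample.values())

print(f"Overall: {total_agree / total_n:.0%} agree")   # 44%, the headline number
for level, group in sample.items():
    print(f"{level:>10}: {group['agree'] / group['n']:.0%} agree")
```

Same survey, three very different stories. Without the sample composition, all you get is the 44%, and you can’t know whether it reflects the corner office or the cubicles.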

Finally, I don’t know how the questions were asked, or how the sample was recruited/obtained, which again I need to know to put the data in its proper box. This data is not useless, nor are the study’s conclusions necessarily bad. But, as I said at the start, I don’t know what to do with them.

The lessons here are twofold. For the creators of such data: please find a way to tell me who you asked and how you asked them. Sometimes, I think (and not necessarily in this case) people fear publishing that information because it will expose some weakness or flaw in the study. All studies have weaknesses and flaws. I’ll take the ones I can quantify over the ones I can’t any day. And for the consumers of such data: if you are genuinely curious, ask the providers of these sorts of studies for this information. Insist on knowing who was asked before you post their infographic.

Because knowing who is half the battle. The rest of the battle, of course, consists of red and blue lasers.