The Election's Most Important Insights For Marketers, Part One: Playing Marketing Battleship

As a part of Edison Research's election exit polling team (if you watched TV that night, you saw our work), I was privileged to witness some amazing stories play out on November 6th. Throughout the course of Election Day (and night, and early the next morning…), I observed a number of things that were not only relevant to politics, but also to marketers. So, I'm going to spend the next few posts examining how marketers make decisions, and the lessons from this year's election about consumer insights, big data and the roles of both listening and asking in the marketer's toolkit. Today, let's look at Part One: "Playing Marketing Battleship."

Nate Silver has gotten heaps of (justifiable) praise for accurately predicting the election results. He wasn't the only one, however--just the best known. Using similar methods, HuffPo's Simon Jackman, Princeton's Sam Wang, Drew Linzer and others also nailed last Tuesday's results. Huffington Post's Mark Blumenthal has written the definitive article on this, so go read it, but the gist of it is this: yes, celebrate the achievements of these quant superstars, but do not forget the source of their data: the polls.

The Nate Silvers of the world aren't manufacturing their own numbers for Romney's On Base Percentage; instead, they are aggregating and averaging the results of pollsters, who are actually asking the questions. And you know what? The polls were pretty good. In fact, it isn't an overstatement to call this election year a triumph for polling and survey research. So, yes--let's toast Nate Silver and others who were able to average these data and hit the home run, but do not forget that their success was based upon the sterling service of pollsters, doing their best in challenging conditions, hitting predictable and consistent singles and doubles all year long.

So, what can marketers learn from this? Well, I don't want to denigrate the efforts of Silver and others who aggregated the polls, but to put it simply: of course aggregating multiple sources of data is going to be more accurate than looking at one survey in isolation. The media often like to focus on outliers: those polls that seem out of whack with other polls, or conventional wisdom, or the opinions of pundits. And yes, some of those polls were, indeed, outliers. But let's not lose track of the fact that polls are estimates, and estimates have a margin of error. This means that a properly sampled poll will land within that margin most of the time--not squarely on the true number. Asking a poll, or any kind of survey, to give you the exact right answer is asking too much of the tool. But, done properly, consumer surveys give you a good clue to the right direction. A match in the darkness, if you will. And smart marketers will keep lighting those matches, aggregating the data over time, until the best path is revealed.
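If you want to see this principle in action, here's a minimal simulation sketch (the "true" preference share, sample size, and number of survey waves are all made-up numbers for illustration, not data from any actual poll). Each simulated survey is an estimate with sampling error; averaging independent waves shrinks that error by roughly the square root of the number of waves:

```python
import random
import statistics

# Illustrative assumptions -- not real poll data.
TRUE_SHARE = 0.52   # the "true" share of consumers who prefer our brand
SAMPLE_SIZE = 800   # respondents per survey wave
NUM_WAVES = 10      # independent survey waves to aggregate

def run_survey(true_share, n):
    """Simulate one survey: n respondents, each a coin flip weighted by true_share."""
    hits = sum(1 for _ in range(n) if random.random() < true_share)
    return hits / n

random.seed(42)
waves = [run_survey(TRUE_SHARE, SAMPLE_SIZE) for _ in range(NUM_WAVES)]

single = waves[0]
aggregate = statistics.mean(waves)

print(f"True share:      {TRUE_SHARE:.3f}")
print(f"One survey:      {single:.3f}  (error {abs(single - TRUE_SHARE):.3f})")
print(f"Average of {NUM_WAVES}:   {aggregate:.3f}  (error {abs(aggregate - TRUE_SHARE):.3f})")
```

Run it a few times with different seeds and the single survey will wander around the true value, while the average of ten waves stays reliably close--which is exactly why the aggregators beat any one poll.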

In that sense, conducting consumer surveys is a lot like playing Battleship (no, that's not the game in which you see how little Liam Neeson you can stick in a movie and still promote him as the star. The other one.) Think of the results of any one survey as a "hit." You look at the results and you know: B6. Do you know whether you need to continue along row B? Or column 6? Or whether you have a submarine or a battleship in your sights? No. But you have a pretty damned good guess, and you get the benefit of being able to keep asking, and having an even smarter guess the next time.

So it is with survey research. No one survey is perfect--there is no perfect study, whether it involves listening or asking. That's not a reason to stop asking, or to vilify the tool. Instead, it's a reason to keep asking, early and often, iterating as you go. Market research shouldn't be an "event" in which you sit around and wait for the tablets to descend from the mount. It's a process--a learning process--and the smartest organizations are learning organizations. For those companies, market research and other consumer insights exercises are continually baked into the process, and not used as one-time "fire-fighting" exercises.

In the case of the election polls, the best pollsters this year did exactly that, repeatedly asking the questions, tweaking the models, and learning as they went along. Sometimes you get a "miss." Sometimes you get a hit. And sometimes, you sink the battleship. But getting a "miss" on a snapshot survey is a poor reason to stop asking the questions. It's information, pure and simple, and it tells you where not to look next time. As we like to say in our business, the trend is your friend. You can't observe--or predict--a trend without a few dots on the graph.

Lighting one small match in a dark room may not help you find the exit. But keep lighting them, and eventually the way through will reveal itself. So it is with polling, survey research, and indeed any means of gathering consumer feedback. It's far better to continually light small matches along the way than to save your powder, as it were, on one big fireball. That's what the best election pollsters did all year, and why Nate Silver had the best ingredients for his election stew.

If you don't do this, rest assured that someone you compete against is. If you quit asking, you quit playing, and your competition will keep firing away. That's a good way to lose your battleship, your carrier and whatever that little boat-dealy is with the two holes.

Next time, Election Insights for Marketers Part Two: Excuses vs. Variables.