Thomas Jefferson would have been a very good marketer.
When he said, "The majority, oppressing an individual, is guilty of a crime, abuses its strength, and by acting on the law of the strongest breaks up the foundations of society," I'm almost positive he was talking about email marketing.
Today, with retargeting and re-re-targeting and delicious cookies, digital marketing can be personalized directly, on a one-to-one basis. But look at the best practices for email marketing software: most major email service providers have some capacity to do what's called "A/B" testing. You mock up two versions of your email (maybe more), and send both out to a small portion of your database to test which message works better--which one was opened or clicked on by a higher percentage of your test sample.
Then, helpfully, these ESP software packages allow you to "pick the winner," and send off the highest testing version to the rest of your database. But consider this:
Let's say you test two versions of your email, Version "A" and Version "B," with a sample of your database. You get the results, and Version A "won" by a margin of 3:1. In other words, 75% of the people who interacted with one of your emails interacted with Version A. You flip a switch, and everyone gets Version A.
But 25% picked Version B. And, if your test sample was large enough, that's likely projectable throughout your entire database. Knowing that, why would you then send Version A to a quarter of your customers, when they'd rather get Version B?
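"If your test sample was large enough" is doing real work in that sentence, and it's checkable. Here's a minimal sketch, using made-up numbers (1,000 test recipients, 250 of whom clicked Version B), of putting a confidence interval around that 25% share before you treat it as projectable to your whole database:

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a proportion -- more reliable than the
    naive normal approximation when samples are small or shares are extreme."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - margin, center + margin

# Hypothetical test: 1,000 recipients interacted, 250 chose Version B.
low, high = wilson_interval(250, 1000)
print(f"Version B share: 25%, 95% CI: {low:.1%} to {high:.1%}")
```

With a test that size, the interval stays in the low-to-high twenties: the "B people" aren't noise, they're a real slice of your database. With only 40 test recipients, the same arithmetic would tell you not to trust the split yet.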
Let's rethink the whole process. What if the goal of A/B "testing" were not to pick the winner, but to learn something? To me, the best way to attack big data is to focus on the small numbers. When you get a report back that says 75% prefer 'A,' while 25% prefer 'B,' what if you replaced the urge to send everyone "the winner" with a question: I wonder why 25% chose B?
This is not only an answerable question, it's likely a crucial question. If your database is robust, you likely have a lot of profile data about that 25%, so start by making them a "segment." What have they got in common? What interests have they shown in the past? What other behaviors link them together? Of course, you might not have the answers in your database, but the questions are still worth asking. So why not ask them? Take that "25%" segment, offer them a discount off their next purchase, and ask them why they clicked on your test email and what they found appealing. Pick a smaller sample, and call a few of them.
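Mechanically, that first step is simple. Here's a toy sketch, with invented profile records and field names, of turning the Version B clickers into a segment and tallying what they share:

```python
from collections import Counter

# Hypothetical profile records joined with the A/B test results.
profiles = [
    {"email": "a@x.com", "clicked": "B", "interest": "outdoor", "region": "west"},
    {"email": "b@x.com", "clicked": "A", "interest": "home", "region": "east"},
    {"email": "c@x.com", "clicked": "B", "interest": "outdoor", "region": "west"},
    {"email": "d@x.com", "clicked": "B", "interest": "travel", "region": "west"},
]

# Step 1: make the Version B clickers a segment of their own.
segment_b = [p for p in profiles if p["clicked"] == "B"]

# Step 2: ask what they have in common, one attribute at a time.
for field in ("interest", "region"):
    counts = Counter(p[field] for p in segment_b)
    print(field, counts.most_common(2))
```

In a real ESP or CRM this is a saved segment and a couple of pivot reports rather than a script, but the logic is the same: filter by the test behavior, then count shared traits.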
The most important question, always, is not "what," as in "what email did you prefer," but why. Knowing why people do what they do allows you to anticipate their needs and delight them profitably, which is the best "theory of the firm" I know.
So, the next time you conduct "A/B" testing (or indeed any kind of content testing), flip your thinking. Instead of thinking about the test as an opportunity to learn about your content (an exercise with diminishing returns), think about these exercises as opportunities to learn more about your customers, and why they gravitate (or not) to your brand. Do that, and suddenly "A/B Testing" is a test you'll always win, no matter which version is better. Your goal is never to understand your data. It's always to understand people.
Also, President Jefferson, I am sorry I dragged you into this. You're a good sport.