Tom Webster, writing and speaking

Filtering by Category: Traditional Media

A Semi-Radical Idea About Offline Media

Added on by Tom Webster.

It's time for this week's Marketing Companion, in which my co-host Mark Schaefer and I discuss the recent purchase of the Washington Post by Jeff Bezos, John Henry's purchase of the Boston Globe, and sundry other things related to the future of traditional media like print and AM/FM radio. Mark and I share a love for the topic, certainly--he has a background in journalism, and I cut my teeth on traditional media research early in my career--so we have a lively chat about why Bezos might have bought the Post and what he might do next. But along the way, I dropped a semi-radical idea about "offline" media, so I wanted to expand on it here.

I saw a graphic a few weeks ago that compared the "old" marketing funnel (with TV, Print and Radio at the top) with the "new" funnel (all social and digital.) The basic thesis of this graphic (and it was in a college textbook) was that digital and social media have replaced all of these other media for everything from top-of-mind awareness to purchase. I do agree that online media is as much responsible for awareness as anything else, and with the proliferation of information choices we have, we are as likely--if not more so--to hear about brands and products for the first time online as we are anywhere else.

Where this graphic fell flat, however, was not only in leaving traditional media off of the "new" funnel (that's preposterous), but also in relegating its place on the "old" funnel to top-of-mind awareness only. That makes for tidy funnels, but isn't entirely accurate.

According to The Infinite Dial 2013, our joint study with Arbitron, the number one medium people are exposed to in the 30 minutes just prior to visiting an offline store to make a purchase is AM/FM radio, at 49% (billboards, a favorite of my friend Tim Hayden, were at number 2, and no sign of faxes, Aaron Strout!).

[Chart: Edison Research/Arbitron, The Infinite Dial 2013, page 66]

To me, this puts a traditional medium like AM/FM radio not at the top of the funnel, but at the action level, right before purchase. If a retail store, apparel brand, car dealer or other purveyor of bricks and mortar wants to build awareness, the ways are legion. But if they want to communicate an offer right before purchase, to drive an action, radio is a pretty sound bet. And when you think of it in that light, you begin to see how you can work AM/FM radio into a digital campaign pretty smartly--not at the beginning, but somewhere in the middle.

Yet, this is not what I see. Every time I hear a radio spot (and to be clear, I am speaking solely of AM/FM, tower-delivered radio here) that "drives traffic" to an online site, I question that decision. If you are in radio sales, and you sell spots to a client looking to send listeners to a Facebook page or an online video, you might think twice about the utility of this. Even taking mobile phones into account, there is a layer of friction between a listener hearing about an online action on the radio and actually taking that action when they are in front of a computer.

If a radio station uses the power of its over-the-air broadcast to drive traffic to an online location, three things are true: first, radio will never be as good at doing this as an online medium; second, radio will have a hard time tracking its effectiveness in this effort (since there are likely other components and promotional tactics at work); and third, if the effort is successful from the client's perspective, who will get the credit? The digital/social agency. Radio gets laundered out of the equation, and that's a shame.

So my semi-radical suggestion about offline media is this--use it to drive offline action. It's pretty good at that. In fact, it's arguably better than online media. Using AM/FM radio or other offline media to drive people online is playing a game that is stacked against you from the start. So, when possible, don't play it. And if you are in traditional offline media, and you must play this game for sales purposes, make sure you are in a position to work SMS and mobile into the play--it helps bring the odds of this game slightly back into your favor.

Am I saying radio should stay away from online? Of course not--that's what radio's own online properties are for. But I wonder if the relentless drive to use AM/FM radio to drive people to web sites (the "power of the tower!") hasn't done the medium more harm than good.

Rant over. Mark and I discuss this, and other aspects of traditional media on this week's Marketing Companion, and I truly hope you'll listen.


Waystations

Added on by Tom Webster.

Apple is an enormously successful company, and as such has spurred a cottage industry in business writing as we all try to generalize lessons in hindsight from an outlier. I note this because I recently came across an interesting juxtaposition on the almost violently pro-Apple blog Daring Fireball: a news item about Philips exiting the consumer electronics industry, followed by Steve Jobs' old "bag of hurt" comment about Blu-Ray.

What is the "lesson" here, in following this news article with the "bag of hurt" comment? One assumes that it's yet another example of Apple's prescience; their ability to recognize and avoid dead-end markets (and, by dint of the juxtaposition, a dig at Philips, as well, for making those hurt-bag-ful Blu-Ray players for so many years.)

It is true that Apple has, in the last decade, successfully "skated to where the puck is going," as The Great One once said. That, of course, is not the sole component of success (just ask Preston Tucker); let's also acknowledge that there is a fair amount of luck involved. But Apple has in recent years had more hits than misses, and there are scores of valid business lessons we can all learn from how they have relentlessly prioritized design and innovation. So stipulated.

Let me say this, however: predicting the future is child's play. It takes no great skill to predict, as Steve Jobs did, what will be. Here--I'll do it for you: in the future, we will fly faster than light to interplanetary colonies, eat pills that replicate entire Thanksgiving dinners, live to be 200 and, eventually, learn Chinese in minutes with a suppository. You're welcome.

No, predicting the future is not difficult. Predicting the timing of the future, however, is a singularly difficult skill, and rarely works twice in a row. Of course Blu-Ray was a "bag of hurt"; eventually we will be able to watch movies on our eyelids by clenching our buttocks. So was Philips stupid for making Blu-Ray players? That's a more complicated question than Daring Fireball's simplistic juxtaposition implies.

Let me humbly suggest something Philips and Apple have in common: they are both thriving businesses. Philips has a market cap of around $27 billion, which, while nowhere near Apple's, is not a lemonade stand, either. You might know the Philips name from their consumer electronics business, or maybe their electric toothbrushes, but they also make CAT scanners and radiology equipment. In fact, technology from Philips has demonstrably saved lives, which is more than you can say for the Apple Newton, which merely scheduled me for grunch with my bother-in-law.

There is a clear lesson in this recent move by Philips, and it's one that any business can learn from, not just those with $500 billion market caps. Steve Jobs predicted the futility of Blu-Ray in 2008. He wasn't wrong. Also true: Best Buy sold a crapton of Blu-Ray players from 2008-2012. See, there are two ways to bet on the future: one, the current Apple way, is to formulate one vision of a future and attempt to conjure that vision into reality through superior design, engineering and marketing skills. That worked for them in the latter years of Jobs' tenure. There is another way--the portfolio approach.

Transitional technologies like Blu-Ray, Minidiscs and (back in the Dragon's Lair days) Laserdiscs were all doomed to eventual failure. Duh. But their death throes lasted long enough for some companies to generate a ton of cash flow selling those corpses for years. In effect, Blu-Ray players are like interstate restaurants--they are waystations; brief respites on the journey to where you'd rather be. Sometimes those journeys take longer than you think they will, and you end up eating a taco at Exit 42 even though a 5-star dinner awaits you in Manhattan. You know you're going to get better food at the end of the journey. But the journey is a long one, and everyone's gotta eat.

The danger in the kind of thinking that compares Philips with Apple here is the danger of the false choice: simplistic thinking boils these sorts of strategies down to either/or scenarios. Smart thinking deals with both/and. The smart money knows that Blu-Ray is a bag of hurt, AND recognizes that people are still gonna buy a ton of disc players on the way to that eventual but impossible to schedule disc-less future.

Think of it this way: before Henry Ford democratized auto ownership with the modern assembly line, I don't think smart people thought cars were a passing fad. Cars were inevitable. But there were still a lot of buggy whips sold on the way to that destination. And for the smartest companies, selling those buggy whips financed their bets on what that future would look like.

And that is what, for some companies, the cash flow from those transitional technologies enabled--the financial ability to respond, smartly, to where the puck was going. What Philips announced was not an admission of failure--far from it. It was an acknowledgement that they had milked that particular cash cow as long as they could, and were wisely exiting the industry before it became cash flow negative. Really, is there any better lesson to learn about "market timing" than that?

Philips may not sell another DVD player for all eternity, but the cash flow they generated from this transitional technology was just as much a bet on the future as Apple's "all-or-nothing" bets are. The difference is that the bet Philips made didn't depend on any one future occurring. By building waystations--Blu-Ray players, MP3 players, headphones, etc.--they were just betting differently. If tech is a big game of roulette, Apple pushed in all its chips on one number, while Philips spread its bets, with the anticipation that one or more would pay off in the short term to finance its long-term vision.

The only stupid thing Philips could have done here was to keep making Blu-Ray players in the face of dwindling cash flow. They didn't do that. Instead, they made some cash opportunistically and got out when the market indicators told them to--liquid, and ready to fight another day.

The portfolio approach is a viable and important business strategy for the rest of us. Having "waystation" products or technologies is not a sign of weakness or an inability to predict the future. In the best examples, it's an acknowledgement that the future may or may not happen when we think it is going to happen--and the most important thing you can do to greet that inevitable future is to still be in business and in a financial position to capitalize on it. The "Apple Way" is to be right about how and when that future arrives. Waystations allow you to stay in business when you guess wrong.

So, I suppose what I am suggesting is this: formulate your vision for the future. Make no small plans. But don't rule out building a few Exit 42 taco stands or Blu-Ray players to profit from the present, so you are ready to cash in on the opportunities of the future. Build your waystations, and find the buggy whips that will enable the jetpacks.

Or you could just "be like Apple."

A Dramatic Rise in Internet Radio Usage

Added on by Tom Webster.

On April 10th, Edison and Arbitron will debut The Infinite Dial 2012: Navigating Digital Platforms, the latest in a research series which has now hit a remarkable milestone. This year's report is our 20th study of America's media and technology consumption habits, and represents the richest longitudinal mine of such data in the world. We've been tracking things like the usage of Internet radio for well over a decade - which makes this year's jump in that particular behavior all the more remarkable. This year, we are reporting that the weekly usage of Internet radio (which includes both the online streams of terrestrial broadcasters and streams from pure-play streamers such as Pandora) has increased from 22% of Americans 12+ in 2011 to 29% in 2012 - a relative jump of over 30%. We are accustomed to seeing this number grow bit by bit each year, but this is the largest year-over-year increase we've seen since we began tracking the stat in 1998. It's easy to attribute this kind of discontinuous jump to the increased usage of Pandora or Slacker or iHeartRadio or other individual brands, but I think there is a different dynamic at play here, driven by another discontinuous jump.
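
For anyone checking the math on that "over 30%" figure, it refers to relative growth rather than percentage points; here's a quick back-of-the-envelope sketch using only the two shares cited above:

```python
# Weekly Internet radio listening, Americans 12+ (shares cited above)
share_2011 = 0.22
share_2012 = 0.29

absolute_change = share_2012 - share_2011        # 7 percentage points
relative_change = absolute_change / share_2011   # ~0.318

print(f"{relative_change:.1%} year-over-year growth")  # 31.8%
```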

We'll reveal our number on April 10th, but let's just say that the percentage of Americans - mainstream Americans - who now own smartphones is going to show some growth, to put it mildly. The success of some of today's popular on-demand and streaming Internet audio services is partially of their own doing, but also partially a trailing variable of the rise in smartphones and mobile media consumption. True, we've been able to consume digital media on the go for over a decade, but there has always been some friction involved with this process. I don't know the hard cap on the number of Americans who would program their own playlists by mood and music type and then upload that content to their phones and iPods, but I'm going to suggest that the number of Americans who now own smartphones has blown right by that hard cap.

The friction involved in mobile digital media consumption has now been removed for vast numbers of mainstream Americans who want someone else to program their content for them--and that's what the mobile Internet has really enabled. In a sense, the continued penetration of smartphones is encouraging something of a radio renaissance, though it doesn't look like your father's Victrola. Mobile phones are increasingly providing the digital soundtrack to people's lives on the go - just count the white earbuds, Beats, Boses and other headphones the next time you walk down Main Street. Previously those earbuds delivered mostly our own music files, but what our data shows is that there is pent-up demand for frictionless, mobile audio programming to provide that soundtrack for us, and smartphones are opening the floodgates.

Thanks to the myriad online music services available, music as "product" has become an economic commodity, and the delivery of that commodity akin to trucking wheat. Since "cost" is not a competitive dimension, and everyone has access to the same product, how that product is packaged and delivered is all you have. Smartphones have changed the game here from music as active entertainment choice to music as the quite literal soundtrack to your life. And that's the dimension many of these services are lacking.

Today, I have thousands of sources to hear a stream of great 80's hits, or hair bands, or Bon Iver clones, or songs with guest raps by Pitbull. The smart Internet audio providers of tomorrow will transcend the jukebox, and remember that "soundtrack of your life" concept. This means new opportunities for podcasters, personalities, local content, and most importantly, curation.

The ability to pick songs is now an algorithm. The Internet radio services of tomorrow have to show me how that content matters, if they want to matter.

We'll have much more data and the implications of that data on April 10th, when Edison and Arbitron present The Infinite Dial 2012. Register today!

A Dramatic Rise in Internet Radio Usage originally appeared on the Edison Research blog.

A True Measure Of Influence

Added on by Tom Webster.

Influence scores, as we know them today, are all based upon algorithms. Algorithms are commonly confused with formulae, but they are surely two different things. The area of a circle is a formula - it's math. That x number of retweets has y effect on your influence score, however, is an algorithm. There might be some math in there, but I like to think of algorithms as math plus assumptions. An influence score makes assumptions about the value of your follower count, how many people click on your links, etc., and then bashes those assumed values together with yet another set of assumptions - their supposed relationship to each other. Yes, there are mathematical functions involved, but just as the "likely voter model" many pollsters use for pre-election polls can never predict whether or not a specific individual will actually vote, the influence score will never be able to predict the impact of an individual on the behavior(s) you are trying to influence.

And that's really the biggest issue with these scores, isn't it? All of the algorithms being used by these services are amalgamating the behaviors of the many, and attempting to assign a value to the individual. This kind of inductive reasoning is always problematic. Here's why:

Measure Three Times, Cut Once

There are, broadly, three kinds of measures: descriptive, diagnostic, and predictive (and these aren't mutually exclusive - the best measures have elements of two or three of these all rolled into one.) Descriptive measures tell us what happened. Diagnostic measures tell us why it happened. And predictive measures help us make good guesses about what might happen in the future. The modern crop of influence scores (and I'm talking specifically about the single, reductive and non-context-specific number from 1-100 most of these sites spit out) are, I would argue, purely descriptive measures.

What Klout scores (or those from PeerIndex, or TweetLevel) can fairly be said to reflect is this: activity. It's demonstrably true that increased activity on social networks (particularly Twitter) has a correlation with higher scores. Activity is not "influence," of course, but it is something, and I'm not prepared to dismiss that something out of hand. So my influence score may in fact reflect some measure of my activity online, and my ability to encourage some form of activity in others. Thus, my score is descriptive of that activity level. It is not diagnostic of that level, however.

The scores, as they are presented, are inscrutable. My Klout score has fluctuated a fair amount in the past 60 days. I'm not sure why. I'm sure there are some very defensible assumptions for that fluctuation built into Klout's algorithm, but the point is that the reasons for that variance are entirely opaque to me. In other words, my score, and even the peripherals around it to which I have access, do not tell me why the fluctuation occurred. Thus, influence scores cannot be used as diagnostic measures. (My topics, however, are right on the money. Klout is nailing this lately.)

A Cosmetic Problem

Similarly, the scores are predictive of nothing, which actually makes them very difficult to use. For example, I'm fond of comparing my Klout score with Snooki's Klout score. After several months of concentrated effort, I have finally pulled ahead of Snooki (see, Mom? I told you I'd eventually make you proud.) But if you represented a cosmetics company trying to launch a new brand of sub-premium skin bronzer, who would you target - me, or Snooki? The answer is obvious, of course, but consider this: if my Klout is 68, and Snooki's is 65, how much worse would I be at pushing bronzer? Would Snooki be twice as effective? Three times? A thousand times? There are two answers to this, of course. One is that as I am just one shade darker than an albino, the right answer is probably one million. The other answer is - you cannot possibly tell, and the scores obfuscate this, if anything.

So we have a purely descriptive measure - the influence score - but we lack the diagnostic and predictive measures that would allow us to do what every organization should be doing: learning, optimizing, and getting better. How can your company or brand take a flawed measure - the influence score - and make it better?

What Are We Really Trying To Measure?

Well, since the various influence measures are based upon a series of assumptions, let's make a few of our own, here. First of all, most popular influence measures are heavily, if not entirely, based upon Twitter activity. Twitter's asymmetric nature essentially means it functions as a broadcast platform - the few, reaching the many - so let's start with something we can sink our analytical teeth into: reach and frequency. When an individual tweets out a link to some kind of content or offer, they do so with two hopes: that their followers will click on the link, and that their followers will retweet or otherwise disseminate the link to their networks, thereby increasing the potential reach of the message. So, when someone solicits, either explicitly or craftily, one of the various social media power users to help disseminate a message, the clear hope is that their message will be spread to as many people as possible using network effects.

While the exact relationship between followers and impressions is nearly impossible to calculate using clickstream measures (you have no way of knowing, after all, how many of your followers actually had the opportunity to see your message, let alone read it), it's safe to say that more is better; in other words, there is undoubtedly a positive correlation between follower count and the number of people who interact with a given message to those followers. So, let's assume that the behavior you are measuring for is retweets: tacit endorsements of your message, and increased exposure. Again, this is a pure reach and frequency game, and far easier to measure than "influence," per se.

Introducing "APM"

Here is a thing you can know: the average number of retweets per follower on Twitter. If you sifted through all that clickstream data from Twitter and examined tweets that contained links (we'll exclude "conversational" tweets), you could come up with the number of people who retweeted a given message, and then compare that to the number of followers of the original tweeter. In other words, if I had 5000 followers, and my typical links are retweeted by an average of 20 people, then I have a concrete number to look at: I can generate one retweet for every 250 followers, or 4 for every 1000. This smells suspiciously like a CPM number, doesn't it? But to be cute, let's call it "APM," or actions-per-thousand. If my average link tweet gets retweeted 20 times, and I have 5000 followers, I can generate 4 APM.

With me so far? Now, let's say that we do this for all Twitter users over a period of time to come up with an "average" APM. The relationship won't be perfectly linear, but roughly let us assume that the average tweeted link is retweeted 10 times for every 1000 followers of the original tweeter. That means 20,000 followers would get me 200 retweets, 30,000 would elicit 300, and so on. So, the "Twitter average" APM is 10 (it isn't, by the way).
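
To make the arithmetic concrete, here is a minimal sketch of the APM calculation. It is purely illustrative: the apm function is my own shorthand, and the 20-retweets-per-5,000-followers figure and the "Twitter average" of 10 are the hypothetical values from the example above, not real benchmarks.

```python
def apm(retweets: float, followers: int) -> float:
    """Actions-per-thousand: retweets generated per 1,000 followers."""
    return retweets / followers * 1000

# The example above: an average of 20 retweets per link tweet, 5,000 followers
print(apm(retweets=20, followers=5000))  # 4.0 APM

# Expected retweets under the hypothetical "Twitter average" of 10 APM
AVERAGE_APM = 10
for followers in (10_000, 20_000, 30_000):
    expected_retweets = AVERAGE_APM * followers / 1000
    print(f"{followers} followers -> {expected_retweets:.0f} expected retweets")  # 100, 200, 300
```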

So now I have a benchmark by which to measure my influencer campaign. Back to my original example, suppose my sub-premium bronzer brand (Ecruage, by CASPER) used Klout Perks to identify people with Klout scores above 65 to target. Now, since neither Snooki nor I have "Cosmetics" as a topic, this requires a bit of a leap of faith on the part of our brand, but not the worst one I've seen. So, Snooki and I each get sent a crate of bronzer, and we go to town on the Twitters. Snooki has a lot more followers than I do, of course, but we can both fairly be graded on the APM scale I've outlined above.

So I try this crappy bronzer, and I tweet about it. My followers expect me to talk about social media research, consumer behavior, bad music and gin, so my crappy bronzer message comes off as a bit of a non sequitur.

So while the average Twitter user might generate an APM of 10 (10 actions per 1,000 followers), on this particular message I only got an APM of 4.2. Not so good, CASPER! Snooki, however, gets all serious about this bronzer, and tweets the crap out of it. On an apples-to-apples, retweets-per-follower basis, her numbers tell a very different story:

So, on the topic of crappy bronzer, Snooki might have initiated an APM of 15. There is a clear delta between Snooki's effectiveness in disseminating this message (15 APM) and mine (4.2 APM). Two things about this delta: first, it's endlessly reassuring to me (this is not a contest I'd care to win.) Second - that delta between the expected value (10 APM, or retweets-per-thousand-followers) and Snooki's (15 APM) can fairly be described by one word:

Influence.

This is influence, folks. Whatever magical power Snooki worked on this crappy bronzer message (a likely mixture of the relevance of her message to her audience, her perceived authority on the topic, and the actual logical content of her tweet), she was simply better at disseminating this message than I was - and not by a little. The variance shown between her APM and the expected APM IS influence - it's the mojo she worked using the same system as everyone else, measured like-for-like, that made her far more effective at getting people to spread her message. More message dissemination = more awareness = more trial = more usage. The circle of marketing life goes ever on and on.

The APM Index

Now, if you'd really like to wow your CMO, you could convert Snooki's effectiveness and my (in)effectiveness into indices, which allows you to compare all of the "influencers" whom you targeted relative to the average. Here's a primer on calculating index scores if you need one, but essentially all you do is divide the average for the category into the number you are comparing it to, and multiply by 100. This means that the average for ANY index is 100 (in essence, if you divide the average into the average, you get 1, which multiplied by 100 = 100.) Snooki's APM of 15 equates to an index of 150 ((15/10) x 100), while my paltry effort comes out to an index of 42.
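
If it helps, here is the same index math as a short sketch (again purely illustrative, reusing the hypothetical APM figures from the bronzer example above):

```python
def apm_index(observed_apm: float, average_apm: float) -> float:
    """Index an observed APM against the category average (the average indexes to 100)."""
    return observed_apm / average_apm * 100

AVERAGE_APM = 10  # the hypothetical "Twitter average" from the example above

print(round(apm_index(15, AVERAGE_APM)))           # Snooki: 150
print(round(apm_index(4.2, AVERAGE_APM)))          # Tom: 42
print(round(apm_index(AVERAGE_APM, AVERAGE_APM)))  # the average itself: 100
```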

So, to close the loop on this, we started with two similar Klout scores:

Snooki: 65
Tom: 68

...and we end up with our own, topic-specific measure of actual, observed influence - as expressed by the differential in message dissemination:

Snooki: 150
Average: 100
Tom: 42

In my example, there is considerable difference between the original descriptive statistic (the Klout score) and this statistic, which moves us much more in the direction of a predictive statistic (at least on the topic of bronzer, and perhaps the category of cosmetics) that the learning organization can use to make the next "influencer" campaign even better. The influence score helped to make the initial cut, perhaps, but the only way for your company or brand to truly gauge influence is to do the work, and determine which individuals outperformed the average, and which underperformed.

Caveats, Carefully Considered

Now, there are a couple of things (at least) that one might take issue with here - both of which could fairly be described as oversimplifications on my part. The first, obviously, is that the mystical force that allowed Snooki to generate an APM of 15 compared to the average of 10 might not wholly be attributable to "influence." But even if that ain't the whole answer, I don't care - it at least serves as a handy heuristic for the nearly unmeasurable constellation of circumstances between the original tweeter and his/her audience that caused the message to mysteriously do better than the average would have predicted. Influence? Yeah, I think so. It's at least behavioral, relevant, and a lot closer to "influence" than the activity-based scores we currently have - with the bonus of being relevant to your brand.

The other bone you might pick with me here is that my calculation - and reducing the whole model to differential message dissemination - is also overly reductive. I've taken what is surely a complex system and turned it into a back-of-the-envelope calculation. You're right - it is a back-of-the-envelope calculation. That's why companies might actually do it. You don't need an analytics whiz on your staff to take this first pass at measuring your influencer campaigns, and until everybody catches up with you, this'll do. Master this first, then break out the HAL 9000 when it's time to make finer distinctions. (I also know a really smart social media research company that could help. Just sayin'.)

The bottom line is this - let's say you actually use influence scores as some kind of crude segmentation - how will you test your work? How will you know, in other words, if your efforts were successful - and more importantly - what you can learn from them to make them better? The answer, I would submit, is to start with the current crop of popular influence measures as a first pass, but remember that they will never be as accurate as your own performance measures, even as crude as the one I've suggested here. There is nothing wrong with Klout, PeerIndex or any of these measures. There are only lazy marketers. And if you are reading this far, my friend, at word 2,200, you are surely not that.

Radio's Passion Gap

Added on by Tom Webster.

Edison, in conjunction with Arbitron and Scarborough, recently released a study of in-car media consumption called "The Road Ahead - Media and Entertainment in the Car." Radio still enjoys tremendous reach, so it was not surprising to see that Radio topped the usage charts for car-based media consumption. The medium certainly has the advantage of a healthy installed base, of course - it's likely that a higher percentage of cars have an AM/FM radio than do homes or workplaces. Snapshots are rarely helpful, however. In our business, we have a mantra: "the trend is your friend." In-car radio usage has declined, and other entertainment options from satellite radio to iPods have become more and more viable. Indeed, one of the more astounding stats from this survey was the proportion of 18-24 year olds who have plugged Pandora into their car stereo directly from their mobile phones: now one in five.

Still, Radio does maintain a healthy user base of in-car listeners, and from a strict reach perspective, remains "the king of in-car media." The hallways at the recently concluded NAB Radio Show were full of positive takeaways and comments about the study, which on the one hand is extremely gratifying, but on the other - more than a little troubling.

Take this graph, which shows the percentage of in-car media consumers who indicated that they "love" using various media in their car:

What the industry chose to focus on were the quantity graphs - yes, radio is still the most widely consumed in-car medium. However, the industry ignored the message of the quality graphs, like this one, which highlight what everyday consumers are increasingly telling us in study after study - they just aren't that passionate about radio. While some of that can be attributed to radio's reach, look at the top items on this list - they are all digital. Even streaming an AM/FM station over the phone engenders more passion than terrestrial broadcasting.

What's more, not only are in-car consumers most passionate about digital options, look what else the items at the very top have in common: content. Satellite radio is not genius technology, and lacks the interactivity of other popular digital audio platforms, such as Pandora - but SiriusXM continues to invest in content. It's content that puts "audiobooks" as the third most "loved" item on this list, especially as our average commute times continue to creep ever upward. Of course, another thing that Pandora, SiriusXM and audiobooks have in common - all are either nearly or completely commercial-free. Radio's terrestrial bargain - free music, in exchange for attention to commercial messages - may still be intact, but the industry has yet to embrace the fact that expectations are different online; while simply re-purposing a terrestrial stream online might be cost-effective, it's hardly the most competitive solution.

In short, it's investment in innovation - either in technology or content - that is driving passion. Radio makes some of those investments, but needs to make more. Instead, too much of radio's precious capital continues to be spent copying Pandora and Groupon. No one loves copycats. Yes, radio must make up the technology gap with its digital competition. And it can't allow the "Daily Deal" to damage its local sales advantages. But the real gap - the most sinister gap of all - is the passion gap. Jukeboxes, simulcasts, automation, failure to invest in talent, 16 minutes of spots per hour and half-price races to the bottom will never make people love radio.

Radio can ape Slacker, out-cheap Groupon and continue to push HD all it wants. People don't talk about things they don't care about. If your station is not being talked about on social media, you don't have a social media problem. You have a passion problem - the passion that you are not engendering amongst your listeners. And that is far more sinister than a technology problem.