Jeremiah Owyang recently alerted me to Edelman's newest toy, TweetLevel, which purports to measure the total influence of a Twitter account. There seems to be a lengthy, though highly arbitrary, formula that basically weights a series of data points (equally??) and compares them to observed norms (the Z in the denominator). These sorts of meta-Twitter measures and metrics are popular at the moment, but they surely don't represent anything more than a crude reflection of activity. Deriving "influence" from observed behavior assumes a lot of facts not in evidence, not the least of which is the fourth plank of Edelman's TweetLevel score, "Trust." As best I can determine, Edelman measures trust mainly by aggregating trust scores from other folks making Twitter toys (Twitalyzer, Twinfluence, etc.) and by looking at retweet behavior.
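Edelman doesn't publish the actual formula, but "weight some data points equally and normalize against observed norms with a Z in the denominator" describes a plain z-score composite. Here's a minimal sketch of what a score of that shape looks like; the metric names, the equal weights, and the whole structure are my assumptions for illustration, not Edelman's math:

```python
from statistics import mean, pstdev

def z_score(value, population):
    """Standardize one raw metric against the observed norm for that metric."""
    mu = mean(population)
    sigma = pstdev(population)  # the "Z in the denominator"
    return (value - mu) / sigma if sigma else 0.0

def composite_score(metrics, norms, weights=None):
    """Weighted sum of z-scores; defaults to equal weights for every metric.

    metrics -- {"followers": 1200, "retweets": 40, ...} for one account
    norms   -- {"followers": [...], ...} same metrics observed across accounts
    """
    weights = weights or {k: 1.0 for k in metrics}
    return sum(weights[k] * z_score(metrics[k], norms[k]) for k in metrics)
```

The point of writing it out is how little is inside: every term is a count of observed activity, so whatever label you put on the output ("influence," "trust"), the input is still just behavior normalized against other behavior.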
There are three huge problems with this--with the formula itself, and with even getting caught up in the madness of trying. First, retweet behavior seems to be baked into several planks of Edelman's formula at once: it's also listed as a component of "Influence," and it appears in the formulae of the third-party tools they are aggregating. Discerning whether or not a retweet shows signs of influence is one thing--influence can be either positive or negative--but to then ascribe any level of trust to that influence is a serious leap of faith. Retweets are confounding variables: we can't really know why a user retweeted something. It might be because the original tweeter is a friend, or because the retweeter is trying to curry favor with the original tweeter. The retweeter might even be vehemently disagreeing with the original tweet, which gives you a crude mechanic for influence, yes, but certainly not trust!
The second huge problem with any of these measures, and especially TweetLevel, is that you cannot derive trust from unstructured online data without some kind of accompanying survey data. Neither server nor survey data alone will get the job done in 2010--they need each other. "Trust" is almost patently ridiculous to try to measure from any online behavior (save repeat purchase behavior). Not only can we not assume that retweets are a proxy for "trust"; I have yet to see a reputable study indicating that any behavior on Twitter can serve as a proxy for trust. Unless the Edelman engine has some kind of motive/sentiment engine to rival Wintermute, I defy anyone to demonstrate not only how much "trust" a retweet imparts, but whether such a linkage even exists. It might--it probably does--but it's the undiscovered country from a legitimate consumer insights perspective.
Which leads me to my last point--the utility of even trying. To me, Twitter analyzers like this are just one more example of the fascination with unstructured online data as the holy grail, the superstring theory for sussing out consumers online. Until an algorithm can really determine motives for online behavior (I will probably have shuffled off this mortal coil before that happens), relying on unstructured online data alone merely gives you a statistical descriptor of online behavior. You can say most people did "x," or even that "x" might be associated with "y," but stubbornly trying to ascertain why someone did "x" without asking them will remain a fool's errand for some time.
If determining motive is important to marketers, then marketers should just go ahead and ask the questions! Server + Survey. And if it really doesn't matter why someone engaged in an online behavior (as long as they engaged in it), then why bother trying to determine "influence" at all?
And if you don't believe me, it's probably because TweetLevel declares my "Trust" level to be 35, a far cry from Soulja Boy's 96!