Profiles in Analytics – Does Web Governance Drive Digital Analytics?
Thursday January 26th 2012, 7:17 pm
Filed under: Phil Kemelor
By Phil Kemelor
I’ve just released this year’s Profiles in Analytics research. Focusing on the relationship between Web governance and digital analytics, I drilled down on whether Web governance-oriented organizations really are “better” at analytics… Do they serve more stakeholders? Do they do higher-value analytics work? Are they in alignment with senior leadership?
In doing the research, it was clear that analytics teams are now more commonplace and growing in size. Nearly 75 percent of those surveyed are part of an analytics team. Teams of five or more now outnumber single-person departments, 34 percent to nearly 27 percent. Most teams have between two and four people. Add to this that most digital analytics teams are working with one to three third-party agencies, and we can see that digital analytics requires the management of many elements, as well as people who can carry out the tasks.
This raised a few questions:
- Is digital analytics growing in importance because it is recognized as having real benefits? Or is it simply viewed as a central location in the organization to manage ever-increasing sources of digital data?
- Are those who work in digital analytics equipped with the management experience and leadership to promote the value of analytics at senior levels?
- Are analytics insights used within organizations?
- Are the analytics insights of high enough quality to be used?
One of my secondary agendas in looking at the influence of Web governance on analytics comes from my general skepticism whenever I hear an organization bill itself as “data driven.” I’m always wondering: What does that really mean?
I know it does not mean cranking out dozens of reports to dozens of people and not hearing a word back. I also know it does not mean preparing ad hoc analysis based on a senior manager’s request for a unique visitor report to present to the board of directors.
Does a Web governance structure ensure that analytics is part of the organization’s “mainstream” Web operations and therefore potentially used with more purpose, and therefore actually “data-driven”?
I invite you to draw your own conclusions after reading the findings:
- Research Analysis: Does Web Governance Drive Digital Analytics?
- Research Results: Survey Response Summaries
- Profiles in Analytics: Collection of Individual Responses, Shared Wisdom and Experiences
All the papers are available at:
Building the Right Digital Measurement Infrastructure: The Celebrus White Paper
Thursday January 19th 2012, 7:06 pm
Filed under: Gary Angel
By Gary Angel
Late last year I worked with Celebrus Technologies on a white paper entitled “The Future of Digital Measurement and Personalization.” I know that sounds a little bit grand, but I promise, it’s not the sort of empty, for-pay puffery you may have come to expect in our industry. First, I’m deeply concerned with the issues and arguments covered by the white paper: the right way to build a digital measurement infrastructure. So much so, in fact, that I’m working on a second, Semphonic-only, white paper that extends the Celebrus Technologies effort with our research into the creation of an online data model for the warehouse. Second, I’m convinced that the Celebrus solution ought to be getting significant interest and traction in the marketplace. If you’re an enterprise committed to advanced digital measurement, you’d be remiss not to be at least considering Celebrus. Third, and I suppose this is what really matters, I believe the white paper makes an important case for a specific direction when it comes to building a digital measurement infrastructure.
My intention is to write two posts relative to the white paper – ones that largely mirror its structure. In this first post, I intend to lay out some of the significant challenges in creating a really good digital measurement infrastructure. In the second post, I’ll explain why the Celebrus solution solves some of the biggest of those challenges and is a compelling direction for any organization wanting to build a measurement infrastructure that can support more advanced customer analytics, segmentation and real-time personalization.
While I hope to convey a good sense of the white paper in these two posts, I also intend to keep both rather short – precisely because the white paper exists and it is, after all, free. Here, then, is a little sample – tantalizing, I hope…
All last year I wrote about the evolution of Web into Customer analytics. If evolution in the natural world comes in fits and sudden dramatic starts, the same is surely true of technical advances. In 2011 we saw a dramatic acceleration in the use of data warehousing technologies to provide Customer Analytics even as traditional Web analytic tools worked hard to re-platform themselves to support deeper, faster, and better analysis at the customer level.
If you believe that data warehousing is the future of your digital measurement platform, then you should be thinking carefully about the nature of your measurement infrastructure. The vast majority of digital data warehouses that Semphonic worked with last year were sourced via a data-feed from a Web analytics solution. That’s an obvious choice since the work to create a good collection system via Web analytics tags has already been largely accomplished at most enterprises. But is it really the right direction?
In the white paper, I cover four areas of key concern when thinking about digital measurement infrastructure: Governance, Robust Data Collection, Data Model, and Real-time capability. In each case, sourcing from your existing Web analytics system presents real problems.
The problems of Web analytics tagging governance have become well-documented. Indeed, the problems have created a whole new industry of Tag Management Systems (TMS) that create a new layer of abstraction between the measurement system and the Website. I’m a big believer in TMS. Ensighten, Tealium, Tagman and Omniture’s new solution each have their respective advantages and disadvantages, but all provide significantly better governance than out-of-the-box Web analytics tagging. On the other hand, none of them entirely solve the single biggest problem in creating a digital measurement infrastructure – the necessity to pre-plan your information capture when designing a tag. Governance is improved because you move the tag creation function from IT to measurement. This is all to the good, but it doesn’t change the underlying dynamic. Someone still has to do all of the customization, and the customization has to happen before the measurement takes place. So a TMS only changes one piece of the problem. Measurement still has to be carefully planned. Customization still has to take place at the page level. You still lose anything you didn’t collect. That’s simply not ideal. What’s more, a TMS introduces an additional cost into the system – sometimes a fairly significant one. You’re paying more to collect the same information. If you’re committed to a Web analytics tool for digital customer analysis, then a TMS is the right way to go. Believe me, the improvement in governance is worth the cost. If you’re focused on the warehouse, however, a TMS may not be the best option.
This problem of pre-planned measurement and governance is all of a piece with the next issue – robust data collection. Web analytics tags are largely page-based. They have to be manually added to individual links, and they require specific (and often rather arduous) customization to capture intra-page actions. Scrolls, internal search, DHTML, Ajax, and forms are all completely mysterious to the traditional one-per-page Web analytics tag. Not only does this fundamental limitation make governance and tag creation far more difficult, it limits the collection of many of the most interesting customer remarketing data points. Did a customer start a form? Did they fill in a field? Did they scroll? Which link did a customer click? It would be nice to get all this seamlessly, without effort and without customization.
Of course, collecting information is the fundamental purpose of a digital measurement infrastructure. Nevertheless, how that information gets organized turns out to be a huge challenge. No issue in warehousing digital data has proven more difficult than creating a good data model for it. For all the attention paid to “big-data” technologies, it’s my belief that poor modelling cripples many more efforts than does query speed. You may not think of the Digital Data Model as a part of a collection infrastructure, but you’d be mistaken. True, the data model implicit in a typical Web analytics data feed is so poor as to be hardly a model at all; it’s built for interchange, not usage. But a really good collection infrastructure will house data in a good data model – the two are largely inseparable.
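Purely as an illustration – this is not the model from the white paper, and the field names are invented – the “interchange vs. usage” distinction can be shown in a few lines: flat per-hit feed rows versus the same hits rolled up into a visitor → visit → events hierarchy that an analyst can actually query.

```python
from collections import defaultdict

def model_hits(hit_rows):
    """Roll flat data-feed rows (one dict per hit, roughly the shape a
    Web analytics feed delivers) up into a visitor -> visit -> events
    hierarchy - the minimal shape of a usable digital data model."""
    visitors = defaultdict(lambda: defaultdict(list))
    for row in hit_rows:
        visitors[row["visitor_id"]][row["visit_id"]].append(
            {"page": row["page"], "action": row["action"]})
    return visitors

# flat "interchange" rows: fine for moving data, painful for analysis
feed = [
    {"visitor_id": "v1", "visit_id": "s1", "page": "/home", "action": "view"},
    {"visitor_id": "v1", "visit_id": "s1", "page": "/cart", "action": "add"},
    {"visitor_id": "v1", "visit_id": "s2", "page": "/home", "action": "view"},
]
model = model_hits(feed)
# visit-level questions are now one lookup away
visits_for_v1 = len(model["v1"])  # 2
```

The real work in a warehouse data model is far bigger than this, of course – the sketch only shows why the unit of analysis (visitor, visit) has to be designed in, not bolted on.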
Finally, there is the issue of real-time. Few aspects of digital measurement are as poorly understood and misrepresented as the need for, and function of, real-time measurement. Traditional tag-based Web analytics tools were often sold on the premise that real-time data collection and reporting was a significant and important advantage. In general, that’s simply not true. Very few business decisions and very few analytic tasks can or should be tackled in real-time. There are a few verticals and a few problem sets where true real-time reporting is essential. For most of our clients, it’s simply unnecessary. So the 24-hour delay inherent in sourcing your data warehouse from a Web analytics data feed should be no big deal, right?
Wrong. Because if you want to use your digital data to drive personalization or re-marketing, time suddenly becomes much more important. Real-time decision-making is fundamental to good personalization and re-marketing because nothing, NOTHING, is more important than what a customer just did. While some people believe that this type of real-time decision-making is the sole purview of black-box tools, I strongly disagree. The best personalization opportunities will come from the integration of customer data with real-time data using rule-based, analyst driven optimizations.
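To make that rule-based, analyst-driven idea concrete, here is a minimal sketch – with hypothetical event names, segments, and treatments, not any particular vendor’s system – of stored customer attributes joined with a short window of very recent events:

```python
import time
from collections import deque
from dataclasses import dataclass

@dataclass
class Event:
    customer_id: str
    action: str        # hypothetical action names, e.g. "abandoned_cart"
    timestamp: float

class RealTimeDecisioner:
    """Combine warehouse-style customer attributes with the most
    recent session events to pick a personalization treatment."""

    def __init__(self, profiles, window_seconds=300):
        self.profiles = profiles   # customer_id -> attribute dict
        self.recent = {}           # customer_id -> deque of recent events
        self.window = window_seconds

    def record(self, event):
        q = self.recent.setdefault(event.customer_id, deque())
        q.append(event)
        cutoff = event.timestamp - self.window
        while q and q[0].timestamp < cutoff:   # drop stale events
            q.popleft()

    def decide(self, customer_id):
        profile = self.profiles.get(customer_id, {})
        actions = {e.action for e in self.recent.get(customer_id, ())}
        # analyst-authored rules: what the customer JUST did comes first
        if "abandoned_cart" in actions:
            return "offer_free_shipping"
        if "viewed_product" in actions and profile.get("segment") == "high_value":
            return "show_premium_recommendations"
        return "default_experience"
```

The point of the sketch is the ordering: the rules consult the real-time window before falling back on the stored profile – which is exactly why a 24-hour data-feed delay is fatal for this use.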
If I’m right, then sourcing your data warehouse from a Web analytics solution will effectively preclude you from advanced personalization and remarketing. Even if that means little to you right now, I suggest that building your warehouse on a technology stack with such a critical limitation is short-sighted.
In short, if you’re committed to the warehouse as your solution for digital customer analytics and optimization, there are some pretty compelling reasons why sourcing that warehouse from your Web analytics solution isn’t ideal. It looks easy, it may be the right direction for a pilot, but it places some potentially crippling limitations on the long-term capabilities of your program.
In my next post, I’ll show how the Celebrus solution – a digital measurement infrastructure built specifically for the data warehouse – solves some of these challenges and explain why I think it ought to be getting serious attention from any enterprise focused on digital data warehousing.
Or you can get the whole white paper here…
Social Media Measurement Tools
Thursday January 19th 2012, 7:06 pm
Filed under: Gary Angel
Answers to your questions!
By Scott Wilder, Gary Angel and Marshall Sponder
As usual I enjoyed the recent Social Media Measurement webinar – and it was great to have Marshall on as well. Tools always draw a crowd, and this was no exception. Here are the questions we got, along with our joint answers…
Question: What tools are best for measuring social media ROI or business lift, with respect to advertising on Facebook, Twitter, LinkedIn, etc.?
Marshall: There’s actually a new platform launching next week called Unified (UnifiedSocial.com – I will be at the launch) that promises to do something like that – I’ve seen the platform close up and I can tell you I am impressed. It may be that 2012 will be a year where ROI will no longer be a totally elusive goal for social media.
Gary: This is far more difficult, I think, than people generally believe. The only easy path to ROI measurement is when users are either directly engaged in commerce on social sites (which is rare) or are directly clicking through to sites where they are engaged in commerce. In these cases, measurement is generally a straightforward application of existing Web analytics campaign tracking capabilities. Unfortunately, this isn’t often the case. In some cases, I’m not even sure that ROI is the proper path to measurement, and where it is, I don’t think there is likely to be one answer or approach. If your Facebook advertising is directed toward increasing your Fanbase, you need to be able to measure the incremental value of a Fan (and this won’t be one value, by the way) to your marketing. Getting that measure takes a concerted research effort and won’t (in my opinion) be delivered by any single tool. I sometimes think that it might be better for organizations to concentrate, at first, on the obvious optimization points. It’s much easier to measure which campaigns generate engaged Fans and calculate their cost-efficiency in that respect. You can then optimize campaigns within the set of those targeted toward increasing your Fanbase. It’s not ideal, but it is more practical.
Scott: In most cases, companies have to guesstimate true ROI because of some of the limitations of the tools and of companies’ own infrastructure. I find it useful to create proxies – like determining cost estimates for certain activities which, in turn, would lead to a transaction.
Question: US costs are too high – for example, Engage121 is $1,000 per month for the first base-level search – one profile with three seats.
Marshall: Well, as Gary pointed out, Engage121 is designed for a specific use case and type of client, such as an airline or a large franchised business with thousands of stores that each want different responses and editorial controls – think Dominos or Dunkin Donuts (though I think neither is an Engage121 client). My point being, you can’t take the price of a platform in isolation from the use case and clients for whom it is designed and targeted. The Dominos and Dunkin’s of the world have plenty of money and need for this kind of platform – but if you’re looking for an “affordable point of entry” into Social Engagement, then go with HootSuite and be happy there are still some free platforms you can play with and get your feet wet on.
Gary: Not every market is going to be served by a tool like Google Analytics – free and really good. I basically agree with Marshall here. One thing I will say that’s more general is that in my experience some pricing models are much worse than others for doing serious enterprise work. To do our kind of measurement (Semphonic) we need a pretty free hand to construct, test and use profiles of all sorts and we generally need quite a lot of them because all the interesting questions involve categorization. At the enterprise level, I’d much rather pay a significant lump sum for a pretty free hand with the data than have a pay-per-item model. Pay-per-item models tend to cripple analysis.
Question: Do you have preference for tools to measure public opinion about political candidates – public policy or litigation issues?
Marshall: Yes, I am working with one right now – 6Dgree.com – we are tracking two candidates in Rhode Island and breaking down their overlapping audiences – along with “persona” breakdowns of their twitter streams – here is what that looks like (I erased the names of the candidates because this is still in the very early exploratory stage of what works).
So far, the persona development breakdown looks impressive, as we can break it down by various sub dimensions and the founders at 6Dgree are very willing to pursue my suggestions, which really impresses me about them. So yes, as of now, I believe 6Dgree might have a winning platform at an affordable price level that works for Twitter and Facebook. Another is PeekAnalytics, but it’s not adapted specifically to Politics, yet.
6Dgree has done some interesting work with Australian Labor party around issues and produces a weekly portal report that breaks down tweets around several issues – I’m impressed with the solution, but of course, each campaign is slightly different and customization will always be a fact of life.
Question: What are the better tools for global internal scale? If any? Or just by world region?
Marshall: I like Comscore Media Metrix for world reporting – it’s mostly panel-based reporting, but it does a fairly extensive job of categorizing lifestyle and interest across channels, countries and technologies such as video, mobile and search.
Gary: Ditto Marshall. I like NMIncite for many larger markets. Alterian provides excellent language coverage.
Question: Do you believe the sampling of data should include statistical testing? Or how do you ensure your sampling is reflective of the entire population to provide confidence in the recommendations?
Marshall: Well, Gary has a pretty good post on that, written recently, and I think, rather than speak to it, I’ll let Gary address it: http://semphonic.blogs.com/semangel/2011/11/the-limits-of-machine-analysis.html
Gary: Thanks for the plug! Let me know if the several blogs I’ve written on the subject don’t fully answer the question! Social Media Measurement is an odd blend of attempts to get universal coverage and hidden samples – which makes a single approach challenging. You can use statistical testing to measure the variations in your samples and, where possible (it isn’t at all levels) that’s certainly advisable.
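For the simple case where two pulls of mentions can be treated as independent samples, a two-proportion z-test is a reasonable sketch of what “statistical testing” means here. The numbers below are hypothetical:

```python
import math

def two_proportion_z(hits1, n1, hits2, n2):
    """Z statistic for the difference between two sample proportions -
    e.g. positive-sentiment rates in two weekly samples of mentions."""
    p1, p2 = hits1 / n1, hits2 / n2
    pooled = (hits1 + hits2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# hypothetical samples: 60/200 positive mentions this week vs 45/200 last week
z = two_proportion_z(60, 200, 45, 200)
# |z| is below the 1.96 cutoff for the 95% level, so this apparent
# week-over-week lift could easily be sampling variation, not a real shift
```

This only addresses variation within your samples; it can’t fix the hidden-sample problem Gary describes, where you don’t know how the vendor’s coverage relates to the full population.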
Question: When one wants to search and analyze Twitter postings and the topic is very low salience, so likely a very, very small percentage of Twitter mentions in U.S. in a given week, what are the best ways to maximize the amount of Twitter Firehose that you search to catch as many Twitter postings on your low salience topic as possible?
Gary: Depending on your method of access, you might want to start by talking with your vendor (if you’re using a vendor to make the initial data pulls). The initial pull is often tunable. This also speaks to your ability to capture the topic in all its forms. Traditional keyword research of the type often done for long-tail SEO can be useful. There is a range of tools appropriate for this – we’ve also just used scanning tools to pull the text off of sites (client Websites, communities, and competitors) to try and build rich topic profiles. You can also take advantage of wildcards (in some tools) to scan for hashtags that include but are not limited to your topic. Hashtag references are often concatenations of the topic with other words and are nearly always pertinent. Sometimes, too, you have to be creative about what you’re looking for. If, for instance, you’re launching a product that is distinct, you can’t expect to identify potential influencers by targeting the obvious words – they generally won’t have any traction. So you have to look for analogs that might allow you to find and target a reasonable set of influencers.
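A minimal sketch of that hashtag-wildcard idea (the topic terms are invented, and this is not any vendor’s API): match topic terms as whole words in the tweet text, but as substrings inside hashtags, since tags like #acmelaunch concatenate the topic with other words:

```python
import re

def build_topic_matcher(core_terms, analog_terms=()):
    """Return a predicate matching tweets that touch a low-salience topic.
    Core and analog terms match as whole words in the text; core terms
    also match wildcard-style, anywhere inside a hashtag."""
    words = re.compile(
        r"\b(" + "|".join(map(re.escape, [*core_terms, *analog_terms])) + r")\b",
        re.IGNORECASE)
    hashtags = re.compile(r"#(\w+)")

    def matches(tweet_text):
        if words.search(tweet_text):
            return True
        # hashtag scan: topic embedded anywhere in the tag (#acmelaunch)
        return any(term.lower() in tag.lower()
                   for tag in hashtags.findall(tweet_text)
                   for term in core_terms)
    return matches
```

In practice the term lists would come from the keyword research and site-scanning work described above, and you would tune them against a sample of the firehose before committing to a pull.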
Question: Any views on Netbase, which SAP just partnered with?
Marshall: Yes, it seems like a good partnership. Netbase does a pretty good job at NLP and creating structure and meaning around unstructured social data, and rather than SAP trying to build that (or buy Netbase, which is an option) they just partnered with them.
Scott: Netbase is doing some really interesting stuff, especially when it comes to Netnography (see www.netnography.com). I think the partnership with SAP will be good because I know that the company is putting a lot of energy into understanding its own segmentation better. We are doing some work for them right now. SAP is also making a big push in mobile analytics and would probably pull Netbase into that as well.
Question: Gary, perhaps you could ask each speaker to summarize which tool they think is strongest in each of the three key use cases you’ve outlined?
Marshall: Here’s a list of companies to consider
- For PR Effectiveness - I’d say mPACT and Cision.
- For Consumer Sentiment – I would recommend NetBase for its NLP capabilities.
- For Social Campaign Effectiveness – Unified (once it launches)
Gary: And here are mine:
- For PR Effectiveness: NMIncite – though it does a poor job of identifying influencers, the segmentation is excellent for tracking them.
- For Consumer Sentiment: Clarabridge and Crimson Hexagon – though we haven’t gotten to use Crimson Hexagon as much as we’d really like.
- For Social Campaign Effectiveness: This is a tough one. Most of the new management tools provide some integrated reporting – but I think that really good effectiveness measurement demands that level of reporting plus Web analytics, plus traditional listening configured for the purpose, and maybe CRM-based extracts at the individual level as well (we sometimes analyze Facebook campaigns by extracting all the individuals and looking at their pre/post behavior).
NetPromoter Scores vs. Site Satisfaction
Thursday January 19th 2012, 7:05 pm
Filed under: Gary Angel
By Gary Angel
After I posted my blog on measuring and benchmarking overall Site Satisfaction, Marshall Sponder sent me this comment/question:
Another great post! Question – how do NetPromoter scores figure into the points? They are, after all, survey-based.
It’s a really interesting question, because I believe there are both similarities and differences relevant to my discussion. A quick recap – in my last post I argued that overall Site Satisfaction suffers from the same issues as almost any other site-wide metric. Site-wide metrics – be they Conversion Rate or Revenue or Site Satisfaction – all confuse multiple factors together in a way that makes them almost useless and uninterpretable. This is contrary, of course, to the broad industry view of KPIs, but it’s a topic I’ve canvassed thoroughly in previous posts and I have yet to hear a convincing argument to the contrary. In addition to this problem common to nearly any site-wide variable, survey data – when collected by traditional site-intercept means – also suffers from a sampling problem. Because your site population varies with your marketing efforts, you’re mostly measuring shifts in the underlying population you’re attracting when you measure (or compare or trend) site-wide Satisfaction scores.
So what about NetPromoter?
On the whole, NetPromoter scores will suffer from pretty much the same problems. When you measure NetPromoter scores using site-intercept surveys, you’re likely measuring changes in your sample population, not changes in your actual customers’ likelihood to recommend. So a trend or benchmark of NetPromoter scores is no better, in this respect, than Site Satisfaction.
However, there are a few differences. As I thought about Marshall’s question, I realized that in many respects my criticism of overall Site Satisfaction mirrors my criticism of Total Mention Counts in Social Media. In Total Mention Counts, you’re adding up fundamentally different things into a meaningless whole (mentions in the NY Times + Twitter Customer Support Mentions doesn’t equal an interesting Total Mentions). It’s similar with Site Satisfaction. Adding Site Satisfaction for Customer Support visits to Site Satisfaction for Pre-Purchase Visits to Site Satisfaction for Brand Visits doesn’t really add up to a meaningful number. The meaningful numbers are all at or beneath the Visit Type level.
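A toy calculation (all numbers hypothetical) makes the aggregation problem concrete: hold every per-visit-type satisfaction score constant, change only the traffic mix, and the “site-wide” number still moves:

```python
def satisfaction_by_mix(scores, mix):
    """Weighted site-wide satisfaction given per-visit-type scores
    and the share of traffic each visit type represents."""
    return sum(scores[t] * mix[t] for t in scores)

# per-visit-type satisfaction is identical in both periods
scores = {"support": 60, "pre_purchase": 80, "brand": 70}

# a campaign shifts the traffic mix; nothing about the site changed
mix_before = {"support": 0.50, "pre_purchase": 0.25, "brand": 0.25}
mix_after  = {"support": 0.20, "pre_purchase": 0.55, "brand": 0.25}

before = satisfaction_by_mix(scores, mix_before)  # 67.5
after  = satisfaction_by_mix(scores, mix_after)   # 73.5
```

The six-point “improvement” is pure mix shift: the experience of every visit type is unchanged, which is why the meaningful comparisons live at or beneath the visit-type level.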
NetPromoter scores, on the other hand, ARE coherent across both Visit Types and an entire population. Willingness to recommend is independent (to some extent) of whether you are buying, or getting support, or finding out about the brand. You’d still probably want to understand the impact of visitor and visit type on NetPromoter score but it’s not totally unreasonable to think about NetPromoter as an attribute of an entire population.
This also tells us something about what NetPromoter isn’t. It isn’t, for example, a good way to measure success by visit type. Site Satisfaction is actually much better for that.
Indeed, as I re-read my posts, I don’t want to leave the impression that I dislike Site Satisfaction as a metric or that I am opposed to online intercept surveys and their use. Not at all. Online surveys are incredibly valuable, as is the Site Satisfaction question. You just have to understand and work effectively within the limitations imposed by the sampling method. In fact, I think Site Satisfaction is a better metric than NetPromoter if your intent is to measure visit-level site experiences. It gets at something much more specific and real. What Site Satisfaction isn’t is a metric independent of those visit types in a way that makes it a plausible candidate for aggregation or site-wide benchmarking.