Analyst vs. Implementer – Redux
Tuesday July 19th 2011, 5:37 pm
Filed under: Gary Angel
By Gary Angel
I’m going to try for an ambitious double posting this weekend – I’ve been working on my ongoing series on Digital Analytics and Database Marketing Convergence so I have the latest in that series. I also wanted to take the time to respond to Adam Greco’s detailed, thoughtful and interesting “reply to a reply” around some best practices in Omniture implementations.
I’m going to start with the 2nd topic.
Every school produces different students. Just as James Bond might be able to tell not just the year and grape but the side of the mountain on which a wine originated (and probably the weather that year), so too do we classify painters, musicians, economists and even businessmen. The school of GE and the school of P&G are both as famous as they are distinct.
A week or so ago, I wrote Analyst vs. Implementer as a commentary on Adam Greco’s blog on “Top Omniture Implementation Pet Peeves.” My thesis in that post was one that I have touched on before; namely that there are two very distinct schools of Omniture implementers; I called these schools Technical Implementers and Analysts. Of course, my thesis didn’t end with the simple naming of these schools. I described technical implementers as being primarily focused on knowledge of Omniture as a system and of being primarily concerned with the cleanliness of an implementation. I would say, further, that the school they are implicitly produced in is Omniture; whether working at Omniture or being trained by Omniture or simply having worked on nothing but Omniture implementations. They tend to have a strongly software-centric view of implementations.
I contrasted the Technical school with the Analytics school. Analyst implementers are primarily concerned with the richness of an Omniture implementation; they are focused on data capture, not on cleanliness, and their primary concern with an implementation is that it not sacrifice information. They are concerned with what I might describe as sins of omission rather than sins of commission. I would say, further, that the school they are implicitly produced in is outside Omniture: they have worked in multiple systems, and they come to implementation via analysis. They tend to have much less interest in the software as a system.
At least insofar as any broad generalization can capture reality, I think these “schools” do so. I think they aptly describe significant features of the way people approach a set of real world problems (Omniture implementations). What’s more, I think it’s important that people understand the differences between the two schools and why it might matter. Those in the Technical school are not shy (and why should they be?) about the obvious advantages they enjoy. When thinking about an Omniture implementation, it’s surely an advantage to work at Omniture or have worked there, it’s surely an advantage to have a deep technical knowledge of the software, and it’s surely an advantage to know how to produce a clean implementation.
That being so, why should those of us who come from a different school and think it has its own advantages be any less shy about discussing them? I’ve come to believe that there is no more important or better preparation for doing really good Omniture (or other Web Analytics) implementations than doing analysis – especially with Omniture, though outside of it as well. At Semphonic, we don’t have a single person on staff who’s ever worked for Omniture. It’s not a policy or anything – it’s just worked out that way. And yet, I believe that our implementations are consistently deeper and more useful than implementations produced by those trained in that school. Why? I think it’s because we train analysts first and implementers second; and I believe SiteCatalyst happens to be the sort of software system that rewards that approach.
In Analyst vs. Implementer, I described my “two schools” thesis and used Adam’s post as an example. It wasn’t so much that I disagreed with Adam’s pet peeves (which were obviously sound advice), as that I found them representative of a set of concerns that I find secondary. In particular, I highlighted one issue (which happened to be his #1 pet peeve) that I thought was not just secondary but, at least in its full description, distinctly misguided. That issue was the duplication of eVars as sProps.
So, in effect, my post contained one large argument: that two schools around Omniture implementations exist, that the concerns of those schools are different, that the Analyst school is ultimately focused on the most important set of problems, and that this difference was well illustrated in the types of things that made Adam’s list and, in particular, in his #1 pet peeve around eVar duplication. Inevitably, much of the focus immediately devolved into a Twitter argument about sProps and eVars; which, if you think about it, is rather ironic and totally representative of the Technical school.
In his reply, however, Adam addressed both sides of the argument. He starts with a discussion of the broader thread and then dives down into a technical discussion of the sProp/eVar question.
Now it’s pretty clear Adam resented getting lumped into the “Technical” school, and I get that – particularly since I was finding fault. Composers hate getting labeled as neo-romantic just as writers hate getting labeled post-modern. The best practitioners tend to be the most resentful of classification, and Adam is, to my mind, the best practitioner in the Technical school (sorry – I just think it fits) and always has been – it’s what makes him so effective. He has regularly shown (and shows in his reply – read his ListProp discussion, for example) a fairly remarkable interest in enabling and using analytic techniques. To my mind, there aren’t other representatives of that school who are remotely his equal. So it was probably unfair on my part to choose one of his posts to illustrate my thesis. Truly, my apologies!
On the other hand, I think “Pet Peeves” is far from his finest work – and was much more representative of the “school” than his frequent ability to transcend it. Not a single one of his pet peeves would have made my list had I tackled a comparable topic. Not one. They are, without exception, technical sins – the sort of sins, if I may borrow another Catholic analogy, for which I would need but one bead on a rosary to atone. On issue after issue, the big problems with Omniture implementations (failure to capture the necessary information or turn on the right capabilities to use it) seemed to me missing and, in the case of the eVar/sProp discussion, treated rather poorly.
Nevertheless, after digesting my take, Adam came back with a very detailed and carefully considered reply.
I’ll summarize it (as best I can) as two separate threads.
In the first thread, Adam rejects my thesis that there is something significant about the differences in what the two of us would likely call out as pet peeves – mine being almost all sins of omission (what’s missing) and his being overwhelmingly sins of commission (what’s there). On the other hand, I think Adam offers more of a disagreement with my thesis than an argument against it. It’s just not clear to me which of the top implementation peeves listed might be taken as representative of a deep concern for the analytic potential of an implementation. Nor does Adam really take up the essence of the argument except from a purely personal perspective. Is it really unhelpful to create this type of classification? I don’t think so. Not only is it common practice in every discipline, but an interesting classification does real intellectual work. What’s more, it’s a classification that Omniture Professional Services (and every other Web Analytics vendor) and every spin-off of Omniture PSC uses with regularity – only spun as an advantage not a disadvantage.
I believe that not only is the difference real, but that it’s far more important than our differences at the technical level over sProps. I hope it’s not foolishly immodest to say flatly that I think Adam and I are both really good at this stuff – both experts in the field. I think it’s surprising and interesting that we don’t share similar implementation pet peeves. To me, that reflects more than technical arguments over sProps, and I’m sticking by my thesis until someone suggests a better one to explain the difference.
So that’s thread #1.
In thread #2, we get into the true technical nitty-gritty and Adam offers a whole array of reasons why duplication is a poor-practice. Now, I once had a Professor of Philosophy who told me that if someone gives you ten arguments for something, it’s probably because none of them are good enough to stand on their own!
However, before I fault Adam for that, I have to admit my original post relied on an identical laundry list of reasons why duplication might be appropriate. So either Adam and I are both offering a series of weak arguments, or else this is simply the kind of question where you have to think across a list of practical pros and cons and decide where you stand. I’m going with option #2, but I do think that some of Adam’s laundry list of cons feel a bit post hoc – reasons manufactured to justify an argument, not reasons you would have started out with if you had no position to defend.
I’ll classify Adam’s arguments into four categories: uncommonness (there really aren’t that many cases where duplication adds benefit), hidden costs (latency, $$$, page load), variable conservation (you may run out of slots), and adoption (users get confused).
Let’s start with uncommonness. I’ve shortened Adam’s post but I don’t think I’ve done damage to any of them:
1. Using List sProps –…I maintain that the use of List sProps was justifiably covered in my statement of other sProp uses that are “few and far between.” I don’t use List sProps very often because I feel that there are better ways to achieve the same goals. …List sProps have severe limitations and there is a reason that they are rarely used (maybe 2% of the implementations I have seen use them). I have found that you can achieve almost any goal you want to use List sProps for by re-using the Products variable and its multi-value capabilities instead. By using the Products variable, you can associate list items to KPI’s (Success Events) rather than just Traffic metrics…
2. Page-Based Containers & Segmentation…[no real argument here]
3. Correlations – With respect to Correlations… I included Correlations in my list! I also mentioned that this justification for using an sProp may go away in SiteCatalyst v15 where all eVars have Full Subrelations. Also, one of the reasons I prefer Subrelations to Correlations is that Correlations only show intersections (Page Views) and do not show any cross-tabulation of KPI’s (Success Events). Personally, I would disagree …about over-doing Correlations, since in my experience, implementing too many Correlations (especially 5-item or 20-item Correlations), with too many unique values, can cost a lot of $$$, lead to corruption and latency.
4. Pathing – In the area of Pathing… on the same page about its importance… Again, I might differ … in that I don’t think enabling Pathing on too many sProps is a good idea since it can cost $$$ and produce report suite latency, which is why I prefer to use Pathing only when it adds value.
I just don’t find any of this at all compelling.
While using the products variable as a substitute for a ListProp is a great solution when you can do it and certainly more representative of the best thinking Adam does, it has some drawbacks. The biggest is that the products variable usually contains product information. Only sites that aren’t doing ANY ecommerce have the luxury of that strategy unless they want to dump a bunch of confusing non-product information into their product reports. Second, it only conveniently works once. Yes, if we have a single list we want to save and we don’t have any eCommerce, we’ll use the product variable. Otherwise, we use Listprops. No doubt this is an edge case, but it comes up more often these days than it used to. It’s an important technique for the capture of modules and internal ad impressions on a site. That it only shows up in 2% of implementations doesn’t mean much – few really good practices in Omniture from an analytics perspective show up much more than that.
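To make the trade-off concrete, here is a minimal sketch of the two approaches for recording module impressions, assuming the standard SiteCatalyst page-code s object; the variable numbers, module names, and the event reserved for impressions are all invented for illustration, not a recommendation for any particular report suite.

// Approach 1: a List sProp (delimiter configured in the Admin Console, a comma here).
s.prop15 = "promo:hero-banner,promo:right-rail,promo:footer";

// Approach 2: the products-variable substitute -- each module rides in the product
// slot of s.products and is tied to a success event, so it can be broken down by
// KPIs. Workable only when s.products isn't already carrying real commerce data.
s.events = "event20"; // assumed to be set aside for internal promo impressions
s.products = ";promo:hero-banner,;promo:right-rail,;promo:footer";

The sketch is only meant to show where the collision happens: on a retail site, those product slots are already spoken for.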
I think the correlations discussion is just wrong – the weakest argument in the whole post. Omniture provides most clients with unlimited, free two-way correlations (crosstabs) of sProps – configurable in the interface. I rarely recommend purchasing additional 5 or 20 item correlations and I don’t think anything in my post suggested otherwise ($$$ implication to the contrary). Most Omniture contracts let you fully cross-tabulate (subrelate in Omniture speak) only 3 eVars. After that, you have to pay more. It’s true that Correlation was included in Adam’s original list of reasons why you might duplicate, but if you need lots of correlations and you don’t get them from eVars without paying extra, then doesn’t that mean you should be duplicating eVars as sProps?
So I think Adam misses the point. Two-way cross-tabulation is fundamental and common. If, in V14, you want to do that, it’s much cheaper and often easier (Adam doesn’t touch on my points about the complexity of eVar cross-tabulation at all) to do it with sProps. I just don’t see an argument here much less a convincing one.
Since we don’t really disagree on pathing there isn’t much to add. I’m pretty sure I never suggested the indiscriminate purchase of pathing on sProps. People accuse me of many things, but indiscriminateness isn’t often one of them. I did suggest that one of my pet peeves is when people pay for pathing and don’t use it. Yep. And I still think it’s illustrative of my thesis and what I didn’t like about “Pet Peeves”. Adam’s pet peeve on pathing was a case where implementations path a static, unchanging variable. Undeniably stupid and a total waste, but, again, it would just never have crossed my mind as a serious problem. Underuse of pathing is a far, far more common and serious problem than misuse or overuse.
So that’s uncommonness. I feel like my arguments here are untouched. sProps add value and do so in a pretty significant number of cases and at less cost than if you rely solely on eVars.
So is there a hidden downside?
Here’s the next set of Adam’s bullet-points on the problems of duplication:
- Over-implementing variables and enabling features unnecessarily can cause report suite latency
- Over-implementing variables can increase page load time, which can negatively impact conversion
- Over-implementing variables and features can cost additional $$$ as described above (e.g. Pathing, Correlations)
I’d lump all of these under the category of hidden costs. Both latency and page-load time are legitimate issues, but I’m not so sure about dollar-costs. Over-implementing features can cost you money, but I’m unclear what that has to do with duplication of eVars as sProps. Unless you start adding extra fee items, there simply is no cost to this.
I suppose one could argue that if you make an eVar an sProp you’ll then be somehow tempted to turn on pathing or twenty-item correlations unnecessarily and that will cost you money. But that feels a bit like the arguments I hear on political campaign ads – “With Prop 99, politicians will give the power to Insurance companies to trick your senile grandmother out of her inheritance.” Uh yeah. I’m sure that’s what the Proposition is really about. Duplication of eVars as sProps doesn’t cost a dime and implying that it does is simply not right.
So what about page-time and latency?
Let’s deal with latency first. It’s very hard to argue with latency issues, which is why I’ve always felt it’s kind of a technical “boogie man” that implementers throw out to keep us analytic folks off-balance. I’ve always taken the attitude that Omniture latency issues are Omniture problems, not customer problems. In fact, I’ll throw out another of my pet peeves here. It’s when Omniture Marketing folks sell analytics software based on the vastly greater number of variables and events it supports and then Omniture technical folks tell you that you can’t use those variables or your system won’t work well. That’s a pet peeve of mine for sure! In fact, though, I don’t think they do this as much as they used to because the system mostly works pretty well.
I’ve observed that most latency issues on Omniture are systemic not report suite specific (that’s why we usually have multiple customers suffering at the same time). That means that if you don’t take advantage of features but everyone else does, you pay the price but get no benefit. You could refuse to login to SiteCatalyst at 9AM because it causes UI latency, but you won’t get your reports first thing in the morning if you don’t. When Omniture latency issues strike, our customers with just a few variables seem to suffer just as much as our customers with a veritable variable banquet.
Nor am I convinced that sProps are a significant contributor to report suite latency. They are much simpler to process and report on (for Omniture) than eVars. So if you’re genuinely concerned about latency in your report suites, you probably need to concentrate on removing eVars and events. I’d be very surprised if duplicating 20 props adds the overhead of adding a single event in an implementation rich in eVars. Adam knows more about this internal guts stuff than I do, but I doubt that he really thinks sProps are a huge driver of latency and, in any case, I’m not willing to sacrifice my implementation (or my client’s) on the threadbare hope that it will improve Omniture latency. I think this is a case where too much concern for the technical implementation on the Omniture side short-sells the client’s real interest.
In my last post, I argued that page load time was critical in today’s organization. So Adam has me on page time since duplication does add a tiny bit of code. Still, by my calculation, if your average page passes 30 eVars, then duplicating them as sProps would cost you about 180 bytes. But that assumes that you don’t need ANY of them passed as sProps for a good implementation. If I’m right and it’s a common case that you actually need many of them as sProps anyway, then full duplication might cost you somewhere between 20 and 80 wasted bytes. As fanatical as I am about page load times, that’s really, really small in the scheme of an Omniture implementation. In fact, there are cases where wholesale duplication might actually save you code space since it can be handled on the back-end. If wasting 20 bytes in your page code is enough to qualify as a top implementation pet peeve, have at it!
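For what it’s worth, here is a back-of-the-envelope version of that arithmetic, assuming the SiteCatalyst dynamic-variable shorthand (the "D=" prefix followed by an image-request parameter name, which asks the collection servers to copy another variable’s value rather than repeating it) is available in your code version; the variable number and value are invented.

s.eVar5 = "mortgage calculator"; // the real value, sent once in the image request
s.prop5 = "D=v5";                // a handful of extra characters asking the servers to copy eVar5

// Thirty duplications at roughly six characters apiece is where the ~180-byte
// figure above comes from; repeating each full value in page code would cost more.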
These issues all seem like small potatoes to me. Perhaps microscopic might be a better word than small. The benefits to duplication would have to be tiny not to outweigh these concerns and even if there were NO BENEFITS it’s hard to see how, with these as the drawbacks, the practice of wholesale duplication would count as the #1 pet peeve in Omniture implementations.
Which brings me to the two most interesting points: variable conservation and adoption. Here’s the first:
- When you implement SiteCatalyst on a global scale, you often need to conserve variables for different departments or countries to track their own unique data points. This means that variables (even 75 of them!) are at a premium. Therefore, duplicating variables has, at times, caused issues in which clients run out of usable variables.
I completely agree with this. It’s pretty much the only reason why, at Semphonic, we don’t indiscriminately duplicate eVars as sProps. The loss of valuable sProp slots in a global implementation is the single biggest reason why I classify wholesale duplication as a sloppy practice. That being said, Adam’s original pet peeve specifically stated that most sites didn’t need to use many sProps. Here’s the exact language:
I only set an sProp if:
• There is a need to see Unique Visitor counts for the values stored in the sProp
• There is a need for Pathing
• You have run out of eVar Subrelations and need to break one variable down by another through the use of a Correlation (which will go away in SiteCatalyst v15)
• There will be many values (exceeding the unique limits) and you just want data stored so I can get to it in DataWarehouse or Adobe Insight
For the most part, that is it [highlights mine]… Beyond that, I tend to use eVars and Success Events for most of my implementation items.
That last sentence was the one that particularly bothered me and that I objected to in my original post. If you don’t need sProps for much, you probably don’t need to worry about their conservation! But if it comes right down to it, I agree that variable conservation IS what makes duplication at least a mildly bad practice and I think this point is pretty much in tune with my original argument. You don’t waste sProps because they do have value. Duplication isn’t usually a waste, but sometimes it is – and that’s why wholesale duplication is lazy and less than ideal even if, in most cases, it isn’t a very big deal.
So what about Adam’s point about adoption?
Most importantly, however, is the impact on adoption. Again, I may be biased due to my in-house experience, but here is a real-life example: Let’s say that you have duplicated all eVars as sProps. Now you get a phone call from a new SiteCatalyst user (who you have begged/pleaded for six months to get to login!). The end-user says they are trying to see Form Completions broken down by City. They opened the City report, but were only able to see Page Views or Visits as metrics. Why can’t they find the Form Completions metric? Is SiteCatalyst broken? Of course not! The issue is that they have chosen to view the sProp version of the report instead of the eVar version. That makes sense to a SiteCatalyst expert, but I have seen the puzzled look on the faces of people who don’t have any desire to understand the difference between an sProp and an eVar! In fact, if you try to explain it to them, you will win the battle, but lose the war. In their minds, you just implemented something that is way too complicated. You’ve just lost one advocate for your web analytics program – all so that you can track City in an sProp when you may not have needed to in the first place. In my experience, adoption is a huge problem for web analytics and is a valid reason to think twice about whether duplicating an sProp is worthwhile. While I’ll admit that duplicating all variables certainly helps “cover your butt,” I worry about the people who are at the client, left to navigate a bloated, confusing implementation.
I notice that while Adam objects to my pitting an “analyst” experience against an “implementer’s,” this argument seems not unwilling to pit “in-house” experience against “consulting” experience. I’m not complaining. I think both are fair enough, though we at Semphonic certainly support a lot of Omniture users. I’m sure sometimes “in-house” vs. “consultative” makes for an interesting difference, though I don’t think this is one of those cases.
Still, it’s an interesting point and the hardest one to be sure about – maybe the only one in the post that really gave me pause. Do I agree with it? Nope, though I sure had to think about it a bit. The case Adam describes is dead-on. Could happen. I’m sure did happen.
But here’s another one that’s every bit or even more common.
User opens up a report on an eVar, tries to cross-tabulate it with page name, finds it’s not subrelated and that he’ll have to pay extra to see that. Decides he’d like a unique visitor count. But no – not available. Cross-tabulates it with another eVar and gets a set of numbers that just don’t seem right. Plays around with the report for a bit and then walks over and says, “Hey, this thing gives me totally different numbers when I look at something called Instances and something called Page Views – what the heck is that and why does one variable have two totally different values?”
Been there, done that.
Omniture is a deeply complicated system. The difference between eVars and sProps is notoriously difficult to communicate and understand. There are times when I’m not sure whether duplication is appropriate. I’m unconvinced that the right answer to this complexity is to remove the information that is its source. What’s more, if you don’t duplicate sProps and eVars, then the user has to know which list any given variable is likely to reside in. How are they to do that unless they understand why a variable is put in one place instead of another? Yes, you can re-engineer the menu system, but then you make life harder for regular Omniture users. As I wrote in my original post, many aspects of eVars (because they are state not instance variables) make them harder for users to understand and use appropriately than sProps; so while I can see Adam’s point here, I think the argument actually cuts in the completely opposite direction.
Eliminate the duplication, and you’ll force users to always deal with an even more difficult set of variables than if you retained both, and you’ll force them to know where to look every time they want to find a variable.
You’ll have to be a pretty sophisticated Omniture user to weigh our arguments here – and if you are, you’ll likely have your own opinions. But I think (just guessing) that a heavy majority would fall on my side of this particular issue. What’s more, if I’m right that many variables need to be duplicated, you aren’t really adding confusion by duplicating another four or five. The argument only works at all (and I don’t think it does work) if you assume that you won’t have to duplicate any or more than a few sProps.
I figure that by this point in my post, I’m down to the truly hard-core Omniture nuts (which surely includes both Adam and me). Some of these points are extremely technical and others frankly arcane. Do we really disagree much over ListProps? Probably not. Latency? Put it in the context of a different issue and I rather doubt it. In fact, I doubt we disagree over much of anything except – when you get right down to it – the real nub of my original post.
The nub of Analyst vs. Implementer is that the stuff that really bugs me is different from what really bugs Adam. You can disagree with my thesis about why that’s the case and still admit that there’s something interesting going on there that isn’t really captured in a discussion (however scintillating) about sProps vs. eVars. I believe (and think I’ve made a pretty darn convincing case for) the idea that there is far too much focus on the technical aspects of Omniture implementations and far too little on the analytic aspects. That focus leads to many, many sins of omission and to implementations that are much less useful than they ought to be. Lump every single one of Adam’s top implementation pet peeves together, and they mean less to me (and I believe should mean less to you) than a single missing meta-data variable!
But if you’re really hung up on sProps and eVars, what’s my best and final advice?
If you’re building an implementation, it’s best (surprise!) to actually know why and whether you need a variable as an sProp, an eVar, or both. It will save you six or seven bytes of page weight. It will give Omniture one less excuse for your report latency. It will make for a cleaner global implementation. It will, by only duplicating appropriate variables (of which there will be many), perhaps contribute to some long-term user understanding of the difference between sProps and eVars.
Take all of this together, though, and it’s still much, much more important to have an interesting idea for actually using one or both.
But if you really aren’t sure whether an eVar might make a good sProp (and vice versa), then by all means, duplicate. To my mind, duplication is a perfectly acceptable (free, very low-impact, very easy to implement, sometimes beneficial) way of never having to say you’re sorry – and I haven’t read a single even mildly convincing reason that makes me think otherwise.
Seventh Inning Stretch
Wednesday July 13th 2011, 10:15 pm
Filed under: Gary Angel
By Gary Angel
The Convergence of Traditional Database Marketing and Web Analytics – Tying the Threads Together
Like many a sports fan, I often bemoan the length of sports seasons. Baseball’s interminable 162-game marathon. Hockey’s never-ending playoffs after a seemingly never-ending regular season. Basketball’s meaningless and difficult-to-endure regular season. But in the case of this most extended of blog series – begun in January and nowhere near completion here in July – who (else) can I possibly blame?
Not only is the series large, but it spans the inevitable interruptions for the hot topic du jour, the Conferences and webinars, the occasional vacation or holiday, and the inevitable flotsam of a blog. It’s hard enough for me to keep the thread intact, so I can only assume that the task is more than I can reasonably ask of any reader.
Yet I have reached another key pivot point in the series. It may not be entirely clear how my extended discussion of Two-Tiered Segmentation fits into the broader theme. In the next posts, I’m going to tackle that question. Before I do, however, I thought it worthwhile to recap the series, refresh the key points, and provide some convenient linkages to the various posts in the series.
The series began with a set of foundational posts on the nature of Web analytics and how it works. In the very first post, I laid out the claim that “EVERY single Web analytics technique depends on some combination of the assumption of intentionality and an understanding of the ‘natural structure’ of the Web site.” By the “assumption of intentionality,” I meant that you could infer what a visitor wanted to do by studying their actual behavior. By the “natural structure” of a Website, I mean that a Website has a set of defined navigational paths that restrict and channel users – sometimes in ways that aren’t suggestive of actual intention.
These two principles are both constantly in play and often in direct contradiction. Imagine following a person as they walk across New York City. If they go directly to Point B (a Starbucks) and stop, and, in tracing their route from Point A to Point B, we realize it’s the shortest possible route, then we assume that their intention was to go to Starbucks. Suppose, however, that they walk in a more erratic fashion, circle a block once or twice, and then finally go into a Starbucks they passed earlier. Might we assume they were looking for something else and ended up at a Starbucks? The navigational structure of the city imposed limits on their path. The type of intention is inferred from the behavior. That’s how Web analytics works.
In my second post, I showed how the concept of “natural structure” changes analytics methods and makes the direct adaptation of statistical modeling to Web analytics data mostly fruitless. If you model basic Web behaviors using standard statistical techniques, what you capture is correlations caused by the built-in structure of the Website. It’s like concluding that the Holland Tunnel is the favorite drive of New Yorkers because they spend the most time there! No one would make that mistake with highways, but statisticians make that type of mistake routinely with Web data. This post also shows how our Functional Analysis techniques are specifically designed to solve some of the problems introduced by Website structure.
In the next couple of posts, I delved into the traditional Database Marketing world – a world with a set of rich and effective analytic techniques being applied to channels that are slowly dying. I showed how Database Marketing uses a few simple techniques to enrich customer data and to make the link between the data we have available and the intentionality we seek. In subsequent posts, I showed how the same techniques could be applied to Web analytics but also discussed some of the challenges. Traditional database marketing didn’t have to deal with the “natural structure” issues inherent in the Web, and they had a direct tie in survey research between the variables they used to target and the variables they collected (Demographics). In digital analytics, we don’t have that direct tie. Nothing we capture in opinion research is a direct tie to the targeting data (web behaviors) that we have available to us in most cases. The task for Web analytics, then, becomes clear. We need to find a way to forge a tie between behavioral data and intentionality.
In “It’s all about the (Meta) Data”, I showed the first step in building that bridge. Meta-Data about page view events provides context to the events in ways that often make intentionality much easier to infer. I described a baker’s dozen meta-data elements – often critical to effective digital analytics – that are mostly ignored by Web analytics implementations. That’s a point I’ve hammered home several times since in showing how an over-reliance on technical expertise in Omniture (and other tool) implementations often leads to analytically impoverished results. People (and companies) who build but don’t use Web analytics implementations for analysis or Targeted Marketing simply don’t understand what types of variables need to be captured, no matter how well they understand the workings of the tag or the software.
From there, I jumped into an extended discussion of Two-Tiered Segmentation – a segmentation by Visitor-Type (audience) and Visit-Type (intent). As part of that discussion, I show how our industry has consistently gotten it wrong with our insistence on a “small set of actionable, site-wide KPIs.”
Such KPIs don’t exist and the search for them produces metrics that range between worthless and deceptive. When I walk into an executive’s office and announce that “Traffic is up 5% each month!” I want the executive to ask two questions: “With whom?” and “What are they trying to accomplish?” Until I can answer those two questions, I haven’t said anything worth hearing. Every metric and every KPI should be placed within the framework of Semphonic’s Two-Tiered Segmentation.
But I haven’t quite brought you up to date. I wanted to show how a two-tiered segmentation can be applied to a wide range of verticals. So I fleshed out examples for Financial Services, Hospitality, Media and .Gov. I also delved into a long discussion of the techniques for actually creating a Two-Tiered Segmentation. In the first of two posts, I showed how behavioral cues including the extensive use of Meta-data and hierarchical segmentation could be used to construct the Visit-Type segmentation essential to a Two-Tiered model. In the second post, I showed how Opinion Research data could be used to both improve and validate that model.
Which brings me, finally, to here. The Two-Tiered Segmentation is the perfect bridge between action and intent (the variables we use to target and the variables we use to understand intent) that I described as the key problem in bringing Database Marketing techniques to Digital. By combining Audience Type and Visit Intent (and incorporating a set of methods for building visit-intent), it creates a bridge between the type of information we use to target (who the customer is and what we think they need) and the behaviors (page views) we measure.
In the next few posts, I’ll show how you can use that bridge to create customer-level aggregations in the warehouse that provide effective targeted marketing – re-uniting the world of Database Marketing with Digital Analytics.
Analyst vs. Implementer in SiteCatalyst
Wednesday July 13th 2011, 10:13 pm
Filed under: Gary Angel
By Gary Angel
Adam Greco of Web Analytics Demystified recently published a list of “pet-peeves” when it comes to Omniture implementations. Now Adam is not just one of the smartest people in our industry, he knows as much about Omniture as it’s possible to know. Still, the list contains a couple of points that I think illustrate the difference between the perspectives of an implementer versus those of an analyst and highlight why so many Omniture implementations go wrong when it comes to an analysis perspective.
Most of Adam’s pet peeves identify obvious bad practices in SiteCatalyst tagging. There are a couple, however, that raise more difficult questions. Among these I would number Adam’s first and biggest pet-peeve:
Tracking Every eVar as an sProp
I would say that my biggest pet peeve is when clients have an sProp for every eVar they have set (or vice versa). When I see this, it is an early warning sign that the client doesn’t fully understand the fundamentals of SiteCatalyst. While there are definitely cases where you would capture the same data in both an eVar and an sProp, they are usually few and far between. As a rule of thumb, I only set an sProp if:
• There is a need to see Unique Visitor counts for the values stored in the sProp
• There is a need for Pathing
• You have run out of eVar Subrelations and need to break one variable down by another through the use of a Correlation (which will go away in SiteCatalyst v15)
• There will be many values (exceeding the unique limits) and you just want data stored so I can get to it in DataWarehouse or Adobe Insight
For the most part, that is it… Beyond that, I tend to use eVars and Success Events for most of my implementation items.
I think this is far too hard on the much-maligned sProp. Setting every eVar as an sProp is undeniably sloppy and almost always does reflect some lack of understanding about the functions of each. But when an implementer isn’t deeply certain of the roles for each, I’d much rather have them set every eVar as an sProp than follow Adam’s advice and neglect their sProps.
From my perspective, the really problematic sentence here is this one:
“While there are definitely cases where you would capture the same data in both an eVar and an sProp, they are usually few and far between.”
It’s a problem because the cases where you would capture the same data in an eVar and sProp are all too common in SiteCatalyst – even given Adam’s list; and there are several common cases not on Adam’s list at all.
One of the most common cases where an sProp extends an eVar is the association of a variable with the page name. Props do this automatically (on page calls), eVars don’t. For implementations that neglect to move pageName into an eVar, this is a critical difference. There is almost no variable in Web analytics that you don’t want to cross-tabulate with the page on which it occurred. Props do this out-of-the-box. So copying an eVar to a Prop automatically gives you at least one significant and new capability.
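For readers who don’t live in page code, here is a minimal sketch of the duplication being debated; the variable numbers and values are made up.

// Illustrative only: an internal search term captured in both variable types.
// The eVar persists and subrelates to success events; the sProp fires with the
// page view that set it, so it cross-tabulates with pageName out of the box.
s.pageName = "search:results";
s.eVar4 = "waterproof hiking boots";
s.prop4 = s.eVar4; // the duplication under discussion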
Props can also be set up as lists. List Props have a bunch of unfortunate limitations in SiteCatalyst, but they also have some specific uses. If you’re trying to track impressions on a module or a dynamic site, you’ll often find that ListProps are the most convenient mechanism you have. This isn’t a duplication of an eVar (because you can’t do this in an eVar), but it is an important sProp function that often gets ignored in standard implementations.
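A quick sketch of what that might look like, assuming prop16 has been configured as a List sProp with a comma delimiter in the Admin Console; the slot names are invented.

// Illustrative only: impressions for every internal ad slot rendered on the page,
// packed into a single List sProp rather than burning one variable per slot.
var adsShown = ["house-ad:credit-card", "house-ad:mobile-app", "partner:widgetco"];
s.prop16 = adsShown.join(",");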
sProps also work differently when it comes to some aspects of segmentation (particularly Page Containers) – and they often provide a mechanism for picking out specific pages in a segment in a way that simply can’t be duplicated with an eVar. This isn’t the kind of problem that an implementer has to deal with – but segmentation is at the heart of analytics and I’m always reluctant to give up analytics capability.
The truth is that the prop variable is like the classical concept of the atom (I’m indebted to our own Paul Legutko for this analogy) – indivisible and unique, for reporting and analysis purposes. What you see is what you get in unique image-requests – even if this means lots of “unspecifieds” in the interface or blank values in DW. The eVar is more like modern quantum theory, where eVar values can create “spooky action at a distance” (Einstein) within Data-Warehouse, Discover, and the interface sub-relations. It’s often the case that you can run a DW request asking for two different variables (say, Internal Search Term and Site Content) in both a set of props and a set of eVars, and get 80% “unspecified” in the prop variables but 20% unspecified in the eVars. In that situation, the prop is usually what you want and to get similar clarity from the eVars would take significantly more work.
You don’t really need to add functions to Adam’s list to see why duplication of many eVars makes sense. Let’s consider some of the key points:
- You have run out of eVar Subrelations and need to break one variable down by another through the use of a Correlation (which will go away in SiteCatalyst v15)
Most standard Omniture contracts severely limit your ability to do eVar Subrelations (in V14 – which is, after all, the basis for every implementation we’ve all seen). Most sites live with only three full subrelations. That means you can only fully cross-tabulate three of your eVars unless you want to pay Omniture extra.
And with eVars, the story is even more complicated. eVar subrelations have hidden “gotchas” – an eVar with a week expiration and last attribution (like a marketing eVar) will not subrelate successfully to another eVar with a visit expiration and linear attribution (like a content eVar). Props, being discrete, don’t have this issue.
sProps provide a much more generous set of cross-tabulations (Correlations in Omniture speak) without additional payment. Unless you’ve setup a very skinny implementation, you’ll run out of eVar subrelations and still have about 30 variables to go (maybe more).
There is nothing more maddening than finding that you can’t cross-tabulate a variable like Search Term with Search Results Count because they’ve been set up as eVars without corresponding sProps. Cross-tabulation is a basic function of analysis, and the idea that you wouldn’t want to cross-tabulate almost every variable is foreign to an analyst.
- There is a need to see Unique Visitor counts for the values stored in the sProp
I know there is a deep uncertainty in the Web analytics community over the value of visitor-level statistics. I don’t share it. Visitor is the fundamental level of all analysis. There are variables for which you don’t need to understand visitor counts and visitor distributions (would that I could always get visitor distributions – and you can use eVar counters to custom create these), but I’ll paraphrase and say that they are few and far between.
- There is a need for Pathing
This bullet point actually leads to another “pet-peeve” that I think illustrative of the difference between an implementer and analyst viewpoint. Adam rightly complains that setting up Pathing on a variable that doesn’t change in a session is a mistake. That’s pretty hard to argue with!
But here’s my analyst’s pet peeve with pathing – people don’t use it nearly enough. In a previous post I listed a whole range of typical meta-data items that should be captured in an implementation (a rough tagging sketch follows the list):
- Functional Taxonomy: Describing what the page is supposed to be doing in the broader site-structure
- Site Taxonomy: The hierarchical levels that the page occupies (e.g. Products/Detail)
- Product Taxonomy: The product/family the page concerns (e.g. TVs/LCD/ModelX)
- Topic Taxonomy: A topic coding of the content (e.g. International Affairs/Middle East/Egypt/Revolution)
- Audience: The visitor segments the page is designed for (e.g. All, engineers, consumers, health-care providers, professionals, etc.)
- Sales-Stage: The place in the sales stage the content is directed to (e.g. Early, Middle, Late)
- Page Components: The modules the page contains (e.g. videos, images, reviews, etc.)
- Component Classification: The value or status of the page or component (e.g. Overall Review Rating is High or Low, Price is Discounted or List, Availability is Out-of-Stock)
- Content Cardinality: The amount of line-item content on a page (e.g. Number of Search Results Returned, Number of Products Listed, Number of Reviews)
- Page Length: The number of words or screens of text on the page (e.g. 800 word description, 200 word article, article in 3 pages, article in 1 page)
- Content Source: The publisher, source, author of the content (e.g. Columnist X, Database Y, blogger Z)
- Publish Date & Days since Changed: The recency and freshness of the content
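Here is a rough sketch of how a few of these items might be populated on a product-detail page view; the variable numbers, the values, and the decision to duplicate each one in both variable types are all assumptions made for the sake of illustration.

// Illustrative only -- mapping a handful of the meta-data items above onto variables.
s.prop20 = s.eVar20 = "product-detail";            // Functional Taxonomy
s.prop21 = s.eVar21 = "Products/TVs/LCD/ModelX";   // Product Taxonomy
s.prop22 = s.eVar22 = "consumers";                 // Audience
s.prop23 = s.eVar23 = "late";                      // Sales-Stage
s.prop24 = s.eVar24 = "reviews|video|images";      // Page Components
s.prop25 = s.eVar25 = "12";                        // Content Cardinality (reviews shown)
// Pathing on a few of these sProps (e.g. Product or Functional Taxonomy) would then
// be enabled in the Admin Console, not in page code.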
Every single one of these variables should be cross-tabulated. Every one. About half of these variables would benefit from pathing.
Pathing is also powerful in the analysis of dynamic applications, systems like faceted search (especially!), internal link tracking and even internal search term tracking. Unfortunately, the vast majority of Omniture implementations I review don’t enable pathing on any meta-data variables and don’t track faceting or dynamic applications at all. This is a much, much more widespread and serious problem than mistakenly pathing a static variable.
I’d rather have an implementer randomly enable pathing on props than not use it at all – at least there’s some chance they are providing me useful information! From an analyst perspective, I’m far more annoyed when people don’t try to use functionality they paid for than when they use it poorly.
There’s no doubt that eVars and sProps duplicate functionality. There’s also little doubt that Omniture is increasingly making eVars a more general purpose type of variable – one that may eventually replace sProps. Version 15 solves two of the biggest issues with eVars in all previous versions of SiteCatalyst: uniques counting and cross-tabulation (to some extent). Make pathing of eVar instances an option, and you might well be getting close to a single variable-type view of the world.
I’d welcome that. The two variable types in SiteCatalyst are needlessly confusing and have little in common with other BI or analysis systems. But that’s the future – not a version that any implementation we’ve ever seen was built for. In V14 and every extant implementation of Omniture, it’s sound practice to duplicate many of your sProps and eVars and it’s even better practice not to ignore the sProp and its additional capabilities. I’ll take a confused but robust implementation over a clear but crippled one any day. It’s a little painful to see an eVar duplicated as a prop (or vice-versa) when it makes no sense, but it’s much less painful than having to pay for a cross-tabulation when you happen to need one or being simply unable to see a visitor count or setup a path analysis.
As an analyst, it’s just hard for me to understand how the biggest pet peeve around Omniture implementations should be something that doesn’t cost a penny, doesn’t sacrifice any functionality, and adds at least some analytic richness.
I have similar issues with Adam’s take on Vista Rules:
VISTA Rule Chaos
The final pet peeve I will mention is related to VISTA Rules. Let me start by saying that VISTA and DB Vista rules are not bad. They can be very powerful, but it is also true that they can be easily misused and wreak havoc on a SiteCatalyst implementation. When using VISTA rules, it is critical that you and your entire team understand WHEN the rules are being used and WHAT they do in terms of setting variables. I have seen many cases where a developer will change a variable not knowing that there are VISTA rules impacting it. You need to make sure VISTA rules are heavily documented and as you change your site or implementation, they need to be factored into the equation. One suggestion I have is to add the phrase (SET VIA VISTA) in the name of any variable that is set via a VISTA rule in your documentation so there is no missing it!
The other pet peeve I have related to VISTA rules is when they are used as a “band-aid” to avoid doing real tagging. In the long-run, this always comes back to haunt you. I see many clients creating band-aids on top of band-aids until things fall apart. I am ok with companies using Vista rules to get things done quickly, but I recommend that, over time, you phase out as many VISTA rules as you can and move their logic to your regular tagging so you have all of your logic in one place.
There’s good stuff in here and, in fact, I agree with almost everything Adam says. I really like his idea of naming variables explicitly as Vista – that’s great practice. It can be quite confusing for developers to understand how a Vista-encoded variable is getting set (I’ve had that experience). And who could argue with extensive documentation?
It’s what’s missing more than what’s here that bugs me. I see many Omniture implementers whose attitude seems to be that whatever can be put in the tag should be. I take the opposite viewpoint. Our job isn’t just analysis – it’s optimization – making the Website better. Every single thing you add to a tag adds page weight and execution time.
The best organizations these days are incredibly sensitive to page load times; I think they’re right. If I can collect or classify information without impacting page weight, I’m usually in favor of it. Omniture tags (and some of the Omniture plug-ins) are quite weighty. If you can replace them with a Processing Rule (V15) or a Vista Rule, you ought to consider it. Any discussion of Vista Rules which neglects the benefits of back-office processing as opposed to client-side processing is missing a vital dimension.
Omniture is a big and complicated system and there is no one right approach to an implementation. Even very skilled practitioners will likely disagree in certain cases. But that’s also why I keep hammering away at the theme that when you’re planning an Omniture implementation, you can’t afford to ignore analytics knowledge no matter how skilled or knowledgeable your technical implementers are.
It’s important to know the ins-and-outs of every technical feature of Omniture. It’s even more important to know exactly how and why you’re going to use those features. Omniture implementations place a significant premium on pre-planning and careful thinking about the problems you’re trying to solve when you create an implementation – and nowhere is that more relevant than in the types of variables you’ll capture and the nature of the variables you’ll use.