Uncommon Sense: Advertising Process Control – It’s All Relative

In my previous post on Advertising Process Control, I explained the basic concept: there is great value in applying process control mechanisms to your advertising, just as you do to things like average days outstanding for accounts receivable and customer satisfaction. Here, I talk about what achieving Advertising Process Control requires.

Before you can start managing your advertising production process, you need to accurately assess where your organization is on the Advertising Process Control continuum. At the far left are Position 0, which is simply not measuring the effect of your advertising campaigns at all, and what I call Position 0.5: using a different measure for each of your advertising campaigns, perhaps depending on the story you want to tell.

Then follow Positions 1–6. If you achieve the last of these, you can claim to have mastered the discipline:

  • Position 1: You are using a consistent set of reliably produced metrics to evaluate your campaigns.
  • Position 2: You are tracking trends in your metrics and comparing them to internal norms.
  • Position 3: You are optimizing from campaign to campaign based on what you learn from each new campaign.
  • Position 4: You are optimizing within each campaign in real time based upon solid data you are receiving.
  • Position 5: You are comparing your metrics to external norms.
  • Position 6: You are comparing the variance with which you achieve your norms to internal and external benchmarks and dissecting the drivers of positive and negative variance.

Once you start consistently measuring your advertising campaigns with your metrics of choice, it is a simple step to average your scores over time to create your own norm. It’s also simple enough to compare your individual and average scores to industry norms or benchmarks if you are using a third party that provides consistent measurement of your campaigns and those of others.

Once you have your own norm (also known as average or mean) score for a particular measure, you can then look at the distribution or variance of your scores to understand how consistently you produce advertising of your average quality. For instance, is your average score of “5” produced by a lot of 4, 5 and 6 scores or a bunch of 1s and 9s? Either way, there are important things to learn from the variance.
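
To make that concrete, here is a minimal Python sketch, using made-up score histories, showing how two sets of campaign scores can share the same norm while telling very different consistency stories:

```python
from statistics import mean, pstdev

# Two hypothetical score histories with the same norm (mean = 5)
# but very different consistency.
steady = [4, 5, 6, 5, 4, 6]    # clustered tightly around the norm
erratic = [1, 9, 1, 9, 1, 9]   # same average, wildly inconsistent

for label, scores in [("steady", steady), ("erratic", erratic)]:
    print(f"{label}: mean = {mean(scores):.2f}, std dev = {pstdev(scores):.2f}")

# steady: mean = 5.00, std dev = 0.82
# erratic: mean = 5.00, std dev = 4.00
```

An advertiser looking only at the averages would see two identical performers; only the spread reveals which process is under control.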

If you're producing advertising with low variance, you might want to look at whether your average scores could be improved, perhaps relative to an external benchmark. If you have high advertising process control variance, you need to begin dissecting what is driving the difference between your high-performing campaigns and your low-performing ones so you can do more of what is working and less of what is not.

Once you have your advertising process control variance score, you can create benchmarks for variance itself. Either through a third-party measurement company or, if you are a big enough organization, independently, you can likely compare variance scores internally to positive effect. For instance, you might compare the advertising process variance scores among your different divisions, brands, or regions. You might even be able to segment scores on other variables, such as media type or agency used.
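
As a quick illustration of that kind of internal comparison (the division labels and brand-lift numbers below are invented, and a pandas group-by is just one convenient way to do it):

```python
import pandas as pd

# Hypothetical campaign-level scores, tagged by the segment they belong to;
# swap "division" for brand, region, media type, or agency as needed.
campaigns = pd.DataFrame({
    "division":   ["A", "A", "A", "B", "B", "B"],
    "brand_lift": [4.8, 5.1, 5.0, 2.0, 8.0, 5.0],
})

# Each division's norm and its process-control variance, side by side.
summary = campaigns.groupby("division")["brand_lift"].agg(["mean", "var"])
print(summary)
# Division A: mean ~= 4.97, var ~= 0.02  (consistent)
# Division B: mean  = 5.00, var  = 9.00  (erratic)
```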

At Nielsen, one of the measures that we track for thousands of campaigns each year is brand lift (i.e., the change in attitudes and perceptions for a product based on exposure to advertising). Other measures such as recall, breakthrough, on-target percentage and sales lift can also be used.

This framework, with which advertisers can measure the variability of their advertising performance, plots the average brand lift of all campaigns on the Y-axis and the variance of those campaigns on the X-axis.

Advertisers that consistently produce high-performing campaigns (that is, with low variance or volatility) are the ideal (top left quadrant). Top right is second best: high-performing campaigns interspersed with low-performing ones, but a high overall average. An inconsistent campaign performance track record often indicates campaigns that are not being optimized, that are being produced in a siloed fashion, or where learning and process improvement have not taken hold.

As we go around the square, things get worse: the bottom right holds relatively volatile performers, but with a low average performance. The top right box and the lower right box thus capture equally volatile players, but the first set generally put forward strong campaigns with some weak ones, while the second set generally put forward weak ones with some strong ones. Finally, worst of all are those at the bottom left, who consistently perform badly. At least, as the joke goes, they are consistent.

The fact that the worst quadrant is low in volatility is a way of noting that volatility is important, but it is not everything. Those in this quadrant are probably optimizing nothing in their campaigns. If they were to start, they might quickly find that they have an underperforming media planner or consistently underperforming creative, which they could then address.

This framework can be used to measure participants in the process (agencies, publishers, and advertisers, or individual brands and business units) to identify how well the advertising process they have in place can consistently produce good results, or, indeed, whether they have a process in place for consistently producing any kind of result at all.
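
To make the mechanics concrete, here is a minimal Python sketch that places hypothetical participants on the grid; the agency names and lift numbers are invented, and splitting each axis at its median is an assumption on my part, one reasonable threshold among several:

```python
from statistics import mean, median, pvariance

# Hypothetical campaign lifts per participant (agencies here, but publishers,
# brands, or business units work the same way).
lifts = {
    "Agency A": [4.0, 4.2, 3.9],   # high average, low variance  -> top left
    "Agency B": [7.0, 1.0, 6.5],   # high average, high variance -> top right
    "Agency C": [1.0, 1.2, 0.9],   # low average, low variance   -> bottom left
    "Agency D": [4.0, 0.2, 1.0],   # low average, high variance  -> bottom right
}

stats = {name: (mean(xs), pvariance(xs)) for name, xs in lifts.items()}

# Split each axis at its median -- one reasonable cut among several.
avg_cut = median(avg for avg, _ in stats.values())
var_cut = median(var for _, var in stats.values())

for name, (avg, var) in stats.items():
    row = "top" if avg >= avg_cut else "bottom"
    col = "right" if var >= var_cut else "left"
    print(f"{name}: avg = {avg:.2f}, var = {var:.2f} -> {row} {col}")
```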

Obviously, not every advertising campaign will be a home run. It is precisely because of this that you need to be tracking performance in this way. Tracking the performance of multiple campaigns lets you know if you are reliably achieving your brand objective, or if there is some persistent problem in your process. Knowing this allows you to develop best practices with which to drive brand lift consistently in your campaigns.

To see just how important advertising process control can be, we put a series of media agencies, publishers and brands on the grid using real data from the online ad campaigns of real, anonymized clients.

Exhibit 2 shows results that are all over the map. Agencies B, C and D are performing strongly on a consistent basis, while agencies E, F, G and H are consistently performing poorly. The others are putting in volatile performance, some with a high average performance, some with a low one. The result is an average brand lift of 1.86, with a volatility of 2.52.

This chart does not say definitively that one media agency is good or bad for an individual client (some agencies are good at some things, some at others), but it does open the door to a variety of questions: What is causing the volatility? Is it the creative, the media plans, the clients, the categories that the agency works in, the whole process? And so on. Taking this kind of systematic view of the different parties to, and components of, the advertising process makes it possible to identify best practices and apply them consistently. Exhibit 3 puts publishers on the same grid.

Again, we have consistent, strong performers (C & G) and consistently poor ones (J, K, N & O), for an average lift of 2.96 (better) but a variance of 2.89 (worse). The others are on the spectrum in between. That is the mathematical expression of what we can easily see looking at the chart: the results are skewed to the right, which marks improved brand lift, but the spread is measurably greater. Here the questions would be a little different, because what publishers can change is different from what agencies can change: Are the right choices being made as to where on the site to show the ads? How often should an ad be shown on a site? Which specific ads should be shown on the site? Are they selling to the right category of advertisers given their audience? And so on.

Finally, Exhibit 4 maps different CPG brands to the grid: how are an advertiser’s different brands performing?

Here, brands A, B and C are performing strongly and consistently. Brands K, L and M are at the other end of the scale, with the remainder in between. Average brand lift is 2.31, with a variance of 2.23, but the performance difference between the first and second group of brands is sufficiently great and sufficiently consistent that it is reasonable to say that the first set of brands are benefitting from better advertising process control than the second set. Meanwhile, those in the bottom right quadrant need to tighten their advertising processes, or risk losing market share and revenue to those in the two upper quadrants. Again, it is helpful to look for patterns in the data. Are most of the brands in one quadrant using the same creative agency or media agency? Are they all in the same division or product category? Do they use similar media partners, precision marketing, creative types?

Armed with a full understanding of the opportunity, we can pinpoint what can be done to upgrade the process, because what can be measured can be improved.

This article first appeared on www.warc.com and is the second in a three-part series. The first is called “Get a Grip on Your Advertising.” For more insights from our thought leaders, view our other Uncommon Sense blog entries and our recent webinar.