The Big Mistakes Marketers Make When Measuring Facebook’s Ad Performance

By Kirill Gil, Persado Global Solutions Consultant for Paid Media

The precise way to measure the performance of a Facebook ad varies between companies and can be the subject of debate among digital marketers. The truth is there’s no single right answer — companies can and should interpret results using the metrics that best fit their goals. But there are still certain facts about metrics that, if ignored, will skew the figures. At Persado, I help brands plan better ads and build strategies for success. In that role, I’ve noticed six common mistakes marketers make with paid media metrics. The businesses I work with have learned from them, and now you can too.

Mixing Attribution Windows 

Having inconsistent look-back windows across your reports can cause serious differences in the perceived performance of campaigns if you are comparing anything except click-through rates. Why? Because CTR is the only instantaneous metric paid media offers: a click either happens when the ad is served or it doesn’t, while conversions keep trickling in for days afterward. When making any sort of comparison, it is critical to know exactly what settings were used to pull each set of data. If two data sets were pulled with different look-back windows, they should not be compared.
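
A simple guardrail is to make the window check explicit in whatever script assembles your reports. Here is a minimal Python sketch; the report structure and field names (attribution_window, cpa) are hypothetical, not a real ad platform API:

    # Minimal sketch: refuse to compare report pulls with mismatched settings.
    # The dict fields ("attribution_window", "cpa") are hypothetical, not a real API.

    def compare_campaigns(report_a, report_b, metric):
        """Return the difference in `metric`, but only under matching settings."""
        # CTR is effectively instantaneous, so it is safe across windows.
        if metric != "ctr" and report_a["attribution_window"] != report_b["attribution_window"]:
            raise ValueError(
                f"Cannot compare {metric!r}: pulls used "
                f"{report_a['attribution_window']} vs {report_b['attribution_window']}"
            )
        return report_a[metric] - report_b[metric]

    july = {"attribution_window": "28d_click", "ctr": 0.012, "cpa": 14.20}
    august = {"attribution_window": "7d_click", "ctr": 0.015, "cpa": 11.80}

    print(compare_campaigns(july, august, "ctr"))  # fine: CTR is window-independent
    try:
        compare_campaigns(july, august, "cpa")     # windows differ
    except ValueError as err:
        print(err)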
 


Not Waiting for the Entire Attribution Window to Pull Results  

Oh no! You just launched a campaign last week and pulled data and — much to your chagrin — it looks terrible compared to last month’s! Step. Away. From. The. Panic. Button. There’s a huge difference between a 28-day window and the seven days the campaign has been running: the new campaign simply hasn’t had time to accrue the delayed conversions a full window would capture. When brands forget this, they start making rash decisions such as re-using old creative because the new direction just “isn’t working.” When a campaign is fresh, it is always worth checking whether the picture changes once you shorten the look-back window to the shortest available setting (24 hours).
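
One way to stay off the panic button is to compute how mature the data is before reacting to it. A rough sketch of that check, assuming you know only the launch date and the window length:

    # Rough sketch: only trust full-window metrics once the campaign has
    # been live longer than the attribution window itself.
    from datetime import date, timedelta

    def usable_window(launch_date, window_days, today=None):
        """Return the longest look-back window (in days) the data can support."""
        days_live = ((today or date.today()) - launch_date).days
        if days_live >= window_days:
            return window_days   # full window has elapsed; compare freely
        return 1                 # fresh campaign: fall back to a 24-hour view

    launch = date.today() - timedelta(days=7)
    print(usable_window(launch, window_days=28))   # -> 1: too early for 28-day numbers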

Forgetting Delayed Events Are Measured Differently Across Platforms 

Facebook and DoubleClick count delayed conversions differently. Facebook attributes the conversion to the last time a customer saw an ad within the look-back window, not to the day of the actual conversion. DoubleClick assigns the conversion to the day it happened. Comparing the two attribution models will give you significantly different daily results — certain days may look dormant in DoubleClick compared to Facebook, when in reality someone may simply have seen an ad one day and converted later.
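
To see why the daily numbers diverge, you can re-bucket the same conversions both ways. A toy sketch with made-up events:

    # Toy sketch: the same three conversions, bucketed by day in two ways.
    from collections import Counter

    events = [  # hypothetical delayed conversions
        {"impression_day": "Mon", "conversion_day": "Wed"},
        {"impression_day": "Mon", "conversion_day": "Thu"},
        {"impression_day": "Tue", "conversion_day": "Thu"},
    ]

    by_impression = Counter(e["impression_day"] for e in events)  # Facebook-style
    by_conversion = Counter(e["conversion_day"] for e in events)  # DoubleClick-style

    print(by_impression)   # Counter({'Mon': 2, 'Tue': 1})
    print(by_conversion)   # Counter({'Thu': 2, 'Wed': 1})
    # Same total conversions, very different daily curves; neither platform is "wrong".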
 

Comparing Raw Metrics After Using a Creative Optimization Black Box

It is extremely hard to decipher results when an optimization engine is involved. For example, Facebook’s optimization algorithm will push budget towards the ad it considers best for performance, without any insight into why. Because delivery is no longer evenly distributed, raw per-ad metrics reflect the algorithm’s allocation choices as much as the creative itself, and the observer may not get an accurate read of performance. Unfortunately, optimization algorithms are hard to avoid, especially on Facebook. Keep this in mind. If a clean test is important to you, set up the campaign in a way that limits interference from black boxes.
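
One common pattern for limiting that interference (a general testing practice, not an official Facebook recipe) is to give each creative its own ad set with an identical fixed budget, so the optimizer cannot shift spend between the creatives you are comparing. A sketch of that split, using a hypothetical config structure rather than the real Marketing API:

    # Sketch: equal-budget split so the optimizer can't starve any creative.
    # The config dicts are hypothetical, not the Facebook Marketing API.

    def clean_test_adsets(creatives, total_daily_budget):
        """One creative per ad set, identical budgets, identical targeting."""
        per_adset = round(total_daily_budget / len(creatives), 2)
        return [
            {"name": f"test_{creative}", "creative": creative, "daily_budget": per_adset}
            for creative in creatives
        ]

    for adset in clean_test_adsets(["urgency_copy", "social_proof_copy"], 100.0):
        print(adset)
    # Each creative gets $50.00/day regardless of early results, so any
    # difference in performance reflects the creative, not the algorithm.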

Lumping Too Many Variables Into One Metric 

If your KPI looks like a calculus formula, it becomes very difficult to extract meaningful insights about an ad’s performance. The further you move away from the initial impression, the more factors could contribute to a particular result. In most cases, it is best to make optimization decisions on immediate response rates such as CTR, page views and conversion rate rather than on compound metrics like Return on Ad Spend (ROAS). Breaking performance down into more manageable KPIs gets you more actionable insights.
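
ROAS is a good example: it can be rebuilt from simpler rates that each point at a specific part of the funnel. A quick sketch with illustrative numbers:

    # Sketch: ROAS decomposed into component KPIs (illustrative numbers).
    impressions, clicks, conversions = 100_000, 1_200, 60
    revenue, spend = 4_800.0, 2_000.0

    ctr = clicks / impressions                  # 0.012 -> creative problem?
    cvr = conversions / clicks                  # 0.05  -> landing-page problem?
    aov = revenue / conversions                 # 80.0  -> offer/pricing problem?
    cost_per_impression = spend / impressions   # 0.02

    roas = revenue / spend                               # 2.4
    rebuilt = (ctr * cvr * aov) / cost_per_impression    # also 2.4

    print(round(roas, 2), round(rebuilt, 2))
    # When ROAS moves, the component that moved tells you where to look.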
 

Mixing Unique and Non-Unique Values

Often, unique and non-unique values are used interchangeably, which can cause significant issues when digging into the numbers. For example, if CTR is calculated from unique clicks while page visits are counted as raw, non-unique events, it can look like more people land on the page than clicked the ad, which is impossible, and the flow of users through the funnel becomes very hard to read.
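
A quick numeric illustration of how that mismatch produces an impossible-looking funnel (the counts are made up):

    # Sketch: mixing unique and non-unique counts breaks the funnel (made-up data).
    unique_clicks = 800       # people who clicked at least once
    total_clicks = 1_100      # includes repeat clicks by the same people
    total_page_visits = 950   # non-unique: includes refreshes and return visits

    # Inconsistent: unique clicks vs. non-unique visits
    print(total_page_visits > unique_clicks)   # True: "more visitors than clickers"?!

    # Consistent: compare like with like
    print(total_page_visits <= total_clicks)   # True: the funnel makes sense again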