Between Apple and the FTC, this is going to be an interesting year on social media.
The team at dysrupt has seen and managed first-hand the spend of over 50 different companies across every major social platform. Some of these advertisers spend $1 MM a year on Facebook alone; others spend $100 MM.
Here is a list of gut checks, maxims, guideposts, heuristics, aphorisms, and principles developed over the years.
Folk wisdom from the frontline.
The Facebook pixel is the link between your business and the platform. If this link is weak, you can expect your advertising to deliver weak results. The first step to optimization on the platform should always be to ensure all best practices and data transfers are in place. This can be anything from pixel QA all the way up to building for Facebook’s Conversions API (CAPI). At the end of the day, if you feed bad or incomplete data to an algorithm, it will produce bad or inefficient results. Feed the machine the best data you can… because we guarantee your top competitors are!
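As a rough illustration of what "building for CAPI" involves, here is a minimal sketch of a server-side Purchase event payload. The hashing requirement is real (the Conversions API expects PII normalized and SHA-256 hashed before sending), but the function names, the sample email, and the order value are our own placeholders, not production code.

```python
import hashlib
import json
import time

def hash_pii(value: str) -> str:
    """CAPI expects PII normalized (trimmed, lowercased) then SHA-256 hashed."""
    return hashlib.sha256(value.strip().lower().encode("utf-8")).hexdigest()

def build_purchase_event(email: str, order_value: float, currency: str = "USD") -> dict:
    """Assemble a minimal server-side Purchase event for the Conversions API."""
    return {
        "data": [{
            "event_name": "Purchase",
            "event_time": int(time.time()),
            "action_source": "website",
            "user_data": {"em": [hash_pii(email)]},
            "custom_data": {"currency": currency, "value": order_value},
        }]
    }

# This payload would be POSTed to the Graph API events endpoint for your
# pixel ID, authenticated with your system-user access token.
payload = build_purchase_event(" Jane.Doe@Example.com ", 49.99)
print(json.dumps(payload, indent=2))
```

The point of the sketch: even the smallest CAPI build forces you to think about data hygiene (normalization, hashing, event timing), which is exactly the "signal fidelity" the rest of this list keeps returning to.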
It’s amazing how many marketing teams have never gone through their own purchase flow. No shame if you haven’t; you can start right now! Before you begin, turn on some sort of pixel-tracking plug-in. Explore the site as if you were a customer: come in through the homepage, click an ad and land directly on a landing page, add multiple items to cart, input PII, create an account, and so on. Do every step a standard customer would take. Did it seem like a lot? Were you losing interest or hitting points of unnecessary friction? Now do the same thing on your competitor’s site. Track, compare, and see whose flow you prefer.
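While walking the flow, it helps to keep a simple checklist of which pixel events fired and in what order. This is a minimal sketch of that gut check; the expected funnel below uses Facebook's standard e-commerce event names, but the helper functions and the sample walkthrough are illustrative.

```python
# Standard e-commerce pixel events, in the order a shopper would fire them.
EXPECTED_FUNNEL = [
    "PageView",
    "ViewContent",
    "AddToCart",
    "InitiateCheckout",
    "Purchase",
]

def missing_events(observed: list[str]) -> list[str]:
    """Return funnel events that never fired during the walkthrough."""
    seen = set(observed)
    return [e for e in EXPECTED_FUNNEL if e not in seen]

def out_of_order(observed: list[str]) -> bool:
    """True if funnel events fired out of their expected sequence."""
    positions = [EXPECTED_FUNNEL.index(e) for e in observed if e in EXPECTED_FUNNEL]
    return positions != sorted(positions)

# Example walkthrough where InitiateCheckout never fired.
walkthrough = ["PageView", "ViewContent", "AddToCart", "Purchase"]
print(missing_events(walkthrough))  # ['InitiateCheckout']
print(out_of_order(walkthrough))    # False
```

A missing or out-of-sequence event found this way is exactly the kind of weak link in the pixel that quietly degrades everything downstream.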
Retargeting is normally more efficient than prospecting, but not dramatically so. If your business falls outside this norm, there should be a clear and concise answer as to why the difference exists. If retargeting is 10x more efficient than prospecting, there are probably improvements to be made on prospecting. If prospecting is on par with or more efficient than retargeting - we’ve seen it happen - then it may be time to investigate the product’s pitch, website, user flow, etc.
Lookalike audiences should outperform interest targeting; the smartest engineers in Silicon Valley spent many years ensuring this is the case. If lookalikes are underperforming interest targeting, it’s time to examine the seed list being used to create them.
The largest advertisers on Facebook use broad targeting. Making broad targeting work for your business means Facebook’s algorithm is tuned to search the masses for your next best customer. This can only happen when signal fidelity is high, account structure is lean, and creative follows best practices. In short, it takes time, but it is a great goal to achieve.
The more data the algorithm can ingest, the more of a competitive advantage you will have as an advertiser. One often overlooked aspect of signal fidelity is the catalog. If you are selling multiple SKUs, it is critical to set up a catalog so Facebook can learn who is looking at what type of products on your site. It also unlocks a different campaign objective to test!
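For reference, a catalog is fed by a product feed with a handful of required fields per SKU (id, title, description, availability, condition, price, link, image_link, brand). This is a hedged sketch of assembling such a feed as CSV; the field list follows Facebook's product feed conventions as we understand them, and the sample product row is invented.

```python
import csv
import io

# Core per-SKU fields a Facebook product feed expects.
FEED_FIELDS = ["id", "title", "description", "availability",
               "condition", "price", "link", "image_link", "brand"]

# Invented sample row for illustration.
products = [
    {"id": "SKU-001", "title": "Canvas Tote", "description": "Everyday carry tote.",
     "availability": "in stock", "condition": "new", "price": "29.00 USD",
     "link": "https://example.com/tote", "image_link": "https://example.com/tote.jpg",
     "brand": "ExampleCo"},
]

def build_feed(rows: list[dict]) -> str:
    """Serialize product rows into the CSV feed the catalog ingests."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FEED_FIELDS)
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

print(build_feed(products))
```

Once the feed is live and matched to pixel events like ViewContent and AddToCart, Facebook can learn which product types each visitor browses, which is the signal the paragraph above is pointing at.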
Alphas and Betas are great but can also create a lot of extra work on the advertiser side. To offset this cost, confirm that the Alpha or Beta will remain available to you after the testing period, so the research investment pays off as an advantage over advertisers who never had access.
In the past decade of interacting with clients, about 95% of them have dictated budgeting via last-click measurement. Of those, around 50% have pursued some type of MMM or MTA model from one of the top-tier providers in a bid to move beyond last click. 0% of that cohort changed their behavior after the models were implemented. Those models may intrigue the marketing team, but at the end of the day Finance dictates budgets, and Finance wants consistent metrics that can be easily repackaged for the C-Suite or a Board. Build marketing systems that align with the purse strings of the company (also known as the real KPIs).
Testing is great, but results have to be simplified to their core to have a lasting effect. Consider the game of telephone before starting a test with a lot of explanation and moving parts. Can the test produce a result that is easy to explain and implement? Does it touch cross-functional teams that will be positively or negatively impacted by the results? Don’t forget that companies are just people, and every organization has its own unique internal politics.