SMX Overtime: Eternal testing, the key to Facebook Ads success
Earlier this month, I had the pleasure of presenting at the fall virtual SMX. Though nothing replaces the ability to network in person, SMX did a great job creating an experience that facilitated the discussions and networking that we’ve all been missing, in addition to the excellent content!
In my session, “Eternal Testing: The Key to Facebook Ads Success,” there were several excellent questions that we weren’t able to get to, so we decided to put together this post as a means of answering those questions. I’ve also included the questions that we were able to answer because all of the questions were so on point that I thought it would be beneficial to share the answers all at once.
“How long should it take to contact a lead who filled out an instaform?”
How long to wait before responding to a lead depends on a few things:
- Is the lead hot? Did they request a sales call or a demo?
- If they requested content, are they waiting on an email from you before they can access it?
If the answer to either of those questions is yes, that indicates a sense of urgency, and I would recommend reaching out as quickly as possible. For someone who requests a demo or a call, the best chance you have of converting them to a sale is in that moment. The longer you wait to follow up, the more time they have to shop around, become distracted, or change their mind.
If the lead is a higher-funnel lead and they are reaching out for content, I don’t recommend trying to push a sale right away, as they likely won’t be ready for that type of commitment. But I do recommend making a first touch fairly quickly while they still have brand recall, so that they’re more likely to engage with your follow-up content. I suggest dropping these folks into an email nurture campaign so that they are engaged in an automated way until their behavior indicates that they are ready to be contacted by sales. If you can make the content they requested the first piece of that nurture, you get a quick, seamless follow-up without seeming overbearing.
“How long is Facebook’s learning curve? If we test something, should we let it run for at least 30 days?”
This is tough to answer — I hate to give an “it depends” answer but this is a scenario where that response applies. Facebook’s learning curve is better defined by the time it takes to get to 50 optimization events, which can vary from one account to the next — even from one ad set to the next within the same account.
The struggle with Facebook is that if you draw out the test too long, you can hit creative fatigue, which can negatively impact your results as well. The length of time before you hit creative fatigue varies across a lot of factors, which also vary from one campaign to the next. Audience size and frequency play a role here, so an ad set with a large audience and low frequency will typically hit creative fatigue more slowly than an ad set with a tiny audience.
All that to say, it’s ideal to get out of the learning phase, which means achieving at least 50 optimization events. The goal should be to get out of the learning phase (ideally within a week) and also to collect enough data to where you can achieve statistical significance in your results. There are a lot of free statistical significance calculators out there — this one from CXL allows you to estimate how many more days you need in your test to achieve statistical significance based upon the current results and the number of days that the test has been running.
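Under the hood, the kind of check those calculators run is often a two-proportion z-test. Here's a minimal sketch of one in plain Python; the conversion counts and sample sizes below are hypothetical, just to illustrate comparing two ad variants:

```python
# A minimal sketch of a two-proportion z-test, the sort of check a
# statistical significance calculator performs. All numbers are hypothetical.
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return (z, two-sided p-value) comparing conversion rates of two ad variants."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference)
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Variant A: 60 conversions from 1,000 impressions; Variant B: 40 from 1,000
z, p = two_proportion_z_test(conv_a=60, n_a=1000, conv_b=40, n_b=1000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A p-value below 0.05 is the conventional bar for calling a winner; if you're above it, the calculators estimate how many more days of data you'd need at the current rates.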
“Why would learning be limited on an ad campaign that has a bigger budget and more conversions than the other ad campaigns? The audience is large, from the client’s email list, with a lookalike audience.”
Facebook’s learning phase requires 50 optimization events and any changes that are made can send it back into learning mode. So, there are a few things that could be happening here:
- The audience may be bigger, but it may not have hit the threshold of 50 optimization events yet.
- Each ad set enters and exits the learning phase individually, so even if the campaign has hit the threshold of 50 optimization events, it could be that not all ad sets have achieved that threshold yet and some could still be in the learning phase.
- There may have been recent changes that could have sent parts of this campaign back into the learning phase even if it was out of learning mode before, such as new ads, budget changes, bidding changes, or changes to targeting. You can check this by looking at the recent changes to see if any edits were recently made. At that point, another 50 optimization events would need to happen after the most recent significant edit in order to exit the learning phase again.
“What is your take on the learning phase status of Facebook Ads? Will it affect testing scenarios?”
The learning phase definitely can impact testing. Until you’re out of the learning phase, results can vary wildly. It’s best to get out of learning phase before calling a winner. Realistically, though, some ad sets never make it out of learning mode and, in those cases, you have to try to get a statistically significant result even if it’s still in learning mode.
Not only does the learning phase impact the results of tests you run, it also ultimately impacts the long-term performance of the ad set and ads. For better performance, I recommend trying to identify ways to get out of learning mode, even if it means finding creative ways to expand targeting while still remaining relevant.
“How do you find Facebook’s ‘Budget Optimization’ versus set budgets for ad sets?”
I want to love Facebook’s Campaign Budget Optimization (CBO) because, as much as it pains me to say this as a semi-control-freak, Facebook’s bidding algorithms are pretty good. By the same logic, you’d expect CBO to perform better than manual intervention, right? Unfortunately, I haven’t really seen that to be the case.
There have been a few times that I’ve used CBO (similar to the instances where it makes sense to use Shared Budgets in Google, because each individual budget would be too restrictive to get good results on its own). I typically find, though, that it doesn’t prioritize well for the goal of the campaign and doesn’t spend as efficiently as I can by setting ad set budgets.
“Facebook interest and targeting options are quite limited, especially on B2B. Do you have any suggestions on how we can optimize targeting of audiences?”
I love questions about B2B for Facebook because so many folks are resistant to the idea of using Facebook for B2B, so I am always excited when that isn’t the case! The audiences are limited but I’ve seen Facebook work really well for B2B.
My favorite audiences of all, for both B2B and B2C, are lookalikes. I typically find that they outperform any of Facebook’s interests. If you have first-party data by way of the pixel or email lists, that’s where I would start. The more qualified the audience you can build (MQLs instead of just leads, closed-won opportunities, segmented enterprise clients, etc.), the higher-value the lookalike you can create from it. I would start with those as a means of testing. I would also take a look at your Facebook Audience Insights and see what comes up there as far as your page followers’ interests. Sometimes Facebook has interests around associations, in addition to some data around industries and loose job titles, which can also be good options.
“A huge issue we have on the B2B side with FB ads are prospects filling out bogus contact info on lead gen forms and on site lead gen forms.”
I’ve seen this to an extent, as well, unfortunately. We had a few interests that drove a ton of volume but also quite a bit of junk, and then that contaminated the pixel-based audiences that we were building lookalikes off of and they became a bit junked up, too — for lack of a better word. If your audiences are big enough, this is a good case for making sure things are segmented so that you can monitor performance at the targeting level — whether it be through on-Facebook leads or through an integration with your CRM. HubSpot, for instance, allows us to monitor lead quality at the ad set level, which allows us to make decisions from there.
Sometimes a decision we have to make is whether we’re okay with a little bit of noise in order to get more lead volume at an acceptable cost or if we need to ensure there’s no noise, which may mean moving toward smaller, more narrow audiences, but that might also mean a higher cost per lead due to having lower volume/less data powering the bidding algorithms.
Unfortunately, with bad data, lookalikes just make the problem worse — so, we usually either would move to a lookalike model off of only custom audiences from uploaded lists of the quality leads, or look for a different way to build a better pixel-based audience than all leads, if possible, such as a login if a portal exists, or taking the next step to schedule a demo on the calendar from a follow-up email.
“How would you update your testing strategy for an ad campaign that tends to get automatically disapproved any time it is updated? The ads follow policy and are typically reinstated after manual review.”
This is such a frustrating situation and I feel it in my bones because we, at Cultivative, have a client whose ads are automatically (and incorrectly, I might add) disapproved as soon as we launch them or make any edits. Depending on how many there are, I usually wind up reaching out to support with the campaign ID because individually submitting them for review takes so long.
All that said, usually the process looks like this:
- Launch ads (knowing they will automatically be disapproved).
- Send campaign ID to support in a chat as soon as disapproval comes through (which is usually almost instantaneous).
- Receive email follow-up from support confirming the ads were incorrectly disapproved and are now live, usually within 24 hours of sending the campaign ID.
- Begin test period now that everything is live; same day if it is early in the day when they are approved. If it’s mid-day or late in the day, then the following day will be considered the beginning of the test period.
“In your opinion, what is the best way to have Facebook and PPC search work hand-in-hand?”
Oh, I love this question. There are a ton of ways that Facebook and PPC can work hand-in-hand; so many, in fact, that it could warrant its own article!
Depending on how long you’ve been running each channel and what you’re running in each, you will often see an increase in branded search traffic when you activate higher-funnel campaigns, such as Facebook prospecting campaigns to new audiences, so look out for that! If you’re interested in understanding the impact beyond immediate direct response, monitoring Facebook impression and traffic trends against branded search trends is one way to look for impact outside of the direct conversions that come from Facebook.
One of the easiest ways to coordinate campaigns cross-channel is by setting up audiences built on UTMs to monitor the performance of each. You can set up audiences off of traffic from Facebook campaigns and layer them as observation only on your search campaigns. You can also set up audiences off of UTMs to remarket folks who visited from your search campaigns with Facebook remarketing. And you can set up both audiences in Google Analytics to get better visibility into how folks in each audience come through other channels.
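The UTM tagging that powers those audiences is just a set of query parameters appended to your landing-page URLs. Here's a small sketch of a tagging helper using only the standard library; the parameter values (`facebook`, `paid_social`, `prospecting_q4`) and the example domain are hypothetical:

```python
# A minimal sketch of appending UTM parameters to landing-page URLs so that
# cross-channel audiences (and analytics) can key off them.
# All parameter values and the domain below are hypothetical.
from urllib.parse import urlparse, urlunparse, parse_qsl, urlencode

def add_utm(url, source, medium, campaign):
    """Return url with utm_source/utm_medium/utm_campaign parameters added."""
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))  # preserve any existing parameters
    query.update({
        "utm_source": source,
        "utm_medium": medium,
        "utm_campaign": campaign,
    })
    return urlunparse(parts._replace(query=urlencode(query)))

tagged = add_utm("https://example.com/landing", "facebook", "paid_social", "prospecting_q4")
print(tagged)
# https://example.com/landing?utm_source=facebook&utm_medium=paid_social&utm_campaign=prospecting_q4
```

Once every channel's URLs carry consistent tags like these, you can define the remarketing and observation audiences described above by matching on the landing-page URL containing, for example, `utm_source=facebook`.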
The nice thing about audiences is that they tell you something about the traffic that you may not have otherwise known, such as who they are (targets and demographics) and what actions they’ve already taken on your site. For example, in Facebook, you have a lot of targeting options that you don’t have in Google Ads and vice versa. If you know that they came from a specific ad set in Facebook, that likely tells you something about their demographics or interests that you might not have otherwise known. In both cases, you can customize ads to the audience to make sure you’re taking full advantage. You can also add negative audiences to avoid sharing the same content that they’ve already downloaded to get them to a different, lower-funnel CTA.
There are also so many ways that PPC & Facebook Ads can learn from each other — for example, shared learnings from messaging testing.
“What about restricted ad categories, such as housing? Do you have any recommendations on segmenting audiences since options are limited?”
Good question. With special categories such as housing, credit, and employment, you do have somewhat fewer targeting and segmentation options, but there are still quite a few ways you can segment the data if you have the audience size. One of the bigger struggles I find with housing and real estate is that, if you’re only targeting one city (especially a small one), it can be tough to get a big enough audience pool with some of the individual interests, even without attempting to segment by age or gender (which wouldn’t be allowed anyway, of course).
If you have the data, there are still quite a few ways to segment it, keeping in mind that all segmentation should take into account whether the individual segments can still achieve 50 optimization events on their own (if not, it probably doesn’t make sense to segment, as performance will likely be better if they remain part of a larger audience). Those segmentation opportunities could include:
- Platform or placement: I would only do this if there’s a performance outlier that indicates your performance would be better if you could allocate budget to one individual placement, or push harder on a platform. If one placement seems like it would perform better with customized creative, I would suggest doing that within the existing ad unit, without segmenting it out, to see if it performs better.
- Segmenting out different first-party audience lists to build lookalikes off of based upon quality. So, for instance, if you were in real estate, uploading a list of folks who have purchased a home from you as a separate list than all leads, and building lookalikes off of both to see how they perform.
- Recency of remarketing lists: targeting somebody from the past three to seven days separately from a longer list, as these are typically warmer prospects.
- Different remarketing lists for different on-site actions.
- Different custom audience lists for folks who may still be in-market for a home.
“For testing targeting, would you recommend the A/B testing feature on Facebook? Also, what is the ‘ideal budget’ to set for testing?”
The Campaign Experiments tool in Facebook is another one that I want to love but haven’t had the greatest of experiences with. I find that the A/B tests I run through the tool are often skewed in the same ways they would be without it, so using the tool doesn’t necessarily reduce any of the limitations of testing within Facebook. I would love to see this change, and I’m still hopeful that it will, as they have been investing in the tool. The main benefit I’ve found is that it gives you a readout directly, rather than requiring your own statistical analysis tools.
That said, there are some things that you can do through the newer campaign experiment tool that you can’t do on your own without the tool, such as holdout tests and brand surveys. Technically, you could set up a holdout test on your own, but their tool makes it easier and gives a cleaner read than what many advertisers have the capability to do on their own, without access to other tools or data partners.
As far as budget goes, this varies from advertiser to advertiser. Because it takes 50 optimization events to exit the learning phase, you want to be realistic about budgeting enough to meet that threshold and then still have budget available for testing outside of the learning phase. That will vary from advertiser to advertiser, depending upon their cost of acquisition. The learning phase can be a bit volatile, so I recommend accounting for some extra budget as your cost efficiency may not be as optimal in that time.
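As a back-of-the-envelope version of that budgeting logic, here's a tiny sketch. The cost per optimization event and the 20% volatility buffer are hypothetical placeholders; plug in your own acquisition costs:

```python
# A back-of-the-envelope sketch of budgeting to exit Facebook's learning
# phase: roughly 50 optimization events per ad set, ideally within a week.
# The cost per event and the buffer percentage below are hypothetical.
def learning_phase_budget(cost_per_event, events_needed=50, volatility_buffer=0.2):
    """Estimate total and daily budget to reach the learning-phase threshold.

    volatility_buffer pads the estimate because costs tend to run less
    efficiently while the ad set is still in learning.
    """
    total = cost_per_event * events_needed * (1 + volatility_buffer)
    return {"total": round(total, 2), "daily_over_7_days": round(total / 7, 2)}

# e.g., at a hypothetical $12 cost per optimization event:
print(learning_phase_budget(cost_per_event=12.0))
```

The point of the exercise is mostly the sanity check: if the daily figure it spits out is well above what you can actually spend per ad set, that's a sign the segmentation is too granular for the budget, and some ad sets may never exit learning.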