Irakli is Senior Customer Engagement Director for Social, driving performance marketing programs for a strategic Marin Software client portfolio. With over 10 years of experience in customer relationship management, he joined Marin Software in December 2014 and leads all aspects of technical account management around Facebook’s family of apps and services. Irakli holds a BBA degree in Business Administration and Management from Hofstra University, and is a Facebook Blueprint Certified Buyer.
In the previous article in our Marketing on Facebook series, we looked at how to build a robust A/B scope of test framework to help uncover optimal relevance and ROAS. In our third and final post, we analyze the results of our test and formulate an action plan. (Be sure to refer back to the previous article for a refresher.)
Remember, we’re working with a retail advertiser’s scope of test. The retail advertiser is using Conversion and Product Catalog sales objectives. They’ve set a goal to optimize their campaigns, with the broader challenge to drive ROAS improvements.
Let’s don the hat of a Marin Customer Engagement Manager. Reviewing performance insights, we can formulate our summary:
In response, we note several opportunities for optimization:
Because age, gender, and in many cases location have a significant impact on results, the gender A/B test was weighted higher in importance for Phase 1 than testing placement optimization. Additionally, this was a priority for the advertiser at the time, since they were also considering a more gender-tailored approach to creative design.
We’ve noted that when setting up ad studies, a clear definition of success is essential for useful learnings, so be sure to define KPIs. We selected ‘overall campaign performance’ as the measured goal for our scope of test, and tracked improvements in our KPIs of Relevance Score and ROAS.
Let’s look at some of the highlights that the example scope of test produced.
Background: A/B test campaign targeting men and women together in an ad set, versus a campaign targeting men and women in unique ad sets. Winner is plugged into Phase 2.
Theory: Optimized ad sets—whether combining placements or genders—allow the Facebook auction algorithm to find the most opportunities from the defined audience pool. When we target women uniquely, do we see higher ROAS?
Test Results
Including men and women in the same ad set can work better in some cases, because the auction algorithm has more options for placing impressions where they’re most likely to drive results (in front of the person most likely to take the desired action), at the most efficient price.
In other cases, we need segmentation to refine the audience against the goal and make it more relevant.
Learnings: The campaign segmenting men and women improved Relevance Scores by two points, and improved ROAS by 18%. Creating segmentation in the audience—limiting and refining its overall scope—helped generate a more relevant targeting pool. Because more relevant ads are more cost efficient, we saw improved ROAS in correlation with higher Relevance Scores.
Highlights: Looking at the ad set targeting men only, we saw that the Relevance Score and ROAS were about equal to those of the campaign ad set that combined gender targeting. However, the ad set targeting women only posted significantly higher Relevance Scores and ROAS. While men remained a more difficult and more expensive conversion overall, the more focused and relevant ad set targeting women was able to efficiently serve impressions, and generate conversions and revenue.
Conclusion: Overall, the campaign segmenting gender targeting produced better results.
Targeting in this way achieved not only a more relevant—but also a more positive—user experience. The auction produced more value for our advertising outcomes, reaching people who mattered most to specific goals. As a result, we gained improved ROI.
Additional Insights
It’s possible for a budget to be spent faster yet win fewer auction impressions. Why? Because of low relevance. For comparison, the two campaigns with equal overall audiences produced the following insights in Round 1:
CPM (Campaign A, Segmented Genders): $26.16
CPM (Campaign B, Combined Genders): $29.66
For the same budget of $1,500, Campaign B produced 6,776 fewer impressions, and despite a higher conversion rate (52.94% vs. 49.89%), produced 18% less revenue.
In all, the ‘less relevant campaign’ produced fewer impressions, fewer clicks, higher overall CPAs, and lower revenue.
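To see how a relevance-driven CPM difference translates into delivery, here’s a minimal sketch of the arithmetic. It uses the rounded CPMs reported above, so the computed gap is approximate rather than the exact 6,776 impressions we measured:

```python
# Back-of-the-envelope delivery math from budget and CPM.
# CPMs are the rounded figures above, so the impression gap is approximate.
budget = 1_500.00

cpm_segmented = 26.16   # Campaign A, segmented genders
cpm_combined = 29.66    # Campaign B, combined genders

impressions_a = budget / cpm_segmented * 1_000
impressions_b = budget / cpm_combined * 1_000

print(f"Campaign A impressions: {impressions_a:,.0f}")
print(f"Campaign B impressions: {impressions_b:,.0f}")
print(f"Impression gap:         {impressions_a - impressions_b:,.0f}")
```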
With similar results in Round 2, we were able to show that by fostering more relevance in our targeting, our ads produced more results in the auction at a higher overall campaign ROAS.
Marin Tip: Relevance Score can be a powerful tool in your campaign management. Look for it in the Marin Social dashboard at the campaign, ad set, and ad level.
With the above results, you can also test unique creatives for men and women in the future, to see if you can further optimize with a more tailored message or image/video.
A/B test campaign utilizing Placement Optimization (including the Phase 1 winner: men and women segmented), versus a campaign utilizing Placement Optimization for all placements except Instagram, which gets its own unique ad set. Given the insight that Instagram produces higher ROAS, can a controlled budget via a dedicated ad set improve overall campaign ROAS?
Winner is plugged into Phase 3.
Test Results
Learnings: Segmenting Instagram into a unique placement and providing a controlled budget produced a 4% improvement in ROAS overall. We noted a slight uptick in Relevance Score (under 1 point).
Highlights: While Instagram continued to post performance gains in a dedicated ad set, we saw smaller overall gains than we hoped for. We noticed that the ad set utilizing all other placement options without Instagram performed markedly worse than when we included Instagram.
In the ‘overall’ performance gains overview, Instagram carried the campaign’s success. We planned further testing to produce a placement-optimized ad set.
While not as impactful as we’d hoped, Phase 2 created an opportunity to build a follow-up scope of test to understand the best combination of placements. Just because our results don’t post a clear winner doesn’t necessarily mean a failed ad study. We should view such results as additional opportunities to test and refine our strategy.
Additional testing found that building Lookalike Audiences from Custom Audiences with narrower lookback windows (10 days was best) helped improve Relevance Scores. On average, it also produced 11% gains in ROAS versus campaigns that used Lookalikes seeded from a Custom Audience with a 180-day lookback window.
We uncovered similar patterns for dynamic retargeting ads: using more tiers, with narrower windows in those tiers, proved optimal from a ROAS perspective. There was no significant impact on Relevance Scores.
Dynamic retargeting tiers we found most successful were:
We also determined that using a Custom Conversion produced worse overall results than using a pixel event to track conversions. When using the pixel event, for example, our Relevance Scores were on average 1.2 points higher than when we used a Custom Conversion. ROAS also improved when we used the pixel event.
Relevancy and quality can help advertisers achieve efficiency gains in assigned budgets on Facebook. However, they’re often overlooked in favor of bid and budget adjustments.
While bid changes and budget adjustments hold value in optimizing campaigns, Relevance Score, and more broadly the relevance concept expressed within the Total Bid, is one of the key drivers of performance efficiency.
Time after time, we’ve noticed that campaigns with low Relevance Scores perform worse than campaigns posting higher Relevance Scores. In these cases, increasing the bid value or switching to Auto Bid doesn’t typically improve acquisition costs or revenue.
In many cases, campaigns with low Relevance Scores also under-deliver against allocated budgets.
Finally, to achieve efficiency gains, you need a carefully outlined approach that incorporates an understanding of the baseline, plus measured steps to test improvements to this baseline.
If you’d like to partner on projects similar to the one we’ve described here, contact your Customer Engagement Manager and they’ll gladly schedule a time to review your campaign strategy. They can also help model and support your progress through a scope of test. Or, if you’re new to Marin, feel free to get in touch.
Marketers who conduct the most successful ad studies, with the highest-quality and most consistent learnings, tend to do a few important things:
This is where the concept of a “scope of test” framework can help you succeed. In this second article in a three-part series, we unpack this framework so that you can better understand it. Using a real-world example, we’ll review how a scope of test used ad studies to help reveal what creates the most relevance, and in turn, how that improves ROAS.
While our focus in this post is more specific to testing for relevance and quality, we’re happy to support other forms of ad studies for advertisers.
Ad studies help test the impact of different audience types, delivery optimization techniques, ad placements and creative, budgets, and more, on mutually exclusive audiences. Once they’re completed, these studies help you understand ‘what works’.[1]
Audiences are split into ‘cells’, ensuring that someone in one cell isn’t in another. Because this ‘split’ compares one variable against another (for example, News Feed Desktop placement versus News Feed Mobile placement), the data is statistically sound. Each cell is exposed to a unique variation of the test variable, so a determination can be made as to which variation delivers better performance.
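To make the ‘mutually exclusive cells’ idea concrete, here’s a small illustrative sketch. This is not Facebook’s actual assignment mechanism; the study name, user ID, and cell labels are hypothetical:

```python
import hashlib

def assign_cell(user_id: str, study_name: str, cells: list[str]) -> str:
    """Deterministically assign a user to exactly one test cell.

    Hashing the (study, user) pair means the same user always falls into
    the same cell, so no one can appear in two cells of the same study.
    """
    digest = hashlib.sha256(f"{study_name}:{user_id}".encode()).hexdigest()
    index = int(digest, 16) % len(cells)
    return cells[index]

# Example: a two-cell placement study
cells = ["news_feed_desktop", "news_feed_mobile"]
print(assign_cell("user_12345", "placement_study_q3", cells))
```

Because the assignment is deterministic, the same person always resolves to the same cell, which is what keeps the cells mutually exclusive and the comparison clean.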
When you’re creating an ad study, it’s important to follow a few test guidelines:
Once you meet these guidelines and recommendations, you can create a scope of test in support of planned ad studies. It includes KPIs, schedules, and so on, builds a process for implementing the ad studies, and acts as a compass for tracking results and winners.
A scope of test has several sections:
A phase is the umbrella over rounds, used as a proof of concept. For example, “Phase 1” could be “Testing Placement Optimized Ad Sets.” (We’ll review this in more detail in our next article.)
There should be at least two rounds within a phase to establish a pattern in the data—a tiebreaker to determine winners and losers.
To help define KPIs, review historical data and establish a benchmark, both from an insights perspective and an Opportunities for Optimization perspective. Be sure to include summaries of both within the Understanding the Baseline and Goals section of the scope of test.
If no historical data is available, run a few campaigns targeting what you believe are the most relevant audiences, using the resulting data as the benchmark.
With benchmarks available, the scope of test summary incorporates the KPIs that will determine the success of each study.
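As a sketch of how that benchmarking step might look in practice, you can aggregate recent campaign insights into baseline KPIs. The data values and column names below are hypothetical and purely illustrative, not an actual Marin or Facebook report schema:

```python
import pandas as pd

# Hypothetical 90-day export of campaign insights (illustrative values).
history = pd.DataFrame({
    "campaign_type": ["prospecting", "prospecting", "retargeting", "retargeting"],
    "spend": [4200.0, 3900.0, 2100.0, 2300.0],
    "revenue": [9600.0, 8700.0, 7900.0, 8400.0],
    "relevance_score": [3, 3, 5, 5],
})

# Benchmark KPIs per campaign type: average Relevance Score and blended ROAS.
benchmarks = history.groupby("campaign_type").agg(
    avg_relevance_score=("relevance_score", "mean"),
    revenue=("revenue", "sum"),
    spend=("spend", "sum"),
)
benchmarks["roas"] = benchmarks["revenue"] / benchmarks["spend"]

print(benchmarks[["avg_relevance_score", "roas"]])
```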
In our real-world example below, we review a retail advertiser’s scope of test. For the purposes of this article, we’ve condensed a lot of the summaries. If you’re a Marin customer, reach out to your account manager or Customer Engagement Team for more details on a scope of test and how to implement one.
The retail advertiser is using Conversion and Product Catalog sales objectives. They’ve set a goal to optimize their campaigns, with the broader challenge to drive ROAS improvements.
The advertiser is targeting men and women without segmenting the genders into separate ad sets, using a custom conversion to track results. They’re also running Dynamic Ads using a ‘Purchase’ event for tracking conversions.
Within the prospecting conversion campaigns, the advertiser’s targeting focuses on a Lookalike Audience based on past purchasers (180-day lookback), as well as interests gleaned from Page Insights. Placement optimization incorporates all available placements for Carousel ads.
For retargeting business goals, they’re running Dynamic Ads, targeting people who’ve engaged with products but haven’t added them to cart, as well as those who added to cart but haven’t completed the purchase. Both dynamic audiences look back 30 days.
In partnering with this advertiser, a Marin Software Customer Engagement Manager first outlined the Understanding the Baseline and Goals section of the Scope of Test, which provided the benchmarks and helped set the KPIs.
Here’s some of the content included within Understanding the Baseline and Goals:
Prospecting Campaigns
Retargeting Dynamic Ads Campaigns
On average, prospecting campaigns post a Relevance Score of 3; their dynamic retargeting campaigns post a Relevance Score of 5. In our test, the team reviewed 90 days of recent campaign insights.
In our next article, we’ll review these campaign insights and how they can be analyzed and acted on for future growth.
[1] Split Testing
Marketing on Facebook is as much an art as a science. Most importantly, it’s an opportunity to curate a marketing program for full-funnel success, using a variety of ad formats and optimization tools. Among the myriad tools available to advertisers, creating relevance is a key component in achieving optimal results for your budget.
In this first article in a three-part series, we’ll explore the basic concepts of the Facebook auction. In our last two posts, we’ll describe a framework advertisers can adopt to help create and execute A/B tests (ad studies) aiming to improve campaigns for relevancy—and in turn, drive better return on investment.
Facebook ads are paid messages from businesses that are written in their voice and help reach the people who matter most to them.[1] Ads (or orders) are placed into an auction within campaigns, and the auction works to create the most value for advertisers in response to objectives and goals. The auction also supports the best experience for people browsing on Facebook properties.
To start building and launching Facebook ads, you need:[2]
Once you submit your ad, it goes to the ad auction, which helps get it to the right people. At a high level, Facebook describes the ad auction in these terms:
“We try to show your ads evenly throughout the day so that the people most valuable to you in your target audience are more likely to see them. The more relevant we predict an ad will be to a person, the less it should cost for the advertiser to show the ad to that person.” — Facebook Blueprint
When marketing on Facebook, every auction opportunity to serve an impression to someone is won or lost in response to the Total Bid—a combination of:
The Total Bid applies across all campaign objectives, combining the advertiser’s bid with estimated action rates and ad quality and relevance.
An ad that's high quality and very relevant can beat an ad that has a higher bid, but is lower quality and has less relevance.
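As a purely illustrative sketch of how that combination plays out, the snippet below uses a simple linear weighting. The numbers and the weighting itself are assumptions for illustration only; Facebook’s actual auction math isn’t public in this form:

```python
def total_value(bid: float, estimated_action_rate: float, quality_score: float) -> float:
    """Illustrative total-bid calculation: bid x estimated action rate,
    plus a relevance/quality term. The real auction's weighting is not
    public; this only demonstrates the tradeoff described above."""
    return bid * estimated_action_rate + quality_score

# A cheaper but highly relevant ad...
relevant_ad = total_value(bid=2.00, estimated_action_rate=0.08, quality_score=0.30)

# ...can outrank a pricier, less relevant ad.
expensive_ad = total_value(bid=3.50, estimated_action_rate=0.03, quality_score=0.05)

print(relevant_ad > expensive_ad)  # True
```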
To put it another way, Ad Relevance determines winning ad impressions within a balance of two things:[3]
We’ve covered several key concepts, which we can sum up in a few points:
And, the most important: an ad that's high quality and very relevant can beat an ad that has a higher bid, but is lower quality and has less relevance.
As we’ve learned, the auction works to produce the most results for advertisers, and ads that win in the auction and get shown deliver the highest total value—in other words, the highest Total Bid.
Total value isn’t simply how much an advertiser is willing to pay to show their ad; the bid alone doesn’t win the auction. It’s a combination of three major factors:[4]
High relevance and quality is as much an audience targeting challenge as it is an ad creative one.
For example, advertisers can cast a wide net and target nearly everyone on Facebook and Instagram. Not everyone wants what the advertiser is offering, however; as a result, Ad Relevance will likely be negatively impacted.
This negative impact can come from two possible sources:
Creating a campaign is fairly simple, from a workflow perspective. Creating a relevant campaign, however, is what requires the most attention and care.
To empower advertisers to succeed, our Customer Engagement teams encourage building and incorporating an A/B testing framework that scientifically validates audiences, optimization actions, and ads—a Scope of Test.
When advertisers design and implement such a framework, ROI results are typically positive and improve over the long term compared with advertisers who don’t take the opportunity to A/B test and refine their strategies.
In our next article, we’ll lay out exactly how to conduct a proper Scope of Test. Stay tuned!
[1] Prepare to Advertise on Facebook
[2] Getting Started with Ads
[3] About the delivery system: Ad auctions
[4] About the delivery system: Ad auctions
Digital marketers continually pursue optimal performance. This is especially true for ad budgets—the foundation on which audiences and creatives are built.
The more efficiently marketers can allocate budget towards performing audiences, the more likely they’ll see positive returns on investment. That said, monitoring and managing audience budgets is a manual task that can quickly grow to drain valuable marketing time and resources—especially considering the volume of campaigns that are typically created and active at any given time.
How can digital marketers improve their ability to efficiently identify and scale opportunities for optimizing budgets?
We designed Marin Budget Allocation (MBA) to solve this dilemma.
MBA is a proprietary algorithm that automatically adjusts budgets within your campaigns based on top-performing audiences.
When activated for a campaign, MBA:
Typically, marketers build ad sets in campaigns around a number of different target audiences. Performance for each target audience can vary depending on demographics, interests, and engagement with a brand and its products or services.
As a common best practice, advertisers will often monitor ad sets and their performance, checking them multiple times a day and manually reallocating budget towards the ad sets performing best on the main KPI.
This practice can be very time-consuming for advertisers managing a large number of campaigns at scale, and across business objectives that can span both branding and direct response goals.
Marketers have a finite amount of time and attention they can devote to active campaigns, which can potentially lead to missing out on key budget reallocation decisions.
To solve this, MBA improves the performance review and budget allocation practice by continuously monitoring ad set performance, and automatically reallocating the campaign budget towards ad sets driving efficiencies in main KPI performance. Data drives the process.
With automatic budget reallocation, a marketer can more comprehensively account for performance of multiple campaigns at the same time. Missed opportunities for optimization? MBA minimizes these moments or eliminates them altogether. Main KPI performance improves, as does return on ad spend.
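The MBA algorithm itself is proprietary, but the general pattern of KPI-driven reallocation can be illustrated with a simplified sketch. The ROAS-based weighting, the minimum-share floor, and the ad set names below are assumptions for illustration, not the actual Marin implementation:

```python
def reallocate_budget(ad_sets: dict[str, dict], total_budget: float,
                      min_share: float = 0.05) -> dict[str, float]:
    """Shift campaign budget towards ad sets with the strongest main KPI.

    ad_sets maps an ad set name to recent performance, e.g.
    {"women_18_34": {"spend": 400.0, "revenue": 1400.0}}. Budget is split
    in proportion to each ad set's KPI (here, ROAS), with a small floor so
    weaker ad sets keep serving and can recover.
    """
    kpi = {name: perf["revenue"] / perf["spend"] for name, perf in ad_sets.items()}
    floor = total_budget * min_share
    flexible = total_budget - floor * len(ad_sets)
    kpi_total = sum(kpi.values())
    return {name: floor + flexible * (score / kpi_total) for name, score in kpi.items()}

allocation = reallocate_budget(
    {"women_18_34": {"spend": 400.0, "revenue": 1400.0},
     "men_18_34": {"spend": 400.0, "revenue": 900.0}},
    total_budget=1500.0,
)
print(allocation)
```

The key idea is simply that the review-and-reallocate loop a marketer performs by hand runs continuously and is driven by the data, rather than by how often someone can check the dashboard.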
Use Lifetime Budgets
When used in conjunction with MBA, the Lifetime Budgets option provides more even pacing of the available campaign and ad set budgets. For example, with the Daily Budget option on Facebook, you can see spend variance, as ad sets can spend up to 125% of the allocation for a particular day in the campaign flight. If you spend more than 100% of the Daily Budget, the total budget allocated towards serving impressions could be scaled down the next day.
When you use the Lifetime Budget option, a calculation based on the remaining budget and remaining campaign schedule more evenly controls the spending limit and pacing of each day.
We’ve also observed efficiency gains in Lifetime Budgets and recommend pairing them with MBA. If you commonly use Daily Budgets and would like to activate MBA, simply multiply the number of days you expect to run the campaign by your typical Daily Budget, and set that total budget for the campaign with the Lifetime Budget option selected.
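Here’s a small sketch of both calculations described above: converting a typical Daily Budget into a Lifetime Budget, and the even-pacing idea of spreading the remaining budget across the remaining days of the flight. This is a simplification of Facebook’s actual pacing logic:

```python
def lifetime_budget(daily_budget: float, flight_days: int) -> float:
    """Convert a typical Daily Budget into an equivalent Lifetime Budget."""
    return daily_budget * flight_days

def todays_pacing_target(lifetime: float, spent_so_far: float, days_remaining: int) -> float:
    """Even pacing: spend what's left of the Lifetime Budget evenly
    across the days remaining in the campaign flight."""
    return (lifetime - spent_so_far) / days_remaining

budget = lifetime_budget(daily_budget=50.0, flight_days=30)                  # 1,500.00
print(todays_pacing_target(budget, spent_so_far=600.0, days_remaining=16))  # 56.25
```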
A/B Test Studies
We encourage you to set up Ad Studies to help understand performance gains, using a scientific approach to A/B testing.
For example, activate MBA in one campaign, allowing it to make budget allocation decisions for the campaign. In the other campaign, continue budget allocations manually. Be sure to keep only one variable—budget allocation actions—as the differentiator.
Alternatively, you can run a campaign without MBA, comparing performance versus a campaign with MBA active.
We recommend creating at least three rounds of A/B tests. Our account managers can collaborate with you to recommend best practices for modeling Ad Studies, reporting on results, and incorporating effective tactics.
MBA is designed to help advertisers address common pain points, including:
To get started with MBA today, just ask your account manager. Or, if you’re new to Marin and have additional questions around improving your marketing strategy and identifying opportunities for optimization, get in touch with us.