
3 Facebook Ad Experiments That Unlock Big Insights

One of the many benefits of being a consultancy that does both strategy and media buying is that we’re always finding new ways to answer big questions. By designing experiments for performance media (primarily Facebook), we can reach growth objectives while learning a lot about audiences and messaging. Here are three models Part and Sum has developed for our clients—we’ve tested these in the real world, and now you can try them, too.



This model lets the Facebook algorithm do what it does best: throw weight behind whatever gets results. In traditional testing, we give each message an equal amount of time and resources to prove itself. In this model, we see what the performance machine prefers and extrapolate insights.

You remember the Valentine’s Day experiment we ran for jewelry brand Lizzie Fortunato, right? TL;DR: We used dynamic Facebook ads to test messaging against male audiences to see if we could get them to buy (spoiler: they did). To create messaging that would work for a secondary audience, we had to figure out what would resonate with our core audience. We did that initial work using this model.

Who is this good for? 

Established brands that have a general understanding of their customers but want to know more. Great for companies with limited ad budgets.

How does it work?

Step 1: Develop a couple of messages based on audience insights. For example, we knew that customers liked to reward themselves with baubles—just because. So, one of our messaging angles was “Treat yourself.”

Step 2: Deploy across the entire funnel. Just because a message is crafted with a loyal customer in mind doesn’t mean that it won’t resonate with brand newbies.

Step 3: Cut out messages that aren’t delivering, or that have below-average performance, and put more dollars behind everything else.

Step 4: Continue cutting and scaling until performance plateaus.

What you’ll be left with is a short list of creative and message formats that your customer responds to. Look for patterns, generate insights, and iterate to test what you’ve learned. 
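Steps 3 and 4 amount to a cut-and-scale loop. Here’s a minimal sketch in Python, with made-up messages and performance numbers (real decisions would come from your ad platform’s reporting, not a script like this):

```python
# Hypothetical sketch of the cut-and-scale loop (steps 3 and 4).
# Message names and performance figures are invented for illustration.

def cut_and_scale(ads, total_budget, min_ads=2):
    """Repeatedly drop below-average performers, then reallocate budget."""
    ads = dict(ads)  # message -> conversions per dollar (hypothetical metric)
    while len(ads) > min_ads:
        avg = sum(ads.values()) / len(ads)
        survivors = {m: perf for m, perf in ads.items() if perf >= avg}
        if len(survivors) == len(ads):  # nothing left to cut: plateau
            break
        ads = survivors
    # Put more dollars behind everything that's left, weighted by performance
    total_perf = sum(ads.values())
    return {m: round(total_budget * perf / total_perf, 2)
            for m, perf in ads.items()}

ads = {"Treat yourself": 0.042, "Gift for her": 0.031,
       "New arrivals": 0.012, "Free shipping": 0.018}
budget = cut_and_scale(ads, total_budget=1000)
```

The weakest messages fall away and the remaining budget concentrates on what the algorithm already favors.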

The pros: This is the most cost-effective way of learning while building campaign performance.

The cons: There are so many unknowns about the algorithm. We can’t always explain why something didn’t work. This model is also limited in the number of variables you can test at one time. 


This one’s for the “empirical evidence or bust” crowd. It’s a multi-phased approach to testing that can narrow down even the loftiest of hypotheses.

Most recently we used this model to uncover audience insights and whitespace recommendations for a major TV network. We knew that the key to building viewership was getting existing fans to watch more shows on the same network, so we set out to understand what messaging would resonate with people who already liked their programming. We did this by talking to superfans, first in an intimate setting, then at scale. Then, we refined our hypotheses and measured them against people’s actions on Facebook ads.

Who is this good for? 

Brands that have more unknowns to grapple with.  

How does it work?

Step 1: Get on the phone and talk to people. Ask them questions about their habits, what they love, what they don’t like. From there, pull out some key themes. 

Step 2: Get some scale behind your initial findings with a survey. Force-ranking different statements is a great way to confirm and prioritize your phase 1 insights and assumptions. 
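As a rough sketch of how step 2’s force-ranked results might be prioritized (the statements, respondents, and scoring here are entirely hypothetical):

```python
# Sketch: summarizing force-ranked survey responses (step 2).
# Statements and rankings are made up; rank 1 = most agreed with.

from collections import defaultdict

responses = [
    ["Shows worth bingeing", "Characters you'll love", "Always on"],
    ["Characters you'll love", "Shows worth bingeing", "Always on"],
    ["Shows worth bingeing", "Always on", "Characters you'll love"],
]

rank_totals = defaultdict(int)
for ranking in responses:
    for rank, statement in enumerate(ranking, start=1):
        rank_totals[statement] += rank

# Lower average rank = stronger agreement = higher testing priority
avg_rank = {s: total / len(responses) for s, total in rank_totals.items()}
priority = sorted(avg_rank, key=avg_rank.get)
```

The ordering that falls out of this tells you which hypotheses deserve ad dollars in phase 3.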

Step 3: Compare what consumers say they like with what they actually respond to. For each hypothesis, build one or two ad units that convey the new positioning, plus one control unit (something you would normally run). Then adjust budgets to get delivery that’s as even as possible. This is key: you don’t want to rule out any positioning until you have the full picture. 

Use your data analysis chops to compare interest and engagement markers. When you take a look at reactions and comments, make sure you’re measuring sentiment as well as quantity. In our TV network experiment we found one positioning triggered a lot of snarky comments. This allowed us to eliminate messages that would build negative brand associations.
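A simple way to put numbers behind “sentiment as well as quantity” is to tally comments against positive and negative word lists. This is only an illustrative sketch with invented word lists and comments; a real analysis might use a proper sentiment library or manual coding:

```python
# Sketch: tallying comment sentiment alongside volume.
# Word lists and comments are hypothetical, chosen for illustration only.

NEGATIVE = {"snarky", "cringe", "ugh", "boring", "pass"}
POSITIVE = {"love", "amazing", "obsessed", "great", "funny"}

def score_comments(comments):
    """Count comments containing at least one positive or negative word."""
    pos = sum(any(w in c.lower().split() for w in POSITIVE) for c in comments)
    neg = sum(any(w in c.lower().split() for w in NEGATIVE) for c in comments)
    return {"total": len(comments), "positive": pos, "negative": neg}

comments = ["Love this show", "Ugh not again", "So boring", "Amazing cast"]
summary = score_comments(comments)
```

A positioning with high engagement but a lopsided negative tally is exactly the kind you’d want to catch and eliminate.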

Qual, Quant, Validation: Facebook Ad Experiment

The pros: Taking a phased approach allows you to manage risk by eliminating hypotheses or changing direction along the way.

The cons: This process takes time and money to execute properly.


This model beefs up your website cookie pool to supercharge the lower-funnel performance Facebook does so well. Your website becomes the central place to vet audiences in a more controlled environment.

We created this model for a pharma brand that was trying to validate ecommerce as a sales channel for a consumer goods line that had previously been retail-only. Given the nature of their business, HIPAA regulations prevented us from targeting many of their audience groups directly. So we had to get creative and find new ways to reach the right people.

Who is this good for? 

Brands trying to reach a tricky-to-target customer segment, or those without much existing customer data.

How does it work?

Step 1: Create a long list of platforms where you can use contextual cues to find audiences. A robust, varied approach is key to maximizing your reach.

Step 2: Drive people to campaign pages that speak narrowly to your audience segment and let them take an action, such as submitting an email address, downloading a coupon, or making a purchase.

Step 3: Once you’ve driven enough on-site actions, you’ll have a qualified cookie pool. Use that information to retarget and build lookalike audiences in your Facebook campaigns.

Then, use insights from that optimized pool to better understand your ecommerce audience. Establishing these lower-funnel audiences opens up more possibilities for message and audience testing.
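Step 3’s qualification logic might look something like this sketch. The event names and qualifying actions are assumptions, and the resulting list stands in for the cookie pool your ad platform would use for retargeting and lookalikes:

```python
# Sketch: building a qualified seed audience from on-site actions (step 3).
# Event names and the qualifying-action set are hypothetical; in practice
# the ad platform's pixel tracks this and builds the audience for you.

QUALIFYING_ACTIONS = {"email_capture", "coupon_download", "purchase"}

events = [
    {"user": "u1", "action": "page_view"},
    {"user": "u1", "action": "email_capture"},
    {"user": "u2", "action": "page_view"},
    {"user": "u3", "action": "purchase"},
]

# Anyone who took a qualifying action joins the retargeting seed audience
seed_audience = sorted({e["user"] for e in events
                        if e["action"] in QUALIFYING_ACTIONS})
```

Visitors who only browsed stay out of the pool, so the lookalike audiences you build from it are modeled on people who actually acted.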

Cast a Wide Net: Facebook Ad Experiment

The pros: This allows you to build lucrative audience segments in a short timeframe.

The cons: Depending on your target, you’ll need to make substantial investments in multiple platforms in order to generate significant numbers. 


There’s no one right way to turn a media plan into an insight-generating experiment. Use these Facebook models as starting points—when we’re working with a client, we always customize our approach based on their needs. If you’d like to design your own experiments, be sure to work within your budget and pre-test assumptions so you’re not overspending or creating unnecessary risks. 

Most importantly, don’t lose sight of your campaign goals. It’s easy to get excited about testing models, especially once you have some results to sift through. But in the end, what really matters is how these experiments help your business grow.