One of the many benefits of being a consultancy that does both strategy and media buying is that we’re always finding new ways to answer big questions. By designing experiments for performance media (primarily Facebook), we can reach growth objectives while learning a lot about audiences and messaging. Here are three models Part and Sum has developed for our clients—we’ve tested these in the real world, and now you can try them, too.

https://uploads-ssl.webflow.com/5f0a720c9910176e911d1463/5f7392f962ac01407bee5a33_All.png

MODEL 1: SCALE SUCCESS

This model lets the Facebook algorithm do what it does best: throw weight behind whatever gets results. In traditional testing, we give each message an equal amount of time and resources to prove itself. In this model, we see what the performance machine prefers and extrapolate insights.

Remember the Valentine’s Day experiment we ran for jewelry brand Lizzie Fortunato? TL;DR: we used dynamic Facebook ads to test messaging with male audiences to see if we could get them to buy (spoiler: they did). To create messaging that would work for a secondary audience, we first had to figure out what resonated with our core audience. We did that initial work using this model.

Who is this good for?

Established brands that have a general understanding of their customers but want to know more. It’s also great for companies with limited ad budgets.

How does it work?

Step 1: Develop a couple of messages based on audience insights. For example, we knew that customers liked to reward themselves with baubles—just because. So, one of our messaging angles was “Treat yourself.”

Step 2: Deploy across the entire funnel. Just because a message is crafted with a loyal customer in mind doesn’t mean that it won’t resonate with brand newbies.

Step 3: Cut out messages that aren’t delivering, or that have below-average performance, and put more dollars behind everything else.

Step 4: Continue cutting and scaling until performance plateaus.
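The cut-and-scale loop in Steps 3 and 4 can be sketched in code. This is a hypothetical illustration, not how Facebook's tooling or our media buying actually works under the hood: the message names, the conversions-per-dollar metric, and all the numbers are made up for the example, and the even budget split is a simplifying assumption.

```python
def cut_and_scale(messages, budget):
    """Drop below-average performers, then split the budget among the rest.

    `messages` is a list of dicts with a hypothetical
    "conv_per_dollar" performance metric; `budget` is the total
    spend to reallocate for the next round.
    """
    avg = sum(m["conv_per_dollar"] for m in messages) / len(messages)
    survivors = [m for m in messages if m["conv_per_dollar"] >= avg]
    per_message = budget / len(survivors)
    # Scale: every surviving message gets an equal share of the budget.
    return [{**m, "budget": per_message} for m in survivors]

# Illustrative round of testing with three messaging angles.
ads = [
    {"name": "Treat yourself", "conv_per_dollar": 0.09},
    {"name": "Gift for her", "conv_per_dollar": 0.02},
    {"name": "New arrivals", "conv_per_dollar": 0.07},
]

round_two = cut_and_scale(ads, budget=1000)
for m in round_two:
    print(m["name"], m["budget"])
```

In practice you would repeat this round over round (Step 4) until performance plateaus, leaving the short list of winning messages described below.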

What you’ll be left with is a short list of creative and message formats that your customer responds to. Look for patterns, generate insights, and iterate to test what you’ve learned.

https://uploads-ssl.webflow.com/5f0a720c9910176e911d1463/5f739344ca003c244e1a8f3e_fStrcqKkz7IRx4bdnZ0g4xf23xYK46qkvPf4505ISJCB106G68lwNwWjRCvPapnfJ3BVMJV2CnN41_fPwbF92uPReR5Qr9fRBg24jBTyuuER22TYeaovBOaLdU-QKgorQP5pv9Kq.png

The pros: This is the most cost-effective way of learning while building campaign performance.

The cons: There are so many unknowns about the algorithm. We can’t always explain why something didn’t work. This model is also limited in the number of variables you can test at one time.

MODEL 2: QUAL, QUANT, VALIDATION

This one’s for the “empirical evidence or bust” crowd. It’s a multi-phased approach to testing that can narrow down even the loftiest hypotheses.

Most recently we used this model to uncover audience insights and whitespace recommendations for a major TV network. We knew that the key to building viewership was getting existing fans to watch more shows on the same network, so we set out to understand what messaging would resonate with people who already liked their programming. We did this by talking to superfans, first in an intimate setting, then at scale. Then, we refined our hypotheses and measured them against people’s actions on Facebook ads.

Who is this good for?