Microsoft Advertising rolling out experiments A/B testing tool globally
Why we care. Similar to drafts & experiments in Google Ads, experiments in Microsoft Advertising allows you to set up a duplicate of your campaign and run a test on a segment of its traffic. “This way, you can run a true A/B test within a campaign to determine whether a particular update will work well for you and your business,” wrote Subha Hari, senior program manager, and Piyush Naik, principal program manager, at Microsoft Advertising in the announcement.
Performics was among the agencies that participated in the experiments pilot. The agency used it to test the maximum clicks bidding strategy. Brian Hogue, media director at Performics, told Microsoft Advertising that the feature was easy to set up, execute and implement results from.
How to use it. The UI is very straightforward. From the experiments tab, name your test, set a start and end date, and enter the percentage of ad traffic you want to include in the test in the experiment split field.
To evaluate performance, be sure you’ve selected the right metrics in the table on the experiment’s page. The metric values will either be green (indicating the experiment is performing better than the original for that metric), red (the experiment is performing worse) or grey (meaning there is no statistically significant difference). You can then opt to apply an experiment to the original or to a new campaign. If you apply it to a new campaign, the original will be paused automatically.
You’ll want to build in at least four weeks for testing, per Microsoft’s recommendations.
A/A mode first. Microsoft suggests running in A/A mode, in which your control and experiment are identical, for two weeks. “This will allow time for the experiment campaign to ramp up and help validate that it’s running the same as the original, so that you can run a true A/B test,” said Hari and Naik.
Then you can make the change to your duplicate campaign to run the A/B test. Again, the recommendation is to run the test for at least two weeks — and four or more for bidding strategies such as target CPA and maximize conversions.
Experiment split considerations. When you’re determining the experiment split, you’ll want to be sure your ads are going to get enough traffic to run an effective test that doesn’t take forever to reach statistical significance. Microsoft recommends setting the split at 50%, but that will vary depending on your volume. For lower volume campaigns, you may need to increase that, while higher volume campaigns may be able to test on a smaller segment.
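The platform flags significance for you, but if you want a rough feel for how much traffic each arm of the split needs before a result can be trusted, a standard two-proportion sample-size formula gives a back-of-the-envelope answer. This is a general statistics sketch, not anything Microsoft publishes; the baseline click-through rate and lift below are hypothetical numbers chosen for illustration.

```python
import math

def sample_size_per_arm(p1: float, p2: float,
                        z_alpha: float = 1.96,   # two-sided 95% confidence
                        z_beta: float = 0.84) -> int:
    """Approximate impressions needed in EACH arm of an A/B test
    to detect a change in rate from p1 to p2 (two-proportion z-test)."""
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# Hypothetical example: baseline 2.0% CTR, hoping to detect a lift to 2.4%
n = sample_size_per_arm(0.02, 0.024)
print(f"~{n:,} impressions per arm")
```

If a campaign serves far fewer impressions than the estimate over the planned test window, that is a sign to widen the split (or lengthen the test) rather than run an underpowered experiment.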
Other considerations. Note that you cannot change the experiment’s budget without changing the original campaign’s budget. The budget change will then apply to your experiment split. Any other change you make to the original campaign while an experiment is running will not be applied to the test, which means you will no longer be running a true A/B test. That’s why it’s recommended to leave everything alone while an experiment is running.