Twitter, Facebook submit to “pre-assessment” audits of brand safety on their platforms
Twitter has agreed to open its books to a “pre-assessment” audit of brand safety on the platform, one of the commitments it made following the abuses of social media during the tumultuous 2020 election and civil rights movement. Meanwhile, Facebook also said today that it struck a long-awaited deal with the Media Rating Council, the same group conducting Twitter’s pre-audit, to independently assess the effectiveness of its brand safety controls and content monetization policies.
Twitter announced today that the Media Rating Council will lead the preliminary audit to verify how much toxic material lives on the service, a core question for advertisers.
“Twitter and the MRC have agreed that the Brand Safety audit will assess our compliance with certain Brand Safety standards when serving ads in environments including Home timeline, User profiles, Search results, and our Amplify pre-roll product,” Twitter said in a blog post on Wednesday.
Twitter called this first step a “pre-assessment” to study its “audit readiness.” Audits can be complicated, and much of their value depends on how open a given platform is to sharing data. MRC and Twitter did not disclose details other than to say the scope includes the main Twitter timeline, profiles, search, and the video advertising environments known as the Amplify program.
“We will use the findings of this pre-assessment to identify any areas of improvement and agree on high-level recommendations for remediating any gaps,” Twitter said.
Facebook said it has progressed beyond the “pre-assessment” phase with MRC and is now undergoing the main audit in two key areas: brand safety and content monetization. “We’ve begun an independent audit of our content monetization policies and brand safety controls with the MRC on July 21 that stems from our work with GARM—Global Alliance for Responsible Media,” a Facebook spokesperson said in an email to Ad Age. “To respect the process, our next update will be when the audit is complete and not before.”
It had taken some time for MRC and Facebook to agree on the terms of an audit, which was first raised as a possibility last year. In June, a Facebook spokesperson told Ad Age that Facebook was still determining exactly what MRC would be allowed to audit based on guidelines being established by GARM, the group tasked with defining hate speech and standards in digital media.
In 2020, more than 1,000 advertisers joined a civil rights boycott, pulling ads from Facebook for the month of July. There were growing protests over how social media platforms, including Facebook, Twitter and YouTube, were used to spread disinformation during the presidential election. This year’s Jan. 6 attack on Congress put even more focus on social media. And just last week, President Joe Biden called out Facebook and others, claiming these platforms helped spread misinformation around COVID-19.
Facebook pushed back against the characterization that it was responsible for anti-vaccination sentiment, particularly in the U.S. With growing fears about a COVID-19 resurgence, Facebook critics have reemerged to pressure advertisers to reassess how they spend money on the service.
Rob Norman, the former global chief digital officer at GroupM, was one of the influential marketing voices to express frustration: “It has reached the point at which no one can be in denial,” Norman tweeted over the weekend. “You can’t buy ads from Facebook nor take a paycheck from Facebook without some complicity. Dangerous misinformation cannot be a cost of doing business.”
For many, the social media issues are larger than “brand safety” concerns and the impact on advertisers. For instance, Norman said he was mostly concerned with how social media platforms use algorithms to intensify the distribution of misinformation.
It’s clear that the intensity of the backlash from politicians and activists is having an impact on the social media sites. This week, YouTube announced a change that would add links to health-related videos to point to more “authoritative” sources. Twitter took drastic new measures by suspending the account of Rep. Marjorie Taylor Greene for allegedly sharing COVID-19 misinformation; her account was offline for 12 hours. Twitter’s penalties can escalate to permanent bans if an account continues to violate its rules.
Organizations like MRC have emerged as independent arbiters that could provide a baseline for brands about what to expect when they run ads on different digital platforms. Advertisers are worried about areas like “in-feed” environments, such as Twitter’s Timeline and Facebook’s News Feed, and they want to know how often sponsored messages appear surrounded by offensive speech or disinformation.
Facebook has begun to test a program that gives brands “topic exclusions” in News Feed so they might be able to avoid appearing near content that doesn’t meet their brand suitability standards.
Twitter said it would be open to other MRC audits, as well, allowing the group to measure audience data, ad viewability, and fraudulent traffic.