A 2018 survey by Cheq and IPG Media found that most consumers believe ad placement is intentional; further, they are 2.8 times less willing to associate with a brand whose ads appear in unsafe or unsavory environments. Given that finding, it's no surprise to see the exodus of advertisers from YouTube in the wake of parallel scandals over sexually inappropriate and anti-vaccination content.
This week, the platform accelerated a number of measures already in the works to combat this content and addressed its advertisers in its most comprehensive statement yet.
A copy of the letter obtained by Adweek revealed a combination of contrition and concentration, as the company works to crack down on offenders and demonetize content that presents a danger to its users. “Because of the importance of getting child safety right,” the company wrote, “we announced a series of blunt actions to sharpen our ability to act more precisely.”
Among the measures swiftly undertaken this week:
- The disabling of comments on “tens of millions of videos” that could attract predatory behavior
- “Reducing the discoverability” of content that has been flagged (a strategy that Pinterest has used with some success)
- Terminating offending accounts
- “Increasing accountability” within the community of creators on the site, a community that uploads a staggering 400 hours of video to the site each minute
- “A more unforgiving stance” for creators who post inappropriate children’s content
- Accelerated development of a machine-learning-powered “comments classifier,” which will ultimately be able to flag and remove twice as many comments as the existing system, a combination of algorithms and manual reviewers
These measures have been deployed in hopes of stopping YouTube’s proverbial bleeding of advertisers, which in recent days has included AT&T, Disney, Nestlé, McDonald’s, Epic Games, and several others. AT&T reportedly told the New York Times, “until Google (YouTube’s parent company) can protect our brand from offensive content of any kind, we are removing all advertising from YouTube.” A similar exodus of advertisers took place back in 2017, when AT&T ads, along with those of Johnson & Johnson and Verizon, were placed alongside racist content and videos posted by terrorist groups. This time, the company’s countermeasures have been swift and public.
The problem is a complicated one, as one executive (speaking anonymously) shared with Fast Company:
“There is no such thing as 100% safety when it comes to user-generated content, and marketers need to know that although there can be a zero-tolerance effort, there’s no such thing as 100% brand safety or 0% risk.”
The measures above, along with the demonetization of videos in certain categories, can move the platform toward being a safer place. The 10,000 content reviewers that YouTube brought on in the wake of its 2017 scandal can also make a dent in the problem. But it still may not be enough.
In a statement, YouTube acknowledged that “there’s more to be done, and we continue to work to improve and catch abuse more quickly.” With any luck, this aggressive and comprehensive attention to these concerns – for the sake of advertisers and the viewing public alike – will continue long after the media firestorm dies down.
The post YouTube Promises “Blunt Actions” to Secure Child Safety and Soothe Advertisers appeared first on Social Media Week.