As adoption of programmatic and retargeting ad buying has grown, brand safety — while certainly a concern — has often taken a back seat to reach and the ability to follow customers and site visitors no matter where they went on the web. However, in March 2017, brand safety took on a new sense of urgency in the advertising community after ads were reported appearing next to extremist videos on YouTube, precipitating a boycott by more than 250 advertisers.
The boycott occurred at a time when Google and Facebook, in particular, had been under fire for months for facilitating the proliferation of fake news and extreme hyper-partisan content in the wake of the US presidential election cycle. Groups like Sleeping Giants had been publicly calling out advertisers to stop running their Google network ads on Breitbart.com, for example.
Rick Summers, Google’s global lead for publisher policies, told Marketing Land in April that in the summer of 2016, his team had noticed an increasingly aggressive tone online, with people feeling freer to lodge personal attacks and express hateful thoughts. In November, Summers’ group updated the Misrepresentative Content policy to address the growing number of fake news sites popping up on the AdSense network with domains that mimic legitimate news outlets.
Additionally, advertisers had long been calling on Google (and Facebook) to provide greater transparency and third-party auditing of ad campaigns. In February, YouTube said it had initiated an MRC audit of the data collection and measurement practices of DoubleVerify, Integral Ad Science and Moat, the third-party measurement firms already integrated with YouTube.
Since the advertiser revolt in mid-March, Google has taken several steps to improve brand safety controls and keep ads from appearing on offensive content on YouTube and sites in its ad networks. To help keep track of what happened and when, we’ve compiled the following timeline of events and actions that Google has taken since the spring of 2017.
The timeline
March 16: The Guardian reports it pulled Google and YouTube advertising after its ads were spotted alongside extremist content and that the British government found similar extremist ad adjacencies and summoned Google to address the problem.
March 17: UK managing director Ronan Harris responds in a blog post that the company “will be making changes in the coming weeks to give brands more control over where their ads appear across YouTube and the Google Display Network.”
March 20: As more UK brands report pausing ads on Google platforms, Google’s EMEA head, Matt Brittin, apologizes at an industry conference to advertisers that had been affected.
March 21: Google’s chief business officer, Philip Schindler, says in a blog post that the company will be “taking a tougher stance on hateful, offensive and derogatory content that is attacking or harassing people based on their race, religion, gender or similar categories. This change will enable us to take action, where appropriate, on a larger set of ads and sites.” He also announces more controls to shore up advertiser confidence, including:
- new default settings that exclude potentially objectionable content.
- account-level site placement and YouTube channel exclusions.
- more fine-tuned controls.
- new machine learning algorithms that can now find five times more non-brand-safe videos than before.
March 23: In response to a report by The Times of London, many large US brands follow UK brands’ lead, including Starbucks, Dish, AT&T and Pepsi. General Motors says it will advertise only on the YouTube home page, while Walmart and Johnson & Johnson say they’ll continue buying ads on YouTube Preferred channels.
April 6: YouTube updates its monetization eligibility rules. Channels must receive 10,000 views before creators can be eligible for the YouTube Partner Program and videos can be monetized. Once channels reach the threshold, they undergo a new review process.
April 26: Google expands the scope of its so-called Hate Speech policy for AdSense and launches page-level actions for publishers. The policy now applies to dangerous and derogatory content, as well as content that promotes discrimination or disparages an individual or group based on any characteristic “associated with systemic discrimination or marginalization.”
Early May: The exact date isn’t clear, and it may have been slightly earlier, but YouTube pauses the ads that run in its search results, called TrueView discovery ads, in order to implement brand safety controls and give advertisers visibility into where their video ads appear. The ads are expected to come back online in Q3 2017.
May 15: Page-level actions, first applied to hate speech violations, can now be taken for all AdSense policy violations. Google says it started working on the technology in 2015 and began testing with publishers in the fall of 2016.
June 18: Google’s general counsel, Kent Walker, outlines four steps Google is taking to address extremist-related content on YouTube. Videos don’t have to expressly violate a policy to be ineligible for ads. For example, videos that contain inflammatory religious or supremacist content may not violate the hate speech policy but will appear behind an interstitial warning. Videos that carry this type of warning are not eligible for advertising or user comments or endorsements.
We will continue to update this timeline as needed.
Author: Ginny Marvin