Facebook is working on tools to keep newsfeed ads away from topics like crime and politics


Facebook is developing tools that advertisers can use to keep their ad placements away from certain topics in its newsfeed.

The company announced that it would begin testing the controls to exclude topics with a small group of advertisers. For example, a children's toy company could avoid content related to "crime and tragedy" if it wished. Other topics include "news and politics" and "social issues."

The company said it would take "much of the year" to develop and test the tools.

Facebook, along with players like Google's YouTube and Twitter, has worked with marketers and agencies through a group called the Global Alliance for Responsible Media (GARM) to develop standards in this area. The group has worked on measures to support "consumer and advertiser safety," including establishing definitions of harmful content, reporting standards, and independent monitoring, and agreeing to develop tools that better manage ad adjacency.

Facebook's newsfeed controls build on tools that already run in other areas of the platform, such as in-stream videos or the Audience Network, which lets mobile software developers deliver in-app ads to users based on Facebook data.

The concept of "brand safety" matters to any advertiser that wants to make sure its ads don't appear near certain kinds of content. But the advertising industry has also put increasing pressure on platforms like Facebook to become safer overall, not just in the areas near ad placements.

The CEO of the World Federation of Advertisers, which founded GARM, told CNBC last summer that the group was moving away from "brand safety" to focus more on "societal safety." The point is that even when ads don't appear in or next to objectionable content, many platforms are still funded largely by advertising dollars; in other words, ad-supported content helps subsidize everything else on the platform. Many advertisers say they therefore feel responsible for what happens across the ad-supported web.

This became especially evident last summer, when a number of advertisers temporarily pulled their advertising dollars from Facebook and urged the company to take stricter steps to stop the spread of hate speech and misinformation on its platform. Some of those advertisers didn't just want their ads kept away from hateful or discriminatory content; they also wanted a plan to remove that content from the platform entirely.

Twitter said in December that it is working on its own in-feed brand safety tools.


Katherine Clark