Fake news has a distinct garbage-like quality. It emphasizes fear over facts, and is more jarring than it is journalistic. It’s the grainy, clickbait magazine that jumps out at you in the grocery store line. It is an industry determined to profit off of the vulnerability of those who struggle to determine fact from fiction.
Thankfully, Facebook is taking a stab at fighting this faux-media phenomenon. It is implementing tactics, such as automated and human review, that would keep advertisements from legitimate companies away from fake news articles or videos.
The issue here is brands having no control over the type of content they support. In what was referred to by Bloomberg as a “Google Ad Crisis” earlier this year, many big-name brands pulled their ads from Google and YouTube after they were run alongside videos advocating anti-Semitism and terrorism.
This content was not something advertisers chose to be featured in tandem with.
Part of the problem is how these ads are placed. Predictive analytics software is a category of tools that, according to its definition on G2 Crowd, “mines and analyzes historical data patterns to predict future outcomes by extracting information from data sets to determine patterns and trends.” In other words, factors such as a viewer’s recent searches, their interests or other activity-based details influence the type of advertising directed toward visitors on Google and YouTube. Facebook intends to battle this kind of error: the pairing of reputable companies with falsified news stories.
“Currently, we do not allow advertisers to run ads that link to stories that have been marked false by third-party fact-checking organizations,” reads Facebook’s press release. “Now we are taking an additional step. If Pages repeatedly share stories marked as false, these repeat offenders will no longer be allowed to advertise on Facebook.”
This not only knocks fake news media off the platform, but also cuts off the funding with which they spread these stories. It’s important to note that these stories are not declared “false” unilaterally, either; rather, those decisions are made by third-party fact-checking organizations.
It’s possible these companies that pulled their advertisements from Google and YouTube were using a cross-channel advertising tool that would help them publish ads, “across multiple digital advertising channels such as search, display, mobile, social media and video,” according to the G2 Crowd definition.
G2 Crowd recently published its updated Fall 2017 Grid® for Cross-Channel Advertising. One of the rated features of cross-channel advertising products is Fraud Protection, which is described as “ensur[ing] marketer’s ad reach and performance is not inflated by bots or spam websites.” This feature only received, on average, a 76 percent satisfaction rating. Even 4C, a product which 94 percent of reviewers believe is headed in the right direction, only received an 85 percent satisfaction rating for Fraud Protection.
From this data, we can infer that users of cross-channel advertising products would like greater protection against advertising on fraudulent or spammy platforms. While ad networks such as Facebook can do their part by implementing fact-checkers and content review, software tools also have a big role to play in where their clients’ ads end up.
It seems that if ad networks worked in tandem with the tools used to place ads, we’d have a lot less reason to question our sources.