In what many see as an incredibly divided, hyper-partisan political climate, the result of last week’s 2016 presidential election has proven as controversial as it is unprecedented. Critics have accused Google and social media sites like Facebook and Twitter of inadvertently enabling divisive rhetoric and potentially influencing the election’s outcome by cultivating echo chambers and increasing exposure to misinformation in their respective networks.
On Monday, Facebook and Google announced that they will take extensive measures to vet digital publishers in an effort to curb the spread of “fake news” within their services.
Google led the way on Monday afternoon, when it announced that it would institute a ban on “fake news” sites benefitting from its AdSense network. Later in the afternoon, Facebook updated its audience network policy to prevent ads featuring deceptive or factually inaccurate content from reaching its users.
Facebook noted in a statement that its new policy will apply only to “fake news,” and that it will continue to monitor existing publishers to ensure compliance.
Though the tech giants have been criticized over the years for their relative inaction on misleading or abusive content, critics now argue that the spread of such information may have actually influenced the outcome of the 2016 election in President-elect Donald Trump’s favor.
Those same critics say that Google and Facebook can no longer ignore malicious, inaccurate content without being complicit in its influence on the culture and political climate.
“No lie or falsehood or hoax is more consequential than Facebook’s belief that it is not a media company, and thus can shirk the responsibilities of one—beginning with a basic fidelity to the truth,” wrote J.K. Trotter of Gizmodo.
Facebook in particular defended itself as a sharing platform, not an arbiter of truth.
“Of all the content on Facebook, more than 99% of what people see is authentic,” Facebook CEO Mark Zuckerberg wrote on his Facebook page last Friday. “Only a very small amount is fake news and hoaxes.”
“The hoaxes that do exist are not limited to one partisan view, or even to politics,” he continued. “Overall, this makes it extremely unlikely hoaxes changed the outcome of this election in one direction or the other.”
Despite Zuckerberg’s defense of Facebook, critics of its fake news crackdown argue that the partisan politics of the U.S. run deeper than what’s shared on our individual news feeds.
“[T]he idea that a better or different algorithm on Facebook would have made the results any different is just as ridiculous as the idea that newspaper endorsements or ‘fact checking’ mattered one bit,” wrote Mike Masnick of Techdirt.
“People are angry because the system has failed them in many, many ways, and it’s not because they’re idiots who believed all the fake news Facebook pushed on them (even if some of them did believe it).”
Conservative publications like Breitbart fear that changes to Facebook’s policy will impact right-leaning publishers the most, while an investigation from BuzzFeed News suggests that a crackdown on conservative news pages would be a necessary course correction. A recent analysis from the site found that nearly 40% of the information shared by right-leaning Facebook pages during the election was misleading, compared with roughly 20% of the information shared by left-wing pages.
Facebook’s definition of “fake news” remains nebulous, and few details are available at this time about how the policy will practically affect political pages, ads, or News Feed posts.
What do you think Facebook and Google should do to crack down on “fake news”? Sound off in the comments.