Facebook’s Fight Against Misinformation - The Khuram Dhanani Foundation

Facebook’s Fight Against Misinformation

While many are skeptical about which side of the divide technology stands on in the quest to eliminate misinformation, Facebook shows great potential to solve the issue; just not on its own platform.

Recently, there’s been a massive upsurge in discussions about misinformation with a heavy focus on the part of tech companies and the platforms they preside over. The online conversation surrounding the COVID-19 pandemic, vaccination, and the last US presidential election all exacerbated what was already a serious problem on the internet. 

However, the same tech companies responsible for platforming misinformation seem hard at work quelling it elsewhere on the internet.

The State of Online Discourse and Facebook’s Debt to Society

To properly understand the scope of the misinformation debate, it would be useful to take a look at Facebook. Mark Zuckerberg’s company has inevitably found itself at the forefront of almost every type of controversy in the tech space, ranging from questions about the misuse of user data and privacy violations to the aforementioned misinformation discussion. 

While the former issue has caused problems for Facebook, including Zuckerberg testifying before Congress, the problem of misinformation on its sites has largely gone unpunished. While there has been plenty of public backlash, Facebook was never hard-pressed to actually effect much change.

Part of what made Facebook vulnerable to this problem was (and still is) the demographic the product appeals to. Facebook as a social media platform is most popular amongst older users, who are far more likely than younger demographics to stick to only one platform and take posts at face value.

Additionally, Meta’s acquisition of WhatsApp didn’t help the company’s image or the misinformation problem. WhatsApp is another platform largely popular amongst older users, and it is notorious for facilitating wide-reaching misinformation through broadcast messages that make their way from one group chat to another on the app.

In a July 2021 interview on the Vergecast, Mark Zuckerberg admitted that he had done away with the notion of stopping every single thread of misinformation on his platforms, likening it to police trying to stop all crime in a city.  

“When you think about the integrity of a system like this, it’s a little bit like fighting crime in a city. No one expects that you’re ever going to fully solve crime in a city. The police department’s goal is not to make it so that if there’s any crime that happens, you say that the police department is failing. That’s not reasonable. I think, instead, what we generally expect is that the integrity systems, the police departments, if you will, will do a good job of helping to deter and catch the bad thing when it happens and keep it at a minimum, and keep driving the trend in a positive direction and be in front of other issues too. So we’re going to do that here.”

Internal Attempts and an External Focus 

This is not to say that the company did not try to make a concerted effort to fight misinformation. In the wake of the backlash from the public, Facebook implemented several tactics to quell the complaints, most of which focused on the company’s public image as opposed to any tangible improvement in behavior or user experience. 

Zuckerberg’s attitude seemed to reflect the company consensus, however. Facebook execs showed little interest in collating accurate statistics on COVID misinformation when approached by experts in 2021. This came to light alongside the company’s removal of over 18 million pieces of misinformation from the site at the time, a staggering number that puts the scale of the job in perspective.

Dani Lever, a Facebook spokeswoman, summed it up like this:

“The suggestion that we haven’t put resources toward combating COVID misinformation and supporting the vaccine rollout is just not supported by the facts. With no standard definition for vaccine misinformation, and with both false and even true content (often shared by mainstream media outlets) potentially discouraging vaccine acceptance, we focus on the outcomes—measuring whether people who use Facebook are accepting of COVID-19 vaccines.”

However, when AI was applied to stem certain content, the company started to make some headway in curbing the misinformation problem. 

Facebook started to leverage AI technology in 2013, initially to increase profits from the business of collecting, analyzing, and selling user data, often without the user’s knowledge. Eventually, the technology was applied to moderation across seven focus areas:

  • nudity
  • graphic violence
  • terrorism
  • hate speech
  • spam
  • fake accounts
  • suicide prevention

The AI monitored posts and automatically detected those violating the Community Standards in these seven focus areas. In addition, Facebook partnered with moderator-style fact-checkers to fight against misinformation specifically. These moderators manually review posts that don’t technically violate the Community Standards but still spread misinformation.
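As a rough illustration of how such a two-tier system might be structured — the seven category names come from the article, but the keyword matching below is a purely hypothetical stand-in for Facebook’s actual machine-learning models:

```python
# Hypothetical sketch of a two-tier moderation pipeline: an automated pass
# removes clear Community Standards violations, while borderline posts are
# queued for human fact-checkers. The keyword lists stand in for real ML
# classifiers and are NOT Facebook's actual logic.

# Two of the seven automated focus areas named in the article, as examples.
VIOLATION_KEYWORDS = {
    "spam": {"free money", "click here"},
    "graphic violence": {"gore video"},
    # ...the remaining five categories would have their own detectors.
}

# Terms that merely *suggest* misinformation: routed to human review,
# not removed automatically.
MISINFO_HINTS = {"miracle cure", "vaccines cause"}

def moderate(post: str) -> str:
    """Return 'remove', 'fact_check', or 'allow' for a post."""
    text = post.lower()
    for category, keywords in VIOLATION_KEYWORDS.items():
        if any(k in text for k in keywords):
            return "remove"      # automated: violates Community Standards
    if any(h in text for h in MISINFO_HINTS):
        return "fact_check"      # routed to human fact-checkers
    return "allow"

print(moderate("Get free money, click here!"))  # remove
print(moderate("This miracle cure works!"))     # fact_check
print(moderate("Happy birthday, Grandma!"))     # allow
```

The key design point is the split: hard violations are handled automatically, while misinformation — which often breaks no explicit rule — falls through to the human fact-checking tier.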

However, the problem is still not solved to any standard of satisfaction. And it seems that Facebook might be more interested in solving other problems than working on its own platforms. 

How AI Could Help Facebook Be (Marginally) Better

Facebook has thrown a lot of weight behind its AI tech, recently announcing a project that aims to fight user-generated misinformation on Wikipedia through AI that checks whether citations actually support the claims they back.
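Meta hasn’t published the exact mechanics here, but at its simplest, citation checking compares a claim against the text of its cited source. A naive, purely illustrative version — a real system would use learned text embeddings rather than word overlap — might look like this:

```python
def citation_supports(claim: str, source_text: str,
                      threshold: float = 0.8) -> bool:
    """Naive check: does a large enough fraction of the claim's words
    appear in the cited source? This word-overlap heuristic is only an
    illustration, not Meta's actual method."""
    claim_words = {w.strip(".,!?").lower() for w in claim.split()}
    source_words = {w.strip(".,!?").lower() for w in source_text.split()}
    if not claim_words:
        return False
    overlap = len(claim_words & source_words) / len(claim_words)
    return overlap >= threshold

source = "The Eiffel Tower was completed in 1889 in Paris, France."
print(citation_supports("The Eiffel Tower was completed in 1889", source))  # True
print(citation_supports("The tower collapsed in 1950", source))             # False
```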

While this is a great thing that could help Wikipedia actually deserve its position as a trusted resource for many internet users, it raises the question: why isn’t Facebook doing more about its own platform?

No AI will fix the unsavory business practices that the company employs for profit. However, the misinformation crisis does seem like a more salient focus than another site’s accuracy. 

This approach and the effort being poured in could instead be used to mitigate the societal damage caused by the discourse on Facebook, which has affected everything from acceptance of election results to public health and vaccine acceptance. 

If Meta were so inclined, its own AI technology could be put to use on its own platforms, with the exception of WhatsApp, which is likely doomed to remain a hive of misinformation: as an end-to-end encrypted instant-messaging platform, its content can’t be scanned by moderation systems.