Misinformation in Social Media

AP Photo: Julio Cortez

Samuel Abourezk, Staff Writer


The engine of social media isn’t powered solely by truth; it is also sustained by viral, clickbait content with no basis in reality. Social media sites often protect that viral content, because meaningfully regulating themselves would cut into their flow of cash. Users are bombarded with torrential floods of information that they rarely verify, able to afford only the time to consume it passively.

According to the Global Web Index, the average American social media user spends 2 hours and 8 minutes a day on social media. Is it naive to assume that every user will verify the accuracy of every post they see? Lawmakers faced a version of this question in 1996, when they decided that they couldn’t allow the internet to remain anarchy, a legal gray area.

Section 230, passed that year as part of the Communications Decency Act, is often referred to as the “26 words that created the internet.” The law protects social media sites from being held legally responsible for third-party content: content created not by the sites themselves, but by users and other sources. It also allows sites to remove any material they find offensive, even speech protected by the First Amendment, as long as the removal is done in good faith.

In 2020 and 2021, several bills that would change the law were introduced in Congress.

In June 2020, Senators Brian Schatz and John Thune introduced the PACT (Platform Accountability and Consumer Transparency) Act. The PACT Act would require platforms to publicly disclose their methods for moderating user content. It would also require them to comply with court-ordered removal of illegal content, stripping their protections for third-party content if they fail to do so.

In February 2021, Senators Mark Warner, Mazie Hirono and Amy Klobuchar introduced the SAFE TECH (Safeguarding Against Fraud, Exploitation, Threats, Extremism, and Consumer Harms) Act, which would narrow Section 230 to protect “speech” rather than all “information.” The SAFE TECH Act would remove protections for third-party content in cases involving civil rights laws, antitrust laws, cyberstalking laws, human rights laws or civil actions regarding a wrongful death. It would also remove protections for third-party content that platforms were paid to advertise, and would force platforms to comply with court orders. Section 230 is widely regarded as needing revision, and the consequences of its dated legal picture of the internet are on display today.

This enabling of misinformation has fueled the escalation of dangerous conspiracy theories, most recently QAnon, a far-right conspiracy movement that American prosecutors commonly call a cult. Anti-vaccine conspiracy theories have recently become prominent on Facebook and YouTube, even when the content itself violates none of Facebook’s content rules.

Numerous politicians believe that Section 230 doesn’t hold companies sufficiently responsible, and President Biden plans to make changes to the law.

When social media platforms have billions of users, even if only one user in a thousand bought into a certain conspiracy theory, millions would believe it: on a platform of two billion users, one in a thousand is two million people.

According to an NPR (National Public Radio) poll conducted in December 2020, 40 percent of respondents said that they believed COVID-19 was created in a lab, and 30 percent believed that voter fraud helped Joe Biden win the election. In the same poll, 39 percent said that they believed a “deep state” was working to undermine President Donald Trump.

Lincoln Southeast (LSE) senior Owen Cheney is one of many who have seen conspiracy theories propagated on social media. Cheney says that he has seen “the chemtrails stuff, anti-vaxx, and a whole bunch of anything Alex Jones has ever said,” and believes that conspiracy theories and Alex Jones’ show have done harm to media consumers.

“I think with all the misinformation around voter fraud in the election that people were angry, that Trump didn’t get elected, and they thought that there was voter fraud and there wasn’t. Without misinformation, there wouldn’t have been an insurrection in the Capitol,” Cheney said.

The Capitol insurrection is, in many ways, the result of misinformation spiraling into conspiracy theories, aided by social media algorithms. Fadi Quran, a campaign director for Avaaz, a human rights organization, says that “Social media platforms, for years, have allowed their algorithms to boost disinformation and far-right organizing,” and that “QAnon conspiracists and other militias … would never have grown to this size without being turbo-charged by Facebook and Twitter.”

When a social media user interacts with content by liking or commenting on it, the algorithm shows them more of that content. This becomes an issue when users are shown only content they agree with, shielded from any opposing opinions that might cause them to leave the site. In a Wall Street Journal article, Hany Farid, a computer science professor at the University of California, Berkeley, says that “it’s not [just] that Facebook, YouTube and other social-media sites allow it on their platform . . . it’s that they amplify it.” Like-minded users are connected with one another and form communities of conspiracy theorists: internet holdouts of propagandized echo chambers.

Civics teacher David Nebel says that “these sites have radicalized individuals and moved them to do horrible things.” Algorithms give social media sites the tools to polarize individuals ideologically.

“Insurrectionists were lied to by President Trump and then caught in a feedback loop online. This effectively radicalized these individuals into taking extreme action. But, none of that would have happened if our civic leaders behaved responsibly and spoke truthfully,” Nebel said. 

Anti-vaccine conspiracy theories were, in past years, mostly rare, but the spread of vaccine misinformation has multiplied in size during the COVID-19 pandemic. The CDC’s (Centers for Disease Control and Prevention) Morbidity and Mortality Weekly Report (MMWR) shows that in 2013 there were several measles outbreaks around the United States, “mainly among groups with low vaccination rates.” The CDC adds that if vaccination rates dropped nationally, “diseases could become as common as they were before vaccines.”

The distribution of such dangerous misinformation puts lives at risk. In a Reuters article, Biden’s Chief of Staff Ron Klain said that the administration is trying to ensure that anti-vaccine content “does not start trending on such platforms and becomes a broader movement.”

A Gallup survey conducted in December 2020 found that 37 percent of Americans weren’t willing to get the vaccine. Dr. Anthony Fauci, director of the NIAID (National Institute of Allergy and Infectious Diseases), has said that “between 70 to 90 percent” of Americans need to be vaccinated for pandemic restrictions to be lifted. Maskless mass gatherings and superspreader events have contributed immensely to the transmission of the virus. Lack of belief in the seriousness of the disease is a psychological virus of its own, an infection of misinformation that can only be halted with serious action.

The problem of misinformation isn’t hopeless. Nebel says that “schools need . . . to give kids the tools necessary to make reasoned decisions as they interact with social media.” According to Nebel, LSE has “helped pilot materials created by Stanford University” designed to give students the skills to recognize misinformation online.

Nebel has utilized lesson plans from the Stanford History Education Group (SHEG), which teach civic online reasoning. SHEG defines civic online reasoning as “the ability to judge the credibility of the information that floods smartphones, tablets and computer screens.” One of its lessons covers “lateral reading,” in which a reader leaves a media source to see what other websites say about it, determining its credibility elsewhere. Not only do we need to teach students how to combat misinformation, but we also need to expect effort from the social media platforms themselves.

Instagram marks posts as ‘false information’ and uses independent fact-checkers to verify the accuracy of posts, which greatly combats misinformation. During the 2020 election, Twitter removed misinformation that would “undermine public confidence in an election” and ran an election hub for users looking for news on election events from reputable sources. YouTube removes content that has been edited or manipulated “in a way that misleads users . . . and may pose a serious risk of egregious harm.” Cheney thinks that features like these, which “mark stuff as false information . . . are quite helpful.”

If social media sites universally used such features, and if users were prepared to deal with misinformation, then perhaps misinformation could become another extinct disease, long dead and vaccinated against.