4 Social media and disinformation
The phenomenon of disinformation isn’t limited to politics. A meme captioned “University students hard at work during lecture” shows students in what looks like the back of a college lecture hall, one sleeping and another watching a movie on a laptop. The meme wasn’t presented as news. It wasn’t tethered to a website. It was simply shared from one feed to another. And instead of making readers mad or suspicious, it might have made them laugh. If it’s funny, what’s the problem?
Here are some possible reasons for concern, as suggested in an episode of the National Public Radio “Life Kit” podcast about (you guessed it) “fake news”:
- The picture wasn’t taken during a lecture.
- The image promotes stereotypes.
- The meme could reinforce someone’s belief that higher education shouldn’t be publicly funded.
Social media and social issues
The relationship between social media and social issues has always been complicated, and the 2020 election was a testament to that. Over the course of just a few months, social platforms served as forums for people to discuss controversial topics such as voter fraud, the COVID-19 vaccine and the riot at the Capitol.
Posts and comments about these topics prompted social media companies to step up their content moderation.

Shannon McGregor, a senior researcher at the University of North Carolina’s Center for Information, Technology and Public Life, said that during the 2020 election, social media companies assumed the role of media moderators, whereas in the past, they had tried to distance themselves from this responsibility.
Content moderation is the term used to describe social media companies’ regulation of their users’ speech. Some people think of this as censorship or a First Amendment violation, but Amanda Reid, a professor at UNC’s School of Law, said that’s not the case because the regulation comes from private companies, not the government.
The First Amendment only prohibits government officials (state actors) from restricting a citizen’s free speech activities. Owners of social media companies don’t fall into that category. While their actions can definitely limit speech, they don’t violate the First Amendment.
So, why have social media companies become more involved in content moderation? McGregor said it is because social media magnifies extreme voices. Platforms like Instagram, Facebook and X (formerly known as Twitter) often include controversial posts and comments, as you no doubt know.
This can be contentious when people are polarized, especially over political and social issues, because such posts sometimes spread anger and confusion among users, she said.
The freedom to express individual opinions and viewpoints is foundational to U.S. national identity, but McGregor pointed out that it can also prove harmful when used to bolster misinformation, hatred and division on social media.
“In general, it’s really good for democracy to have this tool that’s available. But on the other hand, we see those same tools being used … in really violent and deadly ways,” she said. Those are cases in which social media platforms might decide to moderate content.
Media legislation
When it comes to content moderation, two main pieces of legislation set broad expectations. The Digital Millennium Copyright Act requires platforms to take down copyrighted material when notified, and Section 230 of the Communications Decency Act makes users, not the platforms, legally responsible for what they post. Section 230 also allows website owners to set rules about what people can or can’t post, and to enforce those rules, without being held legally liable for users’ content.
Although social media companies aren’t legally responsible for users’ content, certain situations have prompted them to take action in the past. For example, many platforms banned President Donald Trump near the end of his first term over content he posted during the Capitol riot.
This prompts the question: Is it acceptable for social media companies to moderate users’ political posts? And if so, when?
Some people oppose content moderation because it gives social media platforms control over which voices are heard, and they fear platforms could use that control to silence certain political viewpoints.
For others, social media content moderation is a question of accountability. How should users deal with moderation decisions they disagree with? And what standards should platforms be held to? These are tough questions: some people feel the current laws give companies too much power over content, while others feel the laws don’t hold companies accountable enough.
“Content moderation is never going to be perfect. You’re never going to get rid of all the bad speech and only the bad speech. … Social media companies are empowered to do that, and they do it because otherwise people would stop using their platforms,” Reid said.
Regulations
Finally, third-party regulatory bodies have been proposed as a way to quell some of the backlash social media companies face over their content moderation efforts, according to Philip Napoli, a professor of public policy at Duke University.

Facebook took a step toward third-party oversight by forming an oversight board to address freedom of expression online, specifically, “what to take down, what to leave up and why.” The board, which operates outside Facebook’s parent company, Meta, is made up of international experts, academics and former politicians.
The board can override content decisions made by Meta’s CEO, Mark Zuckerberg. Its rulings have covered decisions to ban people from Facebook as well as the disclaimers and warnings added to questionable or false posts.
Reid said the oversight body is experimental since it’s the only governing body of its kind. Still, it is an example of a social media company taking action to address concerns about content moderation.
More recently, social media companies have trended away from oversight. After Elon Musk bought Twitter, he dismantled its team of content moderators. And while Facebook’s oversight board still exists, Meta stopped using third-party fact-checkers on Facebook and Instagram in January 2025.
Chapter 4: Questions to Consider
Have you ever seen anything on social media that you knew was false?
Have you seen or known anyone who has been banned from social media? Why?
After purchasing Twitter (the platform now known as X), Elon Musk reversed President Trump’s ban from the platform. If you owned X, would you ban anyone? Who?
Does the First Amendment protection of free speech mean anyone can say anything on social media? Why or why not?