The US Supreme Court recently heard arguments on laws passed in Texas and Florida that limit social media platforms’ ability to moderate content. The Texas law prohibits platforms with more than 50 million monthly active US users from censoring “a user, a user’s expression, or a user’s ability to receive the expression of another person.” Florida’s law applies to companies with more than $100 million in annual revenue or 100 million monthly active users. The language in that law is quite broad, labeling social media companies as “common carriers” and limiting their ability to moderate content.

When these laws were first passed in 2021, the media, perhaps rightfully so, covered the legislation as a political story. Both laws were driven by conservative concerns that right-wing viewpoints were being censored on these platforms.

Whether or not that is true, and even though we are in an election year, I’d like to look at this case outside the political realm. The Supreme Court’s interpretation and decision could have wide-reaching impacts. Last week, I wrote about net neutrality in an age where internet-delivered content is increasingly viewed atop a private enterprise platform. This case is adjacent to the concepts I presented in that story.

Let’s begin with the language in the Texas law. It prevents moderation of “a user, a user’s expression, or a user’s ability to receive the expression of another person.” On the surface, this protects our ability to post content online, knowing our content will not be deleted, altered, or blocked by the platform. That feels like a simple way to protect our First Amendment rights.

But there are a few sticking points. How should social media companies handle user-posted content that would violate other laws? If someone posted child pornography, for example, would we not want companies to remove that content? How should clearly libelous or factually inaccurate content be handled? It isn’t illegal to explain how to build a bomb, but do we want that instruction broadcast to 100 million monthly users?

The good news is that these new laws do not prohibit moderation of unlawful content. This is all about moderating viewpoints and opinions, which leads to a much more interesting question.

How is “content moderation” defined in the context of this legislation? After all, social media has, from the beginning, been built entirely upon algorithms that moderate what content fills our feeds. Those algorithms are the whole reason we each live in a social media bubble: they keep us entertained and engaged, reinforce our views, connect us to our friends, and entice us to spend hours upon hours on these platforms.
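To make that concrete, here is a minimal, purely illustrative sketch – my own construction, not any platform’s actual code – of an engagement-driven feed. Notice that every step is a moderation decision: scoring, ranking, and cutting off the feed all determine which expression a user does and does not receive.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    likes: int
    shares: int
    author_followed: bool  # does the viewer follow the author?

def rank_feed(candidates: list[Post], feed_size: int = 10) -> list[Post]:
    """Score candidate posts and return only the top few.

    Everything below the cutoff is never shown -- functionally
    indistinguishable from being 'blocked'.
    """
    def score(post: Post) -> float:
        engagement = post.likes + 2 * post.shares          # reward viral content
        affinity = 1.5 if post.author_followed else 1.0    # reinforce the bubble
        return engagement * affinity

    return sorted(candidates, key=score, reverse=True)[:feed_size]
```

Whether the cutoff at the end counts as “ranking” or “censoring” is precisely the definitional question the Court must untangle.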

Content moderation is the very foundation upon which social media is built. Wouldn’t eliminating the ability to moderate effectively end social media entirely?

Consider the business model for social media. These laws – if sustained by the Supreme Court – could lead to lawsuits against social media companies for feeding us advertisements. If “a user’s ability to receive the expression of another person” cannot be moderated by the platform, then arguably the user has total control over what content they do and do not want to receive. Delivery of unwanted content would be illegal. Under a strict interpretation of “content moderation,” users would fully control the algorithm.
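For illustration, a strict reading might demand a feed like the hypothetical sketch below (again my own construction): delivery gated only by the user’s explicit choices, with no platform judgment, no injected ads, and no suggested accounts.

```python
def strict_feed(posts: list[dict], subscriptions: set[str], muted_words: set[str]) -> list[dict]:
    """A feed under the strictest reading: only the user's own choices
    gate delivery. The platform adds nothing (ads, suggestions) and
    subtracts nothing (downranking) on its own initiative."""
    return [
        post for post in posts
        if post["author"] in subscriptions                            # only accounts the user chose
        and not any(w in post["text"].lower() for w in muted_words)   # only the user's own filters
    ]
```

Nothing in that function reflects the platform’s judgment – which also means nothing in it supports the platform’s business model.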

More likely, the courts will interpret “content moderation” loosely, allowing companies to moderate feeds based on the terms and conditions users agree to when setting up their accounts. In those terms, companies would protect their ability to serve ads, for example, or to suggest content from accounts a user does not follow. The Florida law anticipates this and has language prohibiting “rapid changes to the terms of service,” with steep financial penalties for non-compliance: no sudden changes in how platforms moderate content.

The “common carrier” designation in the Florida law is interesting. A common carrier is a class of company that provides a public service and must serve everyone without discrimination. We normally think of common carriers in terms of FCC-regulated telecommunications companies – the telephone network, for example, must carry your call regardless of what you say on it. Applying the label to social media is a reimagining of that class of company. If social media platforms can be classified and regulated as common carriers, it is not difficult to envision many, many other large online companies being similarly classified.

Amazon or Best Buy could lose the ability to moderate content in product reviews; today, profane and otherwise offensive content is routinely removed from shopping platforms. Etsy could lose the ability to curate its marketplace. It is not a stretch to imagine Gmail losing the ability to block spam.

I am a huge proponent of free speech. But in the Texas and Florida laws, I see unnecessary overreach. There is still a place for sensible editorial control on private platforms. Digital storefronts should be allowed to delete or block content that is inappropriate or unrelated to why a user is on the platform. This is no more a restriction on free speech than a theater requiring everyone to be quiet during a film.

We use social media platforms because we WANT those platforms to curate and moderate the content we receive. There will never be an end to the debate about whether the algorithms are biased or fair. But that’s OK. Those debates create pressure for private companies to continually adjust to market feedback.

Similar to how I approached data privacy recently (you can read that here), I think there is room for legislation to improve how well platforms manage some forms of content. Social media platforms should be held accountable for illegal activity with clear ties to poor content moderation. There should be accountability for delivering hate speech to 100 million monthly users. There should be accessory damages for platforms that distribute child pornography or bomb-making instructions. Hold these large technology platforms accountable much as the FCC fines broadcast networks that air inappropriate content.

Ideally, the very largest of these platforms should also remain “net neutral.” Content that does not violate other laws (fraud, libel, etc.) should get free speech protection. That isn’t hard to allow on the content creation side. Delivery is trickier: when moderation algorithms prioritize user engagement and advertising interaction, it’s unlikely we’ll break out of our bubbles and see opposing views.

If the laws are upheld and these companies are deemed common carriers, I would hope that designation comes with a mandate to transparently disclose moderation algorithms – or, even better, with new and stronger government regulation that ensures the platforms remain “net neutral,” as I discussed last week. But defining social media as a common carrier is a slippery slope I don’t think we should start down.

Back to the cases in question: I believe we should hold technology platform companies accountable for real crimes, but let them moderate content and manage their private businesses as they like. Do not elevate these companies to common carrier status. We users can decide which social platforms to frequent and which to delete. The Florida and Texas laws should be overturned. We will find out in June whether the Supreme Court agrees.