Editor’s note: In The Fix’s new Friday column, tech and media journalist David Tvrdon reflects on how the forces of business, technology and journalism intersect and what that means for the media industry.

It’s often referred to as the internet’s most important law – Section 230 of the Communications Decency Act protects so-called “interactive computer service” providers from being sued over content posted by their users.

Originally aimed at protecting sites like Wikipedia or news media with comments sections, the rules have also allowed social media to exist. Debate around Section 230 is growing as more and more people want to see greater accountability for the tech giants (or less political bias, depending on whom you talk to).

To understand the importance of protecting platforms, just look to the situation in Europe. In 2015, the Grand Chamber of the European Court of Human Rights (ECHR) found Delfi, an Estonian news portal, liable for comments posted by users on its site.

As Dirk Voorhoof, at the time Professor of Media Law, Copyright Law and Journalism and Ethics at Ghent University, noted in his dissection of the ruling, the judgment created an open standard:

It is correct that the judgment seems only to be applicable on fora that are managed by professional, commercially-run publishing companies, and that this case does not concern “other fora on the Internet” where third-party comments can be disseminated, for example, an Internet discussion forum or a bulletin board where users can freely set out their ideas on any topics without the discussion being channeled by any input from the forum’s manager. 

The Grand Chamber’s finding is neither applicable on a social media platform where the platform provider does not offer any content and where the content provider may be a private person running the website or “a blog as a hobby”.

The ECHR ruling did not change the legal situation in Slovakia, or any EU country for that matter. But it did create an open standard with enough space for national courts to misinterpret the ruling, as many European news media noted at the time.

I remember quite vividly the discussions newsrooms had, and many toughened their online comments policies. Some news portals even shut down their comments sections altogether.

Five years later, the discussion about liability for online comments in Europe is still going on – and it is far from over.

Section 230 and free speech

Meanwhile, online sites in the United States are protected by federal law, Section 230 of the Communications Decency Act (also known as CDA 230), under which companies are not held liable for content posted by their users. That goes for social media as well as online news sites, blogs, and others.

On Wednesday, October 28th, the CEOs of Facebook, Twitter, and Google testified before Congress regarding Section 230 and online content moderation. Unfortunately, the hearing was low on substance.

First, many criticized how the hearing was framed and its timing (a few days before the elections, dubbed by some as “working the refs”). Second, the Senators seemed to focus only on the ramifications of repealing or updating Section 230 for social media companies, not on the potential impact on online news sites and the others to whom it also applies.

Nonetheless, it provided an opening for experts to weigh in – and many did. So did the platforms’ CEOs – Mark Zuckerberg, as The New York Times noted, “almost begged (somewhat disingenuously) for the government to write laws laying out what should be classified as dangerous and impermissible online speech.”

Jack Dorsey was more helpful in his opening statement which argued for more company transparency (“It is critical that people understand our processes and that we are transparent about what happens as a result”), fair processes (“We believe all companies should be required to provide a straightforward process to appeal decisions made by humans or algorithms”) and empowering algorithmic choice (“We believe that people should have choices about the key algorithms that affect their experience online”).

Algorithmic amplification, moderation, and neutrality

One of the pieces that stood out to me was an op-ed in The Information by Roddy Lindsay, a former Facebook data scientist and co-founder of the messaging startup Hustle.

In it, Lindsay argued that an opinion filed by Supreme Court Justice Clarence Thomas offered an intriguing road map for regulating social media companies specifically: “preserve most Section 230 protections but eliminate them for algorithmically amplified content like that in Facebook’s News Feed, which boosts the distribution of stimulating items that attract more clicks and comments.”

Lindsay continued by saying that Justice Thomas, in his 10-page opinion, “argued that lower courts’ interpretation of Section 230 is incorrect, providing undue immunity where none should exist.” The crux of the issue was that while social media companies may be insulated from publisher liability, they are not insulated from distributor liability, especially “if they distribute content they know to be problematic.”

“Algorithmic amplification is not necessary for us to enjoy and get utility from the internet. The internet before algorithmic content feeds was less centralized and more fun, with fewer gatekeepers. We learned about the world and navigated using tools like bookmarks, blogs, RSS feeds, mailing lists, message boards, and Wikipedia,” he concluded.

The platforms like to argue that they do not moderate their feeds (algorithms do) and that they are not the referees – both claims are highly disputable.

Social media giants say their algorithms are neutral, so even though an algorithm decides what users see, the company is not the one deciding. Well, the company built the machine, and, as we have seen, engineers made choices that give certain types of content an advantage over others:

In late 2017, when Facebook tweaked its newsfeed algorithm to minimize the presence of political news, policy executives were concerned about the outsize impact of the changes on the right, including the Daily Wire, people familiar with the matter said. Engineers redesigned their intended changes so that left-leaning sites like Mother Jones were affected more than previously planned, the people said. Mr. Zuckerberg approved the plans.

Algorithmic amplification is, I think, at the core of it all. The question is whether your platform decides which content will be seen by billions. The media have done the same, albeit with human curation, for as long as they have existed.

If your platform, be it a social media site or a news site, decides what content people see, you are choosing to amplify it, and by definition that is not neutral. In the case of an algorithm, engineers made conscious choices about its effects.
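To make that concrete, here is a minimal, hypothetical sketch of a feed-ranking function in Python – not any platform’s actual code, and the post data and engagement weights are invented for illustration. The point is simply that the weights are human choices: change them and a different post gets amplified.

```python
# Hypothetical, simplified feed-ranking sketch; not any real platform's code.
# The engagement weights below are editorial choices made by engineers:
# changing them changes which content rises to the top.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    clicks: int
    comments: int
    shares: int

def rank(posts, w_clicks=1.0, w_comments=3.0, w_shares=2.0):
    """Order posts by a weighted engagement score; the weights are not neutral."""
    def score(p):
        return w_clicks * p.clicks + w_comments * p.comments + w_shares * p.shares
    return sorted(posts, key=score, reverse=True)

posts = [
    Post("Calm explainer", clicks=900, comments=20, shares=40),
    Post("Outrage bait", clicks=400, comments=300, shares=150),
]

# Weighting comments and shares heavily puts the provocative post on top...
print([p.title for p in rank(posts)])
# ...while counting clicks alone would surface the explainer first instead.
print([p.title for p in rank(posts, w_comments=0.0, w_shares=0.0)])
```

Either ordering may be defensible, but neither is neutral – someone chose the weights.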

As mentioned earlier, the rules for online news sites in Europe are stricter when it comes to content moderation, while social media is largely exempt. With the discussion ongoing in the United States, there could be an opening in Europe, too, to revisit the social media exemption.

For now, let’s make this one thing really clear: Algorithms are not neutral, nor are the choices editors make managing the homepages of online news sites.

Obviously, there are differences between a social media site and a news site; I am not arguing that they should be treated as the same.

Still, just as certain rules apply to news sites, there should be specific rules (laws?) for social media: their influence cannot be disputed, and their liability should be aligned with their influence.

Now, the hardest question is how to create rules and shape the law in a way that helps competition thrive and does not give incumbents an even greater lead.