The Boundless Web: Navigating Legal Implications and Internet Expression

The intricate legal spiderweb of free speech in a digital world.

Lien Phuong Pham

Lien Phuong is a student at the British International School Ho Chi Minh City in Vietnam.


The First Amendment of the U.S. Constitution enshrines a fundamental right: “Congress shall make no law abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances” (Bill of Rights Institute, n.d.). Today, the words of the Founding Fathers remain profoundly significant to American identity; the inviolable right to free speech not only solidifies individual liberties but sets a precedent for the rest of American history. Yet the rise of a new digital era has precipitated a shift within society, as promises of free speech now extend to billions of users online. This essay discusses the legal implications posed by online expression, examining the adequacy of existing legal frameworks in mitigating harmful content on the Internet.

The rapid evolution of the Internet has exposed notable gaps within the law, challenging its ability to uphold fundamental free speech principles. Most notably, the case Twitter, Inc. v. Taamneh illustrates the tension between harmful content and free speech as the Internet expands its user base. On January 1, 2017, Abdulkadir Masharipov, an ISIS adherent, carried out a terrorist attack that killed 39 people at the Reina nightclub in Turkey (Counter Extremism Project, 2025). In response, the family of one of the victims, Nawras Alassaf, sued three of the largest social-media companies: Facebook, Twitter, and Google. The plaintiffs alleged that these companies bore responsibility for failing to stop ISIS from exploiting their recommendation algorithms to recruit members and spread propaganda, and were therefore secondarily liable under the Anti-Terrorism Act.

In the lower courts, the district court dismissed the plaintiffs’ complaint for “failure to state a claim”. The Ninth Circuit, however, reversed that decision, holding that the plaintiffs had plausibly alleged the defendants’ liability. The Supreme Court issued its ruling on May 18, 2023.

The Court noted that the defendants’ liability rested upon three core elements: (1) “a wrongful act that causes injury”; (2) “the defendant’s awareness of his role in illegal or tortious activity”; and (3) “knowing and substantial assistance within the act itself” (Twitter, Inc. v. Taamneh, 598 U.S., 2023). Relying on this framework, the justices unanimously dismissed the case, concluding that the plaintiffs lacked sufficient factual allegations to support their claims. Notably, Justice Clarence Thomas emphasised that ISIS’s mere presence on these platforms did not meet the standard of Halberstam v. Welch (Case Text, n.d.) – the governing framework for aiding-and-abetting liability. Hence, the defendants were not liable on the claims presented (Twitter, Inc. v. Taamneh, 598 U.S., 2023).

On the same day, the case of Gonzalez v. Google raised a similar issue. The plaintiffs, the family of an American college student killed in a terrorist attack in Paris, likewise argued that YouTube’s recommendation algorithm had helped spread radical messaging, and that by doing so Google had become an active participant in disseminating harmful material, thereby forfeiting its immunity. However, the Supreme Court dismissed the case, concluding that the underlying claims were not viable under anti-terrorism law (Gonzalez v. Google, 598 U.S., 2023).

While the outcomes may seem narrow, the rulings in both cases draw attention to the role that social media plays in facilitating acts of violence, and to its responsibility within the evolving landscape of free speech. So, the question stands: are our media services truly creating a positive digital environment? And what implications does this have for us as a society?

These concerns were amplified as widespread debate arose across the country over Section 230 of the Communications Decency Act – a provision that laid the foundational principles of the Internet and contributed to its remarkable success (Kelly O’Hara, 2023). Enacted in 1996, Section 230 immunized media organisations from responsibility for third-party content posted on their platforms, stipulating that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider” (Cornell Law School, n.d.). Through these few lines, media corporations can host billions of pieces of defamatory or misleading content on their websites without being sued or held accountable for it (Ortutay, 2023).

However, critics argue that Section 230’s broad immunity has permitted platforms to evade accountability for systemic harm, including hate speech and propaganda (Citron et al., 2017). Some even claim that the provision is ‘outdated’, chastising its interpretation for failing to address newer threats posed by the contemporary online landscape (Smith et al., 2021). After all, the internet has evolved significantly since 1996 with the rise of social media and sophisticated recommendation algorithms; it has conspicuously outgrown the provision’s original intention of protecting platforms from lawsuits over user-generated content, and thus Section 230’s application to the modern world remains rather inadequate.

As the obsolescence of Section 230 makes evident, technology seems to be advancing at a pace even the law cannot match, creating a “governance gap” within our society (Zittrain, 2008). Andrea Matwyshyn, a professor at the University of Pennsylvania, further observes that the law “is at least five years behind technology” (Tanneeru, 2009), making us question the effectiveness of our justice system. In particular, the development of cyber law has been, and continues to be, stagnant relative to the regulations platforms set for themselves. Existing legal frameworks have yet to address subjects such as online harassment and misinformation on social media; different platforms are therefore left to devise their own policies to maintain an orderly digital space. Ultimately, this leaves online users free to exploit the First Amendment at will, and for that reason policymakers must enact new regulations that address present-day digital harms.

Despite this, enforcement is more complex than one might think, especially within a rapidly evolving digital space. Many caution that imposing new laws, or repealing existing ones, could destabilize the internet’s architecture and hinder technological progress altogether. Moreover, imposing liability on platforms for user-generated content risks over-censorship, and strict regulations could even deter users from expressing themselves freely (Walker, 2024). Regardless, the internet demands a legal architecture that transcends mere censorship while balancing free expression with accountability. Though the First Amendment prohibits the direct suppression of speech, Congress can nonetheless regulate how platforms handle harmful content without violating constitutional principles. As Justice Brandeis reminds us, the remedy for harmful speech lies not in repression but in “more speech” (Whitney v. California, 274 U.S., 1927). The path forward warrants an effort from both legislators and society to cultivate a digital ecosystem that hinges on regulating platform conduct rather than speech itself. Only through such collaboration can the law keep pace with technological advancement, ensuring that our Internet remains a force for good, not harm.
