Updates

Platform Immunity

No liability for agnostic algorithmic recommendation of terror-related content

The US Supreme Court has rejected two high-profile attempts to limit platforms’ immunity from liability for third-party content.

In our previous Update, we highlighted two prominent cases, Gonzalez and Taamneh, which challenged platforms’ immunity from liability as intermediaries for third-party content they host or distribute, as well as their immunity from liability for third-party content they block, edit or delete. Both cases involved platforms’ ‘algorithmic recommendations’ of content posted by terrorist groups.

The plaintiffs argued that Section 230 of the 1996 US Communications Decency Act did not protect platforms from liability under Section 2333 of the country’s Antiterrorism Act, which prohibits aiding and abetting acts of terrorism.

The Supreme Court took the view that, in these cases, the platforms’ content recommendation algorithms were agnostic as to the nature of the content and did not amount to substantial and active assistance with the specific terrorist attacks at issue in each case.

“It might be that bad actors like ISIS are able to use platforms like defendants’ for illegal–and sometimes terrible–ends. But the same could be said of cell phones, email, or the internet generally. Yet, we generally do not think that internet or cell service providers incur culpability merely for providing their services to the public writ large,” the Supreme Court reasoned.

We continue to monitor these developments and will return here with further updates.

Your feedback is important to us. To discuss any aspect covered in this article and how we can help, contact us now.
