Panel – Paths for Platform Liability: Section 230 and the Choice for America’s Digital Future
Brief session description:
Thursday, July 25, 2019 – Amid a growing global techlash, many countries are passing legislation targeting, or at least strengthening regulatory penalties for, online platforms that host harmful user-created content. The U.S. has so far not embraced this approach because the Communications Decency Act, passed into law in 1996, contained Section 230, a provision stating that online platforms, with some exceptions, would not be held liable for content created and posted to their services by users. Section 230 has been credited with enabling free speech and a boom in user-created content online. This panel was tasked with considering how new criticism of Section 230 in Congress might unfold and what may result.
Moderator – Ben Brody, reporter, Bloomberg
Jessica Ashooh, director of policy, Reddit
Emma Llansó, director, Free Expression Project, Center for Democracy & Technology
Jeff Kosseff, assistant professor of cybersecurity law, United States Naval Academy
Billy Easley, senior policy analyst for judiciary, technology and free speech, Americans for Prosperity
Ryan Hagemann, technology policy executive, IBM
Details of the session:
In this breakout session, panelists discussed policy considerations for Section 230 of the United States’ Communications Decency Act as intermediaries face pressure to adapt their content-moderation practices while preserving their protection from liability for user-generated content.
Panel moderator Ben Brody, a technology reporter for Bloomberg News, broke down the impact of Section 230.
“Whether you want to start a freedom movement or spread a hostile foreign power’s misinformation, Section 230 has likely made that easier and faster than ever before imaginable,” Brody said.
Jeff Kosseff, an assistant professor of cybersecurity law at the United States Naval Academy, said Section 230 was born in the aftermath of several obscenity cases, which eventually led to a bipartisan initiative by members of Congress in 1995 to protect interactive computer service providers (ICSPs) from being treated as publishers of the information they host.
Besides establishing this safeguard for ICSP innovation, another purpose of Section 230 was to incentivize providers to make good-faith efforts to moderate user-generated content. But recent partisan arguments have accused providers of failing on both fronts: some Republicans criticize ICSPs for over-moderating content, while some Democrats accuse providers of not moderating enough. The debate also raises First Amendment concerns over a provider’s right to set its own rules for the content it hosts.
“I would say at least for some platforms that they have not moderated enough bad content and they also haven’t been transparent,” Kosseff said.
In response to polarization in Congress, IBM recently backed a policy approach that would condition a company’s liability protection on how effectively it filters illegal content online.
Ryan Hagemann, technology policy executive at IBM, described his goal of incentivizing accountability for content moderation among technology intermediaries through a middle-ground approach.
“There isn’t really a voice talking about what specific changes to [Section] 230 should occur,” he said. “Our concern moving forward is that legislators may not view the distinction between the different levels of the digital economy with the same kind of care that other people would.”
But even with IBM’s focus on developing accountability for intermediaries, the process of removing illegal content can come at a cost to users.
In the aftermath of the recently passed FOSTA-SESTA, a law intended to curb sex trafficking online, panelists questioned how platforms should decide what counts as illegal content, especially when content that is not itself being used for illegal purposes is removed under tight takedown deadlines.
Emma Llansó, director of The Free Expression Project at the Center for Democracy & Technology, questioned the removal of content without evaluating context.
“It really tilts the scales on the side of takedown without question as opposed to giving intermediaries the breathing room to have emergency responses in circumstances where an emergency is clearly evident, but also to be thoughtful about things like videos documenting war crimes in Syria,” Llansó said.
Jessica Ashooh, director of policy for Reddit, advocated for user freedom and encouraged flexibility for intermediaries in moderating content. Because Reddit provides a platform for anonymity, Ashooh emphasized that there are other tools, such as warning screens, for handling sensitive content.
“It’s important to allow companies to have those tailor-made solutions,” Ashooh said.
Ultimately, all panelists agreed that Section 230 alone does not resolve the challenges of content moderation in the United States.
Billy Easley, senior policy analyst for judiciary, technology and free speech at Americans for Prosperity, said he believes the conversation should focus on how to amend the law, given complaints of both too much bias and too little moderation.
“I don’t think [Section 230] hits the mark,” Easley said.
With the debate becoming increasingly partisan, competing approaches to moderating content have muddled the understanding of what Section 230 actually does.
“I don’t see any solution to satisfy both sides,” Kosseff said.
– By Abby Gibbs
The multimedia reporting team for Imagining the Internet at IGF-USA 2019 included the following Elon University School of Communications students, staff and faculty:
Janna Anderson, Maeve Ashbrook, Elisabeth Bachmann, Bryan Baker, Paloma Camacho, Samantha Casamento, Colin Donohue, Abby Gibbs, Jack Haley, Hannah Massen, Grace Morris, Jack Norcross, Maria Ramirez, Brian Rea, Alexandra Roat, Baylor Rodman, Zach Skillings, Ted Thomas, Victoria Traxler, Julia Walter, Courtney Weiner, Mackenzie Wilkes and Cameron Wolfslayer