How Section 230, ChatGPT and Social Code Platforms Relate
The US Supreme Court will soon weigh in on whether Section 230 remains as is or is modified, a decision with consequences either way
By John P. Desmond, Editor, AI in Business
The US Supreme Court in February will consider two cases, Gonzalez v. Google and Twitter v. Taamneh, that challenge the liability shield of Section 230, which arguably gave birth to today’s internet by protecting platforms such as YouTube and Meta’s Facebook from being sued for harmful content posted by third parties on their sites. It also gives the platform providers the ability to police their sites without incurring liability.
Some have suggested the dawn of generative AI such as ChatGPT adds a new wrinkle to be considered: the ability to flood the internet with harmful content at a faster rate than ever before.
In the lawsuits, the plaintiffs argue that Section 230 should not protect platforms when they recommend harmful content, such as terrorist videos.
“Section 230 is fundamentally the economic backbone of the internet,” stated Halimah DeLaine Prado, general counsel for Google, in a recent account in The Wall Street Journal. “A ruling that undermines Section 230 would have significant unintended and harmful consequences.”
President Joe Biden and some lawmakers have called for Section 230 to be modified to address what they characterize as flaws in the law, but Congress has not been able to achieve any consensus on legislation. And some state laws opposing what their proponents see as censorship of free speech by Big Tech platforms are also before the Supreme Court.
Google and Twitter have argued that taking away Section 230 protections for recommendation algorithms would have wide-ranging negative effects, making it risky for sites to help users find content, according to an account in The Verge. The court is to decide whether recommendations are an extension of user-generated content covered by Section 230, or whether they are separate, unprotected speech made by the platform itself.
Among filings submitted in relation to the case is a joint filing by Frances Haugen, the Facebook whistleblower, and the children’s safety group Common Sense, which urged the court not to extend Section 230 protections to “non-publishing” activities, including the collection and use of personal information, according to a recent account in The Washington Post.
“Google’s activities — creating profiles from the billions of collected data points about users and amplifying harmful content by regularly recommending targeted videos and ads based on user profiles — is simply not covered by the plain text of section 230,” they stated in the filing.
Flagging Role of ‘Social Code’ Platforms
An account in the Georgetown Law Technology Review flagged the distinction between “social media” and “social code” in this context. The authors cited the example of the AI bot GPT-4chan, which a YouTuber created in June 2022 to mimic racist, misogynistic and antisemitic posts. The bot posted 30,000 comments within a few days; its creator then shared the underlying AI model on Hugging Face Hub, an online collaborative coding platform. That allowed any user to download the model and flood the internet with thousands of hateful comments.
Hugging Face’s CEO pulled the plug on it, announcing the platform would restrict access to prevent further harm.
“The GPT-4chan incident demonstrates the growing importance and potential harms of social code platforms: platforms that let users share and collaborate on datasets, AI models, and other coding projects (we call this content “social code”),” stated the authors, led by Sean Norick Long, a research assistant at the Georgetown University Law Center.
Social code platforms underlie today’s digital infrastructure, the authors suggest. GitHub stores code from over 83 million developers, including much of the open source software powering many devices in wide use. Kaggle hosts datasets from its user community of some 10 million. And Hugging Face hosts AI machine learning models that anyone can use; the company recently reached a valuation of $2 billion.
“Like social media platforms, social code platforms not only rely on user growth and network effects, but allow users to use them for both good and harm,” stated the authors. “Policymakers must start paying attention.”
Social code platforms are collaborative by nature, making them more akin to Wikipedia or Google Docs than a one-off Facebook post. Also, social code is often released under open source licenses with minimal restrictions on its use, which has driven technical innovation. Python, for example, is one of the most widely used programming languages today, owing in part to open source collaboration, the authors note.
“Another major difference is that social code platforms are meant to share tools to be used, rather than share content to be consumed,” the authors state. The tools can be used for good, such as in helping to attain Sustainable Development Goals, and “they can be co-opted by bad actors,” the authors state.
When AI is introduced, content can be shared at a much faster and larger scale, giving it the potential to be a foundation for many extremely harmful applications.
Social media and social code platforms converge in their use of stated guidelines for acceptable use and content, which help identify prohibited content and describe a moderation process. However, “Once a platform aims to comply with Section 230 by balancing collaborative sharing with limitations on what can be shared, it opens itself to debates over free speech and the principles of a free and open internet,” the authors state.
The challenge for rule makers in Washington, DC is to strike a balance. The authors state, “We therefore recommend that policymakers carefully consider the important role that social code platforms play in the global Internet. When Congress considers changes to platform liability, lawmakers should not legislate with solely Facebook and Twitter in mind.”
Section 230 Reform Hub Tracks Legislation
Many bills have been filed in Congress to reform Section 230. To track the legislation, the publishers of Slate, working with the New America civic platform and Arizona State University, created the Section 230 Reform Hub in March 2021, and they have been updating it since.
In a commentary published last October, author Matt Perault, who is director of the Center on Technology Policy at the University of North Carolina, Chapel Hill, provided an update, stating, “To many, Section 230 is the cornerstone of all that’s good about the internet, or the cause of everything that’s bad. But if reform were straightforward, we wouldn’t have a list of dozens of colliding and overlapping proposals. By cataloging the proposed reforms, perhaps we can help create a more productive discussion around the 26 words that have launched endless debates.”
Read the source articles and information in The Wall Street Journal, in The Verge, in The Washington Post, in the Georgetown Law Technology Review and in Slate.