(Reuters) — As the U.S. Supreme Court decides in the coming months whether to weaken a powerful shield protecting internet companies, the ruling could also have ramifications for fast-developing technologies such as the artificial intelligence chatbot ChatGPT.
The justices are due to decide by the end of June whether Alphabet Inc.’s YouTube can be sued over its video recommendations to users. That case tests whether a U.S. law that shields technology platforms from legal liability for content posted online by their users also applies when companies use algorithms to target users with recommendations.
What the court decides about those issues is relevant beyond social media platforms. Its ruling could influence the emerging debate over whether companies that develop generative AI chatbots such as ChatGPT from OpenAI, a company in which Microsoft Corp. is a major investor, or Bard from Alphabet’s Google should be protected from legal claims such as defamation or invasion of privacy, according to technology and legal experts.
That’s because the algorithms that power generative AI tools like ChatGPT and its successor GPT-4 operate in a somewhat similar way to those that suggest videos to YouTube users, the experts added.
“The debate is really about whether the organization of information available online through recommendation engines is so important in shaping the content that it becomes responsible,” said Cameron Kerry, a visiting fellow at the Brookings Institution think tank in Washington and an expert on AI. “You have the same kind of problem with a chatbot.”
Representatives for OpenAI and Google did not respond to requests for comment.
During arguments in February, Supreme Court justices expressed uncertainty over whether to weaken the protections enshrined in the law, known as Section 230 of the Communications Decency Act of 1996. While the case does not directly relate to generative AI, Justice Neil Gorsuch noted that AI tools that generate “poetry” and “polemics” would likely not enjoy such legal protections.
The case is just one aspect of an emerging conversation about whether Section 230 immunity should apply to AI models trained on reams of existing online data but capable of producing original works.
Section 230 protections generally apply to third-party content from users of a technology platform and not to information that a company helped develop. Courts have yet to weigh in on whether a response from an AI chatbot would be covered.