A pending Supreme Court ruling on whether the legal protections given to Big Tech extend to its algorithms and recommendation features could have significant implications for future cases involving artificial intelligence, according to experts.

In late February, the Supreme Court heard oral arguments examining the extent of legal immunity given to tech companies that allow third-party users to publish content on their platforms.

One of the two cases, Gonzalez v. Google, focuses on the recommendation algorithms used by sites such as YouTube to arrange and promote content for users.

Section 230, which allows online platforms significant leeway regarding responsibility for users' speech, has been challenged multiple times in the Supreme Court. (AP Photo/Patrick Semansky, File)

Nohemi Gonzalez, a 23-year-old U.S. citizen studying abroad in France, was killed by ISIS terrorists who fired into a crowded bistro in Paris in 2015. Her family filed suit against Google, arguing that YouTube, which Google owns, aided and abetted the terrorists by hosting and promoting ISIS material on the platform with recommendation algorithms that helped the group recruit.

Marcus Fernandez, an attorney and co-owner of KFB Law, said the outcome of the case could have "far-reaching implications" for tech companies, noting it remains to be seen whether the decision will establish new legal protections for content or open up more avenues for lawsuits against tech companies.

He added that the ruling could determine the level of protection given to companies and how courts interpret those protections when it comes to AI-generated content and algorithmic recommendations.

"The decision is likely to be a landmark one, as it will help define what kind of legal liability companies can expect when they use algorithms to target their users with recommendations, as well as what kind of content and recommendations are protected. In addition to this, it will also set precedent for how courts deal with AI-generated content," he said.

Under Section 230 of the Communications Decency Act, tech companies are immune from lawsuits based on content curated or posted by platform users. Much of the justices' questioning in February centered on whether the posted content was a form of free speech and on the extent to which recommendations or algorithms played a role in promoting it.

Artificial Intelligence words are seen in this illustration taken March 31, 2023. (REUTERS/Dado Ruvic/Illustration)

At one point, the plaintiff's attorney, Eric Schnapper, detailed how YouTube presents thumbnail images and links to various online videos. He argued that while users create the content itself, the thumbnails and links are joint creations of the user and YouTube, and therefore fall outside the scope of YouTube's legal protections.

Google attorney Lisa Blatt responded that the argument was not properly before the court because it was not part of the plaintiff's original complaint.

Justice Sonia Sotomayor expressed concern that such a perspective would create a "world of lawsuits." Throughout the proceedings, she remained skeptical that a tech company should be liable for such speech.

Attorney Joshua Lastine, the owner of Lastine Entertainment Law, told Fox News Digital he would be "very surprised" if the justices found some "nexus" between what the algorithms generate and push onto users and other types of online harm, such as somebody telling another person to commit suicide. Absent such a finding, he said, he does not believe a tech company would face legal repercussions.

Lastine, citing the story behind the Hulu drama "The Girl From Plainville," said it is already extremely difficult to establish one-on-one liability, and that bringing in a third party, such as a social media site or tech company, would only make a case harder to win.

Michelle Carter came under the national spotlight after it was discovered that, in 2014, she had sent text messages urging her boyfriend, Conrad Roy III, to kill himself. Though she was convicted of involuntary manslaughter and faced up to 20 years in prison, Carter was sentenced to only 15 months behind bars.

Google headquarters in Mountain View, California, US, on Monday, Jan. 30, 2023. Alphabet Inc. is expected to release earnings figures on February 2. (Photographer: Marlena Sloss/Bloomberg via Getty Images)

"It was hard enough to find the girl who was sending the text messages liable, let alone the cell phone that was sending those messages," Lastine said. "Once algorithms and computers start telling people to start inflicting harm on other humans, we have bigger problems when machines start doing that."

Ari Lightman, a Distinguished Service Professor at Carnegie Mellon University's Heinz College of Information Systems and Public Policy, told Fox News Digital that a change to Section 230 could open a "Pandora's box" of litigation against tech companies.

"If this opens up the floodgate of lawsuits for people to start suing all of these platforms for harms that have been perpetrated as they perceive toward them—that could really stifle down innovation considerably," he said.

However, Lightman also said the case reaffirmed the importance of consumer protection, noting that if a digital platform can recommend things to users with immunity, it needs to design more accurate, usable, and safer products.

Lightman added that what constitutes harm in a particular case against a tech company is highly subjective – an AI chatbot making someone wait too long or giving erroneous information, for example. A standard in which lawyers attempt to tie such harm to a platform could be "very problematic," he said, leading to a sort of "open season" for lawyers.

"It's going to be litigated and debated for a long period of time," Lightman said.

Lightman noted that AI carries many legal issues beyond liability and erroneous information, including intellectual property questions specific to the content it produces. He said greater transparency about where a model acquired its data and why it presented that data, along with the ability to audit such systems, would be an important mechanism for arguing against tech companies' immunity from grievances filed by users unhappy with an AI's output.

Throughout the oral arguments for the case, Schnapper reaffirmed his stance that YouTube's algorithm, which helps present content to users, is in and of itself a form of speech on the part of YouTube and should therefore be considered separately from content posted by a third party.

Blatt countered that the company was not responsible because all search engines leverage user information to present results. For example, she noted that someone searching for "football" would see different results depending on whether they were in the U.S. or somewhere in Europe.

U.S. Deputy Solicitor General Malcolm Stewart compared the conundrum to a hypothetical situation in which a bookstore clerk directs a customer to the table where a particular book is located. In that case, Stewart argued, the clerk's suggestion would be speech about the book, separate from any speech contained inside the book.

The justices are expected to rule on the case by the end of June, determining whether YouTube can be sued over the algorithms it uses to push video recommendations.

Fox News' Brianna Herlihy contributed to this report.