AI chatbots won't enjoy tech's legal shield, Section 230 authors say

Publish date: 2024-08-08

Happy Friday! I don’t know about you, but I’ve had Kesha’s “Tik Tok” stuck in my head for months. Thanks, Washington. Send tips to: cristiano.lima@washpost.com.

Below: The White House’s plan to force a sale of TikTok will likely run into hurdles, and Meta employees grill CEO Mark Zuckerberg at an all-hands meeting following mass layoffs. … First:

AI chatbots won’t enjoy tech’s legal shield, Section 230 authors say

A Supreme Court case last month examining tech companies’ liability shield kicked off an unexpected debate: Will the protections apply to tools powered by artificial intelligence, like ChatGPT?

The question, which Justice Neil M. Gorsuch raised during arguments for Gonzalez v. Google, could have sweeping implications as tech companies race to capitalize on the popularity of the OpenAI chatbot and integrate similar products, as my colleague Will Oremus wrote last month.

But the two lawmakers behind the law told The Technology 202 that the answer is already clear: No, they won’t be protected under Section 230. 

The 1996 law, authored by Reps. Ron Wyden and Chris Cox, shields digital services from lawsuits over user content they host. And courts have typically held that Section 230 applies to search engines when they link to or publish excerpts from third parties, as Will wrote. 

But Gorsuch suggested last month that those protections might not apply for AI-generated content, positing that the tool “generates polemics today that would be content that goes beyond picking, choosing, analyzing or digesting content. And that is not protected.”

Gorsuch’s comment ignited a lively debate that has become increasingly pressing as more Silicon Valley giants redouble their AI investments and roll out new products.

According to Wyden and Cox, Gorsuch was right — meaning companies could be open to a deluge of lawsuits if AI tools go awry. 

"AI tools like ChatGPT, Stable Diffusion and others being rapidly integrated into popular digital services should not be protected by Section 230,” Wyden (D-Ore.), now a senator and a staunch defender of the law, said in a statement. “And it isn’t a particularly close call.”

Wyden, who has proposed requiring companies to vet AI for biases, added that, “Section 230 is about protecting users and sites for hosting and organizing users’ speech” and “has nothing to do with protecting companies from the consequences of their own actions and products.”

Cox, who now sits on the board of the tech trade group NetChoice, said “Section 230 as written provides a clear rule in this situation.”

“To be entitled to immunity, a provider of an interactive computer service must not have contributed to the creation or development of the content at issue,” he told me. “So when ChatGPT creates content that is later challenged as illegal, Section 230 will not be a defense.”

NetChoice counts Google, Amazon and Meta as members, among other tech companies. (Amazon founder Jeff Bezos owns The Washington Post.) 

Because Wyden and Cox co-wrote Section 230, their readings could be particularly influential as courts weigh how to interpret the law in future cases dealing with AI.

In Gonzalez v. Google, the Supreme Court is considering whether social networks can be shielded from liability for allegedly promoting content from terrorist groups. The case is set to test how Section 230 maps onto companies’ algorithmic recommendations. Section 230 also shields companies from lawsuits over “good faith” efforts to curb noxious content. 

In a brief filed to the court, Wyden and Cox argued that “Section 230 protects targeted recommendations to the same extent that it protects other forms of content curation and presentation,” and urged it to affirm a lower court ruling upholding the protections in the case.

Section 230 critics have argued that the protections should not apply when platforms amplify or recommend content, which should instead be treated as their own conduct.

While Wyden and Cox disputed those arguments, they are now drawing the line when it comes to content generated by AI-powered tools like ChatGPT. 

OpenAI, which owns ChatGPT, and Stability AI, which owns Stable Diffusion, did not return requests for comment.

Our top tabs

White House TikTok plan faces same legal challenges that doomed Trump’s ban

The Biden administration’s push to force a sale of TikTok will likely face congressional and legal hurdles similar to those that doomed the Trump administration’s failed efforts to ban the Chinese-linked app, our colleagues Drew Harwell and Cat Zakrzewski report.

The administration this week threw its weight behind a plan to force TikTok’s Chinese owners to divest their stakes in the app or face a total ban in the United States. But sources tell our colleagues the plan will likely run into the same obstacles that stifled President Donald Trump’s 2020 efforts to eject the app from the U.S. entirely.

TikTok has served as a punching bag for lawmakers’ national security and kids’ safety concerns. Jim Lewis, director of the strategic technologies program at the Center for Strategic and International Studies, said that amid a federal investigation into TikTok that has dragged on since 2019, the company could face a tough legal battle as it seeks to appease both Washington and Beijing.

TikTok CEO Shou Zi Chew told the Wall Street Journal’s Stu Woo that divesting the company from Chinese ownership would not offer any more protection than the Project Texas plan TikTok is currently pushing, which would move U.S. user data onto Oracle servers while maintaining ByteDance’s ownership of the company.

FTC issues orders to social media, video companies over misleading ad handling

The Federal Trade Commission on Thursday issued orders to major social media and video platforms asking the companies to detail how they screen and scrutinize advertisements for misleading claims.

The orders, issued unanimously by agency leaders, were sent to Meta, Meta-owned Instagram, Google’s YouTube, TikTok, Snap, Twitter, Pinterest and Twitch. The commission will collect information about the companies’ policies for paid commercial ads and their processes for screening and monitoring for compliance with those policies, including human review and the use of automated systems, a release from the FTC said.

“Social media has been a gold mine for scammers who tout sham products and other scams that have cost consumers enormously in recent years,” said Samuel Levine, the FTC’s Bureau of Consumer Protection director. “This study will help the FTC ensure that social media and video streaming companies are doing everything they can to keep scammers and deceptive ads off their platforms.”

YouTube spokesperson Christopher Lawton said the company is reviewing the FTC letter and will be working with the agency to provide a full response.

“We have strict ads policies and enforce them vigorously. We don’t allow advertisers to run ads that scam users by concealing or misstating information about the advertiser’s business, product or service. When we find ads that violate our policies we remove them,” Lawton said.

Meta, TikTok, Snap, Twitter, Pinterest and Twitch did not return requests for comment.

Meta’s Zuckerberg grilled by employees following mass layoffs

Meta CEO Mark Zuckerberg doubled down on his company’s leadership plan at a town hall after the social media giant began laying off 10,000 employees this week, our colleague Naomi Nix reports.

Asked how employees could trust company leadership, Zuckerberg said company performance and corporate transparency should be factors, Naomi writes, citing an audio livestream of the town hall.

Employees grilled the CEO on why a second round of layoffs was conducted months after a previous restructuring in which he said more layoffs were not expected. Zuckerberg attributed the decision to economic factors.

Zuckerberg has previously said 2023 would be a “year of efficiency” for Meta as the company faces intense competition for ad revenue and seeks to build out its metaverse vision.

Meta declined to comment on Naomi’s story.

Agency scanner

The FBI and DOJ are investigating ByteDance’s use of TikTok to spy on journalists (Forbes)

Musk, rivals edge closer to satellite phone service with FCC nod (Bloomberg News)

The White House might be running out of time to bring back net neutrality (The Verge)

Inside the industry

Taiwan chip pioneer warns US plans will boost costs (The Associated Press)

Easy loans, great service: why Silicon Valley loved Silicon Valley Bank (The Wall Street Journal)

Competition watch

Lenovo must pay $138.7 mln for InterDigital patents - London court (Reuters)

Privacy monitor

U.K. bans TikTok on government devices (The New York Times)

Workforce report

Silicon Valley Bank worked with start-ups others rejected. Now founders are lost (Gerrit De Vynck)

Trending

Google Glass is going away, again (The Wall Street Journal)

Before you log off

That’s all for today — thank you so much for joining us! Make sure to tell others to subscribe to The Technology 202 here. Get in touch with tips, feedback or greetings on Twitter or email.
