Democrats seek to hold social media companies accountable for online hate

Democratic state lawmakers in blue states are seeking to regulate social media platforms with a particular focus on combating hate speech and extremism.

More than two dozen bills were introduced in more than a half-dozen states over the past two years, according to tracking by the Computer and Communications Industry Association, a trade group that opposes state regulatory efforts.

The state-level push is expected to resume, if not ramp up, in 2023.

“We will continue to see these bills pop up,” said Jennifer Huddleston, policy counsel at NetChoice, another industry trade group that also has concerns about the bills.

The issue of online extremism has taken on greater urgency in the wake of high-profile cases. That includes the mass shooting in May at a Buffalo, N.Y., grocery store that killed 10 Black people and injured three others. The 18-year-old shooter, an avowed white supremacist, livestreamed a portion of the deadly attack.

An investigation conducted by the office of New York Attorney General Letitia James (D) concluded in a report released last month that the shooter “used online platforms to plan, prepare, and publicize his attack.”

“The tragic shooting in Buffalo exposed the real dangers of unmoderated online platforms that have become breeding grounds for white supremacy,” James said in a statement accompanying the report.

“Online platforms should be held accountable for allowing hateful and dangerous content to spread on their platforms,” James added.

The report made a series of recommendations for changes to both state and federal law. They include establishing civil liability for online platforms that “fail to take reasonable steps” to prevent violent criminal acts from appearing on their sites. The report also called for new restrictions on livestreaming, such as “tape delays,” to prevent the dissemination of videos depicting acts of violence.

Chris MacKenzie of the left-of-center Chamber of Progress, whose members include Twitter and Meta, said his organization is an “explicitly pro-content moderation group.” But he questioned the feasibility of the tape delay concept and pushed back on the idea that states should prescribe or dictate how companies moderate their platforms.

“I think that when the government starts to weigh in on what speech websites are allowed and not allowed to host, you create some politically dangerous situations,” MacKenzie said.

MacKenzie added that while Chamber of Progress opposes laws that would hold companies liable for the content that appears on their platforms, there should be consequences for people who share “homicidal content.”

“There should be some individual liability for what you post and what you share,” MacKenzie said.

The Democratic effort stands in contrast to Republican attempts to pass laws barring social media companies from removing content or suspending users, including politicians, for violating their content policies.

Earlier this year, New York Gov. Kathy Hochul (D) signed into law a requirement that social media companies make available “clear and concise” policies for how they respond to hateful conduct on their platforms. The law also requires companies to provide an easy way for users to report incidents where someone is vilifying or trying to incite violence against people based on race, religion, ethnicity or other protected traits.

“My legislation will empower social media users to keep virtual spaces safer for all by providing clear and consistent reporting mechanisms to flag hate speech,” New York state Sen. Anna Kaplan (D), the sponsor of the bill, said in a statement over the summer.

In September, California Gov. Gavin Newsom (D) signed a first-in-the-nation law that requires social media companies to file twice-yearly reports on efforts to combat hate speech, extremism, misinformation, harassment and foreign political interference on their platforms.

“California will not stand by as social media is weaponized to spread hate and disinformation that threaten our communities and foundational values as a country,” Newsom said in a statement at the time.

Industry groups oppose these efforts, citing the First Amendment and Section 230 of the federal Communications Decency Act. That provision in the 1996 law generally shields online companies from liability for what users post on their platforms.

NetChoice’s Huddleston said she was not familiar enough with the details of the New York attorney general’s recommendations to comment on them directly. But she said “horrific events” including the Buffalo mass shooting demonstrate the challenges of content moderation.

“We see companies constantly trying to engage and improve new tools to respond both to concerns about those kinds of nuanced content that can get flagged, but also to be able to respond more quickly to some of the truly horrific content that’s out there,” Huddleston said.