New York Attorney General Letitia James (D) has sent letters to half a dozen social media companies demanding to know what they are doing to address threats of violence against Jewish and Muslim people and institutions following Hamas’s attack on Israel.
The letters sent this week to Google, Meta, Reddit, Rumble, TikTok and X give the companies until next Friday to answer a series of 11 questions aimed at establishing how the companies are “identifying, removing, and blocking the re-uploading” of antisemitic and Islamophobic threats.
“In the wake of Hamas’ unspeakable atrocities, social media has been widely used by bad actors to spread horrific material, disseminate threats, and encourage violence,” James said in a statement announcing the letters. “These platforms have a responsibility to keep their users safe and prohibit the spread of violent rhetoric that puts vulnerable groups in danger.”
The letters ask the companies to explain in detail the steps they are taking to ensure their platforms are not used to “incite violence and further terrorist activities.”
Specifically, James is requesting information on how the companies remove content that calls for violence, ensure that it is not reposted, and discipline users who post such content.
“I am calling on these companies to explain how they are addressing threats and how they will ensure that no online platform is used to further terrorist activities,” James said in her statement.
Only Meta responded immediately to a request for comment from Pluribus News.
The parent company of Facebook, Instagram and Threads pointed to a lengthy statement issued Friday detailing the steps it is taking to monitor its platforms and moderate content. In the three days after Hamas attacked Israel, Meta says it removed or flagged nearly 800,000 posts in Hebrew and Arabic that violated its community standards.
The company also said that Hamas, as a designated terrorist organization, is banned from Meta’s platforms and that the company removes praise or support for Hamas “when we become aware of it.”
A Reddit spokesperson later confirmed to Pluribus News that the company has received the attorney general’s letter and planned to respond “in a timely manner.”
“We have strict sitewide policies against content that encourages, glorifies, incites, or calls for violence or physical harm, including content created by or promoting legally designated terrorist organizations and other forms of terrorist content,” the spokesperson wrote in an email.
In a social media post this week, the Safety Team at X, formerly known as Twitter, announced additional steps it has taken in recent days to enforce the rules on its site.
Those changes include removing Hamas-affiliated accounts, monitoring for antisemitic speech and working with the Global Internet Forum to Counter Terrorism to prevent terrorist content from being distributed online.
“X is committed to the safety of our platform and to giving people information as quickly as possible,” the statement said.
In a blog post, the Anti-Defamation League said it appreciated X’s efforts but that there was more the company could do, warning that social media sites would be “severely tested in the days and weeks ahead.”
The blog post included a series of recommendations for X, including that it establish a misinformation reporting category and reverse a recent decision to remove headlines from linked news articles.
“Platforms bear a particular responsibility to moderate content in a time of war, such as we are seeing currently in Israel and Gaza. However, bad actors are taking advantage of the tragic situation to fan the flames,” the ADL wrote.
Caitlin Chin-Rothmann, a fellow with the Center for Strategic and International Studies, was also critical of X for allowing false and violent content to reach “extreme heights” following Hamas’s attack.
“Social media platforms were not prepared to handle the flood of false and harmful content surrounding the Hamas attack,” Chin-Rothmann wrote in an analysis posted to the CSIS website. “To avoid further descending into chaos, technology companies need to significantly upgrade content moderation algorithms, scale user flagging systems, expand cultural and language competency, and ramp up overall staffing levels.”
TikTok videos about the war have drawn billions of views in recent days, according to The Washington Post.
This story has been updated to include comment from Reddit.