Over the last four years, social media companies, accused of standing idle as “fake news” and phony accounts overran their platforms in the run-up to the 2016 presidential election, have been gradually adopting stronger policies to stem the spread of misinformation.

With early voting in the 2020 election already well underway, Facebook and Twitter deployed those policies against a major news outlet for the first time Wednesday to prevent a sensational political story from going viral. The results called into question how much progress the companies have made in defining the limits of permissible online speech in a way that satisfies Democrats and Republicans alike.

Hours after the New York Post published a story about Hunter Biden’s involvement with Ukrainian natural gas company Burisma, Facebook said it would reduce the article’s distribution on its platform until it was verified by a third-party fact-checker.

Shortly after, Twitter blocked users from sharing the story at all. The company said its decision stemmed from its policy on “hacked materials.”

Alex Stamos, who served as Facebook’s chief security officer and is now director of the Stanford Internet Observatory, said in a tweet that he could not recall Facebook and Twitter ever taking similar actions against a mainstream news organization.

The unprecedented moves, which drew accusations of bias from President Trump and other Republican leaders, came just three weeks before the presidential election, with Democratic candidate Joe Biden, Hunter’s father, leading national polls by a wide margin.

Trying to portray the younger Biden’s role on Burisma’s board as corrupt has been a key political strategy for Trump, and his insistence that Ukraine launch an investigation into the matter led to his impeachment by the House of Representatives last year. Trump was acquitted by the Senate.

U.S. intelligence officials have said that allegations involving Ukrainian corruption are part of Moscow’s efforts to boost Trump’s reelection campaign, but that hasn’t persuaded the president or his allies to drop the issue.

Moscow sought to boost Trump’s campaign four years ago by hacking and releasing Democratic Party emails, and by using networks of social media accounts it controlled to disseminate their contents and other divisive content. During this year’s campaign, Trump and Russia have repeatedly echoed each other’s rhetoric and disinformation, such as by baselessly claiming that mail ballots are fraudulent.

The Post’s story cited emails obtained from a laptop that someone who may have been Hunter Biden dropped off for repair at a shop in Delaware last year but never picked up. The shop owner told the Post that he eventually copied the hard drive and provided it to a lawyer for Rudy Giuliani, the former New York City mayor who has worked with Trump to dig up dirt on the Bidens.

Twitter said it considered the haziness around the origins of the materials cited in the Post’s story in deciding to limit the story’s spread. A Twitter policy established in 2018 prohibits using the platform to distribute content obtained without authorization.

“We don’t want to incentivize hacking by allowing Twitter to be used as distribution for possibly illegally obtained materials,” the company said Wednesday evening.

Twitter also said images in the Post’s story included email addresses and phone numbers, which violates the platform’s rules.

In the closing weeks of the presidential campaign, Facebook has been strengthening its defenses against misinformation. Some of its policy changes, such as a suspension of political advertising after polls close Nov. 3, have been aimed at warding off electoral chaos.

In other cases, the timing is less obviously tied to the election. In the last two weeks, Facebook has implemented a comprehensive ban on QAnon-related pages, groups and Instagram accounts, reversed its long-standing policy of allowing posts that deny the Holocaust and restricted ads that discourage people from getting vaccines. On Wednesday, Twitter too said it would remove posts denying the Holocaust.

A Facebook spokesman said on Twitter that the extra caution on the New York Post story stems from the company’s “standard process” of preventing the spread of misinformation.

Asked about Facebook’s decision, spokesman Andy Stone referred a reporter to a 2019 blog post, “Helping to protect the 2020 U.S. elections.” That post states, “In many countries, including in the U.S., if we have signals that a piece of content is false, we temporarily reduce its distribution pending review by a third-party fact-checker.”

Stone did not elaborate on what signals Facebook used to determine the Post story required fact checking.

Disinformation researchers have long encouraged social media platforms to adopt “circuit breaker” tools that would allow the companies to detect potentially false content and keep it from going viral while they work to review and fact-check it.

Although it’s good that Facebook and Twitter are exercising this tool, the companies are “going to need to be much clearer” about the criteria they use to make such a decision, said Karen Kornbluh, director of GMF Digital, whose organization has proposed similar tools.

Without a clear understanding of those insights and metrics, these ad hoc decisions “are not helping the cause,” tweeted Baybars Örsek, director of the International Fact-Checking Network, which certifies Facebook’s fact-checking partners.

Sen. Josh Hawley (R-Mo.) highlighted that opacity in letters to Facebook Chief Executive Mark Zuckerberg and Twitter CEO Jack Dorsey, demanding that they explain the decision-making processes behind their policies.

Dorsey tweeted later Wednesday that the company’s communication about its ban on the Post article was “not great” and that preventing users from sharing a web page with “zero context” was “unacceptable.”

Trump also weighed in, kicking off his Wednesday evening rally in Des Moines by touting the Post article — “explosive documents published by a very fine newspaper” — and accusing social media companies of trying to help Biden.

“They take negative posts down almost before they go up,” he said in a lengthy riff. “They’re trying to protect him.”

Curbing the runaway virality of posts deemed deceptive is a way to balance speech rights with the responsibility to provide reliable information, Kornbluh said.

“You don’t have to take the extreme step of taking it down, but we’re not going to amplify it if it poses a risk to overall society,” she said of the social media platforms’ response.

The public health consequences of COVID-19 misinformation and the threats to democracy posed by U.S. election misinformation have highlighted the potential harm that can occur when platforms don’t effectively moderate content, said Kat Lo, a researcher who studies online content moderation at the nonprofit Meedan.

In the past, Facebook may have thought that limiting the reach of a story still in dispute was an overreach of speech-policing policies, she said.

Recently, it has seen the importance of fast action.

As wildfires ravaged the West Coast, QAnon followers claimed baselessly that they were started by members of leftist anarchist groups. The false claims were boosted by Russian state media, and the subsequent barrage of calls to emergency hotlines overwhelmed and diverted the attention of local officials working to manage the fires and get residents to safety.

Facebook cited the QAnon-related wildfire claims as one reason for its more comprehensive ban on pages, groups and Instagram accounts associated with the conspiracy theory, which baselessly claims that a Satan-worshiping cabal of Democrats and other elites operates a child sex-trafficking ring.

Times staff writer Brian Contreras contributed to this report.