Facebook chased lie after lie after lie in 2020

The pandemic, US elections and outrage over racial injustice created a perfect storm for social media misinformation.

Queenie Wong
Facebook CEO Mark Zuckerberg testifies remotely as Sen. John Kennedy of Louisiana listens during a Senate Judiciary Committee hearing in November titled “Breaking the News: Censorship, Suppression, and the 2020 Election.”

Getty Images

Facebook CEO Mark Zuckerberg had become a familiar sight on Capitol Hill, but there was something different about his congressional testimony in November. He may have been wearing a dark suit and light blue tie, but Zuckerberg wasn’t sitting in a Senate building surrounded by flashing cameras and large crowds. Instead, he appeared on video in front of a curtain and a potted plant.


The coronavirus pandemic, the US elections and outrage over racial injustice fueled a chaotic year for Facebook, as they did for other social networks, as it struggled with the chronic problem of misinformation. In 2020, Facebook was tested more than ever. No one seemed fully satisfied with the results.

Sen. Richard Blumenthal, a Connecticut Democrat, accused Zuckerberg and Twitter CEO Jack Dorsey of building “terrifying tools of persuasion and manipulation.” The companies, he said, had taken only “baby steps” to combat online lies. “The destructive, incendiary misinformation is still a scourge on both your platforms and on others,” Blumenthal told the executives during the Senate Judiciary Committee hearing.

Blumenthal’s comments summed up a large chunk of Facebook’s 2020. Here are some of the biggest challenges the company faced:


Pandemic fuels misinformation and upends life

Misinformation has long been a problem for social networks. In 2020, though, the stakes were higher than ever. The coronavirus prompted a flood of false information that raised the prospect of harm to Facebook visitors. Bogus claims such as the notion that drinking bleach could be a cure were immediate worries. Misinformation about social distancing prompted concern that people could catch or spread the virus. And racist remarks against Asians, sparked by the pandemic, spread on Facebook and jumped into the real world.

Facebook cracked down harder on coronavirus misinformation than on political lies, but both continued to spread on the social network. Facebook and others removed videos, including some shared by President Donald Trump and other politicians, for perpetuating harmful coronavirus misinformation, including the false claim that children are almost immune to the virus. Facebook also directed users to an online hub filled with authoritative information about COVID-19. Last week, Facebook said it would remove misinformation about COVID-19 vaccines.

Policing harmful content seemed to be a never-ending game of whack-a-mole. The viral “Plandemic” video, which was filled with conspiracy theories about the pandemic, continued to pop up online despite efforts to slow its spread.

The pandemic also changed the way people socialized and worked. Facebook, like other big tech companies, canceled its annual developer conference but turned to virtual events to unveil new products. In September, the company said it had plans to release its first pair of smart glasses in 2021 and announced a new version of its Oculus Quest VR headset. More people, even those who detested Facebook, were spending time on social networks to keep in touch with family and friends. Facebook took on videoconferencing site Zoom with Messenger Rooms, its own group video calling feature. Facebook Portal, the social network’s line of video chat devices, found a bigger audience among older adults.

Online hate becomes bigger focal point

A protester holds up a portrait of George Floyd during a Black Lives Matter demonstration on June 5.

Angela Weiss/Getty Images

Civil rights groups, celebrities, advertisers and politicians continued to pressure Facebook to do a better job of combating hate speech, after a summer marked by social unrest in the wake of the police killing of George Floyd, a 46-year-old Black man.

Conspiracy theories and misinformation about Floyd’s death spread on social networks, including in private online spaces such as Facebook groups. The Anti-Defamation League, the NAACP, Color of Change and other civil rights advocates called on businesses to “hit pause on hate” and refrain from advertising on Facebook during the month of July. Major brands, including outdoor-clothing company The North Face, consumer goods giant Unilever and telecom leader Verizon joined this Stop Hate for Profit campaign.

Facebook launched an independent board for reviewing the social network’s decisions, and users’ complaints, about whether specific content should be pulled down or allowed to remain on the site. And Facebook’s toughest critics banded together to create a new group called the Real Facebook Oversight Board.

Meanwhile, the social network touted improvements to its artificial intelligence technology. Facebook said its automated tools detected 94.7% of the hate speech posts it removed in the third quarter before users reported them. From July to September, the company took action against 22.1 million pieces of content for hate speech.

None of this is enough for civil rights advocates, who complain that hateful content is falling through the cracks on social media. Even Facebook’s own workers thought the social network sometimes made the wrong call. Facebook’s employees staged a rare virtual walkout after the company let a Trump post remain on the site even though it had the potential to incite violence against people protesting police brutality. The company then faced scrutiny for how it handled content created by militia group the Kenosha Guard, following a fatal shooting at a Wisconsin protest.

The US election was a major test

With a surge in mail-in ballots amid the pandemic, Facebook and other social networks battled unfounded claims of voter fraud, including many posted by Trump. This put Facebook in a tricky spot because the social network typically has a hands-off approach to speech from politicians, exempting them from fact-checking. Political speech is already heavily scrutinized and the public should be able to see what politicians say, Facebook has argued.

Facebook labeled posts that contained premature claims of victory before a winner was projected.

Facebook

For the first time, though, the social network decided to add labels below posts by US politicians and directed users to its voting information center. The company labeled Trump posts that falsely claimed he won his reelection bid against Democratic challenger Joe Biden. It also temporarily barred political ads.

Facebook cracked down on content about QAnon as well. The far-right conspiracy theory falsely alleges that there’s a “deep state” plot against Trump and his administration.

The onslaught of misinformation kept coming even after Election Day. Fake accounts posed as news organizations. And conspiracy theories popped up online, including false claims that poll workers passed out Sharpie pens to invalidate Trump votes.

As Facebook and Twitter cracked down on misinformation, the moves enraged Republicans, who have repeatedly accused the companies of anti-conservative bias. The social networks deny these allegations.

Congress eyes more regulation

Amazon CEO Jeff Bezos, Facebook CEO Mark Zuckerberg, Google CEO Sundar Pichai and Apple CEO Tim Cook are sworn in before a House antitrust subcommittee hearing in July.

Getty Images

Anticompetitive practices. Privacy. Misinformation. Tech addiction. Censorship. Lawmakers from both parties expressed grievances about the world’s largest tech companies. They wanted answers from those in charge.

Over the course of 2020, Zuckerberg appeared before lawmakers during three congressional hearings. The first, in July, focused on whether tech giants abused their power. Amazon CEO Jeff Bezos, Apple CEO Tim Cook and Google CEO Sundar Pichai joined Zuckerberg to face members of a House antitrust subcommittee who accused the companies of anticompetitive business practices. Facebook, in particular, was questioned about buying out its competitors, including the popular photo-sharing service Instagram. In December, the social network was hit with antitrust lawsuits from more than 40 states and the Federal Trade Commission that targeted the company’s purchases of Instagram and WhatsApp, a messaging app.

Zuckerberg returned to Capitol Hill in October for a virtual hearing about Section 230, a federal law shielding internet platforms from liability for user-generated content. He told lawmakers that Congress should update the law to make sure it’s working as intended. US lawmakers seized on the hearing to criticize Zuckerberg, Dorsey and Pichai for how they police content. Democrats say social networks don’t do enough to combat hate speech and misinformation. Republicans allege their speech is getting censored, even though the companies deny political-bias allegations.

After Election Day, Zuckerberg made a third appearance, in a hearing before the Senate Judiciary Committee. Republicans called the hearing after Facebook and Twitter slowed the spread of a New York Post article containing unproven allegations about Biden’s son Hunter. In the hearing, Republicans argued that social media companies were publishers and shouldn’t be protected by Section 230. “You’re the ultimate editor,” Lindsey Graham, the South Carolina Republican who chairs the committee, told the executives in remarks about the slowdown of the article. “If that’s not making an editorial decision, I don’t know what would be.”

Zuckerberg denied Facebook was a publisher, noting that the social network doesn’t create content the way a news outlet does. He signaled again to lawmakers that he was open to regulation. “I do think that we have responsibilities, and it may make sense for there to be liability for some of the content that is on the platform,” he said.