Facebook Will Not Fix Itself
Five years ago, I embarked on a mission to help Facebook change its culture, business model and algorithms. I had been involved with the company in its early days as an adviser and investor. Since then, I and countless others have pressed Facebook founder Mark Zuckerberg and chief operating officer Sheryl Sandberg to reform Facebook. I communicated with them privately. I spoke out in public. I wrote for TIME in 2019, urging Facebook and Silicon Valley to adopt human-driven technology over addictive, dangerous algorithms. Nothing happened.
The last three weeks have changed the game. The courageous Facebook whistle-blower Frances Haugen has transformed the conversation about technology reform, accomplishing more than I and others had achieved in years of effort. The documents she provided to the Wall Street Journal’s “Facebook Files” series confirmed that harms from Facebook’s business model are not an accident, but rather the inevitable result of a dangerous design. In many cases, the documents show, Facebook chose to double down despite awareness of the harm it was causing and the pressure for change. It is clear that policymakers and the media have consistently underestimated the threat posed by Facebook, buying into the company’s rosy claims about the power of connecting the world and giving it the benefit of the doubt where none was deserved.
Facebook will not fix itself. All incentives direct the company to stay on its current course. And recent history would support the cynic’s view that our democracy and government are too broken to rein in any large company. But we are now at a point where further inaction by Congress will likely result in ongoing catastrophes from which we may not recover for a generation or more.
Though it pays to keep one’s hopes and expectations in check when it comes to Congress, the present moment feels different from past technology scandals, especially those involving Facebook. Senators from both parties at this week’s hearing expressed support for Ms. Haugen’s testimony and for legislation to address the problems she described. In reality, few in Congress have a clear understanding of the regulatory path forward, but they know they want to find it.
Ms. Haugen expressed empathy for Zuckerberg, but did not hesitate to note the moral failing of a CEO who prioritizes profits over the public good. I agree, but would add that this problem goes far beyond Zuckerberg and Facebook.
CEOs like Zuckerberg claim they have a mandate to maximize shareholder value. As with the phrase “I’m just following orders,” a single-minded focus on profits and shareholder value can be deployed to justify all manner of sins. Zuckerberg differs from other CEOs in the scope of harm his company can inflict and the absolute control he enjoys, but given the culture of business in America, I suspect far too many CEOs envy his power and in his shoes would pursue the same strategy, though they might be more astute about public relations.
Because many other companies are imitating Facebook in the hopes of profit, fixing Facebook will not be enough.
A huge portion of the U.S. economy operates according to the dictates of a system that Harvard University’s Shoshana Zuboff calls “surveillance capitalism.” Analogous to oil companies and other extraction businesses, surveillance capitalists assert property rights to every piece of data they touch, including data derived from public spaces and from the experience and property of others. The economics of surveillance capitalism come from converting human experience into data, building models for every human from that data, and using those models to predict and influence behavior. Advertisers pay for those predictions. Surveillance capitalists also use the models to inform recommendation engines that manipulate choices and sometimes behavior. It is immensely profitable because humans make decisions in predictable ways, which facilitates manipulation.
Everything we do on a smartphone, every financial transaction we make, every trip, every prescription and medical test, every action we take on the Internet or in apps is tracked, and most of it is available for purchase in a data marketplace. Companies throughout the economy use machine learning to look for patterns in this data and artificial intelligence to apply those patterns to improve their business. No one in this process gives a thought to the impact on the human beings affected by their influence and manipulation. Their only goal is to maximize shareholder value.
Personal autonomy and democracy are under assault from surveillance capitalism. And yet today’s tech industry is largely unregulated, having emerged in the midst of an era of deregulation and defunding of enforcement agencies. This has allowed tech giants to behave as unelected governments. Their communications systems have become central to our way of life, as the impact of this week’s Facebook, Instagram and WhatsApp outage underscores, but they have their thumb on the scale, amplifying content that triggers fear and outrage because doing so maximizes profits. Fear and outrage fuel anger, which undermines democracy.
Every time Facebook faces pressure for change, it does something that sounds helpful but is not. The most outrageous example is the algorithm change in 2018 that reduced the amplification of journalism in favor of posts from friends and family. Facebook claimed the change would increase meaningful interaction, but what it really did was increase engagement by increasing the relative weight of hate speech, disinformation and conspiracy theories. At this point, any defense of continued self-regulation seems naive.
Facebook likes to shift responsibility to others, including users, but users cannot be faulted for the harms caused by the platform, nor should they be expected to fix this. Countless families, small businesses, patients and artists are dependent on Facebook. The good parts of Facebook do not require surveillance capitalism, but without it, Facebook the company would be far less profitable.
That leaves regulation. Congress will be tempted to target legislative reform at Facebook, but that would be a mistake. We need legislation to address three related problems across the entire technology world: safety, privacy and competition.
The sad truth is that the unregulated tech industry produces products that are unsafe. Congress has faced the challenge of dangerous products in the past. When the food and medicine industries were unsafe, Congress created the Food and Drug Administration. When petrochemical companies dumped toxic waste indiscriminately, Congress approved a series of environmental laws. Just like tech companies today, the affected industries claimed they could not operate under regulation, but that turned out to be wrong. Now we need something like an FDA for technology products, designed to prevent harmful technologies from coming to market. For qualifying products, it would set safety standards, require annual safety audits and certification as a condition for every product, and impose huge financial penalties for any harms that result. There should also be amendments to Section 230 of the Communications Decency Act to create better incentives for Internet platforms.
Congress also needs to protect people’s privacy from relentless surveillance. My preference would be for Congress to ban surveillance capitalism just as it banned child labor in 1938. (The many industries that employed child labor complained then that they could not survive without it.) At a minimum, Congress must ban third-party use of sensitive data, such as that related to health, location, financial transactions, web browsing and app data.
The third area for legislation is competition, where Congress needs to update antitrust laws for the 21st century. The six-hour outage of Facebook, Instagram and WhatsApp illustrated one downside of monopoly for many users: absolute dependence on a single service.
Until very recently, Zuckerberg could count on the loyalty of employees. Few pushed back after the company was accused of playing a harmful role in Brexit, the 2016 U.S. election, the ethnic cleansing in Myanmar, or the terrorism in Christchurch and Pittsburgh. It was not until 2020—when Facebook was used by white supremacists to spread hate after the murder of George Floyd, by allies of the Trump Administration to downplay the significance of COVID-19, by Trump loyalists to undermine confidence in the election, and then in 2021 by the antivaccination movement to undermine the nation’s pandemic response—that employees openly challenged management in significant ways. Despite this, Zuckerberg continues to speak as though Facebook has done nothing wrong. In response to the whistle-blower’s Senate testimony, he said:
“I’m sure many of you have found the recent coverage hard to read because it just doesn’t reflect the company we know. We care deeply about issues like safety, well-being and mental health. It’s difficult to see coverage that misrepresents our work and our motives. At the most basic level, I think most of us just don’t recognize the false picture of the company that is being painted.”
All of this may be true in Mark Zuckerberg’s mind, but the design of Facebook’s business model suggests that growth and profits are the only factors driving “the company we know.”
Based on the evidence of the past five years, one might say that Internet platforms have launched an attack against democracy and self-determination. It is a battle they will win unless voters and policymakers join forces to reassert their power. We have been losing the battle since 2016, but I would like to believe that this week was a turning point.
We have the power. The question is whether we have the courage to use it.
—With reporting by Nik Popli/Washington