In the nearly two centuries that have passed since Tocqueville wrote these words, many of those institutions and habits have deteriorated or disappeared. Most Americans no longer have much experience of “township” democracy. Some no longer have much experience of associations, in the Tocquevillian sense, either. Twenty-five years ago, the political scientist Robert Putnam was already describing the decline of what he called “social capital” in the U.S.: the disappearance of clubs and committees, community and solidarity. As internet platforms allow Americans to experience the world through a lonely, personalized lens, this problem has morphed into something altogether different.
Conversation in this new American public sphere is governed not by established customs and traditions in service of democracy but by rules set by a few for-profit companies in service of their needs and revenues. Instead of the procedural regulations that guide a real-life town meeting, conversation is ruled by algorithms that are designed to capture attention, harvest data, and sell advertising. The voices of the angriest, most emotional, most divisive—and often the most duplicitous—participants are amplified. Reasonable, rational, and nuanced voices are much harder to hear; radicalization spreads quickly. Americans feel powerless because they are.
In this new wilderness, democracy is becoming impossible. If one half of the country can’t hear the other, then Americans can no longer have shared institutions, apolitical courts, a professional civil service, or a bipartisan foreign policy. We can’t compromise. We can’t make collective decisions—we can’t even agree on what we’re deciding. No wonder millions of Americans refuse to accept the results of the most recent presidential election, despite the verdicts of state electoral committees, elected Republican officials, courts, and Congress. We are no longer the America Tocqueville admired, but have become the enfeebled democracy he feared, a place where each person,
withdrawn and apart, is like a stranger to the destiny of all the others: his children and his particular friends form the whole human species for him; as for dwelling with his fellow citizens, he is beside them, but he does not see them; he touches them and does not feel them; he exists only in himself and for himself alone, and if a family still remains for him, one can at least say that he no longer has a native country.
The world’s autocracies have long understood the possibilities afforded by the tools tech companies have created, and have made use of them. China’s leaders have built an internet based on censorship, intimidation, entertainment, and surveillance; Iran bans Western websites; Russian security services have the legal right to obtain personal data from Kremlin-friendly social-media platforms, while Kremlin-friendly troll farms swamp the world with disinformation. Autocrats, both aspiring and actual, manipulate algorithms and use fake accounts to distort, harass, and spread “alternative facts.” The United States has no real answer to these challenges, and no wonder: We don’t have an internet based on our democratic values of openness, accountability, and respect for human rights. An online system controlled by a tiny number of secretive companies in Silicon Valley is not democratic but rather oligopolistic, even oligarchic.
And yet even as America’s national conversation reaches new levels of vitriol, we could be close to a turning point. Even as our polity deteriorates, an internet that promotes democratic values instead of destroying them—that makes conversation better instead of worse—lies within our grasp. Once upon a time, digital idealists were dreamers. In 1996, John Perry Barlow, a lyricist for the Grateful Dead and an early internet utopian, predicted that a new dawn of democracy was about to break: “Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind,” he declared, a place where “the dreams of Jefferson, Washington, Mill, Madison, DeToqueville [sic], and Brandeis … must now be born anew.”
Those ideas sound quaint—as outdated as that other 1990s idea, the inevitability of liberal democracy. Yet they don’t have to. A new generation of internet activists, lawyers, designers, regulators, and philosophers is offering that vision anew, grounded this time in modern technology, legal scholarship, and social science. They want to resurrect the habits and customs that Tocqueville admired and bring them online, not only in America but all across the democratic world.
how social media made the world crazier
In the surreal interregnum that followed the 2020 election, the price of America’s refusal to reform its internet suddenly became very high. Then-President Donald Trump and his supporters pushed out an entirely false narrative of electoral fraud. Those claims were reinforced on extreme-right television channels, then repeated and amplified in cyberspace, creating an alternative reality inhabited by millions of people where Trump had indeed won. QAnon—a conspiracy theory that had burst out of the subterranean internet and flooded onto platforms such as YouTube, Facebook, and Instagram, convincing millions that political elites are a cabal of globalist pedophiles—spilled into the real world and helped inspire the mobs that stormed the Capitol. Twitter made the extraordinary decision to ban the U.S. president for encouraging violence; the amount of election disinformation in circulation immediately dropped.
Could these platforms have done more? As a matter of fact, Facebook keeps careful tabs on the toxicity of American discourse. Long before the election, the company, which conducts frequent, secret tests on its News Feed algorithm, had begun to play with different ways to promote more reliable information. Among other things, it created a new ranking system, designed to demote spurious, hyper-partisan sources and to boost “authoritative news content.” Shortly after Election Day, the ranking system was given greater weight in the platform’s algorithm, resulting in a purportedly “nicer News Feed”—one more grounded in reality. The change was part of a series of “break-glass measures” that the company announced would be put in place in periods of “heightened tension.” Then, a few weeks later, it was undone. After the Capitol insurrection, on January 6, the change was restored, in advance of Joe Biden’s inauguration. A Facebook spokesperson would not explain to us exactly when or why the company made those decisions, how it defines “heightened tension,” or how many of the other “break-glass measures” are still in place. Its published description of the ranking system does not explain how its metrics for reliable news are weighted, and of course there is no outside oversight of the Facebook employees who are making decisions about them. Nor will Facebook reveal anything about the impact of this change. Did conversation on the site become calmer? Did the flow of disinformation cease or slow down as a result? We don’t know.
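What the company describes is, at bottom, a weighted ranking whose dials only it can turn. A minimal sketch of such a mechanism, in which every name and number is invented for illustration (Facebook has published no formula), might look like this:

```python
# A purely illustrative sketch of an adjustable feed ranking. Facebook has
# not published its formula, so every name and number here is an assumption
# about the shape of the mechanism, not the mechanism itself.
from dataclasses import dataclass

@dataclass
class Post:
    engagement: float  # predicted clicks, comments, shares
    authority: float   # an editorial-quality score for the post's source

def rank_score(post: Post, authority_weight: float = 0.1) -> float:
    # Raising authority_weight is the "break-glass" move: reliable sources
    # climb the feed and engagement bait sinks. Lowering it undoes the change.
    return (1 - authority_weight) * post.engagement + authority_weight * post.authority

feed = [Post(engagement=0.9, authority=0.2), Post(engagement=0.5, authority=0.9)]
calmer_feed = sorted(feed, key=lambda p: rank_score(p, authority_weight=0.6), reverse=True)
```

In a system like this, the entire difference between a “nicer News Feed” and the ordinary one is a single parameter that the public cannot see, audit, or contest.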
The very fact that this kind of shift is possible points to a brutal truth: Facebook can make its site “nicer,” not just after an election but all the time. It can do more to encourage civil conversation, discourage disinformation, and reveal its own thinking about these things. But it doesn’t, because Facebook’s interests are not necessarily the same as the interests of the American public, or any democratic public. Although the company does have policies designed to fight disinformation, and although it has been willing to make adjustments to improve discourse, it is a for-profit organization that wants users to stay on Facebook as long as possible and keep coming back. Sometimes that goal may lead the company in a “nicer” direction, but not always, especially if users stay on the site to connect to fellow extremists, or to hear their prejudices reinforced. Tristan Harris, a former design ethicist at Google who now leads the Center for Humane Technology, put it more bluntly. “News feeds on Facebook or Twitter operate on a business model of commodifying the attention of billions of people per day,” he wrote recently. “They have led to narrower and crazier views of the world.”
Not that Facebook bears sole responsibility. Hyper-partisanship and conspiracy thinking predate social media, and message manipulation is as old as politics. But the current design of the internet makes it easier than ever to target vulnerable audiences with propaganda, and gives conspiracy thinking more prominence.
The buttons we press and the statements we make online are turned into data, which are then fed back into algorithms that can be used to profile and target us through advertising. Self-expression no longer necessarily leads to emancipation: The more we speak, click, and swipe online, the less powerful we are. Shoshana Zuboff, a professor emerita at Harvard Business School, coined the term surveillance capitalism to describe this system. The scholars Nick Couldry and Ulises Mejias have called it “data colonialism,” a term that reflects our inability to stop our data from being unwittingly extracted. When we spoke recently with Věra Jourová, who—as the wonderfully titled vice president for values and transparency—is the European Union official directly responsible for thinking about online democracy, she told us that when she first understood that “people in the online sphere are losing their freedoms by providing their private data that are used in an opaque way, by becoming objects and not subjects, it was a strong reminder of my life before 1989 in Czechoslovakia.” As everything in our homes and lives goes online—not just our phones but our fridges and stationary bikes, our family photos and parking fines—every bit of our behavior gets turned into bytes and used by artificial-intelligence systems that we do not control but that can dictate what we see, read, and buy. If Tocqueville were to visit cyberspace, it would be as if he had arrived in pre-1776 America and found a people who were essentially powerless.
We know alternatives are possible, because we used to have them. Before private commercial platforms definitively took over, online public-interest projects briefly flourished. Some of the fruits of that moment live on. In 2002, the Harvard Law professor Lawrence Lessig helped create the Creative Commons license, allowing programmers to make their inventions available to anyone online; Wikipedia—which for all the mockery once directed its way has emerged as a widely used and mostly unbiased source of information—still operates under one. Wikipedia is a glimpse of the internet that might have been: a not-for-profit, collaborative space where disparate people follow a common set of norms as to what constitutes evidence and truth, helped along by public-spirited moderators. Online collaboration was also put to impressive use from 2007 to 2014, when a Brazilian lawyer named Ronaldo Lemos used a simple tool, a WordPress plug-in, to allow Brazilians from all classes and professions to help write an “internet bill of rights.” The document was eventually inscribed in Brazilian law, guaranteeing people freedom of speech and privacy from government intrusion online.
All of that began to change with the mass-market arrival of smartphones and a shift in the tactics of the major platforms. What the Harvard Law professor Jonathan Zittrain calls the “generative” model of the internet—an open system in which anyone could introduce unexpected innovations—gave way to a model that was controlled, top-down, and homogeneous. The experience of using the internet shifted from active to passive; after Facebook introduced its News Feed, for example, users no longer simply searched the site but were provided a constant stream of information, tailored to what the algorithm thought they wanted to read. As a few companies came to control the market, they used their monopoly power to undermine competitors, track users across the internet, collect massive troves of data, and dominate advertising.
It’s a grim story, and yet not entirely unfamiliar. Americans should recognize it from their own history. After all, only a few decades after Tocqueville wrote Democracy in America, the U.S. economy came to be controlled by just a few very large companies. By the end of the 19th century, the country seemed condemned to monopoly capitalism, financial crisis, deep inequality, a loss of trust in institutions, and political violence. After the 25th president, William McKinley, was murdered by an anarchist, his successor, Theodore Roosevelt—who denounced the “unfair money-getting” that created a “small class of enormously wealthy and economically powerful men, whose chief object is to hold and increase their power”—rewrote the rules. He broke up monopolies to make the economy more fair, returning power to small businesses and entrepreneurs. He enacted protections for working people. And he created the national parks, public spaces for all to enjoy.
In this sense, the internet has taken us back to the 1890s: Once again, we have a small class of enormously wealthy and economically powerful people whose obligations are to themselves, and perhaps to their shareholders, but not to the greater good. But Americans didn’t accept this reality in the 1890s, and we don’t need to accept it now. We are a democracy; we can change the rules again. This is not just a matter of taking down content or even of removing a president’s Twitter account—decisions that should be determined by a public process, not a lone company’s discretion. We must alter the design and structure of online spaces so that citizens, businesses, and political actors have better incentives, more choices, and more rights.
theodore roosevelt 2.0
Tom Malinowski knows that algorithms can cause real-world harm. Last year, the U.S. representative from New Jersey introduced a bill, the Protecting Americans From Dangerous Algorithms Act, that would, among other things, hold companies liable if their algorithms promoted content tied to acts of terrorism. The legislation was partly inspired by a 2016 lawsuit claiming that Facebook had provided “material support” to the terrorist group Hamas—its algorithm allegedly helped steer potential recruits Hamas’s way. The courts held that Facebook wasn’t liable for Hamas’s activity, a legal shield that Malinowski hopes to chip away at. Regulators, he told us, need to “get under the hood” of companies, and not become caught up in arguments about this or that website or blog. Others in Congress have demanded investigations of possibly illegal racial biases perpetuated by algorithms that, for example, show Black people and white people different advertisements. These ideas represent the beginning of an understanding of just how different internet regulation will need to be from anything we have tried previously.
This way of thinking has some distinct advantages. Right now companies fight intensely to retain their exemption from “intermediary liability,” guaranteed to them by the now-infamous Section 230 of the Communications Decency Act. This frees them from legal responsibility for nearly all content posted on their platform. Yet striking down Section 230 could mean that the companies will either be sued out of existence or start taking down swaths of content to avoid being sued. Focusing on regulating algorithms, by contrast, would mean that companies wouldn’t be liable for each tiny piece of content, but would have legal responsibility for how their products distribute and amplify material. This is, after all, what these companies actually do: organize, target, and magnify other people’s content and data. Shouldn’t they take responsibility for that?
Other countries are already focusing their regulatory efforts on engineering and design. France has discussed appointing an algorithm auditor, who would oversee the effects of platform engineering on the French public. The U.K. has proposed that companies assess the impact of algorithms on illegal content distribution and illegal activity on their platforms. Europe is heading in that direction too. The EU doesn’t want to create a 1984-style “Ministry of Truth,” Věra Jourová has said, but it cannot ignore the existence of “organized structures aimed at sowing mistrust, undermining democratic stability.” Action must be taken against “inauthentic use” and “automated exploitation” if they harm “civic discourse,” according to the EU’s Digital Services Act, which seeks to update the legal framework for policing platforms. The regulatory focus in Europe is on monitoring scale and distribution, not content moderation. One person writing a tweet would still qualify for free-speech protections—but a million bot accounts pretending to be real people and distorting debate in the public square would not. Facebook and other platforms already track and dismantle inauthentic disinformation and amplification campaigns—they all have invested heavily in staff and software to carry out this job—but there is hardly any way to audit their success. European governments are seeking ways that they and other civic-minded actors can at least monitor what the platforms are doing.
Still, some of the conceptual challenges here are large. What qualifies as “legal but harmful” content, as the U.K. government calls it? Who will draw the line between disinformation and civic discourse? Some think that agreeing on these definitions in America will be impossible. It’s a “chimera” to imagine otherwise, says Francis Fukuyama, one of America’s leading philosophers of democracy; “you cannot prevent people from believing really crazy stuff, as we’ve seen in the past month,” he told us in December. What Fukuyama and a team of thinkers at Stanford have proposed instead is a means of introducing competition into the system through “middleware,” software that allows people to choose an algorithm that, say, prioritizes content from news sites with high editorial standards. Conspiracy theories and hate campaigns would still exist on the internet, but they would not end up dominating the digital public square the way they do now.
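The middleware proposal is easiest to picture as a thin software layer that re-ranks the feed a platform hands it, chosen by the user rather than the platform. The sketch below is one hypothetical version, with made-up sources and scores; nothing in it describes any real platform’s API:

```python
# A minimal sketch of the "middleware" idea: a pluggable, user-chosen ranking
# layer between a platform's raw feed and the reader. All names here are
# hypothetical illustrations.
from dataclasses import dataclass
from typing import Callable

@dataclass
class FeedItem:
    text: str
    source: str        # publishing domain
    engagement: float  # the platform's native engagement score

# Editorial-quality ratings one middleware provider might maintain; a rival
# provider could ship a different table, and the user picks between them.
SOURCE_QUALITY = {"example-newspaper.com": 0.9, "example-rumor-mill.net": 0.1}

def editorial_middleware(feed: list[FeedItem]) -> list[FeedItem]:
    # Rank by source quality first and engagement second, instead of by
    # engagement alone; unknown sources get a neutral 0.5.
    return sorted(feed,
                  key=lambda item: (SOURCE_QUALITY.get(item.source, 0.5), item.engagement),
                  reverse=True)

def render(feed: list[FeedItem], middleware: Callable[[list[FeedItem]], list[FeedItem]]) -> None:
    for item in middleware(feed):
        print(f"[{item.source}] {item.text}")
```

The design choice is the point: the platform still hosts the content, but the ordering logic becomes a competitive market rather than a single company’s secret.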
A deeper problem, though, is the ingrained attitudes we bring to this debate. Most of us treat algorithms as if they constitute a recognizable evil that can be defined and controlled. What if they’re not? J. Nathan Matias, a scholar who has migrated from the humanities to the study of online behavior, argues that algorithms are totally unlike any other product devised by human beings. “If you buy a car from Pennsylvania and drive it to Connecticut,” he told us, “you know that it will work the same way in both places. And when someone else takes the driver’s seat, the engine is going to do what it always did.” Algorithms, by contrast, change as human behavior changes. They resemble not the cars or coal mines we have regulated in the past, but something more like the bacteria in our intestines, living organisms that interact with us. In one experiment, for example, Matias observed that when users on Reddit worked together to promote news from reliable sources, the Reddit algorithm itself began to prioritize higher-quality content. That observation could point us in a better direction for internet governance.
Matias has his own lab, the Citizens and Technology Lab at Cornell, dedicated to making digital technologies that serve the public and not just private companies. He reckons labs like his could be part of internet governance in the future, supporting a new generation of citizen-scientists who could work with the companies to understand how their algorithms function, find ways of holding them accountable if they refuse to cooperate, and experiment with fresh approaches to governing them. This idea, he argues, is nothing new: As far back as the 19th century, independent scientists and consumer-rights advocates have tested such factors as the strength of light bulbs and the effects of pharmaceuticals, even inventing elaborate machines to test the durability of socks. In response, companies have improved their products accordingly. Maybe it’s time to let independent researchers test the impact of algorithms, share the results, and—with the public’s participation—decide which ones are most useful.
This project should engage anyone who cares about the health of our democracy. Matias sees the behavior of the tech platforms as essentially authoritarian; in some ways, they sound far more like the Chinese state than we usually assume. Both American tech platforms and Chinese bureaucrats conduct social-engineering experiments in the dark; both have aims that differ from those of the public. Inspired by the philosopher Karl Popper, the doyen of “open society” and a critic of opaque social engineering, Matias thinks we have to not just take control of our own data, but also help oversee the design of algorithmic experiments, with “individual participation and consent at all decision levels possible.” For example, victims of prejudice should be able to help create experiments that explore how algorithms can reduce racism. Rohingya in Myanmar should be able to insist on social-media design that doesn’t facilitate their oppression. Russians, and for that matter non-Russians, should be able to limit the amount of government propaganda they see.
This kind of dynamic regulation would solve one of the most embarrassing problems for would-be regulators: At the moment, they lag years behind the science. The EU’s first attempt to regulate Google Shopping using antitrust law proved a giant waste of time; by the time regulators handed down their judgment, the technology in question had become irrelevant. Other attempts are too focused on simply breaking up the platforms, as if that alone will solve the problem. Dozens of U.S. states and the Justice Department are already suing Google for cornering the markets in search and digital advertising, which is not surprising, because the breakup of the oil and railroad companies is the Progressive regulation everyone learned about in school. Yet the parallels to the early 20th century are not exact. Historically, antitrust regulation sought to break up price-setting cartels and to lower costs for consumers. But in this case the products are free—consumers don’t pay to use Google or Facebook. And while breaking up the big companies could help diversify the online economy, it won’t automatically be good for democracy. Why would 20 data-sucking disinformation machines be better than one? “If Facebook is forced to divest WhatsApp and Instagram,” Fukuyama told us, “that’s not going to solve the core issue—the ability of these large platforms to either amplify or suppress certain kinds of political information in a way that potentially could sway a democratic election.”
Perhaps the most apt historical model for algorithmic regulation is not trust-busting, but environmental protection. To improve the ecology around a river, it isn’t enough to simply regulate companies’ pollution. Nor will it help to just break up the polluting companies. You need to think about how the river is used by citizens—what sort of residential buildings are constructed along the banks, what is transported up and down the river—and the fish that swim in the water. Fishermen, yachtsmen, ecologists, property developers, and area residents all need a say. Apply that metaphor to the online world: Politicians, citizen-scientists, activists, and ordinary people will all have to work together to co-govern a technology whose impact is dependent on everyone’s behavior, and that will be as integral to our lives and our economies as rivers once were to the emergence of early civilizations.
reconstructing the public sphere
The internet is not the first promising technology to have quickly turned dystopian. In the early 20th century, radio was greeted with as much enthusiasm as the internet was in the early 21st. Radio will “fuse together all mankind,” wrote Velimir Khlebnikov, a Russian futurist poet, in the 1920s. Radio would connect people, end war, promote peace!
Almost immediately, a generation of authoritarians learned how to use radio for hate propaganda and social control. In the Soviet Union, radio speakers in apartments and on street corners blared Communist agitprop. The Nazis introduced the Volksempfänger, a cheap wireless radio, to broadcast Hitler’s speeches; in the 1930s, Germany had more radios per capita than anywhere else in the world. In America, the new information sphere was taken over not by the state but by private media companies chasing ratings—and one of the best ways to get ratings was to promote hatred. Every week, more than 30 million listeners would tune in to the pro-Hitler, anti-Semitic radio broadcasts of Father Charles Coughlin, the Detroit priest who eventually turned against American democracy itself.
In Britain, John Reith, the visionary son of a Scottish clergyman, began to look for an alternative: radio that was controlled neither by the state, as it was in dictatorships, nor by polarizing, profit-seeking companies. Reith’s idea was public radio, funded by taxpayers but independent of the government. It would not only “inform, educate and entertain”; it would facilitate democracy by bringing society together: “The voice of the leaders of thought or action coming to the fireside; the news of the world at the ear of the rustic … the facts of great issues, hitherto distorted by partisan interpretation, now put directly and clearly before them; a return of the City-State of old.” This vision of a radio broadcaster that could create a cohesive yet pluralistic national conversation eventually became the BBC, where Reith was the first director-general.
Reith’s legacy lives on in a new generation of thinkers, among them Ethan Zuckerman, the director of the Institute for Digital Public Infrastructure at the University of Massachusetts at Amherst and a tech wizard who wrote the code that underlies the pop-up ad, one of the biggest milestones in the growth of online advertising. Partly as penance, Zuckerman now dedicates his time to thinking about nonprofit online spaces that could compete with the online commercial world he helped create. Social media, he told us, is broken: “I helped break it. Now I am interested in building new systems from scratch. And part of what we should be building are networks that have explicit social promise.”
Invoking the example of Reith’s BBC, Zuckerman imagines social-media sites designed deliberately in the public interest that could promote civil discourse, not just absorb your attention and data, and that would help reduce the angry tone of American debate. As proof that polarization really can be reduced, Zuckerman, borrowing from a colleague, cited the example of Quebec, the Canadian province that had been deeply polarized between French speakers who wanted independence and English speakers who wanted to remain part of Canada. Nowadays, Quebec is pleasingly dull. “It took an enormous amount of work to get politics to be that boring,” Zuckerman said. “It involved putting real issues on the table that forced people to work together and compromise.” He reckons that if at least a part of the internet becomes a place where partisan groups argue about specific problems, not a place where people show off and parade their identities, it too can become usefully boring. Instead of making people angry, participation in online forums can give them the same civic thrill that town halls or social clubs once did. “Elks Club meetings were what gave us experience in democracy,” he said. “We learned how to run an organization. We learned how to handle disagreement. We learned how to be civilized people who don’t storm out of an argument.”
Versions of this idea already exist. A Vermont-based site, Front Porch Forum, is used by roughly a quarter of the state’s residents for all sorts of community activity, from natural-disaster response to job-hunting, as well as civic discussion. Instead of encouraging users to interact as much and as fast as possible, Front Porch slows the conversation down: Your posts come online 24 hours after you’ve written them. Sometimes, people reach out to the moderators to retract something said in anger. Everyone on the forum is real, and they have to sign up using real Vermont addresses. When you go on the site, you interact with your actual neighbors, not online avatars.
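The mechanics of that slowdown are simple enough to sketch. Assuming the design works roughly as described, a cooling-off queue might look something like this; all names and structures here are illustrative guesses, not Front Porch Forum’s actual code:

```python
# A small sketch of a slow-posting design: submissions go public only after
# a fixed delay, during which the author can retract them.
import time
from dataclasses import dataclass, field

DELAY_SECONDS = 24 * 60 * 60  # posts appear 24 hours after submission

@dataclass
class Submission:
    author: str            # a verified account tied to a real local address
    text: str
    submitted_at: float = field(default_factory=time.time)
    retracted: bool = False

def publishable(queue: list[Submission], now: float) -> list[Submission]:
    # Only posts that are past the cooling-off period and not retracted
    # ever reach the neighbors.
    return [s for s in queue if not s.retracted and now - s.submitted_at >= DELAY_SECONDS]
```

The delay is the opposite of an engagement-maximizing design decision, and that is precisely its civic value.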
Of course, moderated public-service social media can’t be created for free. It needs funding, just like the BBC. Zuckerman suggests raising the money through a tax on online advertising that collects lots of user data—perhaps a 2 percent levy to start: “That money is going to go into a fund that is analogous to the Corporation for Public Broadcasting. And it’s going to be available for people who want to try different ideas of what online communities, online spaces could look like.” The idea is to then let a thousand flowers bloom—let people apply to use the money to create different types of communities—and see which ones flourish.
Larger-scale versions of community forums already exist too, most notably in Taiwan, where they have been pioneered by Audrey Tang, a child prodigy who became a high-school dropout who became a Silicon Valley entrepreneur who became a political activist who became the digital minister of Taiwan, the role she occupies today. Tang prefers to say that she works with the government, not for the government; her co-workers are “given a space to form a rough consensus.” She publishes transcripts of all of her conversations with almost everybody, including us, because “the state needs to be transparent to its citizens.”
Among many other experimental projects, Tang has sponsored the use of software called Polis, invented in Seattle. This is a platform that lets people make tweet-like, 140-character statements, and lets others vote on them. There is no “reply” function, and thus no trolling or personal attacks. As statements are made, the system identifies those that generate the most agreement among different groups. Instead of favoring outrageous or shocking views, the Polis algorithm highlights consensus.
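One way to picture the algorithm: score each statement by the agreement of the group that likes it least. The sketch below is our own simplification, not Polis’s published method; the real system also discovers the opinion groups from the votes themselves rather than taking them as given:

```python
# A rough sketch of the consensus-surfacing idea behind Polis. Here the
# opinion groups are supplied in advance to keep the illustration short.
from statistics import mean

def consensus_score(votes_by_group: dict[str, list[int]]) -> float:
    # Votes are +1 (agree) or -1 (disagree). A statement scores highly only
    # if every group tends to agree with it, so statements loved by one camp
    # and hated by another sink rather than rise.
    return min(mean(votes) for votes in votes_by_group.values())

statements = {
    "Ride-hail drivers should carry commercial insurance":
        {"camp_a": [1, 1, 1], "camp_b": [1, 1, -1]},
    "Ban ride-hailing outright":
        {"camp_a": [1, 1, 1], "camp_b": [-1, -1, -1]},
}
for text, votes in sorted(statements.items(),
                          key=lambda kv: consensus_score(kv[1]), reverse=True):
    print(f"{consensus_score(votes):+.2f}  {text}")
```

Run on these toy votes, the broadly acceptable statement rises to the top and the divisive one falls to the bottom: an inversion of the engagement-driven feed.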
Polis is often used to produce recommendations for government action. For example, when the Taiwanese government designed a Polis debate around the subject of Uber, participants included people from the company itself, as well as from the Taiwanese taxi associations, which were angered by some of Uber’s behavior—and yet a consensus was reached. Uber agreed to train its drivers and pay transport taxes; Taiwan Taxi, one of the country’s largest fleets, promised to offer better services. It’s possible to imagine a world in which local governments hold such online consultations regularly, thereby increasing participation in politics and giving people some influence over their society and environment.
Of course, this system works only if real people—not bots—join these debates. Anonymity does have its place online, just as in real life: It allows dissidents in repressive countries a way of speaking. Anonymity also has a long and distinguished history in American politics, going back to The Federalist Papers, which were signed with a collective pseudonym, “Publius.” But Publius never conceived of a world in which anonymous accounts promoting the hashtag #stopthesteal could convince millions of Americans that Donald Trump won the 2020 election.
One possible solution to the anonymity problem comes from Ronaldo Lemos, the Brazilian lawyer who crowdsourced his country’s internet bill of rights. Lemos advocates for a system known as “self-sovereign identity,” which would knit together the symbols of trust you build up through different activities—your diploma, your driver’s license, your work record—into a connective tissue of trusted sources proving that you are real. A self-sovereign identity would still allow you to use pseudonyms online, but it would assure everyone else that you are an actual human, making it possible for platforms to screen out bots. The relative prominence of various ideas in our public conversation would more accurately reflect what real people really think, and not what an army of bots and trolls is promulgating. Solving the online-identity problem is also, of course, one of the keys to fighting organized disinformation campaigns.
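The idea is easier to grasp in code. Here is a deliberately naive sketch, assuming a simple threshold of independently issued credentials; real self-sovereign-identity systems rest on digital signatures and selective disclosure, not this toy check, and every name below is hypothetical:

```python
# A loose sketch of "self-sovereign identity": independent credentials vouch
# that a pseudonym belongs to a real person without revealing which one.
from dataclasses import dataclass

@dataclass
class Credential:
    issuer: str           # e.g., a university or a licensing agency
    claim: str            # e.g., "holds a degree"
    signature_valid: bool # in practice: cryptographic verification

def is_probably_human(credentials: list[Credential], threshold: int = 2) -> bool:
    # Enough independently issued, verifiable credentials and the platform
    # treats the pseudonym as human, without ever learning a legal name.
    independent_issuers = {c.issuer for c in credentials if c.signature_valid}
    return len(independent_issuers) >= threshold

creds = [Credential("State DMV", "licensed driver", True),
         Credential("City College", "holds a degree", True)]
print(is_probably_human(creds))  # True: two independent issuers vouch for this account
```

The crucial property is that the vouching and the naming are separable: a platform can learn that “publius_2021” is human without learning who Publius is.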
But once real humans have provable identities, once governments or online activists have created the groups and set the rules, how many people will really want to participate in worthy online civic discussions? Even in Taiwan, where Tang encourages what she calls the “social sector” to get involved in governing, it’s not easy. Ttcat, a Taiwanese “hacktivist” whose work involves countering disinformation campaigns, and who has collaborated extensively with Tang, told us he worries that the number of people using Polis remains too low. Most people still have their political discussions on Facebook. Tiago C. Peixoto, a Mozambique-based political scientist who promotes online participatory democracy around the world, thinks that the issues will have to be higher-stakes if people are to join the forums. Peixoto has developed projects that could, for example, allow citizens to help put together a city budget. But those would require politicians to cede real power, which is not something many politicians like to do. Even beyond that, some skepticism about the attraction of the forums is surely warranted: Aren’t we all addicted to the rage and culture wars available on social media? Don’t we use social media to perform, or to virtue signal, or to express identity—and don’t we like it that way?
Maybe. Or maybe we think that way only because we lack the imagination to think differently. That’s the conclusion drawn by Eli Pariser, a co-founder of Avaaz and Upworthy, two websites designed to foster online political engagement, and Talia Stroud, the director of the Center for Media Engagement at the University of Texas at Austin. Pariser and Stroud have spent the past few years running polls and focus groups across 20 countries, trying to find out what people actually want from their internet, and how that matches up to what they have. Among other things, they found that Twitter super-users—people who use Twitter more than other social media—rate the platform highly for making them “feel connected,” but give it low marks for “encouraging the humanization of others,” ensuring people’s safety, and producing reliable information. YouTube super-users care about “inviting everyone to participate,” and they like that the platform does that, but they don’t think it does a good job of providing reliable information. Facebook super-users have the same fear, and aren’t convinced that the platform keeps their personal information secure, although Facebook contends that it has numerous tools in place to protect its users’ information, and says that it does not share this information without users’ permission. Pariser and Stroud’s research suggests that the current menu of options does not completely satisfy us. People are eager for alternatives—and they want to help invent them.
In early January, while America was convulsed by a lurid crisis perpetrated by people who had absorbed paranoid conspiracy theories online, Pariser and Stroud hosted a virtual festival they described as a “dispatch from the future of digital public space.” Designers who build ad-free social media that don’t extract your data chatted with engineers who design apps that filter out harassment on Twitter. Even as men in paramilitary costumes posted pictures of themselves smashing up the Capitol, Pariser and Stroud were hosting discussions about how to build algorithms that favor online connection, empathy, and understanding, and how to design online communities that favor evidence, calm, and respect over disinformation, outrage, and vitriol. One of the festival speakers was Deb Roy, a former chief media scientist at Twitter, who is now a professor at MIT. In January, he launched a new center aimed at creating technology that fosters “constructive communication”—such as algorithms designed to overcome divides.
None of these initiatives will ever be “the new Facebook”—but that’s exactly the point. They are intended to solve specific problems, not to create another monolithic mega-platform. This is the heart of Pariser and Stroud’s vision, the one shared by Zuckerman and Tang. Just as John Reith once looked at radio as a way to re-create the “City-State of old,” Pariser and Stroud argue that we should think of cyberspace as an urban environment. Nobody wants to live in a city where everything is owned by a few giant corporations, consisting of nothing but malls and billboards—yet that is essentially what the internet has become. To flourish, democratic cities need parks and libraries, department stores and street markets, schools and police stations, sidewalks and art galleries. As the great urban thinker Jane Jacobs wrote, the best urban design helps people interact with one another, and the best architecture facilitates the best conversation. The same is true of the internet.
If we were to visit this online democratic city of the future, what might it be like? It would not be anarchy, or a wilderness. Rather, we might find, as Tocqueville wrote in describing the America of the 1830s, not only “commercial and industrial associations in which all take part,” but also “a thousand other kinds: religious, moral, grave, futile, very general and very particular, immense and very small.” We might discover thousands of participatory “township institutions” of the sort pioneered by Tang, inhabited by real people using the secure identities proposed by Lemos—all of them sharing ideas and opinions free of digital manipulation or distortion, thanks to the citizen-scientists Matias has taught to work with the algorithms. In this city, government would cede power to citizens who use digital tools to get involved in budgets and building projects, schools and the environment.
Let your imagination loose: What would it really mean to have human rights online? Instead of giving private companies the ultimate decision about whose accounts—whether yours or the president’s—should be deleted, it might mean online citizens could have recourse to a court that would examine whether an account holder had actually violated the platform’s terms of service. It would also mean being in charge of your own data. You could give medics all the information they need to help fight diseases, for example, but would also be guaranteed that these data couldn’t be repurposed. If you were to see advertising, political or otherwise, you would have the right to know not only who was behind it, but how your data were used to target you specifically.
There are other possible benefits too. Rebuilding a civically healthier internet would give us common cause with our old alliances, and help build new ones. Our relationships with Europe and with the democracies of Asia, which so often feel obsolete, would have a new center and focus: Together we could create this technology, and together we could offer it to the world as an empowering alternative to China’s closed internet, and to Russia’s distorted disinformation machine. We would have something to offer beleaguered democrats, from Moscow to Minsk to Hong Kong: the hope of a more democratic public space.
Happily, this future democratic city is not some far-off utopia. Its features derive not from an abstract grand theory, but from harsh experience. We often forget that the U.S. Constitution was the product of a decade of failure. By 1789, its authors knew exactly how badly the Articles of Confederation had failed, and they understood what needed to be fixed. Our new internet would also embrace all of the lessons we have so bitterly learned, not only in the past 20 years but in the almost two centuries since Tocqueville wrote his famous book. We now know that cyberspace did not, in the end, escape the legacy of John Perry Barlow’s “weary giants of flesh and steel.” It just recapitulated the pathologies of the past: financial bubbles, exploitative commercialization, vicious polarization, attacks from dictatorships, crime.
But these are problems democracies have solved before. The solutions are in our history, in our DNA, in our own memories of how we have fixed broken systems in other eras. The internet was the future once, and it can be again, if we remember Reith and Roosevelt, Popper and Jacobs—if we apply the best of the past to the present.
This article appears in the April 2021 print edition with the headline “The Internet Doesn’t Have to Be Awful.”