Harmonizing Democracy and Tech (Andrew B. Hall)

Andrew Hall is a senior fellow at the Hoover Institution and the Davies Family Professor of Political Economy at the Stanford Graduate School of Business. He is an advisor to Meta Platforms Inc. and the a16z crypto research group. His research team uses econometrics, statistics, and data science to study technology, politics, and decentralized governance. Here he talks about the future of American democracy in an increasingly online world.

Chris Herhalt: You’ve been focused in recent months on preventing slant in AI and on how we use other digital tools to foster public discourse. How do you think we’re doing on the discourse front? When I go on X, it’s just troll and bot city; on Bluesky, it’s groupthink; and on Threads and Instagram, political content has been restricted. What’s wrong with this picture?

Andrew B. Hall: It’s easy to point to the challenges. I think we are in a transition period. In, say, the mid- to late twentieth century, we had a constantly evolving but somewhat well-understood information environment. It was already disrupted by cable news and talk radio—there have been so many disruptions, it’s hard to keep track. But what is unique about this moment, relative to these past episodes, is this somewhat sudden fragmentation of the environment.

For a while, most people were on a couple of dominant platforms when it came to talking about politics and consuming news. I would say the 2016–22 period was defined by the challenges of content moderation at scale. Because many people were on one of two platforms—Twitter and Facebook—when it came to discussing politics in America, we focused on the big content-moderation decisions being made by those two companies and, to a lesser extent, by YouTube and elsewhere. But what has happened in the past couple of years, starting with Elon Musk’s acquisition of Twitter, has been a striking fragmentation of that world.

We used to have Democrats and Republicans and everyone else all in the same place, fighting with one another and trying to create new narratives and seeing what the other side was saying. Now, more than in the past, they are talking to one another on separate services where they don’t see each other as much. And I think you can spin that as better or worse.

On the potentially better side, it lowers the stakes of some of these content-moderation problems. Now, if you don’t like the moderation on one of these platforms, you can go consume politics and discuss it with other people on a different service. I think there was a point when you felt that if you weren’t on Twitter, you weren’t part of this conversation. In terms of a marketplace for ideas, there is less of a sense that a small number of people can unduly influence the information environment through their moderation decisions, so it seems like potentially a positive to have more fragmentation.

Further, at least some people claim that having everyone disagree with one another, and constantly being confronted with that fact, can actually backfire because it makes people angrier; it makes them entrench more. And so, maybe not having everyone on Twitter yelling at each other is actually a plus.

On the other hand, there are downsides. Echo chambers are definitely real. Maybe if you’re only talking to like-minded people on X or on Bluesky or wherever, it’s strengthening your polarization, leaving you feeling even more entrenched in your views, and not forcing you to confront counterarguments to your beliefs.

So, it’s hard to know whether it’s worse or better. In terms of this transition period, the big question that I don’t think anyone really understands is, what is the economic model for the provision of news in this environment? One thing that is particularly interesting—and again, I don’t know if this is a bad or a good thing—is it’s pretty clear with the rise of AI and the ability to generate images and videos so easily that we are not going to have a successful information ecosystem that relies purely on any random account posting a video or photo and saying this is a thing that really happened. It’s getting too easy to create completely fake stuff.

I’m one of those people who think that legacy media, while they provided a lot of valuable expertise, haven’t been doing a very good job for a variety of reasons, and I’m excited that citizen journalism online can help raise different perspectives and views. But a huge impediment to that is if I have no way, as a “citizen journalist,” to prove to you that the thing I’m showing you occurred in the real world, it’s going to limit the impact or the credibility of that material.

Here’s where I think this is all heading: if we’re going to move to a world with more decentralized, bottom-up provision of information in news and analysis, like in Community Notes on X and now on Meta, that needs to be accompanied by some new way of evaluating the reputation and the trustworthiness of those sources. A major focus right now is trying to figure out how to do that.

Chris Herhalt: Who do you think is best equipped to build the thing or enact the law that detects such material and flags it to everybody? Is it the social media firms, the AI companies, the government?

Andrew B. Hall: I think it’s going to require all of those people. I want to be clear that I’m only talking about detecting AI-generated content here, not asking companies or platforms to make judgments about what content makes “true” or “false” claims—which we know is terribly fraught and problematic. The AI companies have thought a lot about this question of AI-generated content. There have been ideas to watermark the content, but that approach has some limitations we need to be aware of. First, it’s generally pretty straightforward for people who want to remove the watermark to do so in various ways. The AI tools cannot directly prevent the removal of the watermarks through any method I’m aware of, and they kind of lose control of where the content then gets distributed. Second, people may not know how to interpret that watermark when they see it. It might say, “This video was AI generated,” but there’s some research indicating that people may have trouble with that. Does it mean that I should always assume this video is telling me things that are not true? Well, no, a lot of AI-edited content is completely factual. When I use a red-eye-reduction tool, that’s now an AI-generated photo, but it’s not fake.
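To make the removal point concrete: one common labeling approach stores the provenance signal in the file’s metadata (the route taken by C2PA-style content credentials), and simply re-encoding the pixels drops it. The sketch below is illustrative only, assuming the Pillow imaging library and hypothetical filenames; in-pixel watermarks are more robust to this particular trick, though not immune.

```python
# Illustrative only: why a metadata-based "AI generated" label is easy to lose.
# Assumes the Pillow library (pip install pillow); filenames are hypothetical.
from PIL import Image

# Open an image whose provenance label lives in its metadata (EXIF/XMP fields
# or a C2PA-style manifest embedded in the file container).
labeled = Image.open("ai_generated_with_label.jpg")

# Copy only the pixel data into a fresh image and save it. The picture looks
# identical, but every metadata-based label has been silently discarded.
stripped = Image.new(labeled.mode, labeled.size)
stripped.putdata(list(labeled.getdata()))
stripped.save("same_picture_no_label.jpg")
```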

That’s where the social-media companies come in. They can try to link up with those watermarking efforts. And obviously, in places where this is all on one service, like TikTok or Instagram, where you’re generating the AI content right in the app, it is easier to watermark it, at least at first. They also can try to detect whether something’s AI-generated even if it isn’t watermarked. But that’s very challenging, and we should worry about false positives. So that direction, while definitely important, seems unlikely to me to solve the problem.

We’re probably not going to be able to tell you every time a video or picture you’re seeing is fake. But for really high-stakes, important content, we might have a way for the person who recorded it to prove to you that it’s real. And maybe that’s enough: to create a mechanism by which we can be sure that some subset of content is real.

And I think we have two ways to go about that. One, there are ways to embed some kind of cryptographic signature that says, “This was recorded and captured at this time in the real world on this physical device.” It might allow you to trace it through, to see that the content hasn’t been edited since it was captured. Some journalists already have cameras that work this way, and social-media platforms might integrate that technology so that you’ll get a check mark.
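For readers who want the mechanics, here is a minimal sketch of the signed-capture idea, assuming a device-held Ed25519 key and Python’s cryptography package. The function names and record format are hypothetical; real systems such as C2PA content credentials involve certificate chains and much richer manifests. The core idea is simply that the capture device signs a hash of the recorded bytes plus a timestamp, so anyone holding the device’s public key can check that the media has not been altered since capture.

```python
# Hypothetical sketch of a "signed capture" record. Assumes a device-held
# Ed25519 key and the `cryptography` package; not any platform's actual API.
import hashlib, json, time

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sign_capture(device_key: Ed25519PrivateKey, media_bytes: bytes) -> dict:
    """Bind a hash of the captured media to a capture time and a device signature."""
    record = {
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
        "captured_at": int(time.time()),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = device_key.sign(payload).hex()
    return record


def verify_capture(device_pub: Ed25519PublicKey, media_bytes: bytes, record: dict) -> bool:
    """Return True only if the media is unmodified and the record was signed by the device."""
    if hashlib.sha256(media_bytes).hexdigest() != record["sha256"]:
        return False  # the content was edited after capture
    payload = json.dumps(
        {"sha256": record["sha256"], "captured_at": record["captured_at"]},
        sort_keys=True,
    ).encode()
    try:
        device_pub.verify(bytes.fromhex(record["signature"]), payload)
        return True
    except InvalidSignature:
        return False


# A platform could show a verification check mark only when this passes.
device_key = Ed25519PrivateKey.generate()
photo = b"...raw bytes from the camera sensor..."
provenance = sign_capture(device_key, photo)
assert verify_capture(device_key.public_key(), photo, provenance)
assert not verify_capture(device_key.public_key(), photo + b" (edited)", provenance)
```

As Hall notes next, signing at capture time proves the file wasn’t altered afterward, not that the scene in front of the camera was genuine.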

That’s still not a panacea. You could still do things like take a picture of a fake image with that camera, and it might then appear to be verified. But then the other part, the soft part of this, is that ecosystem I’m talking about, where we need a kind of marketplace. It can be where news outlets, citizen journalists, or community-led operations sense that there’s an advantage to having a reputation for posting material that’s real and for calling out people who post stuff that’s fake. Often, even if you can’t prove it, you can assemble a constellation of facts that strongly indicate either that something is real or that it’s fake.

I’m pretty optimistic that we’ll figure some of this out. In the short run, it can be very painful, and it has been painful in the past. But in the long run, are we always going to be permanently fooled by everything? No, I think people adapt.

Chris Herhalt: Of the AI challenges discussed in the recent conference you chaired, co-sponsored by Hoover’s Center for Revitalizing American Institutions, which did you find most concerning?

Andrew B. Hall: The purpose of the conference was to discuss the problem of political slant in large language models. As four different research papers presented at the conference all showed, these models tend to adopt a particular constellation of political values, ones that happen to appear somewhat left of center. This led to a broader conversation about the potential for AI models to influence the information environment and affect our freedom of expression and our marketplace of ideas in various ways.

I’m not overly alarmist about any of these issues, but in the long run, the thing that I worry about in this space—and which is closely related to the topic of my forthcoming book [A Constitution in the Sky: How to Adapt Democracy to the Digital World, Harvard University Press]—has to do with the centralized control of the information environment that could occur if these AI systems become as important as we think they’re going to become.

If some large fraction of all work is done by them and through them, and we all rely on these AI agents to do everything for us, this is going to be incredibly powerful technology. I don’t know that we’re heading there, but if we are, then decisions being made today about how these AI platforms are designed—and that seem kind of trivial and even humorous today, like when Google Gemini depicted a range of historical figures as the incorrect race or gender for political reasons—could become very disturbing in that world.

Political slant is just one example of that much broader concern. If any particular company or set of companies ends up controlling this technology that’s fused into so much of what we do in our daily lives, and it then perceives, whether for its own reasons or because governments are pressuring it, a benefit to socially engineering the AI, that could be very alarming in the long run.

What if everyone in the world is doing all of their work through something like Google Gemini? And then, whoever happens to run Google at that point starts imposing their values over what you can and can’t do with this tool? Imagine it’s completely necessary for your work but now you’re unable to express certain views, or if you try to generate content, the system tells you what to do. And when you think about how it’s potentially going to transform education, you really start to worry.

Chris Herhalt: Tell me about your book.

Andrew B. Hall: It’s going to be about this long-run concern with AI and how we keep humans in charge of new technology by adapting democracy to the online world and building online platforms where users have more power and elites have less. The premise is that our lives are increasingly influenced by technology, in the sense that more and more of our economic, social, political, and cultural lives are taking place online. Our lives are intermediated by technological platforms, whether social media, Google, AI, augmented reality—so much more in the future. And all that technology operates at such a large scale because of its network effects. We’ve ended up at this odd and important point where many of the decisions that affect how we lead those lives online are being made by the leaders of a small number of tech companies, and by algorithms, without any of the democratic procedures we might expect in an accountable society.

At the same time, our government—which is supposed to oversee these things and ensure we have an open marketplace for ideas—is being run using essentially thousand-year-old technology.

Technology is moving faster and faster, and the decisions made by these tech platforms are becoming a larger and larger share of all the important decisions that affect our lives. If democracy is just too rickety and slow to deal with this problem, can we take this very old technology of democracy, port it into the online world, and update it? How would we figure out how to do that? That’s what the book is about.

I think I’m in an unusual position to tell this story because I’ve been an academic, a political scientist studying these things, but I’ve also worked with tech companies in social media and in crypto to design and build and test these democratic systems for the online world. And we don’t know how to do it perfectly because it’s very hard. All the reasons democracy struggles in the real world are multiplied a hundredfold online. The point of the book is to walk through many of the experiments I’ve been involved in to adapt democratic institutions for the online world and what we’ve learned from them so that in the long run, we have a world where our democracy and our technology are more harmonious, and where AI and other new tech platforms are controlled by everyday people and reflect our values and preferences.

This interview was edited for length and clarity.
