
Considering How AI Destroys Democratic Institutions

Justin Hendrix / Mar 22, 2026

Audio of this conversation is available via your favorite podcast service.

Across the world, governments and other institutions are racing to apply artificial intelligence in countless ways. In a draft paper titled “How AI Destroys Institutions” that is forthcoming in the UC Law Journal, Boston University law professors Woodrow Hartzog and Jessica Silbey argue that the design of AI systems—from large language models to predictive and automated decision tools—is fundamentally incompatible with the civic institutions that hold democratic society together, including the rule of law, universities, a free press, and civic life itself. This isn't necessarily because AI is being misused or falling into the wrong hands, they say—in most instances AI is working exactly as intended and, in doing so, eroding the expertise, decision-making structures, and human connection that give institutions their legitimacy.

What follows is a lightly edited transcript of the discussion.

Woodrow Hartzog:

I'm Woodrow Hartzog. I'm the Andrew R. Randall Professor of Law at Boston University School of Law.

Jessica Silbey:

I'm Jessica Silbey. I am the Frank Kenison Professor of Law at Boston University School of Law and the Associate Dean for Intellectual Life.

Justin Hendrix:

I'm pleased to speak to you today about this draft, a paper that you have published, “How AI Destroys Institutions,” which has already gotten a lot of attention and probably a lot more downloads than you might've anticipated for a draft. We're going to talk a little bit about the details and about the arguments you're making here, and some of the feedback you've already had.

But I think to start, I just want to read from the first couple of lines from this. You state, "If you wanted to create a tool that would enable the destruction of institutions that prop up democratic life, you could not do better than artificial intelligence. Authoritarian leaders and technology oligarchs are deploying AI systems to hollow out public institutions with astonishing alacrity."

Why do you come out swinging? This is not couched language. This is not the wishy-washy type of language like "in some circumstances, AI might be a danger to democracy," or "if we go down a certain line, this may emerge." You're saying the threat is here, that it's urgent, and that what's happening is on purpose.

Woodrow Hartzog:

Yeah, that's right. So, I suppose the simple answer to that question is we call it like we see it. But the longer answer is probably that we've been in the game long enough to see that if you don't come out swinging, then history is going to repeat itself.

So, when I first started writing in law and technology, I was relatively equivocal. I would say, "Oh, there are some goods and bads, and we just have to make sure to put the guardrails up, and then everything will be okay as long as everyone acts responsibly. And we just have some good, sound, common-wisdom rules."

That was all well and good. And then tech companies decided that was just a free pass to do whatever they wanted to maximize their profits. And then we saw them routinely leverage the uncertainty of the current moment to just stall long enough to get people acclimated to these technologies and particular business models and dependent upon them.

Once that happens, then there's reduced political accountability and the chance for meaningful rules goes away. Anytime a new technology comes out, there is a window where some meaningful rules might get passed, where you've got some potential political accountability. But the longer time goes on, and the more tech companies can just stall and run out the clock, the less time there's going to be to meaningfully create a rule that's going to push against the harms.

In science and technology studies, this is called the Collingridge dilemma, where lawmakers are in a little bit of a double bind: if you regulate too early, then maybe you squash the potential benefits of a particular technology, but if you wait long enough, people are acclimated to and dependent upon the tool and there's nothing you can do about it. I've heard that called the avocado problem: not yet, not yet, too late. And so that's why we came out swinging.

Jessica Silbey:

I'll just add briefly that in 2019, Woody and I published a very short article called The Upside of Deep Fakes. It was meant to think hard, in 2019, which feels like a long time ago already (I think we even wrote it before then), about how we deal with the problem of deep fakes without chilling speech and art and all these things.

We relied heavily in that article on certain institutions, like education, media literacy, journalism and journalistic ethics, and certain kinds of legal rules that we thought would work. We thought that if we shore up those institutions, the deep fake problem could be contained.

When we were asked to think about AI in that context, for what was supposed to be a short provocation, it turned out that as we dug into updating the deep fakes analysis with institutions, we became very alarmed about those institutions.

They've been eroding for a while, which is why we argued in 2019 that we have to shore them up, but they're even weaker today, and that is why the paper sounds the alarm.

Justin Hendrix:

So, you do spend a little time talking about the role of institutions and why they're important. You say they are the way that complex societies encourage cooperation and stability. They enable human flourishing by fostering collaboration in the service of a shared commitment.

Institutions in democracies are basically our means to come to consensus, to organize ourselves around collective challenges. One way you might read these lines is to see them as a defense of our current institutions. Is that what you intend?

Jessica Silbey:

So, institutional theory is over 100 years old. It became pretty prominent at the beginning of the 20th century with certain social sciences, sociology, Durkheim and Weber. As life became much more complex, we started thinking in institutional terms rather than organizational terms. So, not the hospital, but the institution of medicine. Not schools, but the institution of public education, for example.

So, this is a call. The paper is in many ways a call to worry about our institutions, really worry. I think we take them for granted. And that's one of the features of institutions, and of how institutional theory has informed our understanding of how to strengthen them: they're almost invisible to us.

Think about, for example, when we send our kids to school: we think they're going to line up for certain classes. We think there's going to be a front door. We think there are certain features to the organization of the public school and to the goals of public education.

They're normalized in our society, and that normalization makes those rules and structures invisible to us. But our everyday interaction with the organizations that are part of that conceptual institution of education is how we buy into it, how it remains durable, and how it can change slowly over time to meet our needs. This paper in many ways is saying we really need to worry that these institutions are going to fall apart.

Woodrow Hartzog:

You're right, the paper is in draft form and we're actively updating it now. One of the pieces of feedback that we've gotten is the pushback that our institutions have been ailing for quite some time.

And so it's not as though we are defending the status quo of these institutions, but rather that the deployment of AI, due to the current design of AI and its related pathologies, will prevent our institutions from ever achieving their optimal goals, and to the extent that they're good, will eat them up alive over time. And so when we say destroy, we mean destroy them as we know them and turn them into probably a much worse version of what we have.

Justin Hendrix:

So, let's talk about what you call the destructive affordances of AI, which could have that effect. You talk about AI undermining expertise, shortcutting decision making, isolating humans. Let's talk about those, and what you mean by affordances in this context.

Jessica Silbey:

So, one of the things that we think about with AI is that it isolates humans from each other. You can't read the paper today without reading about how people on teams are talking more to chatbots than to their peers, for example, or how workers and everyday people are dealing with AI decision makers rather than service vendors.

So, the more we're not engaging with humans and the more we're engaging with these AI agents or decision makers, the less I think we are able to tolerate humans in some ways. And that is a scary, dystopian future. Institutions need humans to survive.

Woodrow Hartzog:

And with the isolation of humans, I'll just add that it also removes opportunities to talk with other people. And when you do that, you run into several different problems. Maybe this is where I'll talk about the other two pathologies. One of which is, you remove points of contestation within any organization or system.

So, whereas reasonable people could disagree and in the past would've come together to talk about it and say, "I think X," and the other person says, "Well, I think Y," and then we talk about it. Two or three really important things happen with that.

One of which is, you have the opportunity to transfer expertise among people who work within the institution. And this is one of the ways in which we say that one of the characteristics of AI is to degrade expertise over time. So, it's not just the skill atrophy thing, which a lot of people are already talking about, where when you offload critical thinking and writing onto machines, you atrophy your critical thinking and writing skills.

But you also miss the opportunity, when you remove points of contestation and human interaction, to collectively discuss the common mission of the organization and the institution, and to decide how to adapt to changed circumstances. You don't do that when you are isolated, when you've removed that point of human contestation.

The other thing you do is deny the organization, and then by extension the institution, an opportunity for accountability. Every human that gets removed from the system is a potential whistleblower that gets taken out. We rely upon those people for internal accountability, and AI is not going to be a whistleblower. Whistleblowers speak truth to power, and AI is the power and-

Jessica Silbey:

It's got no skin in the game.

Woodrow Hartzog:

Right. It's got no skin in the game at all, and so it's got no internal agency or anything along those lines. And then the last thing that you do is when you remove the points of contestation, you remove opportunities to build solidarity. So, civic life, higher education, we need to feel that there is common purpose. The way in which we build that common purpose is by talking with each other regularly.

And so the more we're isolated, and the more humans are cut out of the decision-making process, the more our adherence to common purpose will erode over time.

Justin Hendrix:

One of the other areas you focus on is expertise and how AI may diminish expertise. I think a lot of folks who are excited about this technology are excited about its ability to potentially bring greater rationality to certain types of institutional decisions, or even to apply what is known in a more efficient manner. Why is that wrong from your perspective?

Jessica Silbey:

Well, one way you can think about it is how the AI is being deployed in the organization. For sure, there are ways to use AI tools to ideate, to test things, and to question existing models. For example, I think about the use of AI for diagnostics in science, or for protein folding. There are so many ways, in specific contexts, that we can imagine these complex computational tools helping us.

But think about the way AI is being deployed in legal situations, for example, to set bail, or to decide whether someone's a flight risk, or arguably to diminish judicial bias, all for good purposes. And yet, with a decision about a sentence or bail, or a dangerousness determination, when you can't give reasons, when you can't explain the decision that has been arrived at by the machine, the public reason-giving of the judge or the decision maker is opaque.

Law depends on transparency and accountability for its legitimacy. Its decisions need to be contestable and need to be understandable. And so when AI is deployed in that way, even for a good reason, like let's get rid of judicial bias, it's creating a whole other problem of accountability and legitimacy, such that people just will not want to go to law courts to decide their cases anymore, and that's a real problem. So, I really think it depends on how it's deployed and where it's deployed in these institutions.

Woodrow Hartzog:

There's also the fact that a lot of the benefits being touted, the ones you talked about, the way in which we can use these tools to better rationalize decisions, are short-term benefits. A lot of what Jessica and I are trying to do is play the long game.

This is not a world that is set up for the optimal deployment of AI. This is what we mean by the destructive characteristics of AI: it's destructive within the system in which it's going to be placed. There are two things that are not going to change in our lifetime. One is human nature. The other is that AI is going to be placed in a capitalistic system that's going to have financial pressures to be used in particular sorts of ways.

We can talk about maybe changing that over time, but it's a slow-moving glacier. And so we know that it's going to be deployed in ways that are not optimal, not in the way that allows us to use it as an augmenter of skills, as in, "I'm just going to use it to get some good ideas, but then I'll do the critical thinking myself."

Instead, the financial pressure is going to be: now that we have this tool, this thing that used to take four hours, that allowed you to develop your skills, that allowed us to input new feedback into the system, to know which questions to ask, and to know how to check the outputs, all of those things that used to take time are now going to be thrown into the dustbin, because this thing that used to take four hours now, by practical reality, only takes 30 minutes or 20 minutes. And so you should do it in those 20 minutes.

Once you realize the system within which all of these tools are going to be deployed, you realize that the best version of these tools is not going to be possible under the constraints that we're talking about.

Jessica Silbey:

I think there's a sense in which the goal of AI is to save us time and to produce more accurate results. The idea is that we're going to be able to do more, and better. But more and better are not necessarily equal to one another. It's not true that more leads to better.

I don't know if people listening to this podcast have experienced this; I know Woody and I have talked about it a lot. The fact that we can do things in a shorter amount of time does not mean we work less. The hamster wheel of work is real. Humans are constrained by time and our lifespan, and the idea that we are going to produce more in a shorter time doesn't necessarily lead to a higher quality of life for most of us.

And so I think one of the questions we want to ask ourselves is, where is this AI actually being deployed, and to whose benefit?

There is pretty good evidence right now that all the slop being produced by AI, whether it's meeting agendas, or email drafts, or even code, is creating as much work for people to read and sort through as it is saving people in the creation of it.

So, we're definitely not at the place where it's creating efficiencies. And efficiency, for me, is often at least code for capital accumulation, and most people don't benefit from that capital accumulation.

Justin Hendrix:

In the section you call the institutions on AI's death row, you reference the Department of Government Efficiency, DOGE, which I assume was looming large, particularly in the timeframe when you were writing this. And of course, we've seen a lot of these issues come to the fore under the onslaught of news around DOGE and what it's tried to do, and the types of pooling of data it's done across the federal government.

And of course, the deconstruction of whole federal agencies and the implementation of AI systems to try to replace human labor in different parts of the federal government. But this is not a phenomenon that's just happening in the US. I always think of DOGE as, in a way, a speed run, and some might argue a legally quite dubious version, of what we're seeing in other governments around the world that are trying to build digital public infrastructure systems, digital identity, and various other automations intended to speed up the operation of institutions, the delivery of benefits, the execution of state power, security, all kinds of things.

But let me just ask you to pause on DOGE for a moment and what we're seeing in the US. The Trump administration is pushing hard to incorporate AI across the government. We're seeing a huge uptake of AI tools across federal agencies. The latest AI inventories that are published by the different agencies attest to that. But I don't know, what do you make of the situation in the US right now when it comes to what's happening on the ground?

Woodrow Hartzog:

Well, it's not great, Justin. I think there are several things going on here. One of which is, you can't separate the story from the abuse of power by tech oligarchs, and so it's hard to say that this is solely attributable to the characteristics of a particular tool. What we're talking about is the accumulation of power. But of course, the tool allows you to more effectively accumulate power, to centralize power.

There are also the characteristics of AI as it actually works, and then there's the mythology of AI, which allows for all sorts of usurping of power: the idea that agentic AI is going to be able to revolutionize everything just right around the corner, that we'll be able to deploy it broadly and it'll work great, even though it doesn't work great. So, part of the problem is that everybody thinks this technology can do things that it just can't do.

This is part of what I think Arvind Narayanan and Sayash Kapoor are getting at in AI Snake Oil. And then of course, the other thing is that this is a really effective way to cut out the points of contestation. If you want to move fast and break things, then AI is your best friend, because it gives you an excuse to remove the points of friction.

Which are human points of friction that would say, "Hey, this is not a great idea." Or, "Hey, you're engaging in wrongdoing." Or, "Hey, this is wildly inefficient." Particularly if you have a certain end that you would like to achieve that doesn't involve adapting to circumstances for the betterment of the mission of the institution.

Jessica Silbey:

Just out of the headlines in the past two days has been the Pentagon's dispute with Anthropic over the use of its AI tool in weaponry. What we're seeing there is the Department of War wanting to take Anthropic's tool without the guardrails that Anthropic believes need to be on it. One of the questions you have to ask is, why are they doing that?

What contestation or chain of command is the Department of War trying to skip over by developing a tool in its own way, rather than in the way Anthropic, in its own expertise, believes it needs to happen? The military chain of command is essential. The military is its own institution, a different institution.

The chain of command has been essential to so many moments, if we think about the Bay of Pigs, if we think about different aspects of the Cold War. Throughout history, we have seen time and time again different chains of command thwarting catastrophe.

We're seeing the lead-up now with Iran, with the tankers circling in the Middle East. And we're seeing it play out right now in uses of AI that are going to cut people out of the decision-making process.

Woodrow Hartzog:

And just to add on to that, Jessica: I think I read this morning that Anthropic has now dropped its flagship safety pledge, which just underscores the need for strong bright-line rules, because the market is going to push companies, even companies that say they want to do good, to ultimately race to the bottom.

Justin Hendrix:

Probably the most important of the things that you think AI may undermine is the rule of law. You've already gotten into this a bit, but let's talk just a little bit more about the principles here. You give some examples. You talk about the IRS, for instance, as one institution where we might see something like this pop up. But you also talk about other instances of what you call algorithmic invasions of our legal institutions. Maybe expand a little bit on the challenge to the fundamental rule of law.

Jessica Silbey:

Well, I'm sure Woody has his own examples, but we've seen examples of people thinking we should have AI juries, for example, not a jury of peers, but AI juries. Debt collection, and public benefits that are entitlements. The debt collection could be the IRS, or it could be state and municipal debt collection. We're seeing it already in legal practice, with lawyers using AI tools to draft whole briefs, for example.

At every step of the way, there is the question of authority, of the reason for the assertion of power. Law is just a system that justifies the use of force by the state in certain circumstances. We've all bought into the idea that a transparent, accountable legal system allows the state to do certain things, take our house, put us in jail, as long as there are these accountable rules and systems of appeal, for example, and the rules can be changed by democratic legislatures.

When a machine decides instead of a jury, for example, or a machine decides how much you owe, and we don't have a basis for contesting it, or we can't look in the eye the people who will be sentencing us, the legitimacy of the public system that justifies the use of force against its people, I think, just completely evaporates.

Woodrow Hartzog:

You start to lose shared common purpose and belief in the institution, to say nothing of the slow atrophy of the skills that let us look at the output of an AI judge or something like that and say with confidence, "Oh, I know that's wrong, or that's right." But the early returns on skill atrophy and the transmission of knowledge in organizations are not good with respect to AI.

And so over time, that's only going to get worse. This is not something that you can expect would improve within the organizations or the institution, given the incentives that we're facing here.

Jessica Silbey:

But the faster you have to write the brief, the less likely you are to check the cases, the more likely the cases are going to be wrong, and the clients get hurt and the judges get frustrated. If you can't tell a right case from a wrong case, then what are you going to do? These are just such obvious problems of accuracy, reliability, and trust in a system that depends on trust. We need people to trust the law in order for them to follow it.

Woodrow Hartzog:

As well as the points of contestation, by the way. The idea of AI passing judgment stands in stark contrast to the fact that judges can't even agree on what the right answer is. We've got seven district court judges who will all disagree with each other, and then the appellate court disagrees with them. And that's a good thing, actually.

That's part of the system: these points of contestation, where we routinely revisit human values and the collective enterprise of what it is that we're doing here. The more you offload that onto machines, the fewer opportunities you have to consistently recalibrate and retune and keep that expertise.

Justin Hendrix:

You also bring up, of course, institutions of higher education and the challenge that AI poses in those. As someone who has worked in institutions of higher education for some time, I can attest to some of those phenomena from what I'm seeing myself. But I want to focus in on what you say are the destructive affordances of AI that augur havoc for the press, for journalism and news media in particular.

I was just reading this morning a report in the excellent Indicator media about an AI-generated podcast network that has published 11,000 episodes a day, now hundreds of thousands of episodes, which they say is ripping off media outlets and displacing top search results for local news podcasts with AI content. To some extent, the argument you're making feels very much borne out by what we're seeing happen, but let's just focus in on this one for a moment.

Is there any path towards getting our hands around this? Towards a place where AI is a boon for public understanding, public knowledge and not a threat to it? Or do you think we'll end up in this sad state for the public sphere for some time as you put it in the paper?

Woodrow Hartzog:

You put your finger on the one glaring weakness in the paper, which is that it doesn't really have a prescriptive section in it. This, again, was originally meant to be a very short provocation paper. And so Jessica and I are actively working on the follow-up, which will be the prescriptive section. We're also working on a piece about AI slop specifically, which will probably focus pretty heavily on journalism.

I think the thing to keep in mind is that there have been a lot of relatively weak metaphors for free expression and journalism as an institution for a long time now. One that immediately springs to mind is the marketplace idea, the idea that more speech is always better speech. AI slop is proving that not to be true.

If you have the capacity, particularly at scale, to drown out the entire meaningful information ecosystem, then I don't know what we're doing here. How could people find the signal amid that massive amount of noise?

To say nothing of the importance of journalism and a free press in speaking truth to power, which also gets at the point that, in some ways, the press is the best version of a whistleblower, maybe not from the inside, but externally. I worry what happens when we offload so much of that onto AI systems. I don't know, Jessica, if that answers the first thing you were asking about.

Jessica Silbey:

To talk about journalism specifically: I think the profession of journalism was overhauled and professionalized at the turn of the last century. I think the first school of journalism, at the University of Missouri, opened in 1908, and it came on the heels of the yellow journalism scare, the Hearst shakeup of newspapers in the 1880s, the debates between the telegraph and the newspapers, and the moment when local journalism became more national.

The attention economy changed then, and people were fighting for different kinds of markets. With that shakeup in the market for journalism, where people had been reading just their local papers and now could read East Coast news on the West Coast in a second because of the telegraph, came a professional rebirth of journalism. What it meant to be a journalist changed in the 1910s, 20s, and 30s, and that's when we got fact-checkers. That's when we got the journalistic code of ethics, for example.

So, one of the responses to the beginning of fake news in the 1890s, to yellow journalism and the erosion of privacy by journalism, was a professionalization of the institution. And so you could say, for example, we need an AI code of ethics. We've been talking about AI ethics for at least a decade now, I think, and under the Biden administration, there were conversations about that.

So, you can imagine each institution on its own, whether it's higher ed, journalism, medicine, or the military, deciding for itself: what is the proper use of AI? How do we use it in a way that doesn't erode the public purpose that we are serving? And that's going to be less efficient. Building specific AI tools for specific institutions is not going to achieve the efficiency aims, and it's not going to create the market capital accumulation that these big five companies are thinking about.

But that's probably one way to think about it, if we're looking backwards at what helped journalism become what it was in the 20th century, with star journalists and so on. So, we could think in those terms. And in fact, I think a lot of scientists are thinking in those terms. I think a lot of medical professionals are thinking in those terms.

I think many schools are too. At BU, we are thinking hard about in what way we want to incorporate AI into our infrastructure, for example. These are hard choices, but to just bring these in as general-purpose tools without thinking, the way AI is massively being deployed now, I think is really a nightmare.

Woodrow Hartzog:

Yeah, I agree. I think where we've come out on a lot of this prescriptive stuff is that it's got to be bottom up. It has to happen locally, and good things happen when you start acting locally, when you start making choices about particular uses or deployments at the municipal level, then maybe at the state level. I've long since given up hope of Congress doing anything, but maybe at the international level we could also get some coordination, like an accountability sandwich at the bottom and the top.

Justin Hendrix:

You've already turned the page toward effectively suggesting where to start. You point out that some of the stuff people are doing right now, AI ethics principles, responsible AI, that type of thing, doesn't seem up to snuff for the scale of the challenge. Even transparency is not enough, on some level.

That's a core argument that folks often make on this podcast, researchers in particular, who think that if we can just see what's going on inside these models or inside these companies, then perhaps it will lead to change. You also bring up some bigger ideas in the paper. In the footnotes, you quote William Butler Yeats. You talk about Robert Putnam and this idea of Bowling Alone, and the extent to which our society has been on a multi-decade trajectory around certain societal issues. I don't know.

That goes back to the point you're making about context, that we're not introducing AI into a rosy scenario. It's a difficult spot, but are you at all optimistic that we can slow down and look at some of these more fundamental issues first? Is that really where our effort should go first and foremost, to address inequalities, to address problems in our democracy? And if we do that, then maybe AI isn't such a big threat?

Woodrow Hartzog:

That was the original idea behind our first paper about the upside of deep fakes. Let's do our best to shore up these institutions and that would be a meaningful way to counteract a lot of the worst tendencies of these tools. I tell my law and technology students, in a sense, everything is law and technology now.

So, corporate governance law is law and technology, given that one dude can control one of the most powerful corporations ever to exist because of dual-class stock structures or something. Or infrastructure, tax law, voting, access to voting, encouraging people to be more politically active at the local, municipal, and civic level, which is a lot of Robert Putnam's work in Bowling Alone. All of that is needed and should be part of this conversation.

And so I think what Jessica and I will probably end up arguing for, and Jessica, you can correct me if I'm wrong here, is a really holistic approach to repairing institutions. Some of that will be targeting specific rules around the design and deployment of AI systems. Some of it will be meant to get at the precursors, the bedrock foundation upon which these AI systems sit, in order to inoculate us from some of the worst tendencies of these tools.

It'll probably take a combination of both to get us there. It starts with having these conversations, trying to decide at the very local level, and getting some of these bespoke systems. I don't know, Jessica, if you've got...

Jessica Silbey:

I have lots of big philosophical ideas in my head, but one is that it's very hard to know, when you're living history, how things are going to unfold. I try to look backwards a little bit to figure out where I am in this moment. Woody has already said this: we really lost the opportunity with social media, and I would like to learn from that mistake.

We are seeing now more regulation of social media for young people. It wasn't so long ago, and we can still figure this out with early interventions for certain uses. But I also think about what makes life meaningful, when you think hard about it: the relationships with people, the interactions. It could be music, it could be art, it could be nature. Where do people actually get their joy?

We're not necessarily getting that joy from AI and from our screens. I don't mean to be a Luddite, but, and you're the one that asked the question, Justin, we're facing huge problems of climate change, threats to our planet, the rise of authoritarianism. I really think we have to prioritize what matters to human beings as stewards of the planet, of the animals and the trees and the rivers, what matters to us.

And to just take a look back and say, "Really? You're going to let these five AI companies take things over?" The data centers that are being built all over the United States without actual people being employed by them, we're all debating that now. It feels so shortsighted to me. And so I do hope that we can have the conversations about what really matters at this very critical moment.

Justin Hendrix:

So, folks who are more enthusiastic about AI have a term they use for people who espouse arguments like this: they call you decels. You're the people who want to slow things down. You're the people who want to stand in the way, put up barricades to our progress on this technology.

One argument some of those folks might make about your paper is that you seem to be targeting mostly generative AI. Your concerns are rooted in considerations that people have foregrounded over the last couple of years, certainly since ChatGPT blew up the conversation. But the technology is changing fast. What we even mean by AI is changing fast. And maybe some of these multi-agent systems, or some of the various fixes people have for these systems, would alleviate some of your concerns.

How would you answer them if they were in the room? Let's assume there's at least one of those people listening to us right now.

Woodrow Hartzog:

We've gotten a lot of comments along those lines. I'll even lean into the criticism a little more and say some of the feedback we've gotten was that everyone always gets scared when there's a new technology. That everyone thought TV was going to melt everyone's brains, that the telegraph was going to wake the souls of the dead, and that writing would replace critical thought. I think it was Plato who said writing was the worst thing that could ever happen.

That's fair, but I think, A, it's fair to characterize AI as a distinct technology, a unique technology that's worthy of its own separate criticism. It can't be that all technological innovations are good. As a matter of fact, I would look back and say social media has largely been a disaster, at least the version of it that we have now, even though there are undeniable benefits from it.

There are all sorts of tools that we would look back on and say, "We really should have done that one differently while we had the chance." I would list several. I think we're already on the path to ruin with facial recognition. And so if you read the essay, a lot of it is focused on generative AI.

But Jessica and I made the explicit decision not to limit it to generative AI, precisely because you could imagine even worse carnage being done by agentic AI, particularly removing points of contestation or reducing human interactivity and isolating people.

In many ways, agentic AI is an even worse disaster. Predictive AI is also one of the major problems here, and this is what Arvind and Sayash wrote about in AI Snake Oil. They said the only way that predictive AI can really meaningfully predict the future is if nothing else changes. But the whole point of institutions is to create roles and structures that adapt to changing circumstances.

And so I won't debate that there are short-term efficiencies to be had in all of this. I feel like you've got to concede that. I just feel like we're not having the conversation on a big enough scale or over a long enough timeline, because if you play this out, I think it ends up pretty poorly.

Jessica Silbey:

I just love new discoveries. I am in a family of scientists, and I think there are new things to be discovered with AI tools. But the hopium that surrounds "don't slow us down" really does not, I think, appreciate where we are right now. This is not the moment to be deploying AI at scale.

There's also the question, I think, of whether it learns from history enough. I think we just need more humility, a little bit more humility with that. Nobody wants to prevent the next cancer treatment, but I think we do need to prevent the AI weaponization of the military.

Justin Hendrix:

Well, and I suppose part of that humility means slowing down, reading our William Butler Yeats, apparently, but certainly reading this paper, which is called “How AI Destroys Institutions.” It's in draft form, and there'll be a link to it in the show notes. We'll look forward to seeing how all of the responses you've had play out when the final version comes along. But Woodrow Hartzog and Jessica Silbey, thank you very much.

Jessica Silbey:

Thank you.

Woodrow Hartzog:

Justin, thanks so much for having us.

Authors

Justin Hendrix
Justin Hendrix is CEO and Editor of Tech Policy Press, a nonprofit media venture concerned with the intersection of technology and democracy. Previously, he was Executive Director of NYC Media Lab. He spent over a decade at The Economist in roles including Vice President of Business Development & In...
