How to Become an Algorithmic Problem
Justin Hendrix / Feb 22, 2026
The Tech Policy Press podcast is available via your favorite podcast service.
As AI technologies proliferate, a growing number of people are asking what it means to live in a world dominated by algorithms and automated systems—and what gets lost when those systems optimize human behavior at scale. These questions sit at the intersection of political theory, technology policy, and everyday life, and they are drawing scholars from fields well outside computer science into the conversation.
José Marichal is a political scientist at California Lutheran University who has been writing and teaching about technology and politics for more than two decades. His 2012 book, Facebook Democracy, examined how social media platforms were reshaping the nature of political discourse—specifically, how the commodification of friendship relationships on platforms like Facebook blurred the line between affective solidarity and genuine deliberation. That work anticipated the mainstream conversation, as many of the concerns Marichal raised would later come to define debates about social media's effects on democratic life.
Marichal's new book, You Must Become an Algorithmic Problem: Renegotiating the Socio-Technical Contract, considers the age of recommendation systems and large language models. Drawing on political philosophy, he argues that individuals have entered into an implicit bargain with technology companies, trading unpredictability and novelty for the convenience of algorithmically curated experience. The consequences of that bargain, he contends, reach beyond personal preference and into the foundations of liberal democratic citizenship.
What follows is a lightly edited transcript of the discussion.

You Must Become an Algorithmic Problem: Renegotiating the Socio-Technical Contract, by José Marichal. Bristol University Press
José Marichal:
My name is José Marichal. I'm a professor of political science at California Lutheran University, and the title of the book is You Must Become an Algorithmic Problem.
Justin Hendrix:
José, I am so pleased to speak to you. It feels like it's been a while since I've had this opportunity, and I feel like I have known you for many years, despite the fact that I don't think we've ever been in a room together. You have been writing for Tech Policy Press for some time, and have contributed on a variety of different things over the years.
I remember first coming to know about your work around the time that you published Facebook Democracy. The way I think about that book, which for any listener that's not familiar with it came out in 2012, is that it almost predicted or presaged a lot of the conversations we'd end up having, some years before a lot of folks arrived at similar concerns around Facebook, social media more generally, and its impact on democracy. I thought I might just ask, for anybody that isn't familiar with your work or your research or your prior book, how would you describe your intellectual curiosity? What brings you to this subject of technology?
José Marichal:
Okay. Well first of all, thank you Justin for the opportunity to chat with you and for the opportunity to write for Tech Policy Press, and for all the great work that you do. It started very early. I've been teaching a class called Technology and Politics for, gosh, almost 20 years now. I think the first class started in 2008, back in the very optimistic days of Clay Shirky and Henry Jenkins and Larry Lessig and Yochai Benkler, and all these books that came out on the long tail and all the positive benefits that internet culture was going to bring to democracy.
And I think I shared that initially. I have the mentality of an early adopter, and right now I'm torn between my disdain for tech oligarchs and my fascination with AI as a tool. So that project you're talking about, and thank you for the kind words about it, really did begin with a lingering, growing skepticism I had about the effect of commodifying relationships, particularly friendship relationships, in that Facebook project. The idea was that we were going to take something that's very intimate and very private, communication between friends, and put it on a platform for public consumption.
And the thing with that project was I was looking at political Facebook groups, and I studied about 200 of them. And I found practically nobody asked anybody to do anything. So you'd have a Facebook group, and it would be, "I love Ronald Reagan. If you love Ronald Reagan too, follow me." And it started to make me realize that this isn't quite political in the way that I was trained in graduate school to think of politics. It was much more about finding like-minded others. And obviously solidarity and coalition building are a part of the policy process and the political process.
But in political discourse, there's always a delicate balance that has to happen between bridging and bonding, to use the social capital language. Between the language of finding your like-minded others and the language of moving beyond your like-minded others to have conversations with people that you disagree with or who are neutral. In a coalition-building process, you can't just stay with your solidarity group. You also need to move beyond it.
And so that was the concern I had with what Facebook was doing, and I don't think at the time I wrote it that they had really figured out how to monetize it yet. What they were doing was taking a very intimate kind of discourse and making it public discourse. And that kind of discourse, what I might call affective solidarity-seeking discourse, is very different than deliberative discourse. So when you start blurring those two, it fundamentally changes the way we talk to each other about politics. I might've written that book too early, but other people built on it, unwittingly or not. And now I think it's pretty common wisdom that social media writ large has done some damage to our discourse environment.
Justin Hendrix:
Yeah. I was going to say you have seen, I guess, a lot of empirical evidence pile up for effectively your initial observations, your theory. And I suppose all of that evidence has led you now to this book. You start off with a statistical concept, this idea of the outlier. Why do you start there?
José Marichal:
Really, the entire book is a meditation on the idea of an outlier. And you can think about an outlier in two different ways. If you're a social scientist and you go to graduate school, you take econometric methods and you have to learn how to do multivariate modeling. When you're modeling, you're trying to explain reality by putting together variables into an equation that will have some predictive power. So you run that model, and in graduate school you always get that scatterplot, with a regression line that runs through the different points that represent the different cases. And if there's that one point that's way far away from the line, you call that an outlier. And it's a problem because it's undermining the predictive power of your model.
So in graduate school, you have to fix that somehow. You can create a dummy variable, you can adjust the weights of the model, you can do a bunch of things to deal with the outlier problem. But the outlier is also a thing in life. Malcolm Gladwell wrote a whole book about it. It's about being anachronistic, and it's about being the anachronism in the social science sense too.
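To make the outlier problem concrete, here is a minimal sketch in Python with invented data, nothing from Marichal's own research: a single extreme case drags an ordinary least-squares line away from the true relationship, and the graduate-school fix he mentions, a dummy variable flagging that case, largely restores it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: y depends linearly on x, plus one extreme outlier.
x = np.linspace(0, 10, 30)
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, size=x.size)
y[-1] += 40.0  # the outlying case

# Plain OLS: the outlier pulls the slope and intercept.
X_plain = np.column_stack([np.ones_like(x), x])
beta_plain, *_ = np.linalg.lstsq(X_plain, y, rcond=None)

# The fix Marichal describes: a dummy variable that is 1 only for the outlier.
dummy = np.zeros_like(x)
dummy[-1] = 1.0
X_dummy = np.column_stack([np.ones_like(x), x, dummy])
beta_dummy, *_ = np.linalg.lstsq(X_dummy, y, rcond=None)

print("slope without dummy:", beta_plain[1])  # biased toward the outlier
print("slope with dummy:   ", beta_dummy[1])  # close to the true value of 2
```

The point is not the particular numbers but that the model only behaves once the outlying case has been explained away.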
And so what I really wanted to think about was what value the outlier provides to democratic life. And the reason to start with the statistical part is because I think the outlier becomes easier and easier to deal with as you develop the ability to handle more and more cases and more and more variables. In graduate school, we had variables; now you call those parameters or features. Instead of having seven or eight variables in a model, modern pattern detection algorithms or modern generative AI systems are built on billion-parameter models. We're talking about a scale that's just incomprehensible. That in and of itself is interesting: if you're dealing with such large numbers, so many hidden layers in a neural network, there's no way that you can actually explain what you've found. Whereas conventional social science is about explanation. The whole Enlightenment project is about explanation.
So the idea of the outlier becomes much easier to deal with. But it's still a problem. If you are Netflix, or if you are some company that wants to be able to say something about your customer base, that outlier is undermining your prediction model's ability to predict the future. In basic machine learning training, you run into this problem: if you have outliers in your training data and you try to accommodate them, you end up overfitting the model so that it can't predict future cases easily.
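And a companion sketch of the overfitting trade-off he describes, again with made-up data: a flexible model that bends to accommodate an outlier in the training set fits those points more closely but predicts held-out cases worse than a simple line.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical training data with one outlier, plus held-out test data.
x_train = np.linspace(0, 10, 20)
y_train = 2.0 * x_train + rng.normal(0, 0.5, size=x_train.size)
y_train[10] += 30.0  # the outlier the model will try to "accommodate"

x_test = np.linspace(0.5, 9.5, 20)
y_test = 2.0 * x_test + rng.normal(0, 0.5, size=x_test.size)

for degree in (1, 8):
    coeffs = np.polyfit(x_train, y_train, degree)    # degree 8 chases the outlier
    mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: test error {mse:.2f}")  # the flexible fit generalizes worse
```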
So it's still a problem. One of the things that I talk about in the book is how these companies deal with the fact that there are outliers in the world, that we in some ways are naturally outliers. Humans are creatures of habit, but humans also like novelty and they like to do the unpredictable. And I think tech companies, the whole data collection, data-is-the-new-oil industry, are really built on the idea that humans are predictable. If humans act in unpredictable ways, that poses problems. So are there incentives for tech companies to create platforms, to create environments, that encourage people to stay predictable, to become more predictable than they might actually be?
So a case in point: Netflix, again, has something like 2,000 clusters that it puts its users in when it does movie recommendations. Of course we have agency. We can choose not to agree with, or not to click on, the film that Netflix is recommending. But if it's not just Netflix, if it's YouTube, if it's Spotify, if more and more aspects of society become oriented around training us or conditioning us to stay within the narrow confines of our current taste set, then does that undermine our ability to become anachronistic, iconoclastic, to think in those ways that are also necessary?
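As a rough illustration of what that clustering looks like mechanically, here is a toy version with invented taste vectors, not anything Netflix actually discloses: users are grouped by k-means, and each user is then served whatever their assigned cluster already likes.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Invented data: each row is a user's ratings across 12 genres (0-5 scale).
ratings = rng.uniform(0, 5, size=(500, 12))

# Group users into taste clusters (Netflix reportedly uses ~2,000; 20 here).
kmeans = KMeans(n_clusters=20, n_init=10, random_state=0).fit(ratings)

def recommend_genres(user_id: int, top_k: int = 3) -> np.ndarray:
    """Recommend the genres most liked, on average, inside this user's cluster."""
    cluster = kmeans.labels_[user_id]
    cluster_mean = ratings[kmeans.labels_ == cluster].mean(axis=0)
    return np.argsort(cluster_mean)[::-1][:top_k]

print("genres pushed to user 0:", recommend_genres(0))
# The user keeps getting served whatever their cluster already likes.
```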
So in some ways, Justin, I never thought about it this way, but it is a little bit of a different spin on the Facebook book that I wrote 12 years ago. There's that balance between sameness and novelty, between regularity and novelty, and I think we're drifting more and more towards habituating ourselves to a sameness and comfort that we've preselected, but that we're also clustered into. When Netflix clusters me, I don't know who else is in the cluster; I don't even know what the criteria for the cluster are. So are we habituating ourselves towards those things that we've already told the algorithm we like, and forgoing those flights of fancy or those engagements with serendipity that are necessary, I think, not just in democratic life, but to live a fully rounded human life?
Justin Hendrix:
So a core concept in the book, of course, is the algorithmic contract, and this idea that we have to renegotiate that contract. What is this term? What does it mean?
José Marichal:
Yeah. So that is, after sort of thinking about the outlier for a while, I'm like, yeah, we're choosing part of this, right? It's a two-way relationship. A lot of times, and you know this as well as I, when we do tech scholarship, we talk about affordances and platforms and all the ways that the tech companies are doing things to us. And they certainly are. We're not going to debate that. But it is an interaction, it is a relationship. We get things out of it.
And so maybe what I bring to the conversation is, being trained as a political scientist, one of the go-to frameworks in political science, especially in American political thought, but in just political theory, is the idea of the social contract. Probably gives a lot of listeners flashbacks to their freshman political science courses to think about this term. But the social contract is basically a thought experiment to say, why would individuals leave the hypothetical or actual world before government to enter into an arrangement where some sovereign has power over them? Why do we submit to someone having power over us?
And Hobbes would say it's because we want to be free from the war of all against all in the state of nature, the nasty, brutish, and short world. Rousseau would say it's because we're in society, and society is what's painful; the comparative self makes us have negative assessments about ourselves. And so the social contract allows us to submit to this community, this general will, that will make us all better off. Locke splits the difference, I think. Rawls uses it in his veil of ignorance in some ways.
And so I said, well, what if we apply this to tech companies? Certainly the metaphor doesn't completely fit, because Facebook is not the state, even though it often wants to act like a nation-state. But it gives us some leverage to ask what we get out of our engagement with tech companies. Because tech companies are providing us with something that we would otherwise not get if we exited.
And so what I say in the book is, in order to resolve the anxiety and boredom of everyday life, and the anxiety of the flood of information that we have online, we submit to these frameworks that help curate our life for us. So instead of having to deal with the tsunami of information out there, I can go on YouTube, and YouTube will recommend things I should watch based on what it knows or what I've told it and what it also has determined will keep me on the platform. It will provide me with these things and help give order to my life. And we can then expand it to Ring doorbells or all of the many surveillance technologies that we have out there that give us this sort of sense of order and control over our life. Or at least a perception, a perceived sense of order and control over our life that we might not otherwise have. And so that's what we get, or that's what at least I believe we get.
The flip side of that is, if we go too far into that kind of rabbit hole of ourselves, and sometimes I think about that great Charlie Kaufman movie, Being John Malkovich, if we go too far down the John Malkovich rabbit hole of ourselves, we miss out on all the other possibilities of life that we fail to explore because we've been in this contract. And I think in that way it diminishes us as humans, but it also diminishes our potential as liberal subjects, or our capacities as liberal subjects. Because we then can't see the other side, and that inhibits our ability to do the things that liberal subjects need to do in a democratic society.
We need to be able to have rational faculties. We need to be able to engage in reasonableness, not all the time, and obviously reason, rationality, is overhyped in some ways as the main way that we communicate with each other. But we need some sense of reasonableness in order to be able to say, okay, well, they see the world differently than I do; that doesn't make them an existential threat. But more and more, I think, we get habituated into what I'm calling, a term I'm playing with for a new book I'm writing, ontological enclosure, where we fence ourselves off from other ways of seeing. And by doing that, we inhibit our ability to do a very vital thing in democracy, which is to talk to the other side, to engage with the other side.
Justin Hendrix:
Let's pick up on a couple of things. One of the things that comes through is a concern we hear a lot about, effectively homogenization: that AI is somehow going to smush the culture, as you say, push out the outliers, smooth everything out. You focus on optimization culture. You talk about the metaphor of factory farming. You talk about the idea of the factory farm citizen. I don't know. How far do you think we are along this curve right now? Are these things well along the way, or do you feel like, I don't know, you're more projecting into the future here, where we might be headed in an age of artificial intelligence?
José Marichal:
I think I want to say both. I think we're far along the way, and it's worth getting a little bit into the idea of the factory farming analogy. I co-wrote an article with some colleagues about this too. It's basically saying that when we optimize for this personality that we think we are at a given time, when we use recommendation algorithms, we're reifying, locking in place, amplifying those preferences that we already have. When we optimize ourselves in that sense, for lack of a better term, there are unintended consequences. There are externalities that we don't necessarily recognize.
And so it's very akin to factory farming. What is factory farming? We're optimizing meat production by warehousing animals. And I won't get too much into the grossness of it. But it leads to these really negative externalities. Too much production of "fertilizer," quote, unquote, that gets into groundwater or into lakes and creates algae blooms. Or the overuse of antibiotics that get consumed by humans and have impacts on health, including early puberty, and that contribute to antibiotic resistance.
And so I like that analogy because it makes me, and hopefully the reader, think about the externalities of having a media and cultural environment that pretty much gives you everything you want. One of my favorite chapters is chapter two, where I talk about the way that art and culture are being optimized. Architecture and interior design and music and film are all being optimized. Netflix is actually trialing making content that fits a particular cluster. Artists know that if they want to get picked up by Spotify, they have to change the beginning of their songs in order to have the right hook or the right structure that the algorithm likes, or that certain people already say they like.
So where's the space for novelty there? Of course, artists are artists; they're going to do the novel thing anyway. But the problem is, if great art falls in the woods and no one's there to hear it, does it make a sound? If great work is out there and it doesn't get picked up by the algorithm, does that make it even harder for people to discover it? I think that's the negative externality.
And the same thing with policymaking. Where does the innovation in policymaking come from? Policymaking is like any other realm. It needs good ideas. And where do the good ideas come from, or will the good ideas be picked up, if we just develop what I'd call an algorithmic mentality, where we're optimizing for the things that we already think we know and like?
So to get back to your original question: where are we? We're pretty far down the road. But then AI takes all of this and locks it in, in the form of a platform that's basically looking for modal responses. Base LLMs, at least, are very sophisticated autocomplete. They're giving you a string of words that the neural network has determined has a high probability of addressing the query of the user.
So what that means is that when people engage with AI, they're getting modal, average, predictable responses, and that cuts out novelty. Now of course people can engage with AI in ways that might produce novelty, so there are some possibilities there, and that's something that I'm thinking about for future projects. But I do think if we don't (here's me promoting the book) renegotiate the algorithmic contract and insist on some level of novelty, creativity, plurality, we're headed down a really dangerous path.
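A toy illustration of the "modal response" point, with an invented five-word vocabulary and made-up scores rather than any real model: sampling the next token at a low temperature almost always returns the single most probable continuation, while a higher temperature lets the less likely, more outlier-ish options through.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy next-token distribution for some prompt; scores are invented, not from a real model.
tokens = ["popular", "predictable", "familiar", "strange", "unprecedented"]
logits = np.array([3.0, 2.5, 2.2, 0.5, 0.1])

def sample(temperature: float, n: int = 1000) -> dict:
    """Sample continuations; lower temperature concentrates on the modal token."""
    probs = np.exp(logits / temperature)
    probs /= probs.sum()
    draws = rng.choice(tokens, size=n, p=probs)
    return {t: int((draws == t).sum()) for t in tokens}

print("temperature 0.2:", sample(0.2))  # almost always the modal answer
print("temperature 1.5:", sample(1.5))  # the outliers start to appear
```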
Justin Hendrix:
I want to ask you about this concept you bring in around stochastic terrorism. You are in conversation here with a few other folks who have been on this podcast in recent months, including Chris Gilliard, whose work you reference. This idea that, to some extent, the flow of information, the constant looking for outliers, the observing of phenomena that need to result in a notification somehow, ends up creating, the way I think of it, almost a background noise of anxiety. You call it ambient stochastic terror.
José Marichal:
It goes back to the state of nature metaphor: the state is supposed to keep you free from the war of all against all, but the state is also force. So I talk a little bit about this: we think we're getting relief from anxiety through our algorithmic contract, or socio-technical contract, but the very thing that we think is giving us relief is also creating and amplifying the anxiety.
And I was having a conversation with somebody about this recently, about how it's not just that TikTok or Instagram or any of these platforms gives you what you say you want. It also embeds violence, in almost an A/B-testing kind of way. I have this weird relationship with TikTok where I'll install it and then I'll delete it after about five minutes. Something or somebody will convince me, you know what, there's some really inventive, creative stuff going on on TikTok, and there is. But there's also this way in which they slip in violence.
And I don't know that I have a particularly violent feed, or that I've told TikTok that I like violence, but it'll find its way to my feed. And then I have to say, no, I don't want that. I think the same thing happens with Reels and Shorts on Instagram and YouTube: they'll test out how much prurient content you want. If you say you don't want it, it'll go away for a while, but then it'll come back. And I've been thinking about why it wants to do that.
And I do think that there is a lot of incentive for these companies to keep us afraid. Keeping us afraid keeps us wanting to stay within our ontological enclosure, within these very defined, set identities. And I think the reason for that is that a predictable self, a self that's staying within the ontological enclosure, is much easier to sell to advertisers. Obviously you have to have some novelty, because otherwise you wouldn't buy new things, but if you're too novel, then it's really hard to sell you to advertisers, to say, hey, this person is very likely to buy this product.
So it isn't just surveillance products or new Ring cameras and doorbells; I think it's an artifact or a condition of techno-capitalism that scared subjects are better consumers. If you don't leave your house, you buy a lot of things online. It's not something I've fully fleshed out, but I do think that there is something there. And it's another reason why we need to step outside and awaken to the fact that this is a contract. And to think about, well, how could it be better? And what can we do to demand that we not be placed in this ambient stochastic terror mode? Anyway.
Justin Hendrix:
So I mean, I think I've even mentioned this in the past on this podcast, but there's something that gets me about when I walk around in my neighborhood in Brooklyn these days, and every third house chirps at me that I'm being recorded. Almost as if just walking down the street makes me a suspect, as if I might be about to steal a package or otherwise encroach on someone's property. And I do wonder about that: both the kind of anxiety that gives me as the pedestrian who's wandering around, and also all those notifications happening on the back end for my neighbor, that once again this strange guy has walked past the house at X time. So I don't know, I was thinking about that as I was reading that part.
But I want to ask you a little bit about implications for society more generally, for democracy in particular. I mean, that's one of the chief concerns that you and I have always corresponded about, and that you've written about for Tech Policy Press. But this idea of the kind of augmented state of nature, the sort of distorted threat perception, how does that erode democratic norms, beyond just making us all more or less on edge all the time?
José Marichal:
Yeah. I think it inhibits our desire to be in community with one another. That anecdote you had about walking through your neighborhood and the beeps, the chirps going off: first, that inhibits your desire to be out there as a subject. And I think it reinforces the idea that our friends and neighbors are threats. I've been writing a book now called Machine Liberalism, trying to understand how everything we're talking about impacts the way that we experience liberalism, our rights and our freedoms, and what we expect from a liberal society. What rights do we think we need protected?
And so I think that this sort of environment, in which we're suspicious of one another, is illiberal. Because even though most people think that liberalism is really about individuality, it requires a lot of community. Mill and Rawls and a lot of the great liberal theorists assumed that our rights were going to be enjoyed in community. Tocqueville talks about it: our interests properly understood, or rightly understood. For us to fully enjoy freedoms, we need to be in community with one another, because that's where our views of the world get vetted. If the fundamental assumption of liberalism is the value of the self, the dignity of the individual, then in order for us to really recognize that dignity in others, we need to be in each other's lives. In order to have empathy for one another, we need to be in each other's lives.
And so I don't know if tech companies do this intentionally, but there certainly is, especially with AI, this push towards replacing human relationships with synthetic relationships. Even in the simplest of forms. I'm a college professor; I have 25 students in a freshman class at a small liberal arts college. Instead of asking me to clarify the readings, they can now go to an app to clarify the reading for them. The app, to be honest, may do a better job than I might. I've really been playing around with NotebookLM, and I'm dumbfounded at how well it works. I know people might disagree, but I actually think it does a pretty good job of summarizing and communicating. And that's great.
But that student, instead of coming to me, is now going to the AI. And that means one less opportunity for me to be a mentor, or to just find out how that student is doing in their life. And more and more we say, well, let's rely on the synthetic. I saw this really great article a couple of days ago, Justin, about the decrease in posts to Stack Exchange for coding problems. If anybody's a baby coder like I am, and knows enough Python to be dangerous, and really starts a Python project and then gets stuck, they used to go to Stack Exchange or Stack Overflow to figure out the answer. People aren't doing that anymore, because now they have Claude Code. Now they have these desktop-based AIs that will just run the code for them.
So instead of having to go to another person to get the answer, you can just ask the AI, which got that knowledge from humans. And that might not seem like a big deal, but think about what that meant. It meant that an individual used to say, oh, I've solved this problem; now I'm going to post the problem that I solved to this community so that others can benefit from it. So that's the big fear I have, because I think that's a liberal impulse.
I've just been reading a book by a colleague, Jennifer Forestal, who's at the University of Illinois Chicago and wrote this great book, Designing for Democracy. She talks about democratic affordances: those affordances that enhance the doing of democracy, the promoting of liberal democratic values. And I think that somebody posting to Stack Exchange is a democratic affordance. It's a liberal affordance. You're trying to help other people because you believe in the value of the well-being and dignity of others, which I think is a liberal principle.
Another book that people might be interested in: Alexandre Lefebvre wrote a book called Liberalism as a Way of Life. And this is an argument that he tries to make, that liberalism isn't neutrality. There is a set of values associated with it, and one of them is a care and regard for the other, irrespective of whether or not that other is of your race or your ethnic group or your nation. It's a universal care and regard for others. So I think that's a liberal principle.
And so the real concern is that the more we focus on an algorithmic contract, to bring it back to the book, that caters to our own preferences, that caters to solving the problem, to having the most optimal answer, at the expense of engaging with other humans in community, relationally, the more we undermine liberal democracy. Not to go too far into Hannah Arendt, but that's what she warned about in The Origins of Totalitarianism: that people get so isolated from their community that the world of others is just foreign to them. And then they believe whatever fanciful story the authoritarian, the totalitarian, actually, I'm sorry, wants to impose on them. That isolation is a precondition of totalitarianism. So it's a huge concern for me.
Justin Hendrix:
One of the things I've been doing lately is looking at the US federal government's AI inventories. They're beginning to roll out, and the Office of Management and Budget will eventually put together this entire inventory of the various investments that a few dozen federal agencies have made in artificial intelligence tools and products and services.
And it just feels to me like, in many ways, the government itself is rolling these things out almost kind of in a slapdash way. I mean, most of the reporting doesn't include much around rights or impact, or any of the kind of considerations that are even in the fields for the reports themselves. They're just simply blank.
So I don't know, I mean, are you in any way hopeful that the kind of concern that you're talking about is going to be addressed in the context of the political system we have, which appears to just want to move faster, wants to encourage the industry, believes that this industry has to be super big and successful or we'll lose to our global competitor, and at the same time, the goal is to kind of shrink in many ways the human aspect of the government and replace individuals with, well, more algorithms?
José Marichal:
Am I hopeful? I think my attitude is that we have no choice but to figure out ways to mount what I imagine would have to be a multi-front resistance. There are certainly folks who are AI refusalists, and I'm sympathetic to many of the arguments that they make. But also, independent of the desires, the strange desires, of the Peter Thiels of the world, there are lots of people doing interesting experiments in deliberative democracy with AI tools. And so if you just strip away the motives of the Elon Musks and Peter Thiels of the world, there are some possibilities for AI tools that expand democratic engagement.
So one example might be increasing access to AI for low-resource languages. Here in Ventura County, where I teach, we have a population of Mixteco speakers from Mexico, and there are not a lot of documents in that language. But if you can train an AI on Mixteco, then you could have an English-to-Mixteco translator or a Spanish-to-Mixteco translator, and that immediately might allow those folks to be part of democratic deliberation processes, or it could allow them better access to social services or entrepreneurship resources. Whatever those folks want, it might allow them to tap in.
Another example is translation or summarization software that, if it's oriented correctly, can be used not simply to identify the average answer from a deliberative session, but to cluster all of the responses. That makes it much easier for local governments to know all of the different responses that the community is providing. And again, it's back to this insistence on plurality, on expansion instead of contraction. This is language I'm playing with: we can demand that these tools expand our possibilities and illuminate different ways that we can move forward, rather than contracting our possibilities and moving us towards a modal answer.
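A hedged sketch of the kind of tool he's describing, not anything from the book: embed residents' free-text comments and cluster them, so officials see each distinct position rather than only the most common one. The comments, the model choice, and the sentence-transformers dependency are all assumptions for illustration.

```python
from sentence_transformers import SentenceTransformer  # assumed dependency
from sklearn.cluster import KMeans

# Hypothetical comments from a public deliberation session.
comments = [
    "Widen the bike lanes on Main Street.",
    "We need more protected bike infrastructure downtown.",
    "Please add late-night bus service to the east side.",
    "Evening buses stop running far too early.",
    "Keep the library open on Sundays.",
]

# Embed the comments, then cluster so every position is surfaced, not just the mode.
model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model choice
embeddings = model.encode(comments)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(embeddings)

for cluster_id in sorted(set(labels)):
    members = [c for c, l in zip(comments, labels) if l == cluster_id]
    print(f"cluster {cluster_id}: {members}")
```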
Now of course, that's not what lots of very powerful actors want, but I do think those are the dividing lines, the battle lines, that are being drawn: the politics of contraction or reduction versus the politics of expansion. And maybe it's always been that way, but I do think this is a new iteration of it.
And so I don't even think about it in terms of hopeful. I just think about it in what choice do we have? What choice do we have? This is why, again, I so much appreciate the work you do and all the people that you give voice to and platform because there is no option but to engage in this kind of struggle over ensuring that these tools promote human flourishing and human dignity. But I don't know if I'm optimistic. I'm just like we all have to be in the battle.
Justin Hendrix:
So if there are policymakers listening to this podcast, I do want to say that you get into some policy questions. You talk about data rights and sovereignty. And you make an argument that we need to renegotiate the algorithmic contract to include a right to not have our potentialities limited by optimization algorithms, that that's part of what we should be seeking beyond straightforward privacy and data rights considerations. You talk about a right to serendipity, which I think is interesting. The right to digital potentiality. These secondary ideas that perhaps may be important in the algorithmic or AI age, I suppose. And then what you call, going back, I suppose, to the language of statistics, Boolean fuzzy citizenship. What's that fuzziness about?
José Marichal:
Yeah. So in the last chapter, I try to think about what a renegotiated contract would look like. And I think all of those rights could be themed by this politics of expansion. Take Boolean citizenship: this gets to the question of the binary versus the probabilistic, and how we should think about our membership in community. Should we think about our memberships as binary, or should we think about them as probabilistic? It's a longstanding question in social science. Anybody listening who is a social scientist or does statistics knows that when you have demographic variables in a model, they're usually binary. Race: you're either this, one, or you're not, zero.
So the Boolean fuzziness. The fuzzy proposition says, instead of thinking in binary terms, can we think about membership as probabilistic? Instead of thinking of ourselves as ontologically enclosed members, we think of ourselves as, yeah, well, I prefer this, so I'm 66% of this, but I'm also 65% of that. A lot of models already think in those fuzzy terms. So it's really a call to fuzziness, a call to intellectual humility, a call to not being so binary in our thinking.
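A tiny numeric illustration of the contrast, with invented numbers: a binary assignment puts a person in exactly one cluster, while a fuzzy assignment gives a degree of membership in every cluster.

```python
import numpy as np

# Invented "taste profile" for one person and two cluster centers.
person = np.array([0.7, 0.2, 0.9])
clusters = {"A": np.array([0.8, 0.1, 1.0]), "B": np.array([0.3, 0.9, 0.4])}

# Binary (Boolean) membership: assign to the single nearest cluster.
distances = {name: np.linalg.norm(person - c) for name, c in clusters.items()}
binary = min(distances, key=distances.get)

# Fuzzy membership: a degree of belonging to every cluster (softmax over -distance).
scores = np.exp(-np.array(list(distances.values())))
fuzzy = dict(zip(distances, scores / scores.sum()))

print("binary membership:", binary)   # one cluster, full stop
print("fuzzy membership: ", fuzzy)    # a share of belonging to each cluster
```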
Now of course, there are definitely times when you need to take a stand and you need to be binary. This is right, that is wrong. I can think of many recent events in American politics where it's like, that's wrong and this is right. But generally speaking, I think a liberal citizen has the intellectual humility to say, I think I'm 80% right on this; I'm open to having my mind changed. That's a precondition of being a liberal subject, the whole Karl Popper idea of conjecture and refutation: I'm open to the possibility that my mind can be changed on this. I can be refuted, and I can update my priors, in the language of game theory.
And the other one is serendipity. There's a great book by a Dutch design professor on serendipity, the social need for it, and how most scientific discoveries have come through it. So how do we create tools that allow us to capitalize on serendipity, that allow us to capitalize on novelty-seeking? You and I are old enough to remember the early web and the website StumbleUpon, and, to cosplay the French post-structuralist theorists here, the Deleuzian flights, the rhizomatic paths that one can take. It's important to maintain a sense of a future that is not predetermined.
That's one of the hallmarks. I'm teaching global politics right now, and we're reading a classic essay by Schmitter and Karl about what democracy is and what it isn't. One of the things that democracy isn't is a system where outcomes are predetermined. When an election is predetermined, like in Russia, where you know that Vladimir Putin is going to win the next election, you don't have a democracy. So having a future that is not predetermined is a really important precondition for a liberal democratic subject.
And then the last one is a right to potentiality. The example I use in the book is Google's mishap a year and a half ago, when they rolled out an early version of Gemini and maybe the temperature setting was set too high on the image AI. So when someone would ask for a pope, it would produce a Black pope, something that has never existed. Or when someone would ask for the founding fathers, they would look like the cast of Hamilton.
And of course some people were just apoplectic about that, because it's not true; that's not what happened. And it's like, well, look, why are we expecting our AI to provide us with the truth? Why can't we have image AI set up so that it's really uncovering potentialities for us, helping us imagine worlds? Google just released its world-building model, and I know that there's a school of thought that world-building models are going to be the future of AI, especially with robotics. Can we use AI to imagine how we can be different? Again, I think that's central to the liberal project, right? Progress, bettering human society, bettering the human condition. Can we use these tools to envision and imagine how we can be better? It's not easy, and many people do not share that vision, but I think that's an important precondition to these tools being beneficial to us, or at least not catastrophic for us.
Justin Hendrix:
You tell us we need to become algorithmic problems. How can I be an algorithmic problem?
José Marichal:
I think it's part and parcel. It's both demanding that the tools be different, but also recognizing that we're in a contract, and recognizing that this milieu, these affordances, this world, is not definitive. Take the reality on Bluesky, and you can say this about the left too, maybe not as much, but you can say it about the left as well. If you spend too much time on Bluesky, you start getting a view of the world. And you say to yourself, wow, am I 100% accurate in my assessment of reality by spending my time with like-minded others who are all enraged by what's going on? So that means that I sometimes go inhabit spaces where I don't feel comfortable with the discourse. And not because I'm going to be raw, or because I necessarily want to support it, but because vetting yourself, being a liberal subject, is engaging in conjecture and refutation.
I think the way to become an algorithmic problem is to recognize that part of your responsibility as a liberal subject is to engage in politics. And that means expanding your coalition, not simply keeping your coalition the same size because everybody believes in what you believe, but being creative in finding ways to tell stories to others that might convince them. Engaging in argumentation, engaging in creative storytelling. Thinking about the ways it might be different. And having the discernment to be able to say now is when I have to take a stand, but I don't have to take a stand all the time, and I don't have to take a stand on all issues.
And that other that really irritates me and angers me, some of those others might be persuadable subjects, and I have to go and figure out where they're coming from. And maybe this gets into this world of what's called agonistic politics, where it's like we're not going to find a consensus in between us, but I might be able to tell a story in a way that moves that person five degrees. And in return, they might tell a story in a way that moves me a couple of degrees. And that doesn't mean we all move towards the middle because that would be the exact opposite of being anachronistic.
But I think being an algorithmic problem is a commitment to anachronism, to idiosyncrasy. Putting yourself in situations where you are uncomfortable, and engaging with content and material that might not be the norm in the groups that you're inhabiting. It's important for a democratic society to maintain a sense of that idiosyncrasy. Some of us have to be idiosyncratic; otherwise we're stale and reified and nothing moves. And then it becomes, what is it, Carl Schmitt, right? Politics becomes about friends and enemies. And too much of that is destructive.
Justin Hendrix:
This book's called You Must Become an Algorithmic Problem: Renegotiating the Socio-Technical Contract. José Marichal, I appreciate you so much coming on this podcast. Again, the book's from Bristol University Press Digital, but folks can find it at your favorite bookstore. What's next for you? What can we expect in the near term? You mentioned multiple book projects there, it sounds like.
José Marichal:
Yeah. So I have another book; that one's due to the publisher by the end of the summer of this year. It's called Machine Liberalism: Reconceptualizing Rights in the Age of AI, and it's with Intellect Books and the University of Chicago Press. It's part of a series on AI and politics. So that's coming.
Justin Hendrix:
We'll have you back when that one arrives. I appreciate it.
José Marichal:
Yeah, I appreciate you and I appreciate all that you do with Tech Policy Press.