Project Maven and the Age of AI Warfare
Justin Hendrix / Apr 9, 2026

Audio of this conversation is available via your favorite podcast service.
Questions around applications of artificial intelligence in warfare have been building for decades, but they have become increasingly urgent in recent months. We've seen AI systems deployed in conflicts such as in Ukraine, in Gaza, and in recent strikes on Iran. We've seen the Pentagon push to make AI central to American defense strategy, and we've seen the tensions that creates—including a very public conflict with one AI firm, Anthropic, over where the lines should be drawn around autonomous weapons systems. For the Trump administration, supremacy in AI isn't just a military goal; it's a primary strategic aim, the foundation from which all other forms of power are seen to flow.
And yet even as such ambitions accelerate, we are living through a moment that puts questions of power and judgment in the starkest possible terms. As I recorded this podcast, the world was still processing the fact that the President of the United States posted on social media that "a whole civilization will die tonight, never to be brought back again"—a threat against civilian targets in Iran that legal experts described as promising war crimes.
When that is the context in which these tools are being built and deployed, the stakes feel almost impossible to overstate. What does AI do to the judgment of those who conduct wars? What happens when friction around the use of lethal force is removed? What do the wars of the future actually look like, and will the people who execute them be subject to democratic checks and balances? My guest today has spent years reporting on such questions, which are the subject of her new book.
Katrina Manson is a reporter and the author of Project Maven: A Marine Colonel, His Team, and the Dawn of AI Warfare, a book just published by W.W. Norton & Company that tells the history of the Department of Defense program launched in April 2017 to apply AI in military targeting and logistics. I spoke to her about the book and about recent events, including the use of AI targeting in the war in Iran and the battle between the Pentagon and Anthropic.
What follows is a lightly edited transcript of the discussion.

A 163rd Reconnaissance Wing MQ-1 Predator is shown during post flight inspection at dusk from Southern California Logistics Airport in Victorville, California, Jan. 7, 2012. (US Air Force photo)
Katrina Manson:
I'm Katrina Manson. I'm a reporter and I'm also author of the book, Project Maven: A Marine Colonel, His Team, and the Dawn of AI Warfare, which has just come out.
Justin Hendrix:
Katrina, I'm so pleased to speak to you. And I want to start with this character, this person that you put in the title, this Marine Colonel. Who is Drew Cukor? And when did you realize he would be a central character in your book?
Katrina Manson:
He was the Marine Colonel who, at the time, was Chief of Project Maven. He led the real doing of it. There was someone above him who was the director of Project Maven, who has been a public figure, General Jack Shanahan. But Drew Cukor was the colonel who really drove the day-to-day activity. And as I found out, he also had a lot of the vision underlying Project Maven, even before it got going, leading to some of the key platforms that Project Maven developed, including Maven Smart System, which is being used today in every US combatant command and in support of US military operations against Iran. I started writing about Project Maven and where it got to in about 2023. At that time, I went down to the 18th Airborne Corps and was given an unclassified demonstration of Maven Smart System. And after that magazine article came out, I was convinced that to tell the story of Project Maven, I had to speak to Drew Cukor, and I had been seeking to speak to him for some time. Eventually, over the course of that year, I managed to meet him and learn more about him.
Justin Hendrix:
You say in your first meeting, your main task was to meet his stare.
Katrina Manson:
Yes. He's been described to me in various ways. For me, he was always extremely soft-spoken and searching, I think, and very keen to discuss things. I describe him like a medium-altitude drone. Any question, however tough, that I put to him, he was always able to answer with context first, kind of circling closer to the detail, but he never shirked a question. And we spoke every Friday for a year, but he did make a decision about whether to speak with me. It took a long time for him to decide to go on record and to participate in a book. I think it's an interesting decision because, of course, I'm a reporter. I'm not a think tank. I don't share the US military position. I look at everything in as unbiased and balanced a way as I possibly can. It was really important for me to get to the what and the how of Project Maven.
It's obviously a very famous, and for some infamous, effort that the US military undertook to try and put AI at the heart of how America makes war. But there were lots of different claims about what it was really trying to do and what it did do, how successfully it was achieved on its own terms, and how controversially it was seen by the campaigners ranged against it. And what I felt was that the debate was happening without concrete examples. What actually was AI delivering on the battlefield? Is AI all hype? Could it be used? Could it be trained? Could it be made reliable? Was the US military sufficiently cognizant of all these worries about unpredictability, black boxes, automation bias? And so having this chance to explore it from the military operator's perspective, and certainly from the perspective of the people backing Project Maven, presented a critical window of opportunity to try and really show what they had been trying to do and whether they had done it.
Justin Hendrix:
Before we dive into some of those bigger questions, those bigger issues, the way that AI got baked into the United States military, the way that Cukor and the rest were able to effectively take the project forward, I want to ask you just a few other framing questions about him and how this all got started. I mean, you call him a leading historical figure in a war that hasn't happened yet. I wanted to ask you, is that war happening now?
Katrina Manson:
I've been thinking about that. I don't know. I mean, the war I meant when I wrote it was some kind of putative World War III, a conflict between the US and China. But of course, military history never rolls out in the way people expect. I think it's certainly fair to say that the US is using Maven in combat at a scale that it never has before right now. So you have Central Command publicly saying they've hit more than, I think it's now 12,000 targets since the campaign started. They hit a thousand targets within the first 24 hours. Now, that doesn't have to be because of AI, but they've also taken time out, both the commander and the spokesperson in an interview with me, to say they are using a variety of AI tools. They haven't named Maven Smart System, but I have separate reporting that it is Maven Smart System, which is using not only computer vision, which is the main focus I look at in the book, but also large language models, and in this case, of course, Claude, the model that has become so controversial within the Pentagon.
So both of those are being used, and CENTCOM themselves have said that AI is helping to speed up processes that used to take days and hours, sometimes down to as little as seconds. So they themselves are saying it. The war I had in mind for the future really involves both sides using AI at the speeds that Pentagon officials told me would always come, faster than humans can think. So there may be more wars to come, whether that war does or doesn't come, whether it's deterred. And of course, many of the advocates of AI claim that AI will help deter a war and, if they have to fight that war, win it. So it's not happening at that very futuristic scale that was envisaged, but it's certainly happening at a never-before-seen scale.
Justin Hendrix:
I want to ask you about another thing that occurred to me just in reading this sort of pre-history of Maven and some of the motivations of the individuals involved. I get the sense that one of the motivations here really is born out of failure, born out of the failures of the wars in Iraq and Afghanistan, and that some of the conviction that the individuals pursuing these systems have, that's where it comes from on some level, just a sort of profound sense of being upset with bureaucracy, being upset with the way things are done, being upset with constraints.
Katrina Manson:
Yeah. I think certainly the book really explores the long tail of the forever wars and how let down many military operators felt. In Drew Cukor's case, he was an intelligence officer. So his aim was always to bring better intelligence to operations. And for him, he's also fighting a slightly different battle that isn't just to do with AI. It's about the distinction, the separation between the intelligence side of things and the operations side of things. And much of his fight is to go around intelligence and deliver AI straight to operations. And that's what Maven Smart System is. It's a sort of intelligence-infused platform used by operators. There are other intelligence platforms that rely on AI, but they don't have such an integral role on the operations side. And for me, I think I could see this pattern that the US could bring enormous firepower to bear, but it couldn't always put it in the right place.
That was often down to a number of mistakes, but also technology that was too sophisticated in some cases for the moment that was required. And then of course with improvised explosive devices, which were the main way that Americans themselves were dying and being maimed in those wars, just record-keeping, and bringing that record-keeping to the fore, that is the thing that so frustrated and infuriated Cukor. He couldn't establish that kind of information on a regular basis. He was using analog records. He was using old-fashioned programs that he just didn't think were suitable for a modern war. And the US had very sophisticated weapons, but it didn't have sophisticated intelligence processes.
Justin Hendrix:
You have this anecdote that I love, this moment where they're trying to use a computer server, I think in Kandahar, and kind of recognize that at the time it's really only good as a heater.
Katrina Manson:
Yes. Drew Cukor and a young Marine lug the computer around. It even takes up the space of a seat on a helicopter going into Kandahar. Cukor is very, I suppose, droll about this. And for him, the tools were completely useless. Others have told me, and I think he said this, that Microsoft is where data goes to die. And yeah, the room had all the windows blown in because of the fighting, and the heater kept him warm.
Justin Hendrix:
One of the things this individual is doing is also bringing in other people who want to shake things up and who have very aggressive tactics. One of those characters is Alex Karp, the CEO of Palantir. Talk about how Palantir comes in early on.
Katrina Manson:
Palantir actually isn't involved in Project Maven early on. At the very beginning, Drew Cukor is desperate to get Google. He sees Google as the global A team. He wants DeepMind. If he doesn't get DeepMind, which he doesn't because he can't even get a response from them, he told me he wanted Google Brain. He didn't get traction from them either. He goes to Google Cloud, which is looking for customers. It's very early on, and they start to provide what Cukor hopes will be a user interface, a kind of Google Earth for war that can map coordinates on a digital display, using AI to find coordinates and infuse that intelligence with extra information, and also to produce the algorithms, the actual computer vision algorithms, that will look over drone video feeds and identify what is there, classifying it or even subclassifying it.
Now, Google leaves in spectacular fashion after the Project Maven protests, which mark a kind of continuing fault line between Silicon Valley and the Pentagon, and I think really in US civil-military relations writ large. When that happens, when Google decides not to renew its contract after Google workers protest against any involvement in the business of war, Cukor picks up the phone to Alex Karp, or well, to Palantir, and says to Aki Jain, who's one of the early engineers at Palantir, "It's been a minute, can I come see you?" And he flies to Palantir and makes his pitch, and it really is for Palantir to become the user interface, somewhere all intelligence feeds can go into a digital display, where the information can be crunched and then displayed in one easy single pane of glass for a military operator to look at. And it's not even an easy sell for him at that stage.
Palantir doesn't want just to make a fancy user interface, I discover in the book. They want to do the data analysis. And Cukor has used Palantir before to solve some of the problems that he saw in Afghanistan. He convinces them. And then over a series of six months, Palantir, pitted against some other companies, develops a pilot for what becomes Maven Smart System. I think it's clear from my reporting, and probably fair to say, that Cukor always expected Palantir to win. He saw Palantir as the best. Some of the others dropped out. Some people claimed he gave special favor to Palantir. It's harder to analyze that after the fact, but I did certainly establish that he spent Friday nights at Palantir for hours at a time. Some of the people working with him would constantly miss their trains. Cukor is a very hardworking, pressured individual who wanted results and looked at software development, tech development for war, as if he were still fighting a war.
And so he was working on a timetable that many people who worked in and around him couldn't fathom, but some could, and some who sort of hated him at the time even look back and consider it one of the best times of their life, because they were working so hard for something that ultimately they believed in. Palantir starts to develop this interface. And it's not a straightforward process because Palantir is already really controversial within the department. In 2016, Palantir sues the Army to get a foothold for its data analytics platforms. And then even as they're developing Maven Smart System, sometimes the Palantir font is being used, sometimes Palantir branding is on the documents. And so there's this effort from even Palantir supporters within the Pentagon to say, "Please don't make such a big deal of yourselves. This is a Pentagon project." So you do see that continuing fault line.
And then I think the money that Palantir starts making really doesn't kick in for a while. I'm told at the beginning they weren't making all that much money from this, but it was a foothold, a greater foothold into the department.
Justin Hendrix:
Your book does suggest that Cukor had a relationship to Palantir going well back to 2009, before even the Google involvement.
Katrina Manson:
Yes. So he hears about Palantir early on and he goes to see them, I think it is 2009. He's introduced to them through another Marine who's junior to him. And that Marine actually sends Palantir some of Cukor's papers. So at the same time that Cukor is thinking about how to really crunch data and extract the most from it, Palantir has a platform that's doing something similar, and they are brought together. And Palantir had a very interesting effort to try and spread itself throughout the Defense Department. Rather than working with the very top brass, they wanted to work with the people on the front lines and secure support that way, often hiring people from those roles as advisors or full-time. And so there's the Palantir revolving door, and every defense contractor has a revolving door, but their revolving door looked a little different from those of some of the traditional big defense contractors like Lockheed Martin.
They tend to hire from the middle ranks. Cukor never ends up working for Palantir, but there is this long controversy in which he is accused at some point of favoring Palantir and enabling Palantir to gain more contracts within the department. He contested those claims very heavily, and my understanding is nothing was ever upheld. He continues to have a relationship with Palantir. He never went to work for them, and his post-government job isn't actually focused on defense. It's focused on finance.
So his perspective, I think, is that he always just really believed in the tech and that they would deliver. He also very interestingly counsels Palantir on how to improve their standing within the Defense Department. And there's this scene I relate in the book, which is essentially saying, "Look, you're perceived as extremely arrogant. You need to tone it down. Yes, we think you're great, but just change the tone a little bit." And Aki Jain goes on what he called a listening tour and what others have framed to me as an apology tour and slowly that relationship starts to improve. But even today, Palantir is divisive within the Defense Department.
Justin Hendrix:
You tell various stories about the moments where this technology starts to get actually deployed in the field and what some of the first learnings were. I want to talk about Somalia in particular. You have a chapter on Somalia, and this I think is interesting just for the choice of the place where they decided to take a stand, make a go of it, see if AI could be deployed in a useful way, but also where you start to see the first glimmers of the kind of vision being realized. There's this story you tell about the algorithm spotting a person hiding in the bushes that none of the analysts had seen.
Katrina Manson:
Somalia was chosen partly because if it went wrong, it wouldn't ruffle so many feathers. And also because at the beginning, Project Maven wasn't making any headway to the extent that they wanted within the services. And so in the face of a lot of pushback, they relied on people they already knew. And someone on Project Maven had a relationship with a commander in Somalia and they got the algorithms out to a place I report in the book called Baladogle. And those first algorithms don't work well. They are not very able to integrate with the legacy systems. This is a time before cloud. So if they were rolling Project Maven out now, it would be much easier, but they were doing it in a pre-cloud environment, having to get algorithms onto systems that need an enormous number of cybersecurity safeguards, also just a lot of internal process checks.
They were, I think in some of their own words, cutting corners wherever they could. Cukor would say, "If it breaks the law, don't do it." But if it breaks policy, he didn't mind so much. So they were pushing their way in and initially the algorithms would flicker because what they were doing was assessing each frame of video footage, which of course each second is made up of multiple stills and the AI may or may not detect the object on each still. And so it would come on and off. It was very distracting to the operators. It was also picking up everything. And so the analysts didn't like it and they very quickly stopped using it. They then send someone who understands the work of drone screen analysis to help, who effectively becomes free labor and starts working among them and then encourages them to start trying to use the algorithms again and to start fine-tuning the way it's used.
So should it be a box that can reveal what is in front of the operator, or should it be a blob, which risks obscuring what might be on their shoulder, whether it's a weapon or something else? Obviously a very important distinction to be able to make. And they do have this final breakthrough moment, where even the person who was supposed to be selling Project Maven thought it was never going to work, and an algorithm for the first time spots someone hiding in the bushes that no human had yet spotted. In a tense live situation, that could mean the difference between life and death for US military operators and others. So yeah, that was one of the first breakthroughs where even the quite skeptical supporters of Maven began to change their mind, and they began to get some traction. But it was still extremely shonky for years after that.
Justin Hendrix:
Well, I assume that it's the utility of these systems that ends up overwhelming any external pressure, including, for instance, the backlash against Google over its contract, et cetera. Inside the Pentagon, folks are, I suppose now very much all in. This is a key part of the sort of strategic plank of Pete Hegseth's Department of War. When do you think that that, I don't know, shift occurred in full? Now you've got Silicon Valley firms that are essentially bought in. There's much less resistance in Silicon Valley to working with the Department of Defense. When do you think that that really turned?
Katrina Manson:
I think there are multiple turning points. One would be the Ukraine war, which we can come back to, but that is really a moment where US development of Maven was put through its paces. It still wasn't good enough, but the US was able to see and argue internally that Maven Smart System was really, really helpful. The other is that a lot of people who were involved in Project Maven have returned to the Pentagon under the second Trump administration, really pushing for the adoption of AI. And then there's the public debate over the use of AI. Of course, whatever public debate there's been, it's never really focused on the specifics, because the specifics have never been made public before. But as recently as 2025, Google itself dropped its objections to working on weapons and war. Ever since the LLM explosion, you have OpenAI, now X, and obviously Anthropic trying to work with the Department of Defense, and the Department of Defense really rushing to try and turn LLMs into a new part of this effort to bring AI to war.
And Drew Cukor had always argued that LLMs, transformer models, would be the future of where AI could help, not only computer vision. And Project Maven, I learned, was much more than computer vision. It was also analyzing and translating text from captured enemy materials. They were also trying to do edge deployment even at the beginning, meaning putting AI on drones or maybe even missiles. All the things that campaigners worried about, which weren't made public at the time, were well-founded concerns; this effort was trying to do exactly that. The public temperature has changed. The protests went away. There is now a new generation of protests about the way AI may be used for domestic purposes, more of a DHS and ICE component, but the effort to prevent AI going to war, I think those protests have diminished, just as, if we look at the companies' own positions on working on this, those positions have changed.
And what many of the companies did was say, "If you want to work on government work, work in this department, in this part of our company. If that's not for you, you have other parts of the company you can work at." So there was this sort of accommodation of what those concerns looked like, which has satisfied some and not others.
Justin Hendrix:
You mentioned Ukraine, and I want to just ask about another technology that seems like in parallel to AI, it seems so important to this book, which is drones, which of course have transformed the situation in Ukraine. And now we're seeing the same thing happen, of course, in the Middle East and beyond. But how do you think about this relationship between AI and drones?
Katrina Manson:
I think I'll go back to something Bob Work, who was the Deputy Defense Secretary at the time Project Maven started, saw. When we spoke about his aims, he explained that AI for him was always about the pursuit of autonomy. And he described autonomy in two ways: you could have autonomy at rest and autonomy on the move. Autonomy at rest was essentially what Maven Smart System has become, a system to identify targets and pair them with weapons, which could ultimately become automated. It's not entirely automated; there are several stages where humans are still involved, but conceptually it could become automated. And the second was autonomy on the move. This would be machines, robots, being the first across the front lines. And in the case of drones, of course, not only in the air, which we're more familiar with from Ukraine, but also on water and underwater as well, and trying to create an autonomous software system architecture so that drones in the air, on the surface of the water, and underneath it can all somehow speak to each other and identify targets and pursue them in some way.
That really is the ultimate vision of some behind Project Maven. Of course, at the time people said publicly, Maven has nothing to do with targeting, this is not the way we're intending to use AI, but I think I'm able to trace quite robustly that that was the intention. And today we look at the arrival of drones at such scale. Of course, Ukraine is not the first war in which both sides have used drones. That happened in 2020. But in the Ukrainian case, you have four million drones being produced by the Ukrainians and used in some way in a single year, the US simply not being able to keep up with that, and it changing the conversation about what war looks like. Rapid developments that no one I'd spoken to had foreseen, like drones flying tethered to a fiber optic cable, just an extraordinary thing, in order to get around jamming.
The point of autonomy, particularly in a scenario where the US may choose to defend Taiwan against a potential and certainly not inevitable Chinese invasion, is the fear that drones will be jammed. And so drones need to be able to move autonomously without human intervention, and to select a target and pursue it autonomously as well. That's really hard tech. That effort really did start under the Biden administration, in response to watching the number of drones that were being used in Ukraine against Ukraine. And there was a program named Replicator, which sought to develop cheap, one-way attack drones that could be used to defend against China in this scenario. And I learned throughout the course of reporting the book that there were efforts to integrate the algorithms that Project Maven produced and make them specifically good at identifying Chinese vessels. So that involved all the lessons that Project Maven had learned: collecting a huge new amount of data on Chinese vessels, training the algorithms, and then seeing if they could develop good enough algorithms and integrate them onto the drone platforms that were made by commercial vendors.
That process didn't go very well, but I understand that that store of data is still the strongest store, and there were video demonstrations given even to the chairman of the Joint Chiefs of Staff showing that Maven algorithms could identify Chinese destroyers. One of the problems they encountered is that if you imagine a ship drone, a surface drone, quite a small boat, taking imagery, ocean spray on the lens could on occasion interrupt the ability of the algorithm to keep tracking a certain vessel. Then there were big problems getting the algorithms onto the actual commercial vendors' platforms, and the commercial vendors themselves didn't think they needed Maven algorithms. They had their own computer vision, although my understanding is it was not fed by such a huge amount of data. So the accuracy of any of these algorithms begins to really be something you might question.
It was always put to me that with AI, taking a little bit more risk would be part of the scenario, that the US military understands that, and that pursuing an algorithmic war at sea, where if you miss, you hit water, versus pursuing an algorithmic war in a city, where if you miss, you hit civilians, were really different scenarios. And that in a wartime scenario, the US would likely be prepared to take a higher level of risk.
Justin Hendrix:
You say early in the book that the biggest moral and practical question is who or what gets to decide to take a human life, and who bears that cost. That question reverberates throughout, and you've kind of just gotten at it here. One of the things that comes through from the book as well is the extent to which a lot of this is about interface design and about prompting people to make decisions, what types of choices people have in front of them. We talk about the gamification of these systems, the extent to which they are maybe leading to certain outcomes based purely on the interface design. I don't know. How do you think about that at this stage? Are humans more or less still making mortal decisions, or have we crossed that line?
Katrina Manson:
I think the US military would certainly argue, yes, humans are making those decisions, but there's a constant problem in the way US military leaders talk about this. They say there will always be a human in the loop. That isn't US Defense Department policy. US Defense Department policy is that there should be appropriate levels of human judgment over the use of force. That's something really different. That implies something more like supervision. And when I spent time with the people trying to create Maven Smart System, it was very clear that in their own decision-making system, humans were present at six points on what is called a decision-making loop. You could think of that as something like a kill chain, a cycle that could lead to a lethal action. With the help of Maven Smart System, computers are replacing humans at three of those places where a human is involved, and a human is becoming supervisory at one of those places.
So even according to that standard, which is based on reporting I did in '23 and '24, humans play less of a role in decision-making. Now, they are still making the final decision to hit. The commander still bears that responsibility, and we will have to see what happens in the cases where there are mistakes, how that is investigated, what level of auditability there is. It's been put to me that these systems could add to transparency and accountability because every piece of data is tagged, everything can be followed through, and headquarters now has a much better sense of what is happening on the front lines than ever before; those in headquarters can watch operators moving around. You can even put a beacon on a person. So being able to have that overview could, depending on how it's used, add to accountability. Essentially, that's not so much to do with AI as with data integration, the cloud spreading that proliferation of information, or surveillance.
The extent to which it speeds up decisions that aren't adequately vetted will require a huge amount of transparency from the military: sharing how long they are taking for each decision, how many different information sources they are using to corroborate and vet each target, how they are even thinking through targeting. One of the main claims that Drew Cukor made to me about how AI could help with war wasn't even just about targeting. He said it would help with the questions, with the preparation, to make sure the US military is ready. I thought it was very instructive that former Defense Secretary Jim Mattis recently came out and said that targetry is no substitute for strategy. And if you look at the US war aims, what the US itself has said about what it's trying to achieve in Iran, despite the huge number of targets struck, they have not yet achieved what they set out to achieve.
And so where you put AI in that system, and where of course you ask the big questions, which may be human or data-assisted: what is the point of this operation? Will we achieve it more quickly thanks to AI, or will our overreliance on AI mean that we fail to look at the medium- and long-term ramifications? That still really comes down to critical analysis, and AI can potentially help with that if you ask it the question and if you go through the vetting and the checks and balances. But if you're not putting sufficient energy and thought into that vetting and those checks and balances, you may not get to a better solution at all.
Justin Hendrix:
So it's almost like you may see, as one of your chapters puts it, tens of thousands of targets, but you don't really see the battlefield. You don't really see the sort of strategic questions that need to be answered in order to determine what's the right step forward.
Katrina Manson:
You could. It depends on how you bring together all the data, and in Drew Cukor's vision, you would. The whole point was to try and pierce the fog of war. But the pursuit of targets may be treated as sufficient to overcome something like the will to fight, which the US looked at in the case of Ukraine, for example, and which it got really wrong in its analysis of what was happening in Afghanistan before the withdrawal. Set those mass-scale intelligence failures against the fact that the US has repeatedly said how proud it is that it was able to judge in advance that Russia was going to invade Ukraine, and yet it encountered great difficulty in trying to convince European allies that Russia really was intending to invade. It just depends. But in no way should AI be seen as a magic solution, and I think even its advocates say that.
And I think again and again in the book, I see examples where it's only as good as the question asked of it, and even then it could fail if it's been fed faulty data or used beyond its capacity.
Justin Hendrix:
I want to ask about one detail. You point out that one of the things Maven is drawing in is social media information. You talk in particular about Ukraine and the idea that Maven would get a ping every time a member of the public posted a missile explosion from their phone, TikTok videos showing up with geotags. Given the extent to which social media is getting hoovered into this thing and sentiment analysis is being done, it's not hard to imagine some group of people being deemed enemies or combatants merely from the sentiment of their TikTok posts. I don't know if that's happening, but from what you've collected here, it makes you wonder exactly how decisions are made. How do you think about that piece of it, this sort of mass surveillance aspect, and how it changes the way we do warfare?
Katrina Manson:
So two things. The way that was used in Ukraine was to try and establish whether local sentiment in Ukraine was switching to support Russia or in any way losing the will to fight against Russia. That was helping the US decide where it should best put its support to Ukraine and where Ukraine would be best positioned to move limited troops and munitions to fight hardest. So a very different scenario from the one you just sketched out in terms of picking out targets. It was really to inform how the US might best support Ukraine, or how Ukraine might pursue its own decisions. What I do think it shows is the ability of anything to become a data point. How those data points are collected, the knowledge the public needs to have (we've been told for years now that your phone is essentially a weapon in your pocket), the extent to which ad tech is used by the military machine, the extent to which any of this will be used to harness the information of civilians: all of that, of course, becomes really important.
Now, the US military is not conducting operations against US citizens. So the question for me becomes: to what extent are there similar platforms operated by domestic agencies? And certainly I explore in the book this idea of the imperial boomerang, that the technologies a country invents to pursue so-called enemies abroad, or perceived enemies abroad, eventually come back to be used against that country's own citizenry. We know, because DHS puts out a very interesting list of the AI tools it's using, that many ICE enforcement officers are using AI tools. There's been reporting elsewhere, not mine, that Palantir products are being used, that AI is looking at information. For me, the big question is what information, which datasets, are being brought together. To what extent do people understand, when they sign those licenses, where their data may end up? To what extent do the companies themselves bear any responsibility for where those datasets end up?
Looking at the way the US military is developing its relationship with ad tech brokers will certainly be a question of mine moving forward. And it is just extraordinary that when you have satellites, drones, phones, and the internet all combined, that you can create a very comprehensive picture of almost anywhere in the world.
Justin Hendrix:
I have to ask you about Anthropic and the Pentagon. Of course, you've covered it; it came after the book was published. I'm sure you've done probably hundreds of media interviews on it at this point, but have we learned anything from this event?
Katrina Manson:
I think it's a very significant dispute to have the government take this unprecedented action, and I'm trying to parse: to what extent is this a real philosophical debate? To what extent is it about the inability of technology to have boundaries? And to what extent is this a political fight? Obviously the president has called the company left-wing nut jobs. When you have that language, it makes it difficult to understand: is this a partisan political fight, or is this really about what Anthropic says it's about, which is having red lines on the use of LLMs in fully autonomous weapon systems and for domestic mass surveillance? The Pentagon has pushed back very strongly on both to say, "We don't do mass domestic surveillance, and we have a policy guiding us on autonomy and the development of autonomous weapon systems," which is the one I described earlier.
The court case is still rolling, so we'll see who wins. I think for me, it is really interesting that Anthropic leaned in so early to classified work. The classified networks are where the US military fights its wars, and the extent to which Anthropic had full vision over the way its LLMs were being used, or could be used in a future war, is a big question for me. Even if you have auditability at the time, what I've learned through Project Maven is that no one has access to the full use of these tools in a classified setting. So to some extent, you are asking the US military to be a good custodian of very advanced tech, and you're never going to be able to know exactly how it was used. That was Anthropic leaning in very early on with this technology and now trying to pull back.
The US is trying to develop fully autonomous weapon systems. I report in the book that some systems already exist, and even this year, the second Trump administration launched a prize challenge involving the AI frontier labs to create voice-controlled drone swarming tech. So imagine someone out, let's say, on a beach saying, "Move left," and the drones being able to move left; it's something like that, and it would involve voice translation using LLMs. Anthropic, I've reported, put in for that but was not selected. So they were comfortable that even that level of the pursuit of autonomy could work for them. They had some research goals that they associated with it; it's a research and development program. xAI is part of that. OpenAI is supporting two bids, I've reported. Palantir is part of that. So you start to see those who are most forward-leaning in data and AI really trying to develop these weapons, and Anthropic pulling back may not slow down that pursuit.
Justin Hendrix:
The book's penultimate chapter is called The Winchester House. Why did you choose that as the title? And what do you think it communicates? What are you trying to get across?
Katrina Manson:
Well, I spoke at length for the book with Drew Cukor's wife, Kirsten Cukor, who had been recommended to me by people who knew her. And I discovered that throughout the time of Project Maven, of course, they were talking about the significance of what they were undertaking. Drew Cukor would come home saying people were accusing him of trying to build Skynet. She is a really interesting and important person for me in trying to understand what it is like for a family so close to a new, highly controversial technology that is not well understood, one about which the claims run as far as this could end the world and humanity. How did people think about that in their private lives? Kirsten Cukor discusses that with me. She is no fan of war. She has made her peace with what her husband does because she argues that if anyone is going to deliver war, it should be done ethically.
She believes that her husband is ethical, but she relates to me that she half-jokes to him one day, "Will I become like the lady in the Winchester House?" She's talking about the Winchester rifle. At the time when it went into mass supply, it was an effort to ... It was a self-reloading rifle that could speed up the pace and scale of war and, in theory, deliver greater accuracy. So for her, it was already a metaphor for the claims being made about AI. And the widow of the Winchester family ends up making a home with all her millions, a home she keeps building and rebuilding and rebuilding. The folklore around it is that she is haunted by the spirits of those killed by the Winchester rifle. And so Kirsten Cukor muses, "Will I be haunted by the spirits of dead Russians?" She is grappling with what the legacy of AI tech will be.
And I think for me, one of the constant claims made for tech is that it will make war nicer, better, easier to win. If you go back even as early as 1899, there's a Polish banker named Ivan Bloch who wrote a treatise translated into English as "Is War Now Impossible?" He was looking at the mass manufacture of weapons and the claim made for it: that war would be so gruesome and killing would be so quick that no one would ever dare go to war again. And he argued that was wrong, that war would not become shorter thanks to better weapons. It would become longer. There wouldn't be decisive victories; there would be stalemate. He almost predicts the trench warfare of World War I. Not quite, but he's almost there. And I've spent a long time, many years, just mulling his position and trying to understand, for those who claim tech will improve war, what that really looks like, and, since technology is coming whether people want it or not, what those checks and balances and those understandings of where technology can fail really look like.
There's a very early example, if you'll permit me, that I turn to in the book: US forces in Afghanistan in 2001, and a strike is called in using a machine that runs out of battery. When it turns back on, the coordinates reset to the location of the machine itself instead of the coordinates of the enemy, and the missile strikes the US operators themselves. In so many of those moments, there is a big gap between human understanding and technology. And if you're going to use powerful technology, you need operators to be trained on it. You need to know where it fails. You need to understand so much more than the best-case scenario; you need to really work through all those worst-case scenarios, and too often those worst-case scenarios are learned in the field.
Justin Hendrix:
Well, if it's not clear to my listeners just from this interview, these are urgent issues and this book is a must-read. It's called Project Maven: A Marine Colonel, His Team, and the Dawn of AI Warfare by Katrina Manson.
Katrina, thank you very much for speaking to me.
Katrina Manson:
Thanks. Thanks for having me.