STAFF: OK, well, welcome, everybody. Thank you very much for joining us. Really apologize for the delay there. In just a moment, Deputy Secretary of Defense Kathleen Hicks is going to join us to discuss the state of A.I. in the Department of Defense. This briefing builds on President Biden and Vice President Harris' announcements earlier this week on the safe, secure and trustworthy development and use of artificial intelligence.
The deputy secretary is going to begin with some remarks to lay out the landscape of artificial intelligence use within the DOD, and then she'll take a few questions. And immediately following her briefing, Brigadier General Pat Ryder will step to the podium to brief news of the day and answer current-events questions in line with our normal schedule. And time is tight, so I'll stop here and introduce the 35th deputy secretary of defense, Kathleen Hicks.
DEPUTY SECRETARY OF DEFENSE KATHLEEN HICKS: OK, thanks, everybody. Good afternoon, and thank you for joining us to talk about the state of A.I. in the Department of Defense.
So earlier this week, President Biden and Vice President Harris spoke eloquently about the administration's commitment to advancing the safe, secure and trustworthy development and use of artificial intelligence, and the president signed an executive order that lays out a strong, positive vision for government-wide responsible A.I. adoption, setting a model for industry and the world.
As part of that, we here in DOD look forward to working with the White House and other national security agencies on a national security memorandum on A.I. that we expect will build on the responsible A.I. work we've done here at DOD.
This is a topic we care a lot about at DOD, and we've been working on it for quite some time. Not only is A.I. on the minds of many Americans today, it's a key part of the comprehensive, warfighter-centric approach to innovation that Secretary Austin and I have been driving from day one. After all, DOD is hardly a newcomer to A.I. The Pentagon has been investing in A.I. and fielding data- and A.I.-enabled systems for over 60 years: from DARPA funding for the first academic A.I. research hubs at MIT, Stanford and Carnegie Mellon in the 1960s; to the Cold War-era SAGE air defense system, which could ingest vast amounts of data from multiple radars, process it in real time and produce targeting information for intercepting aircraft and missiles; to the Dynamic Analysis and Replanning Tool, DART, which DOD started using in the early 1990s, saving millions of dollars and many logistical headaches in moving forces to the Middle East for Operations Desert Shield and Desert Storm. More recently, Apple's Siri has roots not just in decades of DOD-driven research on A.I. and voice recognition, but also in a specific DARPA project to create a virtual assistant for military personnel.
Of course, increasingly over the last dozen years, advances in machine learning have heralded and accelerated new generations of A.I. breakthroughs, with much of the innovation happening outside DOD and government, and so our task in DOD is to adopt these innovations wherever they can add the most military value. That's why we've been rapidly iterating and investing over the past two-plus years to develop a more modernized, data-driven and A.I.-empowered military now.
In DOD, we always succeed through teamwork, and here, we're fortunate to work closely with a strong network of partners in national labs, universities, the intelligence community, traditional defense industry and also, nontraditional companies in Silicon Valley and hubs of A.I. innovation all across the country. In several of those, we're physically present, including through offices of the Defense Innovation Unit, which we recently elevated to report directly to the secretary.
As we've focused on integrating A.I. into our operations responsibly and at speed, our main reason for doing so has been straightforward: it improves our decision advantage. From the standpoint of deterring and defending against aggression, A.I.-enabled systems can help accelerate the speed of commanders' decisions and improve the quality and accuracy of those decisions, which can be decisive in deterring a fight and in winning a fight.
And from the standpoint of managing across the world's largest enterprise, since our vast scale can make it difficult for DOD to see itself clearly, spot problems and solve them, leveraging data and A.I. can help leaders make choices that are smarter, faster and even lead to better stewardship of taxpayer dollars.
Since the spring of 2021, we've undertaken many foundational efforts to enable all of this, spanning data, talent, procurement and governance. For instance, we issued data decrees to mandate that all DOD data be visible, accessible, understandable, linked, trustworthy, interoperable and secure. Our A.I. and Data Acceleration initiative, ADA, deployed data scientists to every combatant command, where they're integrating data across applications, systems and users. We awarded Joint Warfighting Cloud Capability contracts to four leading-edge commercial cloud providers, ensuring we have computing, storage, network infrastructure and advanced data analytics that scale on demand. We stood up DOD's Chief Digital and Artificial Intelligence Office, or CDAO, to accelerate adoption of data analytics and A.I. from the boardroom to the battlefield. The secretary and I are ensuring CDAO is empowered to lead change with urgency, from the E Ring to the tactical edge.
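(For illustration: a minimal sketch of how the seven data-decree attributes might be enforced as an automated metadata check. The DatasetRecord structure and field names below are hypothetical, not an actual DOD schema.)

```python
# Minimal sketch, under assumed names: flag datasets that do not yet satisfy
# all seven data-decree attributes.
from dataclasses import dataclass, field

DECREE_ATTRIBUTES = (
    "visible", "accessible", "understandable", "linked",
    "trustworthy", "interoperable", "secure",
)

@dataclass
class DatasetRecord:
    name: str
    metadata: dict = field(default_factory=dict)  # attribute -> satisfied?

def decree_gaps(record: DatasetRecord) -> list[str]:
    """Return the decree attributes this dataset has not yet satisfied."""
    return [a for a in DECREE_ATTRIBUTES if not record.metadata.get(a, False)]

# Usage: a dataset that is catalogued but not yet interoperable or secure.
record = DatasetRecord(
    name="sensor_tracks_v2",
    metadata={a: True for a in DECREE_ATTRIBUTES[:5]},
)
print(decree_gaps(record))  # ['interoperable', 'secure']
```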
We've also invested steadily and smartly in accompanying talent and technology: more than $1.8 billion in A.I. and machine learning capabilities alone over the coming fiscal year.
And today, we're releasing a new Data Analytics and A.I. Adoption Strategy, which not only builds on DOD's prior year A.I. and data strategies but also includes updates to account for recent industry advances in federated environments, decentralized data management, generative A.I., and more. I'm sure our CDAO, Dr. Craig Martell, will say more about that when you all speak with him later this afternoon.
All this and more is helping realize combined joint all-domain command and control, CJADC2. To be clear, CJADC2 isn't a platform or single system we're buying. It's a whole set of concepts, technologies, policies, and talent that are advancing a core U.S. warfighting function, the ability to command and control forces.
So we're integrating sensors and infusing data across every domain, while leveraging cutting-edge decision support tools to enable high-op-tempo operations. It's making us even better than we already are at joint operations and combat integration.
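(For illustration: a minimal sketch of the kind of sensor integration described here, fusing position reports about a single track from sensors in different domains into one confidence-weighted estimate. The report fields and confidence values are hypothetical.)

```python
# Minimal sketch, under assumed field names: confidence-weighted fusion of
# (lat, lon) reports about one track from several sensors.

def fuse_reports(reports: list[dict]) -> tuple[float, float]:
    """Average the reported positions, weighting each by sensor confidence."""
    total = sum(r["confidence"] for r in reports)
    lat = sum(r["lat"] * r["confidence"] for r in reports) / total
    lon = sum(r["lon"] * r["confidence"] for r in reports) / total
    return lat, lon

# Usage: a radar, a satellite, and a maritime sensor report the same track.
reports = [
    {"sensor": "radar",     "lat": 21.31, "lon": 157.86, "confidence": 0.9},
    {"sensor": "satellite", "lat": 21.33, "lon": 157.84, "confidence": 0.6},
    {"sensor": "maritime",  "lat": 21.32, "lon": 157.85, "confidence": 0.8},
]
print(fuse_reports(reports))  # approximately (21.319, 157.851)
```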
CJADC2 is not some futuristic dream. Based on multiple Global Information Dominance Experiments, work in the combatant commands, like INDOPACOM and CENTCOM, as well as work in the military services, it's clear these investments are rapidly yielding returns.
That's the beauty of what software can do for hard power. Delivery doesn't take several years or a decade. Our investments in data, A.I., and compute are empowering warfighters in the here and now, in a matter of months, weeks, and even days.
We've worked tirelessly for over a decade to be a global leader in the fast and responsible development and use of A.I. technologies in the military sphere, creating policies appropriate for their specific use. Safety is critical because unsafe systems are ineffective systems.
The Pentagon first issued a responsible use policy for autonomous systems in 2012, and we've maintained our commitment since as technology has evolved, adopting and affirming ethical principles for using A.I., issuing a new strategy and implementation pathway last year focused on responsible use of A.I. technologies, and updating that original 2012 directive earlier this year to ensure we remain the global leader of not just development and deployment but also safety.
As I said before, our policy for autonomy in weapon systems is clear and well established: there is always a human responsible for the use of force, full stop. Even as we are swiftly embedding A.I. in many aspects of our mission, from battlespace awareness, cyber and reconnaissance to logistics, force support and other back-office functions, we are mindful of A.I.'s potential dangers and determined to avoid them.
Unlike some of our strategic competitors, we don't use A.I. to censor, constrain, repress, or disempower people. By putting our values first and playing to our strengths, the greatest of which is our people, we've taken a responsible approach to A.I. that will ensure America continues to come out ahead.
Meanwhile, as commercial tech companies and others continue to push forward the frontiers of A.I., we're making sure we stay at the cutting edge with foresight, responsibility, and a deep understanding of the broader implications for our nation.
For instance, mindful of the potential risks and benefits offered by large language models and other generative A.I. tools, we stood up Task Force Lima to ensure DOD responsibly adapts, implements, and secures these technologies.
Candidly, most commercially available systems enabled by large language models aren't yet technically mature enough to comply with our ethical A.I. principles, which is required for responsible operational use. But we have found over 180 instances where such generative A.I. tools could add value for us with oversight, like helping to debug and develop software faster, speeding analysis of battle damage assessments, and verifiably summarizing texts from both open source and classified data sets.
Not all of these use cases are notional. Some DOD components started exploring generative A.I. tools before ChatGPT and similar products captured the world's attention. A few even made their own: isolating foundation models, fine-tuning them for specific tasks with clean, reliable, secure DOD data, and taking the time to further test and refine the tools.
While we have much more evaluating to do, it's possible some might make fewer factual errors than publicly available tools, in part because, with effort, they can be designed to cite their sources clearly and proactively.
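(For illustration: a minimal sketch of what verifiable, source-citing summarization can look like in practice. Every summary sentence must cite a known source passage, and the quoted span must actually appear in that source. The model that produces the claims is abstracted away; this is a hypothetical checker, not any specific DOD or vendor pipeline.)

```python
# Minimal sketch, under assumed structures: verify that each summary claim
# cites a real source and that the cited span actually appears there.
from typing import NamedTuple

class Claim(NamedTuple):
    sentence: str   # a sentence of the generated summary
    source_id: str  # the document it cites
    quote: str      # the span it claims supports the sentence

def verify(claims: list[Claim], corpus: dict[str, str]) -> list[str]:
    """Return human-readable problems; an empty list means all claims check out."""
    problems = []
    for c in claims:
        if c.source_id not in corpus:
            problems.append(f"unknown source: {c.source_id}")
        elif c.quote not in corpus[c.source_id]:
            problems.append(f"quote not found in {c.source_id}: {c.quote!r}")
    return problems

# Usage: one claim whose citation survives the check.
corpus = {"rpt-001": "Two radars observed the aircraft at 0400Z."}
claims = [Claim("The aircraft was observed at 0400Z.", "rpt-001",
                "observed the aircraft at 0400Z")]
print(verify(claims, corpus))  # [] -> every citation checks out
```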
Although it would be premature to call most of them operational, it's true that some are actively being experimented with and even used as part of people's regular workflows, of course with appropriate human supervision and judgment, not just to validate them but also to continue improving them.
We are confident in the alignment of our innovation goals with our responsible A.I. principles. Our country's vibrant innovation ecosystem is second to none precisely because it's powered by a free and open society committed to responsible use, values, and ideals.
We are world leaders in the promotion of the responsible use of A.I. and autonomy with our allies and partners.
One example is the political declaration that we launched back in February and that Vice President Harris highlighted in London this week, which creates strong norms for responsible behavior. As the vice president noted, over 30 countries have endorsed the declaration, ranging from members of the G7 to countries in the Global South.
Another example is our A.I. Partnership for Defense, where we work with allies and partners to talk through how we can turn our commitments to responsible A.I. into reality.
Those common values are a big reason why America and the U.S. military have many capable allies and partners around the world, and why growing numbers of world-leading commercial tech innovators want to work with us.
Our strategic competitors can't say that, and we are better off for it. Those nations take a different approach. It's deeply concerning, for instance, to see some countries using generative A.I. for disinformation campaigns against America, as has been reported by tech companies and the press.
But there is still time to work toward more responsible approaches. For example, in the 2022 Nuclear Posture Review, the United States made clear that in all cases, we will maintain a human in the loop for all actions critical to informing and executing decisions by the President to initiate or terminate nuclear weapon employment.
Other nations have drawn similar bright lines, and we call on and would welcome more countries to do the same. We should be able to sit down, talk, and try to figure out how to make such commitments credible, and we hope all nations will agree.
As we've said previously, the United States does not seek an A.I. arms race with any country, including the PRC, just as we do not seek conflict. With A.I. and all our capabilities, we seek only to deter aggression and defend our country, our allies and partners, and our interests. That's why we will continue to encourage all countries to commit to responsible norms of military use of A.I. And we will continue to ensure our own actions clearly live up to that commitment, from here at the Pentagon and across all our commands and bases worldwide, to the flotilla of uncrewed ships that recently steamed across the entire Pacific, to the thousands of all-domain attritable autonomous systems we aim to field in the next two years through DOD's recently announced Replicator Initiative.
The state of A.I. in DOD is not a short story, nor is it static. We must keep doing more, safely and swiftly, given the nature of strategic competition with the PRC, our pacing challenge. At the same time, we benefit from a national position of strength, and our own uses grow stronger every day. We will keep up the momentum, ensuring we make the best possible use of A.I. technology responsibly and at speed.
With that, I'll take your questions.
STAFF: OK, we're going to start with Tara Copp, AP.
Q: One question on some current events, and then I have a couple of A.I. questions for you.
On the holds, Senator Tuberville has said that his holds do not affect military readiness. He said that they don't add stress to the officers who have had to do two jobs at once. And I just wanted to ask, from your point of view, when you walk the halls, is that your experience? Has this not affected readiness? Have officers been just fine fulfilling two roles at the same time?Â
HICKS: So we've said many times in the last six-plus months that the hold is unnecessary, unprecedented, and unsafe, and that it's bad for our military, it's bad for our military families, it's bad for the country. We have seen tragic effects of that stress. But in a day-to-day sense, we've also seen the stress at the individual human level. And I think that has been well-communicated on Capitol Hill.
We're pleased to see that Admiral Franchetti and General Allvin have already been confirmed today. We understand General Mahoney will be confirmed, we very much hope, by the end of the day. But even then you have 370, I think is the number, you know, officers who have dedicated their lives to service to the nation. It's just -- it's wrong, and it is unsafe, and it is absolutely hurting readiness.
Q: And then the A.I. -- just, when you say you've seen the tragic effects, do you think that this added stress and workload may have contributed to General Smith's illness?
HICKS: Yes, I'm not going to comment on my personal views on that. I will only say that General Smith has indicated that, you know, he's trying to work two jobs, that he's working from 6 a.m. to 11 at night. I -- I think it speaks for itself.
Q: And then a few on A.I. You touched on this in your remarks, but for the international norms for responsible use, are there red lines for what the member countries that want to sign on will and will not use A.I. for?
HICKS: I'm not sure how to answer that. I think what I would say is the United States' position of making sure that there is always a human in control is vital for kinetic effects, and we're going to stand by that. That's where we're coming from, and we think a lot of nations will join us in this. We would invite all nations to join that.
Q: OK. And then two on the Replicator. Can you give us a ballpark cost of what it will cost to fulfill Replicator?Â
HICKS: Did you want to ask the second and I'll do them together?
Q: Is it mostly going to be UAVs, is it sea drones? What kind of product?
HICKS: Sure. So cost is the wrong way to think about it. I'm sure when all is said and done, we would be able to retrospectively tell you everything that goes through Replicator, kind of what the value is, much as I talked about the $1.8 billion value, for instance, of our A.I. and machine learning programs.
But the reality is Replicator is removing kinks in the hose of the system that is innovation in DOD. There are a multitude of programs that already exist in the department that need help to get from where they are to delivery at scale, and that is where Replicator is focused.
So again, I think we would be able to go back and retrospectively capture the cost of that, but I think it's the wrong way to think about the program. It's not a program, it's a process for improving our ability to scale.
So as to the types of systems, as I've said before, what I will say at this point is we are looking across multiple domains. We are seeing systems that fit the definition I just gave you, which is, you know, on the delivery pathway but facing some challenges delivering at scale, in all of the domains, and that's what we're going to be focused on.
STAFF: OK, thank you. Next, we'll go to Brandi Vincent from FedScoop.
Q: Thanks, Eric, and thank you for doing this. Two lines of questions for you.
First, top tech experts have warned that DOD must be A.I.-ready by 2025. What does that really mean to you? And what will it take for the department to meet that aim? Can you speak to any tangible actions in this new strategy from the CDAO that will help ensure the U.S. military reaches near-term A.I. readiness goals?
HICKS: Sure. I think the number one thing I would say we need is on-time appropriations and predictable resourcing to the plans that we've already put out. Our 2024 budget request has this really healthy, focused investment on A.I., for example.
I think that's a major plus-up, once we can get those appropriations, to getting to those goals, but absent any predictability in our funding stream, it's very difficult for us to be able to project with accuracy what we can and will deliver.
What I can tell you is we are making strides every single day in our experimental approach. The strategy you'll hear about from Craig later today really builds out that iterative learning, improving as we go, and I think that's a key piece of how we get there.
The other key piece, as I mentioned in my remarks, is the partnerships. We have, you know, leading-edge companies, researchers ready and willing to work with the United States. They see the value proposition, they see the challenges that authoritarian states are putting forward with their approach to A.I., and they want to work with us. That's really what's going to help propel us as well.
Q: And then can you speak to how the CDAO and DOD have been advising and supporting Ukraine's experimentation and application of A.I. in this conflict? Is the team at CDAO providing algorithms or other capabilities to Ukraine?
HICKS: Sure. What I would say is CDAO is part of our whole team effort here in the department to provide assistance and support to Ukrainian partners. So they are part of the team. I think what we've seen play out in Ukraine is instructive for where the department in general is going, which is you've got to have really good quality data, and then you've got to take that decision-quality data and move it to the operator, logistician, or decision maker. And that's what we're doing here at DOD.
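(For illustration: a minimal sketch of the pipeline described in that answer, screening records for decision quality and routing the ones that pass to the operator, logistician, or decision maker who needs them. The field names and quality threshold are hypothetical.)

```python
# Minimal sketch, under assumed field names: keep only decision-quality
# records, then deliver each one to its intended consumer.
from typing import Iterable

REQUIRED_FIELDS = ("timestamp", "source", "position")

def is_decision_quality(record: dict) -> bool:
    """A record qualifies only if it is complete and confidently sourced."""
    complete = all(record.get(f) is not None for f in REQUIRED_FIELDS)
    return complete and record.get("confidence", 0.0) >= 0.8  # assumed threshold

def route(records: Iterable[dict]) -> dict[str, list[dict]]:
    """Group decision-quality records by consumer role."""
    queues: dict[str, list[dict]] = {}
    for r in filter(is_decision_quality, records):
        queues.setdefault(r.get("consumer", "decision_maker"), []).append(r)
    return queues

# Usage: the complete record reaches the operator; the incomplete one is dropped.
records = [
    {"timestamp": "0400Z", "source": "radar", "position": (21.3, 157.9),
     "confidence": 0.95, "consumer": "operator"},
    {"timestamp": None, "source": "report", "position": None, "confidence": 0.4},
]
print(route(records))
```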
STAFF: Thank you. Tony Capaccio, Bloomberg?
Q: On Replicator -- can you segue Replicator and A.I.? Where does A.I. fit into the program? You're envisioning masses of thousands of autonomous, attritable kamikaze drones, basically. Where would A.I. fit in? Targeting -- providing targeting information, or command and control architecture, or maybe ...
HICKS: Yeah, let me say first, I don't think kamikaze drone is the right way to think about it. You need to think, again, well beyond the kinetic side of this to the ability to deliver logistics, command and control, ISR, if you will, and, again, in multiple domains.
So the idea that this is all about sort of kinetic swarms I think is very misleading. But I do think, back to my example on Ukraine, again, in everything that we're already doing today in DOD, it is really in the command and control, the ability to take information, fuse it together, whatever the purpose of it is, and use that to create decision advantage. Replicator's going to help with that, regardless of which domain it's operating in. That's where the A.I. would intersect with the autonomy.
Q: Well, would it be, like, sending them on missions -- programming missions into the various drones -- or would it be planning the overall use of them?
HICKS: It could be either, yeah. Both of those are perfectly routine ways to think about the use of A.I. as you push it out through systems, whether those are attritable systems -- in the case of this Replicator first tranche that we're doing -- or any other kind of military application.
Q: Thanks.
HICKS: Yeah.
STAFF: Ladies and gentlemen, I'd like to thank the Deputy Secretary. Unfortunately, we're out of time today.