Congress pushes Defense Department to upgrade AI capabilities
As technology continues to advance at a dizzying pace, the U.S. Department of Defense finds itself struggling to keep up with the likes of Russia and China in bolstering its artificial intelligence capabilities.
A report published Tuesday by the website Politico highlighted congressional efforts to push the military to move smarter and faster on cutting-edge AI technology through bills and provisions in the upcoming National Defense Authorization Act.
A broad swath of defense experts believe that the future competitiveness of the U.S. military depends on how quickly it can acquire and deploy AI and other innovative software. These technologies have the potential to enhance intelligence gathering, autonomous weapons, surveillance platforms and robotic vehicles.
This sentiment was echoed by Sen. Joe Manchin, D-West Virginia, during a Senate Armed Services hearing in April, where he stated that AI “changes the game” of war.
Sen. Angus King, I-Maine, also a member of the Senate Armed Services Committee, voiced his concerns about the DOD’s adaptation to the changing nature of warfare during a call with Gen. Mark Milley, chairman of the Joint Chiefs of Staff. Milley acknowledged the urgent need for the military to adapt to the new demands of warfare, saying the armed forces are currently in a transition period.
Concrete uses for AI in defense are already being implemented, ranging from piloting unmanned fighter jets to providing real-time tactical suggestions for military leaders on the battlefield. The Defense Department has requested $1.4 billion for a project aimed at centralizing data from all AI-enabled technologies and sensors into a single network.
However, many of these new digital platforms and tools — in particular software — are developed by small and fast-moving startups that have minimal to no experience doing business with the Pentagon.
One of the major challenges the Defense Department faces is keeping up with the development of generative AI, a rapidly evolving technology that can communicate in humanlike language. The power of this technology is increasing almost month by month, raising concerns about its potential consequences when deployed in the field.
Rep. Rob Wittman, R-Virginia, vice chair of the House Armed Services Committee, has proposed the establishment of a new Joint Autonomy Office dedicated to autonomous systems. The House version of the 2024 NDAA contains provisions that set the stage for this office, which would be the first to specifically focus on autonomous systems, including weaponry.
The push for AI advancement within the military is not without controversy. Experts, including Signal Foundation President Meredith Whittaker, have raised concerns about the collateral consequences of deploying these models, especially in decision-making contexts.
The lack of strategic focus from the Pentagon has become a point of concern for lawmakers. The current procurement rules, designed for acquiring traditional weapons like fighter jets, do not translate well when it comes to buying new AI-enabled software technologies.
In an effort to address this issue, Project Maven was launched in 2017, with the aim of integrating commercially developed artificial intelligence into the U.S. military. Congress also established the Joint Artificial Intelligence Center in 2019 to develop, nurture and deploy AI technologies for military use.
However, retired Lt. Gen. Jack Shanahan, the inaugural director of JAIC, revealed that none of the military services were implementing AI at the desired speed and scale.
To address these shortfalls and challenges, the Defense Department is developing an unclassified data, analytics and AI adoption strategy that will replace an outdated plan from 2018. This new strategy, spearheaded by Craig Martell, the chief data officer and director of the Defense Department’s Chief Digital and Artificial Intelligence Office, is expected to be released later this summer.