The Defense Advanced Research Projects Agency (DARPA) has developed new technology to protect military Artificial Intelligence (AI) systems from being tricked, according to a senior official.
The technology comes from DARPA’s Guaranteeing AI Robustness Against Deception (GARD) program, which started a few years ago.
Matt Turek, deputy director for DARPA’s Information Innovation Office, explained during a virtual event hosted by the Center for Strategic and International Studies that the program aims to defend AI systems from adversarial attacks.
Adversaries could deceive AI systems by inserting carefully crafted noise patterns into sensor data, causing misclassification. For example, adding a small, targeted perturbation to an image could push a machine learning algorithm into the wrong output. They could also mount physically realizable attacks, such as adversarial stickers or patches placed on real-world objects that cause AI systems to misidentify them.
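To make the noise-injection idea concrete, here is a minimal sketch of a gradient-sign (FGSM-style) perturbation against a toy linear classifier. The model, weights, and inputs are all hypothetical illustrations, not anything from the GARD program; the point is only that a small, bounded nudge to each input feature, chosen using the model's gradient, can flip its prediction.

```python
import numpy as np

# Hypothetical linear "sensor classifier": sign(w @ x) decides the class.
w = np.array([2.0, -1.0, 0.5])   # illustrative model weights
x = np.array([0.5, 0.2, 0.4])    # illustrative clean sensor reading

def predict(v):
    return 1 if w @ v > 0 else -1

# FGSM-style attack: step each feature opposite the score gradient.
# For the score s = w @ x, the gradient with respect to x is simply w,
# so the attacker perturbs x by -eps * sign(w), bounded by eps per feature.
eps = 0.4
x_adv = x - eps * np.sign(w)

print(predict(x))      # clean input: classified as 1
print(predict(x_adv))  # perturbed input: classified as -1
```

The perturbation here is bounded to 0.4 per feature, yet it drives the score from +1.0 to -0.4 and flips the class, which is the basic failure mode GARD-style defenses are meant to harden models against.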
Through the GARD program, DARPA has collaborated with industry partners to develop algorithms and capabilities to counter such trickery.
Deception attacks could allow adversaries to control autonomous systems, manipulate ML-based decision support applications, and compromise tools and systems reliant on ML and AI technologies.
DARPA plans to transition GARD-related capabilities to other Defense Department components by fiscal 2024, when the project is set to conclude.
The Chief Digital and AI Office (CDAO), formed in 2022 to accelerate the adoption of AI and related tech within the Defense Department, will receive these capabilities.
Turek noted that DARPA’s mission is to prevent and create strategic surprises, making the CDAO a natural transition partner.
However, DARPA also aims to aid organizations outside the Defense Department by providing open-source tools and algorithms developed through its research.
While the Pentagon requested $10 million for GARD in fiscal 2024, there’s no additional funding allocated for fiscal 2025 as the program is concluding.
The GARD initiative is part of DARPA’s broader efforts in AI. Around 70% of its programs involve AI, machine learning, or autonomy.
Through initiatives like AI Next, DARPA has invested over $2 billion since 2018 to advance AI for national security purposes.
The president’s fiscal 2025 budget includes an additional $310 million for DARPA’s AI Forward initiative, which aims to develop trustworthy, ethically grounded AI technology for national security and societal needs.
Turek emphasized DARPA’s commitment to advancing highly trustworthy AI for critical applications.
Source: DefenseScoop