The Pentagon is working to keep its AI-enabled weapons from being deceived into acting against their operators' intent. The concern is that adversaries could trick the systems into doing the wrong thing, for example by feeding them doctored images or spoofed signals. To tackle the problem, the department launched a program called GARD (Guaranteeing AI Robustness against Deception) in 2022.
Researchers have shown how easily these models can be fooled. A seemingly harmless image, altered in ways a human would barely notice, can throw a classifier off completely: edited the right way, a picture of a bus full of passengers might be identified as a tank.
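To make the failure mode concrete, here is a minimal sketch of the fast gradient sign method (FGSM), one of the simplest techniques researchers use to demonstrate this weakness. Everything in it is assumed for illustration: the off-the-shelf ResNet-18 model, the random stand-in image, and the class label are not details from GARD's research.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

# Illustrative setup: a stock pretrained classifier and a random stand-in
# "photo" (a real attack would start from an actual image).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
image = torch.rand(1, 3, 224, 224, requires_grad=True)
label = torch.tensor([654])  # hypothetical class index, chosen only for illustration

# Compute the classification loss against the chosen label, then the
# gradient of that loss with respect to the input pixels.
loss = F.cross_entropy(model(image), label)
loss.backward()

# FGSM: shift every pixel a tiny step in the direction that increases the
# loss. epsilon is small enough that a human would barely see the change.
epsilon = 0.03
adversarial = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0)

with torch.no_grad():
    before = model(image).argmax(dim=1).item()
    after = model(adversarial).argmax(dim=1).item()
print(f"prediction before: {before}, after perturbation: {after}")
```

The point of the example is the mechanism: because the gradient tells an attacker exactly which pixel changes hurt the model most, a visually negligible perturbation can be enough to flip the prediction.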
Public unease about the Pentagon building AI weapons has prompted a policy response. Under the department's updated rules for military AI, programs must show that their systems behave responsibly and must obtain approval before the technology is put to use.
GARD is modestly funded but has made real progress, and its researchers have delivered defensive tools to the Defense Department's AI office. Even so, some advocacy groups remain uneasy, warning that AI weapons could strike without cause even when no one is deliberately deceiving them.
With the Pentagon fielding more and more AI-enabled weapons, fixing these vulnerabilities is becoming urgent. The Defense Advanced Research Projects Agency (DARPA) is leading the effort alongside companies including Two Six Technologies, IBM, and Google Research, which have built tools and tests for probing how well AI models stand up to adversarial attacks.
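One publicly available example of such tooling is IBM's open-source Adversarial Robustness Toolbox (ART); whether this particular library is among the tools delivered under GARD is an assumption here, and the snippet below is only a sketch of what this kind of robustness testing typically looks like: wrap a model, run a standard attack against it, and measure how many predictions flip.

```python
import numpy as np
import torch
import torchvision.models as models
from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import ProjectedGradientDescent

# Wrap an ordinary PyTorch model so ART's attacks can drive it.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
classifier = PyTorchClassifier(
    model=model,
    loss=torch.nn.CrossEntropyLoss(),
    input_shape=(3, 224, 224),
    nb_classes=1000,
    clip_values=(0.0, 1.0),
)

# Placeholder batch of images; a real evaluation would use held-out data.
x = np.random.rand(8, 3, 224, 224).astype(np.float32)

# Run a standard evasion attack (projected gradient descent) and compare
# the model's answers on clean versus perturbed inputs.
attack = ProjectedGradientDescent(
    estimator=classifier, eps=0.03, eps_step=0.01, max_iter=10
)
x_adv = attack.generate(x=x)

clean_pred = classifier.predict(x).argmax(axis=1)
adv_pred = classifier.predict(x_adv).argmax(axis=1)
print(f"predictions flipped on {np.mean(clean_pred != adv_pred):.0%} of the batch")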
Source: NDTV