
NIST Releases New Tool to Test AI Model Risks


The National Institute of Standards and Technology (NIST) has released a tool for testing risks in AI models. The tool, called Dioptra, measures how attacks that corrupt AI training data can degrade the performance of AI systems.

Dioptra, an open-source web-based tool first released in 2022, assists companies and users in assessing and analyzing AI risks. It can be used to test and research AI models, providing a common platform to expose models to simulated threats.

“Testing the effects of adversarial attacks on machine learning models is one of the goals of Dioptra,” NIST said in a press release. The software is free to download and can help government agencies and businesses evaluate AI developers’ claims about their systems’ performance.

Dioptra was launched along with documents from NIST and the new AI Safety Institute, which offer ways to reduce AI dangers, such as misuse for generating nonconsensual content. This follows the U.K.’s launch of a similar toolset, Inspect, as part of a partnership between the U.S. and U.K. to develop advanced AI testing.

Dioptra also stems from President Joe Biden’s executive order on AI, which directs NIST to assist with AI system testing and to establish safety standards. Under the order, companies developing AI models must notify the federal government and share safety test results before public deployment.

Benchmarking AI is difficult because the most advanced models are complex and much about how they were built is kept secret, but NIST believes Dioptra can help identify which attacks degrade an AI system’s performance and quantify that impact. However, Dioptra currently works only with models that can be downloaded and run locally, not with models accessed solely through an API, such as OpenAI’s GPT-4.
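To make the idea of quantifying an attack’s impact more concrete, the sketch below shows a generic data-poisoning experiment: a classifier is trained on clean data and on data with a fraction of labels flipped, and accuracy on a clean test set is compared. This is only an illustration of the kind of measurement Dioptra supports, not Dioptra’s actual interface; the dataset, model, and function names are assumptions for the example.

```python
# Illustrative only: a label-flipping poisoning experiment, not Dioptra's API.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic binary-classification data standing in for a real training set.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

def accuracy_after_poisoning(flip_fraction: float) -> float:
    """Flip labels on a fraction of training points, retrain, score on clean test data."""
    y_poisoned = y_train.copy()
    n_flip = int(flip_fraction * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # flip binary labels
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    return model.score(X_test, y_test)

for frac in (0.0, 0.1, 0.3):
    print(f"poisoned fraction {frac:.0%}: test accuracy {accuracy_after_poisoning(frac):.3f}")
```

Note that this kind of experiment requires retraining and running the model locally, which is why a tool taking this approach cannot evaluate models that are reachable only through a hosted API.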

Source: TechCrunch