Researchers from Binghamton University, Virginia State University, and Intelligent Fusion Technology have developed new ways to detect fake images and videos created by AI. As deepfakes become harder to spot by eye, the team turned to a technique called frequency domain analysis to find telltale signs that an image was machine-generated.
In their study, the team used Generative Adversarial Networks Image Authentication (GANIA) to check whether photos were AI-generated. They created thousands of images with AI tools such as Adobe Firefly and DALL-E and found that AI-generated images carry frequency-domain features that differ from those of real photographs, especially once the images are resized.
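To make the idea of frequency domain analysis concrete, here is a minimal sketch of the kind of check involved; it is not the authors' GANIA pipeline. It computes an image's 2D FFT magnitude spectrum and measures how much energy sits outside the low-frequency center, a quantity that resizing and AI upsampling tend to distort. The filename "photo.png", the 0.25 cutoff, and the helper names are illustrative assumptions.

```python
# Illustrative sketch only -- NOT the authors' GANIA implementation.
# Assumes Pillow and NumPy are installed; "photo.png" is a hypothetical file.
import numpy as np
from PIL import Image

def log_magnitude_spectrum(path: str) -> np.ndarray:
    """Return the centered log-magnitude spectrum of a grayscale image."""
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    spectrum = np.fft.fftshift(np.fft.fft2(img))
    return np.log1p(np.abs(spectrum))

def high_frequency_ratio(spec: np.ndarray, cutoff: float = 0.25) -> float:
    """Fraction of spectral energy outside a central low-frequency square.

    Resized or upsampled images (a common step in AI generation) often show
    an atypical energy distribution here, e.g. periodic peaks or an unusually
    sharp falloff at high frequencies.
    """
    h, w = spec.shape
    ch, cw = int(h * cutoff), int(w * cutoff)
    center = spec[h // 2 - ch : h // 2 + ch, w // 2 - cw : w // 2 + cw]
    total = spec.sum()
    return float((total - center.sum()) / total)

if __name__ == "__main__":
    ratio = high_frequency_ratio(log_magnitude_spectrum("photo.png"))
    print(f"High-frequency energy ratio: {ratio:.3f}")
```

In practice, a detector would compare such statistics across many known-real and known-generated images rather than rely on a single threshold.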
Professor Yu Chen explained that real photos capture a wealth of incidental background information, while AI-generated images contain only what the prompt asks for. That difference in background detail makes AI fakes easier to spot.
The researchers also created a tool called “DeFakePro” to detect fake videos and audio. It analyzes faint electrical signals embedded in recordings to determine whether they have been edited, and it proved effective at spotting deepfake videos, which could help protect against misinformation.
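The article does not describe DeFakePro's internals, but one common way to examine the kind of faint electrical signal it refers to is to band-pass filter an audio track around the mains power frequency and track that band's strength over time, since splices and edits tend to disturb it. The sketch below does exactly that under stated assumptions; the 60 Hz choice, window length, and "recording.wav" filename are illustrative, not taken from the study.

```python
# Illustrative sketch only -- not DeFakePro itself. It band-pass filters an
# audio track around the mains frequency (60 Hz here; 50 Hz in many regions)
# and reports the filtered signal's energy per short window, the kind of
# trace that editing discontinuities would disturb.
# Assumes SciPy and NumPy; "recording.wav" is a hypothetical file.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfiltfilt

def mains_band_energy(path: str, mains_hz: float = 60.0,
                      window_s: float = 1.0) -> np.ndarray:
    """Per-window energy of the narrow band around the mains frequency."""
    rate, audio = wavfile.read(path)
    if audio.ndim > 1:                      # mix stereo down to mono
        audio = audio.mean(axis=1)
    audio = audio.astype(np.float64)

    # Narrow band-pass around the mains frequency (e.g. 59.5-60.5 Hz).
    sos = butter(4, [mains_hz - 0.5, mains_hz + 0.5],
                 btype="bandpass", fs=rate, output="sos")
    narrow = sosfiltfilt(sos, audio)

    # Average energy in consecutive fixed-length windows.
    win = int(window_s * rate)
    n_windows = len(narrow) // win
    frames = narrow[: n_windows * win].reshape(n_windows, win)
    return (frames ** 2).mean(axis=1)

if __name__ == "__main__":
    energies = mains_band_energy("recording.wav")
    print("Per-second mains-band energy:", np.round(energies, 6))
```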
Misinformation is a growing problem, especially with the rise of AI-generated content. The team hopes their work will help keep online information trustworthy and curb the spread of fake photos and videos.
Source: Biometric Update