Microsoft has named four developers it sued for misusing its AI tools to create deepfake images of celebrities. The company filed a lawsuit in December 2024 to uncover their identities, and a court order allowed it to seize a website linked to the operation, which helped it track them down.
The four developers are part of a cybercrime group called Storm-2139:
- Arian Yadegarnia (Iran) – also known as “Fiz”
- Alan Krysiak (UK) – also known as “Drago”
- Ricky Yuen (Hong Kong) – also known as “cg-dot”
- Phát Phùng Tấn (Vietnam) – also known as “Asakuri”
Microsoft claims the group bypassed the safety controls on its AI tools to generate arbitrary images. They then sold access to these modified tools, which customers used to make deepfake images, including fake nude pictures of celebrities.
After Microsoft took legal action and shut down their website, the group reportedly panicked, and some members began blaming each other. Microsoft also says more individuals are involved but is withholding their names to avoid interfering with an ongoing investigation.
Deepfake images have become a major concern, affecting celebrities like Taylor Swift. In January 2024, Microsoft had to update its AI tools after fake images of Swift spread online. The rise of deepfake technology has also caused scandals in schools, leaving victims feeling violated and unsafe.
The use of AI for creating deepfakes has sparked debate. Some argue that AI models should remain closed-source to prevent misuse, while others believe open-source AI encourages innovation and that abuse can be curbed in other ways. Either way, the immediate problem is that AI tools are being used to spread fake and harmful content online.
Governments are now taking action against deepfake crimes. The proposed NO FAKES Act in the U.S. aims to make it illegal to create AI replicas of a person's likeness without consent. The UK already punishes those who share deepfake porn and plans to criminalize its production, and Australia has banned the creation and distribution of non-consensual deepfakes.
While AI’s risks are often exaggerated, its misuse for deepfakes is a real problem. Legal measures are being used to fight back, with arrests already happening worldwide.
Source: Gizmodo