Microsoft’s AI Image Generator Raises Safety Concerns

A Microsoft artificial intelligence (AI) engineer, Shane Jones, recently voiced concerns about the company’s AI image generator. In a letter published on LinkedIn, Jones claims that the tool lacks the safeguards needed to prevent it from producing violent and explicit content. Despite his efforts to alert Microsoft management, Jones asserts, no action has been taken.

Jones, identified as a “principal software engineering manager,” sent his concerns to the Federal Trade Commission and Microsoft’s board of directors. In the letter, he contends that Microsoft is aware of systemic issues within the product that could lead to offensive and inappropriate images. Microsoft denies these allegations, stating that it has robust internal reporting channels for addressing AI problems.

Microsoft ignored safety problems with AI image generator, engineer complains

The focus of Jones’s letter is Microsoft’s Copilot Designer, a tool that uses OpenAI’s DALL-E 3 system to generate images from text prompts. According to Jones, the tool has “systemic problems” with producing harmful content and should be taken offline until those problems are resolved. He points out that Copilot Designer tends to generate sexually objectifying images of women, even from unrelated prompts.

For instance, the prompt “car accident” led Copilot Designer to generate an image of a woman in her underwear kneeling in front of a car. Microsoft says it has dedicated teams evaluating safety concerns and that its Office of Responsible AI facilitated meetings with Jones.

Microsoft launched Copilot as an AI companion last year, promoting it as a revolutionary tool for business and creative work. Jones argues that marketing Copilot for public use without disclosing its risks is irresponsible, and he points to a prior incident in which fake, sexualized images of Taylor Swift spread on social media.

In January, Microsoft updated Copilot Designer over safety concerns, closing loopholes that allowed the generation of inappropriate content. Jones cites this update as evidence that his concerns were valid. He also alleges pressure from Microsoft’s legal team to remove a LinkedIn post in which he urged OpenAI’s board of directors to suspend the availability of DALL-E 3.

Other generative AI image tools, such as Google’s Gemini, have faced similar issues with harmful content and reinforced biases. Google recently paused Gemini’s ability to generate images of people after it produced historically inaccurate depictions, such as showing people of color when asked to portray certain historical figures.

Source: NBC News