AI Chatbots Providing Wrong Election Info, Warns New Study

A recent study by the AI Democracy Projects and Proof News reveals that AI-driven chatbots are frequently sharing inaccurate election information. The study examined popular chatbots like Google’s Gemini and OpenAI’s GPT-4, finding that over half of their responses were incorrect, harmful, or incomplete. This alarming discovery coincides with the ongoing U.S. presidential primaries, as more Americans turn to these chatbots for information.

AI chatbots are serving up wildly inaccurate election information

The rise of advanced AI technology promised quicker access to factual information, but the study indicates a different reality. The AI models, including Meta’s Llama 2, often provided misleading details, directing voters to non-existent polling places or inventing illogical answers based on outdated information.

Notably, none of the tested AI models accurately conveyed information about election rules, such as the prohibition on wearing campaign-branded clothing at Texas polling places. Experts worry that misinformation from AI chatbots could misguide voters or discourage them from participating in elections.

While some believe AI could enhance election processes, there are concerns about misuse. The study points to incidents like AI-generated robocalls used to manipulate voters and Google’s recent pause of Gemini’s image generation after it produced historically inaccurate images.

Critics question the testing processes and safety measures of these AI models. Google, Meta, and Anthropic responded to the findings, with Meta emphasizing that Llama 2 is a developer model not intended for public use. Despite these concerns, Congress has yet to pass laws regulating AI in politics, leaving tech companies responsible for managing their chatbots’ accuracy.

Source: CBS News