Study reveals the perils of AI-driven search engines in influencing voter information
"ChatGPT and Co: Are AI-driven search engines a threat to democratic elections?" 5 October 2023
A new study by AlgorithmWatch and AI Forensics shows that using Large Language Models like Bing Chat as a source of information for deciding how to vote is a very bad idea. Their answers to important questions are in part completely wrong and in part misleading, which makes chatbots like ChatGPT a danger to the formation of public opinion in a democracy...
Fake poll numbers and fake candidates
If one asked the supposedly "intelligent" search engine on September 12 what the three most recent polls predicted for the upcoming election in Bavaria, the answer was that Freie Wähler would end up with 4 percent of the vote. In fact, the election forecasts on that day put Freie Wähler at between 12 and 17 percent...
Answers concerning the vote: Misleading and completely off the mark
In a joint research project with technology experts from AI Forensics, AlgorithmWatch examined the quality of Bing Chat’s answers to questions about the state elections in Bavaria and Hesse and the federal elections in Switzerland. As the answers were often either completely wrong or at least misleading, we came to the conclusion that it would be best not to use this search feature to read up on upcoming elections or votes. Even though some results were correct, one can never tell whether the information the chatbot provides is reliable or not...
...Experts have repeatedly accused Big Tech companies of launching their systems too early and of not testing them sufficiently. Such accusations were directed not only at Microsoft, Bing Chat’s provider, or OpenAI, ChatGPT’s provider, but also at Google and Facebook. Admittedly, chatbots often phrase things so well that people get the impression that they are trustworthy. Since the seemingly trustworthy facts are often distorted, this persuasiveness makes the bots particularly dangerous. A Belgian man’s suicide was attributed to the fact that a chatbot based on EleutherAI’s LLM GPT-J had convinced him that he could stop climate change by sacrificing his life. It is currently completely unclear who is to be held accountable in such a case...
...A Microsoft spokesperson told AlgorithmWatch: "Accurate information about elections is essential for democracy, which is why we improve our services if they don't meet expectations. We have already made significant improvements to increase the accuracy of Bing Chat’s responses, with the system now creating responses based on search results and taking content from the top results. We continue to invest in improvements. Recently, we corrected some of the answers the report cites as examples of misinformation. In addition, we're also offering an 'Exact' mode for more precise answers. We encourage users to click through the advanced links provided to get more information, share their feedback, and report issues by using the thumbs-up or thumbs-down button." (Please note that this is a translation of the original statement by Microsoft.)...