Global: Researchers raise concerns about ChatGPT's impact on democracy
In the article "How ChatGPT Hijacks Democracy", researchers from Harvard University explain how effective content moderation will become even more complicated, especially when elected officials are the targets. Quashing the spread of mis- and disinformation remains one of the greatest challenges for platforms, and the speed and ease with which bad-faith actors can now generate original, human-sounding content poses an even greater threat to access to quality, reliable information. Elected officials could be more easily swayed by highly targeted campaigns that impersonate human lobbying:
'Platforms have gotten better at removing “coordinated inauthentic behavior.” Facebook, for example, has been removing over a billion fake accounts a year. But such messages are just the beginning. Rather than flooding legislators’ inboxes with supportive emails, or dominating the Capitol switchboard with synthetic voice calls, an A.I. system with the sophistication of ChatGPT but trained on relevant data could selectively target key legislators and influencers to identify the weakest points in the policymaking system and ruthlessly exploit them through direct communication, public relations campaigns, horse trading or other points of leverage.'
The Business & Human Rights Resource Centre invited Microsoft and OpenAI to respond to researchers' concerns about ChatGPT's impact on democratic processes. The companies did not respond.