
According to OpenAI, more and more cyber actors are using its platform to disrupt elections

Jaap Arriens | NurPhoto via Getty Images

OpenAI is increasingly becoming the platform of choice for cyber actors seeking to influence democratic elections around the world.

In a 54-page report published Wednesday, the ChatGPT creator said it disrupted “more than 20 operations and fraudulent networks from around the world that attempted to exploit our models.” The threats ranged from AI-generated website articles to social media posts from fake accounts.

The company said its "Influence and Cyber Operations" update was intended to provide a "snapshot" of what it is seeing and to "identify an initial set of trends that we believe can inform debate on how AI fits into the broader threat landscape."

OpenAI's report comes less than a month before the US presidential election. Beyond the United States, it is a significant year for elections worldwide, with contests affecting more than 4 billion people in more than 40 countries. The rise of AI-generated content has raised serious concerns about election-related misinformation. According to data from Clarity, a machine learning company, the number of deepfakes created has increased by 900% year over year.

Election misinformation is not a new phenomenon. It has been a major problem since the 2016 US presidential election campaign, when Russian actors found cheap and easy ways to spread false content across social platforms. In 2020, social media was flooded with misinformation about Covid vaccines and election fraud.

Lawmakers' concerns today are more focused on the rise of generative AI, which began with the launch of ChatGPT in late 2022 and is now being adopted by companies of all sizes.

OpenAI wrote in its report that election-related uses of AI “ranged in complexity from simple content generation requests to complex, multi-stage efforts to analyze and respond to social media posts.” The social media content primarily related to elections in the US and Rwanda and, to a lesser extent, elections in India and the EU, OpenAI said.

In late August, an Iranian operation used OpenAI's products to generate "long-form articles" and social media comments about the US election and other topics, but the company said the majority of the identified posts received few or no likes, shares or comments. In July, the company suspended ChatGPT accounts in Rwanda that were posting election-related comments on X. And in May, an Israeli company used ChatGPT to generate social media comments about elections in India. OpenAI wrote that it was able to resolve that case in less than 24 hours.

In June, OpenAI disrupted a covert operation that used its products to generate commentary on the European Parliament elections in France, as well as politics in the United States, Germany, Italy and Poland. The company said that while most of the social media posts it identified received few likes or shares, some real people responded to the AI-generated posts.

None of the election-related operations were able to generate “viral engagement” or build a “sustainable audience” through the use of ChatGPT and OpenAI's other tools, the company wrote.

