In this big election year, AI architects oppose its misuse

Artificial intelligence companies have been at the forefront of developing this transformative technology. Now, they are also working to set limits on the use of AI in a year marked by major elections around the world.

Last month, OpenAI, the maker of the chatbot ChatGPT, said it was working to prevent abuse of its tools in elections, in part by banning their use to create chatbots that pretend to be real people or institutions. In recent weeks, Google also announced that it would limit its AI chatbot, Bard, from responding to certain election-related prompts to avoid inaccuracies. And Meta, which owns Facebook and Instagram, has promised to better label AI-generated content on its platforms so that voters can more easily discern which information is real and which is false.

On Friday, Anthropic, another leading AI startup, joined its peers in banning the use of its technology in political campaigns or lobbying. In a blog post, the company, which makes a chatbot called Claude, said it would warn or suspend any user who violated its rules. It added that it uses tools trained to automatically detect and block disinformation and influence operations.

“The history of AI deployment has also been full of surprises and unexpected effects,” the company said. “We predict that 2024 will see surprising uses of AI systems – uses that were not anticipated by their own developers.”

The efforts are part of a push by AI companies to get a grip on a technology they popularized as billions of people head to the polls. At least 83 elections worldwide, the largest concentration for at least the next 24 years, are expected this year, according to Anchor Change, a consultancy. In recent weeks, people in Taiwan, Pakistan and Indonesia have voted, while India, the world’s largest democracy, is expected to hold its general election in the spring.

The effectiveness of restrictions on AI tools remains unclear, especially as tech companies press ahead with increasingly sophisticated technology. On Thursday, OpenAI unveiled Sora, a technology that can instantly generate realistic videos. Such tools could be used to produce text, sounds and images in political campaigns, blurring fact and fiction and raising questions about whether voters can tell which content is real.

AI-generated content has already appeared in US political campaigns, prompting regulatory and legal pushback. Some state lawmakers are drafting bills to regulate AI-generated political content.

Last month, New Hampshire residents received robocall messages discouraging them from voting in the state’s primary, in what was most likely an artificially generated voice designed to sound like President Biden. The Federal Communications Commission banned such calls last week.

“Malicious actors are using AI-generated voices in unsolicited robocalls to extort vulnerable family members, impersonate celebrities, and misinform voters,” FCC Chairwoman Jessica Rosenworcel said at the time.

AI tools have also created misleading or deceptive depictions of politicians and political topics in Argentina, Australia, Britain and Canada. Last week, former Prime Minister Imran Khan, whose party won the most seats in Pakistan’s elections, used an AI-generated voice to declare victory while in prison.

In one of the most important election cycles in living memory, the misinformation and deceptions that AI can create could be devastating for democracy, experts say.

“We’re behind the times here,” said Oren Etzioni, a University of Washington professor specializing in artificial intelligence and founder of True Media, a nonprofit that works to identify online misinformation in political campaigns. “We need tools to respond to this in real time.”

Anthropic said in its announcement Friday that it planned tests to identify how its Claude chatbot could produce biased or misleading content related to political candidates, policy issues and election administration. These “red team” tests, which are often used to break through a technology’s safeguards to better identify its vulnerabilities, will also explore how the AI responds to harmful queries, such as prompts asking for voter suppression tactics.

In the coming weeks, Anthropic will also launch a trial aimed at redirecting U.S. users with voting-related questions to authoritative information sources, such as TurboVote from Democracy Works, a nonpartisan nonprofit group. The company said its AI model was not trained frequently enough to reliably provide real-time information on specific elections.

Similarly, OpenAI said last month that it plans to direct people to voting information via ChatGPT, as well as label AI-generated images.

“Like any new technology, these tools have benefits and challenges,” OpenAI said in a blog post. “They are also unprecedented, and we will continue to evolve our approach as we learn more about how our tools are used.”

(The New York Times sued OpenAI and its partner Microsoft in December, alleging copyright infringement over news content related to AI systems.)

Synthesia, a startup whose AI video generator has been linked to disinformation campaigns, also prohibits the use of its technology for “news-like content,” including content that is false, polarizing, divisive or misleading. The company has improved the systems it uses to detect misuse of its technology, said Alexandru Voica, Synthesia’s head of business and corporate policy.

Stability AI, a startup with an image-generating tool, said it prohibits the use of its technology for illegal or unethical purposes, works to block the generation of dangerous images and applies an imperceptible watermark to all images.

The biggest tech companies have also weighed in. Last week, Meta said it was collaborating with other companies on technological standards to help recognize when content was generated with artificial intelligence. Ahead of the European Union’s parliamentary elections in June, TikTok said in a blog post on Wednesday that it would ban potentially misleading manipulated content and require users to label realistic AI creations.

Google said in December that it would also require YouTube video creators and all election advertisers to disclose edited or digitally generated content. The company said it is preparing for the 2024 election by preventing its AI tools, like Bard, from returning answers to certain election-related queries.

“Like any emerging technology, AI presents new opportunities as well as new challenges,” Google said. AI can help combat abuse, the company added, “but we’re also preparing for how it can change the landscape of misinformation.”

David B. Otero
