Microsoft CEO Claims ‘Enough Tech’ Exists To Stop AI Deepfakes In US Elections

Microsoft CEO Satya Nadella believes existing technology can effectively safeguard US elections from AI-generated deepfakes and misinformation.

Generative AI, which refers to a type of artificial intelligence (AI) that can create new content such as text, images, music and videos, has taken the technology sector by storm.

A recent UK-wide study exploring the use of generative AI among students shows that over 50 per cent of undergraduates rely on AI tools to help with their essays. Likewise, a considerable number of users leverage generative AI across fields such as education and medicine, and as a productivity tool at work.

By contrast, some users are reluctant to adopt the technology, with concerns that centre primarily on its safety. Those concerns appear well-founded, as a separate study showed how ChatGPT-like large language models (LLMs) can be trained to “go rogue” and behave maliciously.

Still, the technology has a major impact on the information circulating online. Understandably, some users have expressed strong reservations, citing reports of AI-generated deepfakes, which AI expert Oren Etzioni warns could trigger a “tsunami of misinformation” and influence the upcoming US elections.

Last year, US President Joe Biden issued a new executive order on AI, which requires new safety assessments and provides guidance on equity and civil rights. Nevertheless, some issues persist.

Insights from Satya Nadella on stopping AI from going rogue

Nadella joined Lester Holt on NBC’s Nightly News to discuss the measures in place to prevent AI from playing a key role in spreading misinformation about the forthcoming 2024 presidential election.

Holt opened the interview by asking Nadella what measures are in place to guard the upcoming elections against deepfakes and misinformation.

Separately, Bing’s market share has remained stagnant despite Microsoft’s heavy spending on AI, and the company has struggled to crack Google’s dominance in search.

Google and Microsoft’s Bing have previously faced severe criticism for featuring deepfake pornography among their top results. The issue recently worsened when fake explicit images of pop star Taylor Swift surfaced across social media. Nadella called explicit content generated with the help of AI alarming and terrible.

According to a report by Windows Central, Swift’s deepfakes may have been generated using Microsoft Designer. Microsoft has since rolled out an update that restricts how users interact with the tool.

The Designer tool now blocks prompts that request nude imagery. Beyond this, the newly introduced Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) Act aims to regulate and prevent such content.
