
IBM: Here's What Policymakers Can Do About Deepfakes


Insights

The commitment by IBM to the Munich Tech Accord highlights the increasing recognition within the tech industry of the legal implications of deepfake technology. The use of deepfakes can lead to complex legal challenges, particularly in the areas of intellectual property rights, defamation and privacy law. Given the potential for deepfakes to impersonate individuals and disseminate false information, there is a pressing need for legislation that adequately addresses these issues without stifling innovation and free speech.

Proactive engagement by corporations in shaping these regulations is crucial: it ensures that laws are informed by technical capabilities and limitations, and that they are practical to enforce. The stakes for businesses are also significant if regulations are not balanced. Overly stringent laws could hamper the development of beneficial AI technologies, while lax regulations could expose companies whose platforms are used to disseminate harmful deepfakes to reputational damage and legal liability.

From a cybersecurity perspective, the proliferation of deepfakes represents a novel threat vector that could be used in social engineering attacks, disinformation campaigns and other malicious activities. The Tech Accord to Combat Deceptive Use of AI in 2024 Elections is an important step in recognizing and combating these threats. It indicates a proactive approach by signatories like IBM to develop technical measures, such as detection algorithms and authentication protocols, to identify and mitigate the spread of deepfakes.
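As a rough illustration of the kind of authentication protocol the accord envisions (a hypothetical sketch, not IBM's or the accord's actual mechanism; real provenance systems such as C2PA use asymmetric signatures and embedded manifests), a publisher could tag a media file's hash at creation time so that downstream platforms can verify the content has not been altered:

```python
import hashlib
import hmac

# Hypothetical shared signing key; production systems would use
# asymmetric keys and certificate chains rather than a shared secret.
SECRET_KEY = b"publisher-signing-key"

def sign_media(media_bytes: bytes) -> str:
    """Produce a provenance tag: an HMAC over the media's SHA-256 digest."""
    digest = hashlib.sha256(media_bytes).digest()
    return hmac.new(SECRET_KEY, digest, hashlib.sha256).hexdigest()

def verify_media(media_bytes: bytes, tag: str) -> bool:
    """Constant-time check of the tag; any alteration changes the digest."""
    return hmac.compare_digest(sign_media(media_bytes), tag)

original = b"...raw video bytes..."
tag = sign_media(original)
assert verify_media(original, tag)             # untouched content verifies
assert not verify_media(original + b"x", tag)  # any tampering fails
```

The point of such a scheme is that authenticity is attested at the source, so platforms need not judge whether content "looks" synthetic; they only check whether it still carries a valid provenance tag.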

These technical solutions must evolve rapidly to keep pace with advancements in AI-generated content. For businesses, the implications are twofold: there is a market opportunity for companies that can provide effective countermeasures, and a risk for those that fail to protect their platforms and users from the malicious use of deepfakes. The latter could face not only direct financial losses from fraud or reputational damage but also indirect costs from regulatory compliance and potential litigation.

IBM's advocacy for targeted regulations reflects an ethical stance on the responsible use of AI. It underscores the importance of developing AI technologies in a manner that aligns with societal values, such as privacy, fairness and democracy. The ethical implications of deepfakes extend beyond the legal and technical realms, raising questions about consent, authenticity and trust in digital media.

For policymakers, the challenge is to create a regulatory framework that balances the protection of individuals and society against the benefits of AI innovation. This involves not only deterrence through legal penalties but also education to raise public awareness about the nature of deepfakes. Businesses stand to benefit from a well-informed public that can discern between authentic and manipulated content, thereby maintaining trust in digital communications and transactions.

By Christina Montgomery, Chief Privacy & Trust Officer, IBM and Joshua New, Senior Fellow, IBM Policy Lab

WASHINGTON, Feb. 28, 2024 /PRNewswire/ -- Deepfakes – realistic AI-generated audio, video, or images that can recreate a person's likeness – are one of the most pressing challenges posed by generative AI, given the potential for bad actors to use them to undermine democracy, exploit artists and performers, and harass and harm everyday people.

These are not speculative risks. Already, deepfakes are being used to spread election disinformation, capitalize on unsanctioned imitations of musicians, and create nonconsensual pornography of minors, private citizens, and celebrities alike. And as AI capabilities continue to improve, these challenges will become even more pernicious.

This moment requires both technical and legal solutions. That's why IBM (NYSE: IBM) signed the Tech Accord to Combat Deceptive Use of AI in 2024 Elections (Munich Tech Accord), pledging to help mitigate the risks of AI being used to deceive the public and undermine elections. It's also why IBM has long advocated for regulations that precisely target harmful applications of technology.

We outline below three key priorities for policymakers to mitigate the harms of deepfakes:

  • Protecting elections,
  • Protecting creators, and
  • Protecting people's privacy.

Protecting elections

Democracy depends on a population's ability to participate in free and fair elections. Unfortunately, bad actors can use deepfakes to impersonate public officials and candidates to deceive voters in a variety of ways that would undermine this key principle. For example, deepfakes could mislead voters about where, when, and how they can vote, or falsely portray a candidate making controversial statements or participating in scandalous activity.

Policymakers should prohibit the distribution of materially deceptive deepfake content related to elections. For example, the Protect Elections from Deceptive AI Act, introduced by Senators Klobuchar, Hawley, Coons, and Collins, would curtail the use of AI to generate deceptive content falsely depicting federal candidates in political advertisements with the intent of influencing an election. Other policy approaches could enable candidates targeted by materially deceptive AI-generated content used in political advertisements or fundraising campaigns to seek damages or remove deceptive content, while preserving protections for free speech.

In the EU, IBM has been supportive of the Digital Services Act, which imposes on large internet platforms certain obligations regarding the moderation of online content. Recent guidelines published by the European Commission have also proposed additional requirements for consumer-facing platforms to mitigate "systemic risks for electoral processes."

Protecting creators

Musicians, artists, actors, and creators of all kinds use their talents and likeness to help shape culture, inspire, entertain, and make a living. Deepfakes can enable bad actors to exploit creators' likenesses to push deceptive advertising, scam and mislead consumers, improperly reduce a creator's ability to profit from their talents, and more.

Policymakers should hold people accountable for producing unauthorized deepfakes of creator performances and hold platforms accountable if they knowingly disseminate such unauthorized content. Some jurisdictions already have what are known as "likeness laws" that prohibit the unauthorized use of a person's likeness for commercial exploitation, but these can be inconsistent, and few explicitly cover digital replicas or the rights to use a person's likeness after they die. Given these jurisdictional inconsistencies, IBM supports the NO FAKES Act in the U.S., which would create federal protections for individuals whose voices and/or likenesses are generated by third parties without their consent.

Protecting people's privacy

Everyday people are already being harmed by deepfakes in profoundly concerning ways, particularly by bad actors using their likeness to create nonconsensual pornography. This abuse primarily targets women, has included minors among its victims, and can enable further abuse and extortion. Nonconsensual sharing of intimate imagery, also known as revenge porn, is not a new problem, but deepfakes are expanding its reach. Few existing laws adequately hold bad actors accountable for sharing or threatening to share this material, and fewer still explicitly cover AI-generated content.

Policymakers should create strong criminal and civil liability for people who distribute nonconsensual intimate audiovisual content, including AI-generated content, as well as for people who threaten to do so. Penalties should be particularly severe when the victim is a minor. Legislators can act on this recommendation now by supporting and passing the bipartisan Preventing Deepfakes of Intimate Images Act in the U.S. This bill would create liability for individuals who disclose, or threaten to disclose, a nonconsensual intimate digital depiction of someone, including AI-generated content, and would allow affected parties to pursue damages. It would establish a much-needed federal baseline of accountability, which state revenge porn laws currently address inconsistently, providing greater protection for victims across the United States.

The EU AI Act – which IBM has long supported – already addresses many of these issues, covering deepfakes more generally and imposing transparency requirements that clarify when particular content is not authentic. As policymakers look towards the Act's implementation in the coming months, particular attention should be paid to ensuring individuals are protected from nonconsensual intimate audiovisual content.

Conclusion

Solving the problems posed by deepfakes will require thoughtful, whole-of-society approaches leveraging both changes in law and technology. Technology companies have a responsibility to pursue technical and governance solutions that address AI-generated content, such as those articulated in the Munich Accord, the White House Voluntary AI Commitments, and Canada's Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems.

IBM encourages policymakers to seize this opportunity to swiftly target three of the most significant harmful applications of deepfakes to ensure that AI remains a positive force for the global economy and society.

Media Contact:
Ashley Bright
brighta@us.ibm.com

View original content: https://www.prnewswire.com/news-releases/ibm-heres-what-policymakers-can-do-about-deepfakes-302073471.html

SOURCE IBM

FAQ

What did IBM do to combat the deceptive use of AI in elections?

IBM signed the Tech Accord to Combat Deceptive Use of AI in 2024 Elections (Munich Tech Accord) to help mitigate the risks of AI being used to deceive the public and undermine elections.

What are the key priorities outlined by IBM to mitigate the harms of deepfakes?

IBM's key priorities are protecting elections, protecting creators, and protecting people's privacy from the negative impacts of deepfakes.

How can deepfakes be used to undermine democracy according to the PR?

Deepfakes can be used to spread election disinformation, impersonate public officials and candidates, exploit artists, and create nonconsensual pornography, posing significant risks to democracy.

Why are technical and legal solutions necessary to address the challenges posed by deepfakes?

Technical and legal solutions are required to combat the potential misuse of deepfakes by bad actors, safeguarding elections, creators, and individuals' privacy.

What risks are associated with the increasing capabilities of AI according to the PR?

As AI capabilities advance, the risks associated with deepfakes, such as spreading disinformation and exploiting individuals, will become more severe.

What is the significance of protecting elections in the context of democracy according to the PR?

Protecting elections is crucial for maintaining democracy as it ensures the integrity of the electoral process and prevents deception of voters through deepfakes.

How does IBM aim to help mitigate the risks of AI misuse in elections?

IBM aims to mitigate the risks by advocating for regulations targeting harmful applications of technology and by signing agreements like the Tech Accord to Combat Deceptive Use of AI in elections.
