Opinion

How Blockchains Can Help Solve AI’s Deepfake Problem

AI-generated content creates a huge online disinformation threat. Blockchains can help verify and authenticate online content, says William Ogden Moore, Research Analyst at Grayscale Investments.

Updated May 22, 2024, 4:00 p.m. Published May 22, 2024, 4:00 p.m.
Deepfake

As AI continues to work itself into our daily lives, it's hard not to see the impact it's already having on nearly every sector. Within the finance industry, for example, AI is facilitating smarter investments, analyzing market trends and predicting stock performance, ultimately helping individuals and institutions make more informed business decisions.

While most of the advancements in AI are exciting and continue to push different industries forward, some are abusing the technology for more nefarious purposes. With generative AI, one of the biggest risks that individuals and organizations need to be aware of is the "deepfake."

You're reading Crypto Long & Short, our weekly newsletter featuring insights, news and analysis for the professional investor.

Deepfakes are highly realistic digital forgeries produced with AI to manipulate or generate visual and/or audio content. For example, a deepfake might involve an AI-generated video showing a celebrity engaging in actions or making statements that never actually occurred, such as when comedian Jordan Peele created a deepfake of Barack Obama to showcase the threat the technology could present.

While we may default to believing what we see, this type of forged or deceptive AI-generated content is becoming increasingly common. Between 2022 and the first half of 2023, deepfakes as a proportion of content in the U.S. increased almost 13 times, from 0.2% to 2.6%, according to a recent report from Sumsub Research.

Experts are already concerned deepfakes could be used to sway public opinion or influence important events like elections, with bad actors using AI to impersonate elected officials. According to another recent report, some experts are "completely terrified" that the upcoming presidential race will involve a "tsunami of misinformation," driven heavily by deepfakes and misleading AI-generated content. Many view deepfakes' ability to blur the lines between truth and fiction as a fundamental threat to democracies and fair elections around the globe.

So how do we – as a society – mitigate the prevalence and risks of deepfakes, as well as similar risks that may emerge as generative AI continues to grow more sophisticated?

Blockchains could be the crucial technology we need to help tackle this issue. At their core, public blockchains, such as Ethereum, have several key features that make them uniquely positioned to establish authenticity for content and information. This includes blockchain’s inherent transparency, decentralized nature and focus on network security and immutability.

For those unfamiliar, a public blockchain transparently records information in a timestamped manner, accessible to everyone, globally, and without gatekeeping. This allows anyone to verify the validity of information, such as its creator or the time it was recorded, making the blockchain a source of truth. Public blockchains are also decentralized, eliminating the need for a central decision-maker and reducing the risk of manipulation. This decentralized structure also offers high network security by eliminating single points of failure and ensuring an immutable, tamper-resistant record.
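The verification pattern described above can be sketched in a few lines of Python. This is a minimal illustration, not a real chain API: the "ledger" below is an ordinary dictionary standing in for an append-only public blockchain, and all names (`register_content`, `verify_content`, `campaign.eth`) are hypothetical.

```python
import hashlib
import time

# Mock append-only ledger standing in for a public blockchain:
# maps a content fingerprint to its provenance record.
ledger = {}

def register_content(content: bytes, creator: str, timestamp: float) -> str:
    """Publish a content fingerprint and its provenance to the (mock) ledger."""
    content_hash = hashlib.sha256(content).hexdigest()
    if content_hash not in ledger:  # append-only: the first record wins
        ledger[content_hash] = {"creator": creator, "timestamp": timestamp}
    return content_hash

def verify_content(content: bytes):
    """Recompute the hash and look up its provenance record, if one exists."""
    return ledger.get(hashlib.sha256(content).hexdigest())

original = b"official campaign video, v1"
register_content(original, creator="campaign.eth", timestamp=time.time())

assert verify_content(original) is not None          # authentic: record exists
assert verify_content(b"doctored deepfake") is None  # no provenance record
```

The point of the sketch is the asymmetry: anyone can recompute a hash and check it against the public record, but no one can quietly rewrite a record that was published before the forgery appeared.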

Furthermore, blockchains have already demonstrated their ability to authenticate content. For instance, with digital art as non-fungible tokens (NFTs), blockchain tech allows anyone to verify the creator and owner of a piece of art, making it possible to distinguish between the original and its potential replicas. This transparency and authentication potential extends to videos, images and text, providing important foundations for developers to create solutions and tools geared at combating deepfakes, such as Worldcoin (co-founded by OpenAI CEO Sam Altman), Irys and Numbers Protocol.
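The original-versus-replica distinction mentioned above can also be sketched with a toy registry. This is an illustrative mock, not a real NFT contract: `registry`, `mint` and `is_original` are invented stand-ins for the creator/owner queries an on-chain token standard exposes.

```python
import hashlib

# Mock token registry standing in for an NFT contract:
# maps a token ID to its on-record creator and current owner.
registry = {}

def mint(artwork: bytes, creator: str) -> str:
    """Mint a token whose ID derives from the artwork's hash plus its creator."""
    token_id = hashlib.sha256(artwork + creator.encode()).hexdigest()[:16]
    registry[token_id] = {"creator": creator, "owner": creator}
    return token_id

def is_original(token_id: str, claimed_creator: str) -> bool:
    """Check the immutable on-record creator against a claimed creator."""
    record = registry.get(token_id)
    return record is not None and record["creator"] == claimed_creator

art = b"pixel art #42"
original_id = mint(art, creator="artist.eth")
replica_id = mint(art, creator="copycat.eth")  # same art, different creator

assert is_original(original_id, "artist.eth")
assert not is_original(replica_id, "artist.eth")
```

A copycat can mint the same image, but cannot mint it under the original artist's identity, so the replica carries a different creator on record and remains distinguishable.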

As AI's impact on society grows, AI-generated content and deepfakes will only become more prominent. Harvard experts already predict that more than 90% of content online will be AI-generated in the future. To protect against threats such as deepfakes, it's crucial we get ahead of the issue and implement innovative solutions. Public blockchains, collectively owned and operated by users, offer promising features like network security, transparency and decentralization that can help counter the threats deepfakes pose.

However, much of the work underway remains in its early stages, and challenges remain with the technical development and widespread adoption of blockchain-related protocols. While there is no quick fix, we must remain committed to shaping a future that upholds truth, integrity, and transparency, as our society navigates these emerging technologies (and the risks they present) together.

Note: The views expressed in this column are those of the author and do not necessarily reflect those of CoinDesk, Inc. or its owners and affiliates.

William Ogden Moore

William Ogden Moore is a Research Analyst at Grayscale Investments with a focus on how frontier technology is impacting society. Prior to joining Grayscale in 2023, Will co-founded and sold an alternative investing website, and was a VC Investment Analyst at The Chernin Group (TCG).
