The Deepfake Dangers Ahead

source: wsj.com, contributed by Artemus Founder, Bob Wallace  |  image: pexels.com

 

AI-generated disinformation, especially from hostile foreign powers, is a growing threat to democracies based on the free flow of ideas

 

By Daniel Byman, Chris Meserole and V.S. Subrahmanian

Feb. 23, 2023 9:58 am ET

Bots, trolls, influence campaigns: Every day we seem to be battling more fake or manipulated content online. Because of advances in computing power, smarter machine learning algorithms and larger data sets, we will soon share digital space with a sinister array of AI-generated news articles and podcasts, deepfake images and videos—all produced at a once unthinkable scale and speed. As of 2018, according to one study, fewer than 10,000 deepfakes had been detected online. Today the number of deepfakes online is almost certainly in the millions.

We can hardly imagine all the purposes that people will find for this new synthetic media, but what we’ve already seen is cause for concern. Students can have ChatGPT write their essays. Stalkers can create pornographic videos featuring images of the people they are obsessed with. A criminal can synthesize your boss’s voice and tell you to transfer money.


Deepfakes pose not only criminal risks but also threats to national security. To stoke divisions in the U.S. in 2020, Russia used conventional means of propaganda, deploying fake news about vaccination and real but selectively chosen imagery of destruction from Black Lives Matter protests. Deepfake technology will take such efforts to a new level, allowing the creation of a convincing alternate reality. In 2022, for example, Russia released a crude deepfake of Ukrainian President Volodymyr Zelensky calling on Ukrainians to put down their arms.

Imagine what might be done as the technology grows more sophisticated. Jihadists seeking to mobilize recruits could show convincing clips of French President Emmanuel Macron denigrating Islam. A Chinese invasion of Taiwan could begin with a deepfake of a Taiwanese naval commander telling forces under his command to allow Chinese forces to pass unmolested. Troops fighting a war might despair after reading thousands of divisive or provocative Facebook posts ostensibly by fellow soldiers but in fact generated by ChatGPT. The scale, speed and verisimilitude of such information warfare threatens to overwhelm the ability of militaries and intelligence services to guard against it.


Domestically, deepfakes risk leading people to view all information as suspicious. Soldiers might not trust actual orders, and the public may think that genuine scandals and outrages aren’t real. A climate of pervasive suspicion will allow politicians and their supporters to dismiss anything negative that is reported about them as fake or exaggerated.

China’s powerful Cyberspace Administration has already anticipated such concerns. In January, Beijing began enforcing ambitious new regulations on deepfake content, ranging from strict rules requiring that synthetic images of people be used only with those people’s consent to more Orwellian prohibitions on “disseminating fake news.”

Democratic societies need to start addressing the potential harms of deepfakes as well, but we can’t do it the same way China does. We need a response that preserves the free flow of ideas and expression, the exchange of information that allows citizens to determine what is fake and what is real. Disinformation is dangerous precisely because it undermines the very notion of truth. Bans like Beijing’s play into this problem by making the discernment of truth and falsity a government prerogative, susceptible to politics and brute enforcement.

The options for democracies are complicated and will have to blend technical, regulatory and social approaches. Intel has already begun work on the technical side. Last November, the company’s researchers proposed a system called FakeCatcher that claimed 96% accuracy in identifying deepfakes. That number is impressive, but given the sheer volume of synthetic material that can be churned out, even a 99% accurate detector would miss an unacceptable volume of disinformation. Moreover, governments will have the services of highly skilled programmers, which means that their deepfakes are likely to be among the least detectable. Even the most ingenious detectors will have their limits, because breakthroughs in detection will almost certainly be used to improve the next generation of deepfake algorithms.
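To see why even a highly accurate detector falls short at scale, consider a rough back-of-the-envelope sketch in Python. The daily volume used here is an assumed figure for illustration only, not a measured one.

```python
# Back-of-the-envelope illustration of the scale problem: even a very
# accurate detector lets through a large absolute number of fakes.
# The daily volume below is a hypothetical assumption, not real data.
daily_deepfakes = 1_000_000  # assumed number of synthetic items circulating per day

for accuracy in (0.96, 0.99):
    missed = daily_deepfakes * (1 - accuracy)
    print(f"A {accuracy:.0%}-accurate detector misses ~{missed:,.0f} fakes per day")
```

Run on those assumptions, a 96%-accurate detector waves through roughly 40,000 fakes a day, and even a 99%-accurate one misses about 10,000.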

There is a workaround that may help detectors stay ahead of this cycle, and it is related to a technique that social-media companies are already exploring. Developers of detection technology can focus less on the video or image itself than on how it is being used, by creating algorithms that analyze metadata and context, as sketched below. Social media platforms currently deploy these sorts of tools to detect fake accounts used for what some platforms call “coordinated campaigns of inauthentic behavior”—a term that covers the efforts of Iran, Russia and other malicious actors to sow disinformation or discredit specific public figures. Such an algorithm for deepfakes would be able to distinguish a Renoir-style deepfake painting of a loved one, say, from a deepfake showing a nude celebrity or drugged-out political figure.
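As a simplified illustration of what context-based flagging might look like, here is a minimal Python sketch. The field names, thresholds and weights are invented for the example and do not reflect any platform’s actual system.

```python
from dataclasses import dataclass

@dataclass
class Post:
    account_age_days: int         # how old the posting account is
    posts_last_hour: int          # burst rate from the account
    shares_same_media_hash: int   # how many accounts pushed identical media

def context_risk_score(post: Post) -> float:
    """Score how suspicious the circumstances of a post are,
    independent of whether the media itself looks synthetic.
    Thresholds and weights are illustrative assumptions."""
    score = 0.0
    if post.account_age_days < 30:
        score += 0.3   # brand-new accounts are higher risk
    if post.posts_last_hour > 20:
        score += 0.3   # burst posting suggests automation
    if post.shares_same_media_hash > 50:
        score += 0.4   # many accounts pushing identical media suggests coordination
    return min(score, 1.0)

# Example: a days-old account posting in a coordinated burst gets flagged for review.
suspect = Post(account_age_days=3, posts_last_hour=40, shares_same_media_hash=120)
print(context_risk_score(suspect))  # -> 1.0
```

The point of such a heuristic is not to judge the pixels, but to weigh who is spreading a piece of media, how fast and in what company.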

The U.S. government and other democracies can’t tell their people what is or isn’t true, but they can insist that companies that produce and distribute synthetic media at scale make their algorithms more transparent. The public should know what a platform’s policies are and how these rules are enforced. Platforms that disseminate deepfakes can even be required to allow independent, third-party researchers to study the effects of this media and monitor whether the platforms’ algorithms are behaving in accordance with their policies.

Deepfakes are going to change the way a lot of institutions in democracies do business. The military will need very secure systems for verifying orders and making sure that automated systems can’t be triggered by potential deepfakes. Political leaders responding to crises will have to build in delays so that they can make sure the information before them isn’t false or even partially manipulated by an adversary. Journalists and editors will have to be leery of shocking news stories, doubling down on the standard of verifying facts with multiple sources. Where there is doubt, an outlet might mark some news with bright “this information not verified” warnings.

Ultimately, it is the public that will have to distinguish information sources operating in good faith from those designed to manipulate. Many democracies struggle with media literacy, but Finland offers a promising example. There, media literacy is folded into the school curriculum starting in preschool, and libraries have become centers for adult media-literacy instruction. Finland now ranks first in the world for resilience against misinformation.

It is in the nature of democracy that no single policy can effectively quell the proliferation of disinformation. To deal with the problem, free societies will need a combination of efforts and, in contrast to China’s Ministry of Truth approach, should give priority to preserving open discourse and respecting the discernment of citizens. The key is to begin this process before deepfakes seep into our information ecosystems and overwhelm them. Once they do, mistrust and confusion will be much harder to contain.

Messrs. Byman, Meserole and Subrahmanian are co-authors (with Chongyang Gao) of a new Brookings Institution research report, “Deepfakes and International Conflict,” from which this essay is adapted.