Nina Schick on disinformation and the rise of synthetic media
By Robert Wiblin and Keiran Harris · Published April 6th, 2021
On this page:
- Introduction
- 1 Highlights
- 2 Articles, books, and other media discussed in the show
- 3 Transcript
- 3.1 Rob's intro [00:00:00]
- 3.2 The interview begins [00:01:28]
- 3.3 Deepfakes [00:05:49]
- 3.4 The influence of synthetic media today [00:17:20]
- 3.5 The history of misinformation and disinformation [00:28:13]
- 3.6 Text vs. video [00:34:05]
- 3.7 Privacy [00:40:17]
- 3.8 Deepfake pornography [00:49:05]
- 3.9 Russia and other bad actors [00:58:38]
- 3.10 2016 vs. 2020 US elections [01:13:44]
- 3.11 Authoritarian regimes vs. liberal democracies [01:24:08]
- 3.12 Law reforms [01:31:52]
- 3.13 Positive uses [01:37:04]
- 3.14 Technical solutions [01:40:56]
- 3.15 Careers [01:52:30]
- 3.16 Rob's outro [01:58:27]
- 4 Learn more
- 5 Related episodes
Technology is just going to be an amplifier of human intention, this human innate desire…to deceive, to manipulate. The visual medium is a very powerful way of doing that.
Nina Schick
You might have heard fears like this in the last few years: What if Donald Trump was woken up in the middle of the night and shown a fake video — indistinguishable from a real one — in which Kim Jong Un announced an imminent nuclear strike on the U.S.?
Today’s guest Nina Schick, author of Deepfakes: The Coming Infocalypse, thinks these concerns were the result of hysterical reporting, and that the barriers to entry in terms of making a very sophisticated ‘deepfake’ video today are a lot higher than people think.
But she also says that by the end of the decade, YouTubers will be able to produce the kind of content that’s currently only accessible to Hollywood studios. So is it just a matter of time until we’ll be right to be terrified of this stuff?
Nina thinks the problem of misinformation and disinformation might be roughly as important as climate change, because as she says: “Everything exists within this information ecosystem, it encompasses everything.” We haven’t done enough research to properly weigh in on that ourselves, but Rob did present Nina with some early objections, such as:
- Won’t people quickly learn that audio and video can be faked, and so will only take them seriously if they come from a trusted source?
- If Photoshop didn’t lead to total chaos, why should this be any different?
But the grim reality is that if you wrote “I believe that the world will end on April 6, 2022” and pasted it next to a photo of Albert Einstein — a lot of people would believe it was a genuine quote. And Nina thinks that flawless synthetic videos will represent a significant jump in our ability to deceive.
She also points out that the direct impact of fake videos is just one side of the issue. In a world where all media can be faked, everything can be denied.
Consider Trump’s infamous Access Hollywood tape. If that happened in 2020 instead of 2016, he would have almost certainly claimed it was fake — and that claim wouldn’t be obviously ridiculous. Malignant politicians everywhere could plausibly deny footage of them receiving a bribe, or ordering a massacre. What happens if in every criminal trial, a suspect caught on camera can just look at the jury and say “that video is fake”?
Nina says that undeniably, this technology is going to give bad actors a lot of scope for not having accountability for their actions.
As we try to inoculate people against being tricked by synthetic media, we risk corroding their trust in all authentic media too. And Nina asks: If you can’t agree on any set of objective facts or norms on which to start your debate, how on earth do you even run a society?
Nina and Rob also talk about a bunch of other topics, including:
- The history of disinformation, and groups who sow disinformation professionally
- How deepfake pornography is used to attack and silence women activists
- The key differences between how this technology interacts with liberal democracies vs. authoritarian regimes
- Whether we should make it illegal to make a deepfake of someone without their permission
- And the coolest positive uses of this technology
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: Type 80,000 Hours into your podcasting app. Or read the transcript below.
Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Sofia Davis-Fogel
Highlights
Deepfakes
Nina Schick: A deepfake is essentially a piece of media, a piece of synthetic media. That’s to say, a piece of fake media that’s either manipulated by AI — or, increasingly, as the technology improves — entirely generated by AI. The first thing to say is that the ability of AI to do this is nascent. A huge breakthrough came only in 2014 when somebody named Ian Goodfellow — who is now actually one of the lead AI scientists at Apple — when he was a graduate student, he published a paper on this challenge of how to actually get a machine learning system to create content that didn’t exist before. It was a really difficult one.
Nina Schick: Thanks to the revolution in deep learning over the past decade, AI was very good at categorizing things. That’s why we can get things like autonomous cars. But actually getting it to create something new is a difficult challenge. What Goodfellow did was that he decided he could pit two machine learning systems or two neural networks against each other in an adversarial game. He found that once you did that, it could actually generate content — which was a huge breakthrough in the sense that we hadn’t been able to see a machine learning system do this before.
Nina Schick: It’s only in 2014 that this first paper was emerging at the cutting edge of artificial intelligence. And a few years down the line, in 2017, that’s when, as this field was developing, we started to see the emergence of the first deepfakes online. I’m sure we’ll get into that in the form of nonconsensual pornography. Again, the first thing to point out is that the unbelievable ability of AI to actually manipulate and generate entirely synthetic media is new, we’re at the very, very beginning of this journey. This can come in many different forms.
Nina Schick: It can come in audio format. It can come in video format. It can come as an image. It can even come as synthetic text. One of the unique things about synthetic media — which is very relevant to our conversation today — is its ability to recreate humans. This is now manifesting in two ways. One, AI is starting to be used to create entirely synthetic people. You already see that with thispersondoesnotexist.com, for example, where you go to that website, you click the page, and you see AI generating autonomously an image of a person who does not exist — who to your eye looks absolutely real, photorealistic. We would not be able to tell that that’s a person that doesn’t exist. Increasingly, AI will be able to do that also in audiovisual format, so voice synthesis, video synthesis.
The influence of synthetic media today
Nina Schick: I don’t think it’s right to think of deepfakes and existing mis- and disinformation as two separate issues. Deepfakes are merely the more sophisticated form of visual disinformation that is going to become increasingly ubiquitous in already what is a corroding information ecosystem. When people suggest that, oh, deepfakes haven’t been as harmful as the existing modes of mis- and disinformation, I don’t see why those are two different issues.
Nina Schick: I think it is absolutely vital when you talk about mis- and disinformation to underline that way before deepfakes even started doing damage, old forms of mis- and disinformation were already doing a significant amount of real-world harm. Then getting to the issue of deepfakes more specifically, the reason why perhaps they haven’t been seen to do as much damage as sometimes has been predicted — at least in hysterical media reporting — this is particularly relevant to the realm of politics. A lot of the fears around deepfakes was that somehow an election would be swung or what happens if Kim Jong-un releases a deepfake saying he’s nuking America and then we’re in the nuclear armageddon scenario. The reason why we haven’t had that yet is because existing forms of mis- and disinformation when it comes to politics are already devastatingly effective. There’s no need to make a deepfake right now. Because, again, unlike what is sometimes perceived as being true from the hysterical reporting, the barriers to entry in terms of making a very sophisticated video deepfake are a lot higher than people think.
Nina Schick: That’s not to say that those barriers are going to exist forever, because we’ve already touched upon how this field of technologies is evolving so quickly that any kind of restrictions you see right now are not because of ethical concerns. It’s purely to do with technical limitations.
The history of disinformation
Nina Schick: One of the brilliant early examples is a photograph of Abraham Lincoln, who, although he was lionized after his death as this iconic president, during his lifetime was beset by rumors of ugliness. After he was assassinated, a portrait painter needed to find photographs of him looking heroic, and he couldn’t find any. What he did was take an engraving of a southern politician, John C. Calhoun, who ironically was a bitter rival of Lincoln’s during his lifetime, because they were opposed on the abolition of slavery…
Robert Wiblin: On slavery, right?
Nina Schick: Yeah, exactly. He took a photograph of Lincoln’s head and superimposed it onto Calhoun’s body, because Calhoun was the kind of politician — he had the gravitas, he had the posture — that this portrait painter was looking for. That was only discovered to be a manipulation in the 1980s, I think. So 100 years after the fact. The thing that’s different now is accessibility, scale, fidelity, and what type of media we’re talking about. We’re not talking about editing images with Photoshop. It’s far more sophisticated than that.
Nina Schick: Before, we weren’t able to say, okay, we’re going to take AI and take five seconds of your voice, and now I can clone your voice. It’s not, for me, comparable at all to what has been possible in the past. But I also absolutely agree that if you look at the history of visual or media manipulation, it’s been something that’s been around since the birth of modern media. Again, to me, that just is more of an interesting point about the nature of humanity. Technology is just going to be an amplifier of human intention, this human innate desire. It’s always going to exist to deceive, to manipulate. The visual medium is a very powerful way of doing that.
Text vs. video
Nina Schick: I’m not at all saying that text cannot be compelling or convincing. Again, when it comes to synthetic text generation, if you look at what GPT-3 is capable of right now, I can see it being an extremely powerful tool of persuasion and coercion or manipulation. Because at scale, you could create human conversations or interactions in a way that is just mind blowing. There’s a great paper you should read by the Middlebury Institute of International Studies Center on Terrorism, Extremism, and Counterterrorism, where they tested GPT-3’s capability to radicalize people online, and it makes for very scary reading.
Nina Schick: Again, going back to the visual side here, the reason why I focus on this — and this is by no means saying that synthetic texts should be discounted — is because the most important medium of human communication right now is audiovisual media. People read less. People interact with text less. The majority of the world that’s going to join the information ecosystem in the next 10 years is going to be — well, literacy levels might be lower than in some Western countries. It’s already two-thirds of humanity.
Nina Schick: That’s 67% of people who go to video as their first source of information. You know this from the attention economy we’ve built over the past 30 years. Is it easier for you to scroll through your phone on Instagram or Twitter and get information that way? Or is it easier for you to sit down and read a textbook? It’s not to say that text cannot be compelling or a written lie cannot be compelling, and I’m sure, again, there’s a whole area of study to be done into AI-generated synthetic text and how that could be used as a way to convince people. But to say that somehow visual media is not the most important medium of communication, when video seems to be becoming the first source of information for most people in the world, I think is probably intellectually dishonest.
Positive uses
Nina Schick: Every industry that uses media (and what industry doesn’t?) is going to be touched by the rise of synthetic media. And that’s because AI is going to democratize content creation, it’s going to make it so much cheaper. By the end of the decade, a YouTuber or a TikToker will be able to produce the same kind of content that’s only accessible right now to a Hollywood studio. So, that is going to mean so many opportunities for the creative industries. I mean, for one, entertainment and film are just going to get very good. And you won’t need to be a Hollywood studio to produce some really amazing creative content.
Nina Schick: Another real-world legitimate application of synthetic media is a startup that I really think is doing fantastic work. They’re based in London, they’re called Synthesia. And they basically use their synthetic media platform to generate corporate communications videos, training videos, educational videos for their Fortune 500 clients. You don’t need to go into a studio anymore and hire actors and get a green screen, you can basically create your communications video as easily as though you’re writing an email. And you can then, on their backend, choose to put that out in like 16 different languages with the click of a button, right? So it’s going to transform every industry imaginable. Some experts who I was talking to think that by the end of the decade, up to 90% of audiovisual content online will be synthetically generated. It’s a really punchy stat, but I think the direction of travel is clear.
Robert Wiblin: It’s a big forecast within 10 years.
Nina Schick: Punchy stat, yeah. But I think that is the direction of travel. And for a real social good example, here’s one. There’s a company called VocaliD, which is working on synthetic voice generation, to give those who have lost the ability to speak through stroke, cancer, neurodegenerative disease, etc. their voice back. Or those who never had the ability to speak at all can have a synthetic voice. Again, this technology is just an amplifier of human intention. It will be weaponized by bad actors and used for mis- and disinformation, but it’s also going to be commercially very relevant, transform entire industries, and also be used for good.
Articles, books, and other media discussed in the show
By and about Nina
- Nina Schick
- Deepfakes: The Coming Infocalypse
- Don’t underestimate the cheapfake, MIT Tech Review
- ‘Deep fake’ videos threaten the world order, The Times
Mis- and disinformation programs and resources
- The Harvard Kennedy School’s Misinformation Review
- The Oxford Martin Programme on Misinformation, Science and Media
- University of Washington Fake News and Misinformation: Mini Lecture Series
- University of Washington’s Center for an Informed Public
- Chatham House’s disinformation topic page
- Arizona State University’s Global Security Initiative
- Deakin University Misinformation Lab
- University of Copenhagen Department of Political Science project on digital disinformation
Transcript
Rob’s intro [00:00:00]
Hi listeners, this is the 80,000 Hours Podcast, where we have unusually in-depth conversations about the world’s most pressing problems and what you can do to solve them. I’m Rob Wiblin, Head of Research at 80,000 Hours.
When we ran our 80,000 Hours user survey last year, the most requested topic for new podcast episodes was ‘exploring new problem areas’ (like episode 34 with Aaron Hamlin on approval voting, or episode 50 with Dave Denkenberger on feeding everyone through nuclear war).
In that spirit, today’s interview with Nina Schick is on the apparently growing issue of misinformation and disinformation – including the rise of so-called deepfakes, that is, fake videos that can’t be distinguished from the real thing.
Keiran and I independently loved Nina’s podcast with Sam Harris, and thought she’d be the perfect person to talk with as a follow-up to our interview with Tristan Harris last year.
Nina thinks this problem might be roughly as important as climate change, because as she says: “everything exists within this information ecosystem, it encompasses everything.”
We haven’t done enough research to properly weigh in on that ourselves, but I did present Nina with some early objections, such as:
- Won’t people quickly learn that audio and video can be faked, and so will only take them seriously if they come from a trusted source?
- If it’s such a big problem, why haven’t we had more big deepfake scandals yet?
- Could the ability to deny that you’ve said anything controversial actually lead to more privacy?
We also talk about a bunch of other topics, including:
- The history of disinformation, and groups who sow disinformation professionally
- How deepfake pornography is used to attack and silence women activists
- And the coolest positive uses of this technology
Alright, without further ado, here’s Nina Schick.
The interview begins [00:01:28]
Robert Wiblin: Today, I’m speaking with Nina Schick. Nina is an author and consultant on how emerging technologies are affecting politics and international relations. Last year she published the book Deepfakes: The Coming Infocalypse, which looks at how technology — in particular, the ability to make sophisticated deepfakes — is aiding the spread of mis- and disinformation, and potentially making society even less connected to reality than it already was.
Robert Wiblin: Nina has advised many people and organizations on this topic, including Joe Biden and a former Secretary General of NATO. She has also been featured in the MIT Tech Review, The Times, and CNN, among many others. She originally studied history, politics, and language at University College London, and then at the University of Cambridge. She is nothing if not international, speaking seven languages and living across London, Berlin, and Kathmandu, at least until the COVID-19 pandemic I assume. Thanks for coming on the podcast, Nina.
Nina Schick: Thanks for having me. Great to be here.
Robert Wiblin: I hope we’ll get to talk about what effect we should expect new technologies to have on society’s weak connection to reality and what could be done to make our information ecosystem just a little bit less of a madhouse. But first, what are you working on at the moment, and why do you think it’s really important?
Nina Schick: I am working on researching the synthetic future, because my foray into this started with deepfakes and disinformation. I very quickly realized that it’s actually much more profound than that. I think what we’re facing is an AI-led paradigm change when it comes to the future of human creation, content creation, and even human perception. I advise companies and organizations and individuals on some of the exponential tech-led changes that are underway and think about how that might impact politics and society and geopolitics.
Robert Wiblin: I imagine that a lot of listeners have thought a little bit about the subject of misinformation before. But if they’re really new, they might be interested in going back and listening to my conversation with Tristan Harris in Episode 88, which covers some related issues to what we’re talking about here, and might give some useful background. Briefly, what is it that you think people don’t appreciate about the current failures of our information ecosystem?
Nina Schick: Well, I think the starting point has to be the conceptual understanding that what we’ve built over the past 30 years with regard to our information ecosystem is something that’s completely unprecedented in the history of humanity. Of course, it starts with the invention of the internet and the so-called ‘Information Age.’ Unfortunately, a lot of the early utopian founders of the internet assumed that this would be an unmitigated good for humanity, that everyone would be able to access all information, and that somehow this would propel human progress forward.
Nina Schick: I think that’s a read that interprets the true nature of humans as being inherently good. What we know is that that’s not the true nature of humans — humans are neither inherently good nor inherently bad. It’s a mixed bag. These exponential technologies which have so radically changed our information ecosystem over the past 30 years — specifically the internet, social media, the smartphone, and now increasingly the age of synthetic media — is just something that we haven’t even begun to comprehend in terms of what it’s done.
Nina Schick: We certainly haven’t begun to understand how to build a safer information ecosystem. I think this is really one of the critical challenges of our time. I think it’s something that is just as important as climate change, because everything exists within this information ecosystem, it encompasses everything. Everyone who will be listening to this podcast exists within it. The only way you can opt out of it is if you decide to go off the grid completely and live like a hermit in deep dark forests.
Robert Wiblin: Yeah. I still remember when we used to think that the internet was going to cause everyone to be super informed and make society a whole lot more rational. I think people still thought that in 2010, they definitely thought it in 2000. I guess we were all so young and naïve.
Nina Schick: Right, and — especially with my political hat on — you saw that when the Arab Spring started happening, the first reaction was like, wow, people standing up for democracy using the internet. This is amazing. But 10 years down the line, we know that it’s actually far more complicated than that.
Robert Wiblin: Yeah, I still remember my initial reaction to the Arab Spring, and it was inaccurate in retrospect. Yeah, extremely naïve.
Deepfakes [00:05:49]
Robert Wiblin: What are some useful examples that people might not have already heard about of deepfakes or other similar sorts of chicanery being used to trick a lot of people into believing something that actually causes harm in the real world?
Nina Schick: I think my starting point would be just to point out that when people talk about deepfakes, they often talk about them in isolation. Deepfakery is just the latest emerging threat in the context of an information ecosystem that’s already become inundated with mis- and disinformation, particularly over the past 10 years. Again, my background being in geopolitics, I’ve been able to see how political events and societies have been transformed by a lot of visual disinformation that’s been shared online.
Nina Schick: I saw it in the context of the Russian invasion of Eastern Ukraine and the annexation of Crimea, in the context of Brexit. In some countries like Myanmar, which was pretty much shut off from this information ecosystem until the junta decided to open up almost overnight in 2014, Facebook was used as a platform where a lot of cheap fakes — the forebear to deepfakes: manipulated imagery which has nothing to do with AI, something taken out of context, miscontextualized clips — were even used to incite things like genocide. The first thing to say about deepfakes is that they are just the latest evolving threat in a spectrum of disinformation, which has increasingly come to include visual media. That’s no surprise. Because in this ecosystem, we increasingly interact with digital and visual media. Again, that’s no surprise because it’s the most compelling way to communicate. As humans, we have a cognitive bias known as ‘processing fluency,’ so that when we see something that looks or sounds right, we believe it to be true.
Nina Schick: A lot of visual disinformation can, in some cases, be even more compelling than the written word. I’m sure we’ll get into specific case studies of deepfakery that exist right now. To answer this question, I would just say, rather than talking about an example of a deepfake that exists in isolation, I would say it’s the continuation of this trend.
Robert Wiblin: Yeah. I guess we should back up and just explain to people who haven’t been keeping up what is possible with deepfakes, and I guess other manipulation of video and audio. As far as I understand it, if you have a lot of samples of someone’s voice — like I guess people have with us in our various interviews — you can get any voice to say basically anything that you want. At least humans can’t tell the difference. Sometimes machine learning techniques can tell that they’re fake, but to humans, it’s pretty indistinguishable.
Robert Wiblin: I guess with video, if you have a lot of video of someone’s face and then you want to put their face onto someone else’s face in a video, you can basically do that. It looks pretty seamless. It only requires a normal computer that most people have. It’s a pretty accessible technology. Again, maybe with sophisticated forensics, you could figure out that it’s fake. Or if someone did it badly, you could figure it out. But anyone can create a reasonably convincing fake video of someone doing something. Is that about right?
Nina Schick: A deepfake is essentially a piece of media, a piece of synthetic media. That’s to say, a piece of fake media that’s either manipulated by AI — or, increasingly, as the technology improves — entirely generated by AI. The first thing to say is that the ability of AI to do this is nascent. A huge breakthrough came only in 2014 when somebody named Ian Goodfellow — who is now actually one of the lead AI scientists at Apple — when he was a graduate student, he published a paper on this challenge of how to actually get a machine learning system to create content that didn’t exist before. It was a really difficult one.
Nina Schick: Thanks to the revolution in deep learning over the past decade, AI was very good at categorizing things. That’s why we can get things like autonomous cars. But actually getting it to create something new is a difficult challenge. What Goodfellow did was that he decided he could pit two machine learning systems or two neural networks against each other in an adversarial game. He found that once you did that, it could actually generate content — which was a huge breakthrough in the sense that we hadn’t been able to see a machine learning system do this before.
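To make the adversarial game concrete, here is a minimal sketch of a GAN training step in PyTorch. The network sizes, optimizers, and data format are illustrative assumptions rather than the setup of Goodfellow’s paper: a generator maps random noise to candidate data, a discriminator scores samples as real or fake, and each update pits one network against the other.

```python
# Minimal GAN sketch: generator vs. discriminator in an adversarial game.
# All sizes and hyperparameters are illustrative, not from Goodfellow et al.
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784  # e.g. 28x28 grayscale images, flattened

# Generator: random noise in -> synthetic sample out.
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, data_dim), nn.Tanh(),
)

# Discriminator: sample in -> probability the sample is real.
discriminator = nn.Sequential(
    nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_batch):
    n = real_batch.size(0)
    real_labels = torch.ones(n, 1)
    fake_labels = torch.zeros(n, 1)

    # 1) Update the discriminator: reward it for calling real data real
    #    and generated data fake. detach() stops gradients reaching G here.
    fake_batch = generator(torch.randn(n, latent_dim)).detach()
    d_loss = bce(discriminator(real_batch), real_labels) + \
             bce(discriminator(fake_batch), fake_labels)
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Update the generator: reward it for producing samples the
    #    discriminator mistakes for real ones.
    g_loss = bce(discriminator(generator(torch.randn(n, latent_dim))), real_labels)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()
```

Iterating that loop is the whole trick: as the discriminator gets better at spotting fakes, the generator is pushed to produce more convincing ones. That same dynamic is why, as discussed later in this conversation, detection is a moving target.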
Nina Schick: It’s only in 2014 that this first paper was emerging at the cutting edge of artificial intelligence. And a few years down the line, in 2017, that’s when, as this field was developing, we started to see the emergence of the first deepfakes online. I’m sure we’ll get into that in the form of nonconsensual pornography. Again, the first thing to point out is that the unbelievable ability of AI to actually manipulate and generate entirely synthetic media is new, we’re at the very, very beginning of this journey. This can come in many different forms.
Nina Schick: It can come in audio format. It can come in video format. It can come as an image. It can even come as synthetic text. One of the unique things about synthetic media — which is very relevant to our conversation today — is its ability to recreate humans. This is now manifesting in two ways. One, AI is starting to be used to create entirely synthetic people. You already see that with thispersondoesnotexist.com, for example, where you go to that website, you click the page, and you see AI generating autonomously an image of a person who does not exist — who to your eye looks absolutely real, photorealistic.
Nina Schick: We would not be able to tell that that’s a person that doesn’t exist. Increasingly, AI will be able to do that also in audiovisual format, so voice synthesis, video synthesis.
Nina Schick: The second way this is manifesting is — again, going back to this unique ability of AI to create humans synthetically — is that it is being used to clone or hijack people’s biometrics. You can create fake media of real people who do exist. One way you can do this is by cloning voices, cloning faces. This is where the whole public hysteria or interest about deepfakes has been around, this unique ability of AI to clone your biometrics. You’re right that as this field was emerging, it required a lot of training data.
Nina Schick: I was working with an AI company in 2018, and as a research project we were basically trying to bring to a group of global leaders who I was advising — which included Biden and the former NATO Secretary General — the power of this emerging technology. We wanted to present them with a case study. We wanted to use AI to synthesize Donald Trump’s voice to say something really silly that would grab their attention. In 2018, that still required hours and hours of training data. The end result wasn’t that good. You could tell. It’s a bit like Trump, but not really. Fast forward, three years down the line, and now you already have companies saying they can synthesize someone’s voice with five seconds of audio. You no longer need—
Robert Wiblin: Really?
Nina Schick: Yeah. You no longer need to be a prolific public persona in order for your biometrics to be cloned or hijacked. Obviously, there’ll be numerous commercial applications — licensing digital images of your likeness, if you’re a celebrity or a politician, is going to become a big business. It also means that everybody potentially is vulnerable to having their digital identity, their biometrics hijacked by an anonymous actor. And not just without their consent, but without their knowledge.
Robert Wiblin: Yeah. Where are we on the technology to detect whether something is real or fake? I guess at the moment, it seems like sometimes we can and sometimes we can’t. We don’t know, at full maturity of this technology, whether it’s going to be the case that you’ll be able to tell, or whether it might just be that eventually these… I guess, what are they called? ‘Generative adversarial networks,’ is that right?
Nina Schick: Yeah.
Robert Wiblin: Whether they’ll be able to generate audio and video that is indistinguishable, even to the best forensic technology.
Nina Schick: Yeah. Again, on detection, we’re way behind. It’s an area of AI research where there’s increasing interest, especially because deepfakes have become such an area for fruitful public conversation and debate. You’re absolutely right to point out that the models that do exist now… I mean, there’s a lot of work to be done. Because the kind of models that have been put out that say oh, we can detect deepfakes with 90% accuracy or 95% accuracy…that’s only true for the training data that those models have been trained on.
Nina Schick: There isn’t actually a universal model that works with deepfakes in the wild. Whenever I talk to AI researchers about the technical challenges in this, part of the problem is, how do you get enough training data to train your models in a way that will work in the wild? Also, because there’s going to be so many different ways of actually generating the synthetic media, it’s unlikely that you’re going to be able to have one model that will be able to detect them all. You’re going to have to have a multi-layer kind of approach.
Nina Schick: Having said that, there are some really interesting companies that work in this space. Sensity is one of them. They’re a startup based in Amsterdam. They were the first startup to look into deepfake detection, very technically difficult, something that is going to be a constant adversarial game. I think it only works if you look at it from a cybersecurity perspective, where you understand that every time you build a decent detector, the generator is going to outwit it. You’re never going to, I don’t think, have a perfect detector that works for all synthetic media in the wild.
Nina Schick: I think detection as a solution can only be a risk mitigation strategy. It’s something that I think increasingly companies are going to have to invest in as part of their cybersecurity strategy. The other really interesting point, which you just mentioned, is that given the adversarial nature of these neural networks pitted against each other, the jury is still out — and AI researchers who I talk to have differing views on this — as to whether or not you get to a point where the synthetic media generation becomes so good that even an AI detector can’t find something in the DNA of that piece of media that is actually synthetic.
Nina Schick: It’s clear that checking with the human eye, or with manual digital forensics techniques — which is currently still something that is being done, because deepfakes are not ubiquitous yet and there’s a long way to go in terms of how sophisticated they’ll become — is going to become redundant pretty quickly. Humans aren’t going to be able to tell. Not only because the fidelity will improve, but because as deepfakes become ubiquitous, it’s going to be an impossible task to have a human—
Robert Wiblin: It’s just too much work.
Nina Schick: Yeah, it’s going to be too much work and we won’t be able to tell anyway. You have to think about alternative solutions. Then obviously, detection is one solution, but there are difficult challenges ahead for detection as well.
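As a rough illustration of why detection is model- and data-bound, here is a minimal sketch that treats deepfake detection as ordinary binary image classification, fine-tuning a pretrained backbone to score face crops as real or fake. Every detail here (the ResNet backbone, the preprocessing, the labels) is an assumption made for illustration, not how Sensity or any other named company works; a detector like this typically only generalizes to fakes that resemble its training data, which is exactly the ‘in the wild’ problem described above.

```python
# Illustrative deepfake detector: a pretrained CNN fine-tuned to output
# P(fake) for a face crop. Not any named company's actual pipeline.
import torch
import torch.nn as nn
from torchvision import models, transforms

detector = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
detector.fc = nn.Linear(detector.fc.in_features, 1)  # one logit: fake vs. real

# Standard ImageNet preprocessing applied to face crops before scoring.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(detector.parameters(), lr=1e-4)

def train_step(images, labels):
    """images: (n, 3, 224, 224) face crops; labels: (n,) with 1.0 = fake."""
    logits = detector(images).squeeze(1)
    loss = criterion(logits, labels)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()

def score(image):
    """Return the model's probability that a single PIL face crop is fake."""
    detector.eval()
    with torch.no_grad():
        return torch.sigmoid(detector(preprocess(image).unsqueeze(0))).item()
```

A layered system of the kind described above would combine several such signals (per-frame classifiers, audio checks, metadata and provenance) and still treat the score as risk mitigation rather than proof, since a generator trained against any fixed detector can learn to beat it.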
Robert Wiblin: Yeah, I’m very worried about misinformation and disinformation spreading online and convincing people of things in general. I’m just personally uncertain how much impact on the margin these deepfake images and audio are going to have. There’s already lots of disinformation, lots of silly things that people believe. Then as this technology gets better, is it going to cause a massive increase or not? I’m just unsure. I’m curious to probe the arguments for and against that.
Robert Wiblin: I suppose it is fairly new, but it’s been true for a couple of years now that people could have been using this technology in order to push their agenda. Especially bad actors. I mean, I’ve been following this a little bit for a couple of years. I think over that time, people have predicted that there would be more chaos created by deepfakes than what we’ve seen already.
The influence of synthetic media today [00:17:20]
Robert Wiblin: Is there any reason why perhaps things aren’t worse today than they might be, other things that are limiting the influence deepfakes and other synthetic media can have as of now?
Nina Schick: The first thing I’d say is that I don’t think it’s right to think of deepfakes and existing mis- and disinformation as two separate issues. Deepfakes are merely the more sophisticated form of visual disinformation that is going to become increasingly ubiquitous in already what is a corroding information ecosystem. When people suggest that, oh, deepfakes haven’t been as harmful as the existing modes of mis- and disinformation, I don’t see why those are two different issues.
Nina Schick: I think it is absolutely vital when you talk about mis- and disinformation to underline that way before deepfakes even started doing damage, old forms of mis- and disinformation were already doing a significant amount of real-world harm. Then getting to the issue of deepfakes more specifically, the reason why perhaps they haven’t been seen to do as much damage as sometimes has been predicted — at least in hysterical media reporting — this is particularly relevant to the realm of politics. A lot of the fears around deepfakes was that somehow an election would be swung or what happens if Kim Jong-un releases a deepfake saying he’s nuking America and then we’re in the nuclear armageddon scenario. The reason why we haven’t had that yet is because existing forms of mis- and disinformation when it comes to politics are already devastatingly effective. There’s no need to make a deepfake right now. Because, again, unlike what is sometimes perceived as being true from the hysterical reporting, the barriers to entry in terms of making a very sophisticated video deepfake are a lot higher than people think.
Nina Schick: That’s not to say that those barriers are going to exist forever, because we’ve already touched upon how this field of technologies is evolving so quickly that any kind of restrictions you see right now are not because of ethical concerns. It’s purely to do with technical limitations. The other thing to say is that they are already having a really pernicious effect on political discourse, but perhaps not in the way that people think. This refers to something known as the ‘liar’s dividend,’ which was coined in the context of deepfakes by two academics.
Nina Schick: It’s true of all mis- and dis-information, and it’s true of a corroding information ecosystem: The more people understand that anything can be faked, including video… Until now, we’ve tended to see video as an extension of our own perception. This is why video is so compelling in a court of law when it’s presented as evidence. We understand that when we see video in the movies or in the cinema, it’s make-believe. We’ve tended to think of that degree of video manipulation as something that’s beyond our reach, that it’s only a very well-resourced actor like a Hollywood studio which has multimillion-dollar budgets and teams of special effects artists that can do that kind of video manipulation.
Nina Schick: When we understand that even video can be manipulated in this way, thanks to AI, we start becoming more critical of all media, including authentic media. That is already a real-world harm that deepfakes are having, even before they become ubiquitous. People are starting to question the authenticity and veracity of authentic media, which is pretty devastating in a world where trust in digital media is absolutely essential for society and politics functioning.
Nina Schick: A really good example of this was last year when the George Floyd video came out, just this awful video of this man being suffocated to death slowly. It was so visceral, so brutal, so powerful that it united millions of people together in protest — not only in the United States, but across the entire world. At the time, I had just submitted the manuscript for my book. I was thinking to myself, it’s not going to be long before a video like that is even going to be questioned as to whether or not it’s actually authentic.
Nina Schick: Even I was surprised at how quickly that happened, and it came from a really unlikely person. Her name is Dr. Winnie Heartstrong. She was a Republican candidate for the House of Representatives. She has a PhD and she’s an African-American woman. She basically released a 23-page academic report about why George Floyd’s death was a deepfake hoax. She argued in that paper that George Floyd had actually died in 2016 and that what we saw on that video was actually a former NBA player and a former game show host, their bodies, and George Floyd’s face had been swapped into the video using AI to make it look like he was in the video and that Derek Chauvin, the police officer, was actually a former game show host.
Nina Schick: Now, she went and did a pretty public campaign about why this was a deepfake hoax. She was invited on people’s podcasts. She wrote the paper. There was a website. But in 2020, her impact was still limited. I only came across it because I was a deepfake researcher and I was looking for something like this, and it blew my mind. But in 2024 or in 2028 or in 2030, given how polarized democratic society has become, especially on these issues around identity, politics, and race, you can see how people might start believing — depending on whoever the influencer was that was telling them that this video wasn’t authentic, that it was a completely deepfake thing — that people might start believing that. So there is already a real-world harm of deepfakes even before they become ubiquitous, just in enhancing the liar’s dividend.
Nina Schick: The other real-world harm that already exists is in nonconsensual pornography. The first application of deepfakes, when they emerged at the end of 2017, they emerged on Reddit. This anonymous Redditor, calling himself ‘deepfakes,’ a portmanteau of ‘deep learning’ and ‘fakes,’ figured out how to use some of the open-source tools that were emerging out of the AI research community to make his own fake porn videos, where he face-swapped celebrities’ faces onto the bodies of real porn stars in films. When I saw that at the end of 2017, I was like, wow, this is so different from somebody’s face being photoshopped onto the body of a porn star. These women are alive in these films. They’re laughing. They’re moving. They have different expressions.
Robert Wiblin: The expressions are believable.
Nina Schick: Real cut above Photoshop. Since then, since the end of 2017, there’s an entire deepfake porn ecosystem online. Of course, it’s not only female celebrities who are targeted. Increasingly it’s actually normal women — your wife, your mother, your sister, your colleagues, your friends — and, alarmingly, also minors. The easiest form of deepfake generation is an image, a video is still a much more challenging piece of synthetic media to create. You’ve already seen apps out there where you can just take an image of a woman clothed and you can generate an image of her nude just by running that through your app. My friend Henry Ajder, who’s a brilliant deepfake researcher, did this investigation in the summer where he actually found a deepfake porn bot being hosted on Telegram which was doing just that, just generating nude images of women. There were over 100,000 images of normal women being shared on public Telegram channels. That included many images of minors. It’s an undeniably gendered phenomenon, almost 100% targeted against women, but done, obviously, without their consent and often without their knowledge. Just with their data being scraped from social media.
Robert Wiblin: We’ll come back to the deepfake pornography in a minute. To back up, I suppose there’s two different things that you could worry about with synthetic media. One might be that people would remain really credulous and just be constantly fooled all the time by synthetic media that’s completely made up. The alternative would just be that people become incredibly cynical and stop believing anything in particular. It sounds like you basically think that the latter is more likely, that people are going to eventually realize that none of this audio or video is very believable.
Robert Wiblin: Then they just become skeptical about anything they see on the news or anything they see online because any of it could be completely constructed. Then maybe you pull out witnesses who say they saw George Floyd getting strangled, and it’s like, well, these could just be completely fake people. Because we can make people out of whole cloth now and then just even get GPT-3 or something else to generate text for them to say. There’s almost no limit to the amount of complete fantasy world that you could construct if you are willing to put in the time, so why would you believe anything you see?
Nina Schick: Absolutely. I think that is my more prevalent fear. Again, just thinking about it in the context of, how does a democracy function in the age of exponential tech-led change we’re living through? The first view, that people will be fooled, is the immediate reaction that people come to. But you can become more digitally literate. If you look at younger generations, for example, who engage with lots of synthetic content — whether it’s Instagram filters or CGI-generated virtual influencers and avatars — they understand in an instinctive way that perhaps my mom or your dad might not, that that’s not real. It’s manipulated media.
Nina Schick: I think over time, with digital literacy, you can figure out to have a degree of critical awareness as you navigate the information ecosystem. It has to be said that one-half of the world, which isn’t connected into this information ecosystem yet — predominantly in Africa and India — will be joining soon. Within the next decade, almost all of humanity will have a smartphone and internet access. They, arguably, have even less protection than us in the West.
Nina Schick: There is a really legitimate debate to be had about protecting people who have had no means of building up digital literacy or being inoculated against this type of manipulated or synthetic media. I think the far more prevalent risk — especially in the democratic context — is that people become cynical. That is an existential risk to a liberal democracy. Because if you can’t agree on any set of objective facts or norms to start your debate on, how on earth do you even run a society? Again, the liar’s dividend, the corrosion of trust, the corrosion of trust in all authentic media… Increasing polarization is something that I think is a potential existential threat, at least to democracies.
The history of misinformation and disinformation [00:28:13]
Robert Wiblin: Some people are a bit skeptical that this is going to be as problematic as it might seem. There are a couple of different lines of argument. One would be, we only really developed audio and video recording in the 20th century, or only scaled up to the point where a significant fraction of even important events were being videotaped or recorded in audio. In a sense, before, like in 1900, we were living in this world where there was no way of proving that things happened. There was no way of using video and audio in order to demonstrate things. So people were stuck in the same state of potential mistrust, because all they would get was text written down in a newspaper claiming that something happened. There was no way to prove after the fact that it did by, say, showing a video. You just had to make do with believing that particular sources were credibly reporting things that really did happen in text and in an article, because there was no alternative. Maybe we’ve been in this funny twilight zone period for the last century, where we figured out how to record audio and video, but we hadn’t yet learned to synthesize them.
Robert Wiblin: Now we’re going back to where we were before 1900, before we had videotapes, or before we had photography, where you have to have good sourcing, basically. You have to have people who you believe, who claim “I saw this thing happen myself.”
Nina Schick: Okay. I see the parallel you’re drawing there. I don’t know if it’s necessarily a good thing to be like, “Well, let’s go back to the pre-Enlightenment age when people just believed whatever they wanted to believe.” That’s what you’re saying. It’ll be similar to the Dark Ages, where you just believed what was told to you. In that case, because of a lack of information, your sources of information were very limited. The same effect can be had in an age of information abundance.
Nina Schick: It’s actually called ‘censorship through noise,’ where you’re inundated with information and there is no distinction between whether it’s good information or bad information. Then similarly to somebody living in the 1600s, you’re probably going to believe what your instincts tell you is true. The fact that audio and video emerged in the 19th and 20th century as an irrefutable way of documenting evidence — and was accepted as that — was not necessarily a bad thing. Now that’s not to say that before deepfakes came about there wasn’t visual manipulation in film and audio.
Nina Schick: That has a very, very long history going all the way back to the birth of modern photography. One of the brilliant early examples is a photograph of Abraham Lincoln, who, although he was lionized after his death as this iconic president, during his lifetime was beset by rumors of ugliness. After he was assassinated, a portrait painter needed to find photographs of him looking heroic, and he couldn’t find any. What he did was take an engraving of a southern politician, John C. Calhoun, who ironically was a bitter rival of Lincoln’s during his lifetime, because they were opposed on the abolition of slavery…
Robert Wiblin: On slavery, right?
Nina Schick: Yeah, exactly. He took a photograph of Lincoln’s head and superimposed it onto Calhoun’s body, because Calhoun was the kind of politician — he had the gravitas, he had the posture — that this portrait painter was looking for. That was only discovered to be a manipulation in the 1980s, I think. So 100 years after the fact. The thing that’s different now is accessibility, scale, fidelity, and what type of media we’re talking about. We’re not talking about editing images with Photoshop. It’s far more sophisticated than that.
Nina Schick: Before, we weren’t able to say, okay, we’re going to take AI and take five seconds of your voice, and now I can clone your voice. It’s not, for me, comparable at all to what has been possible in the past. But I also absolutely agree that if you look at the history of visual or media manipulation, it’s been something that’s been around since the birth of modern media. Again, to me, that just is more of an interesting point about the nature of humanity. Technology is just going to be an amplifier of human intention, this human innate desire. It’s always going to exist to deceive, to manipulate. The visual medium is a very powerful way of doing that.
Robert Wiblin: Yeah, I guess that’s a reasonable response, that potentially this will take us back to the past, but then the past is pretty bad in a lot of ways. I’m sure there’s lots of misinformation. The fact that you couldn’t prove that people did or said anything in 1800 was presumably abused by lots of people. This raises a general question where it doesn’t seem like we have a great metric of how bad is misinformation or how misinformed people have been over time. Are people more misinformed now than they were 10 years ago or 50 years ago or 100 years ago or 200 years ago?
Robert Wiblin: It’s a slight shame, perhaps, that we don’t have a way of tracking that, because it means that people can claim that things are worse or better than they used to be. It just seems very hard to prove it one way or the other.
Nina Schick: Yeah, it’s really difficult to answer that. Because, again, how do you measure that?
Robert Wiblin: Yeah, what’s the metric?
Nina Schick: I think the more important point is, again, just going back to the starting point, which is: This is a paradigm change in our information ecosystem. Whilst I can’t say how you track throughout history whether people are more informed, less informed, etc., what we do know is true is that we haven’t faced such an abundance of bad information in all these sophisticated forms that can proliferate so easily. Again, going back to a historical example, disinformation and manipulation of the visual record is something as old as humanity itself. Joseph Stalin was a keen proponent of visual disinformation. Because again, humans have a cognitive bias. When we see something and we think it looks right, we want to believe it to be true.
Text vs. video [00:34:05]
Robert Wiblin: How strong is that effect? Because I think this might be a crux for some people. It seems like you’ve placed a lot of weight on the idea that visual and audio evidence is going to be much more intuitively compelling to people than lies in text would be. I’m wondering, is it twice as compelling? Three times as compelling? What’s the measure?
Nina Schick: I think it’s not that text cannot be compelling. It can be compelling, but there have been certain studies to show that… For example, researchers did this: if you just write that macadamia nuts are related to coconuts, people are willing to look at that and be like, huh, okay. If you put a picture of macadamia nuts and coconuts next to that text, then they’re more likely to believe it. I’m sure there’s a whole body of research looking specifically into the visual information angle versus how compelling or convincing text can be.
Nina Schick: I’m not at all saying that text cannot be compelling or convincing. Again, when it comes to synthetic text generation, if you look at what GPT-3 is capable of right now, I can see it being an extremely powerful tool of persuasion and coercion or manipulation. Because at scale, you could create human conversations or interactions in a way that is just mind blowing. There’s a great paper you should read by the Middlebury Institute of International Studies Center on Terrorism, Extremism, and Counterterrorism, where they tested GPT-3’s capability to radicalize people online, and it makes for very scary reading.
Nina Schick: Again, going back to the visual side here, the reason why I focus on this — and this is by no means saying that synthetic texts should be discounted — is because the most important medium of human communication right now is audiovisual media. People read less. People interact with text less. The majority of the world that’s going to join the information ecosystem in the next 10 years is going to be — well, literacy levels might be lower than in some Western countries. It’s already two-thirds of humanity.
Nina Schick: That’s 67% of people who go to video as their first source of information. You know this from the attention economy we’ve built over the past 30 years. Is it easier for you to scroll through your phone on Instagram or Twitter and get information that way? Or is it easier for you to sit down and read a textbook? It’s not to say that text cannot be compelling or a written lie cannot be compelling, and I’m sure, again, there’s a whole area of study to be done into AI-generated synthetic text and how that could be used as a way to convince people. But to say that somehow visual media is not the most important medium of communication, when video seems to be becoming the first source of information for most people in the world, I think is probably intellectually dishonest.
Robert Wiblin: Yeah. I think I might be a bit unusual on this, because I guess I probably spend more time… Well, I just don’t watch that many videos in the scheme of things or I think compared to other people. I’m more heavy on audio, and to some extent, text. I guess, from the engagement hours that I’ve seen, it does seem like video is taking over people’s online consumption. That’s one reason we’ve been thinking about sticking the podcast more on YouTube and trying to take videos of the interviews, is that we think it will get more engagement than audio alone.
Robert Wiblin: I wonder what we can learn about where this will go by looking at ways you can create synthetic media that have been around for quite a long time. For a long time, people have been able to, say, take a fake quote from someone and stick it next to a photo of that person, and then spread that as a meme. Now, obviously, you could never Photoshop someone’s face next to a quote and export it as a JPEG, and then submit that to a court or get a journalist to believe that they said it.
Robert Wiblin: Indeed, I guess most people if they stop and think about it, they’ll realize that it’s not really any evidence at all that the person said that thing, because anyone could make that effortlessly. They know it. They could do it themselves. From that point of view, you might think, well, once people realize that you can make these videos about as easily as you can make a quote and a photo and stick it next to one another, that it’s not going to be convincing, that people will stop and think.
Robert Wiblin: But it is true, I think, that fake quotes attached to people do sometimes go viral — because they seem intuitively right, or because people just want to spread that message. It’s a useful, very short package of information that you can pass around to promote your view. Maybe that’s where we’ll potentially end up in 10 years’ time with deepfakes. Maybe everyone knows, if they really want to think about it, that it’s not evidence of anything. Nonetheless, it’s used all the time by people to push their agenda.
Nina Schick: Yeah. I mean, you just mentioned right now, you take a quote, you put a photo of someone and that goes viral as a meme and people are like, oh, yes, Einstein really said that. That shows you it doesn’t even require a very sophisticated manipulation in order for people to believe it. We’re not even talking about AI cloning your voice or AI recreating you in films synthetically doing something or AI resurrecting someone from the dead. People literally will believe a quote next to a photo.
Nina Schick: When it comes to deepfakery, I mean, I think there are going to be people — like there always have been — who won’t fall for it, because they’re like, okay, well, this obviously isn’t true, what was the context of this video? Critical without being cynical. On the other side, there are going to be many people who do. This is one reason why cyber fraud is so prevalent, even before AI enters into it. This is why you get emails like “I’m from The Bank of Nigeria and you’ve won a million dollars, and all I need are these personal details and then I can deposit the money.” They exist because some people fall for them.
Nina Schick: I think the fidelity and the sophistication of deepfakery will just mean that the pool of people who fall for it will grow. Of course, there’ll always be those who don’t. Again, this to me just points to the inherent nature of humanity, which is that there’s a spectrum of gullibility. On one end, there’s the person who believes everything. On the other end, there’s the person who is way too cynical and doesn’t believe anything is true. Somewhere on that spectrum are you and I, but there will be a lot of people who fall on the gullible side of it and might become victims, or become manipulated in a way that they don’t even realize.
Privacy [00:40:17]
Robert Wiblin: Yeah. A peculiar upside of this synthetic media technology is that in an unusual way, I think, it could increase people’s privacy. I’ll explain how. For a while now, there have been microphones everywhere, video cameras everywhere, potentially spying on people. There was that Black Mirror episode where people were hacking into people’s computers, taking videos of them, and then using that to blackmail them. People do worry about being recorded when they don’t realize that they’re being recorded.
Robert Wiblin: After everyone realizes that synthetic media is ubiquitous and that anyone can be made to say anything using machine learning, then if you’re ever recorded saying something that you didn’t wish to make public, you’ll be free to just deny it, and say, well, this was just created using machine learning. And 10, 20, or — I guess — 40 years ago, when Nixon was recorded conspiring to commit a crime, that kind of denial simply wasn’t plausible.
Robert Wiblin: I don’t know really what to make of this, like whether this is a meaningful benefit or not. Of course, it means that people can now conspire to commit crimes in private and then if ever the audio comes out, it’s not convincing evidence in court. On the other hand, people who do just want to have more of a private life and don’t want people outing them for having a private view that they’d rather not share, they have a bit more liberty to do that, potentially. Do you have any take on that?
Nina Schick: Yeah. I mean, this is again going back to the liar’s dividend. In a world where all media can be faked, everything can be denied. Undeniably, this is going to give bad actors a lot of scope for not having accountability for their actions. We already saw this crazy case of that in 2018, where the President of Gabon, Ali Bongo Ondimba, had a stroke and was incapacitated. He hadn’t been seen for months in public and his political opponents started saying that he had died and that they actually had a body double in his place, and there was all this political upheaval in the country as these rumors started to spread.
Nina Schick: To squash those rumors, Ali Bongo and his camp decided to release his traditional New Year’s Eve address on national TV. He did this address. He looked really strange in the video, because he’d obviously had plastic surgery to fix some of the effects of the stroke, so his face looked unnatural. His eyes were wide. People watched that video and they were like, that isn’t Ali Bongo. That’s a deepfake. He’s dead. That led to an attempted coup one week later.
Nina Schick: Now, thankfully, that political situation — which was a tinderbox waiting to be lit — didn’t escalate into violence. The coup failed. That shows you the power of the liar’s dividend: how bad actors can not only use it to avoid accountability for their own actions, but orchestrate something that is in their own interest. They think, oh, that’s a deepfake. He’s dead. Let’s do the coup. That, to me, is quite alarming. Another great example is Donald Trump’s infamous moment — arguably, at the time, the nadir of American politics — when the video emerged of him talking about grabbing women by the pussy, etc. Loads of people at the time thought, well, this is it. He’s ended his bid. He won’t be the Republican candidate anymore.
Nina Schick: It didn’t end his bid. He came out and he apologized, churlishly. It was ‘locker-room talk’ and his supporters forgave him for that. Now, he could just say that’s a deepfake. He actually started doing that when that video came up later, after 2016. He’s like, “That was fake, fake news. It’s not real.” Ultimately, to me, this just means that if you’re well-resourced, if you’re lawyered up and you have a public platform and you have people who follow you, they will probably believe your version of events. And you can probably get your lawyers to say that that’s fake or it’s not true.
Nina Schick: However, if you are not well-resourced, and you’re an individual who basically gets blackmailed because someone’s made a fake porn video of you and they’re like, “If you don’t pay me $1,000 I’m going to release this on YouTube and I’m going to put your personal details on it so it’s the first thing that comes up when people Google you,” then you’re vulnerable. There’s this double edge to that as well. I think, ultimately, it’s going to come down to how well resourced you are, how much you’re able to protect yourself or to push your version of events.
Robert Wiblin: Yeah, it is very interesting. I mean in my mind, there’s no doubt that Trump, if that came out today, would say that it was fake. I suppose we might just see this happen all the time with politicians now, that there’ll be no audio or video that would be able to convincingly demonstrate that they’ve done anything wrong.
Nina Schick: There was another really interesting case in Malaysia. This is a country where homosexuality is illegal; you can get lengthy prison sentences for it. One of the ministers in the cabinet, a close associate of the Prime Minister, was allegedly involved in a homosexual affair, and a video leaked. He just said it was a deepfake. But the other man who was involved admitted that it was true. So that man was put in prison. The Prime Minister’s associate just said, “My political opponent is trying to smear me,” and he didn’t get punished. Again, this was him insisting that this authentic video was a deepfake. He got away with it, whilst the other guy was put in prison.
Robert Wiblin: I actually feel like this demonstrates my original point, that there’s a potential upside. Because I don’t think there’s anything wrong with having gay sex. If he had gay sex with this man, I hope he enjoyed it. Good for him. The fact that he’s able to deny it because of the deepfake thing, it’s actually good because it’s protecting his privacy. He’s able to do something that’s illegal, but shouldn’t be illegal.
Nina Schick: No, no, no, no. Whilst I agree with you that nobody should be punished for having gay sex — that’s a problem in Malaysia — I’m just talking more about the fact that when this video leaked, he, being in a position of power, was able to defend himself, whereas the other guy who was not in a position of power was put in jail. My point is more about people in power avoiding accountability, not whether or not people should be allowed to have gay sex or not. Obviously, I think people should be allowed to have gay sex. My problem is more with Malaysia for making it illegal. But this is about one side being like this is a deepfake, I had nothing to do with it, and the other guy being punished for it.
Robert Wiblin: Yeah. It is interesting. I suppose a question there that we’re not going to get answered is, what would have happened if the other guy denied it as well, and said yeah, it was a deepfake? Potentially, both of them could have avoided any punishment if they’d both been willing to go along with the same story that it never happened.
Nina Schick: Sure. Putting aside our views on the fact that homosexuality should not be punished, you still have a minister who… The precedent that sets for accountability… In this case, it’s a video pertaining to a gay relationship. But what if that was a video of him massacring civilians? And he’s like “Well, that’s a deepfake. It’s got nothing to do with me.” Or a video of him accepting a bribe?
Robert Wiblin: Yeah. I mean, I guess the overall thing is that being able to deny that video or audio demonstrates you’ve done anything cuts both ways: if you really do something bad, you might be able to get away with it; and if society is out to punish you, or condemns something that shouldn’t be condemned, you’re potentially able to get away with that as well. On balance, it seems like it would be better if we were able to tell what people have done in general. On the other hand, because society does sometimes condemn people for doing things that are actually not wrong at all, there is at least this partial compensation: it increases people’s protection against that.
Nina Schick: There are a lot of human rights organizations involved in this space. One that I would really encourage listeners to look at is Witness. It’s a brilliant organization. They’ve been doing a lot of work on disinformation and manipulated media and deepfakes. Their organization is all about supporting civilians who are trying to document human rights abuses in parts of the world where those abuses are very prevalent. The way they put it is that there’s already a very fragile consensus on audiovisual media coming from some parts of the world that are very dangerous and where there are lots of human rights abuses.
Nina Schick: I think their overall take is that the prevalence of manipulated media and deepfakes will further put human rights activists and civilians in danger, rather than the other way around. Because ultimately, it’s not that people will be able to get away with indiscretions that we think shouldn’t be indiscretions; it’s more that people in power will be able to twist the narrative in a way that suits them. I think the deciding factor here is not whether or not you did something that breaks a moral code we don’t agree with, but whether or not you have the power to avoid accountability for your actions. I still think that’s going to be more prevalent.
Robert Wiblin: The dominant effect, yeah.
Deepfake pornography [00:49:05]
Robert Wiblin: Let’s talk a bit more about deepfake pornography, which you mentioned earlier. There’s this pretty harrowing story about an Indian activist, a woman who was basically completely silenced and driven out of public life by, I guess…an activist who is part of the BJP — the Hindu nationalist party that currently rules India — who produced, I think, deepfake pornography with her face on it, and then used that to get her harassed and disrespected.
Robert Wiblin: I guess there’s a lot that we could talk about with deepfake pornography. One uncertainty I have is like, how long will this continue to work as a method of harassment? I wonder whether over time, people will just become so familiar with this and it will become so overplayed that it might just stop being effective, basically, because everyone has seen this or people have seen this trick used so many times, and they realize that it’s not credible evidence of anything whatsoever. It might stop having the sting that it has had over the last few years. What do you think of that?
Nina Schick: I think that’s a very optimistic outlook. Women have been targeted like this since before deepfakes were around. The same kind of fake pornography was created many, many years ago — for as long as Photoshop has been around, women have been depicted in acts they never committed, which is so humiliating and demeaning for them. Even if it’s not revenge porn, even if it’s fake porn, their experience has been that it is devastating, because it’s the first thing that people find of them online. They have no control over it.
Nina Schick: They don’t know who has made this of them. There is no legal recourse. If you go to the police and someone’s made deepfake pornography of you, there’s nothing they can really do. One of the craziest things I found out is that if you look at some of the early deepfake videos, where actresses’ faces were face-swapped into real porn films, one of the easier ways of getting that content removed was for the porn company to file a copyright claim, rather than for the actress to argue that her image had been abused in this way.
Nina Schick: Now that might get that content removed from that one website, but what about the hundreds of other websites or thousands of other websites? What about the content that’s in different jurisdictions? What about if you know who’s done it, but because they live in a different country, there’s nothing you can do to… They get away with it. The first point I’d make is that it’s deeply damaging, even though it’s not something you’ve actually done. That’s been the unanimous consensus amongst women who’ve been targeted in this way.
Nina Schick: The second thing I’d say is that if, again, you are well-resourced — like a Hollywood actress or a politician’s wife or a high-powered executive — you probably have resources in place: lawyers sending out cease and desist letters, a PR team that can come out and basically defend you publicly and say this is all fake… You have a chance to get your side of the narrative out. For a lot of normal women who don’t have access to these resources, it’s hugely embarrassing.
Nina Schick: If it comes up on a Google search, it’s the first thing that people see. It might deny them opportunities for employment. It might deny them relationship opportunities. It often makes them feel trapped and unsafe, to the extent that they’re not willing to leave their house, because they don’t know who’s targeting them in this way and why. Another really harmful thing is that you’re starting to see this used against minors. Already, malicious content is being spread around in high schools — for example, girls being filmed doing real sex acts, and then that footage being shared. That has been devastating to lots of female teenagers.
Nina Schick: Now imagine you can create fake pornographic content of them, and that’s being shared around their peer group. If you tell a teenage girl, “Oh, don’t worry about it, because it’s not real and everybody knows that it’s deepfake porn” — I don’t think that’s going to be her experience. I don’t think the sting of deepfake pornography is going to cease in terms of how effective it is. That certainly seems to be the consensus amongst women who’ve been targeted in this way.
Robert Wiblin: Seems like we’re in very bad trouble then. There’s this quote from Scarlett Johansson, who’s been targeted a lot with this kind of deepfake pornography, and she’s extremely upset about it. She’s basically concluded that trying to stop it is like trying to hold back the sea with a bucket. It sounds like she stopped trying, because it is so hard to stop. Presumably, we’ll just see an awful lot of it, and if people don’t get bored of it, it will continue to cause a lot of damage. I guess what I’m saying is that it seems like it’s going to be terrible.
Robert Wiblin: I wonder, is there any way that society can adapt to this? I mean, I suppose like one extreme adaptation would be people thinking that it’s not bad to be in pornography or to be in a sex tape. I suppose that’s a complete fantasy that we’re going to reach that point anytime in the next few centuries.
Nina Schick: That is a fantasy. Because the taboo and the stigma that’s associated with this is so ingrained in so many cultures. Maybe less so in the West, but even then you hear from Western women who are targeted, whether by revenge porn or fake porn or deepfake porn. This is a spectrum of different types of image-based abuse, I guess you could call it — whether consensual or non-consensual, whether it actually happened or not, authentic or synthetic.
Nina Schick: Going back to the Rana Ayyub case, I mean, she is an Indian investigative journalist, right? I grew up in South Asia. I’m half Nepalese. And she talked about when this fake pornography of her was circulated — and this was done for a political motivation, because she was very critical of the ruling Hindu nationalist party. She was somebody who was quite used to being attacked publicly, and she has soldiered on and built a thick shell, but when she was targeted specifically by the porn… And whoever made the porn also released her private telephone number. They doxed her. And then she was inundated with messages asking for her rates for sex, and threatening her life. And what she said was that that was so different from any other kind of harassment she had experienced before. She went to the police station to show police officers this content of her that was circulating, and the humiliation when they were watching it snickering and asking her, “Are you sure this isn’t you?” was almost unbearable.
Nina Schick: I think the cultural, societal taboo around pornography is not something that we can overcome in a day and be like, “Oh, well, don’t worry because it’s just deepfake porn anyway, so get over it. That doesn’t really impact you.” And the only way I think we can really start dealing with it, specifically when it comes to deepfake porn, is just understanding that a lot of our methods for recourse — including our whole judicial system — aren’t fit for purpose in this sense. Because you can go to the police, and they’re like, “Ah, yeah. Well, we don’t know who made it. And what do we do if they are on the other side of the world?”
Robert Wiblin: Is it actually a crime? Is there anywhere that this falls into the criminal code, or can you sue people for a tort or something like that?
Nina Schick: There is no national legislation in any country in the world that criminalizes deepfake pornography. There are some state legislative efforts in the United States. And here in the U.K. there is some discussion about this in the context of revenge porn, and whether revenge porn law should be broadened out to apply to deepfake porn as well. But, for me, this is, again, just a very interesting case study of how in the exponential age, a lot of our institutions — including our judicial systems and the way that we think about crime — are no longer fit for purpose for the different world that we now inhabit, right? The paradigm has completely changed, and we’re just playing catch-up.
Nina Schick: And the other thing I’d say about deepfake porn is there are women who work specifically on deepfake porn — activists and researchers — and they’re doing amazing work. And often — and I can understand why — they get really frustrated that when people talk about deepfakes, they talk about all the cool applications of synthetic media (we haven’t gotten to this yet, but I’m sure we will): how it’s going to change entire industries, how it can also be used for good. Or they talk about deepfakes in the context of political disinformation. And these women get frustrated; they say, “You’re not focusing enough on the real-world harm, which is this non-consensual pornography against women.” But I would say that, again, this is a spectrum, and they’re all interconnected.
Nina Schick: Just talking about political disinformation or synthetic media in its various commercial applications, that doesn’t mean that you’re not talking about how it’s weaponized against women. And what’s more, this case study of non-consensual porn against women, for me, is a sign of things to come, which is that it’s going to become an emerging civil liberties issue. If somebody can take your likeness, if somebody can hijack your biometrics and put you in content without your knowledge or consent, it’s going to go way further than porn. It’s just that porn is pioneering. Just like with the story of the internet, where first it was basically people like, “Oh, what’s the internet? It’s where weirdos go and share porn. That’s never going to take off.” Again, in this case, porn is pioneering. And it tells you, I think, there are lessons to be learned in terms of data privacy and civil liberties.
Robert Wiblin: Yeah. I hope to come back to law reform options to try to minimize the damage of deepfake pornography and I guess other synthetic media where people’s identity is appropriated later on.
Russia and other bad actors [00:58:38]
Robert Wiblin: Let’s talk a little bit about Russia. It seems like one reason that all of this hasn’t caused more chaos than it has already is that there just aren’t that many well-resourced, coordinated, thoughtful actors in the world whose goal is to cause chaos and harm. Most people we know have their jobs, have their families, they have things to do. And maybe they troll people online occasionally, but it’s not their day job to go out and mess with people.
Robert Wiblin: But, I guess Russia is like… Or the Internet Research Agency — the St. Petersburg troll farm — they’re one of the main groups that does have a lot of money and basically is just out there to damage United States society as much as possible. Are we likely to see other groups that are equivalently skilled and resourced appear over time? Will China become the new Russia and be interested in sowing chaos in the same way? Or are there ways that private or criminal organizations might get into this because there’s some way of making money? How many people might end up working professionally on sowing disinformation?
Nina Schick: I think the broader point is that modern technology has made disinformation more accessible to a wider range of actors. Whereas perhaps in the 20th century during the Cold War, an orchestrated, sophisticated disinformation campaign — like the ones the Russians ran in the 1980s, where they perpetuated the myth that HIV/AIDS was created by the CIA to kill African-American people…which, by the way, was a very effective way to manipulate existing tensions within U.S. society around race relations — required a lot of time. It required a lot of careful planning. It required a lot of careful coordination. However, now, when you look at the information ecosystem, a lot more actors are in the game. You don’t have to be a state actor to make a piece of disinformation that can go viral and cause real-world harm.
Nina Schick: I already mentioned, I think, how Myanmar is a very good example of a country that was basically shut off from this ecosystem. Then in 2014, when the military junta decided to loosen up restrictions and they got the internet overnight, Facebook became synonymous with the internet there. And lots of local groups — ultra-nationalist monks — started spreading a lot of disinformation on Facebook about the Muslim minority Rohingya group. There had always been tension between the majority Buddhists and the minority Muslim Rohingya, but a lot of the disinformation that was spread — not by a state actor, but by individuals and groups on Facebook — helped lead to the genocide of the Rohingya people starting in 2017.
Nina Schick: It’s not only state actors that have the ability now to orchestrate and disseminate disinformation campaigns; groups do, and individuals do as well. This has been the democratizing power of technology. But, going back to your question about other state actors, there is no doubt that traditionally, when it comes to state-sponsored disinformation or influence operations, Russia has been the most sophisticated. This goes all the way back through their long history. I mean, the word disinformation comes from the Russian ‘dezinformatsiya’ — a KGB black ops term.
Nina Schick: Disinformation operations were a terrific way for them to punch above their geopolitical weight. There was a study from Princeton — I think from 2019 — that tried to assess state-led disinformation campaigns, and it found that Russia was still responsible for over 70% of state-led disinformation campaigns globally. However, that balance is undoubtedly shifting. The Chinese are definitely becoming more interested in infiltrating Western information spaces. There was a big transformation starting in 2019 with the protests in Hong Kong, and that’s significantly accelerated in the last year, thanks to COVID. And some of what you’ve seen is really brazen.
Nina Schick: A lot of Chinese diplomats now have Twitter accounts. Given that Twitter is banned within China for Chinese citizens, they’re not doing it for the benefit of Chinese citizens. It’s targeted at Western information spaces. And there was this incident, I think a few months ago, where the spokesperson of the Chinese foreign ministry pinned a very unsophisticated, cheap, fake, manipulated image to his Twitter page — something a kid could have done in Photoshop. It showed an Australian soldier basically holding a knife to the throat of an Afghan boy. And this was in response to a report that had come out saying that allied forces, including Australian soldiers, had been responsible for civilian deaths during the Afghanistan campaign.
Nina Schick: Now, here you had a very badly manipulated, cheap fake image — I mean, could the symbolism be any more blatant? It’s an Australian soldier standing on the Australian flag, holding a knife to this little boy’s throat. The boy was holding a sheep…lamb to the slaughter. And that became a major diplomatic incident, because they were saying, “This is true. These are your human rights abuses, Australia.” Australia was like, “Take down this manipulated image, otherwise we’re going to curtail our trading relationships.” China is newly aggressive in the information operations space. I think it’s just a new theater of war, right?
Nina Schick: We’ve created this information ecosystem where we’re all interconnected. It’s actually something I’m going to be looking into for my next book: the geopolitics of this virtual space are fascinating. And not only state actors are involved — other organizations and individuals are as well — but on the state level, yeah, we’re going to see a lot more of all of that.
Robert Wiblin: Yeah. That story about the Chinese diplomat creating this obviously fake image of an Australian soldier killing an Afghan child… It almost caused me to reinterpret all of this, because it’s so clearly fake. I guess they’re called ‘cheap fakes’ — dodgy Photoshop jobs, where it wouldn’t be hard to tell that it’s not real. And yet that has a lot of impact. It suggests to me that the issue may be less that people are getting tricked by sophisticated imagery, and more that people can push their agenda, as they always could, using emotive ideas, emotive language, emotive imagery — and people don’t care about the fact that it’s not real, because I guess in their mind… Well, in this case, I think it does speak to something that really happened, right?
Robert Wiblin: There were some reports suggesting that Australian soldiers had killed people when they shouldn’t have, but it was being used, I guess, as a way of punishing Australia for something that they really had done that was wrong. But I guess, I mean, I almost wonder… Is it even that wrong? Because inasmuch as Australia was in the wrong to begin with, and then they’re using this imagery in order to highlight that fact and make it more emotive… That almost seems within the bounds of reasonable discourse, or at least with the way people often talk to one another. Maybe not if they are pretending that it’s a real image, but I don’t even know whether that really was what they were doing.
Nina Schick: No, they were pretending it was a real image.
Robert Wiblin: Oh, they were? Okay.
Nina Schick: But, again, to me, rather than getting drawn into the ethical considerations about Western democracies… And there is a problem here, right, Western democracies preaching about human rights and disinformation, etc. And this is how they certainly see it from Beijing or Moscow, “You hypocrites. Don’t you dare lecture us on our human rights record when your troops have been found guilty of perpetrating civilian massacres in your military campaigns.” But, there is a difference, I think, in terms of scale. I mean, what happened in Afghanistan is reprehensible. Any civilian deaths… And I’m sure many Western countries were involved in that, but it’s still not on the scale of, for instance, what is being done in Xinjiang province against the Uyghurs, right? You can see why, to the administration in Beijing, it seems very hypocritical, and so they want to point it out.
Nina Schick: But, the broader point I was going to make is that it’s really interesting how everything has now just become about information warfare, right? Not only between state actors, but even amongst individuals in society. And you see this in the increasingly polarized political debates around the West. And the problem is, if truth or reality doesn’t matter, then the only thing that matters is the pursuit of power. And if the only thing that matters is power — and anything in the pursuit of power is permissible, including sharing manipulated media or spreading disinformation — then I don’t know if that’s the type of society that I want to live in. But, hey, maybe I’m old fashioned because I still believe in Enlightenment values and facts and stuff like that.
Nina Schick: Of course, there’ll be people who see it differently. And this is actually one of the really interesting things about the information ecosystem that we’re building for ourselves, because it might play out differently in different countries, depending on the political system. For instance, going back to deepfakes, it’s interesting that China is the only country that has outlawed deepfakes outright. And I think that kind of reactive policy making is not the way to go, not least because, again, I’m sure we’ll talk about it, but not all applications of deepfakery are going to be bad. In fact, there’s going to be so many legitimate commercial and even social good applications. By passing that legislation, the central government is saying, “We are the arbiter of truth. We can tell you if a piece of media is synthetic or not.”
Nina Schick: If video were to emerge of, say, human rights abuses in Xinjiang province, they can say, “Well, this is a deepfake,” right? If you have control over your information ecosystem in the unique way that the Chinese government does amongst its own citizenry — where they can control what information they have — this could actually become a brilliant tool of coercion. You have more power to shape the reality amongst your citizenry. And there is a cultural element at play here as well, which is that, again, I’m half Nepalese, so I grew up on the borders of Tibet/China. We’re always in fear of our big Chinese brother across the border, but it’s a far more collective society, right?
Nina Schick: And if the grand bargain has been, look, we’re not going to care so much about human rights stuff and individual rights like they do in the West, as long as we as a society feel like our condition is improving. We’re growing richer. My children are in a better position than I was, or that our grandparents were, so we’re fine with the government having control over the information ecosystem. It could play out very differently in the West, where actually, because of free speech and freedom of information, everybody has the right to disseminate all kinds of information or disinformation. And rather than society being pushed towards a collective direction because of one single narrative that’s accepted as true, you have a complete corrosion of society, because no one can accept anything as being true anymore. It could play out differently in different parts of the world.
Robert Wiblin: One ray of hope that comes out in the book is that, as far as we can tell, it seems like Estonia has pretty successfully combated quite aggressive Russian disinformation efforts. Estonia used to be part of the Soviet Union, and Russia has treated it as a borderline possession, or certainly as part of their sphere of influence, and they haven’t liked the steps that Estonia has taken to integrate with the West. Could you talk about how Estonia has managed to harden its society against disinformation efforts? And can other countries learn from this and potentially follow suit and put themselves at less risk than they would be otherwise?
Nina Schick: Yeah, we do have one happy case study of how we can combat this kind of thing. The Estonia case study is really interesting, although it is slightly different from a country like the United States or the U.K., in the sense that it’s a small country, it’s very homogenous, and they constantly have this fear of the Russian bear on their border. And the experience of living during the Cold War under Soviet influence was enough for them that when they became targeted by aggressive Russian disinformation campaigns in the early 2000s, they decided to build society-wide resilience against it. I think the way that they think about the defense of their society is just like cybersecurity, in the sense that you think about building a moat around your castle, and then you have the inner walls, and then you have the outer walls, and then you have the ramparts.
Nina Schick: You build layers of security so that for any malicious actor trying to infiltrate society, there are always deterrents. This included all kinds of things: digital education, making disinformation studies something that children studied at school, and having armies of volunteers who would fight disinformation online — I think they were called the ‘Estonian Elves’ or something. There’s an actual government strategy for psychological warfare and disinformation operations. The Estonian case study is interesting because it shows how a society-wide mobilization can be effective in deterring some of the worst impacts of disinformation. But it’s also different from what you’d be facing in the United States, because in the case of Estonia, there is a clear outside aggressor, right? When you’re looking at the United States, even though Russia and China and other state actors — increasingly Saudi Arabia and Iran — are aggressors, there is also a problem of homegrown disinformation, right?
Nina Schick: It’s lazy and intellectually dishonest to be like, “Oh, all of America’s problems and the election of Trump are due to Russia.” And because some people have insisted on pushing and peddling this very simple narrative, the pushback from the other side has been like, “Okay, this is delusional. Russia did nothing.” The reality is they did something — they intervened — but they’re not responsible for all of the problems in American society. In large part, those problems are because of homegrown disinformation, and not only disinformation. There is a distinction to be made between disinformation and misinformation.
Nina Schick: Disinformation is when it’s done by a bad actor with malicious intent; misinformation is just bad information that spreads without necessarily any bad intent. Your naïve mother sending you COVID stuff, or everyone who believes in the QAnon conspiracy theory — they’re not bad actors. They don’t mean to undermine society. They genuinely believe it’s true. It’s really complicated. The Estonian case study is definitely a really good one in terms of how to fight information operations that are launched against you by an aggressive state actor, though.
2016 vs. 2020 US elections [01:13:44]
Robert Wiblin: I guess another possible ray of hope is that I expected Russia to intervene in the 2020 election — and perhaps have a similarly devastating or powerful effect as they did in 2016. In 2016, it seems like the thing they did that was by far the most influential was hacking the DNC emails and gradually leaking them in a kind of manipulative or misleading way. And I kind of expected, well, why wouldn’t they just try to do the same thing in 2020? It had proven itself to work the first time around. And yet, for some reason, they didn’t seem to manage to make quite as big a splash. I’m curious to think about why that is. Could it be that U.S. society has learned this trick, and maybe adapted in some ways that have now made it a little bit harder for Russia to persuade the media to cover their misinformation in quite the same credulous way that they did in the past?
Nina Schick: There’s an element of that. And certainly, the DNC WikiLeaks Podesta emails were the classic hack and dump, right? You hack, you dump, you release. And certainly, the media was more responsible in reporting on unverified hack and dump operations — one of those was the whole Hunter Biden laptop saga, which didn’t take off in the same way that the Podesta emails did. By the way, the Podesta emails are so fascinating because they led to Pizzagate, which has since taken on a life of its own. It’s now led to QAnon, which is not only this weird fringe thing in the United States, but has become the first global internet cult, right? We see it here in the U.K. as well. But it’s not that they didn’t try in 2020. They absolutely tried. And they tried in the same ways that they did in 2016: by attacking the actual election infrastructure and voting machines, by hacking like they did with the DNC, but also by running influence operations on social media.
Nina Schick: And what happened with their influence operations on social media in 2020 was similar to 2016, but far more sophisticated. In 2016, you basically had Russian agents sitting in St. Petersburg at the IRA (the Internet Research Agency) who spent years creating fake communities, pages, and personas, posing as authentically American, and they played identity politics. They would fill these groups up with a distinct pride in their identity, and they would make these groups across the political spectrum — ‘Texas Secessionist and Proud,’ or ‘Gun Owners and Proud.’ But on the other side, a lot of focus on the African-American community. And over years they groomed these communities so that real Americans started joining them. You instill pride in a distinct identity — often a racial identity — by sharing memes and empowering quotes. And then as they got closer and closer to the election, they started injecting these communities with (often legitimate) political grievances.
Nina Schick: So in 2016, it would be like: don’t go vote, because Hillary Clinton doesn’t care about black people. Donald Trump doesn’t either, but neither does Hillary. We cannot quantify how successful those information operations were in terms of actually suppressing the Black vote, or what they did in terms of impacting the result of the election. If anything, it may have been a significant factor in an election that Trump basically won by 70,000 votes. But I think, again, it is really intellectually lazy to say that Trump only won because of the Russians, right? It completely overlooks all the other issues in American society that led to the election of Trump. I think in 2016 it made such a splash because this was the first time that American democracy had been infiltrated in this way. So it’s not that the same thing didn’t happen in 2020 — my God, the same thing was happening in 2020.
Nina Schick: And not only that, it wasn’t just Russia. Saudi Arabia was trying, Iran was trying, China was trying. There were far more influence operations led by foreign state actors than in 2016. But the other thing, which actually became the headline — and this is what I was arguing in my book — was that the bigger battle would be the domestic information war. And you saw that with the president. As soon as COVID hit, it was like: oh, well, this election’s going to be stolen because of mail-in voter fraud. This was the backup strategy, because he was never going to concede. He was never going to go quietly. This was his way of saving face: a disinformation campaign, seeded from the beginning of the year, that the election would be stolen. And that led to real-world violence.
Nina Schick: When we saw the storming of the Capitol on January 6th, it had everything to do with the fact that the president and his closest associates had been spreading a lie that the election had not been free and fair. And the pernicious effect of that is that a lot of people still believe it now. And if a Republican candidate wins the next time around, I’m sure some Democratic voters will believe the same — that somehow the election was stolen from them…like they did in 2016, right? That oh, he didn’t legitimately win; it was because of the Russians.
Robert Wiblin: Yes, it does seem like the domestic misinformation is substantially more influential, I guess. I’m not sure exactly why. Perhaps the volume is just far higher because there’s so many more Americans involved in American politics than there are Russians, even if some of them are professionals.
Robert Wiblin: To what extent is it now the norm for all political campaigns — including ones that you might previously have thought of as being reputable — to use disinformation? And do you think that we’ll see just basically all political parties potentially using synthetic media to push their agenda in future? Or do you think it will remain something that only fringe and generally less-trustworthy political groups use?
Nina Schick: Again, I think all political campaigns, all politicians, all people — they lie, right? This is an inherently human quality, especially if you’re trying to win public office or hold a place of leadership in society. That’s not to say that every politician or every human being lies all the time. But it would be incorrect to say that misinformation and disinformation only became prevalent in politics recently. They’ve been there for as long as we can remember. The difference now is that we inhabit a different information ecosystem, where there is more of an abundance of mis- and disinformation because anybody can make it — not just well-resourced actors. It doesn’t have to be in written form; it can be in visual form, it can be made as video. And more importantly, trust is becoming something that is increasingly in short supply. So if you are a politician or you’re standing on a public platform and nobody trusts anything you say… Which is in part to do with the fact that there are so many sources of information in this information ecosystem that there are no longer any monolithic authorities, right?
Nina Schick: Fifty years ago, you’d have, I don’t know, the state broadcaster, the leading political parties, the leading newspapers, and most people in society would have a more monolithic view based on those arbiters of information. That is no longer the case. You can get your information from anywhere in the world. And I suspect what’s going to happen is that rather than placing trust in institutions or political parties, people are going to start placing trust in their influencer of choice. I think that is definitely going to have an impact on how political campaigns are run. But the basic idea that somehow misinformation is not inherent in politics is just not true.
Robert Wiblin: Yes, it’s a funny thing that it seems like in the U.S. it has for a long time been acceptable just to lie outright. Whereas the U.K. has this very funny political culture where it’s acceptable to completely mislead people about the substance of what’s going on, as long as what you say is very narrowly, technically true. So they spend a lot of time picking particular start and end dates for statistics, or gerrymandering the definition of some statistical term, in order to create a completely misleading impression of what’s going on. The government last year spent all this time trying to convince people that they were doing far more COVID-19 testing than they actually were, by counting tests when they were sent out rather than when they were actually used, and being very misleading about how they presented that.
Robert Wiblin: The fact that there already is so much dishonesty, and people are justified in mistrusting things that lots of authorities say, does make me wonder: how much is any of this really different? I feel like I’ve managed to somehow muddle through, despite the fact that lots of people are trying to mislead me. I’ve managed to keep some reasonable connection to the world, and to have some idea of how the U.K. government is performing, even though they’re trying to mislead me constantly. So yes, perhaps the future will look superficially different because the techniques are different, but the underlying struggle will be very familiar.
Nina Schick: Yes. And I think this again goes back to what I was just saying about how there are so many sources of information now that there are no longer monolithic institutions or parties or people in power who can dominate with their narrative. And perhaps in the past, the fact that there weren’t so many sources of information, and there were monolithic arbiters of narrative or truth… In a way, that made society more cohesive.
Nina Schick: One of the political implications of this new information ecosystem is certainly that many people, myself included, don’t really feel that there is a political party that represents my norms or that stands for what I want. So I think there are just increasingly people who feel disconnected, cynical, and mistrusting. And as I already mentioned previously, that isn’t necessarily the way that the information ecosystem is developing in other parts of the world, specifically those where there is an authoritarian regime that has more control, and is using the age of information to exert more monolithic control over collective narratives. So it will be interesting to see if this develops differently in Western democratic societies than in other countries around the world.
Authoritarian regimes vs. liberal democracies [01:24:08]
Robert Wiblin: Yes, let’s talk about that. In the book, it comes across fairly strongly that you think these new methods of misinformation and synthetic media are likely to empower authoritarian regimes — to make it easier for them to control their populations, and maybe easier for countries to slip from being imperfect democracies into being partial authoritarian states. That seems very intuitive to me.
Robert Wiblin: I wonder whether there’s a possible case in the other direction though: that the ability to promote all of these different ideas and to create evidence in favor of any position potentially just leads to social decay and chaos, more than it leads to one organization being able to control the narrative. I guess I don’t have a good sense of how good China is at controlling the narrative internally today — whether there are groups promoting alternative narratives that have some purchase in the population. Yeah, do you have any thoughts on this? Is there any way you could imagine that in 30 years’ time, we’ll look back and say, oh, actually, this was just as troublesome, just as problematic for authoritarian regimes as it was for liberal democracies?
Nina Schick: Whilst not wanting to overstate the power of the Chinese state — they of course have blind spots and are vulnerable; they’re not omnipotent — it is undeniable that China has a unique internet ecosystem, and it’s undeniable that the state has a lot of control over what can and can’t be said, and what information can and can’t be spread, in that ecosystem. And the perfect example is in the context of COVID. If you look at what happened when they first started getting reports of this mysterious virus emerging in Wuhan, and how quickly the government was able to censor any information about that virus on all social media platforms — and not only that, but also censor any reaction that was critical of the government censoring this information — there is absolutely no equal precedent in Western society. But you know, that whistleblower, the very brave doctor from the hospital in Wuhan — he became a national hero later. Tragically, he passed away because of COVID.
Nina Schick: But at the time, when he tried to raise the flag, he was hauled in to see the authorities and was made to sign a statement saying that he had disrupted the public order by spreading disinformation and spreading fear. So, the fact that the government tried to silence him is not lost on people, right? He became like this mythical hero within China. Nonetheless, I think my point still stands. Again, whilst not saying that they’re omnipotent, the kind of control the state does have over the information ecosystem is unique. And it’s interesting to see how other countries like Russia are trying to build an information ecosystem — an internet ecosystem — that’s far more like the Chinese one.
Robert Wiblin: Yes, one thing you say in the book is that it’s very difficult, maybe impossible, for a democratic, pluralistic country to hold together without at least some shared conception of reality — some shared conception of the basic empirical facts about what’s going on in the world. That also seems plausible. I wonder, can we think of any counterexamples? Can we think of any countries that somehow have managed to hold together despite being so pluralistic that different political groups or different ethnic groups really just do not see eye to eye whatsoever? The Ottoman Empire, the Mongolian Empire — are there any very large bodies that somehow managed it? I suppose they weren’t democracies, but yeah, how far can you potentially stretch a public while keeping them in the same country?
Nina Schick: Well, looking at historical examples of huge empires that have held together, like cohesively with a strong sense of identity… I mean obviously there’s the entire history of China, but that is in large part to do with the domination by the Han Chinese and the strong sense of one centralized collective identity, right? This feeling that the empire is the center of the earth. And in fact, the history of China in the 20th century was an anomaly. And now, China is coming back slowly to take its place in the world where it historically always has been. But of course, the Chinese example… I suppose it doesn’t work that well in the sense that the identity that held it together as a cohesive society was not super diverse, it was based on the identity of being like Han Chinese, right?
Nina Schick: It wasn’t like oh, we’re united in our diversity, and we welcome all the other different groups. But the real case study of a country that does work as a pluralistic society is the United States, right? The most important democracy in the world, the most powerful country the world has ever seen, the richest country in the world, has traditionally held together because there is a strong sense that, despite all our differences, despite us being a country of immigrants, we have this American identity. And that’s something that’s missing in Europe, by the way. If you’re French, you feel more French than European, or if you’re British you feel more British than you do European, which is why the E.U. has been this failed experiment in the sense that there is no European demos — whereas in America, a civil war was fought, a war of independence was fought, but there was certainly a sense of, okay, we are the United States of America. So, it’ll be very interesting to see whether this plurality and diversity, which has traditionally always been America’s strength, will develop in this information ecosystem. And I’m not saying America is doomed at all. This is a resilient country. I read a great quote recently — was it Oscar Wilde? Anyway, it was an author who had basically said that his death had always been over-reported and that he was still very much alive. And I think—
Robert Wiblin: Yes. “Reports of my death are much exaggerated.”
Nina Schick: —that’s the one. That’s true for America. Everyone is always so keen to predict its demise. And I think the latest iteration of this might be the political polarization. Which exists, it’s a real problem. But if any country, any Western country, can overcome it, it would be America in my view.
Robert Wiblin: I wonder whether India might be another counterexample. Obviously, it’s a very diverse country by typical standards. A huge number of people, different ethnic groups, a reasonable degree of religious diversity as well. And has managed to function as a kind of imperfect democracy for 70 years now, or thereabouts. I suppose I don’t quite know enough to comment about that one. But it’s a little bit intuitively surprising that India has managed to hang together as a democratic, fairly pluralistic society for that long without descending into a greater degree of chaos, at least to me.
Nina Schick: Yeah. I mean, I grew up in South Asia, again between China and India in Nepal, this tiny little country wedged between these two geopolitical giants. And yeah, India prides itself on being the world’s largest democracy, but there are communal and sectarian divides that are very, very real. And in India, just like in America, I think, if identity politics started taking hold in a much more aggressive way, then I don’t see what good can come of that. And unfortunately, I think that in the West, just as in countries like India, increasingly identity politics seem to become quite prevalent in the political discourse. So, I think that slightly depends on how this develops. Obviously with the BJP in power, that has led to a lot of discord in India.
Nina Schick: So, it isn’t fair to paint India as a thriving, very successful democracy with no kind of brutal sectarian violence, because India experiences this all the time. And I think in part, it depends on how the identity politics side of the discourse develops.
Law reforms [01:31:52]
Robert Wiblin: Alright. With the rest of the conversation, I’m very keen to focus on what different actors can do to potentially ameliorate these problems, and maybe what listeners might be able to do if they wanted to tackle this broad problem with their career. I guess first off, what law reforms do you think we need to allow people to control their identity in this new age? Should we make it illegal to make a deepfake of someone without their permission, especially if they’re saying something and endorsing something, and you’re potentially misappropriating their trademark or their intellectual property in some sense?
Nina Schick: I think it’s really hard to answer that question, because I think kind of piecemeal legislation, where it’s just like, “I’m going to outlaw deepfakes” could actually have an adverse impact, right? If you do agree that consent should be a guiding principle in any kind of legislative framework, where you’re talking about using someone’s biometrics synthetically…
Nina Schick: However, there are so many gray areas. So for example, there’s a whole emerging field of political satire using deepfakes. Now, if you’re a political satirist, just like a cartoonist, you’re not going to get the consent of the person who you’re satirizing in order to make your point. So, should that be illegal? Lots of gray areas. And this is why it’s so difficult to think about legislative structures around the use of synthetic media.
Nina Schick: But look, I think it's going to become a really important area, because there are so many commercial incentives at play here as well. I don't know if you've seen, but the synthesizing of your digital persona is going to become a huge business. Especially if you're a celebrity, or a film star, or a sports star: you don't have to do the personal appearances anymore, you can get your AI avatar to do it, and you can just be raking in the money without even having to be there. Without even having to step in front of a camera.
Nina Schick: There was a campaign that came out just last week, run by Lay's and UEFA, featuring Lionel Messi, the footballing legend. The campaign is so clever. It's called Messi Messages, and it's a website where anyone can generate their own synthetic message from Messi to a friend, so it looks like he's speaking to you from his smartphone. So, there is a real commercial interest in figuring out the legalities of this area. And what I suspect is that the entirely new industries that are going to flourish around this will probably lead the way on legislation around consent, rather than activism against some of the really malicious misappropriations of your identity, like in the case of porn.
Robert Wiblin: Yeah. I mean, I haven’t thought about this nearly as much as you have, but I feel like saying that you can’t make synthetic media that is indistinguishable from reality, where people really will be convinced that it’s real, without the consent of the person whose face you’re using…who you’re effectively impersonating… I can’t think of that much that would be really valuable that would really lose out from that. I mean, maybe it won’t really be able to stop the deepfake pornography, because it’s just going to be cross-border. It’s just so hard to police anything on the internet. But it could reduce it somewhat, because at least people who are using it maliciously are more likely to be in the same country as you, and maybe it would be evident who they are because they’re someone who hates you and is out to get you.
Nina Schick: I can see that being a sound guiding principle. The difficulty is less the legal principle itself than the enforcement, right? How do you prevent people from doing it anyway? Especially if they're in a different jurisdiction. I mean, there'll be different rules state to state in the United States — let alone somebody doing it from, I don't know, a different continent. How do you punish them? And what if that person is anonymous? I think the enforcement, just like with all kinds of cybercrime, is more the problem.
Robert Wiblin: I mean, we're never going to be able to stamp it out, or probably even reduce it by more than half. But potentially reducing it by half — and especially, I guess, clamping down on the malicious uses that people direct at people they know, or their competitors, or political enemies — things like that, where they are likely to be in the same country and law enforcement might be practical… It seems like it's at least a start.
Robert Wiblin: And with the satire, I suppose I want to protect people’s ability to do satire. But I don’t think you need it to be indistinguishable from reality. Like the standard would have to be that a viewer can tell that it’s not real, that it is a deepfake, and then it’s permissible. Or at least that’s intuitively plausible to me.
Nina Schick: So, this is why I think there needs to be a broader debate about labeling as well. Perhaps consent isn't always needed, particularly in the case of satire for instance, but that piece of media should absolutely be labeled as being synthetically generated. And this entire area is so new that there is no agreed taxonomy around it. There's no legal structure around it. So there is a very important conversation to be had around how you label synthetic media. Because as we've already touched upon, it's not all going to be used for bad. There are going to be so many valid applications of synthetic media that it's important not to throw the baby out with the bathwater. This technology is coming whether we like it or not, and it's not only going to be used for bad purposes, so we shouldn't take a very reactive legislative approach to it.
Positive uses [01:37:04]
Robert Wiblin: Are there any other positive uses that you want to flag as things that potentially people should work on, because they’re just good uses of this technology?
Nina Schick: Obviously I came at this from the angle of geopolitics and disinformation, but the more I researched it — and the more I’ve been involved in this emerging field for the past four years — the more I’m convinced that this is just a paradigm change in the future of content production, human communication, and human perception.
Nina Schick: Every industry that uses media (and what industry doesn’t?) is going to be touched by the rise of synthetic media. And that’s because AI is going to democratize content creation, it’s going to make it so much cheaper. By the end of the decade, a YouTuber or a TikToker will be able to produce the same kind of content that’s only accessible right now to a Hollywood studio. So, that is going to mean so many opportunities for the creative industries. I mean, for one, entertainment and film are just going to get very good. And you won’t need to be a Hollywood studio to produce some really amazing creative content.
Nina Schick: Another real-world legitimate application of synthetic media is a startup that I really think is doing fantastic work. They're based in London, they're called Synthesia. And they basically use their synthetic media platform to generate corporate communications videos, training videos, and educational videos for their Fortune 500 clients. You don't need to go into a studio anymore and hire actors and get a green screen. You can basically create your communications video as easily as though you're writing an email, and then, on their backend, choose to put that out in 16 different languages with the click of a button, right? So it's going to transform every industry imaginable. Some experts I was talking to think that by the end of the decade, up to 90% of audiovisual content online will be synthetically generated. It's a really punchy stat, but I think the direction of travel is clear.
Robert Wiblin: That's a big forecast for just 10 years out.
Nina Schick: Punchy stat, yeah. But I think that is the direction of travel. And for a real social good example, here's one. There's a company called VocaliD, which is working on synthetic voice generation to give people who have lost the ability to speak (through stroke, cancer, neurodegenerative disease, and so on) their voices back. And those who never had the ability to speak at all can have a synthetic voice. Again, this technology is just an amplifier of human intention. It will be weaponized by bad actors and used for mis- and disinformation, but it's also going to be commercially very relevant, transform entire industries, and be used for good.
Robert Wiblin: I think Keiran will enjoy potentially being able to just write down what he wished I had said and then get the ML to produce a pickup that he can chuck into these episodes without having to bother me and get me to re-record things. Could potentially save him minutes a week.
Nina Schick: That already exists. There's a company called Descript that is already synthesizing voices for podcasters.
Robert Wiblin: Alright. So, it takes samples of my voice and then Keiran can write out whatever he wants me to say?
Nina Schick: It’s on rails now, in the sense that they’re building it as a service for podcasters. So, I can’t just go on there and take an audio clip of you and then use their software to synthesize your voice. You have to opt in, and get your producer to do that and so on. But obviously there’s like a user experience crunch point there if you need to get consent from everyone to do anything. So, it’ll be interesting to see how that develops. But that as a service for you as a podcaster already exists.
Robert Wiblin: Listeners, if my intro ever sounds a little bit artificial, maybe it’s because I procrastinated too long on recording it, and Keiran just got sick of it and threw it into Descript instead.
Technical solutions [01:40:56]
Robert Wiblin: Are there any things that you would like tech companies to potentially do?
Nina Schick: I mean more broadly, of course they have an epic responsibility, because they are these new forms of power in this information ecosystem, and they're not really answerable to anyone apart from their shareholders. And I suppose not all tech companies are built equally, but there are lots of ethical concerns. And I find some of the social media platforms' positions, Facebook's in particular, quite problematic — specifically when it comes to deepfakes and disinformation. There is a tendency to do the bare minimum, to say, "Well, we're working on this problem," without really working on it.
Nina Schick: Zuckerberg was testifying in front of Congress recently, and he basically said, "We don't allow political misinformation in our ads." That was just categorically untrue, although I'm sure he has a very convoluted explanation as to why he thinks it's true. And this matters because these companies have the money and the resources to really help build some of the technical solutions, yet they are only paying lip service to this.
Nina Schick: So, Facebook launched, for example, the Deepfake Detection Challenge last year. They offered a prize of $500,000 to the researchers who could come up with the best deepfake detector, and then they released training data to help everyone who wanted to get involved in the challenge.
Nina Schick: But the model that won claimed something like 90% efficacy, and that's only against deepfakes similar to those in that training dataset. If somebody really had the resources, including the brilliant minds, the deep learning engineers, the money, the R&D budget that's necessary to really look into this, it would be a company like Facebook or Twitter or Google. They're engaged to the extent that they have to be, but they could do a lot more.
Robert Wiblin: In the book you talk about the beginnings of a technical solution where you might have some kind of chip installed in cameras or in video recorders, that will certify somehow that the video was taken at a particular time and location, or perhaps it will sign it somehow with the identity of the person who owns the phone, certifying that this was taken by that person and they vouch that is real. Can you talk about that general technical approach to stewardship of photos and videos and audio, that could be used to demonstrate to a newspaper that it’s real, or at least that one person is claiming that it’s real?
Nina Schick: Look, I think the real long-term proactive solution, if the problem is that we have this information ecosystem where we don’t know what’s authentic or synthetic, we don’t know what to trust anymore, it’s the wild west, anything goes… If you diagnose the problem as such, then the only way to remedy this is to actually build a safer information ecosystem with the technical solutions right in the architecture of that ecosystem. An alternative kind of trusted information ecosystem.
Nina Schick: And the way to do that is by authenticating real media. You can do that in multiple ways. You can have the technology implanted into the hardware of your device, so that if you're a journalist, or just anybody, and you take a piece of media, the metadata for that media stays with it for the rest of its life. So you can always tell where this image or this video came from. You don't have to, as a journalist, argue about whether your video is real or not, because the technology can prove exactly where it came from.
Nina Schick: And that concept of media authentication, or media provenance, is something that can also now be implemented via software. And interestingly, there is actually an international coalition pushing for a global standard for media provenance. It's led by Adobe, and it's called the Content Authenticity Initiative. There are brilliant partners involved: Truepic, a startup which authenticates images at capture, and Qualcomm, the chip manufacturer. They've been around for about 18 months, and in that time they've already launched the prototype for a device that will basically authenticate media at point of capture.
Nina Schick: And they have a really sophisticated roadmap for how you build this authenticated or trusted information ecosystem. I know the people involved, and ultimately, as we saw during the pandemic, if you cannot verify the authenticity of digital transactions — not just media — it's really difficult to do any kind of business at all, right? How does e-commerce work? How does anything work? So, you're going to have to have some basis of authentication, not only for all media, but for all digital transactions. And I think that's the way to go.
Nina Schick: I think the bigger challenge is not necessarily the technical challenge, because if you look at what the Content Authenticity Initiative has already done in its relatively short life, it’s really impressive, including the technology that they’ve built. And I’m sure they’ll build many more technical solutions. The problem is wide-scale adoption, and engagement from not only legislators and policy makers, but also some of the big tech companies. It’s interesting to see which tech companies are involved. Like Microsoft is involved, Twitter is involved. I don’t know to what extent, but like Facebook notably isn’t. So, I think the engagement and setting the global standard, that’s actually a harder challenge than building the technical tools.
Robert Wiblin: Interesting. I haven’t thought about this that long, but it seems like there’s something very promising there. So for example, how do people know that I’ve written something? Probably it’s because I guess it’s on a domain that we control like 80000Hours.org, or it’s on my personal website robwiblin.com and they think that only I have the password to put stuff on there, or it’s on my Twitter account and they’re like, “This sure seems like Rob, it has been for many years. So probably this new tweet is also written by him.” And it seems like maybe we need some more comprehensive system like that, where you trust people and they have some way of demonstrating that at least they are claiming that they took this video, or that they wrote this thing, or that this is a photo that I believe is true.
Robert Wiblin: And then if you trust that person, you trust that source, then you think, well, probably this is legit as well. And at the moment, we have that for some things like I have a Twitter feed and if I post on there, probably — unless my password was stolen — it was legit. But it seems like we need that in a more comprehensive way, in a way of posting all kinds of different things and having archives of that all certified, I guess, maybe using some kind of cryptographic signature.
Nina Schick: Exactly.
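To make that concrete: what Rob is describing amounts to signing a hash of the media, plus some capture metadata, with a private key, so that anyone holding the matching public key can later check both that the file is unchanged and that a particular device or person vouched for it. Here is a minimal sketch in Python using the widely available `cryptography` package. It is only an illustration of the general idea, not the actual C2PA or Content Authenticity Initiative design, and the manifest fields and device ID are hypothetical.

```python
# Toy sketch of capture-time media authentication: sign a hash of the
# media plus capture metadata with a device-held private key, so the
# provenance claim travels with the file and can be checked later.
# Illustrative only -- not the real C2PA design. Needs `pip install cryptography`.
import hashlib
import json
import time

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# On a real device this key would live in a secure chip at manufacture;
# the matching public key would be published for verifiers.
device_key = Ed25519PrivateKey.generate()
device_public_key = device_key.public_key()


def sign_at_capture(media: bytes) -> dict:
    """Produce a signed provenance record for a freshly captured file."""
    manifest = {
        "sha256": hashlib.sha256(media).hexdigest(),
        "captured_at": int(time.time()),
        "device_id": "example-camera-001",  # hypothetical identifier
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    return {"manifest": manifest, "signature": device_key.sign(payload)}


def verify_provenance(media: bytes, record: dict) -> bool:
    """True only if the media is unmodified and the manifest was really signed."""
    if hashlib.sha256(media).hexdigest() != record["manifest"]["sha256"]:
        return False  # the file was edited after capture
    payload = json.dumps(record["manifest"], sort_keys=True).encode()
    try:
        device_public_key.verify(record["signature"], payload)
        return True
    except InvalidSignature:
        return False


video = b"...raw media bytes..."
record = sign_at_capture(video)
print(verify_provenance(video, record))         # True: untouched media
print(verify_provenance(video + b"x", record))  # False: tampered media
```

The cryptography here is the easy part; as Nina noted earlier, the hard problems are wide-scale adoption: key distribution, hardware buy-in, and getting platforms to actually surface the verification result.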
Robert Wiblin: Interesting. Cool. If that is the most promising way, I'm curious to know whether you know of organizations or universities or academics working on this that listeners — if they're interested in working on that, or I guess other solutions to misinformation and synthetic media — could potentially reach out to.
Nina Schick: On the solution side, the Content Authenticity Initiative has actually just helped put together an alliance, the C2PA (Coalition for Content Provenance and Authenticity), which is the leading organization on content authenticity. So, I'd definitely encourage you to look them up. In terms of actual synthetic media generation, there are over 150 startups that have popped up in this space, and we discussed a few of them today: VocaliD, Descript, Synthesia. They're so fascinating. So, if you're interested in the synthetic future, not necessarily from a disinformation perspective but more from a "How is this going to change the future of content production?" angle, there are tons of startups doing really exciting things that need clever people to think about the ethical ways to apply this technology.
Nina Schick: From a disinformation and human rights angle in particular, I would really encourage you to look up the work of Witness. What they're doing is really fascinating. And on the detection side, which is also a huge technical challenge and has to work in conjunction with media authentication and provenance, Sensity is an AI startup that has been working on detection solutions since 2018. And I think increasingly you'll start to see cybersecurity companies in this space as well. If you're thinking about protecting your business, or an enterprise solution to protect your brand or your company from what could be very costly disinformation (not only deepfakes, but any way that your company can be hurt by disinformation), I think there are increasingly going to be cybersecurity offerings here too.
Robert Wiblin: In the immediate term, do you think that we maybe need more stunts to alert a wider range of people, including people who aren't normally drawn to these issues, to the fact that deepfake technology is where it is? So, have videos of very famous celebrities, who a wide range of people might be interested in seeing, saying or doing outrageous things, and then maybe have the video break apart so it's very demonstrably not real? Because obviously people like you and I are aware that we can be misled in this way, but there's probably a whole tail of people who are not aware at all that this is going on.
Nina Schick: Totally. Inoculation is key. It's just part of digital literacy and "education." I hate using that word about people (we need to "educate" citizens), which is why I put it in quotation marks. So that is key, right? And one really clever way to do it is by allowing people to play with synthetic media in a controlled way that isn't harmful.
Nina Schick: Dessa, the AI company, did something a few years ago which was great. They basically synthesized Joe Rogan’s voice. And everybody recognizes Joe Rogan’s voice. And then they created this website called Faux Rogan, where you could guess like, “Is it real Rogan or is it Faux Rogan?” It was like a quiz.
Nina Schick: It was just an interesting project that the deep learning engineers were put on as a technical challenge: Can you synthesize this voice? And again, this was quite a few years ago. So, it took them hours and hours of training data. And then they put some videos on YouTube of Joe Rogan saying crazy things like, “Oh, maybe the singularity is near.” Or whatever. Like, “These guys have trapped me in an algorithm.” And that got millions and millions of views and plays.
Nina Schick: So, that is without a doubt… Inoculation is part of the solution. However, it comes with a catch. It’s a little bit of a catch-22. And that is going back again to this liar’s dividend thing. Ironically, the more you “educate” people about deepfakery and synthetic media and its potential misuses, the more people start to become cynical or critical of all media, including authentic media. So, you need to do it, but…
Robert Wiblin: It sucks.
Nina Schick: Yeah. On the other hand, people just might say, "Everything's fake. I don't believe that." The number of times I've had people message me or DM me on Twitter because they see a video that they don't like of, I don't know, Donald Trump doing something or whatever, something that supports their own cognitive bias, and they send it to me saying, "This is a deepfake." And mostly it's not. So, deepfake videos, considered in relation to all the other media that exists online, are still a tiny, tiny fraction. That's not to say that synthetic media won't become ubiquitous in the end though.
Careers [01:52:30]
Robert Wiblin: Do you have any general comments for people in the audience who are interested in using their career to improve the information ecosystem that we have in general? Are there any opportunities or ways of improving things that people should look into if they’re trying to plan out their career or figure out where to make a difference?
Nina Schick: I think just in general, if you're listening to this podcast, if you're engaging with Rob and 80,000 Hours, you're going to be exactly the intelligent type of person who can dedicate your bandwidth to thinking about some of these issues. This is just one example of how quickly exponential, tech-led change is coming, and how quickly society and politics and everything we know about them are being transformed. It's an arguable point, and I'd be interested in your views on this, but you could make a strong case that in our lifetime we're going to see more technology-driven change and disruption than the entirety of humanity has known before us. The entirety of human experience has not been as disrupted as it will be in our lifetime.
Nina Schick: So, I think that an understanding of the massive scale of the paradigm change that is underway is desperately needed as a conceptual framework: seeing the bigger picture, not the reactive "We need to do this," or "We need to introduce a law banning all deepfakes, and technology is bad, and all social media companies must shut down." That is what we don't need. We need more proactive, long-term thinking. And we need thinking that is also networked.
Nina Schick: So, we need people who are brilliant engineers or data scientists to be able to talk to policy makers, to be able to talk to communications experts. That’s the only way. In a way it’s quite analogous to the problem of climate change. The only way you build a solution is by taking a networked approach. And you need many different people from many different disciplines to look at it from their perspective.
Robert Wiblin: Are there any think tanks with policy or legal research programs in this area that are worth looking out for?
Nina Schick: Yeah. There are loads of programs at universities and think tanks that are looking at misinformation from a broader perspective. I don't have a list off the top of my head, but certainly there are many academic institutes and think tanks looking into this, and a quick Google can reveal them.
Robert Wiblin: Yeah. We’ll try to find out some programs like that and stick up links in the blog post attached to the episode.
Robert Wiblin: I know you’ve got a meeting to go off to in just a minute, but — I guess, in this conversation, it’s been a little bit biased towards doom and gloom, and things being terrible, as is our wont at 80,000 Hours, trying to worry about the world’s most pressing problems. We don’t get to spend as much time talking about the world’s most amazing, wonderful opportunities. Maybe we should consider that a little bit more than we do, perhaps that’s a problem with the framing that we have. But I’m curious to know, are there any deepfake or misinformation stories that you think are just legitimately very funny or perhaps heartwarming? Is there any positive side to this that’s entertaining?
Nina Schick: Yeah. So first of all, I think your angle is not wrong, because we live in this age of transformation, and it can be anxiety inducing. And probably the best way to get people to engage is by putting the fear in them; certainly that's true when it comes to policy makers. But it is also a really exciting time to be alive. There are so many opportunities, because everything is just going to be done differently, right? All these legacy institutions and legacy ways of thinking, which developed over decades and centuries for the analog age: all of those rules are being rewritten and scrapped.
Nina Schick: So yeah, fine. I know it's got a bad name now, but "move fast and break things" is certainly true in the sense that there are many opportunities. I think what we have to do going forward is think about the ethical implications of this technology. Because I don't think it's the technology itself that is bad; it's neutral, right? It's just this amplifier of human intention.
Nina Schick: But given what we know about our experiment with the internet, and how that didn’t turn out to be the utopian dream that we’d hoped for, that should give us some guiding principles on how to manage the age of synthetic media. And I think ultimately, it’s going to take some time. Because if you look at the history of human communication and the technologies that have transformed it, it does take time for society to catch up, right? Like you had the invention of the modern printing press, which led to the Reformation, which changed the course of world history. And then the next big evolution or technological discovery when it came to communication was arguably the invention of modern photography. But there were 400 years between the printing press and photography.
Nina Schick: Now, we're talking about something that has happened in the last 30 years: we got the internet, the smartphone, social media, and now we're entering the age of synthetic media. The pace of change is just so difficult for society to keep up with. So yeah, I think there are loads of exciting opportunities, but we need to try, if we can, to think about the ethical considerations before we build things, not after the fact. And to be honest with you, there are many people who are working on that now. So, loads of brilliant minds are coming together in that space, which is really exciting and encouraging.
Robert Wiblin: Yeah. I slightly have the picture that it’s like humanity is skiing down a very steep slope at breakneck speed, and there’s trees everywhere. And we have to ski around them and see how things can go wrong. But then hopefully if we can make it to the end of the slope, then we’ll have a more sensible society and much better technology by which to make life better.
Robert Wiblin: But yeah, it’s a little doom and gloom, but as long as we can stay on a decent track, then we should have much better lives than we do. Just as we have much better lives than people in the 18th century did, by and large.
Robert Wiblin: My guest today has been Nina Schick. Thanks so much for coming on the 80,000 Hours podcast, Nina.
Nina Schick: Thanks for having me, Rob.
Rob’s outro [01:58:27]
If you’re interested in working on novel ways to do the most good possible then you might be able to find the right opportunity to do that at our job board.
It lists a wide range of jobs that help you work on pressing global problems or build the skills necessary to do so in future.
As I write this, it has a total of 462 current vacancies listed, including 45 on ‘other’ problem areas, which include roles working to combat disinformation, among many others.
It also lists 195 policy roles, 15 assistant roles, 91 management roles, 348 for people with an undergraduate degree, and 66 roles for people with 5 or more years of experience in an area.
So go take a look and use the filters to narrow down to the jobs of greatest interest to you.
You can find all that at 80000hours.org/jobs.
Finally, just wanted to highlight one role in particular:
The Council on Strategic Risks is calling for applications for its Fellowship for Ending Bioweapons. It runs for 1 year, and you'll work with leading experts committed to biological threat reduction, including former guest of the show Andy Weber — who helped dismantle Cold War-era bioweapons programs.
At the end of that year, you should have deep knowledge of what it will take to end bioweapons programs and a strong network among biosecurity and biotechnology experts.
Applications are due by 5pm Eastern Standard Time on April 7, 2021, and you can find out more at councilonstrategicrisks.org – we'll include a link in the blog post associated with this episode, as well as in #93 – Andy Weber on rendering bioweapons obsolete & ending the new nuclear arms race.
The 80,000 Hours Podcast is produced by Keiran Harris.
Audio mastering by Ben Cordell.
Full transcripts are available on our site and made by Sofia Davis-Fogel.
Thanks for joining, talk to you again soon.
Related episodes