What is Synthetic Media?
History of Synthetic Media
“Synthetic media” is a broad term for media that is created or manipulated by algorithms and artificial intelligence. While we will focus on a specific type of synthetic media called “deepfakes,” it is important to note that synthetic media covers a variety of media types, such as “text to speech generation, music synthesis, image synthesis, and more” [1].
Synthetic media as we know it took off in the 1950s with generative art: art generated “by using autonomous machines or algorithms” [2]. One of the first recognized examples was the Illiac Suite of 1957, a string quartet composition by Lejaren Hiller and Leonard Isaacson of the University of Illinois at Urbana-Champaign [3]. In 1958, the world saw the “Perceptron,” the first functional artificial neural network, created by Frank Rosenblatt [4]. The rise of artificial neural networks paved the way for more advanced synthetic media: computers could now “learn,” allowing them to synthesize and create media much like a human would.
Illiac Suite 1/4
Some Types of Synthetic Media
Now that we understand where synthetic media came from and what has made it possible, let’s briefly touch on some of the most common types before getting into the heart of this website’s topic...
Music from AI-generated Sounds
We learned a little about how music can be generated, but machines can create just about any sound you can imagine through a process called “audio synthesis.” In this process, an algorithm is trained to generate and manipulate waveforms to reproduce sounds. Audio synthesis is commonly used to recreate sound effects for movies, or even to invent sounds that do not exist.
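At its simplest, synthesizing audio just means computing a waveform sample by sample. The sketch below is a toy, hand-written illustration (no machine learning involved, and the function name is my own invention): it builds a pure sine tone, then layers two tones into a sound that never came from any microphone.

```python
import math

def synthesize_tone(freq_hz, duration_s, sample_rate=8000, amplitude=0.5):
    """Compute a sine waveform as a list of float samples in [-amplitude, amplitude]."""
    n_samples = int(duration_s * sample_rate)
    return [amplitude * math.sin(2 * math.pi * freq_hz * t / sample_rate)
            for t in range(n_samples)]

# Layer two tones sample by sample to "invent" a sound with no acoustic source.
tone = [a + b for a, b in zip(synthesize_tone(440, 0.1),
                              synthesize_tone(660, 0.1))]
```

A trained audio-synthesis model does essentially the same thing, except the rule that produces each sample is learned from data rather than written by hand.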
Audio synthesis can go even further and become “speech synthesis.” You have likely used a text-to-speech system like the one in Google Translate, where you type a string of words and hear what they sound like in real time. These algorithms are trained on very large databases of human voice recordings and on “phonemes,” “the smallest unit of speech distinguishing one word (or word element) from another” [5]. A text-to-speech system takes in text, assembles the phonemes that form the correct sounds of the phrase, and then uses its training on human voice recordings to accurately mimic the cadence and pronunciation of a human voice [6].
Example from ResearchGate
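The first stage of that pipeline, looking up each word's phonemes, can be sketched in a few lines of Python. This is a toy illustration, not a real synthesizer: the tiny lexicon and function name are my own, and a real system would use a large pronunciation dictionary plus letter-to-sound rules, then concatenate or generate actual audio for each phoneme rather than joining text labels.

```python
# Toy pronunciation lexicon; real systems use dictionaries with 100,000+ entries.
LEXICON = {
    "hello": ["HH", "AH", "L", "OW"],
    "world": ["W", "ER", "L", "D"],
}

def text_to_phonemes(text):
    """Map input text to a flat sequence of phoneme labels."""
    phonemes = []
    for word in text.lower().split():
        # Real systems fall back to letter-to-sound rules for unknown words.
        phonemes.extend(LEXICON.get(word, []))
    return phonemes

# A concatenative synthesizer would stitch a recorded clip per phoneme;
# here we just join the labels to show the pipeline's shape.
print("-".join(text_to_phonemes("hello world")))  # → HH-AH-L-OW-W-ER-L-D
```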
The last type of synthetic media I would like to share is “image synthesis.” While it has many subfields, the core process is an algorithm creating “new images from some form of image description” [7]. This is commonly seen in the subfield of “human image synthesis,” where “photo realistic renditions” of humans are created [8].
What are "Deepfakes?"
History of Deepfakes
Now that we have a better understanding of what synthetic media is, we can go a step further and learn about the technological advancement that has been taking the world by storm: “deepfakes.” The term “deepfake” combines “deep learning” and “fake,” and describes media, generally photos or videos, manipulated by an algorithm to depict something that never actually happened. While deepfakes are sometimes used in popular culture for entertainment, they are often used to cause harm and spread disinformation.
Tom Cruise Deepfake
In 1997, Christoph Bregler, Michele Covell, and Malcolm Slaney created Video Rewrite, a program that took video of a person speaking, reanimated the face, and replaced the existing audio track to make it look like the person was saying something completely different [9]. While this was the world’s first look at deepfake technology, deepfakes did not become widely popular until 2017.
What is Disinformation?
Disinformation is information that is spread to deliberately mislead others or manipulate narratives [10]. The key difference between disinformation and misinformation is intent: disinformation is spread on purpose. This distinction is central to deepfakes, which are often created for harmful and manipulative purposes.
Deepfakes in Popular Culture
Before we get too deep into the negatives of deepfake technology, I want to give some understanding of how deepfakes have been used in more “positive” situations, specifically in movies, memes, and parodies.
Movies
If you have seen Furious 7, you may remember seeing Paul Walker on screen despite his death occurring two years prior to the film's release. Deepfake technology has been getting attention in the film industry recently because of the possibilities it brings. The technology can be used, for example, to “bring back” actors who pass away during the filming of a movie. There is also talk of using deepfakes to de-age actors; in fact, Disney deliberated using this technology to “create young Luke Skywalker” in The Mandalorian [11]. CGI and VFX have long made these kinds of effects possible, but deepfake technology can make them even better.
Memes & Parodies
Obama Sings Baka Mitai
The “positive” sides of the technology do not stop there. Deepfakes have been wildly popular for memes and parodies. From animating the Mona Lisa to making President Obama sing songs, people with access to the technology have created some truly wacky things. Typically, deepfakes made for memes and parodies are easy to identify: they are usually made with an online AI service that takes a picture and spits out a reanimated video. While this content is made for fun and entertainment, the accessibility of these tools is also proof that the technology can cause harm.
Deepfake Pornography
*Slight NSFW Content Warning*
Brief History
I wanted to give this topic its own section, as it is one of the most prevalent issues plaguing our society today regarding deepfake technology. As we learned previously, deepfakes are often made for harmful reasons. Probably the most common use of deepfake technology is deepfake pornography: pornography made by taking existing pornographic videos or photos and swapping the faces of the adult actors/actresses with those of other people (typically celebrities, public figures, or even people you know). In fact, a 2019 report by Deeptrace (now known as Sensity) claimed that of the nearly 15,000 deepfake videos online at the time, 96% were pornographic in nature [12].
The Problem
Deepfake pornography is undoubtedly a problem. For one, deepfaked pornographic videos are exactly that: fake! The spread of these altered videos can seriously tarnish someone’s reputation, resulting in “loss of employment, harassment, social isolation, and threats or acts of violence” [13]. Beyond this societal harm, deepfake pornography is widely regarded as a form of sexual abuse, because the people digitally placed into these photos and videos are unaware of it and therefore cannot consent. While deepfakes can affect just about anyone, deepfake pornography targets and affects women at an astronomically higher rate than men.
On top of affecting high-profile individuals, deepfake porn is also being made of ordinary people. Deepfaked “revenge porn,” as it is most commonly called, is content made, usually by angry exes, to get revenge on the victims. This content is, again, non-consensual and can be deeply damaging to those affected.
Believe it or not, in 2019 there was an app called “DeepNude” that allowed users to upload a photo of a woman and receive a deepfaked nude photograph generated by AI [14]. Of course, this app did not operate on consent. When asked whether the technology was harmful, the creator went so far as to say, “So if someone has bad intentions, having DeepNude doesn’t change much...” [14]. This is one of the many ways deepfake technology has disproportionately affected women.
Real output from the "DeepNude" app.
Legality of Deepfaked Porn
Despite public outcry and demands for harsh punishments for deepfake pornography, it has historically been rare for victims to receive meaningful help. In the past few years, things have begun to change. For example, the 2021 U.S. National Defense Authorization Act requires Homeland Security to “issue an annual report for the next five years on deepfakes” and to “study deepfake creation technology and possible detection and mitigation solutions” [15], in part because of the explosion of deepfake pornography. Prior to these provisions, Virginia banned deepfake pornography in 2019, and the following year New York banned deepfake pornography created without the consent of the subject [16].
The Dangers of Deepfakes
Political Dangers
Donald Trump reposts Nancy Pelosi Deepfake
Lately, the general public has grown concerned about how deepfaked content can affect a country politically, and with the possibilities deepfakes provide, this concern makes sense. In 2019, a manipulated video circulated of Nancy Pelosi appearing to give a speech while drunk. Although the video was proven fake, that did not stop millions of people from viewing it. In fact, Donald Trump posted the video to Twitter with the caption “PELOSI STAMMERS THROUGH NEWS CONFERENCE” [17].
Some say this technology even has the potential to sway elections. Imagine a deepfaked video of a presidential candidate saying something problematic. Now imagine thousands of people viewing that video, never realizing it is fake, and refusing to vote for the candidate because of it. Even worse, imagine a real video of a candidate doing something problematic, and people being convinced it is fake!
Fraud
We know how deepfakes can be used to spread disinformation, but did you know that some people use deepfake technology to carry out fraud? In 2020, a company director’s voice was cloned using AI and used to authorize the transfer of $35 million from a bank in Hong Kong [17]. Jake Moore, a cybersecurity expert at ESET, says, “Manipulating audio...is only going to increase in volume and without the education and awareness of this new type of attack...more businesses are likely to fall victim…” [18]. Audio is typically much easier to deepfake than video, so these attacks, while uncommon at the moment, may one day be far more prominent. Imagine receiving a call and hearing a relative’s voice saying they need to borrow money. Such attacks will probably target businesses, but anyone could be a victim.
Harassment, Bullying, & Extortion
We know how deepfakes threaten politics and privacy, but how else can they harm us? Due to the accessibility of deepfake technology today, it is quite possible that you, or someone you know, could become a victim yourself. As deepfakes become more and more realistic, we are beginning to see cases where deepfake content is generated and used to blackmail and frame people. For example, in early 2021, a mother allegedly attempted to frame some of her daughter’s cheerleading teammates with deepfakes; in the photos and videos, the teenage girls were “naked, drinking, and smoking” [19].
Raffaela Spone charged with harassment and cyberharassment
Staying Safe
With the growing numbers of deepfake media coming out into the world, it is important for people to at least be aware of their existence. More importantly, people should know how to protect themselves against being deceived or becoming a victim.
Avoiding Deception
How to Spot a Deepfake
It may seem cliché, but you must not believe everything you see on the internet. Deepfakes are getting good… really good. Luckily, there are things you can look for to spot them. According to Norton, unnatural or awkward eye movement, facial expressions, body movement, or body parts are typically the first giveaways [20]. You may also notice inconsistencies such as “poor lip-syncing” and “robotic-sounding voices” [20]. If you still cannot tell, be skeptical: search for the content elsewhere or consult fact-checking sites.
Avoiding Being a Victim
The last thing you want is to become a victim yourself. Deepfakes are being used for extremely malicious purposes, and becoming a victim could hurt your reputation or worse. In most cases, if someone wants to target you, it may be hard to prevent. That said, there are some things you can do. According to Lexology, the best you can do is to “Stop posting photos of yourself or your family on unprotected social media accounts” [20]. This is a general internet safety rule, but it especially applies here, since deepfakes rely on some form of existing content. If you ever come across deepfaked content of yourself, “collect evidence of the images,” “report [the] images to the platforms,” and speak to legal professionals and the police [21]. Do what you can to stay safe. It is unlikely that you will become a victim, but it is important to be prepared.