When Tech Goes Bad: Revenge Porn Explodes with Deepfakes


With far more tools available to manipulate photos than to detect the manipulation, worry about the harmful side of deepfakes keeps growing.

Deepfake technology (a portmanteau of "deep learning" and "fake," per Ethics in Engineering and The Rise of Deepfakes) grew out of the movie industry's need to place an actor into a scene, such as the shots that recreated the late actor Paul Walker in Fast & Furious 7. Nowadays, anyone with cheap, largely automatic computer-graphics software and a YouTube tutorial can do the same. Sadly, the technology is most often used to target women with pornographic results.

Deepfakes use deep learning to create fake media. The creator must first feed a neural network enough images of the target for it to learn what the target looks like from different angles and under different lighting. The software then uses a generative adversarial network (GAN) to swap a face or body in a source video or image with the target's. The GAN works through large amounts of data to render the replacement convincingly, matching the pose and motion of the host footage. The algorithm generates candidate after candidate until it settles on the most convincing result, and the creator can then manually touch up any remaining artifacts.
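For readers curious what that adversarial training actually looks like, here is a minimal sketch in PyTorch. The tiny fully connected networks and synthetic stand-in data are our own illustrative assumptions; production deepfake tools use large convolutional models, but the generator-versus-discriminator loop is the same idea.

```python
# A minimal sketch of the adversarial loop at the heart of any GAN, assuming
# PyTorch. The tiny fully connected networks and synthetic "real" data are
# illustrative stand-ins, not the architecture of any deepfake tool.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64

# Generator: maps random noise to a fake sample.
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
# Discriminator: scores how "real" a sample looks.
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(32, data_dim) + 2.0  # stand-in for real training images
    fake = G(torch.randn(32, latent_dim))

    # 1) Train the discriminator to tell real from fake.
    d_loss = loss_fn(D(real), torch.ones(32, 1)) + \
             loss_fn(D(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator.
    g_loss = loss_fn(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Each pass pits the two networks against each other: the discriminator gets better at spotting fakes, which forces the generator to produce more convincing ones.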

Tech and social media platforms are struggling to track misinformation and manipulated content, let alone take it down as soon as it appears. Because deepfakes are so easy to produce, fake media is plentiful online: Sensity counted 14,678 deepfake videos online in July 2019, and the number has since risen to over 50,000. Most of these videos were pornographic and targeted women.

Herein lies the ethical issue: Should sellers of deepfake technology be accountable for the damage their products cause?

DeepNude: The Rise of Software That Erases Women's Autonomy and Consent

On June 23, 2019, programmers launched DeepNude, a deep-learning-powered app for Windows and Linux. DeepNude let users upload a photo of a clothed woman and, for $50, receive a deepfake of her undressed in return. The website received 500,000 visitors and 95,000 downloads in about a week. Following a Vice article, the owners took the website down amid the backlash.

The software did not require much technical expertise and worked similarly to other pix2pix- and GAN-based deepfake software; however, it was much simpler and cheaper to use.

Pix2pix is a conditional GAN that pairs a generator network with a discriminator network. DeepNude's GAN was trained on more than 10,000 nude photos of women. The generator learns to translate input images into deepfakes while the discriminator judges whether each image is real or computer-generated; over time, the system improves against itself.
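What makes the GAN "conditional" is that the discriminator scores the input and output together, so the generator is rewarded only for translations that are plausible for that specific input. The sketch below is our own simplified illustration in PyTorch, not DeepNude's code; real pix2pix uses a U-Net generator, a PatchGAN discriminator and an L1 reconstruction term.

```python
# A simplified pix2pix-style conditional GAN loss, assuming PyTorch.
# The shallow conv stacks and random stand-in images are illustrative only.
import torch
import torch.nn as nn

# Generator: translates a 3-channel input image into a 3-channel output.
G = nn.Sequential(
    nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
)
# Discriminator: sees input and output concatenated (6 channels), so it
# judges whether the pair is a plausible translation.
D = nn.Sequential(
    nn.Conv2d(6, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 1, 4, stride=2, padding=1),
)

loss_fn = nn.BCEWithLogitsLoss()
x = torch.randn(8, 3, 64, 64)       # input images (random stand-ins)
y_real = torch.randn(8, 3, 64, 64)  # ground-truth translations

y_fake = G(x)

# Discriminator loss: real pairs should score 1, fake pairs 0.
d_real = D(torch.cat([x, y_real], dim=1))
d_fake = D(torch.cat([x, y_fake.detach()], dim=1))
d_loss = loss_fn(d_real, torch.ones_like(d_real)) + \
         loss_fn(d_fake, torch.zeros_like(d_fake))

# Generator loss: fool the discriminator, plus an L1 term pulling the
# output toward the ground truth (the "conditional" supervision).
g_adv = D(torch.cat([x, y_fake], dim=1))
g_loss = loss_fn(g_adv, torch.ones_like(g_adv)) + \
         100 * nn.functional.l1_loss(y_fake, y_real)
```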

Because of the data the software was trained on, it did not work on images of men; given one, it would simply replace the man's body with a female one.

The free version of the software covered the output in a large watermark, while the paid versions carried only a "FAKE" stamp in the upper-left corner, which was easy enough to crop out.

In their final statement, the creators wrote, "surely some copies of DeepNude will be shared on the web, but we don't want to be the ones to sell it." On July 19, 2019, the creators sold the license to an anonymous buyer for $30,000. The software was then reverse engineered and can now be found on torrenting websites.

Telegram: The Current Generation of Victimizing Algorithms and Apps 

An open-source version of DeepNude likely inspired the new Telegram bot, which is connected to seven Telegram channels with a combined total of over 100,000 members. The image-sharing pages include interfaces that people can use to post and judge their nude creations. The more likes a photo receives, the more its creator is rewarded with tokens for access to the community. The pages are easily discoverable via a quick search as well as on social media, which is why their membership has grown steadily over the last year.

There has been significant growth in the number of images shared on the “image collections” channel since its creation. (Image courtesy of Sensity.)

As previously mentioned, the software uses GANs to strip images of clothed women by synthetically replacing the body with a bare one. The latest version uses pix2pix GANs to identify the clothing to remove, mark the points that need editing and synthesize those body parts.

Normally, the software would require a computer with a graphics processing unit (GPU), a specialized processor originally designed to accelerate graphics rendering. However, the Telegram bot is powered by cloud servers, which removes the processing restriction for users so that anyone with a smartphone can make deepfakes.

Telegram users can upload a source image using the standard instant messaging app. The bot completes the process behind the scenes and delivers the image to the user, which they can download or forward within the app in minutes. Again, the bot only performs the process successfully on pictures of women.

The app is free, but people can pay to remove watermarks and skip the queue, which can cost as little as $1.50.

In an expanded set of frequently asked questions, Telegram says it processes requests to take down illegal public content, but Telegram chats and group chats are private. While the company will not police images shared in private chats, it can take down specific public pages and bots.

VK, cross-promotion and advertising played a significant role in attracting new users. (Image courtesy of Sensity.)

Telegram has been banned in several countries, including Russia, China and Iran, but users bypass the bans by connecting through a VPN, such as one routed via Germany, or by adjusting the app's settings. The bot has also made its way to VK, the largest social media platform in Russia.

Automatic Image Abuse: A Study

According to Sensity, 104,852 women had had their photos manipulated and shared via online DeepNude software as of July 2020, a number that grew by 198 percent over the preceding three months. Around 70 percent of the women were private individuals, and a small number appeared to be underage.

An estimate of the total number of individuals targeted since the creation of this community; the actual number is likely much higher, since many images have not been shared publicly. (Image courtesy of Sensity.)

It appears that many users pulled photos from social media or private sources to submit to the bot, then shared the results on private or public channels on Telegram, VK and other apps. The manipulated content was used in public-shaming and extortion attacks. About 60 percent of users submitted photos of women they knew, while only 16 percent targeted celebrities.

The bot and affiliated channels currently have more than 100,000 members, with the majority of users in Russia and ex-USSR countries. Thanks to advertising on the Russian social media website VK, more than 380 pages there share manipulated content and steer new users to the bot. Many of these pages also offer similar automated bots with identical user interfaces and payment schemes.

A user poll posted on the ecosystem’s “central hub” channel explicitly indicated that the majority of users wanted to target private individuals. (Image courtesy of Sensity.)

A Brighter Future: Putting an End to Malicious Manipulated Media

In 2019, a study from the American Psychological Association found that one in 12 women is a victim of revenge porn. With the rise of DeepNude software, many expect this number to increase.

Organizations and public agencies are racing to stem the tide. Research labs have developed ways to identify manipulated media, and detector software such as Reality Defender and Deeptrace aims to work through an application programming interface (API) as a hybrid antivirus/spam filter: the software prescreens incoming media and redirects suspected fakes to a quarantine zone.
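In practice, such a prescreening filter might look something like the sketch below. The endpoint URL, JSON field and score threshold are hypothetical stand-ins of our own; neither Reality Defender nor Deeptrace publishes this exact interface.

```python
# Hypothetical sketch of an antivirus-style deepfake prescreen, assuming a
# detection service that accepts an image upload and returns a fake-probability
# score. The URL, JSON field and threshold below are invented for illustration.
import shutil
from pathlib import Path

import requests

DETECTOR_URL = "https://api.example-detector.com/v1/score"  # hypothetical
QUARANTINE = Path("quarantine")
THRESHOLD = 0.8  # quarantine media scored as more than 80% likely manipulated

def prescreen(path: Path) -> bool:
    """Return True if the file passes; move it to quarantine otherwise."""
    with path.open("rb") as f:
        resp = requests.post(DETECTOR_URL, files={"media": f}, timeout=30)
    resp.raise_for_status()
    score = resp.json()["fake_probability"]  # hypothetical field name
    if score >= THRESHOLD:
        QUARANTINE.mkdir(exist_ok=True)
        shutil.move(str(path), str(QUARANTINE / path.name))
        return False
    return True

for upload in Path("incoming").glob("*.jpg"):
    print(upload, "ok" if prescreen(upload) else "quarantined")
```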

Several states, such as Texas, Virginia and California, have also passed laws criminalizing deepfake pornography and prohibiting its use.

Recently, President Donald Trump signed the first federal law addressing deepfakes as part of the National Defense Authorization Act. Congress is also looking to implement a new deepfake bill with the help of computer scientists, disinformation experts and human rights advocates. The first part of the bill would require companies and researchers to automatically add watermarks to generated content. The second part would require social media companies to build better manipulation detection into their sites. Finally, the third part would introduce fines or jail time for individuals who create deepfakes to harm people or threaten national security.
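To make the watermarking requirement concrete, here is a minimal sketch of automatic labeling using the Pillow imaging library; the stamp's wording and placement are our own assumptions, since the bill does not prescribe a format.

```python
# Minimal sketch of automatically stamping a visible label onto generated
# media, using the Pillow imaging library. The label text and placement are
# illustrative; the proposed legislation does not specify a watermark format.
from PIL import Image, ImageDraw

def stamp(in_path: str, out_path: str, label: str = "AI-GENERATED") -> None:
    img = Image.open(in_path).convert("RGBA")
    overlay = Image.new("RGBA", img.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    # Draw a translucent backdrop, then the label, in the upper-left corner.
    _, _, w, h = draw.textbbox((0, 0), label)
    draw.rectangle((8, 8, 16 + w, 16 + h), fill=(0, 0, 0, 160))
    draw.text((12, 12), label, fill=(255, 255, 255, 220))
    Image.alpha_composite(img, overlay).convert("RGB").save(out_path)

stamp("generated.png", "generated_labeled.png")
```

In practice, a legal mandate would more likely rely on robust invisible watermarks or cryptographic provenance metadata, since a visible stamp, like DeepNude's "FAKE" label, can simply be cropped out.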

Many social media networks have already begun rolling out plans to detect or tag manipulated content.

Facebook is removing misleading manipulated media as well as all content produced through artificial intelligence (AI) or machine learning that pretends to be authentic. The company has also recruited researchers from Berkeley, Oxford and other institutions to build a deepfake detector. 

Twitter will remove manipulated content, or at least tag it as a deepfake, if it has been substantially edited to change its composition, sequence, timing or framing.

Overall, there are relatively few safeguards, and too little effort, standing between women and what may be a tsunami of deepfakes. We hope that by raising awareness of this issue, and as people come to realize the terrible harm deepfakes can cause (psychological trauma, reputational damage, shame and embarrassment), more will rise to address and stop it.