Deepfake: Weapon of mass disinformation
Move over Photoshop, Deepfake is here
- Published 5.08.19, 4:53 PM
- Updated 5.08.19, 4:53 PM
- 2 mins read
Nothing seems real anymore; nothing is real anymore. Move over Photoshop, Deepfake is here. Just before elections, videos of politicians go viral on the Internet showing them saying and doing outrageous things they never actually said or did. Celebrities who have never appeared in pornography find fake explicit videos of themselves all over the web. Who could have believed that even videos could be doctored?
It is the dawning of the age of deepfakes. The term combines deep learning and, of course, fake. Deep learning is a specialised branch of machine learning in which systems learn representations of data such as speech, audio and images; it powers tasks like speech and image recognition. Both belong to the broader field of artificial intelligence (AI).
A couple of years ago, researchers at Stanford in the US and at the Max Planck Institute and the University of Erlangen-Nuremberg in Germany published a paper on Face2Face, a software technology for real-time facial re-enactment. It captures the facial expressions of a person as they talk to a webcam and morphs them directly onto the face of a person talking in a YouTube video. The target person in the video can then appear to say things that he or she did not actually say. The footage can be manipulated in real time, putting your words into his or her mouth.
Stuff like this used to belong to the realm of professionals. But now anyone can do it. The tool is open-source software, which means anyone can modify and enhance the program, allowing open collaboration and editing by the general public. In fact, whole communities on Reddit and GitHub have dedicated themselves to maintaining and sharing deepfake tools.
Deepfakes are created from photos and videos available on the Net. First, a new technique is used to reconstruct a high-detail 3D face model from a single image; no 3D scanning of the person is necessary. The same algorithm, run on each frame of a video, produces a moving 3D model. Even the wrinkles on the face, which come and go with expressions, can be controlled in this model.
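For readers curious about the shape of that pipeline, here is a minimal sketch of the per-frame structure: fit a face model to each frame, transfer the source actor's expression onto the target's identity, and re-render. Every function below is a hypothetical stand-in — no real computer vision happens; only the overall flow matches the description above.

```python
# Illustrative sketch only: these stubs mimic the structure of a
# face re-enactment pipeline, not the real algorithms.

def fit_face_model(frame):
    # Stand-in for monocular 3D face reconstruction: a real system would
    # estimate identity, pose and expression parameters from pixels.
    return {"identity": frame["person"], "expression": frame["expression"]}

def transfer_expression(target_model, source_model):
    # Keep the target's identity, but drive it with the source's
    # expression (including fine detail such as wrinkles in a real system).
    return {"identity": target_model["identity"],
            "expression": source_model["expression"]}

def render(model):
    # Stand-in for re-rendering the manipulated face into the frame.
    return f'{model["identity"]} showing {model["expression"]}'

def reenact(target_video, source_video):
    # The same algorithm runs on every frame pair, yielding a moving result.
    return [render(transfer_expression(fit_face_model(t), fit_face_model(s)))
            for t, s in zip(target_video, source_video)]

target = [{"person": "politician", "expression": "neutral"}] * 2
source = [{"person": "actor", "expression": "smile"},
          {"person": "actor", "expression": "frown"}]
print(reenact(target, source))
# → ['politician showing smile', 'politician showing frown']
```

The key point the sketch captures is that nothing special is needed for video: the single-image reconstruction step is simply repeated frame by frame.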
The FakeApp was the first program that gave people a shot at making deepfake videos. The website is now defunct.
The application now used for facial re-enactment and facial replacement in videos is called DeepFaceLab and is hosted on GitHub. You can find tutorial videos on YouTube: just search for derpfakes or go to https://www.reddit.com/user/derpfakes. All you need is a computer with a proper graphics card and Windows 7, 8 or 10.
And it’s not only about images and video. Lyrebird is developing a new generation of speech synthesis technology that lets anyone copy someone else’s voice using a voice imitation algorithm. You can then make the target say anything you want in his or her own voice. If you have an iPhone, you can try out the technology by downloading the Lyrebird app, which copies your voice and lets you play with it. You read a few sentences so the software can learn your voice and create your voice avatar; the more you record, the better your avatar becomes.
Adobe also demonstrated speech synthesis technology, presenting Adobe VoCo at an ideas forum. But the company did not take it any further, perhaps for ethical reasons.
Deepfakes could be used positively, especially in developing teaching tools. Imagine having Stephen Hawking give a science presentation. But the negative aspects are overwhelming because even computers cannot reliably detect a good fake. Professionals are working on a “reality defender” built into the browser to flag fake videos, but such tools are a long way off.
The Washington Post has come up with The Fact Checker for manipulated videos. Readers are invited to submit any video they have doubts about and the team of journalists at The Fact Checker will check out its authenticity.
Instagram and Facebook do not take down fake videos, even though Facebook’s own founder, Mark Zuckerberg, recently became a victim. Will they change their policy now?
Send in your problems to askdoss@abpmail.com with TechTonic as the subject line