
In our last article, we explored some of the potential advantages and disadvantages of deepfakes: videos, images, or audio recordings that use AI to replace someone’s face and/or voice with that of another person. The result is a type of fake news known as manipulated content or, when it’s created for humour, satire.
In this article, we’ll dive a little deeper into deepfakes to learn how we could potentially spot and prevent the spread of misinformation. We’ll also discover what governments and other organizations are doing to combat this risky, yet promising, technology.
How to Spot Deepfakes
As we explained in our previous article, deepfakes use face-swapping, AI-powered technology to create a video of your favourite celebrity or politician doing something that never actually happened. The technique works especially well with famous people because the internet holds an abundance of their images and videos, giving the AI enough material to map their face and expressions onto virtually anyone else. Ultimately, a deepfake can be made of anyone, including your family and best friends!
For an example of deepfake technology, take a look at this Indiana Jones video which replaces Harrison Ford’s face with Chris Pratt’s.
It’s not just videos and images that use this technology; deepfakes can also replicate a politician’s voice or create songs never sung by your favourite band. This means that most types of media we regularly consume now have to be viewed with a skeptical eye.
That said, there are ways that you can spot deepfakes with the naked eye. When looking at ANY image or video you’re uncertain about, try to spot the following:
- Unnatural eye movement or facial expressions, including signs such as a lack of blinking, odd eye movements, too much blinking, or uneven eye/eyebrow shadows.
- Unnatural facial positioning, body, or posture. Do the lips sync with what the person is supposedly saying?
- A face that lacks the proper emotion in relation to what someone is supposedly saying.
- Cheeks and forehead that appear too smooth or too wrinkly. “Is the agedness of the skin similar to the agedness of the hair and eyes?”
- Unrealistic hair or teeth. Frizzy hair or loose ends usually aren’t present in more elaborate deepfakes, and individual teeth are difficult for AI to replicate.
- Glasses glare or facial hair. Is there too much or too little glare? Does the facial hair look fake in any way?
- Discolouration: “abnormal skin tone, discoloration, weird lighting, and misplaced shadows” are all signs of a deepfake.
- Does the video still look real if you slow it down? Deepfakes become easier to spot when you analyze them frame by frame; poor lip-syncing, for instance, becomes much more apparent (see the frame-extraction sketch just after this list).
- Inconsistent noise or audio. It’s not just images and videos that you have to analyze. Listen carefully for “robotic-sounding voices, strange word pronunciation, digital background noise, or even the absence of audio”.
- Reverse image search. Here’s a quick guide on how to perform a reverse image search, which lets you find other copies of an image and see which websites they appear on. If it’s fake, there’s a good chance that someone else has already determined its authenticity (see the image-matching sketch at the end of this section).
- Fact-checking websites. These sites often publish articles that assess the veracity of an image, video, or voice recording. For instance, Snopes is an excellent resource for discovering the truth behind recent deepfakes.
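If you’re comfortable with a little code, you can automate the “slow it down” tip above. The short Python sketch below is only an illustration: it assumes you have the opencv-python package installed, and the filename suspect_clip.mp4 is a placeholder for whatever video you want to examine. It saves every fifth frame as an image so you can inspect lip-syncing, lighting, and edges around the face one picture at a time.

```python
# A minimal sketch: extract frames from a video for manual inspection.
# Assumes `pip install opencv-python`; "suspect_clip.mp4" is a placeholder filename.
import cv2

cap = cv2.VideoCapture("suspect_clip.mp4")
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:               # no more frames, or the file could not be read
        break
    if frame_idx % 5 == 0:   # keep every fifth frame to limit the number of images
        cv2.imwrite(f"frame_{frame_idx:04d}.png", frame)
    frame_idx += 1
cap.release()
```

Flipping through the saved images makes mismatched lip movements, flickering face edges, and inconsistent shadows much easier to spot than watching the clip at full speed.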
Deepfakes become easier to spot with practice. Search for deepfakes on YouTube, or try this quiz, which mixes real and fake images, videos, and audio recordings!
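Commercial reverse image search engines use their own, far more sophisticated matching techniques, but the basic idea of recognizing a near-duplicate image can be illustrated with perceptual hashing. The Python sketch below is an assumption-laden illustration, not how any particular search engine works: it uses the Pillow and imagehash packages, and the two filenames are placeholders.

```python
# A rough illustration of near-duplicate image matching with perceptual hashing.
# Assumes `pip install Pillow imagehash`; the filenames below are placeholders.
from PIL import Image
import imagehash

original = imagehash.phash(Image.open("original_photo.jpg"))
suspect = imagehash.phash(Image.open("suspect_photo.jpg"))

# Subtracting two hashes gives the number of differing bits.
# A small distance suggests the images are copies (or light edits) of each other;
# a large distance suggests they are different pictures.
distance = original - suspect
print(f"Hash distance: {distance}")
if distance <= 8:   # threshold chosen loosely, for illustration only
    print("Likely the same or a lightly edited image.")
else:
    print("Probably a different image.")
```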
How Governments and Organizations Are Dealing with Deepfakes
As deepfakes become more realistic, the human eye has a harder time detecting them. Thankfully, other technologies are being developed to combat deepfakes designed to spread misinformation.
- Artificial intelligence is used to create deepfakes, but it is also being used to detect them. As mentioned in The Guardian, “tech firms are now working on detection systems that aim to flag up fakes whenever they appear”. The military has funded the creation of an accurate detector called DefakeHop, and Deeptrace, a tech startup in Amsterdam, is developing a tool that works essentially like deepfake antivirus software.
- Blockchains, which are not owned by centralized banks or governments but can be run by anyone with an internet connection, can help prevent fake information from spreading. A blockchain “is a digital ledger that documents any modifications made to an original video so that the creator can track changes”. While new versions of an image, video, or audio recording can be added (hence the chain), the original CANNOT be modified or deleted. Blockchains are also virtually hack-proof because records are kept safe on numerous personal computers within the network (see the short ledger sketch after this list).
- Hashing, which acts like a digital watermark, provides a “video with a combination of numbers that is unique to the video and is lost if the video is modified”. By referring to the hash number, a potential deepfake video can be compared to the original. If they don’t share the same number, then the video has been edited in some manner.
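To make the hashing and ledger ideas above a little more concrete, here is a minimal Python sketch. It is an illustration only, not how any real verification service is built: it computes a SHA-256 fingerprint of a video file (the filename is a placeholder) and stores it in a simple append-only chain of records, where each record also commits to the hash of the previous record, so tampering with an earlier entry breaks the chain.

```python
# A minimal sketch of the hashing and ledger ideas above (illustration only).
# "original_video.mp4" is a placeholder; a real system would be far more elaborate.
import hashlib
import json
import time


def file_fingerprint(path: str) -> str:
    """Return the SHA-256 hash of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def add_record(chain: list, video_hash: str) -> None:
    """Append a record that also commits to the previous record's hash."""
    previous = chain[-1]["record_hash"] if chain else "0" * 64
    body = {"video_hash": video_hash, "previous": previous, "time": time.time()}
    record_hash = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "record_hash": record_hash})


def chain_is_intact(chain: list) -> bool:
    """Recompute every record hash; any edit to an earlier entry is detected."""
    previous = "0" * 64
    for record in chain:
        body = {k: record[k] for k in ("video_hash", "previous", "time")}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if record["previous"] != previous or recomputed != record["record_hash"]:
            return False
        previous = record["record_hash"]
    return True


ledger: list = []
add_record(ledger, file_fingerprint("original_video.mp4"))  # placeholder filename
print("Ledger intact:", chain_is_intact(ledger))
```

To check a clip you find online, you would recompute its SHA-256 fingerprint and compare it with the hash recorded for the original; any edit, however small, produces a completely different fingerprint.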
. . .
So what’s the lesson here? Perhaps it’s that although deepfakes have created new problems for us, technology also gives us the tools to counter this form of manipulated content. Ultimately, technology doesn’t have to be feared; it has to be understood in order for positive changes to occur worldwide.
At techKNOWtutors, we realize that adapting to technology isn’t easy. Although most services have closed, we remain open – digitally – to answer your internet-related questions. If you need help with improving your digital literacy, send us an email at techknowtutors@cscnl.ca or join our Facebook group and send us a message.
Better yet, sign up for one of our online classes that we offer for FREE every week! Until then, stay in the techKNOW.