Teaching Children About Deepfake Technologies

Summary: Deepfakes can deceive children, leading them to believe falsehoods as extreme as a flat Earth. They can also cause psychological harm to youth.

How Should A Child Decide On What To Believe

The specter of deepfake videos and their dangers haunts many parents and teachers alike. How can we teach children about deepfakes? Given children’s proclivity for social media, might they actually know more about manipulated content than we do? This article offers insights into deepfakes that can help you engage children in meaningful conversations and guard their safety online.

What Are Deepfakes

Deepfakes are videos that have been manipulated with Artificial Intelligence tools and machine learning techniques, which allow existing images and videos to be superimposed onto other pieces of content. Experts anticipate that technological advances will allow longer and more convincing video footage to be fabricated in the future. For example, deepfake manipulators could potentially create “anti-footage” showing the opposite of what really occurred in a given situation, such as the moon landing or the outcome of a war.

Deepfake video clips first appeared on the internet in 2017, and by the beginning of 2019 more than 7,900 deepfake videos were available online. Nine months later, that number had nearly doubled, reaching 14,678 [1].

At the moment, the term “deepfake” is a bit of a catchall, encompassing both deceptive content and Hollywood’s benign use of artificially developed content. The term is also, on occasion, used mistakenly when describing edited video content in which Artificial Intelligence and machine learning tools were not involved.

Who Creates Deepfakes

While the occasional funny and benign deepfake emerges on the internet, the answer is largely "people with malicious intent." Children’s engagement with social media sites, where they post photos of themselves, can increase their vulnerability to bad actors interested in developing deepfakes. These days, roughly 300 million photos are uploaded to Facebook every day, and 46,740 photos are uploaded to Instagram each minute [2]. The pictures posted to these sites can serve as a library of content for those looking to create phony videos. Once content is on the internet, it can be used for nearly any purpose, good or bad.

Why Are Deepfakes A Danger

Deepfake technologies can distort reality. When used to bully, defame, or victimize children (or adults), they can damage a child’s mental health and well-being.

What Should We Teach Young People About Deepfakes

Young people should take care when deciding whether to share photos and videos of themselves on social media. The larger the repository of a person’s images online, the easier it is for a bad actor to extract and manipulate them into harmful content. In addition, young people should keep track of where they post personal content: well-known websites typically publish privacy policies, but smaller social media outlets may have questionable policies or none at all.

How Can Children Identify Deepfakes

To date, many deepfake videos have telltale signs indicating manipulation. For example:

  • The audio and video speeds may not align fully
  • The shadowing may appear "off"
  • Videos may be pixelated
  • The ideas expressed appear incongruent with what is known about a certain person or topic

To address media literacy and to help children think more deeply about deepfake content, these questions can be included in either formal or informal discussions:

  • Why might a person want to create “fake” videos?
  • Why would a person feel as though “fake” or manipulated content might help them achieve their goals?
  • How is it possible for deepfake videos to look so real?
  • Why is it difficult to discern real information from misinformation in some cases?
  • Is there a difference between a deepfake video of a politician and identity theft?

The precise questions used in any discussion and the nature of the discussion itself should, of course, vary based on the ages and interests of the children with whom you’re speaking.

Conclusion

The responsibility for identifying deepfakes does not rest solely on the individual, nor should it. Deepfake detection technologies and laws regarding deepfakes exist, but both remain under development and will need continual improvement as deepfake technologies advance.

Teaching children not to always trust their eyes when it comes to online content can help close the gap between laws, technological advances, and private-enterprise policies. Media literacy is a must-have in the digital age, especially as online learning expands and children rely on internet-sourced information more than millennials or Gen Zers ever did.

References:

[1] Deepfake technology, are we prepared?

[2] Expert Micki Boland on Deepfakes and the threats they pose