KEY LEARNINGS
  • Deepfakes use generative AI to create highly realistic synthetic video, audio, and images depicting events that never occurred.
  • Voice cloning technology has advanced to the point where mere seconds of audio can train a system to impersonate a CEO or family member.
  • The 'Liar's Dividend' is a secondary risk where bad actors dismiss genuine evidence of misconduct by claiming it is AI-generated.
  • Detection software faces an asymmetric disadvantage: each published detection method becomes a training signal for the next generation of generators, so generators evolve faster than detectors can adapt.
  • Defense requires a shift from after-the-fact detection to content provenance standards such as C2PA, combined with out-of-band human verification protocols (e.g., calling back a known number before acting on a voice request).
REFERENCES
  • Stupp, C. (2019). Fraudsters Used AI to Mimic CEO's Voice in Unusual Cybercrime Case. The Wall Street Journal.
  • Chesney, R., & Citron, D. K. (2019). Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security. California Law Review, 107.
  • Coalition for Content Provenance and Authenticity (C2PA). (2024). Explaining the C2PA Standard.
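
The provenance idea behind C2PA can be illustrated with a minimal sketch: bind a signed manifest to a media file's bytes so that any later edit is detectable. This is NOT the real C2PA format (which uses X.509 certificate chains and embeds the manifest in the file); here a shared HMAC key stands in for the signer's credential, and the key, creator name, and field names are all hypothetical.

```python
import hashlib
import hmac
import json

# Hypothetical signing credential for this sketch only; real C2PA
# signing uses certificates issued to the capture device or tool.
SIGNING_KEY = b"demo-key"

def create_manifest(media_bytes: bytes, creator: str) -> dict:
    """Build and sign a provenance manifest for the given content."""
    claim = {
        "creator": creator,
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return claim

def verify_manifest(media_bytes: bytes, manifest: dict) -> bool:
    """Check the signature is valid and the content is unmodified."""
    claim = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["signature"]):
        return False  # manifest itself was forged or altered
    return claim["content_sha256"] == hashlib.sha256(media_bytes).hexdigest()

original = b"frame data from a genuine recording"
manifest = create_manifest(original, creator="News Desk Camera 1")
print(verify_manifest(original, manifest))                 # True: intact
print(verify_manifest(b"deepfaked frame data", manifest))  # False: tampered
```

The point of the sketch is the shift in question asked: not "does this look fake?" (a losing detection race) but "does this carry a verifiable chain of custody from capture to publication?"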