KEY LEARNINGS
  • AI privacy risks extend beyond data theft to include 'inference risks,' where sensitive attributes are inferred from seemingly innocuous public data.
  • The principle of data minimization conflicts with modern AI's hunger for massive training datasets.
  • Re-identification attacks can reveal individual identities even within datasets that have been anonymized.
  • Privacy-Enhancing Technologies (PETs) like differential privacy allow organizations to learn from data without exposing individuals.
  • Governance requires shifting from 'notice and consent' models to proactive Privacy by Design frameworks.
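The re-identification point above can be made concrete with k-anonymity, a standard way to measure how exposed an "anonymized" dataset still is. The sketch below is illustrative only (the records, field names, and `k_anonymity` helper are invented for this example): it groups rows by their quasi-identifiers and reports the smallest group size. Any group of size 1 is a unique individual whom an attacker could re-identify by linking those same fields to an outside dataset.

```python
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    # Size of the smallest equivalence class over the quasi-identifier
    # combination. A class of size 1 is a unique, re-identifiable row.
    classes = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return min(classes.values())

# "Anonymized" rows: names removed, but ZIP + birth year + sex remain.
rows = [
    {"zip": "02138", "birth_year": 1960, "sex": "F", "diagnosis": "flu"},
    {"zip": "02138", "birth_year": 1960, "sex": "F", "diagnosis": "cold"},
    {"zip": "02139", "birth_year": 1975, "sex": "M", "diagnosis": "asthma"},
]
print(k_anonymity(rows, ["zip", "birth_year", "sex"]))  # → 1: the 02139 row is unique
```

Defenses such as generalization (coarsening ZIP codes or birth years) raise this minimum class size, trading precision for privacy.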
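Differential privacy, named above as a key PET, can be sketched in a few lines. This is a minimal stdlib-only illustration of the Laplace mechanism for a counting query (the `dp_count` and `laplace_noise` names are invented for this sketch, not a real library's API): the query answer is released with noise scaled to sensitivity/epsilon, so no single individual's presence or absence measurably changes the output distribution.

```python
import math
import random

def laplace_noise(scale):
    # Sample Laplace(0, scale) by inverse-CDF transform of a uniform draw.
    u = random.random() - 0.5
    sign = 1 if u >= 0 else -1
    return -scale * sign * math.log(1 - 2 * abs(u))

def dp_count(values, predicate, epsilon):
    # A counting query has sensitivity 1: adding or removing one person
    # changes the true count by at most 1, so noise scale is 1/epsilon.
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_noise(1.0 / epsilon)

random.seed(0)
people_ages = [23, 41, 35, 67, 52, 29, 74, 38]
# Organization learns roughly how many people are over 40 without
# any individual's record being decisive for the released number.
print(dp_count(people_ages, lambda age: age > 40, epsilon=1.0))
```

Smaller epsilon means more noise and stronger privacy; the organization still learns the aggregate trend, which is exactly the trade-off the bullet describes.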