KEY LEARNINGS
- AI harms extend beyond technical glitches to include individual rights violations, discrimination, and physical safety risks.
- Group harms occur when AI systems amplify historical biases, systematically disadvantaging specific populations at scale.
- Societal harms involve threats to democratic processes and the information ecosystem, often occurring without a single identifiable victim.
- Organizational harms manifest as reputational damage and legal liability when systems operate outside governance boundaries.
- Effective governance requires anticipating these categories of harm during the design phase, rather than reacting to them post-deployment.

RESOURCES
- 📄 NIST AI Risk Management Framework: Official NIST framework for managing AI risks.
- 🌐 OECD Framework for the Classification of AI Systems: International taxonomy for AI system classification.
- 🌐 AI Incident Database: Searchable database of AI failures and incidents.

REFERENCES
- National Transportation Safety Board. (2019). Collision Between Vehicle Controlled by Developmental Automated Driving System and Pedestrian.
- Obermeyer, Z., et al. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science.
- Dastin, J. (2018). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters.