KEY LEARNINGS
- Algorithmic bias occurs when AI systems make systematically unfair decisions based on protected characteristics like race, gender, or age.
- Disaggregated performance metrics reveal bias that overall accuracy numbers hide; Amazon's hiring tool appeared accurate in aggregate yet discriminated against women (see the first sketch after this list).
- Multiple conflicting definitions of fairness exist (demographic parity, equal opportunity, predictive parity); choosing which one to prioritize is a values decision (see the second sketch below).
- Adversarial testing with edge cases and diverse testers uncovers bias that standard testing misses (the third sketch below probes this with counterfactual inputs).
- Technical debiasing alone is insufficient—diverse teams, human oversight, and feedback channels are essential for mitigation.
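
The first sketch below illustrates the disaggregated-metrics point on a toy pandas data frame; the groups, labels, and predictions are invented purely to show how a reasonable-looking overall accuracy can mask a large per-group gap.

```python
# A minimal sketch of disaggregated evaluation; the data is synthetic and
# exists only to show how an overall number can hide a per-group gap.
import pandas as pd

# Hypothetical evaluation results: ground-truth labels, model predictions,
# and a protected attribute recorded for each example.
df = pd.DataFrame({
    "group":      ["men", "men", "men", "women", "women", "women"],
    "label":      [1, 0, 1, 1, 0, 1],
    "prediction": [1, 0, 1, 0, 0, 0],
})

# Overall accuracy looks acceptable...
overall = (df["label"] == df["prediction"]).mean()

# ...but grouping by the protected attribute reveals the disparity.
per_group = (
    df.assign(correct=df["label"] == df["prediction"])
      .groupby("group")["correct"]
      .mean()
)

print(f"Overall accuracy: {overall:.2f}")   # 0.67
print(per_group)                            # men 1.00, women 0.33
```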
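The second sketch computes the three fairness metrics named above for two groups; the `rates` helper and the toy arrays are assumptions for illustration, not a standard library API.

```python
# A minimal sketch of three common group-fairness metrics; the helper and the
# toy arrays below are illustrative assumptions, not a standard library API.
import numpy as np

def rates(y_true, y_pred):
    """Selection rate, true positive rate, and precision for one group."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    selection_rate = y_pred.mean()         # demographic parity compares this across groups
    tpr = y_pred[y_true == 1].mean()       # equal opportunity compares this
    ppv = y_true[y_pred == 1].mean()       # predictive parity compares this
    return selection_rate, tpr, ppv

# Hypothetical (labels, predictions) for two demographic groups.
groups = {
    "group_a": ([1, 1, 0, 0, 1], [1, 1, 0, 1, 1]),
    "group_b": ([1, 1, 0, 0, 1], [1, 0, 0, 0, 1]),
}

for name, (y_true, y_pred) in groups.items():
    sr, tpr, ppv = rates(y_true, y_pred)
    print(f"{name}: selection rate={sr:.2f}  TPR={tpr:.2f}  PPV={ppv:.2f}")
```

On this toy data, group_a is favored on selection rate and true positive rate while group_b scores higher on precision; this is the kind of tension the bullet above refers to, since these criteria cannot in general all be equalized at once.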
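The third sketch shows one simple form of the adversarial testing mentioned above, counterfactual probing: hold every input field fixed, flip only the protected attribute, and flag cases where the decision flips with it. The `score_applicant` model and the 0.7 threshold are invented for illustration.

```python
# A minimal counterfactual-testing sketch. `score_applicant` is a hypothetical
# stand-in for the model under test, written here to leak the protected
# attribute on purpose so the check has something to catch.
def score_applicant(applicant: dict) -> float:
    base = 0.5 + 0.1 * applicant["years_experience"]
    return base - (0.2 if applicant["gender"] == "female" else 0.0)

def counterfactual_check(applicant, attribute, values, threshold=0.7):
    """Re-score otherwise-identical inputs that differ only in one protected attribute."""
    return {
        value: score_applicant({**applicant, attribute: value}) >= threshold
        for value in values
    }

applicant = {"years_experience": 3, "gender": "male"}
decisions = counterfactual_check(applicant, "gender", ["male", "female"])
if len(set(decisions.values())) > 1:
    print(f"Decision changes with the protected attribute alone: {decisions}")
```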
RESOURCES
- 📰 ProPublica - Machine Bias in Criminal Sentencing: Investigative report on the COMPAS algorithm and racial bias in recidivism prediction.
- 🔗 Gender Shades - Intersectional Accuracy Disparities: Study revealing facial recognition bias across gender and skin tone.
- 🔗 Google Research - Model Cards for Model Reporting: Framework for documenting ML model performance and limitations.
- 🔗 Microsoft Research - Datasheets for Datasets: Standardized documentation template for training datasets.
- 🔗 NIST - AI Risk Management Framework: Comprehensive framework including bias detection and mitigation strategies.
REFERENCES
- Vigdor, N. (2019). "Apple Card Investigated After Gender Discrimination Complaints." The New York Times.
- Buolamwini, J., & Gebru, T. (2018). "Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification." Proceedings of Machine Learning Research, 81, 1-15.
- Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). "Machine Bias." ProPublica.
- Dastin, J. (2018). "Amazon scraps secret AI recruiting tool that showed bias against women." Reuters.
- Mitchell, M., et al. (2019). "Model Cards for Model Reporting." Proceedings of the Conference on Fairness, Accountability, and Transparency.
- Gebru, T., et al. (2021). "Datasheets for Datasets." Communications of the ACM, 64(12), 86-92.





