KEY LEARNINGS
  • Algorithmic bias occurs when AI systems make systematically unfair decisions based on protected characteristics like race, gender, or age.
  • Disaggregated performance metrics reveal bias that overall accuracy numbers hide—Amazon's hiring tool appeared accurate overall but discriminated against women.
  • Multiple conflicting definitions of fairness exist (demographic parity, equal opportunity, predictive parity); outside of degenerate cases they cannot all be satisfied at once, so choosing which to prioritize is a values decision, not a purely technical one.
  • Adversarial testing with edge cases and diverse testers uncovers bias that standard testing misses.
  • Technical debiasing alone is insufficient—diverse teams, human oversight, and feedback channels are essential for mitigation.
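
The second point above can be made concrete with a small sketch. The function names and every number below are invented for illustration (not drawn from any real hiring dataset): overall accuracy looks acceptable, but disaggregating by group reveals a gap.

```python
# Sketch: disaggregated accuracy on toy data. All values are invented
# for illustration; "A"/"B" stand in for any protected-group labels.

def accuracy(preds, labels):
    """Fraction of predictions that match the labels."""
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def disaggregated_accuracy(preds, labels, groups):
    """Accuracy computed separately for each group value."""
    by_group = {}
    for g in set(groups):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        by_group[g] = accuracy([preds[i] for i in idx],
                               [labels[i] for i in idx])
    return by_group

# Toy predictions: the model is right 9/10 times for group A
# but only 7/10 times for group B.
labels = [1, 0, 1, 0, 1, 0, 1, 0, 1, 0] * 2
preds  = [1, 0, 1, 0, 1, 0, 1, 0, 1, 1] + [1, 1, 1, 1, 1, 0, 1, 0, 1, 1]
groups = ["A"] * 10 + ["B"] * 10

print(accuracy(preds, labels))                        # 0.8 overall looks fine
print(disaggregated_accuracy(preds, labels, groups))  # per-group gap revealed
```

The overall number (0.8) hides a 0.9 vs 0.7 split between the groups, which is exactly the pattern an aggregate-only evaluation misses.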
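
The conflict between fairness definitions can also be shown directly. In this sketch (toy data, all numbers invented), two groups have identical selection rates, so demographic parity holds, yet their true-positive rates and precisions differ, so equal opportunity and predictive parity are both violated on the same predictions.

```python
# Sketch: three fairness metrics for one group; comparing the tuples
# across groups shows the definitions need not agree. Toy data only.

def rates(preds, labels):
    """(selection rate, true-positive rate, precision) for one group."""
    selection = sum(preds) / len(preds)  # demographic parity compares this
    tp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 1)
    fn = sum(1 for p, y in zip(preds, labels) if p == 0 and y == 1)
    fp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 0)
    tpr = tp / (tp + fn)  # equal opportunity compares this across groups
    ppv = tp / (tp + fp)  # predictive parity compares this across groups
    return selection, tpr, ppv

# Both groups are selected at the same 50% rate...
group_a = rates(preds=[1, 1, 1, 1, 0, 0, 0, 0],
                labels=[1, 1, 1, 0, 0, 0, 0, 0])  # (0.5, 1.0, 0.75)
group_b = rates(preds=[1, 1, 0, 0, 1, 1, 0, 0],
                labels=[1, 1, 1, 1, 0, 0, 0, 0])  # (0.5, 0.5, 0.5)

# ...so demographic parity holds, but TPR and precision diverge:
print(group_a)
print(group_b)
```

Which of the three gaps counts as "the" unfairness here is precisely the values decision the bullet above describes.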