KEY LEARNINGS
  • AI accountability requires designating identifiable individuals who are answerable for system outcomes, so that responsibility is neither deflected onto the machine itself nor diffused across a team (see the first sketch after this list).
  • The 'Many Hands Problem' in AI development makes traditional negligence claims hard to prove, because harm often results from the interaction of many small decisions rather than from a single traceable error.
  • Legal frameworks are shifting from a negligence standard toward strict liability for high-risk AI, meaning developers or deployers may be held liable for harm regardless of fault or intent.
  • The Three Lines Model provides a robust governance structure by separating risk ownership (First Line), risk oversight (Second Line), and independent assurance (Third Line); the second sketch after this list shows the separation as a control mapping.
  • True accountability requires redress mechanisms: individuals harmed by AI decisions need a clear path to challenge the outcome and receive compensation (see the third sketch after this list).
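
A minimal sketch of the first learning: a registry entry that ties an AI system to a named, answerable individual. The registry shape and the names (AccountableOwner, SystemRecord) are hypothetical illustrations, not an existing library.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class AccountableOwner:
    """An identifiable individual answerable for a system's outcomes."""
    name: str
    role: str
    email: str


@dataclass
class SystemRecord:
    """Registry entry binding an AI system to a person, not a team."""
    system_id: str
    purpose: str
    owner: AccountableOwner

    def __post_init__(self) -> None:
        # Reject shared aliases so responsibility cannot diffuse to a group.
        if self.owner.email.split("@")[0].startswith(("team-", "group-", "all-")):
            raise ValueError("Owner must be an individual, not a team alias.")


record = SystemRecord(
    system_id="credit-scoring-v2",
    purpose="Consumer loan eligibility",
    owner=AccountableOwner("J. Doe", "Model Risk Officer", "j.doe@example.com"),
)
print(f"{record.system_id} is owned by {record.owner.name} ({record.owner.role})")
```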
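A minimal sketch of the Three Lines Model as a control mapping, assuming a hypothetical governance configuration; the line definitions follow the IIA model cited below, but the controls themselves are illustrative.

```python
from enum import Enum


class Line(Enum):
    FIRST = "Risk ownership: teams that build and operate the AI system"
    SECOND = "Risk oversight: risk and compliance functions that set policy"
    THIRD = "Independent assurance: internal audit reporting to the board"


# Each control is owned by exactly one line, so ownership, oversight,
# and assurance never collapse into the same hands.
CONTROLS: dict[str, Line] = {
    "pre-deployment model testing": Line.FIRST,
    "bias-policy compliance review": Line.SECOND,
    "annual audit of the model inventory": Line.THIRD,
}

for control, line in CONTROLS.items():
    print(f"{control} -> {line.name}: {line.value}")
```

Keeping the mapping explicit makes gaps visible: a control with no line, or a line with no controls, is a governance hole.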
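A minimal sketch of a redress path: every automated decision carries a built-in way to contest it. The Decision record and the challenge flow are hypothetical, for illustration only.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class Decision:
    """An automated decision that records its own challenges."""
    decision_id: str
    subject_id: str
    outcome: str
    made_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    challenges: list[str] = field(default_factory=list)

    def challenge(self, reason: str) -> str:
        """Log the objection; a real system would route it to a human reviewer."""
        self.challenges.append(reason)
        return f"Challenge to {self.decision_id} queued for human review."


decision = Decision("d-102", "applicant-77", "loan denied")
print(decision.challenge("The income figure used was out of date."))
```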

REFERENCES
  • Bovens, M. (2007). Analysing and Assessing Accountability: A Conceptual Framework. European Law Journal.
  • Nissenbaum, H. (1996). Accountability in a Computerized Society. Science and Engineering Ethics.
  • Institute of Internal Auditors. (2020). The IIA's Three Lines Model: An Update of the Three Lines of Defense.