- 📰 Center for AI Safety - Statement on AI Risk: Expert consensus statement on AI existential risk.
- 🌐 Future of Life Institute - AI Safety Research: Research grants and resources for AI safety and existential risk.
- 🌐 MIRI - Research on AGI Safety: The Machine Intelligence Research Institute's technical safety research.
- 🌐 DeepMind - Technical AGI Safety Research: Industry-leading research on AI alignment and safety.
- 📰 Anthropic - Constitutional AI and Safety Research: Research on building safe, beneficial AI systems.
- Center for AI Safety. (2023). Statement on AI Risk.
- Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.
- Bender, E. M., et al. (2021). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? FAccT.