Help us build a comprehensive map of the AI policy landscape. Submit a person, organization, or influential resource that should be included.
Anyone with knowledge of the U.S. AI policy landscape can contribute. You don't need to be affiliated with the person or organization you're submitting. All submissions are reviewed before publication.
Example submission (person):

Name: Dario Amodei
Category: Frontier Lab
Title: CEO, Anthropic
Primary org: Anthropic
Location: San Francisco, CA
Regulatory stance: Moderate (mandatory safety evals + transparency)
How publicly stated: Explicitly stated (speeches, testimony, writing)
AGI timeline: Within 2–3 years
AI risk level: Potentially catastrophic
Key concerns: Concentration of power, Weapons proliferation, Loss of human control
Influence type: Decision-maker, Builder, Narrator
Twitter/X: @DarioAmodei
Notes: Co-founded Anthropic after leaving OpenAI over safety disagreements. Published "Machines of Loving Grace" (Oct 2024). Advocates for "responsible scaling" rather than pausing.
Example submission (organization):

Name: Anthropic
Category: Frontier Lab
Website: https://anthropic.com
Location: San Francisco, CA
Funding model: Mixed (commercial + philanthropic)
Regulatory stance: Moderate (mandatory safety evals + transparency)
How publicly stated: Explicitly stated (speeches, testimony, writing)
AGI timeline: Within 2–3 years
AI risk level: Potentially catastrophic
Key concerns: Weapons proliferation, Loss of human control, Cybersecurity threats
Influence type: Builder, Researcher/analyst, Advisor/strategist
Twitter/X: @AnthropicAI
Bluesky: @anthropic.ai
Last verified: 2026
Notes: Public benefit corporation. Pioneered "responsible scaling policy" framework. Revenue from Claude API and consumer products.
Example submission (resource):

Title: Situational Awareness
Author(s): Leopold Aschenbrenner
Type: Essay
URL: https://situational-awareness.ai
Year: 2024
Category: AI Capabilities
Key argument: AGI is likely by 2027, superintelligence by end of decade. The US needs to treat frontier AI as a national security priority.
Notes: Widely circulated in Silicon Valley and DC policy circles. Shifted discourse toward framing AI as geopolitical competition.
Thank you for your contribution. We'll review your submission before publication.